Search Results

Search found 12166 results on 487 pages for 'invocation api'.


  • Facebook application using an iframe with the Facebook Developer Toolkit 3.0

    - by adveb
    Hey, I am trying to build a Facebook iframe application using the Facebook Developer Toolkit 3.01 with ASP.NET/C#. I am working from the iframe sample of the toolkit, which can be downloaded here: www.facebooktoolkit.codeplex.com/releases/view/39727. This is my Facebook application, which is the same as the iframe sample: http://apps.facebook.com/alefbet/

    My code has two pages, a master page and a default page; both are the same as in the iframe sample.

    1) The master page:

        public partial class IFrameMaster : Facebook.Web.CanvasIFrameMasterPage
        {
            public IFrameMaster()
            {
                RequireLogin = true;
            }
        }

    2) Default.aspx:

        public partial class Default : System.Web.UI.Page
        {
            private const string SCRIPT_BLOCK_NAME = "dynamicScript";

            protected void Page_Load(object sender, EventArgs e)
            {
                if (IsPostBack)
                {
                    if (Master.Api.Users.HasAppPermission(Enums.ExtendedPermissions.email))
                    {
                        SendThankYouEmail();
                    }
                    Response.Redirect("ThankYou.aspx");
                }
                else
                {
                    if (Master.Api.Users.HasAppPermission(Enums.ExtendedPermissions.email))
                    {
                        emailPermissionPanel.Visible = false;
                    }
                    CreateScript();
                }
            }

            private void SendThankYouEmail()
            {
                var subject = "Thank you for telling us your favorite color";
                var body = "Thank you for telling us what your favorite color is. " +
                           "We hope you have enjoyed using this application. " +
                           "Encourage your friends to tell us their favorite color as well!";
                this.Master.Api.Notifications.SendEmail(
                    this.Master.Api.Session.UserId.ToString(), subject, body, string.Empty);
            }

            private void CreateScript()
            {
                var saveColorScript = @"
                    function saveColor(color) {
                        document.getElementById('" + colorInput.ClientID + @"').value = color;
                    }
                    function submitForm() {
                        document.getElementById('" + form.ClientID + @"').submit();
                    }";
                if (!ClientScript.IsClientScriptBlockRegistered(SCRIPT_BLOCK_NAME))
                {
                    ClientScript.RegisterClientScriptBlock(this.GetType(), SCRIPT_BLOCK_NAME, saveColorScript);
                }
            }
        }

    My directory structure is: 1) the master page is in the root; 2) Default.aspx is in the root/alfbet directory; 3) I also have xd_receiver.htm inside the root/channel directory. The master page contains the following line:

        <script type="text/javascript">
            FB_RequireFeatures(["XFBML"], function() {
                FB.Facebook.init("c81f17ee4d4ffc5113c55f8b99fdcab5", "channel/xd_receiver.htm");
            });
        </script>

    The problem is that the application doesn't work: apps.facebook.com/alefbet/default.aspx. Why doesn't it work? Please help me and others who also run into this issue. I tried lots of things; one of them was to display the user ID. For that I put a label in Default.aspx and wrote lblTest.Text = Master.Api.Users.GetInfo().uid.ToString(); and it doesn't even get to this line. I know that because the label keeps displaying the word "label". Thank you very much.
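    One thing worth checking (a guess based on the directory layout described, not a verified fix): the receiver path passed to FB.Facebook.init() is relative, so the browser resolves it against the current page's directory. Served from /alefbet/default.aspx it points at /alefbet/channel/xd_receiver.htm, which doesn't exist. A root-relative path sidesteps that:

        <script type="text/javascript">
            // leading slash: resolve from the site root regardless of page location
            FB_RequireFeatures(["XFBML"], function() {
                FB.Facebook.init("c81f17ee4d4ffc5113c55f8b99fdcab5", "/channel/xd_receiver.htm");
            });
        </script>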

    Read the article

  • The broken Promise of the Mobile Web

    - by Rick Strahl
    High end mobile devices have been with us now for almost 7 years and they have utterly transformed the way we access information. Mobile phones and smartphones that have access to the Internet and host smart applications are in the hands of a large percentage of the population of the world. In many places, even very remote ones, cell phones and even smartphones are a common sight.

    I'll never forget when I was in India in 2011, up in the Southern Indian mountains, riding an elephant out of a tiny local village, with an elephant herder riding atop the elephant in front of us. He was dressed in traditional garb with the loin wrap and head cloth/turban, as were quite a few of the locals in this small, out of the way, and not so touristy village. So we're slowly trundling along in the forest, he's lazily using his stick to guide the elephant, and... 10 minutes in he pulls out his cell phone from his sash and starts texting. In the middle of texting, a huge pig jumps out from the side of the trail and he takes a picture of it running across our path in the jungle! So yeah, mobile technology is very pervasive and it has reached into even very buried and unexpected parts of this world.

    Apps are still King

    Apps currently rule the roost when it comes to mobile devices and the applications that run on them. If there's something that you need on your mobile device, your first step usually is to look for an app, not use your browser. But native app development remains a pain in the butt, with the requirement to support 2 or 3 completely separate platforms.

    There are solutions that try to bridge that gap. Xamarin is on a tear at the moment, providing their cross-device toolkit to build applications using C#. While Xamarin tools are impressive, and also *very* expensive, they only address part of the development madness that is app development. There are still specific device integration issues, dealing with the different developer programs, security and certificate setups, and all that other noise that surrounds app development.

    There's also PhoneGap/Cordova, which provides a hybrid solution that involves creating local HTML/CSS/JavaScript based applications and then packaging them to run in a specialized app container that can run on most mobile device platforms using a WebView interface. This allows for using HTML technology, but it also still requires all the setup, configuration of APIs, security keys, certification, and the submission and deployment process of native applications; you actually lose many of the benefits that Web based apps bring. The big selling point of Cordova is that you get to use HTML and build your UI once for all platforms, but the rest of the app process remains in place.

    Apps can be a big pain to create and manage, especially when we are talking about specialized or vertical business applications that aren't geared at the mainstream market and that don't fit the 'store' model. If you're building a small intra-department application, you don't want to deal with multiple device platforms, certification, and so on for various public or corporate app stores. That model is simply not a good fit from both the development and the deployment perspective.
    Even for commercial, big ticket apps, HTML as a UI platform offers many advantages over native: from write-once run-anywhere, to remote maintenance, to a single point of management and failure, to having full control over the application as opposed to having the app store overlords censor you. In a lot of ways, Web based HTML/CSS/JavaScript applications have enormous potential for building better solutions on existing Web technologies, for the very same reasons a lot of content years ago moved off the desktop to the Web. To me the Web as a mobile platform makes perfect sense, but the reality of today's Mobile Web unfortunately looks a little different...

    Where's the Love for the Mobile Web?

    Yet here we are in the middle of 2014, nearly 7 years after the first iPhone was released and brought the promise of rich interactive information at your fingertips, and we still don't really have a solid mobile Web platform. I know what you're thinking: "But we have lots of HTML/JavaScript/CSS features that allow us to build nice mobile interfaces." I agree, to a point; it's actually quite possible to build nice looking, rich, and capable Web UI today. We have media queries to deal with varied display sizes, CSS transforms for smooth animations and transitions, tons of CSS improvements in CSS 3 that facilitate rich layout, a host of APIs geared towards mobile device features, and lately even a number of JavaScript framework choices that facilitate development of multi-screen apps in a consistent manner. Personally I've been working a lot with AngularJs and heavily modified Bootstrap themes to build mobile-first UIs, and that's been working very well to provide highly usable and attractive UI for typical mobile business applications. From the pure UI perspective things actually look very good.

    Not just about the UI

    But it's not just about the UI; it's also about integration with the mobile device. When it comes to putting all those pieces together into what amounts to a consolidated platform to build mobile Web applications, I think we still have a ways to go... there are a lot of missing pieces to make it all work together, integrate with the device more smoothly, and, more importantly, work uniformly across the majority of devices. I think there are a number of reasons for this.

    Slow Standards Adoption

    HTML standards implementation and ratification has been dreadfully slow, and browser vendors all seem to pick and choose different pieces of the technology they implement. The end result is that we have a capable UI platform that's missing some of the infrastructure pieces to make it whole on mobile devices. There's lots of potential, but what's lacking is that final 10% needed to build truly compelling mobile applications that can compete favorably with native applications. Some of it is the fragmentation of browsers and the slow evolution of the mobile-specific HTML APIs. A host of mobile standards exist, but many of them are in the early review stage, have been stuck there for long periods of time, and seem to move at a glacial pace. Browser vendors seem even slower to implement them, and for good reason: non-ratified standards mean that implementations may change, and vendor implementations tend to be experimental and likely have to be changed later. Neither vendors nor developers are keen on changing standards. This is the typical chicken and egg scenario, but without some forward momentum from some party we end up stuck in the mud.
    It seems that either the standards bodies or the vendors need to carry the torch forward, and that doesn't seem to be happening quickly enough.

    Mobile Device Integration just isn't good enough

    Current standards are not far-reaching enough to address a number of the use case scenarios necessary for many mobile applications. While not every application needs access to all mobile device features, almost every mobile application could benefit from some integration with other parts of the mobile device platform. Integration with GPS, phone, media, messaging, notifications, linking, and the contacts system are benefits that are unique to mobile applications and could be widely used, but they are mostly (with the exception of GPS) inaccessible for Web based applications today.

    Unfortunately, trying to do most of this today with only a mobile Web browser is a losing battle. Aside from PhoneGap/Cordova's app-centric model with its own custom API for accessing mobile device features, and the token exception of the GeoLocation API, most device integration features are not widely supported by the current crop of mobile browsers. For example, there's no usable messaging API that allows access to SMS or contacts from HTML. Even obvious components like the Media Capture API are only implemented partially by mobile devices. There are alternatives and workarounds for some of these interfaces using browser-specific code, but that's mighty ugly and something I thought we were trying to leave behind with newer browser standards. It's not quite working out that way.

    It's utterly perplexing to me that mobile standards like Media Capture and Streams, Media Gallery Access, Responsive Images, the Messaging API, and the Contacts Manager API have only minimal or no traction at all today. Keep in mind we've had mobile browsers for nearly 7 years now, and yet we still have to think about how to get access to an image from the image gallery or the camera on some devices? Heck, Windows Phone IE Mobile just gained the ability to upload images in the Windows 8.1 Update; that's a feature that HTML has had for 20 years! These are simple concepts and common problems that should have been solved a long time ago.

    It's extremely frustrating to build 90% of a mobile Web app with relative ease and then hit a brick wall for the remaining 10%, which often can be show stoppers. That remaining 10% has to do with platform integration, browser differences, and working around the limitations that browsers and 'pinned' applications impose on HTML applications. The maddening part is that these limitations seem arbitrary, as they could easily work on all mobile platforms. For example, SMS has a URL moniker interface that sort of works on Android, works badly with iOS (only if the address is already in the contact list), and not at all on Windows Phone. There's no reason this shouldn't work universally using the same interface; after all, all phones have supported SMS since before the year 2000!

    But, it doesn't have to be this way

    Change can happen very quickly. Take the GeoLocation API for example. Geolocation took off at the very beginning of the mobile device era, and today it works well, provides the necessary security (a big concern for many mobile APIs), and is supported by just about all major mobile and even desktop browsers. It handles security concerns via prompts that prevent unwanted access, a model that would work for most other device APIs in a similar fashion, as the sketch below shows.
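    A minimal JavaScript sketch of that prompt-based model; the geolocation API below is the standard one, and nothing about it is app-store specific:

        // the first call triggers the browser's permission prompt;
        // afterwards the position arrives in an async callback
        navigator.geolocation.getCurrentPosition(
            function (pos) {
                console.log("lat " + pos.coords.latitude +
                            ", lng " + pos.coords.longitude);
            },
            function (err) {
                // the user declined, or the lookup failed
                console.log("no position: " + err.message);
            }
        );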
    One time approval, and occasional re-approval if code changes or caches expire: simple and only slightly intrusive. It all works well, even though GeoLocation actually has some physical limitations, such as only being able to approximate the current location when no GPS device is present. Yet this is a solved problem, where other APIs that are conceptually much simpler to implement have failed to gain any traction at all. Technically none of these APIs should be a problem to implement, but it appears the momentum is just not there.

    Inadequate Web Application Linking and Activation

    Another important piece of the puzzle that is missing is the integration of HTML based Web applications. Today, HTML based applications are not first class citizens on mobile operating systems. When talking about HTML based content, there's a big difference between content and applications. Content is great for search engine discovery and plain browser usage. Content is usually accessed intermittently, and permanent linking is not so critical for it. But applications have different needs. Applications need to start up quickly and must be easily switchable to support a multi-tasking user workflow. Therefore, it's pretty crucial that mobile Web apps are integrated into the underlying mobile OS and work with the standard task management features.

    Unfortunately this integration is not as smooth as it should be. It starts with actually trying to find mobile Web applications and 'installing' them onto a phone in an easily accessible manner in a prominent position. The experience of discovering a mobile Web 'app' and making it sticky is by no means as easy or satisfying as it is for native apps. Today the way you'd go about this is:

        1. Open the browser
        2. Search for a Web site in the browser with your search engine of choice
        3. Hope that you find the right site
        4. Hope that you actually find a site that works for your mobile device
        5. Click on the link and run the app in a fully chrome'd browser instance (read: tiny surface area)
        6. Pin the app to the home screen (with all the limitations outlined above)
        7. Hope you pointed at the right URL when you pinned

    Even for you and me as developers, there are a few steps in there that are painful and annoying; but think about the average user. First figuring out how to search for a specific site or URL? And then pinning the app, hopefully from the right location? You've probably lost more than half of your audience at that point. This experience sucks.

    For developers too this process is painful, since app developers can't control the shortcut creation directly. This problem often gets solved by crazy coding schemes, with annoying pop-ups that try to get people to create shortcuts via fancy animations that are both annoying and add overhead to each and every application that implements this sort of thing differently.

    And that's not the end of it: getting the link onto the home screen with an application icon varies quite a bit between browsers. Apple's non-standard meta tags are prominent, and they work with iOS and with more recent versions of Android, but not on Windows Phone. Windows Phone instead requires an actual screen, or rather a partial screen, to be captured for a shortcut in the tile manager. Who had that brilliant idea, I wonder? Surprisingly, Chrome on recent Android versions seems to actually get it right: icons use PNGs, pinning is easy, and pinned applications properly behave like standalone apps and retain the browser's active page state and content.
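    For reference, the de-facto Apple-originated tags referred to here look like the snippet below; the tag names are the real Apple ones, while the icon path and size are illustrative:

        <!-- home screen icon; recent Android Chrome versions honor these too -->
        <link rel="apple-touch-icon" sizes="114x114" href="/img/touch-icon.png" />
        <!-- run full screen when launched from the home screen -->
        <meta name="apple-mobile-web-app-capable" content="yes" />
        <meta name="apple-mobile-web-app-status-bar-style" content="black" />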
    Each of the platforms has a different way to specify icons (WP doesn't allow you to use an icon image at all), and the most widely used interface today is a bunch of Apple-specific meta tags that other browsers choose to support. The question is: why is there no standard implementation for installing shortcuts across mobile platforms using an official format rather than a proprietary one?

    Then there's iOS and the crazy way it treats home screen linked URLs, using a strange hybrid format that is neither as capable as a Web app running in Safari nor a WebView hosted application. Switching away from the Web 'app' to another app actually causes iOS to 'blank out' the Web application's preview in the Task View. Then, when the 'app' is reactivated, it completely restarts the browser with the original link. This is crazy behavior that you can't easily work around. In some situations you might be able to store the application state and restore it using LocalStorage, but for many scenarios that involve complex data sources (like, say, Google Maps) that's not a possibility. The only reason for this screwed up behavior I can think of is that it is deliberate: make Web apps a pain in the butt to use and force users through the App Store/PhoneGap/Cordova route.

    App linking and management is a very basic problem, something that we have essentially solved in every desktop browser. Yet on mobile devices, where it arguably matters a lot more to have easy access to Web content, we have to jump through hoops to get even a remotely decent linking/activation experience across browsers.

    Where's the Money?

    It's not surprising that device home screen integration and mobile Web support in general are in such dismal shape; the mobile OS vendors benefit financially from app store sales and have little to gain from Web based applications that bypass the app store and the cash cow it presents. On top of that, platform-specific vendor lock-in of both end users and developers, who have invested in hardware, apps, and consumables, is something mobile platform vendors actually aspire to. Web based interfaces that are cross-platform are the antithesis of that, so again it's no surprise that the mobile Web is on a struggling path. But that may be changing. More and more we're seeing operations shift to services that are subscription based or otherwise collect money for usage, and that may drive more progress in the Web direction in the end. Nothing like the almighty dollar to drive innovation forward.

    Do we need a Mobile Web App Store?

    As much as I dislike the moderated experiences of today's massive app stores, they do at least provide one single place to look for apps for your device. I think we could really use some sort of registry that provides something akin to an app store for mobile Web apps, to make it easier to actually find mobile applications. This could take the form of a specialized search engine, or maybe a more formal store/registry-like structure. Something like apt-get/chocolatey for Web apps. It could be curated and provide at least some feedback and reviews that might help with the integrity of applications. Coupled with that could be a native application on each platform that would allow searching and browsing of the registry, and would then also handle installation in the form of home screen linking, plus maybe an initial security configuration that determines which device features the app is allowed to access.
    I'm not holding my breath. In order for this sort of thing to take off and gain widespread appeal, a lot of coordination would be required. And in order to get enough traction it would have to come from a well known entity; a mobile Web app store from a no-name source is unlikely to reach high enough usage numbers to make a difference. In a way this would eliminate some of the freedom of the Web, but of course it would also be an optional search path in addition to the standard open Web search mechanisms used to find and access content today.

    Security

    Security is a big deal, and one of the reasons why so many IT professionals appear willing to go back to the walled garden of deployed apps is that apps are perceived as safe due to the official review and curation of the app stores. Curated stores are supposed to protect you from malware and from illegal and misleading content. It doesn't always work out that way, and all the major vendors have had issues with security and the review process at some time or another.

    Security is critical, but I also think that Web applications in general pose less of a security threat than native applications, by nature of the sandboxed browser and JavaScript environments. Web applications run completely externally, in the HTML and JavaScript sandboxes, with only a very few controlled APIs allowing access to device-specific features. And as discussed earlier, security for device interaction can be granted to Web applications the same way it is for native applications: either via explicit policies loaded from the Web, or via prompting, as GeoLocation does today. Security is important, but it's certainly a solvable problem for Web applications, even those that need to access device hardware. Security shouldn't be the reason that keeps Web apps from being an equal player in mobile applications.

    Apps are winning, but haven't we been here before?

    So now we find ourselves back in an era of installed apps, rather than Web based and managed apps. Only it's even worse today than with desktop applications, in that the apps are going through a gatekeeper that charges a toll and censors what you can and can't do in your apps. Frankly, it's a mystery to me why anybody would buy into this model, and why it's lasted this long when we've already been through this process. It's crazy...

    It's really a shame that this regression is happening. We have the technology to make mobile Web apps much more prominent, yet we're basically held back by what seems little more than bureaucracy, partisan bickering, and the self interest of the major parties involved. Back in the day of the desktop, it was Internet Explorer's 98%+ market share holding back the Web from improvements for many years; now it's the combined mobile OS market in control of the mobile browsers. If mobile Web apps were allowed to be treated the same as native apps, with simple ways to install and run them consistently and persistently, that would go a long way toward making them much more usable and seriously viable alternatives to native apps. But as it is, mobile Web apps have a severe disadvantage in placement and operation.

    There are a few bright spots in all of this. Mozilla's Firefox OS is embracing the Web for its mobile OS by essentially building every app out of HTML and JavaScript based content.
    It supports both packaged and certified-package modes (which can be put into the app store) and Open Web apps that are loaded and run completely off the Web and can also cache locally for offline operation using a manifest. Open Web apps are treated as first class citizens in Firefox OS and run using the same mechanism as installed apps. Unfortunately Firefox OS is getting a slow start, with minimal device support and a specific focus on the low end market. We can hope this approach will change and catch on with other vendors, but that's also an uphill battle given the conflict of interest with the platform lock-in it represents.

    Recent versions of Android also seem to be working reasonably well with mobile application integration onto the desktop and activation out of the box. Although Android still uses the Apple meta tags to find icons and behavior settings, everything at least works as you would expect: icons go to the desktop on pinning, full screen activation is WebView based, and application persistence is reliable, as the browser/app is treated like a real application. Hopefully iOS will at some point provide this same level of rudimentary Web app support.

    What's also interesting to me is that Microsoft hasn't picked up on the obvious need for a solid Web app platform. Being a distant third in the mobile OS war, Microsoft certainly has nothing to lose and everything to gain by using fresh ideas and expanding into areas that the other major vendors are neglecting. But instead, Microsoft is trying to beat the market leaders at their own game, fighting on their adversaries' terms instead of taking a new tack. Providing a kick-ass mobile Web platform that takes the lead on some of the proposed mobile APIs is something positive Microsoft could do to improve its miserable position in the mobile device market.

    Where are we at with the Mobile Web?

    It sure sounds like I'm really down on the Mobile Web, right? I've built a number of mobile apps in the last year, and while the overall result, and the response to what we were able to accomplish in terms of UI, has been very positive, getting the final 10% that required device integration dialed in was an absolute nightmare on every single one of them. Big compromises had to be made, and some features were left out or had to be modified for some devices. In two cases we opted to go the Cordova route in order to get the integration we needed, along with the extra pain involved in that process. Unless you're skipping device integration entirely and you don't care deeply about smooth integration with the mobile desktop, mobile Web development is fraught with frustration.

    So, yes, I'm frustrated! But it's not for lack of wanting the mobile Web to succeed. I am still a firm believer that we will eventually arrive at a much more functional mobile Web platform that allows access to the most common device features in a sensible way. It wouldn't be difficult for device platform vendors to make Web based applications first class citizens on mobile devices. But unfortunately it looks like it will still be some time before that happens.

    So, what's your experience building mobile Web apps? Are you finding similar issues? Just giving up on raw Web applications and building PhoneGap apps instead? Completely skipping the Web and going native? Leave a comment for discussion.
    Resources: Rick Strahl on DotNet Rocks talking about the Mobile Web. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in HTML5, Mobile.

    Read the article

  • Strange Play Framework 2.2 exceptions after trying to add MySQL / slick

    - by Mike Cialowicz
    I'm working on a Play 2.2 application, and things have gone a bit south on me since I've tried adding my DB layer. Below are my build.sbt dependencies. As you can see, I use mysql-connector-java and play-slick:

        libraryDependencies ++= Seq(
          jdbc,
          anorm,
          cache,
          "joda-time" % "joda-time" % "2.3",
          "mysql" % "mysql-connector-java" % "5.1.26",
          "com.typesafe.play" %% "play-slick" % "0.5.0.8",
          "com.aetrion.flickr" % "flickrapi" % "1.1"
        )

    My application.conf has some similarly simple DB stuff in it:

        db.default.url="jdbc:mysql://localhost/myDb"
        db.default.driver="com.mysql.jdbc.Driver"
        db.default.user="root"
        db.default.pass=""

    This is what it looks like when my Play server starts:

        [info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
        (Server started, use Ctrl+D to stop and go back to the console...)
        [info] Compiling 1 Scala source to C:\bbq\cats\in\space
        [info] play - database [default] connected at jdbc:mysql://localhost/myDb
        [info] play - Application started (Dev)

    So it appears that Play can connect to the MySQL DB just fine (I think). However, I get this exception when I make any request to my server:

        [error] p.nettyException - Exception caught in Netty
        java.lang.NoSuchMethodError: akka.actor.ActorSystem.dispatcher()Lscala/concurrent/ExecutionContext;
            at play.core.Invoker$.<init>(Invoker.scala:24) ~[play_2.10.jar:2.2.0]
            at play.core.Invoker$.<clinit>(Invoker.scala) ~[play_2.10.jar:2.2.0]
            at play.api.libs.concurrent.Execution$Implicits$.defaultContext$lzycompute(Execution.scala:7) ~[play_2.10.jar:2.2.0]
            at play.api.libs.concurrent.Execution$Implicits$.defaultContext(Execution.scala:6) ~[play_2.10.jar:2.2.0]
            at play.api.libs.concurrent.Execution$.<init>(Execution.scala:10) ~[play_2.10.jar:2.2.0]
            at play.api.libs.concurrent.Execution$.<clinit>(Execution.scala) ~[play_2.10.jar:2.2.0]

    The odd thing is that the 2nd request (to the exact same URL, same controller, no changes) comes back with a different error:

        [error] p.nettyException - Exception caught in Netty
        java.lang.NoClassDefFoundError: Could not initialize class play.api.libs.concurrent.Execution$
            at play.core.server.netty.PlayDefaultUpstreamHandler.handleAction$1(PlayDefaultUpstreamHandler.scala:194) ~[play_2.10.jar:2.2.0]
            at play.core.server.netty.PlayDefaultUpstreamHandler.messageReceived(PlayDefaultUpstreamHandler.scala:169) ~[play_2.10.jar:2.2.0]
            at com.typesafe.netty.http.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:62) ~[netty-http-pipelining.jar:na]
            at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108) ~[netty-3.6.5.Final.jar:na]
            at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[netty-3.6.5.Final.jar:na]
            at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459) ~[netty-3.6.5.Final.jar:na]

    The URL / controller that I'm requesting just renders a static web page and doesn't do anything of any significance. It was working just fine before I started adding my DB layer. I'm rather stuck. Any help would be greatly appreciated, thanks. I'm using Scala 2.10.2, Play 2.2.0, and MySQL Server 5.6.14.0 (community edition).
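    A hedged hunch rather than a confirmed diagnosis: a NoSuchMethodError on akka.actor.ActorSystem.dispatcher() usually means two incompatible Akka versions landed on the classpath, with one of the newly added libraries dragging in an older akka-actor than the 2.2.x that Play 2.2.0 expects. One way to test that theory in build.sbt:

        // force a single Akka version (2.2.0 is what Play 2.2.0 ships with);
        // if the errors disappear, a transitive dependency was the culprit
        dependencyOverrides += "com.typesafe.akka" %% "akka-actor" % "2.2.0"

    Inspecting the resolved dependency tree (for example with the sbt-dependency-graph plugin) would confirm which library pulls in the conflicting version.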

    Read the article

  • Error accessing a Web Service with SSL

    - by Elie
    I have a program that is supposed to send a file to a web service, which requires an SSL connection. I run the program as follows:

        SET JAVA_HOME=C:\Program Files\Java\jre1.6.0_07
        SET com.ibm.SSL.ConfigURL=ssl.client.props
        "%JAVA_HOME%\bin\java" -cp ".;Test.jar" ca.mypackage.Main

    This works fine, but when I change the first line to

        SET JAVA_HOME=C:\Program Files\IBM\SDP\runtimes\base_v7\java\jre

    I get the following error:

        com.sun.xml.internal.ws.client.ClientTransportException: HTTP transport error: java.net.SocketException: java.lang.ClassNotFoundException: Cannot find the specified class com.ibm.websphere.ssl.protocol.SSLSocketFactory
            at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:119)
            at com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.process(HttpTransportPipe.java:140)
            at com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.processRequest(HttpTransportPipe.java:86)
            at com.sun.xml.internal.ws.api.pipe.Fiber.__doRun(Fiber.java:593)
            at com.sun.xml.internal.ws.api.pipe.Fiber._doRun(Fiber.java:552)
            at com.sun.xml.internal.ws.api.pipe.Fiber.doRun(Fiber.java:537)
            at com.sun.xml.internal.ws.api.pipe.Fiber.runSync(Fiber.java:434)
            at com.sun.xml.internal.ws.client.Stub.process(Stub.java:247)
            at com.sun.xml.internal.ws.client.sei.SEIStub.doProcess(SEIStub.java:132)
            at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:242)
            at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:222)
            at com.sun.xml.internal.ws.client.sei.SEIStub.invoke(SEIStub.java:115)
            at $Proxy26.fileSubmit(Unknown Source)
            at com.testing.TestingSoapProxy.fileSubmit(TestingSoapProxy.java:81)
            at ca.mypackage.Main.main(Main.java:63)
        Caused by: java.net.SocketException: java.lang.ClassNotFoundException: Cannot find the specified class com.ibm.websphere.ssl.protocol.SSLSocketFactory
            at javax.net.ssl.DefaultSSLSocketFactory.a(SSLSocketFactory.java:7)
            at javax.net.ssl.DefaultSSLSocketFactory.createSocket(SSLSocketFactory.java:1)
            at com.ibm.net.ssl.www2.protocol.https.c.afterConnect(c.java:110)
            at com.ibm.net.ssl.www2.protocol.https.d.connect(d.java:14)
            at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:902)
            at com.ibm.net.ssl.www2.protocol.https.b.getOutputStream(b.java:86)
            at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:107)
            ... 14 more
        Caused by: java.lang.ClassNotFoundException: Cannot find the specified class com.ibm.websphere.ssl.protocol.SSLSocketFactory
            at javax.net.ssl.SSLJsseUtil.b(SSLJsseUtil.java:20)
            at javax.net.ssl.SSLSocketFactory.getDefault(SSLSocketFactory.java:36)
            at javax.net.ssl.HttpsURLConnection.getDefaultSSLSocketFactory(HttpsURLConnection.java:16)
            at javax.net.ssl.HttpsURLConnection.<init>(HttpsURLConnection.java:36)
            at com.ibm.net.ssl.www2.protocol.https.b.<init>(b.java:1)
            at com.ibm.net.ssl.www2.protocol.https.Handler.openConnection(Handler.java:11)
            at java.net.URL.openConnection(URL.java:995)
            at com.sun.xml.internal.ws.api.EndpointAddress.openConnection(EndpointAddress.java:206)
            at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.createHttpConnection(HttpClientTransport.java:277)
            at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:103)
            ... 14 more

    So it seems this problem is related to the JRE I'm using, but what doesn't make sense is that the non-IBM JRE works fine while the IBM JRE does not. Any ideas or suggestions?
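    One detail worth checking, offered as an assumption rather than a verified fix: com.ibm.SSL.ConfigURL is read as a Java system property, while SET only defines a Windows environment variable, which the JVM does not see as a system property. Passing it with -D would look like this (the file: URL form is also an assumption):

        SET JAVA_HOME=C:\Program Files\IBM\SDP\runtimes\base_v7\java\jre
        REM pass the property to the JVM itself instead of the shell environment
        "%JAVA_HOME%\bin\java" -Dcom.ibm.SSL.ConfigURL=file:ssl.client.props -cp ".;Test.jar" ca.mypackage.Main

    If that alone doesn't help, another known workaround for this specific ClassNotFoundException is overriding the WebSphere socket factory defaults with the plain IBM JSSE2 ones: -Dssl.SocketFactory.provider=com.ibm.jsse2.SSLSocketFactoryImpl and -Dssl.ServerSocketFactory.provider=com.ibm.jsse2.SSLServerSocketFactoryImpl.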

    Read the article

  • Redirect output logs of javax.xml.ws and com.sun.xml.ws

    - by chrisnfoneur
    I am working on a SOAP based web service, with Sun's Metro. I am facing an annoying bug: each time I send a malformed SOAP object to my web service, Sun's API spams System.out with logs like this:

        javax.xml.ws.WebServiceException: com.sun.istack.XMLStreamException2: org.xml.sax.SAXParseException: cvc-complex-type.4: Attribute 'type' must appear on element 'object'.
            at com.sun.xml.ws.util.pipe.AbstractSchemaValidationTube.doProcess(AbstractSchemaValidationTube.java:206)
            at com.sun.xml.ws.util.pipe.AbstractSchemaValidationTube.processRequest(AbstractSchemaValidationTube.java:175)
            at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:595)
            at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:554)
            at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:539)
            at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:436)
            at com.sun.xml.ws.server.WSEndpointImpl$2.process(WSEndpointImpl.java:243)
            at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:444)
            at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:244)
            at com.sun.xml.ws.transport.http.server.WSHttpHandler.handleExchange(WSHttpHandler.java:106)
            at com.sun.xml.ws.transport.http.server.WSHttpHandler.handle(WSHttpHandler.java:91)
            at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:65)
            at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:54)
            at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:68)
            at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:555)
            at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:65)
            at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:527)
            at sun.net.httpserver.ServerImpl$DefaultExecutor.execute(ServerImpl.java:119)
            at sun.net.httpserver.ServerImpl$Dispatcher.handle(ServerImpl.java:349)
            at sun.net.httpserver.ServerImpl$Dispatcher.run(ServerImpl.java:321)
            at java.lang.Thread.run(Thread.java:619)
        Caused by: com.sun.istack.XMLStreamException2: org.xml.sax.SAXParseException: cvc-complex-type.4: Attribute 'type' must appear on element 'object'.
            at com.sun.xml.ws.util.xml.StAXSource$1.parse(StAXSource.java:185)
            at com.sun.xml.ws.util.xml.StAXSource$1.parse(StAXSource.java:170)
            at org.apache.xerces.jaxp.validation.ValidatorHandlerImpl.validate(Unknown Source)
            at org.apache.xerces.jaxp.validation.ValidatorImpl.validate(Unknown Source)
            at javax.xml.validation.Validator.validate(Validator.java:127)
            at com.sun.xml.ws.util.pipe.AbstractSchemaValidationTube.doProcess(AbstractSchemaValidationTube.java:204)
            ... 20 more

    I would like to switch this log off, or redirect it to the error/warn/debug.log files used by log4j. I tried adding a rule to my log4j.xml file:

        <category name="javax.xml.ws">
            <priority value="error" />
        </category>

    It didn't work. So I tried the following trick:

        java.util.logging.Logger.getLogger("javax.xml.ws.WebServiceException").setLevel(
            java.util.logging.Level.OFF);

    It didn't work either. Any ideas? It is not a big issue, but it makes my catalina.log grow bigger and bigger, and that's not the appropriate place for this kind of log. Chris
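    A possibility worth testing (an educated guess, not a verified fix): Metro logs through java.util.logging, so a log4j category has no effect on it, and JUL logger names follow the package doing the logging, not the exception class name. Silencing the Metro packages would then look like this:

        import java.util.logging.Level;
        import java.util.logging.Logger;

        public final class MetroLogConfig {
            // keep strong references: the LogManager only holds weak references,
            // so a logger configured here could otherwise be GC'd and reset
            private static final Logger METRO_TUBES =
                Logger.getLogger("com.sun.xml.ws");
            private static final Logger JAXWS_RI =
                Logger.getLogger("javax.enterprise.resource.webservices");

            public static void quietMetro() {
                METRO_TUBES.setLevel(Level.SEVERE); // or Level.OFF to drop everything
                JAXWS_RI.setLevel(Level.SEVERE);
            }
        }

    Calling quietMetro() once at application startup should be enough. To redirect rather than discard, the jul-to-slf4j bridge plus an slf4j-log4j binding is one known way to route JUL records into existing log4j appenders, which would land these traces in error.log instead of catalina.log.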

    Read the article

  • How to marshal a COM-Parameter as VT_ARRAY of VT_RECORD

    - by Oliver Japes
    I've already done some extensive searching, but I can't seem to find anything matching my problem. The task I'm currently working on is to create a WCF wrapper for some DCOM objects. This already works great for the most part, but now I'm stuck on one invocation that expects a VT_ARRAY containing VT_RECORD objects. Marshalling as VT_ARRAY is not a problem, but how can I tell COM that the elements in this array are VT_RECORDs?

    This is the invocation as I currently use it:

        InitTestCase(testCaseName, parameterFileName, testCase, cellInfos.ToArray());

    The parameter I'm talking about is the last one. It's defined as List<CellInfo>; CellInfo itself is already attributed with Guid("7D422961-331E-47E2-BC71-7839E9E77D39") and ComVisible(true). It's not a struct but a class. This is the condition failing on the native side:

        if (VT_RECORD == varCellConfig.vt) ...

    Because of old software using these interfaces, changing the native side is not an option. Any ideas?
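    A hedged sketch rather than a verified fix: on the managed side, the interop marshaller can be told explicitly that a SAFEARRAY's elements are VT_RECORD via MarshalAs. Two assumptions here: the interface shape below is reconstructed from the call in the question (the fields of CellInfo are hypothetical), and VT_RECORD marshalling generally requires the element type to be a value type, a struct with a Guid so COM can locate its IRecordInfo; CellInfo being a class may itself be what makes the native check fail.

        using System;
        using System.Runtime.InteropServices;

        // VT_RECORD elements need record (value type) semantics on the COM side
        [ComVisible(true)]
        [Guid("7D422961-331E-47E2-BC71-7839E9E77D39")]
        public struct CellInfo
        {
            public string Name;   // hypothetical field
            public string Value;  // hypothetical field
        }

        [ComVisible(true)]
        public interface ITestCaseInit
        {
            void InitTestCase(
                string testCaseName,
                string parameterFileName,
                string testCase,
                // marshal as SAFEARRAY whose elements carry VT_RECORD
                [MarshalAs(UnmanagedType.SafeArray, SafeArraySubType = VarEnum.VT_RECORD)]
                CellInfo[] cellInfos);
        }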

    Read the article

  • What is the need for functional programming?

    - by Lazer
    I have read about functional programming, which is stateless and gives the same result invocation after invocation, and about closures and other related concepts, but I still feel that I have very little idea what these things are about. Thinking about it right now, I feel complete in C, C++, and Java: any programming problem, and I start thinking in one of these languages. So I never feel, or understand, the need for functional languages. A good starting point therefore would be to try to understand some things that are not possible in imperative languages but possible in functional languages. I feel that unless I understand where exactly functional languages fit inside my already complete world of C, C++, and Java, I will never be able to appreciate and understand them. So, can somebody help me understand the real need for functional programming? Where exactly does it fit in?
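    Since the question is framed in Java terms, here is a purely illustrative Java 8 sketch of two of the concepts mentioned: a pure function (same result for the same input, no mutable state) and a closure (a function value that captures a variable from its enclosing scope):

        import java.util.function.IntUnaryOperator;

        public class FpTaste {
            public static void main(String[] args) {
                // pure: no fields read or written, result depends only on x
                IntUnaryOperator square = x -> x * x;

                // closure: the returned function captures 'factor'
                IntUnaryOperator triple = multiplier(3);

                System.out.println(square.applyAsInt(5)); // 25, every time
                System.out.println(triple.applyAsInt(7)); // 21
            }

            static IntUnaryOperator multiplier(int factor) {
                return x -> x * factor; // closes over 'factor'
            }
        }

    Nothing here is impossible in imperative Java; the point of functional languages is that this style (no mutation, functions as values) is the default, which tends to make code easier to reason about, test, and parallelize.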

    Read the article

  • Help with OpenSSL request using Python

    - by Ldn
    Hi, I'm creating a program that has to make a request and then obtain some info. The website provides an API for doing that. There is a how-to about the API, but every example is written in PHP, and my app is written in Python, so I need to convert the code. Here is the how-to:

    The request string is sealed with OpenSSL. The steps for sealing are as follows:

    • A random 128-bit key is created.
    • The random key is used to RSA-RC4 symmetrically encrypt the request string.
    • The random key is encrypted with the public key using OpenSSL RSA asymmetrical encryption.
    • The encrypted request and encrypted key are each base64 encoded and placed in the appropriate fields.

    In PHP a full request to our API can be accomplished like so:

        <?php
        // initial request.
        $request = array('object' => 'Link',
                         'action' => 'get',
                         'args' => array('app_id' => 303612602));

        // encode the request in JSON
        $request = json_encode($request);

        // when you receive your profile, you will be given a public key to seal your request in.
        $key_pem = "-----BEGIN PUBLIC KEY-----
        MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALdu5C6d2sA1Lu71NNGBEbLD6DjwhFQO
        VLdFAJf2rOH63rG/L78lrQjwMLZOeHEHqjaiUwCr8NVTcVrebu6ylIECAwEAAQ==
        -----END PUBLIC KEY-----";

        // load the public key
        $pkey = openssl_pkey_get_public($key_pem);

        // seal! $enc_request and $enc_keys are passed by reference.
        openssl_seal($request, $enc_request, $enc_keys, array($pkey));

        // then wrap the request
        $wrapper = array(
            'profile' => 'ProfileName',
            'format' => 'RSA_RC4_Sealed',
            'enc_key' => base64_encode($enc_keys[0]),
            'request' => base64_encode($enc_request)
        );

        // json encode the wrapper. urlencode it as well.
        $wrapper = urlencode(json_encode($wrapper));

        // we can send the request wrapper via the cURL extension
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, 'http://api.site.com/');
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, "request=$wrapper");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $data = curl_exec($ch);
        curl_close($ch);
        ?>

    Of all of that, I was able to convert "$request" and I've also done the JSON encoding. This is my code:

        import urllib
        import urllib2
        import json

        url = 'http://api.site.com/'
        array = {'app_id': "303612602"}
        values = {"object": "Link", "action": "get", "args": array}
        data = urllib.urlencode(values)
        json_data = json.dumps(data)

    What stops me is the sealing with OpenSSL and the public key (which I obviously have). Using PHP's OpenSSL it's so easy, but in Python I don't really know how to use it. Please, help me!
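    For the Python side, here is a sketch of the same sealing steps done by hand. It assumes the M2Crypto package (a Python 2 OpenSSL wrapper); PHP's openssl_seal() amounts to: generate a random key, RC4-encrypt the payload with it, then RSA-encrypt that key with the public key.

        import base64
        import json
        import os
        import urllib
        import urllib2

        from M2Crypto import BIO, RC4, RSA

        KEY_PEM = ("-----BEGIN PUBLIC KEY-----\n"
                   "MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALdu5C6d2sA1Lu71NNGBEbLD6DjwhFQO\n"
                   "VLdFAJf2rOH63rG/L78lrQjwMLZOeHEHqjaiUwCr8NVTcVrebu6ylIECAwEAAQ==\n"
                   "-----END PUBLIC KEY-----")

        request = json.dumps({"object": "Link",
                              "action": "get",
                              "args": {"app_id": 303612602}})

        rand_key = os.urandom(16)                        # random 128-bit key
        enc_request = RC4.RC4(rand_key).update(request)  # symmetric half of the seal
        pub_key = RSA.load_pub_key_bio(BIO.MemoryBuffer(KEY_PEM))
        enc_key = pub_key.public_encrypt(rand_key, RSA.pkcs1_padding)  # asymmetric half

        wrapper = json.dumps({"profile": "ProfileName",
                              "format": "RSA_RC4_Sealed",
                              "enc_key": base64.b64encode(enc_key),
                              "request": base64.b64encode(enc_request)})

        data = urllib2.urlopen("http://api.site.com/",
                               urllib.urlencode({"request": wrapper})).read()

    The endpoint, profile name, and key are copied from the question; that the server accepts PKCS#1 padding (openssl_seal's default) is an assumption to verify against the API docs.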

    Read the article

  • jQuery .ajax call to bit.ly returns results in IE but not FF or Chrome

    - by Ian Quigley
    I am trying to call the bit.ly URL shortening service using jQuery with an .ajax call:

        <html><head>
        <script type="text/javascript" src="http://www.twipler.com/settings/scripts/jquery.1.4.min.js"></script>
        <script type="text/javascript">
        jQuery.fn.shorten = function(url) {
            var resultUrl = url;
            $.ajax({
                url: "http://api.bit.ly/shorten?version=2.0.1&login=twipler&apiKey=R_4e618e42fadbb802cf95c6c2dbab3763&longUrl=" + url,
                async: false,
                dataType: 'json',
                data: "",
                type: "GET",
                success: function (json) {
                    resultUrl = json.results[url].shortUrl;
                }
            });
            return resultUrl;
        };
        </script></head><body>
        <a href="#" onclick="alert($().shorten('http://amiconnectedtotheinternet.com'));">Shorten</a>
        </body></html>

    This works in IE8 but does not work in Firefox (3.5.9) or Chrome. In both cases 'json' is null.

    Headers in IE8:

        GET http://api.bit.ly/shorten?ver..[SNIP]..dtotheinternet.com HTTP/1.1
        Accept: application/json, text/javascript, */*
        Accept-Language: en-US
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; Media Center PC 6.0; InfoPath.2; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729)
        Host: api.bit.ly
        Connection: Keep-Alive

    Headers in Chrome:

        GET http://api.bit.ly/shorten?versio..[SNIP]..nectedtotheinternet.com HTTP/1.1
        Host: api.bit.ly
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5
        Origin: file://
        Accept: application/json, text/javascript, */*
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    So the only obvious difference is that Chrome is sending "Origin: file://" and I've no idea how to stop it doing that.
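    For what it's worth, the symptom (works in IE8, null response in Firefox/Chrome) matches the same-origin policy blocking the cross-domain XHR rather than a jQuery bug. A hedged alternative sketch using JSONP; it assumes the version 2 bit.ly endpoint honors a JSONP callback parameter (jQuery appends callback=? itself for dataType 'jsonp'), and note that JSONP is inherently asynchronous, so the shortened URL must be delivered to a callback instead of being returned:

        jQuery.fn.shorten = function (url, done) {
            $.ajax({
                url: "http://api.bit.ly/shorten",
                data: {
                    version: "2.0.1",
                    login: "twipler",
                    apiKey: "R_4e618e42fadbb802cf95c6c2dbab3763",
                    longUrl: url,
                    format: "json"
                },
                dataType: "jsonp", // script-tag transport, not XHR
                success: function (json) {
                    done(json.results[url].shortUrl); // delivered via callback
                }
            });
        };

        // usage:
        // $().shorten('http://amiconnectedtotheinternet.com', function (s) { alert(s); });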

    Read the article

  • Can FileInputStream.available() fool me?

    - by Tom Brito
    The FileInputStream.available() javadoc says:

        Returns an estimate of the number of remaining bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream. The next invocation might be the same thread or another thread. A single read or skip of this many bytes will not block, but may read or skip fewer bytes. In some cases, a non-blocking read (or skip) may appear to be blocked when it is merely slow, for example when reading large files over slow networks.

    I'm not sure about this check:

        if (new FileInputStream(xmlFile).available() == 0)

    Can I rely on empty files always returning zero?
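    In practice, for a FileInputStream over a regular local file, available() does return the remaining length, so an empty file yields 0; but the javadoc only promises an estimate, so the contract does not guarantee it. A hedged alternative that sidesteps the question entirely (and also avoids the stream leaked by the snippet above):

        import java.io.File;

        public final class EmptyFileCheck {
            // File.length() is 0 for an empty file, but also for a missing one,
            // so check existence separately if that distinction matters
            public static boolean isEmpty(File xmlFile) {
                return xmlFile.length() == 0L;
            }
        }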

    Read the article

  • Informed TDD - Kata "To Roman Numerals"

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I like to respond with this article to his questions. There's more to say than fits into a comment.

    Mocks and TDD

    I don't see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about process; mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps, less need arises for mocks. But then... if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource.

    True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin if the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer, even if that's your fellow developer.

    Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities.

    In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just "invent" some acceptance test cases by picking roman numerals from a wikipedia article.
    They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

    Before you code, you better have a plan for what to code. This does not mean you have to do Big Design Up-Front. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited, and thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members.

    The roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman "encoding" thus is simpler than the arabic. Each "digit" can be taken at face value; no multiplication with a base is required.

    But what about IV, which looks like a contradiction to the above rule? It is not, if you accept roman "digits" not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V, which is more difficult, because here some "digits" get subtracted. Here's the list of roman "digits" with their values:

        {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three phase process:

        1. Find all the factors
        2. Translate the factors found
        3. Compile the roman representation

    Translation is just a look-up.
    Finding, though, needs some calculation:

        1. Find the highest remaining factor fitting in the value
        2. Remember it and subtract it from the value
        3. Repeat with the remaining value and the remaining factors

    Please note: this is just an algorithm; it's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test:

        • the translation
        • the compilation
        • finding the factors

    Testing the translation primarily means checking that the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: first check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now, one by one. (The code for each step is shown as screenshots in the original post.)

    I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible; right how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it in two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away in their mind's eye; others need to work their way towards it. Having two tests is what I find more important.

    Now for the next low hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple; I don't need to test .NET framework functionality. But again: if it serves the educational purpose...

    Finally, the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor fits, or some later one does.

    The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on.

    Now for more than a single factor. Interestingly, not just one test becomes green now, but all of them. Great! You might say: then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple; even simpler than the recursion I had briefly thought of during the problem solving phase. And by the way: the acceptance tests went green too. Mission accomplished, at least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in a top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required; but maybe in a different way than you would expect. That's why I rather call it "cleanup".

    First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors().

    And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single "digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman "digits". This then is my final test class and my final production code (both shown as screenshots in the original post). Test coverage as reported by NCrunch is 100%.

    Reflection

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern, as long as I can understand the code quickly. So called "elegant" code, however, often is not easy to understand. The same goes for KISS code, especially if left unrefactored, as is often the case.

    That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution: the leaves of the functional decomposition tree.
Where things became fuzzy, since the design did not cover any more details as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them. I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find what, what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatic. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable – and not left to a refactoring step at the end which is skipped often for different reasons.   PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won´t skip them because of that. And there are no shortcuts during implementation because of that.
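As a concluding illustration: the code in the original post appears only as screenshots, so here is a minimal C# sketch of the solution structure described above. It follows the article's three functions (Find_factors(), Translate(), Compile()), but the bodies are my own reconstruction, not the author's published code.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class RomanConverter
    {
        // Map of arabic factors to roman "digits", highest factor first.
        private static readonly Tuple<int, string>[] Map =
        {
            Tuple.Create(1000, "M"), Tuple.Create(900, "CM"),
            Tuple.Create(500, "D"),  Tuple.Create(400, "CD"),
            Tuple.Create(100, "C"),  Tuple.Create(90, "XC"),
            Tuple.Create(50, "L"),   Tuple.Create(40, "XL"),
            Tuple.Create(10, "X"),   Tuple.Create(9, "IX"),
            Tuple.Create(5, "V"),    Tuple.Create(4, "IV"),
            Tuple.Create(1, "I")
        };

        public static string FromArabic(int number)
        {
            return Compile(Translate(Find_factors(number)));
        }

        // Find the highest remaining factor fitting in the value,
        // remember it and subtract it, repeat with the remainder.
        private static IEnumerable<int> Find_factors(int number)
        {
            foreach (var entry in Map)
            {
                while (number >= entry.Item1)
                {
                    yield return entry.Item1;
                    number -= entry.Item1;
                }
            }
        }

        private static IEnumerable<string> Translate(IEnumerable<int> factors)
        {
            return factors.Select(f => Map.First(m => m.Item1 == f).Item2);
        }

        private static string Compile(IEnumerable<string> digits)
        {
            return string.Concat(digits);
        }
    }

A call like RomanConverter.FromArabic(1914) then yields "MCMXIV".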

    Read the article

  • Intercept Properties With Castle Windsor IInterceptor

    - by jeffn825
    Does anyone have a suggestion on a better way to intercept properties with Castle DynamicProxy? Specifically, I need the PropertyInfo that I'm intercepting, but it's not directly on the IInvocation, so what I do is:

        public static PropertyInfo GetProperty(this MethodInfo method)
        {
            bool takesArg = method.GetParameters().Length == 1;
            bool hasReturn = method.ReturnType != typeof(void);
            if (takesArg == hasReturn) return null;
            if (takesArg)
            {
                return method.DeclaringType.GetProperties()
                    .Where(prop => prop.GetSetMethod() == method).FirstOrDefault();
            }
            else
            {
                return method.DeclaringType.GetProperties()
                    .Where(prop => prop.GetGetMethod() == method).FirstOrDefault();
            }
        }

    Then in my IInterceptor:

        #region IInterceptor Members
        public void Intercept(IInvocation invocation)
        {
            bool doSomething = invocation.Method.GetProperty()
                .GetCustomAttributes(true).OfType<SomeAttribute>().Count() > 0;
        }
        #endregion

    Thanks.
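    One possible simplification, offered as my own sketch rather than part of the original question: property accessors are compiler-generated "special name" methods following the get_/set_ naming convention, so (for conventional, non-indexer properties) the PropertyInfo can be looked up by name instead of scanning all properties:

        using System.Reflection;

        public static PropertyInfo GetProperty(this MethodInfo method)
        {
            // Property accessors are "special name" methods named get_X / set_X;
            // anything else cannot be a property accessor.
            if (!method.IsSpecialName) return null;
            if (!method.Name.StartsWith("get_") && !method.Name.StartsWith("set_"))
                return null;

            return method.DeclaringType.GetProperty(
                method.Name.Substring(4),
                BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance);
        }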

    Read the article

  • php, SimpleXML, while loop

    - by Michael
    I'm trying to get some information from the eBay API and store it in a database. I used SimpleXML to extract the information, but I have a small issue: the information is not displayed for some items. If I print the SimpleXML object I can see very well that the information is provided by the eBay API. I have:

        $items = "220617293997,250645537939,230485306218,110537213815,180519294810";
        $number_of_items = count(explode(",", $items));
        $xml = $baseClass->getContent("http://open.api.ebay.com/shopping?callname=GetMultipleItems&responseencoding=XML&appid=Morcovar-c74b-47c0-954f-463afb69a4b3&siteid=0&version=525&IncludeSelector=ItemSpecifics&ItemID=$items");
        writeDoc($xml, "api.xml");
        //echo $xml;
        $getvalues = simplexml_load_file('api.xml');
        // print_r($getvalue);
        $number = "0";
        while($number < 6)
        {
            $item_number = $getvalues->Item[$number]->ItemID;
            $location = $getvalues->Item[$number]->Location;
            $title = $getvalues->Item[$number]->Title;
            $price = $getvalues->Item[$number]->ConvertedCurrentPrice;
            $manufacturer = $getvalues->Item[$number]->ItemSpecifics->NameValueList[3]->Value;
            $model = $getvalues->Item[$number]->ItemSpecifics->NameValueList[4]->Value;
            $mileage = $getvalues->Item[$number]->ItemSpecifics->NameValueList[5]->Value;
            echo "item number = $item_number <br>localtion = $location<br>".
                 "title = $title<br>price = $price<br>manufacturer = $manufacturer".
                 "<br>model = $model<br>mileage = $mileage<br>";
            $number++;
        }

    The above code returns:

        item number =
        localtion =
        title =
        price =
        manufacturer =
        model =
        mileage =

        item number = 230485306218
        localtion = Coventry, Warwickshire
        title = 2001 LAND ROVER RANGE ROVER VOGUE AUTO GREEN
        price = 3635.07
        manufacturer = Land Rover
        model = Range Rover
        mileage = 76000

        item number = 220617293997
        localtion = Crawley, West Sussex
        title = 2004 CITROEN C5 HDI LX RED
        price = 3115.77
        manufacturer = Citroen
        model = C5
        mileage = 76000

        item number = 180519294810
        localtion = London, London
        title = 2000 VOLKSWAGEN POLO 1.4 SILVER 16V NEED GEAR BOX
        price = 905.06
        manufacturer = Right-hand drive
        model =
        mileage = Standard Car

        item number =
        localtion =
        title =
        price =
        manufacturer =
        model =
        mileage =

    As you can see, the information is not retrieved for a few items... If I replace the $number manually, like

        $item_number = $getvalues->Item[4]->ItemID;

    it works well for any number.
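    Not part of the original question, but one robust pattern (a sketch; it assumes the response shape shown above, and the Name keys such as 'Manufacturer' are illustrative – inspect the actual XML for the real values): iterate the items directly and index the item specifics by name rather than by position, so items with missing or extra NameValueList entries don't shift the fields:

        <?php
        // Sketch: look up item specifics by Name instead of numeric position.
        $getvalues = simplexml_load_file('api.xml');

        foreach ($getvalues->Item as $item) {
            $specifics = array();
            foreach ($item->ItemSpecifics->NameValueList as $nv) {
                $specifics[(string)$nv->Name] = (string)$nv->Value;
            }

            echo "item number = {$item->ItemID}<br>"
               . "location = {$item->Location}<br>"
               . "title = {$item->Title}<br>"
               . "price = {$item->ConvertedCurrentPrice}<br>"
               . "manufacturer = " . (isset($specifics['Manufacturer']) ? $specifics['Manufacturer'] : '-') . "<br>";
        }
        ?>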

    Read the article

  • How can I break from a method prematurely that's being called by NSTimer

    - by jammur
    Basically I'm writing a metronome app, but I'm using a sound file that, depending on the BPM, might not be finished playing when the "play" method is called again. For example, if the sound file is 0.5 seconds long but the BPM is 200, the "play" method needs to be called every 0.3 seconds. I'm not overly familiar with NSTimer, but it appears that if it is supposed to fire before the previous invocation has completed, it doesn't, and just waits for the next time around. I could be completely wrong about that though. What I need to do is have the previous invocation end prematurely, and have the "play" method called again when the timer is supposed to fire. Any help would be appreciated!
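    A sketch of one common approach – my suggestion, not part of the original question – assuming the tick sound is played with AVAudioPlayer: rewind the player on every timer fire, so a still-playing sound never blocks the next tick:

        // Assumes an AVAudioPlayer ivar named _player that was created
        // and prepared elsewhere (the names here are illustrative).
        - (void)tick:(NSTimer *)timer
        {
            if (_player.playing) {
                [_player pause];         // cut the previous tick short
            }
            _player.currentTime = 0;     // rewind to the start of the sample
            [_player play];
        }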

    Read the article

  • When is a scala partial function not a partial function?

    - by Fred Haslam
    While creating a map of String to partial functions I ran into unexpected behavior. When I create a partial function as a map element it works fine. When I allocate to a val it invokes instead. Trying to invoke the check generates an error. Is this expected? Am I doing something dumb? Comment out the check() to see the invocation. I am using scala 2.7.7

        def PartialFunctionProblem() = {
            def dream()() = {
                println("~Dream~");
                new Exception().printStackTrace()
            }
            val map = scala.collection.mutable.HashMap[String,()=>Unit]()
            map("dream") = dream()   // partial function
            map("dream")()           // invokes as expected
            val check = dream()      // unexpected invocation
            check()                  // error: check of type Unit does not take parameters
        }
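    For what it's worth – my note, not part of the original question: dream()() is a curried method, and when there is no expected function type the compiler auto-applies the remaining empty parameter list, which is why the val invokes it. The map assignment works because the expected type ()=>Unit forces a partial application. Making that explicit sidesteps the surprise; a sketch (the underscore form may behave differently across Scala versions):

        val check: () => Unit = dream()   // expected type forces partial application
        check()                           // invokes the second parameter list

        val check2 = dream() _            // explicit eta-expansion, same idea
        check2()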

    Read the article

  • How to define generic super type for static factory method?

    - by Esko
    If this has already been asked, please link and close this one.

    I'm currently prototyping a design for a simplified API of a certain other API that's a lot more complex (and potentially dangerous) to use. Considering the related, somewhat complex object creation, I decided to use static factory methods to simplify the API, and I currently have the following, which works as expected:

        public class Glue<T> {
            private List<Type<T>> types;

            private Glue() {
                types = new ArrayList<Type<T>>();
            }

            private static class Type<T> {
                private T value;
                /* some other properties, omitted for simplicity */

                public Type(T value) {
                    this.value = value;
                }
            }

            public static <T> Glue<T> glueFactory(String name, T first, T second) {
                Glue<T> g = new Glue<T>();
                Type<T> firstType = new Glue.Type<T>(first);
                Type<T> secondType = new Glue.Type<T>(second);
                g.types.add(firstType);
                g.types.add(secondType);
                /* omitted complex stuff */
                return g;
            }
        }

    As said, this works as intended. When the API user (=another developer) types

        Glue<Horse> strongGlue = Glue.glueFactory("2HP", new Horse(), new Horse());

    he gets exactly what he wanted. What I'm missing is: how do I enforce that Horse – or whatever is put into the factory method – always implements both Serializable and Comparable? Simply adding them to the factory method's signature using <T extends Comparable<T> & Serializable> doesn't necessarily enforce this rule in all cases, only when this simplified API is used. That's why I'd like to add them to the class' definition and then modify the factory method accordingly.

    PS: No horses (and definitely no ponies!) were harmed in writing of this question.
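    For completeness, a sketch of mine (not from the original question): Java type parameters accept multiple bounds joined with &, so the constraint can be stated once on the class declaration, and the factory's own type parameter then has to repeat it:

        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.List;

        // Multiple bounds: a class bound (if any) must come first,
        // interfaces follow, separated by '&'.
        public class Glue<T extends Comparable<T> & Serializable> {

            private final List<T> values = new ArrayList<T>();

            private Glue() { }

            // The static method declares its own T, so the bounds
            // must be repeated here as well.
            public static <T extends Comparable<T> & Serializable>
                    Glue<T> glueFactory(String name, T first, T second) {
                Glue<T> g = new Glue<T>();
                g.values.add(first);
                g.values.add(second);
                return g;
            }
        }

    With this, Glue.glueFactory("2HP", new Horse(), new Horse()) only compiles if Horse implements both interfaces, no matter how the class is reached.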

    Read the article

  • DllImport Based on OS Platform

    - by Ngu Soon Hui
    I have a mixture of unmanaged code (backend) and managed code (frontend); as such, I need to call the unmanaged code from my managed code, using interop techniques and the DllImport attribute. Now, I've compiled two versions of the unmanaged code, for 32- and 64-bit OSes; they are named service32.dll and service64.dll respectively. So, in my .NET code, I have to do a DllImport for both DLLs:

        [DllImport(@"service32.dll")] // for 32 bit OS invocation
        public static void SimpleFunction();

        [DllImport(@"service64.dll")] // for 64 bit OS invocation
        public static void SimpleFunction();

    And call them depending on which platform my application is running on. The issue now is that for every unmanaged function, I have to declare it twice: once for the 32-bit OS and once for the 64-bit OS. This is a duplication of work, and every time I change the signature of an unmanaged function, I have to modify it in two places. Is there any way that I can change the argument in DllImport so that the correct DLL will be invoked automagically, depending on the platform?
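    The DllImport path is a compile-time constant, so it cannot be switched at runtime on the attribute itself. One common workaround – a sketch, assuming both DLLs export identical signatures – is to declare both imports in private nested classes and route through a single wrapper that checks the process bitness:

        using System;
        using System.Runtime.InteropServices;

        public static class NativeService
        {
            private static class Service32
            {
                [DllImport("service32.dll", EntryPoint = "SimpleFunction")]
                public static extern void SimpleFunction();
            }

            private static class Service64
            {
                [DllImport("service64.dll", EntryPoint = "SimpleFunction")]
                public static extern void SimpleFunction();
            }

            // IntPtr.Size is 8 in a 64-bit process and 4 in a 32-bit one.
            public static void SimpleFunction()
            {
                if (IntPtr.Size == 8)
                    Service64.SimpleFunction();
                else
                    Service32.SimpleFunction();
            }
        }

    Callers then see a single API; the duplication is centralized in one file rather than removed entirely.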

    Read the article

  • Downloading large files with AFNetworking

    - by goodfella
    I'm trying to implement downloading of a large file and show the user the current progress, but the block passed to -[AFURLConnectionOperation setDownloadProgressBlock:] reports incorrect bytesRead and totalBytesRead values (they are smaller than they should be). For example: if I have a 90MB file, when it downloads completely, the latest block invocation in setDownloadProgressBlock: gives me a totalBytesRead value of about 30MB. On the other hand, if the file is 2MB large, the latest block invocation gives the correct totalBytesRead value of 2MB. AFNetworking is updated to the latest version from github. If AFNetworking can't do it correctly, what solution can I use?

    Edit: I've determined that even if the file is not downloaded completely (and this happens every time with a relatively big file), AFNetworking calls the success block in -[AFHTTPRequestOperation setCompletionBlockWithSuccess:failure:]. I asked a similar question here about this situation, but didn't get any answers. I can check the downloaded and real file sizes in code, but AFNetworking has no API for continuation of a partial download.
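    For reference, a sketch of streaming a large download to disk with AFNetworking 1.x – my suggestion, not from the original question. One known cause of byte counts that look too small is a gzip-compressed response, where the progress callback sees compressed sizes; the names below assume the 1.x API and an existing NSURLRequest and file path:

        AFHTTPRequestOperation *op = [[AFHTTPRequestOperation alloc] initWithRequest:request];
        // Write straight to disk instead of accumulating the file in memory.
        op.outputStream = [NSOutputStream outputStreamToFileAtPath:path append:NO];

        [op setDownloadProgressBlock:^(NSUInteger bytesRead,
                                       long long totalBytesRead,
                                       long long totalBytesExpectedToRead) {
            if (totalBytesExpectedToRead > 0) {
                float progress = (float)totalBytesRead / totalBytesExpectedToRead;
                NSLog(@"progress: %.2f", progress);
            }
        }];

        [op setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
            NSLog(@"download finished");
        } failure:^(AFHTTPRequestOperation *operation, NSError *error) {
            NSLog(@"download failed: %@", error);
        }];

        [op start];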

    Read the article

  • jQuery jqXHR - cancel chained calls, trigger error chain

    - by m0sa
    I am creating an ajax utility for interfacing with my server methods. I would like to leverage jQuery 1.5+ deferred methods on the object returned from the jQuery.ajax() call. The situation is the following: the server-side method always returns a JSON object:

        { success: true|false, data: ... }

    The client-side utility initiates the ajax call like this:

        var jqxhr = $.ajax({ ... });

    And the problem area:

        jqxhr.success(function(data, textStatus, xhr) {
            if(!data || !data.success) {
                ???? // abort processing, trigger error
            }
        });
        return jqxhr; // return to caller so he can attach his own handlers

    So the question is: how do I cancel invocation of all the caller's appended success callbacks and trigger his error handler at the place marked with ????? The documentation says the deferred function invocation lists are FIFO, so my success handler is definitely the first one.
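    One approach – a sketch of mine, not part of the original question, assuming jQuery 1.6+ where deferred.pipe() is available (it became the filtering then() in 1.8): return a piped promise instead of the raw jqXHR, and reject it when the envelope reports failure. The caller's .done() handlers then only fire on real success, and .fail() handlers fire otherwise:

        function callServer(options) {
            var jqxhr = $.ajax(options);

            // pipe() returns a new promise; returning a rejected deferred
            // from the filter routes callers to their .fail() handlers.
            return jqxhr.pipe(function (data, textStatus, xhr) {
                if (!data || !data.success) {
                    return $.Deferred().reject(jqxhr, 'servererror', data);
                }
                return data.data; // callers' .done() receives only the payload
            });
        }

        // Usage sketch:
        callServer({ url: '/api/method', dataType: 'json' })
            .done(function (payload) { console.log('ok', payload); })
            .fail(function (xhr, status, data) { console.log('failed', status); });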

    Read the article

  • PHP-Mcrypt Installation

    - by Infinity
    I need to install php-mcrypt on my CentOS 5.5 VPS. When I try yum install php-mcrypt, it says that it is set to be updated, which implies that it is already installed. I looked in /usr/lib/php/modules and can't find the .so file. Anyway, I want to update it, but yum is giving the following error. I am running PHP-FPM on Nginx.

        Last login: Thu Apr 21 12:13:30 2011 from cpc2-seve18-2-0-cust438.13-3.cable.virginmedia.com
        [root@infinity ~]# yum install php-mcrypt
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php-mcrypt.i386 0:5.1.6-15.el5.centos.1 set to be updated
        --> Processing Dependency: php-api = 20041225 for package: php-mcrypt
        --> Processing Dependency: php >= 5.1.6 for package: php-mcrypt
        --> Running transaction check
        ---> Package php.i386 0:5.1.6-27.el5_5.3 set to be updated
        --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php
        --> Processing Dependency: php-cli = 5.1.6-27.el5_5.3 for package: php
        ---> Package php-mcrypt.i386 0:5.1.6-15.el5.centos.1 set to be updated
        --> Processing Dependency: php-api = 20041225 for package: php-mcrypt
        --> Running transaction check
        ---> Package php.i386 0:5.1.6-27.el5_5.3 set to be updated
        --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php
        ---> Package php-cli.i386 0:5.1.6-27.el5_5.3 set to be updated
        --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php-cli
        ---> Package php-mcrypt.i386 0:5.1.6-15.el5.centos.1 set to be updated
        --> Processing Dependency: php-api = 20041225 for package: php-mcrypt
        --> Finished Dependency Resolution
        php-mcrypt-5.1.6-15.el5.centos.1.i386 from extras has depsolving problems
        --> Missing Dependency: php-api = 20041225 is needed by package php-mcrypt-5.1.6-15.el5.centos.1.i386 (extras)
        php-5.1.6-27.el5_5.3.i386 from base has depsolving problems
        --> Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-5.1.6-27.el5_5.3.i386 (base)
        php-cli-5.1.6-27.el5_5.3.i386 from base has depsolving problems
        --> Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-cli-5.1.6-27.el5_5.3.i386 (base)
        Error: Missing Dependency: php-api = 20041225 is needed by package php-mcrypt-5.1.6-15.el5.centos.1.i386 (extras)
        Error: Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-cli-5.1.6-27.el5_5.3.i386 (base)
        Error: Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-5.1.6-27.el5_5.3.i386 (base)
        You could try using --skip-broken to work around the problem
        You could try running: package-cleanup --problems
                               package-cleanup --dupes
                               rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.
        [root@infinity ~]#

    Any ideas?
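    A first diagnostic step, offered as my own suggestion rather than part of the question: this dependency pattern (php-mcrypt from extras wanting php-api = 20041225 while php-common cannot be updated) often means the installed PHP packages came from a different repository than the one providing php-mcrypt. Checking which repo built the current PHP narrows it down:

        # Which php packages are installed, and from which build?
        rpm -qa --queryformat '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | grep '^php'
        yum list installed | grep '^php'

        # If php/php-common came from a third-party repo (remi, ius, ...),
        # install the matching mcrypt build from that same repo instead, e.g.:
        # yum --enablerepo=remi install php-mcrypt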

    Read the article

  • GPGPU

    What

    GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether GPGPU refers to computing we can perform on a GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn't have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.

    Why

    When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal").

    The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, can perform the costly operation and return the output. The kernels are the things that execute on the GPGPU, leveraging its power (and hence executing faster than they could on the CPU), while the host CPU program waits for the results or asynchronously performs other tasks.

    However, GPGPUs have different characteristics to CPUs, which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and vice versa the results back to the CPU), so the computation itself has to be long enough to justify the overhead transfer costs. If your problem space fits the criteria then you probably want to check out this technology.

    How

    So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors ATI (owned by AMD) and NVIDIA are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs.

    If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology accessible via a language they call Brook+; NVIDIA offers their CUDA platform which is accessible from CUDA C. Choosing either of those locks you into the GPU vendor and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next.

    On Windows, there is already a cross-GPU-vendor way of programming GPUs and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side, to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language).

    You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute.

    Note that there are many efforts to build even higher level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post welcome at the original blog.
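    To give a flavor of what such a kernel looks like, here is a minimal illustrative compute shader in HLSL (my example, not taken from the post): each GPU thread doubles one element of a buffer. The host-side DirectX code that creates the device, binds the buffer and dispatches the shader is omitted:

        // Read-write buffer bound by the host application to slot u0.
        RWStructuredBuffer<float> data : register(u0);

        // 64 threads per thread group; the host chooses the group count
        // when it calls Dispatch().
        [numthreads(64, 1, 1)]
        void CSMain(uint3 id : SV_DispatchThreadID)
        {
            data[id.x] = data[id.x] * 2.0f;
        }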

    Read the article

  • What I don't like about WIF's Claims-based Authorization

    - by Your DisplayName here!
    In my last post I wrote about what I like about WIF's proposed approach to authorization – I also said that I definitely would build upon that infrastructure for my own systems. But implementing such a system is a little harder than it could be. Here's why (and that's purely my perspective):

    First of all, WIF's authorization comes in two "modes":

    - Per-request authorization. When an ASP.NET/WCF request comes in, the registered authorization manager gets called. For SOAP the SOAP action gets passed in; for HTTP requests (ASP.NET, WCF REST) the URL and verb.
    - Imperative authorization. This happens when you explicitly call the claims authorization API from within your code. There you have full control over the values for action and resource.

    In ASP.NET, per-request authorization is optional (it depends on whether you have added the ClaimsAuthorizationHttpModule). In WCF you always get the per-request checks as soon as you register the authorization manager in configuration.

    I personally prefer the imperative authorization because, first of all, I don't believe in URL-based authorization. Especially in the times of MVC and routing tables, URLs can be easily changed – but then you also have to adjust your authorization logic every time. Also – you typically need more knowledge than a simple "if user x is allowed to invoke operation x".

    One problem I have is that both the per-request calls as well as the standard WIF imperative authorization APIs wrap actions and resources in the same claim type. This makes it hard to distinguish between the two authorization modes in your authorization manager. But you typically need that feature to structure your authorization policy evaluation in a clean way.

    The second problem (which is somehow related to the first one) is the standard API for interacting with the claims authorization manager. The API comes as an attribute (ClaimsPrincipalPermissionAttribute) as well as a class to use programmatically (ClaimsPrincipalPermission). Both only allow passing in simple strings (which results in the wrapping with standard claim types mentioned earlier). Both throw a SecurityException when the check fails.

    The attribute is a code access permission attribute (like PrincipalPermission). That means it will always be invoked regardless of how you call the code. This may be exactly what you want, or not. In a unit testing situation (like an MVC controller) you typically want to test the logic in the function – not the security check.

    The good news is, the WIF API is flexible enough that you can build your own infrastructure around their core. For my own projects I implemented the following extensions:

    - A way to invoke the registered claims authorization manager with more overloads, e.g. with different claim types or a complete AuthorizationContext.
    - A new CAS attribute (with the same calling semantics as the built-in one) with custom claim types.
    - An MVC authorization attribute with custom claim types.
    - A way to use branching – as opposed to catching a SecurityException.

    I will post the code for these various extensions here – so stay tuned.
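    As an illustration of the "branching" extension the post mentions, here is a sketch of mine (not the author's published code, which came later) under the assumption of the WIF 1.0 Microsoft.IdentityModel API: a helper that converts the Demand/SecurityException pattern into a boolean check:

        using System.Security;
        using Microsoft.IdentityModel.Claims;

        public static class ClaimsAuthorize
        {
            // Wraps the Demand/SecurityException pattern so call sites can
            // branch on a bool instead of writing try/catch everywhere.
            // The (resource, action) argument order follows the WIF docs;
            // verify against the WIF version you are using.
            public static bool CheckAccess(string resource, string action)
            {
                try
                {
                    new ClaimsPrincipalPermission(resource, action).Demand();
                    return true;
                }
                catch (SecurityException)
                {
                    return false;
                }
            }
        }

    A call site then reads if (ClaimsAuthorize.CheckAccess("Customer", "Delete")) { ... } else { ... } instead of wrapping every check in its own try/catch.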

    Read the article

  • Stack Exchange Notifier Chrome Extension [v1.2.9.3 released]

    - by Vladislav Tserman
    About

    Stack Exchange Notifier is a handy extension for the Google Chrome browser that displays your current reputation and badges on Stack Exchange sites and notifies you of reputation changes. You will now get notified of comments on your own posts (questions and answers) and of any comments that refer to you by @username in a comment, even if you do not own the post (aka mentions). All Stack Exchange sites are supported.

    Access

    Install the extension from the Google Chrome Extension Gallery.

    Platform

    Google Chrome browser extension

    Contact

    Created by me (Vladislav Tserman). I'm available at: vladjan (at) gmail.com. Follow Stack Exchange Notifier on twitter to get notified about news and updates: http://twitter.com/se_notifier

    Code

    Written in Java, Google Web Toolkit under Eclipse Helios. Stack Exchange Notifier uses the Stack Exchange API and is powered by Google App Engine for Java.

    Changelog

    I will be porting the extension to not use the App Engine back-end due to some limitations. New versions of the extension will be making direct calls to the Stack Exchange API right from your browser. Please do not expect new versions of the extension any time soon. Sorry. Read more about the limitations here http://stackapps.com/questions/1713 and here http://stackoverflow.com/questions/3949815. Currently, you may sometimes experience some issues using the extension, but most users will have no problems. You may notice too many errors in the logs, but there is nothing I can do with this now. Thanks for using my little app; thanks to all of you it still works in spite of many issues with the API.

    - Version 1.2.9.3 - Thursday, October 14, 2010 - Bug fix release (back-end improvements)
    - Version 1.2.9.2 - Thursday, October 07, 2010 - Bug fix release (a high rate of occasional API errors was noticed, so some fixes were added to handle them where possible)
    - Version 1.2.9.1 - Tuesday, October 05, 2010 - Mostly a bug fix release with back-end performance improvements. You will now get notified of comments on your own posts (questions and answers) that are not older than 1 year and of any comments that refer to you by @username in a comment, even if you do not own the post (aka mentions). This is an experimental feature, let me know if you like/need it. A new 'All sites' view displays all websites from the Stack Exchange network (part of a new feature that is not finished yet).
    - Version 1.2.9 - Saturday, September 25, 2010 - Fixes an issue where some users got an empty Account view. When hovering on @Username in the account view, the title now displays '@Username on @SiteName' so the site name is easy to see.
    - Version 1.2.7 - Wednesday, September 22, 2010 - Fixed an issue with notifications. Minor improvements.
    - Version 1.2.5 - Tuesday, September 21, 2010 - Fixed an issue where some characters in the response payload raised an exception when parsing to JSON.
    - v1.2.3 (Sunday, September 19, 2010) - Support for new OpenID providers was added (Yahoo, MyOpenID, AOL). UI improvements. Several minor defects were fixed.
    - v1.2.2 (Thursday, September 16, 2010) - New types of notifications added: the extension now notifies you of comments that are directed to you. Comments are expandable, so clicking on a comment title will expand its height to accommodate all available text. UI and error handling improvements.

    Future

    The application is still in beta. I hope you're not having any problems, but if you are, please let me know. Leave your feedback and bug reports in the comments. I'm available at: vladjan (at) gmail.com. I'm working on adding new features. I want to hear from the users and incorporate as much feedback as possible into the extension. Any suggestions for improvements/features to add?

    Read the article

  • Installing Exchange 2013 CU1

    - by marc dekeyser
    Originally posted on: http://geekswithblogs.net/marcde/archive/2013/08/01/installing-exchange-2013-cu1.aspx

    Before you begin

    Download the following software:

    - UCMA 4.0: http://www.microsoft.com/en-us/download/details.aspx?id=34992
    - Office 2010 filter packs 64 bit: http://www.microsoft.com/en-us/download/details.aspx?id=17062
    - Office 2010 filter packs SP1 64 bit: http://www.microsoft.com/en-us/download/details.aspx?id=26604

    Prerequisite installation

    Step 1: Open Windows PowerShell.

    Step 2: Enter the following string to start the prerequisite installation for a multi-role server:

        Install-WindowsFeature AS-HTTP-Activation, Desktop-Experience, NET-Framework-45-Features, RPC-over-HTTP-proxy, RSAT-Clustering, RSAT-Clustering-CmdInterface, RSAT-Clustering-Mgmt, RSAT-Clustering-PowerShell, Web-Mgmt-Console, WAS-Process-Model, Web-Asp-Net45, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Lgcy-Mgmt-Console, Web-Metabase, Web-Mgmt-Console, Web-Mgmt-Service, Web-Net-Ext45, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI, Windows-Identity-Foundation

    Step 3: Restart the server:

        Shutdown.exe /r /t 60

    Step 4: Install the UCMA runtime. Navigate to the folder holding the prerequisite downloads and double-click "UCMARunTimeSetup".

    Step 5: Accept the Run prompt.

    Step 6: Click "Next" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup".

    Step 7: Check "I have read and accept the license terms" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup".

    Step 8: Click "Install" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup".

    Step 9: Click "Finish" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup".

    Step 10: Start the Office 2010 filter pack installation.

    Step 11: Click "Run" in "Open File - Security Warning".

    Step 12: Click "Microsoft Filter Pack 2.0", as it hides in the background by default.

    Step 13: Click "Next" in "Microsoft Filter Pack 2.0".

    Step 14: Check "I accept the terms in the License Agreement" in "Microsoft Filter Pack 2.0".

    Step 15: Click "Next" in "Microsoft Filter Pack 2.0".

    Step 16: Click "OK" in "Microsoft Filter Pack 2.0".

    Step 17: Start the installation of the Office 2010 filter pack SP1.

    Step 18: Click "Run" in "Open File - Security Warning".

    Step 19: Check "Click here to accept the Microsoft Software License Terms" in "Microsoft Office 2010 Filter Pack Service Pack 1 (SP1)".

    Step 20: Click "Continue" in "Microsoft Office 2010 Filter Pack Service Pack 1 (SP1)".

    Step 21: Click "OK" in "Microsoft Office 2010 Filter Pack Service Pack 1 (SP1)".

    Step 22: Open Windows PowerShell.

    Step 23: Restart the server:

        Shutdown.exe /r /t 60

    Step 24: Click "Close" in "You're about to be signed off".

    Installing Exchange Server 2013

    Step 1: Navigate to the location where Exchange 2013 CU1 was extracted and run setup.exe. Click "next" in "Exchange Server Setup".

    Step 2: Click "next" in "Exchange Server Setup".

    Step 3: Click through the "Exchange Server Setup" window.

    Step 4: Click through the "Exchange Server Setup" window.

    Step 5: Click "next" in "Exchange Server Setup".

    Step 6: Check "I accept the terms in the license agreement" in "Exchange Server Setup".

    Step 7: Click "next" in "Exchange Server Setup".

    Step 8: Click "next" in "Exchange Server Setup".

    Step 9: Select "Mailbox role" in "Exchange Server Setup".

    Step 10: Select "Client Access role" in "Exchange Server Setup".

    Step 11: Click "next" in "Exchange Server Setup".

    Step 12: Click "next" in "Exchange Server Setup".

    Step 13: Choose the installation path and click "next" in "Exchange Server Setup".

    Step 14: Leave malware scanning on by making sure the radio button is set to "No", then continue in "Exchange Server Setup".

    Step 15: Click "finish" in "Exchange Server Setup".

    Step 16: Restart the server:

        Shutdown.exe /r /t 60

    Read the article

  • Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark (TOTD #184)

    - by arungupta
    TOTD #183 explained how to build a WebSocket-driven application using GlassFish 4. This Tip Of The Day (TOTD) will explain how to view/debug the on-the-wire messages, or frames as they are called in WebSocket parlance, sent over this upgraded connection. This blog will use the application built in TOTD #183.

    First of all, make sure you are using a browser that supports WebSocket. If you recall from TOTD #183, WebSocket is a combination of a protocol and a JavaScript API. A browser supporting WebSocket, or not, means it understands your web pages with the WebSocket JavaScript. caniuse.com/websockets provides the current status of WebSocket support in different browsers. Most of the major browsers such as Chrome, Firefox and Safari have supported WebSocket for the past few versions. As of this writing, IE still does not support WebSocket; however, it is planned for a future release.

    Viewing WebSocket frames requires special settings because all the communication happens over an upgraded HTTP connection over a single TCP connection. If you are building your application using Java, then there are two common ways to debug WebSocket messages today. Other language libraries provide different mechanisms to log the messages. This blog will use the application built in TOTD #183. Let's get started!

    Chrome Developer Tools provide information about the initial handshake only. This can be viewed in the Network tab by selecting the endpoint hosting the WebSocket endpoint. You can also click on "WebSockets" on the bottom-right to show only the WebSocket endpoints. Click on "Frames" in the right panel to view the actual frames being exchanged between the client and server. The frames are not refreshed when new messages are sent or received; you need to refresh the panel by clicking on the endpoint again.

    To see more detailed information about the WebSocket frames, you need to type "chrome://net-internals" in a new tab. Click on "Sockets" in the left navigation bar and then on "View live sockets" to see the page. Select the box with the address of your WebSocket endpoint to see some basic information about the connection and the bytes exchanged between the client and the endpoint. Clicking on the blue text "source dependency ..." shows more details about the handshake.

    If you are interested in viewing the exact payload of WebSocket messages then you need a network sniffer. These tools are used to snoop network traffic and provide a lot more details about the raw messages exchanged over the network. However, because they provide a lot more information, they need to be configured in order to view the relevant information. Wireshark (née Ethereal) is a pretty standard tool for sniffing network traffic and will be used here. For this blog's purpose, we'll assume that the WebSocket endpoint is hosted on the local machine. These tools do allow sniffing traffic across the network though.

    Wireshark is quite a comprehensive tool and we'll capture traffic on the loopback address. Start Wireshark, select "loopback" and click on "Start". By default, all traffic information on the loopback address is displayed. That includes tons of TCP protocol messages, applications running on your local machine (like GlassFish or Dropbox on mine), and many others. Specify "http" as the filter in the top-left. Invoke the application built in TOTD #183 and click on the "Say Hello" button once.

    The resulting output in Wireshark shows the exchanged messages (screenshot omitted). Here is a description of the messages exchanged:

    - Message #4: Initial HTTP request of the JSP page
    - Message #6: Response returning the JSP page
    - Message #16: HTTP Upgrade request
    - Message #18: Upgrade request accepted
    - Message #20: Request favicon
    - Message #22: Responding with favicon not found
    - Message #24: Browser making a WebSocket request to the endpoint
    - Message #26: WebSocket endpoint responding back

    You can also use Fiddler to debug your WebSocket messages. How are you viewing your WebSocket messages?

    Here are some references for you:

    - JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
    - TOTD #183 - Getting Started with WebSocket in GlassFish

    Subsequent blogs will discuss the following topics (not necessarily in that order):

    - Binary data as payload
    - Custom payloads using encoder/decoder
    - Error handling
    - Interface-driven WebSocket endpoint
    - Java client API
    - Client and Server configuration
    - Security
    - Subprotocols
    - Extensions
    - Other topics from the API

    Read the article
