Search Results

Search found 762 results on 31 pages for 'rhino commons'.


  • Post MIX10 Decompression

    - by Dave Campbell
    With a big dose of reality, I walked into this place this morning and found out "yeah, I really do write .NET web apps and MS Access for a living" :( ... but it pays the bills and I've gotten *way* used to eating 3 times a day :) MIX10 was great, although the buzz didn't seem as big as MIX09, and I'm not sure why. It also seemed like a different crowd, and other folks I talked to agreed with that. Of course now I can outwardly admit that the "Windows Phone 7 Series" is programmed with Silverlight ... how cool is that? I've been biting my tongue about that info for over a month! I cloistered myself in Ballroom A for the week, not counting the Keynotes. That's where the phone sessions were located. I tried to collect the full set, but ended up bailing on the last one because it was ending at the time that MIX10 was ending, and I hadn't spent a whole lot of time in 'The Commons'. I met a bunch of folks I've blogged about, or exchanged email with, and that's always fun. Renewed associations with folks I only see once or twice a year... the list is way too long, and I don't want to mention some and leave off others... I did have an opportunity to meet Charles Petzold... wow that was interesting... I got into Windows development through Charles' Programming Windows 3.1 book 'back in the day' ... couldn't find anyone at Honeywell who wanted to join my journey, so it was just me and 'Chuck' :) ... read every word of that book more than once... all marked up, tags sticking out of it. And now he's writing a WP7 book ... gotta get it: Free ebook: Programming Windows Phone 7 Series (DRAFT Preview) I went through my Big List-o-Blogs™ last night and it took over 2 hours because of all the new content since MIX10. I've got 90 posts tagged as of 9PM on 3/21. If everybody stopped right now, it would take me 9 days to push what I have now, so you'll have to be patient! I had another event on Thursday that was *extremely* tiring, so I ended up staying over another night. I drove back onto the Strip on Friday morning to try to find a non-cheesy souvenir for my wife, and didn't find much. Then I went to Blueberry Hill restaurant for 3 eggs, 3 strips of bacon, and 3 awesome potato pancakes. Check them out if you have time! And then hit the road. In case anyone is wondering, the 2-1/2 hour drive I took across Hoover Dam on Sunday afternoon only took 30 minutes on Friday afternoon... that was a more normal trip! I thoroughly enjoyed the time I spent with everyone. Thanks to John Papa and his crew for the great Insider's party on Monday night... the Blues Brothers were a fun surprise and they did a good job! And the swag was great... thanks to all the contributors for a fun evening at their expense! All I can say is stay tuned, go to live.visitmix.com/videos and watch everything, get the phone tools, start working... everything's different and everything's fun... jump in, it's all Silverlight! Stay in the 'Light! Technorati Tags: Silverlight, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Where's My Windows Azure Subscriptions

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/03/wheres-my-windows-azure-subscriptions.aspx
    Yesterday when I opened the Windows Azure management portal I found that some resources were missing. I checked the websites for those missing cloud services and they were still live. Then I checked my billing history but didn't find any problem. When I went back to the portal I found that all of those resources were under my MSDN subscription, so I wondered whether this was related to the recent Windows Azure platform update. This feature, named "Enterprise Management", provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution. By default, every existing Windows Azure account has a default Windows Azure Active Directory (a.k.a. WAAD) associated with it. In the address bar I can find the default login WAAD of my account, which is "microsoft.onmicrosoft.com". To change the WAAD we can click "subscriptions" at the top of the management portal, select the active directory from the "filter by directory" list, select the subscription we want to see, then press "apply". As you can see, the subscription under my MSDN was located in a WAAD named "beijingtelecom.onmicrosoft.com". This is because when Microsoft rolled out this feature, they checked whether you had an existing WAAD in your subscription. If not, they created a new one; otherwise they used your WAAD and moved your subscription into that directory. Since I had created a WAAD for testing several months ago, this subscription was moved into that directory. Changing a subscription's directory is simple. First we need to create a new WAAD with the name we prefer; below, I created a new directory named "shaunxu". Then select "settings" from the left navigation bar, select the subscription we want to change and click "edit directory". You don't have permission to edit/change the directory unless your Microsoft Account is the service administrator of this subscription. Then in the popup window, select the WAAD you want to change to and press "next". All done. You need to log off and log in to the portal again, and then your subscription will be in the directory you wanted. After these steps I can view my resources in this subscription.
    Summary: In this post I described how to move subscriptions into a new directory. With this new feature we can manage our Windows Azure subscriptions more flexibly, but there are some things we need to keep in mind. 1. Only the service administrator is able to move a subscription. 2. Currently there's no way for us to see our Windows Azure services in more than one directory at the same time. In my case, I can see my services under "shaunxu.onmicrosoft.com" and I must change the filter directory from the "subscriptions" menu to see the other services under "microsoft.onmicrosoft.com". 3. Currently we cannot delete an existing WAAD. Hope this helps, Shaun. All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Why would Copying a Large Image to the Clipboard Freeze a Computer?

    - by Akemi Iwaya
    Sometimes, something really odd happens when using our computers that makes no sense at all…such as copying a simple image to the clipboard and the computer freezing up because of it. An image is an image, right? Today's SuperUser post has the answer to a puzzled reader's dilemma. Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Original image courtesy of Wikimedia. The Question SuperUser reader Joban Dhillon wants to know why copying an image to the clipboard on his computer freezes it up: I was messing around with some height map images and found this one: (http://upload.wikimedia.org/wikipedia/commons/1/15/Srtm_ramp2.world.21600x10800.jpg) The image is 21,600 x 10,800 pixels in size. When I right click and select "Copy Image" in my browser (I am using Google Chrome), it slows down my computer until it freezes. After that I must restart. I am curious about why this happens. I presume it is the size of the image, although it is only about 6 MB when saved to my computer. I am also using Windows 8.1. Why would a simple image freeze Joban's computer up after copying it to the clipboard? The Answer SuperUser contributor Mokubai has the answer for us: "Copy Image" is copying the raw image data, rather than the image file itself, to your clipboard. The raw image data will be 21,600 x 10,800 x 3 (24 bit image) = 699,840,000 bytes of data. That is approximately 700 MB of data your browser is trying to copy to the clipboard. JPEG compresses the raw data using a lossy algorithm and can get pretty good compression. Hence the compressed file is only 6 MB. The reason it makes your computer slow is that it is probably filling your memory up with at least the 700 MB of image data that your browser is using to show you the image, another 700 MB (along with whatever overhead the clipboard incurs) to store it on the clipboard, and a not insignificant amount of processing power to convert the image into a format that can be stored on the clipboard. Chances are that if you have less than 4 GB of physical RAM, then those copies of the image data are forcing your computer to page memory out to the swap file in an attempt to fulfil both memory demands at the same time. This will cause programs and disk access to be sluggish as they use the disk and try to use the data that may have just been paged out. In short: Do not use the clipboard for huge images unless you have a lot of memory and a bit of time to spare. Like pretty graphs? This is what happens when I load that image in Google Chrome, then copy it to the clipboard on my machine with 12 GB of RAM: It starts off at the lower point using 2.8 GB of RAM, loading the image punches it up to 3.6 GB (approximately the 700 MB), then copying it to the clipboard spikes way up there at 6.3 GB of RAM before settling back down at the 4.5-ish you would expect to see for a program and two copies of a rather large image. That is a whopping 3.7 GB of image data being worked on at the peak, which is probably the initial image, a reserved quantity for the clipboard, and perhaps a couple of conversion buffers. That is enough to bring any machine with less than 8 GB of RAM to its knees. Strangely, doing the same thing in Firefox just copies the image file rather than the image data (without the scary memory surge). Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? 
Check out the full discussion thread here.
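    The arithmetic in Mokubai's answer is easy to check. Here is a minimal C# sketch (the dimensions come from the question; the 3 bytes per pixel assumes a 24-bit RGB bitmap with no alpha channel):

        using System;

        class RawImageSize
        {
            static void Main()
            {
                long width = 21600;
                long height = 10800;
                long bytesPerPixel = 3; // 24-bit RGB, no alpha channel

                // Raw, uncompressed size is independent of how small the JPEG is on disk.
                long rawBytes = width * height * bytesPerPixel;
                Console.WriteLine("Raw size: {0:N0} bytes (~{1:N0} MB)",
                                  rawBytes, rawBytes / 1000000.0);
                // Output: Raw size: 699,840,000 bytes (~700 MB)
            }
        }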

    Read the article

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    - by RoyOsherove
    One of the talks I did at QCon London was about a subject that I've come across fairly recently, when I was building SilverUnit – a "pure" unit test framework for Silverlight objects that depend on the Silverlight runtime to run. It is the concept of "cogs in the machine" – when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be: your custom control running inside the Silverlight runtime in the browser; your plug-in running inside an IDE; your activity running inside a Windows Workflow; your code running inside a Java EE bean; your code inheriting from a COM+ (Enterprise Services) component; etc. Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits from something inside the system. For example, one of the biggest problems with testing objects like Silverlight controls is the way they depend on the Silverlight runtime – they don't implement some Silverlight interface, they don't just call external static methods against the framework runtime that surrounds them – they actually inherit parts of the framework: they all inherit (in this case) from the Silverlight DependencyObject. Wrapping it up? An inheritance dependency is uniquely challenging to bring under test, because "classic" methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do it manually is to create parallel testable objects that get delegated all the possible actions from the dependencies. In Silverlight's case, that would mean creating your own custom logic class that would be called directly from controls that inherit from Silverlight, and would be tested independently of those controls. The pro side is that you get the benefit of understanding the "contract" and the "roles" your system plays against your logic; but unfortunately, more often than not, it can be very tedious to create, and may sometimes feel unnecessary or like code duplication. About perimeters: A perimeter is that invisible line that you draw around your pieces of logic during a test, which separates the code under test from any dependencies that it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced, such as interfaces, virtual methods etc.) that are actually replaced for that test or for all the tests. Role-based perimeters: In the case of creating a wrapper around an object, one really creates a "role based" perimeter around the logic that is being tested – that wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test. In the image below we have the code we want to test represented as a star. No perimeter is drawn yet (we haven't wrapped it up in anything yet). The image below shows what happens when you wrap your logic with a role-based wrapper – you get a role-based perimeter anywhere your code interacts with the system; a sketch of such a parallel logic class follows below. There's another way to bring that code under test – using isolation frameworks like Typemock, Rhino Mocks and Moq (but if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction).
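    To make the "role based" wrapper idea concrete, here is a minimal C# sketch of such a parallel logic class. The control and interface names are hypothetical, not taken from SilverUnit; it only illustrates the pattern of pulling the logic out of the framework-inheriting control and expressing the host as roles:

        // The role the host system plays for our logic.
        public interface IDisplaySurface
        {
            void ShowText(string text);
        }

        // A plain class holding the logic; testable with no Silverlight runtime at all.
        public class GreetingLogic
        {
            private readonly IDisplaySurface _surface;
            public GreetingLogic(IDisplaySurface surface) { _surface = surface; }

            public void Greet(string userName)
            {
                _surface.ShowText("Hello, " + userName);
            }
        }

        // The real control still inherits from the framework (UserControl -> DependencyObject),
        // but it only delegates to the logic class and is itself left untested.
        public partial class GreetingControl : System.Windows.Controls.UserControl, IDisplaySurface
        {
            private readonly GreetingLogic _logic;
            public GreetingControl() { _logic = new GreetingLogic(this); }

            public void ShowText(string text) { /* e.g. set a TextBlock's Text here */ }
        }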
    Ad-hoc isolation perimeters: the image below shows what I call an ad-hoc perimeter, which might be vastly different between different tests. This perimeter's surface is much smaller, because for that specific test, that is all the "change" that is required to the host system behavior. The third way of isolating the code from the host system is the main "meat" of this post: Subterranean perimeters. Subterranean perimeters are deep-rooted perimeters – "always on" seams that can lie very deep in the heart of the host system, where they are fully invisible even to the test itself, not just to the code under test. Because they lie deep inside a system you can't control, the only way I've found to control them is with runtime (not compile-time) interception of method calls on the system. One way to get such abilities is by using aspect-oriented frameworks – for example, in SilverUnit, I've used the CThru AOP framework, based on Typemock hooks and CLR profilers, to intercept such system-level method calls and effectively turn them into seams that lie deep down at the heart of the Silverlight runtime. The image below depicts an example of what such a perimeter could look like. As you can see, the actual seams can be very far away from the actual code under test, and as you'll discover, that's actually a very good thing. Here is only a partial list of examples of such deep-rooted seams: disabling the constructor of a base class five levels below the code under test (this.base.base.base.base); faking static methods of a type that's being called several levels down the stack: method x() calls y() calls z() calls SomeType.StaticMethod(); replacing an async mechanism with a synchronous one (replacing all timers with your own timer behavior that always ticks immediately upon calls to "start()" on the same caller thread, for example); replacing event mechanisms with your own event mechanism (to allow "firing" system events); changing the way the system saves information with your own saving behavior (in SilverUnit, I replaced all DependencyProperty set and get calls with calls to an in-memory value store instead of using the one built into Silverlight, which threw exceptions without a browser). Several questions could jump in: How do you know what to fake? (How do you discover the perimeter?) How do you fake it? Wouldn't this be problematic – to fake something you don't own? It might change in the future. How do you discover the perimeter to fake? To discover a perimeter, all you have to do is start with a wishful invocation. A wishful invocation is the act of trying to invoke a method (or even just create an instance) of an object using "regular" test code. You invoke the thing that you'd like to do in a real unit test, to see what happens: Can I even create an instance of this object without getting an exception? Can I invoke this method on that instance without getting an exception? Can I verify that some call into the system happened? You make the invocation, get an exception (because there is a dependency), and look at the stack trace. Choose a location in the stack trace and disable it. Then try the invocation again. If you don't get an exception, the perimeter is good for that invocation, so you can move on to trying out other methods on that object; a sketch of this process follows below. In a future post I will show the process using CThru, and how you end up with something close to a domain-specific test framework after you're done creating the perimeter you need.
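    To make the wishful invocation process concrete, here is a minimal C# sketch (the control and method names are hypothetical): you simply write the test you wish you could run, and let the resulting exception's stack trace show you where the perimeter needs to be:

        [Test]
        public void WishfulInvocation_CanCreateControlAndInvokeIt()
        {
            // Step 1: wishfully do what a real unit test would do. Outside the
            // Silverlight runtime this will typically throw, because the
            // DependencyObject base constructor needs the host to be present.
            var control = new MyCustomControl();   // hypothetical control under test
            control.UpdateDisplay("hello");        // then try invoking a method on it

            // Step 2: when this fails, read the exception's stack trace, pick a
            // frame inside the host system and disable/fake it (e.g. with an AOP
            // framework such as CThru), then run the wishful invocation again
            // until it passes.
        }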

    Read the article

  • Mocking the Unmockable: Using Microsoft Moles with Gallio

    - by Thomas Weller
    The usual open-source mocking frameworks (e.g. Moq or Rhino.Mocks) can mock only interfaces and virtual methods. In contrast, Microsoft's Moles framework can 'mock' virtually anything, in that it uses runtime instrumentation to inject callbacks into the MSIL method bodies of the moled methods. Therefore, it is possible to detour any .NET method, including non-virtual/static methods in sealed types. This can be extremely helpful when dealing e.g. with code that calls into the .NET framework, some third-party or legacy stuff etc… Some useful collected resources (links to the website, documentation material and some videos) can be found in my toolbox on Delicious under this link: http://delicious.com/thomasweller/toolbox+moles A Gallio extension for Moles Originally, Moles is a part of Microsoft's Pex framework and thus integrates best with Visual Studio Unit Tests (MSTest). However, the Moles sample download contains some additional assemblies to also support other unit test frameworks. They provide a Moled attribute to ease the usage of mole types with the respective framework (there are extensions for NUnit, xUnit.net and MbUnit v2 included with the samples). As there is no such extension for the Gallio platform, I did the few required lines myself – the resulting Gallio.Moles.dll is included with the sample download. With this little assembly in place, it is possible to use Moles with Gallio like this: [Test, Moled] public void SomeTest() {     ... What you can do with it Moles can be very helpful if you need to 'mock' something other than a virtual or interface-implementing method. This might be the case when dealing with some third-party component, legacy code, or if you want to 'mock' the .NET framework itself. Generally, you need to announce each moled type that you want to use in a test with the MoledType attribute on assembly level. For example: [assembly: MoledType(typeof(System.IO.File))] Below are some typical use cases for Moles. For a more detailed overview (incl. naming conventions and an instruction on how to create the required moles assemblies), please refer to the reference material above.  Detouring the .NET framework Imagine that you want to test a method similar to the one below, which internally calls some framework method:   public void ReadFileContent(string fileName) {     this.FileContent = System.IO.File.ReadAllText(fileName); } Using a mole, you would replace the call to the File.ReadAllText(string) method with a runtime delegate like so: [Test, Moled] [Description("This 'mocks' the System.IO.File class with a custom delegate.")] public void ReadFileContentWithMoles() {     // arrange ('mock' the FileSystem with a delegate)     System.IO.Moles.MFile.ReadAllTextString = (fname => fname == FileName ? FileContent : "WrongFileName");       // act     var testTarget = new TestTarget.TestTarget();     testTarget.ReadFileContent(FileName);       // assert     Assert.AreEqual(FileContent, testTarget.FileContent); } Detouring static methods and/or classes A static method like the one below… public static string StaticMethod(int x, int y) {     return string.Format("{0}{1}", x, y); } … can be 'mocked' with the following: [Test, Moled] public void StaticMethodWithMoles() {     MStaticClass.StaticMethodInt32Int32 = ((x, y) => "uups");       var result = StaticClass.StaticMethod(1, 2);       Assert.AreEqual("uups", result); } Detouring constructors You can do this delegate thing even with a class' constructor. 
    The syntax for this is not all too intuitive, because you have to set up the internal state of the mole, but generally it works like a charm. For example, to replace this c'tor… public class ClassWithCtor {     public int Value { get; private set; }       public ClassWithCtor(int someValue)     {         this.Value = someValue;     } } … you would do the following: [Test, Moled] public void ConstructorTestWithMoles() {     MClassWithCtor.ConstructorInt32 =            ((@class, @value) => new MClassWithCtor(@class) {ValueGet = () => 99});       var classWithCtor = new ClassWithCtor(3);       Assert.AreEqual(99, classWithCtor.Value); } Detouring abstract base classes You can also use this approach to 'mock' abstract base classes of a class that you call in your test. Assume that you have something like this: public abstract class AbstractBaseClass {     public virtual string SaySomething()     {         return "Hello from base.";     } }      public class ChildClass : AbstractBaseClass {     public override string SaySomething()     {         return string.Format(             "Hello from child. Base says: '{0}'",             base.SaySomething());     } } Then you would set up the child's underlying base class like this: [Test, Moled] public void AbstractBaseClassTestWithMoles() {     ChildClass child = new ChildClass();     new MAbstractBaseClass(child)         {                 SaySomething = () => "Leave me alone!"         }         .InstanceBehavior = MoleBehaviors.Fallthrough;       var hello = child.SaySomething();       Assert.AreEqual("Hello from child. Base says: 'Leave me alone!'", hello); } Setting the mole's behavior to a value of MoleBehaviors.Fallthrough causes the 'original' method to be called if a respective delegate is not provided explicitly – here it causes the ChildClass' override of the SaySomething() method to be called. There are some more possible scenarios where the Moles framework could be of much help (e.g. it's also possible to detour interface implementations like IEnumerable<T> and such…). One other possibility that comes to my mind (because I'm currently dealing with it) is to replace calls from repository classes to the ADO.NET Entity Framework O/R mapper with delegates, to isolate the repository classes from the underlying database, which otherwise would not be possible… Usage Since Moles relies on runtime instrumentation, mole types must be run under the Pex profiler. This only works from inside Visual Studio if you write your tests with MSTest (Visual Studio Unit Test). While other unit test frameworks generally can be used with Moles, they require the respective tests to be run via the command line, executed through the moles.runner.exe tool. A typical test execution would be similar to this: moles.runner.exe <mytests.dll> /runner:<myframework.console.exe> /args:/<myargs> So the moled tests can be run through tools like NCover or a scripting tool like MSBuild (which makes them easy to run in a Continuous Integration environment), but they are somewhat unhandy to run in the usual TDD workflow (which I described in some detail here). To make this a bit more fluent, I wrote a ReSharper live template to generate the respective command line for the test (it is also included in the sample download – moled_cmd.xml). - This is just a quick-and-dirty 'solution'. Maybe it makes sense to write an extra Gallio adapter plugin (similar to the many others that are already provided) and include it with the Gallio download package, if there's sufficient demand for it. 
    As of now, the only way to run tests with the Moles framework from within Visual Studio is by using them with MSTest. From the command line, anything with a managed console runner can be used (provided that the appropriate extension is in place)… A typical Gallio/Moles command line (as generated by the mentioned R# template) looks like this: "%ProgramFiles%\Microsoft Moles\bin\moles.runner.exe" /runner:"%ProgramFiles%\Gallio\bin\Gallio.Echo.exe" "Gallio.Moles.Demo.dll" /args:/r:IsolatedAppDomain /args:/filter:"ExactType:TestFixture and Member:ReadFileContentWithMoles" -- Note: When using the command line with Echo (Gallio's console runner), be sure to always include the IsolatedAppDomain option, otherwise the tests won't use the instrumentation callbacks! -- License issues As I already said, the free mocking frameworks can mock only interfaces and virtual methods. If you want to mock other things, you need the Typemock Isolator tool, which comes with license costs (although these 'costs' are ridiculously low compared to the value that such a tool can bring to a software project, spending money often is a considerable gateway hurdle in real life...). The Moles framework also is not totally free, but comes with the same license conditions as the (closely related) Pex framework: it is free for academic/non-commercial use only; using it in a 'real' software project requires an MSDN subscription (from VS 2010 Pro on). The demo solution The sample solution (VS 2008) can be downloaded from here. It contains the Gallio.Moles.dll which provides the Moled attribute described here, the above-mentioned R# template (moled_cmd.xml) and a test fixture containing the use case scenarios described above. To run it, you need the Gallio framework (download) and Microsoft Moles (download) installed in the default locations. Happy testing…
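    Another classic use of the same technique, not included in the demo solution and sketched here from the canonical example in the Moles documentation (so treat the mole type and property names as assumptions), is detouring DateTime.Now to make time-dependent code deterministic:

        // Assumes a moles assembly has been generated for mscorlib and the type
        // is announced as described above: [assembly: MoledType(typeof(System.DateTime))]
        [Test, Moled]
        public void TimeDependentCodeWithMoles()
        {
            // arrange: detour the static DateTime.Now property getter
            System.Moles.MDateTime.NowGet = () => new DateTime(2010, 1, 1);

            // act/assert: anything that calls DateTime.Now sees the fixed date
            Assert.AreEqual(2010, DateTime.Now.Year);
        }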

    Read the article

  • Apache Solr Admin on Tomcat Deployed in WebApps Directory

    - by KM01
    I am trying to get Apache Solr to work on RedHat 6 and Tomcat 6 (using these instructions), but get this error when browsing to the admin section, http://localhost:8080/solr-example/admin: HTTP Status 404 - missing core name in path type Status report message missing core name in path description The requested resource (missing core name in path) is not available. http://localhost:8080/solr-example loads fine, with a link to "Solr Admin." My setup is as follows: tomcat6: /etc/tomcat6 Solr: /app/solr/example I have a solr-example.xml in /etc/tomcat6/Catalina/localhost/, which reads: <?xml version="1.0" encoding="utf-8"?> <Context docBase="/app/solr/example/apache-solr-3.4.0.war" debug="0" crossContext="true"> <Environment name="solr/home" type="java.lang.String" value="/app/solr/example" override="true"/> </Context> I don't see anything in the logs (/var/log/tomcat6) ... the only entries in catalina.out are regarding the starting and stopping of tomcat6. My questions are: 1. What else do I need to do to get "Solr Admin" to work under Tomcat? 2. Where are these "cores" supposed to be specified? I see an entry in /app/solr/example/solr/solr.xml: <solr persistent="false"> <!-- adminPath: RequestHandler path to manage cores. If 'null' (or absent), cores will not be manageable via request handler --> <cores adminPath="/admin/cores" defaultCoreName="collection1"> <core name="collection1" instanceDir="." /> </cores> </solr> 3. How do I go about ensuring that logs are working correctly? I can't find logs that contain any mention of the 404 above. Update in response to @quanta's comment: Downloaded the former (apache-solr-3.4.0.tgz); dataDir was not set, now set to: <dataDir>${solr.data.dir:../solr/data}</dataDir> JAVA_OPTS: /usr/lib/jvm/java/bin/java -classpath :/usr/share/tomcat6/bin/bootstrap.jar:/usr/share/tomcat6/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat6/temp -Djava.util.logging.config.file=/usr/share/tomcat6/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start catalina.out contains no indication of the above error

    Read the article

  • Jetty interprets JETTY_ARGS as file name

    - by Lena Schimmel
    I'm running Jetty (version "null 6.1.22") on Ubuntu 10.04. It's running fine until I need JSP support. According to several blog posts I need to set JETTY_ARGS to OPTIONS=Server,jsp. However, if I put this into /etc/default/jetty: JETTY_ARGS=OPTIONS=Server,jsp and restart Jetty via /etc/init.d/jetty stop && /etc/init.d/jetty start, it reports success, but does not accept connections. I noticed that it logs something to /usr/share/jetty/logs/out.log: 2012-09-11 11:19:05.110:WARN::EXCEPTION java.io.FileNotFoundException: /var/cache/jetty/tmp/OPTIONS=Server,jsp (No such file or directory) at java.io.FileInputStream.open(Native Method) at java.io.FileInputStream.<init>(FileInputStream.java:137) at java.io.FileInputStream.<init>(FileInputStream.java:96) at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:87) at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:178) at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:630) at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:189) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:776) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:741) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1208) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:525) at javax.xml.parsers.SAXParser.parse(SAXParser.java:392) at org.mortbay.xml.XmlParser.parse(XmlParser.java:188) at org.mortbay.xml.XmlParser.parse(XmlParser.java:204) at org.mortbay.xml.XmlConfiguration.<init>(XmlConfiguration.java:109) at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:969) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.mortbay.start.Main.invokeMain(Main.java:194) at org.mortbay.start.Main.start(Main.java:534) at org.mortbay.jetty.start.daemon.Bootstrap.start(Bootstrap.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:177) That is, whatever I put into JETTY_ARGS, it interprets it as a filename inside /var/cache/jetty/tmp/ and tries to parse that file as XML (or does it parse some other XML and try to read that file as a DTD? I'm not sure.). This doesn't seem to make any sense to me, especially since that directory is entirely empty. I've verified this with several other strings, not only OPTIONS=Server,jsp.

    Read the article

  • Maven artifacts could not be resolved

    - by Adam Fisher
    I added the spring and jboss repositories to my pom.xml like below: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <name>MyProject</name> <url>http://www.myproject.com</url> <modelVersion>4.0.0</modelVersion> <groupId>com.myproject</groupId> <artifactId>myproject</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <dependencies> <dependency> <groupId>com.sun.faces</groupId> <artifactId>jsf-api</artifactId> <version>2.1.3-b02</version> <scope>provided</scope> </dependency> <dependency> <groupId>com.sun.faces</groupId> <artifactId>jsf-impl</artifactId> <version>2.1.3_01</version> <scope>provided</scope> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>jstl</artifactId> <version>1.0.2</version> <scope>runtime</scope> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> <version>3.0-alpha-1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>taglibs</groupId> <artifactId>standard</artifactId> <version>1.1.2</version> <scope>runtime</scope> </dependency> <!-- SPRING DEPENDENCIES --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring</artifactId> <version>3.0.6.RELEASE</version> </dependency> <!-- HIBERNATE DEPENDENCIES --> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate</artifactId> <version>3.5.4-Final</version> </dependency> <!-- PRIMEFACES --> <dependency> <groupId>org.primefaces</groupId> <artifactId>primefaces</artifactId> <version>3.0.M4</version> </dependency> <dependency> <groupId>org.primefaces.themes</groupId> <artifactId>aristo</artifactId> <version>1.0.1</version> </dependency> <!-- OTHER DEPENDENCIES --> <dependency> <groupId>org.jsoup</groupId> <artifactId>jsoup</artifactId> <version>1.5.2</version> </dependency> <dependency> <groupId>commons-codec</groupId> <artifactId>commons-codec</artifactId> <version>1.5</version> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.18</version> <scope>provided</scope> </dependency> <dependency> <groupId>net.authorize</groupId> <artifactId>java-anet-sdk</artifactId> <version>1.4.2</version> </dependency> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-java-sdk</artifactId> <version>1.2.12</version> </dependency> <dependency> <groupId>com.ocpsoft</groupId> <artifactId>prettyfaces-jsf2</artifactId> <version>3.3.2</version> </dependency> <dependency> <groupId>javax</groupId> <artifactId>javaee-web-api</artifactId> <version>6.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.8.1</version> <scope>test</scope> </dependency> </dependencies> <properties> <endorsed.dir>${project.build.directory}/endorsed</endorsed.dir> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <netbeans.hint.j2eeVersion>1.6</netbeans.hint.j2eeVersion> <netbeans.hint.deploy.server>gfv3ee6</netbeans.hint.deploy.server> </properties> <repositories> <repository> <id>jsf20</id> <name>Repository for library Library[jsf20]</name> <url>http://download.java.net/maven/2/</url> <layout>default</layout> </repository> <repository> <id>prime-repo</id> <name>PrimeFaces Maven Repository</name> <url>http://repository.primefaces.org</url> <layout>default</layout> </repository> <repository> <id>jboss-public-repository-group</id> 
<name>JBoss Public Maven Repository Group</name> <url>https://repository.jboss.org/nexus/content/repositories/releases/</url> <layout>default</layout> </repository> <repository> <id>spring-release</id> <name>Spring Release Repository</name> <url>http://maven.springframework.org/release</url> <layout>default</layout> </repository> </repositories> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.3.2</version> <configuration> <source>1.6</source> <target>1.6</target> <compilerArguments> <endorseddirs>${endorsed.dir}</endorseddirs> </compilerArguments> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>2.1</version> <configuration> <failOnMissingWebXml>false</failOnMissingWebXml> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-dependency-plugin</artifactId> <version>2.1</version> <executions> <execution> <phase>validate</phase> <goals> <goal>copy</goal> </goals> <configuration> <outputDirectory>${endorsed.dir}</outputDirectory> <silent>true</silent> <artifactItems> <artifactItem> <groupId>javax</groupId> <artifactId>javaee-endorsed-api</artifactId> <version>6.0</version> <type>jar</type> </artifactItem> </artifactItems> </configuration> </execution> </executions> </plugin> </plugins> <finalName>${project.artifactId}</finalName> </build> <!--pluginRepositories> <pluginRepository> <id>caucho</id> <name>Caucho</name> <url>http://caucho.com/m2</url> </pluginRepository> </pluginRepositories--> </project> But when I build, I get an error: The following artifacts could not be resolved: org.springframework:spring:jar:3.0.6.RELEASE, org.hibernate:hibernate:jar:3.5.4-Final: Could not find artifact org.springframework:spring:jar:3.0.6.RELEASE in jsf20 (http://download.java.net/maven/2/) -> [Help 1] It's like maven only looks at the first repository and not the ones defined for spring and hibernate.

    Read the article

  • Commercial Drupal Modules & Themes

    - by Ravish
    A discussion at the Drupal.org forums prompted me to give my input about the commercial ecosystem around open source content management systems. WordPress and Joomla have been growing rapidly over the past few years, but the growth rate of Drupal seems to be almost flat. Despite being the most powerful CMS around, Drupal is still not being adopted by the masses. Many people will argue that Drupal is not targeted towards the masses, but towards developers. I agree: Drupal is more of a development platform than a consumer CMS. Drupal is 'many things to many people', and I can build almost any type of website with it. Drupal is being used for building blogs, corporate websites, intranet portals, social networking and even a project management system. Looking at the wide array of Drupal implementations, it deserves to be the most widely adopted CMS. I believe there are a few challenges that the Drupal community needs to overcome. To understand these challenges, I surveyed some webmasters who use Joomla or WordPress but not Drupal. I asked them why they don't want to use Drupal; the following are the responses I got from them: Drupal is too complicated, takes time to learn. Drupal is great, but its admin panel is overwhelming. I couldn't find any nice themes for Drupal. There is no WYSIWYG editor in Drupal. Most Drupal modules do not work out of the box. There aren't enough modules like Ubercart which provide out-of-the-box functionality. I tried modules like CCK, Views and Panels. After wasting several hours struggling with them, I decided to give up on Drupal. I don't use Drupal because of the Pushbutton and Garland themes. I had a hard time trying to customize Garland and it messed up the whole layout. There are no premium modules and themes for Drupal. Joomla has tons of awesome themes and modules. I don't want a million hacks like CCK, Views, Tokens, Pathauto, ImageCache and CTools just to run a simple website. Most of the complaints from users are related to the learning and development curve involved with Drupal, and the lack of an ecosystem. While most of these problems will be gone in Drupal 7, the ecosystem is something that needs to be built by the Drupal community. Drupal distributions are a great step forward. There are a few awesome Drupal distributions available, like Open Publish, Open Atrium and Drupal Commons. I predict there will be a wave of many powerful Drupal distributions after the Drupal 7 release. Many of them will be user-friendly and commercially supported. The following is my post at the Drupal.org forums: Quote from: http://drupal.org/node/863776#comment-3313836 Brian Gardner (StudioPress) and Woo Themes launched premium WordPress themes in 2007; the developer community did not accept it at first. Moreover, they were not even GPL licensed. There was an outcry in the WordPress community against them. Following that, most premium theme providers switched to GPL licensing. Despite the controversies, users voted for premium themes and plugins by buying them. Inspired by their success, hundreds of other developers started to sell premium themes and plugins. It is now the accepted, and in fact most popular, business model in the WordPress community. Matt Mullenweg once told me they would not support premium themes. If he had, developers would no longer give out free GPL themes & plugins. He pointed me towards Joomla: there were hardly any nice free themes & modules available. 
    Now, two years on, premium products are not just accepted but embraced by the WordPress community – http://wordpress.org/extend/themes/commercial/ The quality and number of themes & modules has increased, even among the free ones. This has also helped boost the adoption and ecosystem of WordPress. Today, the state of Drupal is like WordPress was in 2007. There are hardly any out-of-the-box solutions available for Drupal. Ubercart, Open Publish and Open Atrium are the only ones I can think of. Many of the popular Drupal modules are patches and hole-fillers. Thankfully, these hole-filler modules are going to be in the Drupal 7 core. Drupal 7 and distributions will spawn a new array of solutions built upon Drupal. Soon, we will have more like Ubercart and Open Atrium. If commercial solutions can help fuel this ecosystem and growth, the Drupal community will accept them eventually. This debate will not stop your customers from buying your product. If your product is awesome, they will vote for you by buying it.

    Read the article

  • Azure – Part 6 – Blob Storage Service

    - by Shaun
    When migrating your application onto Azure, one of the biggest concerns is the external files. Previously we knew exactly which machine and folder our application (website or web service) was located in, so we could use MapPath or some other methods to read and write external files, for example images, text files or XML files, etc. But things change when we deploy on Azure. Azure is not a server or a single machine; it's a set of virtual server machines running under the Azure OS. And even worse, your application might be moved between these machines. So it's impossible to read or write external files on Azure in that way. In order to resolve this issue Windows Azure provides another storage service – Blob – for us. Different from the table service, the blob service is used to store text and binary data rather than structured data. It provides two types of blobs: Block Blobs and Page Blobs. Block Blobs are optimized for streaming. They are comprised of blocks, each of which is identified by a block ID, and each block can be a maximum of 4 MB in size. Page Blobs are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob. They are a collection of pages. The maximum size for a page blob is 1 TB.   In the managed library the Azure SDK allows us to communicate with blobs through these classes: CloudBlobClient, CloudBlobContainer, CloudBlockBlob and CloudPageBlob. Similar to the table service managed library, the CloudBlobClient allows us to reach the blob service by passing our storage account information, and is also responsible for creating the blob container if it does not exist. Then from the CloudBlobContainer we can save or load block blobs and page blobs via the CloudBlockBlob and CloudPageBlob classes.   Let's improve our example from the previous posts – add a service method that allows the user to upload a logo image. On the server side I created a method named UploadLogo with 2 parameters: email and image. Then I created the storage account from the config file. I also added validation to ensure that the email passed in is valid. 1: var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString"); 2: var accountContext = new DynamicDataContext<Account>(storageAccount); 3:  4: // validation 5: var accountNumber = accountContext.Load() 6: .Where(a => a.Email == email) 7: .ToList() 8: .Count; 9: if (accountNumber <= 0) 10: { 11: throw new ApplicationException(string.Format("Cannot find the account with the email {0}.", email)); 12: } Then there are three steps for saving the image into the blob service. First, as with the table service, I created the container with a unique name, creating it if it does not exist. 1: // create the blob container for account logos if not exist 2: CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient(); 3: CloudBlobContainer container = blobStorage.GetContainerReference("account-logo"); 4: container.CreateIfNotExist(); Then, since in this example I will just send the blob access URL back to the client, I need to open read permission on that container. 
    1: // configure blob container for public access 2: BlobContainerPermissions permissions = container.GetPermissions(); 3: permissions.PublicAccess = BlobContainerPublicAccessType.Container; 4: container.SetPermissions(permissions); And at the end I combined the blob resource name from the input file name and a Guid, and then saved it to the block blob using the UploadByteArray method. Finally I returned the URL of this blob back to the client side. 1: // save the blob into the blob service 2: string uniqueBlobName = string.Format("{0}_{1}.jpg", email, Guid.NewGuid().ToString()); 3: CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName); 4: blob.UploadByteArray(image); 5:  6: return blob.Uri.ToString(); Let's update the client-side application a bit and see the result. Here I just use my simple console application to let the user input the email and the file name of the image. If it's OK it will show the URL of the blob on the server side so that we can see it through the web browser. Then we can see the logo I've just uploaded through the URL here. You may notice that the blob URL is based on the container name and the blob's unique name. In the Azure SDK documentation there's a page on the rules for naming them, but I think the simple rule would be – they must be valid as a URL address. So you cannot name the container with a dot or slash, as that will break the ADO.NET Data Services routing rule. For example, if you named the blob container Account.Logo then it would throw an exception saying 400 Bad Request.   Summary In this short entry I covered the simple usage of the blob service to save images onto Azure. Since the Azure platform does not support the file system, we have to migrate our code for reading/writing files to the blob service before deploying it to Azure. In order to reduce this effort Microsoft provides a new approach named Drive, which allows us to read and write NTFS files just like we did before. It's built on top of the blob service but is more appropriate for file access. I will discuss more about it in the next post.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
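    For completeness, reading the logo back out of the blob service is symmetrical to the upload. This is only a sketch reusing the names from the code above (and assuming the same StorageClient managed library); it is not part of the original sample:
    1: // load the logo back from the blob service
    2: CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
    3: CloudBlobContainer container = blobStorage.GetContainerReference("account-logo");
    4: CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
    5: byte[] image = blob.DownloadByteArray();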

    Read the article

  • How to propagate http response code from back-end to client

    - by Manoj Neelapu
    Oracle Service Bus can be used for pass-through cases. Some use cases require propagating the HTTP response code back to the caller. http://forums.oracle.com/forums/thread.jspa?messageID=4326052&#4326052 is one such example, which we will try to accomplish in this tutorial. We will demonstrate this feature using Oracle Service Bus (11.1.1.3.0). We will also use commons-logging-1.1.1, httpcomponents-client-4.0.1 and httpcomponents-core-4.0.1 for writing the client to demonstrate it. First we create a simple JSP which will always set the response code to 304. The JSP snippet will look like: <%@ page language="java"     contentType="text/xml;     charset=UTF-8"        pageEncoding="UTF-8" %><%      System.out.println("Servlet setting Responsecode=304");    response.setStatus(304);    response.flushBuffer();%> We will now deploy this JSP on a WebLogic server with URI=http://localhost:7021/reponsecode/ For this JSP we will create a simple Any XML BS. We will also create a proxy service as shown below. Once the proxy is created we configure the pipeline for the proxy to use a route node, which invokes the BS (JSPCaller) created in the first place. So now we will create an error handler for the route node and add a stage. When the HTTP BS sends a request, the JSP sends the response back. If the response code is not 200, then the HTTP BS will consider that an error, and the error handler configured above is invoked. We will print $outbound to show the response code sent by the JSP, followed by the next actions. To test this I created a simple client: import org.apache.http.Header;import org.apache.http.HttpEntity;import org.apache.http.HttpHost;import org.apache.http.HttpResponse;import org.apache.http.HttpVersion;import org.apache.http.client.methods.HttpGet;import org.apache.http.conn.ClientConnectionManager;import org.apache.http.conn.scheme.PlainSocketFactory;import org.apache.http.conn.scheme.Scheme;import org.apache.http.conn.scheme.SchemeRegistry;import org.apache.http.impl.client.DefaultHttpClient;import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;import org.apache.http.params.BasicHttpParams;import org.apache.http.params.HttpParams;import org.apache.http.params.HttpProtocolParams;import org.apache.http.util.EntityUtils;/** * @author MNEELAPU * */public class TestProxy304{    public static void main(String arg[]) throws Exception{     HttpHost target = new HttpHost("localhost", 7021, "http");     // general setup     SchemeRegistry supportedSchemes = new SchemeRegistry();     // Register the "http" protocol scheme, it is required     // by the default operator to look up socket factories.     
supportedSchemes.register(new Scheme("http",              PlainSocketFactory.getSocketFactory(), 7021));     // prepare parameters     HttpParams params = new BasicHttpParams();     HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);     HttpProtocolParams.setContentCharset(params, "UTF-8");     HttpProtocolParams.setUseExpectContinue(params, true);     ClientConnectionManager connMgr = new ThreadSafeClientConnManager(params,              supportedSchemes);     DefaultHttpClient httpclient = new DefaultHttpClient(connMgr, params);     HttpGet req = new HttpGet("/HttpResponseCode/ProxyExposed");     System.out.println("executing request to " + target);     HttpResponse rsp = httpclient.execute(target, req);     HttpEntity entity = rsp.getEntity();     System.out.println("----------------------------------------");     System.out.println(rsp.getStatusLine());     Header[] headers = rsp.getAllHeaders();     for (int i = 0; i < headers.length; i++) {         System.out.println(headers[i]);     }     System.out.println("----------------------------------------");     if (entity != null) {         System.out.println(EntityUtils.toString(entity));     }     // When HttpClient instance is no longer needed,      // shut down the connection manager to ensure     // immediate deallocation of all system resources     httpclient.getConnectionManager().shutdown();     }}On compiling and executing this we see the below output in STDOUT which clearly indicates the response code was propagated from Business Service to Proxy serviceexecuting request to http://localhost:7021----------------------------------------HTTP/1.1 304 Not ModifiedDate: Tue, 08 Jun 2010 16:13:42 GMTContent-Type: text/xml; charset=UTF-8X-Powered-By: Servlet/2.5 JSP/2.1----------------------------------------  

    Read the article

  • GitHub Integration in Windows Azure Web Site

    - by Shaun
    Microsoft has just announced an update for Windows Azure Web Sites (a.k.a. WAWS). There are four major features added in WAWS: free scaling mode, GitHub integration, custom domains and multiple branches. Since I'm working in Node.js and I would like to have my code in GitHub and deployed automatically to my Windows Azure Web Site once I sync my code, this feature is great news to me.   It's very simple to establish the GitHub integration in WAWS. First we need a clean WAWS. In its dashboard page click "Set up Git publishing". Currently WAWS doesn't support changing the publish setting, so if you have an existing WAWS which is published by TFS or local Git then you have to create a new WAWS and set up Git publishing. Then on the deployment page we can see that WAWS now supports three Git publishing modes: - Push my local files to Windows Azure: in this mode we create a new Git repository on the local machine and commit and publish our code to Windows Azure through a Git command or some GUI. - Deploy from my GitHub project: in this mode we have a Git repository created on GitHub. Once we publish our code to GitHub, Windows Azure will download the code and trigger a new deployment. - Deploy from my CodePlex project: similar to the previous one, but our code is in a CodePlex repository.   Now let's go back to GitHub and create a new publish repository. Currently the WAWS GitHub integration only supports public repositories; private repository support will be available in several weeks. We can manage our repositories on the GitHub website, but as a Windows geek I prefer the GUI tool. So I opened GitHub for Windows, logged in with my GitHub account, selected the "github" category and clicked the "add" button to create a new repository on GitHub. You can download GitHub for Windows here. I specified the repository name, description and local repository, and did not check "Keep this code private". After a few seconds it created a new repository on GitHub and associated it with that folder on my local machine. We can find this new repository on the GitHub website. And in GitHub for Windows we can also find the local repository by selecting the "local" category.   Next, we need to associate this repository with our WAWS. Back in the Windows Azure developer portal, open "Deploy from my GitHub project" on the deployment page and click the "Authorize Windows Azure" link. It will bring up a new window on GitHub which lets me allow the Windows Azure application to access my repositories. After we click "Allow", Windows Azure retrieves all my public GitHub repositories and lets me select which one I want to integrate with this WAWS. I selected the one I had just created in GitHub for Windows. So that's all; we have completed the GitHub integration configuration. Now let's have a try. In GitHub for Windows, right-click on this local repository and click "open in explorer". Then I added a simple HTML file. 1: <html> 2: <head> 3: </head> 4: <body> 5: <h1> 6: I came from GitHub, WOW! 7: </h1> 8: </body> 9: </html> Save it, go back to GitHub for Windows, commit this change and publish. This uploads our changes to GitHub, and Windows Azure detects this update and triggers a new deployment. If we go back to the Azure developer portal we can find the new deployment, and our commit message is shown as the deployment description as well. And here is the page deployed to WAWS.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. 
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Html.RenderAction Failed when Validation Failed

    - by Shaun
    The RenderAction method was introduced in the MvcFutures assembly when ASP.NET MVC 1.0 was released, and was then finally announced along with ASP.NET MVC 2.0. Similar to RenderPartial, RenderAction can display HTML markup defined in a partial view in any parent view. But RenderAction gives us the ability to populate the data from an action which may differ from the action populating the main view. For example, in Home/Index.aspx we can invoke Html.RenderPartial("MyPartialView"), but the data of MyPartialView must be populated by the Index action of the Home controller. If we need MyPartialView to be shown in Product/Create.aspx, we have to copy (or invoke) the relevant code from the Index action in the Home controller to the Create action in the Product controller, which is painful. But if we are using Html.RenderAction we can tell ASP.NET MVC from which action/controller the data should be populated. In that way, in the Home/Index.aspx and Product/Create.aspx views we just need to call Html.RenderAction("CreateMyPartialView", "MyPartialView"), so it will invoke the CreateMyPartialView action in the MyPartialView controller regardless of the main view. But in my current project we found a bug when I implemented a RenderAction method in the master page to show something that needs to connect to the backend data center, when the validation logic failed on some pages. I created a sample application below.   Demo application I created an ASP.NET MVC 2 application, and here I need to display the current date and time on the master page. I created an action in the Home controller named TimeSlot and stored the current date into ViewData. This method was marked as HttpGet as it just retrieves some data instead of changing anything. 1: [HttpGet] 2: public ActionResult TimeSlot() 3: { 4: ViewData["timeslot"] = DateTime.Now; 5: return View("TimeSlot"); 6: } Next, I created a partial view under the Shared folder to display the date and time string. 1: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<dynamic>" %> 2:  3: <span>Now: <%: ViewData["timeslot"].ToString() %></span> Then in the master page I used Html.RenderAction to display it in front of the logon link. 1: <div id="logindisplay"> 2: <% Html.RenderAction("TimeSlot", "Home"); %> 3:  4: <% Html.RenderPartial("LogOnUserControl"); %> 5: </div> It's fairly simple and works well when I navigate to any page. But when I moved to the logon page and clicked the LogOn button without inputting anything in username and password, the validation failed and my website crashed with the beautiful yellow page. (I really like its color style and fonts…)   How ASP.NET MVC executes Html.RenderAction In this example all other pages were rendered successfully, which means ASP.NET MVC found the TimeSlot action under the Home controller, except in this situation. The only difference is that when I clicked the LogOn button, the browser sent an HttpPost request to the server. Is that the reason for this bug? I created another action in the Home controller with the same action name but for HttpPost. 1: [HttpPost] 2: [ActionName("TimeSlot")] 3: public ActionResult TimeSlot(object dummy) 4: { 5: return TimeSlot(); 6: } Or, I can use the AcceptVerbsAttribute on the TimeSlot action to let it allow both HttpGet and HttpPost. 1: [AcceptVerbs("GET", "POST")] 2: public ActionResult TimeSlot() 3: { 4: ViewData["timeslot"] = DateTime.Now; 5: return View("TimeSlot"); 6: } And then I repeated what I did before, and this time it worked well. 
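    As a side note, sketched here as an assumption rather than taken from the original sample: ASP.NET MVC 2 also ships a ChildActionOnlyAttribute, which prevents an action from being requested directly via a URL. It does not by itself change the verb matching described below, so the AcceptVerbs fix is still needed; combining the two just makes the intent explicit:

        [ChildActionOnly]
        [AcceptVerbs("GET", "POST")]
        public ActionResult TimeSlot()
        {
            ViewData["timeslot"] = DateTime.Now;
            return View("TimeSlot");
        }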
Why do we need an action for HttpPost here when it is just retrieving data? That is because of how ASP.NET MVC executes the RenderAction method. In the ASP.NET MVC source code we can see that, when performing RenderAction, ASP.NET MVC creates a RequestContext instance from the current RequestContext, and creates a ChildActionMvcHandler instance, which inherits from the MvcHandler class. Then ASP.NET MVC processes the handler through the HttpContext.Server.Execute method. That means it performs the action as a stand-alone request and flushes the result into the TextWriter which is being used to render the current page. Since the request was an HttpPost when I clicked LogOn, ASP.NET MVC, while processing the ChildActionMvcHandler, looks for an action which allows the current request method, which is HttpPost. Our TimeSlot method, restricted to HttpGet, would therefore not be matched.

Summary

In this post I introduced a bug in my currently developing project regarding the new Html.RenderAction method provided with ASP.NET MVC 2 when processing an HttpPost request. In the ASP.NET MVC world the underlying HTTP information matters more than it did in the ASP.NET WebForms world. We need to pay more attention to what kind of request has been created and how ASP.NET MVC processes it.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'transactionManager'

    - by BilalFromParis
When I add this code to my Spring configuration file beans-hibernate.xml:

<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory" />
</bean>

it doesn't work and I don't know why. Can someone help me please? My DAO class is:

public class CourseDaoImpl implements CourseDao {

    private SessionFactory sessionFactory;

    public void setSessionFactory(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Transactional
    public void store(Course course) {
        sessionFactory.getCurrentSession().saveOrUpdate(course);
    }

    @Transactional
    public void delete(Long courseId) {
        Course course = (Course) sessionFactory.getCurrentSession().get(Course.class, courseId);
        sessionFactory.getCurrentSession().delete(course);
    }

    @Transactional(readOnly = true)
    public Course findById(Long courseId) {
        return (Course) sessionFactory.getCurrentSession().get(Course.class, courseId);
    }

    @Transactional
    public List<Course> findAll() {
        Query query = sessionFactory.getCurrentSession().createQuery("FROM Course");
        return (List<Course>) query.list();
    }
}

but the log shows:

juil. 04, 2012 3:38:18 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh
Infos: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@6ba8fb1b: startup date [Wed Jul 04 03:38:18 CEST 2012]; root of context hierarchy
juil. 04, 2012 3:38:18 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
Infos: Loading XML bean definitions from class path resource [beans-hibernate.xml]
juil. 04, 2012 3:38:19 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
Infos: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5a7fed46: defining beans [org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,sessionFactory,transactionManager,courseDao]; root of factory hierarchy
juil. 04, 2012 3:38:19 AM org.hibernate.annotations.common.Version
INFO: HCANN000001: Hibernate Commons Annotations {4.0.1.Final}
juil. 04, 2012 3:38:19 AM org.hibernate.Version logVersion
INFO: HHH000412: Hibernate Core {4.1.3.Final}
juil. 04, 2012 3:38:19 AM org.hibernate.cfg.Environment
INFO: HHH000206: hibernate.properties not found
juil. 04, 2012 3:38:19 AM org.hibernate.cfg.Environment buildBytecodeProvider
INFO: HHH000021: Bytecode provider name : javassist
juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure
INFO: HHH000402: Using Hibernate built-in connection pool (not for production use!)
juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure
INFO: HHH000115: Hibernate connection pool size: 20
juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure
INFO: HHH000006: Autocommit mode: false
juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure
INFO: HHH000401: using driver [org.hibernate.dialect.PostgreSQLDialect] at URL [jdbc:postgresql://localhost:5432/spring]
juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure
INFO: HHH000046: Connection properties: {user=Bilal, password=**}
juil. 04, 2012 3:38:19 AM org.hibernate.dialect.Dialect
INFO: HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect
juil. 04, 2012 3:38:19 AM org.hibernate.engine.jdbc.internal.LobCreatorBuilder useContextualLobCreation
INFO: HHH000423: Disabling contextual LOB creation as JDBC driver reported JDBC version [3] less than 4
juil. 04, 2012 3:38:19 AM org.hibernate.engine.transaction.internal.TransactionFactoryInitiator initiateService
INFO: HHH000399: Using default transaction strategy (direct JDBC transactions)
juil. 04, 2012 3:38:19 AM org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory
INFO: HHH000397: Using ASTQueryTranslatorFactory
juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.SchemaUpdate execute
INFO: HHH000228: Running hbm2ddl schema update
juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.SchemaUpdate execute
INFO: HHH000102: Fetching database metadata
juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.SchemaUpdate execute
INFO: HHH000396: Updating schema
juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.TableMetadata
INFO: HHH000261: Table found: public.course
juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.TableMetadata
INFO: HHH000037: Columns: [fee, id, title, end_date, begin_date]
juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.TableMetadata
INFO: HHH000108: Foreign keys: []
juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.TableMetadata
INFO: HHH000126: Indexes: [course_pkey]
juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.SchemaUpdate execute
INFO: HHH000232: Schema update complete
juil. 04, 2012 3:38:19 AM org.springframework.beans.factory.support.DefaultSingletonBeanRegistry destroySingletons
Infos: Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5a7fed46: defining beans [org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,sessionFactory,transactionManager,courseDao]; root of factory hierarchy
juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl stop
INFO: HHH000030: Cleaning up connection pool [jdbc:postgresql://localhost:5432/spring]
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'transactionManager' defined in class path resource [beans-hibernate.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: org/hibernate/engine/SessionFactoryImplementor
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1455)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
    at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585)
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:913)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464)
    at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139)
    at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:83)
    at com.boutaya.bill.main.Main.main(Main.java:14)
Caused by: java.lang.NoClassDefFoundError: org/hibernate/engine/SessionFactoryImplementor
    at org.springframework.orm.hibernate3.SessionFactoryUtils.getDataSource(SessionFactoryUtils.java:123)
    at org.springframework.orm.hibernate3.HibernateTransactionManager.afterPropertiesSet(HibernateTransactionManager.java:411)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1514)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452)
    ... 12 more
Caused by: java.lang.ClassNotFoundException: org.hibernate.engine.SessionFactoryImplementor
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    ... 16 more

I think the problem is when I use the class org.springframework.orm.hibernate3.HibernateTransactionManager???
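A hedged reading of the trace for anyone hitting the same error: the startup log shows Hibernate Core 4.1.3.Final, while org.springframework.orm.hibernate3.HibernateTransactionManager was built against Hibernate 3, where SessionFactoryImplementor still lived in org.hibernate.engine; Hibernate 4 moved it to org.hibernate.engine.spi, hence the NoClassDefFoundError. Assuming Spring 3.1 or later is on the classpath (the release that introduced the orm.hibernate4 package), one plausible fix is to switch both beans to the hibernate4 support classes:

<!-- sketch only: requires Spring 3.1+, which ships org.springframework.orm.hibernate4 -->
<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <!-- same dataSource and mapping configuration as before -->
</bean>

<bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory" />
</bean>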

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx

On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but one of them stood out: remote debug support for Windows Azure Cloud Service (a.k.a. WACS).

Live Debug is a Nightmare for Cloud Applications

When we are developing against the public cloud, debugging might be the most difficult task, especially after the application has been deployed. In order to minimize the debugging effort, Microsoft provided a local emulator for cloud service and storage once the Windows Azure platform was announced. By using the local emulator, developers can run their application on the local machine with almost the same behavior as on Windows Azure, and can debug it easily and quickly. But once we have deployed our application to Azure, we have to rely on logging and the diagnostics monitor to debug, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug" which allows any workstation, under any user, to attach to a remote process. This is less secure compared with authenticated remote debug, but much easier and simpler to use. Now, with Windows Azure SDK 2.2, we can attach to our application from our local machine to Windows Azure, and it's very easy.

How to Use the Remote Debugger

First, let's create a new Windows Azure Cloud Project in Visual Studio and select ASP.NET Web Role. Then create an ASP.NET WebForms application. Then right-click on the cloud project and select "publish". In the publish dialog we need to make sure the application will be built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop as I will log into the virtual machine later in this post; it's NOT necessary for remote debug. Then I selected the "advanced settings" tab and made sure "Enable Remote Debugger for all roles" was checked. In WACS, a cloud service can have one or more roles, and each role can have one or more instances. The remote debugger will be enabled for all roles and all instances if we check this box; currently there's no way for us to specify which role(s) and which instance(s) to enable. Finally, click the "publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger.

Attaching to the remote process is easy. Open the "server explorer" window in Visual Studio, expand the "cloud services" node, and find the cloud service, role and instance we just published and want to debug; right-click on the instance and select "attach debugger". Then, after a while (depending on how fast our Internet connection to the Windows Azure data center is), Visual Studio will switch to debug mode. Let's add a breakpoint in the default web page's form-load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint; the call stack, watch and the other debugging features are all available. Now let's hit F5 to continue, and back in the browser we will find the page rendered successfully.

What's Under the Hood

The remote debugger is a WACS plugin. When we check "enable remote debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them there.
At the same time, Visual Studio generates a certificate for the remote debugger and includes it in the package. If we go to the Azure management portal we will find a certificate under our application which was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop, there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger. As below, you can see there are two new ports opened on my application. Finally, in our WACS virtual machine, the Windows Azure system copies the remote debug component, based on which version of Visual Studio we are using, and starts it. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.

Summary

In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2: the Remote Debugger. It allows us to attach to our application from the local machine once it has been deployed to Windows Azure. The remote debugger is powerful and easy to use, but it brings more security risk. And since it's only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publish our beta version to the Azure staging slot), rather than for production.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Telerik is First to Announce Support for Microsoft Silverlight Analytics Framework

Yesterday at the MIX10 conference Microsoft announced the Microsoft Silverlight Analytics Framework Beta. The Silverlight Analytics Framework (SAF) is a new open-source framework that allows designers and developers to integrate web analytics into Silverlight applications in a consistent manner. Supporting out-of-browser and offline scenarios, Microsoft built this framework in conjunction with a number of web analytics services and control vendors to support multiple analytics services simultaneously without degrading application performance. Because the SAF is enabled as a set of behaviors in Microsoft Expression Blend, designers and developers can visually instrument their designs and configure A/B testing rapidly without writing any code. Telerik is proud to be the first control vendor to support the Silverlight Analytics Framework. RadControls for Silverlight can be used with the framework out of the box. The suite offers Silverlight Analytics Framework handlers and behaviors, helping developers fine-tune the values sent to the analytics providers. Because the analytics framework uses the Managed Extensibility Framework (MEF) for composition, you don't need to change the way you use the controls to benefit from the Telerik handlers; just add a reference to the Telerik assemblies that contain the handlers. Here is the code that you need to declare to use RadTreeView:

<UserControl x:Class="Telerik.SLAF.MainPage"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
             xmlns:ga="clr-namespace:Google.WebAnalytics;assembly=Google.WebAnalytics"
             xmlns:sa="clr-namespace:Microsoft.WebAnalytics.Behaviors;assembly=Microsoft.WebAnalytics.Behaviors"
             xmlns:ic="clr-namespace:Microsoft.Expression.Interactivity.Core;assembly=Microsoft.Expression.Interactions"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:telerikNavigation="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Navigation">

    <Grid x:Name="LayoutRoot">
        <i:Interaction.Behaviors>
            <ga:GoogleAnalytics ProfileId="--Your GA ProfileId" Category="Demo" />
        </i:Interaction.Behaviors>

        <telerikNavigation:RadTreeView>
            <i:Interaction.Triggers>
                <i:EventTrigger EventName="SelectionChanged">
                    <sa:TrackAction />
                </i:EventTrigger>
            </i:Interaction.Triggers>
            <telerikNavigation:RadTreeViewItem Header="Item1">
            </telerikNavigation:RadTreeViewItem>
            <telerikNavigation:RadTreeViewItem Header="Item2" />
            <telerikNavigation:RadTreeViewItem Header="Item3" />
        </telerikNavigation:RadTreeView>
    </Grid>
</UserControl>

Download the Telerik Microsoft Silverlight Analytics Framework Handlers and the sample project. This is our first Beta release - please drop us a line with any feedback you have, or, even better, if you are at MIX10 come visit us at the booth in the "Commons" hall so we can discuss it in person.

    Read the article

  • IE9, LightSwitch Beta 2 and Zune HD: A Study in Risk Management?

    - by andrewbrust
Photo by parl, ‘Risk.’ Under Creative Commons Attribution-NonCommercial-NoDerivs License

This has been a busy week for Microsoft, and for me as well.  On Monday, Microsoft launched Internet Explorer 9 at South by Southwest (SXSW) in Austin, TX.  That evening I flew from New York to Seattle.  On Tuesday morning, Microsoft launched Visual Studio LightSwitch, Beta 2 with a Go-Live license, in Redmond, and I had the privilege of speaking at the keynote presentation where the announcement was made.  Readers of this blog know I’m a fan of LightSwitch, so I was happy to tell the app dev tools partners in the audience that I thought the LightSwitch extensions ecosystem represented a big opportunity – comparable to the opportunity when Visual Basic 1.0 was entering its final beta roughly 20 years ago.  On Tuesday evening, I flew back to New York (and wrote most of this post in-flight).

Two busy, productive days.  But there was a caveat that impacts the accomplishments, because Monday was also the day reports surfaced from credible news agencies that Microsoft was discontinuing its dedicated Zune hardware efforts.  While the Zune brand, technology and service will continue to be a component of Windows Phone and a piece of the Xbox puzzle as well, speculation is that Microsoft will no longer be going toe-to-toe with iPod touch in the portable music player market.

If we take all three of these developments together (even if one of them is based on speculation), two interesting conclusions can reasonably be drawn, one good and one less so.

Microsoft is doubling down on technologies it finds strategic and de-emphasizing those that it does not.  HTML 5 and the Web are strategic, so here comes IE9, and it’s a very good browser.  Try it and see.  Silverlight is strategic too, as is SQL Server, Windows Azure and SQL Azure, so here comes Visual Studio LightSwitch Beta 2 and a license to deploy its apps to production.  Downloads of that product have exceeded Microsoft’s projections by more than 50%, and the company is even citing analyst firms’ figures covering the number of power-user developers that might use it. (I happen to think the product will be used by full-fledged developers as well, but that’s a separate discussion.) Windows Phone is strategic too…I wasn’t 100% positive of that before, but the Nokia agreement has made me confident.  Xbox as an entertainment appliance is also strategic.  Standalone music players are not strategic – and even if they were, selling them has been a losing battle for Microsoft.  So if Microsoft has consolidated the Zune content story and the ZunePass subscription into Xbox and Windows Phone, it would make sense, and would be a smart allocation of resources.  Essentially, it would be for the greater good.

But it’s not all good.  In this scenario, Zune player customers would lose out.  Unless they wanted to switch to Windows Phone, and then use their phone’s battery for the portable media needs, they’re going to need a new platform.  They’re going to feel abandoned.  Even if Zune lives, there have been other such cul de sacs for customers.  Remember SPOT watches?  Live Spaces?  The original Live Mesh?  Microsoft discontinued each of these products.  The company is to be commended for cutting its losses, as admitting a loss isn’t easy.  But Redmond won’t be well-regarded by the victims of those decisions.  Instead, it gets black marks.

What’s the answer?
I think it’s a bit like the 1980’s New York City “don’t block the box” gridlock rules: don’t enter an intersection unless you see a clear path through it.  If the light turns red and you’re blocking the perpendicular traffic, that’s your fault in judgment.  You get fined and get points on your license and you don’t get to shrug it off as beyond your control.  Accountability is key.  The same goes for Microsoft.  If it decides to enter a market, it should see a reasonable path through success in that market.

Switching analogies, Microsoft shouldn’t make investments haphazardly, and it certainly shouldn’t ask investors to buy into a high-risk fund that is sold as safe and which offers only moderate returns.  People won’t continue to invest with a fund manager with a track record of over-zealous, imprudent, sub-prime investments.  The same is true on the product side for Microsoft, and not just with music players and geeky wrist watches.  It’s true of Web browsers, and line-of-business app dev tools, and smartphones, and cloud platforms and operating systems too.  When Microsoft is casual about its own risk, it raises risk for its customers, and weakens its reputation, market share and credibility.  That doesn’t mean all risk is bad, but it does mean no product team’s risk should be taken lightly.

For mutual fund companies, it’s the CEO’s job to give his fund managers autonomy, but to make sure they’re conforming to a standard of rational risk management.  Because all those funds carry the same brand, and many of them serve the same investors. The same goes for Microsoft, its product portfolio, its executive ranks and its product managers.

    Read the article

  • GuestPost: Unit Testing Entity Framework (v1) Dependent Code using TypeMock Isolator

    - by Eric Nelson
Time for another guest post (check out others in the series), this time bringing together the world of mocking with the world of Entity Framework. A big thanks to Moses for agreeing to do this.

Unit Testing Entity Framework Dependent Code using TypeMock Isolator
by Muhammad Mosa

Introduction

Unit testing data access code is, in my opinion, a challenging thing. Let us consider unit tests and integration tests. In integration tests you are allowed to have environmental dependencies, such as a physical database connection to insert, update, delete or retrieve your data. However, when performing unit tests it is often much more efficient and productive to remove environmental dependencies. Instead you will need to fake these dependencies. Faking a database (also known as mocking) can be relatively straightforward, but the version of Entity Framework released with .NET 3.5 SP1 has a number of implementation specifics which actually make faking the existence of a database quite difficult.

Faking Entity Framework

As mentioned earlier, to effectively unit test you will need to fake/simulate Entity Framework calls to the database. There are many free open source mocking frameworks that can help you achieve this, but it will require additional effort to overcome and work around a number of limitations in those frameworks. Examples of these limitations include:

- Not able to fake calls to non-virtual methods
- Not able to fake sealed classes
- Not able to fake LINQ to Entities queries (replace database calls with in-memory collection calls)

There is a mocking framework which is flexible enough to handle limitations such as those above. The commercially available TypeMock Isolator can do the job for you with less code and ultimately more readable unit tests. I'm going to demonstrate tackling one of those limitations using MoQ as my mocking framework, and then I will tackle the same issue using TypeMock Isolator.

Mocking Entity Framework with MoQ

One basic need when faking Entity Framework is to fake the ObjectContext. This cannot be done by passing just any connection string; you have to pass a correct Entity Framework connection string that specifies the CSDL, SSDL and MSL locations along with a provider connection string. Assuming we are going to do that, we'll explore another limitation. The limitation we are going to face now is related to not being able to fake calls to non-virtual/overridable members with MoQ. I have the following repository method that adds an EntityObject (an instance of a Blog entity) to the Blogs entity set in an ObjectContext.

public override void Add(Blog blog)
{
    if (BlogContext.Blogs.Any(b => b.Name == blog.Name))
    {
        throw new InvalidOperationException("Blog with same name already exists!");
    }
    BlogContext.AddToBlogs(blog);
}

The method does a very simple check that the name of the new Blog entity instance doesn't already exist, via the simple LINQ query above. If the blog doesn't already exist, it is simply added to the current context, to be saved when SaveChanges of the ObjectContext instance (e.g. BlogContext) is called. However, if a blog with the same name exists, an exception (InvalidOperationException) will be thrown. Let us now create a unit test for the Add method using MoQ.

[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void Add_Should_Throw_InvalidOperationException_When_Blog_With_Same_Name_Already_Exists()
{
    //(1) We shouldn't depend on configuration when doing unit tests! But,
    //it's a workaround to fake the ObjectContext
    string connectionString = ConfigurationManager
                                  .ConnectionStrings["MyBlogConnString"]
                                  .ConnectionString;

    //(2) Arrange: Fake ObjectContext
    var fakeContext = new Mock<MyBlogContext>(connectionString);

    //(3) Next line will pass, as ObjectContext now can be faked with proper connection string
    var repo = new BlogRepository(fakeContext.Object);

    //(4) Create fake ObjectQuery<Blog>. Will be used to substitute MyBlogContext.Blogs property
    var fakeObjectQuery = new Mock<ObjectQuery<Blog>>("[Blogs]", fakeContext.Object);

    //(5) Arrange: Set Expectations
    //Next line will throw an exception by MoQ:
    //System.ArgumentException: Invalid setup on a non-overridable member
    fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object);
    fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true);

    //Act
    repo.Add(new Blog { Name = "NewBlog" });
}

This test method checks that the correct exception ([ExpectedException(typeof(InvalidOperationException))]) is thrown when a developer attempts to Add a blog with a name that already exists.

On (1) a connection string is initialized from the configuration file, to retrieve the full Entity Framework connection string.

On (2) a fake ObjectContext is created. The ObjectContext here is MyBlogContext, and it is being faked with: var fakeContext = new Mock<MyBlogContext>(connectionString); This way a fake context is created using MoQ.

On (3) a BlogRepository instance is created. BlogRepository has a dependency on the generated Entity Framework ObjectContext, MyBlogContext, and so the fake context is passed to its constructor.

On (4) a fake instance of ObjectQuery<Blog> is created, to be used as a substitute for the MyBlogContext.Blogs property, as we will see in (5).

On (5) we set up an expectation for calling the Blogs property of MyBlogContext and substitute the return result with the fake ObjectQuery<Blog> instance created on (4).

When you run this test it will fail, with MoQ throwing an exception because of this line: fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object); This happens because the generated property MyBlogContext.Blogs is not virtual/overridable. And assuming it were virtual, or you managed to make it virtual, it would then fail at the following line, throwing the same exception: fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true); This time the test fails because the Any extension method is not virtual/overridable. You won't be able to replace ObjectQuery<Blog> with a fake in-memory collection to test your LINQ to Entities queries. Now let's see how replacing MoQ with TypeMock Isolator can help.

Mocking Entity Framework with TypeMock Isolator

The following is the same test method we had above for MoQ, but this time implemented using TypeMock Isolator:

[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void Add_New_Blog_That_Already_Exists_Should_Throw_InvalidOperationException()
{
    //(1) Create fake in memory collection of blogs
    var fakeInMemoryBlogs = new List<Blog> { new Blog { Name = "FakeBlog" } };

    //(2) create fake context
    var fakeContext = Isolate.Fake.Instance<MyBlogContext>();

    //(3) Setup expected call to MyBlogContext.Blogs property through the fake context
    Isolate.WhenCalled(() => fakeContext.Blogs)
           .WillReturnCollectionValuesOf(fakeInMemoryBlogs.AsQueryable());

    //(4) Create new blog with a name that already exists in the fake in memory collection in (1)
    var blog = new Blog { Name = "FakeBlog" };

    //(5) Instantiate instance of BlogRepository (class under test)
    var repo = new BlogRepository(fakeContext);

    //(6) Acting by adding the newly created blog
    repo.Add(blog);
}

When running the above test method it passes, as the Add method of BlogRepository throws an InvalidOperationException, which is the expected behaviour. Nothing prevents us from faking out the database interaction! Even faking the ObjectContext at (2) didn't require a connection string. On (3) Isolator sets up a fake result for MyBlogContext.Blogs when it is called through the fake instance fakeContext created on (2). The fake result is just an in-memory collection, declared and initialized on (1). Finally, at (6), we act by calling the Add method of BlogRepository, passing a new Blog instance whose name already exists in the fake in-memory collection set up at (1). As expected, the test passes because it throws the expected exception declared on top of the test method: InvalidOperationException. TypeMock Isolator succeeded in faking Entity Framework with ease.

Conclusion

We explored how to write a simple unit test using TypeMock Isolator for code which is using Entity Framework. We also explored a few of the limitations of other mocking frameworks which TypeMock is successfully able to handle. There are workarounds that you can use to overcome limitations when using MoQ or Rhino Mocks; however, the workarounds will require you to write more code and your tests will likely be more complex. For a comparison between different mocking frameworks take a look at this document produced by TypeMock. You might also want to check out this open source project to compare mocking frameworks. I hope you enjoyed this post.

Muhammad Mosa
http://mosesofegypt.net/
http://twitter.com/mosessaur

Screencast of unit testing Entity Framework

Related Links
GuestPost: Introduction to Mocking
GuestPost: Typemock Isolator – Much more than an Isolation framework

    Read the article

  • Problem with initializing a type with WindsorContainer

    - by the_drow
public ApplicationView(string[] args)
{
    InitializeComponent();

    string configFilePath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "log4net.config");
    FileInfo configFileInfo = new FileInfo(configFilePath);
    XmlConfigurator.ConfigureAndWatch(configFileInfo);

    IConfigurationSource configSource = ConfigurationManager.GetSection("ActiveRecord") as IConfigurationSource;
    Assembly assembly = Assembly.Load("Danel.Nursing.Model");
    ActiveRecordStarter.Initialize(assembly, configSource);

    WindsorContainer windsorContainer = ApplicationUtils.GetWindsorContainer();
    windsorContainer.Kernel.AddComponentInstance<ApplicationView>(this);
    windsorContainer.Kernel.AddComponent(typeof(ApplicationController).Name, typeof(ApplicationController));

    controller = windsorContainer.Resolve<ApplicationController>(); // exception is thrown here

    OnApplicationLoad(args);
}

The stack trace is this:

Castle.MicroKernel.ComponentActivator.ComponentActivatorException was unhandled
Message="ComponentActivator: could not instantiate Danel.Nursing.Scheduling.Actions.DataServices.NurseAbsenceDataService"
Source="Castle.MicroKernel"
StackTrace:
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateInstance(CreationContext context, Object[] arguments, Type[] signature)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.Instantiate(CreationContext context)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.InternalCreate(CreationContext context)
    at Castle.MicroKernel.ComponentActivator.AbstractComponentActivator.Create(CreationContext context)
    at Castle.MicroKernel.Lifestyle.AbstractLifestyleManager.Resolve(CreationContext context)
    at Castle.MicroKernel.Lifestyle.SingletonLifestyleManager.Resolve(CreationContext context)
    at Castle.MicroKernel.Handlers.DefaultHandler.Resolve(CreationContext context)
    at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.ResolveServiceDependency(CreationContext context, ComponentModel model, DependencyModel dependency)
    at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.Resolve(CreationContext context, ISubDependencyResolver parentResolver, ComponentModel model, DependencyModel dependency)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateConstructorArguments(ConstructorCandidate constructor, CreationContext context, Type[]& signature)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.Instantiate(CreationContext context)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.InternalCreate(CreationContext context)
    at Castle.MicroKernel.ComponentActivator.AbstractComponentActivator.Create(CreationContext context)
    at Castle.MicroKernel.Lifestyle.AbstractLifestyleManager.Resolve(CreationContext context)
    at Castle.MicroKernel.Lifestyle.SingletonLifestyleManager.Resolve(CreationContext context)
    at Castle.MicroKernel.Handlers.DefaultHandler.Resolve(CreationContext context)
    at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.ResolveServiceDependency(CreationContext context, ComponentModel model, DependencyModel dependency)
    at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.Resolve(CreationContext context, ISubDependencyResolver parentResolver, ComponentModel model, DependencyModel dependency)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateConstructorArguments(ConstructorCandidate constructor, CreationContext context, Type[]& signature)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.Instantiate(CreationContext context)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.InternalCreate(CreationContext context)
    at Castle.MicroKernel.ComponentActivator.AbstractComponentActivator.Create(CreationContext context)
    at Castle.MicroKernel.Lifestyle.AbstractLifestyleManager.Resolve(CreationContext context)
    at Castle.MicroKernel.Lifestyle.SingletonLifestyleManager.Resolve(CreationContext context)
    at Castle.MicroKernel.Handlers.DefaultHandler.Resolve(CreationContext context)
    at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.ResolveServiceDependency(CreationContext context, ComponentModel model, DependencyModel dependency)
    at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.Resolve(CreationContext context, ISubDependencyResolver parentResolver, ComponentModel model, DependencyModel dependency)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateConstructorArguments(ConstructorCandidate constructor, CreationContext context, Type[]& signature)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.Instantiate(CreationContext context)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.InternalCreate(CreationContext context)
    at Castle.MicroKernel.ComponentActivator.AbstractComponentActivator.Create(CreationContext context)
    at Castle.MicroKernel.Lifestyle.AbstractLifestyleManager.Resolve(CreationContext context)
    at Castle.MicroKernel.Lifestyle.SingletonLifestyleManager.Resolve(CreationContext context)
    at Castle.MicroKernel.Handlers.DefaultHandler.Resolve(CreationContext context)
    at Castle.MicroKernel.DefaultKernel.ResolveComponent(IHandler handler, Type service, IDictionary additionalArguments)
    at Castle.MicroKernel.DefaultKernel.ResolveComponent(IHandler handler, Type service)
    at Castle.MicroKernel.DefaultKernel.get_Item(Type service)
    at Castle.Windsor.WindsorContainer.Resolve(Type service)
    at Castle.Windsor.WindsorContainer.Resolve[T]()
    at Danel.Nursing.Scheduling.ApplicationView..ctor(String[] args) in E:\Agile\Scheduling\Danel.Nursing.Scheduling\ApplicationView.cs:line 65
    at Danel.Nursing.Scheduling.Program.Main(String[] args) in E:\Agile\Scheduling\Danel.Nursing.Scheduling\Program.cs:line 24
    at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
    at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
    at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
    at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Threading.ThreadHelper.ThreadStart()
InnerException: System.ArgumentNullException
Message="Value cannot be null.\r\nParameter name: types"
Source="mscorlib"
ParamName="types"
StackTrace:
    at System.Type.GetConstructor(BindingFlags bindingAttr, Binder binder, Type[] types, ParameterModifier[] modifiers)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.FastCreateInstance(Type implType, Object[] arguments, Type[] signature)
    at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateInstance(CreationContext context, Object[] arguments, Type[] signature)
InnerException:

It actually says that the type that I'm trying to initialize does not exist, I think. This is the concrete type that it complains about:

namespace Danel.Nursing.Scheduling.Actions.DataServices
{
    using System;
    using Helpers;
    using Rhino.Commons;
    using Danel.Nursing.Model;
    using NHibernate.Expressions;
    using System.Collections.Generic;
    using DateUtil = Danel.Nursing.Scheduling.Actions.Helpers.DateUtil;
    using Danel.Nursing.Scheduling.Actions.DataServices.Interfaces;

    public class NurseAbsenceDataService : AbstractDataService<NurseAbsence>, INurseAbsenceDataService
    {
        // note: this constructor is declared without an access modifier, i.e. it is private
        NurseAbsenceDataService(IRepository<NurseAbsence> repository)
            : base(repository)
        { }

        //...
    }
}

The AbstractDataService only holds the IRepository for now. Anyone got an idea why the exception is thrown?

    Read the article

  • java.lang.NoClassDefFoundError: org/springframework/transaction/interceptor/TransactionInterceptor

    - by user1137146
I am trying to integrate Spring 3.1.1 with Hibernate 4.0. This is my dispatcher-servlet.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:jee="http://www.springframework.org/schema/jee"
       xmlns:lang="http://www.springframework.org/schema/lang"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:util="http://www.springframework.org/schema/util"
       xmlns:mvc="http://www.springframework.org/schema/mvc"
       xsi:schemaLocation="http://www.springframework.org/schema/lang http://www.springframework.org/schema/lang/spring-lang.xsd
                           http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd
                           http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd
                           http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.1.xsd
                           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd
                           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.1.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <context:component-scan base-package="com.future.controllers" />
    <context:annotation-config />
    <context:component-scan base-package="com.future.services.menu" />
    <context:component-scan base-package="com.future.dao" />

    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
          p:driverClassName="com.mysql.jdbc.Driver"
          p:url="jdbc:mysql://localhost:3306/bar_visitor2"
          p:username="root"
          p:password="" />

    <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
        <property name="viewClass" value="org.springframework.web.servlet.view.JstlView" />
        <property name="prefix" value="/WEB-INF/views/" />
        <property name="suffix" value=".jsp" />
    </bean>

    <bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
        <property name="dataSource" ref="dataSource" />
        <property name="configLocation">
            <value>classpath:hibernate.cfg.xml</value>
        </property>
    </bean>

    <tx:annotation-driven />

    <bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
        <property name="sessionFactory" ref="sessionFactory" />
    </bean>
</beans>

When I try to use the @Transactional annotation I get the error java.lang.NoClassDefFoundError: org/springframework/transaction/interceptor/TransactionInterceptor. I checked my classpath and TransactionInterceptor.class is there. What am I doing wrong? Should I add something?

Edit: This is my lib folder:
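A hedged note on this symptom: a NoClassDefFoundError at runtime for a class that is visible at compile time usually means the jar carrying it (spring-tx, in the case of TransactionInterceptor) is on the IDE classpath but missing from the deployed WEB-INF/lib. Assuming Maven manages the dependencies, the coordinates would look like the sketch below; with manually managed jars, the equivalent is copying spring-tx-3.1.1.RELEASE.jar into the lib folder:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-tx</artifactId>
    <version>3.1.1.RELEASE</version>
</dependency>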

    Read the article

  • No endpoint mapping found for..., using SpringWS, JaxB Marshaller

    - by Saky
I get this error:

No endpoint mapping found for [SaajSoapMessage {http://mycompany/coolservice/specs}ChangePerson]

Following is my ws config file:

<bean class="org.springframework.ws.server.endpoint.mapping.PayloadRootAnnotationMethodEndpointMapping">
    <description>An endpoint mapping strategy that looks for @Endpoint and @PayloadRoot annotations.</description>
</bean>

<bean class="org.springframework.ws.server.endpoint.adapter.MarshallingMethodEndpointAdapter">
    <description>Enables the MessageDispatchServlet to invoke methods requiring OXM marshalling.</description>
    <constructor-arg ref="marshaller"/>
</bean>

<bean id="marshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
    <property name="contextPaths">
        <list>
            <value>org.company.xml.persons</value>
            <value>org.company.xml.person_allextensions</value>
            <value>generated</value>
        </list>
    </property>
</bean>

<bean id="persons" class="com.easy95.springws.wsdl.wsdl11.MultiPrefixWSDL11Definition">
    <property name="schemaCollection" ref="schemaCollection"/>
    <property name="portTypeName" value="persons"/>
    <property name="locationUri" value="/ws/personnelService/"/>
    <property name="targetNamespace" value="http://mycompany/coolservice/specs/definitions"/>
</bean>

<bean id="schemaCollection" class="org.springframework.xml.xsd.commons.CommonsXsdSchemaCollection">
    <property name="xsds">
        <list>
            <value>/DataContract/Person-AllExtensions.xsd</value>
            <value>/DataContract/Person.xsd</value>
        </list>
    </property>
    <property name="inline" value="true"/>
</bean>

I then have the following files:

public interface MarshallingPersonService {

    public final static String NAMESPACE = "http://mycompany/coolservice/specs";
    public final static String CHANGE_PERSON = "ChangePerson";

    public RespondPersonType changeEquipment(ChangePersonType request);
}

and

@Endpoint
public class PersonEndPoint implements MarshallingPersonService {

    @PayloadRoot(localPart = CHANGE_PERSON, namespace = NAMESPACE)
    public RespondPersonType changePerson(ChangePersonType request) {
        System.out.println("Received a request, is request null? " + (request == null ? "yes" : "no"));
        return null;
    }
}

I am pretty much new to web services and not very comfortable with annotations. I am following a tutorial on setting up a JAXB marshaller in Spring-WS. I would rather use XML mappings than annotations, although for now I am getting the error message above.
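One hedged observation on the configuration above: PayloadRootAnnotationMethodEndpointMapping can only map @Endpoint beans that are actually registered in the application context, and the snippet shows no bean definition or component scan covering PersonEndPoint. A minimal sketch of the missing registration; the package name is a placeholder for wherever PersonEndPoint really lives:

<!-- hypothetical package: point base-package at the package containing PersonEndPoint -->
<context:component-scan base-package="com.mycompany.coolservice.endpoints" />

<!-- or, without scanning, register the endpoint explicitly -->
<bean class="com.mycompany.coolservice.endpoints.PersonEndPoint" />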

    Read the article

  • Error running java -jar command

    - by dmantamp
Hi all, I created a jar file using the following Ant script:

<manifestclasspath property="jar.classpath" jarfile="${bin.dir}/${jar.app.name}" maxparentlevels="0">
    <classpath refid="main.class.path" />
</manifestclasspath>

<target name="jar">
    <mkdir dir="${build.dir}/lib/isp"/>
    <mkdir dir="${build.dir}/lib/jasper"/>
    <copy todir="${build.dir}/lib/jasper">
        <fileset dir="${lib.jasper.dir}">
            <include name="**/*.jar" />
        </fileset>
    </copy>
    <copy todir="${build.dir}/lib/isp">
        <fileset dir="${lib.isp.dir}">
            <include name="**/*.jar" />
        </fileset>
    </copy>
    <jar jarfile="${bin.dir}/${jar.app.name}" index="true" basedir="${classes.dir}" excludes="lib/mytest.jar">
        <manifest>
            <attribute name="Main-Class" value="${main.class}" />
            <attribute name="Class-Path" value="${jar.classpath}" />
        </manifest>
    </jar>
</target>

The resulting jar file has the following MANIFEST.MF entries:

Main-Class: dm.jb.Main
Class-Path: lib/isp/OfficeLnFs_2.2.jar lib/isp/RXTXcomm.jar lib/isp/barbecue-1.0.6d.jar lib/isp/commons-logging-1.1.jar lib/isp/forms-1.0.5.jar lib/isp/gnujaxp.jar lib/isp/helpUI.jar lib/isp/inspInstaller.jar lib/isp/itext-2.0.1.jar lib/isp/itext-2.0.2.jar lib/isp/jcalendar-1.3.2.jar lib/isp/jcl.jar lib/isp/jcommon-1.0.10.jar lib/isp/jcommon-1.0.9.jar lib/isp/jdnc-0_7-all.jar lib/isp/jdnc-runner.jar lib/isp/jdom.jar lib/isp/jfreechart-1.0.6.jar lib/isp/jlfgr-1_0.jar lib/isp/junit.jar lib/isp/log4j-1.2.9.jar lib/isp/looks-1.3.2.jar lib/isp/msbase.jar lib/isp/mssqlserver.jar lib/isp/msutil.jar lib/isp/mysql-connector

When I try to run the command java -jar mytest.jar, it fails and throws an error saying dm.jb.Main is not found. But I can run the class by specifying the classpath: java -classpath dm.jb.Main. Please help me. DM

    Read the article

  • Does writing data to server using Java URL class require response from server?

    - by gigadot
I am trying to upload files using the Java URL class, and I found a previous question on Stack Overflow which explains the details very well, so I tried to follow it. Below is my code, adapted from the snippet given in the answer. My problem is that if I don't make a call to connection.getResponseCode() or connection.getInputStream() or connection.getResponseMessage(), or anything else related to the response from the server, the request will never be sent to the server. Why do I need to do this? Or is there any way to write the data without getting the response?

P.S. I have developed a server-side upload servlet which accepts multipart/form-data and saves it to files using FileUpload. It is stable and definitely works without any problem, so this is not where my problem comes from.

import java.io.Closeable;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.commons.io.IOUtils;

public class URLUploader {

    public static void closeQuietly(Closeable... objs) {
        for (Closeable closeable : objs) {
            IOUtils.closeQuietly(closeable);
        }
    }

    public static void main(String[] args) throws IOException {
        File textFile = new File("D:\\file.zip");
        String boundary = Long.toHexString(System.currentTimeMillis()); // Just generate some unique random value.

        HttpURLConnection connection = (HttpURLConnection) new URL("http://localhost:8080/upslet/upload").openConnection();
        connection.setDoOutput(true);
        connection.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);

        OutputStream output = connection.getOutputStream();
        PrintWriter writer = new PrintWriter(output, true);

        // Send text file.
        writer.println("--" + boundary);
        writer.println("Content-Disposition: form-data; name=\"file1\"; filename=\"" + textFile.getName() + "\"");
        writer.println("Content-Type: application/octet-stream");
        FileInputStream fin = new FileInputStream(textFile);
        writer.println();
        IOUtils.copy(fin, output);
        writer.println();

        // End of multipart/form-data.
        writer.println("--" + boundary + "--");
        output.flush();
        closeQuietly(fin, writer, output);

        // Above request will never be sent if .getInputStream() or .getResponseCode()
        // or .getResponseMessage() does not get called.
        connection.getResponseCode();
    }
}
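Some background on the observed behavior (an aside, not part of the original question): by default HttpURLConnection buffers the whole request body in memory and only performs the actual HTTP exchange when the response is requested, so calling getResponseCode() (or getInputStream()) is what commits the request. A minimal hedged sketch of that pattern, reusing the servlet URL from the question with a placeholder body:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PostCommitSketch {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://localhost:8080/upslet/upload").openConnection();
        conn.setDoOutput(true);
        // Optional: send the body as it is written instead of buffering it all in
        // memory first; useful for large uploads.
        conn.setChunkedStreamingMode(8192);

        OutputStream out = conn.getOutputStream();
        out.write("placeholder body".getBytes("UTF-8"));
        out.close();

        // In the default (buffered) mode this call is what actually sends the request;
        // in streaming mode it completes the exchange and lets the connection be reused.
        int status = conn.getResponseCode();
        System.out.println("HTTP status: " + status);
    }
}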

    Read the article

  • Problem in JSF2 with client-side state saving and serialization

    - by marcel
I have a problem in JSF2 with client-side state saving and serialization. I have created a page with a full description and a small class diagram: http://tinyurl.com/jsf2serial. For client-side state saving I have to implement Serializable on the classes Search, BackingBean and Connection. The exception that is thrown is:

java.io.NotSerializableException: org.apache.commons.httpclient.HttpClient
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1156)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
    at java.util.HashMap.writeObject(HashMap.java:1001)
    at sun.reflect.GeneratedMethodAccessor80.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:945)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1461)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
    at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1338)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1146)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
    at java.util.HashMap.writeObject(HashMap.java:1001)
    at sun.reflect.GeneratedMethodAccessor80.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:945)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1461)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
    at com.sun.faces.renderkit.ClientSideStateHelper.doWriteState(ClientSideStateHelper.java:293)
    at com.sun.faces.renderkit.ClientSideStateHelper.writeState(ClientSideStateHelper.java:167)
    at com.sun.faces.renderkit.ResponseStateManagerImpl.writeState(ResponseStateManagerImpl.java:123)
    at com.sun.faces.application.StateManagerImpl.writeState(StateManagerImpl.java:155)
    at com.sun.faces.application.view.WriteBehindStateWriter.flushToWriter(WriteBehindStateWriter.java:221)
    at com.sun.faces.application.view.FaceletViewHandlingStrategy.renderView(FaceletViewHandlingStrategy.java:397)
    at com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:126)
    at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:127)
    at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101)
    at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:139)
    at javax.faces.webapp.FacesServlet.service(FacesServlet.java:313)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
    at java.lang.Thread.run(Thread.java:637)

Maybe this is a problem with my design, because I am new to developing Java webapps.
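A hedged aside on the trace above: the root failure is that org.apache.commons.httpclient.HttpClient is not Serializable, so it cannot sit (directly or indirectly) in the view-state object graph. One common pattern, sketched here under the assumption that the Connection class from the diagram is what holds the HttpClient, is to mark the field transient and recreate it lazily after deserialization:

import java.io.Serializable;
import org.apache.commons.httpclient.HttpClient;

public class Connection implements Serializable {

    private static final long serialVersionUID = 1L;

    // HttpClient is not Serializable; transient excludes it from JSF state saving.
    private transient HttpClient httpClient;

    // Lazy accessor: after deserialization the field is null, so rebuild it on demand.
    private HttpClient getHttpClient() {
        if (httpClient == null) {
            httpClient = new HttpClient();
        }
        return httpClient;
    }
}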

    Read the article

  • Maven best practice for generating multiple jars with different/filtered classes ?

    - by jaguard
I developed a Java utility library (similar to Apache Commons) that I use in various projects. In addition to fat clients, I also use it for mobile clients (PDAs with the J9 Foundation profile). Over time the library, which started as a single project, spread over multiple packages. As a result I end up with a lot of functionality that is not really needed in all the projects. Since this library is also used inside some mobile/PDA projects, I need a way to collect just the used classes and generate the actual specialized jars.

Currently, in the projects that are using this library, I have Ant jar tasks that generate (from the utility project) the specialized jar files (ex: my-util-1.0-pda.jar, my-util-1.0-rcp.jar) using the include/exclude features of the jar task. This is mostly needed due to the jar size constraints of the mobile projects. Migrating now to Maven, I wonder if there are any best practices to arrive at something similar, so I consider the following scenarios:

[1] - Additionally to the main jar artifact (my-lib-1.0.jar), also generate, inside the my-lib project, the separate/specialized artifacts using classifiers (ex: my-lib-1.0-pda.jar) with the Maven Jar Plugin or Maven Assembly Plugin filtering/includes (see the sketch after this question). I'm not very comfortable with this approach since it pollutes the library with the library consumers' demands (filters).

[2] - Create additional Maven projects for all the specialized clients/projects, which will "wrap" my-lib and generate the filtered jar artifacts (ex: my-lib-wrapper-pda-1.0 ...etc). As a result, these wrapper projects will contain the filtering (to generate the filtered artifact) and will depend just on the my-lib project, and the client projects will depend on my-lib-wrapper-xxx-1.0 instead of my-lib-1.0. This approach may look problematic since, even though it leaves the my-lib project intact (with no additional classifiers and artifacts), it basically doubles the number of projects: for every client project I'll have one just to collect the needed classes from the my-util library (a my-pda-app project will need a my-lib-wrapper-for-my-pda-app project/dependency).

[3] - In every client project that uses the library (ex: my-pda-app), add some specialized Maven plugins to trim out (when generating the final artifact/package) the unneeded classes (ex: maven-assembly-plugin, maven-jar-plugin, proguard-maven-plugin).

What is the best practice for solving this kind of problem in the "Maven way"?!
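For concreteness, here is a rough sketch of what scenario [1] could look like with the Maven Jar Plugin; the classifier name and include pattern are placeholders, not taken from the question:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <executions>
        <execution>
            <id>pda-jar</id>
            <phase>package</phase>
            <goals>
                <goal>jar</goal>
            </goals>
            <configuration>
                <!-- produces my-util-1.0-pda.jar next to the main artifact -->
                <classifier>pda</classifier>
                <includes>
                    <!-- hypothetical pattern: keep only the packages the PDA clients need -->
                    <include>com/mycompany/util/core/**</include>
                </includes>
            </configuration>
        </execution>
    </executions>
</plugin>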

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31  | Next Page >