Search Results

Search found 2109 results on 85 pages for 'it depends'.

Page 75 of 85.

  • Use bug tracker to get things done and manage personal tasks?

    - by Frank
    This is slightly off-topic, but can only be answered by programmers and is useful to many programmers: do you think it is useful to use a bug tracking system to keep track of personal todo items and to Get Things Done? I have not tried that; in fact, I don't have much experience with bug tracking systems.

    For my todo lists, I have played around with Google Tasks and Remember The Milk, but both of them have shortcomings:

    Google Tasks: I like that you can create todo lists easily, reorder items in the list and easily create hierarchies. But it is way too simplistic and does not allow you to tag tasks or move tasks from one list to another.

    Remember The Milk: It is nice and sleek, but you cannot create hierarchies of tasks, cannot arbitrarily reorder tasks and cannot set dependencies between tasks.

    That's where a bug tracking system should come in: since I think (maybe too much?) like a programmer, my tasks have a natural hierarchy and a tree of dependencies, like in a Makefile. Here are two examples: The task of writing my thesis is done when several milestones are done. Some of these milestones can run in parallel (writing the background chapter, running experiments A, running experiments B); others depend on each other (writing the main chapter depends on first getting results from experiments A). The same is true for more personal goals: I want to host a dinner party, which requires finding a good date, finishing the guest list, making invitations, finding nice recipes, cooking, ... For me, all these tasks involve hierarchical dependencies and milestones that bug tracking systems should be able to handle.

    There is an article that explains how to do advanced GTD with Remember The Milk, but the author has to use several workarounds: (1) add a general tag 'wait' to tasks that are waiting for others to be completed, although you cannot enter the IDs of the tasks they are waiting for; (2) start some special tasks with "." so that they sit at the top of the alphabetically sorted list and signal that others 'below' them are subgoals. Bug tracking systems should be able to handle these things much more naturally.

    Does anyone have experience and can recommend a lightweight bug tracking system that might be good for this? Other requirements: it should run as a web app and should allow me to tag a task with several tags (like 'work', 'fun', 'short-task', 'errands', ...).

  • Connection Refused running multiple environments on Selenium Grid 1.04 via Ubuntu 9.04

    - by ReadyWater
    Hello, I'm writing a Selenium Grid test suite which is going to be run on a series of different machines. I wrote most of it on my MacBook but have recently transferred it over to my work machine, which is running Ubuntu 9.04. That's actually my first experience with a Linux machine, so I may be missing something very simple (I have disabled the firewall, though). I haven't been able to get the multi-environment thing working at all, and I've been trying and manually reviewing for a while. Any recommendations and help would be greatly, greatly appreciated! The error I'm getting when I run the test is:

        [java] FAILED CONFIGURATION: @BeforeMethod startFirstEnvironment("localhost", 4444, "*safari", "http://remoteURL:8080/tutor")
        [java] java.lang.RuntimeException: Could not start Selenium session: ERROR: Connection refused

    I thought it might be the Mac refusing the connection, but using Wireshark I determined that no connection attempt was made on the Mac. Here's the code for setting up the session, which is where it seems to be dying:

        @BeforeMethod(groups = {"default", "example"}, alwaysRun = true)
        @Parameters({"seleniumHost", "seleniumPort", "firstEnvironment", "webSite"})
        protected void startFirstEnvironment(String seleniumHost, int seleniumPort,
                String firstEnvironment, String webSite) throws Exception {
            try {
                startSeleniumSession(seleniumHost, seleniumPort, firstEnvironment, webSite);
                session().setTimeout(TIMEOUT);
            } finally {
                closeSeleniumSession();
            }
        }

        @BeforeMethod(groups = {"default", "example"}, alwaysRun = true)
        @Parameters({"seleniumHost", "seleniumPort", "secondEnvironment", "webSite"})
        protected void startSecondEnvironment(String seleniumHost, int seleniumPort,
                String secondEnvironment, String webSite) throws Exception {
            try {
                startSeleniumSession(seleniumHost, seleniumPort, secondEnvironment, webSite);
                session().setTimeout(TIMEOUT);
            } finally {
                closeSeleniumSession();
            }
        }

    and the accompanying build script used to run the test:

        <target name="runMulti" depends="compile" description="Run Selenium tests in parallel (20 threads)">
            <echo>${seleniumHost}</echo>
            <java classpathref="runtime.classpath" classname="org.testng.TestNG" failonerror="true">
                <sysproperty key="java.security.policy" file="${rootdir}/lib/testng.policy"/>
                <sysproperty key="webSite" value="${webSite}" />
                <sysproperty key="seleniumHost" value="${seleniumHost}" />
                <sysproperty key="seleniumPort" value="${seleniumPort}" />
                <sysproperty key="firstEnvironment" value="${firstEnvironment}" />
                <sysproperty key="secondEnvironment" value="${secondEnvironment}" />
                <arg value="-d" />
                <arg value="${basedir}/target/reports" />
                <arg value="-suitename" />
                <arg value="Selenium Grid Java Sample Test Suite" />
                <arg value="-parallel"/>
                <arg value="methods"/>
                <arg value="-threadcount"/>
                <arg value="15"/>
                <arg value="testng.xml"/>
            </java>
        </target>
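
    A first diagnostic step (not from the original post, just a minimal sketch using the host and port values from the suite) is to confirm that something is actually listening on the hub port before TestNG runs; "Connection refused" almost always means no process is bound there:

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.net.Socket;

        public class HubProbe {
            public static void main(String[] args) {
                String host = args.length > 0 ? args[0] : "localhost"; // assumed hub host
                int port = args.length > 1 ? Integer.parseInt(args[1]) : 4444; // default grid port
                try (Socket socket = new Socket()) {
                    // Fails fast with "Connection refused" if no hub is bound to host:port.
                    socket.connect(new InetSocketAddress(host, port), 2000);
                    System.out.println("Hub is reachable at " + host + ":" + port);
                } catch (IOException e) {
                    System.out.println("No listener at " + host + ":" + port + ": " + e.getMessage());
                }
            }
        }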

  • How to represent different entities that have identical behavior?

    - by Dominik
    I have several different entities in my domain model (animal species, let's say), which have a few properties each. The entities are read-only (they do not change state during the application lifetime) and they have identical behavior (they differ only by the values of their properties). How should I implement such entities in code? Unsuccessful attempts so far:

    Enums. I tried an enum like this:

        enum Animals { Frog, Duck, Otter, Fish }

    Other pieces of code would switch on the enum. However, this leads to ugly switching code, scattering the logic around, and problems with comboboxes. There's no pretty way to list all possible Animals. Serialization works great, though.

    Subclasses. I also thought about a design where each animal type is a subclass of a common abstract base class. The implementation of Swim() is the same for all animals, though, so it makes little sense, and serializability is a big issue now. Since we represent an animal type (species, if you will), there should be one instance of each subclass per application, which is hard and weird to maintain when we use serialization.

        public abstract class AnimalBase
        {
            public string Name { get; set; } // user-readable
            public double Weight { get; set; }
            public Habitat Habitat { get; set; }

            public void Swim()
            { /* swim implementation; the same for all animals, but uses the value of Weight */ }
        }

        public class Otter : AnimalBase
        {
            public Otter()
            {
                Name = "Otter";
                Weight = 10;
                Habitat = "North America";
            }
        }
        // ... and so on

    Just plain awful.

    Static fields. This blog post gave me an idea for a solution where each option is a statically defined field inside the type, like this:

        public class Animal
        {
            public static readonly Animal Otter = new Animal
                { Name = "Otter", Weight = 10, Habitat = "North America" };
            // the rest of the animals...

            public string Name { get; set; } // user-readable
            public double Weight { get; set; }
            public Habitat Habitat { get; set; }

            public void Swim() { /* ... */ }
        }

    That would be great: you can use it like enums (AnimalType = Animal.Otter), you can easily add a static list of all defined animals, and you have a sensible place to implement Swim(). Immutability can be achieved by making the property setters protected. There is a major problem, though: it breaks serializability. A serialized Animal would have to save all its properties, and upon deserialization it would create a new instance of Animal, which is something I'd like to avoid.

    Is there an easy way to make the third attempt work? Any more suggestions for implementing such a model?
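
    One way to see how the third attempt can survive serialization, sketched here in Java rather than the poster's C# (my illustration, not from the question): the classic type-safe-enum pattern maps deserialized copies back to the canonical instances via readResolve(); a C# analogue would use IObjectReference.

        import java.io.ObjectStreamException;
        import java.io.Serializable;
        import java.util.HashMap;
        import java.util.Map;

        public final class Animal implements Serializable {
            private static final Map<String, Animal> REGISTRY = new HashMap<>();

            public static final Animal OTTER = new Animal("Otter", 10.0);
            public static final Animal FROG  = new Animal("Frog", 0.02);

            private final String name;   // user-readable
            private final double weight;

            private Animal(String name, double weight) {
                this.name = name;
                this.weight = weight;
                REGISTRY.put(name, this);
            }

            public void swim() { /* identical for all species; uses weight */ }

            // On deserialization, hand back the canonical instance instead of the
            // freshly built copy, preserving the "one instance per species" rule.
            private Object readResolve() throws ObjectStreamException {
                return REGISTRY.get(name);
            }
        }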

  • Using VCL for the web (intraweb) as a trick for adding a web interface to a legacy non-tiered (2 tiers) application

    - by user193655
    My team is maintaining a huge client/server Win32 Delphi application. It is a client/server application (thick client) that uses DevArt (SDAC) components to connect to SQL Server. The business logic is often "trapped" in components' event handlers; anyway, with some degree of refactoring it is doable to move the business logic into common units (a big part of this work has already been done during refactoring... maintaining legacy applications someone else wrote is very frustrating, but this is a very common job).

    Now there is a request for a web interface. I have several options of course; in this question I want to focus on the VCL for the web (intraweb) option. The idea is to use the common code (the same .pas files) for both the client/server application and the web application. I have heard of many people that moved legacy apps from Delphi to intraweb, but here I am trying to keep the thick client too. The idea is to use common code, maybe with some compiler directives to write specific code:

        {$IFDEF CLIENTSERVER}
          {here goes the thick client specific code}
        {$ELSE}
          {here goes the Intraweb specific code}
        {$ENDIF}

    Then another problem is the "migration plan". Let's say I have 300 features and on the first release I will have only 50 of them available in the web application. How do I keep track of that? I was thinking of (ab)using Delphi interfaces to handle this. For example, for user authentication I could move all the related code into a procedure and declare an interface like:

        type
          IUserAuthentication = interface
            ['{0D57624C-CDDE-458B-A36C-436AE465B477}']
            procedure UserAuthentication;
          end;

    In this way, as I implement the IUserAuthentication interface in both applications (thick client and intraweb), I know that the feature has been "ported" to the web. Anyway, I don't know if this approach makes sense. I made a prototype to simulate the whole process. It works for a "Hello world" application, but I wonder if it makes sense for a large application, or whether this interface idea is only counter-productive and could backfire.

    My question is: does this approach make sense? (The interface idea is just an extra; it is not as important as the common-code part described above.) Is it a viable option? I understand it depends a lot on the kind of application; anyway, to be generic, mine is in the CRM/accounting domain, and the number of concurrent users on a single installation is typically less than 20, with peaks of 50.

    EXTRA COMMENT (UPDATE): I ask this question because, since I don't have an n-tier application, I see intraweb as the unique option for having a web application that shares common code with the thick client. Developing web services from the Delphi code makes no sense in my specific case, so the alternative is to write the web interface using ASP.NET (duplicating the business logic), but in that case I cannot take advantage of the common code in an easy way. Yes, maybe I could use DLLs, but my code is not suitable for that.

  • Including hibernate jar dependencies in ant build

    - by Patrick
    Hi, I'm trying to compile a runnable jar-file for a project that makes use of Hibernate. I'm trying to construct an Ant build.xml file to streamline my build process, but I'm having trouble with the inclusion of hibernate3.jar inside the final jar-file.

    If I run the Ant script, I manage to include all my library jars, and they are put in the final jar-file's root. When I run the jar-file I get a java.lang.NoClassDefFoundError: org/hibernate/Session error. If I use the built-in export-to-jar in Eclipse, it works only if I choose "extract required libraries into jar". But that bloats the jar and includes too much of my project (i.e. unit tests).

    Below is my generated manifest:

        Manifest-Version: 1.0
        Main-Class: main.ServerImpl
        Class-Path: ./ antlr-2.7.6.jar commons-collections-3.1.jar dom4j-1.6.1.jar
          hibernate3.jar javassist-3.9.0.GA.jar jta-1.1.jar slf4j-api-1.5.11.jar
          slf4j-simple-1.5.11.jar mysql-connector-java-5.1.12-bin.jar rmiio-2.0.2.jar
          commons-logging-1.1.1.jar

    And the relevant part of the build.xml looks like this:

        <target name="dist" depends="compile" description="Generates the Distribution Jar(s)">
            <mkdir dir="${dist.dir}" />
            <jar destfile="${dist.dir}/${dist.file.name}.jar"
                 basedir="${build.prod.dir}"
                 filesetmanifest="mergewithoutmain">
                <manifest>
                    <attribute name="Main-Class" value="${main.class}" />
                    <attribute name="Class-Path" value="./ ${manifest.classpath} " />
                    <attribute name="Implementation-Title" value="${app.name}" />
                    <attribute name="Implementation-Version" value="${app.version}" />
                    <attribute name="Implementation-Vendor" value="${app.vendor}" />
                </manifest>
                <zipfileset refid="hibernatefiles" />
                <zipfileset refid="slf4jfiles" />
                <zipfileset refid="mysqlfiles" />
                <zipfileset refid="commonsloggingfiles" />
                <zipfileset refid="rmiiofiles" />
            </jar>
        </target>

    The refids for the zipfilesets point to the directories in a library directory lib in the root of the project. The manifest.classpath variable takes the classpath of all those library jar-files and flattens them with pathconvert and a mapper. I've also tried setting the manifest classpath to ".", "./" and only the library jars, but it made no difference at all. I'm hoping there's a simple remedy to my problems...
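
    A quick way to see whether hibernate3.jar's classes are visible at runtime (a diagnostic sketch, not from the original post): note that the standard classloader does not read jars nested inside a jar, and manifest Class-Path entries are resolved relative to the directory containing the outer jar on disk.

        public class ClasspathProbe {
            public static void main(String[] args) {
                String cls = "org.hibernate.Session"; // the class reported missing
                try {
                    Class<?> c = Class.forName(cls);
                    // Shows which jar or directory the class was actually loaded from.
                    System.out.println(cls + " loaded from "
                            + c.getProtectionDomain().getCodeSource().getLocation());
                } catch (ClassNotFoundException e) {
                    System.out.println(cls + " is not on the runtime classpath: " + e);
                }
            }
        }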

  • Fairness: Where can it be better handled?

    - by Srinivas Nayak
    Hi, I would like to share one of my practical experiences with multiprogramming here.

    Yesterday I wrote a multiprogram. Modifications to sharable resources were put under critical sections protected by P(mutex) and V(mutex), and that critical-section code was put in a common library. The library is used by concurrent applications (of my own). I had three applications that use the common code from the library and do their stuff independently.

    my library:

        work_on_shared_resource {
            P(mutex)
            get_shared_resource
            work_with_it
            V(mutex)
        }

    my applications (*[...] denotes a loop):

        application1 {
            *[ work_on_shared_resource
               do_something_else_non_critical ]
        }

        application2 {
            *[ work_on_shared_resource
               do_something_else_non_critical ]
        }

        application3 {
            *[ work_on_shared_resource ]
        }

    I had to run the applications on a Linux OS. I had a thought in my mind, hanging around for years, that the OS will schedule all the processes running under it with all fairness; in other words, it will give all processes their share of resource usage equally well. When the first two applications were put to work, they ran perfectly well without deadlock. But when the third application started running, it always got the resources: since it does nothing in its non-critical region, it grabs the shared resource more often while the other tasks are doing something else. So the other two applications were found almost totally halted. When the third application was terminated forcefully, the previous two applications resumed their work as before. I think this is a case of starvation: the first two applications had to starve.

    Now how can we ensure fairness? I have now started believing that the OS scheduler is innocent and blind. It depends upon who wins the race; that process gets the largest share of CPU and resources. Shall we attempt to ensure fairness among resource users in the critical-section code in the library? Or shall we leave it up to the applications to ensure fairness by being liberal, not greedy?

    To my knowledge, adding code to ensure fairness to the common library would be an overwhelming task. On the other hand, relying on the applications will also never ensure 100% fairness. The application which does very little work after using the shared resource will win the race, whereas the application which does heavy processing after its work with the shared resource will always starve.

    What is the best practice in this case? Where do we ensure fairness, and how?

    Sincerely,
    Srinivas Nayak
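
    For the record, some lock implementations can push fairness below the application level. A minimal sketch in Java rather than the poster's multi-process Linux setup (a cross-process analogue would need its own FIFO queueing discipline): a fair ReentrantLock grants the lock in roughly arrival order, so a tight loop cannot barge in ahead of threads already waiting.

        import java.util.concurrent.locks.ReentrantLock;

        public class SharedResource {
            // 'true' requests fair ordering: the longest-waiting thread acquires
            // next, preventing the barging that starves the other workers.
            private final ReentrantLock mutex = new ReentrantLock(true);

            public void workOnSharedResource() {
                mutex.lock();       // P(mutex)
                try {
                    // get_shared_resource; work_with_it
                } finally {
                    mutex.unlock(); // V(mutex)
                }
            }
        }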

  • Zend database query result converts column values to null

    - by David Zapata
    Hi again. I am using the following instructions to get some registers from my database.

    Create the needed models (from the params module):

        $obj_paramtype_model = new Params_Model_DbTable_Paramtype();
        $obj_param_model = new Params_Model_DbTable_Param();

    Get the available locales from the database:

        // This returns a Zend_Db_Table_Row_Abstract class object
        $obj_paramtype = $obj_paramtype_model->getParamtypeByValue('available_locales');

        // This query is used to add conditions to the next sentence. It is executed from the
        // Params_Model_DbTable_Param instance class, which depends on the
        // Params_Model_DbTable_Paramtype class (the reference map and dependentTables
        // arrays are fine in both classes)
        $obj_select = $this->select()->where('deleted_at IS NULL')->order('name');

        // Execute the next query, applying the select restrictions. This returns a
        // Zend_Db_Table_Rowset_Abstract class object. This means "find Params by Paramtype"
        $obj_params_rowset = $obj_paramtype->findDependentRowset('Params_Model_DbTable_Param', 'Paramtype', $obj_paramtype);

        // Here the Firebug log displays the queries...
        Zend_Registry::get('log')->debug($obj_params_rowset);

    I have a profiler for all my DB executions from Zend. At this point the log and profiler objects (which include Firebug writers) show the executed SQL queries, and the last line displays the resulting Zend_Db_Table_Rowset_Abstract class object. If I execute the SQL queries in a MySQL client, the results are as expected. But the Zend Firebug log writer displays as NULL the column values with Latin characters (ñ). In other words, the external SQL client shows es_CO | Español de Colombia and en_US | English of United States, but the query results from Zend display (and return) es_CO | null and en_US | English of United States. I deleted the ñ character from Español de Colombia and the query results were just fine in my Zend log Firebug screen and in the final Zend Form element.

    The MySQL database, tables and columns are in UTF-8 with the utf8_unicode_ci collation. All my Zend Framework pages are in the UTF-8 charset. I'm using XAMPP 1.7.1 (PHP 5.2.9, Apache on port 90 and MySQL 5.1.33-community) running on Windows 7 Ultimate, and Zend Framework 1.10.1. I'm sorry if there is too much information, but I don't really know why this could happen, so I tried to provide as much related information as I could to help find an answer.

  • Translate Java class with static attributes and Annotation to Scala equivalent

    - by ifischer
    I'm currently trying to "translate" the following Java class into an equivalent Scala class. It's part of a JavaEE6 application, and I need it to use the JPA2 MetaModel:

        import javax.persistence.metamodel.SingularAttribute;
        import javax.persistence.metamodel.StaticMetamodel;

        @StaticMetamodel(Person.class)
        public class Person_ {
            public static volatile SingularAttribute<Person, String> name;
        }

    Disassembling the compiled class file reveals the following structure:

        > javap Person_.class
        public class model.Person_ extends java.lang.Object{
            public static volatile javax.persistence.metamodel.SingularAttribute name;
            public model.Person_();
        }

    So I need an equivalent Scala file that has the same structure, as JPA depends on it: it resolves the attributes by reflection to make them accessible at runtime. So the main problem, I think, is that the attribute is static, but the annotation has to be on a (Java) object (I guess). My first naive attempt to create a Scala equivalent is the following:

        @StaticMetamodel(classOf[Person])
        class Person_

        object Person_ {
          @volatile var name: SingularAttribute[Person, String] = _;
        }

    But the resulting classfile is far from the Java one, so it doesn't work. When trying to access the attributes at runtime, e.g. "Person_.firstname", it resolves to null; I think JPA can't do the right reflection magic on the compiled classfile (the Java variant resolves to an instance of org.hibernate.ejb.metamodel.SingularAttributeImpl at runtime).

        > javap Person_.class
        public class model.Person_ extends java.lang.Object implements scala.ScalaObject{
            public static final void name_$eq(javax.persistence.metamodel.SingularAttribute);
            public static final javax.persistence.metamodel.SingularAttribute name();
            public model.Person_();
        }

        > javap Person_$.class
        public final class model.Person_$ extends java.lang.Object implements scala.ScalaObject{
            public static final model.Person_$ MODULE$;
            public static {};
            public javax.persistence.metamodel.SingularAttribute name();
            public void name_$eq(javax.persistence.metamodel.SingularAttribute);
        }

    So now what I'd like to know is whether it's possible at all to create a Scala equivalent of the Java class. It seems to me that it's absolutely not, but maybe there is a workaround or something (except just using Java, but I want my app to be in Scala where possible). Any ideas, anyone? Thanks in advance!
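
    For context on what the container must see (a hedged sketch of standard JPA 2 Criteria usage, not code from the question): the static field is populated reflectively by the provider at startup and then dereferenced in type-safe queries, so a field the provider could not assign stays null and breaks query building.

        import javax.persistence.EntityManager;
        import javax.persistence.criteria.CriteriaBuilder;
        import javax.persistence.criteria.CriteriaQuery;
        import javax.persistence.criteria.Root;

        public class PersonQueries {
            // Person_.name must have been filled in by the JPA provider;
            // if reflection could not find/assign the static field, it is null here.
            public Person findByName(EntityManager em, String name) {
                CriteriaBuilder cb = em.getCriteriaBuilder();
                CriteriaQuery<Person> q = cb.createQuery(Person.class);
                Root<Person> root = q.from(Person.class);
                q.where(cb.equal(root.get(Person_.name), name));
                return em.createQuery(q).getSingleResult();
            }
        }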

  • Undefined Behavior and Sequence Points Reloaded

    - by Nawaz
    Consider this topic a sequel to the following topic: Previous Installment: Undefined Behavior and Sequence Points.

    Let's revisit this funny and convoluted expression (the italicized phrases are taken from the topic above *smile*):

        i += ++i;

    We say this invokes undefined behavior. I presume that when we say this, we implicitly assume that the type of i is one of the built-in types. So my question is: what if the type of i is a user-defined type? Say its type is Index, which is defined later in this post (see below). Would it still invoke undefined behavior? If yes, why? Is it not equivalent to writing i.operator+=(i.operator++()); or, even syntactically simpler, i.add(i.inc());? Or do they, too, invoke undefined behavior? If no, why not? After all, the object i gets modified twice between consecutive sequence points. Please recall the rule of thumb: an expression can modify an object's value only once between consecutive sequence points. And if i += ++i is an expression, then it must invoke undefined behavior. If so, then its equivalents i.operator+=(i.operator++()); and i.add(i.inc()); must also invoke undefined behavior, which seems to be untrue (as far as I understand).

    Or is i += ++i not an expression to begin with? If so, then what is it, and what is the definition of an expression? If it is an expression, and at the same time its behavior is also well-defined, then it implies that the number of sequence points associated with an expression somehow depends on the type of the operands involved in the expression. Am I correct (even partly)?

    By the way, how about this expression?

        a[++i] = i; // taken from the previous topic, but here the type of i is Index

    And here is the definition of Index:

        class Index {
            int state;
        public:
            Index(int s) : state(s) {}
            Index& operator++() { state++; return *this; }
            Index& operator+=(const Index& index) { state += index.state; return *this; }
            operator int() { return state; }
            Index& add(const Index& index) { state += index.state; return *this; }
            Index& inc() { state++; return *this; }
        };
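
    As a cross-language aside (my addition, in Java rather than C++): Java pins down the evaluation order of compound assignment, so the very same expression is fully defined there. The contrast supports the intuition that how many modifications are permitted "between sequence points" is a property of the language rules, not of the expression's shape.

        public class EvaluationOrder {
            public static void main(String[] args) {
                int i = 1;
                // JLS 15.26.2: the value of i (1) is saved first, then ++i runs
                // (i becomes 2), then 1 + 2 is stored back: i is deterministically 3.
                i += ++i;
                System.out.println(i); // prints 3 on every conforming JVM
            }
        }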

  • How to build Lucene / Solr from source code in windows environment in order to add patches

    - by Simon
    I have successfully implemented Apache's Solr for free-text searching on a database-driven web site built for Windows platforms using Visual Studio in C#. I am trying to get a version of Solr working with field collapsing (which is not in the release version). There are patches available from Apache, and discussions on the web of people successfully doing this for the version I am using, but my problem is that I cannot get the build to work. I am a C# coder on Windows platforms, so Java development is new to me. I understand I need to get the correct source code (and revision) from SVN, add the appropriate patches, then build the war file to deploy to my system. I cannot seem to get the source to build and produce the deployment code, including jar (and subsequent war) files.

    My system is:

    Windows 7 Ultimate for development
    Visual Studio 2010 for C# / JavaScript development
    MyEclipse 8.6 / Eclipse 3.5 for the Java build from source
    Subclipse 1.6.x SVN plugin to get the source from Apache's SVN
    Apache Solr 1.4.1

    So far I have found the right patches for the function I need: https://issues.apache.org/jira/browse/SOLR-236. Specifically I need to patch:

    field_collapsing_1.1.0.patch
    https://issues.apache.org/jira/secure/attachment/12357681/field_collapsing_1.1.0.patch

    and SOLR-236-1_4_1.patch
    https://issues.apache.org/jira/secure/attachment/12448216/SOLR-236-1_4_1.patch

    I downloaded the Lucene trunk version from the day before the patch was released (revision 958303 from 28/6/10) via Subclipse into a Java package in MyEclipse from https://svn.apache.org/repos/asf/lucene/dev/trunk (Solr is the web implementation of Lucene and is in the subfolder solr/).

    I can apply patches to the solr directory once it has downloaded, but the parent Lucene project doesn't build the war files or copy the jar and other files into the bin folder (it stays empty). The build process starts but doesn't do anything apart from creating the folders bin and src. I am building the whole Lucene project, which contains Solr. I have tried building the source without patching, and the same happens. If I copy out the Solr directory into a new project, it runs the build and copies all the related files, tests, etc., but fails with 4,500 errors and does not produce the jar files or war file, which I assume is because it can't find the Lucene trunk files on which it depends.

    I have two interrelated problems: (1) I can't get the downloaded Lucene trunk to build; (2) the jar, war and associated files are not created. Can anyone help with what I am missing to build the war file? I have spent 2 days to get this far, as the help online is extremely patchy and I can't find a walkthrough tutorial on building a Java war file from source in a Windows environment. Any help will be much appreciated. Simon

  • ServiceController.Stop() doesn't appear to be stopping anything

    - by peacedog
    My dev box is a Windows 7 (x64) machine. I've got some code (C#, .NET 2.0) that, in certain circumstances, checks to see if a service is running and then stops it:

        ServiceController matchedService = // My Service!
        // If statements and such
        matchedService.Stop();
        matchedService.WaitForStatus(ServiceControllerStatus.Stopped);

    Now, I can verify MyService is in fact installed and running. I can tell you I am running Visual Studio 2008 as an administrator while debugging. I can also verify that after a couple of if statements, I wind up at the .Stop() and .WaitForStatus() portion of the program. I do know that if I step over the .Stop() call, the service itself just keeps running (looking at it in Services, though it occurs to me perhaps I should grab a better tool for this; I'm sure there's some Sysinternals tool that might give me more information). As I step over the .WaitForStatus() call, I basically wind up waiting for the stopped status... forever. Well, I let it sit there for over 15 minutes yesterday (twice) and nothing happened. We never make it to the next line of code. It feels exactly like Bowie's Space Oddity (you know the part I am talking about).

    There's a lotta things about MyService you don't know anything about. Things you wouldn't understand. Things you couldn't... let me state this plainly: no services depend on MyService, and MyService depends on no other services.

    Addendum: MyOtherService and SonOfMyService both seem to behave correctly at this point in the code. All of these services share the same characteristics (they're our own services we hatched in a secret lab and have no dependencies). Is it possible there is something wrong with the MyService install or something? I do know that if I stop debugging at this point, MyService is still listed as running in Services (even after hitting refresh). If I try to restart it then (or run my application again and get to this point), I get a message about it not being able to accept control messages. After that, the service shows up as stopped and I can start it normally.

    Why isn't the service being stopped? Is this a quirk of Win 7? A failing on my part to understand the ServiceController, or Windows services in general?

  • Why does my ko computed observable not update bound UI elements when its value changes?

    - by Allen
    I'm trying to wrap a cookie in a computed observable (which I'll later turn into a protectedObservable) and I'm having some problems with the computed observable. I was under the impression that changes to the computed observable would be broadcast to any UI elements that have been bound to it. I've created the following fiddle.

    JavaScript:

        var viewModel = {};

        // simulating a cookie store, this part isn't as important
        var cookie = function () {
            // simulating a value stored in cookies
            var privateZipcode = "12345";
            return {
                'write': function (val) {
                    privateZipcode = val;
                },
                'read': function () {
                    return privateZipcode;
                }
            }
        }();

        viewModel.zipcode = ko.computed({
            read: function () {
                return cookie.read();
            },
            write: function (value) {
                cookie.write(value);
            },
            owner: viewModel
        });

        ko.applyBindings(viewModel);

    HTML:

        zipcode: <input type='text' data-bind="value: zipcode">
        <br />
        zipcode: <span data-bind="text: zipcode"></span>

    I'm not using an observable to store privateZipcode since that's really just going to live in a cookie. I'm hoping that the ko.computed will provide the notification and binding functionality that I need, though most of the examples I've seen with ko.computed end up using a ko.observable underneath the covers. Shouldn't the act of writing the value to my computed observable signal the UI elements that are bound to its value? Shouldn't these just update?

    Workaround: I've got a simple workaround where I just use a ko.observable alongside my cookie store, and using that triggers the required updates to my DOM elements, but this seems completely unnecessary, unless ko.computed lacks the signaling / dependency functionality that ko.observable has. In my workaround fiddle, the only thing that changes is that I added a seperateObservable that isn't really used as a store; its only purpose is to signal to the UI that the underlying data has changed.

        // simulating a cookie store, this part isn't as important
        var cookie = function () {
            // simulating a value stored in cookies
            var privateZipcode = "12345";
            // extra observable that isn't really used as a store, just to trigger updates to the UI
            var seperateObservable = ko.observable(privateZipcode);
            return {
                'write': function (val) {
                    privateZipcode = val;
                    seperateObservable(val);
                },
                'read': function () {
                    seperateObservable();
                    return privateZipcode;
                }
            }
        }();

    This makes sense and works as I'd expect, because viewModel.zipcode depends on seperateObservable, and updates to that signal the UI to update. What I don't understand is why a call to the write function on my ko.computed doesn't signal the UI to update, since that element is bound to that ko.computed. I suspected that I might have to use something in Knockout to manually signal that my ko.computed has been updated, and I'm fine with that; it makes sense. I just haven't been able to find a way to accomplish that.

  • Isn't the C++ standard library backward-compatible?

    - by Chris Metzler
    Hi. I'm working on a 64-bit Linux system, trying to build some code that depends on third-party libraries for which I have binaries. During linking, I get a stream of undefined reference errors for one of the libraries, indicating that the linker couldn't resolve references to standard C++ functions/classes, e.g.:

        librxio.a(EphReader.o): In function `gpstk::EphReader::read_fic_data(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
        EphReader.cpp:(.text+0x27c): undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)'
        EphReader.cpp:(.text+0x4e8): undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)'

    I'm not really a C++ programmer, but this looks to me like it can't find the standard library. Doing some more research, I got the following when I looked at librxio's dependency on the standard library:

        $ ldd librxio.so.16.0
        ./librxio.so.16.0: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./librxio.so.16.0)
            libm.so.6 => /lib64/libm.so.6 (0x00002aaaaad45000)
            libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00002aaaaafc8000)
            libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002aaaab2c8000)
            libc.so.6 => /lib64/libc.so.6 (0x00002aaaab4d7000)
            /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)

    So I read that as saying that librxio (one of the third-party libraries) requires at least v3.4.9 of the standard library. But the version I have installed is 4.1.2:

        $ rpm -qa | grep libstdc
        compat-libstdc++-33-3.2.3-61.x86_64
        libstdc++-devel-4.1.2-14.el5.i386
        libstdc++-devel-4.1.2-14.el5.x86_64
        libstdc++-4.1.2-14.el5.x86_64
        libstdc++-4.1.2-14.el5.i386

    Shouldn't this work? The shared object major number is 6, same as for v3.4.9. At this level, shouldn't this be backward compatible? It seems like the third-party library is looking for an earlier version of the standard library than what I have installed; but isn't there backward compatibility between versions with the same major number for the shared library? Again, I'm not really a C++ programmer, but I don't see what the problem is. Any advice greatly appreciated. Thanks.

  • PInvokeStackImbalance C# call to unmanaged C++ function

    - by user287498
    After switching to VS2010, the Managed Debugging Assistant displays an error about an unbalanced stack from a call to an unmanaged C++ function from a C# application. The usual suspects don't seem to be causing the issue. Is there something else I should check? The VS2008-built C++ DLL and C# application never had a problem, no weird or mysterious bugs (yeah, I know that doesn't mean much). Here are the things that were checked:

    The DLL name is correct.
    The entry point name is correct and has been verified with depends.exe; the code has to use the mangled name, and it does.
    The calling convention is correct.
    The sizes and types all seem to be correct.
    The character set is correct.
    There doesn't seem to be any issue after ignoring the error, and there isn't an issue when running outside the debugger.

    C#:

        [DllImport("Correct.dll", EntryPoint = "SuperSpecialOpenFileFunc", CallingConvention = CallingConvention.StdCall, CharSet = CharSet.Ansi, ExactSpelling = true)]
        public static extern short SuperSpecialOpenFileFunc(ref SuperSpecialStruct stuff);

        [StructLayout(LayoutKind.Sequential, Pack = 1, CharSet = CharSet.Ansi)]
        public struct SuperSpecialStruct
        {
            public int field1;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 256)]
            public string field2;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 20)]
            public string field3;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 10)]
            public string field4;
            public ushort field5;
            public ushort field6;
            public ushort field7;
            public short field8;
            public short field9;
            public uint field10;
            public short field11;
        };

    C++:

        short SuperSpecialOpenFileFunc(SuperSpecialStruct * stuff);

        struct SuperSpecialStruct
        {
            int            field1;
            char           field2[256];
            char           field3[20];
            char           field4[10];
            unsigned short field5;
            unsigned short field6;
            unsigned short field7;
            short          field8;
            short          field9;
            unsigned int   field10;
            short          field11;
        };

    Here is the error:

        Managed Debugging Assistant 'PInvokeStackImbalance' has detected a problem in 'Managed application path'.
        Additional Information: A call to PInvoke function 'SuperSpecialOpenFileFunc' has unbalanced the stack. This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature.

  • Getting Started with Maven + Jaxb project + IntellijIdea

    - by Em Ae
    I am completely new to IntelliJ IDEA and I am looking for a step-by-step process to set up a basic project. My project depends on Maven + JAXB classes, so I need a Maven project such that when I compile the project, the JAXB objects are generated by Maven plugins. I started like this:

    I created a blank project, say MaJa.
    Added a Maven module to it.
    Added the following settings in POM.XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
            <modelVersion>4.0.0</modelVersion>
            <groupId>MaJa</groupId>
            <artifactId>MaJa</artifactId>
            <version>1.0</version>
            <dependencies>
                <dependency>
                    <groupId>javax.xml.bind</groupId>
                    <artifactId>jaxb-api</artifactId>
                </dependency>
                <dependency>
                    <groupId>com.sun.xml.bind</groupId>
                    <artifactId>jaxb-impl</artifactId>
                    <version>2.1</version>
                </dependency>
            </dependencies>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>jaxb2-maven-plugin</artifactId>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>xjc</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <schemaDirectory>${basedir}/src/main/resource/api/MaJa</schemaDirectory>
                            <packageName>com.rimt.shopping.api.web.ws.v1.model</packageName>
                            <outputDirectory>${build.directory}</outputDirectory>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </project>

    First of all, are these the right settings? I tried clicking on Make/Compile 'MaJa' from the project right-click menu and it didn't do anything. I will be looking forward to your replies.
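
    For reference, a smoke test of the build output (my sketch, not part of the question): once the xjc goal has generated classes into the configured package, they are ordinary JAXB beans, so a context built on that package should resolve without errors. The person.xml file name here is hypothetical, standing in for a document matching whatever schema lives in src/main/resource/api/MaJa.

        import java.io.File;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Unmarshaller;

        public class JaxbSmokeTest {
            public static void main(String[] args) throws Exception {
                // The package must match <packageName> in the plugin configuration;
                // xjc also generates the ObjectFactory that newInstance(String) requires.
                JAXBContext ctx = JAXBContext.newInstance("com.rimt.shopping.api.web.ws.v1.model");
                Unmarshaller u = ctx.createUnmarshaller();
                Object result = u.unmarshal(new File("person.xml"));
                System.out.println("Unmarshalled: " + result);
            }
        }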

  • QTreeView memory consumption

    - by Eye of Hell
    Hello. I'm testing QTreeView functionality right now, and I was amazed by one thing: it seems that QTreeView memory consumption depends on the item count. This is highly unusual, since model/view containers of this type only keep track of the items being displayed; the rest of the items stay in the model. I have written the following code, with a simple model that holds no data and just reports that it has 10 million items. With MFC, the Windows API or .NET, a tree / list with such a model will take no memory, since it will display only the 10-20 visible elements and will ask the model for more upon scrolling / expanding items. But with Qt, this simple model results in ~300 MB memory consumption. Increasing the number of items increases memory consumption. Maybe someone can hint at what I'm doing wrong? :)

        #include <QtGui/QApplication>
        #include <QTreeView>
        #include <QAbstractItemModel>

        class CModel : public QAbstractItemModel
        {
        public: QModelIndex index(int i_nRow, int i_nCol, const QModelIndex& i_oParent = QModelIndex()) const
            { return createIndex(i_nRow, i_nCol, 0); }
        public: QModelIndex parent(const QModelIndex& i_oInex) const
            { return QModelIndex(); }
        public: int rowCount(const QModelIndex& i_oParent = QModelIndex()) const
            { return i_oParent.isValid() ? 0 : 1000 * 1000 * 10; }
        public: int columnCount(const QModelIndex& i_oParent = QModelIndex()) const
            { return 1; }
        public: QVariant data(const QModelIndex& i_oIndex, int i_nRole = Qt::DisplayRole) const
            { return Qt::DisplayRole == i_nRole ? QVariant("1") : QVariant(); }
        };

        int main(int argc, char *argv[])
        {
            QApplication a(argc, argv);
            QTreeView oWnd;
            CModel oModel;
            oWnd.setUniformRowHeights(true);
            oWnd.setModel(&oModel);
            oWnd.show();
            return a.exec();
        }

  • Setting custom UITableViewCells height

    - by Vijayeta
    I am using a custom UITableViewCell which displays some labels, buttons and image views. One label in the cell takes its text from an NSString object whose length can vary, so I cannot set a constant height for the cell in UITableView's heightForRowAtIndexPath: method. The cell's height depends on the label's height, which can be determined using NSString's sizeWithFont: method. I tried using it, but it looks like I'm going wrong somewhere. Can anyone help me out with this? Adding the code used to initialize the cell:

        if (self = [super initWithFrame:frame reuseIdentifier:reuseIdentifier]) {
            self.selectionStyle = UITableViewCellSelectionStyleNone;

            UIImage *image = [UIImage imageNamed:@"dot.png"];
            imageView = [[UIImageView alloc] initWithImage:image];
            imageView.frame = CGRectMake(45.0, 10.0, 10, 10);

            headingTxt = [[UILabel alloc] initWithFrame:CGRectMake(60.0, 0.0, 150.0, post_hdg_ht)];
            [headingTxt setContentMode:UIViewContentModeCenter];
            headingTxt.text = postData.user_f_name;
            headingTxt.font = [UIFont boldSystemFontOfSize:13];
            headingTxt.textAlignment = UITextAlignmentLeft;
            headingTxt.textColor = [UIColor blackColor];

            dateTxt = [[UILabel alloc] initWithFrame:CGRectMake(55.0, 23.0, 150.0, post_date_ht)];
            dateTxt.text = postData.created_dtm;
            dateTxt.font = [UIFont italicSystemFontOfSize:11];
            dateTxt.textAlignment = UITextAlignmentLeft;
            dateTxt.textColor = [UIColor grayColor];

            NSString *text1 = postData.post_body;
            NSLog(@"text length = %d", [text1 length]);

            CGRect bounds = [UIScreen mainScreen].bounds;
            CGFloat tableViewWidth;
            CGFloat width = 0;
            tableViewWidth = bounds.size.width / 2;
            width = tableViewWidth - 40; // fudge factor
            // CGSize textSize = {width, 20000.0f}; // width and height of text area
            CGSize textSize = {245.0, 20000.0f}; // width and height of text area
            CGSize size1 = [text1 sizeWithFont:[UIFont systemFontOfSize:11.0f]
                             constrainedToSize:textSize
                                 lineBreakMode:UILineBreakModeWordWrap];
            CGFloat ht = MAX(size1.height, 28);

            textView = [[UILabel alloc] initWithFrame:CGRectMake(55.0, 42.0, 245.0, ht)];
            textView.text = postData.post_body;
            textView.font = [UIFont systemFontOfSize:11];
            textView.textAlignment = UITextAlignmentLeft;
            textView.textColor = [UIColor blackColor];
            textView.lineBreakMode = UILineBreakModeWordWrap;
            textView.numberOfLines = 3;
            textView.autoresizesSubviews = YES;

            [self.contentView addSubview:imageView];
            [self.contentView addSubview:textView];
            [self.contentView addSubview:webView];
            [self.contentView addSubview:dateTxt];
            [self.contentView addSubview:headingTxt];
            [self.contentView sizeToFit];

            [imageView release];
            [textView release];
            [webView release];
            [dateTxt release];
            [headingTxt release];
        }

    This is the label whose height and width are going wrong:

        textView = [[UILabel alloc] initWithFrame:CGRectMake(55.0, 42.0, 245.0, ht)];

  • iPhone Image Resources, ICO vs PNG, app bundle filesize

    - by Jasarien
    My application has a collection of around 1,940 icons that are used throughout. They're currently in ICO, and new images provided to me come in ICO format too. I have noticed that they contain a 16x16 and a 32x32 representation of each icon in one file. Each file is roughly 4 KB in size (as reported by Finder, but ls reports that they vary from ~1,000 bytes to 5,000 bytes). A very small number of these icons contain only the 32x32 representation, and as a result are only around 700 bytes in size.

    Currently I am bundling these icons with my application, and they are inflating the size of the app more than I would like. Altogether, the images total just about 25.5 MB. Xcode must do some kind of compression, because the resulting app bundle is about 12.4 MB. Compressing this further into a ZIP (as it would be when submitted to the App Store) results in a final file of 5.8 MB. I'm aware that the maximum limit for over-the-air App Store downloads has been raised to 20 MB since the introduction of the iPad (I'm not sure if that extends to iPhone apps as well as iPad apps though; if not, the limit would be 10 MB). My worry is that new icons are going to be added (sometimes up to 10 icons per week) and will continue to inflate the app bundle over time. What is the best way to distribute these icons with my app?

    Things I've tried without much success:

    Converting the icons from ICO to PNG: I tried this in the hope that the pngcrush utility would help with the file size. But it appears that it doesn't make much of a difference between a normal PNG and a crushed PNG (I believe it just optimises the image for display on the iPhone's GPU rather than compressing its size). Also, going from ICO to PNG actually increased the size of the icon files...

    Zipping the images and uncompressing them on first run: while this did reduce the overall image sizes, I found that the effort needed to unzip them, copy them to the documents folder and ensure that duplication doesn't happen on upgrades was too much hassle to be worth the benefit. Also, on original and 3G iPhones, unzipping and copying around 25 MB of images takes too long and creates a bad experience...

    Things I've considered but not yet tried: instead of distributing the icons within the app bundle, host them online and download each icon on demand (it depends on the user's data which icons will actually be displayed and when). The issue with this is that bandwidth costs money, and image downloads will be bandwidth-intensive. However, my app currently has a small userbase of around 5,500 users (of which I estimate around 1,500 to be active, based on Flurry stats), and I have a huge unused bandwidth allowance with my current hosting package. So I'm open to thoughts on how to solve this tricky issue.

  • Estimating the boundary of arbitrarily distributed data

    - by Dave
    I have two-dimensional discrete spatial data. I would like to make an approximation of the spatial boundaries of this data so that I can produce a plot with another dataset on top of it. Ideally, this would be an ordered set of (x,y) points that matplotlib can plot with the plt.Polygon() patch.

    My initial attempt is very inelegant: I place a fine grid over the data, and where data is found in a cell, a square matplotlib patch is created for that cell. The resolution of the boundary thus depends on the sampling frequency of the grid. Here is an example, where the grey region is the cells containing data and black is where no data exists.

    OK, problem solved. Why am I still here? Well... I'd like a more "elegant" solution, or at least one that is faster (i.e. I don't want to get on with "real" work, I'd like to have some fun with this!). The best way I can think of is a ray-tracing approach, e.g.:

    1. From xmin to xmax, at y = ymin, check whether the data boundary is crossed, in intervals dx.
    2. Set y = ymin + dy and do step 1 again.
    3. Do steps 1-2, but now sample in y.

    An alternative is defining a centre and sampling in r-theta space, i.e. radial spokes in dtheta increments. Both would produce a set of (x,y) points, but then how do I order/link neighbouring points to create the boundary? A nearest-neighbour approach is not appropriate because, for example (to borrow from geography), an isthmus (think of Panama connecting N & S America) could then close off and isolate regions. This also might not deal very well with the holes seen in the data, which I would like to represent as a different plt.Polygon.

    The solution perhaps comes from solving an area-maximisation problem:

    For a set of points defining the data limits, what is the maximum contiguous area contained within those points?
    To form the enclosed area, what are the neighbouring points for the nth point?
    How will the holes be treated in this scheme? Is this erring into topology now?

    Apologies, much of this is me thinking out loud. I'd be grateful for some hints, suggestions or solutions. I suspect this is an oft-studied problem with many solution techniques, but I'm looking for something simple to code and quick to run... I guess everyone is, really! Cheers, David

  • Dependency Injection: I don't get where to start!

    - by Andy
    I have read several articles about Dependency Injection, and I can see the benefits, especially when it comes to unit testing: the units can be loosely coupled, and dependencies can be mocked. The trouble is, I just don't get where to start.

    Consider the snippet below of (much edited for the purpose of this post) code that I have. I am instantiating a Plc object from the main form, and passing in a communications mode via the Connect method. In its present form it becomes hard to test, because I can't isolate the Plc from the CommsChannel to unit test it. (Can I?) The class depends on using a CommsChannel object, but I am only passing in a mode that is used to create this channel within the Plc itself.

    To use dependency injection, I should really pass an already created CommsChannel (via an 'ICommsChannel' interface, perhaps) to the Connect method, or maybe via the Plc constructor. Is that right? But then that would mean creating the CommsChannel in my main form first, and this doesn't seem right either, because it feels like everything will come back to the base layer of the main form, where everything begins. Somehow it feels like I am missing a crucial piece of the puzzle. Where do you start? You have to create an instance of something somewhere, but I'm struggling to understand where that should be.

        public class Plc
        {
            public bool Connect(CommsMode commsMode)
            {
                bool success = false;
                // Create new comms channel.
                this._commsChannel = this.GetCommsChannel(commsMode);
                // Attempt connection
                success = this._commsChannel.Connect();
                return this._connected;
            }

            private CommsChannel GetCommsChannel(CommsMode mode)
            {
                CommsChannel channel;
                switch (mode)
                {
                    case CommsMode.RS232:
                        channel = new SerialCommsChannel(
                            SerialCommsSettings.Default.ComPort,
                            SerialCommsSettings.Default.BaudRate,
                            SerialCommsSettings.Default.DataBits,
                            SerialCommsSettings.Default.Parity,
                            SerialCommsSettings.Default.StopBits);
                        break;
                    case CommsMode.Tcp:
                        channel = new TcpCommsChannel(
                            TCPCommsSettings.Default.IP_Address,
                            TCPCommsSettings.Default.Port);
                        break;
                    default:
                        // Throw unknown comms channel exception.
                }
                return channel;
            }
        }
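
    One conventional answer, sketched here in Java rather than the poster's C# (my illustration, with hypothetical names mirroring the question): dependencies are constructed once at the application's entry point, the "composition root", and injected downward, so only that one place knows how to build a concrete CommsChannel.

        // The dependency is expressed as an interface...
        interface CommsChannel {
            boolean connect();
        }

        class SerialCommsChannel implements CommsChannel {
            public boolean connect() { /* open the COM port here */ return true; }
        }

        // ...and Plc receives it instead of constructing it, so a unit test
        // can hand in a fake CommsChannel with no serial port involved.
        class Plc {
            private final CommsChannel commsChannel;

            Plc(CommsChannel commsChannel) {
                this.commsChannel = commsChannel;
            }

            boolean connect() {
                return commsChannel.connect();
            }
        }

        public class Main {
            // The composition root: the one place that wires concrete types together.
            public static void main(String[] args) {
                Plc plc = new Plc(new SerialCommsChannel());
                plc.connect();
            }
        }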

  • How can I create the XML::Simple data structure using a Perl XML SAX parser?

    - by DVK
    Summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple.

    Details: We have a large code infrastructure which depends on processing records one by one and expects each record to be a data structure in the format produced by XML::Simple, since it has always used XML::Simple since the early Jurassic era. An example simple XML is:

        <root>
          <rec><f1>v1</f1><f2>v2</f2></rec>
          <rec><f1>v1b</f1><f2>v2b</f2></rec>
          <rec><f1>v1c</f1><f2>v2c</f2></rec>
        </root>

    And example rough code is:

        sub process_record {
            my ($obj, $record_hash) = @_;
            # do_stuff
        }

        my $records = XML::Simple->XMLin(@args)->{root};
        foreach my $record (@$records) { $obj->process_record($record) };

    As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog, due to being a DOM parser that needs to build/store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records, record by record. However, rewriting the entire code (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple.

    I'm looking for an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can be used to produce $record hashrefs one by one, based on the XML pictured above, that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is, e.g. whether I need to call next_record() or give it a callback coderef accepting a record.

  • Neo4j 1.9.4 (REST Server,CYPHER) performance issue

    - by user2968943
    I have Neo4j 1.9.4 installed on a 24-core, 24 GB RAM (CentOS) machine, and for most queries CPU usage spikes to 200% with only a few concurrent requests.

    Domain: a social application of sorts, with a few types of nodes (profiles) carrying 3-30 text/array properties, and 36 relationship types with at least 3 properties each. Most nodes currently have ~300-500 relationships.

    Current data set footprint (from the console):

        LogicalLogSize=4294907 (32MB)
        ArrayStoreSize=1675520 (12MB)
        NodeStoreSize=1342170 (10MB)
        PropertyStoreSize=1739548 (13MB)
        RelationshipStoreSize=6395202 (48MB)
        StringStoreSize=1478400 (11MB)

    which is, IMHO, really small. Most queries look like this one (with more or fewer WITH .. MATCH .. statements, and a few queries with variable-length relations, but those are often fast):

        START targetUser=node({id}), currentUser=node({current})
        MATCH targetUser-[contact:InContactsRelation]->n,
              n-[:InLocationRelation]->l,
              n-[:InCategoryRelation]->c
        WITH currentUser, targetUser, n, l, c, contact.fav is not null as inFavorites
        MATCH n<-[followers?:InContactsRelation]-()
        WITH currentUser, targetUser, n, l, c, inFavorites, COUNT(followers) as numFollowers
        RETURN id(n) as id, n.name? as name, n.title? as title, n._class as _class,
               n.avatar? as avatar, n.avatar_type? as avatar_type,
               l.name as location__name, c.name as category__name,
               true as isInContacts, inFavorites as isInFavorites, numFollowers

    It runs in ~1s-3s for the first run and ~70ms-1s afterwards (it depends on the query), and about 5-10 queries run for each impression. Another interesting behavior: when I run a query from the Neo4j console on my local machine many consecutive times (just pressing Ctrl+Enter for a few seconds), it has an almost constant execution time, but when I do it on the server it slows down exponentially, and I guess this is somehow related to my problem.

    Problem: my problem is that Neo4j is very CPU-greedy (for a 24-core machine this may not be an issue, but it's obviously overkill for a small project). At first I used an AWS EC2 m1.large instance, but overall performance was bad; during testing, CPU was always over 100%.

    Some relevant parts of the configuration:

        neostore.nodestore.db.mapped_memory=1280M
        wrapper.java.maxmemory=8192

    Note: I already tried a configuration where all memory-related parameters were high, and it didn't work (no change at all).

    Question: where to dig? Configuration? Schema? Queries? What am I doing wrong? If you need more info (logs, configs), just ask ;)

  • Spring constructor injection error

    - by Jeune
    I am getting the following error for a bean in my application context:

        Related cause: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'businessLogicContext' defined in class path resource [activemq-jms-consumer.xml]: Unsatisfied dependency expressed through constructor argument with index 0 of type [java.lang.String]: Could not convert constructor argument value of type [java.util.ArrayList] to required type [java.lang.String]: Failed to convert value of type [java.util.ArrayList] to required type [java.lang.String]; nested exception is java.lang.IllegalArgumentException: Cannot convert value of type [java.util.ArrayList] to required type [java.lang.String]: no matching editors or conversion strategy found
            at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:534)
            at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:186)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:855)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:765)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:412)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:383)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:353)
            at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:245)
            at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:169)
            at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:242)
            at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164)
            at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:400)
            at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:736)
            at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:369)
            at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:123)
            at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:66)

    Here is my bean:

        <bean id="businessLogicContext" class="org.springframework.context.support.ClassPathXmlApplicationContext" depends-on="resolveProperty">
            <constructor-arg index="0">
                <list>
                    <value>jms-applicationContext.xml</value>
                    <value>jms-managerBeanContext.xml</value>
                    <value>jms-daoContext.xml</value>
                    <value>jms-serviceContext.xml</value>
                </list>
            </constructor-arg>
        </bean>

    I don't know what's wrong; I have googled how to inject a string array via constructor injection, and the way I do it above seems okay.
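
    For what it's worth (a sketch of my own, not from the question): the error says the container picked the single-String constructor of ClassPathXmlApplicationContext and then failed to convert the list into a String. The String[] constructor accepts multiple config locations directly, which the programmatic equivalent makes explicit; in XML, declaring the constructor-arg with type="java.lang.String[]" is the analogous disambiguation.

        import org.springframework.context.support.ClassPathXmlApplicationContext;

        public class BusinessLogicBootstrap {
            public static void main(String[] args) {
                // Passing the locations as a String[] selects the array constructor,
                // so no ArrayList-to-String conversion is attempted.
                ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext(
                        new String[] {
                            "jms-applicationContext.xml",
                            "jms-managerBeanContext.xml",
                            "jms-daoContext.xml",
                            "jms-serviceContext.xml"
                        });
                System.out.println("Context started with " + ctx.getBeanDefinitionCount() + " beans");
            }
        }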

  • How to remove the explicit dependencies to other projects' libraries in Eclipse launch configuration

    - by euluis
    In Eclipse it is possible to create launch configurations in a project, specifying runtime dependencies on another project. A problem I found is that if you have a workspace with multiple projects, each possibly with its own libraries, it is easy to end up with explicit dependencies in a secondary project on libraries that belong to another project and are therefore subject to change. An example of this problem follows:

        proj1
        +-- src
        +-- lib
            +-- jar1-v1.0.jar
            +-- jar2-v1.0.jar
        proj2
        +-- src
        +-- proj2-tests.launch

    I don't have a dependency from the code in proj2/src to the libraries in proj1/lib. Nevertheless, I do have a dependency from proj2/src to proj1/src, and since there is an internal dependency from the code in proj1/src to its libraries jar1-v1.0.jar and jar2-v1.0.jar, I have to add a dependency in proj2-tests.launch to the libraries in proj1/lib. This translates to the following ugly lines in proj2-tests.launch:

        <listEntry value="<?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <runtimeClasspathEntry path="3" projectName="proj1" type="1"/>
        "/>
        <listEntry value="<?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <runtimeClasspathEntry internalArchive="/proj1/lib/jar1-v1.0.jar" path="3" type="2"/>
        "/>
        <listEntry value="<?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <runtimeClasspathEntry internalArchive="/proj1/lib/jar2-v1.0.jar" path="3" type="2"/>
        "/>

    This wouldn't be a big problem if there weren't the need, from time to time, to evolve the software, upgrade the libraries and so on. Consider the common need to upgrade the libraries jar1-v1.0.jar and jar2-v1.0.jar to their v1.1 versions. Consider that you have about 10 projects in one workspace, each with about 5 libraries and about 4 launch configurations. You get a maintenance overhead for a simple library upgrade, with changes to files that shouldn't need to change. Or maybe I'm doing something wrong...

    What I would like to state is "proj2 depends on proj1 and on its libraries", and have this translated to simply that in the *.launch files. Is that possible?

  • How do you create a MANIFEST.MF that's available when you're testing and running from a jar in production?

    - by warvair
    I've spent far too much time trying to figure this out. This should be the simplest thing, and everyone who distributes Java applications in jars must have to deal with it. I just want to know the proper way to add versioning to my Java app so that I can access the version information both when I'm testing (e.g. debugging in Eclipse) and when running from a jar.

    Here's what I have in my build.xml:

        <target name="jar" depends="compile">
            <property name="version.num" value="1.0.0"/>
            <buildnumber file="build.num"/>
            <tstamp>
                <format property="TODAY" pattern="yyyy-MM-dd HH:mm:ss" />
            </tstamp>
            <manifest file="${build}/META-INF/MANIFEST.MF">
                <attribute name="Built-By" value="${user.name}" />
                <attribute name="Built-Date" value="${TODAY}" />
                <attribute name="Implementation-Title" value="MyApp" />
                <attribute name="Implementation-Vendor" value="MyCompany" />
                <attribute name="Implementation-Version" value="${version.num}-b${build.number}"/>
            </manifest>
            <jar destfile="${build}/myapp.jar" basedir="${build}" excludes="*.jar" />
        </target>

    This creates /META-INF/MANIFEST.MF, and I can read the values when I'm debugging in Eclipse thusly:

        public MyClass() {
            try {
                InputStream stream = getClass().getResourceAsStream("/META-INF/MANIFEST.MF");
                Manifest manifest = new Manifest(stream);
                Attributes attributes = manifest.getMainAttributes();
                String implementationTitle = attributes.getValue("Implementation-Title");
                String implementationVersion = attributes.getValue("Implementation-Version");
                String builtDate = attributes.getValue("Built-Date");
                String builtBy = attributes.getValue("Built-By");
            } catch (IOException e) {
                logger.error("Couldn't read manifest.");
            }
        }

    But when I create the jar file, it loads the manifest of another jar (presumably the first jar loaded by the application; in my case, activation.jar). Also, the following code doesn't work either, although all the proper values are in the manifest file:

        Package thisPackage = getClass().getPackage();
        String implementationVersion = thisPackage.getImplementationVersion();

    Any ideas?
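
    A common remedy (a sketch of my own, not from the original post) is to resolve the manifest relative to the jar that actually contains the class, instead of taking whichever /META-INF/MANIFEST.MF happens to come first on the classpath:

        import java.io.InputStream;
        import java.net.URL;
        import java.util.jar.Attributes;
        import java.util.jar.Manifest;

        public final class VersionInfo {
            public static String implementationVersion(Class<?> anchor) {
                try {
                    // Find where this class was loaded from, e.g.
                    // jar:file:/path/myapp.jar!/com/example/MyClass.class
                    String resource = anchor.getName().replace('.', '/') + ".class";
                    URL classUrl = anchor.getClassLoader().getResource(resource);
                    String url = classUrl.toString();
                    // Rewrite that URL to point at the manifest of the same jar.
                    URL manifestUrl = new URL(
                            url.substring(0, url.lastIndexOf(resource)) + "META-INF/MANIFEST.MF");
                    try (InputStream in = manifestUrl.openStream()) {
                        Attributes attrs = new Manifest(in).getMainAttributes();
                        return attrs.getValue("Implementation-Version");
                    }
                } catch (Exception e) {
                    return null; // e.g. running unpacked in the IDE, or no manifest present
                }
            }
        }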
