Search Results

Search found 3122 results on 125 pages for 'dependency replicator'.

Page 58/125

  • How to use InterfaceInterceptor with NServiceBus UnityBuilder

    - by David
    I have a MessageHandler with a dependency declared as:

        public IRepository<Person> PersonRepository { get; set; }

    I set up my Unity container with:

        IUnityContainer container = new UnityContainer();
        container.AddNewExtension<Interception>();
        container.RegisterType(typeof(IRepository<Person>), typeof(Example.Data.InMemory.Repository<Person>));
        container.Configure<Interception>()
            .AddPolicy("PersonAddAdvice")
            .AddMatchingRule(new AddMatchingRule())
            .AddCallHandler(new PersonAddedEventPublishingCallHandler());
        container.Configure<Interception>()
            .SetInterceptorFor<IRepository<Person>>(new InterfaceInterceptor());

    Then I initialize my endpoint with:

        class EndpointConfig : IConfigureThisEndpoint, AsA_Publisher, IWantCustomInitialization
        {
            public void Init()
            {
                NServiceBus.Configure.With()
                    .Log4Net()
                    .UnityBuilder(ContainerConfig.LoadContainer())
                    .XmlSerializer();
            }
        }

    (ContainerConfig.LoadContainer() returns the pre-configured IUnityContainer.) The problem is that the IRepository dependency in my MessageHandler is not resolved (it's null when handling the messages). If I remove the interception configuration from the Unity container and simply do this:

        IUnityContainer container = new UnityContainer();
        container.RegisterType(typeof(IRepository<Person>), typeof(Example.Data.InMemory.Repository<Person>));

    it works as expected, and the MessageHandler has an instance of IRepository resolved.
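
    For comparison, a hedged sketch of an alternative (assuming Unity 2.0's interception extension; the types IRepository<Person>, Person, and Repository<Person> come from the question above): attach the interceptor and behavior to the type mapping itself rather than configuring it separately with SetInterceptorFor, so that whichever container instance NServiceBus ends up resolving from always yields the proxied mapping.

        // Sketch only - combines the mapping and the interception settings in one
        // registration, so any resolve of IRepository<Person> returns the proxy.
        using Microsoft.Practices.Unity;
        using Microsoft.Practices.Unity.InterceptionExtension;

        public static class ContainerConfig
        {
            public static IUnityContainer LoadContainer()
            {
                var container = new UnityContainer();
                container.AddNewExtension<Interception>();

                container.RegisterType<IRepository<Person>, Example.Data.InMemory.Repository<Person>>(
                    new Interceptor<InterfaceInterceptor>(),
                    new InterceptionBehavior<PolicyInjectionBehavior>());

                return container;
            }
        }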

    Read the article

  • OpenCV application crashes at runtime with error code 0xc0000142

    - by Tuan Anh
    I have OpenCV and MinGW installed with the Code::Blocks IDE, following the instructions found here: http://kevinhughes.ca/tutorials/opencv-install-on-windows-with-codeblocks-and-mingw/

    I tried the simple image-loading program in the article and the build process went fine, but when I try running the output program it crashes with the error message "the application was unable to start correctly (0xc0000142). Click OK to close the application."

    I used Dependency Walker to see if the program failed to load a DLL module; here is its output screen: https://www.dropbox.com/s/f9iaftdt8atjwpl/Screenshot%202013-11-05%2022.21.45.png

    I am not used to Dependency Walker, but as far as I can see in its output, some OpenCV DLLs failed to load, and the Windows DLLs that were loaded are 64-bit instead of 32-bit (MinGW is 32-bit). I can't figure out why, as I already added the OpenCV bin directory to the Path environment variable, yet the app still cannot load the DLL modules. I also thought Windows would automatically load the proper 32-bit DLLs when a 32-bit app is run, but in this situation the app still fails to load. Anyone have ideas?

    Read the article

  • How to deal with recursive dependencies between static libraries using the binutils linker?

    - by Jack Lloyd
    I'm porting an existing system from Windows to Linux. The build is structured with multiple static libraries. I ran into a linking error where a symbol (defined in libA) could not be found in an object from libB. The linker line looked like:

        g++ test_obj.o -lA -lB -o test

    The problem, of course, is that by the time the linker finds it needs the symbol from libA, it has already passed it by, and it does not rescan, so it simply errors out even though the symbol is there for the taking. My initial idea was to simply swap the link order (to -lB -lA) so that libA is scanned afterwards and any symbols missing from libB that are in libA are picked up. But then I found there is actually a recursive dependency between libA and libB! I'm assuming the Visual C++ linker handles this in some way (does it rescan by default?).

    Ways of dealing with this that I've considered:

    1. Use shared objects. Unfortunately this is undesirable because it requires PIC compilation (this is performance-sensitive code and losing %ebx to hold the GOT would really hurt), and shared objects aren't otherwise needed.
    2. Build one mega ar of all of the objects, avoiding the problem.
    3. Restructure the code to avoid the recursive dependency (which is obviously the Right Thing to do, but I'm trying to do this port with minimal changes).

    Do you have other ideas for dealing with this? Is there some way I can convince the binutils linker to rescan libraries it has already looked at when it is missing a symbol?

    Read the article

  • axis2 maven example

    - by larrycai
    I am trying to use axis2 (version 1.5.1) to generate Java code from WSDL files, but I can't figure out what the correct pom.xml is:

        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.axis2</groupId>
              <artifactId>axis2-wsdl2code-maven-plugin</artifactId>
              <version>1.5.1</version>
              <executions>
                <execution>
                  <goals>
                    <goal>wsdl2code</goal>
                  </goals>
                  <configuration>
                    <wsdlFile>src/main/resources/wsdl/stockquote.wsdl</wsdlFile>
                    <databindingName>xmlbeans</databindingName>
                    <packageName>a.bc</packageName>
                  </configuration>
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>
        <dependencies>
          <dependency>
            <groupId>org.apache.axis2</groupId>
            <artifactId>axis2</artifactId>
            <version>1.5.1</version>
          </dependency>
        </dependencies>

    When I type mvn compile, it gets as far as "Retrieving document at 'src/main/resources/wsdl/stockquote.wsdl'." and then fails with:

        java.lang.ClassNotFoundException: org.apache.xml.serializer.TreeWalker

    And when I try to find TreeWalker, it is a mess to find a suitable jar file. Can someone give me a hint, or a correct pom.xml?

    Read the article

  • Abusing the word "library"

    - by William Pursell
    I see a lot of questions, both here on SO and elsewhere, about "maintaining common libraries in a VCS". That is, projects foo and bar both depend on libbaz, and the questioner is wondering how they should import the source for libbaz into the VCS for each project.

    My question is: WTF? If libbaz is a library, then foo doesn't need its source code at all. There are some libraries that are reasonably designed to be used in this manner (e.g. gnulib), but for the most part foo and bar ought to just link against the library.

    I guess my thinking is: if you cut and paste source for a library into your own source tree, then you obviously don't care about future updates to the library. If you care about updates, then just link against the library and trust the library maintainers to maintain a stable API. If you don't trust the API to remain stable, then you can't blindly update your own copy of the source anyway, so what is gained?

    To summarize the question: why would anyone want to maintain a copy of a library in the source code for a project rather than just linking against that library and requiring it as a dependency? If the only answer is "don't want the dependency", then why not just distribute a copy of the library along with your app, but keep them totally separate?

    Read the article

  • Overriding MSBuildExtensionsPath in the MSBuild task is flaky

    - by Stuart Lange
    This is already cross-posted at MS Connect: https://connect.microsoft.com/VisualStudio/feedback/details/560451

    I am attempting to override the property $(MSBuildExtensionsPath) when building a solution containing a C# web application project via msbuild. I am doing this because a web application csproj file imports the file "$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v9.0\WebApplications\Microsoft.WebApplication.targets". This file is installed by Visual Studio to the standard $(MSBuildExtensionsPath) location (C:\Program Files\MSBuild). I would like to eliminate the dependency on this file being installed on the machine (I would like to keep my build servers as "clean" as possible). To do this, I would like to include Microsoft.WebApplication.targets in source control with my project and then override $(MSBuildExtensionsPath) so that the csproj will import this included version of Microsoft.WebApplication.targets. This approach allows me to remove the dependency without requiring me to manually modify the web application csproj file.

    This scheme works fine when I build my solution file from the command line, supplying the custom value of $(MSBuildExtensionsPath) to msbuild via the /p flag. However, if I attempt to build the solution using the MSBuild task in a custom msbuild project file (overriding MSBuildExtensionsPath using the "Properties" attribute), it fails because the web app csproj file attempts to import Microsoft.WebApplication.targets from the standard location (C:\Program Files\MSBuild). Notably, if I run msbuild using the "Exec" task in my custom project file, it works. Even more notably, the FIRST time I run the build using the "MSBuild" task AFTER I have run the build using the "Exec" task (or directly from the command line), the build works.

    Has anyone seen behavior like this before? Am I crazy? Is anyone aware of the root cause of this problem, a possible workaround, or whether this is a legitimate bug in MSBuild?

    Read the article

  • Android App Build system differences between Eclipse and Ant?

    - by Amy Winarske
    The Eclipse build for my 1.6 application project is succeeding and the Ant build is failing. I'm looking for help on why they aren't behaving the same way.

    We are developing on Mac OS X 10.5.8 with Eclipse 3.5 against SDK 1.6 + Google APIs. There are no setting changes in Eclipse, at either the workspace or project level. Similarly, our Ant is a vanilla-flavored, unmodified installation of 1.7.1. The JDK is 1.5.0_22. The CLASSPATH environment variable is not set. JAVA_HOME is /Library/Java/Home.

    The application was initially created by a team member using the Eclipse plugins. The application references two jar files, one of which has a dependency on javax.xml.bind.annotation.XmlSeeAlso, which is not defined anywhere in our code or in android.jar. The other jar file has an explicit dependency on android.jar. I generated the Ant build file using android update.

    The Eclipse project builds an apk and runs the application in the emulator. I think this is incorrect behavior. The Android Ant project fails to build, which I think is correct behavior:

        MyClass.java:98: cannot access javax.xml.bind.annotation.XmlSeeAlso
        [javac] file javax/xml/bind/annotation/XmlSeeAlso.class not found

    Any ideas as to why the two build methods are behaving differently? I would expect them both to fail. Thanks! -Amy

    Read the article

  • Project architecture for enhancing a legacy project

    - by vijay.shad
    I am working on a legacy project. The situation now demands that the project be divided into parts, and I am looking for a strategy to do this.

    Description: The legacy project (A) is a fully functional web application with fairly well-defined layers. It uses Maven as its build tool, but only for dependency management (the project is exported to war form from inside Eclipse). Now I need to extend the project further: the enhancement requires me to add a new data table and new UI (JSP, CSS, JS, and images). What should my strategy be?

    My proposed design is to create two new projects:

    Project B: the main enhancement work will be done in this project. It will have all the layers itself (service layer, DAO layer, and UI layer) and will be a web application in its own right.

    Project C: extract some common model and service code from project A into this project. This project will be added as a dependency to both of the other projects.

    If this approach is okay, then I presume there will be a problem with deployment. The two projects would normally be deployed separately (currently Tomcat is used), but I must deploy them as one war. So I need a plan for changing the web.xml entries to hold the configuration for both projects, and this will come with some extra complexity.

    What should my design for the project be? Does my plan sound good?

    Read the article

  • How to invalidate layout of listbox from custom children

    - by Stephen Price
    I have a custom panel for a ListBox:

        <ItemsPanelTemplate x:Key="FloatPanelTemplate">
            <Controls:FloatPanel x:Name="CardPanel" />
        </ItemsPanelTemplate>

    The panel lays out its children using its X and Y dependency properties. This all works nicely when the FloatPanel is used by itself - I'm using FrameworkPropertyMetadataOptions.AffectsArrange | FrameworkPropertyMetadataOptions.AffectsMeasure on the dependency properties of the child items to tell the FloatPanel to redraw its layout.

    When I use it in a ListBox (code above), it draws fine the first time, but when I drag the children (which modifies the item's X and Y), the ListBox is not notified that it needs to redraw the FloatPanel's children. I think the issue is related to the fact that each item in the bound collection is wrapped in a ListBoxItem.

    Hopefully I've described what I'm doing well enough that someone can tell me how to make the panel (or its children) signal that it needs to run the layout routines again. As I said, it works once (the initial draw), but then dragging items doesn't work (the ListBox isn't aware that its children have changed and need to be laid out again). If I drag an item and then resize the window, the ListBox does a layout pass and the items are drawn in their new locations. How do I notify the ListBox (or, more importantly, the FloatPanel in the ItemsPanelTemplate) that it needs to do a layout pass?
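
    A hedged sketch of one possible direction (FloatPanel, X, and Y are the question's names; the property-changed callback is an assumption, not the asker's code): AffectsArrange/AffectsMeasure only invalidate the element the property is set on, so a callback that walks the visual tree past the ListBoxItem wrapper and invalidates the owning panel directly may behave better inside a ListBox.

        // Sketch only: X shown; Y would be registered the same way. The real
        // FloatPanel's MeasureOverride/ArrangeOverride are omitted here.
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;

        public class FloatPanel : Panel
        {
            public static readonly DependencyProperty XProperty =
                DependencyProperty.RegisterAttached(
                    "X", typeof(double), typeof(FloatPanel),
                    new FrameworkPropertyMetadata(0.0, OnPositionChanged));

            public static double GetX(DependencyObject element) { return (double)element.GetValue(XProperty); }
            public static void SetX(DependencyObject element, double value) { element.SetValue(XProperty, value); }

            private static void OnPositionChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                // Climb past intermediate wrappers (ListBoxItem, borders, ...) to the panel.
                DependencyObject current = d;
                while (current != null && !(current is FloatPanel))
                    current = VisualTreeHelper.GetParent(current);

                var panel = current as FloatPanel;
                if (panel != null)
                {
                    panel.InvalidateMeasure();
                    panel.InvalidateArrange();
                }
            }
        }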

    Read the article

  • Techniques for sharing a value among classes in a program

    - by Kenneth Cochran
    I'm using:

        Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData) + "\MyProgram"

    as the path to store several files used by my program, and I'd like to avoid pasting the same snippet of code all over my application. I need to ensure that:

    1. The path cannot be accidentally changed once it has been set.
    2. The classes that need it have access to it.

    I've considered:

    1. Making it a singleton
    2. Using constructor dependency injection
    3. Using property dependency injection
    4. Using AOP to create the path where it's needed

    Each has pros and cons. The singleton is everyone's favorite whipping boy; I'm not opposed to using one, but there are valid reasons to avoid it if possible. I'm already heavily using constructor injection through Castle Windsor, but this is a path string, and Windsor doesn't handle system-type dependencies very gracefully. I could always wrap it in a class, but that seems like overkill for something as simple as passing around a string value; in any case, this route would add yet another constructor argument to each class where it is used. The problem I see with property injection in this case is that there is a large amount of indirection from where the value is set to where it is needed - I would need a very long line of middlemen to reach all the places where it's used. AOP looks promising, and I'm planning on using AOP for logging anyway, so this at least sounds like a simple solution.

    Are there any other options I haven't considered? Am I off base with my evaluation of the options I have considered?
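
    A minimal sketch of the "wrap it in a class" option (AppPaths and PathInstaller are illustrative names, not from the question): one immutable settings object registered once with Windsor, so consumers take a single typed constructor dependency and the value cannot change after startup.

        using System;
        using System.IO;
        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        public sealed class AppPaths
        {
            public AppPaths(string dataPath) { DataPath = dataPath; }

            // Read-only: the path cannot be accidentally changed once set.
            public string DataPath { get; private set; }
        }

        public static class PathInstaller
        {
            public static void Install(IWindsorContainer container)
            {
                var path = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
                    "MyProgram");

                // A single instance registration; every consumer gets the same value.
                container.Register(Component.For<AppPaths>().Instance(new AppPaths(path)));
            }
        }

    A typed wrapper also sidesteps the "Windsor doesn't handle system-type dependencies gracefully" issue, since the container resolves AppPaths rather than a bare string.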

    Read the article

  • OO Design / Patterns - Fat Model Vs Transaction Script?

    - by ben
    OK, the 'Fat' Model and Transaction Script both solve design problems associated with where to keep business logic. I've done some research, and popular thought says having all business logic encapsulated within the model is the way to go (mainly since Transaction Script can become really complex and often results in code duplication). However, how does this work if I want to use the TDG of a second Model in my business logic? Surely Transaction Script presents a neater, less coupled solution than using one Model inside the business logic of another?

    A practical example: I have two classes, User and Alert. When pushing User instances to the database (e.g. creating new user accounts), there is a business rule that requires inserting some default Alert records too (e.g. a default 'welcome to the system' message). I see two options here:

    1. Add this rule as a User method, and in the process create a dependency between User and Alert (or, at least, Alert's Table Data Gateway).
    2. Use a Transaction Script, which avoids the dependency between models. (It also means the business logic is kept in a 'neutral' class and is easily accessible by Alert. That probably isn't too important here, though.)

    User takes responsibility for its own validation etc., but because we're talking about a business rule involving two Models, Transaction Script seems like the better choice to me. Anyone spot flaws with this approach? (A sketch of option 2 follows.)
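
    A hedged sketch of option 2 (every name here is illustrative, not from the question): a small transaction script that coordinates both models, so User never needs to know about Alert or its Table Data Gateway.

        public class User { public int Id { get; set; } }

        public class Alert
        {
            public int UserId { get; set; }
            public string Message { get; set; }

            public static Alert WelcomeFor(int userId)
            {
                return new Alert { UserId = userId, Message = "Welcome to the system" };
            }
        }

        public interface IUserGateway { void Insert(User user); }
        public interface IAlertGateway { void Insert(Alert alert); }

        public class UserRegistrationScript
        {
            private readonly IUserGateway users;
            private readonly IAlertGateway alerts;

            public UserRegistrationScript(IUserGateway users, IAlertGateway alerts)
            {
                this.users = users;
                this.alerts = alerts;
            }

            public void RegisterNewUser(User user)
            {
                users.Insert(user);                       // persist the new account
                alerts.Insert(Alert.WelcomeFor(user.Id)); // cross-model rule lives here, not in User
            }
        }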

    Read the article

  • jsprf.c:644: error: incompatible types in assignment

    - by giantKamote
    Hey guys, can you help me with this error I encountered while building SpiderMonkey for PPC?

        make -f Makefile.ref
        cat: ../../dist/Linux_All_DBG.OBJ/nspr/Version: No such file or directory
        cd editline; make -f Makefile.ref all
        make[1]: Entering directory `/units/spidermonkey-1.8-next-wip/src/editline'
        make[1]: Nothing to be done for `all'.
        make[1]: Leaving directory `/units/spidermonkey-1.8-next-wip/src/editline'
        make -f Makefile.ref Linux_All_DBG.OBJ/libjs.a Linux_All_DBG.OBJ/libjs.so Linux_All_DBG.OBJ/js Linux_All_DBG.OBJ/jsautocfg.h Linux_All_DBG.OBJ/jscpucfg Linux_All_DBG.OBJ/jscpucfg.o
        cat: ../../dist/Linux_All_DBG.OBJ/nspr/Version: No such file or directory
        make[1]: Entering directory `/units/spidermonkey-1.8-next-wip/src'
        make[1]: Circular jscpucfg.h <- Linux_All_DBG.OBJ/jsautocfg.h dependency dropped.
        make[1]: Circular Linux_All_DBG.OBJ/jsautocfg.h <- Linux_All_DBG.OBJ/jsautocfg.h dependency dropped.
        /powerpc-750-linux-gnu_gcc-3.4.6/bin/powerpc-750-linux-gnu-gcc -o Linux_All_DBG.OBJ/jsprf.o -c -Wall -Wno-format -MMD -DGCC_OPT_BUG -g3 -DXP_UNIX -DSVR4 -DSYSV -D_BSD_SOURCE -DPOSIX_SOURCE -DHAVE_LOCALTIME_R -DX86_LINUX -DDEBUG -DDEBUG_build -DEDITLINE -ILinux_All_DBG.OBJ jsprf.c
        jsprf.c: In function `BuildArgArray':
        jsprf.c:644: error: incompatible types in assignment
        make[1]: *** [Linux_All_DBG.OBJ/jsprf.o] Error 1
        make[1]: Leaving directory `/units/spidermonkey-1.8-next-wip/src'
        make: *** [all] Error 2

    I'm using a Red Hat Linux machine. Do I need NSPR too to cross-compile SpiderMonkey? Thanks a lot!

    Read the article

  • Need some help understanding this problem about maximizing graph connectivity

    - by Legend
    I was wondering if someone could help me understand this problem. I prepared a small diagram because it is much easier to explain it visually. The problem I am trying to solve:

    1. Constructing the dependency graph. Given the connectivity of the graph and a metric that determines how well one node depends on another, order the dependencies. For instance, I could put in a few rules saying that:

       node 3 depends on node 4
       node 2 depends on node 3
       node 3 depends on node 5

    But because the final rule is not "valuable" (again, based on the same metric), I will not add it to my system.

    2. Executing the request order. Once I have built a dependency graph, execute the list in an order that maximizes the final connectivity. I am not sure if this is really a problem, but I somehow have a feeling that there might exist more than one valid order, in which case it is required to choose the best one.

    First and foremost, I am wondering if I constructed the problem correctly and whether I should be aware of any corner cases. Secondly, is there a closely related algorithm that I can look at? Currently, I am thinking of something like Feedback Arc Set or the Secretary Problem, but I am a little confused at the moment. Any suggestions?

    PS: I am a little confused about the problem myself, so please don't flame me for it. If any clarifications are needed, I will try to update the question.
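
    A sketch of one reading of part 1 (the threshold, the edge representation, and the integer node type are all assumptions): keep only the dependency edges whose metric clears a threshold, then produce an execution order with Kahn's topological sort so every node comes after the nodes it depends on.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class DependencyOrdering
        {
            // edges are (dependent, dependsOn, metric) triples
            public static List<int> Order(ICollection<int> nodes,
                                          IEnumerable<Tuple<int, int, double>> edges,
                                          double threshold)
            {
                var dependents = nodes.ToDictionary(n => n, n => new List<int>());
                var inDegree = nodes.ToDictionary(n => n, n => 0);

                // Drop the rules that are not "valuable" enough.
                foreach (var e in edges.Where(e => e.Item3 >= threshold))
                {
                    dependents[e.Item2].Add(e.Item1); // prerequisite -> dependent
                    inDegree[e.Item1]++;
                }

                var ready = new Queue<int>(inDegree.Where(kv => kv.Value == 0).Select(kv => kv.Key));
                var order = new List<int>();
                while (ready.Count > 0)
                {
                    var node = ready.Dequeue();
                    order.Add(node);
                    foreach (var dependent in dependents[node])
                        if (--inDegree[dependent] == 0) ready.Enqueue(dependent);
                }

                // If order.Count < nodes.Count, a cycle survived the filter - that is
                // the point where a Feedback Arc Set heuristic would have to step in.
                return order;
            }
        }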

    Read the article

  • How do you configure an SDL Tridion CME extension for a subset of views?

    - by Chris Summers
    I have created a new editor for SDL Tridion which adds some new functionality to the ribbon bar. This is enabled by adding the following snippet to the editor.config:

        <!-- ItemCommenting PowerTool -->
        <ext:extension assignid="ItemCommenting" name="Save and&lt;br/&gt;Comment"
                       pageid="HomePage" groupid="ManageGroup" insertbefore="SaveCloseBtn">
          <ext:command>PT_ItemCommenting</ext:command>
          <ext:title>Save and Comment</ext:title>
          <ext:issmallbutton>false</ext:issmallbutton>
          <ext:dependencies>
            <cfg:dependency>PowerTools.Commands</cfg:dependency>
          </ext:dependencies>
          <ext:apply>
            <ext:view name="*" />
          </ext:apply>
        </ext:extension>

    This is applied to all views by using a wildcard value in the <ext:view> node, with the result that my new button is added to the ribbon of every view, including the main dashboard. Is there a way to add this to all views except for the dashboard? Or do I have to create something like this?

        <ext:apply>
          <ext:view name="PageView" />
          <ext:view name="ComponentView" />
          <ext:view name="SchemaView" />
        </ext:apply>

    If this is the only way to achieve the result I need, is there a list of all the view names somewhere?

    Read the article

  • eclipse, one classpath for compiling, another for launching

    - by DragonFax
    An example: for logging, my code uses log4j, but other jars my code depends on use slf4j instead, so both jars must be in the build path. Unfortunately, it's now possible for my code to directly use (depend on) slf4j, either via content assist or some other developer's changes. I would like any use of slf4j to show up as an error, but my application (and tests) will still need it in the classpath when running.

    Explanation: I'd like to find out if this is possible in Eclipse. This scenario happens often for me. I'll have a large project that uses a lot of 3rd-party libraries, and of course those 3rd-party jars have their own dependencies as well. So I have to include all dependencies in the classpath ("build path" in Eclipse) for the application and its tests to compile and run (from within Eclipse). But I don't want my code to use all of those jars, just the few direct dependencies I've decided upon myself. So if my code accidentally uses a dependency of a dependency, I want it to show up as a compilation error - ideally as "class not found", but any error would do.

    I know I can manually configure the classpath when running outside of Eclipse, and even within Eclipse I can modify the classpath for a specific class I'm running (in the run configurations), but that's not manageable if you run a lot of individual test cases or have a lot of main() classes.

    Read the article

  • Side by side madness - running binaries on different computer (with a twist)

    - by sbk
    Here's my configuration:

        Computer A - Windows 7, MS Visual Studio 2005 patched for Win7 compatibility (8.0.50727.867)
        Computer B - Windows XP SP2, MS Visual Studio 2005 installed (8.0.50727.42)

    My project has some external dependencies (prebuilt DLLs - either built on A or downloaded from the Internet), a couple of DLLs built from sources, and one executable. I am mostly developing on A and all is fine there. At some point I tried to build my project on computer B, copying the prebuilt DLLs to the output folder. Everything builds fine, but trying to start my application I get "The application failed to initialize properly (0xc0150002)....". The event log contains two of these:

        Dependent Assembly Microsoft.VC80.CRT could not be found

    and:

        Last Error was The referenced assembly is not installed on your system.

    plus the slightly more amusing:

        Generate Activation Context failed for some.dll. Reference error message: The operation completed successfully.

    At this point I tried my Google-fu, but in vain - virtually all hits are about running binaries on machines without Visual Studio installed. In my case, however, the executables fail to run on the very computer they were built on.

    The next step was to try Dependency Walker, and it baffled me even more: my DLLs, built from sources on the same box, cannot find MSVCR80.DLL and MSVCP80.DLL, yet the executable seems to be all right in respect to those two DLLs. That is, when I open the executable with Dependency Walker it shows that the MSVC?80.DLLs can be found, but when I open one of my DLLs it says they cannot.

    That's where I am completely out of ideas, so I'm asking you, dear stackoverflow :) I admit I'm a bit blurry on the whole side-by-side thing, so general reading on the topic will also be appreciated.

    Read the article

  • How to control the order of module initialization in Prism

    - by Robert Taylor
    I'm using Prism V2 with a DirectoryModuleCatalog, and I need the modules to be initialized in a certain order. The desired order is specified with an attribute on each IModule implementation. This is so that, as each module is initialized, it adds its View into a TabControl region, and the order of the tabs needs to be deterministic and controlled by the module author.

    The order does not imply a dependency, but rather just an order that the modules should be initialized in. In other words: modules A, B, and C may have priorities of 1, 2, and 3 respectively. B does not have a dependency on A - it just needs to get loaded into the TabControl region after A, so that we have a deterministic and controllable order of tabs. Also, B might not exist at runtime, in which case the modules should load as A, C, because the priority should determine the order (1, 3). If I used ModuleDependency, then module C would not be able to load without all of its dependencies.

    I can manage the logic of how to sort the modules, but I can't figure out where to put said logic.
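
    A hedged sketch of one place for the logic (assuming Prism v2's ModuleCatalog exposes Sort as a protected virtual hook; the ModulePriorityAttribute is illustrative): subclass DirectoryModuleCatalog and re-order the modules after the catalog's own dependency sort.

        // Sketch only: assumes priorities never contradict declared dependencies,
        // since the stable OrderBy runs after the base dependency sort.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.Practices.Composite.Modularity;

        [AttributeUsage(AttributeTargets.Class)]
        public class ModulePriorityAttribute : Attribute
        {
            public ModulePriorityAttribute(int priority) { Priority = priority; }
            public int Priority { get; private set; }
        }

        public class PrioritizedDirectoryModuleCatalog : DirectoryModuleCatalog
        {
            protected override IEnumerable<ModuleInfo> Sort(IEnumerable<ModuleInfo> modules)
            {
                return base.Sort(modules).OrderBy(GetPriority);
            }

            private static int GetPriority(ModuleInfo moduleInfo)
            {
                // ModuleType holds an assembly-qualified type name in Prism v2.
                var type = Type.GetType(moduleInfo.ModuleType);
                if (type == null) return int.MaxValue;

                var attribute = type.GetCustomAttributes(typeof(ModulePriorityAttribute), false)
                                    .Cast<ModulePriorityAttribute>()
                                    .FirstOrDefault();
                return attribute == null ? int.MaxValue : attribute.Priority;
            }
        }

    Modules without the attribute sort last, so a missing module (like B above) simply drops out of the sequence without disturbing the others.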

    Read the article

  • How to keep your unit test Arrange step simple and still guarantee DDD invariants?

    - by ian31
    DDD recommends that domain objects should be in a valid state at any time. Aggregate roots are responsible for guaranteeing the invariants, and Factories for assembling objects with all the required parts so that they are initialized in a valid state. However, this seems to complicate the task of creating simple, isolated unit tests a lot.

    Let's assume we have a BookRepository that contains Books. A Book has:

    - an Author
    - a Category
    - a list of Bookstores you can find the book in

    These are required attributes: a book has to have an author, a category, and at least one bookstore you can buy the book from. There's likely to be a BookFactory, since it is quite a complex object, and the Factory will initialize the Book with at least all the mentioned attributes.

    Now we want to unit test a method of the BookRepository that returns all the Books. To test whether the method returns the books, we have to set up a test context (the Arrange step in AAA terms) where some Books are already in the Repository. If the only tool at our disposal to create Book objects is the Factory, the unit test now also uses and depends on the Factory, and indirectly on Category, Author, and Store, since we need those objects to build up a Book and then place it in the test context.

    Would you consider this a dependency in the same way that, in a Service unit test, we would be dependent on, say, a Repository that the Service would call? How would you solve the problem of having to re-create a whole cluster of objects in order to be able to test a simple thing? How would you break that dependency and get rid of all these attributes we don't need in our test? By using mocks or stubs? If you mock up things a Repository contains, what kind of mocks/stubs would you use, as opposed to when you mock up something the object under test talks to or consumes?
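
    One commonly suggested way to shrink the Arrange step is a test data builder. A minimal sketch (all names, including the simplified Book, are illustrative, not from the question): the builder supplies valid defaults for every invariant, so each test only mentions what it cares about.

        using System.Collections.Generic;
        using System.Linq;

        public class Book
        {
            public Book(string author, string category, IList<string> stores)
            {
                Author = author; Category = category; Stores = stores;
            }
            public string Author { get; private set; }
            public string Category { get; private set; }
            public IList<string> Stores { get; private set; }
        }

        public class BookBuilder
        {
            private string author = "Default Author";
            private string category = "Default Category";
            private List<string> stores = new List<string> { "Default Store" };

            public BookBuilder WrittenBy(string author) { this.author = author; return this; }
            public BookBuilder InCategory(string category) { this.category = category; return this; }
            public BookBuilder SoldAt(params string[] stores) { this.stores = stores.ToList(); return this; }

            // Funnels through the same construction path production code uses,
            // so the invariants still hold in every test.
            public Book Build() { return new Book(author, category, stores); }
        }

        // Arrange becomes a one-liner:
        //     repository.Add(new BookBuilder().WrittenBy("Evans").Build());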

    Read the article

  • Can I use my Ninject .NET project within Orchard CMS?

    - by Mattias Z
    I am creating a website using Orchard CMS, and I have an external .NET project, written with Ninject for dependency injection, which I would like to use together with a module within Orchard CMS. I know that Orchard uses Autofac for dependency injection, and this is causing me problems, since I have never worked with DI before.

    I have created an Autofac module, UserModule, which registers the source UserRegistrationSource, and I configured Orchard to load the module like this:

        // OrchardStarter.cs
        ....
        builder.RegisterModule(new UserModule());
        ....

        // UserModule.cs
        public class UserModule : Module
        {
            protected override void Load(ContainerBuilder builder)
            {
                builder.RegisterSource(new UserRegistrationSource());
                base.Load(builder);
            }
        }

        // UserRegistrationSource.cs
        public class UserRegistrationSource : IRegistrationSource
        {
            public bool IsAdapterForIndividualComponents { get { return false; } }

            public IEnumerable<IComponentRegistration> RegistrationsFor(Service service,
                Func<Service, IEnumerable<IComponentRegistration>> registrationAccessor)
            {
                var serviceWithType = service as IServiceWithType;
                if (serviceWithType == null) yield break;

                var serviceType = serviceWithType.ServiceType;
                if (!serviceType.IsInterface || !typeof(IUserServices).IsAssignableFrom(serviceType)
                    || serviceType == typeof(IUserServices))
                    yield break;

                var registrationBuilder = ...
                yield return registrationBuilder.CreateRegistration();
            }
        }

    But now I'm stuck... Should I somehow try to resolve the dependencies in UserRegistrationSource by getting the requested types from the Ninject container (kernel), or is this the wrong approach? Or should the Ninject kernel do the resolves for Autofac for the external project?
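
    A hedged sketch of how the missing registrationBuilder piece might look (assuming Autofac 2.x's RegistrationBuilder helpers and Ninject 2's kernel API; IUserServices comes from the question): answer any IUserServices-derived request with a delegate that resolves from the Ninject kernel, so the external project keeps its Ninject wiring.

        using System;
        using System.Collections.Generic;
        using Autofac.Builder;
        using Autofac.Core;
        using Ninject;

        public class NinjectBridgeRegistrationSource : IRegistrationSource
        {
            private readonly IKernel kernel;

            public NinjectBridgeRegistrationSource(IKernel kernel) { this.kernel = kernel; }

            public bool IsAdapterForIndividualComponents { get { return false; } }

            public IEnumerable<IComponentRegistration> RegistrationsFor(
                Service service,
                Func<Service, IEnumerable<IComponentRegistration>> registrationAccessor)
            {
                var swt = service as IServiceWithType;
                if (swt == null) yield break;

                var serviceType = swt.ServiceType;
                if (!serviceType.IsInterface || !typeof(IUserServices).IsAssignableFrom(serviceType)
                    || serviceType == typeof(IUserServices))
                    yield break;

                // Delegate the actual resolution to the Ninject kernel.
                yield return RegistrationBuilder
                    .ForDelegate(serviceType, (context, parameters) => kernel.Get(serviceType))
                    .As(serviceType)
                    .CreateRegistration();
            }
        }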

    Read the article

  • Recommendations to handle development and deployment of php web apps using shared project code

    - by Exception e
    I am wondering what the best way (for a lone developer) is to:

    1. develop a project that depends on code from other projects
    2. deploy the resulting project to the server

    I am planning to put my code in svn and to keep the shared code as a separate project. There are problems with svn:externals which I cannot fully estimate. I've read "subversion:externals considered to be an anti-pattern" and "How do you organize your version control repository", but there is one special thing with PHP projects (and other interpreted source code): there is no final executable resulting from your libraries. External dependencies are thus always on raw source code.

    Ideally I really want to be able to develop simultaneously on one project and the projects it depends on. A possible way: check out a project's dependency in a subfolder as a working copy of the trunk. Problems I foresee:

    1. When you want to deploy a project, you might want to freeze its dependencies, right?
    2. The dependency code should not end up as a duplicate in the project's repository, I think.
    3. I additionally assume svn:ignore will pose problems if I cannot fall back on symlinks. I am still looking for suggestions that do not require the use of junction points - they are a sort of unsupported hack in Windows XP which may break some programs.

    This leads me to the last part of the question (as one has influence on the other): how do you deploy apps with such dependencies? I've looked into Buildout for Python, but it seems to be tightly bound to the Python ecosystem (resolving and fetching Python modules from the web, etc.). I am very eager to learn about your best practices.

    Read the article

  • How to set up EclipseLink with JPA?

    - by deamon
    The EclipseLink documentation says that I need the following entries in my pom.xml to get it with Maven:

        <dependencies>
          <dependency>
            <groupId>org.eclipse.persistence</groupId>
            <artifactId>eclipselink</artifactId>
            <version>2.0.0</version>
            <scope>compile</scope>
            ...
          </dependency>
        </dependencies>
        ...
        <repositories>
          <repository>
            <id>EclipseLink Repo</id>
            <url>http://www.eclipse.org/downloads/download.php?r=1&amp;nf=1&amp;file=/rt/eclipselink/maven.repo</url>
          </repository>
          ...
        </repositories>

    But when I try to use the @Entity annotation, NetBeans tells me that the class cannot be found. And indeed: there is no Entity class in the javax.persistence package from EclipseLink. How do I have to set up EclipseLink with Maven?

    Read the article

  • How to keep your unit tests simple and isolated and still guarantee DDD invariants?

    - by ian31
    DDD recommends that domain objects should be in a valid state at any time. Aggregate roots are responsible for guaranteeing the invariants, and Factories for assembling objects with all the required parts so that they are initialized in a valid state. However, this seems to complicate the task of creating simple, isolated unit tests a lot.

    Let's assume we have a BookRepository that contains Books. A Book has:

    - an Author
    - a Category
    - a list of Bookstores you can find the book in

    These are required attributes: a book has to have an author, a category, and at least one bookstore you can buy the book from. There's likely to be a BookFactory, since it is quite a complex object, and the Factory will initialize the Book with at least all the mentioned attributes.

    Now we want to unit test a method of the BookRepository that returns all the Books. To test whether the method returns the books, we have to set up a test context (the Arrange step in AAA terms) where some Books are already in the Repository. If the only tool at our disposal to create Book objects is the Factory, the unit test now also uses and depends on the Factory, and indirectly on Category, Author, and Store, since we need those objects to build up a Book and then place it in the test context.

    Would you consider this a dependency in the same way that, in a Service unit test, we would be dependent on, say, a Repository that the Service would call? How would you solve the problem of having to re-create a whole cluster of objects in order to be able to test a simple thing? How would you break that dependency and get rid of all these attributes we don't need in our test? By using mocks or stubs? If you mock up things a Repository contains, what kind of mocks/stubs would you use, as opposed to when you mock up something the object under test talks to or consumes?
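
    On the mocks-versus-stubs part of the question, a hedged sketch (Moq syntax; the service, repository interface, and simplified Book are all illustrative): when the object under test merely consumes the repository, a stubbed interface sidesteps the Factory and the whole Author/Category/Store cluster entirely.

        using System.Collections.Generic;
        using Moq;

        public class Book { }

        public interface IBookRepository { IList<Book> GetAll(); }

        public class BookListingService
        {
            private readonly IBookRepository repository;
            public BookListingService(IBookRepository repository) { this.repository = repository; }
            public IList<Book> ListAll() { return repository.GetAll(); }
        }

        public class BookListingServiceTests
        {
            public void ListAll_ReturnsEveryBookInTheRepository()
            {
                // Arrange: no factory, no invariants to satisfy - just a canned answer.
                var books = new List<Book> { new Book(), new Book() };
                var repository = new Mock<IBookRepository>();
                repository.Setup(r => r.GetAll()).Returns(books);

                // Act
                var result = new BookListingService(repository.Object).ListAll();

                // Assert (assertion framework omitted from this sketch):
                // result is expected to contain exactly the two stubbed books.
            }
        }

    When the Repository itself is the subject under test, the same idea applies one level down: stub whatever data store the Repository reads from, and let hand-built Books (or a builder, as in the previous entry) satisfy the invariants.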

    Read the article

  • Need some help understanding this problem

    - by Legend
    I was wondering if someone could help me understand this problem. I prepared a small diagram because it is much easier to explain it visually. The problem I am trying to solve:

    1. Constructing the dependency graph. Given the connectivity of the graph and a metric that determines how well one node depends on another, order the dependencies. For instance, I could put in a few rules saying that:

       node 3 depends on node 4
       node 2 depends on node 3
       node 3 depends on node 5

    But because the final rule is not "valuable" (again, based on the same metric), I will not add it to my system.

    2. Executing the request order. Once I have built a dependency graph, execute the list in an order that maximizes the final connectivity.

    First and foremost, I am wondering if I constructed the problem correctly and whether I should be aware of any corner cases. Secondly, is there a closely related algorithm that I can look at? Currently, I am thinking of something like Feedback Arc Set or the Secretary Problem, but I am a little confused at the moment. Any suggestions?

    PS: I am a little confused about the problem myself, so please don't flame me for it. If any clarifications are needed, I will try to update the question.

    Read the article

  • I want to run two or more procedures in parallel

    - by binod gyawali
    I have a list of procedures that are not dependent upon each other, so what I need to do is run these independent procedures in parallel. There are 4 procedures to be run in parallel; when they have all completed successfully, I need to move on to the next task. These procedures create about 10 tables.

    The next task is to execute a second set of procedures. I have made a table in which I describe which of the tables created above each of these procedures depends on. After any one of the first 4 procedures completes, I should come to this second set, find the procedures whose dependency tables have already been created, and execute them.

    Running the 4 procedures in parallel is done by DTS. The difficulty for me is handing the work off from those 4 procedures to the second set of procedures. Please help me to complete my task. Thanks in advance.

    Read the article

  • GNU Make - Dependencies on non-program code

    - by Tim Post
    A requirement for a program I am writing is that it must be able to trust a configuration file. To accomplish this, I am using several kinds of hashing algorithms to generate a hash of the file at compile time, which produces a header with the hashes as constants. Dependencies for this are pretty straightforward: my program depends on config_hash.h, which has a target that produces it. The makefile looks something like this:

        config_hash.h:
                $(SH) genhash config/config_file.cfg > $(srcdir)/config_hash.h

        $(PROGRAM): config_hash.h $(PROGRAM_DEPS)
                $(CC) ... ... ...

    I'm using the -M option to gcc, which is great for dealing with dependencies: if my header changes, my program is rebuilt. My problem is that I need to be able to tell if the config file has changed, so that config_hash.h is regenerated. I'm not quite sure how to explain that kind of dependency to GNU make. I've tried listing config/config_file.cfg as a dependency for config_hash.h, and providing a .PHONY target for config_file.cfg, without success. Obviously, I can't rely on the -M switch to gcc to help me here, since the config file is not part of any object code. Any suggestions? Unfortunately, I can't post much of the Makefile, or I would have just posted the whole thing.

    Read the article
