Search Results

Search found 4805 results on 193 pages for 'repository'.


  • Are DDD Aggregates really a good idea in a Web Application?

    - by Mystere Man
    I'm diving into Domain Driven Design, and some of the concepts I'm coming across make a lot of sense on the surface, but the more I think about them, the more I have to wonder whether they're really a good idea. The concept of Aggregates, for instance, makes sense: you create small domains of ownership so that you don't have to deal with the entire domain model. However, when I think about this in the context of a web app, we're frequently hitting the database to pull back small subsets of data. For instance, a page may only list orders, with links to click to open an order and see its details. If I'm understanding Aggregates right, I would typically use the repository pattern to return an OrderAggregate that would contain the members GetAll, GetByID, Delete, and Save. OK, that sounds good. But if I call GetAll to list all my orders, it would seem that this pattern requires the entire aggregate to be returned: complete orders, order lines, etc., when I only need a small subset of that information (just the header data). Am I missing something, or is there some level of optimization you would use here? I can't imagine that anyone would advocate returning entire aggregates of information when you don't need it. Certainly, one could create methods on the repository like GetOrderHeaders, but that seems to defeat the purpose of using a pattern like Repository in the first place. Can anyone clarify this for me?
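    One common way out of this, sketched below, is to keep the aggregate repository for writes and add a separate, lightweight read model for listing pages. This is only an illustration of the idea; the OrderHeader type, the interface names, and the paging parameters are hypothetical and not taken from the question.

```csharp
using System.Collections.Generic;

// Aggregate root from the question (stub for illustration only).
public class Order { /* order lines, totals, etc. */ }

// Hypothetical read model: just the fields a listing page needs.
public class OrderHeader
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

// Repository for the aggregate: loads and saves whole Orders.
public interface IOrderRepository
{
    Order GetById(int id);
    void Save(Order order);
    void Delete(Order order);
}

// Separate query service (a "read side") used only by listing pages,
// so GetAll on the aggregate repository is never needed.
public interface IOrderQueries
{
    IEnumerable<OrderHeader> GetHeaders(int page, int pageSize);
}
```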

    Read the article

  • Oracle Solaris 11 How To Guides

    - by glynn
    Over the past year or so I've been writing a lot of How To Guides for different technologies. While we have really excellent product documentation (including the best set of manual pages available on any UNIX or Linux platform), the various How To Guides we have help to complement some of that learning, giving administrators a chance to learn the motivations for different technologies with a simple set of examples. Not only are they fun to research and write, they're also one of the more popular items on our Oracle Solaris 11 technology pages on OTN. So here's a link to bookmark and come back to on a regular basis: Oracle Solaris 11 How To Guides. We've got an excellent line up of articles there, and below is a list of the ones I've been involved in writing. Let us know if there are technologies that you think a How To Guide would help with and we'd be happy to get them onto our list!
    Taking your First Steps with Oracle Solaris 11: An introduction to installing Oracle Solaris 11, including the steps for installing new software and administering other system configuration.
    Introducing the basics of IPS on Oracle Solaris 11: How to administer an Oracle Solaris 11 system using IPS, including how to deal with software package repositories, install and uninstall packages, and update systems.
    Advanced administration with IPS on Oracle Solaris 11: Take a deeper look at advanced IPS to learn how to determine package dependencies, explore manifests, perform advanced searches, and analyze the state of your system.
    How to create and publish packages with IPS on Oracle Solaris 11: How to create new software packages for Oracle Solaris 11 and publish them to a network package repository.
    How to update your Oracle Solaris 11 systems using Support Repository Updates: The steps for updating an Oracle Solaris 11 system with software packages provided by an active Oracle support agreement, plus how to ensure the update is successful and safe.
    Introducing the basics of SMF on Oracle Solaris 11: Simple examples of administering services on Oracle Solaris 11 with the Service Management Facility.
    Advanced administration with SMF on Oracle Solaris 11: Advanced administrative tasks with SMF, including an introduction to service manifests, understanding layering within the SMF configuration repository, and how best to apply configuration to a system.

    Read the article

  • Keeping a domain model consistent with actual data

    - by fstuijt
    Recently domain driven design got my attention, and while thinking about how this approach could help us I came across the following problem. In DDD the common approach is to retrieve entities (or better, aggregate roots) from a repository which acts as an in-memory collection of these entities. After these entities have been retrieved, they can be updated or deleted by the user; however, after retrieval they are essentially disconnected from the data source, and one must actively inform the repository to update the data source and make it consistent again with our in-memory representation. What is the DDD approach to retrieving entities that should remain connected to the data source? For example, in our situation we retrieve a series of sensors that have a specific measurement at the time of retrieval. Over time, these measurement values may change, and our business logic in the domain model should respond to these changes properly. E.g., domain events may be raised if a sensor value exceeds a predefined threshold. However, using the repository approach, these sensor values are just snapshots, and are disconnected from the data source. Does anyone have an idea on how to solve this following the DDD approach?
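    To make the concern concrete, here is a minimal sketch of the kind of domain logic described above: an aggregate that raises a domain event when a new reading crosses a threshold. The Sensor class, the event type, and the static DomainEvents dispatcher are all hypothetical; the open question of who pushes fresh readings into the aggregate (a polling job, a message handler) is exactly the disconnect the question is about.

```csharp
using System;

// Hypothetical domain event; name and shape are illustrative only.
public class SensorThresholdExceeded
{
    public int SensorId { get; private set; }
    public double Value { get; private set; }

    public SensorThresholdExceeded(int sensorId, double value)
    {
        SensorId = sensorId;
        Value = value;
    }
}

// Minimal static dispatcher, standing in for whatever event infrastructure is used.
public static class DomainEvents
{
    public static event Action<object> Raised = delegate { };
    public static void Raise(object domainEvent) { Raised(domainEvent); }
}

public class Sensor
{
    public int Id { get; private set; }
    public double Threshold { get; private set; }
    public double LastMeasurement { get; private set; }

    public Sensor(int id, double threshold)
    {
        Id = id;
        Threshold = threshold;
    }

    // Something outside the aggregate has to call this with fresh readings;
    // the snapshot returned by the repository will not update itself.
    public void RecordMeasurement(double value)
    {
        LastMeasurement = value;
        if (value > Threshold)
        {
            DomainEvents.Raise(new SensorThresholdExceeded(Id, value));
        }
    }
}
```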

    Read the article

  • github team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using Subversion, but soon we're making the switch to GitHub. I'm looking at different types of GitHub workflows, and we're not sure whether the whole forking concept in GitHub for each developer is such a good idea for us. If we use forks, I understand each developer will have his own private remote and local repositories. I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have 2 remotes: origin (which is the remote fork) and an upstream (which is used to "sync" changes from the main repository). Not sure if it's such an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow If we don't use forks, we can probably get by fine using a central repo, creating a branch for each task we're working on, and merging them into the development branch on the same repository. It means we won't be able to restrict merging of branches, and it might be a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?

    Read the article

  • What's wrong performing unit test against concrete implementation if your frameworks are not going to change?

    - by palm snow
    First a bit of background: we are re-architecting our product suite that was written 10 years ago and served its purpose. One thing that we cannot change is the database schema, as we have a 500+ client base using this system. Our db schema has over 150 tables. We have decided on using Entity Framework 4.1 as our DAL and are still evaluating various frameworks for storing our business logic. I am investigating bringing unit testing into the mix, but I am also confused as to how far I need to go with setting up a full-blown TDD environment. One aspect of setting up unit testing is implementing the Repository and Unit of Work patterns, mocking frameworks, etc. This means there will be cost and investment in the code bloat associated with all these frameworks. I understand some of this could be auto-generated, but when it comes to things like behaviors, that will be mostly hand written. Just to be clear, I am not questioning the importance of unit testing your code. I am just not sure we need all its components (like repository, mocking etc.) when we are fairly certain of our storage mechanism/framework (SQL Server/Entity Framework). All that code bloat with generic repositories makes sense when you need a generic layer with the ability to change it whenever you like; however, that is very likely a YAGNI in our case. What we need is more like integration testing, where we can test our code with concrete repository objects and test data in the database. In this scenario, just running integration tests seems to be more beneficial in our case. Any thoughts on whether I am missing anything here?
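    For illustration, here is a rough sketch of the integration-style test described above: a concrete repository exercised against a real Entity Framework 4.1 DbContext, with no generic repository abstraction and no mocking framework. The entity, context, connection string name, and the use of NUnit are all assumptions made for the example, not details from the question.

```csharp
using System.Data.Entity;
using System.Linq;
using NUnit.Framework;

// Hypothetical entity and EF 4.1 code-first context, for illustration only.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    // Assumes a "ShopTestDb" connection string pointing at a test database.
    public ShopContext() : base("name=ShopTestDb") { }
    public DbSet<Customer> Customers { get; set; }
}

// Concrete repository: no generic abstraction, no interface required for the test.
public class CustomerRepository
{
    private readonly ShopContext _context;
    public CustomerRepository(ShopContext context) { _context = context; }

    public void Add(Customer customer)
    {
        _context.Customers.Add(customer);
        _context.SaveChanges();
    }

    public Customer FindByName(string name)
    {
        return _context.Customers.FirstOrDefault(c => c.Name == name);
    }
}

[TestFixture]
public class CustomerRepositoryIntegrationTests
{
    [Test]
    public void Added_customer_can_be_read_back()
    {
        using (var context = new ShopContext())
        {
            var repository = new CustomerRepository(context);
            repository.Add(new Customer { Name = "Integration Test Customer" });

            Assert.IsNotNull(repository.FindByName("Integration Test Customer"));
        }
    }
}
```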

    Read the article

  • Configuring Unity with a closed generic constructor parameter

    - by fearofawhackplanet
    I've been trying to read the article here but I still can't understand it. I have a constructor resembling the following: IOrderStore orders = new OrderStore(new Repository<Order>(new OrdersDataContext())); The constructor for OrderStore: public OrderStore(IRepository<Order> orderRepository) Constructor for Repository<T>: public Repository(DataContext dataContext) How do I set this up in the Unity config file? UPDATE: I've spent the last few hours banging my head against this, and although I'm not really any closer to getting it right, I think at least I can be a little more specific about the problem. I've got my IRepository<T> working OK: <typeAlias alias="IRepository" type="MyAssembly.IRepository`1, MyAssembly" /> <typeAlias alias="Repository" type="MyAssembly.Repository`1, MyAssembly" /> <typeAlias alias="OrdersDataContext" type="MyAssembly.OrdersDataContext, MyAssembly" /> <types> <type type="OrdersDataContext"> <typeConfig> <constructor /> <!-- ensures parameterless constructor used --> </typeConfig> </type> <type type="IRepository" mapTo="Repository"> <typeConfig> <constructor> <param name="dataContext" parameterType="OrdersDataContext"> <dependency /> </param> </constructor> </typeConfig> </type> </types> So now I can get an IRepository like so: IRepository rep = _container.Resolve(); and that all works fine. The problem now is when trying to add the configuration for IOrderStore <type type="IOrderStore" mapTo="OrderStore"> <typeConfig> <constructor> <param name="ordersRepository" parameterType="IRepository"> <dependency /> </param> </constructor> </typeConfig> </type> When I add this, Unity blows up when trying to load the config file. The error message is OrderStore does not have a constructor that takes the parameters (IRepository`1). What I think this is complaining about is that the OrderStore constructor takes a closed IRepository generic type, i.e. OrderStore(IRepository<Order>) and not OrderStore(IRepository<T>). I don't have any idea how to resolve this.
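    For comparison, here is a minimal sketch of the same wiring done in code rather than in the configuration file, registering the closed IRepository<Order> explicitly so that it matches what the OrderStore constructor actually asks for. It assumes the types from the question (Order, OrdersDataContext, IRepository<T>, Repository<T>, IOrderStore, OrderStore) are available, and it is an illustration of the idea rather than a statement of what the config-file syntax should be.

```csharp
using Microsoft.Practices.Unity;

public static class ContainerSetup
{
    public static IUnityContainer Build()
    {
        var container = new UnityContainer();

        // Force the parameterless constructor, as the <constructor /> element did.
        container.RegisterType<OrdersDataContext>(new InjectionConstructor());

        // Register the closed generic explicitly, matching the OrderStore constructor.
        container.RegisterType<IRepository<Order>, Repository<Order>>();

        container.RegisterType<IOrderStore, OrderStore>();
        return container;
    }
}

// Usage: var store = ContainerSetup.Build().Resolve<IOrderStore>();
```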

    Read the article

  • How to store wiki sites (vcs)

    - by Eugen
    Hello, as a personal project I am trying to write a wiki with the help of Django. I'm a beginner when it comes to web development. I am at the (early) point where I need to decide how to store the wiki sites. I have three approaches in mind and would like to know your suggestions.
    Flat files: I considered a flat-file approach with a version control system like git or Mercurial. Firstly, I would have some example wikis to look at, like http://hatta.sheep.art.pl/. Secondly, the VCS would probably deal with editing conflicts and keeping the edit history, so I would not have to reinvent the wheel. And thirdly, I could probably easily clone the wiki repository, so I (or for that matter others) can have an offline copy of the wiki. On the other hand, as far as I know, I cannot use Django models with flat files. Then, if I wanted to add fields to a wiki site, like a category, I would need to somehow keep a reference to that flat file in order to associate the fields in the database with the flat file. Besides, I don't know if it is a good idea to have all the wiki sites in one repository. I imagine it is more natural to have something like a repository per wiki site or file. Last but not least, I'm not sure, but I think using flat files would limit my deployment options, because web hosts may not allow creating files (I'm thinking, for example, of Google App Engine).
    Storing in a database: By storing the wiki sites in the database I can utilize Django models and associate arbitrary fields with the wiki site. I would probably also have an easier life deploying the wiki. But I would not get VCS features like history and conflict resolution per se. I searched for Django extensions to help me and I found django-reversion. However, I do not fully understand whether it fits my needs. Does it track model changes, like for example if I change the Django model file, or does it track the content of the models (which would fit my need)? Plus, I do not see whether django-reversion would help me with edit conflicts.
    Storing a VCS repository in a database field: This would be my ideal solution. It would combine the advantages of both previous approaches without the disadvantages. That is, I would have VCS features, but I would save the wiki sites in a database. The problem is: I have no idea how feasible that is. I just imagine saving a wiki site/source together with a git/Mercurial repository in a database field. Yet, I somehow doubt database fields work like that. So, I'm open to any other approaches, but this is what I came up with. Also, if you're interested, you can find the crappy early test I'm working on here: http://github.com/eugenkiss/instantwiki-test

    Read the article

  • Structure of NAnt build scripts and solution structure on build server

    - by llykke
    We're in the process of streamlining/automating build, integration and unit testing as well as deployment. Our software is developed in Visual Studio, where we use both C# and VB.NET in our projects. A single project can be contained within multiple solutions (i.e. the Utils project is used in both the ProductA and ProductB solutions). For historical reasons our code repository isn't as well structured as one could have hoped for. E.g. the Utils project might be located under the ProductA solution (because that's where it was first used) but was later deemed useful for ProductB development and merely included into the solution of ProductB (while still located in a subdirectory of ProductA). I would like to use continuous integration testing and have set up a CC.NET build server where I intend to use NAnt for creating the actual builds.
    Question 1: How should I structure my builds on the build server? Should I instruct CC.NET to retrieve all the projects for ProductB into a single directory, e.g. a file structure similar to
    -ProductB
    --Utils
    --BetterUtils
    --Data
    or should I opt for a file structure similar to this
    -ProductA
    --Utils
    -ProductB
    --BetterUtils
    --Data
    and then just have the NAnt build scripts handle the references? Our references in VS don't match the actual locations in the code repository, so it's not possible today to just check out the ProductB solution and build it straight away (unfortunately). I hope this question makes sense?
    Question 2: Is it better to check out all the source code located in different projects into a single folder (whilst retaining some kind of structure) and then build everything at once, or to have multiple projects in CC.NET and then let the CC.NET server handle dependencies? Example: Should I have a separate project in CC.NET for monitoring the automated build/test of the Utils project when it's never released on its own? Or should I just build/test it whilst building it as part of ProductB?
    I hope the above makes sense and that you can provide me with some arguments for using either option. We're nowhere near an ideal source code repository structure and I would prefer to resolve the lack of repository structure on the build server instead of having to clean up the structure of our repository. Switching away from VSS is (unfortunately) not an option. Right now our build consists of either deploying via VS ClickOnce or pressing F5, so just getting the build automated would be a huge step up for us. Thanks

    Read the article

  • Under what circumstances would a LINQ-to-SQL Entity "lose" a changed field?

    - by John Rudy
    I'm going nuts over what should be a very simple situation. In an ASP.NET MVC 2 app (not that I think this matters), I have an edit action which takes a very small entity and makes a few changes. The key portion (outside of error handling/security) looks like this: Todo t = Repository.GetTodoByID(todoID); UpdateModel(t); Repository.Save(); Todo is the very simple, small entity with the following fields: ID (primary key), FolderID (foreign key), PercentComplete, TodoText, IsDeleted and SaleEffortID (foreign key). Each of these obviously corresponds to a field in the database. When UpdateModel(t) is called, t does get correctly updated for all fields which have changed. When Repository.Save() is called, by the time the SQL is written out, FolderID reverts back to its original value. The complete code to Repository.Save(): public void Save() { myDataContext.SubmitChanges(); } myDataContext is an instance of the DataContext class created by the LINQ-to-SQL designer. Nothing custom has been done to this aside from adding some common interfaces to some of the entities. I've validated that the FolderID is getting lost before the call to Repository.Save() by logging out the generated SQL: UPDATE [Todo].[TD_TODO] SET [TD_PercentComplete] = @p4, [TD_TodoText] = @p5, [TD_IsDeleted] = @p6 WHERE ([TD_ID] = @p0) AND ([TD_TDF_ID] = @p1) AND /* Folder ID */ ([TD_PercentComplete] = @p2) AND ([TD_TodoText] = @p3) AND (NOT ([TD_IsDeleted] = 1)) AND ([TD_SE_ID] IS NULL) /* SaleEffort ID */ -- @p0: Input BigInt (Size = -1; Prec = 0; Scale = 0) [5] -- @p1: Input BigInt (Size = -1; Prec = 0; Scale = 0) [1] /* this SHOULD be 4 and in the update list */ -- @p2: Input TinyInt (Size = -1; Prec = 0; Scale = 0) [90] -- @p3: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [changing text] -- @p4: Input TinyInt (Size = -1; Prec = 0; Scale = 0) [0] -- @p5: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [changing text foo] -- @p6: Input Bit (Size = -1; Prec = 0; Scale = 0) [True] -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 4.0.30319.1 So somewhere between UpdateModel(t) (where I've validated in the debugger that FolderID updated) and the output of this SQL, the FolderID reverts. The other fields all save. (Well, OK, I haven't validated SaleEffortID yet, because that subsystem isn't really ready yet, but everything else saves.) I've exhausted my own means of research on this: Does anyone know of conditions which would cause a partial entity reset (EG, something to do with long foreign keys?), and/or how to work around this?

    Read the article

  • Intermittent "Specified cast is invalid" with StructureMap injected data context

    - by FreshCode
    I am intermittently getting an System.InvalidCastException: Specified cast is not valid. error in my repository layer when performing an abstracted SELECT query mapped with LINQ. The error can't be caused by a mismatched database schema since it works intermittently and it's on my local dev machine. Could it be because StructureMap is caching the data context between page requests? If so, how do I tell StructureMap v2.6.1 to inject a new data context argument into my repository for each request? Update: I found this question which correlates my hunch that something was being re-used. Looks like I need to call Dispose on my injected data context. Not sure how I'm going to do this to all my repositories without copypasting a lot of code. Edit: These errors are popping up all over the place whenever I refresh my local machine too quickly. Doesn't look like it's happening on my remote deployment box, but I can't be sure. I changed all my repositories' StructureMap life cycles to HttpContextScoped() and the error persists. Code: public ActionResult Index() { // error happens here, which queries my page repository var page = _branchService.GetPage("welcome"); if (page != null) ViewData["Welcome"] = page.Body; ... } Repository: GetPage boils down to a filtered query mapping in my page repository. public IQueryable<Page> GetPages() { var pages = from p in _db.Pages let categories = GetPageCategories(p.PageId) let revisions = GetRevisions(p.PageId) select new Page { ID = p.PageId, UserID = p.UserId, Slug = p.Slug, Title = p.Title, Description = p.Description, Body = p.Text, Date = p.Date, IsPublished = p.IsPublished, Categories = new LazyList<Category>(categories), Revisions = new LazyList<PageRevision>(revisions) }; return pages; } where _db is an injected data context as an argument, stored in a private variable which I reuse for SELECT queries. Error: Specified cast is not valid. Exception Details: System.InvalidCastException: Specified cast is not valid. Stack Trace: [InvalidCastException: Specified cast is not valid.] 
System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult) +4539 System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries) +207 System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) +500 System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute(Expression expression) +50 System.Linq.Queryable.FirstOrDefault(IQueryable`1 source) +383 Manager.Controllers.SiteController.Index() in C:\Projects\Manager\Manager\Controllers\SiteController.cs:68 lambda_method(Closure , ControllerBase , Object[] ) +79 System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) +258 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +39 System.Web.Mvc.<>c__DisplayClassd.<InvokeActionMethodWithFilters>b__a() +125 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) +640 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodWithFilters(ControllerContext controllerContext, IList`1 filters, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +312 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +709 System.Web.Mvc.Controller.ExecuteCore() +162 System.Web.Mvc.<>c__DisplayClass8.<BeginProcessRequest>b__4() +58 System.Web.Mvc.Async.<>c__DisplayClass1.<MakeVoidDelegate>b__0() +20 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +453 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +371
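    Since the underlying question is how to give each request its own data context in StructureMap 2.6, here is a rough sketch of the usual shape of that registration plus end-of-request cleanup. Treat the exact calls as an assumption to verify against the StructureMap 2.6 documentation, and note that SiteDataContext is a placeholder for whatever LINQ-to-SQL data context the repositories above depend on.

```csharp
using System;
using System.Web;
using StructureMap;

// Sketch only: assumes an ASP.NET MVC application and an existing
// LINQ-to-SQL data context class named SiteDataContext (placeholder name).
public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        ObjectFactory.Initialize(x =>
        {
            // One data context per HTTP request instead of a reused instance.
            x.For<SiteDataContext>()
             .HttpContextScoped()
             .Use(() => new SiteDataContext());
        });
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // Dispose everything scoped to this request, including the data context.
        ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
    }
}
```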

    Read the article

  • Error starting JBoss 5.1.0

    - by Alexandre
    I've installed JBoss 5.1.0 on a Xubuntu (running as a guest on VMWare - Windows 7 host). It did work fine for some days, but now I'm completelly unable to start it anymore. Every time I try to start it, I got a "Port 8x83 already in use". I've tried to run it with different ports configurations, and none of them works. I did look for the services using the problematic ports, using netstat and lsof, but they never show up. Since this error occurs in all port configurations, I think this is a Jboss problem. Below is the error stack trace: 2010-06-15 06:21:47,992 INFO [org.jboss.web.WebService] (main) Using RMI server codebase: http://192.168.0.104:8083/ 2010-06-15 06:21:48,085 ERROR [org.jboss.kernel.plugins.dependency.AbstractKernelController] (main) Error installing to Start: name=jboss:service=WebService state=Create mode=Manual requiredState=Installed java.lang.Exception: Port 8083 already in use. at org.jboss.web.WebServer.start(WebServer.java:233) at org.jboss.web.WebService.startService(WebService.java:322) at org.jboss.system.ServiceMBeanSupport.jbossInternalStart(ServiceMBeanSupport.java:376) at org.jboss.system.ServiceMBeanSupport.jbossInternalLifecycle(ServiceMBeanSupport.java:322) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668) at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:189) at $Proxy38.start(Unknown Source) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:42) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:37) at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:286) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.ServiceController.doChange(ServiceController.java:688) at org.jboss.system.ServiceController.start(ServiceController.java:460) at org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:163) at 
org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:99) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:46) at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62) at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50) at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171) at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1178) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:70) at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53) at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:361) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.server.profileservice.repository.AbstractProfileService.activateProfile(AbstractProfileService.java:306) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:271) at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:461) at org.jboss.Main.boot(Main.java:221) at org.jboss.Main$1.run(Main.java:556) at java.lang.Thread.run(Thread.java:619) Caused by: java.net.BindException: Cannot assign requested address at java.net.PlainSocketImpl.socketBind(Native Method) at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:365) at java.net.ServerSocket.bind(ServerSocket.java:319) at 
java.net.ServerSocket.<init>(ServerSocket.java:185) at org.jboss.web.WebServer.start(WebServer.java:226) Any hint on this? Thanks

    Read the article

  • Implementing Release Notes in TFS Team Build 2010

    - by Jakob Ehn
    In TFS Team Build (all versions), each build is associated with changesets and work items. To determine which changesets should be associated with the current build, Team Build finds the label of the "Last Good Build" and then aggregates all changesets up until the label for the current build. Basically this means that if your build is failing, every changeset that is checked in will accumulate in this list until the build is successful. All well, but there is a dimension missing here regarding releases. Often you run several release builds before you actually deploy the result of a build to a test or production system. When you do this, wouldn't it be nice to be able to send the customer a nice release note that contains all work items and changesets since the previously deployed version? At our company, we have developed a Release Repository, which basically is a simple web site with a SQL database as storage. Every time we run a Release Build, the resulting installers, zip files, SQL scripts etc. get pushed into the release repository together with the relevant build information. This information contains things such as start time, who triggered the build, etc. Also, it contains the associated changesets and work items. When deploying the MSIs for a new version, we mark the build as Deployed in the release repository. The deployed status is stored in the release repository database, but it could also have been implemented by setting the Build Quality for that build to Deployed. When generating the release notes, the web site simply runs through each release build back to the previous build that was marked as Deployed, and aggregates the work items and changesets. Here is a sample screenshot of how this looks for a sample build/application. The web site is available both for us and for the customers and testers, which means that they can easily get the latest version of a particular application and at the same time see what changes are included in this version. There is a lot going on in the Release Build process that drives this in our TFS 2010 server, but in this post I will show how you can access and read the changeset and work item information in a custom activity. Since Team Build associates changesets and work items for each build, this information is (partially) available inside the build process template. The Associate Changesets and Work Items for non-Shelveset Builds activity (located inside the Try Compile, Test, and Associate Changesets and Work Items activity) defines and populates a variable called associatedWorkItems. You can see that this variable is an IList containing instances of the Changeset class (from the Microsoft.TeamFoundation.VersionControl.Client namespace). Now, if you want to access this variable later on in the build process template, you need to declare a new variable in the corresponding scope and then assign the value to this variable. In this sample, I declared a variable called assocChangesets in the RunAgent sequence, which basically covers the whole compile, test and drop part of the build process. Now, you need to assign the value from AssociatedChangesets to this variable. This is done using the Assign workflow activity. Now you can add a custom activity anywhere inside the RunAgent sequence and use this variable. NB: Of course your activity must be placed somewhere after the variable has been populated.
To finish off, here is a code snippet that shows how you can read the changeset and work item information from the variable. First you add an InArgument on your activity where you can pass in the variable that we defined. [RequiredArgument] public InArgument<IList<Changeset>> AssociatedChangesets { get; set; } Then you can traverse all the changesets in the list, and for each changeset use the WorkItems property to get the work items that were associated in that changeset: foreach (Changeset ch in associatedChangesets) { // Add change theChangesets.Add( new AssociatedChangeset(ch.ChangesetId, ch.ArtifactUri, ch.Committer, ch.Comment, ch.ChangesetId)); foreach (var wi in ch.WorkItems) { theWorkItems.Add( new AssociatedWorkItem(wi["System.AssignedTo"].ToString(), wi.Id, wi["System.State"].ToString(), wi.Title, wi.Type.Name, wi.Id, wi.Uri)); } } NB: AssociatedChangeset and AssociatedWorkItem are custom classes that we use internally for storing this information, which is eventually pushed to the release repository.
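    For readers who want to see where those snippets live, here is a rough sketch of a complete custom activity class around them, following the usual TFS 2010 build activity pattern (a CodeActivity marked with the BuildActivity attribute). The class name and the collected data are made up for illustration; the AssociatedChangeset and AssociatedWorkItem classes from the post are not reproduced here.

```csharp
using System.Activities;
using System.Collections.Generic;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

[BuildActivity(HostEnvironmentOption.All)]
public sealed class CollectReleaseNotesActivity : CodeActivity
{
    // The assocChangesets variable from the build process template is passed in here.
    [RequiredArgument]
    public InArgument<IList<Changeset>> AssociatedChangesets { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        IList<Changeset> changesets = AssociatedChangesets.Get(context);
        var releaseNoteLines = new List<string>();

        foreach (Changeset ch in changesets)
        {
            releaseNoteLines.Add(string.Format("Changeset {0} by {1}: {2}",
                ch.ChangesetId, ch.Committer, ch.Comment));

            // WorkItems gives the work items associated with this changeset,
            // exactly as in the snippet above.
            foreach (var wi in ch.WorkItems)
            {
                releaseNoteLines.Add(string.Format("  Work item {0}: {1}", wi.Id, wi.Title));
            }
        }

        // Push releaseNoteLines (or richer objects) to the release repository here.
    }
}
```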

    Read the article

  • permission denied: /etc/apt/sources.list

    - by Eli
    I'm trying to install java jre, i usually do it like this sudo echo 'deb http://www.duinsoft.nl/pkg debs all' >> /etc/apt/sources.list sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 5CB26B26 sudo apt-get update sudo apt-get install update-sun-jre exit but when i do sudo echo 'deb http://www.duinsoft.nl/pkg debs all' >> /etc/apt/sources.list i see permission denied: /etc/apt/sources.list When i do ls -l /etc/apt/sources.list i see -rw-r--r-- 1 root root 3360 Aug 26 01:45 /etc/apt/sources.list When i do sudo mv /etc/apt/sources.list /etc/apt/sources.list.old sudo cat /etc/apt/sources.list.old | sudo tee /etc/apt/sources.list i see #deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/ #deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/ #deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://lb.archive.ubuntu.com/ubuntu/ precise main restricted deb-src http://lb.archive.ubuntu.com/ubuntu/ precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://lb.archive.ubuntu.com/ubuntu/ precise-updates main restricted deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://lb.archive.ubuntu.com/ubuntu/ precise universe deb-src http://lb.archive.ubuntu.com/ubuntu/ precise universe deb http://lb.archive.ubuntu.com/ubuntu/ precise-updates universe deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://lb.archive.ubuntu.com/ubuntu/ precise multiverse deb-src http://lb.archive.ubuntu.com/ubuntu/ precise multiverse deb http://lb.archive.ubuntu.com/ubuntu/ precise-updates multiverse deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://lb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu precise-security main restricted deb-src http://security.ubuntu.com/ubuntu precise-security main restricted deb http://security.ubuntu.com/ubuntu precise-security universe deb-src http://security.ubuntu.com/ubuntu precise-security universe deb http://security.ubuntu.com/ubuntu precise-security multiverse deb-src http://security.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. 
## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. # deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu precise partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu precise main deb-src http://extras.ubuntu.com/ubuntu precise main and the issue is not solved; I still see that permission error. I'm on a 64-bit laptop.

    Read the article

  • apt-get 403 Forbidden

    - by Lerp
    I've start a new job today and I am trying to set up my machine to run through their Windows server. I've managed to get a internet connection through the server now but now I can't run apt-get update as I get a "403 Forbidden" error. This is for every repo under my source list, apart from translations(?). I do have a proxy in apt.conf, if I don't have it I get a 407 Permission Denied error. Here's my apt.conf file (I have omitted my username and password) Acquire::http::proxy "http://username:[email protected]:8080/"; Here's my sources.list #deb cdrom:[Ubuntu 12.04.2 LTS _Precise Pangolin_ - Release amd64 (20130213)]/ dists/precise/main/binary-i386/ #deb cdrom:[Ubuntu 12.04.2 LTS _Precise Pangolin_ - Release amd64 (20130213)]/ dists/precise/restricted/binary-i386/ #deb cdrom:[Ubuntu 12.04.2 LTS _Precise Pangolin_ - Release amd64 (20130213)]/ precise main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://gb.archive.ubuntu.com/ubuntu/ precise main restricted deb-src http://gb.archive.ubuntu.com/ubuntu/ precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates main restricted deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://gb.archive.ubuntu.com/ubuntu/ precise universe deb-src http://gb.archive.ubuntu.com/ubuntu/ precise universe deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates universe deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://gb.archive.ubuntu.com/ubuntu/ precise multiverse deb-src http://gb.archive.ubuntu.com/ubuntu/ precise multiverse deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates multiverse deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://gb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu precise-security main restricted deb-src http://security.ubuntu.com/ubuntu precise-security main restricted deb http://security.ubuntu.com/ubuntu precise-security universe deb-src http://security.ubuntu.com/ubuntu precise-security universe deb http://security.ubuntu.com/ubuntu precise-security multiverse deb-src http://security.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. 
## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. # deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu precise partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu precise main deb-src http://extras.ubuntu.com/ubuntu precise main I can sort of fix this by changing all the http entries in sources.list to ftp, but I still have issues with PPAs.

    Read the article

  • EPM 11.1.2.2 Architecture: Financial Performance Management Applications

    - by Marc Schumacher
    Financial Management can be accessed either by a browser-based client or by SmartView. Starting from release 11.1.2.2, the Financial Management Windows client no longer accesses the Financial Management Consolidation server. All tasks that require an online connection (e.g. load and extract tasks) can only be done using the web interface. Any client connection initiated by a browser or SmartView is sent to the Oracle HTTP Server (OHS) first. Based on the path given in the URL (e.g. hfmadf, hfmofficeprovider), OHS decides to forward the request either to the new Financial Management web application based on the Oracle Application Development Framework (ADF) or to the .NET-based application serving SmartView retrievals running on Internet Information Services (IIS). Any requests sent to the ADF web interface that need to be processed by the Financial Management application server are sent to IIS using the HTTP protocol and are forwarded further using DCOM to the Financial Management application server. SmartView requests, which are processed by IIS in the first place, are forwarded to the Financial Management application server using DCOM as well. The Financial Management application server uses OLE DB database connections via native database clients to talk to the Financial Management database schema. Communication between the Financial Management DME Listener, which handles requests from EPMA, and the Financial Management application server is based on DCOM.
    Unlike most other components, Essbase Analytics Link (EAL) does not have an end-user interface. The only user interface is a plug-in for the Essbase Administration Services console, which is used for administration purposes only. End users interact with a Transparent or Replicated Partition that is created in Essbase and populated with data by EAL. The Analytics Link Server deployed on WebLogic communicates through the HTTP protocol with the Analytics Link Financial Management Connector that is deployed in IIS on the Financial Management web server. The Analytics Link Server interacts with the Data Synchronization server using the EAL API. The Data Synchronization server acts as a target of a Transparent or Replicated Partition in Essbase and uses a native database client to connect to the Financial Management database. The Analytics Link Server uses JDBC to connect to relational repository databases and the Essbase JAPI to connect to Essbase.
    As with most Oracle EPM System products, browser-based clients and SmartView can be used to access Planning. The Java-based Planning web application is deployed on WebLogic, which is configured behind an Oracle HTTP Server (OHS). Communication between Planning and the Planning RMI Registry Service is done using Java Remote Method Invocation (RMI). Planning uses JDBC to access relational repository databases and talks to Essbase using the CAPI. Be aware of the fact that besides the Planning System database, a dedicated database schema is needed for each application that is set up within Planning.
    Like Planning, Profitability and Cost Management (HPCM) has a pretty simple architecture. Besides the browser-based clients and SmartView, a web service consumer can be used as a client too. All clients access the Java-based web application deployed on WebLogic through Oracle HTTP Server (OHS). Communication between Profitability and Cost Management and the EPMA Web Server is done using the HTTP protocol. JDBC is used to access the relational repository databases as well as data sources. The Essbase JAPI is utilized to talk to Essbase.
    For Strategic Finance, two clients exist: SmartView and a Windows client. While SmartView communicates through the web layer to the Strategic Finance Server, the Strategic Finance Windows client makes a direct connection to the Strategic Finance Server using RPC calls. Connections from Strategic Finance Web as well as from Strategic Finance Web Services to the Strategic Finance Server are made using RPC calls too. The Strategic Finance Server uses its own file-based data store. JDBC is used to connect to the EPM System Registry from the web and application layers.
    Disclosure Management has three kinds of clients. While the browser-based client and SmartView interact with the Disclosure Management web application directly through Oracle HTTP Server (OHS), the Taxonomy Designer does not connect to the Disclosure Management server. Communication with relational repository databases is done via JDBC; to connect to Essbase, the Essbase JAPI is utilized.

    Read the article

  • Choice and setup of version control

    - by Peter M
    I am about to set up an new laptop and in the process transition to a new version control system as part of a general cleanup. Currently I use a centralized version control system (yes it is VSS, and yes I know all the pro's and con's of that system, but as a single user system it works well for me). I have very little requirements for a new system and I am free to choose among any of the current mainstream players, but cost constraints will push me towards oss. Some of my requirements are: Runs on a single machine (ie the laptop in question) under windows I am not sharing things with other developers or workers - this is more for my own historical benefits. I want to version source code, documentation and binary files I have a large hierarchy of projects that are unrelated (see below) I have files within the hierarchy that don't need to be controlled (but could be) Some projects use Visual Studio, so some integration there could be nice. There could be some sharing of files between jobs. I generally only need a small about of branching in code files The directory hierarchy that I have at the moment is somewhat like: Root | |--Customer #1 | | | |--Job #1 | | | | | |--Data files received from Customer for Job (not controlled) | | |--Documentation files (controlled) | | |--Project information files (not controlled - but could be) | | |--Software Project Files (controlled) | | |--Scratch dir for job (not controlled) | | | |--Job #2 | | (same structure as above) | |--Customer #2 | |.. | |--Cusmtomer #n |.. Currently I have about 22 customers with differing numbers of projects underneath them. At the moment I have a single VSS repository based at the root of the directory structure. If I kept with a centralized system (ie SVN) I believe that I should keep the same approach and continue with a single repository based from the root dir. Is this a valid approach? However if I move to a distributed tool then I am unsure of how I should handle the situation. My initial guess is that I should not have a repository based on the root of my entire directory structure - but that is a guess so I really don't know how valid it is. Should I pitch a distributed approach at the Root, Customer, Job or sub-Job directory level? Also what I am not clear on with distributed tools (and perhaps with SVN as well), is if I can branch parts of a repository. For example, I can see branching source code in software projects as being useful, but branching my documentation as not being useful. So if I pitch a repository at the Job level, can I just branch the Software Project Files? Or would all files in that Job be branched? Every time I look at distributed tools I get a nagging feeling that they are not suited to my style of setup. I am uncomfortable with idea of having to manually set up something like 50 to 80 separate repositories (if I pitch at the Job level, or 20+ if at the Customer level) within my directory hierarchy. This feeling also extends to having all those repositories scattered around as well - however I do have a backup strategy that I trust, so this latter feeling is pretty well unfounded. So what advice can you all give me? Thanks in advance!

    Read the article

  • WebCenter Customer Spotlight: Hyundai Motor Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary
    Hyundai Motor Company is one of the world's fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world's largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. They undertook a project to improve business efficiency and reinforce data security by centralizing the company's sales, financial, and car manufacturing documents into a single repository. Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors. Hyundai Motor Company cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported their future growth in the competitive car industry.
    Company Overview
    Hyundai Motor Company is one of the world's fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world's largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. The company strives to enhance its brand image and market recognition by continuously improving the quality and design of its cars.
    Business Challenges
    To maximize the company's growth potential, Hyundai Motor Company undertook a project to improve business efficiency and reinforce data security by centralizing the company's sales, financial, and car manufacturing documents into a single repository. Specifically, they wanted to:
    Introduce a smart work environment to improve staff productivity and efficiency, and take advantage of rapid company growth due to new, enhanced car designs
    Replace a legacy document system managed by individual staff to improve collaboration, the visibility of corporate documents, and sharing of work-related files between employees
    Improve the security and storage of documents containing corporate intellectual property, and prevent intellectual property loss when staff leave the company
    Eliminate delays when downloading files from the central server to a PC
    Build a large, single document repository to more efficiently manage and share data between 30,000 staff at the company's headquarters
    Establish a scalable system that can be extended to Hyundai offices around the world
    Solution Deployed
    After conducting a large-scale benchmark test, Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors.
    Business Results
    Lowered the overall time spent each day on all document-related work by approximately 85%, from 4.5 hours to around 42 minutes on an average day
    Saved more than US$1 million per year in printer, paper, and toner costs, and laid the foundation for a completely paperless environment
    Reduced staff's time spent requesting and receiving documents about car sales or designs from supervisors by 50%, by storing and managing all documents across the corporation in a single repository
    Cut the time required to draft new-car manufacturing, sales, and design documents by 20%, by allowing employees to reference high-quality data, such as marketing strategy and product planning documents already in the system
    Enhanced staff productivity at company headquarters by 9% by reducing the document-related tasks of 30,000 administrative and research and development staff
    Ensured the system could scale to hold 3 petabytes of car sales, manufacturing, and design data by 2013 and be deployed at branches worldwide
    "We chose Oracle Exalogic, Oracle Exadata, and Oracle WebCenter Content to support our new document-centralization system over their competitors as Oracle offers stable storage for petabytes of data and high processing speeds. We have cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported our future growth in the competitive car industry." Kang Tae-jin, Manager, General Affairs Team, Hyundai Motor Company
    Additional Information
    Hyundai Motor Company Customer Snapshot
    Oracle WebCenter Content

    Read the article

  • Using svnadmin dump to revert the latest revision committed

    - by Wux
    What I need is for the latest (mistaken) revision to be reverted, and for the repository not to store it in any way. That is, I'm trying to erase the latest revision out of existence, NOT trying to fix things by going back to the latest-1 revision. In other words, I want to avoid the repository growing in size. Suppose the head revision is 100. I know that the suggested answer is: svnadmin dump -r0:80 old-repo | svnadmin load --force-uuid new-repo. What I'm confusing myself about is: why not svnadmin dump -r81:100 old-repo? Why the first and not the second solution? I suppose svnadmin dump will erase the repository completely? And keep only revisions 0-80 in a dump file? Is my understanding of "taking a part out of the repository into a dump file" for svnadmin dump completely wrong? (That is, revisions 81-100 would still be there.) Sincere apologies if this has been asked; I did spend some time searching, though I found nothing specific about this. A topic link, in case I missed it, would be greatly appreciated.

    Read the article

  • Large Django application layout

    - by Rob Golding
    I am in a team developing a web-based university portal, which will be based on Django. We are still in the exploratory stages, and I am trying to find the best way to lay the project/development environment out. My initial idea is to develop the system as a Django "app", which contains sub-applications to separate out the different parts of the system. The reason I intended to make these "sub" applications is that they would not have any use outside the parent application whatsoever, so there would be little point in distributing them separately. We envisage that the portal will be installed in multiple locations (at different universities, for example), so the main app can be dropped into a number of Django projects to install it. We therefore have a different repository for each location's project, which is really just a settings.py file defining the installed portal applications, and a urls.py routing the URLs to it. I have started to write some initial code, though, and I've come up against a problem. Some of the code that handles user authentication and profiles seems to be without a home. It doesn't conceptually belong in the portal application as it doesn't relate to the portal's functionality. It also, however, can't go in the project repository, as I would then be duplicating the code over each location's repository. If I then discovered a bug in this code, for example, I would have to manually replicate the fix over all of the locations' project files. My idea for a fix is to make all the project repos a fork of a "master" location project, so that I can pull any changes from that master. I think this is messy though, and it means that I have one more repository to look after. I'm looking for a better way to organise this project. Can anyone recommend a solution or a similar example I can take a look at? The problem seems to be that I am developing a Django project rather than just a Django application.

    Read the article

  • Web crawler update strategy

    - by superb
    I want to crawl useful resources (like background pictures) from certain websites. It is not a hard job, especially with the help of wonderful projects like Scrapy. The problem is that I don't just want to crawl each site ONE TIME; I want to keep the crawl running long term and pick up updated resources. So I want to know: is there any good strategy for a web crawler to get updated pages? Here's a coarse algorithm I've thought of. I divide the crawl process into rounds; in each round the URL repository gives the crawler a certain number (say, 10,000) of URLs to crawl, and then the next round begins. The detailed steps are: (1) the crawler adds the start URLs to the URL repository; (2) the crawler asks the URL repository for at most N URLs to crawl; (3) the crawler fetches the URLs and updates certain information in the URL repository, such as the page content, the fetch time, and whether the content has changed; (4) go back to step 2. To make this work, I still need to solve the following question: how do I decide the "freshness" of a web page, i.e. the probability that the page has been updated? Since that is an open question, hopefully it will bring some fruitful discussion here.
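
    A minimal Python sketch of one common revisit policy (all field names and intervals are illustrative, not a definitive implementation): keep a per-URL content hash and revisit interval in the URL repository, shrink the interval when a fetch finds changed content, and grow it when nothing changed. The repository then hands out the most overdue URLs each round.

        import hashlib
        import time

        # Hypothetical URL-repository record with adaptive revisiting.
        class UrlRecord:
            def __init__(self, url, min_interval=3600, max_interval=30 * 86400):
                self.url = url
                self.content_hash = None
                self.interval = 86400        # start by revisiting daily
                self.next_fetch = 0          # due immediately
                self.min_interval = min_interval
                self.max_interval = max_interval

            def record_fetch(self, body: bytes):
                new_hash = hashlib.sha1(body).hexdigest()
                if new_hash != self.content_hash:
                    # Page changed: check back sooner next time.
                    self.interval = max(self.min_interval, self.interval // 2)
                else:
                    # Page unchanged: back off.
                    self.interval = min(self.max_interval, self.interval * 2)
                self.content_hash = new_hash
                self.next_fetch = time.time() + self.interval

        def next_batch(records, n=10000):
            """Step 2 of the algorithm: return the N most overdue URLs."""
            due = [r for r in records if r.next_fetch <= time.time()]
            due.sort(key=lambda r: r.next_fetch)
            return due[:n]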

    Read the article

  • Mercurial Tagging/Branching Strategy

    - by Tony Trozzo
    My current project is broken down into 3 parts: a Website, a Desktop Client, and a Plug-in for a third-party program. We originally started out with Subversion for our source control but decided to try Mercurial after reading Joel Spolsky's final post. Considering we hadn't really used the majority of svn's potential before, we figured starting fresh with some basic ideas of how source control works would make the transition easy. However, after setting up our initial repository, we're lost as to how tagging and branching should work on a project like this. Essentially, we're working on all 3 parts at the same time, and we want a release to be a combination of the 3 parts. Currently we're working in one repository. For the Plug-in part, we have finished the first iteration, which we've been referring to as Plug-In v0.1. For the first official builds of the other two parts, we'd also like to refer to them as Website v0.1 and Desktop Client v0.1. When all three parts are at v0.1, we'd like to have a Full Project v0.1. Our problem is that we're not sure how to manage all of this in the Hg repository. Would the best way be to create 3 separate repositories for the 3 stable versions and then 3 more repositories for current development? Currently we have all of this in one repository. Should we handle this with branches (are branches any different from cloning repositories?) and tags? Any help is greatly appreciated.
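
    One way to keep everything in the single existing repository is to use tags for component milestones and reserve clones or named branches for genuinely divergent lines of work. A hypothetical release step might look like the Python sketch below (the tag names are illustrative); run from the root of the shared repository, it records each component milestone plus a combined tag once all three parts reach v0.1.

        import subprocess

        # Hypothetical release step: one tag per component milestone, plus a
        # combined tag for the release made up of all three parts.
        for tag in ["plugin-0.1", "website-0.1", "desktop-client-0.1", "full-project-0.1"]:
            subprocess.check_call(["hg", "tag", "-m", "Tagging %s" % tag, tag])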

    Read the article

  • Recent CVS/Cygwin newline problems on Windows: how to solve?

    - by user72150
    Hi all, one of our projects still uses CVS. Recently (sometime in the past year) the Cygwin cvs client (currently 1.12.13) has given me problems when I update. I suspect the problem stems from Windows/Unix newline differences; it worked for years without a problem. Here are the symptoms when I update from the command line:

        "no such repository" messages:
            /cvshome : no such repository
        False "Modified" flags due to newline differences:
            M java/com/foo/ekm/value/EkmContainerInstance.java
        False "Conflicts":
            cvs update: Updating java/com/foo/ekm/value/XWiki
            cvs update: move away `java/com/foo/ekm/value/XWiki/XWikiContainerInstance.java'; it is in the way
            C java/com/foo/ekm/value/XWiki/XWikiContainerInstance.java

    Note that for the 'no such repository' error, I found that cd-ing into the CVS/ folder and running 'dos2unix Root' lets the update work correctly. I'm not sure how this file (or Repository, or Entries) gets whacked, and I don't remember when these problems started. I have a workaround: updating from our IDE (IntelliJ IDEA) always succeeds. Yes, I know we should switch to (svn|git|mercurial), but does anyone know what causes these problems, when they were introduced, and what the workarounds are? thanks, bill
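
    A small Python helper in the spirit of the dos2unix workaround mentioned above, a sketch only: it assumes the damage is limited to CRLF line endings in the CVS/ metadata files and normalizes them across a working copy.

        import os

        # Normalize CRLF -> LF in CVS metadata files (Root, Repository, Entries)
        # under a working copy, mirroring the manual "dos2unix CVS/Root" fix.
        def fix_cvs_metadata(workdir="."):
            for dirpath, dirnames, filenames in os.walk(workdir):
                if os.path.basename(dirpath) != "CVS":
                    continue
                for name in ("Root", "Repository", "Entries"):
                    path = os.path.join(dirpath, name)
                    if not os.path.isfile(path):
                        continue
                    with open(path, "rb") as f:
                        data = f.read()
                    fixed = data.replace(b"\r\n", b"\n")
                    if fixed != data:
                        with open(path, "wb") as f:
                            f.write(fixed)

        if __name__ == "__main__":
            fix_cvs_metadata()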

    Read the article

  • git push error '[remote rejected] master -> master (branch is currently checked out)'

    - by hap497
    Hi, yesterday I posted a question about how to clone a git repository from one of my machines to another: http://stackoverflow.com/questions/2808177/how-can-i-git-clone-from-another-machine/2809612#2809612 I am able to successfully clone a git repository from my src (192.168.1.2) to my dest (192.168.1.1). But when I edit a file, run 'git commit -a -m "test"', and then run git push, I get this error on my dest (192.168.1.1):

        git push
        [email protected]'s password:
        Counting objects: 21, done.
        Compressing objects: 100% (11/11), done.
        Writing objects: 100% (11/11), 1010 bytes, done.
        Total 11 (delta 9), reused 0 (delta 0)
        error: refusing to update checked out branch: refs/heads/master
        error: By default, updating the current branch in a non-bare repository
        error: is denied, because it will make the index and work tree inconsistent
        error: with what you pushed, and will require 'git reset --hard' to match
        error: the work tree to HEAD.
        error:
        error: You can set 'receive.denyCurrentBranch' configuration variable to
        error: 'ignore' or 'warn' in the remote repository to allow pushing into
        error: its current branch; however, this is not recommended unless you
        error: arranged to update its work tree to match what you pushed in some
        error: other way.
        error:
        error: To squelch this message and still keep the default behaviour, set
        error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
        To git+ssh://[email protected]/media/LINUXDATA/working
         ! [remote rejected] master -> master (branch is currently checked out)
        error: failed to push some refs to 'git+ssh://[email protected]/media/LINUXDATA/working'

    I have two versions of git; could that be causing this problem? I have git 1.7 on 192.168.1.2 (src) but git 1.5 on 192.168.1.1 (dest). I'd appreciate it if someone could help me with this. Thank you.
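
    The error text itself points at the usual fixes: either push to a bare repository, or relax receive.denyCurrentBranch on the repository being pushed into. A minimal Python sketch of the latter, run on the machine that owns the checked-out working tree (the path is the one from the error message), under the assumption that you will update that working tree yourself after each push:

        import subprocess

        # Allow pushes into the checked-out branch. The working tree there will
        # NOT update automatically; you still need `git reset --hard` (or a pull)
        # afterwards, which is why a bare repository is usually cleaner.
        REMOTE_WORKTREE = "/media/LINUXDATA/working"

        subprocess.check_call(
            ["git", "config", "receive.denyCurrentBranch", "ignore"],
            cwd=REMOTE_WORKTREE,
        )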

    Read the article

  • .NET projects build automation with NAnt/MSBuild + SVN

    - by petr k.
    Hi everyone, for quite a while now I've been trying to figure out how to set up an automated build process at our shop. I've read many posts and guides on this matter and none of them really fits my specific needs. My SVN repository is laid out as follows:

        \projects
            \projectA (a product)
                \tags
                    \1.0.0.1
                    \1.0.0.2
                    ...
                \trunk
                    \src
                        \proj1 (a VS C# project)
                        \proj2
                    \documentation

    Then I have a network share, with a folder for each project (product), which in turn contains the binaries, written documentation, and the generated API documentation (via NDoc; each project may have an .ndoc file in the repository) for every historical version (from the tags SVN folder) and for the latest version as well (from the trunk). Basically, what I want to do in a scheduled batch build are these steps:

        1. Examine the project's SVN folder and identify tags not present on the network share.
        2. For each of these tags:
           - check out the tag folder
           - build (with Release config)
           - copy the resulting binaries to the network share
           - search for .ndoc files
           - generate CHM files via NDoc
           - copy the resulting CHM files to the network share
        3. Do the same as in 2., but for the HEAD revision of trunk.

    Now, the trouble is, I have no idea where to start. I do not keep .sln files in the repository, but I am able to replace these with MSBuild files which in turn build the C# projects belonging to the specific product. I guess the most troubling part is the examination of the repository for tags which have not been processed yet, i.e. searching the tags and comparing them to a project's directory structure on the network share. I have no idea how to do that in any of the build tools (NAnt, MSBuild). Could you please provide me with some pointers on how to approach this task as a whole and in detail as well? I do not care if I use NAnt, MSBuild, or both. I am aware that this might be rather complex, but every idea and NAnt/MSBuild snippet will be a great help. Thanks in advance.
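
    For the part the poster is most stuck on (step 1, finding tags that have not been processed yet), the comparison is simple enough to live in a small script invoked from NAnt or MSBuild via an exec task. A hedged Python sketch follows; the repository URL and share path are invented for illustration.

        import os
        import subprocess

        # Hypothetical locations; substitute your own.
        TAGS_URL = "http://svn.example.com/projects/projectA/tags"
        SHARE_DIR = r"\\fileserver\builds\projectA"

        # `svn list` prints one entry per line, directories with a trailing "/".
        tags = [
            line.strip().rstrip("/")
            for line in subprocess.check_output(
                ["svn", "list", TAGS_URL], text=True
            ).splitlines()
            if line.strip()
        ]

        published = set(os.listdir(SHARE_DIR))
        pending = [tag for tag in tags if tag not in published]

        for tag in pending:
            # Steps 2.1-2.6 would go here: check out, build, generate docs,
            # copy the results to the share.
            print("needs build:", tag)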

    Read the article

  • DDD principles and ASP.NET MVC project design

    - by kaivalya
    This is a two-part question. I have a Product aggregate that has Prices, PackagingOptions, ProductDescriptions, ProductImages, etc. I have modeled one product repository and did not create individual repositories for any of the child classes; all DB operations are handled through the product repository. Am I understanding the DDD concept correctly so far? Sometimes the question comes to my mind that having a repository for, let's say, packaging options could make my life easier by fetching a packaging option directly from the DB by its ID, instead of asking the product repository to find it in its PackagingOptions collection and hand it to me. The second part is managing the edit/create operations using the ASP.NET MVC framework. I am currently trying to manage all add/edit/remove operations on these child collections of Product through the product controller (does that sound right?). One challenge I am now facing is this: if I edit a specific packaging option of a product through mydomain/product/editpackagingoption/10, I have access to the ID of the packaging option, but I don't have the ID of the product itself, and this forces me to write a query to first find the product that has this specific packaging option and then edit that product and the relevant packaging option. I can do this because all packaging options have unique IDs, but it would fail for collections whose items don't have unique IDs. That feels very wrong. The next option I thought of is sending both the product and packaging option IDs in the URL, like mydomain/product/editpackagingoption/3/10, but I am not sure that is a good design either. So I am at a point where I am a bit confused and might have fundamental misunderstandings around all of this. I would appreciate it if you bear with the long question and help me put this together. Thanks!

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >