Search Results

Search found 1634 results on 66 pages for 'ddd repositories'.


  • IIS 7 rewriting subdomain to point at a specific port.

    - by Tommy Jakobsen
    Having installed Team Foundation Server 2010 on Windows Server 2008, I need an easy URL for our developers to access their repositories. The default URL for the TFS repositories is http://localhost:8080/tfs. Now I want the subdomain tfs.server.domain.com to point at http://localhost:8080/tfs, and when you access tfs.server.domain.com/repos_name it should redirect to http://localhost:8080/tfs/repos_name. How can I do this in IIS 7? I already tried the following rule, but it does not work; I get a 404.

      <rewrite>
        <globalRules>
          <rule name="TFS" stopProcessing="true">
            <match url="^(?:tfs/)(.*)" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^tfs.server.domain.com$" />
            </conditions>
            <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" />
          </rule>
        </globalRules>
      </rewrite>
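
    A hedged guess at a fix, based only on the rule shown above: the pattern ^(?:tfs/)(.*) only matches requests whose path already starts with tfs/, yet the requests arrive as tfs.server.domain.com/repos_name, so nothing matches and IIS falls through to a 404. A sketch that matches the whole path instead (note that rewriting to a different host:port in IIS 7 generally also requires the Application Request Routing module with proxying enabled):

      <rewrite>
        <globalRules>
          <!-- Match any path on the tfs subdomain, not just paths starting with tfs/ -->
          <rule name="TFS" stopProcessing="true">
            <match url="^(.*)$" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^tfs\.server\.domain\.com$" />
            </conditions>
            <!-- Requires Application Request Routing with proxy mode enabled -->
            <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" />
          </rule>
        </globalRules>
      </rewrite>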

    Read the article

  • Upgrade CentOS 5 to PHP 5.2 or 5.3 [recommended way?]

    - by solid
    We are using Zend Framework, and in version 2, PHP 5.2 will be the minimum requirement. We love CentOS and we'd like to keep using it, but PHP 5.1 just won't do anymore when developing web applications with Zend Framework. I found several links to solutions that upgrade via external repositories: http://serverfault.com/questions/106801/recommended-method-to-upgrade-php-5-1-6-to-5-2-x-on-centos-5 http://www.webtatic.com/blog/2009/05/installing-php-526-on-centos-5/ http://www.webtatic.com/blog/2009/06/php-530-on-centos-5/ We'd like to see another solution using an "official?" CentOS repository, if any is available. We only need to upgrade PHP; the rest of the CentOS setup is fine the way it is. It's important for us, however, to keep the YUM cycle intact using the normal repositories. So in short: is it even possible to upgrade only PHP by using an external repo or otherwise, while still upgrading all our other packages safely through normal yum usage? Thanks for your help!
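
    One hedged approach, not taken from the question itself: a third-party repo such as the Webtatic one linked above can be pinned so that yum only ever pulls PHP packages from it, leaving everything else on the stock CentOS repos. The repo id and baseurl below are placeholders, but includepkgs is a standard yum repo-file option:

      # /etc/yum.repos.d/php-upgrade.repo  (repo id and baseurl are illustrative)
      [php-upgrade]
      name=Third-party PHP packages for CentOS 5
      baseurl=http://repo.example.com/centos/5/$basearch/
      enabled=1
      gpgcheck=1
      # Restrict this repo to PHP packages only, so a plain `yum update`
      # keeps pulling everything else from the normal CentOS repositories.
      includepkgs=php php-*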

    Read the article

  • How can I install git on RHEL 6?

    - by JR.Xyza
    I'm trying to install Git on a RHEL 6 development server. I have experience with Ubuntu, but this is my first time working with RHEL (I'm a developer trying to fill in for a recently departed Linux sysadmin). I've set up two additional repos (EPEL and IUS) for other packages needed for a Magento install. Output of yum repolist:

      [root@box]# yum repolist
      Loaded plugins: product-id, security, subscription-manager
      Updating certificate-based repositories.
      repo id   repo name                                        status
      epel      Extra Packages for Enterprise Linux 6 - x86_64   7,841
      ius       IUS for RHEL 6Server - x86_64                    135

    Most of what I've read indicates a simple 'yum install git' should work with EPEL enabled, but I get the dreaded:

      [root@box]# yum install git
      Loaded plugins: product-id, security, subscription-manager
      Updating certificate-based repositories.
      Setting up Install Process
      No package git available.
      Error: Nothing to do

    Same goes for git-daemon, etc. I've tracked down a number of git RPMs, such as this one at repoforge, but they require a train of dependencies that seems to never end. I've also toyed with compiling it manually, but the rabbit hole to get make working seems to go even deeper. I'm convinced there's a simple oversight somewhere keeping me from being able to install from the EPEL repo, but I'm a rookie at all this. Thanks in advance for help/pointers/additional resources.
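
    A hedged observation about the repolist output above: git actually ships in the RHEL 6 base channel, yet the listing shows only epel and ius and no Red Hat base repository at all, which would also explain why third-party RPMs hit endless dependency chains. A few standard diagnostic commands to confirm that:

      # Show every configured repo, including disabled ones
      yum repolist all

      # Check whether the system is subscribed to a base RHEL channel at all
      subscription-manager list --consumed

      # See which repo, if any, can currently provide git
      yum --enablerepo='*' info git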

    Read the article

  • Getting Started in SuSE as an Ubuntu User

    - by Subhamoy Sengupta
    I am not a Linux newbie, but I haven't touched SuSE in a very, very long time (last time I tried it, it was SuSE 7!). Now I finally felt like giving it a try, and many things seem strange or unnecessarily complex. I have a series of questions.

    1. How do I ensure that my packages are up to date? It sounds silly, but I tried the obvious methods already. I have disabled the default repositories that show up when you do zypper lr, and added the Tumbleweed and Packman repositories (Essentials, Multimedia, Extra). Then I did sudo zypper ref --force and then sudo zypper dup, and it tells me many dependencies are not met. I have already added solver.allowVendorChange = true to /etc/zypp/zypp.conf, so it should not care which repository the latest versions are in, and just upgrade to them. Even when I chose to skip the packages with unmet dependencies, and quite a bit seemed to happen in the background, I opened Firefox afterwards and the version was 7! I am guessing things did not go as expected. This is of course not a problem with SuSE; I am just not understanding the system right. How do I do it right?

    2. When I start typing the arguments of a command, for example sudo zypper install, and I type sudo zypper ins and keep hitting TAB, nothing happens! This always worked in Ubuntu and I feel very uneasy without it. Is this how SuSE is supposed to be?

    3. When I try to install something and start writing its name, hitting TAB does not autocomplete the package name, even though the package exists and I am sure of it. This is also quite inconvenient. Why is that not happening?

    There are many things in SuSE that are really great, and I think I will stay with it and not go back to Ubuntu once I settle these very rudimentary issues. But right now they are giving me a lot of grief! Please help!
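
    Hedged suggestions matching the symptoms above (package names assumed, standard openSUSE tooling): the distribution upgrade can be pointed explicitly at one repository, and missing TAB completion for zypper subcommands and package names usually just means the bash-completion package isn't installed:

      # Pull the whole distribution up to one repo's versions
      # (newer zypper versions support --from; the alias must match `zypper lr`)
      sudo zypper ref
      sudo zypper dup --from Tumbleweed

      # Enable TAB completion for zypper subcommands and package names,
      # then open a new login shell so the completion scripts get sourced
      sudo zypper install bash-completion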

    Read the article

  • Subversion error: Repository moved permanently; please relocate

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error:

      Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included
      Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate

    I checked Apache's error log, but it doesn't say anything (it does now - see edit). My repositories are stored under /var/www/svn/repos/. My website is stored under /var/www/vhosts/x/... Here's the conf file for the subdomain:

      <Location />
        DAV svn
        SVNParentPath /var/www/svn/repos/
        AuthType Basic
        AuthName "Authorization Realm"
        AuthUserFile /var/www/svn/auth/svn.htpasswd
        Require valid-user
      </Location>

    Authentication works fine. Does anyone know what might be causing this?

    -- Edit

    So I restarted Apache (again) and tried it again, and now it gives me an error message, but it doesn't really help. Anyone have an idea what it means?

      [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information. [403, #0]
      [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository. [403, #190001]

    -- Edit 2

    If I do svn info it doesn't give anything useful:

      [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/
      Username: username
      Password for 'username':
      svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate

    I also tried doing a local checkout (svn checkout file:///var/www/svn/repos/reposname) and that works fine (adding / committing also works fine). So it seems it has something to do with Apache. Some other information: I'm running CentOS 5.3, Plesk 9.3, Subversion version 1.6.9 (r901367).

    -- Edit 3

    I tried moving the repositories, but it didn't make any difference. selinux is disabled, so that isn't it either.

    -- Edit 4

    Really? Nobody :(?
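
    A hedged reading of the symptoms (an assumption, not a confirmed diagnosis): mod_dav_svn classically emits "Repository moved permanently" when the requested URL ends up handled by the vhost's normal document root rather than by mod_dav_svn, which can happen when <Location /> competes with Plesk's own vhost configuration for the same path. A sketch that sidesteps the overlap by serving the repositories under a dedicated prefix:

      # Serve repositories under /svn/ instead of the vhost root, so the
      # DocumentRoot handling for / cannot shadow mod_dav_svn.
      <Location /svn>
        DAV svn
        SVNParentPath /var/www/svn/repos/
        AuthType Basic
        AuthName "Authorization Realm"
        AuthUserFile /var/www/svn/auth/svn.htpasswd
        Require valid-user
      </Location>
      # The checkout URL then becomes: http://svn.host.com/svn/reposname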

    Read the article

  • Subversion COPY/MOVE - File not found: transaction 'XXX-XX'

    - by theplatz
    I'm attempting to create a branch in one of my Subversion repositories and keep running into an error. No matter what I do, I keep getting the following:

      File not found: transaction '3062-2e6', path '/Software/XXXXXX/branches/testbranch'

    I've noticed that the first part of '3062-2e6' in the above message is the last successfully committed revision in the repository. My Apache logs don't give much more information:

      [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Could not MOVE/COPY /svn/p070361/!svn/bc/3049/Software/SXXXXXX/trunk. [404, #0]
      [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Unable to make a filesystem copy. [404, #160013]
      [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] File not found: transaction '3059-2e2', path '/Software/XXXXXX/branches/testbranch' [404, #160013]

    This is all happening on a server with an nginx frontend that proxies to Apache for the Subversion bits. Other repositories are able to branch fine, and I was able to create the branch using file:/// from the command line on the server where this is occurring. The permissions on this repository match every other repository, and disk space is not an issue.
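
    A hedged hunch given the nginx frontend mentioned above: WebDAV COPY/MOVE requests carry their target in a Destination header, and when nginx proxies them the header still names the public scheme/host, which the Apache backend can then fail to resolve. A common workaround is to rewrite that header in the proxy block (the directives are standard nginx; the https-to-http rewrite and backend address are assumptions about this particular setup):

      location /svn/ {
          # Rewrite the WebDAV Destination header so the backend
          # sees a URL it recognises (e.g. strip the TLS scheme).
          set $fixed_destination $http_destination;
          if ($http_destination ~* ^https(://.*)$) {
              set $fixed_destination http$1;
          }
          proxy_set_header Destination $fixed_destination;
          proxy_pass http://127.0.0.1:8080;
      }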

    Read the article

  • How to customise the search core results web part, Part 1

    - by ybbest
    In this post, I’d like to show you how to customise the search core results web part. It is quite simple; most of the time all you need to do is change the XSLT. Here are the steps:

    1. Change the XSLT to the following, so that you can see the raw XML:

      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >
        <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" />
        <xsl:template match="/">
          <xmp><xsl:copy-of select="*"/></xmp>
        </xsl:template>
      </xsl:stylesheet>

    a. To do so, go to edit page >> Edit search core results web part >> Display Properties, and untick "Use Location Visualization".
    b. Open the XSL editor and copy the existing XSLT code to your preferred XSLT editor so that you can customise it.
    c. Now you can paste in the XSLT code above.

    2. Perform the search after you have completed step 1 and you will see the search results returned as raw XML:

      <All_Results>
        <Result>
          <id>1</id>
          <workid>678</workid>
          <rank>100000000</rank>
          <title>Ybbest</title>
          <author></author>
          <size>137531</size>
          <url>http://ybbest</url>
          <urlEncoded>http%3A%2F%2Fybbest</urlEncoded>
          <description>Ybbest test site</description>
          <write>3/17/2012</write>
          <sitename>http://ybbest</sitename>
          <collapsingstatus>0</collapsingstatus>
          <hithighlightedsummary><c0>Ybbest</c0> test site <ddd /> Add a new image, change this welcome text or add new lists to this page by clicking the edit button above. You can click on Shared Documents to add files or on the <ddd /></hithighlightedsummary>
          <hithighlightedproperties>
            <HHTitle><c0>Ybbest</c0></HHTitle>
            <HHUrl>http://<c0>ybbest</c0></HHUrl>
          </hithighlightedproperties>
          <contentclass>STS_Site</contentclass>
          <isdocument>False</isdocument>
          <picturethumbnailurl></picturethumbnailurl>
          <popularsocialtags />
          <picturewidth>0</picturewidth>
          <pictureheight>0</pictureheight>
          <datepicturetaken></datepicturetaken>
          <serverredirectedurl></serverredirectedurl>
          <fileextension></fileextension>
          <ows_metadatafacetinfo></ows_metadatafacetinfo>
          <imageurl imageurldescription="SharePoint Site Collection">/_layouts/images/siteicon_16x16.png</imageurl>
        </Result>
        <TotalResults>69</TotalResults>
        <NumberOfResults>50</NumberOfResults>
      </All_Results>

    3. Read what has been returned in the raw XML and start modifying the XSLT to customise your search results page.

    4. You can also link an external XSLT to the web part. It can be set in the Miscellaneous section of the web part. You can also set it programmatically using a feature receiver; you can download the source code to do so here.

    References:
    http://stackoverflow.com/questions/6548104/change-xslt-of-the-searchresultwebpart-during-the-featureactivated
    http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2010/04/05/a-quick-guide-to-coreresultswebpart-configuration-changes-in-sharepoint-2010.aspx
    http://www.tonytestasworld.com/post/2011/01/30/HowTo-display-SharePoint-Search-results-as-raw-XML.aspx

    Read the article

  • Thanks to .Net Developers Network in Bristol - Hyper-V for Developers slides now available for download

    - by Liam Westley
    Thanks to the guys at .Net Developers Network (http://www.dotnetdevnet.com) for inviting me down to Bristol to present on Hyper-V for Developers. There were some great questions and genuine interest, which is especially surprising for a topic that often has a soporific effect on developers. You can download the original PowerPoint file or the PDF complete with speaker notes from here:

    http://www.tigernews.co.uk/blog-twickers/dotnetdevnet/HyperV4Devs-PPT.zip
    http://www.tigernews.co.uk/blog-twickers/dotnetdevnet/HyperV4Devs-PDF.zip

    I should be back for DDD SouthWest (http://www.dddsouthwest.com). Voting opens on Monday 29th March 2010, and for a change my proposed topic is not about virtualisation! Finally, apologies to Guy Smith-Ferrier for dragging him away from the Bristol Girl Geek Dinners (http://bristolgirlgeekdinners.com) crew so I could catch my train back to London.

    Read the article

  • Where do we put "asking the world" code when we separate computation from side effects?

    - by Alexey
    According to the Command-Query Separation principle, as well as the "Thinking in Data" and "DDD with Clojure" presentations, one should separate side effects (modifying the world) from computations and decisions, so that both parts are easier to understand and test. This leaves an unanswered question: where, relative to that boundary, should we put "asking the world"? On the one hand, requesting data from external systems (like a database, external services' APIs, etc.) is not referentially transparent and thus should not sit together with pure computational and decision-making code. On the other hand, it's problematic, or maybe impossible, to tease such requests apart from the computational part and pass the data in as an argument, because we may not know in advance which data we may need to request.
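
    A minimal C# sketch of one common shape for this (all names invented for illustration): instead of baking the I/O into the decision logic, the impure boundary hands the pure core data-fetching functions, so the core can "ask" without knowing up front which data it will need:

      using System;

      // Pure: decides whether an order may ship. It asks for data through
      // delegates instead of touching a database or service itself.
      static class ShippingPolicy
      {
          public static bool MayShip(int orderId,
                                     Func<int, decimal> creditLimitFor,
                                     Func<int, decimal> outstandingBalanceFor)
          {
              // Referentially transparent for fixed inputs: easy to test.
              return outstandingBalanceFor(orderId) <= creditLimitFor(orderId);
          }
      }

      class Boundary
      {
          static void Main()
          {
              // Impure shell: in a real system these lambdas would hit a
              // database or an external API; here they are stubbed.
              bool ok = ShippingPolicy.MayShip(
                  42,
                  id => 1000m,   // credit limit lookup (stub)
                  id => 250m);   // outstanding balance lookup (stub)
              Console.WriteLine(ok);
          }
      }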

    Read the article

  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post, I think I now have a better understanding of the benefits of CQRS (separate from the benefits of Event Sourcing). I'm going to try to sum it up here and point out some areas where I could still use some advice.

    CQRS Benefits

    It sounds like the primary benefit of CQRS as an architecture is that it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit of this: in general, the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value "noise" to the domain model that would dilute the high-value (command) behavior - sorting, paging, filtering, pre-fetch paths, etc. Also, the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries.

    Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be "pass-through" type commands with little to no logic that just generate events. If these can be implemented directly in the command handler and never touch the domain model, this allows you to slim down the domain model even more. Also, if you were to do event sourcing without CQRS, you would no longer have a database containing the current state (only the domain model would), which makes it difficult (or impossible) to support the ad-hoc querying and/or reporting that is common in most business software.

    Of course CQRS provides some great scalability benefits, and not only scalability: I have to assume that it provides extremely low latency for most operations, especially if you have an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems like it only provides 1.5x scaling, since even without CQRS you're going to have your client loosely coupled to your domain - which is still a great benefit to be able to realize.

    Questions / Concerns

    If all the queries against an aggregate get pulled out to the query layer, and the only commands for that aggregate can be handled in a "pass-through" manner with the command handler directly generating events, is it possible to have an aggregate that isn't modeled in the domain model at all? Are there any issues or downsides to this? (See the sketch below for what such a handler might look like.)

    I know from the feedback on my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn't be necessary if it were only servicing commands. My question is: do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer - should I model that in my domain model or not?

    I like the idea of using the query side of the architecture as a place to put junior devs, where the risk of them screwing something up has minimal impact. But I'm not sold on the idea that you can actually outsource it. Like I said in one of my comments on my previous post, the code to handle a query and generate DTOs is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge - knowing which events to listen for to update each of the de-normalized tables, and what changes need to be made when each event is processed. I don't know about everybody else, but in my experience, having outsourced developers do anything that requires significant domain knowledge has never been successful. And if you need to spec out, for each new query, which events to listen to and what to do with each one - well, that's probably going to be just as much work to document as it would be to just implement it.

    Greg made the point in a comment that doing an aggregate query like "Total Sales By Customer" is going to be inefficient if you use event sourcing but not CQRS. I don't understand why that would be the case. I imagine in that case you'd simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection; the application services layer would then generate DTOs off the Customers collection that included, say, CustomerID, CustomerName and TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don't see why that would be any more inefficient in a DDD + Event Sourcing implementation than in a typical DDD implementation.

    Like I mentioned in my last post, I still have some questions about query logic that haven't been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides. My main concern with the query logic was that I know I could just toss it all into the query side, but I was worried that I would lose the benefits of using CQRS in the first place if I did that. I want to elaborate more on this with some example situations in an upcoming post.
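
    A small C# sketch of the "pass-through" command handler discussed above (all type names invented for illustration): the handler validates trivially and emits an event without ever loading a domain aggregate, which is why such an aggregate may not need to exist in the domain model at all:

      using System;

      // Command and event DTOs (illustrative names)
      public class RenameCustomer
      {
          public Guid CustomerId { get; set; }
          public string NewName { get; set; }
      }

      public class CustomerRenamed
      {
          public Guid CustomerId { get; set; }
          public string NewName { get; set; }
          public DateTime At { get; set; }
      }

      public interface IEventBus { void Publish(object evt); }

      // A pass-through handler: no aggregate is loaded or touched.
      public class RenameCustomerHandler
      {
          private readonly IEventBus _bus;
          public RenameCustomerHandler(IEventBus bus) { _bus = bus; }

          public void Handle(RenameCustomer cmd)
          {
              if (string.IsNullOrEmpty(cmd.NewName))
                  throw new ArgumentException("Name must not be empty.");
              _bus.Publish(new CustomerRenamed
              {
                  CustomerId = cmd.CustomerId,
                  NewName = cmd.NewName,
                  At = DateTime.UtcNow
              });
          }
      }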

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
    I am going to write a series of articles on using Windows Communication Foundation (WCF) to develop client and server applications, and this is the first part of that series.

    What is WCF

    As Juval Löwy puts it in his Programming WCF Services book, WCF provides an SDK for developing and deploying services on Windows: a runtime environment to expose CLR types as services and consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off-the-shelf plumbing - service hosting, instance management, reliability, transaction management, security, etc. - which greatly increases productivity.

    Scenario: Let's consider a typical bank customer trying to create an account, deposit an amount, and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, each working with the database directly. We will run test cases with and without transactional support across services. In this post we will build contracts, services, a data access layer, and unit tests to verify end-to-end communication - nothing big here; we'll dig into other features of WCF in subsequent posts with incremental changes.

    In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide functionality to execute various pieces of business logic on the server, while clients provide interaction with the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding independent, i.e. they can run against TCP and HTTP protocols without any significant changes to the code.

    Solution

    - Profile: for creating a new bank customer and getting details about an existing customer (ProfileContract, ProfileService)
    - Checking Account: to get the checking account balance, deposit or withdraw an amount (CheckingAccountContract, CheckingAccountService)
    - Savings Account: to get the savings account balance, deposit or withdraw an amount (SavingsAccountContract, SavingsAccountService)
    - ServiceHost: to host the services, i.e. run them at a particular address, binding and contract where clients can connect
    - Client: helps the end user use the services, like creating an account and transferring amounts between accounts
    - BankDAL: the data access layer that works with the database

    BankDAL

    It's a no-brainer to use an ORM, as many mature products are currently available, including Linq2Sql, Entity Framework (EF), LLBLGen Pro, etc. For this exercise I am going to use Entity Framework 4.0 CTP 5 with the code-first approach. There are two approaches when working with data: data driven and code driven. In data driven, we start by designing tables and their constraints in the database and generate entities in code; in code driven (code first), entities are defined in code and the metadata generated from the entities is used by EF to create the tables and table constraints. In previous versions the entity classes had to derive from EF-specific base classes; in EF 4 it is not required to derive from any EF classes - the entities are not only persistence ignorant, but also enable full test-driven development using mock frameworks. The application consists of 3 entities: the Customer entity, which contains the customer details, and CheckingAccount and SavingsAccount, which hold the respective account balances.
    We could have introduced an Account base class for CheckingAccount and SavingsAccount, which is certainly possible with EF mappings, but to keep it simple we are just going to follow a 1-to-1 mapping between entities and tables. Let's start by defining a class called Customer which will be mapped to the Customer table; observe that the class is simply a plain old CLR object (POCO) and has no reference to EF at all.

      using System;

      namespace BankDAL.Model
      {
          public class Customer
          {
              public int Id { get; set; }
              public string FullName { get; set; }
              public string Address { get; set; }
              public DateTime DateOfBirth { get; set; }
          }
      }

    In order to inform EF about the Customer entity we have to define a database context, with a property of type DbSet<> for every POCO which needs to be mapped to a table in the database. EF uses convention over configuration to generate the metadata, resulting in much less configuration.

      using System.Data.Entity;

      namespace BankDAL.Model
      {
          public class BankDbContext : DbContext
          {
              public DbSet<Customer> Customers { get; set; }
          }
      }

    Entity constraints can be defined through attributes on the Customer class or using the fluent syntax (no need to wrestle with XML files) in a CustomerConfiguration class. By defining constraints in a separate class we can keep the POCOs clean, without corrupting entity classes with database-specific information.

      using System;
      using System.Data.Entity.ModelConfiguration;

      namespace BankDAL.Model
      {
          public class CustomerConfiguration : EntityTypeConfiguration<Customer>
          {
              public CustomerConfiguration()
              {
                  Initialize();
              }

              private void Initialize()
              {
                  // Setting the primary key
                  this.HasKey(e => e.Id);

                  // Setting required fields
                  this.HasRequired(e => e.FullName);
                  this.HasRequired(e => e.Address);
                  // Todo: can't create a required constraint as DateOfBirth is not a reference type, research it
                  // this.HasRequired(e => e.DateOfBirth);
              }
          }
      }

    Any queries executed against the Customers property in BankDbContext are executed against the Customers table. By convention, EF looks for a connection string with the key BankDbContext when working with the context.

    We are going to define a helper class to work with the Customer entity, with methods for querying, adding new entities, etc. These are known as repository classes, e.g. CustomerRepository:

      using System;
      using System.Data.Entity;
      using System.Linq;
      using BankDAL.Model;

      namespace BankDAL.Repositories
      {
          public class CustomerRepository
          {
              private readonly IDbSet<Customer> _customers;

              public CustomerRepository(BankDbContext bankDbContext)
              {
                  if (bankDbContext == null)
                      throw new ArgumentNullException();
                  _customers = bankDbContext.Customers;
              }

              public IQueryable<Customer> Query()
              {
                  return _customers;
              }

              public void Add(Customer customer)
              {
                  _customers.Add(customer);
              }
          }
      }

    From the above code you can observe that the Query method returns customers as IQueryable, i.e. customers are retrieved only when actually used, that is, iterated over. Returning IQueryable also allows filtering and joining statements to be executed from the business logic using lambda expressions, without cluttering the data access layer with tens of methods.
    Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other:

      using System;
      using System.Data.Entity;
      using System.Linq;
      using BankDAL.Model;

      namespace BankDAL.Repositories
      {
          public class CheckingAccountRepository
          {
              private readonly IDbSet<CheckingAccount> _checkingAccounts;

              public CheckingAccountRepository(BankDbContext bankDbContext)
              {
                  if (bankDbContext == null)
                      throw new ArgumentNullException();
                  _checkingAccounts = bankDbContext.CheckingAccounts;
              }

              public IQueryable<CheckingAccount> Query()
              {
                  return _checkingAccounts;
              }

              public void Add(CheckingAccount account)
              {
                  _checkingAccounts.Add(account);
              }

              public IQueryable<CheckingAccount> GetAccount(int customerId)
              {
                  return (from act in _checkingAccounts
                          where act.CustomerId == customerId
                          select act);
              }
          }
      }

    The repository classes look very similar for the Query and Add methods; with the help of C# generics and the repository pattern (Martin Fowler) we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use the repository pattern with EF, which we will implement in subsequent articles along with WCF Unity lifetime managers by Drew.

    Contracts

    It is very easy to follow a contract-first approach with WCF: define the interface and apply the ServiceContract and OperationContract attributes. The IProfile contract exposes functionality for creating a customer and getting customer details.

      using System;
      using System.ServiceModel;
      using BankDAL.Model;

      namespace ProfileContract
      {
          [ServiceContract]
          public interface IProfile
          {
              [OperationContract]
              Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);

              [OperationContract]
              Customer GetCustomer(int id);
          }
      }

    The ICheckingAccount contract exposes functionality for working with a checking account, i.e. getting the balance and depositing or withdrawing an amount. ISavingsAccount looks the same as the checking account contract.

      using System.ServiceModel;

      namespace CheckingAccountContract
      {
          [ServiceContract]
          public interface ICheckingAccount
          {
              [OperationContract]
              decimal? GetCheckingAccountBalance(int customerId);

              [OperationContract]
              void DepositAmount(int customerId, decimal amount);

              [OperationContract]
              void WithdrawAmount(int customerId, decimal amount);
          }
      }

    Services

    Having covered the data access layer and the contracts, here comes the core of the business logic, i.e. the services.
    ProfileService implements the IProfile contract for creating a customer and getting customer details using CustomerRepository.
      using System;
      using System.Linq;
      using System.ServiceModel;
      using BankDAL;
      using BankDAL.Model;
      using BankDAL.Repositories;
      using ProfileContract;

      namespace ProfileService
      {
          [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
          public class Profile : IProfile
          {
              public Customer CreateAccount(string customerName, string address, DateTime dateOfBirth)
              {
                  Customer cust = new Customer
                  {
                      FullName = customerName,
                      Address = address,
                      DateOfBirth = dateOfBirth
                  };

                  using (var bankDbContext = new BankDbContext())
                  {
                      new CustomerRepository(bankDbContext).Add(cust);
                      bankDbContext.SaveChanges();
                  }
                  return cust;
              }

              public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth)
              {
                  return CreateAccount(customerName, address, dateOfBirth);
              }

              public Customer GetCustomer(int id)
              {
                  return new CustomerRepository(new BankDbContext()).Query()
                      .Where(i => i.Id == id).FirstOrDefault();
              }
          }
      }

    From the above code you will observe that we call bankDbContext's SaveChanges method, and that there is no save method specific to the Customer entity: EF manages all changes centrally at the context level, and all pending changes are submitted in a batch, represented as a Unit of Work.

    Similarly, the Checking service implements the ICheckingAccount contract using CheckingAccountRepository; notice that we throw an overdraft exception if the balance falls below zero. WCF has its own way of raising exceptions using fault contracts, which will be explained in subsequent articles. SavingsAccountService is similar to CheckingAccountService.

      using System;
      using System.Linq;
      using System.ServiceModel;
      using BankDAL.Model;
      using BankDAL.Repositories;
      using CheckingAccountContract;

      namespace CheckingAccountService
      {
          [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
          public class Checking : ICheckingAccount
          {
              public decimal? GetCheckingAccountBalance(int customerId)
              {
                  using (var bankDbContext = new BankDbContext())
                  {
                      CheckingAccount account = (new CheckingAccountRepository(bankDbContext)
                          .GetAccount(customerId)).FirstOrDefault();

                      if (account != null)
                          return account.Balance;

                      return null;
                  }
              }

              public void DepositAmount(int customerId, decimal amount)
              {
                  using (var bankDbContext = new BankDbContext())
                  {
                      var checkingAccountRepository = new CheckingAccountRepository(bankDbContext);
                      CheckingAccount account = (checkingAccountRepository.GetAccount(customerId))
                          .FirstOrDefault();

                      if (account == null)
                      {
                          account = new CheckingAccount() { CustomerId = customerId };
                          checkingAccountRepository.Add(account);
                      }

                      account.Balance = account.Balance + amount;
                      if (account.Balance < 0)
                          throw new ApplicationException("Overdraft not accepted");

                      bankDbContext.SaveChanges();
                  }
              }

              public void WithdrawAmount(int customerId, decimal amount)
              {
                  DepositAmount(customerId, -1 * amount);
              }
          }
      }

    BankServiceHost

    The host acts as glue, binding contracts to their services and exposing the endpoints. The services can be exposed either through code or through a configuration file; the configuration file is preferred, as it allows runtime changes to service behavior even after deployment. We have 3 services, and for each service you need to define a name (the class that implements the service, with fully qualified namespace) and an endpoint known as ABC, i.e. address, binding and contract.
    We are using netTcpBinding and have defined a base address for each of the contracts:

      <system.serviceModel>
        <services>
          <service name="ProfileService.Profile">
            <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/>
            <host>
              <baseAddresses>
                <add baseAddress="net.tcp://localhost:1000/Profile"/>
              </baseAddresses>
            </host>
          </service>
          <service name="CheckingAccountService.Checking">
            <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
            <host>
              <baseAddresses>
                <add baseAddress="net.tcp://localhost:1000/Checking"/>
              </baseAddresses>
            </host>
          </service>
          <service name="SavingsAccountService.Savings">
            <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
            <host>
              <baseAddresses>
                <add baseAddress="net.tcp://localhost:1000/Savings"/>
              </baseAddresses>
            </host>
          </service>
        </services>
      </system.serviceModel>

    We have to open the services by creating a service host, which will handle incoming requests from clients.

      using System;

      namespace ServiceHost
      {
          class Program
          {
              static void Main(string[] args)
              {
                  CreateHosts();
                  Console.ReadLine();
              }

              private static void CreateHosts()
              {
                  CreateHost(typeof(ProfileService.Profile), "Profile Service");
                  CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service");
                  CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service");
              }

              private static void CreateHost(Type type, string hostDescription)
              {
                  System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type);
                  host.Open();

                  if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0
                      && host.ChannelDispatchers[0].Listener != null)
                      Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri);
                  else
                      Console.WriteLine("Failed to start: " + hostDescription);
              }
          }
      }

    BankClient

    The client has no knowledge of the service business logic other than the functionality exposed through the contract, the endpoints, and a proxy to work against. The endpoint data and service proxy can be generated by right-clicking on the project references, choosing 'Add Service Reference' and entering the service endpoint address. Or, if you have access to the source, you can manually reference the contract dlls and update the client's configuration file to point to the service endpoint - provided both server and client are built on the .NET framework. One of the pros of the manual approach is that you don't have to work with messy generated code files.
      <system.serviceModel>
        <client>
          <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile"
                    binding="netTcpBinding" contract="ProfileContract.IProfile"/>
          <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking"
                    binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
          <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings"
                    binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
        </client>
      </system.serviceModel>

    The client uses a facade to connect to the services:

      using System.ServiceModel;
      using CheckingAccountContract;
      using ProfileContract;
      using SavingsAccountContract;

      namespace Client
      {
          public class ProxyFacade
          {
              public static IProfile ProfileProxy()
              {
                  return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel();
              }

              public static ICheckingAccount CheckingAccountProxy()
              {
                  return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount"))
                      .CreateChannel();
              }

              public static ISavingsAccount SavingsAccountProxy()
              {
                  return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount"))
                      .CreateChannel();
              }
          }
      }

    With that in place, let's get our unit tests going:

      using System;
      using System.Diagnostics;
      using BankDAL.Model;
      using NUnit.Framework;
      using ProfileContract;

      namespace Client
      {
          [TestFixture]
          public class Tests
          {
              private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount)
              {
                  ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
                  ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);
              }

              private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount)
              {
                  ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount);
                  ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount);
              }

              [Test]
              public void CreateAndGetProfileTest()
              {
                  IProfile profile = ProxyFacade.ProfileProxy();
                  const string customerName = "Tom";
                  int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id;
                  Customer customer = profile.GetCustomer(customerId);
                  Assert.AreEqual(customerName, customer.FullName);
              }

              [Test]
              public void DepositWithDrawAndTransferAmountTest()
              {
                  IProfile profile = ProxyFacade.ProfileProxy();
                  string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss");
                  var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1));

                  // Deposit to savings
                  ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100);
                  ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25);
                  Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
                  // Withdraw
                  ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30);
                  Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));

                  // Deposit to checking
                  ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60);
                  ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40);
                  Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
                  // Withdraw
                  ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30);
                  Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

                  // Transfer from savings to checking
                  TransferFundsFromSavingsToCheckingAccount(customer.Id, 10);
                  Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
                  Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

                  // Transfer from checking to savings
                  TransferFundsFromCheckingToSavingsAccount(customer.Id, 50);
                  Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
                  Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
              }

              [Test]
              public void FundTransfersWithOverDraftTest()
              {
                  IProfile profile = ProxyFacade.ProfileProxy();
                  string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");

                  var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;

                  ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100);
                  TransferFundsFromSavingsToCheckingAccount(customerId, 80);
                  Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
                  Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));

                  try
                  {
                      TransferFundsFromSavingsToCheckingAccount(customerId, 30);
                  }
                  catch (Exception e)
                  {
                      Debug.WriteLine(e.Message);
                  }

                  Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));
                  Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
              }
          }
      }

    We are creating a new instance of the channel for every operation; we will look into instance management, and how creating a new channel instance affects it, in subsequent articles. The first two test cases deal with the creation of a Customer and the deposit, withdrawal and transfer of amounts between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting: the customer starts by depositing $100 into the savings account, followed by a transfer of $80 into the checking account, leaving $20 in savings. The customer then initiates a $30 transfer from savings to checking, resulting in an overdraft exception on savings after $30 has already been deposited into checking. As we are not running both requests in a transaction, the customer ends up with more money than he started with. We will look into transaction handling in subsequent posts. Make sure the ServiceHost project is set as the startup project, and start the solution. Run the test cases either from the NUnit client or from TestDriven.Net/ReSharper, whichever is your favorite tool. Make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
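
    As a hedged preview of that transaction handling (this is not part of the article's own code): on the client, both calls would typically be wrapped in a System.Transactions.TransactionScope, which only flows to the services if the binding enables transactionFlow and the operations opt in - assumptions about how these particular services would need to be reconfigured:

      using System.Transactions;

      // Assumes transactionFlow="true" on the netTcpBinding,
      // [TransactionFlow(TransactionFlowOption.Allowed)] on the operations,
      // and TransactionScopeRequired = true on the service implementations.
      [Test]
      public void TransactionalTransferSketch()
      {
          int customerId = ProxyFacade.ProfileProxy()
              .CreateCustomer("Txn" + DateTime.Now.ToString("HH:mm:ss"), "NJ", new DateTime(1980, 1, 1)).Id;
          ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 20);

          try
          {
              using (var scope = new TransactionScope())
              {
                  ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, 30);
                  ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, 30); // overdrafts
                  scope.Complete(); // commits only if both calls succeeded
              }
          }
          catch (Exception)
          {
              // Both operations roll back together, so no money is created.
          }

          Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
      }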

    Read the article

  • Hyper-V for Developers - presentation from London .NET Users and VBUG Bracknell

    - by Liam Westley
    Thanks to both the London .NET User Group and VBUG Bracknell for allowing me to present my Hyper-V for Developers talk last week. A weekend at DDD Scotland followed by two user group presentations means I'm a bit late getting the presentations uploaded to the blog, so many apologies if you've been waiting.

    LDNUG - www.tigernews.co.uk/blog-twickers/LDNUG-HyperV4Devs.zip
    VBUG - www.tigernews.co.uk/blog-twickers/VBUG-HyperV4Devs.zip

    Also, at VBUG Bracknell I was asked whether you can configure a Hyper-V server to use wireless networking (which might be useful if you use a laptop for demonstrations). Here's the post from Ben Armstrong (Virtual PC Guy) which details how that can be configured:

    http://blogs.msdn.com/virtual_pc_guy/archive/2008/01/09/using-hyper-v-with-a-wireless-network-adapter.aspx

    ... and it's also detailed on the TechNet wiki as part of running Hyper-V on a laptop:

    social.technet.microsoft.com/wiki/contents/articles/hyper-v-how-to-run-hyper-v-on-a-laptop.aspx

    Read the article

  • Microsoft Async CTP for DDD9 UK Developer Conference - slides and source code now available

    - by Liam Westley
    Thanks to all the nice comments from people who attended my presentation at DDD9, and extra thanks to Jon Skeet, Mark Rendle and Mike Hadlow for coming on stage for the last ten minutes to help debate whether the Async CTP is the correct way to go to enhance C# 5.0. The presentation is available at Prezi.com, http://prezi.com/gysz5nohltye, which I can recommend as a refreshing change from the more standard PowerPoint slide decks. I've also uploaded all the code samples into a single ZIP file. You will need to install the Async CTP to be able to run them, and I would remind everyone that the current Async CTP is not compatible with either ASP.NET MVC 3 RTM or Visual Studio 2010 Service Pack 1, so you may need to use a test system or virtual machine to play with it! Source code - http://www.tigernews.co.uk/blog-twickers/ddd9/AsyncSrc.zip Again, thanks for all the positive feedback, and thanks to the whole of the DDD team for putting on a fantastic conference for all the presenters and delegates.

    Read the article

  • Java EE@Princeton Java Meetup

    - by reza_rahman
    On November 28th, I spoke at the Princeton Java Meetup Group. It's a well-organized group led by veteran Java champion Yakov Fain - I have spoken there numerous times. I did my Java EE 6 DDD talk (the same one from Java2Days 2012): Domain Driven Design with Java EE 6. The code examples are available here: https://blogs.oracle.com/reza/resource/dddsample.zip. Give me a shout if you would like to get it up and running. The talk went very well - the official RSVP shows 33 attended. I gave away a few GlassFish T-shirts, laptop stickers and copies of Arun Gupta's Java EE 6 pocket guide. More details on the talk here. I most certainly look forward to speaking there again.

    Read the article

  • Best way to find a freelance C# developer

    - by sturdytree
    I am a developer based in London, developing a desktop WPF business application using C# (+ MVVM/DDD/NHibernate/Unity/PostSharp...). I need to find someone to review my code, suggest improvements, and then write integration tests. This could potentially be an ongoing relationship, expanding into writing new features etc. It would suit an experienced developer with a strong interest in design principles and writing clean code, who can work part time. However, the developer would need to work at my home-based office (although timing is flexible). Websites such as Elance and oDesk seem to tie you in, and I would prefer to pay a one-off introduction fee and then deal with the developer directly. Stack Exchange seems quite expensive, but is a possibility if I don't find an alternative. So, are there any other websites I can try?

    Read the article

  • [News] Is the ALT.NET community in decline?

    The ALT.NET community established itself as the symbol of a battle against the "quick and dirty" development style promoted at the time by Microsoft (DataSets, monolithic applications, etc.). Built on the concepts of agility (unit testing, O/R mapping, DDD and n-tier architecture), the movement initially made a significant impact, but seems to be running out of steam over time. In this post, Ian Cooper begins an in-depth reflection on the relevance and usefulness of this community - all the more so since Microsoft has since largely cleaned up its own culture.

    Read the article

  • Should I limit my type name suffix vocabulary when using OOP?

    - by Den
    My co-workers tend to think that it is better to limit non-domain type suffixes to a small fixed set of OOP-pattern-inspired words, e.g.:

    *Service
    *Repository
    *Factory
    *Manager
    *Provider

    I believe there is no reason not to extend that set with more names, e.g. (a "translation" into the previous vocabulary is given in brackets):

    *Distributor (= *DistributionManager or *SendingService)
    *Generator
    *Browser (= *ReadonlyRepositoryService)
    *Processor
    *Manipulator (= *StateMachineManager)
    *Enricher (= *EnrichmentService)

    (*) denotes some domain word, e.g. "Order", "Student", "Item", etc. The domain is probably not complex enough to use specialized approaches such as DDD, which could otherwise drive the naming.

    Read the article

  • Is Domain Driven Design useful / productive for not so complex domains?

    - by Elijah
    When assessing a potential project at work, I suggested that it might be advantageous to use a domain-driven design approach for its object model. The project does not have an excessively complex domain, so my coworker threw this at me: it has been said that DDD is favorable in instances where there is a complex domain model ("...It applies whenever we are operating in a complex, intricate domain" - Eric Evans). What I'm lost on is: how do you define the complexity of a domain? Can it be defined by the number of aggregate roots in the domain model? Is the complexity of a domain in the interaction of objects? The domain that we are assessing is related to online publishing and content management.

    Read the article

  • FFmpeg multi-pass encoding

    - by Levan
    Sorry, I am really new to this and have problems doing some tasks without help. I have this terminal command:

      ffmpeg \
        -y \
        -i '/media/levan/BEEA60D8EA608E89/Downloads/Videos/Tony Braxton - Un-Break My Heart.VOB' \
        -s 1920x1080 \
        -aspect 16:9 \
        -r 25 \
        -b 15550k \
        -bt 19792k \
        -vcodec libtheora \
        -acodec libvorbis \
        -ac 2 \
        -ar 48000 \
        -ab 320k \
        ddd.ogg

    and I want multi-pass encoding for the output video, but how do I accomplish this? I found that I must write a -pass n option somewhere, but where to write it I do not know. I tested this and wrote -pass 3 at the end, but then the terminal just showed a > symbol.
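
    A hedged sketch of the usual two-pass invocation (assuming the codec supports ffmpeg's -pass/-passlogfile mechanism; input.VOB stands in for the quoted VOB path above). The > prompt in the question is just the shell waiting for more input, typically after a trailing backslash or an unclosed quote:

      # Pass 1: analyse only; discard the video output and skip audio
      ffmpeg -y -i input.VOB -s 1920x1080 -aspect 16:9 -r 25 \
             -b 15550k -bt 19792k -vcodec libtheora \
             -pass 1 -an -f rawvideo /dev/null

      # Pass 2: encode for real using the log from pass 1, now with audio
      ffmpeg -i input.VOB -s 1920x1080 -aspect 16:9 -r 25 \
             -b 15550k -bt 19792k -vcodec libtheora \
             -acodec libvorbis -ac 2 -ar 48000 -ab 320k \
             -pass 2 ddd.ogg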

    Read the article

  • Updating query results

    - by Francisco Garcia
    Within a DDD and CQRS context, a query result is displayed as table rows. Whenever new rows are inserted or deleted, their positions must be calculated by comparing the previous query result with the most recent one. This is needed to animate new or deleted rows. The model of my view contains an array of the displayed query results, but I need a place to compare its contents against the latest query. Right now I consider my view model part of my application layer, but the comparison of two query result sets seems like something that must be done within the domain layer. Which component should cache a query result, and which one should compare them? Are view models (and their cached contents) supposed to be in the application layer?
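
    For the comparison itself (independent of which layer ends up owning it), a minimal C# sketch of diffing two result sets by identifier - the type and property names are invented for illustration:

      using System.Collections.Generic;
      using System.Linq;

      public class RowDto { public int Id { get; set; } }

      public class ResultDiff
      {
          public List<RowDto> Added { get; private set; }
          public List<RowDto> Removed { get; private set; }

          // Compare two query results by ID to find the rows to animate in/out.
          public static ResultDiff Between(IList<RowDto> previous, IList<RowDto> current)
          {
              var prevIds = new HashSet<int>(previous.Select(r => r.Id));
              var currIds = new HashSet<int>(current.Select(r => r.Id));
              return new ResultDiff
              {
                  Added = current.Where(r => !prevIds.Contains(r.Id)).ToList(),
                  Removed = previous.Where(r => !currIds.Contains(r.Id)).ToList()
              };
          }
      }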

    Read the article

  • Domain Model and Querying

    - by Tyrsius
    I am new to DDD, having worked only on transaction-script apps with an anemic model, or just Big Balls of Mud, so please forgive any terminology I abuse. I am trying to understand the proper separation between the domain model and the repository. What is the proper way to construct a domain object that is coming from a database, assuming the (incredibly simplified) need to query for objects by status (returning an enumerable) or by ID? Should a factory build the objects, exposing methods for GetByStatus() and GetByID() and using a DI'd repository? Should a repository be called directly, knowing how to build a domain model from the DTO? Should the domain model have a constructor for get-by-ID, using a DI'd repository to load the initial state, with some other (?) method for the list? I am not really sure what the best way would be, and this question has an answer advocating each one (and they are certainly mutually exclusive).

    Read the article

  • Hosting Mercurial on IIS7

    - by Lasse V. Karlsen
    Note: this might be best suited to serverfault.com, but since it is about hosting a programmer's source code repository, I am not entirely sure. I'm posting here first, trusting that it'll be migrated if necessary.

    I'm attempting to host clones of my Mercurial repositories on my own server (I have the main repo somewhere else), and I'm attempting to set up Mercurial under IIS. I followed the guide here, but I get an error message. (Solved: see the bottom of this question for details.) The error message is:

      mercurial.error.RepoError: repository /path/to/repo/or/config not found

    Here's what I did:

    1. I installed Mercurial 1.5.2.
    2. I created c:\inetpub\hg.
    3. I downloaded the hg source as per the instructions on the web page, and copied the hgweb.cgi file into c:\inetpub\hg (note, the web page says hgwebdir.cgi, but this particular file does not exist; hgweb.cgi does, however - can this be the source of the problem?).
    4. I added a hgweb.config with the following contents:

      [paths]
      repo1 = C:/hg/**

      [web]
      style = monoblue

    5. I created c:\hg, created a sub-directory test inside it, and created a repository there.
    6. I installed Python 2.6.5, the latest 2.6 version from the website (the web page mentions I need to install the correct version or I'll get a specific error message; since I don't get an error message that looks remotely like the one mentioned, I assume 2.6.5 is not the problem).
    7. I added a new virtual host hg.vkarlsen.no, pointing it to c:\inetpub\hg.
    8. For this host, I added a script mapping under the Handler Mappings section, mapping *.cgi to c:\python26\python.exe -u %s %s as per the instructions on the website.

    I then tested it by navigating to http://hg.vkarlsen.no/hgweb.cgi, but I get an error message. To make it easier to test, I dropped to a command prompt, navigated to c:\inetpub\hg, and executed the following command (the error message is part of the text below):

      C:\inetpub\hg>c:\python26\python.exe -u hgweb.cgi
      Traceback (most recent call last):
        File "hgweb.cgi", line 16, in <module>
          application = hgweb(config)
        File "mercurial\hgweb\__init__.pyc", line 12, in hgweb
        File "mercurial\hgweb\hgweb_mod.pyc", line 30, in __init__
        File "mercurial\hg.pyc", line 82, in repository
        File "mercurial\localrepo.pyc", line 2221, in instance
        File "mercurial\localrepo.pyc", line 62, in __init__
      mercurial.error.RepoError: repository /path/to/repo/or/config not found

    Does anyone know what I need to look at in order to fix this?

    Edit: Ok, I think I managed to get one step closer to the solution, but I'm still stumped. I realized the .cgi file is a Python script, not something compiled, so I opened it for editing, and these lines were sitting in it:

      # Path to repo or hgweb config to serve (see 'hg help hgweb')
      config = "/path/to/repo/or/config"

    So this was the source of the specific error message. If I change the line to this:

      config = "c:\\hg\\test"

    then I can navigate the empty repository through the Mercurial web interface. However, I want to host multiple repositories, and seeing as the comment says I can also link to a hgweb config file, I tried this:

      config = "c:\\inetpub\\hg\\hgweb.config"

    But then I get the following error message:

      mercurial.error.Abort: c:\inetpub\hg\hgweb.config: not a Mercurial bundle file
      Exception ImportError: 'No module named shutil' in <bound method bundlerepository.__del__ of <mercurial.bundlerepo.bundlerepository object at 0x0260A110>> ignored

    Nothing I've tried for the config variable seems to work: config = "hgweb.config", config = "c:\\hg\\hgweb.config", and various other variations I don't remember. So, still stumped; pointers, anyone?

    Solved: I ended up having to edit the hgweb.cgi file, from:

      from mercurial.hgweb import hgweb, wsgicgi
      application = hgweb(config)

    to:

      from mercurial.hgweb import hgweb, hgwebdir, wsgicgi
      application = hgwebdir(config)

    Note the added hgwebdir parts there. Here's my hgweb.config file, located in the same directory as the hgweb.cgi file:

      [collections]
      C:/hg/ = C:/hg/

      [web]
      style = gitweb

    This now serves my repositories successfully. Hopefully this question will give others some information if they're as stumped as I was.

    Read the article

  • New project created with FlexMojos archetype throws "Cannot Find Parent Project" Maven exception

    - by ignorant
    This is probably a silly question, but I just can't seem to figure it out. I'm completely new to Flex and Maven.

    Setup:
    - Maven 2.2.1: unzipped, M2_HOME set, and the local repository altered in settings.xml to point to a location on a different drive.
    - Flex 4.0: installed.

    I created a multi-module web app project using FlexMojos:

        mvn archetype:generate
            -DarchetypeRepository=http://repository.sonatype.org/content/groups/flexgroup
            -DarchetypeGroupId=org.sonatype.flexmojos
            -DarchetypeArtifactId=flexmojos-archetypes-modular-webapp
            -DarchetypeVersion=RELEASE

    with the following options:

        groupId=com.test
        artifactId=test
        version=1.0-snapshot
        package=com.tests

    This creates:

        test
        |-- pom.xml
        |-- swc
        |   `-- pom.xml
        |-- swf
        |   `-- pom.xml
        `-- war
            `-- pom.xml

    The parent POM lists swc, swf and war as modules, with the dependency chain war -> swf -> swc, and the parent artifactId of the swf, swc and war modules set to swf, swc and test respectively. On executing mvn in the test folder (clean, or anything else for that matter), I get this error:

        G:\Projects\test>mvn -e
        + Error stacktraces are turned on.
        [INFO] Scanning for projects...
        Downloading: http://repo1.maven.org/maven2/com/test/swc/1.0-snapshot/swc-1.0-snapshot.pom
        [INFO] Unable to find resource 'com.test:swc:pom:1.0-snapshot' in repository central (http://repo1.maven.org/maven2)
        [INFO] ------------------------------------------------------------------------
        [ERROR] FATAL ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to resolve artifact.
        GroupId: com.test
        ArtifactId: swc
        Version: 1.0-snapshot
        Reason: Unable to download the artifact from any repository
          com.test:swc:pom:1.0-snapshot
        from the specified remote repositories:
          central (http://repo1.maven.org/maven2)
        [INFO] ------------------------------------------------------------------------
        [INFO] Trace
        org.apache.maven.reactor.MavenExecutionException: Cannot find parent: com.test:swc for project: com.test:swc-swc:swc:1.0-snapshot for project com.test:swc-swc:swc:1.0-snapshot
            at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:404)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:272)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
            at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: org.apache.maven.project.ProjectBuildingException: Cannot find parent: com.test:swc for project: com.test:swc-swc:swc:1.0-snapshot for project com.test:swc-swc:swc:1.0-snapshot
            at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1396)
            at org.apache.maven.project.DefaultMavenProjectBuilder.buildInternal(DefaultMavenProjectBuilder.java:823)
            at org.apache.maven.project.DefaultMavenProjectBuilder.buildFromSourceFileInternal(DefaultMavenProjectBuilder.java:508)
            at org.apache.maven.project.DefaultMavenProjectBuilder.build(DefaultMavenProjectBuilder.java:200)
            at org.apache.maven.DefaultMaven.getProject(DefaultMaven.java:604)
            at org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:487)
            at org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:560)
            at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:391)
            ... 12 more
        Caused by: org.apache.maven.project.ProjectBuildingException: POM 'com.test:swc' not found in repository: Unable to download the artifact from any repository
          com.test:swc:pom:1.0-snapshot
        from the specified remote repositories:
          central (http://repo1.maven.org/maven2)
        for project com.test:swc
            at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:605)
            at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1392)
            ... 19 more
        Caused by: org.apache.maven.artifact.resolver.ArtifactNotFoundException: Unable to download the artifact from any repository
          com.test:swc:pom:1.0-snapshot
        from the specified remote repositories:
          central (http://repo1.maven.org/maven2)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:228)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:90)
            at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:558)
            ... 20 more
        Caused by: org.apache.maven.wagon.ResourceDoesNotExistException: Unable to download the artifact from any repository
            at org.apache.maven.artifact.manager.DefaultWagonManager.getArtifact(DefaultWagonManager.java:404)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:216)
            ... 22 more
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 1 second
        [INFO] Finished at: Tue Jun 15 19:22:15 GMT+02:00 2010
        [INFO] Final Memory: 1M/2M
        [INFO] ------------------------------------------------------------------------

    It looks like Maven is trying to download the parent POM from the central repository instead of building it as part of the project. What am I missing?
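    The first line of the trace is the clue: the swc module declares com.test:swc as its parent, so Maven looks those coordinates up in the local repository and then on central, rather than using the aggregator POM one directory up. A hedged sketch of what each child POM presumably needs (coordinates assumed from the archetype options above): the <parent> element must match the top-level POM exactly, which here is com.test:test, and relativePath tells Maven to read it straight from disk:

        <!-- in swc/pom.xml (and likewise swf/pom.xml and war/pom.xml) -->
        <parent>
          <groupId>com.test</groupId>
          <artifactId>test</artifactId>
          <version>1.0-snapshot</version>
          <!-- resolve the parent from the directory above instead of
               from a remote repository -->
          <relativePath>../pom.xml</relativePath>
        </parent>

    With the coordinates corrected, running mvn install from the top-level test directory should build all three modules in reactor order without touching central for the parent.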

    Read the article

  • How to Browse Without a Trace with an Ubuntu Live CD

    - by Trevor Bekolay
    No matter how diligently you clear your cache and erase your history, web browsing leaves traces on your computer. If you need to keep your browsing private, then an Ubuntu Live CD is the answer.

    The key to this trick is that the Live CD environment runs completely in RAM, so things like your cache, cookies, and history don’t get saved to a persistent storage location. On a hard drive, even deleted files can be recovered, but once a computer is turned off, the data stored in RAM is unrecoverable. In addition, since the Ubuntu Live CD environment is the same no matter what computer you use it on, there’s very little identifying information that a website can use to track you!

    The first step is to either burn an Ubuntu Live CD, or prepare a non-persistent Ubuntu USB flash drive. Ubuntu treats non-persistent flash drives like CDs, so files will not be written to it, but if you’re paranoid, using a physical CD ensures that nothing gets written to a storage device. Boot up from the CD or flash drive, and choose to Run Ubuntu from the CD or flash drive if prompted (for more detailed instructions on booting from a CD or USB drive, see this article, or our guide on booting from a flash drive even if your BIOS won’t let you).

    Once the graphical Ubuntu environment comes up, you can click on the Firefox icon at the top of the screen to start browsing. If your browsing requires Flash, you can install it by clicking on System at the top-left of the screen, then Administration > Synaptic Package Manager. Click on Settings at the top of the Synaptic window, then select Repositories. Add a check in the checkbox with the label ending in “multiverse”, and click Close. Click the Reload button in the main Synaptic window, and the list of available packages will reload. When the list has reloaded, type “restricted” in the Quick search box, right-click on ubuntu-restricted-extras, and select Mark for Installation. Synaptic will note a number of other packages that will be installed; this list includes audio and video codecs, so after installing these, you should also be able to play downloaded movies and songs. Click Mark to accept the installation of these other packages. Once you return to the main Synaptic window, click the Apply button and go through the dialogs to finish the installation of Flash and the other useful packages. If you open up Firefox now, you’ll have no problems using websites that use Flash. A terminal equivalent of these Synaptic steps is sketched below.

    When you’re done browsing and shut down or restart your computer, all traces of your web browsing will be gone. It’s a bit of work compared to just using a privacy-centric browser, but if it’s very important that your browsing leave no traces on your hard drive, an Ubuntu Live CD is your best bet.
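    If you’d rather skip the clicking, the same result can be had from a terminal in the live session. This is a sketch only: the “lucid” codename is an assumption, so substitute whatever release your Live CD actually is:

        # enable the multiverse component, then pull in Flash and the codecs
        echo "deb http://archive.ubuntu.com/ubuntu lucid multiverse" | sudo tee -a /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install ubuntu-restricted-extras

    Since the live session’s filesystem lives in RAM, these changes vanish on shutdown just like the browsing history does.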

    Read the article

  • How do I install DB2 ODBC?

    - by Justin
    I have been trying, with no success, to install an IBM DB2 ODBC driver so that my PHP server can connect to a database. I've tried installing db2_connect and got all sorts of problems; I tried installing iSeries Access for Linux, and the RPM did not install right, nor did using alien produce any useful results. I've also tried the DB2 Runtime v8.1, with no success. If I attempt to run the RPM, it claims I need dependencies that I can't find in apt-get. Yum is also not very helpful, as it appears I don't have any repositories or package lists set up...

    Running the RPM directly gives me this result in the terminal:

        # rpm -ivh iSeriesAccess-7.1.0-1.0.x86_64.rpm
        rpm: RPM should not be used directly install RPM packages, use Alien instead!
        rpm: However assuming you know what you are doing...
        error: Failed dependencies:
            /bin/ln is needed by iSeriesAccess-7.1.0-1.0.x86_64
            /sbin/ldconfig is needed by iSeriesAccess-7.1.0-1.0.x86_64
            /bin/rm is needed by iSeriesAccess-7.1.0-1.0.x86_64
            /bin/sh is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libc.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libc.so.6(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libc.so.6(GLIBC_2.3)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libdl.so.2()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libdl.so.2(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libgcc_s.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libm.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libm.so.6(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libodbcinst.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libodbc.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libpthread.so.0()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libpthread.so.0(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libpthread.so.0(GLIBC_2.3.2)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            librt.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            librt.so.1(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libstdc++.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libstdc++.so.6(CXXABI_1.3)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libstdc++.so.6(GLIBCXX_3.4)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64

    Using alien and then running dpkg gives me this headache:

        $ alien iSeriesAccess-7.1.0-1.0.x86_64.rpm --scripts
        # dpkg -i iseriesaccess_7.1.0-2_amd64.deb
        (Reading database ... 127664 files and directories currently installed.)
        Preparing to replace iseriesaccess 7.1.0-2 (using iseriesaccess_7.1.0-2_amd64.deb) ...
        Unpacking replacement iseriesaccess ...
        post uninstall processing for iSeriesAccess 1.0...upgrade
        /var/lib/dpkg/info/iseriesaccess.postrm: line 8: [: upgrade: integer expression expected
        Setting up iseriesaccess (7.1.0-2) ...
        post install processing for iSeriesAccess 1.0...configure
        iSeries Access ODBC Driver has been deleted (if it existed at all) because its usage count became zero
        odbcinst: Driver installed. Usage count increased to 1. Target directory is /etc
        odbcinst: Driver installed. Usage count increased to 3. Target directory is /etc
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    So it seems the files installed correctly; at least, my ODBC driver shows up, but db2cli.ini is nowhere to be found. Several questions, then: Is there a better alternative for connecting PHP to DB2, say an Ubuntu package I can just install? Can someone direct me to the steps that will make my Ubuntu server work well with the RPM so I can build my DB2 instance?

    Also, remember I'm connecting to an iSeries remotely. I'm not using the DB2 Express-C thing, even though I did try it to get the db2 PHP functions to work. And I don't have Zend, but I think I have every other package from the Ubuntu repositories. Help, and thank you!
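    One observation for anyone in the same spot: the dpkg output above shows odbcinst did register an "iSeries Access ODBC Driver", so the remaining piece is often just the unixODBC configuration rather than db2cli.ini (which belongs to the DB2 CLI driver, not the iSeries Access one). This is a sketch under assumptions: the driver path depends on where the package unpacked (/opt/ibm/iSeriesAccess/lib64/libcwbodbc.so is where the RPM normally puts it), and the DSN name and hostname are placeholders:

        # /etc/odbcinst.ini - driver registration; the alien install may have
        # written this already (verify with: odbcinst -q -d)
        [iSeries Access ODBC Driver]
        Description = IBM iSeries Access ODBC driver
        Driver      = /opt/ibm/iSeriesAccess/lib64/libcwbodbc.so

        # /etc/odbc.ini - a DSN pointing at the remote iSeries machine
        [MYAS400]
        Driver = iSeries Access ODBC Driver
        System = as400.example.com

    You can then test from a shell with unixODBC's isql (isql MYAS400 myuser mypassword) before wiring it into PHP through the odbc_connect() family of functions, which goes through unixODBC and so sidesteps db2cli.ini entirely.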

    Read the article
