Search Results

Search found 13675 results on 547 pages for 'online repository'.


  • Making your WCF Web APIs speak multiple languages

    - by cibrax
    One of the key aspects of how the web works today is content negotiation. The idea behind content negotiation is that a single resource can have multiple representations, so user agents (or clients) and servers can work together to choose one of them. The HTTP specification defines several "Accept" headers that a client can use to negotiate content with a server, and among those there is one for restricting the set of natural languages preferred as a response to a request: "Accept-Language". For example, a client can specify "es" in this header to indicate that it prefers to receive the content in Spanish, or "en" for English.

    However, there are certain scenarios where the "Accept-Language" header is just not enough, and you might want a way to pass the "accepted" language as part of the resource URL, as an extension. For example, "http://localhost/ProductCatalog/Products/1.es" returns all the descriptions for the product with id "1" in Spanish. This is useful for scenarios in which you want to embed the link somewhere, such as a document, an email, or a page.

    Supporting both scenarios, the header and the URL extension, is really simple in the new WCF programming model. You only need to provide a processor implementation for each of them. Let's say I have a resource implementation as part of a product catalog I want to expose with the WCF Web APIs.

        [ServiceContract]
        [Export]
        public class ProductResource
        {
            IProductRepository repository;

            [ImportingConstructor]
            public ProductResource(IProductRepository repository)
            {
                this.repository = repository;
            }

            [WebGet(UriTemplate = "{id}")]
            public Product Get(string id, HttpResponseMessage response)
            {
                var product = repository.GetById(int.Parse(id));
                if (product == null)
                {
                    response.StatusCode = HttpStatusCode.NotFound;
                    response.Content = new StringContent(Messages.OrderNotFound);
                }

                return product;
            }
        }

    The Get method implementation in this resource assumes the desired culture will be attached to the current thread (Thread.CurrentThread.Culture). Another option is to pass the desired culture as an additional argument in the method, so my processor implementation will handle both options. This method is also using an auto-generated class for handling string resources, Messages, which is available in the different cultures that the service implementation supports. For example:

        Messages.resx contains "OrderNotFound": "Order Not Found"
        Messages.es.resx contains "OrderNotFound": "No se encontro orden"

    The processor implementation below tackles the first scenario, in which the desired language is passed as part of the "Accept-Language" header.

        public class CultureProcessor : Processor<HttpRequestMessage, CultureInfo>
        {
            string defaultLanguage = null;

            public CultureProcessor(string defaultLanguage = "en")
            {
                this.defaultLanguage = defaultLanguage;
                this.InArguments[0].Name = HttpPipelineFormatter.ArgumentHttpRequestMessage;
                this.OutArguments[0].Name = "culture";
            }

            public override ProcessorResult<CultureInfo> OnExecute(HttpRequestMessage request)
            {
                CultureInfo culture = null;
                if (request.Headers.AcceptLanguage.Count > 0)
                {
                    var language = request.Headers.AcceptLanguage.First().Value;
                    culture = new CultureInfo(language);
                }
                else
                {
                    culture = new CultureInfo(defaultLanguage);
                }

                Thread.CurrentThread.CurrentCulture = culture;
                Messages.Culture = culture;

                return new ProcessorResult<CultureInfo> { Output = culture };
            }
        }

    As you can see, the processor initializes a new CultureInfo instance with the value provided in the "Accept-Language" header, and sets that instance on the current thread and on the auto-generated resource class with all the messages. In addition, the CultureInfo instance is returned as an output argument called "culture", making it possible to receive that argument in any method implementation.

    The following code shows the implementation of the processor for handling languages as URL extensions.

        public class CultureExtensionProcessor : Processor<HttpRequestMessage, Uri>
        {
            public CultureExtensionProcessor()
            {
                this.OutArguments[0].Name = HttpPipelineFormatter.ArgumentUri;
            }

            public override ProcessorResult<Uri> OnExecute(HttpRequestMessage httpRequestMessage)
            {
                var requestUri = httpRequestMessage.RequestUri.OriginalString;
                var extensionPosition = requestUri.LastIndexOf(".");

                if (extensionPosition > -1)
                {
                    var extension = requestUri.Substring(extensionPosition + 1);
                    var query = httpRequestMessage.RequestUri.Query;
                    requestUri = string.Format("{0}?{1}", requestUri.Substring(0, extensionPosition), query);

                    var uri = new Uri(requestUri);

                    httpRequestMessage.Headers.AcceptLanguage.Clear();
                    httpRequestMessage.Headers.AcceptLanguage.Add(new StringWithQualityHeaderValue(extension));

                    var result = new ProcessorResult<Uri>();
                    result.Output = uri;
                    return result;
                }

                return new ProcessorResult<Uri>();
            }
        }

    The last step is to inject both processors as part of the service configuration, as shown below.

        public void RegisterRequestProcessorsForOperation(HttpOperationDescription operation, IList<Processor> processors, MediaTypeProcessorMode mode)
        {
            processors.Insert(0, new CultureExtensionProcessor());
            processors.Add(new CultureProcessor());
        }

    Once you have configured the two processors in the pipeline, your service will start speaking different languages :).

    Note: URL extensions don't seem to be working in the current bits when you use them in a base address. As far as I could see, ASP.NET intercepts the request first and tries to route it to a registered ASP.NET HTTP handler with that extension. For example, "http://localhost/ProductCatalog/products.es" does not work, but "http://localhost/ProductCatalog/products/1.es" does.
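    As a quick client-side sanity check (this is an illustration added here, not part of the original post), the sketch below sends an Accept-Language header to the hypothetical localhost URL used throughout the article and prints whatever representation the server chooses. Java is used purely for demonstration; any HTTP client works the same way.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class AcceptLanguageDemo {
            public static void main(String[] args) throws Exception {
                // Hypothetical endpoint from the article; adjust host/port as needed.
                URL url = new URL("http://localhost/ProductCatalog/Products/1");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                // Ask for the Spanish representation of the resource.
                conn.setRequestProperty("Accept-Language", "es");
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
                in.close();
            }
        }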


  • Hosted bug tracking system with mercurial repositories (Summary of options & request for opinions)

    - by Mark Booth
    The Question

    What hosted Mercurial repository/bug tracking system or systems have you used? Would you recommend it to others? Are there serious flaws, either in the repository hosting or the bug tracking features, that would make it difficult to recommend? Do you have any other experiences with it, or opinions of it, that you would like to share? If you have used other, non-Mercurial hosted repository/bug tracking systems, how does it compare? (If I understand correctly, the best format for this type of community-wiki style question is one answer per option, if you have experience of several.)

    Background

    I have been looking into options for setting up a bug/issue tracking database and found some valuable advice in this thread and this. But then I got to thinking that a hosted solution might not only solve the problem of tracking bugs, but might also solve the problem we have accessing our Mercurial source code repositories while at customer sites around the world. Since we currently have no way to serve Mercurial repositories over SSL, when I am at a customer site I have to connect my laptop via VPN to my work network and access the Mercurial repositories over a Samba share (even if it is just to sync twice a day). This is excruciatingly slow on high-latency networks and can be impossible with some customers' firewalls. Even if we could run a Trac or Redmine server here (thanks TurnKey), I'm not sure it would be much quicker, as our internet connection is over-stretched as it is. What I would like is for developers to be able to push/pull to/from a remote repository, servicing engineers to be able to pull from a remote repository, and for customers (both internal and external) to be able to submit bug/issue reports.

    Initial options

    The two options I found were Assembla and Jira. Looking at Assembla, I thought the 'group' price looked reasonable, but after enquiring I found that each workspace could only contain a single repository. Since each of our products might have up to a dozen repositories (mostly for libraries) which need to be managed separately for each product, I could see it getting expensive really quickly. On the plus side, it appears that 'users' are just workspace members, so you can have as many client users (people who can only submit support tickets and track their own tickets) as you like without using up your user allocation. Jira only charges based on the number of users; unfortunately, client users also count towards this if you want them to be able to track their tickets. If you only want clients to be able to submit untracked issues, you can let them submit anonymously, but that doesn't feel very professional to me.

    More options

    Looking through the MercurialHosting page that @Paidhi suggested, I've added the options which appear to offer private repositories, along with another that I found with a web search. Prices are as per their websites today (29th March 2010). Corrections welcome in the future. Anyway, here is my summary, according to the information given on their websites:

    • Assembla, http://www.assembla.com/, looks to be a reasonable price, but suffers from allowing only one repository per workspace, so three projects with 6 repos each would use up most of the spaces associated with a $99/month professional account (20 spaces). Bug tracking is based on Trac. Mercurial+Trac support was announced in a blog entry in 2007, but they only list SVN and Git on their Features web page. Cost: $24, $49, $99 & $249/month for 40, 40, unlimited, unlimited users and 1, 10, 20, 100 workspaces. SSL based push/pull? Website https login.

    • BitBucket, http://bitbucket.org/plans/, is primarily a Mercurial hosting site for open source projects, with SSL support, but they have an integrated bug tracker and they are cheap for private repositories. It has its own issue tracker, but also integrates with Lighthouse & FogBugz. Cost: $0, $5, $12, $50 & $100/month for 1, 5, 15, 25 & 150 private repositories. SSL based push/pull. No https on website login, but supports OpenID, so you can choose an OpenID provider with https login.

    • Codebase HQ, http://www.codebasehq.com/, supports Hg and is almost as cheap as BitBucket. Cost: £5, £13, £21 & £40/month for 3, 15, 30 & 60 active projects, unlimited repositories, unlimited users (except 10 users at £5/month) and 0.5, 2, 4 & 10GB. SSL based push/pull? Website https login?

    • Firefly, http://www.activestate.com/firefly/, by ActiveState looks interesting, but the website is a little light on details, such as whether you can have only one repository per project or not. Cost: $9, $19 & £39/month for 1, 5 & 30 private projects, with a 0.5, 1.5 & 3 GB storage limit. SSL based push/pull? Website https login.

    • Jira, http://www.atlassian.com/software/jira/, isn't limited by the number of repositories you can have, but by 'user'. It could work out quite expensive if we want client users to be able to track their issues, since a full user account would need to be created for each of them. Also, while there is a Mercurial extension to support Jira, there is no 'Advanced integration' for Mercurial from Atlassian Fisheye. Cost: $150, $300, $400, $500, $700/month for 10, 25, 50, 100, 100+ users. SSL based push/pull? Website https login.

    • Kiln & FogBugz On Demand, http://fogcreek.com/Kiln/IntrotoOnDemand.html, integrates Kiln's Mercurial DVCS features with FogBugz, where the combined package is much cheaper than the component parts. Also, the FogBugz integration is supposedly excellent. *8') Cost: £30/developer/month ($5/d/m more than either on their own). SSL based push/pull?

    • SourceRepo, http://sourcerepo.com/, also supports Hg and is even cheaper than BitBucket & Codebase. Cost: $4, $7 & $13/month for 1, unlimited & unlimited repositories/Trac/Redmine instances and 500MB, 1GB & 3GB storage. SSL based push/pull. Website https login.

    Edit: 29th March 2010 & Bounty

    I split this question into sections, made the questions themselves more explicit, added other options from the research I have done since my first posting, and made this community wiki, since I now understand what CW is for. *8') Also, I've added a bounty to encourage people to offer their opinions. At the end of the bounty period, I will award the bounty to whoever writes the best review (good or bad), irrespective of the number of up/down votes it gets. Given that it's probably more important to avoid bad providers than to find the absolute best one, 'bad reviews' could be considered more important than good ones.


  • embedded glassfish: java.lang.NoClassDefFoundError: java/util/ServiceLoader

    - by Xinus
    I am trying to embed GlassFish inside my Java program using the embedded API. I am using Maven 2, and its pom.xml is as follows:

        <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <groupId>orh.highmark</groupId>
          <artifactId>glassfish-test1</artifactId>
          <version>1.0-SNAPSHOT</version>
          <packaging>jar</packaging>
          <name>glassfish-test1</name>
          <url>http://maven.apache.org</url>
          <properties>
            <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
          </properties>
          <dependencies>
            <dependency>
              <groupId>junit</groupId>
              <artifactId>junit</artifactId>
              <version>3.8.1</version>
              <scope>test</scope>
            </dependency>
            <dependency>
              <groupId>org.glassfish.extras</groupId>
              <artifactId>glassfish-embedded-all</artifactId>
              <version>3.1-SNAPSHOT</version>
            </dependency>
          </dependencies>
          <repositories>
            <repository>
              <id>maven2-repository.dev.java.net</id>
              <name>Java.net Repository for Maven</name>
              <url>http://download.java.net/maven/2/</url>
              <layout>default</layout>
            </repository>
            <repository>
              <id>glassfish-repository</id>
              <name>GlassFish Nexus Repository</name>
              <url>http://maven.glassfish.org/content/groups/glassfish</url>
            </repository>
          </repositories>
          <build>
            <plugins>
              <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.1</version>
                <executions>
                  <execution>
                    <goals>
                      <goal>java</goal>
                    </goals>
                  </execution>
                </executions>
                <configuration>
                  <mainClass>orh.highmark.App</mainClass>
                  <arguments>
                    <argument>argument1</argument>
                  </arguments>
                </configuration>
              </plugin>
            </plugins>
          </build>
        </project>

    Program:

        public class App {
            public static void main(String[] args) {
                System.out.println("Hello World!");
                Server.Builder builder = new Server.Builder("test");
                builder.logger(true);
                Server server = builder.build();
            }
        }

    But for some reason it always gives me the error java.lang.NoClassDefFoundError: java/util/ServiceLoader. Here is the output:

        C:\Users\sunils\glassfish-tests\glassfish-test1>mvn -e exec:java
        + Error stacktraces are turned on.
        [INFO] Scanning for projects...
        [INFO] ------------------------------------------------------------------------
        [INFO] Building glassfish-test1
        [INFO]    task-segment: [exec:java]
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing exec:java
        [INFO] No goals needed for project - skipping
        [INFO] [exec:java {execution: default-cli}]
        Hello World!
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] An exception occured while executing the Java class. null
        java/util/ServiceLoader
        [INFO] ------------------------------------------------------------------------
        [INFO] Trace
        org.apache.maven.lifecycle.LifecycleExecutionException: An exception occured while executing the Java class.
        null
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
            at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: org.apache.maven.plugin.MojoExecutionException: An exception occured while executing the Java class. null
            at org.codehaus.mojo.exec.ExecJavaMojo.execute(ExecJavaMojo.java:345)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
            ... 17 more
        Caused by: java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:290)
            at java.lang.Thread.run(Thread.java:595)
        Caused by: java.lang.NoClassDefFoundError: java/util/ServiceLoader
            at org.glassfish.api.embedded.Server.getMain(Server.java:701)
            at org.glassfish.api.embedded.Server.<init>(Server.java:290)
            at org.glassfish.api.embedded.Server.<init>(Server.java:75)
            at org.glassfish.api.embedded.Server$Builder.build(Server.java:185)
            at org.glassfish.api.embedded.Server$Builder.build(Server.java:167)
            at orh.highmark.App.main(App.java:14)
            ... 6 more
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 1 second
        [INFO] Finished at: Sat May 08 02:55:03 IST 2010
        [INFO] Final Memory: 3M/6M
        [INFO] ------------------------------------------------------------------------

    I can't tell whether the problem is with my program or with the GlassFish API. Can somebody please help me understand what is happening here and how to rectify it? Thanks for any clue.
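    For what it's worth (an editorial note, not part of the original question): java.util.ServiceLoader only exists in Java 6 (1.6) and later, and the line numbers in the trace (Method.java:585, Thread.java:595) appear consistent with a Java 5 class library, so it is worth checking which JVM Maven's exec plugin is actually running on. A minimal sketch of such a check:

        public class EnvCheck {
            public static void main(String[] args) {
                // Print the JVM that this code is actually running on.
                System.out.println("java.version = " + System.getProperty("java.version"));
                System.out.println("java.home    = " + System.getProperty("java.home"));
                try {
                    // java.util.ServiceLoader was introduced in Java 6.
                    Class.forName("java.util.ServiceLoader");
                    System.out.println("java.util.ServiceLoader is available");
                } catch (ClassNotFoundException e) {
                    System.out.println("java.util.ServiceLoader is missing; this JRE is older than 6");
                }
            }
        }

    Running this through the same exec-maven-plugin configuration (swapping it in as the mainClass) would confirm or rule out a JRE mismatch.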


  • Resolving SNAPSHOT dependencies with timestamps from Ivy

    - by bradhouse
    I am attempting to resolve timestamped SNAPSHOT dependencies with Ivy. The environment is Ant + Ivy 1.2.0 + Archiva. Archiva itself is populated from Maven 2 builds. Ivy is only used to resolve dependencies (from a single, non-Maven 2 project). How can Ivy be configured to correctly resolve timestamped artifacts from an Archiva or m2 repository? For reference, my current configuration is below.

    ivysettings.xml looks similar to:

        <ivysettings>
          <settings defaultResolver="archiva-chain"/>
          <resolvers>
            <chain name="archiva-chain" changingPattern=".*SNAPSHOT" checkmodified="true">
              <ibiblio name="archiva-internal" m2compatible="true" usepoms="true"
                       pattern="[organization]/[module]/[revision]/[artifact]-[revision].[ext]"
                       root="http://host:port/archiva/repository/internal"/>
              <ibiblio name="archiva-deploy" m2compatible="true" usepoms="true"
                       pattern="[organization]/[module]/[revision]/[artifact]-[revision].[ext]"
                       root="http://host:port/archiva/repository/deploy"/>
              <ibiblio name="archiva-snapshots" m2compatible="true" usepoms="true"
                       pattern="[organization]/[module]/[revision]/[artifact]-[revision].[ext]"
                       root="http://host:port/archiva/repository/snapshots"/>
            </chain>
          </resolvers>
        </ivysettings>

    The ivy.xml dependencies are simple:

        <ivy-module version="2.0">
          <info organisation="com.myorg" module="myapp"/>
          <dependencies>
            <dependency org="com.myorg" name="myartifact" rev="1.8.0-SNAPSHOT" changing="true"/>
          </dependencies>
        </ivy-module>

    Ivy does not attempt to resolve a timestamped artifact. E.g.:

        [ivy:retrieve] :: problems summary ::
        [ivy:retrieve] :::: WARNINGS
        [ivy:retrieve] module not found: com.myorg#myartifact;1.8.0-SNAPSHOT
        [ivy:retrieve] ==== archiva-internal: tried
        [ivy:retrieve] -- artifact com.myorg#myartifact;1.8.0-SNAPSHOT!myartifact.jar:
        [ivy:retrieve] http://host:port/archiva/repository/internal/com.myorg/myartifact/1.8.0-SNAPSHOT/myartifact-1.8.0-SNAPSHOT.jar
        [ivy:retrieve] ==== archiva-deploy: tried
        [ivy:retrieve] -- artifact com.myorg#myartifact;1.8.0-SNAPSHOT!myartifact.jar:
        [ivy:retrieve] http://host:port/archiva/repository/deploy/com.myorg/myartifact/1.8.0-SNAPSHOT/myartifact-1.8.0-SNAPSHOT.jar
        [ivy:retrieve] ==== archiva-snapshots: tried
        [ivy:retrieve] -- artifact com.myorg#myartifact;1.8.0-SNAPSHOT!myartifact.jar:
        [ivy:retrieve] http://host:port/archiva/repository/snapshots/com.myorg/myartifact/1.8.0-SNAPSHOT/myartifact-1.8.0-SNAPSHOT.jar
        [ivy:retrieve] ::::::::::::::::::::::::::::::::::::::::::::::
        [ivy:retrieve] ::          UNRESOLVED DEPENDENCIES          ::
        [ivy:retrieve] ::::::::::::::::::::::::::::::::::::::::::::::
        [ivy:retrieve] :: com.myorg#myartifact;1.8.0-SNAPSHOT: not found
        [ivy:retrieve] ::::::::::::::::::::::::::::::::::::::::::::::
        [ivy:retrieve]
        [ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

    There is a maven-metadata.xml in snapshots/com/myorg/myartifact:

        <?xml version="1.0" encoding="UTF-8"?>
        <metadata>
          <groupId>com.myorg</groupId>
          <artifactId>myartifact</artifactId>
          <versioning>
            <latest>1.8.0-SNAPSHOT</latest>
            <versions>
              <version>1.3.0-SNAPSHOT</version>
              <version>1.4.2-SNAPSHOT</version>
              <version>1.6.1-SNAPSHOT</version>
              <version>1.8.0-SNAPSHOT</version>
            </versions>
            <lastUpdated>20100303003206</lastUpdated>
          </versioning>
        </metadata>

    The maven-metadata.xml in snapshots/com/myorg/myartifact/1.8.0-SNAPSHOT:

        <?xml version="1.0" encoding="UTF-8"?>
        <metadata>
          <groupId>com.myorg</groupId>
          <artifactId>myartifact</artifactId>
          <version>1.8.0-SNAPSHOT</version>
          <versioning>
            <snapshot>
              <buildNumber>7</buildNumber>
              <timestamp>20100303.003206</timestamp>
            </snapshot>
            <lastUpdated>20100303003206</lastUpdated>
          </versioning>
        </metadata>

    Not all that useful, but for completeness, the files in the directory snapshots/com/myorg/myartifact/1.8.0-SNAPSHOT for the referenced snapshot:

        -rw-r--r-- 1 archiva archiva 240670 Mar  3 10:32 myartifact-1.8.0-20100303.003206-7.jar
        -rw-r--r-- 1 archiva archiva     32 Mar  3 10:32 myartifact-1.8.0-20100303.003206-7.jar.md5
        -rw-r--r-- 1 archiva archiva     40 Mar  3 10:32 myartifact-1.8.0-20100303.003206-7.jar.sha1
        -rw-r--r-- 1 archiva archiva   4068 Mar  3 10:32 myartifact-1.8.0-20100303.003206-7.pom
        -rw-r--r-- 1 archiva archiva     32 Mar  3 10:32 myartifact-1.8.0-20100303.003206-7.pom.md5
        -rw-r--r-- 1 archiva archiva     40 Mar  3 10:32 myartifact-1.8.0-20100303.003206-7.pom.sha1
        -rw-r--r-- 1 archiva archiva 180821 Mar  3 10:32 myartifact-1.8.0-20100303.003206-7-sources.jar
        -rw-r--r-- 1 archiva archiva     32 Mar  3 10:32 myartifact-1.8.0-20100303.003206-7-sources.jar.md5
        -rw-r--r-- 1 archiva archiva     40 Mar  3 10:32 myartifact-1.8.0-20100303.003206-7-sources.jar.sha1
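    As an editorial aside, the timestamped file names above follow the standard Maven 2 snapshot layout: the literal "SNAPSHOT" in the version is replaced by "<timestamp>-<buildNumber>" taken from the version-level maven-metadata.xml. The helper below is a hypothetical sketch of that mapping (an illustration only, not Ivy or Archiva code):

        public class SnapshotName {
            // Derive the timestamped artifact file name from snapshot metadata,
            // per the Maven 2 repository layout convention.
            static String timestampedName(String artifactId, String version,
                                          String timestamp, int buildNumber, String ext) {
                String resolved = version.replace("SNAPSHOT", timestamp + "-" + buildNumber);
                return artifactId + "-" + resolved + "." + ext;
            }

            public static void main(String[] args) {
                // Values taken from the maven-metadata.xml shown above.
                System.out.println(timestampedName("myartifact", "1.8.0-SNAPSHOT",
                        "20100303.003206", 7, "jar"));
                // Prints: myartifact-1.8.0-20100303.003206-7.jar
            }
        }

    A resolver pattern that only ever substitutes the fixed revision 1.8.0-SNAPSHOT into [artifact]-[revision].[ext] will never match those file names, which is consistent with the "not found" URLs in the log above.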


  • Ivy and Snapshots (Nexus)

    - by Uberpuppy
    Hey folks, I'm using Ant, Ivy and the Nexus repository manager to build and store my artifacts. I managed to get everything working: dependency resolution and publishing. Until I hit a problem... (of course!). I was publishing to a 'release' repo in Nexus, which is locked to 'disable redeploy' (even if you change the setting to 'allow redeploy' - really lame UI there, imo). You can imagine how pissed off I was getting when my changes weren't updating through the repo before I realised that this was happening. Anyway, I now have to switch everything to use a 'snapshot' repo in Nexus. Problem is that this messes up my publish. I've tried a variety of things, including extensive googling, and haven't got anywhere whatsoever. The error I get is a bad PUT request, error code 400. Can someone who has got this working please give me a pointer on what I'm missing? Many thanks, Alastair

    For reference, here's my config. Note that I have removed any attempts at getting snapshots to work, as I didn't know what was actually (potentially) useful and what was complete guff. This is therefore the working release-only setup. Also, please note that I've added the XXX-API ivy.xml for info only. I can't even get the XXX-Common project to publish (and that doesn't even have dependencies).

    Ant task:

        <target name="publish" depends="init-publish">
          <property name="project.generated.ivy.file" value="${project.artifact.dir}/ivy.xml"/>
          <property name="project.pom.file" value="${project.artifact.dir}/${project.handle}.pom"/>
          <echo message="Artifact dir: ${project.artifact.dir}"/>
          <ivy:deliver deliverpattern="${project.generated.ivy.file}"
                       organisation="${project.organisation}"
                       module="${project.artifact}"
                       status="integration"
                       revision="${project.revision}"
                       pubrevision="${project.revision}" />
          <ivy:resolve />
          <ivy:makepom ivyfile="${project.generated.ivy.file}" pomfile="${project.pom.file}"/>
          <ivy:publish resolver="${ivy.omnicache.publisher}"
                       module="${project.artifact}"
                       organisation="${project.organisation}"
                       revision="${project.revision}"
                       pubrevision="${project.revision}"
                       pubdate="now"
                       overwrite="true"
                       publishivy="true"
                       status="integration"
                       artifactspattern="${project.artifact.dir}/[artifact]-[revision](-[classifier]).[ext]" />
        </target>

    Couple of Ivy files to give an idea of internal dependencies:

    XXX-Common project:

        <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd">
          <info organisation="com.myorg.xxx" module="xxx_common" status="integration" revision="1.0">
          </info>
          <publications>
            <artifact name="xxx_common" type="jar" ext="jar"/>
            <artifact name="xxx_common" type="pom" ext="pom"/>
          </publications>
          <dependencies>
          </dependencies>
        </ivy-module>

    XXX-API project:

        <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd">
          <info organisation="com.myorg.xxx" module="xxx_api" status="integration" revision="1.0">
          </info>
          <publications>
            <artifact name="xxx_api" type="jar" ext="jar"/>
            <artifact name="xxx_api" type="pom" ext="pom"/>
          </publications>
          <dependencies>
            <dependency org="com.myorg.xxx" name="xxx_common" rev="1.0" transitive="true" />
          </dependencies>
        </ivy-module>

    Ivy settings.xml:

        <ivysettings>
          <properties file="${ivy.project.dir}/project.properties" />
          <settings defaultResolver="chain" defaultConflictManager="all" />
          <credentials host="${ivy.credentials.host}"
                       realm="Sonatype Nexus Repository Manager"
                       username="${ivy.credentials.username}"
                       passwd="${ivy.credentials.passwd}" />
          <caches>
            <cache name="ivy.cache" basedir="${ivy.cache.dir}" />
          </caches>
          <resolvers>
            <ibiblio name="xxx_publisher" m2compatible="true" root="${ivy.xxx.publish.url}" />
            <chain name="chain">
              <url name="xxx">
                <ivy pattern="${ivy.xxx.repo.url}/com/myorg/xxx/[module]/[revision]/ivy-[revision].xml" />
                <artifact pattern="${ivy.xxx.repo.url}/com/myorg/xxx/[module]/[revision]/[artifact]-[revision].[ext]" />
              </url>
              <ibiblio name="xxx" m2compatible="true" root="${ivy.xxx.repo.url}"/>
              <ibiblio name="public" m2compatible="true" root="${ivy.master.repo.url}" />
              <url name="com.springsource.repository.bundles.release">
                <ivy pattern="http://repository.springsource.com/ivy/bundles/release/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" />
                <artifact pattern="http://repository.springsource.com/ivy/bundles/release/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" />
              </url>
              <url name="com.springsource.repository.bundles.external">
                <ivy pattern="http://repository.springsource.com/ivy/bundles/external/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" />
                <artifact pattern="http://repository.springsource.com/ivy/bundles/external/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" />
              </url>
            </chain>
          </resolvers>
        </ivysettings>


  • java: how to get a string representation of a compressed byte array?

    - by Guillaume
    I want to put some compressed data into a remote repository. To put data on this repository I can only use a method that takes the name of the resource and its content as a String (like data.txt + "hello world"). The repository is mocking a filesystem but is not one, so I cannot use File directly. I want to be able to do the following:

    1. client sends a file 'data.txt' to the server
    2. server compresses 'data.txt' into a compressed file 'data.zip'
    3. server sends a string representation of data.zip to the repository
    4. repository stores data.zip
    5. client downloads data.zip from the repository and is able to open it with its favorite zip tool

    The problem arises at step 3, when I try to get a string representation of my compressed file. Here is a sample class, using the Zip*Stream classes, that emulates the repository and showcases my problem. The created zip file is working, but after its 'serialization' it gets corrupted. (The sample class uses Jakarta commons-io.) Many thanks for your help.

        package zip;

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;
        import java.util.zip.ZipOutputStream;

        import org.apache.commons.io.FileUtils;

        /**
         * Date: May 19, 2010 - 6:13:07 PM
         *
         * @author Guillaume AME.
         */
        public class ZipMe {

            public static void addOrUpdate(File zipFile, File... files) throws IOException {
                File tempFile = File.createTempFile(zipFile.getName(), null);
                // delete it, otherwise you cannot rename your existing zip to it.
                tempFile.delete();

                boolean renameOk = zipFile.renameTo(tempFile);
                if (!renameOk) {
                    throw new RuntimeException("could not rename the file " + zipFile.getAbsolutePath()
                            + " to " + tempFile.getAbsolutePath());
                }

                byte[] buf = new byte[1024];
                ZipInputStream zin = new ZipInputStream(new FileInputStream(tempFile));
                ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFile));

                ZipEntry entry = zin.getNextEntry();
                while (entry != null) {
                    String name = entry.getName();
                    boolean notInFiles = true;
                    for (File f : files) {
                        if (f.getName().equals(name)) {
                            notInFiles = false;
                            break;
                        }
                    }
                    if (notInFiles) {
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(name));
                        // Transfer bytes from the ZIP file to the output file
                        int len;
                        while ((len = zin.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                    }
                    entry = zin.getNextEntry();
                }

                // Close the streams
                zin.close();

                // Compress the files
                if (files != null) {
                    for (File file : files) {
                        InputStream in = new FileInputStream(file);
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(file.getName()));
                        // Transfer bytes from the file to the ZIP file
                        int len;
                        while ((len = in.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                        // Complete the entry
                        out.closeEntry();
                        in.close();
                    }
                    // Complete the ZIP file
                }
                tempFile.delete();
                out.close();
            }

            public static void main(String[] args) throws IOException {
                final String zipArchivePath = "c:/temp/archive.zip";
                final String tempFilePath = "c:/temp/data.txt";
                final String resultZipFile = "c:/temp/resultingArchive.zip";

                File zipArchive = new File(zipArchivePath);
                FileUtils.touch(zipArchive);
                File tempFile = new File(tempFilePath);
                FileUtils.writeStringToFile(tempFile, "hello world");
                addOrUpdate(zipArchive, tempFile);
                // archive.zip exists and contains a compressed data.txt that can be read using winrar

                // now simulate writing of the zip into an in-memory cache
                String archiveText = FileUtils.readFileToString(zipArchive);
                FileUtils.writeStringToFile(new File(resultZipFile), archiveText);
                // resultingArchive.zip exists, contains a compressed data.txt, but it can not
                // be read using winrar: CRC failed in data.txt. The file is corrupt
            }
        }
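    For what it's worth, the corruption at step 3 comes from treating binary data as text: FileUtils.readFileToString decodes the zip's raw bytes through a character encoding, which is not a lossless round trip for arbitrary binary content. One common workaround, sketched below on the assumption that the repository really can only store strings, is to Base64-encode the bytes (java.util.Base64 requires Java 8; commons-codec offers an equivalent for older JDKs):

        import java.io.File;
        import java.nio.file.Files;
        import java.util.Base64;

        public class SafeStringTransport {

            // Encode arbitrary bytes (e.g. a zip archive) into a String
            // that survives being stored and retrieved as text.
            public static String zipToString(File binary) throws Exception {
                byte[] raw = Files.readAllBytes(binary.toPath());
                return Base64.getEncoder().encodeToString(raw);
            }

            // Reverse operation: recover the exact original bytes.
            public static void stringToZip(String text, File target) throws Exception {
                Files.write(target.toPath(), Base64.getDecoder().decode(text));
            }

            public static void main(String[] args) throws Exception {
                // Hypothetical paths matching the sample class above.
                String archiveText = zipToString(new File("c:/temp/archive.zip"));
                stringToZip(archiveText, new File("c:/temp/resultingArchive.zip"));
                // resultingArchive.zip is now byte-identical to archive.zip.
            }
        }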


  • ORACLE is WEB 2.0

    - by anca.rosu
    You never know what to expect in life, where it can take you, and what kind of fulfillment it can offer you. It's just like an amazing lottery with millions of winning tickets. My name is Paula, I am an Online Marketing Specialist at Oracle University, and this is my story.

    Having graduated from a technical college, it seemed almost normal to follow the same career path. But I said no. I wanted to try something else, so I took an Advertising Masters Program and I really fell in love with this entire industry. Advertising and the new impact of the Internet through social networking is my current fascination. I knew I had to work to incorporate both my skills into one dream job. I want to believe that I have come to work at Oracle as part of a great plan that life has for me. It's not the most glamorous job in advertising or in the fashion industry, but it's everything you need to start investing in your development and to build relationships.

    A normal day at work begins at 9.30 at our Oracle office in Bucharest. After a short chit-chat, coffee and some conference calls, marketing gets to work! Some of the members of my team work beside me, but others are based all over Europe. This is extremely useful when coordinating the EMEA Marketing for Oracle University, because this way it's easier to keep an eye on these various locations. Even though it's a team play, you need to speak up and make your mark. I am the kind of person that never stands by and waits to be given directions; I am curious and intuitive. This makes things easier. In Oracle you really need to find your own way and to discover how to organize your time and how to get involved with people. People to people, this is the focus. But everything is up to you, and it strongly depends on the type of personality that you have. I try to get involved in various activities, participate in Oracle Days events, and interact and meet all kinds of people. For those who are new graduates or interns, Oracle has lots of trainings and webcasts you can attend to help you shape your career and better understand the way the business works. You can also be awarded for ideas and setting the trends, so that makes it worth it.

    What I like most about my job is the fact that I can come up with ideas and bring them to life. For example, Oracle University has a special seminar program called "Celebrity Seminars" where top industry speakers teach 1-day or 2-day condensed seminars. We thought of creating something exclusive, and a video was the best idea. So my colleague and I became reporters for a day and interviewed this well-known speaker regarding his seminar. I think this is a good way to market this business. Live footage is a very good marketing tool, so we are planning to use the video to target our online audiences via Facebook, Twitter or LinkedIn. This can even go in the newsletters that marketing sends regarding the Celebrity Seminars. This is what I meant when I said Oracle is a free-spirited organization and you can surely find your place here among us.

    The best way to describe my job is WEB 2.0. The modern online approach comes to life while we are trying to sell our business. We need to be out there, and we are responsible for spreading the buzz regarding our training offerings and our official courseware materials. There are so many new ways to interact with the target audience nowadays, and I am so eager to discover the best online techniques!

    If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com

    Technorati Tags: WEB 2.0, Online Marketing, Oracle University, Bucharest, events, graduates, interns, training, webcast, seminar, newsletters, business, Facebook, Twitter, LinkedIn


  • Multichannel Digital Engagement: Find Out How Your Organization Measures Up

    - by Michael Snow
    This article was originally published in the September 2013 edition of the Oracle Information InDepth Newsletter, Oracle WebCenter Edition.

    Thanks to mobile and social technologies, interactive online experiences are now commonplace. Not only that, they give consumers more choices, influence, and control than ever before. So how can you make your organization stand out? The key building blocks for delivering exceptional cross-channel digital experiences are outlined below. Also, a new assessment tool is available to help you measure your organization's ability to deliver such experiences.

    • A clearly defined digital strategy. The customer journey is growing increasingly complex, encompassing multiple touchpoints and channels. It used to be easy to map marketing efforts to specific offline channels; for example, a direct mail piece with an offer to visit a store for a discounted purchase. Now it is more difficult to cultivate and track such clear cause-and-effect relationships. To deliver an integrated digital experience in this more complex world, organizations need a clearly defined and comprehensive digital marketing strategy that is backed up by an integrated set of software, middleware, and hardware solutions.

    • Strong support for business agility and speed-to-market. As both IT and marketing executives know, speed-to-market and business agility are key to competitive advantage. That means marketers need solutions to support the rapid implementation of online marketing initiatives, plus the flexibility to adapt quickly to a changing marketplace. And IT needs tools with the performance, scalability, and ease of integration to support marketing efforts. Both teams benefit when business users are empowered to implement marketing initiatives on their own, with minimal IT intervention.

    • The ability to deliver relevant, personalized content. Delivering a one-size-fits-all online customer experience is no longer acceptable. Customers expect you to know who they are, including their preferences and past relationship with your brand. That means delivering the most relevant content from the moment a visitor enters your site. To make that happen, you need a powerful rules engine so that marketers and business users can easily define site visitor segments and deliver content accordingly. That includes both implicit targeting that is based on the user's behavior, and explicit targeting that takes a user's profile information into account. Ideally, the rules engine can also intelligently weight recommendations when multiple segments apply to a specific customer.

    • Support for social interactivity. With the advent of Facebook and LinkedIn, visitors expect to participate in and contribute to your web presence, and to share their experience on their own social networks. That requires easy incorporation of user-generated content such as comments, ratings, reviews, polls, and blogs; seamless integration with third-party social networking sites; and support for social login, which helps to remove barriers to social participation.

    • The ability to deliver connected, multichannel experiences that include powerful, flexible mobile capabilities. By 2015, mobile usage is projected to surpass that of PCs and other wired devices. In other words, mobile is an essential element in delivering exceptional online customer experiences. This requires the creation and management of mobile experiences that are optimized for delivery to the thousands of different devices in use today. Just as important, organizations must be able to easily extend their traditional web presence to the mobile channel and deliver highly personalized and relevant multichannel marketing initiatives, while also minimizing the time and effort required to manage mobile sites.

    Are you curious to know how your organization measures up when it comes to delivering an engaging, multichannel digital experience? If so, take this brief, 15-question online assessment and see how your organization scores in the areas of digital strategy, digital agility, relevance and personalization, social interactivity, and multichannel experience.


  • git | error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied [SOLVED]

    - by Corbin Tarrant
    I am having a strange issue that I can't seem to resolve. Here is what happened: I had some log files in a GitHub repository that I didn't want there. I found this script that removes files completely from git history, like so:

        #!/bin/bash
        set -o errexit

        # Author: David Underhill
        # Script to permanently delete files/folders from your git repository. To use
        # it, cd to your repository's root and then run the script with a list of paths
        # you want to delete, e.g., git-delete-history path1 path2

        if [ $# -eq 0 ]; then
            exit 0
        fi

        # make sure we're at the root of git repo
        if [ ! -d .git ]; then
            echo "Error: must run this script from the root of a git repository"
            exit 1
        fi

        # remove all paths passed as arguments from the history of the repo
        files=$@
        git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch $files" HEAD

        # remove the temporary history git-filter-branch otherwise leaves behind for a long time
        rm -rf .git/refs/original/ && git reflog expire --all && git gc --aggressive --prune

    I, of course, made a backup first and then tried it. It seemed to work fine. I then did a git push -f and was greeted with the following messages:

        error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied
        error: Cannot update the ref 'refs/remotes/origin/master'.

    Everything seems to have pushed fine though, because the files seem to be gone from the GitHub repository. If I try and push again, I get the same thing:

        error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied
        error: Cannot update the ref 'refs/remotes/origin/master'.
        Everything up-to-date

    EDIT

        $ sudo chgrp {user} .git/logs/refs/remotes/origin/master
        $ sudo chown {user} .git/logs/refs/remotes/origin/master
        $ git push
        Everything up-to-date

    Thanks!

    EDIT

    Uh oh. Problem. I've been working on this project all night and just went to commit my changes:

        error: Unable to append to .git/logs/refs/heads/master: Permission denied
        fatal: cannot update HEAD ref

    So I:

        sudo chown {user} .git/logs/refs/heads/master
        sudo chgrp {user} .git/logs/refs/heads/master

    I try the commit again and I get:

        error: Unable to append to .git/logs/HEAD: Permission denied
        fatal: cannot update HEAD ref

    So I:

        sudo chown {user} .git/logs/HEAD
        sudo chgrp {user} .git/logs/HEAD

    And then I try the commit again:

        16 files changed, 499 insertions(+), 284 deletions(-)
        create mode 100644 logs/DBerrors.xsl
        delete mode 100644 logs/emptyPHPerrors.php
        create mode 100644 logs/trimXMLerrors.php
        rewrite public/codeCore/Classes/php/DatabaseConnection.php (77%)
        create mode 100644 public/codeSite/php/init.php

        $ git push
        Counting objects: 49, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (27/27), done.
        Writing objects: 100% (27/27), 7.72 KiB, done.
        Total 27 (delta 15), reused 0 (delta 0)
        To [email protected]:IAmCorbin/MooKit.git
           59da24e..68b6397  master -> master

    Hooray. I jump on http://GitHub.com and check out the repository, and my latest commit is nowhere to be found. ::scratch head:: So I push again:

        Everything up-to-date

    Umm... it doesn't look like it. I've never had this issue before. Could this be a problem with GitHub, or did I mess something up with my git project?

    EDIT

    Never mind, I did a simple:

        git push origin master

    and it pushed fine.


  • git problems installing stuff [closed]

    - by dale
        root@Frenzen:~# cd
        root@Frenzen:~# git clone --depth 1 git://source.ffmpeg.org/ffmpeg
        Initialized empty Git repository in /root/ffmpeg/.git/
        root@Frenzen:~# cd
        root@Frenzen:~# git clone --depth 1 git://source.ffmpeg.org/ffmpeg
        Initialized empty Git repository in /root/ffmpeg/.git/
        root@Frenzen:~# git clone git://source.ffmpeg.org/ffmpeg.git ffmpeg
        Initialized empty Git repository in /root/ffmpeg/.git/
        cd
        root@Frenzen:~# wget "http://git.videolan.org/?p=ffmpeg.git;a=snapshot;h=HEAD;sf=tgz" -O ffmpeg-snapshot.tar.gz
        --2012-10-05 05:43:55-- http://git.videolan.org/?p=ffmpeg.git;a=snapshot;h=HEAD;sf=tgz
        Resolving git.videolan.org... 2a01:e0d:1:3:58bf:fa76:0:1, 88.191.250.118
        Connecting to git.videolan.org|2a01:e0d:1:3:58bf:fa76:0:1|:80...
        root@Frenzen:~# cd
        root@Frenzen:~# wget "http://git.videolan.org/?p=ffmpeg.git;a=snapshot;h=HEAD;sf=tgz" -O ffmpeg-snapshot.tar.gz
        --2012-10-05 05:44:17-- http://git.videolan.org/?p=ffmpeg.git;a=snapshot;h=HEAD;sf=tgz
        Resolving git.videolan.org... 2a01:e0d:1:3:58bf:fa76:0:1, 88.191.250.118
        Connecting to git.videolan.org|2a01:e0d:1:3:58bf:fa76:0:1|:80...


  • Hosting Mercurial on IIS7

    - by Lasse V. Karlsen
    Note: this might perhaps be best suited on serverfault.com, but since it is about hosting a programmer's source code repository, I am not entirely sure. I'm posting here first, trusting that it'll be migrated if necessary.

    I'm attempting to host clones of my Mercurial repositories on my own server (I have the main repo somewhere else), and I'm attempting to set up Mercurial under IIS. I followed the guide here, but I get an error message. Solved: see the bottom of this question for details.

    The error message is:

        mercurial.error.RepoError: repository /path/to/repo/or/config not found

    Here's what I did:

    1. I installed Mercurial 1.5.2.
    2. I created c:\inetpub\hg.
    3. I downloaded the hg source as per the instructions on the web page, and copied the hgweb.cgi file into c:\inetpub\hg (note, the web page says hgwebdir.cgi, but this particular file does not exist; hgweb.cgi does, however. Can this be the source of the problem?)
    4. I added a hgweb.config with the following contents:

        [paths]
        repo1 = C:/hg/**

        [web]
        style = monoblue

    5. I created c:\hg, created a sub-directory test, and created a repository inside it.
    6. I installed Python 2.6.5, the latest 2.6 version from the website (the web page mentions I need to install the correct version or I'll get a specific error message; since I don't get an error message that looks remotely like the one mentioned, I assume 2.6.5 is not the problem).
    7. I added a new virtual host hg.vkarlsen.no, pointing it to c:\inetpub\hg.
    8. For this host, I added a script mapping under the Handler Mappings section, mapping *.cgi to c:\python26\python.exe -u %s %s as per the instructions on the website.

    I then tested it by navigating to http://hg.vkarlsen.no/hgweb.cgi, but I get an error message. To make it easier to test, I dropped to a command prompt, navigated to c:\inetpub\hg, and executed the following command (the error message is part of the text below):

        C:\inetpub\hg>c:\python26\python.exe -u hgweb.cgi
        Traceback (most recent call last):
          File "hgweb.cgi", line 16, in <module>
            application = hgweb(config)
          File "mercurial\hgweb\__init__.pyc", line 12, in hgweb
          File "mercurial\hgweb\hgweb_mod.pyc", line 30, in __init__
          File "mercurial\hg.pyc", line 82, in repository
          File "mercurial\localrepo.pyc", line 2221, in instance
          File "mercurial\localrepo.pyc", line 62, in __init__
        mercurial.error.RepoError: repository /path/to/repo/or/config not found

    Does anyone know what I need to look at in order to fix this?

    Edit: Ok, I think I managed to get one step closer to the solution, but I'm still stumped. I realized the .cgi file is a Python script file, not something compiled, so I opened it for editing, and these lines were sitting in it:

        # Path to repo or hgweb config to serve (see 'hg help hgweb')
        config = "/path/to/repo/or/config"

    So this was the source of the specific error message. If I change the line to this:

        config = "c:\\hg\\test"

    then I can navigate the empty repository through the Mercurial web interface. However, I want to host multiple repositories, and seeing as the line says that I can also link to a hgweb config file, I tried this:

        config = "c:\\inetpub\\hg\\hgweb.config"

    But then I get the following error message:

        mercurial.error.Abort: c:\inetpub\hg\hgweb.config: not a Mercurial bundle file
        Exception ImportError: 'No module named shutil' in <bound method bundlerepository.__del__ of <mercurial.bundlerepo.bundlerepository object at 0x0260A110>> ignored

    Nothing I've tried for the config variable seems to work:

        config = "hgweb.config"
        config = "c:\\hg\\hgweb.config"

    and various other variations I don't remember. So, still stumped. Pointers, anyone?

    Solved: I ended up having to edit the hgweb.cgi file, from:

        from mercurial.hgweb import hgweb, wsgicgi
        application = hgweb(config)

    to:

        from mercurial.hgweb import hgweb, hgwebdir, wsgicgi
        application = hgwebdir(config)

    Note the added hgwebdir parts there. Here's my hgweb.config file, located in the same directory as the hgweb.cgi file:

        [collections]
        C:/hg/ = C:/hg/

        [web]
        style = gitweb

    This now serves my repositories successfully. Hopefully this question will give others some information if they're stumped as I was.


  • Problem Implementing StructureMap in VB.Net Conversion of SharpArchitecture

    - by Monkeeman69
    I work in a VB.Net environment and have recently been tasked with creating an MVC environment to use as a base to work from. I decided to convert the latest SharpArchitecture release (Q3 2009) into VB, which on the whole has gone fine after a bit of hair pulling. I came across a problem with Castle Windsor where my custom repository interface (which lives in the core/domain project), referenced in the constructor of my test controller, was not getting injected with the concrete implementation (from the data project). I hit a brick wall with this, so I basically decided to switch out Castle Windsor for StructureMap. I think I have implemented this ok, as everything compiles and runs, and my controller ran fine when referencing a custom repository interface. It appears now that I have not, or cannot, set up my generic interfaces properly (I hope this makes sense so far, as I am new to all this). When I use IRepository(Of T) (wanting it to be injected with a concrete implementation of Repository(Of T)) in the controller constructor, I get the following runtime error:

        StructureMap Exception Code: 202
        No Default Instance defined for PluginFamily SharpArch.Core.PersistenceSupport.IRepository`1[[DebtRemedy.Core.Page, DebtRemedy.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], SharpArch.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b5f559ae0ac4e006

    Here are my code excerpts that I am using (my project is called DebtRemedy).

    My StructureMap registry class:

        Public Class DefaultRegistry
            Inherits Registry

            Public Sub New()
                ''//Generic Repositories
                AddGenericRepositories()
                ''//Custom Repositories
                AddCustomRepositories()
                ''//Application Services
                AddApplicationServices()
                ''//Validator
                [For](GetType(IValidator)).Use(GetType(Validator))
            End Sub

            Private Sub AddGenericRepositories()
                ''//ForRequestedType(GetType(IRepository(Of ))).TheDefaultIsConcreteType(GetType(Repository(Of )))
                [For](GetType(IEntityDuplicateChecker)).Use(GetType(EntityDuplicateChecker))
                [For](GetType(IRepository(Of ))).Use(GetType(Repository(Of )))
                [For](GetType(INHibernateRepository(Of ))).Use(GetType(NHibernateRepository(Of )))
                [For](GetType(IRepositoryWithTypedId(Of ,))).Use(GetType(RepositoryWithTypedId(Of ,)))
                [For](GetType(INHibernateRepositoryWithTypedId(Of ,))).Use(GetType(NHibernateRepositoryWithTypedId(Of ,)))
            End Sub

            Private Sub AddCustomRepositories()
                Scan(AddressOf SetupCustomRepositories)
            End Sub

            Private Shared Sub SetupCustomRepositories(ByVal y As IAssemblyScanner)
                y.Assembly("DebtRemedy.Core")
                y.Assembly("DebtRemedy.Data")
                y.WithDefaultConventions()
            End Sub

            Private Sub AddApplicationServices()
                Scan(AddressOf SetupApplicationServices)
            End Sub

            Private Shared Sub SetupApplicationServices(ByVal y As IAssemblyScanner)
                y.Assembly("DebtRemedy.ApplicationServices")
                y.With(New FirstInterfaceConvention)
            End Sub
        End Class

        Public Class FirstInterfaceConvention
            Implements ITypeScanner

            Public Sub Process(ByVal type As Type, ByVal graph As PluginGraph) Implements ITypeScanner.Process
                ''//only works on concrete types
                If Not IsConcrete(type) Then
                    Exit Sub
                End If
                ''//grabs first interface
                Dim firstinterface = type.GetInterfaces().FirstOrDefault()
                If firstinterface IsNot Nothing Then
                    ''//registers type
                    graph.AddType(firstinterface, type)
                Else
                    ''//adds concrete types with no interfaces
                    graph.AddType(type)
                End If
            End Sub
        End Class

    I have tried both ForRequestedType (which I think is now deprecated) and For. IRepository(Of T) lives in SharpArch.Core.PersistenceSupport. Repository(Of T) lives in SharpArch.Data.NHibernate.

    My service locator class:

        Public Class StructureMapServiceLocator
            Inherits ServiceLocatorImplBase

            Private container As IContainer

            Public Sub New(ByVal container As IContainer)
                Me.container = container
            End Sub

            Protected Overloads Overrides Function DoGetInstance(ByVal serviceType As Type, ByVal key As String) As Object
                Return If(String.IsNullOrEmpty(key), container.GetInstance(serviceType), container.GetInstance(serviceType, key))
            End Function

            Protected Overloads Overrides Function DoGetAllInstances(ByVal serviceType As Type) As IEnumerable(Of Object)
                Dim objList As New List(Of Object)
                For Each obj As Object In container.GetAllInstances(serviceType)
                    objList.Add(obj)
                Next
                Return objList
            End Function
        End Class

    My controller factory class:

        Public Class ServiceLocatorControllerFactory
            Inherits DefaultControllerFactory

            Protected Overloads Overrides Function GetControllerInstance(ByVal requestContext As RequestContext, ByVal controllerType As Type) As IController
                If controllerType Is Nothing Then
                    Return Nothing
                End If
                Try
                    Return TryCast(ObjectFactory.GetInstance(controllerType), Controller)
                Catch generatedExceptionName As StructureMapException
                    System.Diagnostics.Debug.WriteLine(ObjectFactory.WhatDoIHave())
                    Throw
                End Try
            End Function
        End Class

    The initialise stuff in my global.asax:

        Dim container As IContainer = New Container(New DefaultRegistry)
        ControllerBuilder.Current.SetControllerFactory(New ServiceLocatorControllerFactory())
        ServiceLocator.SetLocatorProvider(Function() New StructureMapServiceLocator(container))

    My test controller:

        Public Class DataCaptureController
            Inherits BaseController

            Private ReadOnly clientRepository As IClientRepository()
            Private ReadOnly pageRepository As IRepository(Of Page)

            Public Sub New(ByVal clientRepository As IClientRepository(), ByVal pageRepository As IRepository(Of Page))
                Check.Require(clientRepository IsNot Nothing, "clientRepository may not be null")
                Check.Require(pageRepository IsNot Nothing, "pageRepository may not be null")
                Me.clientRepository = clientRepository
                Me.pageRepository = pageRepository
            End Sub

            Function Index() As ActionResult
                Return View()
            End Function
        End Class

    The above works fine when I take out everything to do with the pageRepository, which is IRepository(Of T). Any help with this would be greatly appreciated.


  • Error pushing to remote with git

    - by pcm2a
    I have a fresh CentOS 6 server stood up, and I have installed git version 1.7.1 through yum. I am using the smart HTTP method through Apache for access. When I try to push to the remote server, this is what I get:

        $ git push origin master
        Password:
        Counting objects: 6, done.
        Compressing objects: 100% (3/3), done.
        Writing objects: 100% (6/6), 436 bytes, done.
        Total 6 (delta 0), reused 0 (delta 0)
        error: unpack failed: index-pack abnormal exit

    I have tried these things, which made no difference:

        chown -R apache:apache /path/to/git/repository    (httpd runs as apache)
        chown -R apache:users /path/to/git/repository
        chmod -R 777 /path/to/git/repository    (obviously not secure, but I wanted to rule out file permissions)

    What can I try to get pushing to work?


  • Mac OS X Server add server user

    - by Meltemi
    What's the recommended way to add a user to Mac OS X Server that doesn't need all the hoopla associated with Workgroup Manager? There are many users pre-configured in Mac OS X Server (www, root, ldapadmin, etc.) that don't have a "Full Name", mail accounts, etc. I'd like to create an 'svn' user to be the owner of our Subversion repository, as per this tutorial:

        If you've decided to use either Apache or stock svnserve, create a single svn
        user on your system and run the server process as that user. Be sure to make
        the repository directory wholly owned by the svn user as well. From a security
        point of view, this keeps the repository data nicely siloed and protected by
        operating system filesystem permissions, changeable by only the Subversion
        server process itself.

    Wondering if there's a way outside of Workgroup Manager and Open Directory, as this account will be entirely server based. Is this still sound advice under OS X Server? If so, what's the easiest way to create the user (Mac OS X Server doesn't seem to respond to useradd)?

    Read the article

  • Redmine: reposman.rb succeeds, but does not make SVN repos available to projects

    - by Joey Adams
    I'm testing reposman.rb on the command line (before I make it a cron job):

      /usr/sbin/reposman.rb --svn-dir=/var/svn \
          --redmine-host=http://example.com/projects --key='redacted' \
          --owner='nobody' --group='nobody'

    It succeeded, printing messages for projects that didn't have repos yet:

      repository /var/svn/project1 created
      repository /var/svn/project2 created

    and printed nothing after running the same command again, indicating it remembered the repos. However, if I look at the Repository settings in Redmine for project1 and project2, they aren't set. Although the SVN repos are created, the Redmine projects aren't configured. How do I get reposman.rb to automatically configure Redmine projects to use the repos after they're set up?
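    For what it's worth, a hedged guess: reposman.rb creates repositories unconditionally, but it only registers them back into each Redmine project when told what URL to record, via its --url option (verify against reposman.rb --help for your Redmine version):

      /usr/sbin/reposman.rb --svn-dir=/var/svn \
          --redmine-host=http://example.com/projects --key='redacted' \
          --owner='nobody' --group='nobody' \
          --url=file:///var/svn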

    Read the article

  • 403 in Response to OPTIONS when updating working copy having full access

    - by user23419
    There is a single SVN repository at http://example.net/svn. The repository contains several projects (directories):

      http://example.net/svn/Project1
      http://example.net/svn/Project2

    The user has full access to the Project1 directory and no access to either the root or Project2. Everything works fine for a while: the user checks out http://example.net/svn/Project1, commits, and updates it successfully. But sometimes trying to update leads to the following error:

      Command: Update
      Error: Server sent unexpected return value (403 Forbidden) in response to OPTIONS
      Error: request for 'http://example.net/svn'
      Finished!

    Why does TortoiseSVN request something in the root? I have noticed that this happens after somebody else commits a copy or move operation. Checking out http://example.net/svn/Project1 again helps until the next time. The main question: how do I set up access rights for the user to avoid these errors? Note that granting the user any read or write access on the root directory is not an option, for security reasons.
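    A hedged sketch of one way to structure this, assuming the 403 comes from Apache-level per-directory restrictions rather than from Subversion itself: authenticate at the repository root in Apache, and push per-project authorization down into mod_authz_svn's path-based rules, so the client's root-level OPTIONS request is answered while the root stays unreadable. Paths and the user name are illustrative, and note that commits copying or moving items still need read access on the source paths:

      # Illustrative httpd config; paths are assumptions.
      <Location /svn>
        DAV svn
        SVNPath /var/svn/repos
        AuthType Basic
        AuthName "Subversion Repository"
        AuthUserFile /etc/svn-auth
        Require valid-user
        AuthzSVNAccessFile /etc/svn-authz
      </Location>

      # /etc/svn-authz: deny the root, grant per project.
      [/]
      * =

      [/Project1]
      user = rw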

    Read the article

  • Use apt-get source on a Debian repo without using /etc/apt/sources.list

    - by Erwan Queffélec
    I'm trying to use apt-get source as a regular user on a Debian squeeze system. I want to retrieve the sources for cyrus-imapd-2.4 from the testing/wheezy repository. apt-get source works without root privileges; however, it seems there is no way to get apt-get to fetch anything from a repository that is not in /etc/apt/sources.list. Is there any command-line option, alternate sources.list file, or environment variable that will get apt to work with a custom repository? I do have root access, so I could change /etc/apt/sources.list, but I really do not want to do that, for a number of reasons.
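    A hedged sketch of one approach: apt reads its file locations from configuration items that can be overridden per invocation with -o, so Dir::Etc::sourcelist can point at a private file. The file path is illustrative, and note that apt-get update still needs somewhere writable for its list cache (Dir::State::Lists) when run as a regular user:

      # One-off source list containing only the wheezy deb-src line.
      echo 'deb-src http://ftp.debian.org/debian wheezy main' > ~/wheezy-src.list

      # Refresh and fetch using only that file; sourceparts="-" disables
      # the /etc/apt/sources.list.d fragments.
      apt-get -o Dir::Etc::sourcelist=$HOME/wheezy-src.list \
              -o Dir::Etc::sourceparts="-" update
      apt-get -o Dir::Etc::sourcelist=$HOME/wheezy-src.list \
              -o Dir::Etc::sourceparts="-" source cyrus-imapd-2.4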

    Read the article

  • VisualSVN Server running but cannot access/browse repositories

    - by user1783560
    Operating system: Windows Web Server 2008 R2
    VisualSVN Server version: 2.5.7 (Subversion 1.7.7, Apache 2.2.22)

    I freshly installed the latest version of VisualSVN Server and created one repository in it. The server management window shows that the server is up and running, but when I try to browse it in a web browser, it doesn't respond. I am also not able to import my existing code into the repository (Error: Cannot connect to server), or to open/browse it at any of these URLs: http://localhost:81/svn, http://www.myserver.com:81/svn, or http://myIPAddress:81/svn. The VisualSVN log is clean; the last entry in the server log says "The server is listening on port 81."
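    Two hedged checks from an elevated command prompt, since a clean log plus no response usually means the requests never reach Apache; the firewall rule name is arbitrary:

      rem Confirm something is actually bound to port 81.
      netstat -ano | findstr :81

      rem Open port 81 in Windows Firewall for inbound TCP.
      netsh advfirewall firewall add rule name="VisualSVN Server" ^
          dir=in action=allow protocol=TCP localport=81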

    Read the article

  • When writing to the Windows Event Log, is it possible to create a custom link text for URLs?

    - by thomasnguyencom
    I'm just using the vanilla EventLog.WriteEntry method:

      EventLog.WriteEntry(EVENT_SOURCE, message, EventLogEntryType.Error, id);

    Example 1 — this is how the message shows up in the Event Log; the links in the parentheses work just fine, but it's ugly:

      Please contact us via email (mailto:[email protected]) or online (http://example.com).

    Example 2 — with HTML "markup", the Event Log doesn't even handle it:

      Please contact us via <a href="mailto:[email protected]">email</a> or <a href="http://example.com">online</a>.

    Example 3 — this is how I would like the message to show up, but with "email" and "online" as the link texts:

      Please contact us via email or online.

    I've tried the <a href>...</a> HTML tags with no success.

    Read the article

  • Injecting EntityManager via XML and not annotations

    - by user355543
    What I am trying to do is inject through XML almost the same way that is done through a @PersistenceContext annotation. I need this because I have different entity managers to inject into the same DAO. The databases mirror one another, and I would rather have one base class than create multiple subclasses just so I can use the @PersistenceContext annotation. Here is what I am doing now, and it works:

      public class ItemDaoImpl {
          protected EntityManager entityManager;

          public List<Item> getItems() {
              Query query = entityManager.createQuery("select i from Item i");
              List<Item> s = (List<Item>) query.getResultList();
              return s;
          }

          public void setEntityManager(EntityManager entityManager) {
              this.entityManager = entityManager;
          }
      }

      @Repository(value = "itemDaoStore2")
      public class ItemDaoImplStore2 extends ItemDaoImpl {
          @PersistenceContext(unitName = "persistence_unit_2")
          public void setEntityManager(EntityManager entityManager) {
              this.entityManager = entityManager;
          }
      }

      @Repository(value = "itemDaoStore1")
      public class ItemDaoImplStore1 extends ItemDaoImpl {
          @PersistenceContext(unitName = "persistence_unit_1")
          public void setEntityManager(EntityManager entityManager) {
              this.entityManager = entityManager;
          }
      }

    Transaction managers and entity manager factories are defined below:

      <!-- Registers Spring's standard post-processors for annotation-based configuration like @Repository -->
      <context:annotation-config />

      <!-- For @Transactional annotations -->
      <tx:annotation-driven transaction-manager="transactionManager1" />
      <tx:annotation-driven transaction-manager="transactionManager2" />

      <!-- This makes Spring perform @PersistenceContext/@PersistenceUnit injection: -->
      <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/>

      <!-- Drives transactions using local JPA APIs -->
      <bean id="transactionManager1" class="org.springframework.orm.jpa.JpaTransactionManager">
          <property name="entityManagerFactory" ref="entityManagerFactory1" />
      </bean>
      <bean id="transactionManager2" class="org.springframework.orm.jpa.JpaTransactionManager">
          <property name="entityManagerFactory" ref="entityManagerFactory2" />
      </bean>

      <bean id="entityManagerFactory1" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
          <property name="persistenceUnitName" value="persistence_unit_1"/>
          ...
      </bean>
      <bean id="entityManagerFactory2" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
          <property name="persistenceUnitName" value="persistence_unit_2"/>
          ...
      </bean>

    What I want is to NOT create the ItemDaoImplStore1 or ItemDaoImplStore2 classes. I want these to be instances of ItemDaoImpl configured via XML instead, but I do not know how to inject the entity manager properly. I want to simulate the @Repository annotation and to specify which entity manager to inject by persistence unit name, something like this:

      <!-- Somehow annotate this instance as a @Repository annotation -->
      <bean id="itemDaoStore1" class="ItemDaoImpl">
          <!-- Does not work, since entityManagerFactory1 is a LocalContainerEntityManagerFactoryBean.
               I would prefer to do it the way @PersistenceContext works and only provide the
               persistence unit name, in this case persistence_unit_1. -->
          <property name="entityManager" ref="entityManagerFactory1"/>
      </bean>

      <!-- Somehow annotate this instance as a @Repository annotation -->
      <bean id="itemDaoStore2" class="ItemDaoImpl">
          <!-- Same problem; here I would like to specify persistence_unit_2. -->
          <property name="entityManager" ref="entityManagerFactory2"/>
      </bean>
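    A hedged sketch of one standard way to do exactly this: Spring's org.springframework.orm.jpa.support.SharedEntityManagerBean wraps an EntityManagerFactory in a shared, transaction-aware EntityManager proxy that can be injected as an ordinary property. The class name is real Spring; the wiring below is illustrative:

      <!-- Shared, transaction-aware EntityManager proxies, one per unit. -->
      <bean id="entityManager1" class="org.springframework.orm.jpa.support.SharedEntityManagerBean">
          <property name="entityManagerFactory" ref="entityManagerFactory1"/>
      </bean>
      <bean id="entityManager2" class="org.springframework.orm.jpa.support.SharedEntityManagerBean">
          <property name="entityManagerFactory" ref="entityManagerFactory2"/>
      </bean>

      <!-- Two instances of the same DAO class, each with its own unit. -->
      <bean id="itemDaoStore1" class="ItemDaoImpl">
          <property name="entityManager" ref="entityManager1"/>
      </bean>
      <bean id="itemDaoStore2" class="ItemDaoImpl">
          <property name="entityManager" ref="entityManager2"/>
      </bean>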

    Read the article

  • Authentication in Apache2 with mod_dav_svn

    - by Poita_
    I'm having some trouble setting up authentication in Apache2 for an SVN repository that's being served using mod_dav_svn. Here is my Apache config for the directory:

      <Location /svn>
        DAV svn
        SVNParentPath /var/svn/repos
        AuthType Basic
        AuthName "Subversion Repository"
        AuthUserFile /etc/apache2/dev.passwd
        Require valid-user
      </Location>

    I can use svn with the projects under /var/svn/repos, so I know that the DAV is working, but when I do svn updates or commits (or anything), Apache doesn't ask for any authentication. It does the exact same thing whether the Auth directives are there or not. The permissions on the repository directory (and all subdirectories/files) only give access to www-data (the Apache2 user/group). I have also ensured that all relevant modules are enabled (in particular mod_auth is enabled, as are all mod_dav* modules). Any ideas why svn commands aren't authenticating? Thanks in advance.
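    A few hedged checks, assuming a Debian-style Apache 2.2 layout (module and path names may differ elsewhere):

      # Apache 2.2 basic auth needs mod_auth_basic plus an authn provider.
      apache2ctl -M | grep -i auth

      # Make sure the password file exists and holds at least one user
      # (-c creates/overwrites the file, so only use it the first time).
      sudo htpasswd -c /etc/apache2/dev.passwd someuser

      # Verify the <Location /svn> block lives in a file Apache actually
      # includes, and that no other block for the same path wins; then reload.
      grep -rn "Location /svn" /etc/apache2/
      sudo apache2ctl graceful

    Also worth remembering: the svn client caches credentials under ~/.subversion/auth, so a client that authenticated once may never visibly prompt again.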

    Read the article

  • Subversion LDAP Configuration

    - by dbyrne
    I am configuring a Subversion repository to use basic LDAP authentication. I have an entry in my httpd.conf file that looks like this:

      <Location /company/some/location>
        DAV svn
        SVNPath /repository/some/location
        AuthType Basic
        AuthName LDAP
        AuthBasicProvider ldap
        Require valid-user
        AuthLDAPBindDN "cn=SubversionAdmin,ou=admins,o=company.com"
        AuthLDAPBindPassword "XXXXXXX"
        AuthLDAPURL "ldap://company.com/ou=people,o=company.com?personid"
      </Location>

    This works fine for living, breathing people who need to log in. However, I also need to give application accounts access to the repository, and those accounts are in a different OU. Do I need to add a whole new <Location> element, or can I add a second AuthLDAPURL to the existing entry?
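    If it helps, a hedged note: mod_authnz_ldap honors only one AuthLDAPURL per scope, but the URL's base-DN, scope, and filter segments can be widened to cover both OUs in a single subtree search. The base DN and filter below are illustrative:

      # ldap://host/base-dn?attribute?scope?filter
      AuthLDAPURL "ldap://company.com/o=company.com?personid?sub?(objectClass=person)"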

    Read the article

  • Get TortoiseSVN to give me filenames with the build number in the filename

    - by EricJLN
    I am on a Windows 7 box, and I have TortoiseSVN on my machine. After getting a little familiar with svn and TortoiseSVN on a code repository, I set up a local repository to manage revisions of some Word and PowerPoint documents. I want to figure out some scripted way to output a set of files with the build/revision number embedded in the filename. I will then email the files to some business people to review. For example, say I have a group of files in my working directory:

      PresentA.pptx
      PresentA-notes.docx
      PresentB.pptx

    If the TortoiseSVN repo browser tells me that I am currently at revision 21 for PresentA.pptx and PresentA-notes.docx but at revision 25 for PresentB.pptx, I would like some way to get three files with the following names:

      PresentA-r21.pptx
      PresentA-notes-r21.docx
      PresentB-r25.pptx

    Alternatively, if revision 25 is the current value for the repository, having all the names appended with -r25 would work too.
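    A hedged batch sketch that copies each file with its last-changed revision in the name. It assumes a command-line svn client on the PATH (an optional component of the TortoiseSVN installer) and Subversion 1.9 or later for --show-item; older clients would need to parse plain svn info output instead:

      @echo off
      rem For each document, ask svn for its last-changed revision and copy
      rem it to name-rNN.ext (e.g. PresentA.pptx -> PresentA-r21.pptx).
      for %%f in (*.pptx *.docx) do (
          for /f %%r in ('svn info --show-item last-changed-revision "%%f"') do (
              copy "%%f" "%%~nf-r%%r%%~xf"
          )
      )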

    Read the article

  • Restoring Subversion repositories from backup

    - by John Hoge
    Hi, I had to restore a Subversion server from a backup image taken the previous night. Everything worked fine after the restore except for one repository. A working copy had been committed on the server after the latest backup, so this working copy had newer files than the restored repository. I tried to commit the files using TortoiseSVN, but SVN didn't recognize that the files in the working copy were newer than those in the repository. I'm using Subversion Server 1.6.5 on Windows 2003 Server and TortoiseSVN 1.6.8 64-bit on a Win7 64-bit client. Thanks, John
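    A hedged recovery sketch: the stale working copy's .svn metadata references revisions the restored repository never had, so rather than coaxing a commit out of it, check out a fresh working copy and layer the newer files on top. All paths are illustrative, and svnmeta.txt is a hypothetical one-line exclude file containing .svn\ so the metadata folders are skipped:

      svn checkout http://server/svn/repo C:\work\repo-fresh

      rem Copy the files (not the .svn folders) from the stale working copy.
      xcopy C:\work\repo-stale C:\work\repo-fresh /E /Y /EXCLUDE:C:\work\svnmeta.txt

      rem Review what changed, then commit it back.
      svn status C:\work\repo-fresh
      svn commit C:\work\repo-fresh -m "Re-apply changes made after the last backup"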

    Read the article
