Search Results

Search found 20286 results on 812 pages for 'software packaging'.


  • Automate the signature of the update.rdf manifest for my firefox extension

    - by streetpc
    Hello, I'm developing a Firefox extension and I'd like to provide automatic updates to my beta-testers (who are not tech-savvy). Unfortunately, the update server doesn't provide HTTPS. According to the Extension Developer Guide on signing updates, I have to sign my update.rdf and provide an encoded public key in the install.rdf. There is the McCoy tool to do all of this, but it is an interactive GUI tool and I'd like to automate the extension packaging using an Ant script (as this is part of a much bigger process). I can't find a more precise description of what happens when signing the update.rdf manifest than the one below, and the McCoy source is an awful lot of JavaScript. The doc says: The add-on author creates a public/private RSA cryptographic key pair. The public part of the key is DER encoded and then base 64 encoded and added to the add-on's install.rdf as an updateKey entry. (...) Roughly speaking the update information is converted to a string, then hashed using a sha512 hashing algorithm and this hash is signed using the private key. The resultant data is DER encoded then base 64 encoded for inclusion in the update.rdf as a signature entry. I don't know DER encoding well, but it seems to need some parameters. So, would anyone know either the full algorithm to sign update.rdf and install.rdf using a predefined keypair, a scriptable alternative to McCoy (a command-line tool like asn1coding would suffice), or a good/simple developer tutorial on DER encoding?

    Read the article

  • Hadoop streaming with Python and python subprocess

    - by Ganesh
    I have set up a basic Hadoop master/slave cluster and am able to run MapReduce programs (including Python) on it. Now I am trying to run Python code that calls a C binary, so I am using the subprocess module. Hadoop streaming works fine for normal Python code, but when I include the subprocess call to the binary, the job fails. As you can see in the logs below, the hello executable is picked up for packaging, but the code still fails to run.
    . .
    packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null
    JarBuilder.addNamedStream hello
    . .
    12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1
    12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
    12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057
    12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run:
    12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
    12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
    12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0%
    12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100%
    12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run:
    12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
    12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
    12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful!
    12/03/07 22:32:05 INFO streaming.StreamJob: killJob...
    Streaming Job Failed!
    The command I am trying is:
    hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py -input /user/hduser/mars_inputt -output /user/hduser/mars-output -file /tmp/hello/hello -verbose
    where hello is the C executable. It is a simple hello-world program which I am using to check the basic functioning. My Python code is:
    #!/usr/bin/env python
    import subprocess
    subprocess.call(["./hello"])
    Any help with getting the executable to run from Python under Hadoop streaming, or with debugging this, would help me move forward. Thanks, Ganesh

    Read the article

  • Installing Epic (Eclipse Plugin) in Pulse Explorer

    - by The Daemons Advocate
    I'm trying to install EPIC using the Pulse Explorer for Eclipse (as I'm rather fond of sharing profiles :). When I go to install the plugin under my account, I get asked for a login to http://e-p-i-c.sf.net. However, the EPIC team's documentation doesn't mention anything about a login. Here's what I've done: Gone into Pulse and created a new profile based on Eclipse Classic. Navigated to Software, added the EPIC software site to the list of public sites, and chosen to install it. Added the Pulse item to the profile. Run the installer. The error shows up while it's all downloading/installing. Login boxes start to appear for EPIC-related components, and I don't have credentials to put in, so all I can do is hit cancel. If I hit cancel, the process fails at the end with the generic error message: "an unexpected error occurred preparing to install and/or launch the selected profile". The bundles that fail to download are: org.epic.debug, org.epic.doc, org.epic.lib, org.epic.perleditor, org.epic.regxp, org.epic.source. The component that's exploding is called: org.eclipse.equinox.internal.p2.repository.Credentials$LoginCancelledException. I've had the same effect on Pulse 0.5.x and 0.6.x. No clue where to go from here. I might contact the EPIC and Pulse teams and ask them, but I thought I'd get a better response from here. I'm somewhat sure I'm doing something wrong.

    Read the article

  • wix The directory is in the user profile but is not listed in the RemoveFile table

    - by Venkat S. Rao
    I have the following WiX configuration to delete and copy a file:
    <Directory Id='TARGETDIR' Name='SourceDir'>
      ...
      <Directory Id="AppDataFolder" Name="AppDataFolder">
        <Directory Id="GleasonAppData" Name="Gleason">
          <Directory Id="GleasonStudioAppData" Name="GleasonStudio">
            <Directory Id="DatabaseAppData" Name="Database">
              <Directory Id="UserSandboxesAppData" Name="UserSandboxes" />
            </Directory>
          </Directory>
        </Directory>
      </Directory>
    </Directory>
    <DirectoryRef Id="UserSandboxesAppData">
      <Component Id="comp_deleteBackup" Guid="1f159f49-3029-4f46-b194-e42aabd40844">
        <RemoveFile Id="RemoveBackup" Directory="UserSandboxesAppData" Name="DevelopmentBackUp.FDB" On="install" />
        <RegistryKey Root="HKCU" Key="Software\Gleason\Database\RemoveBackup">
          <RegistryValue Value="Removed" Type="string" KeyPath="yes" />
        </RegistryKey>
      </Component>
      <Component Id="comp_createBackup" Guid="557badef-6d77-4c4e-aa5f-8d88cb5ef735">
        <CopyFile Id="DBBackup" DestinationDirectory="UserSandboxesAppData" DestinationName="DevelopmentBackUp.FDB" SourceDirectory="UserSandboxesAppData" SourceName="Development.FDB" />
        <RegistryKey Root="HKCU" Key="Software\Gleason\Database\CopyBackup">
          <RegistryValue Value="Copied" Type="string" KeyPath="yes" />
        </RegistryKey>
      </Component>
    </DirectoryRef>
    I get 4 errors related to ICE64, "The directory 'xxx' is in the user profile but is not listed in the RemoveFile table", where xxx is each of UserSandboxesAppData, DatabaseAppData, GleasonStudioAppData and GleasonAppData. Someone else had a very similar problem ("Directory xx is in the user profile but is not listed in the RemoveFile table"), but that solution did not help me. What do I need to change? Thank you, Venkat Rao

    Read the article

  • Receiving DB update events in .NET from SQLite

    - by Dan Tao
    I've recently discovered the awesomeness of SQLite, specifically the .NET wrapper for SQLite at http://sqlite.phxsoftware.com/. Now, suppose I'm developing software that will be running on multiple machines on the same network. Nothing crazy, probably only 5 or 6 machines. And each of these instances of the software will be accessing an SQLite database stored in a file in a shared directory (is this a bad idea? If so, tell me!). Is there a way for each instance of the app to be notified if one instance updates the database file? One obvious way would be to use the FileSystemWatcher class, read the entire database into a DataSet, and then ... you know ... enumerate through the entire thing to see what's new ... but yeah, that seems pretty idiotic, actually. Is there such a thing as a provider of SQLite updates? Does this even make sense as a question? I'm also pretty much a newbie when it comes to ADO.NET, so I might be approaching the problem from entirely the wrong angle.
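    For what it's worth, the "obvious way" mentioned above can at least be sketched cheaply. The snippet below is a minimal, hypothetical example of watching the shared database file for writes; the share path and file name are placeholders, FileSystemWatcher can be unreliable over network shares, and it only tells you that the file changed, not what changed (pairing it with a small change-log table written by whichever instance does the update is one common workaround).
    using System;
    using System.IO;

    // Minimal sketch: raise a signal whenever the shared SQLite file is written to.
    class DatabaseChangeWatcher
    {
        static void Main()
        {
            var watcher = new FileSystemWatcher(@"\\server\share", "app.db")
            {
                NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Size,
                EnableRaisingEvents = true
            };

            // Fires when another process writes to the database file.
            watcher.Changed += (sender, e) =>
                Console.WriteLine("{0} changed at {1}", e.FullPath, DateTime.Now);

            Console.WriteLine("Watching... press Enter to quit.");
            Console.ReadLine();
        }
    }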

    Read the article

  • Why do I get Detached Entity exception when upgrading Spring Boot 1.1.4 to 1.1.5

    - by mmeany
    On updating Spring Boot from 1.1.4 to 1.1.5 a simple web application started generating detached entity exceptions. Specifically, a post authentication inteceptor that bumped number of visits was causing the problem. A quick check of loaded dependencies showed that Spring Data has been updated from 1.6.1 to 1.6.2 and a further check of the change log shows a couple of issues relating to optimistic locking, version fields and JPA issues that have been fixed. Well I am using a version field and it starts out as Null following recommendation to not set in the specification. I have produced a very simple test scenario where I get detached entity exceptions if the version field starts as null or zero. If I create an entity with version 1 however then I do not get these exceptions. Is this expected behaviour or is there still something amiss? Below is the test scenario I have for this condition. In the scenario the service layer that has been annotated @Transactional. Each test case makes multiple calls to the service layer - the tests are working with detached entities as this is the scenario I am working with in the full blown application. The test case comprises four tests: Test 1 - versionNullCausesAnExceptionOnUpdate() In this test the version field in the detached object is Null. This is how I would usually create the object prior to passing to the service. This test fails with a Detached Entity exception. I would have expected this test to pass. If there is a flaw in the test then the rest of the scenario is probably moot. Test 2 - versionZeroCausesExceptionOnUpdate() In this test I have set the version to value Long(0L). This is an edge case test and included because I found reference to Zero values being used for version field in the Spring Data change log. This test fails with a Detached Entity exception. Of interest simply because the following two tests pass leaving this as an anomaly. Test 3 - versionOneDoesNotCausesExceptionOnUpdate() In this test the version field is set to value Long(1L). Not something I would usually do, but considering the notes in the Spring Data change log I decided to give it a go. This test passes. Would not usually set the version field, but this looks like a work-around until I figure out why the first test is failing. Test 4 - versionOneDoesNotCausesExceptionWithMultipleUpdates() Encouraged by the result of test 3 I pushed the scenario a step further and perform multiple updates on the entity that started life with a version of Long(1L). This test passes. Reinforcement that this may be a useable work-around. The entity: package com.mvmlabs.domain; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.Table; import javax.persistence.Version; @Entity @Table(name="user_details") public class User { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; @Version private Long version; @Column(nullable = false, unique = true) private String username; @Column(nullable = false) private Integer numberOfVisits; public Long getId() { return id; } public void setId(Long id) { this.id = id; } public Long getVersion() { return version; } public void setVersion(Long version) { this.version = version; } public Integer getNumberOfVisits() { return numberOfVisits == null ? 
0 : numberOfVisits; } public void setNumberOfVisits(Integer numberOfVisits) { this.numberOfVisits = numberOfVisits; } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } } The repository: package com.mvmlabs.dao; import org.springframework.data.repository.CrudRepository; import com.mvmlabs.domain.User; public interface UserDao extends CrudRepository<User, Long>{ } The service interface: package com.mvmlabs.service; import com.mvmlabs.domain.User; public interface UserService { User save(User user); User loadUser(Long id); User registerVisit(User user); } The service implementation: package com.mvmlabs.service; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Propagation; import org.springframework.transaction.annotation.Transactional; import org.springframework.transaction.support.TransactionSynchronizationManager; import com.mvmlabs.dao.UserDao; import com.mvmlabs.domain.User; @Service @Transactional(propagation=Propagation.REQUIRED, readOnly=false) public class UserServiceJpaImpl implements UserService { @Autowired private UserDao userDao; @Transactional(readOnly=true) @Override public User loadUser(Long id) { return userDao.findOne(id); } @Override public User registerVisit(User user) { user.setNumberOfVisits(user.getNumberOfVisits() + 1); return userDao.save(user); } @Override public User save(User user) { return userDao.save(user); } } The application class: package com.mvmlabs; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.context.annotation.ComponentScan; import org.springframework.context.annotation.Configuration; @Configuration @ComponentScan @EnableAutoConfiguration public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } } The POM: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.mvmlabs</groupId> <artifactId>jpa-issue</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging> <name>spring-boot-jpa-issue</name> <description>JPA Issue between spring boot 1.1.4 and 1.1.5</description> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.1.5.RELEASE</version> <relativePath /> <!-- lookup parent from repository --> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.hsqldb</groupId> <artifactId>hsqldb</artifactId> <scope>runtime</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <start-class>com.mvmlabs.Application</start-class> <java.version>1.7</java.version> </properties> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> The application properties: spring.jpa.hibernate.ddl-auto: create 
spring.jpa.hibernate.naming_strategy: org.hibernate.cfg.ImprovedNamingStrategy spring.jpa.database: HSQL spring.jpa.show-sql: true spring.datasource.url=jdbc:hsqldb:file:./target/testdb spring.datasource.username=sa spring.datasource.password= spring.datasource.driverClassName=org.hsqldb.jdbcDriver The test case: package com.mvmlabs; import org.junit.Assert; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.SpringApplicationConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import com.mvmlabs.domain.User; import com.mvmlabs.service.UserService; @RunWith(SpringJUnit4ClassRunner.class) @SpringApplicationConfiguration(classes = Application.class) public class ApplicationTests { @Autowired UserService userService; @Test public void versionNullCausesAnExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version Null"); user.setNumberOfVisits(0); user.setVersion(null); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(1L), user.getVersion()); } @Test public void versionZeroCausesExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version Zero"); user.setNumberOfVisits(0); user.setVersion(0L); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(1L), user.getVersion()); } @Test public void versionOneDoesNotCausesExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version One"); user.setNumberOfVisits(0); user.setVersion(1L); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(2L), user.getVersion()); } @Test public void versionOneDoesNotCausesExceptionWithMultipleUpdates() throws Exception { User user = new User(); user.setUsername("Version One Multiple"); user.setNumberOfVisits(0); user.setVersion(1L); user = userService.save(user); user = userService.registerVisit(user); user = userService.registerVisit(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(3), user.getNumberOfVisits()); Assert.assertEquals(new Long(4L), user.getVersion()); } } The first two tests fail with detached entity exception. The last two tests pass as expected. Now change Spring Boot version to 1.1.4 and rerun, all tests pass. Are my expectations wrong? Edit: This code saved to GitHub at https://github.com/mmeany/spring-boot-detached-entity-issue

    Read the article

  • Eclipse plugin installation/update issues

    - by The Elite Gentleman
    I've installed the following Team repository plugins (along with their dependencies) for Eclipse Helios (using the Eclipse updater):
    MercurialEclipse 1.7.1
    Subclipse 1.6.17
    Subversive SVN
    All of these are the latest in the Eclipse Marketplace. My problem is that when I go to Eclipse "Preferences", under "Team" I only see CVS, but under Eclipse Marketplace I can see that these plugins are installed (it gives me the option to uninstall them). How do I configure my Team repositories to show up under "Team" in Preferences? Also, there is an update for "Eclipse IDE for Java EE Developers", but when I try to update it, the following error occurs:
    Cannot complete the install because of a conflicting dependency.
    Software being installed: Eclipse IDE for Java EE Developers 1.3.2.20110301-1807 (epp.package.jee 1.3.2.20110301-1807)
    Software currently installed: Shared profile 1.0.0.1276787175574 (SharedProfile_epp.package.jee 1.0.0.1276787175574)
    Only one of the following can be installed at once:
    toolingepp.package.jee.configuration 1.3.2.20110301-1807
    toolingepp.package.jee.configuration 1.3.0.20100617-0521
    Cannot satisfy dependency:
    From: Shared profile 1.0.0.1276787175574 (SharedProfile_epp.package.jee 1.0.0.1276787175574)
    To: toolingepp.package.jee.configuration [1.3.0.20100617-0521]
    Cannot satisfy dependency:
    From: Eclipse IDE for Java EE Developers 1.3.2.20110301-1807 (epp.package.jee 1.3.2.20110301-1807)
    To: toolingepp.package.jee.configuration [1.3.2.20110301-1807]
    How do I solve it? Yes, I've spent days Googling for this issue, but nothing I found solved my problem. Thanks in advance.

    Read the article

  • Relational database data explorer / visualization?

    - by Ian Boyd
    Is there a tool that can let one browse relational data as a graph of connected nodes? For example, I'm faced with trying to cleanse some anomalous data. I can start with two offending rows. In this particular example, the TransactionID should, by business rules, be unique to the table, but I find a transaction that violates that rule:
    SELECT * FROM LCTTrans WHERE TransactionID = 1075048

    LCTID     TransactionID
    ========= =============
    4358      1075048
    4359      1075048

    2 row(s) affected
    But really what I want is to begin to hunt down all the related data, to try to see which is right. So this hypothetical software would start by showing me these two rows. Next, I want to see the transaction that is linked into this table. Now that transaction points to an MAL, so show me that. Now let's add those two LCTs that the transaction is "on". A transaction can be on only one LCT, yet this one is pointing to two. Okay computer, both of those LCTs point to an MAL and the transaction that created them, show me those. Those last two transactions also point at an MAL, and they themselves point to an LCT, show me those. Okay, now are there any entries in LCTTrans that point to LCTs 4358 or 4359? ... And so on, and so on. Now, I did all this manually, running single selects, copying and pasting uniqueidentifier keys and converting them into friendly id numbers so I could easily see the relationships. Is there software that can do this?
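    As an aside, if the relationships are declared as foreign keys, the first step of that hunt ("what references this table?") can be automated. The sketch below assumes SQL Server, since the query above looks like T-SQL; the connection string is a placeholder and only the table name from the question is reused.
    using System;
    using System.Data.SqlClient;

    // Minimal sketch: list every foreign key (and referencing column) that points
    // at a given table, which is the starting point for walking related rows.
    class ForeignKeyExplorer
    {
        static void Main()
        {
            const string sql = @"
                SELECT fk.name AS fk_name,
                       tp.name AS referencing_table,
                       cp.name AS referencing_column
                FROM   sys.foreign_keys fk
                JOIN   sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
                JOIN   sys.tables  tp ON tp.object_id = fkc.parent_object_id
                JOIN   sys.columns cp ON cp.object_id = fkc.parent_object_id
                                     AND cp.column_id = fkc.parent_column_id
                WHERE  fk.referenced_object_id = OBJECT_ID(@table);";

            using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@table", "dbo.LCTTrans");
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}.{2}", reader[0], reader[1], reader[2]);
            }
        }
    }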

    Read the article

  • Java EE6 App + EJB in Glassfish 3.0/Netbeans 6.8?

    - by egbokul
    Has anyone got this configuration working? Latest Netbeans, latest Glassfish, I created an EJB project, also an EE Application. The EJB in itself builds & deploys to Glassfish OK. Now when I want to reference the EJB, I have to add the EJB jar to the EE Application path, if I don't do this the code does not compile. But, the EJB jar gets packaged in the App jar and as a result when I try to deploy the app to Glassfish it says: "java.lang.IllegalArgumentException: Sniffers with type [ejb] and type [appclient] should not claim the archive at the same time. Please check the packaging of your archive" How do I tell Netbeans NOT TO package the EJB in the App jar? Or is the problem somewhere else? btw. if I remove the EJB manually from the JAR then the app deploys successfully (with asadmin deploy), but when I try to run it with appclient, I get a NullPointerException. Surely there must be a solution to this, I thought Netbeans was for web application development after all...

    Read the article

  • Backwards compatibility when using Core Data

    - by Alex
    Could anybody shed some light as to why is my app crashing with the following error on iPhone OS 2.2.1 dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName Expected in: /System/Library/Frameworks/Foundation.framework/Foundation I have weak linked CoreData.framework, and have the Base SDK set to 3.0 and Deployment Target set to SDK 2.2 The app already uses other 3.0 features when available and I did not have any problems with those. But apparently the backward-compatibility methods used for other features do not work with Core Data. The app crashes before app delegate's applicationDidFinishLaunching gets called. Here's the debugger log: [Session started at 2010-05-25 20:17:03 -0400.] GNU gdb 6.3.50-20050815 (Apple version gdb-1119) (Thu May 14 05:35:37 UTC 2009) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".tty /dev/ttys001 Loading program into debugger… sharedlibrary apply-load-rules all warning: Unable to read symbols from "MessageUI" (not yet mapped into memory). warning: Unable to read symbols from "CoreData" (not yet mapped into memory). Program loaded. target remote-mobile /tmp/.XcodeGDBRemote-12038-42 Switching to remote-macosx protocol mem 0x1000 0x3fffffff cache mem 0x40000000 0xffffffff none mem 0x00000000 0x0fff none run Running… [Switching to thread 10755] [Switching to thread 10755] Re-enabling shared library breakpoint 1 Re-enabling shared library breakpoint 2 Re-enabling shared library breakpoint 3 Re-enabling shared library breakpoint 4 Re-enabling shared library breakpoint 5 (gdb) continue warning: Unable to read symbols for ""/Users/alex/iPhone Projects/AppName/build/Debug-iphoneos"/AppName.app/AppName" (file not found). dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName Expected in: /System/Library/Frameworks/Foundation.framework/Foundation (gdb)

    Read the article

  • Using hg repository as web site

    - by Tex
    This is somewhat related to my security question here. Is it a bad idea to use an hg / mercurial repository for a live website? If so, why? Furthermore, we have dev, test and production installations of our website, like dev.example.com, test.example.com and www.example.com. If it's a bad idea to use a repository for a live/production website, would it be OK to use an hg repository for the dev and test sites? I'm also concerned about ease of deployment. We have technical and less technical co-workers who will be working with the site. The technical guys (software engineers) won't have any problem working with the command line or TortoiseHG. I'm more concerned about the less technical guys (web designers). They won't be comfortable working on the command line, and may even find TortoiseHG daunting. These guys mostly upload .css files and images to the server. I'd like for these files (at least the .css files) to be under version control, but I want this to be as transparent as possible for the non technical guys. What's the best way to achieve this? Edit: Our 'site' is actually a multi-site CMS setup with a main repository and several subrepositories. Mock-up of the repository structure: /root [main repository containing core files and subrepositories] /modules [modules subrepository] /sites/global [subrepository for global .css and .php files] /sites/site1 [site1 subrepository] ... /sites/siteN [siteN subrepository] Software engineers would work in the root, modules and sites/global repositories. Less technical guys (web designers) would work only in the site1 ... siteN subrepositories.

    Read the article

  • Getting svn: E170000: Unrecognized URL scheme for my custom Svn Gradle plugin

    - by Ip Doh
    I wrote a custom Gradle plugin in Groovy to do basic SVN tasks like checkout, clean, tag etc. The Groovy class calls the svn command-line client to do these operations. It works fine when I run it on my Windows system, but the same plugin gives the following error when I run it on a Linux system (CentOS):
    svn: E170000: Unrecognized URL scheme for '%22https://source.mycompany.net/svn/MyProject/trunk%22'
    I am able to make the same calls to the command-line client from the command prompt or a shell script without any issues, so what is the difference? Here is my code sample:
    String command = String.format("svn co -r %d --non-interactive --trust-server-cert -- username %s --password %s --depth infinity \"%s\" \"%s\"",
        getRevision(), getUserName(), getUserPassword(), getSrcUrl(), getDir());
    Process svnProcess = Runtime.getRuntime().exec(command);
    BufferedReader stdInput = new BufferedReader(new InputStreamReader(svnProcess.getInputStream()));
    BufferedReader stdError = new BufferedReader(new InputStreamReader(svnProcess.getErrorStream()));
    String statusOutputLine = ""
    while ((statusOutputLine = stdInput.readLine()) != null) {
        logger.quiet(" " + statusOutputLine);
    }
    while ((statusOutputLine = stdError.readLine()) != null) {
        logger.error(statusOutputLine)
        throw new Exception(statusOutputLine)
    }
    logger.quiet("Successfully Checked out the work space")
    I do have Neon installed on the system:
    -bash-4.1$ svn --version
    svn, version 1.6.11 (r934486)
    compiled Jun 25 2011, 11:30:15
    Copyright (C) 2000-2009 CollabNet.
    Subversion is open source software, see http://subversion.tigris.org/
    This product includes software developed by CollabNet (http://www.Collab.Net/).
    The following repository access (RA) modules are available:
    ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
      handles 'http' scheme
      handles 'https' scheme
    ra_svn : Module for accessing a repository using the svn network protocol.
      with Cyrus SASL authentication
      handles 'svn' scheme
    ra_local : Module for accessing a repository on local disk.
      handles 'file' scheme

    Read the article

  • Skip makefile dependency generation for certain targets (e.g. `clean`)

    - by Shtééf
    I have several C and C++ projects that all follow a basic structure I've been using for a while now. My source files go in src/*.c, intermediate files in obj/*.[do], and the actual executable in the top-level directory. My makefiles follow roughly this template:
    # The final executable
    TARGET := something

    # Source files (without src/)
    INPUTS := foo.c bar.c baz.c

    # OBJECTS will contain: obj/foo.o obj/bar.o obj/baz.o
    OBJECTS := $(INPUTS:%.cpp=obj/%.o)

    # DEPFILES will contain: obj/foo.d obj/bar.d obj/baz.d
    DEPFILES := $(OBJECTS:%.o=%.d)

    all: $(TARGET)

    obj/%.o: src/%.cpp
            $(CC) $(CFLAGS) -c -o $@ $<

    obj/%.d: src/%.cpp
            $(CC) $(CFLAGS) -M -MF $@ -MT $(@:%.d=%.o) $<

    $(TARGET): $(OBJECTS)
            $(LD) $(LDFLAGS) -o $@ $(OBJECTS)

    .PHONY: clean
    clean:
            -rm -f $(OBJECTS) $(DEPFILES) $(RPOFILES) $(TARGET)

    -include $(DEPFILES)
    Now I'm at the point where I'm packaging this for a Debian system. I'm using debuild to build the Debian source package, and pbuilder to build the binary package. The debuild step only has to execute the clean target, but even this causes the dependency files to be generated and included. In short, my question is really: can I somehow prevent make from generating dependencies when all I want is to run the clean target?

    Read the article

  • Internet Explorer cannot 'fully' load ActiveX Control

    - by K Browne
    Context: I am migrating an installer for an ActiveX control from per-machine to per-user. I did this by making the installer write to HKCU\Software\Classes instead of HKLM\Software\Classes.
    Problem: On my machine (Windows 7 with UAC enabled), the ActiveX control loads successfully. On the other Windows 7 test machines (one with UAC enabled, one with UAC disabled), the control only 'partially' loads.
    What does partially mean? When a user visits a page with the ActiveX control, Internet Explorer displays a warning message in a yellow bar at the top of the window. If you click the 'Run add-on' button in the bar, the control becomes visible and begins to run, but JavaScript code that tries to access properties of the control returns the error: Library not registered.
    Differences between machines: On the dev machine, reads from HKCR\CLSID\<GUID> succeed, while on the test machines these reads fail. Reads from HKCU succeed on both dev and test machines. Reads from HKLM fail on both test and dev machines. (I collected reads using Sysinternals Process Monitor.) Strangely, the keys that Internet Explorer fails to read are clearly visible if I use regedit to view HKCR\CLSID\<GUID> on the test machines.
    Questions: What can I do to get the per-user control to load on the test machines? What could cause this difference between the dev machine and the test machines? Why can I see the key in HKCR with RegEdit but Internet Explorer cannot? Any help is appreciated. Thank you.
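    One way to narrow down the "works on the dev box, not on the test boxes" gap is to check exactly which hives and registry views actually contain the CLSID on each machine, since HKCR is the merge of HKLM\Software\Classes and HKCU\Software\Classes and the 32-bit and 64-bit views differ. The sketch below is hypothetical: the GUID is a placeholder and RegistryView requires .NET 4.
    using System;
    using Microsoft.Win32;

    // Minimal sketch: report where (if anywhere) a CLSID is registered, across both
    // hives and both registry views.
    class ClsidProbe
    {
        const string Clsid = "{00000000-0000-0000-0000-000000000000}"; // placeholder GUID

        static void Main()
        {
            Probe(RegistryHive.CurrentUser, RegistryView.Registry32);
            Probe(RegistryHive.CurrentUser, RegistryView.Registry64);
            Probe(RegistryHive.LocalMachine, RegistryView.Registry32);
            Probe(RegistryHive.LocalMachine, RegistryView.Registry64);
        }

        static void Probe(RegistryHive hive, RegistryView view)
        {
            using (RegistryKey root = RegistryKey.OpenBaseKey(hive, view))
            using (RegistryKey key = root.OpenSubKey(@"Software\Classes\CLSID\" + Clsid))
            {
                Console.WriteLine("{0} ({1}): {2}", hive, view,
                    key == null ? "not registered" : "registered, default value = " + key.GetValue(null));
            }
        }
    }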

    Read the article

  • How to know if a graphics card provides hardware rendering for wpf

    - by happyclicker
    I have to run a WPF app in an environment made up of identical Dell PCs with an Intel GMA 3000 graphics chip (onboard, Q963/Q965). The app renders only with software rendering (the RenderCapability.Tier property reports rendering tier 0, and I also see this with Perforator). DirectX 9c is installed on all of these machines, and DXDiag states on many, but not all, of them that Direct3D and DirectDraw acceleration are activated. I also checked the registry in case the setup of these machines had disabled WPF hardware rendering, but that is not the case. On one machine I also updated the video driver and DirectX, with no success. I found a lot of resources saying that DirectX must be installed and active so that WPF uses DirectX hardware rendering instead of its own software renderer. But on the machines above, DX9c is installed and there is still no hardware rendering. Could it be that WPF uses DirectX-capable graphics cards but communicates with the graphics card directly and not through DirectX? How can I find out whether a specific graphics chip supports hardware rendering for WPF or not? The statement that the graphics card must support DX 9c does not seem to be the only condition. The second question is: if WPF renders through DirectX, is this done through Direct3D or DirectDraw? Is there any good documentation on this topic?
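    As a diagnostic aid, the tier WPF actually picked can be logged from inside the app itself rather than inferred from DXDiag. A minimal sketch follows; the helper name is made up, and Hook() would be called once at startup.
    using System.Diagnostics;
    using System.Windows.Media;

    // Minimal sketch: log WPF's own view of the machine's rendering capability.
    // The tier is in the high word of RenderCapability.Tier:
    //   0 = no hardware acceleration, 1 = partial, 2 = full.
    // It can change at runtime (e.g. remote desktop, driver resets), hence TierChanged.
    public static class RenderTierLogger
    {
        public static void Hook()
        {
            Log();
            RenderCapability.TierChanged += delegate { Log(); };
        }

        private static void Log()
        {
            int tier = RenderCapability.Tier >> 16;
            Trace.WriteLine("WPF render tier on this machine: " + tier);
        }
    }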

    Read the article

  • gwt maven war plugin configuration problem

    - by Din
    I am developing a gwt application in maven. In this I am using maven war plugin. Everything works fine. When I give mvn install command it builds abc.war file in target folder. But it is not copying compiled javascript files ("module1" and "module2" directories present in target) to war directory. I want to get newly compiled javascript files in war directory. How to achieve this? pom.xml file <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>example</groupId> <artifactId>example</artifactId> <packaging>war</packaging> <version>12</version> <name>gwt-maven-archetype-project</name> <properties> <!-- convenience to define GWT version in one place --> <gwt.version>2.1.0</gwt.version> <noServer>false</noServer> <skipTest>true</skipTest> <gwt.localWorkers>1</gwt.localWorkers> <JAVA_HOME>C:\Program Files\Java\jdk1.6.0_22</JAVA_HOME> <!-- convenience to define Spring version in one place --> </properties> <dependencies> <!-- Required dependencies--> </dependencies> <build> <finalName>abc</finalName> <outputDirectory>war/WEB-INF/classes</outputDirectory> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <verbose>true</verbose> <executable>${JAVA_HOME}\bin\java.exe</executable> <compilerVersion>1.6</compilerVersion> <source>1.6</source> <target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>gwt-maven-plugin</artifactId> <version>2.1.0</version> <executions> <execution> <goals> <goal>compile</goal> <goal>generateAsync</goal> <goal>mergewebxml</goal> <goal>test</goal> </goals> </execution> </executions> <configuration> <servicePattern>**/client/**/*Service.java</servicePattern> <noServer>${noServer}</noServer> <noserver>${noServer}</noserver> <modules> <module>com.abc.example.Module1</module> <module>com.abc.example.Module2</module> </modules> <runTarget>com.abc.example.Module1/module1.jsp</runTarget> <port>8080</port> <extraJvmArgs>-Xmx1024m -Xms1024m -Xss1024k -Dgwt.jjs.permutationWorkerFactory=com.google.gwt.dev.ThreadedPermutationWorkerFactory</extraJvmArgs> <hostedWebapp>war</hostedWebapp> <warSourceDirectory>${basedir}/war</warSourceDirectory> <webXml>${basedir}/war/WEB-INF/web.xml</webXml> </configuration> </plugin> <plugin> <artifactId>maven-antrun-plugin</artifactId> <executions> <execution> <phase>process-classes</phase> <configuration> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>2.1-beta-1</version> <configuration> <warSourceDirectory>${basedir}/war</warSourceDirectory> <webXml>${basedir}/war/WEB-INF/web.xml</webXml> <!--<webXml>src/main/webapp/WEB-INF/web.xml</webXml>--> <containerConfigXML>war/WEB-INF/classes/context/context.xml</containerConfigXML> <warSourceExcludes>.gwt-tmp/**</warSourceExcludes> </configuration> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> <executions> <execution> <goals> <goal>clean</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>2.4.2</version> <configuration> <argLine>-Xmx1024m</argLine> 
<skipTests>${skipTest}</skipTests> </configuration> </plugin> <plugin> <artifactId>maven-clean-plugin</artifactId> <version>2.2</version> <configuration> <filesets> <fileset> <directory>war/module1</directory> </fileset> <fileset> <directory>war/module2</directory> </fileset> <fileset> <directory>war/WEB-INF/lib</directory> </fileset> </filesets> </configuration> </plugin> </plugins> <resources> <resource> <directory>src/main/resources</directory> <excludes> <exclude>**/public/resources/**</exclude> <exclude>**/public/images/**</exclude> </excludes> <filtering>true</filtering> </resource> </resources> <filters> <filter>src/main/resources/build/build-${env}.properties</filter> </filters> </build> <profiles> <profile> <activation> <activeByDefault>true</activeByDefault> </activation> <id>dev</id> <properties> <env>dev</env> </properties> </profile> </profiles> <reporting> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> </plugin> </plugins> </reporting>

    Read the article

  • Testing a wide variety of computers with a small company

    - by Tom the Junglist
    Hello everyone, I work for a small dotcom which will soon be launching a reasonably-complicated Windows program. We have uncovered a number of "WTF?" type scenarios that have turned up as the program has been passed around to the various not-technical-types that we've been unable to replicate. One of the biggest problems we're facing is that of testing: there are a total of three programmers -- only one working on this particular project, me -- no testers, and a handful of assorted other staff (sales, etc). We are also geographically isolated. The "testing lab" consists of a handful of VMWare and VPC images running sort-of fresh installs of Windows XP and Vista, which runs on my personal computer. The non-technical types try to be helpful when problems arise, we have trained them on how to most effectively report problems, and the software itself sports a wide array of diagnostic features, but since they aren't computer nerds like us their reporting is only so useful, and arranging remote control sessions to dig into the guts of their computers is time-consuming. I am looking for resources that allow us to amplify our testing abilities without having to put together an actual lab and hire beta testers. My boss mentioned rental VPS services and asked me to look in to them, however they are still largely very much self-service and I was wondering if there were any better ways. How have you, or any other companies in a similar situation handled this sort of thing? EDIT: According to the lingo, our goal here is to expand our systems testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions of beefing up our unit/integration testing are going to help very much as we are consistently hitting walls at the systems testing phase. Has anyone attempted to do this kind of software testing on a cloud-type service like EC2? Tom

    Read the article

  • moving audio over a local network using GStreamer

    - by James Turner
    I need to move realtime audio between two Linux machines, which are both running custom software (of mine) which builds on top of Gstreamer. (The software already has other communication between the machines, over a separate TCP-based protocol - I mention this in case having reliable out-of-band data makes a difference to the solution). The audio input will be a microphone / line-in on the sending machine, and normal audio output as the sink on the destination; alsasrc and alsasink are the most likely, though for testing I have been using the audiotestsrc instead of a real microphone. GStreamer offers a multitude of ways to move data round over networks - RTP, RTSP, GDP payloading, UDP and TCP servers, clients and sockets, and so on. There's also many examples on the web of streaming both audio and video - but none of them seem to work for me, in practice; either the destination pipeline fails to negotiate caps, or I hear a single packet and then the pipeline stalls, or the destination pipeline bails out immediately with no data available. In all cases, I'm testing on the command-line just gst-launch. No compression of the audio data is required - raw audio, or trivial WAV, uLaw or aLaw encoding is fine; what's more important is low-ish latency.

    Read the article

  • Cannot install Visual Editor Plugin on Eclipse

    - by lyuba
    I am trying to follow the instructions here to install the Visual Editor plugin for Eclipse: http://wiki.eclipse.org/VE/Update Both the online and offline installations fail with the error "Cannot complete request. Generating details." Here is the complete log:
    Cannot complete the install because of a conflicting dependency.
    Software being installed: Java EMF Model 1.4.0.v20090826-1446-7H-FPbAcggQleH8hJifHfUd (org.eclipse.jem.feature.group 1.4.0.v20090826-1446-7H-FPbAcggQleH8hJifHfUd)
    Software currently installed: Eclipse IDE for Java EE Developers 1.2.2.20100217-2310 (epp.package.jee 1.2.2.20100217-2310)
    Only one of the following can be installed at once:
    Java EMF Model BeanInfo (Introspection) Support 2.0.300.v200905030615 (org.eclipse.jem.beaninfo 2.0.300.v200905030615)
    Java EMF Model BeanInfo (Introspection) Support 2.0.300.R3_1_maintenance (org.eclipse.jem.beaninfo 2.0.300.R3_1_maintenance)
    Cannot satisfy dependency:
    From: Eclipse IDE for Java EE Developers 1.2.2.20100217-2310 (epp.package.jee 1.2.2.20100217-2310)
    To: org.eclipse.epp.package.jee.feature.feature.group [1.2.2.20100217-2310]
    Cannot satisfy dependency:
    From: Java EE IDE Feature 1.2.2.20100217-2310 (org.eclipse.epp.package.jee.feature.feature.group 1.2.2.20100217-2310)
    To: org.eclipse.jst.web_ui.feature.feature.group 0.0.0
    Cannot satisfy dependency:
    From: Java EMF Model 1.4.0.v20090826-1446-7H-FPbAcggQleH8hJifHfUd (org.eclipse.jem.feature.group 1.4.0.v20090826-1446-7H-FPbAcggQleH8hJifHfUd)
    To: org.eclipse.jem.beaninfo [2.0.300.R3_1_maintenance]
    Cannot satisfy dependency:
    From: JST Web Core 3.1.1.v200908121609-7S7CFyvFIhIehVidwyfk0m (org.eclipse.jst.web_core.feature.feature.group 3.1.1.v200908121609-7S7CFyvFIhIehVidwyfk0m)
    To: org.eclipse.jem.beaninfo [2.0.300.v200905030615]
    Cannot satisfy dependency:
    From: JST Web Core 3.1.1.v200908121609-7S7CG-dFIhIeq7kV6qxaLD (org.eclipse.jst.web_core.feature.feature.group 3.1.1.v200908121609-7S7CG-dFIhIeq7kV6qxaLD)
    To: org.eclipse.jem.beaninfo [2.0.300.v200905030615]
    Cannot satisfy dependency:
    From: JST Web UI 3.1.1.v200908121609-7E77FBfDlwYa_9sdy2q77doi14gl (org.eclipse.jst.web_ui.feature.feature.group 3.1.1.v200908121609-7E77FBfDlwYa_9sdy2q77doi14gl)
    To: org.eclipse.jst.web_core.feature.feature.group [3.1.1.v200908121609-7S7CFyvFIhIehVidwyfk0m]
    Cannot satisfy dependency:
    From: JST Web UI 3.1.1.v200908121609-7E77FBiDlwYcICNdz-5z-9PGqZCy (org.eclipse.jst.web_ui.feature.feature.group 3.1.1.v200908121609-7E77FBiDlwYcICNdz-5z-9PGqZCy)
    To: org.eclipse.jst.web_core.feature.feature.group [3.1.1.v200908121609-7S7CG-dFIhIeq7kV6qxaLD]
    Has anybody encountered something like this? I appreciate your ideas!

    Read the article

  • Maven String Replace of Text Web Resources

    - by Jaco van Niekerk
    I have a Maven web application with text files in src/main/webapp/textfilesdir As I understand it, during the package phase this textfilesdir directory will be copied into the target/project-1.0-SNAPSHOT directory, which is then zipped up into a target/project-1.0-SNAPSHOT.war Problem Now, I need to do a string replacement on the contents of the text files in target/project-1.0-SNAPSHOT/textfilesdir. This must then be done after the textfilesdir is copied into target/project-1.0-SNAPSHOT, but prior to the target/project-1.0-SNAPSHOT.war file being created. I believe this is all done during the package phase. How can a plugin (potentially maven-antrun-plugin), plug into the package phase to do this. The text files don't contain properties, like ${property-name} to filter on. String replacement is likely the only option. Options Modify the text files after the copy into target/project-1.0-SNAPSHOT directory, yet prior to the WAR creation. After packaging, extract the text files from WAR, modify them, and add them back into the WAR. I'm thinking there is another option here I'm missing. Thoughts anyone?

    Read the article

  • Deployment Setup (.Net) - Search target machine -> Registry search (64 bit)

    - by Joonas Kirsebom
    I have a Windows installer project which installs some software (a WinForms app, a service, an MCE add-in). During the installation I need to search the machine for a registry key. This is done with the "Launch Condition" / "Add Registry Search" feature of the deployment project. I have filled out all the properties correctly and checked against the registry that the value can actually be found. The problem is that the "Registry Search" searches the x86 part of the registry (HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\...) although my system is x64 and the deployment setup is also set to x64. Does anyone know how to force the "Registry Search" to search the x64 registry? Or know of a workaround? The weird thing is that the Registry setting in the deployment setup writes to the right (x64) registry. My guess is that the "Registry Search" component was only built for the x86 architecture and therefore cannot read the right registry. I found this article from Microsoft, so it seems that they know about this problem: https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=110105&wa=wsignin1.0#details My system is: Windows 7 64-bit, Visual Studio 2008
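    If a managed custom action is an acceptable fallback, the 64-bit registry view can be read explicitly in code. This is only a sketch: it needs .NET 4 for RegistryView (on .NET 3.5 the equivalent is P/Invoking RegOpenKeyEx with the KEY_WOW64_64KEY flag), and the key path and value name below are placeholders, not the ones from the project above.
    using System;
    using Microsoft.Win32;

    // Minimal sketch: open HKLM through the 64-bit view regardless of the caller's bitness.
    class RegistrySearch64
    {
        static void Main()
        {
            using (RegistryKey hklm = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry64))
            using (RegistryKey key = hklm.OpenSubKey(@"SOFTWARE\SomeVendor\SomeProduct"))
            {
                object value = (key != null) ? key.GetValue("InstallDir") : null;
                Console.WriteLine(value != null ? "Found: " + value : "Not found in the 64-bit view");
            }
        }
    }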

    Read the article

  • C# COM Cross Thread problem

    - by user364676
    Hi, we're developing software to control a scientific measuring device. It provides a COM interface that defines several functions to set measurement parameters and fires an event when it has measured data. In order to test our software, I'm implementing a simulation of that device. The COM object runs a loop which periodically fires the event; another loop in the client app should set up the COM simulator using the given functions. I created a class for measurement parameters which is instantiated when setting up a new measurement.
    // COM-Object
    public class MeasurementParams
    {
        public double Param1;
        public double Param2;
    }

    public class COM_Sim : ICOMDevice
    {
        public MeasurementParams newMeasurement;
        IClient client;

        public int NewMeasurement()
        {
            newMeasurement = new MeasurementParams();
        }

        public int SetParam1(double val)
        {
            // why is newMeasurement null when method is called from client loop
            newMeasurement.Param1 = val;
        }

        void loop()
        {
            while (true)
            {
                // fire event
                client.HandleEvent();
            }
        }
    }

    public class Client : IClient
    {
        ICOMDevice server;

        public int HandleEvent()
        {
            // handle this event
            server.NewMeasurement();
            server.SetParam1(0.0);
        }

        void loop()
        {
            while (true)
            {
                // do some stuff...
                server.NewMeasurement();
                server.SetParam1(0.0);
            }
        }
    }
    Both loops run in independent threads. When server.NewMeasurement() is called, the object on the server is set to a new instance, but in the next function the object is null again. Doing the same from the server-event handler works perfectly, because the method runs in the server's thread. How do I make it work from the client thread as well? As the client is meant to work with the real device, I cannot modify the interfaces given by the manufacturer. I also need to set up measurements independently of the event handler, which does not fire at regular intervals. I assume this problem is related to multithreaded COM behaviour, but I have found nothing on this topic.
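    Before digging into COM apartment rules, it may be worth confirming that both code paths are really talking to the same server object; a field that is "null again" on the next call can simply mean a second instance was created. The following is a hypothetical, self-contained diagnostic (names trimmed from the snippet above) that logs the instance id and thread id for every call:
    using System;
    using System.Threading;

    // Minimal sketch: if the client loop reports a different instance number than the
    // event handler, the problem is two separate objects rather than threading.
    public class MeasurementParams { public double Param1; public double Param2; }

    public class ComSimDiagnostics
    {
        private static int instanceCount;
        private readonly int instanceId = Interlocked.Increment(ref instanceCount);
        private MeasurementParams newMeasurement;

        public void NewMeasurement()
        {
            Log("NewMeasurement");
            newMeasurement = new MeasurementParams();
        }

        public void SetParam1(double val)
        {
            Log("SetParam1 (newMeasurement null? " + (newMeasurement == null) + ")");
            if (newMeasurement == null) return;   // guard instead of throwing
            newMeasurement.Param1 = val;
        }

        private void Log(string call)
        {
            Console.WriteLine("{0} on instance #{1}, thread {2}",
                call, instanceId, Thread.CurrentThread.ManagedThreadId);
        }
    }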

    Read the article

  • Stack Overflow when debugging application in iPhone simulator

    - by mjdth
    I'm getting this every time I attempt to debug my app in the simulator: [Session started at 2010-05-11 16:16:52 -0500.] GNU gdb 6.3.50-20050815 (Apple version gdb-1467) (Wed Apr 21 06:57:21 UTC 2010) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "x86_64-apple-darwin".sharedlibrary apply-load-rules all Attaching to process 51573. Program received signal: “EXC_BAD_ACCESS”. Data Formatters temporarily unavailable, will re-try after a 'continue'. (Cannot call into the loader at present, it is locked.) I've looked around and found a few similar cases, but they all seem to be related to a missing file and an extra necessary build phase. I'm getting no notification of a missing file here so I'm not sure where to start to fix this and get the app running again. Thanks for any insight!

    Read the article

  • I have two choices of Master's classes this fall. Which is the most useful?

    - by ahplummer
    (For background purposes and context): I am a Software Engineer, and manage other Software Engineers currently. I kind of wear two hats right now: one of a programmer, and one as a 'team lead'. In this regard, I've started going back to school to get my Master's degree with an emphasis in Computer Science. I already have a Bachelor's in Computer Science, and have been working in the field for about 13 years. Our primary development environment is a Windows environment, writing in .NET, Delphi, and SQL Server. Choice #1: CST 798 DATA VISUALIZATION Course Description: Basically, this is a course on the "Processing" language: http://processing.org/ Choice #2: CST 711 INFORMATICS Course Description: (From catalog): Informatics is the science of the use and processing of data, information, and knowledge. This course covers a variety of applied issues from information technology, information management at a variety of levels, ranging from simple data entry, to the creation, design and implementation of new information systems, to the development of models. Topics include basic information representation, processing, searching, and organization, evaluation and analysis of information, Internet-based information access tools, ethics and economics of information sharing.

    Read the article

  • Development/runtime Licensing mechanism for a C# class library?

    - by Darryl
    I'm developing a .NET class library (a data provider) and I'm starting to think about how I would handle licensing the library to prospective purchasers. By licensing, I mean the mechanics of trying to prevent my library from being used by those who haven't purchased it, not the software license (i.e., Apache, GNU, etc.). I've never dealt with licensing, and in the past I've always developed apps, not libraries. I don't want to make things difficult for my customers; I know it is not possible to make it ironclad. I just want some mechanism that gives me decent protection without making the customer jump through hoops or gnash their teeth. I think the mechanism would check for a valid license when the class is being used in development mode, and not in runtime mode (when the customer's software is released to their customers). I think libraries are typically sold per developer, but I'm not sure how that could be accomplished without making the mechanism odious for my customers; maybe that gets left to the honor system. I Googled this and found many approaches. Ideally, I'd like to do something that is generally accepted and common, the "right" way class libraries are licensed, if that exists, rather than making my customers deal with yet another license mechanism. A firm push in the right direction would be greatly appreciated!
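    For reference, .NET does ship a stock hook for exactly this design-time/runtime split: the LicenseProvider/LicenseManager pair in System.ComponentModel. The sketch below uses the built-in LicFileLicenseProvider, which (as far as I know) looks for a "Full.Type.Name.lic" file at design time and allows the granted licence to be embedded into the customer's compiled application for runtime use; a real product would subclass LicenseProvider with its own key check. The class name is a placeholder.
    using System;
    using System.ComponentModel;

    // Minimal sketch of the stock .NET component-licensing mechanism.
    [LicenseProvider(typeof(LicFileLicenseProvider))]
    public class DataProviderClient : IDisposable
    {
        private readonly License license;

        public DataProviderClient()
        {
            // Throws a LicenseException if a valid licence cannot be granted
            // (typically at design/build time on the developer's machine).
            license = LicenseManager.Validate(typeof(DataProviderClient), this);
        }

        public void Dispose()
        {
            if (license != null)
            {
                license.Dispose();
            }
        }
    }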

    Read the article
