Search Results

Search found 10285 results on 412 pages for 'enterprise repository'.

  • iOS - Performance Issues with SVProgressHUD

    - by ScottOBot
    I have a section of code that is producing poor performance and not working as expected. It utilizes SVProgressHUD from the following GitHub repository: https://github.com/samvermette/SVProgressHUD. I have an area of code where I need to use [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]. I want to have a progress HUD displayed before the synchronous request and then dismissed after the request has finished. Here is the code in question:

        //Display the progress HUD
        [SVProgressHUD showWithStatus:@"Signing Up..." maskType:SVProgressHUDMaskTypeClear];
        NSError* error;
        NSURLResponse *response = nil;
        [self setUserCredentials];

        // create json object for a users session
        NSDictionary* session = [NSDictionary dictionaryWithObjectsAndKeys:
                                 firstName, @"first_name",
                                 lastName, @"last_name",
                                 email, @"email",
                                 password, @"password", nil];
        NSData *jsonSession = [NSJSONSerialization dataWithJSONObject:session options:NSJSONWritingPrettyPrinted error:&error];
        NSString *url = [NSString stringWithFormat:@"%@api/v1/users.json", CoffeeURL];
        NSURL *URL = [NSURL URLWithString:url];
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:URL
                                                                cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                            timeoutInterval:30.0];
        [request setHTTPMethod:@"POST"];
        [request setValue:@"application/json" forHTTPHeaderField:@"Accept"];
        [request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
        [request setValue:[NSString stringWithFormat:@"%d", [jsonSession length]] forHTTPHeaderField:@"Content-Length"];
        [request setHTTPBody:jsonSession];

        NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
        NSString *dataString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
        NSDictionary *JSONResponse = [NSJSONSerialization JSONObjectWithData:[dataString dataUsingEncoding:NSUTF8StringEncoding]
                                                                     options:NSJSONReadingMutableContainers
                                                                       error:&error];
        NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse *)response;
        NSInteger statusCode = httpResponse.statusCode;
        NSLog(@"Status: %d", statusCode);

        if (JSONResponse != nil && statusCode == 200) {
            //Dismiss progress HUD here
            [SVProgressHUD dismiss];
            return YES;
        } else {
            [SVProgressHUD dismiss];
            return NO;
        }

    For some reason the synchronous request blocks the HUD from being displayed. It displays immediately after the synchronous request happens; unfortunately, this is also when it is dismissed. The behaviour displayed to the user is that the app hangs, waits for the synchronous request to finish, quickly flashes the HUD and then becomes responsive again. How can I fix this issue? I want the SVProgressHUD to be displayed during this hang time. How can I do this?
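    A synchronous request issued on the main thread blocks the run loop, so the HUD never gets a chance to draw before the request completes. The usual fix is to run the blocking call on a background queue with GCD and hop back to the main queue for all UI work. A minimal sketch of that shape (not the original poster's code; the surrounding method would need to report its result through a callback rather than return YES/NO):

        [SVProgressHUD showWithStatus:@"Signing Up..." maskType:SVProgressHUDMaskTypeClear];
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            NSURLResponse *response = nil;
            NSError *error = nil;
            // the blocking call now runs off the main thread
            NSData *data = [NSURLConnection sendSynchronousRequest:request
                                                 returningResponse:&response
                                                             error:&error];
            dispatch_async(dispatch_get_main_queue(), ^{
                // UIKit and SVProgressHUD calls belong on the main queue
                [SVProgressHUD dismiss];
                // parse data / inspect response and update the UI here
            });
        });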

  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way. I have an Articles table:

        CREATE TABLE [dbo].[Articles](
            [server_ref] [int] NOT NULL,
            [article_ref] [int] NOT NULL,
            [article_title] [varchar](400) NOT NULL,
            [category_ref] [int] NOT NULL,
            [size] [bigint] NOT NULL
        )

    Data (comma delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing: indexes are disabled on the Articles table. For each dumped text file, data is BULK copied to a temporary table; the temporary table is updated; old data for the server is dropped from the Articles table; the temporary table data is copied to the Articles table; the temporary table is dropped. Once this process is complete for all servers, the indexes are built and the new database is copied to a web server. I am reasonably happy with this process but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct? The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better. i.e.

        SELECT * FROM Articles WHERE server_ref=33 AND article_title LIKE '%criteria%'

    has been satisfactory, but I want to improve the speed of searching. Obviously the "LIKE" is my problem here. Suggestions?

        SELECT * FROM Articles WHERE article_title LIKE '%criteria%'

    is horrendous. Partitioning is a feature of SQL Server Enterprise but $$$, which is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred for the import process (drop data, insert data) and building indexes? Will the database grow by a huge amount? The database currently stands at 200 GB and will grow. Copying this across the network is not ideal but it works. I am putting thought into changing the hardware structure of the system. The thought process of having an import server and a web server is so that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) can present reports. Maybe reducing the system down to one server would work to skip the copying across the network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without for importing new data. The databases would swap daily. Thoughts? This is a fantastic system, and believe it or not there is some method to my madness by giving it a big shake up. UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.
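    On the LIKE '%criteria%' point specifically, PostgreSQL has two index types SQL Server's B-trees can't match here. The pg_trgm contrib module indexes trigrams (from PostgreSQL 9.1 onwards a GIN trigram index is used directly by LIKE/ILIKE with leading wildcards; earlier versions load the module via its contrib SQL script), and built-in full-text search works if whole-word matching is acceptable. A hedged sketch, with the table and column names from the question:

        -- trigram index (pg_trgm contrib module)
        CREATE EXTENSION pg_trgm;
        CREATE INDEX articles_title_trgm_idx
            ON articles USING gin (article_title gin_trgm_ops);
        SELECT * FROM articles WHERE article_title LIKE '%criteria%';

        -- full-text alternative, if word-level matching is acceptable
        CREATE INDEX articles_title_fts_idx
            ON articles USING gin (to_tsvector('english', article_title));
        SELECT * FROM articles
         WHERE to_tsvector('english', article_title) @@ plainto_tsquery('criteria');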

  • Is One Tool or a Suite of Tools Better for Scrum?

    - by Rob Wells
    G'day,

    Edit: We've been using Scrum very successfully for several years on several projects of varying sizes. In fact, our team developed the successful iPlayer project for the BBC using a classical Scrum approach. After using various combinations of tools, some high-tech, some low-tech, across these projects we now wish to try adopting a suitable tool suite. Our manager is to some extent attempting to force the adoption of a single suite of tools for Scrum.

    I've looked at the SO question "Best Scrum tools" and most people seem to recommend either:

    - a suite of low-tech solutions, e.g. whiteboards, post-its, index cards, etc., or
    - a monolithic tool that tries to satisfy as much as possible of the process, e.g. Agilo, Mingle, ScrumWorks, Target Process, etc.

    Our team is currently evaluating several different Scrum tools. However, we are looking at selecting a single, monolithic tool, e.g. Agilo. All of the "one-stop" solutions have their strengths and weaknesses, with the serious enterprise-type solutions being the best sort of fit. But all have some shortcomings. After reading the paper "Peer Code Review: An Agile Process" over at SmartBear I started wondering if we were trying to force adoption of a tool on a "best fit" basis.

    I think you can take a couple of reference artefacts of the Scrum development process, say user stories, epics and themes, and the code base, which must use a well-known SCM, e.g. SVN, Hg, etc. Then, if we take those as the common reference points for the tools employed, we would be able to use a group of tools to handle the different aspects of the Scrum process rather than try forcing a fit of a single tool, which is a bit like forcing a square peg into a round hole. In this way, providing you've agreed your common reference points, you can use several tools, each performing their role better than could be done by a single component in a monolithic tool suite.

    Is this a more sensible approach? Are the two reference points I mentioned above suitable, or is there a better choice of points where the tools would meet?

    cheers,

  • Custom validator not invoked when using Validation Application Block through configuration

    - by Chris
    I have set up a ruleset in my configuration file which has two validators, one of which is a built-in NotNullValidator, the other of which is a custom validator. The problem is that I see the NotNullValidator hit, but not my custom validator. The custom validator is being used to validate an Entity Framework entity object. I have used the debugger to confirm the NotNull is hit (I forced a failure condition so I saw it set an invalid result), but it never steps into the custom one. I am using MVC as the web app, so I defined the ruleset in a config file at that layer, but my custom validator is defined in another project. However, I wouldn't have thought that to be a problem because when I use the Enterprise Library Configuration tool inside Visual Studio 2008 it is able to set the type properly for the custom validator. As well, I believe the custom validator is fine as it builds ok, and the config tool can reference it properly. Does anybody have any ideas what the problem could be, or even what to do/try to debug further?

    Here is a stripped down version of my custom validator:

        [ConfigurationElementType(typeof(CustomValidatorData))]
        public sealed class UserAccountValidator : Validator
        {
            public UserAccountValidator(NameValueCollection attributes)
                : base(string.Empty, "User Account")
            {
            }

            protected override string DefaultMessageTemplate
            {
                get { throw new NotImplementedException(); }
            }

            protected override void DoValidate(object objectToValidate, object currentTarget, string key, ValidationResults results)
            {
                if (!currentTarget.GetType().Equals(typeof(UserAccount)))
                {
                    throw new Exception();
                }
                UserAccount userAccountToValidate = (UserAccount)currentTarget;
                // snipped code ...
                this.LogValidationResult(results, "The User Account is invalid", currentTarget, key);
            }
        }

    Here is the XML of my ruleset in Validation.config (the NotNull rule is only there to force a failure so I could see it getting hit, and it does):

        <validation>
          <type defaultRuleset="default" assemblyName="MyProj.Entities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="MyProj.Entities.UserAccount">
            <ruleset name="default">
              <properties>
                <property name="HashedPassword">
                  <validator negated="true" messageTemplate="" messageTemplateResourceName="" messageTemplateResourceType="" tag="" type="Microsoft.Practices.EnterpriseLibrary.Validation.Validators.NotNullValidator, Microsoft.Practices.EnterpriseLibrary.Validation, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Not Null Validator" />
                </property>
                <property name="Property">
                  <validator messageTemplate="" messageTemplateResourceName="" messageTemplateResourceType="" tag="" type="MyProj.Entities.UserAccountValidator, MyProj.Entities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Custom Validator" />
                </property>
              </properties>
            </ruleset>
          </type>
        </validation>

    And here is the stripped down version of the way I invoke the validation:

        var type = entity.GetType();
        var validator = ValidationFactory.CreateValidator(type, "default", new FileConfigurationSource("Validation.config"));
        var results = validator.Validate(entity);

    Any advice would be much appreciated! Thanks, Chris

  • Good Secure Backups Developers at Home

    - by slashmais
    What is a good, secure method to do backups, for programmers who do research & development at home and cannot afford to lose any work?

    Conditions:

    1. The backups must ALWAYS be within reasonably easy reach.
    2. Internet connection cannot be guaranteed to be always available.
    3. The solution must be either FREE or priced within reason, and subject to 2 above.

    Status Report

    This is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere):

    - BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk.
    - Storebackup is a backup utility that stores files on other disks.
    - mybackware: These scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations.
    - Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network based backup program.
    - AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable transport-independent automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location.
    - rsnapshot is a filesystem snapshot utility for making backups of local and remote systems.
    - rbme: Using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc.
    - Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync, [for] incremental archives

    Other Possibilities:

    - Using a Distributed Version Control System (DVCS) such as Git (/Easy Git), Bazaar, Mercurial answers the need to have the backup available locally.
    - Use free online storage space as a remote backup, e.g.: compress your work/backup directory and mail it to your gmail account.

    Strategies

    See crazyscot's answer
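    The rsnapshot and rbme entries both build on the same rsync trick: each day's backup is a complete-looking directory tree, but files that didn't change are hard links into the previous day's tree, so only deltas consume disk. A minimal sketch of the underlying mechanism (paths are illustrative, not from the answers):

        # /backups/2010-06-08 is yesterday's snapshot
        rsync -a --delete \
              --link-dest=/backups/2010-06-08 \
              ~/work/ /backups/2010-06-09/
        # unchanged files in 2010-06-09 are hard links, so a "full"
        # daily snapshot costs only the size of what changed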

  • Maven: install jar file during build process

    - by Venkata
    I have got a requirement as follows. I need to run an Ant build file during the Maven build process, i.e. I need to invoke build.xml from my pom.xml file. I have done that using maven-antrun-plugin. Now I need to install the Ant-build-generated jar file automatically into my local repository before Maven compiles my project source. I tried using build-helper-maven-plugin but it did not help. Either I am doing something wrong, or I am not doing it right. Please help.

    Update

    Thank you. Ant Maven tasks worked for me as well. However I am running into the following exception at the end of the build process. Any help is highly appreciated.

        org.apache.tools.ant.ExitException: Permission (java.lang.RuntimePermission exitVM) was not granted.
            at org.apache.tools.ant.types.Permissions$MySM.checkExit(Permissions.java:196)
            at java.lang.Runtime.exit(Runtime.java:99)
            at java.lang.System.exit(System.java:275)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:376)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
            at java.lang.reflect.Method.invoke(Method.java:599)
            at org.apache.tools.ant.taskdefs.ExecuteJava.run(ExecuteJava.java:217)
            at org.apache.tools.ant.taskdefs.ExecuteJava.execute(ExecuteJava.java:152)
            at org.apache.tools.ant.taskdefs.Java.run(Java.java:771)
            at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:221)
            at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:135)
            at org.apache.tools.ant.taskdefs.Java.execute(Java.java:108)
            at org.apache.maven.artifact.ant.Mvn.execute(Mvn.java:81)
            at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
            at java.lang.reflect.Method.invoke(Method.java:599)
            at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
            at org.apache.tools.ant.Task.perform(Task.java:348)
            at org.apache.tools.ant.Target.execute(Target.java:390)
            at org.apache.tools.ant.Target.performTasks(Target.java:411)
            at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
            at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
            at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
            at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
            at org.apache.tools.ant.Main.runBuild(Main.java:809)
            at org.apache.tools.ant.Main.startAnt(Main.java:217)
            at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280)
            at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109)
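    Two hedged suggestions, since the post doesn't show the pom. First, the original requirement (get the Ant-built jar into the local repository before compile) can also be met without the Ant Maven tasks by binding maven-install-plugin's install-file goal to an early lifecycle phase; coordinates and paths below are placeholders:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-install-plugin</artifactId>
          <executions>
            <execution>
              <id>install-ant-built-jar</id>
              <phase>process-resources</phase> <!-- runs before compile -->
              <goals>
                <goal>install-file</goal>
              </goals>
              <configuration>
                <file>${basedir}/ant-build/output.jar</file>
                <groupId>com.example</groupId>
                <artifactId>ant-built-lib</artifactId>
                <version>1.0</version>
                <packaging>jar</packaging>
              </configuration>
            </execution>
          </executions>
        </plugin>

    (The antrun execution that produces the jar has to run in an earlier phase, or earlier within the same phase, than this one.) Second, the exitVM permission error is what Ant reports when code launched in-process calls System.exit; since the trace runs through the Ant <mvn> task, setting fork="true" on that task, so the inner build gets its own JVM, is the usual workaround.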

  • How do gitignore exclusion rules actually work?

    - by meowsqueak
    I'm trying to solve a gitignore problem on a large directory structure, but to simplify my question I have reduced it to the following. I have the following directory structure of two files (foo, bar) in a brand new git repository (no commits so far):

        a/b/c/foo
        a/b/c/bar

    Obviously, a 'git status -u' shows:

        # Untracked files:
        ...
        #       a/b/c/bar
        #       a/b/c/foo

    What I want to do is create a .gitignore file that ignores everything inside a/b/c but does not ignore the file 'foo'. If I create a .gitignore thus:

        c/

    Then a 'git status -u' shows both foo and bar as ignored:

        # Untracked files:
        ...
        #       .gitignore

    Which is as I expect. Now if I add an exclusion rule for foo, thus:

        c/
        !foo

    According to the gitignore manpage, I'd expect this to work. But it doesn't - it still ignores foo:

        # Untracked files:
        ...
        #       .gitignore

    This doesn't work either:

        c/
        !a/b/c/foo

    Neither does this:

        c/*
        !foo

    Gives:

        # Untracked files:
        ...
        #       .gitignore
        #       a/b/c/bar
        #       a/b/c/foo

    In that case, although foo is no longer ignored, bar is also not ignored. The order of the rules in .gitignore doesn't seem to matter either. This also doesn't do what I'd expect:

        a/b/c/
        !a/b/c/foo

    That one ignores both foo and bar. One situation that does work is if I create the file a/b/c/.gitignore and put in there:

        *
        !foo

    But the problem with this is that eventually there will be other subdirectories under a/b/c and I don't want to have to put a separate .gitignore into every single one - I was hoping to create 'project-based' .gitignore files that can sit in the top directory of each project, and cover all the 'standard' subdirectory structure. This also seems to be equivalent:

        a/b/c/*
        !a/b/c/foo

    This might be the closest thing to "working" that I can achieve, but the full relative paths and explicit exceptions need to be stated, which is going to be a pain if I have a lot of files named 'foo' at different levels of the subdirectory tree. Anyway, either I don't quite understand how exclusion rules work, or they don't work at all when directories (rather than wildcards) are ignored by a rule ending in a /. Can anyone please shed some light on this? Is there a way to make gitignore use something sensible like regular expressions instead of this clumsy shell-based syntax? I'm using and observing this with git-1.6.6.1 on Cygwin/bash3 and git-1.7.1 on Ubuntu/bash3.
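    The behaviour described is documented, if easy to miss: git does not descend into directories that are themselves ignored, so once a/b/c is excluded by c/, no negation pattern can re-include anything beneath it - !foo is never even consulted. Negations only take effect when the directory's contents are ignored by a wildcard rather than the directory itself, which is why the a/b/c/* variant is the one that behaves. A sketch of the working top-level file for this layout:

        # ignore the contents of a/b/c (not the directory itself)...
        a/b/c/*
        # ...then re-include the one file we care about
        !a/b/c/foo

    The same idea extends one level at a time: to keep a file deeper down, each intermediate directory has to be re-included with its own !dir/ rule (and its contents re-ignored with dir/*) before the file itself can be whitelisted.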

  • Is the appassembler plugin broken for Java Service Wrapper on Windows 64-bit?

    - by Paul McKenzie
    Hi, I'm developing on 32-bit Windows and am using appassembler to create a Java Service Wrapper assembly, and it works ok. But I also need to create a 64-bit assembly for deployment to a dev server. In the following config I have substituted the 32-bit platform with the 64-bit, see the <includes> section. But it no longer places the wrapper jar and dll in the lib folder. If I omit the includes completely, I get Linux, Solaris, Mac OSX and Win32 libraries, but no win64. Anyone got this working?

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>appassembler-maven-plugin</artifactId>
          <version>1.1-SNAPSHOT</version>
          <configuration>
            <target>${project.build.directory}/appassembler</target>
            <repositoryLayout>flat</repositoryLayout>
            <defaultJvmSettings>
              <initialMemorySize>256M</initialMemorySize>
              <maxMemorySize>1024M</maxMemorySize>
            </defaultJvmSettings>
            <daemons>
              <daemon>
                <id>MyApp</id>
                <mainClass>com.foo.AppMain</mainClass>
                <platforms>
                  <platform>jsw</platform>
                </platforms>
                <generatorConfigurations>
                  <generatorConfiguration>
                    <generator>jsw</generator>
                    <includes>
                      <include>windows-x86-64</include>
                    </includes>
                    <configuration>
                      <property>
                        <name>set.default.REPO_DIR</name>
                        <value>../../repo</value>
                      </property>
                    </configuration>
                  </generatorConfiguration>
                </generatorConfigurations>
              </daemon>
            </daemons>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>generate-daemons</goal>
                <goal>create-repository</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

  • A simple Python deployment problem - a whole world of pain

    - by Evgeny
    We have several Python 2.6 applications running on Linux. Some of them are Pylons web applications, others are simply long-running processes that we run from the command line using nohup. We're also using virtualenv, both in development and in production. What is the best way to deploy these applications to a production server?

    In development we simply get the source tree into any directory, set up a virtualenv and run - easy enough. We could do the same in production and perhaps that really is the most practical solution, but it just feels a bit wrong to run svn update in production. We've also tried fab, but it just never works first time. For every application something else goes wrong. It strikes me that the whole process is just too hard, given that what we're trying to achieve is fundamentally very simple. Here's what we want from a deployment process.

    - We should be able to run one simple command to deploy an updated version of an application. (If the initial deployment involves a bit of extra complexity that's fine.)
    - When we run this command it should copy certain files, either out of a Subversion repository or out of a local working copy, to a specified "environment" on the server, which probably means a different virtualenv. We have both staging and production versions of the applications on the same server, so they need to somehow be kept separate. If it installs into site-packages, that's fine too, as long as it works.
    - We have some configuration files on the server that should be preserved (i.e. not overwritten or deleted by the deployment process).
    - Some of these applications import modules from other applications, so they need to be able to reference each other as packages somehow. This is the part we've had the most trouble with! I don't care whether it works via relative imports, site-packages or whatever, as long as it works reliably in both development and production.
    - Ideally the deployment process should automatically install external packages that our applications depend on (e.g. psycopg2).

    That's really it! How hard can it be?
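    For what it's worth, the shape this usually takes with just the standard packaging stack - offered as a sketch under the assumption that each application gains a setup.py listing its dependencies (including the sibling applications it imports) - is: build an sdist, then pip-install it into the target virtualenv, keeping config outside the env:

        #!/bin/sh
        # deploy.sh <app-dir> <env-name> -- names and paths are illustrative
        APP_DIR=$1                      # working copy containing setup.py
        ENV=/srv/envs/$2                # e.g. /srv/envs/staging or /srv/envs/production

        # create the virtualenv on first deploy only
        [ -d "$ENV" ] || virtualenv "$ENV"

        # build the app and install it plus its declared dependencies
        cd "$APP_DIR"
        "$ENV/bin/python" setup.py sdist
        "$ENV/bin/pip" install --upgrade dist/*.tar.gz

        # config lives in /etc/<app>/ (outside the env), so redeploys never touch it

    Cross-application imports then stop being special: if app B imports app A, A's package is listed in B's install_requires and both end up in the env's site-packages.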

  • C#: BackgroundWorker cloning resources?

    - by Dav
    The problem

    I've been struggling with this particular problem for two days now and have just run out of ideas. A little... background: we have a WinForms app that needs to access a database, construct a list of related in-memory objects from that data, and then display them on a DataGridView. The important point is that we first populate an app-wide cache (List), and then create a mirror of the cache local to the form on which the DGV lives (using the List constructor param). Because fetching the data takes a good few seconds (the DB sits on a LAN server) to load, we decided to use a BackgroundWorker, and only refresh the DGV once the data is loaded. However, it seems that doing the loading via a BGW results in some memory leak... or an error on my part. When loaded using a blocking method call, the app consumes about 30MB of RAM; with a BGW this jumps to 80MB! While it may not seem much anyway, our clients are not too happy about it.

    Relevant code

    Form:

        private void MyForm_Load(object sender, EventArgs e)
        {
            MyRepository.Instance.FinishedEvent += RefreshCache;
        }

        private void RefreshCache(object sender, EventArgs e)
        {
            dgvProducts.DataSource = new List<MyDataObj>(MyRepository.Products);
        }

    Repository:

        private static List<MyDataObj> Products { get; set; }

        public event EventHandler ProductsLoaded;

        public void GetProductsSync()
        {
            List<MyDataObj> p;
            using (MyL2SDb db = new MyL2SDb(MyConfig.ConnectionString))
            {
                p = db.PRODUCTS
                      .Select(p => new MyDataObj { Id = p.ID, Description = p.DESCR })
                      .ToList();
            }
            Products = p;
            // tell the form to refresh UI
            if (ProductsLoaded != null) ProductsLoaded(this, null);
        }

        public void GetProductsAsync()
        {
            using (BackgroundWorker myWorker = new BackgroundWorker())
            {
                myWorker.DoWork += delegate
                {
                    List<MyDataObj> p;
                    using (MyL2SDb db = new MyL2SDb(MyConfig.ConnectionString))
                    {
                        p = db.PRODUCTS
                              .Select(p => new MyDataObj { Id = p.ID, Description = p.DESCR })
                              .ToList();
                    }
                    Products = p;
                };
                // tell the form to refresh UI when finished
                myWorker.RunWorkerCompleted += GetProductsCompleted;
                myWorker.RunWorkerAsync();
            }
        }

        private void GetProductsCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            if (ProductsLoaded != null) ProductsLoaded(this, null);
        }

    End!

    GetProductsSync or GetProductsAsync are called on the main thread, not shown above. Could it be that the GarbageCollector just gets lost with two threads? Or is it the task manager that shows incorrect values? Will be grateful for any responses, suggestions, criticism.
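    Two tentative observations, not from the original thread. First, Task Manager shows the process working set, which the CLR does not necessarily shrink after a collection, so a 30MB-vs-80MB gap is not by itself proof of a leak; comparing managed-heap figures is more meaningful. Second, the using block disposes the BackgroundWorker as soon as RunWorkerAsync returns, while the work is still in flight; disposing after completion is the safer shape. A sketch:

        public void GetProductsAsync()
        {
            var myWorker = new BackgroundWorker();
            myWorker.DoWork += delegate
            {
                // ... same DB query as before ...
            };
            myWorker.RunWorkerCompleted += (s, e) =>
            {
                GetProductsCompleted(s, e);
                myWorker.Dispose();   // dispose only once the work is done
            };
            myWorker.RunWorkerAsync();
        }

        // when profiling, compare the managed heap rather than the working set:
        long managedBytes = GC.GetTotalMemory(true);   // true = force a full collect first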

  • Should I use Python or Assembly for a super fast copy program

    - by PyNEwbie
    As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.

    I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies. I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages. Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which will give me more speed.

    Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk, but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector-copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours, I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss.) The reason I need to speed it up is I have had two or three cycles where something has happened during a copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
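    A useful sanity check before investing in assembly: a job like this is almost always bound by disk and bus throughput, not CPU, so the implementation language mainly governs how well the drives are kept busy (buffer sizes, overlapped I/O), not how fast bytes move. A hedged Python 2.6 sketch for measuring whether buffer size changes anything on the hardware in question:

        import os, shutil, time

        def timed_copy(src, dst, buf_size=16 * 1024 * 1024):
            """Copy one file with a configurable buffer and report throughput."""
            start = time.time()
            fsrc = open(src, 'rb')
            fdst = open(dst, 'wb')
            try:
                shutil.copyfileobj(fsrc, fdst, buf_size)
            finally:
                fsrc.close()
                fdst.close()
            elapsed = time.time() - start
            mb = os.path.getsize(src) / (1024.0 * 1024.0)
            print '%.1f MB in %.1fs -> %.1f MB/s' % (mb, elapsed, mb / elapsed)

    If throughput barely moves between an 8KB and a 16MB buffer, the bottleneck is the hardware, and neither assembly nor threading will change the 50-hour figure much (though a couple of threads can help hide per-file open/close latency across 20 million small files).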

  • Tracking upstream svn changes with git-svn and github?

    - by Joseph Turian
    How do I track upstream SVN changes using git-svn and github? I used git-svn to convert an SVN repo to git on github:

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin git@github.com:turian/osqa.git
        $ git push origin master

    I then made a few changes in my git repo, committed, and pushed to github. Now, I am on a new machine. I want to take upstream SVN changes, merge them with my github repo, and push them to my github repo. This documentation says: "If you ever lose your local copy, just run the import again with the same settings, and you'll get another working directory with all the necessary SVN metainfo." So I did the following. But none of the commands work as desired. How do I track upstream SVN changes using git-svn and github? What am I doing wrong?

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin git@github.com:turian/osqa.git
        $ git push origin master
        To git@github.com:turian/osqa.git
         ! [rejected]        master -> master (non-fast forward)
        error: failed to push some refs to 'git@github.com:turian/osqa.git'

        $ git pull
        remote: Counting objects: 21, done.
        remote: Compressing objects: 100% (17/17), done.
        remote: Total 17 (delta 7), reused 9 (delta 0)
        Unpacking objects: 100% (17/17), done.
        From git@github.com:turian/osqa
         * [new branch]      master     -> origin/master
        From git@github.com:turian/osqa
         * [new tag]         master     -> master
        You asked me to pull without telling me which branch you want to merge with, and 'branch.master.merge' in your configuration file does not tell me either. Please name which branch you want to merge on the command line and try again (e.g. 'git pull <repository> <refspec>'). See git-pull(1) for details on the refspec.
        ...

        $ /usr//lib/git-core/git-svn rebase
        warning: refname 'master' is ambiguous.
        First, rewinding head to replay your work on top of it...
        Applying: Added forum/management/commands/dumpsettings.py
        error: Ref refs/heads/master is at 6acd747f95aef6d9bce37f86798a32c14e04b82e but expected a7109d94d813b20c230a029ecd67801e6067a452
        fatal: Cannot lock the ref 'refs/heads/master'.
        Could not move back to refs/heads/master
        rebase refs/remotes/trunk: command returned error: 1
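    A hedged diagnosis: running git svn clone a second time rebuilds the whole history from scratch, and the rewritten commits get different SHA-1s from the ones already on github - hence the non-fast-forward rejection and the ambiguous master. Because git-svn embeds a git-svn-id line in every commit message and can rebuild its metadata from those, the usual recovery on a new machine is to clone from github and re-attach the svn remote instead of re-importing. A sketch (the fetch/branches/tags mappings mirror what a -s clone sets up):

        $ git clone git@github.com:turian/osqa.git osqa
        $ cd osqa
        # re-add the svn remote exactly as the -s clone had it
        $ git config svn-remote.svn.url http://svn.osqa.net/svnroot/osqa
        $ git config svn-remote.svn.fetch trunk:refs/remotes/trunk
        $ git config svn-remote.svn.branches branches/*:refs/remotes/*
        $ git config svn-remote.svn.tags tags/*:refs/remotes/tags/*
        # seed the trunk ref with the newest commit that came from svn
        # (the latest commit whose message contains a git-svn-id line)
        $ git update-ref refs/remotes/trunk \
              $(git log --grep=git-svn-id -1 --pretty=format:%H origin/master)
        # rebuild the rev map, pull new svn revisions, replay local work
        $ git svn fetch
        $ git svn rebase
        $ git push origin master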

  • How to get hibernate3-maven-plugin hbm2ddl to find JDBC driver?

    - by HDave
    I have a Java project I am building with Maven. I am now trying to get the hibernate3-maven-plugin to run the hbm2ddl tool to generate a schema.sql file I can use to create the database schema from my annotated domain classes. This is a JPA application that uses Hibernate as the provider. In my persistence.xml file I call out the mysql driver:

        <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/>
        <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>

    When I run Maven, I see it processing all my classes, but when it goes to output the schema, I get the following error:

        ERROR org.hibernate.connection.DriverManagerConnectionProvider - JDBC Driver class not found: com.mysql.jdbc.Driver
        java.lang.ClassNotFoundException: com.mysql.jdbc.Driver

    I have the MySQL driver as a dependency of this module. However it seems like the hbm2ddl tool cannot find it. I would have guessed that the Maven plugin would have known to search the local Maven file repository for this driver. What gives? The relevant part of my pom.xml is this:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>hibernate3-maven-plugin</artifactId>
          <executions>
            <execution>
              <phase>process-classes</phase>
              <goals>
                <goal>hbm2ddl</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <components>
              <component>
                <name>hbm2ddl</name>
                <implementation>jpaconfiguration</implementation>
              </component>
            </components>
            <componentProperties>
              <persistenceunit>my-unit</persistenceunit>
            </componentProperties>
          </configuration>
        </plugin>
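    Maven plugins run with their own classpath, separate from the project's dependencies, which is why a driver declared on the module's classpath is invisible to hbm2ddl. The usual fix is to declare the driver as a dependency of the plugin itself; the version below is a placeholder - any mysql-connector-java available in the repository should do:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>hibernate3-maven-plugin</artifactId>
          <!-- ... executions and configuration as above ... -->
          <dependencies>
            <dependency>
              <groupId>mysql</groupId>
              <artifactId>mysql-connector-java</artifactId>
              <version>5.1.9</version>
            </dependency>
          </dependencies>
        </plugin>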

  • Storing simulation results in a persistent manner for Python?

    - by Az
    Background: I'm running multiple simulations on a set of data. For each session, I'm allocating projects to students. The difference between each session is that I'm randomising the order of the students such that all the students get a shot at being assigned a project they want. I was writing out some of the allocations in a spreadsheet (i.e. Excel) and it basically looked like this (tiny snapshot, actual table extends to a few thousand sessions, roughly 100 students):

        |          | Session 1 | Session 2 | Session 3 |
        |----------|-----------|-----------|-----------|
        | Stu1     | Proj_AA   | Proj_AB   | Proj_AB   |
        | Stu2     | Proj_AB   | Proj_AA   | Proj_AC   |
        | Stu3     | Proj_AC   | Proj_AC   | Proj_AA   |

    Now, the code that deals with the allocation currently stores a session in an object. The next time the allocation is run, the object is overwritten. Thus what I'd really like to do is to store all the allocation results. This is important since I later need to derive from the data information such as: which project Stu1 got assigned to the most, or perhaps how popular Proj_AC was (how many times it was assigned / number of sessions).

    Question(s): What methods can I possibly use to basically store such session information persistently? Basically, each session output needs to add itself to the repository after ending and before beginning the next allocation cycle. One solution that was suggested by a friend was mapping these results to a relational database using SQLAlchemy. I kind of like the idea since this does give me an opportunity to delve into databases. Now the database structure I was recommended was:

        | Session  | Student   | Project   |
        |----------|-----------|-----------|
        | 1        | Stu1      | Proj_AA   |
        | 1        | Stu2      | Proj_AB   |
        | 1        | Stu3      | Proj_AC   |
        | 2        | Stu1      | Proj_AB   |
        | 2        | Stu2      | Proj_AA   |
        | 2        | Stu3      | Proj_AC   |
        | 3        | Stu1      | Proj_AB   |
        | 3        | Stu2      | Proj_AC   |
        | 3        | Stu3      | Proj_AA   |

    Here, it was suggested that I make the Session and Student columns a composite key. That way I can access a specific record for a particular student for a particular session, or I can merely get the entire allocation run for a particular session. Questions: Is the idea a good one? How does one implement and query a composite key using SQLAlchemy? What happens to the database if a particular student is not assigned a project (happens if all projects that he wants are taken)? In the code, if a student is not assigned a project, instead of a proj_id he simply gets None for that field/object. I apologise for asking multiple questions, but since these are closely related, I thought I'd ask them in the same space.
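    On the composite-key question, declarative SQLAlchemy lets you mark both columns primary_key=True, and making the project column nullable covers unassigned students (None maps to SQL NULL). A hedged sketch with invented names:

        from sqlalchemy import Column, Integer, String, create_engine
        from sqlalchemy.ext.declarative import declarative_base
        from sqlalchemy.orm import sessionmaker

        Base = declarative_base()

        class Allocation(Base):
            __tablename__ = 'allocations'
            session_id = Column(Integer, primary_key=True)  # composite key, part 1
            student_id = Column(String, primary_key=True)   # composite key, part 2
            project_id = Column(String, nullable=True)      # None = unassigned

        engine = create_engine('sqlite:///allocations.db')
        Base.metadata.create_all(engine)
        Session = sessionmaker(bind=engine)
        db = Session()

        db.add(Allocation(session_id=1, student_id='Stu1', project_id='Proj_AA'))
        db.commit()

        # one student's record in one run (tuple in primary-key column order):
        rec = db.query(Allocation).get((1, 'Stu1'))
        # the entire allocation run for session 2:
        run2 = db.query(Allocation).filter_by(session_id=2).all()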

  • While creating an archetype, getting the following error

    - by munna
        D:\Training\workspace\vppsource> mvn archetype:generate -B -DarchetypeGroupId=org.appfuse.archetypes -DarchetypeArtifactId=appfuse-modular-struts-archetype -DarchetypeVersion=2.1.0-M1 -DgroupId=com.vmware -DartifactId=vpp
        [INFO] Scanning for projects...
        [INFO] Searching repository for plugin with prefix: 'archetype'.
        [INFO] ------------------------------------------------------------------------
        [INFO] Building Maven Default Project
        [INFO]    task-segment: [archetype:generate] (aggregator-style)
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing archetype:generate
        [INFO] No goals needed for project - skipping
        [INFO] [archetype:generate {execution: default-cli}]
        [INFO] Generating project in Batch mode
        [WARNING] Error reading archetype catalog http://repo1.maven.org/maven2
        org.apache.maven.wagon.TransferFailedException: Error transferring file: Connection timed out: connect
            at org.apache.maven.wagon.providers.http.LightweightHttpWagon.fillInputData(LightweightHttpWagon.java:143)
            at org.apache.maven.wagon.StreamWagon.getInputStream(StreamWagon.java:116)
            at org.apache.maven.wagon.StreamWagon.getIfNewer(StreamWagon.java:88)
            at org.apache.maven.wagon.StreamWagon.get(StreamWagon.java:61)
            at org.apache.maven.archetype.source.RemoteCatalogArchetypeDataSource.getArchetypeCatalog(RemoteCatalogArchetypeDataSource.java:97)
            at org.apache.maven.archetype.DefaultArchetypeManager.getRemoteCatalog(DefaultArchetypeManager.java:195)
            at org.apache.maven.archetype.DefaultArchetypeManager.getRemoteCatalog(DefaultArchetypeManager.java:184)
            at org.apache.maven.archetype.ui.DefaultArchetypeSelector.getArchetypesByCatalog(DefaultArchetypeSelector.java:278)
            at org.apache.maven.archetype.ui.DefaultArchetypeSelector.selectArchetype(DefaultArchetypeSelector.java:69)
            at org.apache.maven.archetype.mojos.CreateProjectFromArchetypeMojo.execute(CreateProjectFromArchetypeMojo.java:186)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
            at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: java.net.ConnectException: Connection timed out: connect
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            at java.net.Socket.connect(Socket.java:529)
            at java.net.Socket.connect(Socket.java:478)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
            at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
            at sun.net.www.http.HttpClient.New(HttpClient.java:306)
            at sun.net.www.http.HttpClient.New(HttpClient.java:323)
            at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:860)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:801)
            at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:726)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1049)
            at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:373)
            at org.apache.maven.wagon.providers.http.LightweightHttpWagon.fillInputData(LightweightHttpWagon.java:115)
            ... 28 more
        [WARNING] No archetype found in Remote catalog. Defaulting to internal Catalog
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO]
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 46 seconds
        [INFO] Finished at: Wed Jun 09 16:11:07 IST 2010
        [INFO] Final Memory: 11M/28M
        [INFO] ------------------------------------------------------------------------
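    The root cause in the trace is the network hop, not the archetype itself: java.net.ConnectException: Connection timed out while fetching the catalog from repo1.maven.org. On machines that reach the internet through a proxy, the standard fix is a proxy entry in ~/.m2/settings.xml (host, port and credentials below are placeholders):

        <settings>
          <proxies>
            <proxy>
              <id>corp-proxy</id>
              <active>true</active>
              <protocol>http</protocol>
              <host>proxy.example.com</host>
              <port>8080</port>
              <!-- <username>user</username> <password>pass</password> -->
            </proxy>
          </proxies>
        </settings>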

  • Any simple approaches for managing customer data change requests for global reference files?

    - by Kelly Duke
    For the first time, I am developing in an environment in which there is a central repository for a number of different industry standard reference data tables and many different customers who need to select records from these industry standard reference data tables to fill in foreign key information for their customer specific records.

    Because these industry standard reference files are utilized by all customers, I want to reserve Create/Update/Delete access to these records for global product administrators. However, I would like to implement a (semi-)automated interface by which specific customers could request record additions, deletions or modifications to any of the industry standard reference files that are shared among all customers. I know I need something like a "data change request" table specifying: user id, user request datetime, request type (insert, modify, delete), a user entered text explanation of the change request, the user request's current status (pending, declined, completed), admin resolution datetime, admin id, an admin entered text description of the resolution, etc. What I can't figure out is how to elegantly handle the fact that these data change requests could apply to dozens of different tables with differing table column definitions.

    I would like to give the customer users making these data change requests a convenient way to enter their proposed record additions/modifications directly into CRUD screens that look very much like the reference table CRUD screens they don't have write/delete permissions for (with an additional text explanation and perhaps request priority field). I would also like to give the global admins a tool that allows them to view all the outstanding data change requests for the users they oversee, sorted by date requested or user/date requested. Upon selecting a data change request record off the list, the admin would be directed to another CRUD screen that would be populated with the fields the customer users requested for the new/modified industry standard reference table record, along with the customer's text explanation, the request status and the text resolution explanation field. At this point the admin could accept/edit/reject the requested change, and if accepted, the affected industry standard reference file would be automatically updated with the appropriate fields, and the data change request record's status, text resolution explanation and resolution datetime would all also be appropriately updated.

    However, I want to keep the actual production reference tables as simple as possible and free from these extraneous and typically null customer change request fields. I'd also like the data change request file to aggregate all data change requests across all the reference tables, yet somehow "point to" the specific reference table and primary key in question for modification & deletion requests, or the specific reference table and associated customer user entered field values in question for record creation requests. Does anybody have any ideas of how to design something like this effectively? Is there a cleaner, simpler way I am missing? Thank you so much for reading.
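    One common way to let a single request table span dozens of reference tables with different columns is to store the target generically: the target table's name, the target row's primary key (for updates and deletes), and the proposed field values as a serialized name/value payload rather than as typed columns. A hedged sketch of such a table (SQL Server-flavoured DDL; all names invented):

        CREATE TABLE DataChangeRequest (
            RequestId      INT IDENTITY PRIMARY KEY,
            UserId         INT          NOT NULL,
            RequestedAt    DATETIME     NOT NULL,
            RequestType    VARCHAR(10)  NOT NULL,  -- 'insert' | 'update' | 'delete'
            TargetTable    SYSNAME      NOT NULL,  -- e.g. 'IndustryCode'
            TargetKey      VARCHAR(100) NULL,      -- PK of target row; NULL for inserts
            ProposedValues XML          NULL,      -- <field name="Descr">new text</field>...
            Explanation    VARCHAR(MAX) NOT NULL,
            Status         VARCHAR(10)  NOT NULL DEFAULT 'pending',
            ResolvedAt     DATETIME     NULL,
            AdminId        INT          NULL,
            Resolution     VARCHAR(MAX) NULL
        );

    The admin tool can then read the target table's column metadata to render the familiar CRUD-style screen from the payload, and on approval apply the payload to the real reference table, keeping the production tables themselves untouched by workflow columns.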

  • NHibernate GenericADOException

    - by Ris90
    Hi, I'm trying to make a simple many-to-one association using NHibernate. I have a class Recruit with this mapping:

        <class name="Recruit" table="Recruits">
          <id name="ID">
            <generator class="native"/>
          </id>
          <property name="Lastname" column="lastname"/>
          <property name="Name" column="name"/>
          <property name="MedicalReport" column="medicalReport"/>
          <property name="DateOfBirth" column="dateOfBirth" type="Date"/>
          <many-to-one name="AssignedOnRecruitmentOffice" column="assignedOnRecruitmentOffice" class="RecruitmentOffice"/>
        </class>

    which is many-to-one connected to RecruitmentOffices:

        <class name="RecruitmentOffice" table="RecruitmentOffices">
          <id name="ID" column="ID">
            <generator class="native"/>
          </id>
          <property name="Chief" column="chief"/>
          <property name="Name" column="name"/>
          <property name="Address" column="address"/>
          <set name="Recruits" cascade="save-update" inverse="true" lazy="true">
            <key>
              <column name="AssignedOnRecruitmentOffice"/>
            </key>
            <one-to-many class="Recruit"/>
          </set>
        </class>

    And I created a Repository class with an Insert method:

        public void Insert(Recruit recruit)
        {
            using (ITransaction transaction = session.BeginTransaction())
            {
                session.Save(recruit);
                transaction.Commit();
            }
        }

    Then I try to save a new recruit to the database:

        Recruit test = new Recruit();
        RecruitmentOffice office = new RecruitmentOffice();
        office.Name = "test";
        office.Chief = "test";
        test.AssignedOnRecruitmentOffice = office;
        test.Name = "test";
        test.DateOfBirth = DateTime.Now;
        RecruitRepository testing = new RecruitRepository();
        testing.Insert(test);

    And I get this error on session.Save:

        GenericADOException: could not insert: [OSiUBD.Models.DAO.Recruit]
        [SQL: INSERT INTO Recruits (lastname, name, medicalReport, dateOfBirth, assignedOnRecruitmentOffice) VALUES (?, ?, ?, ?, ?); select SCOPE_IDENTITY()]
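    A likely culprit, offered tentatively: the cascade is declared only on the collection side (the <set> in RecruitmentOffice), so saving a Recruit that references a brand-new, never-saved RecruitmentOffice gives NHibernate nothing valid to write into assignedOnRecruitmentOffice. Adding a cascade to the many-to-one (or explicitly saving the office before the recruit) is the usual fix:

        <many-to-one name="AssignedOnRecruitmentOffice"
                     column="assignedOnRecruitmentOffice"
                     class="RecruitmentOffice"
                     cascade="save-update"/>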

  • How hard is programming? Really. [closed]

    - by Bubba88
    Hi! The question is about your perception of programming as an activity. How hard/exacting is this task? There is much buzz about programming nowadays; people say that programmers are smart, very technical and abstract at the same time, know much about the world, psychology etc. They say that programmers have really powerful brains, because there is much to keep in consideration simultaneously, again with much information folded into each other associatively (up to 10 levels of folding, they say). Still, there are some terms to specify on our own. So this is the question: What do you think about programming in general? Is it hard? Is it 'for everyone' or for a particular kind of people only? How much non-CS background do you need to program (just to program, really; enterprise applications for example)? How long is the learning curve? (Again, for programming in general.)

    And another bunch of random questions:

    - If you were not to like/love programming, would that be a serious trouble bothering your current employment?
    - If you were to start from the beginning, would you choose that direction this time?
    - What other areas (jobs or maybe hobbies) are comparable to programming in the way they can explode someone's lovely brain?
    - Is 'non Turing-complete programming' (SQL, XML, etc.) comparable to what we do, or is it really way easier, less demanding, cheap and akin to cooking :)?

    Well, the essence is: How would you describe programming activity with respect to its difficulty? Or, on the other hand: Did you ever catch yourself thinking at some point: OMG, it's sooo hard! I don't know how I would ever program, even carried away this way and doing programming just for fun? It's very interesting to know your opinion; you're the programmers after all. I mean, many people must be exaggerating/speculating about the thing they do not really know about. But that mustn't be the case here on SO :)

    P.S.: I'll try my best to update this post later, and you please edit it too. At least I'll get decent English in my question text :)

  • How to work with files and streams in PHP, case: if we open a file in Class A and pass the open stream to Class B

    - by Rachel
    I have two classes, one is a Business Logic Class {BLO} and the other one is a Data Access Class {DAO}, and I have a dependency in my BLO class on my DAO class. Basically I am opening a csv file to write into it in my BLO class, inside its constructor, as I am creating an object of BLO and passing in a file from the command prompt:

        $this->fin = fopen($file,'w+') or die('Cannot open file');

    Now inside BLO I have one function, notify, which has a dependency on the DAO class and calls getCurrentDBSnapshot from the Dao, passing the open stream so that data gets populated into the stream.

    BLO class constructor:

        public function __construct($file)
        {
            //Open Unica File for parsing.
            $this->fin = fopen($file,'w+') or die('Cannot open file');
            // Initialize the Repository DAO.
            $this->dao = new Dao('REPO');
        }

    BLO class method that interacts with the DAO method and calls getCurrentDBSnapshot:

        public function notify()
        {
            $data = $this->fin;
            var_dump($data); //resource(9) of type (stream)
            $outputFile = $this->dao->getCurrentDBSnapshot($data);
            // So basically am passing in $data which is resource(9) of type (stream)
        }

    DAO function getCurrentDBSnapshot, which gets the current state of the database table:

        public function getCurrentDBSnapshot($data)
        {
            $query = "SELECT * FROM myTable";
            //Basically just preparing the query.
            $stmt = $this->connection->prepare($query);
            // Execute the statement
            $stmt->execute();
            $header = array();
            while ($row = $stmt->fetch(PDO::FETCH_ASSOC))
            {
                if (empty($header))
                {
                    // Get the header values from table (column names)
                    $header = array_keys($row);
                    // Populate csv file with header information from myTable
                    fputcsv($data, $header);
                }
                // Export every row to a file
                fputcsv($data, $row);
            }
            var_dump($data); //resource(9) of type (stream)
            return $data;
        }

    So basically I am getting back resource(9) of type (stream) from getCurrentDBSnapshot and am storing that stream into $outputFile in the BLO class method notify. Now I want to close the file which I opened for writing, and so it should be fclose of $outputFile or $data, but in both cases it gives me:

        var_dump(fclose($outputFile)) as bool(false)
        var_dump(fclose($data)) as bool(false)

    and

        var_dump($outputFile) as resource(9) of type (Unknown)
        var_dump($data) as resource(9) of type (Unknown)

    My question is: if I open a file using fopen in Class A, and if I call a Class B method from a Class A method and pass an open stream (in our case $data), then Class B performs some work and returns an open stream - so how can I close that open stream in Class A's method? Or is it ok to keep that stream open and not use fclose? Would appreciate inputs, as I am not very sure how this can be implemented.
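    Note that $data and $outputFile are the same resource: passing a stream to a method and returning it doesn't copy it, so only one fclose() is ever needed, and resource of type (Unknown) together with fclose() returning false suggests the handle had already been closed somewhere by the time those dumps ran. A pattern that keeps this unambiguous is resource ownership: the class that fopen()s the stream is the only one that fclose()s it, and everyone else just writes to it. A hedged sketch, simplified from the classes in the question:

        class Blo
        {
            private $fin;

            public function __construct($file)
            {
                // Blo owns the handle for its whole lifetime
                $this->fin = fopen($file, 'w+') or die('Cannot open file');
            }

            public function notify()
            {
                // Dao only borrows the stream; it must not fclose() it
                $this->dao->getCurrentDBSnapshot($this->fin);
            }

            public function close()
            {
                if (is_resource($this->fin)) {
                    fclose($this->fin);   // the owner closes, exactly once
                    $this->fin = null;
                }
            }
        }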

  • Installing Enclojure with NetBeans

    - by jdknewb
    Hi all, I am having trouble installing Enclojure and getting it to work. I have followed this guide http://www.enclojure.org/gettingstarted and successfully installed Enclojure (I think). However, when I try to build the sample application (labrepl) I get a bunch of errors and a failed build. I haven't used Java in a long time and I've never used NetBeans, and the error doesn't seem very helpful with my limited knowledge of this domain. I'm using the latest NetBeans and the Enclojure URL from the guide. Since I am on Windows, I can't use git to clone the repo, so I'm not sure what to do from here. Anyway, here are the error messages.

        WARNING: You are running embedded Maven builds, some build may fail due to incompatibilities with latest Maven release.
        To set Maven instance to use for building, click here.
        Scanning for projects...
        [#process-resources]
        [resources:resources]
        Using default encoding to copy filtered resources.
        [#compile]
        [ERROR] Transitive dependency resolution for scope: compile has failed for your project.
        [ERROR] Error message: Missing:
        [ERROR] ----------
        [ERROR] 1) org.clojure:clojure-contrib:jar:1.2.0-master-SNAPSHOT
        [ERROR]    Try downloading the file manually from the project website.
        [ERROR]    Then, install it using the command:
        [ERROR]    mvn install:install-file -DgroupId=org.clojure -DartifactId=clojure-contrib -Dversion=1.2.0-master-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file
        [ERROR]    Alternatively, if you host your own repository you can deploy the file there:
        [ERROR]    mvn deploy:deploy-file -DgroupId=org.clojure -DartifactId=clojure-contrib -Dversion=1.2.0-master-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]
        [ERROR]    Path to dependency:
        [ERROR]    1) labrepl:labrepl:jar:0.0.1
        [ERROR]    2) org.clojure:clojure-contrib:jar:1.2.0-master-SNAPSHOT
        [ERROR] ----------
        [ERROR] 1 required artifact is missing.
        [ERROR] for artifact:
        [ERROR]   labrepl:labrepl:jar:0.0.1
        [ERROR] from the specified remote repositories:
        [ERROR]   central (http://repo1.maven.org/maven2),
        [ERROR]   clojars (http://clojars.org/repo/),
        [ERROR]   incanter (http://repo.incanter.org),
        [ERROR]   clojure-snapshots (http://build.clojure.org/snapshots),
        [ERROR]   clojure (http://build.clojure.org/releases),
        [ERROR]   clojure-releases (http://build.clojure.org/releases)
        [ERROR] Group-Id: labrepl
        [ERROR] Artifact-Id: labrepl
        [ERROR] Version: 0.0.1
        [ERROR] From file: C:\Users\chloey\Documents\NetBeansProjects\RelevanceLabRepl\pom.xml
        ------------------------------------------------------------------------
        For more information, run with the -e flag
        ------------------------------------------------------------------------
        BUILD FAILED
        ------------------------------------------------------------------------
        Total time: 1 second
        Finished at: Wed Jun 09 21:53:04 CDT 2010
        Final Memory: 72M/172M
        ------------------------------------------------------------------------

    Thanks all.

  • What facets have I missed for creating a 3 person guerilla dev team?

    - by Penguinix
    Sorry for the Windows developers out there, this solution is for Macs only. This set of applications accounts for: Usability Testing, Screen Capture (Video and Still), Version Control, Task Lists, Bug Tracking, a Developer IDE, a Web Server, A Blog, Shared Doc Editing on the Web, Team and individual Chat, Email, Databases and Continuous Integration. This does assume your team members provide their own machines, and one person has a spare old computer to be the Source Repository and Web Server. All for under $200 bucks.

    Usability
    - Silverback Licenses = 3 x $49.95. "Spontaneous, unobtrusive usability testing software for designers and developers."

    Source Control Server and Clients (multiple options)
    - Subversion = Free. Subversion is an open source version control system.
    - Versions (Currently in Beta) = Free. Versions provides a pleasant work with Subversion on your Mac.
    - Diffly = Free. "Diffly is a tool for exploring Subversion working copies. It shows all files with changes and, clicking on a file, shows a highlighted view of the changes for that file. When you are ready to commit Diffly makes it easy to select the files you want to check-in and assemble a useful commit message."

    Bug/Feature/Defect Tracking (multiple options)
    - Bugzilla = Free. Bugzilla is a "Defect Tracking System" or "Bug-Tracking System". Defect Tracking Systems allow individual or groups of developers to keep track of outstanding bugs in their product effectively. Most commercial defect-tracking software vendors charge enormous licensing fees.
    - Trac = Free. Trac is an enhanced wiki and issue tracking system for software development projects.

    Database Server & Clients
    - MySQL = Free
    - CocoaMySQL = Free

    Web Server
    - Apache = Free

    Development and Build Tools
    - XCode = Free
    - CruiseControl = Free. CruiseControl is a framework for a continuous build process. It includes, but is not limited to, plugins for email notification, Ant, and various source control tools. A web interface is provided to view the details of the current and previous builds.

    Collaboration Tools
    - Writeboard = Free
    - Ta-da List = Free
    - Campfire Chat for 4 users = Free
    - WordPress = Free. "WordPress is a state-of-the-art publishing platform with a focus on aesthetics, web standards, and usability. WordPress is both free and priceless at the same time."
    - Gmail = Free. "Gmail is a new kind of webmail, built on the idea that email can be more intuitive, efficient, and useful."

    Screen Capture (Video / Still)
    - Jing = Free. "The concept of Jing is the always-ready program that instantly captures and shares images and video…from your computer to anywhere."

    Lots of great responses:
    - TeamCity [Yo|||]
    - Skype [Eric DeLabar]
    - FogBugz [chakrit]
    - IChatAV and Screen Sharing (built-in to OS) [amrox]
    - Google Docs [amrox]

  • msysgit git-am can't apply its own git format-patch sequence

    - by Andrian Nord
    I'm using msysgit git on Windows to operate on a central svn repository. I'm using git as I want to have its awesome little local branches for everything and rebasing on each other. I also need to update from the central repo often, so using separate svn/git is not an option.

    The problem is, git svn --help (the man page) says that it is not a good idea to use git merge into the master branch (which is set to track svn's trunk) from local branches, as this will ruin the party and git svn dcommit would not work anymore. I know that it's not exactly true and you may use git merge if you are merging from a branch which was properly rebased on master prior to the merge, but I'm trying to make it safer and actually use git format-patch and git am. We are using code review, so I'm making patches anyway. I also knew about git cherry-pick, but I want to just git am /reviewed/patches/dir/* without actually recalling which commits corresponded to these patches (without reading patches, that is).

    So, what's wrong with git svn and git am? It's simple - git am, for a few very hard points, does CRLF-to-LF conversion on the patches supplied (git-mailsplit is doing this, to be precise), if not rebasing. git format-patch is also producing proper (LF-ended) patches. As my repo is mostly CRLF (and it should remain so), patches are, obviously, failing due to wrong EOLs. Converting diffs to CRLF and somehow hacking git am to prevent it from conversion is not working either. It will fail if any file was removed or deleted - git apply will complain about expected /dev/null (but it got /dev/null^M). And if I'm applying it with git am --ignore-space-change --ignore-whitespace, then it will commit LF endings straight to the index, which is also weird. I don't know if that will survive committing into svn (via git svn dcommit) and checking it out, and I don't want to try it out.

    Of course, it's still possible to try hacking around patches to convert only the actual diffs, but this is too much hackery for a simple task. So, I wonder, is there really no established way to produce patches and apply them to the same repo on the same system? It just feels weird that msysgit can't apply its own patches.
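    One escape hatch worth checking, hedged on the git build in use: later git releases added a --keep-cr option to git am (plus an am.keepcr config knob) that tells git mailsplit to leave carriage returns alone, so CRLF patches produced by git format-patch apply to a CRLF tree unmodified:

        git format-patch master..review-branch
        git am --keep-cr /reviewed/patches/dir/*
        # or set it once:
        git config am.keepcr true

    If the installed msysgit predates the option, upgrading is likely simpler than rewriting patch files by hand.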

    Read the article

  • Accessing different connection strings at runtime in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ to SQL, NUnit and WatiN. I recently discovered a pattern that lets me create a testing version of the database on the fly without modifying the development version. I need this behavior to run my WatiN user-interface tests, which may modify the database. The plan is to modify the connection string in the Web.config file and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs to perform the integration testing. I've set up the project so that the test setup can point the connection string at the test database while the tests are running; the connection string is stored in Web.config. The problem is that when I try to run the tests, I get a NullReferenceException when accessing HttpContext. From everything I have read so far, HttpContext is only available within the context of a controller. Here is the code for the property that is supposed to give me a reference to the Web.config file:
        private System.Configuration.Configuration WebConfig
        {
            get
            {
                ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();
                // NullReferenceException occurs on this line.
                fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config");
                System.Configuration.Configuration config =
                    ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);
                return config;
            }
        }
    Is there something I am missing to make this work? Is there a better way to accomplish what I'm trying to achieve?
    UPDATE: I decided to abandon modifying Web.config in favor of a "request-scoped DataContext" pattern that I found here. From the looks of it, it should give me the results I'm looking for. However, during the TestFixtureSetUp I try to create a new copy of the database for testing purposes, and it fails silently. When I get to the tests, the repository still uses the production database connection string to load data.
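    In the meantime, here is a sketch of one way around the null HttpContext (not my final code; the relative path is purely an assumed project layout and would need adjusting):
        using System;
        using System.Configuration;
        using System.IO;
        using System.Web;

        // Sketch: fall back to a file-system path when HttpContext.Current is
        // null (an NUnit run hosts no web request). The "..\..\..\MvcApp" hop
        // is a hypothetical layout; point it at wherever web.config really is.
        private static string WebConfigPath
        {
            get
            {
                if (HttpContext.Current != null)
                    return HttpContext.Current.Server.MapPath("~\\web.config");
                return Path.GetFullPath(Path.Combine(
                    AppDomain.CurrentDomain.BaseDirectory, @"..\..\..\MvcApp\web.config"));
            }
        }

        private System.Configuration.Configuration WebConfig
        {
            get
            {
                ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();
                fileMap.ExeConfigFilename = WebConfigPath;
                return ConfigurationManager.OpenMappedExeConfiguration(
                    fileMap, ConfigurationUserLevel.None);
            }
        }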

    Read the article

  • Is it possible to reliably auto-decode user files to Unicode? [C#]

    - by NVRAM
    I have a web application that allows users to upload their content for processing. The processing engine expects UTF-8 (and I'm composing XML from multiple users' files), so I need to ensure that I can properly decode the uploaded files. Since I'd be surprised if any of my users even knew their files were encoded, I have very little hope of them correctly specifying the encoding (decoder) to use. And so my application is left with the task of detecting the encoding before decoding. This seems like such a universal problem that I'm surprised not to find either a framework capability or a general recipe for the solution. Can it be I'm not searching with meaningful search terms? I've implemented BOM-aware detection (http://en.wikipedia.org/wiki/Byte_order_mark), but I'm not sure how often files will be uploaded without a BOM to indicate their encoding, and this isn't useful for most non-UTF files.
    My questions boil down to:
    1. Is BOM-aware detection sufficient for the vast majority of files?
    2. In the case where BOM detection fails, is it possible to try different decoders and determine if they are "valid"? (My attempts indicate the answer is "no.")
    3. Under what circumstances will a "valid" file fail with the C# encoder/decoder framework?
    4. Is there a repository anywhere that has a multitude of files with various encodings to use for testing?
    While I'm specifically asking about C#/.NET, I'd like to know the answer for Java, Python and other languages for the next time I have to do this. So far I've found that a "valid" UTF-16 file with Ctrl-S characters caused encoding to UTF-8 to throw an exception (an illegal character? it was an XML encoding exception), and that decoding a valid UTF-16 file with UTF-8 succeeds but gives text with null characters. Huh? Currently I only expect UTF-8, UTF-16 and probably ISO-8859-1 files, but I want the solution to be extensible if possible. My existing set of input files isn't nearly broad enough to uncover all the problems that will occur with live files, and although the files I'm trying to decode are "text", I think they are often created with methods that leave garbage characters in them. Hence "valid" files may not be "pure". Oh joy. Thanks.
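    PS: to make question 2 concrete, here is the kind of strict trial decoding I have in mind (a sketch only, using .NET's exception fallbacks; note that a single-byte encoding such as ISO-8859-1 accepts any byte, so it can never "fail" this test and remains a guess rather than a detection):
        using System;
        using System.Text;

        static Encoding GuessEncoding(byte[] bytes)
        {
            // 1. Trust a BOM when one is present.
            if (bytes.Length >= 3 && bytes[0] == 0xEF && bytes[1] == 0xBB && bytes[2] == 0xBF)
                return Encoding.UTF8;
            if (bytes.Length >= 2 && bytes[0] == 0xFF && bytes[1] == 0xFE)
                return Encoding.Unicode;            // UTF-16 little-endian
            if (bytes.Length >= 2 && bytes[0] == 0xFE && bytes[1] == 0xFF)
                return Encoding.BigEndianUnicode;   // UTF-16 big-endian

            // 2. No BOM: try a strict UTF-8 decoder. ExceptionFallback makes it
            // throw on invalid byte sequences instead of substituting U+FFFD.
            Encoding strictUtf8 = Encoding.GetEncoding("utf-8",
                EncoderFallback.ExceptionFallback, DecoderFallback.ExceptionFallback);
            try
            {
                strictUtf8.GetString(bytes);
                return Encoding.UTF8;
            }
            catch (DecoderFallbackException)
            {
                // 3. Fall back to a single-byte guess.
                return Encoding.GetEncoding("iso-8859-1");
            }
        }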

    Read the article

  • Turning off ASP.Net WebForms authentication for one sub-directory

    - by Keith
    I have a large enterprise application containing both WebForms and MVC pages. It has existing authentication and authorisation settings that I don't want to change. The WebForms authentication is configured in the web.config:
        <authentication mode="Forms">
            <forms blah... blah... blah />
        </authentication>
        <authorization>
            <deny users="?" />
        </authorization>
    Fairly standard so far. I have a REST service that is part of this big application, and I want to use HTTP authentication instead for this one service. So, when a user attempts to get JSON data from the REST service, it returns an HTTP 401 status and a WWW-Authenticate header. If they respond with a correctly formed HTTP Authorization header, it lets them in. The problem is that WebForms overrides this at a low level: if you return 401 (Unauthorised), it replaces that with a 302 (redirection to the login page). That's fine in the browser but useless for a REST service. I want to turn off the authentication setting in the web.config:
        <location path="rest">
            <system.web>
                <authentication mode="None" />
                <authorization><allow users="?" /></authorization>
            </system.web>
        </location>
    The authorisation bit works fine, but when I try to change the authentication I get an exception: "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level." I am configuring this at application level, though; it's in the root web.config. How do I override the authentication so that the rest of the site uses WebForms authentication and this one directory uses none? This is similar to another question (401 response code for json requests with ASP.NET MVC), but I'm not looking for the same solution: I don't want to remove the WebForms authentication and add new custom code globally, as there's far too much risk and work involved. I want to change just the one directory in configuration.
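    For what it's worth, since <authentication> genuinely cannot be overridden below application level, one common workaround is to let the forms module run and then undo its login-page redirect for the one path that should speak plain HTTP. This is a sketch against an assumed ~/rest path and realm, not a verified drop-in:
        // Global.asax.cs. Sketch only: FormsAuthenticationModule rewrites 401s
        // into 302 redirects during EndRequest, so this turns the 302 back
        // into a bare 401 for the REST path.
        protected void Application_EndRequest(object sender, EventArgs e)
        {
            HttpContext context = HttpContext.Current;
            bool isRestCall = context.Request.AppRelativeCurrentExecutionFilePath
                .StartsWith("~/rest", StringComparison.OrdinalIgnoreCase);

            if (isRestCall && context.Response.StatusCode == 302)
            {
                context.Response.ClearContent();
                context.Response.ClearHeaders();
                context.Response.StatusCode = 401;
                context.Response.AddHeader("WWW-Authenticate", "Basic realm=\"rest\"");
            }
        }
    (Later framework versions added Response.SuppressFormsAuthenticationRedirect for exactly this, but that option postdates MVC 1-era projects.)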

    Read the article
