Search Results

Search found 3166 results on 127 pages for 'git p4'.


  • Two-page view in Word, shouldn't the first page be on the right?

    - by Cylindric
    Greetings Superusers, I'm putting together a lengthy document in Word, and it's going to be printed and bound duplex. I've put page numbers "outside" etc., and all is pretty. The problem is, in the "Two Pages" view, it puts p1 on the left, then p2 on the right, then p3 below on the left, and p4 on the right:

        p1 p2
        p3 p4
        p5 p6

    Shouldn't this be slightly different, though? When I get to print it, p1 is on the right, not the left, so the preview should go:

           p1
        p2 p3
        p4 p5
        p6

    Because when I "open" the book, it's pages 2 and 3 that are side by side. This makes layout tweaking confusing, because it's not instantly obvious which pages will be "visible" to the reader at the same time. Have I missed something? I can't just put a blank page first, because that would bugger up the printing, as the printer automatically duplexes and binds etc. (Office 2008, by the way.)

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series] LINQ to SQL has a lot of great features, like strong typing, query compilation, deferred execution, a declarative paradigm, etc., which are very productive. Of course, these cannot be free, and one price is performance.

    O/R mapping overhead

    Because LINQ to SQL is based on O/R mapping, one obvious overhead is that changing data usually requires retrieving data first:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                Product product = database.Products.Single(item => item.ProductID == id); // SELECT...
                product.UnitPrice = unitPrice;
                database.SubmitChanges(); // UPDATE...
            }
        }

    Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than a direct data update via ADO.NET:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (SqlConnection connection = new SqlConnection(
                "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
            using (SqlCommand command = new SqlCommand(
                @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID",
                connection))
            {
                command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id;
                command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice;
                connection.Open();
                command.Transaction = connection.BeginTransaction();
                command.ExecuteNonQuery(); // UPDATE...
                command.Transaction.Commit();
            }
        }

    The above imperative code specifies the "how to do" details and has better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the above declarative code should be replaced by:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.ExecuteCommand(
                    "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}",
                    id, unitPrice);
            }
        }

    Or just create a stored procedure:

        CREATE PROCEDURE [dbo].[UpdateProductUnitPrice]
        (
            @ProductID INT,
            @UnitPrice MONEY
        )
        AS
        BEGIN
            BEGIN TRANSACTION
            UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID
            COMMIT TRANSACTION
        END

    and map it as a method of NorthwindDataContext (explained in this post):

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.UpdateProductUnitPrice(id, unitPrice);
            }
        }

    As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity, according to the case. From a developer's perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable.

    Data retrieving overhead

    After the O/R-mapping-specific issue, now look into the LINQ to SQL specific issues, for example, performance in the data retrieving process. The previous post has explained that the SQL translating and executing is complex. Actually, the LINQ to SQL pipeline is similar to a compiler pipeline. It consists of about 15 steps to translate a C# expression tree to a SQL statement, which can be categorized as:

    - Convert: invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes;
    - Bind: use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.;
    - Flatten: figure out the hierarchy of the query;
    - Rewrite: for SQL Server 2000, if needed;
    - Reduce: remove the unnecessary information from the tree;
    - Format: generate the SQL statement string;
    - Parameterize: figure out the parameters; for example, a reference to a local variable should become a parameter in the SQL;
    - Materialize: execute the reader and convert the result back into typed objects.

    So for each data retrieval, even one that looks simple:

        private static Product[] RetrieveProducts(int productId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                return database.Products.Where(product => product.ProductID == productId)
                    .ToArray();
            }
        }

    LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query.

    Compiled query

    When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query once and execute it multiple times:

        internal static class CompiledQueries
        {
            private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts =
                CompiledQuery.Compile((NorthwindDataContext database, int productId) =>
                    database.Products.Where(product => product.ProductID == productId).ToArray());

            internal static Product[] RetrieveProducts(
                this NorthwindDataContext database, int productId)
            {
                return _retrieveProducts(database, productId);
            }
        }

    The new version of RetrieveProducts() gets better performance, because only when _retrieveProducts is invoked for the first time does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure the translation happens only once in multi-threading scenarios.

    Static SQL / stored procedures without translating

    Another way to avoid the translating overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article does not dive into them. For the details, Scott Guthrie already has some excellent articles:

    - LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures)
    - LINQ to SQL (Part 7: Updating our Database using Stored Procedures)
    - LINQ to SQL (Part 8: Executing Custom SQL Expressions)

    Data changing overhead

    The data updating process also needs a lot of work:

    - Begin a transaction;
    - Process the changes (ChangeProcessor): walk through the objects to identify the changes, determine the order of the changes, and execute the changes. LINQ queries may be needed to execute the changes; as in the first example in this article, an object needs to be retrieved before being changed, so the whole data retrieving process above is gone through;
    - If there is user customization, execute it; for example, a table's INSERT / UPDATE / DELETE can be customized in the O/R designer.

    It is important to keep these overheads in mind.
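    As a side note, if the mapping has a ROWVERSION/timestamp member, or its members are marked UpdateCheck.Never (an assumption; the sample Northwind mapping in this series may not be configured this way), the extra SELECT can be avoided by attaching a detached entity as modified. A minimal sketch:

        private static void UpdateProductUnitPriceWithAttach(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                // Construct a detached entity carrying only the key and the new value,
                // then attach it as modified; LINQ to SQL then issues just the UPDATE.
                // Attach(entity, true) throws unless the mapping has a version column
                // or uses UpdateCheck.Never on its members.
                Product product = new Product { ProductID = id, UnitPrice = unitPrice };
                database.Products.Attach(product, true); // true = treat as modified
                database.SubmitChanges(); // UPDATE only, no preceding SELECT
            }
        }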
    Bulk deleting / updating

    Another thing to be aware of is bulk deleting:

        private static void DeleteProducts(int categoryId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.Products.DeleteAllOnSubmit(
                    database.Products.Where(product => product.CategoryID == categoryId));
                database.SubmitChanges();
            }
        }

    The expected SQL should be something like:

        BEGIN TRANSACTION
        exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0]
        WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9
        COMMIT TRANSACTION

    However, as mentioned before, the actual SQL retrieves the entities and then deletes them one by one:

        -- Retrieves the entities to be deleted:
        exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID],
        [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
        [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
        FROM [dbo].[Products] AS [t0]
        WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9

        -- Deletes the retrieved entities one by one:
        BEGIN TRANSACTION
        exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND
        ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND
        ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND
        ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',
        N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',
        @p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
        exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND
        ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND
        ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND
        ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',
        N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',
        @p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
        -- ...
        COMMIT TRANSACTION

    The same goes for bulk updating. This is really not efficient, and it needs to be kept in mind. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement into an INNER JOIN:

        exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0]
        INNER JOIN (
            SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID],
            [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder],
            [t0].[ReorderLevel], [t0].[Discontinued]
            FROM [dbo].[Products] AS [t0]
            WHERE [t0].[CategoryID] = @p0) AS [j1]
        ON ([j0].[ProductID] = [j1].[ProductID])', -- The primary key
        N'@p0 int',@p0=9

    Query plan overhead

    The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure if it is a bug). LINQ to SQL internally uses ADO.NET, but it does not set SqlParameter.Size for a variable-length argument, like an argument of NVARCHAR type, etc. So for two queries with the same SQL but different argument lengths:

        using (NorthwindDataContext database = new NorthwindDataContext())
        {
            database.Products.Where(product => product.ProductName == "A")
                .Select(product => product.ProductID).ToArray();

            // The same SQL and argument type, different argument length.
            database.Products.Where(product => product.ProductName == "AA")
                .Select(product => product.ProductID).ToArray();
        }

    Pay attention to the argument length in the translated SQL:

        exec sp_executesql N'SELECT [t0].[ProductID]
        FROM [dbo].[Products] AS [t0]
        WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A'

        exec sp_executesql N'SELECT [t0].[ProductID]
        FROM [dbo].[Products] AS [t0]
        WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA'

    Here is the overhead: the first query's cached query plan is not reused by the second one:

        SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql]
        FROM sys.syscacheobjects
        INNER JOIN sys.dm_exec_cached_plans
        ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid;

    They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:

        internal static class SqlTypeSystem
        {
            private abstract class ProviderBase : TypeSystemProvider
            {
                protected int? GetLargestDeclarableSize(SqlType declaredType)
                {
                    SqlDbType sqlDbType = declaredType.SqlDbType;
                    if (sqlDbType <= SqlDbType.Image)
                    {
                        switch (sqlDbType)
                        {
                            case SqlDbType.Binary:
                            case SqlDbType.Image:
                                return 8000;
                        }
                        return null;
                    }
                    if (sqlDbType == SqlDbType.NVarChar)
                    {
                        return 4000; // Max length for NVARCHAR.
                    }
                    if (sqlDbType != SqlDbType.VarChar)
                    {
                        return null;
                    }
                    return 8000;
                }
            }
        }

    In the above example, the translated SQL becomes:

        exec sp_executesql N'SELECT [t0].[ProductID]
        FROM [dbo].[Products] AS [t0]
        WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A'

        exec sp_executesql N'SELECT [t0].[ProductID]
        FROM [dbo].[Products] AS [t0]
        WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA'

    So they reuse the same cached query plan: now the [usecounts] column is 2.
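    As a side note, for code that must stay on earlier framework versions, the same effect can be approximated by hand in raw ADO.NET: declaring the parameter with a fixed size keeps the parameterized SQL text identical across calls, so one cached plan is reused. A minimal sketch (the method and the 4000 length are illustrative, not from the library):

        private static int[] RetrieveProductIds(SqlConnection connection, string productName)
        {
            using (SqlCommand command = new SqlCommand(
                "SELECT [ProductID] FROM [dbo].[Products] WHERE [ProductName] = @ProductName",
                connection))
            {
                // Fixing the declared length at NVARCHAR's maximum (4000) means "A" and
                // "AA" produce byte-identical parameterized SQL, hence one cached plan.
                command.Parameters.Add("@ProductName", SqlDbType.NVarChar, 4000).Value = productName;
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    List<int> productIds = new List<int>();
                    while (reader.Read())
                    {
                        productIds.Add(reader.GetInt32(0));
                    }
                    return productIds.ToArray();
                }
            }
        }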

    Read the article

  • Integrating different branches from external sources into a single Mercurial repository

    - by dukeofgaming
    I'm currently working in a company using Perforce and am making way for distributed version control with Mercurial. I've had success importing Perforce history using perfarce (quite a suitable name, I laugh every time I see/say it); however, this only works with a single branch at a time. Here's how my P4 integration setup works:

    - In Perforce, create a "client", which is kind of a description of what you will be constantly updating/checking out. This can only address one branch at a time (trunk or other).
    - Once you do this, run hg clone p4://<server>/<client_name>
    - Go to .hg/hgrc and put the perforce path line: perforce = p4://<server>/<client_name>
    - Work normally with the code under Mercurial; do hg pull perforce to sync up, and hg push to export a changelist.

    What I'd like to be able to do is have a Perforce path per branch and have everything work in the same repository. Now, pushing is not a problem; however, if I pull the history from another branch it ends up on the default branch. I'd like to be able to do something like hg pull perforce-R5 and have it land in Mercurial's R5 branch (see the sketch below). Even if I have no merging history, being able to preserve it would be sweet enough. There are also other plugins for CVCSs that let you integrate Mercurial, but AFAIK the Subversion one has the same problem. I don't think there is a straight-through way of doing this, but as long as I could automate the process with some hooks and scripts on a single Mercurial machine, that would be good enough.
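    For what it's worth, here is the sketch of the direction I have in mind; the client names are made up, and whether perfarce would honor a per-path branch mapping this way is exactly what I'm unsure about:

        # .hg/hgrc -- one named path per Perforce client, one client per branch
        [paths]
        perforce    = p4://<server>/<trunk_client>
        perforce-R5 = p4://<server>/<r5_client>

        # then, hypothetically:
        #   hg pull perforce-R5   # should land on Mercurial's R5 branch
        #                         # (today it lands on default, which is the problem)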

    Read the article

  • Installing Markdown's extensions in Python

    - by Masi
    The installation notes (git://gitorious.org/python-markdown/mainline.git) say in the file using_as_module.txt:

        One of the parameters that you can pass is a list of Extensions. Extensions must be available as python modules either within the markdown.extensions package or on your PYTHONPATH with names starting with mdx_, followed by the name of the extension. Thus, extensions=['footnotes'] will first look for the module markdown.extensions.footnotes, then a module named mdx_footnotes. See the documentation specific to the extension you are using for help in specifying configuration settings for that extension.

    I put the folder "extensions" into ~/bin/python/ such that my PYTHONPATH is the following:

        export PYTHONPATH=/Users/masi/bin/python/:/opt/local/Library/Frameworks/Python.framework/Versions/2.6/

    The instructions say that I need to import the add-ons like this:

        import markdown
        import <module-name>

    However, I cannot see any such module in my Python. This suggests to me that the extensions are not available as "python modules - - on [my] PYTHONPATH with names starting with mdx_ - -." How can you get Markdown's extensions to work?

    2nd attempt

    I ran, at ~/bin/markdown:

        git clone git://gitorious.org/python-markdown/mainline.git python-markdown
        cd python-markdown
        python setup.py install

    I put the folder /Users/masi/bin/markdown/python-markdown/build on my PATH, because the installation message suggests that is the new location of the extensions. I have the following in a test markdown document:

        [TOC]

        -- headings here with # -format ---

    However, I do not get the table of contents. This suggests to me that we need to somehow activate the extensions when we compile with the markdown.py script. The problem returns to my first quoted text, which is rather confusing to me.
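    For reference, this is a minimal sketch of how I understand the module API is supposed to activate an extension; the 'toc' extension name and the file name are my assumptions based on the docs:

        import markdown

        source = open('test.txt').read()
        # Passing the extension name activates it; 'toc' should replace the
        # [TOC] marker with a generated table of contents.
        # (Equivalently, on the command line: markdown.py -x toc test.txt)
        html = markdown.markdown(source, extensions=['toc'])
        print(html)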

    Read the article

  • Why can't “knife data bag from file” find existing json file on chef server?

    - by ellisera
    Summary: I'm running into a problem with "knife data bag from file", where knife doesn't recognize the .json data bag file pulled down from a remote git repo. Background: I'm currently trying to transition from chef-solo use to chef server while using the cookbooks, data bags and other chef info from our remote git repo. I've currently pulled down a copy of our git repo and set the cookbook path and data bag path in knife.rb. I also loaded the cookbooks, made adjustments, etc. Details: When trying to load our .json data bags by doing "knife data bag add from file FOLDER FILE", it looks like it worked, until I do "knife data bag list" and it comes up blank. So I decided to try adding the edit option at the end to see what's being loaded, if it is. This is the error I get:

        knife data bag from file local_settings test.json -e nano
        ERROR: Could not find or open file 'test.json' in current directory or in 'data_bags/local_settings/test.json'

    The data bag file does exist, in the proper location, in a tested, working json file. I've also sometimes gotten an error saying "could not open data bag "local_settings"". I would obviously like to keep the data bag path within the appropriate git repo folder to be able to keep track of changes in a more centralized location (our git repo, as opposed to the chef server). Any solutions, advice or pointers in the right direction are appreciated.
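    In case it helps to reproduce, this is a sketch of what I would expect to work, run from the top of the chef repo checkout (~/chef-repo is a placeholder for wherever the repo lives):

        # from the repo root, so knife's relative data_bags path resolves
        cd ~/chef-repo
        # the bag has to exist on the server before items can be loaded into it
        knife data bag create local_settings
        knife data bag from file local_settings data_bags/local_settings/test.json
        knife data bag list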

    Read the article

  • Files Corrupted on System Restore

    - by Yar
    I restored OSX 10.6.2 today (was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. I am seeing some files as 0 bytes, and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Other programs, by contrast, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game-over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, which do not change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, but renaming them -- which works -- does not change anything). What else can I do to take ownership of these files? Edit: This question comes from StackOverflow because I originally thought it was a GIT problem. It's definitely not (just) GIT. Anyway, this is to help put some of the comments in context.
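    One more avenue I'm considering, on the guess that ACLs or extended attributes are the culprit (the "could not copy extended attributes" error hints at it):

        # show ACLs (-e) and extended attributes (-@) that plain ls -l hides
        ls -le@ .git/objects/fe/
        # strip any ACL entries recursively; OS X's chmod has -N for exactly this
        sudo chmod -RN .git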

    Read the article

  • Macos default paths prepend my defined paths in vim

    - by Bogdan Gusiev
    I am trying to call shell commands from vim, like :!ls. But unfortunately some default PATH entries are prepended to the PATH defined in the original shell. Here is the echo $PATH output in the original shell:

        /usr/local/heroku/bin:/Users/bogdan/.rvm/gems/ruby-1.9.3-p194/bin:/Users/bogdan/.rvm/gems/ruby-1.9.3-p194@global/bin:/Users/bogdan/.rvm/rubies/ruby-1.9.3-p194/bin:/Users/bogdan/.rvm/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11/bin:/Users/bogdan/.rvm/bin:/Users/bogdan/bin:/Users/bogdan/.rvm/bin:/usr/local/Cellar/git/1.7.12.2/libexec/git-core:/Users/bogdan/.rvm/bin

    and in a shell called from within vim:

        /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11/bin:/Users/bogdan/.rvm/gems/ruby-1.9.3-p194@devauc/bin:/Users/bogdan/.rvm/gems/ruby-1.9.3-p194@global/bin:/Users/bogdan/.rvm/rubies/ruby-1.9.3-p194/bin:/Users/bogdan/.rvm/bin:/Users/bogdan/bin:/usr/local/Cellar/git/1.7.12.2/libexec/git-core:/Users/bogdan/.rvm/bin

    Why did they appear there? How can I prevent that and make vim's shell use the original PATH variable?
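    The workaround I'm experimenting with, on the guess that the culprit is /usr/libexec/path_helper being re-run by the system startup files of the shell vim spawns:

        " in ~/.vimrc: run shell commands through an interactive shell, so that
        " ~/.zshrc (where my PATH additions live) is sourced last and wins
        set shellcmdflag=-ic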

    Read the article

  • Symlinks are inaccessible by their full path on OS X

    - by Computer Guru
    Hi, I have symlinks pointing to applications placed in /usr/local/bin, which is in the path. However, I can't run these applications from other folders. Even more weird, I can't access them by the full path to the symlink.

        [mqudsi@iqudsi:Desktop/EasyBCD]$ echo $path (03-26 13:42)
        /opt/local/bin /opt/local/sbin /usr/local/bin /usr/local/sbin/ /usr/local/CrossPack-AVR/bin /usr/bin /bin /usr/sbin /sbin /usr/local/bin /usr/X11/bin
        [mqudsi@iqudsi:local/bin]$ ls -l /usr/local/bin (03-26 13:47)
        total 24280
        -rwxr-xr-x 1 mqudsi wheel 18464 May 14 2009 ascii-xfr
        -rwxr-xr-x 1 mqudsi wheel 12567 Mar 25 04:50 brew
        -rwxr-xr-x 1 mqudsi wheel 17768 Dec 11 12:41 bsdiff
        -rwxr-xr-x 1 mqudsi wheel 43024 Mar 28 2009 dumpsexp
        -rwxr-xr-x 1 mqudsi wheel 280 Sep 10 2009 easy_install
        -rwxr-xr-x 1 mqudsi wheel 288 Sep 10 2009 easy_install-2.6
        -rwxr-xr-x 1 mqudsi wheel 39696 Apr 5 2009 fuse_wait
        lrwxr-xr-x 1 mqudsi wheel 29 Mar 25 04:53 git -> ../Cellar/git/1.7.0.3/bin/git
        [mqudsi@iqudsi:local/bin]$ /usr/local/bin/git (03-26 13:47)
        zsh: no such file or directory: /usr/local/bin/git

    Clearly the link is there, but I'm not able to get to it :S
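    A sanity check worth recording here: zsh prints "no such file or directory" when running a symlink whose target is missing, which would explain the symptom. The Cellar path below is just what the link above points at:

        [mqudsi@iqudsi:local/bin]$ readlink /usr/local/bin/git
        ../Cellar/git/1.7.0.3/bin/git
        [mqudsi@iqudsi:local/bin]$ ls -l /usr/local/Cellar/git/1.7.0.3/bin/git   # does the target still exist?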

    Read the article

  • multiple ssh aliases are selecting the wrong user when forwarding

    - by Chris Beck
    I'm following the dual identity procedure for Bitbucket: I have 2 Bitbucket accounts, ccmcbeck and chrisbeck. The former is personal, the latter is work. On my local Mac, I have this in my ~/.ssh/config:

        Host *.work.com
            User chris
            ForwardAgent yes
            IdentityFile ~/.ssh/work_dsa
        Host bitbucket-personal
            HostName bitbucket.org
            User ccmcbeck
            ForwardAgent no
            IdentityFile ~/.ssh/bitbucket_ccmcbeck_rsa
        Host bitbucket-work
            HostName bitbucket.org
            User chrisbeck
            ForwardAgent no
            IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa

    On my local Mac, when I ssh -T, all is good:

        $ ssh -T git@bitbucket-personal
        logged in as ccmcbeck.
        $ ssh -T git@bitbucket-work
        logged in as chrisbeck.

    On my local Mac, the ssh version is OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011. When I ssh foo.work.com to my Linux box, I get:

        $ ssh-add -l
        1024 ... /Users/chris/.ssh/work_dsa (DSA)
        2048 ... /Users/chris/.ssh/bitbucket_ccmcbeck_rsa (RSA)
        2048 ... /Users/chris/.ssh/bitbucket_chrisbeck_rsa (RSA)

    On foo.work.com, I also have this in my ~/.ssh/config:

        Host bitbucket-personal
            HostName bitbucket.org
            User ccmcbeck
            ForwardAgent no
            IdentityFile ~/.ssh/bitbucket_ccmcbeck_rsa
        Host bitbucket-work
            HostName bitbucket.org
            User chrisbeck
            ForwardAgent no
            IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa

    However, on foo.work.com, ssh -T references the wrong user for git@bitbucket-work:

        $ ssh -T git@bitbucket-personal
        logged in as ccmcbeck.
        $ ssh -T git@bitbucket-work
        logged in as ccmcbeck.

    On foo.work.com, the ssh version is OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008. Why is my configuration causing foo.work.com to reference the wrong user?
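    For reference, the change I'm experimenting with on foo.work.com: adding IdentitiesOnly so ssh offers only the stanza's IdentityFile instead of every key the forwarded agent holds (my guess is the older OpenSSH tries the agent keys in order and Bitbucket accepts the first one, which maps to ccmcbeck):

        Host bitbucket-work
            HostName bitbucket.org
            User chrisbeck
            ForwardAgent no
            IdentitiesOnly yes      # only offer the key below, not all agent keys
            IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa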

    Read the article

  • Installing Gitolite on NAS - FATAL: have errors but logging failed

    - by Jay
    I'm trying to install Gitolite on my Synology DiskStation, following these instructions: http://www.bluevariant.com/2012/05/comprehensive-guide-git-gitolite-synology-diskstation/ (under "Install Gitolite on the DiskStation and Run Setup"). When I run the gitolite install command:

        DiskStation> /volume1/homes/git/gitolite/install -ln /bin

    I get the following error:

        FATAL: have errors but logging failed!
        2012-05-31.00:10:22 no GL_LOGFILE env var
        2012-05-31.00:10:22 die could not symlink /volume1/home/git/gitolite/src/gitolite to /bin<<newline>> at /volume1/home/git/gitolite/install line 71<<newline>>

    I am very new to all of this. Does anyone know what this means, and how I can fix this error? Thank you very much.
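    In case someone hits the same thing, the workaround I'm testing, on the guess that the symlink fails because the DiskStation's /bin isn't writable by the git user (the ~/bin location is my own choice, not from the guide):

        # create a bin directory the git user owns, and symlink there instead
        mkdir -p /volume1/homes/git/bin
        /volume1/homes/git/gitolite/install -ln /volume1/homes/git/bin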

    Read the article

  • fcgiwrap listening to a unix socket file: how to change file permissions

    - by user36520
    I have a web server (nginx) and a CGI application (gitweb) that is run with fcgiwrap to enable FastCGI access to it. I want the FastCGI protocol to take place over a unix socket file. To start the fcgiwrap daemon, I run:

        setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    (this is a daemontools daemon). The problem is that my web server runs as the user www-data and not the user git, and fcgiwrap creates the socket fastcgi.sock with user git, group git, and read-only for everyone else. Thus nginx, as the user www-data, can't access the socket. Apparently, fcgiwrap is not able to set the permissions of unix socket files, and this is quite annoying. Moreover, if I manage to have the socket file exist before I run fcgiwrap (which is quite difficult, given that I did not find any shell command to create a socket file), it quits with the following error:

        Failed to bind: Address already in use

    The only solution I found is to start the server the following way:

        rm -f fastcgi.sock # Ensure that the socket doesn't already exist
        (sleep 5; chgrp www-data fastcgi.sock; chmod g+w fastcgi.sock) &
        exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    Which is far from the most elegant solution. Can you think of anything better? Thanks
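    The most promising alternative I've found so far, assuming spawn-fcgi is available or installable on the box, is to let spawn-fcgi create the socket, since it can set the socket's owner, group and mode itself before handing off to fcgiwrap (the fcgiwrap path may differ on your system):

        # -U/-G/-M control the socket's owner, group and permissions;
        # -u/-g control the user and group the wrapped process runs as
        spawn-fcgi -s "$PWD/fastcgi.sock" -M 0660 -U git -G www-data \
                   -u git -g git -- /usr/bin/fcgiwrap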

    Read the article

  • Linux disk usage analyser that acts like symlinks are real files

    - by Rory
    I am using git-annex, an extension to the DVCS git, which is designed for handling large files. It makes heavy use of symlinks: the actual large files are moved to the .git/annex directory and the original files are symlinked to there. I am running out of disk space, need to clean up, and want to see what's using all my space. Usually I'd use a disk usage tool like ncdu, Baobab or Filelight. However, they treat a symlink as essentially empty, and only count the file that it is pointing to as using any space. This means that when I use git-annex, they show no space used in the main directories and lots of space used in the .git/annex directory, which is not helpful. Is there any (graphical or ncurses-based) disk usage programme for Linux (apt-get installable would be easiest) that is capable (through options or otherwise) of counting a symlink as using up the space that the original file uses? Many have options for different behaviour for hard links, so it makes sense that some should have the same for symlinks. (I know counting symlinks as using space has flaws, like counting the same space twice, broken symlinks, etc. But that's OK for my purposes.)
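    A partial workaround I've found in the meantime: plain GNU du can dereference symlinks, at the cost of the double-counting mentioned above.

        # -L follows symlinks, so each link is counted at its target's size;
        # -s summarizes per entry, and sort -rh puts the biggest first
        du -Lsh ./* | sort -rh | head -n 20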

    Read the article

  • The 'which' command returns nothing via cron, but works via console

    - by Zárate
    Hi there, I've written a little utility in haXe + Neko that needs to execute some GIT commands. To avoid hardcoding the path to the GIT executable, I'd like to use the which command to find out where it is. Everything works as expected when running manually from the console, but not when the app runs as a cron job. I'm aware of the restricted environment (here or here) when you run a script using cron, but am still surprised this doesn't work:

        /usr/bin/which git >> /home/user/git.txt

    The text file is created but the content is empty. Again, when run from the console it works as expected. Any ideas? I'm running OS X Leopard, if that helps. Thanks : ) Juan
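    Partly answering my own question after a pointer: cron runs jobs with a minimal PATH (typically /usr/bin:/bin), so which finds no git on it and prints nothing. Setting PATH at the top of the crontab is one fix; the directories below are from my console shell and may differ per machine:

        # crontab -e
        PATH=/usr/local/bin:/usr/local/git/bin:/usr/bin:/bin
        * * * * * /usr/bin/which git >> /home/user/git.txt 2>&1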

    Read the article

  • Using Arch Linux computer as a server for Rack Apps

    - by wxl
    What would be the best way to go about using an Arch Linux computer as a Rack (as in Ruby Rack, not an actual rack server) server? Here's what I want to be able to do:

    - Automatically deploy on a git push to the server. (I already have this worked out: on post-receive, the server checks out the app to /home/git/app from /home/git/app.git.)
    - Run a Rack server application to serve up this app, one that can be restarted on demand (see the sketch below).
    - Run a MongoDB server.
    - Be able to access the app by going to my-server.local/app or something similar. (It's really only going to be used on the local network, no port forwarding or outside use.)

    Any ideas would be greatly appreciated. I apologize if this seems too "do it for me".
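    To make the question concrete, here's the kind of minimal setup I'm picturing; the server choice, port and init command are placeholders, not requirements:

        # serve the checked-out app with plain rackup, daemonized with a pidfile
        cd /home/git/app && rackup --daemonize --pid /tmp/app.pid --port 9292

        # "restarted on demand" = kill via the pidfile and relaunch
        kill "$(cat /tmp/app.pid)"
        cd /home/git/app && rackup -D -P /tmp/app.pid -p 9292

        # mongodb from the Arch package, started the usual way
        rc.d start mongodb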

    Read the article

  • Is it OK to have a different hostname to the FQDN?

    - by Cameron Ball
    I have a server with FQDN git.mydomain.com (this is in DNS) but I don't really want the machine to have git as its hostname. Right now I have the hostname in /etc/hostname set as (for example):

        mycustomhostname

    And in /etc/hosts I have:

        1.2.3.4   git.mydomain.com   mycustomhostname

    (Where 1.2.3.4 is my server's IP.) I've read that the first component of the FQDN should always be the unqualified hostname, so is what I'm doing bad? If so, what is the correct way to do what I want?
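    For context, the hosts(5) convention I've read about puts the canonical (fully qualified) name first, then aliases; so if mycustomhostname is to be the hostname proper, something like the line below might be cleaner, though whether that's correct is part of my question:

        1.2.3.4   mycustomhostname.mydomain.com   mycustomhostname   git.mydomain.com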

    Read the article

  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer. Today’s improvements include: a new low-cost shared mode scaling option; support for custom domains with shared and reserved mode web-sites using both CNAME and A-Records (the latter enabling naked domains); continuous deployment support using both CodePlex and GitHub; and FastCGI extensibility. All of these improvements are now live in production and available to start using immediately.

    New “Shared” Scaling Tier

    Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month). All of the capabilities we introduced in June with this free tier remain the same with today’s update.

    Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as a “reserved instance” option (which we’ve supported since June). Scaling to either of these modes is easy: simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button. Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed.

    Below are some more details on the new “shared” option, as well as the existing “reserved” option.

    Shared Mode

    With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites. A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment. Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve. The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB.

    A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it. The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com). We will also in the future enable SNI based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers).

    You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled). A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month).

    Reserved Instance Mode

    In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode. When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it). You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits.

    You can run your sites using either a single reserved instance VM, or scale up to multiple instances of them (e.g. 2 medium sized VMs, etc). Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want and the number of instances of it you want to run, and then click save. Changes take effect in seconds.

    Unlike shared mode, there is no per-site cost when running in reserved mode. Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost). Reserved instance VMs start at 8 cents/hr for a small reserved VM.

    Elastic Scale-up/down

    Windows Azure Web Sites allows you to scale up or down your capacity within seconds. This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application.

    If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings. You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage).

    Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour. So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode. There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate. This makes it super flexible and cost effective.

    Improved Custom Domain Support

    Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them (e.g. www.mysitename.com). You can associate multiple custom domains to each Windows Azure Web Site. With today’s release we are introducing support for A-Records (a big ask by many users).

    With the A-Record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning instead of having to use www.mysitename.com you can instead just have mysitename.com (with no sub-name prefix). Because you can map multiple domains to a single site, you can optionally enable both a www and a naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems).

    We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release: clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them.

    As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime.

    Continuous Deployment Support with Git and CodePlex or GitHub

    One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git. This provides a really powerful way to manage your application deployments using source control, and it is really easy to enable from a website’s dashboard page.

    The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then, if they are successful, automatically publish to Azure.

    With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub. This support is enabled with all web-sites (including those using the “free” scaling mode).

    Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page, you’ll see two additional options show up when Git based publishing is enabled for the web-site. You can click on either the “Deploy from my CodePlex project” link or the “Deploy from my GitHub project” link to walk through a simple workflow that configures a connection between your website and a source repository you host on CodePlex or GitHub. Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs. This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically. The two videos below walk through how easy it is to enable this workflow and deploy both an initial app and then a change to it:

    - Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes)
    - Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes)

    This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git. Note: today’s release supports establishing connections with public GitHub/CodePlex repositories. Support for private repositories will be enabled in a few weeks.

    Support for multiple branches

    Previously, we only supported deploying from the git ‘master’ branch. Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub. This enables a variety of useful scenarios. For example, you can now have two web-sites – a “live” and a “staging” version – both linked to the same repository on CodePlex or GitHub. You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch. This enables a really clean way to do final testing of your site before it goes live. This 1 minute video demonstrates how to configure which branch to use with a web-site.

    Summary

    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it.

    We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month). Keep an eye out on my blog for details as these new features become available.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Version control for game development - issues and solutions?

    - by Cyclops
    There are a lot of Version Control systems available, including open-source ones such as Subversion, Git, and Mercurial, plus commercial ones such as Perforce. How well do they support the process of game-development? What are the issues using VCS, with regard to non-text files (binary files), large projects, etc? What are solutions to these problems, if any? For organization of Answers, let's try on a per-package basis. Update each package/Answer with your results. Also, please list some brief details in your answer, about whether your VCS is free or commercial, distributed versus centralized, etc. Update: Found a nice article comparing two of the VCS below - apparently, Git is MacGyver and Mercurial is Bond. Well, I'm glad that's settled... And the author has a nice quote at the end: It’s OK to proselytize to those who have not switched to a distributed VCS yet, but trying to convert a Git user to Mercurial (or vice-versa) is a waste of everyone’s time and energy. Especially since Git and Mercurial's real enemy is Subversion. Dang, it's a code-eat-code world out there in FOSS-land...

    Read the article

  • Does (should?) changing the URI scheme name change the semantics?

    - by Doug
    If we take: http://example.com/foo is it fair to say that: ftp://example.com/foo .. points to the same resource, just using a different mechanism for resolving it (and of course possibly a different representation, but perhaps not)? This came to light in a discussion we were having surrounding some internal tooling with Git. We have to process some Git repositories, and they come to us as "git@{authority}/{path}"; however, the library we're using to interface with them doesn't support the git protocol. I suggested that we should make the service robust, in that it tries to use HTTP or SSH; in essence, discovering which protocols/schemes are supported for resolving the repository at {path} under each {authority}. This was met with some criticism: "We don't know if that's the same repository". My response was: "It had better be!" Looking at RFC 3986, I see this excerpt:

        URI "resolution" is the process of determining an access mechanism and the appropriate parameters necessary to dereference a URI; this resolution may require several iterations. To use that access mechanism to perform an action on the URI's resource is to "dereference" the URI.

    Which makes me think that the resolution process is permitted to try different protocols, because:

        Although many URI schemes are named after protocols, this does not imply that use of these URIs will result in access to the resource via the named protocol.

    The only concern I have, I guess, is that I only see reference to the notion of changing protocols when it comes to traversing relationships:

        it is possible for a single set of hypertext documents to be simultaneously accessible and traversable via each of the "file", "http", and "ftp" schemes if the documents refer to each other with relative references.

    I'm inclined to think I'm wrong in my initial beliefs, because the Normalization and Comparison section of said RFC doesn't mention any way of treating two URIs as equivalent if they use different schemes. It seems like schemes named/based on IP protocols ought to have this notion, at least?

    Read the article

  • VCS for single user using file sync service

    - by StackUnder
    I'm trying to set up version control for my one-man project. My project files are kept in sync thanks to Live Mesh (but I could be using Dropbox for that matter) between my laptop, my home PC and my office PC. I'm now using Netbeans with local file history. Sometimes it helps to revert to a previous state of one file. But imagine a situation where multiple files have problems. Correct me if I'm wrong, but I would have to go to every file and revert to a previous "safe" state. I don't like this approach, so I'm considering using a version control system, either SVN or GIT. I have some previous experience with SVN (TortoiseSVN) and I know that I can create a file:// repo. So, what I want to do is set up a VCS inside my synced folder, just to have the ability to "revert" to a previous version if something goes wrong. Since everything's been synced to all computers, I wouldn't ever need to run an update. The file tree organization would be the following: C:...\SyncedFolder\MyProject\ Inside the MyProject folder are all the project files, plus a directory that has the SVN or GIT info of my project (the repo/master). What VCS is best for this situation: SVN or GIT? Does SVN need to store all files from the HEAD revision, thus "duplicating" all my project inside my synced folder? Does GIT eliminate this problem? Is this the best approach?
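    For what it's worth, the GIT variant of what I'm describing would be nothing more than this; no server involved, the repo lives inside the synced folder itself:

        cd C:...\SyncedFolder\MyProject\     # path elided as above
        git init
        git add .
        git commit -m "baseline before syncing experiments"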

    Read the article

  • Unable to use factory girl with Cucumber and rails 3 (bundler problem)

    - by jbpros
    Hi there, I'm trying to run cucumber features with factory girl factories on a fresh Rails 3 application. Here is my Gemfile:

        source "http://gemcutter.org"

        gem "rails", "3.0.0.beta"
        gem "pg"
        gem "factory_girl", :git => "git://github.com/thoughtbot/factory_girl.git", :branch => "rails3"
        gem "rspec-rails", ">= 2.0.0.beta.4"
        gem "capybara"
        gem "database_cleaner"
        gem "cucumber-rails", :require => false

    The bundle install command then runs smoothly:

        $ bundle install
        /usr/lib/ruby/gems/1.8/gems/bundler-0.9.3/lib/bundler/installer.rb:81:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        Updating git://github.com/thoughtbot/factory_girl.git
        Fetching source index from http://gemcutter.org
        Resolving dependencies
        Installing abstract (1.0.0) from system gems
        Installing actionmailer (3.0.0.beta) from system gems
        Installing actionpack (3.0.0.beta) from system gems
        Installing activemodel (3.0.0.beta) from system gems
        Installing activerecord (3.0.0.beta) from system gems
        Installing activeresource (3.0.0.beta) from system gems
        Installing activesupport (3.0.0.beta) from system gems
        Installing arel (0.2.1) from system gems
        Installing builder (2.1.2) from system gems
        Installing bundler (0.9.13) from system gems
        Installing capybara (0.3.6) from system gems
        Installing cucumber (0.6.3) from system gems
        Installing cucumber-rails (0.3.0) from system gems
        Installing culerity (0.2.9) from system gems
        Installing database_cleaner (0.5.0) from system gems
        Installing diff-lcs (1.1.2) from system gems
        Installing erubis (2.6.5) from system gems
        Installing factory_girl (1.2.3) from git://github.com/thoughtbot/factory_girl.git (at rails3)
        Installing ffi (0.6.3) from system gems
        Installing i18n (0.3.6) from system gems
        Installing json_pure (1.2.3) from system gems
        Installing mail (2.1.3) from system gems
        Installing memcache-client (1.7.8) from system gems
        Installing mime-types (1.16) from system gems
        Installing nokogiri (1.4.1) from system gems
        Installing pg (0.9.0) from system gems
        Installing polyglot (0.3.0) from system gems
        Installing rack (1.1.0) from system gems
        Installing rack-mount (0.4.7) from system gems
        Installing rack-test (0.5.3) from system gems
        Installing rails (3.0.0.beta) from system gems
        Installing railties (3.0.0.beta) from system gems
        Installing rake (0.8.7) from system gems
        Installing rspec (2.0.0.beta.4) from system gems
        Installing rspec-core (2.0.0.beta.4) from system gems
        Installing rspec-expectations (2.0.0.beta.4) from system gems
        Installing rspec-mocks (2.0.0.beta.4) from system gems
        Installing rspec-rails (2.0.0.beta.4) from system gems
        Installing selenium-webdriver (0.0.17) from system gems
        Installing term-ansicolor (1.0.5) from system gems
        Installing text-format (1.0.0) from system gems
        Installing text-hyphen (1.0.0) from system gems
        Installing thor (0.13.4) from system gems
        Installing treetop (1.4.4) from system gems
        Installing tzinfo (0.3.17) from system gems
        Installing webrat (0.7.0) from system gems
        Your bundle is complete!

    When I run cucumber, here is the error I get:

        $ rake cucumber
        (in /home/jbpros/projects/deorbitburn)
        /usr/lib/ruby/gems/1.8/gems/bundler-0.9.3/lib/bundler/resolver.rb:97:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        NOTICE: CREATE TABLE will create implicit sequence "posts_id_seq" for serial column "posts.id"
        NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "posts_pkey" for table "posts"
        /usr/bin/ruby1.8 -I "/usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/lib:lib" "/usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/cucumber" --profile default
        Using the default profile...
        git://github.com/thoughtbot/factory_girl.git (at rails3) is not checked out. Please run `bundle install` (Bundler::PathError)
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/source.rb:282:in `load_spec_files'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/source.rb:190:in `local_specs'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:36:in `runtime_gems'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:35:in `each'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:35:in `runtime_gems'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/index.rb:5:in `build'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:34:in `runtime_gems'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:14:in `index'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/index.rb:5:in `build'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:13:in `index'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:55:in `resolve_locally'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:28:in `specs'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:65:in `specs_for'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:23:in `requested_specs'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/runtime.rb:18:in `setup'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler.rb:68:in `setup'
        /home/jbpros/projects/deorbitburn/config/boot.rb:7
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require'
        /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/jbpros/projects/deorbitburn/config/application.rb:1
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require'
        /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/jbpros/projects/deorbitburn/config/environment.rb:2
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require'
        /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/jbpros/projects/deorbitburn/features/support/env.rb:8
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require'
        /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:85:in `load_code_file'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:77:in `load_code_files'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:76:in `each'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:76:in `load_code_files'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/cli/main.rb:48:in `execute!'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/cli/main.rb:20:in `execute'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/cucumber:8
        rake aborted!
        Command failed with status (1): [/usr/bin/ruby1.8 -I "/usr/lib/ruby/gems/1....]
        (See full trace by running task with --trace)

    Do I have to do something special for bundler to check out factory girl's repository on github?

    Read the article

  • Is there a major downside to using .htaccess files in your svn/git repository?

    - by Rob
    If our .htaccess files are purely for mod rewrites, is there a security / development downside to committing .htaccess files alongside other files in your repository? For various reasons (our SEO optimisers like to add pretty urls as new promotions occur, etc) we need a fair few rewrite rules inside these files. Would I be better off pushing the routing into php-land and dealing with it there? Or is reading from a .htaccess via apache fine? The .htaccess files are not exposed via the web server, so that's not a security risk.

    Read the article
