Search Results

Search found 39784 results on 1592 pages for 'ignore files'.

Page 446/1592 | < Previous Page | 442 443 444 445 446 447 448 449 450 451 452 453  | Next Page >

  • Version Control: multiple version hell, file synchronization

    - by SigTerm
    Hello. I would like to know how you normally deal with this situation: I have a set of utility functions, say 5..10 files. Technically they form a static library, cross-platform - SConscript/SConstruct plus a Visual Studio project (not a solution). Those utility functions are used in multiple small projects (15+, and the number increases over time). Each project has a copy of a few files or of the entire library, not a link to one central place. Some projects use one file, some two, some use everything. Normally, the utility functions are included as a copy of every file plus the SConscript/SConstruct or Visual Studio project (depending on the situation). Each project has a separate git repository. Sometimes one project is derived from another, sometimes it isn't. You work on every one of them, in random order. There are no other people (to make things simpler). The problem arises when, while working on one project, you modify those utility function files. Because each project has its own copy of each file, this introduces a new version, which leads to a mess when you try later (a week later, for example) to guess which version has the most complete functionality (i.e. you added a function to a.cpp in one project, and added another function to a.cpp in another project, which created a version fork). How would you handle this situation to avoid "version hell"? One way I can think of is using symbolic links/hard links, but it isn't perfect - if you delete the one central storage, it will all go to hell. And hard links won't work on a dual-boot system (although symbolic links will). It looks like what I need is something like an advanced git repository, where code for the project is stored in one local repository but is synchronized with multiple external repositories. But I'm not sure how to do it, or if it is even possible with git. So, what do you think?

    Read the article

  • Reoccurring error "The current identity (NT AUTHORITY\NETWORK SERVICE) does not have write access to

    - by tuseau
    Hi, I keep receiving this error in my ASP.NET web app (below). I give the Network Service account rights to the specified folder and it runs fine for a while, but then within a day or two the error recurs, as the Network Service account has been removed from the rights for the folder. Adding it again fixes it, but why does it keep recurring? Could it be anything to do with using Interop components (such as WMI)? Here's the full error:

        Server Error in '/DriveMonitor' Application.
        The current identity (NT AUTHORITY\NETWORK SERVICE) does not have write access to 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Web.HttpException: The current identity (NT AUTHORITY\NETWORK SERVICE) does not have write access to 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files'.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [HttpException (0x80004005): The current identity (NT AUTHORITY\NETWORK SERVICE) does not have write access to 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files'.]
           System.Web.HttpRuntime.SetUpCodegenDirectory(CompilationSection compilationSection) +8918190
           System.Web.HttpRuntime.HostingInit(HostingEnvironmentFlags hostingFlags) +152
        [HttpException (0x80004005): The current identity (NT AUTHORITY\NETWORK SERVICE) does not have write access to 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files'.]
           System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +8890735
           System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +85
           System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) +259

    Read the article

  • How can I reorder an mbox file chronologically?

    - by Joshxtothe4
    Hello, I have a single spool mbox file that was created with Evolution, containing a selection of emails that I wish to print. My problem is that the emails are not placed into the mbox file chronologically. I would like to know the best way to order the messages from first to last using bash, perl or python. I would like to order by received date for messages addressed to me, and by sent date for messages sent by me. Would it perhaps be easier to use maildir files or such? The emails currently exist in the format:

        From [email protected] Fri Aug 12 09:34:09 2005
        Message-ID: <[email protected]>
        Date: Fri, 12 Aug 2005 09:34:09 +0900
        From: me <[email protected]>
        User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716)
        X-Accept-Language: en-us, en
        MIME-Version: 1.0
        To: someone <[email protected]>
        Subject: Re: (no subject)
        References: <[email protected]>
        In-Reply-To: <[email protected]>
        Content-Type: text/plain; charset=ISO-8859-1; format=flowed
        Content-Transfer-Encoding: 8bit
        Status: RO
        X-Status:
        X-Keywords:
        X-UID: 371
        X-Evolution-Source: imap://[email protected]/
        X-Evolution: 00000002-0010

        Hey the actual content of the email

        someone wrote:
        > lines of quoted text

    I am wondering if there is a way to use this information to easily reorganize the file, perhaps with perl or such.
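
    One minimal sketch in Python, using the standard-library mailbox module: sort by the Date header (the sent time), which parsedate_tz/mktime_tz turn into comparable epoch seconds. The file names are hypothetical, and if you want the received time instead, the timestamp on the mbox "From " separator line (msg.get_from()) is one place to get it.

        import mailbox
        from email.utils import mktime_tz, parsedate_tz

        def sort_key(msg):
            # Fall back to 0 when a Date header is missing or malformed.
            parsed = parsedate_tz(msg.get('Date', ''))
            return mktime_tz(parsed) if parsed else 0

        inbox = mailbox.mbox('input.mbox')      # path is hypothetical
        ordered = sorted(inbox, key=sort_key)

        outbox = mailbox.mbox('sorted.mbox')    # created if it doesn't exist
        for msg in ordered:
            outbox.add(msg)
        outbox.flush()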

    Read the article

  • Eclipse - Import existing multi-rep CVS project folder

    - by iQ
    Hey guys, Wondering if anyone can help me out with Eclipse in terms of importing an existing CVS-managed project. I am currently trying to shift my work onto the Eclipse IDE. Some details about my project and environment: I'm working in Linux (Ubuntu), the project folder is located on a mounted shared network drive, and I have installed the "Eclipse CVS Client" plug-in for my version of Eclipse (Helios). I've tried many ways to get Eclipse to use my existing folder as a project and recognize the CVS data in the CVS folders. I have tried the following: Created a new project, selected existing source, located my project folder and clicked OK to finish creating; in the end the CVS files weren't automatically read. Did the same as above, and after project creation went to "Project menu > Team > Share project"; it asks me to choose a repository and doesn't automatically find the CVS information in the subfolders. If you're wondering, I have set up both repositories in my Eclipse and can browse them through the CVS browser. My project directory layout is like this:

        + Project Folder (no CVS folder at this level)
        +--- Repo A folder
        +----- CVS meta-info folder is INSIDE, along with all checked-out files from Repo A
        +--- Repo B folder
        +----- CVS meta-info folder is INSIDE, along with all checked-out files from Repo B
        +- (a couple of random files, not in CVS)

    Thanks for the help

    Read the article

  • How to detect changing directory size in Perl

    - by materiamage
    Hello, I am trying to find a way of monitoring directories in Perl - in particular the size of a directory - and, upon detecting a change in directory size, performing a particular action. The issue I have is with large files that require a noticeable amount of time to copy into this directory, e.g. 100MB. What happens (in Windows, not Unix) is that the system reserves enough disk space for the entire file even though the copy is still in progress. This causes problems for me, because my script will try to perform an action on a file that has not finished copying over. I can easily detect directory size changes in Unix via 'du', but 'du' in Windows does not behave the same way. Are there any accurate methods of detecting directory size changes in Perl? Edit: some points to clarify:

        - My Perl script is only monitoring a particular directory and, upon detecting a new file or a new directory, performs an action on it. It is not copying any files; users on the network will be copying files into the directory I am monitoring.
        - The problem occurs when a new file or directory appears (copied, not moved) that is significantly large (>100MB, but usually a couple of GB) and my program fires before this copy completes.
        - In Unix I can easily 'du' to see that the file/directory in question is growing in size, and take the appropriate action.
        - In Windows the size is static, so I cannot detect this change.
        - opendir/readdir/closedir is not feasible, as some of the directories that appear may contain thousands of files, and I want to avoid that overhead.

    Ideally I would like my program to be triggered on change, but I am not sure how to do this. As of right now it busy-waits until it detects a change. The change in file/directory size is not in my control.
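
    A heuristic worth considering (sketched in Python for brevity; the same open/stat calls exist in Perl, and for the trigger-on-change part the Win32 change-notification APIs, e.g. Win32::ChangeNotify, cover the busy-wait): most Windows copy tools open the destination file without sharing, so an attempt to open it fails until the copy finishes. Whether that holds depends on how the writer opened the file, hence the extra size-settling check.

        import os, time

        def copy_complete(path, settle_seconds=5):
            """Heuristic only: a destination file held open by a copier
            typically cannot be opened here until the copy is done."""
            try:
                with open(path, 'rb'):
                    pass
            except (IOError, OSError):
                return False
            # Belt and braces: also require the size to hold still briefly.
            before = os.path.getsize(path)
            time.sleep(settle_seconds)
            return os.path.getsize(path) == before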

    Read the article

  • Archiver Securing SQLite Data without using Encryption on iPhone

    - by Redrocks
    I'm developing an iPhone app that uses Core Data with a SQLite data store and lots of images in the resource bundle. I want a "simple" way to obfuscate the file structure of the SQLite database and the image files, to prevent the casual hacker/unscrupulous developer from gaining access to them. When the app is deployed, the database file and image files would be obfuscated. Upon launching, the app would read in and un-obfuscate the database file, write the un-obfuscated version to the user's "tmp" directory for use by Core Data, and read/un-obfuscate image files as needed. I'd like to apply a simple algorithm to the files that would somehow scramble/manipulate the file data so that the SQLite data isn't discernible when the db is opened in a text editor, and so that neither is recognized by other applications (SQLite Manager, Photoshop, etc.). It seems, from the information I've read, that I could use NSFileManager, NSKeyedArchiver, and NSData to accomplish this, but I'm not sure how to proceed. I've been developing software for many years, but I'm new to everything CocoaTouch, Mac and iPhone, and I've never had to secure/encrypt my data, so this is new. Any thoughts, suggestions, or links to solutions are appreciated.
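
    The simplest scramble that meets the "not discernible in a text editor" bar is a repeating-key XOR. To be clear, this is obfuscation, not encryption - it deters casual inspection only. A minimal sketch (in Python to keep it short; the same byte-level loop maps directly onto NSData's bytes on CocoaTouch; paths and key are hypothetical):

        def xor_bytes(data, key):
            # Symmetric: the same call scrambles and unscrambles.
            return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

        with open('store.sqlite', 'rb') as f:
            scrambled = xor_bytes(f.read(), b'not-a-secret')
        with open('store.obf', 'wb') as f:
            f.write(scrambled)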

    Read the article

  • Qt QSslError being signaled with the error code set to NoError

    - by Nantucket
    My Problem: I compiled OpenSSL into Qt to enable OpenSSL support. Everything appeared to go correctly in the compile. However, when I try to use the official HTTP example application that can be found here, every time I try to download an https page it signals two QSslErrors, each with contents NoError. The types of QSslError, including NoError, are documented here, poorly. There is no explanation of why they even included an error type called NoError, or what it means. Bizarrely, the NoError error code seems to be benign, as the remote https document downloads perfectly even while the error is being signaled. Does anyone have any idea what this means and what could possibly be causing it? Optional Background Reading: Here is the relevant part of the code from the example app (this is connected to the network connection's sslErrors signal by the constructor):

        void HttpWindow::sslErrors(QNetworkReply*, const QList<QSslError> &errors)
        {
            QString errorString;
            foreach (const QSslError &error, errors) {
                if (!errorString.isEmpty())
                    errorString += ", ";
                errorString += error.errorString();
            }
            if (QMessageBox::warning(this, tr("HTTP"),
                                     tr("One or more SSL errors has occurred: %1").arg(errorString),
                                     QMessageBox::Ignore | QMessageBox::Abort) == QMessageBox::Ignore) {
                reply->ignoreSslErrors();
            }
        }

    I have tried the old version of this example, and it produced the same result. I have tried OpenSSL 1.0.0a and 0.9.8o. I have tried compiling OpenSSL myself, and I have tried using pre-compiled versions of OpenSSL from the net. All produce the same result. If this were my first time using Qt with SSL, I would almost think this is the intended result (even though their example application is popping up error warning message windows), if not for the fact that the last time I played with Qt, using what would now be an old version of Qt with an old version of SSL, I distinctly remember everything working fine with no error windows. My system is running Windows 7 x64.

    Read the article

  • using a database and deploying the application

    - by evan
    I have a WPF application that stores a large amount of information in XML files, and as users use the application they add more information to the XML files. It's basically using the XML files as a database. Since over the life of the program the XML files have gotten quite large, and I've been thinking about putting the data on a website, I've been looking into how to move all the information into an SQL database. I've used SQL databases with web applications (PHP, Ruby, and ASP.NET) but never with a desktop application. Ideally I'd like to be able to keep all the information in one database file and distribute it along with the application, without requiring the user to connect to a remote database (so they don't need an internet connection - though eventually it would be nice to compare the local file's version with one online somewhere and update if necessary) and without making them install a local database server on their computer. Is this possible? I'd also like to use LINQ with any new database solution, so switching to a database doesn't force too many changes (I read the XML with LINQ). I'm sure this question has been asked and that there are already some good tutorials on the subject, but I just can't find them.
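
    Embedded databases are built for exactly this: SQLite and SQL Server Compact both live in a single ordinary file that ships with the application, with no server process and no install step (on .NET, providers such as System.Data.SQLite expose them to LINQ). A sketch of the single-file model, shown in Python only because it makes the point in a few lines; the filename is hypothetical:

        import sqlite3

        # One ordinary file on disk, no server process, no install step.
        conn = sqlite3.connect('app_data.db')
        conn.execute('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)')
        conn.execute('INSERT INTO notes (body) VALUES (?)', ('hello',))
        conn.commit()
        print(conn.execute('SELECT COUNT(*) FROM notes').fetchone()[0])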

    Read the article

  • SQL Server 2005 Import from Excel

    - by user327045
    I'd like to know what my best option would be to import data from an Excel file on a weekly or monthly basis. At first I thought I would use SSIS, but after much struggle with seemingly simple tasks, I'm starting to rethink my plan. Would it be better/easier to just write the SQL by hand, or use the services of an SSIS package? The basic process will be as follows: A separate process downloads an .xls file to a local file share. The xls file has a filename like 'myfilename MON YY'. I need to read the month and year from the filename, reformat it to a SQL date, and then query a DimDate table to find the corresponding date key. For each row (after the first 2 header rows), insert the data with that date key, unless the row is a total row, in which case ignore it. Here are some of the issues I've been encountering with SSIS: I can parse the date string from a flat file data source, but can't seem to do it with an Excel data source. Also, once parsed, I cannot seem to convert the string to a date in order to perform the lookup for the date key. For example, I want to do something like this:

        select DateKey from DimDate where ActualDate = convert(datetime, '01-' + 'JAN-10', 120)

    but I don't think it is possible to use the 'convert' or 'datetime' keywords in an expression builder. I have also been unable to find where I can edit the SQL to ignore the first 2 rows of data. I'm very skeptical of using SSIS because it seems like a kludgy way of doing something that can probably be accomplished more efficiently by writing the SQL yourself, but I may be forced to use SSIS. Thoughts?
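
    One way around the expression-builder limits is to do the filename parse outside the data flow (in SSIS that would be a Script Task, or a small pre-processing step). A sketch of the parse itself, assuming the 'myfilename MON YY' pattern from the post; the example filename is hypothetical:

        import re
        from datetime import datetime

        fname = 'myfilename JAN 10.xls'   # hypothetical example
        match = re.search(r'([A-Za-z]{3}) (\d{2})\.xls$', fname)  # None if the name doesn't fit
        mon, yy = match.groups()
        actual_date = datetime.strptime('01 %s %s' % (mon, yy), '%d %b %y')
        print(actual_date.strftime('%Y-%m-%d'))   # -> 2010-01-01, ready for the DimDate lookup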

    Read the article

  • CEIL is one too high for exact integer divisions

    - by Synetech
    This morning I lost a bunch of files, but because the volume they were on was both internally and externally defragmented, all of the information necessary for a 100% recovery is available; I just need to fill in the FAT where required. I wrote a program to do this and tested it on a copy of the FAT that I dumped to a file, and it works perfectly except that for a few of the files (17 out of 526), the FAT chain is one single cluster too long, and thus cross-linked with the next file. Fortunately I know exactly what the problem is. I used ceil in my EOF calculation because even a single byte over will require a whole extra cluster:

        //Cluster is the starting cluster of the file
        //Size is the size (in bytes) of the file
        //BPC is the number of bytes per cluster
        //NumClust is the number of clusters in the file
        //EOF is the last cluster of the file's FAT chain
        DWORD NumClust = ceil( (float)(Size / BPC) );
        DWORD EOF = Cluster + NumClust;

    This algorithm works fine for everything except files whose size happens to be exactly a multiple of the cluster size, in which case they end up one cluster too long. I thought about it for a while but am at a loss as to a way to do this. It seems like it should be simple, but somehow it is surprisingly tricky. What formula would work for files of any size?
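
    If I'm reading the snippet right, two things interact here: `Size / BPC` is integer division, so it truncates before the cast to float and the ceil never sees a fraction; and the chain's last cluster is start + count - 1, not start + count. The accidental truncation compensates for the missing "- 1" on every file except exact multiples, which is exactly the observed symptom. Integer ceiling division fixes both at once, with no floats. A sketch in Python; the same expressions work verbatim with DWORDs:

        def last_cluster(cluster, size, bpc):
            # Ceiling division without floats: (size + bpc - 1) // bpc.
            # Assumes even a zero-byte file still occupies one cluster.
            num_clust = max(1, (size + bpc - 1) // bpc)
            # The last cluster of the chain is start + count - 1.
            return cluster + num_clust - 1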

    Read the article

  • Unit Testing a rails 2.3.5 plugin

    - by brad
    I'm writing a new plugin for a Rails 2.3.5 app. I've included an app directory (which makes it an engine) so I can easily load some extra routes. Not sure if that affects anything. Anyway, in the test directory I have two files: test_helper.rb and my_plugin_test.rb. These files were generated automatically using script/generate plugin my_plugin. When I go to the vendor/plugins/my_plugin directory and run rake test, they don't seem to run. I get the following console output:

        (in /Users/me/Repos/my_app/source/trunk/vendor/plugins/my_plugin)
        /Users/me/.rvm/rubies/jruby-1.4.0/bin/jruby -I"lib:lib:test" "/Users/me/.rvm/gems/jruby-1.4.0/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/my_plugin_test.rb"

    So it obviously sees my test file, but none of the tests inside get run; I just get back to my console prompt. What am I missing here? I figured the generated code would work out of the box. Here are the two files:

    test_helper.rb:

        require 'rubygems'
        require 'active_support'
        require 'active_support/test_case'

    my_plugin_test.rb:

        require 'test_helper'

        class MyPluginTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end

          test "Factories are supported" do
            assert_not_nil Factory
          end
        end

    File structure:

        vendor
        - plugins
          - my_plugin
            - app
            - config
              - routes.rb
            - generators
              - my_plugin
                - some generator files.rb
            - lib
              - my_plugin.rb
              - my_plugin
                - my_plugin_lib_file.rb
            - rails
              - init.rb
            - Rakefile
            - tasks
              - my_plugin_tasks.rake
            - test
              - test_helper.rb
              - my_plugin_test.rb

    Read the article

  • Query broke down and left me stranded in the woods

    - by user1290323
    I am trying to execute a query that deletes all files from the images table that do not exist in the filter table. I am skipping the 3,500 latest files in the database, so as to "trim" the table back to 3,500 + "X" amount of records in the filter table. The filter table holds markers for the file, as well as the file id used in the images table. The code will run on a cron job. My code:

        $sql = mysql_query("SELECT * FROM `images` ORDER BY `id` DESC") or die(mysql_error());
        while($row = mysql_fetch_array($sql)){
            $id = $row['id'];
            $file = $row['url'];
            $getId = mysql_query("SELECT `id` FROM `filter` WHERE `img_id` = '".$id."'") or die(mysql_error());
            if(mysql_num_rows($getId) == 0){
                $IdQue[] = $id;
                $FileQue[] = $file;
            }
        }
        for($i=3500; $i<$x; $i++){
            mysql_query("DELETE FROM `images` WHERE id='".$IdQue[$i]."' LIMIT 1") or die("line 18".mysql_error());
            unlink($FileQue[$i]) or die("file Not deleted");
        }
        echo ($i-3500)." files deleted.";

    Output: 0 files deleted. Database contents: images table: 10,000 rows; filter table: 63 rows. Amount of rows in the filter table that contain an images table id: 63. Execution time of the PHP script: 4 seconds +/- 0.5 second. Relevant DB structure:

        TABLE: images
            id
            url
            etc...

        TABLE: filter
            id
            img_id (CONTAINS ID FROM images table)
            etc...
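
    Two observations, offered tentatively. First, `$x` is never assigned, so the `for` condition is false on the first pass and the loop body never runs - consistent with the "0 files deleted." output. Second, the row-at-a-time scan can be collapsed into one set-based statement; the derived-table wrapper below is the usual workaround for MySQL's restriction on selecting from the table you are deleting from. A sketch using the table and column names from the post (driver and credentials are hypothetical; the on-disk unlink would still need a prior SELECT of the url column):

        import MySQLdb                        # assumes the MySQLdb driver

        conn = MySQLdb.connect(db='mydb')     # hypothetical credentials
        cur = conn.cursor()
        # Keep the newest 3500 images; delete older ones with no filter row.
        cur.execute("""
            DELETE img FROM images AS img
            LEFT JOIN filter AS f ON f.img_id = img.id
            WHERE f.id IS NULL
              AND img.id NOT IN (
                  SELECT id FROM (
                      SELECT id FROM images ORDER BY id DESC LIMIT 3500
                  ) AS newest)
        """)
        conn.commit()
        print('%d files deleted.' % cur.rowcount)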

    Read the article

  • Why is F# member not found when used in subclass

    - by James Black
    I have a base type that I want to inherit from for all my DAO objects, but this member gets the error further down about not being defined:

        type BaseDAO() =
            member v.ExecNonQuery2(conn)(sqlStr) =
                let comm = new MySqlCommand(sqlStr, conn, CommandTimeout = 10)
                comm.ExecuteNonQuery |> ignore
                comm.Dispose |> ignore

    I inherit in this type:

        type CreateDatabase() =
            inherit BaseDAO()
            member private self.createDatabase(conn) =
                self.ExecNonQuery2 conn "DROP DATABASE IF EXISTS restaurant"

    This is what I see when my script runs in the interactive shell:

        --> Referenced 'C:\Program Files\MySQL\MySQL Connector Net 6.2.3\Assemblies\MySql.Data.dll'
        [Loading C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\BaseDAO.fs]
        namespace FSI_0106.RestaurantServiceDAO
          type BaseDAO =
            class
              new : unit -> BaseDAO
              member ExecNonQuery2 : conn:MySql.Data.MySqlClient.MySqlConnection -> sqlStr:string -> unit
              member execNonQuery : sqlStr:string -> unit
              member execQuery : sqlStr:string * selectFunc:(MySql.Data.MySqlClient.MySqlDataReader -> 'a list) -> 'a list
              member f : x:obj -> string
              member Conn : MySql.Data.MySqlClient.MySqlConnection
            end
        [Loading C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\CreateDatabase.fs]
        C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\CreateDatabase.fs(56,14): error FS0039: The field, constructor or member 'ExecNonQuery2' is not defined

    I am curious what I am doing wrong. I have tried not inheriting and just instantiating the BaseDAO type in the function, but I get the same error. I started down this path because I had a property with the same error, so it seems there may be a problem with how I am defining my BaseDAO type - but it compiles with no error, which further confuses me.

    Read the article

  • GAE Simple Request Handler only runs once

    - by Hiro
    Good day! https://developers.google.com/appengine/docs/python/gettingstarted/helloworld - this is the hello world that I'm trying to run. I can see the "Hello, world! Status: 500" message; however, it turns into an "HTTP Error 500" after I hit refresh. And it seems that App Engine only shows me the good result once, right after I re-save either app.yaml or helloworld.py. This is the trace for the good result:

        Traceback (most recent call last):
          File "C:\Program Files\Google\google_appengine\google\appengine\runtime\wsgi.py", line 187, in Handle
            handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
          File "C:\Program Files\Google\google_appengine\google\appengine\runtime\wsgi.py", line 239, in _LoadHandler
            raise ImportError('%s has no attribute %s' % (handler, name))
        ImportError: <module 'helloworld' from 'D:\work\[GAE] tests\helloworld\helloworld.pyc'> has no attribute app
        INFO 2012-06-23 01:47:28,522 dev_appserver.py:2891] "GET /hello HTTP/1.1" 200 -
        ERROR 2012-06-23 01:47:30,040 wsgi.py:189]

    and this is the trace for the Error 500:

        Traceback (most recent call last):
          File "C:\Program Files\Google\google_appengine\google\appengine\runtime\wsgi.py", line 187, in Handle
            handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
          File "C:\Program Files\Google\google_appengine\google\appengine\runtime\wsgi.py", line 239, in _LoadHandler
            raise ImportError('%s has no attribute %s' % (handler, name))
        ImportError: <module 'helloworld' from 'D:\work\[GAE] tests\helloworld\helloworld.pyc'> has no attribute app
        INFO 2012-06-23 01:47:30,127 dev_appserver.py:2891] "GET /hello HTTP/1.1" 500 -

    Here's my helloworld.py:

        print 'Content-Type: text/plain'
        print ''
        print 'Hello, world!'

    my main.py (app is used instead of application):

        import webapp2

        class hello(webapp2.RequestHandler):
            def get(self):
                self.response.out.write('normal hello')

        app = webapp2.WSGIApplication([
            ('/', hello),
        ], debug = True)

    and the app.yaml:

        application: helloworld
        version: 1
        runtime: python27
        api_version: 1
        threadsafe: true

        handlers:
        - url: /favicon\.ico
          static_files: favicon.ico
          upload: favicon\.ico
        - url: /hello
          script: helloworld.app
        - url: /.*
          script: main.app

        libraries:
        - name: webapp2
          version: "2.5.1"

    Any clue what's causing this? Regards,
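
    The ImportError spells out the likely cause: app.yaml maps /hello to helloworld.app, but helloworld.py is a CGI-style print script with no module-level WSGI application named app, which the python27/threadsafe runtime requires (the one-good-response-after-saving behavior smells like a stale .pyc being served before the reload). A sketch of helloworld.py rewritten in the same style as the working main.py:

        import webapp2

        class HelloWorld(webapp2.RequestHandler):
            def get(self):
                self.response.headers['Content-Type'] = 'text/plain'
                self.response.out.write('Hello, world!')

        # The attribute name must match the "script: helloworld.app" handler.
        app = webapp2.WSGIApplication([
            ('/hello', HelloWorld),
        ], debug=True)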

    Read the article

  • Good tools for keeping the content in test/staging/live environments synchronized

    - by David Stratton
    I'm looking for recommendations on automated folder synchronization tools to keep the content in our three environments synchronized automatically. Specifically, we have several applications where a user can upload content (via a File Upload page or a similar mechanism), such as images, PDF files, Word documents, etc. In the past, we had users doing this on our live server, and as a result our test and staging servers had to be manually synchronized. Going forward, we will have them upload content to the staging server, and we would like some software to automatically copy the files off to the test and live servers, EITHER on a scheduled basis OR as the files get uploaded. I was planning on writing my own component, and either setting it up as a scheduled task or using a FileSystemWatcher, but it occurred to me that this has probably already been done, and I might be better off with some sort of synchronization tool that already exists. On our web site, there is a limited number of folders that we want to keep synchronized. For these folders it is all or nothing - we want to make sure the folders are EXACT duplicates. This should make it fairly straightforward, and I would think that any software that can synchronize folders would be OK, except that we also would like the software to log changes. (This rules out simple BATCH files.) So I'm curious: if you have a similar environment, how did you solve the challenge of keeping everything synchronized? Are you aware of a tool that is reliable and will meet our needs? If not, do you have a recommendation for something that will come close, or better yet, an open-source solution where we can get the code and modify it as needed (preferably .NET)? Added: Also, I DID google this first, but there are so many options; I am interested mostly in knowing what actually works well vs. what they SAY works, which is why I'm asking here.
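
    On Windows, robocopy's /MIR switch with /LOG: already comes close to "exact duplicate plus a change log". If rolling your own turns out to be necessary after all, the core is small; a sketch of a one-way mirror with logging (shown in Python, paths hypothetical, and it assumes an entry never flips between file and directory between runs):

        import filecmp, logging, os, shutil

        logging.basicConfig(filename='sync.log', level=logging.INFO)

        def mirror(src, dst):
            """One-way mirror: make dst an exact copy of src, logging each change."""
            if not os.path.isdir(dst):
                os.makedirs(dst)
            src_names = set(os.listdir(src))
            dst_names = set(os.listdir(dst))
            for name in src_names:
                s, d = os.path.join(src, name), os.path.join(dst, name)
                if os.path.isdir(s):
                    mirror(s, d)
                elif name not in dst_names or not filecmp.cmp(s, d, shallow=False):
                    shutil.copy2(s, d)
                    logging.info('copied %s', d)
            for name in dst_names - src_names:   # delete anything not in src
                d = os.path.join(dst, name)
                shutil.rmtree(d) if os.path.isdir(d) else os.remove(d)
                logging.info('removed %s', d)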

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day, and who knows how many updates. A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET, with some legacy ASP. We thought about making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network - doing large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

        - Security
        - Concurrent access
        - Performance with large amounts of data
        - Amount of time to do such a massive rewrite/switch
        - Lack of transactions
        - PITA to map relational data to flat files
        - NTFS doesn't support tons of files in a directory well

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
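
    For the demo, the single most persuasive number is an indexed lookup versus a full-file scan as row counts grow. A sketch of the shape of such a harness (SQLite standing in for SQL Server and CSV for the flat file, purely to keep the sketch self-contained; the gap only widens with real servers and network I/O):

        import csv, sqlite3, time

        N = 1000000
        rows = [(i, 'record-%d' % i) for i in range(N)]

        with open('flat.csv', 'w', newline='') as f:
            csv.writer(f).writerows(rows)

        conn = sqlite3.connect('demo.db')
        conn.execute('CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)')
        conn.executemany('INSERT INTO t VALUES (?, ?)', rows)
        conn.commit()

        start = time.time()
        conn.execute('SELECT val FROM t WHERE id = ?', (N - 1,)).fetchone()
        print('indexed lookup: %.4fs' % (time.time() - start))

        start = time.time()
        with open('flat.csv', newline='') as f:
            next(r for r in csv.reader(f) if r[0] == str(N - 1))
        print('flat-file scan: %.4fs' % (time.time() - start))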

    Read the article

  • Structuring input of data for localStorage

    - by WmasterJ
    It is nice when there isn't a DB to maintain and users to authenticate. My professor has asked me to convert a recent research project of his that uses Bespin and calculates errors made by users in a code editor, as part of his research. The goal is to convert from MySQL to using HTML5 localStorage completely. It doesn't seem so hard to do, even though digging into his code might take some time. Question: I need to store files and state (last placement of cursor and active file). I have already done so by implementing the recommendations in another Stack Overflow thread, but I would like your input on how to structure the content. My current solution is a hashmap-like approach using JavaScript objects:

        files = {};
        // later, saving
        files[fileName] = data;

    and then storing in localStorage using some recommendations:

        localStorage.setObject(files);

    Currently I'm also considering using some type of numeric id, so that names can be changed without the hassle of renaming the key in the hashmap. What is your opinion on the way it is solved, and would you do it any differently?

    Read the article

  • SVN 409 conflict on commits and updates

    - by bhefny
    We have been using SVN for the past year, and when we migrated to an online server we started getting this error:

        Commit: Commit failed (details follow):
        File or directory 'x.php' is out of date; try updating
        resource out of date; try updating
        CHECKOUT of '/!svn/ver/491/x.php': 409 Conflict (http://svn.example.com)

    We are currently using SmartSVN 6.5, and we have also tested with RapidSVN and Syncro (we can't use Tortoise as we have a lot of Ubuntu users). At the beginning I thought "How do you fix an SVN 409 Conflict Error" would help, but it didn't; we are still facing the same error, and it's even more absurd now. The main problem is that after you get the error, you can't shake it off. Updating doesn't solve it, reverting doesn't solve it; you are just stuck with the error. The only thing that works is removing the file from SVN and adding your version back, but that would defeat the purpose of using SVN in the first place. This is our Apache config (and yes, autoversioning is ON):

        <Location />
            DAV svn
            SVNPath /home/example/svn
            SVNAutoversioning on
            AuthType Basic
            AuthName "Access Restricted"
            AuthUserFile /home/example/svn-auth-file
            Require valid-user
        </Location>

        <Directory />
            <Files ~ "^\.ht">
                Order allow,deny
                Allow from all
                Satisfy All
            </Files>
            <Files ~ "^error_log">
                Order allow,deny
                Allow from all
                Satisfy All
            </Files>
        </Directory>

    And here are some observations:

        - We don't receive real conflicts anymore, we just get this 409 conflict.
        - You can somehow avoid the error if you always update before committing.
        - When committing a modified file plus a newly added file, you get the error - as if the added file incremented the version by one and then you are committing another file with an older version.

    Please advise, we are about to go insane.

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot of big files (up to 60 TB total, usually 30 to 40 GB each) into tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax, and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
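
    For what it's worth, the single-read structure is straightforward with a library tar writer: hand it a file-like wrapper whose read() updates a digest as the archiver pulls each block. Whether an interpreter keeps up at 120 MB/s is an open question, but the same shape ports directly to C. A sketch using Python's tarfile, streaming the archive to stdout (i.e. to tape) and the per-file md5 sums to stderr:

        import hashlib, sys, tarfile

        class HashingReader(object):
            """File-like wrapper that updates a digest as tar consumes the data."""
            def __init__(self, f):
                self.f = f
                self.md5 = hashlib.md5()
            def read(self, size=-1):
                chunk = self.f.read(size)
                self.md5.update(chunk)
                return chunk

        tar = tarfile.open(fileobj=sys.stdout.buffer, mode='w|')  # stream, no seeking
        for path in sys.argv[1:]:
            info = tar.gettarinfo(path)
            with open(path, 'rb') as f:
                reader = HashingReader(f)
                tar.addfile(info, reader)      # one read per file, hash rides along
            sys.stderr.write('%s  %s\n' % (reader.md5.hexdigest(), path))
        tar.close()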

    Read the article

  • Pull/Clone an SVN repository into hg with a new default branch name?

    - by TheLQ
    I'm forking a project's SVN repo and need to integrate it into my Mercurial repo. To keep things simple I have a local hgsubversion repo and a local hg repo. However, both the Mercurial and hgsubversion repos use default as their default branch name. My goal here is to put the original code and its updates on one branch, and my code on the default branch. However, I have yet to be able to do this:

        W:\programming\tcsite-svn-test>hg clone http://*HG_SITE*/hg .
        no changes found
        updating to branch default
        0 files updated, 0 files merged, 0 files removed, 0 files unresolved

        W:\programming\tcsite-svn-test>hg branch blizzard
        marked working directory as branch blizzard

        W:\programming\tcsite-svn-test>hg commit

        W:\programming\tcsite-svn-test>hg log
        changeset:   0:be13a9580df0
        branch:      blizzard
        tag:         tip
        user:        Leon Blakey <[email protected]>
        date:        Fri Jan 14 23:44:25 2011 -0500
        summary:     Created Blizzard Branch

        W:\programming\tcsite-svn-test>hg pull http://*SVN_SITE*/svn/
        pulling from http://*SVN_SITE*/svn/
        ....
        pulled 23 revisions
        (run 'hg update' to get a working copy)

        W:\programming\tcsite-svn-test>hg branch
        blizzard

        W:\programming\tcsite-svn-test>hg branches
        default      23:93642a8890ab   <------
        blizzard      0:be13a9580df0

    Not surprisingly, hgsubversion puts pulled commits into the default branch, when I really need them in the blizzard branch. From the docs, there is no way to rename the branch that a commit came from. Frustratingly, I can't even come up with a way to do it on a repo where only the hgsubversion repo is being pulled from, nothing else; all commits are tied to that one branch no matter what. Are there any suggestions on how to pull changes from an SVN repo and rename the branch to something else?

    Read the article

  • Given a main function and a cleanup function, how (canonically) do I return an exit status in Bash/Linux?

    - by Zac B
    Context: I have a bash script (a wrapper for other scripts, really) that does the following pseudocode:

        do a main function
        if the main function returns:
            $returncode = $?    # most recent return code
        if the main function runs longer than a timeout:
            kill the main function
            $returncode = 140   # the semi-canonical "exceeded allowed wall clock time" status
        run a cleanup function
        if the cleanup function returns an error:    # nonzero return code
            exit $?    # exit the program with the status returned from the cleanup function
        else:
            # cleanup was successful
            ....

    Question: What should happen after the last line? If the cleanup function was successful but the main function was not, should my program return 0 (for the successful cleanup), or $returncode, which contains the (possibly nonzero and unsuccessful) return code of the main function? For a specific application, the answer would be easy: "it depends on what you need the script for." However, this is more of a general/canonical question (and if this is the wrong place for it, kill it with fire): in Bash (or Linux in general) programming, do you typically want to return the status that "means" something (i.e. $returncode), or do you ignore such subjectivities and simply return the code of the most recent function? This isn't Bash-specific: if I have a standalone executable of any kind, how, canonically, should it behave in these cases? Obviously, this is somewhat debatable. Even if there is a system for these things, I'm sure that a lot of people ignore it. All the same, I'd like to know. Cheers!
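
    One widely followed convention: report the first thing that went wrong, so a successful cleanup never hides main's failure, and a cleanup failure only surfaces when main itself succeeded. Sketched here in Python for concreteness (in bash the same pattern usually hangs off `trap cleanup EXIT` with `exit $returncode`); the two stubs stand in for the real work:

        import sys

        def main():
            return 0     # stand-in for the real work; 140 on timeout, etc.

        def cleanup():
            return 0     # stand-in; nonzero if cleanup itself fails

        rc = main()
        cleanup_rc = cleanup()
        # First failure wins: main's status takes precedence over cleanup's.
        sys.exit(rc if rc != 0 else cleanup_rc)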

    Read the article

  • How to download a file from a UNC mapped share via IIS and ASP

    - by helgeg
    I am writing an ASP application that will serve files to clients through the browser. The files are located on a file server that is available from the machine IIS is running on via a UNC path (\\server\some\path). I want to use something like the code below to serve the file. Serving files that are local to the machine IIS is running on works well with this method; my trouble is serving files from the UNC-mapped share:

        //Set the appropriate ContentType.
        Response.ContentType = "Application/pdf";
        //Get the physical path to the file.
        string FilePath = MapPath("acrobat.pdf");
        //Write the file directly to the HTTP content output stream.
        Response.WriteFile(FilePath);
        Response.End();

    My question is how I can specify a UNC path for the file name. Also, to access the file share I need to connect with a specific username/password. I would appreciate some pointers on how I can achieve this (either using the approach above or by other means).

    Read the article

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and production servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server. Pushing to the staging server: as noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory. Pushing to the production server: here's my newest source of confusion... In the response that I cited above, it made me curious as to why @Paul states that it's a completely different story when pushing to a live server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls? Config files: with respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions? Accessing data: finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost? I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.
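
    Since git runs post-receive as any executable, the hook itself can be a few lines in whatever language is handy. A sketch of the bare-repo-plus-worktree variant of the setup described above (a force checkout rather than a clone, which is a common alternative; every path here is a placeholder):

        #!/usr/bin/env python
        # post-receive hook living in the bare repo; all paths are placeholders.
        import subprocess

        GIT_DIR = '/var/repo/site.git'
        WORK_TREE = '/var/www/site'

        subprocess.check_call([
            'git', '--git-dir', GIT_DIR, '--work-tree', WORK_TREE,
            'checkout', '-f', 'master',
        ])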

    Read the article

  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing the design of a data warehouse strategy within our group for meeting testing, reproducibility, and data-syncing requirements. One of the suggested ideas is to adopt a NoSQL approach using an existing tool, rather than re-implementing a whole lot of the same on a file system. I don't know if a NoSQL approach is even the best approach to what we're trying to accomplish, but perhaps if I describe what we need/want, you all can help. Most of our files are large, 50+ GB in size, held in a proprietary, third-party format. We need to be able to access each file by a name/date/source/time/artifact combination - essentially a key-value-pair style lookup. When we query for a file, we don't want to have to load all of it into memory; they're really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it. We want to easily add, remove, and export files from storage. We'd like to set up automatic file replication between two servers (we can write a script for this) - that is, sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server; we'd like complete replication. We also have other, smaller files that have a tree-type relationship with the big files: one file's content will point to the next, and so on. It's not a "spoked wheel," it's a full-blown tree. We'd prefer a Python, C or C++ API to work with a system like this, but most of us are experienced with a variety of languages; we don't mind, as long as it works, gets the job done, and saves us time. What do you think? Is there something out there like this?
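
    Since the requirement is a key-to-reference lookup rather than loading file contents, one lightweight pattern is to leave the big files on the file system and keep only an index of keys to paths in a small embedded store; the reference then goes to the third-party API. A minimal sketch of that split, assuming a composite key like the one described (all names hypothetical):

        import sqlite3

        # Big files stay on disk; only their locations are indexed.
        ix = sqlite3.connect('artifact_index.db')
        ix.execute("""CREATE TABLE IF NOT EXISTS artifacts
                      (name TEXT, date TEXT, source TEXT, time TEXT,
                       artifact TEXT, path TEXT,
                       PRIMARY KEY (name, date, source, time, artifact))""")

        def register(name, date, source, time_, artifact, path):
            ix.execute('INSERT OR REPLACE INTO artifacts VALUES (?,?,?,?,?,?)',
                       (name, date, source, time_, artifact, path))
            ix.commit()

        def lookup(name, date, source, time_, artifact):
            row = ix.execute("""SELECT path FROM artifacts WHERE name=? AND date=?
                                AND source=? AND time=? AND artifact=?""",
                             (name, date, source, time_, artifact)).fetchone()
            return row[0] if row else None   # hand this path to the third-party API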

    Read the article

  • Writing an audio player in C#

    - by Malki
    Hi, I have a pretty cool idea for a very special media player. I like to think about this project as a mini-startup, since I don't yet know if my idea is practical. Anyway, before implementing my idea, I first need to be able to implement a simple audio player. My preferred language for this project is C#, simply because it's so easy to use, but any other object-oriented language would be fine too, I guess. I started out with no knowledge whatsoever about audio. My main goals right now are: Being able to play audio files - in as many formats as possible (sort of a VLC-type player, but only audio for now). Being able to analyze audio files - as in, reading frequency, amplitude, volume, and other information about the audio. I think maybe a good idea here is to be able to analyze one file format (PCM?), and then temporarily convert any file I want to analyze to that format. This is in order to later implement a mechanism that compares songs and identifies similar songs to recommend to the user (this feature isn't part of my idea, but I figured that since it exists in many players nowadays, I need to have it too if I want to be able to compete with them). BTW - I currently don't have any knowledge about audio/wavelengths/frequencies and such, so I'd appreciate it if someone could point me in the right direction about this analysis feature. Maybe in the future I'd expand to playing video files as well, but for now I'm concentrating on audio. After searching the Internet for a while, I've come across LAME. Problem is, it's not C#, and I'm not sure how to use it. I know there is something called "interoperability" that is supposed to let me work with native DLL files through C#. Any information about that would be helpful as well. Any help would be much appreciated. Thanks, Malki :)

    Read the article
