Search Results

Search found 6735 results on 270 pages for 'pre commit'.


  • Difference between Revert and Update in Mercurial

    - by Edan Maor
    I'm just getting started with Mercurial, and I've come across something which I don't understand. I made changes to several files, and now I want to undo all the changes I made to one of them (i.e. go back to my last commit for one specific file). As far as I can see, the command I want is revert. In the page I linked to, there is the following statement: This operation however does not change the parent revision of the working directory (or revisions in case of an uncommitted merge). To undo an uncommitted merge, you can use "hg update -C -r ." which will reset the parents to the first parent. I don't understand the difference between the two (hg revert vs. hg update -C -r). Can anyone enlighten me as to the difference? And in my case, do I really want the revert or the update to get rid of the changes I made to the file? Thanks
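
    A minimal sketch of the distinction (the file path is a placeholder): hg revert rewrites files in the working directory back to a revision without touching the working directory's parents, while hg update -C cleanly moves the checkout itself.

        # Throw away uncommitted edits to one file, back to the working directory's parent revision
        hg revert path/to/file

        # Discard everything uncommitted (including an uncommitted merge) with a clean update
        hg update -C .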

    Read the article

  • How do I git reset --hard HEAD on Mercurial?

    - by obvio171
    I'm a Git user trying to use Mercurial. Here's what happened: I did an hg backout on a changeset I wanted to revert. That created a new head, so hg instructed me to merge (back to "default", I assume). After the merge, it told me I still had to commit. Then I noticed something I did wrong when resolving a conflict in the merge, and decided I wanted to have everything as it was before the hg backout; that is, I want this uncommitted merge to go away. In Git this uncommitted stuff would be in the index and I'd just do a git reset --hard HEAD to wipe it out, but from what I've read, the index doesn't exist in Mercurial. So how do I back out from this?
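
    A minimal sketch of the closest equivalent, assuming the goal is simply to throw away the uncommitted merge and all local modifications:

        # Clean update back to the first parent of the working directory,
        # discarding the uncommitted merge and any local changes
        hg update -C .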

    Read the article

  • Timesync on HyperV with CentOS 6.2

    - by WaldenL
    I've got a CentOS VM (release 6.2) running under HyperV. I have integration services installed (part of the base now), and CentOS shows the current clocksource is hyperv_clocksource; however, the time in the VM is about 10 minutes fast after a week of uptime. My understanding of the new IC and pluggable clocksource is that this shouldn't happen any more. Is there any additional configuration necessary to get the pluggable clocksource to "work"? I know there are plenty of links about setting kernel options to PIT and various stuff like that, but those all seem to pre-date the integrated clocksource support, and as I understand it they shouldn't be needed any longer. Nor should ntpd or adjtimex.

    Read the article

  • NHibernate transaction management in ASP.NET MVC - how should it be done?

    - by adrin
    I am writing a simple ASP.NET MVC application using the session-per-request and transaction-per-request patterns (custom HttpModule). It seems to work properly, but the performance is terrible (a simple page loads in ~7 seconds). For every HTTP request, including requests for graphical resources (all images on the site), a transaction is created, and that seems to delay the loading times (without the transactions, loading times per image are ~1-10 ms; with transactions they are over 1 second). What is the proper way to manage transactions in the ASP.NET MVC + NH stack? When I put all transactions into my repository methods, for some obscure reason I got an 'implicit transactions' warning in NHProf (the SQL statements were executed outside a transaction, even though in code the session.Save()/Update()/etc. methods were invoked within the transaction's 'using' scope and before the transaction.Commit() call). BTW, are implicit transactions really bad?
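
    For illustration, a rough sketch (not the original poster's module) of a unit-of-work HttpModule that skips static resources, so image requests no longer open transactions; NHibernateHelper.SessionFactory is an assumed helper name, not from the original post:

        using System.IO;
        using System.Web;
        using NHibernate;

        public class UnitOfWorkModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    // Skip images, scripts and stylesheets so they no longer open sessions/transactions.
                    var ext = Path.GetExtension(app.Request.Path).ToLowerInvariant();
                    if (ext == ".png" || ext == ".gif" || ext == ".jpg" || ext == ".css" || ext == ".js")
                        return;

                    var session = NHibernateHelper.SessionFactory.OpenSession();
                    session.BeginTransaction();
                    app.Context.Items["nh.session"] = session;
                };

                app.EndRequest += (sender, e) =>
                {
                    var session = app.Context.Items["nh.session"] as ISession;
                    if (session == null) return;

                    try { session.Transaction.Commit(); }
                    catch { session.Transaction.Rollback(); throw; }
                    finally { session.Dispose(); }
                };
            }

            public void Dispose() { }
        }

    Serving static files outside the managed pipeline achieves the same effect without the extension check.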

    Read the article

  • Problem with autocommit in ANT SQL task

    - by Alex Stamper
    I have an SQL script and want to apply it with an ANT task. This script clears out the schema and creates new tables and views. The ANT task is defined as follows: <sql driver="com.mysql.jdbc.Driver" url="jdbc:mysql://host:3306/smth" userid="smth" password="smth" expandProperties="false" autocommit="true" src="all.sql" > </sql> When this task launches, the log shows that the tables are cleared and created. But when it tries to create the first view, it fails with: Failed to execute: CREATE VIEW component... AS SELECT component_raw.id AS MySQLSyntaxErrorException: Table 'component_raw' doesn't exist. I have no idea why it fails here. Running the same all.sql from MySQL Query Browser gives no errors. When I launched ANT with the -v option, I didn't see any "COMMIT" messages. Please help me resolve the problem.

    Read the article

  • IDbTransaction Rollback Timeout

    - by Ben
    I am dealing with an interesting situation where I perform many database updates in a single transaction. If these updates fail for any reason, the transaction is rolled back. IDbTransaction transaction = null; try { transaction = connection.BeginTransaction(); // do lots of updates (where at least one fails) transaction.Commit(); } catch { transaction.Rollback(); // results in a timeout exception } finally { connection.Dispose(); } I believe the above code is generally considered the standard template for performing database updates within a transaction. The issue I am facing is that whilst transaction.Rollback() is being issued to SQL Server, it is also timing out on the client. Is there any way of distinguishing between a timeout in issuing the rollback command and a timeout while that command executes to completion? Thanks in advance, Ben
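
    Not from the original post, but one way to at least tell a client-side timeout apart from other rollback failures is to inspect the SqlException error number (-2 is ADO.NET's client-side timeout code); TryRollback is a hypothetical helper, sketched here against SqlClient:

        using System.Data;
        using System.Data.SqlClient;

        static class TransactionHelper
        {
            // Returns false if the Rollback call itself timed out on the client;
            // rethrows anything that is not a timeout.
            public static bool TryRollback(IDbTransaction transaction)
            {
                try
                {
                    transaction.Rollback();
                    return true;
                }
                catch (SqlException ex)
                {
                    if (ex.Number == -2)   // ADO.NET's client-side timeout error number
                        return false;      // SQL Server may still be completing the rollback server-side
                    throw;
                }
            }
        }

    Whether the rollback eventually completed on the server still has to be verified separately, for example by re-querying the affected rows.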

    Read the article

  • Version control and branching when using Oracle

    - by Ed Woodcock
    Hi folks: At work we're using Oracle and C#/ASP.NET to handle a customer's website; this site is very large-scale, so the database is very large. We use Perforce for our version control, and tack create-or-replace scripts onto FogBugz cases whenever a database change is made, which has been fine until now, as we are now at a point where five developers are working on five expansions for the system, each on a separate Perforce branch. Unfortunately, we cannot get duplicate databases, due to the database size, so everyone is still working from the same one. This is obviously a cause of problems: only ten minutes ago we had a bit of an issue where a stored procedure change for a branch propagated over to the Pre-Production server and caused a large number of crashes for the testers. Ideally, we would like a way to track these changes without having to manually keep track of them through FogBugz. My question is: how do you lot handle this situation? I'm sure there must be a good way by now to handle versioning, or at least tracking changes, in an Oracle database.

    Read the article

  • TooManyRowsAffectedException with encrypted triggers

    - by Jon Masters
    I'm using NHibernate to update 2 columns in a table that has 3 encrypted triggers on it. The triggers are not owned by me and I cannot make changes to them, so unfortunately I can't SET NOCOUNT ON inside them. Is there another way to get around the TooManyRowsAffectedException that is thrown on commit? Update 1: So far the only way I've gotten around the issue is to step around the .Save routine with var query = session.CreateSQLQuery("update Orders set Notes = :Notes, Status = :Status where OrderId = :Order"); query.SetString("Notes", orderHeader.Notes); query.SetString("Status", orderHeader.OrderStatus); query.SetInt32("Order", orderHeader.OrderHeaderId); query.ExecuteUpdate(); It feels dirty and is not easy to extend, but it doesn't crater.

    Read the article

  • Corporate IM with video that actually works, suggestions?

    - by Erik P. Skaalerud
    Hi. Does anyone here have a suggestion for a cross-platform IM solution which will work with VoIP/video on both Windows (XP and 7) and Mac OS X from 10.4 upwards? Right now we're in a kind of mixed environment, with some Mac users using iChat Server since they need video support (conferencing across several offices over VPN), but it won't work on Windows clients. The rest of us are happily using Openfire+Spark, but there's no VoIP or video available from what I've found, unless you want to add in several pieces of 3rd-party software (like Red5 and Asterisk). Requirements: as said before, it must work on both Windows and Mac; internal server (no Skype etc.); file transfer between platforms; SSO (Single Sign-On) via Active Directory authentication; some sort of screen sharing would be a plus, like switching over to a screen capture (PowerPoint, software training etc.). We can afford to buy software if that's needed to get this working without any hiccups across platforms. Pre-thanks to anyone who gives suggestions.

    Read the article

  • Open working copy file from eclipse history view

    - by Wolfgang
    The history view of Eclipse shows you a list of files changed in a certain revision. When you open the context menu on one of these, you have the option 'Open', which opens a view of that file in that revision. How can I open the editor for the selected file, i.e. the working-copy version of the file, right from the history view? The background is that I want to use the history view to find files that have been changed recently, to do code reviews. People commit via Subversion, and I use Subclipse to connect Eclipse to the Subversion server. Today, I must use the 'Open resource'/'Open type' function and type in the name of the file that I read from the history view.

    Read the article

  • Best data + partition recovery software

    - by Pennf0lio
    I accidentally formatted my Drive D, which contained all my backups and documents. I had separated my files onto Drive D hoping I would not harm them. When I used Acronis Recovery to install a new OS with some pre-installed applications to my HDD, I didn't realize I had also formatted/erased my Drive D. Now my Drive D is unpartitioned. I am really in deep trouble and need some urgent help. Please recommend software that can at least restore my old drive that contained my files. I'm assuming most of you think this is a duplicate of some old questions here, but I'm not just looking for data recovery; I need to recover the whole partition with the files. I used to use "Recuva", but it only recovers files, not whole folders with the files in them. Please advise. Thank you!

    Read the article

  • MySQL open files limit

    - by Brian
    This question is similar to set open_files_limit, but there was no good answer. I need to increase my table_open_cache, but first I need to increase the open_files_limit. I set the option in /etc/mysql/my.cnf: open-files-limit = 8192 This worked fine in my previous install (Ubuntu 8.04), but now in Ubuntu 10.04, when I start the server up, open_files_limit is reported to be 1710. That seems like a pretty random number for the limit to be clipped to. Anyway, I tried getting around it by adding a line like this in /etc/security/limits.conf: mysql hard nofile 8192 I also tried adding this to the pre-start script in mysql's upstart config (/etc/init/mysql.conf): ulimit -n 8192 Obviously neither of those things worked. So where is the hoop that has been added between Ubuntu 8.04 and 10.04 through which I must jump in order to actually increase the open files limit?
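
    Not from the original question, but one assumption worth checking: upstart runs pre-start in its own shell, so a ulimit call there does not carry over to the mysqld process; upstart instead reads a per-job limit stanza, so something along these lines in the job definition may be what 10.04 expects:

        # /etc/init/mysql.conf (upstart job definition)
        # soft and hard open-file limits for the mysql job
        limit nofile 8192 8192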

    Read the article

  • SVN Import Force for importing existing file

    - by Daniel A. White
    I am creating a text file and a zip file for a tag automatically with MSBuild. My MSBuild project is called by CruiseControl.NET. The text file is always going to be latest.txt and the zip file will be (version).zip (so it will be different every time). I do not want to commit these files back to my trunk, so I discovered svn import. The first time, it works for both files. On successive runs, it fails since latest.txt already exists in the repository. Do I need to use svn import --force or something else to get these two files pushed up to my repository?

    Read the article

  • Is there a difference between SMO ServerConnection transaction methods and using the SqlConnectionObject?

    - by YWE
    I am using SMO to create databases and tables on a SQL Server. I want to do so in a transaction. Are both of these methods of doing so valid and equivalent: First method: Server server; //... server.ConnectionContext.BeginTransaction(); //... server.ConnectionContext.CommitTransaction(); Second method: Server server; // ... SqlConnection conn = server.ConnectionContext.SqlConnectionObject; SqlTransaction trans = conn.BeginTransaction(); // ... trans.Commit();

    Read the article

  • Mercurial: pull changes from unversioned copy

    - by Austin Hyde
    I am currently maintaining a Mercurial repository of the project I am working on. The rest of the team, however, doesn't. There is a "good" (unversioned) copy of the code base that I can access by SSH. What I would like to do is be able to do something like an hg pull from that good copy into my master repository whenever it gets updated. As far as I can tell, there's no obvious way to do this, as hg pull requires the source to be an hg repository. I suppose I could use a utility like rsync to update my repository, then commit, but I was wondering: is there an easier/less contrived way to do this?
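
    A minimal sketch of the rsync-based workaround mentioned above (host and paths are placeholders); --exclude='.hg' protects the repository metadata, and hg addremove picks up added and deleted files before committing:

        rsync -av --delete --exclude='.hg' user@goodcopy:/path/to/code/ .
        hg addremove
        hg commit -m "sync with the unversioned good copy"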

    Read the article

  • NHibernate not tracking changes

    - by Lavinski
    I'm using NHibernate and it appears that changes to my new object are not persisted. After creating and saving an object, I then modify it and commit the transaction. However, none of the modifications are saved. The strange thing is this code was working previously and I have no idea what could cause this; nothing was changed that's obviously related. As an attempted workaround I saved the object later in the procedure, after all the changes were made; however, I was greeted with an assertion failure: "collection [] was not processed by flush". Any ideas?
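
    For reference, a minimal sketch (entity and variable names are placeholders, not from the original post) of the pattern in which later modifications are picked up by automatic dirty checking and flushed at commit; if the session's FlushMode has been switched to Never somewhere, that flush will not happen:

        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            var order = new Order { Status = "New" };
            session.Save(order);          // the entity becomes persistent in this session

            order.Status = "Processed";   // tracked by dirty checking, no second Save needed

            tx.Commit();                  // flushes the UPDATE together with the INSERT
        }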

    Read the article

  • How can I fix my Virtual PC 2007 network configuration

    - by DanJ
    Hi, I have installed Windows Virtual PC 2007 on my Windows 2003 R2 Server. I have installed a virtual Windows XP. I have configured the virtual PC to use Shared Networking (NAT). I have disabled the firewall on the virtual Windows XP. The problem: I am unable to PING from Windows 2003 (the host) to the Windows XP (virtual). I do have normal traffic from the virtual to the internet. Could this problem be related to routing? How can I fix this network configuration to allow for the following traffic: 1. from Virtual to Internet, 2. from Host to Virtual, 3. if possible, from Internet to Virtual on pre-defined ports (port forwarding?). Thanks

    Read the article

  • How do you share your git repository with other developers?

    - by semi
    I have a central git repository that everyone pushes to for testing and integration, but it is only pushed to when features are 'ready'. While in the middle of a big task, developers frequently have many commits that stay on their hard drives. Sometimes in the middle of these projects I'd like to either see what another developer is doing, or show him how I've done something. I'd like to be able to tell another developer to just "pull my working copy". The only way I can think of is having everyone run ssh on their development machines and adding accounts or ssh keys for everyone, but this is a huge privacy and permissions nightmare, and seems like a lot of work to maintain. Should we just be pushing to that central repository in these cases? Should we be pushing after every local commit?
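
    One low-ceremony sketch (machine names and paths are placeholders): each developer exports their repository read-only with git daemon, and colleagues fetch work-in-progress branches from it without anything going through the central repository.

        # On the developer's machine, exposing everything under ~/projects read-only:
        git daemon --base-path=/home/alice/projects --export-all --reuseaddr &

        # On a colleague's machine:
        git remote add alice git://alice-box/myproject
        git fetch alice
        git checkout -b review-alice alice/feature-branch

    This keeps the central repository for 'ready' features while still letting people look at each other's in-progress work.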

    Read the article

  • How to properly update a feature branch from trunk?

    - by Pavel Radzivilovsky
    The SVN book says: ...Another way of thinking about this pattern is that your weekly sync of trunk to branch is analogous to running svn update in a working copy, while the final merge step is analogous to running svn commit from a working copy. I find this approach very impractical in large developments, for several reasons, mostly related to the reintegration step. From SVN v1.5, merging is done rev-by-rev. Cherry-picking the areas to be merged would cause us to resolve the trunk-branch conflicts twice (once when merging trunk revisions to the feature branch, and once more when merging back). Repository size: trunk changes might be significant for a large code base, and copying the changed files (unlike an SVN copy) from trunk elsewhere may be a significant overhead. Instead, we do what we call "re-branching": when a significant chunk of trunk changes is needed, a new feature branch is opened from the current trunk, and the merge is always downward (feature branches -> trunk -> stable branches). This does not follow the SVN book guidelines, and developers see it as extra pain. How do you handle this situation?
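
    For comparison, the SVN 1.5+ workflow the book describes looks roughly like this (repository URL and branch name are placeholders):

        # In a working copy of the feature branch: the "weekly sync" with trunk
        svn merge https://svn.example.com/repo/trunk
        svn commit -m "sync feature branch with trunk"

        # Later, in a working copy of trunk: fold the finished branch back in
        svn merge --reintegrate https://svn.example.com/repo/branches/feature-x
        svn commit -m "reintegrate feature-x"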

    Read the article

  • SQLite.Net Issue With BeginTransaction

    - by cam
    I'm trying to use the System.Data.SQLite library, and I'm following the documentation about optimizing inserts, so I copied this code directly out of the documentation: using (SQLiteTransaction mytransaction = myconnection.BeginTransaction()) { using (SQLiteCommand mycommand = new SQLiteCommand(myconnection)) { SQLiteParameter myparam = new SQLiteParameter(); int n; mycommand.CommandText = "INSERT INTO [MyTable] ([MyId]) VALUES(?)"; mycommand.Parameters.Add(myparam); for (n = 0; n < 100000; n++) { myparam.Value = n + 1; mycommand.ExecuteNonQuery(); } } mytransaction.Commit(); } Now, I initialize a connection right before that by using SqlConnection myconnection = new SqlConnection("Data Source=blah"); I have a database named blah, with the correct tables and values. The problem is when I run this code, it says "Operation is not valid due to the current state of the object". I've tried changing the code around several times, and it still points to BeginTransaction. What gives?
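
    Not from the original post, but two things worth checking: the snippet builds SQLiteCommand/SQLiteTransaction objects, so the connection should be a SQLiteConnection rather than a SqlConnection, and it has to be opened before BeginTransaction is called; an unopened connection is a common way to hit that InvalidOperationException. A minimal sketch (the data source path is a placeholder):

        using System.Data.SQLite;

        using (var myconnection = new SQLiteConnection("Data Source=blah.db"))
        {
            myconnection.Open();   // BeginTransaction throws on a closed connection

            using (SQLiteTransaction mytransaction = myconnection.BeginTransaction())
            {
                // ... the batched inserts from the snippet above ...
                mytransaction.Commit();
            }
        }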

    Read the article

  • JPA, scope, and autosave?

    - by arinte
    I am using JPA and let's say I do something like this: public class MoRun extends Thread {... public void run() { final EntityManagerFactory emFactory = Persistence.createEntityManagerFactory("pu"); EntityManager manager = emFactory.createEntityManager(); manager.setFlushMode(FlushModeType.COMMIT); someMethod(manager); ... } public void someMethod(EntityManager manager){ Query query = manager.createNamedQuery("byStates"); List<State> list = query.getResultList(); for (State state : list) { if(someTest) state.setValue(...) } ... } So for those objects that pass "someTest" and have their values updated, are those changes automatically persisted to the DB even though there is no transaction and I don't explicitly "manager.save(state)" the object? I ask because it seems like they are, and I was wondering if the flush is doing it?
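
    For contrast, a minimal sketch (reusing the names from the snippet above; the value 42 is arbitrary) with an explicit resource-local transaction, where the dirty-checked changes are unambiguously flushed at commit:

        EntityManager manager = emFactory.createEntityManager();
        manager.getTransaction().begin();

        List<State> list = manager.createNamedQuery("byStates").getResultList();
        for (State state : list) {
            state.setValue(42);   // managed entity: the change is written at flush time
        }

        manager.getTransaction().commit();   // flush happens here, updates hit the DB
        manager.close();

    Outside a transaction, whether such changes ever reach the database depends on the provider and flush mode, which is why relying on it is fragile.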

    Read the article

  • Uploading zip files in CodeIgniter won't work

    - by krike
    I have created a helper that takes some parameters and should upload a file. The function works for images, but not for zip files. I searched on Google and even added a MY_upload.php - http://codeigniter.com/bug_tracker/bug/6780/ - but I still have the problem, so I used print_r to display the array of the uploaded files. The image is fine, but the zip array is empty:

        Array ( [file_name] => [file_type] => [file_path] => [full_path] => [raw_name] => [orig_name] => [file_ext] => [file_size] => [is_image] => [image_width] => [image_height] => [image_type] => [image_size_str] => )
        Array ( [file_name] => 2385b959279b5e3cd451fee54273512c.png [file_type] => image/png [file_path] => I:/wamp/www/e-commerce/sources/images/ [full_path] => I:/wamp/www/e-commerce/sources/images/2385b959279b5e3cd451fee54273512c.png [raw_name] => 2385b959279b5e3cd451fee54273512c [orig_name] => 1269770869_Art_Artdesigner.lv_.png [file_ext] => .png [file_size] => 15.43 [is_image] => 1 [image_width] => 113 [image_height] => 128 [image_type] => png [image_size_str] => width="113" height="128" )

    This is the helper function:

        function multiple_upload($name = 'userfile', $upload_dir = 'sources/images/', $allowed_types = 'gif|jpg|jpeg|jpe|png', $size)
        {
            $CI =& get_instance();
            $config['upload_path'] = realpath($upload_dir);
            $config['allowed_types'] = $allowed_types;
            $config['max_size'] = $size;
            $config['overwrite'] = FALSE;
            $config['encrypt_name'] = TRUE;

            $ffiles = $CI->upload->data();
            echo "<pre>";
            print_r($ffiles);
            echo "</pre>";

            $CI->upload->initialize($config);
            $errors = FALSE;

            if (!$CI->upload->do_upload($name)): // I believe this is causing the problem, but I'm new to CodeIgniter so no idea where to look for errors
                $errors = TRUE;
            else:
                // Build a file array from all uploaded files
                $files = $CI->upload->data();
            endif;

            // There were errors, we have to delete the uploaded files
            if ($errors):
                @unlink($files['full_path']);
                return false;
            else:
                return $files;
            endif;
        } // end of multiple_upload()

    And this is the code in my controller:

        if (!$s_thumb = multiple_upload('small_thumb', 'sources/images/', 'gif|jpg|jpeg|jpe|png', 1024)): // http://www.matisse.net/bitcalc/
            $data['feedback'] = '<div class="error">Could not upload the small thumbnail!</div>';
            $error = TRUE;
        endif;

        if (!$main_file = multiple_upload('main_file', 'sources/items/', 'zip', 307200)):
            $data['feedback'] = '<div class="error">Could not upload the main file!</div>';
            $error = TRUE;
        endif;

    Read the article

  • Is there a way to create a consistent snapshot/SnapMirror across multiple volumes?

    - by Tomer Gabel
    We use a NetApp FAS 6-series filer with an application that spans multiple volumes. For backup purposes I would like to create a consistent snapshot that spans these volumes at the same point in time (or at least with an extremely low delta); additionally, we'd like to use SnapMirror to replicate the production environment to test volumes. The problem is in creating a consistent snapshot/SnapMirror, since these commands are not transactional and do not take multiple parameters. I tried scripting consecutive "snap create" or "snapmirror resync" commands via SSH, but there's always a 0.5-2 second difference between each snapshot. It's currently "good enough", but I'm seriously concerned about the consistency impact with increased load (we're currently in pre-production). Has anyone managed to create a consistent snapshot that spans several volumes? How did you pull it off?

    Read the article

  • How to call an SQL string from NHibernate

    - by frosty
    I have the following method; at the moment it returns the whole SQL string. How would I execute the following? using (ITransaction transaction = session.BeginTransaction()) { string sql = string.Format( @"DECLARE @Cost money SET @Cost = -1 select @Cost = MAX(Cost) from item_costings where Item_ID = {0} and {1} >= Qty1 and {1} <= Qty2 RETURN (@Cost)", itemId, quantity); string mystring = session .CreateSQLQuery(sql) .ToString(); transaction.Commit(); return mystring; } // EDIT: here is the final version using criteria using (ISession session = NHibernateHelper.OpenSession()) { decimal cost = session .CreateCriteria(typeof (ItemCosting)) .SetProjection(Projections.Max("Cost")) .Add(Restrictions.Eq("ItemId", itemId)) .Add(Restrictions.Le("Qty1", quantity)) .Add(Restrictions.Ge("Qty2", quantity)) .UniqueResult<decimal>(); return cost; }

    Read the article

  • Programming Environment that runs directly on the iPhone or iPad?

    - by lexu
    Is there a development environment that runs directly on the iPhone OS? I will be without access to a computer/internet for some time, but will have the use of an iPad (WiFi, jailbroken). Do you know of any way to dabble with programming directly on the device? Since Apple is committed to not letting it happen, I assume I will have to find such an environment on Cydia (or a 'similar' site). I don't seem to be able to find the correct Google incantations (search terms) to locate such a package.

    Read the article
