Search Results

Search found 7864 results on 315 pages for 'pre commit hook'.

Page 72/315

  • ignore ipv6 router advertisements for static addresses with bonded interfaces

    - by boran
    I need to assign static IPv6 addresses (not use autoconfigured addresses, and ignore router advertisements). This can be done as follows for a standard interface like eth0:

        iface eth0 inet6 static
            address myprefix:mysubnet::myip
            gateway myprefix:mysubnet::mygatewayip
            netmask 64
            pre-up /sbin/sysctl -q -w net.ipv6.conf.$IFACE.autoconf=0
            pre-up /sbin/sysctl -q -w net.ipv6.conf.$IFACE.accept_ra=0

    However, how can this be done for bonded interfaces? Using the "all" interface does not work. The system is Ubuntu 10.04, 2.6.24-24-server. If one uses the above sysctl commands for bond0, networking hangs on boot, because /proc/sys/net/ipv6/conf/bond0 does not yet exist and cannot be written to. Once the system has booted, /proc/sys/net/ipv6/conf/bond0 does exist, so one solution is to add the following to /etc/rc.local:

        /sbin/sysctl -q -w net.ipv6.conf.bond0.autoconf=0
        /sbin/sysctl -q -w net.ipv6.conf.bond0.accept_ra=0
        /etc/init.d/networking restart

    This has the desired effect: the autoconfigured v6 address disappears. It seems like a bit of a hack, though. Are there better solutions?
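
    One possibility, sketched below and untested: keep the sysctl calls in /etc/network/interfaces but attach them to the bond0 stanza as post-up commands, on the assumption that /proc/sys/net/ipv6/conf/bond0 exists once the bonded device has been brought up (whether an RA can still slip in before the post-up runs is exactly the kind of detail that would need checking).

        # /etc/network/interfaces -- hypothetical bond0 stanza, addresses as above
        iface bond0 inet6 static
            address myprefix:mysubnet::myip
            gateway myprefix:mysubnet::mygatewayip
            netmask 64
            post-up /sbin/sysctl -q -w net.ipv6.conf.bond0.autoconf=0
            post-up /sbin/sysctl -q -w net.ipv6.conf.bond0.accept_ra=0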

    Read the article

  • Shell Script Sequencing with Rake

    - by Haseeb Khan
    I am working on a Rake utility and want to implement something like the following: there are some shell commands in a sequence in my Rake file, and the sequence should wait for the previous command to finish processing before it moves on to the next one.

        sh "git commit -m \"#{args.commit_message}\"" do |ok, res|
          # Do some processing
        end
        sh "git push heroku master"

    In the above example, sh "git push heroku master" shouldn't be executed until the processing in the block attached to sh "git commit -m \"#{args.commit_message}\"" is completed. Another nice-to-have would be storing the output of a shell command in a Ruby variable so it can be used for further manipulation if required. Thanks in advance.

    Read the article

  • Implementing Variable Envelope Return Path (VERP) using Exchange

    - by iammichael
    We're looking into implementing Variable Envelope Return Path (VERP) for improved bounce processing in our application. Our current mail infrastructure is MS Exchange 2007, but we are in the process of upgrading to 2010. We're also implementing Postini for spam filtering. Exchange doesn't support sub-addressing (see also this question on disposable addresses) -- and VERP is somewhat of a specialized application of sub-addressing. Are there any options for implementing VERP in Exchange without putting another non-Exchange SMTP relay in front of Exchange to pre-process incoming messages? Specifically, could a transport rule be created that matches against the target (non-existent) recipient, stores that recipient address in a special header added to the message, and redirects the message to a pre-created mailbox? Note: we have developer resources available if custom code could be used somehow.

    Read the article

  • Universal Windows XP With Ghost and Sysprep

    - by RobertPitt
    I have an idea, but I'm not sure whether it is possible, and I'm looking for advice on how to accomplish it. If I were to deploy a Sysprepped Windows XP image from a Lenovo ThinkCentre 6073-CTO onto a Dell OptiPlex, for instance, I would get a BSOD due to the hardware change. What I'm looking to do is create a few images like so:

        Windows XP x86 SP3 (Sysprepped)
        Windows XP x64 SP3 (Sysprepped)
        ...

    so that we can use them throughout the organization without having to create individual images per computer type. Is there any way to accomplish this, such as some modification of files pre-Ghost or removal of certain drivers pre-Sysprep? Any advice is appreciated.

    Read the article

  • close fails on database connections (managed connection cleanup fails) in WebSphere 7 but not in WebSphere 6.1

    - by mete
    I have a simple method (used in a web application through servlets) that gets a connection from a JNDI name and issues a select statement (get the connection, issue the select, return the result, close the connection, etc. in a finally block). Due to other methods in the application, the connection is set to autocommit=false. This method works normally in WebSphere 6.1 as well as in GlassFish and WebLogic. However, in WebSphere 7 it triggers a cleanup-failed error when I close the connection because, it says, the connection is still in a transaction. Because I was not updating anything, I did not commit or roll back the connection in this method (which may be wrong). If I add a commit before closing the connection, it works. My question is: why does it work in WebSphere 6.1 (and other containers) but not in WebSphere 7? What can be the cause of this difference?

    Read the article

  • Multi-level clones with Git?

    - by Chad Johnson
    So, I'm thinking of having the following centralized setup with Git (each of these is a clone):

        stable
        development
        developer1
        developer2
        developer3

    So, I created my stable repository:

        git --bare init

    made the 'development' clone:

        git clone ssh://host.name//path/to/stable/project.git development

    and made a 'developer' clone:

        git clone ssh://host.name//path/to/development/project.git developer

    Now I make a change, commit, and push from my developer account:

        git commit --all
        git push

    and the change goes to the development clone. But when I ssh to the server, go to the development clone directory, and run "git fetch" or "git pull", I don't see the changes. So what do I do? Am I totally misunderstanding things and doing things wrong? How can I see the changes in the 'development' clone that I pushed from my 'developer' clone? This worked fine in Mercurial.
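
    A sketch of what is probably going on (an assumption from the description, not verified on the poster's setup): pushing into a non-bare clone updates its branch ref but not its checked-out files, so the commit is actually there even though the working tree still shows the old content.

        # on the server, inside the 'development' clone: the pushed commit exists
        git log -1 master

        # option 1: force the working tree to match the branch that was pushed to
        # (this discards any uncommitted local edits in that clone)
        git reset --hard master

        # option 2 (usually cleaner): make 'development' a bare repository and keep
        # a separate working clone of it for browsing and integration merges
        git clone --bare ssh://host.name//path/to/development/project.git development.git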

    Read the article

  • FMDB transaction

    - by user142764
    Hi! I use FMDB to wrap SQLite in my app. I haven't found any docs about the use of the methods begin, beginUpdates, commit, finalize, etc. I face some problems in my app which I think are caused by the way I use transactions. Here is what I tried:

        [FMDB beginUpdates]
        - my insert statement -
        [FMDB commit]
        [FMDB finalize]

    It crashes with this log:

        Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[FMDatabase<0xd705a0 finalize]: called when collecting not enabled'

    Could you please give me an example of how you are using transactions, or point me to a doc? Thanks in advance, Vincent.

    Read the article

  • Update database on clicking a button, after editing a GridView (not automatically saving to the DB)

    - by gaponte69
    I am using a GridView in ASP.NET and editing data with the edit command field property (as we know, after updating the edited row, the database is updated automatically). I want to use transactions (with a begin-to-commit statement, including rollback) to commit this update query to the database after clicking some button (after some events, for example), not to automatically insert or update the edited data from the grid directly into the DB. So I want to save the edits somewhere temporary (even many edited rows, not just one row) and then confirm the transaction, i.e. update the real tables in the database. Any suggestions are welcome. I've used some links which were very helpful, like:

        http://www.asp.net/learn/data-access/tutorial-63-cs.aspx
        http://www.asp.net/learn/data-access/tutorial-66-cs.aspx

    Read the article

  • Eclipse 3.5 and Ubuntu 9.10, subversion client does not work

    - by Cédric Girard
    Hi, I had Eclipse 3.5 (Yoxos) installed on my Ubuntu 8.04 for months, and it ran fine. I upgraded to 9.10 last week, and the Subversion plugin has not worked since the upgrade. When I try to update or commit, Subversion works for hours without any progress in the console or the progress bars. I can delete files or add them to SVN, but commands which involve the network just hang. SVN runs fine from the command line. I have already patched the GDK problem; since then I can cancel an update/commit without crashing Eclipse. Regards, Cédric

    Read the article

  • switch between two cursors based on parameter passed into stored procedure

    - by db83
    Hi, I have two cursors in my procedure that differ only in the table they join to. The cursor that is used is determined by a parameter passed into the procedure:

        if (param = 'A') then
          DECLARE
            CURSOR myCursor IS SELECT x, y, z FROM table1 a, table2 b;
          BEGIN
            FOR aRecord IN myCursor LOOP
              proc2(aRecord.x, aRecord.y, aRecord.z);
            END LOOP;
            COMMIT;
          END;
        elsif (param = 'B') then
          DECLARE
            CURSOR myCursor IS SELECT x, y, z FROM table1 a, table3 b;  -- different table
          BEGIN
            FOR aRecord IN myCursor LOOP
              proc2(aRecord.x, aRecord.y, aRecord.z);
            END LOOP;
            COMMIT;
          END;
        end if;

    I don't want to repeat the code for the sake of one different table. Any suggestions on how to improve this? Thanks in advance.

    Read the article

  • Why does git remember changes, but not let me stage them?

    - by Andres Jaan Tack
    I have a list of modifications when I run git status, but I cannot stage them or commit them. How can I fix this? This occurred after pulling the kernelmode directory from a bare repository somewhere in one huge commit.

        % git status
        # On branch master
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   kernelmode/linux-2.6.33/Documentation/IO-mapping.txt
        #       ...
        $ git add .
        $ git status
        # On branch master
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   kernelmode/linux-2.6.33/Documentation/IO-mapping.txt
        #       ...
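
    A hedged guess at the usual culprits (not confirmable from the question alone): when every file in a freshly pulled tree shows up as modified and git add does not make the entries go away, the difference is often file permissions or line endings rather than content.

        # if git diff shows "old mode 100644 / new mode 100755", it is the executable bit
        git diff kernelmode/linux-2.6.33/Documentation/IO-mapping.txt
        git config core.fileMode false

        # if the diff touches every line but the text looks identical, suspect CRLF/LF
        git config core.autocrlf false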

    Read the article

  • Catch all exceptions in Scala 2.8 RC1

    - by Michel Krämer
    I have the following dummy Scala code in the file test.scala:

        class Transaction {
          def begin() {}
          def commit() {}
          def rollback() {}
        }

        object Test extends Application {
          def doSomething() {}

          val t = new Transaction()
          t.begin()
          try {
            doSomething()
            t.commit()
          } catch {
            case _ => t.rollback()
          }
        }

    If I compile this on Scala 2.8 RC1 with scalac -Xstrict-warnings test.scala I'll get the following warning:

        test.scala:16: warning: catch clause swallows everything: not advised.
            case _ => t.rollback()
            ^
        one warning found

    So, if catch-all expressions are not advised, how am I supposed to implement such a pattern instead? And apart from that, why are such expressions not advised anyhow?

    Read the article

  • Kubuntu/Windows 7 dual-boot and git

    - by Andu
    I've been using Kubuntu and Windows 7 on my laptop for some time. Recently I also started using git to keep track of a project I'm working on. At first I thought I'd use the same git repo for editing from both Kubuntu and Windows, but I soon discovered that committing changes on Windows makes git on Kubuntu think all the files have changed since the last commit, although the change doesn't seem to be content-related. Exactly the same thing happens if I commit on Kubuntu and right after that do a git status on Windows. I know I could use different repos for Kubuntu and Windows and just merge them together when I'm done, but if anyone knows how I could use the same repo I would really appreciate the help.
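
    A sketch of the settings usually behind cross-OS "everything changed" diffs (an assumption about this particular repo, not a confirmed diagnosis): NTFS has no Unix execute bit and Windows tools may rewrite line endings, so git sees mode or CRLF differences in otherwise identical files. Note that a repository shared between the two systems also shares its .git/config, so per-OS choices belong in each system's global config.

        # in the shared repository (applies on both systems):
        git config core.fileMode false           # ignore executable-bit differences

        # per OS, in each system's own ~/.gitconfig:
        git config --global core.autocrlf true   # when running on Windows
        git config --global core.autocrlf input  # when running on Kubuntu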

    Read the article

  • Cherrypicking versus Rebasing

    - by Lakshman Prasad
    The following is a scenario I commonly face: I have a set of commits on master or design that I want to put on top of the production branch. I tend to:

        1. create a new branch based on production
        2. cherry-pick these commits onto it
        3. merge it to production

    Then, when I merge master to production, I face merge conflicts, because even though the changes are the same, they are registered as different commits because of the cherry-pick. I have found some workarounds to deal with this, all of which are laborious and could be termed "hacks". Although I haven't done too much rebasing, I believe that it too creates new commit hashes. Should I be using rebasing where I am cherry-picking? What other advantages does it have over this approach?
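
    For comparison, a sketch of the rebase equivalent (hedged: 'for-production' is a made-up branch name, and it assumes the commits to move are the last three on master). Rebase is essentially the cherry-pick loop run as one command, so it produces new hashes in exactly the same way.

        # copy the commits onto production without moving master itself
        git checkout -b for-production master
        git rebase --onto production master~3 for-production

        # fast-forward production to the result
        git checkout production
        git merge --ff-only for-production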

    Read the article

  • AnkhSVN, msysgit and Pageant

    - by Chalkey
    I have recently installed msysgit on my machine (it's running Windows 7) to use Git for some projects. A lot of my projects are under SVN, for which I use AnkhSVN in Visual Studio 2008 to commit, etc. Since I installed msysgit, every time I try to commit, update, etc. inside Visual Studio, the program C:\msysgit\bin\ssh.exe loads up and asks for my password, and then Ankh throws an exception. I currently use Pageant to save my login credentials for SVN. I have TortoiseSVN installed, which is still working fine. Has anybody got any suggestions for getting Ankh working again without uninstalling msysgit? Thanks

    Read the article

  • Git strange behaviour

    - by pocoa
        git status
        # On branch master
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   readme.txt
        #       modified:   requirements.txt
        #
        no changes added to commit (use "git add" and/or "git commit -a")

    I didn't make any changes to those files, but I'm getting this message even if I try:

        git checkout -- readme.txt
        git checkout -- requirements.txt

    When I run git diff it shows the whole file as updated, but the contents are the same. I tried to delete the files and check them out again, but it didn't work.

    Read the article

  • Is there an ftp plugin for gedit that will let me work locally?

    - by RobertWHurst
    I'm trying to switch from a Windows environment to Linux. I'm primarily a PHP developer, but I know quite a bit about other languages such as CSS, XHTML and Javascript. I need a way of editing my files locally because I work in a git repository and need to commit my saves. On Windows I used Aptana and PDT: I'd save my files, upload via Aptana, then commit my work with git. I need to get a workflow going on my Linux machine now. If you know a better way to do this, let me know; however, my real question is: is there a plugin that allows gedit to upload files instead of working remotely?

    Read the article

  • Baseline / Benchmark Physical and virtual server performance

    - by EyeonTech
    I am setting up a new server and there are some options. I want to perform some benchmarks, and I need your help in determining the best tools and, if possible, pre-configured benchmarks designed for SQL servers on Windows Server 2008/2012.

    Step 1: run a performance monitor on the current live SQL server (a Windows Server 2008 virtual machine running on ESXi).

    New server hardware rundown:

        Intel® Server System R1304BTLSHBN - 1U Rack, LGA1155 (http://ark.intel.com/products/53559/Intel-Server-System-R1304BTLSHBN)
        Intel Xeon E3-1270V2
        2x Intel SSD 330 Series 240GB 2.5in SATA 6Gb/s 25nm
        1x WD 2TB WD2002FAEX 2TB 64M SATA3 Caviar Black
        4x 8GB 1333MHz DDR3 ECC CL9 DIMM

    There are several options for configurations, and I want to benchmark some of them and share the results.

    Option 1: configure the 2x SSDs as RAID 0. Install Windows Server 2008 directly to the 2TB WD Caviar HDD. Store database files on the RAID 0 volume. Benchmark the OS directly on the hardware as an SQL server. Store SQL backup databases on the 2TB WD Caviar HDD.

    Option 2: configure the 2x SSDs as RAID 0. Install Windows Server 2012 directly to the 2TB WD Caviar HDD. Install Hyper-V. Install the SQL server (Server 2008) as a virtual machine. Store the virtual hard disks on the SSDs.

    Option 3: configure the 2x SSDs as RAID 0. Install VMware ESXi on a partition of the 2TB WD Caviar HDD. Install the SQL server (Server 2008) as a virtual machine. Store the virtual hard disks on the SSDs.

    I have a few tools in mind from http://technet.microsoft.com/en-us/library/cc768530(v=bts.10).aspx. Any tools with pre-configured tests would be fantastic, specifically if there are pre-configured perfmon sets available. Any opinions on the setup to gain the best results are welcome. Thanks in advance.

    Read the article

  • SVN Mac OS X issue - permissions?

    - by Steve Griff
    Hello there, /Volumes/sites is a connection to a Samba share that hosts some of our sites. We authorise using a username and password, the same ones used to log onto the Mac. When committing (or even doing a cleanup) from the Mac client side using the svn command-line tool or SCPlugin, this error occurs:

        Commit succeeded, but other errors follow:
        Error bumping revisions post-commit (details follow):
        In directory '/Volumes/sites/foobar/public_html'
        Error processing command 'committed' in '/Volumes/sites/foobar/public_html'
        Error replacing text-base of 'index.php'
        Can't move '/Volumes/sites/foobar/public_html/.svn/tmp/text-base/index.php.svn-base' to '/Volumes/sites/foobar/public_html/.svn/text-base/index.php.svn-base': Operation not permitted

    Any ideas? I think it's to do with permissions on the Mac side not being able to move files around on the Samba share. Apologies if my question is a bit vague; if there's any extra information I can give, please shout. Regards, Steve

    Read the article

  • git rebase branch with all subbranches

    - by knittl
    Is it possible to rebase a branch with all its sub-branches in git? I often use branches as quick/mutable tags to mark certain commits:

        * master
        *
        * featureA-finished
        *
        * origin/master

    Now I want to rebase -i master onto origin/master, to change/reword the commit featureA-finished^. After git rebase -i --onto origin/master origin/master master, I basically want the history to be:

        * master
        *
        * featureA-finished
        * (changed/reworded)
        * origin/master

    but what I get is:

        * master
        *
        * (same changeset as featureA-finished)
        * (changed/reworded)
        | * featureA-finished
        | * (original commit I wanted to edit)
        * origin/master

    Is there a way around it, or am I stuck with recreating the branches on the new rebased commits?
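
    A sketch of the usual manual fix (hedged, and it assumes the example history above, where featureA-finished sits two commits below the rebased master): branch refs do not follow a rebase, so instead of recreating the commits, re-point the label at the corresponding rewritten commit.

        git rebase -i --onto origin/master origin/master master
        # after the rebase, the rewritten counterpart of featureA-finished is master~2
        git branch -f featureA-finished master~2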

    Read the article

  • In MS SQL Server, is there a way to "atomically" increment a column being used as a counter?

    - by Dan P
    Assuming a Read Committed Snapshot transaction isolation setting, is the following statement "atomic" in the sense that you won't ever "lose" a concurrent increment?

        update mytable set counter = counter + 1

    I would assume that in the general case, where this update statement is part of a larger transaction, it wouldn't be. For example, I think this scenario is possible:

        1. update the counter within transaction #1
        2. do some other stuff in transaction #1
        3. update the counter within transaction #2
        4. commit transaction #2
        5. commit transaction #1

    In this situation, wouldn't the counter end up only being incremented by 1? Does it make a difference if that is the only statement in a transaction? How does a site like stackoverflow handle this for its question view counter? Or is the possibility of "losing" some increments just considered acceptable?

    Read the article

  • What container type provides better (average) performance than std::map?

    - by Truncheon
    In the following example a std::map structure is filled with 26 values, using A - Z for the key and 0 - 25 for the value. The time taken (on my system) to look up the last entry (10000000 times) is roughly 250 ms for the vector and 125 ms for the map. (I compiled in release mode, with the O3 option turned on, for g++ 4.4.)

    But if for some odd reason I wanted better performance than the std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience with the performance-critical aspects of C++ programming.

    UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real-world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps a map) to store the variable names created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names.

        #include <ctime>
        #include <map>
        #include <vector>
        #include <iostream>

        struct mystruct
        {
            char key;
            int value;
            mystruct(char k = 0, int v = 0) : key(k), value(v) { }
        };

        int find(const std::vector<mystruct>& ref, char key)
        {
            for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i)
                if (i->key == key)
                    return i->value;
            return -1;
        }

        int main()
        {
            std::map<char, int> mymap;
            std::vector<mystruct> myvec;

            for (int i = 'a'; i < 'a' + 26; ++i)
            {
                mymap[i] = i - 'a';
                myvec.push_back(mystruct(i, i - 'a'));
            }

            int pre = clock();
            for (int i = 0; i < 10000000; ++i)
                find(myvec, 'z');
            std::cout << "linear scan: milli " << clock() - pre << "\n";

            pre = clock();
            for (int i = 0; i < 10000000; ++i)
                mymap['z'];
            std::cout << "map scan: milli " << clock() - pre << "\n";

            return 0;
        }

    Read the article

  • How to tag and go to a tag in hg

    - by michael
    Hi, from here, it says that 'hg tag 1.0' gives my hg repository a tag name: http://wiki.pylonshq.com/display/pylonscookbook/Mercurial+for+Subversion+Users How can I switch my repository to that tag name?

        $ hg tag myTag1.0
        $
        $ hg commit -m "a message"
        $ hg

    How do I go back to that tag? And if I make a new 'hg commit' there, what will happen? Will it go to a branch of myTag1.0, or will it stay on the default branch? Thank you.
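
    A sketch of the usual commands, from memory rather than from the linked page:

        hg update -r myTag1.0     # put the working copy at the tagged revision

        # committing from here does not move to a branch called myTag1.0; tags are
        # just labels, so the new commit becomes another head on the current
        # (default) branch
        hg commit -m "change made on top of the tag"
        hg heads                  # now lists two heads on the default branch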

    Read the article

  • How to create a backup from SqlAlchemy?

    - by swilliams
    I'm writing a Pylons app, and am trying to create a simple backup system where every table is serialized and tarred up into a single file for an administrator to download, and use to restore the app should something bad happen. I can serialize my table data just fine using the SqlAlchemy serializer, and I can deserialize it fine as well, but I can't figure out how to commit those changes back to the database. In order to serialize my data I am doing this:

        from myproject.model.meta import Session
        from sqlalchemy.ext.serializer import loads, dumps

        q = Session.query(MyTable)
        serialized_data = dumps(q.all())

    In order to test things out, I go ahead and truncate MyTable, and then attempt to restore using serialized_data:

        from myproject.model import meta
        restore_q = loads(serialized_data, meta.metadata, Session)

    This doesn't seem to do anything... I've tried calling Session.commit after the fact and individually walking through all the objects in restore_q and adding them, but nothing seems to work. What am I missing? Or is there a better way to do what I'm aiming for? I don't want to shell out and directly touch the database, since SqlAlchemy supports different database engines.

    Read the article
