Search Results

Search found 6455 results on 259 pages for 'master james'.

  • TeamCity and pending Git merge branch commit keeps build with failed tests

    - by Vladimir
    We use TeamCity for continuous integration and Git for source control. Generally it works pretty well - convenient, modern, and it gives us quick feedback when tests fail. But there is a strange behavior related to the specifics of Git merges. Here are the steps of the case:

    1. First developer pulls from the master repo.
    2. Second developer pulls from the master repo.
    3. First developer makes commit A locally.
    4. Second developer makes commit B locally.
    5. Second developer pushes commit B.
    6. First developer wants to push commit A but can't, because he has to pull commit B first.
    7. First developer pulls from the remote repository.
    8. First developer pushes commit A plus the generated merge commit.

    The history of commits in the master repo is now: B (second developer), A (first developer), merge commit (first developer). Now let's assume the second developer fixed some failing tests in his commit B. What TeamCity then does is the following: when commit B arrives, TeamCity makes build #1 and all tests pass; when commit A arrives, TeamCity makes build #2 (without commit B) and the test bar turns red! TeamCity decides that the pending "merge branch" commit doesn't contain any changes (no new files) - but it actually does contain the merge of commit B - so TeamCity doesn't make a new build for it and the tests never turn green again.

    So there are two problems: 1. failed tests come back with the second build (commit A), and 2. TeamCity doesn't make a new build that would turn the tests green again. Does anybody know how to fix both of these problems? I'm looking for a reasonable general approach.
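
    One general workaround (a suggestion, not part of the original question): rebase instead of merging when pushing, so the history stays linear and every commit TeamCity builds already contains its predecessors:

    ```sh
    # First developer, instead of a plain "git pull" before pushing commit A:
    git pull --rebase origin master   # replay local commit A on top of B
    git push origin master            # linear history: B, then A'
    ```

    With no merge commit to skip, build #2 (A replayed on top of B) includes the test fixes from commit B.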

  • shoulda macros with rspec2 beta 5 and rails3 beta2

    - by Millisami
    I've set up RSpec 2 beta 5 and shoulda as follows, to use shoulda macros inside RSpec model tests.

    Gemfile:

    ```ruby
    group :test do
      gem "rspec", ">= 2.0.0.beta.4"
      gem "rspec-rails", ">= 2.0.0.beta.4"
      gem 'shoulda', :git => 'git://github.com/bmaddy/shoulda.git'
      gem "faker"
      gem "machinist"
      gem "pickle", :git => 'git://github.com/codegram/pickle.git'
      gem 'capybara', :git => 'git://github.com/jnicklas/capybara.git'
      gem 'database_cleaner', :git => 'git://github.com/bmabey/database_cleaner.git'
      gem 'cucumber-rails', :git => 'git://github.com/aslakhellesoy/cucumber-rails.git'
    end
    ```

    spec_helper.rb:

    ```ruby
    Dir["#{File.dirname(__FILE__)}/support/**/*.rb"].each {|f| require f}
    require 'shoulda'

    Rspec.configure do |config|
    ```

    spec/models/outlet_spec.rb:

    ```ruby
    require 'spec_helper'

    describe Outlet do
      it { should validate_presence_of(:name) }
    end
    ```

    And when I run the spec, I get the following error:

    ```
    [~/rails_apps/rails3_apps/automation (master)?] ? spec spec/models/outlet_spec.rb
    DEPRECATION WARNING: RAILS_ROOT is deprecated! Use Rails.root instead.
    (called from join at /home/millisami/.rvm/gems/ruby-1.9.1-p378%rails3/bundler/gems/shoulda-87e75311f83548760114cd4188afa4f83fecdc22-master/lib/shoulda/autoload_macros.rb:40)
    F

    1) Outlet
       Failure/Error: it { should validate_presence_of(:name) }
       undefined method `validate_presence_of' for #<Rspec::Core::ExampleGroup::Nested_1:0xc4dc138 @__memoized={}>
       # ./spec/models/outlet_spec.rb:4:in `block (2 levels) in <top (required)>'

    Finished in 0.0399 seconds
    1 example, 1 failures
    ```

    Why the "undefined method"? Is shoulda getting loaded at all?
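
    A hedged guess (not from the original question): the gem may load fine while its matcher modules are never mixed into the RSpec example groups. Assuming the fork in use exposes the matchers under the module names later shoulda releases use, the explicit include would look like this in spec_helper.rb:

    ```ruby
    # assumption: the matcher modules exist under these names in the installed fork
    Rspec.configure do |config|
      config.include Shoulda::Matchers::ActiveRecord
      config.include Shoulda::Matchers::ActiveModel
    end
    ```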

  • How to set a define inside other define

    - by João Madureira Pires
    Hi all! I'm developing a web application in JBoss, Seam, and RichFaces. I'm using an XHTML template as the master page of all the others, and there I set two insert tags:

    ```xml
    <ui:insert name="head"/>
    <ui:insert name="body"/>
    ```

    The problem is that in the pages that use this master page as a template, the `<ui:define name="head">...</ui:define>` must be defined inside the `<ui:define name="body">...</ui:define>`. How can I do this? Basically, what I want is the following:

    ```xml
    <ui:define name="body">...
      <ui:define name="head">
        <meta name="title" content="#{something.title}" />
      </ui:define>
    ...</ui:define>
    ```

    The master page must then render `<meta name="title" content="#{something.title}" />` at the `<ui:insert name="head"/>`. Thanks in advance.
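
    A hedged sketch (not from the original question): Facelets ignores a ui:define nested inside another ui:define, so instead of nesting, the value can be passed up to the template with the standard ui:param tag and rendered by the template itself; pageTitle is an assumed parameter name:

    ```xml
    <!-- client page: pass the title up as a parameter -->
    <ui:composition template="master.xhtml"
                    xmlns:ui="http://java.sun.com/jsf/facelets">
      <ui:param name="pageTitle" value="#{something.title}"/>
      <ui:define name="body">
        ...
      </ui:define>
    </ui:composition>

    <!-- master template: render the meta tag from the parameter -->
    <ui:insert name="head">
      <meta name="title" content="#{pageTitle}"/>
    </ui:insert>
    ```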

  • Pushing to bare Git repository (remote) causes it to stop being bare

    - by NSD
    I have a local repository called TestRepo. I clone it with the --bare option, zip this clone up, and throw it on my server. Unzip it, and it's still bare. I then clone the bare remote repository locally over ssh with something like:

    ```sh
    git clone ssh://[email protected]/~/TestRepo.git TestRepoCloned
    ```

    The local TestRepoCloned is not bare and has a remote called "origin." It appears to be tracking correctly from the looks of its config file:

    ```ini
    [core]
        repositoryformatversion = 0
        filemode = true
        bare = false
        logallrefupdates = true
        ignorecase = true
    [remote "origin"]
        fetch = +refs/heads/*:refs/remotes/origin/*
        url = ssh://[email protected]/~/TestRepo.git
    [branch "master"]
        remote = origin
        merge = refs/heads/master
    ```

    I edit an existing file. I commit the change to the current branch (master) via `git commit -a -m "Edited a file."` The commit succeeds and all is well. I decide to push this change to the remote repository over SSH with a `git push`. The remote repository is now no longer bare, but has a complete working directory, and I get continuous error messages on all further attempts to push to it. Everything I've read seems to suggest that what I'm doing is correct, but it simply is not working. How am I supposed to push changes to a bare remote repo and actually keep it bare?
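
    A hedged observation (not part of the original question): a push by itself never creates a working tree in a genuinely bare repository, so the usual culprit is that the server copy ended up with `core.bare = false` (for example, if the wrong directory was zipped or the config was regenerated). Checking and forcing it back on the server would look like:

    ```sh
    # on the server, inside TestRepo.git
    git config core.bare              # should print "true" for a bare repo
    git config --bool core.bare true  # force it back to bare if it isn't
    ```

    If a stray working tree has already been checked out next to the Git metadata, those files would need to be cleaned up by hand as well.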

  • git workflow incorporating many, but not all commits from many forks

    - by becomingGuru
    I have a git repo. It has been forked several times and many independent commits have been made on top of it - everything normal, like what happens in many github-hosted projects. Now, what exact workflow should I follow if I want to see all those commits individually and apply the ones I like?

    The workflow I followed, which is not optimal, was to create a branch named after each github username, merge the changes into my master, and manually undo the changes from the commits I don't need (there were not many, so it worked). What I want is the ability to see all commits from the different forks individually, then cherry-pick and apply them on top of my master. What is the workflow to follow for that? And what GUI (gitk?) lets me see all the different individual commits?

    I realize that merge should be a primary part of the workflow rather than cherry-pick, since cherry-picking creates a different commit (from git's point of view). Even rebasing others' changes on top of mine might not preserve the history on the graph to indicate that it is their commits I have rebased. So then, how do I ignore just a few commits out of a lot of them? I think github should have an "apply this commit on top of my master" button in their graph after each commit node, so I could just pull it after doing all that.
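
    A hedged sketch of one such workflow (the remote name here is hypothetical): add each fork as a remote, list the commits that exist only in that fork, inspect them in gitk, and cherry-pick the ones worth keeping:

    ```sh
    git remote add alice git://github.com/alice/project.git
    git fetch alice
    git log --oneline master..alice/master   # commits unique to alice's fork
    gitk master alice/master &               # view both histories graphically
    git cherry-pick <sha>                    # apply a chosen commit onto master
    ```

    Cherry-picked commits do get new SHAs, but `git cherry-pick -x` at least appends a "(cherry picked from commit ...)" line to record where each one came from.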

  • PL/SQL - How to pull data from 3 tables based on latest created date

    - by Nancy
    Hello, I'm hoping someone can help me as I've been stuck on this problem for a few days now. Basically I'm trying to pull data from 3 tables in Oracle: 1) an Orders table, 2) a Vendor table, and 3) a Master Data table. Here's what the 3 tables look like:

    Table 1: BIZ_DOC2 (Orders table)
    - OBJECTID (unique key)
    - UNIQUE_DOC_NAME (document name, i.e. ORD-005)
    - CREATED_AT (date the order was created)

    Table 2: UDEF_VENDOR (Vendors table)
    - PARENT_OBJECT_ID (this matches up to the OBJECTID in the Orders table)
    - VENDOR_OBJECT_NAME (this is the name of the vendor, i.e. Acme)

    Table 3: BIZ_UNIT (Master Data table)
    - PARENT_OBJECT_ID (this matches up to the OBJECTID in the Orders table)
    - BIZ_UNIT_OBJECT_NAME (this is the name of the business unit, i.e. widget A, widget B)

    Note: the Vendors table and the Master Data table have no link between them except through the Orders table. I can join all of the data from the tables and it looks something like this, before selecting the latest order date:

    ORD-005 | Widget A | Acme | 3/14/10
    ORD-005 | Widget B | Acme | 3/14/10
    ORD-004 | Widget C | Acme | 3/10/10

    Ideally I'd like to return the latest order for each vendor. However, each order may contain multiple business units (e.g. types of widgets), so if a vendor's latest record is ORD-005 and the order contains 2 business units, here's what the result set should look like with the columns UNIQUE_DOC_NAME, BIZ_UNIT_OBJECT_NAME, VENDOR_OBJECT_NAME, CREATED_AT:

    ORD-005 | Widget A | Acme | 3/14/10
    ORD-005 | Widget B | Acme | 3/14/10

    I tried using SELECT MAX and several variations of sub-queries, but I just can't seem to get it working. Any help would be hugely appreciated!
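
    A hedged sketch of one way to do it (table and column names taken from the question; the join conditions are assumed): rank each vendor's rows by order date with an analytic function, then keep only the top-ranked order. DENSE_RANK rather than ROW_NUMBER keeps all business-unit rows of the latest order:

    ```sql
    SELECT unique_doc_name, biz_unit_object_name, vendor_object_name, created_at
    FROM (
        SELECT o.unique_doc_name,
               b.biz_unit_object_name,
               v.vendor_object_name,
               o.created_at,
               DENSE_RANK() OVER (PARTITION BY v.vendor_object_name
                                  ORDER BY o.created_at DESC) AS rnk
        FROM   biz_doc2    o
        JOIN   udef_vendor v ON v.parent_object_id = o.objectid
        JOIN   biz_unit    b ON b.parent_object_id = o.objectid
    )
    WHERE rnk = 1;
    ```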

  • git merge with renamed files

    - by Kevin
    I have a large website that I am moving into a new framework, and in the process adding git. The current site doesn't have any version control on it. I started by copying the site into a new git repository. I made a new branch and made all of the changes that were needed to make it work with the new framework. One of those steps was changing the file extension of all of the pages.

    Now, in the time that I have been working on the new site, changes have been made to files on the old site. So I switched to master and copied all of those changes in. The problem is that when I merge the branch with the new framework back onto master, there is a conflict on every file that was changed on the master branch. I wouldn't be too worried about it, but there are a couple of hundred files with changes. I have tried `git rebase` and `git rebase --merge` with no luck. How can I merge these 2 branches without dealing with every file?
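
    A hedged suggestion (not from the original question, branch and file names are placeholders): git pairs up renamed files at merge time by content similarity, so lowering the similarity threshold can make it match the old-extension file on master with the new-extension file on the branch and merge the edits automatically. Where conflicts remain, one side can be taken wholesale per file:

    ```sh
    # try the merge with more aggressive rename detection (git >= 1.7.4)
    git merge -s recursive -X rename-threshold=25% new-framework

    # for a remaining conflicted file, keep one side and mark it resolved
    git checkout --theirs -- path/to/file   # take the branch's version
    git add path/to/file
    ```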

  • Heroku push rejected, failed to install gems via Bundler

    - by ismaelsow
    Hi everybody! I am struggling to push my code to Heroku, and after searching through Google and Stack Overflow questions, I have not been able to find the solution. Here is what I get when I try `git push heroku master`:

    ```
    Heroku receiving push
    -----> Rails app detected
    -----> Detected Rails is not set to serve static_assets
           Installing rails3_serve_static_assets... done
    -----> Gemfile detected, running Bundler version 1.0.3
           Unresolved dependencies detected; Installing...
           Fetching source index for http://rubygems.org/
           /usr/ruby1.8.7/lib/ruby/site_ruby/1.8/rubygems/remote_fetcher.rb:300:in `open_uri_or_path': bad response Not Found 404 (http://rubygems.org/quick/Marshal.4.8/mail-2.2.6.001.gemspec.rz) (Gem::RemoteFetcher::FetchError)
           from /usr/ruby1.8.7/lib/ruby/site_ruby/1.8/rubygems/remote_fetcher.rb:172:in `fetch_path'
           ...
    ```

    And finally:

    ```
    FAILED: http://docs.heroku.com/bundler
    ! Heroku push rejected, failed to install gems via Bundler
    error: hooks/pre-receive exited with error code 1
    To [email protected]:myapp.git
    ! [remote rejected] master -> master (pre-receive hook declined)
    error: failed to push some refs to '[email protected]:myapp.git'
    ```

    Thanks for your help!
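
    A hedged reading of the log (not from the original post): the 404 is for mail-2.2.6.001, a gem version that rubygems.org does not serve, so the resolved dependency set is pinned to a bad version. Assuming the app does not need that exact point release, one sketch of a fix is to pin mail to a version that exists and regenerate the lock file:

    ```ruby
    # Gemfile - assumed change; pick any mail version that rubygems.org serves
    gem 'mail', '2.2.5'
    ```

    Then run `bundle install` locally, commit the updated Gemfile and Gemfile.lock, and push to Heroku again.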

  • 1k of Program Space, 64 bytes of RAM. Is assembly an absolute must?

    - by Earlz
    (If you're lazy, see the bottom for a TL;DR.)

    Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings.

    So my plan is to have an Arduino as the "master" of an array of ATtinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATtiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing it for price as well as size.)

    The ATtinys in my system will not do much. Basically, all they will do is wait for a signal from the master, read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if things are that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far, then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints?

    TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.
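
    Not an answer from the thread, just a feasibility sketch in plain AVR C (the pin, clock, and bit period are all assumptions): a UART-style bit-banged transmitter over a single pulled-up line is only a couple dozen instructions, which suggests 1k of flash is plenty without assembly:

    ```c
    #define F_CPU 1200000UL        /* assumed: ATtiny13A at 9.6 MHz / 8 prescaler */
    #include <avr/io.h>
    #include <util/delay.h>

    #define DATA_PIN PB0           /* hypothetical data line, idles high via pull-up */
    #define BIT_US   100           /* assumed bit period agreed with the master */

    static void send_byte(uint8_t b)
    {
        DDRB |= _BV(DATA_PIN);               /* take over the line */
        PORTB &= ~_BV(DATA_PIN);             /* start bit: pull low for one period */
        _delay_us(BIT_US);
        for (uint8_t i = 0; i < 8; i++) {    /* data bits, LSB first */
            if (b & 1) PORTB |= _BV(DATA_PIN);
            else       PORTB &= ~_BV(DATA_PIN);
            b >>= 1;
            _delay_us(BIT_US);
        }
        PORTB |= _BV(DATA_PIN);              /* stop bit: back to idle-high */
        _delay_us(BIT_US);
        DDRB &= ~_BV(DATA_PIN);              /* release the line to the pull-up */
    }
    ```

    Compiled with avr-gcc -Os, a routine like this typically lands well under 100 bytes; the Wiring library's 8k footprint comes from everything else it drags in, not from I/O of this kind.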

  • System.out.println() does not operate in Akka actor

    - by faisal abdulai
    I am kind of baffled by this encounter. I am working on an Akka project that was created as a Maven project and imported into Eclipse using the `mvn eclipse:eclipse` command. The Akka actor has System.out.println calls, just to make it easy to follow which functions and methods are invoked. However, whenever I run the Akka system, the println calls do not print anything to the Eclipse console, yet I do not get any error messages either. Does anyone have any idea about this? Below is a code snippet:

    ```java
    public class MasterActor extends UntypedActor {

        ActorSystem system = ActorSystem.create("container");
        ActorRef worker1;

        //public MasterActor(){}

        @Override
        public void onReceive(Object message) throws Exception {
            System.out.println(" Master Actor 5");
            if (message instanceof GesturePoints) {
                //GesturePoints gp = (GesturePoints) message;
                System.out.println(" Master Actor 1");
                try {
                    worker1.tell(message, getSelf());
                    System.out.println(" Master Actor 2");
                } catch (Exception e) {
                    getSender().tell(new akka.actor.Status.Failure(e), getSelf());
                    throw e;
                }
            } else {
                unhandled(message);
            }
        }

        public void preStart() {
            worker1 = getContext().actorFor("akka://[email protected]:2553/user/workerActor");
        }
    }
    ```

    I don't know whether it is a bug in Eclipse. Thank you.
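
    A hedged observation (not from the original post): the worker is looked up at a remote address ([email protected]:2553), so any println that ends up executing in that remote JVM writes to that process's console, not to the Eclipse console of the locally launched JVM. Routing output through Akka's logging makes the destination explicit and timestamps each line:

    ```java
    import akka.event.Logging;
    import akka.event.LoggingAdapter;

    public class MasterActor extends UntypedActor {
        // logger bound to the actor system this actor actually runs in
        private final LoggingAdapter log = Logging.getLogger(getContext().system(), this);

        @Override
        public void onReceive(Object message) throws Exception {
            log.info("Master Actor received: {}", message);
            // ... rest of the handler unchanged
        }
    }
    ```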

  • Can I rename LOCAL, REMOTE and BASE as used in git mergetool?

    - by carleeto
    Let's say I'm doing a rebase of a branch onto master and there's a conflict. git opens up the default merge tool with 3 files as input: file.LOCAL, file.BASE, and file.REMOTE (they're named a little differently, but LOCAL, BASE and REMOTE are in the file names and are how they are distinguished). Now, according to the mergetool man page:

    $LOCAL is set to the name of a temporary file containing the contents of the file on the current branch; $REMOTE is set to the name of a temporary file containing the contents of the file to be merged, and $BASE is set to the name of a temporary file containing the common base for the merge.

    That really does not make sense to me. LOCAL is the current state of the branch; where I get lost is BASE and REMOTE. So my question is: is it possible to make git use the branch name instead of LOCAL, and similarly more meaningful names than BASE and REMOTE? For example, if the branch name is FeatureX and BASE is the file as it exists in master, is there a way to get git to substitute FeatureX for LOCAL and master for BASE, so that it is more apparent where each version is coming from? This is especially a problem when doing a rebase.
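
    Not a confirmed answer, but a sketch of one workaround: git only exposes the three versions through $LOCAL/$BASE/$REMOTE, yet a custom mergetool command can attach its own pane labels before the tool opens. kdiff3's real --L1/--L2/--L3 options do exactly that (the label strings here are a choice, not git-provided names):

    ```ini
    [merge]
        tool = kdiff3labeled
    [mergetool "kdiff3labeled"]
        cmd = kdiff3 "$BASE" "$LOCAL" "$REMOTE" -o "$MERGED" --L1 base --L2 ours --L3 theirs
    ```

    One caveat worth knowing: during a rebase of FeatureX onto master, the "current branch" is the replay in progress, so LOCAL is actually the master side and REMOTE is the FeatureX commit being reapplied - the reverse of what the names suggest.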

  • Microsoft Reporting: Setting subreport parameters in code

    - by Svish
    How can I set a parameter of a sub-report? I have successfully hooked myself up to the SubreportProcessing event; I can find the correct sub-report through e.ReportPath, and I can add data sources through e.DataSources.Add. But I find no way of adding report parameters.

    I have found people suggesting to add them to the master report, but I don't really want to do that, since the master report shouldn't have to be connected to the sub-report at all, other than that it is wrapping the sub-report. I am using one report as a master template, printing the name of the report, page numbers, etc., and the sub-report is going to be the report itself. If I could only find a way to set those report parameters of the sub-report, I would be good to go...

    Clarification: creating/defining the parameters is not the problem. The problem is setting their values. I thought the natural thing to do was to do it in the SubreportProcessing event, and the SubreportProcessingEventArgs do in fact have a Parameters property - but it is read-only! So how do you use that? How can I set their values?
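
    Not a confirmed answer, just a sketch of the commonly suggested pattern: because e.Parameters is read-only, values are mapped onto the subreport item in the report definition (in the designer, the subreport element's Parameters list binds each subreport parameter to a field or expression of the parent), while the event handler only inspects values and supplies data. "ReportTitle", "SubReportDataSet", and LoadData below are illustrative names, not part of the reporting API:

    ```csharp
    // hypothetical handler; SubreportProcessingEventArgs and ReportDataSource
    // are the real Microsoft reporting types
    void LocalReport_SubreportProcessing(object sender, SubreportProcessingEventArgs e)
    {
        // read-only inspection of what the definition mapped in:
        string title = e.Parameters["ReportTitle"].Values[0];

        // the handler's writable job is providing the subreport's data
        e.DataSources.Add(new ReportDataSource("SubReportDataSet", LoadData(title)));
    }
    ```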

  • Importing hibernate configuration file into Spring applicationContext

    - by Himanshu Yadav
    I am trying to integrate Hibernate 3 with Spring 3.1.0. The problem is that the application is not able to find the mapping files which are declared in the hibernate.cfg.xml file. Initially the Hibernate configuration held the datasource configuration, Hibernate properties, and the mapping hbm.xml files. The master hibernate.cfg.xml file exists in the src folder. This is how the master file looks:

    ```xml
    <hibernate-configuration>
      <session-factory>
        <!-- Mappings -->
        <mapping resource="com/test/class1.hbm.xml"/>
        <mapping resource="/class2.hbm.xml"/>
        <mapping resource="com/test/class3.hbm.xml"/>
        <mapping resource="com/test/class4.hbm.xml"/>
        <mapping resource="com/test/class5.hbm.xml"/>
    ```

    The Spring config is:

    ```xml
    <bean id="sessionFactoryEditSolution" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
      <property name="dataSource" ref="data1"/>
      <property name="mappingResources">
        <list>
          <value>/master.hibernate.cfg.xml</value>
        </list>
      </property>
      <property name="hibernateProperties">
        <props>
          <prop key="hibernate.dialect">org.hibernate.dialect.SQLServerDialect</prop>
          <prop key="hibernate.cache.use_second_level_cache">true</prop>
        </props>
      </property>
    </bean>
    ```
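
    A hedged diagnosis (not from the original post): mappingResources expects hbm.xml mapping files, not a hibernate.cfg.xml, so the cfg.xml is being treated as a mapping and its own `<mapping>` entries never load. LocalSessionFactoryBean has a separate configLocation property for exactly this; a sketch of the corrected bean:

    ```xml
    <bean id="sessionFactoryEditSolution"
          class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
      <property name="dataSource" ref="data1"/>
      <!-- point at the cfg.xml so its <mapping> declarations are honored -->
      <property name="configLocation" value="classpath:hibernate.cfg.xml"/>
      <property name="hibernateProperties">
        <props>
          <prop key="hibernate.dialect">org.hibernate.dialect.SQLServerDialect</prop>
          <prop key="hibernate.cache.use_second_level_cache">true</prop>
        </props>
      </property>
    </bean>
    ```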

  • asp:login form does not submit when you hit enter

    - by Ben Liyanage
    I am having an issue while using the `<asp:login>` tag. When a user clicks the "login" button, the form processes correctly. However, when the user hits the Enter key, the form submits itself but does not process the login, whether the information entered was correct or not. I am using a combination of master pages and Umbraco. My aspx code looks like this:

    ```aspx
    <%@ Master Language="C#" MasterPageFile="/masterpages/AccountCenter.master"
        CodeFile="~/masterpages/Login.master.cs" Inherits="LoginPage" AutoEventWireup="true" %>
    <asp:Content ContentPlaceHolderID="RunwayMasterContentPlaceHolder" runat="server">
      <div class="loginBox">
        <div class="AspNet-Login-TitlePanel">Account Center Login</div>
        <asp:label id="output" runat="server"></asp:label>
        <asp:GridView runat="server" id="GridResults" AutoGenerateColumns="true"></asp:GridView>
        <asp:Login destinationpageurl="~/dashboard.aspx" ID="Login1" OnLoggedIn="onLogin" runat="server"
            TitleText="" FailureText="The login/password combination you provided is invalid."
            DisplayRememberMe="false"></asp:Login>
      </div>
    </asp:Content>
    ```

    In the actual rendered page, I see this JavaScript on the form:

    ```html
    <form method="post" action="/dashboard.aspx?" onsubmit="javascript:return WebForm_OnSubmit();" id="aspnetForm">
    ```

    That JavaScript function is defined as:

    ```html
    <script type="text/javascript">
    //<![CDATA[
    function WebForm_OnSubmit() {
        if (typeof(ValidatorOnSubmit) == "function" && ValidatorOnSubmit() == false)
            return false;
        return true;
    }
    //]]>
    </script>
    ```

    The JavaScript always evaluates to true when it runs.
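
    A hedged suggestion (not from the original post): on Enter, the browser "clicks" the first submit button in the form, which with master pages is often not the Login control's own button. ASP.NET's DefaultButton setting makes Enter target a specific button; for the built-in Login control the inner button's ID is LoginButton, referenced through the naming container. The Panel wrapper here is an assumed structure:

    ```aspx
    <asp:Panel runat="server" DefaultButton="Login1$LoginButton">
      <asp:Login ID="Login1" runat="server"></asp:Login>
    </asp:Panel>
    ```

    The same property also exists on the form itself in code-behind (`Page.Form.DefaultButton`).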

  • specifying multiple URLs with cURL/PHP using square brackets

    - by Raj Gundu
    I have a large array of URLs similar to this:

    ```php
    $nodes = array(
        'http://www.example.com/product.php?page=1&sortOn=sellprice',
        'http://www.example.com/product.php?page=2&sortOn=sellprice',
        'http://www.example.com/product.php?page=3&sortOn=sellprice'
    );
    ```

    The cURL manual states (http://curl.haxx.se/docs/manpage.html) that I can use square brackets '[]' to specify multiple URLs. Applied to the above example, that would look like this:

    'http://www.example.com/product.php?page=[1-3]&sortOn=sellprice'

    So far I have been unable to reference this correctly. This is the complete code segment I'm currently trying to utilize this with:

    ```php
    $nodes = array(
        'http://www.example.com/product.php?page=1&sortOn=sellprice',
        'http://www.example.com/product.php?page=2&sortOn=sellprice',
        'http://www.example.com/product.php?page=3&sortOn=sellprice'
    );
    $node_count = count($nodes);

    $curl_arr = array();
    $master = curl_multi_init();

    for ($i = 0; $i < $node_count; $i++) {
        $url = $nodes[$i];
        $curl_arr[$i] = curl_init($url);
        curl_setopt($curl_arr[$i], CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($master, $curl_arr[$i]);
    }

    do {
        curl_multi_exec($master, $running);
    } while ($running > 0);

    echo "results: ";
    for ($i = 0; $i < $node_count; $i++) {
        $results = curl_multi_getcontent($curl_arr[$i]);
        echo($i . "\n" . $results . "\n");
    }
    echo 'done';
    ```

    I can't seem to find any more documentation on this. Thanks in advance.
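
    A hedged note (not from the original post): the [1-3] range syntax belongs to the curl command-line tool, not to libcurl, so PHP's curl functions never expand it. In PHP the range has to be expanded by the script itself, which stays close to the manual's spirit:

    ```php
    <?php
    // expand page=[1-3] by hand; the URL pattern is taken from the question
    $nodes = array();
    for ($page = 1; $page <= 3; $page++) {
        $nodes[] = sprintf('http://www.example.com/product.php?page=%d&sortOn=sellprice', $page);
    }
    // $nodes now matches the hand-written array and feeds the same curl_multi loop
    ```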

  • Did anyone have this issue with a simple Facebook app or know how to solve it?

    - by Jian Lin
    I have a really simple few lines of a Facebook app, using the new Facebook API:

    ```php
    <?php
    require 'facebook.php';

    // Create our Application instance.
    $facebook = new Facebook(array(
        'appId'  => '117676584930569',
        'secret' => '**********',  // hidden here on the post...
        'cookie' => true,
    ));

    var_dump($facebook);
    ?>
    ```

    But http://apps.facebook.com/woolaladev/i2.php gives the following output:

    ```
    object(Facebook)#1 (6) {
      ["appId:protected"]=> string(15) "117676584930569"
      ["apiSecret:protected"]=> string(32) "**********"   <--- just hidden on this post
      ["session:protected"]=> NULL
      ["sessionLoaded:protected"]=> bool(false)
      ["cookieSupport:protected"]=> bool(true)
      ["baseDomain:protected"]=> string(0) ""
    }
    ```

    The session is NULL for some reason, but I am logged in and can access my home and profile and run other apps on Facebook (so I can see that I am logged on). I am following the samples at:

    http://github.com/facebook/php-sdk/blob/master/examples/example.php
    http://github.com/facebook/php-sdk/blob/master/src/facebook.php (download using the raw URL: wget http://github.com/facebook/php-sdk/raw/master/src/facebook.php)

    I've tried on both hosting companies, dreamhost.com and netfirms.com, and the results are the same.
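
    A hedged note (not from the original post): in that PHP SDK, the session only becomes non-NULL after the user has authorized the app and been redirected back with a session cookie/parameter, so a bare var_dump right after construction showing NULL is expected. The SDK's own example handles it roughly like this:

    ```php
    <?php
    // sketch following the SDK example's flow; these method names are the SDK's real API
    $session = $facebook->getSession();
    if ($session) {
        $me = $facebook->api('/me');       // session is valid, call the Graph API
    } else {
        // no session yet: send the user through the app authorization dialog
        echo '<a href="' . $facebook->getLoginUrl() . '">Login</a>';
    }
    ```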

  • jQuery toggling div visibility

    - by Eef
    I have an HTML document with the below setup:

    ```html
    <div class="main-div" style="padding: 5px; border: 1px solid green;">
      <div class="first-div" style="width: 200px; height: 200px; padding: 5px; border: 1px solid purple">
        First Div
        <a href="#" class="control">Control</a>
      </div>
      <div class="second-div hidden" style="width: 200px; height: 200px; padding: 5px; border: 1px solid red;">
        Second Div
        <a href="#" class="control">Control</a>
      </div>
    </div>
    ```

    I also have a CSS class called hidden with display set to none. I have jQuery set up like so:

    ```javascript
    $('.control').click(function(){
        var master = $(this).parent().parent();
        var first_div = $(master).find(".first-div");
        var second_div = $(master).find(".second-div");
        $(first_div).toggleClass("hidden");
        $(second_div).toggleClass("hidden");
    });
    ```

    This setup toggles the visibility of the divs: click the control link and it hides one div and shows the other. However, it just hides and shows each div in a flash. I am looking to add some animation to the transition, maybe have one slide up and the other slide down when the control is clicked, and vice versa, but I am unable to achieve this. Could anyone help out and give some advice on how to do this? Cheers, Eef
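
    A hedged sketch (not from the original post): jQuery's slideToggle animates exactly this, replacing the instant class flip. Assuming the second div starts hidden as in the markup, toggling both at once makes the visible one slide up while the hidden one slides down:

    ```javascript
    $('.control').click(function (e) {
        e.preventDefault();
        var master = $(this).closest('.main-div');
        // animate both divs in opposite directions over 400ms
        master.find('.first-div, .second-div').slideToggle(400);
    });
    ```

    The hidden class's `display: none` stays consistent with what slideToggle sets inline, so the markup can remain unchanged.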

  • How to automatically split git commits to separate changes to a single file

    - by Hercynium
    I'm just plain stuck as to how to accomplish this, or whether it's even possible. Even if it can be done, I wonder if it could be setting us up for a messed-up, unmanageable repository.

    I have set up two branches of the code-base. One is "master" and the other is "prod". The HEAD of prod is always the latest code in production, and master is the main development branch. Here's the problem, though: we're converting from CVS here at $work and most of the developers are still getting used to git. Their CVS workflow involved tagging versions of individual files for production, then updating the servers using the tag. Unfortunately, this has led to sloppy practices like committing unrelated changes together and then tagging the files after the fact... and the devs want to know how they can do the following:

    In their local repos, they hack and commit to their hearts' delight, then at the end of the day, run a command that takes a list of files whose commits over the day get merged with their local prod - and only those files - even if those commits combine changes to other files.

    I know how to split commits with `git rebase --interactive`, but I have no clue how I would automate splitting commits at all, never mind the way I want to. I do realize the simplest thing would be to just tell them to switch to their prod branches, check out the files from their master branches into the working tree, then commit to prod. My problem with that is losing the history of their commits over the day.
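
    A hedged sketch of the "simplest thing" variant mentioned above, per file rather than per commit (branch and file names are placeholders):

    ```sh
    git checkout prod
    # bring only the chosen files' end-of-day state over from master
    git checkout master -- path/to/file1 path/to/file2
    git commit -m "promote file1, file2 from master"
    ```

    This does flatten the day's history for those files into one prod commit; keeping the individual commits would mean splitting each mixed commit first (git rebase -i with `edit`, then `git reset HEAD^` and re-committing per file), which is hard to automate safely.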

  • How to append to a log file in powershell?

    - by Mark Allison
    Hi there, I am doing some parallel SQL Server 2005 database restores in PowerShell. The way I have done it is to use cmd.exe and start, so that PowerShell doesn't wait for the command to complete. What I need to do is to pipe the output into a log file, appending to it. If I use Add-Content, then PowerShell waits, which is not what I want. My code snippet is:

    ```powershell
    foreach ($line in $database_list) {
        # <snip>
        # Create logins
        sqlcmd.exe -S $instance -E -d master -i $loginsFile -o $logFile

        # Read commands from a temp file and execute them in parallel with sqlcmd.exe
        cmd.exe /c start "Restoring $database" /D"$pwd" sqlcmd.exe -S $instance -E -d master -i $tempSQLFile -t 0 -o $logFile

        [void]$logFiles.Add($logFile)
    }
    ```

    The problem is that sqlcmd.exe's -o switch overwrites. I've tried this to append:

    ```powershell
    cmd.exe /c start "Restoring $database" /D"$pwd" sqlcmd.exe -S $instance -E -d master -i $tempSQLFile -t 0 >> $logFile
    ```

    But it doesn't work, because the output stays in the sqlcmd window and doesn't go to the file. Any suggestions? Thanks, Mark.
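
    A hedged alternative (not from the original post): PowerShell 2.0's background jobs can replace the cmd.exe/start trick, and inside a job the output can be piped to Out-File -Append, so sqlcmd's -o switch isn't needed at all:

    ```powershell
    # sketch: run each restore as a background job, appending its output
    Start-Job -ArgumentList $instance, $tempSQLFile, $logFile -ScriptBlock {
        param($instance, $sqlFile, $log)
        sqlcmd.exe -S $instance -E -d master -i $sqlFile -t 0 |
            Out-File -FilePath $log -Append
    }
    ```

    On PowerShell 1.0 (no Start-Job), one workaround is to have the started process run its own shell so the redirection is interpreted there, along the lines of `cmd.exe /c start cmd.exe /c "sqlcmd ... >> log"`.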

  • git rebase without changing commit timestamps

    - by Olivier
    Would it make sense to perform git rebase while preserving the commit timestamps? I believe a consequence would be that the new branch would not necessarily have commit dates in chronological order. Is that theoretically possible at all (e.g. using plumbing commands; just curious here)? And if it is theoretically possible, is it then possible in practice with rebase to not change the timestamps?

    For example, assume I have the following tree:

    ```
    master <jun 2010>
    |
    :       oldbranch <feb 1984>
    :      /
    oldcommit <jan 1984>
    ```

    Now, if I rebase oldbranch onto master, the date of the commit changes from Feb 1984 to Jun 2010. Is it possible to change that behaviour so that the commit timestamp is not changed? In the end I would thus obtain:

    ```
    oldbranch <feb 1984>
       /
    master <jun 2010>
    |
    :
    ```

    Would that make sense at all? Is it even allowed in git to have a history where an old commit has a more recent commit as a parent?

    Edit: A crucial question from Von C helped me understand what is going on: when you rebase, the committer's timestamp changes, but not the author's timestamp, which suddenly all makes sense. So my question was actually not precise enough. The answer is that rebase actually doesn't change the author's timestamps (you don't need to do anything for that), which suits me perfectly.
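
    A hedged illustration of the distinction (the commands are standard git): each commit carries an author date and a committer date, and rebase rewrites only the latter. There is also a real rebase flag that copies the author date over the committer date for the rewritten commits:

    ```sh
    # show both dates: %ai = author date, %ci = committer date
    git log --format='%h %ai %ci %s' oldbranch

    # rebase while forcing committer dates to match the (preserved) author dates
    git rebase --committer-date-is-author-date master oldbranch
    ```

    And yes, a parent may carry a newer timestamp than its child; git orders history by parent links, not by dates.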

  • How do I know which include path will be used in PHP?

    - by Joe Majewski
    When I run phpinfo() and look in the Configuration section under PHP Core, I see a directive titled include_path with a local value and a master value. In this case, my local value is set to:

    .:./include:../include:/usr/share/php:/usr/share/php/smarty:/usr/share/pear

    and my master value is set to:

    .:/usr/share/php:/usr/share/pear:/usr/share/php/pear:/usr/share/php/smarty

    The reason I am trying to learn how this works is that there is a file in the system I am working on titled Smarty.class.php, which I'm sure sounds very familiar to anyone who uses the Smarty templating engine. One of the PHP files has the following includes:

    ```php
    require_once("Smarty.class.php");
    require_once("user_info_class.inc");
    ```

    The file user_info_class.inc is in the same directory as the file making the include, which makes perfect sense to me and is the way that I've always referenced files. I decided that I wanted to open up the Smarty.class.php file and had assumed it would be in the same directory, but it was not. After doing a bit of digging, I discovered those php.ini variables, and was finally able to locate the file in the directory /usr/share/php/smarty/.

    So it would seem that when making an include, PHP follows some sort of order between the local and master values for the include_path. Assuming that my deductions were correct thus far, can someone explain the order in which PHP searches for included files?
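
    A hedged note (standard PHP behavior, not from the original post): the "master" value is what php.ini sets, the "local" value is what is in effect for the current script after any per-directory or runtime overrides, and only the local value is actually searched, left to right, starting with `.` (the current working directory). A script can inspect or change it at runtime:

    ```php
    <?php
    echo get_include_path();  // the "local" value phpinfo() reports

    // prepend a directory; existing entries keep their left-to-right priority
    set_include_path('/usr/share/php/smarty' . PATH_SEPARATOR . get_include_path());
    require_once 'Smarty.class.php';
    ```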

  • Preventing a button from responding to 'Enter' in ASP.net

    - by kd7iwp
    I have a master template that all the pages on my site use. In the template there is an empty panel. In the code-behind for the page, an ImageButton is created in the panel dynamically in my Page_Load section (it makes a call to the DB to determine which button should appear, via my controller). On some pages that use this template and have forms on them, pressing the Enter key fires the click event of this ImageButton rather than the submit button of the form.

    Is there a simple way to prevent the ImageButton click event from firing unless it's actually clicked by the mouse? I'm thinking a JavaScript method would be a hack, especially since the button doesn't even exist in the master template until it is dynamically created on Page_Load (this is ugly, since I can't simply use <% =btnName.ClientId %> to refer to the button's name in my aspx page). I tried setting a super-high tab index for the ImageButton and that did nothing. I also set the button to be the DefaultButton of its panel in the master template, but that did not work either.
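
    A hedged suggestion (not from the original post): since the ImageButton only exists after Page_Load, the default button can be assigned at that same moment in code-behind, pointing the form's Enter behavior at the real submit button instead. Page.Form.DefaultButton is the real ASP.NET 2.0+ property; "submitButton" is an assumed control name:

    ```csharp
    // in the content page's Page_Load, after the controls exist
    protected void Page_Load(object sender, EventArgs e)
    {
        // Enter now triggers the page's intended submit control,
        // not the first button the browser finds in the form
        Page.Form.DefaultButton = submitButton.UniqueID;
    }
    ```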

  • Java RMI Proxy issue

    - by Antony Lewis
    I am getting this error:

    ```
    java.lang.ClassCastException: $Proxy0 cannot be cast to rmi.engine.Call
        at Main.main(Main.java:39)
    ```

    My Abstract and Call classes both extend Remote. Call:

    ```java
    public class Call extends UnicastRemoteObject implements rmi.engine.Abstract {

        public Call() throws Exception {
            super(Store.PORT, new RClient(), new RServer());
        }

        public String getHello() {
            System.out.println("CONN");
            return "HEY";
        }
    }
    ```

    Abstract:

    ```java
    public interface Abstract extends Remote {
        String getHello() throws RemoteException;
    }
    ```

    This is my main:

    ```java
    public static void main(String[] args) {
        if (args.length == 0) {
            try {
                System.out.println("We are slave ");
                InetAddress ip = InetAddress.getLocalHost();
                Registry rr = LocateRegistry.getRegistry(ip.getHostAddress(), Store.PORT, new RClient());
                Object ss = rr.lookup("FILLER");
                System.out.println(ss.getClass().getCanonicalName());
                System.out.println(((Call) ss).getHello());
            } catch (Exception e) {
                e.printStackTrace();
            }
        } else {
            if (args[0].equals("master")) {
                // Start Master
                try {
                    RMIServer.start();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }
    ```

    NetBeans says the problem is on line 39, which is `System.out.println(((Call)ss).getHello());`. The output looks like this:

    ```
    run:
    We are slave
    Connecting 10.0.0.212:5225
    $Proxy0
    java.lang.ClassCastException: $Proxy0 cannot be cast to rmi.engine.Call
        at Main.main(Main.java:39)
    BUILD SUCCESSFUL (total time: 1 second)
    ```

    I am running a master in cmd listening on port 5225.
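
    A hedged explanation (standard RMI behavior, not from the original thread): what comes out of the registry is a dynamic proxy ($Proxy0) that implements the remote interface, not the implementation class, so the cast has to target rmi.engine.Abstract rather than Call:

    ```java
    // cast to the remote interface the proxy implements
    rmi.engine.Abstract ss = (rmi.engine.Abstract) rr.lookup("FILLER");
    System.out.println(ss.getHello());
    ```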

  • How can I rewrite the history of a published git branch in multiple steps?

    - by Frerich Raabe
    I've got a git repository with two branches, master and amazing_new_feature. The latter branch contains the work on, well, an amazing new feature. A colleague and I are both working on the same repository, and the two of us commit to both branches. Now the work on the amazing new feature is finished, and a bit more than 100 commits have accumulated in the amazing_new_feature branch. I'd like to clean those commits up a bit (using `git rebase -i`) before merging the work into master.

    The issue we're facing is that it's quite a pain to rewrite/reorder all 100 commits in one go. Instead, what I'd like to do is:

    1. Rewrite/merge/reorder the first few commits in the amazing_new_feature branch and put the result into a dedicated branch which contains the cleaned-up history (say, an amazing_new_feature_ready_for_merge branch).
    2. Rebase the remaining amazing_new_feature branch onto the amazing_new_feature_ready_for_merge branch.
    3. Repeat from 1.

    My idea is that at some point, all the work from amazing_new_feature should be in amazing_new_feature_ready_for_merge, and then I can merge the latter into master. Is this a sensible approach, or are there better/easier/more fool-proof solutions to this problem? I'm especially scared about the second step of the above algorithm, since it means rebasing a published branch. IIRC that's a dangerous thing to do.
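
    A hedged sketch of step 2 as described above (branch names from the question; --onto is standard rebase): take the not-yet-cleaned tail of amazing_new_feature and replay it onto the cleaned branch without touching the already-processed commits:

    ```sh
    # <last-cleaned-sha> = the last commit already incorporated into the cleaned branch
    git rebase --onto amazing_new_feature_ready_for_merge <last-cleaned-sha> amazing_new_feature
    ```

    The danger noted in the question is real: this rewrites amazing_new_feature, so the colleague must stop pushing to it (or reset their copy) between rounds, otherwise each iteration re-introduces the old commits.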
