Search Results

Search found 17579 results on 704 pages for 'mercurial build'.


  • Extracting shell script from parameterised Hudson job

    - by Jonik
    I have a parameterised Hudson job, used for some AWS deployment stuff, which in one build step runs certain shell commands. However, that script has become sufficiently complicated that I want to "extract" it from Hudson into a separate script file, so that it can easily be versioned properly. The Hudson job would then simply update from VCS and execute the external script file.

    My main question is about passing parameters to the script. I have a Hudson parameter named AMI_ID and a few others. The script references those params as if they were environment variables:

        echo "Using AMI $AMI_ID and type $TYPE"

    Now, this works fine inside Hudson, but not if Hudson calls an external script. Could I somehow make Hudson set the params as environment variables so that I don't need to change the script? Or is my best option to alter the script to take command-line parameters (and possibly assign those to named variables for readability: ami_id=$1; type=$2; ...)? I tried something like this, but the script doesn't see the correct values:

        export AMI_ID=$AMI_ID
        export TYPE=$TYPE
        external-script.sh   # this tries to use e.g. $AMI_ID

    Bonus question: when the script is inside Hudson, the "console output" will contain both the executed commands and their output. This is extremely useful for debugging when something goes wrong with a build! For example, here the line starting with "+" is part of the script and the following line is its output:

        + ec2-associate-address -K pk.pem -C cert.pem 77.125.116.139 -i i-aa3487fd
        ADDRESS 77.125.116.139 i-aa3487fd

    When calling an external script, Hudson's output will only contain the latter line, making debugging harder. I could cat the script file to stdout before running it, but that's not optimal either. In effect, I'd like a kind of DOS-style "echo on" for the script which I'm calling from Hudson - anyone know a trick to achieve this?
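    For what it's worth, a minimal sketch of one way this is commonly handled, assuming the build step is a shell step and external-script.sh sits in the checked-out workspace (the path is an assumption): Hudson already exports build parameters into the step's environment, so the script can keep reading $AMI_ID as-is, and running it through sh -x reproduces the "+"-prefixed command echo in the console output.

        # Hudson "Execute shell" build step (sketch, not the job's actual config)
        # Build parameters such as AMI_ID and TYPE are already environment variables here,
        # so the external script inherits them without explicit assignment.
        sh -x ./external-script.sh    # -x prints each command with a leading "+" before running it

    Alternatively, a set -x at the top of the script itself gives the same command echo regardless of how the script is invoked.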

    Read the article

  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1. I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community as to best practices w.r.t. running web servers and interacting with the desktop.

    Each of our UAT fixtures starts a web server to host the site, like this:

        public static void StartWebServer()
        {
            var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web";
            var webServer = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                    FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                }
            };
            webServer.Start();
        }

    Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. I'm hoping there is a way of determining the path into which the code is checked out prior to building? That would allow me to point the web server at the right place.

    The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer:

        _browserWindow = InternetExplorer.Launch("http://localhost:9150/index.html#/Home", "Home - Windows Internet Explorer");
        _document = _browserWindow.SilverlightDocument;

    I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a pre-requisite for White to work properly). Is that all I need to do, or are there additional steps required?
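    A minimal sketch of one way to remove the hard-coded path, assuming you define an environment variable in the TeamCity build configuration that points at the checked-out web project (SITE_PATH is an invented name for this sketch, not a TeamCity built-in), with a fallback for local runs:

        public static string ResolveSitePath()
        {
            // SITE_PATH would be defined by you in the TeamCity build configuration's
            // environment variables; it is not something TeamCity provides out of the box.
            var fromBuildServer = Environment.GetEnvironmentVariable("SITE_PATH");
            return string.IsNullOrEmpty(fromBuildServer)
                ? @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web"   // local dev fallback
                : fromBuildServer;
        }

    The fixture would then call ResolveSitePath() instead of embedding the literal path, so the same test assembly runs on developer machines and on the build agent.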

    Read the article

  • Maven compile plugin

    - by phanikiran
    Hi everybody, in the POM of a project I have added a dependency with scope compile. The dependency is a jar file which contains some class files and some jars as well. My current Java file needs the internal jars of that dependent jar to compile, but the Maven compile goal returns a compilation error. All the jars needed to compile are inside the single jar file which is added as the dependency. Please help me!

    My POM:

        <dependency>
            <groupId>eagle</groupId>
            <artifactId>zkui</artifactId>
            <version>360LTS</version>
            <type>jar</type>
            <scope>compile</scope>
        </dependency>
        <build>
            <sourceDirectory>./src/main/java/</sourceDirectory>
            <outputDirectory>./target/classes/</outputDirectory>
            <finalName>${project.groupId}-${project.artifactId}</finalName>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>2.2</version>
                    <configuration>
                        <source>1.6</source>
                        <target>1.6</target>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </project>

    The error:

        package org.zkoss.zk.ui does not exist

    This package org.zkoss.zk.ui is in the jar file zkex.jar, which is inside the dependency jar eagle:zkui:360LTS. Advance thanks!

    Read the article

  • Finding Common Byte Sequences in MS SQL TEXT Column

    - by regex
    Hello All,

    Short Desc: I'm curious to see if I can use SQL Analysis Services or some other MS SQL service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset.

    Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in an issue tracking (ticketing) software. I would like to use something out of the box (without having to build something) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two to three word phrases, so 9 - 20 character sections of the TEXT blob). This will help me better determine whether associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow.

    Closing Note: I'd really rather not build an application to do this, as my method would probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification. Thanks in advance for your help.

    Read the article

  • Unable to change URL for .NET reference to dynamic web service

    - by Malvineous
    Hi all,

    I have a web reference added to a C# .NET project. The URL for the web reference needs to change depending on whether I'm building for a development, staging or production environment. I've set the web service to be dynamic, which supposedly means it takes the URL from my app.config file.

    When I perform a build, it overwrites the app.config with the required file which contains the correct URL (a different file for each of dev/staging/production). I then go into the solution properties and make sure the Settings.settings file is updated with the app.config changes. However, when I view the properties for the web service, it is still showing the old URL, despite it being dynamic and supposedly reading from my settings file (even after closing and reopening the project/solution).

    The app.config and the settings file both have the new URL, but the web reference doesn't notice it has changed. If I do a build, it ignores the URL in the settings file and tries to connect to the last URL manually typed into the web reference's properties. Typing a URL into these properties correctly updates the app.config and .settings files, so the link is definitely there.

    I'm a bit new to .NET, but it seems to me the purpose of setting the service to be dynamic is so that you can change the URL elsewhere, but when I do this it just gets ignored! Am I doing something wrong?

    Read the article

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application UI) in an external source control system. Of course this implies that my application can check the current version of an object for updates, provide a merging interface for each object, etc.

    My question is what source control paradigm(s) and specific solution(s) to support, and why. The way I (perhaps naively) see the source control world is as three general paradigms:

    1. Single repository, locked access (MS SourceSafe)
    2. Single repository, concurrent access (CVS/SVN)
    3. Distributed (Mercurial, Git)

    I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard this case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the usage paradigms are subtly different enough that I can't adequately capture basic operations in a single UI.

    The last bit of information I should convey is that this application is intended to be deployed in a commercial setting, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in a corporate setting is a plus.

    Read the article

  • How to make a request from an Android app that can enter a Spring Security secured webservice method

    - by johnrock
    I have a Spring Security (form-based authentication) web app running CXF JAX-RS webservices, and I am trying to connect to this webservice from an Android app that can be authenticated on a per-user basis. Currently, when I add an @Secured annotation to my webservice method, all requests to this method are denied.

    I have tried to pass in the credentials of a valid user/password (one that currently exists in the Spring Security based web app and can log in to the web app successfully) from the Android call, but the request still fails to enter this method when the @Secured annotation is present. The SecurityContext parameter returns null when calling getUserPrincipal(). How can I make a request from an Android app that can enter a Spring Security secured webservice method?

    Here is the code I am working with at the moment. The Android call:

        httpclient.getCredentialsProvider().setCredentials(
                //new AuthScope("192.168.1.101", 80),
                new AuthScope(null, -1),
                new UsernamePasswordCredentials("joeuser", "mypassword"));
        String userAgent = "Android/" + getVersion();
        HttpGet httpget = new HttpGet(MY_URI);
        httpget.setHeader("User-Agent", userAgent);
        httpget.setHeader("Content-Type", "application/xml");
        HttpResponse response;
        try {
            response = httpclient.execute(httpget);
            HttpEntity entity = response.getEntity();
            ... parse xml

    The webservice method:

        @GET
        @Path("/payload")
        @Produces("application/XML")
        @Secured({"ROLE_USER","ROLE_ADMIN","ROLE_GUEST"})
        public Response makePayload(@Context Request request, @Context SecurityContext securityContext){
            Payload payload = new Payload();
            payload.setUsersOnline(new Long(200));
            if (payload == null) {
                return Response.noContent().build();
            }
            else{
                return Response.ok().entity(payload).build();
            }
        }
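    For context, the HttpClient call above supplies HTTP Basic credentials, which a purely form-login Spring Security configuration will not process. A minimal sketch of one way to accept them as well, assuming the usual security namespace configuration (only the relevant elements are shown; the existing intercept-url entries are elided):

        <http>
            <!-- existing form-login and intercept-url entries stay as they are -->
            <form-login />
            <!-- additionally accept HTTP Basic credentials, e.g. from the Android HttpClient -->
            <http-basic />
        </http>

    With Basic authentication enabled, the credentials sent by the Android client can populate the SecurityContext that the @Secured check and getUserPrincipal() rely on.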

    Read the article

  • How to get database table header information into a CSV file

    - by Rachel
    I am trying to connect to the database, get the current state of a table and write that information into a CSV file. With the below piece of code I am able to get the data into the CSV file, but I am not able to get the header information from the database table into the CSV file. So my question is: how can I get the database table header information into a CSV file?

        $config['database'] = 'sakila';
        $config['host'] = 'localhost';
        $config['username'] = 'root';
        $config['password'] = '';
        $d = new PDO('mysql:dbname='.$config['database'].';host='.$config['host'], $config['username'], $config['password']);
        $query = "SELECT * FROM actor";
        $stmt = $d->prepare($query);
        // Execute the statement
        $stmt->execute();
        var_dump($stmt->fetch(PDO::FETCH_ASSOC));
        $data = fopen('file.csv', 'w');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo "Hi";
            // Export every row to a file
            fputcsv($data, $row);
        }

    By header information I mean the column names. For example, for this table:

        Vehicle   Build   Model
        car       2009    Toyota
        jeep      2007    Mahindra

    the header information would be "Vehicle Build Model". Any guidance would be highly appreciated.
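    A minimal sketch of one common approach, reusing the PDO statement from above: since PDO::FETCH_ASSOC keys each row by column name, the keys of the first row can be written with fputcsv() as the header line before the data rows.

        $data = fopen('file.csv', 'w');
        $wroteHeader = false;
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            if (!$wroteHeader) {
                // The associative keys are the column names, e.g. Vehicle, Build, Model
                fputcsv($data, array_keys($row));
                $wroteHeader = true;
            }
            fputcsv($data, $row);   // then the row values themselves
        }
        fclose($data);

    Note that the var_dump($stmt->fetch(...)) call in the original code already consumes the first row, so the loop above would start from the second record unless that call is removed.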

    Read the article

  • Using JSF, PrimeFaces and JPA: Create Basic WebApp without using Generated CRUD Classes, Forms, etc

    - by user2774489
    I am trying to build a basic CRUD application with NetBeans 7.4, JSF, PrimeFaces and JPA using MySQL. I have successfully done this by using the NetBeans wizards. Now I want to do this from scratch, with no wizards.

    There seems to be a lack of support for the combination of JSF, PrimeFaces and JPA. When I say "lack", I mean a full example (I might be asking too much) that does not use the auto-generated CRUD templates/classes AND shows actual queries coded and passed to the DataTables (PrimeFaces). YouTube is full of non-English-speaking examples using Hibernate (not JPA), and of other examples that show flashy GUIs with no code.

    So far I understand you need an @Entity class (provides the physical build of the tables), a controller (serializable) and the .xhtml web page to show the DataTable. What else? Also, I'm not seeing any posts or examples where queries are used with JPA/JSF and how they are tied together (in one place). I need to connect the dots here so that I can leverage JSF/JPA to create simple queries to populate my PrimeFaces DataTables.

    I've read the blogs and I've googled the intranets until I'm blue in the face. Sending me a list of URLs to read to learn about each product is something I've already done. I get what they do independently, but am looking for the "How do they all connect" answer, with maybe some basic code examples!! :)
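    A minimal sketch of the glue being asked about, assuming a container-managed persistence unit and an entity named Item (both the bean and entity names are illustrative, not from the question): a CDI-managed bean runs a JPA query and exposes the result list, and the PrimeFaces table binds to that list.

        import java.io.Serializable;
        import java.util.List;
        import javax.annotation.PostConstruct;
        import javax.enterprise.context.RequestScoped;
        import javax.inject.Named;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @Named               // exposed to EL as #{itemBean}
        @RequestScoped
        public class ItemBean implements Serializable {

            @PersistenceContext                  // container-injected EntityManager (assumed setup)
            private EntityManager em;

            private List<Item> items;            // Item is an illustrative @Entity

            @PostConstruct
            public void init() {
                // Plain JPQL query; this is the piece the wizards normally hide
                items = em.createQuery("SELECT i FROM Item i", Item.class).getResultList();
            }

            public List<Item> getItems() {
                return items;
            }
        }

    The page side then binds directly to the getter, e.g. <p:dataTable value="#{itemBean.items}" var="item"> with p:column children; that getter is the entire contract between the JPA side and the PrimeFaces side.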

    Read the article

  • m2eclipse: How to set Eclipse project settings when importing a maven project?

    - by Marius Andreiana
    Using the m2eclipse Eclipse plugin, everybody on the dev team should be able to check out source code, import the Maven project in Eclipse and be good to go. I saw that m2eclipse is being merged into Eclipse 3.7, and maven-eclipse-plugin won't be maintained any longer, so I'm looking for an m2eclipse-based solution (without running "mvn eclipse:clean eclipse:eclipse" before project import, which is what maven-eclipse-plugin does).

    maven-eclipse-plugin allows this in pom.xml:

        <additionalConfig>
            <file>
                <name>.settings/com.google.gdt.eclipse.core.prefs</name>
                <content><![CDATA[
        eclipse.preferences.version=2
        jarsExcludedFromWebInfLib=
        warSrcDir=${project.build.directory}/${project.build.finalName}
        warSrcDirIsOutput=true
        ]]>
                </content>
            </file>
        </additionalConfig>

    The more general question is: how would m2eclipse do something similar? For some cases, just saving the Eclipse .settings prefs file works (e.g. org.eclipse.jdt.ui.prefs), but in this case com.google.gdt.eclipse.core.prefs is always overwritten on m2eclipse project import. A specific question is asked here, with no reply. Thanks!

    UPDATE: Not possible now, see request.

    Read the article

  • Adding jQuery event handler to expand div when a link is clicked

    - by hollyb
    I'm using a bit of jQuery to expand/hide some divs. I need to add an event handler so that when a link is clicked, it opens a corresponding div. Each toggled div will have a unique class assigned to it. I am looking for some advice about how to build the event handler.

    The jQuery:

        $(document).ready(function(){
            $(".toggle_container:not(:first)").hide();
            $(".toggle_container:first").show();
            $("h6.trigger:first").addClass("active");
            $("h6.trigger").click(function(){
                $(this).toggleClass("active");
                $(this).next(".toggle_container").slideToggle(300);
            });
        });

    The CSS (uses a background image with an on (+) and off (-) state stacked on top of each other):

        h6.trigger { background: url(buttonBG.gif) no-repeat; height: 46px; line-height: 46px; width: 300px; font-size: 2em; font-weight: normal; }
        h6.trigger a { color: #fff; text-decoration: none; display: block; }
        h6.active { background-position: left bottom; }
        .toggle_container { overflow: hidden; }
        .toggle_container .block { padding: 20px; }

    The HTML has a list of links, such as:

        <a href="#">One</a>
        <a href="#">Two</a>

    and the corresponding divs to open:

        <h6 class="trigger">Container one</h6>
        <div class="toggle_container">
            div one
        </div>
        <h6 class="trigger">Container two</h6>
        <div class="toggle_container Open">
            div one
        </div>

    As I mentioned, I will be assigning a unique class to facilitate this. Any advice? Thanks!

    To clarify, I'm looking for some advice on building an event handler to toggle open a specific div when a link is clicked from a different part of the page, from a nav for instance.
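    A minimal sketch of one way to wire nav links to specific containers, assuming each link carries the unique class of its target container in a data-target attribute (the class names opener and container-one are assumptions for the sketch):

        // Assumed nav markup: <a href="#" class="opener" data-target="container-one">One</a>
        // where "container-one" is the unique class on the corresponding .toggle_container div.
        $(document).ready(function(){
            $("a.opener").click(function(e){
                e.preventDefault();
                var target = $(this).attr("data-target");
                $(".toggle_container").not("." + target).slideUp(300);   // close the others
                $("." + target).slideDown(300);                          // open the matching div
            });
        });

    Keeping the mapping in a data attribute means the handler never needs editing when new containers are added; only the markup changes.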

    Read the article

  • Getting a broken link error while using App Engine service accounts

    - by jade
    I'm following this tutorial: https://developers.google.com/bigquery/docs/authorization#service-accounts-appengine

    Here is my main.py code:

        import httplib2
        from apiclient.discovery import build
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp.util import run_wsgi_app
        from oauth2client.appengine import AppAssertionCredentials

        # BigQuery API Settings
        SCOPE = 'https://www.googleapis.com/auth/bigquery'
        PROJECT_NUMBER = 'XXXXXXXXXX'  # REPLACE WITH YOUR Project ID

        # Create a new API service for interacting with BigQuery
        credentials = AppAssertionCredentials(scope=SCOPE)
        http = credentials.authorize(httplib2.Http())
        bigquery_service = build('bigquery', 'v2', http=http)

        class ListDatasets(webapp.RequestHandler):
            def get(self):
                datasets = bigquery_service.datasets()
                listReply = datasets.list(projectId=PROJECT_NUMBER).execute()
                self.response.out.write('Dataset list:')
                self.response.out.write(listReply)

        application = webapp.WSGIApplication(
            [('/listdatasets(.*)', ListDatasets)],
            debug=True)

        def main():
            run_wsgi_app(application)

        if __name__ == "__main__":
            main()

    Here is my app.yaml file:

        application: bigquerymashup
        version: 1
        runtime: python
        api_version: 1

        handlers:
        - url: /favicon\.ico
          static_files: favicon.ico
          upload: favicon\.ico

        - url: .*
          script: main.py

    And yes, I have added the App Engine service account name in the Google API Console Team tab with "can edit" permissions. When I upload the app and try to access the link, it says "Oops! This link appears to be broken." Earlier I ran this locally and tried to access it at localhost:8080. Then I thought running locally might be causing the error, so I uploaded my code to http://bigquerymashup.appspot.com/ but it still gives the error.

    Read the article

  • Does anyone know the algorithm Facebook uses to detect images when adding a link

    - by Edelcom
    When you add a link to your Facebook page, after some processing, Facebook presents you with next/prev buttons to choose an image linked to the URL you are inserting. Obviously, Facebook reads the HTML page and displays the images found at the URL you insert.

    Does anyone know what algorithm Facebook uses to decide which images to show? If I insert a link to http://www.staplijst.be/lachende-wandelaars-aalter-aktivia-003.asp, only 11 images are detected. The one I want, the one at the top right corner, is not included in the list. If I insert a link to http://www.staplijst.be/stichting-kennedymars-rijsbergen-zundert-nederland-knblo-nl-81996.asp, 19 images are displayed (including the one I want, the one at the right top corner of the text area).

    Both pages are built using ASP code but are functionally the same. I thought it had something to do with the image size, but I can't find any deciding factor there. I will investigate some further, because if I know what Facebook is looking for, I can make sure that the correct images are included on the page (since they are dynamic pages built with classic ASP). But if anyone has any idea, help would be appreciated.
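    One way to take the guesswork out of this, rather than reverse-engineering the crawler's heuristics, is to declare the preferred image explicitly with an Open Graph meta tag in the page's head; Facebook's link scraper favours this over images it finds in the body. The image URL below is a placeholder, not a real path on the site:

        <head>
            <!-- Tells Facebook which thumbnail to offer for this URL -->
            <meta property="og:image" content="http://www.staplijst.be/images/example-thumbnail.jpg" />
        </head>

    Since the pages are generated with classic ASP, the content attribute can be emitted per page from the same data that drives the rest of the markup.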

    Read the article

  • Installing PIL on Cygwin

    - by Dustin
    I've been struggling all morning to get PIL installed on Cygwin. The errors I get are not consistent with the common errors I find using Google. Perhaps a Linux guru can see an obvious problem in this output:

        $ python setup.py install
        running install
        running build
        running build_py
        running build_ext
        building '_imaging' extension
        gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DHAVE_LIBZ -I/usr/include/freetype2 -IlibImaging -I/usr/include -I/usr/include/python2.5 -c _imaging.c -o build/temp.cygwin-1.7.2-i686-2.5/_imaging.o
        In file included from /usr/lib/gcc/i686-pc-cygwin/3.4.4/include/syslimits.h:7,
                         from /usr/lib/gcc/i686-pc-cygwin/3.4.4/include/limits.h:11,
                         from /usr/include/python2.5/Python.h:18,
                         from _imaging.c:75:
        /usr/lib/gcc/i686-pc-cygwin/3.4.4/include/limits.h:122:61: limits.h: No such file or directory
        In file included from _imaging.c:75:
        /usr/include/python2.5/Python.h:32:19: stdio.h: No such file or directory
        /usr/include/python2.5/Python.h:34:5: #error "Python.h requires that stdio.h define NULL."
        /usr/include/python2.5/Python.h:37:20: string.h: No such file or directory
        /usr/include/python2.5/Python.h:39:19: errno.h: No such file or directory
        /usr/include/python2.5/Python.h:41:20: stdlib.h: No such file or directory
        /usr/include/python2.5/Python.h:43:20: unistd.h: No such file or directory
        /usr/include/python2.5/Python.h:55:20: assert.h: No such file or directory
        In file included from /usr/include/python2.5/Python.h:57,
                         from _imaging.c:75:
        /usr/include/python2.5/pyport.h:7:20: stdint.h: No such file or directory
        In file included from /usr/include/python2.5/Python.h:57,
                         from _imaging.c:75:
        /usr/include/python2.5/pyport.h:89: error: parse error before "Py_uintptr_t"
        /usr/include/python2.5/pyport.h:89: warning: type defaults to `int' in declaration of `Py_uintptr_t'
        /usr/include/python2.5/pyport.h:89: warning: data definition has no type or storage class
        /usr/include/python2.5/pyport.h:90: error: parse error before "Py_intptr_t"
        /usr/include/python2.5/pyport.h:90: warning: type defaults to `int' in declaration of `Py_intptr_t'
        ... more lines like this

    Read the article

  • Predicate crashing iPhone App!

    - by DVG
    To preface, this is a follow-up to an inquiry made a few days ago: http://stackoverflow.com/questions/2981803/iphone-app-crashes-when-merging-managed-object-contexts

    Short version: EXC_BAD_ACCESS is crashing my app, and zombie mode revealed the culprit to be my predicate, embedded within the fetch request embedded in my fetched results controller. How does an object within an object get released without an explicit command to do so?

    Long version: The application structure is Platforms View Controller - Games View Controller (predicated upon platform selection) - Add Game View Controller. When a row gets clicked on the Platforms view, it sets an instance variable in Games View for that platform, then the Games fetched results controller builds a fetch request in the normal way:

        - (NSFetchedResultsController *)fetchedResultsController{
            if (fetchedResultsController != nil) {
                return fetchedResultsController;
            }

            //build the fetch request for Games
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Game" inManagedObjectContext:context];
            [request setEntity:entity];

            //predicate
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"platform == %@", selectedPlatform];
            [request setPredicate:predicate];

            //sort based on name
            NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
            NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
            [request setSortDescriptors:sortDescriptors];

            //fetch and build fetched results controller
            NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:request managedObjectContext:context sectionNameKeyPath:nil cacheName:@"Root"];
            aFetchedResultsController.delegate = self;
            self.fetchedResultsController = aFetchedResultsController;

            [sortDescriptor release];
            [sortDescriptors release];
            [predicate release];
            [request release];
            [aFetchedResultsController release];

            return fetchedResultsController;
        }

    At the end of this method, the fetchedResultsController's _fetch_request - _predicate member is set to an NSComparisonPredicate object. All is well in the world.

    By the time - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section gets called, the _predicate is now a zombie, which will eventually crash the application when the table attempts to update itself. I'm more or less flummoxed. I'm not releasing the fetched results controller or any of its parts, and the only part getting dealloc'd is the predicate. Any ideas?

    Read the article

  • Moving from SVN to Hg: branching and backup

    - by rorycl
    My company runs SVN right now and we are very familiar with it. However, because we do a lot of concurrent development, merging can become very complicated. We've been playing with hg and we really like the ability to make fast and effective clones on a per-feature basis. We've got two main issues we'd like to resolve before we move to hg.

    Branches for erstwhile SVN users: I'm familiar with the "4 ways to branch in Mercurial" as set out in Steve Losh's article. We think we should "materialise" the branches, because I think the dev team will find this the most straightforward way of migrating from SVN. Consequently, I guess we should follow the "branching with clones" model, which means that separate clones for branches are made on the server. While this means that every clone/branch needs to be made on the server and published separately, this isn't too much of an issue for us, as we are used to checking out SVN branches which come down as separate copies. I'm worried, however, that merging changes and following history may become difficult between branches in this model.

    Backup: If programmers in our team make local clones of a branch, how do they back up the local clone? We're used to seeing SVN commit messages like this on a feature branch: "Interim commit: db function not yet working". I can't see a way of doing this easily in hg.

    Advice gratefully received. Rory
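    On the backup point, a minimal sketch of one common pattern, assuming each developer gets a private repository on the server to push to (the user and path in the URL are illustrative): interim commits stay local and cheap, and pushing the whole clone backs them up without touching the shared integration repo.

        # inside the developer's local clone of the feature branch
        hg commit -m "Interim commit: db function not yet working"
        # push everything, interim commits included, to a private backup repo on the server
        # (that repo would be created once on the server with 'hg init')
        hg push ssh://hg@yourserver//srv/hg/backup/rory/feature-x

    Because Mercurial commits are purely local until pushed, "commit early, commit often" messages like the one above never have to reach the shared branch history unless you want them to.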

    Read the article

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello All,

    We are a new Agile shop and we are encountering an issue that I hope others have seen. In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create feature branches off the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up as needed to stay integrated with the Trunk as other features are accepted and committed. But now we have numerous feature branches. Each is focused, has a short life cycle, and is pushed to the Trunk as it is completed, so we are not debating the need for the branches and are trying very much to be Agile.

    My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle and complete the validation, regression testing and all configuration work before pushing to the Trunk. Once reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting push-back on the Trunk validation. The argument is that the developers can merge the code and not need the QA validation steps, because they already completed the work in the feature branch; therefore, the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution is, instead of a build and regression testing, to have the developer diff the feature branch and the newly merged Trunk. That process, in their minds, would replace the regression testing I asked for.

    So what do you require when you reintegrate back to the Trunk? What are the issues we will encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of integrating the branches?

    Thanks for any input. LoneCM

    Read the article

  • New projects not built when target platform is set explicitly

    - by stiank81
    I create a new solution with one project, and then change the target platform from "Any CPU" to "x86". After this, newly added projects don't get built by default, and their target platform doesn't follow the global settings. Why?!

    Looking at the Configuration Manager, new projects added are not checked to "Build", and they get target platform "Any CPU" instead of the globally set x86. Why is this happening? I expect new projects too to get the globally set and defined x86 target platform.

    Some things I've tried:

    - Toggling the global platform back to Any CPU, and then to x86 again. No change.
    - Choosing the platform explicitly for the new project. x86 is not available in the list, and when I choose <New..> and try adding it I'm not allowed, as "a solution platform with the same name already exists".
    - On the build properties for the new project I can't change the platform in the Configuration section, but I can set "Platform target" to x86 in the General section. It is, however, not clear whether this actually makes a difference, and it doesn't respond if I change the target platform globally later.

    Initially I thought this was a problem from converting my solution from VS2008 to VS2010, but the problem applies in both places, i.e. when I create a solution in VS2008 and just stay in VS2008 I still get the problem.

    Read the article

  • TFS: How does merging work?

    - by Johannes Rudolph
    I have a release branch (RB, starting at C5) and a changeset on trunk (C10) that I now want to merge onto RB. The file has changes at C3 (common to both), one in C7 on RB, one in C9 (trunk) and one in C10 (trunk). So the history for my changed file looks like this:

        RB:    C5 -> C7
        Trunk: C3 -> C9 -> C10

    When I merge C10 from trunk to RB, I'd expect to see a merge window showing me C10 | C3 | C7, since C3 is the common ancestor revision and C10 and C7 are the tips of my two branches respectively. However, my merge tool shows me C10 | C9 | C7. My merge tool is configured to show %1 (OriginalFile) | %3 (BaseFile) | %2 (ModifiedFile), so this tells me TFS chose C9 as the base revision.

    This is totally unexpected and completely contrary to the way I'm used to merges working in Mercurial or Git. Did I get something wrong, or is TFS trying to drive me nuts with merging? Is this the default TFS merge behavior? If so, can you provide insight into why they chose to implement it this way?

    I'm using TFS 2008 with VS2010 as a client.

    Read the article

  • Unit test project doesn't recognize the classes it was generated from

    - by DougLeary
    I have a fairly simple file-system website consisting of one .aspx page and several classes in separate .cs files. Everything is on my own HD. The web app itself builds and runs fine.

    Out of curiosity I decided to try out Visual Studio's nifty, easy-to-use unit test feature, so I opened each class file and clicked Create Unit Tests. VS generated a test project containing a set of test classes and some other files. Easy! But when I try to build or run the test project it throws a series of build errors, one for every class:

        The type or namespace name 'class-name' could not be found (are you missing a using directive or an assembly reference?)

    Somebody asked if my test project has a reference to the original project. Well, no, because the original project is a file-system website. It has no bin folder and no DLL, so there's nothing to reference as far as I can tell. I would think that since VS generated these unit tests it would generate whatever references it needs, but apparently not.

    Is generating unit tests for file-system web apps an undocumented no-no, or is there a magic trick to getting it to work?

    Read the article

  • Problems compiling C++ code using Cygwin

    - by user343403
    I am trying to compile some source code in Cygwin (on Windows 7) and get the following error when I run the makefile:

        g++ -DHAVE_CONFIG_H -I. -I.. -I.. -Wall -Wextra -Werror -g -O2 -MT libcommon_a-Fcntl.o -MD -MP -MF .deps/libcommon_a-Fcntl.Tpo -c -o libcommon_a-Fcntl.o `test -f 'Fcntl.cpp' || echo './'`Fcntl.cpp
        Fcntl.cpp: In function int setCloexec(int):
        Fcntl.cpp:8: error: 'F_GETFD' was not declared in this scope
        Fcntl.cpp:8: error: 'fcntl' was not declared in this scope
        Fcntl.cpp:11: error: 'FD_CLOEXEC' was not declared in this scope
        Fcntl.cpp:12: error: 'F_SETFD' was not declared in this scope
        make[4]: *** [libcommon_a-Fcntl.o] Error 1
        make[4]: Leaving directory `/abyss-1.1.2/Common'
        make[3]: *** [all-recursive] Error 1
        make[3]: Leaving directory `/abyss-1.1.2'
        make[2]: *** [all] Error 2
        make[2]: Leaving directory `/abyss-1.1.2'
        make[1]: *** [.build-conf] Error 2
        make[1]: Leaving directory `/cygdrive/c/Users/Martin/Documents/NetBeansProjects/abyss-1.1.2_1'
        make: *** [.build-impl] Error 2

    The problem file is:

        #include "Fcntl.h"
        #include <fcntl.h>

        /* Set the FD_CLOEXEC flag of the specified file descriptor. */
        int setCloexec(int fd)
        {
            int flags = fcntl(fd, F_GETFD, 0);
            if (flags == -1)
                return -1;
            flags |= FD_CLOEXEC;
            return fcntl(fd, F_SETFD, flags);
        }

    I don't understand what is going on. The file fcntl.h is available, and the identifiers that it says were not declared in this scope do not give an error when I compile the file on its own. Any help would be much appreciated. Many thanks.

    Read the article

  • Java 6 web services: share domain-specific classes between server and client

    - by user173446
    Hi all,

    Context: consider the Engine class defined below being a parameter of some webservice method. As we have both server and client in Java, we may have some benefits (???) in sharing the Engine class between server and client (i.e. we may put it in a common jar file to be added to both the client and server classpath). Some benefits would be:

    - we keep specific operations like 'brushEngine' in the same place
    - the build is faster, as in our case we do not need to generate Java code for the client classes but can use them from the server build
    - if we later change the server implementation of 'brushEngine', this is reflected automatically in the client

    Questions:

    1. How do I share the Engine class detailed below using the Java 6 tools (i.e. wsimport, wsgen etc.)?
    2. Are there other tools for Java that can achieve this sharing?
    3. Is sharing a case that Java 6 web services support is missing?
    4. Can this case be reduced to other web service usage patterns?

    Thanks.

    Code:

        public class Engine {

            private String engineData;

            public String getData(){
                return engineData;
            }

            public void setData(String value){
                this.engineData = value;
            }

            public void brushEngine(){
                engineData = "BrushedEngine" + engineData;
            }
        }

    Read the article

  • g++ Linking Error on Mac while compiling FFMPEG

    - by Saptarshi Biswas
    g++ on Snow Leopard is throwing linking errors on the following piece of code.

    test.cpp:

        #include <iostream>
        using namespace std;

        #include <libavcodec/avcodec.h>    // required headers
        #include <libavformat/avformat.h>

        int main(int argc, char**argv)
        {
            av_register_all();    // offending library call
            return 0;
        }

    When I try to compile this using the following command:

        g++ test.cpp -I/usr/local/include -L/usr/local/lib \
            -lavcodec -lavformat -lavutil -lz -lm -o test

    I get the error:

        Undefined symbols:
          "av_register_all()", referenced from:
              _main in ccUD1ueX.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    Interestingly, if I have equivalent C code, test.c:

        #include <stdio.h>
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>

        int main(int argc, char**argv)
        {
            av_register_all();
            return 0;
        }

    gcc compiles it just fine:

        gcc test.c -I/usr/local/include -L/usr/local/lib \
            -lavcodec -lavformat -lavutil -lz -lm -o test

    I am using Mac OS X 10.6.5:

        $ g++ --version
        i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
        $ gcc --version
        i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)

    FFMPEG's libavcodec, libavformat etc. are C libraries and I have built them on my machine like this:

        ./configure --enable-gpl --enable-pthreads --enable-shared \
            --disable-doc --enable-libx264
        make && sudo make install

    As one would expect, libavformat indeed contains the symbol av_register_all:

        $ nm /usr/local/lib/libavformat.a | grep av_register_all
        0000000000000000 T _av_register_all
        00000000000089b0 S _av_register_all.eh

    I am inclined to believe g++ and gcc have different views of the libraries on my machine. g++ is not able to pick up the right libraries. Any clue?
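    One detail worth noting: the undefined symbol is reported as av_register_all() with C++-style parentheses, which suggests g++ is looking for a C++-mangled name for what is a plain C symbol (_av_register_all in the library). A minimal sketch of the usual remedy, assuming the installed FFMPEG headers do not already wrap themselves in extern "C" when included from C++:

        #include <iostream>
        using namespace std;

        // Tell g++ these declarations use C linkage, so the linker looks for the
        // unmangled C symbol (_av_register_all) instead of a mangled C++ one.
        extern "C" {
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>
        }

        int main(int argc, char **argv)
        {
            av_register_all();
            return 0;
        }

    The gcc build works because C never mangles names, which is why the same libraries link fine from test.c.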

    Read the article

  • mvn deploy to AWS (ssh via distributionManagement)

    - by Dexter
    I am working on deploying the WAR file to AWS using Maven. I am planning to use 'mvn deploy' for this, which would ssh the WAR file to AWS. I am following http://maven.apache.org/plugins/maven-deploy-plugin/examples/deploy-ssh-external.html.

    This is my POM file:

        <project>
        ...
          <distributionManagement>
            <repository>
              <id>ssh-aws</id>
              <url>scpexe://<ec2 instance>.compute-1.amazonaws.com</url>
            </repository>
          </distributionManagement>
          <build>
            <extensions>
              <!-- Enabling the use of FTP -->
              <extension>
                <groupId>org.apache.maven.wagon</groupId>
                <artifactId>wagon-ssh-external</artifactId>
                <version>1.0-beta-6</version>
              </extension>
            </extensions>
          </build>
        ..
        </project>

    This is my settings.xml:

        <server>
          <id>ssh-aws</id>
          <username>aws-user</username>
        </server>

    The only issue is that I am unable to figure out the URL in the distributionManagement node of pom.xml. I am able to SSH into the AWS server with the following:

        ssh -i ~/pemfile/pemfile-key.pem aws-user@<ec2 instance>.compute-1.amazonaws.com

    But when I run mvn clean deploy, I receive this:

        Exit code: 1 - Permission denied (publickey). -> [Help 1]

    Thanks in advance.
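    One thing worth checking for the "Permission denied (publickey)" part: the interactive ssh command works because it passes the .pem key with -i, but the Maven server entry never mentions that key, so the deploy has no credentials to offer. A minimal sketch of the settings.xml server block pointing at the same key (the path simply mirrors the one used in the ssh command above):

        <server>
          <id>ssh-aws</id>
          <username>aws-user</username>
          <!-- same key the working ssh command passes with -i -->
          <privateKey>${user.home}/pemfile/pemfile-key.pem</privateKey>
        </server>

    Whether wagon-ssh-external honours the privateKey element directly, or whether the key instead needs to be registered for the EC2 host in ~/.ssh/config (since scpexe:// shells out to the system ssh), is worth verifying against the wagon documentation; either way, mvn deploy never sees the .pem unless it is configured somewhere.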

    Read the article

  • Unknown and unreproducible crash causes App Store rejection

    - by Daniel Johnson
    After submitting our application several times, we continue to receive the following response:

        Thank you for submitting My App to the App Store. We've reviewed your application and determined that we cannot post this version of your iPad application to the App Store because My App is crashing on iPad running iPhone OS 3.2 and Mac OS X 10.6.2. My App crashes upon launch. Unfortunately, crash logs have not been generated.

    However, re-signing the same build with the Ad Hoc entitlements and loading the build onto the device yields no such crash. After a number of attempts, the application simply does not crash as reported by the reviewer. Furthermore, the reviewer does not provide any useful logging that may have been generated by SpringBoard, such as an exit status, or even whether it had worked properly on any other device. There are no calls to explicitly exit or quit the application in the code line, and yet the application terminates on startup.

    What might cause an application to terminate in such a manner? Under what conditions is an application tested that might not arise in a development environment? Could it be the result of a signing issue that the submission validation system is simply unable to catch?

    Thanks in advance.

    Read the article
