Search Results

Search found 1751 results on 71 pages for 'builds'.

Page 9/71 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • SQL Server 2008 Cumulative Updates are available!

    - by AaronBertrand
    SQL Server 2008 Service Pack 2 Cumulative Update #9
    KB article: http://support.microsoft.com/kb/2673382
    Build number is 10.00.4330
    7 fixes (5 in the database engine, 2 in SSAS)
    Relevant for: builds of SQL Server between 10.00.4000 and 10.00.4329

    SQL Server 2008 Service Pack 3 Cumulative Update #4
    KB article: http://support.microsoft.com/kb/2673383
    Build number is 10.00.5775
    10 fixes listed
    Relevant for: builds of SQL Server between 10.00.5000 and 10.00.5774

    As usual, I'll post my standard disclaimer...(read more)
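    Not sure which branch you are on? A quick check using standard SQL Server metadata (a hedged aside, not from the original post):

        -- ProductVersion is e.g. '10.0.5500.0'; ProductLevel is e.g. 'SP3'
        SELECT SERVERPROPERTY('ProductVersion') AS Build,
               SERVERPROPERTY('ProductLevel')   AS ServicePack;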

    Read the article

  • Cumulative Updates available for SQL Server 2008 R2

    - by AaronBertrand
    Today the SQL Server Release Services Team has pushed out new cumulative updates for SQL Server 2008 R2:

    Cumulative Update #14 for SQL Server 2008 R2 RTM - KB article http://support.microsoft.com/kb/2703280 - build number 10.50.1817.0 - 7 fixes - relevant for builds between 10.50.1600 and 10.50.1816

    Cumulative Update #7 for SQL Server 2008 R2 Service Pack 1 - KB article http://support.microsoft.com/kb/2703282 - build number 10.50.2817.0 - 24 fixes - relevant for builds between 10.50.2500 and 10.50.2816...(read more)

    Read the article

  • Is there any way to distribute x264 encoding jobs across multiple computers (to increase the encoding speed)?

    - by Breakthrough
    Does anyone know of a current, active solution to encoding x264 videos across many computers (via the network) to increase encoding FPS? Brownie points for cross-platform and open source, but just so you all know, I usually use Windows.

    Programs that I have heard of, and why I do not believe they are suitable:

    x264farm: Not actively developed. Good interface, but does not support two-pass encoding, and fails with newer x264 builds.
    ELDER: Again, not actively developed, but my issue was that it didn't work with new x264 builds, and it was very difficult to configure (read: randomly stopped working).

    While I don't absolutely need a program which is being actively developed, I would like one that supports two-pass encoding and works with new(er) x264 builds.

    Additional information: So far, I've offered (and awarded!) two separate bounties on this question since I first posted it over two years ago, and I still haven't found a solution to this problem. What I'm looking for, basically, is a simple program to allow me to encode x264 videos using the processing power of multiple computers connected over a LAN. Furthermore, it would be nice if it worked with new(er) x264 builds and supported two-pass encoding. If at any time someone has an updated answer, or a new solution to this problem, please post it and it will be given some consideration.

    Read the article

  • UppercuT v1.0 and 1.1 – Linux (Mono), Multi-targeting, SemVer, Nitriq and Obfuscation, oh my!

    - by Robz / Fervent Coder
    Recently UppercuT (UC) quietly released version 1 (in August). I'm pretty happy with where we are, although I think it's a few months later than I originally planned. I'm glad I held it back; it gave me some more time to think about some things a little more, and also the opportunity to receive a patch for running builds with UC on Linux. We also released v1.1 very recently (December).

    UppercuT v1 Builds On Linux: Perhaps the most significant change to UC going v1 is that it now supports builds on Linux using Mono! This is thanks mostly to Svein Ackenhausen for the patches and for working with me on getting it all working while not breaking the Windows builds! This means you can use Mono on Windows or Linux. Notice the shell files to execute with Linux that come as part of UC now.

    Multi-Targeting: Perhaps one of the hardest things to do in an automated build is multi-targeting. At v1 this is early, and possibly prone to some issues, but available. We believe in making everything stupid simple, so it's as simple as adding a comma to the microsoft.framework property, i.e. "net-3.5, net-4.0", to suddenly produce both framework builds. When you build, this is what you get (if you meet each framework's requirements). At this time you have to let UC override the build location (as it does by default) or this will not work.

    Semantic Versioning: By now many of you have been using UppercuT for a while and have watched how we have done versioning. Many of you who use git already know we put the revision hash in the informational/product version as the last octet. At v1, UppercuT has adopted the semantic versioning scheme. What does that mean? This is a short read, but a good one: http://SemVer.org. SemVer (Semantic Versioning) is really using versioning for what it was meant for. You have three octets, Major.Minor.Patch, as in 1.1.0. UC will use three different versioning concepts: one for the assembly version, one for the file version, and one for the product version.

    All versions - the first three octets of the version are owned by SemVer: Major.Minor.Patch, i.e. 1.1.0.
    Assembly Version - the assembly version follows SemVer most closely; the last digit is always 0: Major.Minor.Patch.0, i.e. 1.1.0.0.
    File Version - the file version occupies the build number as the last digit: Major.Minor.Patch.Build, i.e. 1.1.0.2650.
    Product/Informational Version - the last octet of your product/informational version is the source control revision/hash: Major.Minor.Patch.RevisionOrHash, i.e. (TFS/SVN) 1.1.0.235, or (Git/HG) 1.1.0.a45ace4346adef0.

    SemVer is not on by default; the passive versioning scheme is still in effect. Notice that version.use_semanticversioning has been added to the UppercuT.config file (and version.patch in support of the third octet).

    Gems Support: Gems support was added at v1. This will probably be deprecated at some point once there is an announced sunset for Nu v1. Application gems may keep it around, since there is no alternative for that yet (CoApp would be a possible replacement).

    Nitriq Support: Nitriq is a code analysis tool like NDepend, built by Mr. Jon von Gillern. It uses the LINQ query language, so you can use a familiar idiom when analyzing your code base. It's a pretty awesome tool that has a free version for those looking to do code analysis! To use Nitriq with UC, you are going to need the console edition. To take advantage of Nitriq, you just need to update the location of Nitriq in the config, then add the Nitriq project files at the root of your source.
    Please refer to the Nitriq documentation on how these are created.

    UppercuT v1.1 Obfuscation: One thing I started looking into was an easy way to obfuscate my code. I came across EazFuscator, which is both free and awesome, plus the GUI for it is super simple to use. How do you make obfuscation even easier? Make it a convention and a configurable property in the UC config file! And the code gets obfuscated!

    Closing: Definitely get out and look at the new release. It contains lots of chocolaty (sp?) goodness. And remember, the upgrade path is almost as simple as drag and drop!
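    A minimal sketch of what those settings might look like in UppercuT's NAnt-style settings file; the property names (microsoft.framework, version.use_semanticversioning, version.patch) come from the post, while the surrounding XML shape is an assumption, so check your own UppercuT.config:

        <!-- Hypothetical UppercuT.config excerpt: property names from the post,
             element syntax assumed from UC's NAnt-style settings files. -->
        <property name="microsoft.framework" value="net-3.5, net-4.0" overwrite="false" />
        <property name="version.use_semanticversioning" value="true" overwrite="false" />
        <property name="version.patch" value="0" overwrite="false" />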

    Read the article

  • Is there any way to distribute x264 encoding jobs across multiple computers (to increase the encoding speed)?

    - by Breakthrough
    Does anyone know of a current, active solution to encoding x264 videos across many computers (via the network) to increase encoding FPS? I heard in the past of the project x264farm; unfortunately, it is not actively developed anymore, and isn't compatible with newer x264 builds. I'm looking for a current solution which is compatible with newer builds of x264. Just to note this: I'm a Windows user, so I'm only looking for Windows solutions (or at the very least, Linux). I've also seen the ELDER distributed encoder, but the quality varies depending on the settings you use - I'd prefer a solution similar to x264farm as noted above (the documentation outlines the encoding process), but one that is compatible with new(er) x264 builds and is preferably actively developed. Final Edit: Unfortunately, the bounty for this question has expired, and I haven't found a decent solution for this. So if at any time someone finds a new distributed encoding solution for x264 (or any h.264 codec, for that matter) - please answer this question! I'd love to discover an ideal method to make this work!

    Read the article

  • WebSocket Samples in GlassFish 4 build 66 - javax.websocket.* package: TOTD #190

    - by arungupta
    This blog has published a few entries on using the JSR 356 Reference Implementation (Tyrus) integrated in GlassFish 4 promoted builds:

    TOTD #183: Getting Started with WebSocket in GlassFish
    TOTD #184: Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket
    TOTD #186: Custom Text and Binary Payloads using WebSocket
    TOTD #189: Collaborative Whiteboard using WebSocket in GlassFish 4

    The earlier blogs created a WebSocket endpoint as:

        import javax.net.websocket.annotations.WebSocketEndpoint;

        @WebSocketEndpoint("websocket")
        public class MyEndpoint { . . .

    Based upon the discussion in the JSR 356 EG, the package names have changed to javax.websocket.*. So the updated endpoint definition will look like:

        import javax.websocket.WebSocketEndpoint;

        @WebSocketEndpoint("websocket")
        public class MyEndpoint { . . .

    The POM dependency is:

        <dependency>
            <groupId>javax.websocket</groupId>
            <artifactId>javax.websocket-api</artifactId>
            <version>1.0-b09</version>
        </dependency>

    And if you are using GlassFish 4 build 66, then you also need to provide a dummy EndpointFactory implementation as:

        import javax.websocket.WebSocketEndpoint;

        @WebSocketEndpoint(value="websocket", factory=MyEndpoint.DummyEndpointFactory.class)
        public class MyEndpoint { . . .

            class DummyEndpointFactory implements EndpointFactory {
                @Override
                public Object createEndpoint() { return null; }
            }
        }

    This is only interim and will be cleaned up in subsequent builds. But I've seen a couple of complaints about this already, and so this deserves a short blog. Have you been tracking the latest Java EE 7 implementations in GlassFish 4 promoted builds?

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information, and more, is available in TFS.

    Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work, except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this, to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and can be augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS.

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
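    To make that concrete, here is a sketch of a saved query using the asof clause; the System.* field names are standard TFS system fields, but the query itself is illustrative rather than taken from the original article:

        SELECT [System.Id], [System.Title], [System.State]
        FROM WorkItems
        WHERE [System.WorkItemType] = 'Requirement'
        ORDER BY [System.Id]
        ASOF '2010-06-01'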
    Figure: Saving Queries locally can be useful

    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box, but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it both makes it difficult for them to reject a Requirement when it passes all of the tests, and provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business has to sign off on the Test Case moving from the "Design" to "Ready" states.

    The second is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful in the circumstance when you want to track a test, and the test results, of a Unit Test designed to prove the existence (and then the non-recurrence) of a Bug.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, similar to Bugs, if the Change Request is big enough that it would require to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

    Business Analysts manage Requirements and Change Requests
    Programmers manage Tasks and check-in against Change Requests and Bugs
    Testers manage Bugs and Test Cases
    Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Build vs Rebuild

    - by prash
    Build means compile and link only the source files that have changed since the last build, while Rebuild means compile and link all source files regardless of whether they changed or not. Build is the normal thing to do and is faster. Sometimes the versions of project target components can get out of sync, and a Rebuild is necessary to make the build successful. In practice, you never need to Clean.

    Build or Rebuild Solution builds or rebuilds all projects in your solution, while Build or Rebuild <project name> builds or rebuilds the StartUp project. To set the StartUp project, right click on the desired project name in the Solution Explorer tab and select Set as StartUp project. The project name now appears in bold.

    Compile just compiles the source file currently being edited. This is useful to quickly check for errors when the rest of your source files are in an incomplete state that would prevent a successful build of the entire project. Ctrl-F7 is the shortcut key for Compile.

    All source files that have changed are saved when you request a build/rebuild, so you don't have to save them first. When you run your executable (F5 or Ctrl-F5), Visual Studio saves all your changed source files and builds anything that changed, so you don't need to explicitly do those steps every time. This allows for quick "trial and error" debugging.

    Incidentally, if you like those little Visual Studio keyboard shortcuts, you can download posters of the C# and the VB.NET ones, respectively (I am personally a big fan of using keyboard shortcuts :) ).

    Visual Studio 2010: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=92ced922-d505-457a-8c9c-84036160639f

    Visual Studio 2005 C#: http://www.microsoft.com/downloads/details.aspx?FamilyID=c15d210d-a926-46a8-a586-31f8a2e576fe&DisplayLang=en
    VB.NET: http://www.microsoft.com/downloads/details.aspx?FamilyID=6bb41456-9378-4746-b502-b4c5f7182203&DisplayLang=en
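    A hedged aside for command-line users: the same operations map onto standard MSBuild targets (MySolution.sln standing in for your own solution file):

        msbuild MySolution.sln /t:Build      REM incremental, like Build
        msbuild MySolution.sln /t:Rebuild    REM Clean followed by Build, like Rebuild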

    Read the article

  • JSR 308 Moves Forward

    - by abuckley
    I am pleased to announce a number of recent milestones for JSR 308, Annotations on Java Types:

    Adoption of JCP 2.8: Thanks to the agreement of the Expert Group, JSR 308 operates under JCP 2.8 from September 2012. There is a publicly archived mailing list for EG members, and a companion list for anyone who wishes to follow EG traffic by email. There is also a "suggestion box" mailing list where anyone can send feedback to the EG directly. Feedback will be discussed on the main EG list. Co-spec lead Prof. Michael Ernst maintains an issue tracker and a document archive.

    Early-Access Builds of the Reference Implementation: Oracle has published binaries for all platforms of JDK 8 with support for type annotations. Builds are generated from OpenJDK's type-annotations/type-annotations forest (notably the langtools repo). The forest is owned by the Type Annotations project.

    Integration with Enhanced Metadata: On the enhanced metadata mailing list, Oracle has proposed support for repeating annotations in the Java language in Java SE 8. For completeness, it must be possible to repeat annotations on types, not only on declarations. The implementation of repeating annotations on declarations is already in the type-annotations/type-annotations forest (and hence in the early-access builds above), and work is underway to extend it to types.
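    For readers who have not followed JSR 308: the feature lets annotations appear on any use of a type, not just on declarations. A hedged illustration follows; the @NonNull and @Readonly annotations are the kind supplied by a checker library, not by the JDK itself:

        // Fragment - assumes a checker library supplies @NonNull and @Readonly.
        List<@NonNull String> names;                  // annotation on a type argument
        String s = (@NonNull String) someObject;      // annotation on a cast
        void log(@Readonly List<String> messages) {}  // annotation on a parameter's type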

    Read the article

  • Why you need to tag your build servers in TFS

    - by Martin Hinshelwood
    At SSW we use gated check-in for all of our projects. The benefits are based on the number of developers you have working on your project. Let's say you have 30 developers and each developer breaks the build once per month. That could mean that you have a broken build every day! Gated check-ins help, but they have a down side that manifests as queued builds and moaning developers. The way to combat this is to have more build servers, but with that comes complexity. Inevitably you will need to install components that you would expect to be installed on target computers, but how do you keep track of which build servers have which bits? What about a geographically diverse team? If you have a centrally controlled infrastructure you might have build servers in multiple regions, and you don't want teams in Sydney copying files from Beijing and vice versa on a regular basis. So, what is the answer? It's Tags. You can add a set of Tags to your agents and then set which tags to look for in the build definition.

    Figure: Open up your Build Controller Manager

    Select "Build | Manage Build Controllers…" to get a list of all of your controllers and the build agents that are associated with them.

    Figure: The list of build agents and their controllers

    Each of these Agents might be subtly different. For example, only one of these agents has FTP software installed. This software is required for only one of the many builds we have set up. My ethos for build servers is to keep them as clean as possible and not to install anything that is not absolutely necessary. For me that means anything that does not add a *.target file is suspect, and should really be under version control and called via the command line from there. So, some of the things you may install are: Silverlight 4 SDK, Visual Studio 2010, Visual Studio 2008, WIX, etc. You should not install things that will not end up on the target user's computer. For a website that means something different to a client than to a server, but I am sure you get the idea. One thing you can do to make things easier is to create a tag for each of the things that you install. That way developers can find the things they need. We may change to using a more generic tagging structure (like "Web Application" or "WinForms Application") if this gets too unwieldy, but for now the list of tags is limited.

    Figure: Tags associated with one of our build agents

    Once you have your Build Agents all tagged up, ALL your builds will start to fail. This is because the default setting for a build is to look for an Agent that exactly matches the tags for the build, and we have not added any yet. The quick way to fix this is to change the "Tag Comparison Operator" from "ExactMatch" to "MatchAtLeast" to get your build immediately working.

    Figure: Tag Comparison Operator changed to MatchAtLeast to get builds to run.

    The next thing to do is look for specific tags. You just select from the list of available tags and the controller will make sure you get to a build agent that uses them.

    Figure: I want Silverlight, VS2010 and WIX, but do not care about Location.

    And there you go: you can now have build agents for different purposes and regions within the same environment. You can also use name filtering, so if you have a good Agent naming convention you can filter by that for regions. For example, your Agents might be "SYDVMAPTFSBP01" and "SYDVMAPTFSBP02", so a name filter of "SYD*" would target all of the Sydney build agents.
    Figure: Agent names can be used for filtering as well

    This flexibility will allow you to build better software by reducing the likelihood of not having a certain dependency on the target machines.

    Figure: Setting the name filter based on server location

    Used in combination there is a lot of power here to coordinate tens of build servers for multiple projects across multiple regions, so your developers get the most out of your environment.

    Technorati Tags: ALM,TFBS,TFS 2010,TFS Admin

    Read the article

  • MySql: Problem when using a temporary table

    - by Alex
    Hi, I'm trying to use a temporary table to store some values I need for a query. The reason for using a temporary table is that I don't want to store the data permanently, so different users can modify it at the same time. That data is just stored for a second, so I think a temporary table is the best approach for this. The thing is that it seems that the way I'm trying to use it is not right (the query works if I use a permanent one). This is an example of the query:

        CREATE TEMPORARY TABLE SearchMatches (
            PatternID int not null primary key,
            Matches int not null
        );

        INSERT INTO SearchMatches (PatternID, Matches) VALUES
            ('12605','1'),('12503','1'),('12587','2'),('12456','1'),
            ('12457','2'),('12486','2'),('12704','1'),('12686','1'),
            ('12531','2'),('12549','1'),('12604','1'),('12504','1'),
            ('12586','1'),('12548','1'),('12530','1'),('12687','2'),
            ('12485','1'),('12705','1');

        SELECT pat.id, signatures.signature, products.product, versions.version,
               builds.build, pat.log_file, sig_types.sig_type, pat.notes, pat.kb
        FROM patterns AS pat
             INNER JOIN signatures ON pat.signature = signatures.id
             INNER JOIN products ON pat.product = products.id
             INNER JOIN versions ON pat.version = versions.id
             INNER JOIN builds ON pat.build = builds.id
             INNER JOIN sig_types ON pat.sig_type = sig_types.id,
             SearchMatches AS sm
             INNER JOIN patterns ON patterns.id = sm.PatternID
        WHERE sm.Matches <> 0
        ORDER BY sm.Matches DESC, products.product, versions.version, builds.build
        LIMIT 0, 50

    Any suggestion? Thanks.

    Read the article

  • How to use MSBuild to target multiple versions of .NET Framework?

    - by McKAMEY
    I am improving the builds for an open source project which currently supports .NET Framework v2.0, v3.5, and now v4.0. Up until now, I've restricted myself to v2.0 to ensure compatibility, but with VS2010 I am interested in having real targeted builds. I'm looking for some guidance on how to edit the MSBuild csproj/soln to be able to cleanly produce builds for each target. I'm willing to have complexity in the csproj and in a batch file to control the build. My goal is to be able to have a command line script that could produce the builds without needing Visual Studio installed, but only the necessary .NET Framework(s). Ideally, I'd like to minimize dependencies on additional software (e.g. NAnt). I'm pretty sure this can be done but am having trouble finding a definitive guide on setting it up and best practices. Bonus: my next step after getting this set up will be to better support Mono Framework. Any help on doing this same thing for Mono would be much appreciated.
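    One hedged sketch of the usual approach (TargetFrameworkVersion and OutputPath are standard MSBuild properties, but the exact layout and the MyProject.csproj name are illustrative, not taken from the project in question): let the framework version default in the csproj and override it from the command line.

        <!-- In the .csproj: pick a default, allow command-line override -->
        <PropertyGroup>
          <TargetFrameworkVersion Condition="'$(TargetFrameworkVersion)' == ''">v2.0</TargetFrameworkVersion>
          <OutputPath>bin\$(Configuration)\$(TargetFrameworkVersion)\</OutputPath>
        </PropertyGroup>

    A batch file can then produce each build using only msbuild.exe, which ships with the .NET Framework, so no Visual Studio install is needed:

        msbuild MyProject.csproj /p:Configuration=Release /p:TargetFrameworkVersion=v2.0
        msbuild MyProject.csproj /p:Configuration=Release /p:TargetFrameworkVersion=v3.5
        msbuild MyProject.csproj /p:Configuration=Release /p:TargetFrameworkVersion=v4.0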

    Read the article

  • How to use VC++ intrinsic functions w/o run-time library

    - by Adrian McCarthy
    I'm involved in one of those challenges where you try to produce the smallest possible binary, so I'm building my program without the C or C++ run-time libraries (RTL). I don't link to the DLL version or the static version. I don't even #include the header files. I have this working fine.

    For some code constructs, the compiler generates calls to memset(). For example:

        struct MyStruct {
            int foo;
            int bar;
        };

        MyStruct blah = {}; // calls memset()

    Since I don't include the RTL, this results in a missing symbol at link time. I've been getting around this by avoiding those constructs. For the given example, I'll explicitly initialize the struct:

        MyStruct blah;
        blah.foo = 0;
        blah.bar = 0;

    But memset() can be useful, so I tried adding my own implementation. It works fine in Debug builds, even for those places where the compiler generates an implicit call to memset(). But in Release builds, I get an error saying that I cannot define an intrinsic function. You see, in Release builds, intrinsic functions are enabled, and memset() is an intrinsic.

    I would love to use the intrinsic for memset() in my release builds, since it's probably inlined and smaller and faster than my implementation. But I seem to be in a catch-22. If I don't define memset(), the linker complains that it's undefined. If I do define it, the compiler complains that I cannot define an intrinsic function. I've tried adding #pragma intrinsic(memset) with and without declarations of memset, but no luck.

    Does anyone know the right combination of definition, declaration, #pragma, and compiler and linker flags to get an intrinsic function without pulling in RTL overhead? Visual Studio 2008, x86, Windows XP+.
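    For what it's worth, a hedged sketch of one known workaround (not necessarily the asker's final answer): MSVC's #pragma function(memset) marks the definition in this one file as the function form of the intrinsic, which sidesteps the "cannot define an intrinsic" error and gives the linker a symbol, while other translation units can still use the intrinsic expansion. Treat this as illustrative:

        // Minimal CRT-free memset. <stddef.h> is compiler-provided and adds no
        // link-time dependency. #pragma function tells MSVC we are defining the
        // out-of-line form of the intrinsic in this translation unit.
        #include <stddef.h>

        #pragma function(memset)
        extern "C" void* memset(void* dest, int value, size_t count)
        {
            unsigned char* p = static_cast<unsigned char*>(dest);
            while (count--)
                *p++ = static_cast<unsigned char>(value);
            return dest;
        }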

    Read the article

  • How to change the build directory of a Hudson job?

    - by mark
    Dear ladies and sirs. My C: drive is full. I wish to move the builds folder from the job to another location. I can cheat with the help of the JUNCTION utility to redirect the original builds folder, but I am interested to know if there is a Hudson way to do it right. Thanks.

    Read the article

  • Do not compile t4 file

    - by brian b
    Suddenly, after doing a TFS 2010 get, Visual Studio 2010 is attempting to compile my .tt file as if it were C#. Moreover, anytime I set it to "Build Action=None", Build Action gets mysteriously reset to Compile. This is breaking our builds on the desktop. I can get builds to work on the desktop by closing and then reopening VS. Our builds on TFS are totally broken because of this. What to do? The template generates a (totally OK) C# file, so I need the project to build. I tried changing the file extension from .tt to .donotbuilddammit, but that had no effect.
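    For anyone hitting the same thing, one place worth checking is the raw csproj, since the IDE setting sometimes loses out to what is actually on disk. A hedged sketch of the standard shape of T4 entries in a csproj (the file names here are hypothetical):

        <!-- The template itself should be a None item, not Compile -->
        <None Include="MyTemplate.tt">
          <Generator>TextTemplatingFileGenerator</Generator>
          <LastGenOutput>MyTemplate.cs</LastGenOutput>
        </None>
        <!-- Only the generated output is compiled -->
        <Compile Include="MyTemplate.cs">
          <AutoGen>True</AutoGen>
          <DependentUpon>MyTemplate.tt</DependentUpon>
        </Compile>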

    Read the article

  • Eclipse antRunner command line build has wrong dependency build order

    - by Sam Jones
    My team's Eclipse RCP app builds fine from the Eclipse IDE. When I try to build it from the command line, using the antRunner application, it builds the plugins out of order -- a plugin builds before its dependencies are built, and so can't resolve some of the needed classes. Where should I look to fix this? As far as I can tell, the dependencies are set up correctly (a feature that depends on a plugin that depends on another plugin). I have my build.properties and folder structure set up as specified here. Is there anything else I should look at?

    Read the article

  • Delete the sources from a build after a build

    - by Vizirship
    I have about 60 TFS builds that run on a bunch of machines that all build quite regularly. We're constantly running out of space, and it's getting frustrating seeing 80 gigs of TFS sources on our build machines. Hell, we used 20 gigs of hard drive space over the weekend! I'm looking for a way to delete the sources for the build immediately after the build. We really don't care all that much about speed (we'd rather have builds actually complete), so downloading the sources again isn't an issue. It's mainly the SOURCE directories that take up space, not the drop folders, so retention policies don't really do anything for us. We don't care about the output of the builds, just whether or not they build successfully.

    Read the article

  • Getting \ building latest NHibernate build

    - by Alex Yakunin
    I'd like to get the latest NHibernate build, or build it on my own. The build available at SourceForge is dated Nov 2009, although I see there was a lot of activity later, especially related to LINQ development. So what is the best option? I can: Get the latest source code and try to build it. Are there any instructions for this? Get one of the latest builds shared by someone else. Are there any people maintaining such builds? Please note that I'm not interested in 8-month-old builds - I need the latest code for tests (LINQ, performance). I know there is a similar question, but it looks like the top answers there are outdated.

    Read the article

  • Converting a macro to an inline function

    - by Rob
    I am using some Qt code that adds a VERIFY macro that looks something like this:

        #define VERIFY(cond) \
        { \
            bool ok = cond; \
            Q_ASSERT(ok); \
        }

    The code can then use it whilst being certain the condition is actually evaluated, e.g.:

        Q_ASSERT(callSomeFunction()); // callSomeFunction not evaluated in release builds!
        VERIFY(callSomeFunction());   // callSomeFunction is always evaluated

    Disliking macros, I would instead like to turn this into an inline function:

        inline void VERIFY(bool condition)
        {
            Q_ASSERT(condition);
        }

    However, in release builds I am worried that the compiler would optimise out all calls to this function (as Q_ASSERT wouldn't actually do anything). Am I worrying unnecessarily, or is this likely, depending on the optimisation flags/compiler/etc.? I guess I could change it to:

        inline void VERIFY(bool condition)
        {
            (void)condition; // reference the parameter even when Q_ASSERT compiles away
            Q_ASSERT(condition);
        }

    But, again, the compiler may be clever enough to ignore the call. Is this inline alternative safe for both debug and release builds?

    Read the article

  • What "bad practice" do you do, and why?

    - by coppro
    Well, "good practice" and "bad practice" are tossed around a lot these days - "Disable assertions in release builds", "Don't disable assertions in release builds", "Don't use goto.", we've got all sorts of guidelines above and beyond simply making your program work. So I ask of you, what coding practices do you violate all the time, and more importantly, why? Do you disagree with the establishment? Do you just not care? Why should everyone else do the same? cross links: What's your favorite abandoned rule? Rule you know you should follow but don't

    Read the article

  • QotD - Nicolas de Loof on AdoptOpenJDK

    "The AdoptOpenJDK program is an initiative to get as many Java users as possible to try the OpenJDK 8 preview builds, so that feedback is collected before JDK 8 is officially released. There are many ways to contribute to this program (as explained on the wiki), but the most basic one is to start testing your own project on the Java 8 platform. CloudBees can help you there, as we just made OpenJDK 8 (preview) available on DEV@cloud so that you can configure a build job to check project compatibility. We will upgrade the JDK for all recent preview builds until JDK 8 is final."

    - Nicolas de Loof, Support Engineer at CloudBees, in a blog post on AdoptOpenJDK.

    Read the article
