Search Results

Search found 14148 results on 566 pages for '2008'.

Page 329/566 | < Previous Page | 325 326 327 328 329 330 331 332 333 334 335 336  | Next Page >

  • VS2010 - How to automatically stop compile on first compile error

    - by Ben Robbins
    {rant}First I'd like to say that this IS NOT A DUPLICATE. I've asked this question previously but it got closed as a duplicate when it isn't. This question is SPECIFIC to VS 2010 and the answers to the so-called duplicate work in VS 2008 but not in VS 2010 (at least not for me or anyone I know). So before you go closing something as a duplicate how about you read the question carefully and try the answer for yourself and see if it actually works. Apologies for the rant but there is no obvious way to contact the SO police that closed the issue or get it reopened. {/rant} At work we have a C# solution with over 80 projects. In VS 2008 we use a macro to stop the compile as soon as a project in the solution fails to build (see this question for several options for VS 2005 & VS 2008: http://stackoverflow.com/questions/134796/how-to-automatically-stop-visual-c-build-at-first-compile-error). Is it possible to do the same in VS 2010? What we have found is that in VS 2010 the macros don't work (at least I couldn't get them to work) as it appears that the environment events don't fire in VS 2010. The default behaviour is to continue as far as possible and display a list of errors in the error window. I'm happy for it to stop either as soon as an error is encountered (file-level) or as soon as a project fails to build (project-level). Answers for VS 2010 only please. If the macros do work then a detailed explanation of how to configure them for VS 2010 would be appreciated. Thanks.

    Read the article

  • How to control the download url for dotNetFx35setup.exe without using the Visual Studio bootstrapper

    - by tronda
    Previously I used the Visual Studio 2008 setup.bin to generate a bootstrapper. I had some issues with it which were difficult to resolve, so I turned to dotNetInstaller. One great thing with the VS 2008 generated bootstrapper was that I was able to control the download location for the .NET Framework. By using the MSBuild task I could specify the ComponentsLocation: <GenerateBootstrapper ApplicationFile="$(TargetFileName)" ApplicationName="MyApp" ApplicationUrl="http://$(InstallerHost)$(DownloadUrl)" BootstrapperItems="@(BootstrapperFile)" CopyComponents="True" ComponentsLocation="Relative" OutputPath="$(OutputPath)" Path="C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bootstrapper\" /> Here I'm able to use ComponentsLocation="Relative" and the bootstrapper will download from our own web server - which is what I want. Now that I no longer have the VS 2008 bootstrapper, I would like to have the same feature. The new bootstrapper downloads dotNetFx35setup.exe from a defined server, but the problem is that this ".NET bootstrapper" connects to Microsoft's servers to download the needed packages. Running the following command: dotNetFx35setup.exe /? did not show any options to control the download location. The web server will contain the package structure which the Windows SDK (v6.0A) has within the Bootstrapper\Packages directory. The structure looks like this: Packages DotNetFX DotNetFX30 DotNetFX35 DotNetFx35Client DotNetFx35SP1 ..... When I state a dependency on the .NET Framework 3.5, the DotNetFX35 directory structure gets copied into the bin/Debug directory. I've copied this directory onto the web server and it looks like this: DotNetFX35 dotNetFX20 dotNetFX30 dotNetFX35 x64 netfx35_x64.exe x86 netfx35_x86.exe dotNetMSP dotNetFx35setup.exe The other directories contain mainly MSI, MSP and MSU files. So, any pointers on how to control the download location of the .NET Framework? Shouldn't I use the dotNetFx35setup.exe file? If not - which should I use?
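    For reference, the GenerateBootstrapper task that the VS 2008 approach relied on also supports an absolute download URL variant (ComponentsLocation="Absolute" plus ComponentsUrl). This is only a sketch of that variant; the host name and packages path are placeholders, and whether it helps once you have switched to dotNetInstaller is a separate question:

      <GenerateBootstrapper
          ApplicationFile="$(TargetFileName)"
          ApplicationName="MyApp"
          ApplicationUrl="http://$(InstallerHost)$(DownloadUrl)"
          BootstrapperItems="@(BootstrapperFile)"
          ComponentsLocation="Absolute"
          ComponentsUrl="http://$(InstallerHost)/packages/"
          OutputPath="$(OutputPath)"
          Path="C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bootstrapper\" />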

    Read the article

  • Unicode Collations problem?

    - by Bayonian
    (.NET 3.5 SP1, VS 2008, VB.NET, MSSQL Server 2008) I'm writing a small web app to test the Khmer Unicode and Lao Unicode. I have a table that stores text in Khmer Unicode with the following structure : [t_id] [int] IDENTITY(1,1) NOT NULL [t_chid] [int] NOT NULL [t_vn] [int] NOT NULL [t_v] [nvarchar](max) NOT NULL I can use LINQ to SQL to do CRUD normally. The text displays properly on the web page, even though I didn't change the default collation of MSSQL Server 2008. When it comes to searching the column [t_v], the page takes a very long time to load and, in fact, it loads every row of that column. It never compares with the "key word" criteria that I use for the search. Here's my query for the search: Public Shared Function SearchTestingKhmerTable(ByVal keyword As String) As DataTable Dim db As New BibleDataClassesDataContext() Dim query = From b In db.khmer_books _ From ch In db.khmer_chapters _ From v In db.testing_khmers _ Where v.t_v.Contains(keyword) And ch.kh_book_id = b.kh_b_id And v.t_chid = ch.kh_ch_id _ Select b.kh_b_id, b.kh_b_title, ch.kh_ch_id, ch.kh_ch_number, v.t_id, v.t_vn, v.t_v Dim dtDataTableOne = New DataTable("dtOne") dtDataTableOne.Columns.Add("bid", GetType(Integer)) dtDataTableOne.Columns.Add("btitle", GetType(String)) dtDataTableOne.Columns.Add("chid", GetType(Integer)) dtDataTableOne.Columns.Add("chn", GetType(Integer)) dtDataTableOne.Columns.Add("vid", GetType(Integer)) dtDataTableOne.Columns.Add("vn", GetType(Integer)) dtDataTableOne.Columns.Add("verse", GetType(String)) For Each r In query dtDataTableOne.Rows.Add(New Object() {r.kh_b_id, r.kh_b_title, r.kh_ch_id, r.kh_ch_number, r.t_id, r.t_vn, r.t_v}) Next Return dtDataTableOne End Function Please note that I use the exact same code and database design with Lao Unicode and it works just fine. I get the returned query as expected for the search. I can't figure out what the problem is with searching the Khmer table.
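    One way to check whether the default collation is the culprit is to force a Khmer-aware comparison directly in T-SQL. This is only a sketch: the table and column names are taken from the LINQ entity above, and it assumes one of the SQL Server 2008 *_100 Khmer collations is available on your server:

      -- list Khmer-aware collations the server offers
      SELECT name, description FROM fn_helpcollations() WHERE name LIKE 'Khmer%';

      -- force a Khmer-aware comparison for the search, independent of the database default
      DECLARE @keyword nvarchar(100) = N'...';  -- placeholder search term
      SELECT t_id, t_vn, t_v
      FROM testing_khmers
      WHERE t_v COLLATE Khmer_100_CI_AI LIKE N'%' + @keyword + N'%';

    If this returns the expected rows where the LINQ query does not, the problem is the collation used for the comparison rather than the data itself.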

    Read the article

  • How to utilize WebDev.WebServer.exe (VS Web Server) in x64?

    - by Nick Craver
    Visual Studio is x86 until at least the 2010 release comes around, my question is can anyone think of a way or know of an independent ASP.NET debug server that's x64 for 2008? Background: Our ASP.NET application runs against Oracle as the DB. Since we're on 64-bit servers for memory concerns later, we need to use Oracle's 64-bit drivers (Instant Client). Setup: x64 OS (XP or Windows 7) IIS (5 or 7, both x64 App Pools) Oracle 64-bit Instant Client (Separate Directory, in the PATH) Visual Studio 2008 SP1 In IIS the application pool runs as 64-bit, uses the Oracle drivers as intended, however since WebDev.WebServer.exe is 32-bit you'll get a BadImageFormatException because it's trying to load 64-bit driver DLLs in a 32-bit environment. All of our developers would like to be able to use the quick debug server via Visual Studio 2008, but since it runs as 32-bit we're unable to. Some problems we run into are during application startup, so although we're attaching to the IIS process sometimes that isn't enough to track an issue down. Are there any alternatives, or work-arounds? We would like to match our Dev/Val/Prod tiers as much as possible, so everything running in x64 would be ideal.

    Read the article

  • Help/Questions About New Team Foundation Server 2010 Installation

    - by user579218
    Hello. Before starting down the TFS2010 installation process, I have a few questions I'm hoping the community can help me with. We're planning on a single-server installation of TFS2010. Initially, we want version/source control and build services, but not reporting or SharePoint. We may add reporting and SharePoint capabilities later. Our environment will be Windows Server 2008 R2 (x64), SQL Server 2008 R2 (x64), Office 2010 (x86), Visual Studio 6 and 2010, and, of course, Team Foundation Server 2010. Can I install TFS2010 on a server that is on our domain? It's not a domain controller, it's just a member server on the domain. Should I install TFS2010 before or after putting the server on the domain? We have six developers that will be logging into their local development computers (which are also on the same domain) using their domain user accounts, do I add each domain user to the TFS2010 server's security groups? If so, which one(s)? Can I or should I use a domain user account as the TFS2010 service account? Or, should I just use Network Service? The TFS2010 install guide notes that none of the service accounts should belong to the Administrators security group, so which security group(s) are recommended for the service account(s)? We're planning on using a local instance of SQL Server 2008 R2 Standard with TFS2010, what service account should we use? Should we use the same domain account as TFS2010 or Local System or ?? The TFS2010 install guide isn't very specific on this. Since we're planning on this server being both the version/source control and build server, should we install our development environments (VS6, VS2010, Access2010) before installing TFS2010? Or does it matter? Thanks in advance for answering these questions.

    Read the article

  • Visual Studio 2010 setup project problem

    - by Guru
    Hi there, I've made an application that uses .NET Framework 3.5 SP1 and SQL Server 2008 Express. The application is fine and now I'm going to make a setup project for it. When I first built my setup it was fine, as the prerequisites were not included in the setup. But I want my setup to install .NET 3.5 SP1 and SQL Server 2008 Express as well. So for this I've changed the option in the setup project's properties from "Download prerequisites from following location" to "Download prerequisites from the same location as my application". In addition, I've also checked the options above, like .NET 3.5 SP1 and SQL Server 2008 Express etc. After doing all this I built my project again. This time I'm getting 57 errors. Error 1 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\aspnet.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup Error 2 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\aspnet_64.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup Error 3 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\clr.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup Error 4 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\clr_64.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup As the question would become too large, I'm just pasting 3 errors, but there are 57 errors in total. Please help me. Thanks in advance, Guru

    Read the article

  • SUA + Visual Studio + pthreads

    - by vasek7
    Hi, I cannot compile this code under SUA: #include <unistd.h> #include <stdio.h> #include <stdlib.h> #include <pthread.h> void * thread_function(void *arg) { printf("thread_function started. Arg was %s\n", (char *)arg); // pause for 3 seconds sleep(3); // exit and return a message to another thread // that may be waiting for us to finish pthread_exit ("thread one all done"); } int main() { int res; pthread_t a_thread; void *thread_result; // create a thread that starts to run ‘thread_function’ pthread_create (&a_thread, NULL, thread_function, (void*)"thread one"); printf("Waiting for thread to finish...\n"); // now wait for new thread to finish // and get any returned message in ‘thread_result’ pthread_join(a_thread, &thread_result); printf("Thread joined, it returned %s\n", (char *)thread_result); exit(0); } I'm running on Windows 7 Ultimate x64 with Visual Studio 2008 and 2010 and I have installed: Windows Subsystem for UNIX Utilities and SDK for Subsystem for UNIX-based Applications in Microsoft Windows 7 and Windows Server 2008 R2 Include directories property of Visual Studio project is set to "C:\Windows\SUA\usr\include" What I have to configure in order to compile and run (and possibly debug) pthreads programs in Visual Studio 2010 (or 2008)?
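    As far as I know, the Visual C++ compiler only targets Win32, not the SUA/Interix subsystem, so Visual Studio can at best act as the editor here. A sketch of building inside a SUA shell instead; this assumes the SUA SDK toolchain with gcc is installed, and the flags may need adjusting for your setup:

      # run from a Korn shell or C shell inside the SUA environment, not from Visual Studio
      gcc -D_REENTRANT -o thread_demo thread_demo.c -lpthread
      ./thread_demo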

    Read the article

  • Linq to SQL Problem System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager

    - by luckyluke
    I have a really tricky thing going on here. My project has around 100 tables and they are all mapped by LINQ. Everything works fine in the dev and test environments. These environments are MS Win 2008 R2 servers with SQL 2008 SP1 databases. IIS and SQL are on different machines. Now, on the production environment, which is a MS Win 2003 x64 web farm + geoclustered SQL 2008, IT DOES NOT work. All I get is the exception System.IndexOutOfRangeException: Index was outside the bounds of the array. at System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager3.TryCreateKeyFromValues(Object[] values, MultiKey& k) at System.Data.Linq.IdentityManager.StandardIdentityManager.IdentityCache2.Find(Object[] keyValues) at System.Data.Linq.ChangeProcessor.GetOtherItem(MetaAssociation assoc, Object instance) at System.Data.Linq.ChangeProcessor.BuildEdgeMaps() at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode) at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode) at ERS.IIMP.Services.ExposuresSrv.Update(Int32 ExpID, Int32 AssID) Services\ExposuresSrv.cs My question is: what the hell? They have precisely the same DBML, the DB has exactly THE SAME structure (when I get the DB from prod to TEST and mount it, everything works just great), and the binaries on the web server are the same. I seriously do not know what to do.... Has anyone found that LINQ works in one environment and does not in the other?? I am really lost here. I really hope you can help me :)

    Read the article

  • Parse Text using scanner useDelimiter

    - by Brian
    Looking to parse the following text file: Sample text file: <2008-10-07text entered by user<2008-11-26additional text entered by user I would like to parse the above text so that I can have three variables: v1 = 2008-10-07 v2 = text entered by user v3 = Ted Parlor v1 = 2008-11-26 v2 = additional text entered by user v3 = Ted Parlor I attempted to use Scanner and useDelimiter; however, I'm having an issue with how to set this up to get the results stated above. Here's my first attempt: import java.io.*; import java.util.Scanner; public class ScanNotes { public static void main(String[] args) throws IOException { Scanner s = null; try { //String regex = "(?<=\<)([^\*)(?=\)"; s = new Scanner(new BufferedReader(new FileReader("cur_notes.txt"))); s.useDelimiter("[<]+"); while (s.hasNext()) { String v1 = s.next(); String v2= s.next(); System.out.println("v1= " + v1 + " v2=" + v2); } } finally { if (s != null) { s.close(); } } } } The result is as follows: v1= 2008-10-07text entered by user v2=Ted Parlor What I desire is: v1= 2008-10-07 v2=text entered by user v3=Ted Parlor v1= 2008-11-26 v2=additional text entered by user v3=Ted Parlor Any help that would allow me to extract all three strings separately would be greatly appreciated.

    Read the article

  • #Error showing up in multiple LEFT JOIN statement Access query when value should be NULL

    - by lar
    I'm trying to return an ID's last 4 years of data, if existing. The table (call it A_TABLE) looks like this: ID, Year, Val The idea behind the query is this: for each ID/Year in the table, LEFT JOIN with Year-1, Year-2, and Year-3 (to get 4 years of data) and then return Val for each year. Here's the SQL: SELECT a.ID, a.year AS [Year], a.Val AS VAL, a1.year AS [Year-1], a1.Val AS [VAL-1], a2.year AS [Year-2], a2.Val AS [VAL-2], a3.year AS [Year-3], a3.Val AS [VAL-3] FROM ( ([A_TABLE] AS a LEFT JOIN [A_TABLE] AS a1 ON (a.ID = a1.ID) AND (a.year = a1.year+1)) LEFT JOIN [A_TABLE] AS a2 ON (a.ID = a2.ID) AND (a.year = a2.year+2)) LEFT JOIN [A_TABLE] AS a3 ON (a.ID = a3.ID) AND (a.year = a3.year+3) The problem is that, for past years where there is no data (eg, Year-1), I see "#Error" in the appropriate VAL column (eg, [VAL-1]). The weird thing is, I see the expected "null" in the Year column (eg, [YEAR-1]). Some sample data: ID YEAR VAL Dave 2004 1 Dave 2006 2 Dave 2007 3 Dave 2008 5 Dave 2009 0 outputs like this: ID YEAR VAL YEAR-1 VAL-1 YEAR-2 VAL-2 YEAR-3 VAL-3 Dave 2004 1 #Error #Error #Error Dave 2006 2 #Error 2004 1 #Error Dave 2007 3 2006 2 #Error 2004 1 Dave 2008 5 2007 3 2006 2 #Error Dave 2009 0 2008 5 2007 3 2006 2 Does that make sense? Why am I getting the appropriate NULL val for the non-existent YEARs, but an #Error for the non-existent VALs? (This is Access 2000. Conditional statements like "IIf(a1.val is null, -999, a1.val)" do not seem to do anything.) EDIT: It turns out that the errors are somehow caused by the fact that A_TABLE is actually a query. When I put all the data into an actual table and run the same query, everything shows up as it should. Thanks for the help, everyone.

    Read the article

  • Microsoft products such as Visual Studio 2010 do not require entering a serial number

    - by MainMa
    Hi, I am a member of WebsiteSpark and was a member of DreamSpark. Both programs enable you to download software and provide serial keys to use. Some software, like Windows Server, has an ISO file to download and a serial number displayed on the website which I must enter during installation. Some other software does not have any serial key. For example, when I downloaded Visual Studio 2010, there was just a link to an ISO file. During installation, there was no such field as a serial number (whereas Visual Studio 2008 had this field at the beginning of the installation process). It is the same thing with SQL Server 2008 and Microsoft Expression Studio 3. Even when I downloaded the public trial RTM version of Windows Seven Enterprise, there was no serial number to enter. I don't think that such expensive products as SQL Server 2008 Enterprise are delivered without serials and online validation, so I suppose that the serial is embedded into the product itself, either in the installation binaries or in a separate config file, so it is already in the ISO I download and I do not have to enter it. So my question is: how is it done technically? Is each 2 GB ISO generated on demand on the server to embed a serial each time the ISO is requested? I suppose that if it is done that way, it has a huge impact on server performance (no caching, no streaming...), so what techniques might be used behind the scenes? I want to implement the same feature in a product I intend to ship (to simplify installation by avoiding asking the user to enter a serial number), but I really don't see how to do it with low impact on server performance.

    Read the article

  • WCF cross-domain policy security error

    - by George2
    Hello everyone, I am using VSTS 2008 + C# + WCF + .Net 3.5 + Silverlight 3.0. I host Silverlight control in an html page and debug it from VSTS 2008 (press F5, then run in VSTS 2008 built-in ASP.Net development web server), then call another WCF service (hosted in another machine running IIS 7.0 + Vista). The WCF service is very simple, just return a constant string to client. When invoking the WCF service from Silverlight, I got the following error message, An error occurred while trying to make a request to URI 'https://LabTest/Test.svc'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details. Here is the clientaccesspolicy.xml file, anything wrong? <?xml version="1.0" encoding="utf-8" ?> <access-policy> <cross-domain-access> <policy> <allow-from http-request-headers="*"> <domain uri="*"> </domain> </allow-from> <grant-to> <resource path="/" include-subpaths="true"></resource> </grant-to> </policy> </cross-domain-access> </access-policy> thanks in advance, George
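    For comparison, the form of clientaccesspolicy.xml usually recommended for SOAP endpoints looks like the sketch below; it has to be reachable at the root of the site hosting the WCF service (https://LabTest/clientaccesspolicy.xml in this case) and served over the same scheme (https) the service uses. Whether the extra https domain entry is needed depends on how the hosting page is served:

      <?xml version="1.0" encoding="utf-8"?>
      <access-policy>
        <cross-domain-access>
          <policy>
            <allow-from http-request-headers="SOAPAction">
              <domain uri="*"/>
              <domain uri="https://*"/>
            </allow-from>
            <grant-to>
              <resource path="/" include-subpaths="true"/>
            </grant-to>
          </policy>
        </cross-domain-access>
      </access-policy>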

    Read the article

  • Subversion freaking out on me!

    - by Malfist
    I have two copies of a site, one is the production copy, and the other is the development copy. I recently added everything in the production to a subversion repository hosted on our linux backup server. I created a tag of the current version and I was done. I then copied the development copy overtop of the production copy (on my local machine where I have everything checked out). There are only 10-20 files changed, however, when I use tortoise SVN to do a commit, it says every file has changed. The diff file generated shows subversion removing everything, and replacing it with the new version (which is the exact same). What is going on? How do I fix it? An example diff: Index: C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html =================================================================== --- C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (revision 5) +++ C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (working copy) @@ -1,4 +1,4 @@ -<html> -<body bgcolor="#FFFFFF"> -</body> +<html> +<body bgcolor="#FFFFFF"> +</body> </html> \ No newline at end of file
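    A diff in which every line is replaced by a visually identical line usually points to a line-ending (CRLF vs LF) or BOM change picked up when the development copy was copied over the checkout. A hedged way to confirm and normalize it, run from the trunk working copy using the file shown in the diff above:

      svn propget svn:eol-style "components/index.html"
      svn propset svn:eol-style native "components/index.html"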

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes. 
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust. 
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script used after both objects have been built, to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level. 
Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax. 
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away. 
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of existing plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would made merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one. 
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big, a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
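To make the mechanics above concrete, here is a rough sketch of the two halves of the workflow. The -z stub option and the stub proto idea are exactly as described in the article, but the library name, mapfile, and paths below are placeholders:

    # build the stub: the same link line as the real libfoo.so.1, plus -z stub;
    # only the mapfile is consulted, so no input objects or dependencies are needed
    ld -64 -z stub -o $(STUBROOT)/lib/amd64/libfoo.so.1 -G -hlibfoo.so.1 \
        -ztext -zdefs -Bdirect -M mapfile-vers

    # consumers link against the stub proto, and bind to the real object at runtime
    cc -o bar bar.o -L$(STUBROOT)/lib/amd64 -R/lib/amd64 -lfoo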

    Read the article

  • Sharepoint 2010, 404 error after installation

    - by Tommy Jakobsen
    Running Windows Server 2008 R2 Standard, SQL Server 2008 Enterprise and Team Foundation Server 2010, I installed SharePoint Server 2010 (single server). It installed correctly, and the wizard configured it without errors. When accessing the SharePoint server through http://localhost/ I get a 404 error. I also get a 404 when trying to access the admin interface on port 42620. SharePoint, TFS and Reporting Services are the only applications on my IIS, and they are NOT sharing the same port, so that can't be the error. Do you have any idea what the problem could be? Is there some way that I can debug this?

    Read the article

  • Reporting Services Returning HTTP 401 Unauthorized

    - by Chris Arnold
    I have just ported an existing ASP.NET application to a new web server (Windows Server 2008 R2 and SQL Server 2008). It is successfully running on 4 other servers of varying O/S (which I also set up). My ASP.NET app calls into the Reporting Services Web Service (ReportExecution2005.asmx) to generate a report and save it as a pdf to the file system. I consistently receive "System.Net.WebException - The request failed with HTTP status 401: Unauthorized." In UTTER desperation I have performed the following... Granted all Users complete access to SSRS via the Reports web page. Granted all Users 'Full control' to %ProgramFiles%\Microsoft SQL Server\MSRS10.MSSQLSERVER I am not a network / server specialist but I'm the only one that can deal with this and it's driving me batty. Help!
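    One quick way to rule the app-pool identity in or out is to call the web service with explicit credentials. This is only a sketch, using the proxy class generated from ReportExecution2005.asmx; the class name, URL and account below are placeholders:

      using System.Net;

      // proxy generated by adding a web reference to ReportExecution2005.asmx
      var rs = new ReportExecutionService();
      rs.Url = "http://newserver/ReportServer/ReportExecution2005.asmx";
      // run the call as an account known to have SSRS permissions instead of the app pool identity
      rs.Credentials = new NetworkCredential("reportUser", "password", "MYDOMAIN");
      // or keep the pool identity but make it explicit:
      // rs.Credentials = CredentialCache.DefaultCredentials;

    If the explicit account works where the pool identity fails, the 401 is an authentication or delegation issue on the new server (for example the loopback check or Kerberos/SPN settings) rather than an SSRS permissions problem.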

    Read the article

  • Stable reverse port forwarding in SSH and stale sessions

    - by Vi
    Using a VPS to forward ports behind NAT: for((;;)) { ssh -R 2222:127.0.0.1:22 [email protected]; sleep 10; } When the connection breaks somehow, it reconnects. Warning: remote port forwarding failed for listen port 2222 Linux vi-server.no-ip.org 2.6.18-92.1.13.el5.028stab059.3 #1 SMP Wed Oct 15 13:33:44 MSD 2008 i686 I type: vi@vi-server:~$ killall sshd Connection to vi-server.org closed by remote host. Connection to vi-server.org closed. Linux vi-server.no-ip.org 2.6.18-92.1.13.el5.028stab059.3 #1 SMP Wed Oct 15 13:33:44 MSD 2008 i686 vi@vi-server:~$ Now it's OK. What's the simplest way to make this automatic?
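    A common way to avoid the manual killall is to make the client give up quickly when the remote port is still taken and to let both sides detect dead connections, so the stale forwarding is released on its own; autossh (if available) then handles the restarting. A sketch, with the user and host as placeholders:

      # client side
      autossh -M 0 -N -R 2222:127.0.0.1:22 \
          -o ExitOnForwardFailure=yes \
          -o ServerAliveInterval=15 -o ServerAliveCountMax=3 \
          user@your-vps

      # server side, in sshd_config, so dead clients are dropped and port 2222 frees up:
      #   ClientAliveInterval 15
      #   ClientAliveCountMax 3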

    Read the article

  • .Net Framework corrupted

    - by Samsudeen B
    Hi, We are facing a problem of .NET Framework corruption for one of our clients with the following environment OS : Windows 2008 Server SP2; Framework : .NET Framework 3.5 SP1; Application Details Database : SQL Server 2008; Server : WCF hosted webservice; Client : WPF based UI; Problem : The config files inside "..\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG" are suddenly deleted and my application is no longer able to work. We are not able to repair .NET or run SQL Server. The only option is to restore an earlier image of that machine. Any help is much appreciated sam

    Read the article

  • IIS 7.0: Requiring Client Certificates causes error 500 and "page cannot be displayed"

    - by user48443
    I have two Windows 2008 x86 servers running IIS 7.0, one site on each server; both sites are SSL-enabled, using DoD-issued certificates. Both sites are accessible via https over port 443, but fail the moment Client Certificates are set to Require or Accept. IIS log records error 500.0.64 but nothing else. I have several Windows 2008 IIS 7 x64 servers that require client certificates and they are working as expected; it's just the two x86 servers that are being problematic.

    Read the article

  • How to sysprep SQL Server Express?

    - by Jim
    We plan to deploy a Hyper-V VHD with Windows Server 2008 R2 and SQL Server 2012 Express installed to multiple hosts. From my understanding, the correct way to do this is to install SQL Server in preparation mode, sysprep Windows, then complete the SQL Server installation when the VHD is deployed. I mostly followed the process in this blog post: http://sethusrinivasan.com/category/sysprep/ However, after the VHD is deployed, I'm unable to complete the SQL Server installation. It keeps saying "Upgrade matrix is incorrect". It seems that it's trying to upgrade itself to Enterprise edition (I was asked for a product key during install, but I skipped it). Could anyone share their experience in deploying VHDs with SQL Server (we're fine with either SQL Server 2008 R2 or 2012)? I think the source of my issue is that I can't select "Express Edition" when entering the product key at the completion stage, so the installation is trying to do an upgrade to Enterprise Edition. I have no idea why the drop-down list is empty.
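    For what it's worth, the completion step can also be run from the command line, which sidesteps the edition drop-down entirely. This is only a sketch; the parameter names should be checked against setup.exe /HELP on your installation media, and the instance name/ID and admin account are placeholders:

      setup.exe /QS /ACTION=CompleteImage /INSTANCENAME=SQLEXPRESS /INSTANCEID=SQLEXPRESS ^
          /SQLSYSADMINACCOUNTS="BUILTIN\Administrators" /IACCEPTSQLSERVERLICENSETERMS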

    Read the article

  • Splwow64 with TS Easy Print

    - by Tim Brigham
    I have an application (Sage MIP Fund Accounting) which exports data to Excel. In this process it uses an internal print driver. Since we upgraded from 2008 to 2008 R2 this export process causes system hangs. This has been isolated down to the splwow64 executable hanging while the Excel document is building. If I kill the splwow64 executable things function properly (I just can't print the document once completed). This only occurs while using printer redirection with the Remote Desktop Easy Print function - if I pull the printer redirection things work exactly as expected. I've spent the last couple of hours looking at hotfixes or driver upgrades, since this appears to be a problem specifically with how the Remote Desktop Easy Print printer is functioning. Is anyone aware of a hotfix which would be applicable in this situation? I don't want to grab every hotfix for redirected printing and start throwing them out there.

    Read the article

  • Exchange 2010 to Exchange 2010 Public Folder Replication

    - by Archit Baweja
    We have 2 exchange servers in our org. MX1 and MX2. I'm trying to replicate all MX1 public folders to MX2. I've setup replication for all the toplevel folders to include the MX2 server. However no public folders are being replicated. The event log does not show any errors. I've set the diagnostic level for all public folder diagnostics to Highest using get-eventloglevel "MSExchangeIS\9001 Public\*" | set-eventloglevel -Level Expert However besides a 3092 event ID (type: 0x2) generated on MX1 (the source server), there are no events being generated that would notify me of any issues. Some technical details. MX1 is Windows 2008 Standard, MX2 is Windows 2008 Enterprise (eval mode right now).

    Read the article

  • How to install a new TFS checkin policy on a TFS 2010 server?

    - by rhart
    Hi, We've recently upgraded our TFS server to TFS 2010 from 2008. We've been researching a couple new add-on checkin policies we want to install. The only problem is that all documentation I can find on adding new policies to the server appears to be specific to TFS 2008 or earlier. Those steps involve adding new keys in the registry which do not exist on our 2010 TFS server. Does anybody know where the process to install new checkin policies on a TFS 2010 server so they can be applied to Team Projects is documented? Thanks!

    Read the article
