Search Results

Search found 5723 results on 229 pages for 'turing machines'.

Page 200/229

  • Problems with Continuous Integration (CI) in TFS during Build Automation?

    - by Steve Johnson
    Hi all, I am using TFS 2008 and Visual Studio, and my boss has instructed me to implement build automation for development and release builds of a web project. I am a total newbie in build automation. There are multiple developers working on the project on different machines using Visual Studio 2008 Team System. Source is already maintained in TFS 2008. The SQL Server in use is SQL Server 2000, and the hosting IIS is IIS 7.5 on Windows Server 2008 x64. I have searched over the net and found Continuous Integration and Nightly Builds as two important build automation techniques. I was wondering about any disadvantages associated with the two methodologies (CI and Nightly Builds). If someone could guide me to a working tutorial that explains both techniques, that would be quite helpful. Please also tell me the requirements for IIS, SQL Server and anything else that might be a prerequisite to implementing build automation. I would also like to know whether there are other techniques that are better than CI. Replies and discussion much appreciated. Thanks

    Read the article

  • Visual Studio Macros on 64 bit fail with COM error

    - by bruce.kinchin
    I'm doing some JavaScript development and found a cool macro to region my code ("Using #region Directive With JavaScript Files in Visual Studio"). I used this on my 32 bit box, and it worked first time (Visual Studio 2008 SP1, Win7). For ease of reference the macro is:

        Option Strict Off
        Option Explicit Off
        Imports System
        Imports EnvDTE
        Imports EnvDTE80
        Imports System.Diagnostics
        Imports System.Collections

        Public Module JsMacros
            Sub OutlineRegions()
                Dim selection As EnvDTE.TextSelection = DTE.ActiveDocument.Selection
                Const REGION_START As String = "//#region"
                Const REGION_END As String = "//#endregion"

                DTE.ExecuteCommand("Edit.StopOutlining")
                selection.SelectAll()
                Dim text As String = selection.Text
                selection.StartOfDocument(True)

                Dim startIndex As Integer
                Dim endIndex As Integer
                Dim lastIndex As Integer = 0
                Dim startRegions As Stack = New Stack()

                Do
                    startIndex = text.IndexOf(REGION_START, lastIndex)
                    endIndex = text.IndexOf(REGION_END, lastIndex)

                    If startIndex = -1 AndAlso endIndex = -1 Then
                        Exit Do
                    End If

                    If startIndex <> -1 AndAlso startIndex < endIndex Then
                        startRegions.Push(startIndex)
                        lastIndex = startIndex + 1
                    Else
                        ' Outline region ...
                        selection.MoveToLineAndOffset(CalcLineNumber(text, CInt(startRegions.Pop())), text.Length)
                        selection.MoveToLineAndOffset(CalcLineNumber(text, endIndex) + 1, 1, True)
                        selection.OutlineSection()
                        lastIndex = endIndex + 1
                    End If
                Loop

                selection.StartOfDocument()
            End Sub

            Private Function CalcLineNumber(ByVal text As String, ByVal index As Integer)
                Dim lineNumber As Integer = 1
                Dim i As Integer = 0

                While i < index
                    If text.Chars(i) = vbCr Then
                        lineNumber += 1
                        i += 1
                    End If
                    i += 1
                End While

                Return lineNumber
            End Function
        End Module

    I then tried to use the same macro on two separate 64 bit machines (Win7 x64), identical other than the 64 bit OS version, and it fails to work. Stepping through it with the Visual Studio Macros IDE, it fails the first time on the DTE.ExecuteCommand("Edit.StopOutlining") line with a COM error (Error HRESULT E_FAIL has been returned from a call to a COM component). If I attempt to run it a second time, I can run it from the Macro Editor with no issue, but not from within Visual Studio with the Macro Explorer 'run macro' command. I have reviewed the following articles without finding anything helpful:

        - Stackoverflow: Visual Studio 2008 macro only works from the Macro IDE, not the Macro Explorer
        - Recorded macro does not run; Failing on DTE.ExecuteCommand

    Am I missing something dumb?

    Read the article

  • SQL Compact error: Unable to load DLL 'sqlceme35.dll'. The specified module could not be found

    - by Ciaran Bruen
    Hi - I'm developing a WinForms application using Visual Studio 2008 and C# that uses a SQL Compact 3.5 database on the client. The clients will most likely be 32 bit XP or Vista machines. I'm using a standard Windows Installer project that creates an msi file and setup.exe to install the app on a client machine. I'm new to SQL Compact, so I haven't had to distribute a client database like this before now. When I run the setup.exe (on a new Windows XP 32 bit machine with SP2 and IE 7) it installs fine, but when I run the app I get the error below:

        Unable to load DLL 'sqlceme35.dll'. The specified module could not be found

    I spent a few hours searching this error already, but all I can find are issues relating to installing on 64 bit Windows, none relating to the normal 32 bit that I'm using. The installer copies all the dependent files that it found into the specified install directory, including the System.Data.SqlServerCe.dll file (assembly version 3.5.1.0). The database file is in a directory called 'data' off the application directory, and the connection string for it is

        <add name="Tickets.ieOutlet.Properties.Settings.TicketsLocalConnectionString"
             connectionString="Data Source=|DataDirectory|\data\TicketsLocal.sdf"
             providerName="Microsoft.SqlServerCe.Client.3.5" />

    Some questions I have:

        - Should the app be able to find the dll if it's in the same directory, i.e. local to the app, or do I need to install it in the GAC? (If so, can I use the Windows Installer to install a dll in the GAC?)
        - Is there anything else I need to distribute with the app in order to use a SQL Compact database?
        - There are other dlls also, such as MS interop for exporting data to Excel on the client. Do these need to be installed in the GAC, or will locating them in the application directory suffice?

    TIA, Ciaran.
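
    For reference, a minimal startup check along these lines can make this failure mode clearer. This is only a sketch (VerifySqlCe is a made-up helper name); it relies on the fact that the managed System.Data.SqlServerCe.dll P/Invokes into native DLLs such as sqlceme35.dll, which must be resolvable from the application directory or installed machine-wide by the SQL CE 3.5 runtime MSI:

        using System;
        using System.IO;
        using System.Data.SqlServerCe; // requires a reference to System.Data.SqlServerCe.dll

        static class StartupChecks
        {
            // Verify the native SQL CE engine DLL is present next to the app, then
            // open the |DataDirectory|-based connection string to force all native
            // dependencies to load up front.
            public static void VerifySqlCe(string connectionString)
            {
                string appDir = AppDomain.CurrentDomain.BaseDirectory;
                string nativeDll = Path.Combine(appDir, "sqlceme35.dll");
                if (!File.Exists(nativeDll))
                    throw new FileNotFoundException(
                        "Native SQL CE engine missing; install the SQL CE 3.5 runtime " +
                        "or copy the native DLLs into the application directory.", nativeDll);

                // |DataDirectory| resolves to the app base directory by default;
                // set it explicitly if the .sdf lives elsewhere.
                AppDomain.CurrentDomain.SetData("DataDirectory", appDir);

                using (var conn = new SqlCeConnection(connectionString))
                {
                    conn.Open(); // throws here if any native dependency is still missing
                }
            }
        }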

    Read the article

  • Solutions for working with multiple branches in ASP.Net

    - by Corey McKinnon
    At work, we are often working on multiple branches of our product at one time. For example, right now we have a maintenance branch, a branch with code just going to QA, and a branch for a new major initiative that won't be merged for some time. Our web project is set up to use IIS, so every time we switch to a different branch we have to go into IIS Admin and change the path on the virtual directory, then reset IIS, and sometimes even restart Visual Studio to avoid getting build errors. Is there any way to simplify this, other than not having our web project set up as a virtual directory? I'm not sure we want to make that change at this point. What do you do to make this easier, assuming you do this? (A scripted version of the virtual-directory switch is sketched below.) Corey. @RedWolves, virtual machines would definitely work, but I'm not sure it would be any simpler, especially for some of the other developers on my team, which is partly why I'm looking for more simplicity. @Dan, we're not able to change source control providers, unfortunately. @pix0r, that's something I'll try when I get back to work. Thanks for the suggestion. @Haacked, I'll have to give that a try too, but I think we have some issues with why that won't work (I can't remember exactly why right now; this application was originally written in .NET 1.1, pre-Cassini, and I can't remember if we tried it when we upgraded to 2.0 or not). Thanks all for the responses so far.
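
    As a rough illustration of scripting the manual step described above, the IIS 6 metabase can be driven from C# via ADSI; this is a sketch under stated assumptions (the "W3SVC/1/Root/MyApp" metabase path and branch paths are placeholders, and on newer IIS versions it requires the IIS 6 management compatibility feature):

        using System;
        using System.DirectoryServices; // IIS metabase via ADSI

        static class BranchSwitcher
        {
            // Repoint a virtual directory at a different branch's working copy
            // instead of editing it by hand in IIS Admin.
            public static void PointVdirAt(string branchPath)
            {
                using (var vdir = new DirectoryEntry("IIS://localhost/W3SVC/1/Root/MyApp"))
                {
                    vdir.Properties["Path"].Value = branchPath;
                    vdir.CommitChanges();
                }
            }
        }

        // Usage: PointVdirAt(@"C:\src\maintenance\Web"); followed by an iisreset,
        // all wrapped in a one-click script per branch.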

    Read the article

  • Certificate issues running app in Windows 7?

    - by Jurjen
    Hi, I'm having some problems with my app. I'm using the 'org.mentalis.security' assembly to create a certificate object from a 'pfx' file; this is the line of code where the exception occurs:

        Certificate cert = Certificate.CreateFromPfxFile(publicKey, certificatePassword);

    This has always worked, and still does in production, but for some reason it throws an exception when run on Windows 7 (tried it on 2 machines):

        CertificateException: Unable to import the PFX file! [error code = -2146893792]

    I can't find much on this message via Google, but when checking the Event Viewer I get an 'Audit Failure' every time this exception occurs:

        Event ID:      5061
        Source:        Microsoft Windows Security
        Task Category: System Integrity
        Keywords:      Audit Failure

        Cryptographic operation.
        Subject:
            Security ID:    NT AUTHORITY\IUSR
            Account Name:   IUSR
            Account Domain: NT AUTHORITY
            Logon ID:       0x3e3
        Cryptographic Parameters:
            Provider Name:  Microsoft Software Key Storage Provider
            Algorithm Name: Not Available.
            Key Name:       VriendelijkeNaam
            Key Type:       User key.
        Cryptographic Operation:
            Operation:      Open Key.
            Return Code:    0x2

    I'm not sure why this isn't working on Win 7; I've never had problems when I was running on Vista. I am running VS2008 as administrator, but I guess that maybe the ASP.NET user doesn't have sufficient rights or something. It's pretty strange that the 'Algorithm Name' is 'Not Available'. Can anyone help me with this... TIA, Jurjen de Groot
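
    For comparison, here is a sketch of loading the same PFX with the plain BCL types instead of the Mentalis wrapper. This is an assumption to verify, not a confirmed fix: the MachineKeySet flag keeps the imported private key out of the user profile, which matters for service accounts like IUSR that may have no profile loaded (consistent with the "User key" audit failure above):

        using System.Security.Cryptography.X509Certificates;

        static class CertLoader
        {
            // Load a PFX so that the private key lands in the machine store
            // rather than the calling account's user profile.
            public static X509Certificate2 LoadPfx(string pfxPath, string password)
            {
                return new X509Certificate2(
                    pfxPath,
                    password,
                    X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.Exportable);
            }
        }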

    Read the article

  • jQuery's getScript and the local file system-- limitations/alternatives?

    - by user210099
    Right now I'm working on a help system which is based on a local file system. It is intended to be shipped with a product which is not used on internet-enabled machines, so it must be a stand-alone webpage, without any dependencies on a web server. This introduces a few challenges. Namely, the directory structure that the files exist in requires navigating "up and over" to access some .js files which are required to display the help system. This used to be implemented using the jQuery getScript function, but I have run into some problems using this on the local file system. At first glance, it seemed that if my webpage was being served out of the C:/dev/webpage/html/ directory, and the files I needed were in C:/dev/webpage/js/(topic)/file.js, I could just build an absolute path (file:///...) and pass that into the getScript function. However, after testing this, it does not seem that the getScript function will let me go up a level from the html directory (where the html file is located which has the main code for the webpage). Unfortunately, I can not change the directory structure, nor can I change the .js file structure/format. Is there an alternative for loading/executing javascript files that are in a file structure where I need to go "up and over"? Thanks

    Read the article

  • VS2010 compiles solution without errors, msbuild fails: "fatal error CS0002: Unable to load message string from resources"

    - by Nathan Ridley
    I'm having a lot of trouble trying to track down the cause of this error message. I have a large Visual Studio 2010 solution which compiles without error on my local machine, but on the build server msbuild fails on one of the projects with the error:

        fatal error CS0002: Unable to load message string from resources

    Here's the red error section at the end:

        Build FAILED.

        "C:\TeamCity\buildAgent\work\85eff164854b9e67\Libraries\Domainface.Proxy.Common\Domainface.Proxy.Common.csproj" (default target) (9) ->
        (CoreCompile target) ->
          CSC : fatal error CS0002: Unable to load message string from resources. [C:\TeamCity\buildAgent\work\85eff164854b9e67\Libraries\Domainface.Proxy.Common\Domainface.Proxy.Common.csproj]

            0 Warning(s)
            1 Error(s)

    The entire msbuild output from the build server is here: http://pastie.org/3660842

    What does the error generally refer to that would cause it to build locally but not on the build server?

    UPDATE: I have just run msbuild /version on both machines, and it turns out the .NET framework versions are very slightly different. The local machine is 4.0.30319.488 and the build server is 4.0.30319.1. I'm about to run Windows Update on the server to allow it to install some updates, as several seem to be .NET framework-related, so I'll see if that makes a difference.

    UPDATE: Installing the updates didn't help. I just remembered I copied up csc.exe from the async preview a little while ago in order to facilitate async compilation (the actual async preview had failed to install on the server due to Visual Studio not being there, but installing Visual Studio team viewer seems to have fixed that), so I've just run the proper async CTP3 installer to see if that makes a difference.

    Read the article

  • Advice on optimizing speed for a Stored Procedure that uses Views

    - by Belliez
    Based on a previous question, and with a lot of help from Damir Sudarevic (thanks), I have the following SQL code, which works great but is very slow. Can anyone suggest how I can speed this up and optimise it? I am now using SQL Server Express 2008 (not 2005 as per my original question). What this code does is retrieve parameters and their associated values from several tables and rotate the table into a form that can be easily compared. It's great for one or two rows of data, but now I am testing with 100 rows, and running GetJobParameters takes over 7 minutes to complete. Any advice is gratefully accepted, thank you in advance.

        /***********************************************************************************************
        ** CREATE A VIEW (VIRTUAL TABLE) TO ALLOW EASIER RETRIEVAL OF PARAMETERS
        ************************************************************************************************/
        CREATE VIEW dbo.vParameters
        AS
        SELECT  m.MachineID   AS [Machine ID]
               ,j.JobID       AS [Job ID]
               ,p.ParamID     AS [Param ID]
               ,t.ParamTypeID AS [Param Type ID]
               ,m.Name        AS [Machine Name]
               ,j.Name        AS [Job Name]
               ,t.Name        AS [Param Type Name]
               ,t.JobDataType AS [Job DataType]
               ,x.Value       AS [Measurement Value]
               ,x.Unit        AS [Unit]
               ,y.Value       AS [JobDataType]
        FROM dbo.Machines AS m
        JOIN dbo.JobFiles     AS j ON j.MachineID   = m.MachineID
        JOIN dbo.JobParams    AS p ON p.JobFileID   = j.JobID
        JOIN dbo.JobParamType AS t ON t.ParamTypeID = p.ParamTypeID
        LEFT JOIN dbo.JobMeasurement AS x ON x.ParamID = p.ParamID
        LEFT JOIN dbo.JobTrait       AS y ON y.ParamID = p.ParamID
        GO

        -- Step 2
        CREATE VIEW dbo.vJobValues
        AS
        SELECT  [Job Name]
               ,[Param Type Name]
               ,COALESCE(CAST([Measurement Value] AS varchar(50)), [JobDataType]) AS [Val]
        FROM dbo.vParameters
        GO

        /***********************************************************************************************
        ** GET JOB PARAMETERS FROM THE VIEW JUST CREATED
        ************************************************************************************************/
        CREATE PROCEDURE GetJobParameters
        AS
        -- Step 3
        DECLARE @Params TABLE (
             id int IDENTITY (1,1)
            ,ParamName varchar(50)
        );

        INSERT INTO @Params (ParamName)
        SELECT DISTINCT [Name]
        FROM dbo.JobParamType

        -- Step 4
        DECLARE @qw TABLE (
             id int IDENTITY (1,1)
            ,txt nchar(300)
        )

        INSERT INTO @qw (txt)
        SELECT 'SELECT'
        UNION
        SELECT '[Job Name]';

        INSERT INTO @qw (txt)
        SELECT ',MAX(CASE [Param Type Name] WHEN ''' + ParamName + ''' THEN Val ELSE NULL END) AS [' + ParamName + ']'
        FROM @Params
        ORDER BY id;

        INSERT INTO @qw (txt)
        SELECT 'FROM dbo.vJobValues'
        UNION
        SELECT 'GROUP BY [Job Name]'
        UNION
        SELECT 'ORDER BY [Job Name]';

        -- Step 5
        --SELECT txt FROM @qw
        DECLARE @sql_output VARCHAR (MAX)
        SET @sql_output = ''                           -- NULL + '' = NULL, so we need to have a seed
        SELECT @sql_output =                           -- string to avoid losing the first line.
            COALESCE (@sql_output + txt + char (10), '')
        FROM @qw

        EXEC (@sql_output)
        GO

    Read the article

  • How to debug browser crash when running Silverlight app

    - by onedozenbagels
    I am on a team of three people who are developing a Silverlight application. On two of our developers' machines the app seems to randomly crash. It never crashes on the third developer's machine. The nature of the crash is that Internet Explorer just dies with an "Internet Explorer has stopped working" message. The problem details look like this:

        Problem Event Name:       BEX
        Application Name:         IEXPLORE.EXE
        Application Version:      8.0.6001.18882
        Application Timestamp:    4b3ed243
        Fault Module Name:        StackHash_2cd8
        Fault Module Version:     0.0.0.0
        Fault Module Timestamp:   00000000
        Exception Offset:         0024df00
        Exception Code:           c0000005
        Exception Data:           00000008
        OS Version:               6.0.6002.2.2.0.256.6
        Locale ID:                1033
        Additional Information 1: 2cd8
        Additional Information 2: 0c337fa6c2057a9dbce1860c5e2d8315
        Additional Information 3: e13b
        Additional Information 4: 5da012709e52526a1af19795dc4a33fd

    Then Windows displays this message: "To help protect your computer, Data Execution Prevention has closed Internet Explorer."

    If I am attached to the app with the Visual Studio debugger, the only information I get is this line in the output window:

        The program '[2140] iexplore.exe: Silverlight' has exited with code -1073741819 (0xc0000005).

    How should I go about debugging this problem? I'm not really sure where to start.

    Read the article

  • How are you taking advantage of Multicore?

    - by tgamblin
    As someone in the world of HPC who came from the world of enterprise web development, I'm always curious to see how developers back in the "real world" are taking advantage of parallel computing. This is much more relevant now that all chips are going multicore, and it'll be even more relevant when there are thousands of cores on a chip instead of just a few. My questions are:

        1. How does this affect your software roadmap? I'm particularly interested in real stories about how multicore is affecting different software domains, so specify what kind of development you do in your answer (e.g. server side, client-side apps, scientific computing, etc).
        2. What are you doing with your existing code to take advantage of multicore machines, and what challenges have you faced?
        3. Are you using OpenMP, Erlang, Haskell, CUDA, TBB, UPC or something else?
        4. What do you plan to do as concurrency levels continue to increase, and how will you deal with hundreds or thousands of cores?
        5. If your domain doesn't easily benefit from parallel computation, then explaining why is interesting, too.

    Finally, I've framed this as a multicore question, but feel free to talk about other types of parallel computing. If you're porting part of your app to use MapReduce, or if MPI on large clusters is the paradigm for you, then definitely mention that, too.

    Update: If you do answer #5, mention whether you think things will change if there get to be more cores (100, 1000, etc) than you can feed with available memory bandwidth (seeing as how bandwidth is getting smaller and smaller per core). Can you still use the remaining cores for your application?

    Read the article

  • A problem happened while installing Ruby

    - by Alex
    I'm new to Ruby and am now trying to install it on my machine according to the tutorial at http://wiki.openqa.org/display/WTR/Tutorial. However, after I installed ruby186-26 and ran the command "gem update --system", the following error occurred:

        C:\Documents and Settings\e482090\Desktop>gem update --system
        c:/ruby/lib/ruby/site_ruby/1.8/rubygems/config_file.rb:51:in `initialize': Invalid argument - <Not Set>/.gemrc (Errno::EINVAL)
                from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/config_file.rb:51:in `open'
                from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/config_file.rb:51:in `initialize'
                from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/gem_runner.rb:36:in `new'
                from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/gem_runner.rb:36:in `do_configuration'
                from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/gem_runner.rb:25:in `run'
                from c:/ruby/bin/gem:23

        C:\Documents and Settings\e482090\Desktop>gem install watir
        c:/ruby/lib/ruby/site_ruby/1.8/rubygems/config_file.rb:51:in `initialize': Invalid argument - <Not Set>/.gemrc (Errno::EINVAL)
                from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/config_file.rb:51:in `open'
                from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/config_file.rb:51:in `initialize'
                from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/gem_runner.rb:36:in `new'
                from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/gem_runner.rb:36:in `do_configuration'
                from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/gem_runner.rb:25:in `run'
                from c:/ruby/bin/gem:23

    Meanwhile, we have tried this on other machines and the result turned out OK. Thus, my question is why this error happens on my PC. Have you met this kind of error before?

    Read the article

  • ASP.NET Applications Requests/Sec suddenly jumps to a value of about 70 million/sec. on 8 core web servers

    - by Subhrajit Roy
    We are doing performance testing of an ASP.NET web application with VSTS 2008. We start with 2000 users and slowly ramp up to 5000 users (this user load is reached around 2.5 hours after the tests start; after this we stay at this user load). The total test duration is about 6 hours. During these runs we have found that the counter Requests/Sec (under category ASP.NET Applications) suddenly spikes to values of 36-72 million!!! This keeps happening intermittently, i.e. we see this issue once in every 3 performance runs we give on the same application. In our testing environment we have 4 web servers, and interestingly enough we have found that this issue occurs only on the 8 core web servers.

    Summarizing...

    Issue: The counter Requests/Sec (under category ASP.NET Applications) suddenly jumps to a value of about 70 million/sec on 8 core web servers. This results in an increase in SQL Server connections opened by the application. Response time goes for a toss. Error rates also show similar behaviour. However, the counter ISAPI Extension Requests/sec does not show any abnormal increase. The graph of this counter almost overlaps with that of Requests/Sec until the spike appears; when the spike appears, this counter (ISAPI Extension Requests/sec) actually shows a drop.

    Test settings:
        - Performance test run with Visual Studio Team System 2008.
        - Soak test run for 6 hours.
        - Maximum user load of 5000 users. This load is attained about 2.5 hours into the run and maintained for the remaining duration (i.e. for around 3.5 more hours).
        - The issue is reproducible, though it happens intermittently (occurs in one in three or four runs).

    Test environment:
        - Web site deployed on 4 web servers (Windows Server 2003). Of these, 2 are 4 core machines and the remaining 2 are 8 core ones.
        - .NET Framework 3.5 SP1 installed on all 4 web servers.
        - Application hosted on IIS 6.0, run in worker process isolation mode.

    Read the article

  • How can you change Network settings (IP Address, DNS, WINS, Host Name) with code in C#

    - by rathkopf
    I am developing a wizard for a machine that is to be used as a backup of other machines. When it replaces an existing machine, it needs to set its IP address, DNS, WINS, and host name to match the machine being replaced. Is there a library in .NET (C#) which allows me to do this programmatically? There are multiple NICs, each of which needs to be set individually.

    EDIT: Thank you TimothyP for your example. It got me moving on the right track, and the quick reply was awesome. Thanks balexandre; your code is perfect. I was in a rush and had already adapted the example TimothyP linked to, but I would have loved to have had your code sooner. (A rough sketch of the WMI approach is included below.) I've also developed a routine using similar techniques for changing the computer name. I'll post it in the future, so subscribe to this question's RSS feed if you want to be informed of the update. I may get it up later today or on Monday after a bit of cleanup.
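
    For reference, the WMI approach mentioned above looks roughly like this. This is a sketch only: the values are placeholders, and a real wizard would filter on adapter description or MAC to target a specific NIC rather than looping over all IP-enabled adapters:

        using System.Management; // add a reference to System.Management.dll

        static class NicConfigurator
        {
            // Apply a static IP/gateway/DNS configuration via
            // Win32_NetworkAdapterConfiguration.
            public static void SetStatic(string ip, string mask, string gateway, string[] dns)
            {
                var nics = new ManagementClass("Win32_NetworkAdapterConfiguration");
                foreach (ManagementObject nic in nics.GetInstances())
                {
                    if (!(bool)nic["IPEnabled"]) continue;

                    var newIp = nic.GetMethodParameters("EnableStatic");
                    newIp["IPAddress"] = new[] { ip };
                    newIp["SubnetMask"] = new[] { mask };
                    nic.InvokeMethod("EnableStatic", newIp, null);

                    var newGw = nic.GetMethodParameters("SetGateways");
                    newGw["DefaultIPGateway"] = new[] { gateway };
                    nic.InvokeMethod("SetGateways", newGw, null);

                    var newDns = nic.GetMethodParameters("SetDNSServerSearchOrder");
                    newDns["DNSServerSearchOrder"] = dns;
                    nic.InvokeMethod("SetDNSServerSearchOrder", newDns, null);
                }
            }
        }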

    Read the article

  • Flash video slooow in AIR 2 HTMLLoader component

    - by shane
    I am working on a full screen kiosk application in Flex 4/AIR 2 using Flash Builder 4. We have a company training website which staff can access via the kiosk, and the main content is interactive Flash training videos. Our target machines are by no means 'beefy'; they are Atom N270s @ 1.6GHz with 1GB RAM. As it stands the videos are all but unusable when used from within the AIR application: the application becomes completely unresponsive (100% CPU usage, click events take approx 5-10 seconds to register). So far I have tried:

        - Increasing the default frame rate from 24fps to 60: nativeWindow.stage.frameRate = 60; No improvement.
        - Running the videos in a stripped down version of my app, just a full screen HTMLLoader component pointed at the training website. No better than before.
        - Disabling hyper-threading. The Atom CPU is split into two virtual cores, and the AIR app was only able to use one thread, so it maxed out at 50% CPU usage. Since the kiosk will only run the AIR app, I am happy to lose hyper-threading to increase the performance of the AIR app. Marginal improvement.

    The same website with the same videos is responsive if viewed in IE7 on the same machine, although Internet Explorer takes advantage of the CPU's hyper-threading. The Flash videos are built with Adobe Captivate and, from what I understand, employ JavaScript to relay results back to the server. I will add more information about the video content asap, as the training guru is back in the office later this week.

    Read the article

  • JVM version for WebSphere 6.1.0.23 on Solaris

    - by dr jerry
    Hi, I'm at a big financial institute and we have an application running on WebSphere 6.1 on Solaris. Due to MQ connectivity we had to install fixpack 6.1.0.23. Unfortunately this broke an EJB (1.1) which is still there as legacy (testing missed it):

        [3/23/10 11:33:18:703 CET] 00000055 EJBContainerI E WSVR0068E: Attempt to start EnterpriseBean EventRisk_1.0.0#EventRiskEJB.jar#PolicyDataManager failed with exception:
        java.lang.NoSuchMethodError: com.ibm.ejs.csi.ResRefListImpl.<init>(Lorg/eclipse/jst/j2ee/ejb/EnterpriseBean;Lcom/ibm/ejs/models/base/bindings/ejbbnd/EnterpriseBeanBinding;Lcom/ibm/ejs/models/base/extensions/ejbext/EnterpriseBeanExtension;)V
            at com.ibm.ws.metadata.ejb.EJBMDOrchestrator.finishBMDInit(EJBMDOrchestrator.java:1364)
            at com.ibm.ws.runtime.component.EJBContainerImpl.finishDeferredBeanMetaData(EJBContainerImpl.java:4829)
            at com.ibm.ws.runtime.component.EJBContainerImpl$3.run(EJBContainerImpl.java:4631)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:125)
            at com.ibm.ws.runtime.component.EJBContainerImpl.initializeDeferredEJB(EJBContainerImpl.java:4627)
            at com.ibm.ejs.container.HomeOfHomes.getHome(HomeOfHomes.java:390)
            at com.ibm.ejs.container.HomeOfHomes.internalCreateWrapper(HomeOfHomes.java:938)
            at com.ibm.ejs.container.EJSContainer.createWrapper(EJSContainer.java:4783)
            at com.ibm.ejs.container.WrapperManager.faultOnKey(WrapperManager.java:545)
            at com.ibm.ejs.util.cache.Cache.findAndFault(Cache.java:498)
            at com.ibm.ejs.container.WrapperManager.keyToObject(WrapperManager.java:489)

    We cannot reproduce the issue on our desktop boxes (it all works fine there), and we do not have direct access to the Solaris machines (we depend on the deployment department). We do suspect a discrepancy in the JVM, but we're not sure. My question is twofold:

        - Can you confirm IBM's statement that fixpack 6.1.0.23 for Solaris indeed runs on JVM 1.5.0_17 b04? Our installation tells us:

                ./java -version
                java version "1.5.0_13"

          But the deployment department is not eager to investigate.
        - Do you see some other solution, apart from hiring Big Blue's con$ultancy?

    Kind regards, Jeroen.

    Read the article

  • Compiler optimization causing performance to slow down

    - by aJ
    I have a strange problem. I have the following piece of code:

        template<class index, class policy>
        inline int CBase<index,policy>::func(const A& test_in, int* srcPtr, int* dstPtr)
        {
            int width  = test_in.width();
            int height = test_in.height();

            double d = 0.0; // here is the problem

            for(int y = 0; y < height; y++)
            {
                // Pointer initializations
                // multiplication involving y
                // ex: int z = someBigNumber*y + someOtherBigNumber;
                for(int x = 0; x < width; x++)
                {
                    // multiplication involving x
                    // ex: int z = someBigNumber*x + someOtherBigNumber;
                    if(someCondition)
                    {
                        // floating point calculations
                    }
                    *dstPtr++ = array[*srcPtr++];
                }
            }
        }

    The inner loop gets executed nearly 200,000 times and the entire function takes 100 ms to complete (profiled using AQTimer). I found an unused variable, double d = 0.0;, outside the outer loop and removed it. After this change, the method suddenly takes 500 ms for the same number of executions (5 times slower). This behavior is reproducible on different machines with different processor types (Core2, dual-core processors). I am using the VC6 compiler with optimization level O2. The following are the other compiler options used:

        -MD -O2 -Z7 -GR -GX -G5 -X -GF -EHa

    I suspected compiler optimizations and removed the compiler optimization /O2. After that, the function became normal and takes 100 ms as with the old code. Could anyone throw some light on this strange behavior? Why should a compiler optimization slow down performance when I remove an unused variable?

    Note: The assembly code (before and after the change) looked the same.

    Read the article

  • Using DPAPI / ProtectedData in a web farm environment with the User Store

    - by Lachman
    I was wondering if anyone has successfully used DPAPI with a user store in a web farm environment? Because our application was recently converted from 1.1 to 2.0 ASP.NET, we're using a custom wrapper which directly calls the CryptUnprotect methods, but this should be the same as the ProtectedData methods available in the 2.0 framework. Because we are operating in a web farm environment, we can't guarantee that the machine that did the encryption is going to be the one decrypting it. (Also, machine failures shouldn't destroy our encrypted data.) So what we have is a serviced component that runs in a service under a particular user account on each one of our web boxes. This user is set up to have a roaming profile, as per the recommendation. The problem we have is that info encrypted on one machine can not be decrypted on another; this fails with the Win32 error 'Key not valid for use in specified state'. I suspect that this is because I've made a mistake by having the encryption service running as the user on multiple machines, hence keeping the user logged in on more than one machine at the same time. If this is the problem, how are others using DPAPI with the user store in a web farm environment?
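
    For context, the 2.0 BCL equivalent of the custom wrapper described above is just a couple of calls; this sketch assumes CurrentUser scope, which is what ties the key material to the service account's (roaming) profile and makes the cross-machine behaviour matter. The entropy value here is an app-chosen placeholder, not a real secret:

        using System.Security.Cryptography; // ProtectedData lives in System.Security.dll

        static class FarmCrypto
        {
            // App-specific additional entropy; placeholder bytes for illustration.
            static readonly byte[] Entropy = { 1, 2, 3 };

            // CurrentUser scope: only the same account, with its profile loaded,
            // can round-trip the data; this is what breaks when the key state
            // differs between farm nodes.
            public static byte[] Protect(byte[] plaintext)
            {
                return ProtectedData.Protect(plaintext, Entropy, DataProtectionScope.CurrentUser);
            }

            public static byte[] Unprotect(byte[] ciphertext)
            {
                return ProtectedData.Unprotect(ciphertext, Entropy, DataProtectionScope.CurrentUser);
            }
        }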

    Read the article

  • What are the limitations of the .NET Assembly format?

    - by McKAMEY
    We just ran into an interesting issue that I've not experienced before. We have a large scale production ASP.NET 3.5 SP1 Web App Project in Visual Studio 2008 SP1 which gets compiled and deployed using a Website Deployment Project. Everything has worked fine for the last year, until after a check-in yesterday the app started critically failing with BadImageFormatException. The check-in in question doesn't change anything particularly special, and the errors are coming from areas of the app not even changed. Using Reflector we inspected the offending methods to find that there were garbage strings in the code (which Reflector humorously interpreted as Chinese characters). We have consistently reproduced this on several machines, so it does not appear to be hardware related. Further inspection showed that those garbage strings did not exist in the assemblies used as inputs to aspnet_merge.exe during deployment.

    Web Deployment Project "Output Assemblies" properties:

        - Merge all outputs to a single assembly
        - Merge each individual folder output to its own assembly
        - Merge all pages and control outputs to a single assembly
        - Create a separate assembly for each page and control output

    In the web deployment project properties, if we set the merge options to the first option ("Merge all outputs to a single assembly") we experience the issue, yet all of the other options work perfectly!

    So my question: does anyone know why this is happening? Is there a size limit to aspnet_merge.exe's capabilities (the resulting merged DLL is around 19.3 MB)? Are there any other known issues with merging the output of WAPs? I would love it if any assembly format / aspnet_merge gurus know about any such limitations like this. Seems to me like a 25MB assembly, while big, isn't outrageous. Less disk to hit if it is all pregen'd stuff.

    Read the article

  • What file format can represent an uncompressed raster image at 48 or 64 bits per pixel?

    - by finnw
    I am creating screenshots under Windows and using the LockBits function from GDI+ to extract the pixel data, which will then be written to a file. To maximise performance I am also:

        - Using the same PixelFormat as the source bitmap, to avoid format conversion
        - Using the ImageLockModeUserInputBuf flag to extract the pixel data into a pre-allocated buffer

    This pre-allocated buffer (pointed to by BitmapData::Scan0) is part of a memory-mapped file (to avoid copying the pixel data again). I will also be writing the code that reads the file, so I can use (or invent) any format I wish. However, I would prefer to use a well-known format that existing programs (ideally web browsers) are able to read, because that means I can visually confirm that the images are correct before writing the code for the other program (that reads the image). I have implemented this successfully for the PixelFormat32bppRGB format, which matches the format of a 32bpp BMP file, so if I extract the pixel data directly into the memory-mapped BMP file and prefix it with a BMP header, I get a valid BMP image file that can be opened in Paint and most browsers. Unfortunately, one of the machines I am testing on returns pixels in PixelFormat64bppPARGB format (presumably this is influenced by the video adapter driver) and there is no corresponding BMP pixel format for this. Converting to a 16, 24 or 32bpp BMP format slows the program down considerably (as well as being lossy), so I am looking for a file format that can use this pixel format without conversion, so I can extract directly into the memory-mapped file as I have done with the 32bpp format. What raster image file formats support 48bpp and/or 64bpp?
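
    For concreteness, here is a rough managed sketch of the user-input-buffer technique described above; it is an illustration under assumptions (names are mine, and the destination pointer is assumed to come from the memory-mapped file view), not the actual code:

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;

        static class PixelGrabber
        {
            // Extract raw pixels in the bitmap's native format (e.g.
            // PixelFormat64bppPARGB) into a caller-owned buffer, so GDI+
            // writes straight into the memory-mapped file.
            public static void CopyPixels(Bitmap source, IntPtr destination, int strideBytes)
            {
                var rect = new Rectangle(0, 0, source.Width, source.Height);
                var userBuffer = new BitmapData
                {
                    Width = source.Width,
                    Height = source.Height,
                    PixelFormat = source.PixelFormat, // same format: no conversion
                    Stride = strideBytes,
                    Scan0 = destination
                };

                BitmapData locked = source.LockBits(
                    rect,
                    ImageLockMode.ReadOnly | ImageLockMode.UserInputBuffer,
                    source.PixelFormat,
                    userBuffer);
                source.UnlockBits(locked);
            }
        }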

    Read the article

  • MSMQ on Win2008 R2 won’t receive messages from older clients

    - by Graffen
    Hi all, I'm battling a really weird problem here. I have a Windows 2008 R2 server with Message Queueing installed. On another machine, running Windows 2003, is a service that is set up to send messages to a public queue on the 2008 server. However, messages never show up on the server. I've written a small console app that just sends a "Hello World" message to a test queue on the 2008 machine. Running this app on XP or 2003 results in absolutely nothing. However, when I try running the app on my Windows 7 machine, a message is delivered just fine. I've been through all sorts of security settings and disabled firewalls on all machines, etc. The event log shows nothing of interest, and no exceptions are being thrown on the clients. Running a packet sniffer (Wireshark) on the server reveals only a little. When trying to send a message from XP or 2003, I only see an ICMP error "Port Unreachable" on port 3527 (which I gather is an MQPing packet?). After that, silence. Wireshark shows a nice little stream of packets when I try from my Win7 client (as expected; messages get delivered just fine from Win7). I've enabled MSMQ End2End logging on the server, but only entries from the messages sent from my Win7 machine are appearing in the log. So somehow it seems that messages are being dropped silently somewhere along the route from XP or 2003 to my 2008 server. Does anyone have any clues as to what might be causing this mysterious behaviour? -- Jesper
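
    For reference, a minimal "Hello World" sender along the lines of the console app described above might look like this. The server and queue names are placeholders; the direct format name is used here as an assumption worth testing, since it bypasses Active Directory queue lookup, one possible difference between the older and newer client OSes:

        using System.Messaging; // add a reference to System.Messaging.dll

        static class MsmqSmokeTest
        {
            static void Main()
            {
                // Direct format name: no AD lookup, talks straight to the host.
                const string path = @"FormatName:DIRECT=OS:server2008\testqueue";

                using (var queue = new MessageQueue(path))
                using (var msg = new Message("Hello World"))
                {
                    msg.Recoverable = true; // persist to disk along the route
                    queue.Send(msg);
                }
            }
        }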

    Read the article

  • Possible Data Execution Prevention problem in Windows 7

    - by Joel in Gö
    I have a serious problem with my .NET program. It calls a native DLL, and then crashes instantly because it can't find a native method. This is behaviour we have seen before, whereby the C# compiler, in its infinite wisdom, sets the flag that says the program is DEP compatible, even if it calls a native DLL which patently is not. We have the standard workaround for this, where the flag is set to Not DEP Compatible in a post-build step, and this works fine. Everywhere except on my machine. I have Windows 7 32 bit, and the program works fine on the Win7 64 bit machines that we have, as well as on Vista and XP; we have not yet been able to check on another Win7 32 bit machine. However, on my machine the DataExecutionPrevention_SupportPolicy is 0, i.e. we have successfully switched DEP off. The DLL in question also works fine when called from a native program. We are running out of ideas... any help would be much appreciated!
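
    One diagnostic that might help narrow this down (a sketch, not part of the original post): on 32 bit Windows the process can ask the kernel what DEP policy it actually ended up with, which would confirm whether the post-build flag change took effect on the failing machine:

        using System;
        using System.Runtime.InteropServices;

        static class DepProbe
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool GetProcessDEPPolicy(IntPtr hProcess, out uint flags, out bool permanent);

            // Report the effective DEP policy of the current process
            // (flags == 0 means DEP is disabled for this process).
            public static void Report()
            {
                uint flags;
                bool permanent;
                // new IntPtr(-1) is the pseudo-handle for the current process.
                if (GetProcessDEPPolicy(new IntPtr(-1), out flags, out permanent))
                    Console.WriteLine("DEP flags: {0}, permanent: {1}", flags, permanent);
                else
                    Console.WriteLine("GetProcessDEPPolicy failed: " + Marshal.GetLastWin32Error());
            }
        }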

    Read the article

  • Linq to SQL with INSTEAD OF Trigger and an Identity Column

    - by Bob Horn
    I need to use the clock on my SQL Server to write a time to one of my tables, so I thought I'd just use GETDATE(). The problem is that I'm getting an error because of my INSTEAD OF trigger. Is there a way to set one column to GETDATE() when another column is an identity column? This is the LINQ to SQL:

        internal void LogProcessPoint(WorkflowCreated workflowCreated, int processCode)
        {
            ProcessLoggingRecord processLoggingRecord = new ProcessLoggingRecord()
            {
                ProcessCode = processCode,
                SubId = workflowCreated.SubId,
                EventTime = DateTime.Now // I don't care what this is. SQL Server will use GETDATE() instead.
            };

            this.Database.Add<ProcessLoggingRecord>(processLoggingRecord);
        }

    This is the table. EventTime is what I want to have as GETDATE(). I don't want the column to be null. And here is the trigger:

        ALTER TRIGGER [Master].[ProcessLoggingEventTimeTrigger]
        ON [Master].[ProcessLogging]
        INSTEAD OF INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            SET IDENTITY_INSERT [Master].[ProcessLogging] ON;

            INSERT INTO ProcessLogging (ProcessLoggingId, ProcessCode, SubId, EventTime, LastModifiedUser)
            SELECT ProcessLoggingId, ProcessCode, SubId, GETDATE(), LastModifiedUser
            FROM inserted

            SET IDENTITY_INSERT [Master].[ProcessLogging] OFF;
        END

    Without getting into all of the variations I've tried, this last attempt produces this error:

        InvalidOperationException
        Member AutoSync failure. For members to be AutoSynced after insert, the type must
        either have an auto-generated identity, or a key that is not modified by the
        database after insert.

    I could remove EventTime from my entity, but I don't want to do that. If it was gone, though, then it would be NULL during the INSERT and GETDATE() would be used. Is there a way that I can simply use GETDATE() on the EventTime column for INSERTs?

    Note: I do not want to use C#'s DateTime.Now, for two reasons:
        1. One of these inserts is generated by SQL Server itself (from another stored procedure).
        2. Times can be different on different machines, and I'd like to know exactly how fast my processes are happening.
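
    One direction worth exploring, given the AutoSync error above, is the mapping rather than the trigger. The following is a hypothetical attribute-based mapping sketch (not the poster's actual entity): turning AutoSync off for EventTime tells LINQ to SQL not to read the trigger-written value back after the insert, so the client-side dummy value no longer conflicts with the INSTEAD OF trigger:

        using System;
        using System.Data.Linq.Mapping;

        [Table(Name = "Master.ProcessLogging")]
        public class ProcessLoggingRecord
        {
            [Column(IsPrimaryKey = true, IsDbGenerated = true)]
            public int ProcessLoggingId { get; set; }

            [Column]
            public int ProcessCode { get; set; }

            [Column]
            public int SubId { get; set; }

            // Sent as a throwaway value; the trigger substitutes GETDATE()
            // server-side, and AutoSync.Never stops L2S from syncing it back.
            [Column(AutoSync = AutoSync.Never, UpdateCheck = UpdateCheck.Never)]
            public DateTime EventTime { get; set; }
        }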

    Read the article

  • SQL deadlock on delete then bulk insert

    - by StarLite
    I have an issue with a deadlock in SQL Server that I haven't been able to resolve. Basically I have a large number of concurrent connections (from many machines) that are executing transactions where they first delete a range of entries and then re-insert entries within the same range with a bulk insert. Essentially, the transaction looks like this:

        BEGIN TRANSACTION T1

        DELETE FROM [TableName] WITH( XLOCK HOLDLOCK )
        WHERE [Id]=@Id AND [SubId]=@SubId

        INSERT BULK [TableName] (
             [Id] Int
            ,[SubId] Int
            ,[Text] VarChar(max) COLLATE SQL_Latin1_General_CP1_CI_AS
        ) WITH(CHECK_CONSTRAINTS, FIRE_TRIGGERS)

        COMMIT TRANSACTION T1

    The bulk insert only inserts items matching the Id and SubId of the deletion in the same transaction. Furthermore, these Id and SubId entries should never overlap. When I have enough concurrent transactions of this form, I start to see a significant number of deadlocks between these statements. I added the locking hints XLOCK HOLDLOCK to attempt to deal with the issue, but they don't seem to be helping. The canonical deadlock graph for this error shows:

        Connection 1:
            - Holds RangeX-X on PK_TableName
            - Holds IX page lock on the table
            - Requesting X page lock on the table

        Connection 2:
            - Holds IX page lock on the table
            - Requests RangeX-X lock on the table

    What do I need to do in order to ensure that these deadlocks don't occur? I have been doing some reading on the RangeX-X locks, and I'm not sure I fully understand what is going on with these. Do I have any options short of locking the entire table here?

    Read the article

  • Force creation of query execution plan

    - by Marc
    I have the following situation:

        - .NET 3.5 WinForms client app accessing SQL Server 2008
        - Some queries returning relatively big amounts of data are used quite often by a form
        - Users are using local SQL Express and restarting their machines at least daily
        - Other users are working remotely over slow network connections

    The problem is that after a restart, the first time users open this form the queries are extremely slow, taking more or less 15s on a fast machine to execute. Afterwards the same queries take only 3s. Of course this comes from the fact that no data is cached and it must be loaded from disk first.

    My question: would it be possible to force the loading of the required data into SQL Server's cache in advance?

    Note: My first idea was to execute the queries in a background worker when the application starts, so that when the user opens the form the queries will already be cached and execute fast directly (see the sketch below). However, I don't want to load the result of the queries over to the client, as some users are working remotely or otherwise have slow networks. So I thought of just executing the queries from a stored procedure and putting the results into temporary tables so that nothing would be returned. It turned out that some of the result sets use dynamic columns, so I couldn't create the corresponding temp tables, and thus this isn't a solution. Do you happen to have any other idea?
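
    The background-worker idea mentioned in the note would look roughly like this on the client side. This is a sketch only: "WarmUpFormQueries" is a hypothetical stored procedure that would run the heavy queries and discard their results server-side, so the buffer pool gets warmed without sending rows over the slow links:

        using System.ComponentModel;
        using System.Data.SqlClient;

        static class CacheWarmer
        {
            // Kick off the warm-up at application start; the form opens
            // later against an already-populated SQL Server cache.
            public static void WarmUpAsync(string connectionString)
            {
                var worker = new BackgroundWorker();
                worker.DoWork += delegate
                {
                    using (var conn = new SqlConnection(connectionString))
                    using (var cmd = new SqlCommand("EXEC dbo.WarmUpFormQueries", conn))
                    {
                        cmd.CommandTimeout = 120; // cold reads can be slow
                        conn.Open();
                        cmd.ExecuteNonQuery();    // nothing returned over the wire
                    }
                };
                worker.RunWorkerAsync();
            }
        }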

    Read the article

  • org.eclipse.jdt.ui.wizards.NewClassWizardPage available on Linux, but not on the Mac?

    - by Martin Cowie
    Most esteemed host of Eclipse magi...

    I am trying to create an instance of the org.eclipse.jdt.ui.wizards.NewClassWizardPage class. I have one project where I do this, and it will compile & run on Linux, but not on a Mac. Both machines are running the Helios edition of Eclipse with the PDE; both were downloaded within the last week. The bundle org.eclipse.jdt.ui is available on the Mac, but for some reason the Mac will not compile the phrase

        import org.eclipse.jdt.ui.wizards.NewClassWizardPage;

    saying "The import org.eclipse.jdt.ui.wizards.NewClassWizardPage cannot be resolved". The MANIFEST.MF is a simple one:

        Manifest-Version: 1.0
        Bundle-ManifestVersion: 2
        Bundle-Name: RcpTest0
        Bundle-SymbolicName: rcpTest0; singleton:=true
        Bundle-Version: 1.0.0.qualifier
        Bundle-Activator: rcptest0.Activator
        Require-Bundle: org.eclipse.ui,
         org.eclipse.core.runtime,
         org.eclipse.core.resources,
         org.eclipse.jdt,
         org.eclipse.jdt.core,
         org.eclipse.jdt.ui
        Bundle-ActivationPolicy: lazy
        Bundle-RequiredExecutionEnvironment: JavaSE-1.6

    Your clues & boos are all most welcome.

    Read the article
