Search Results

Search found 666 results on 27 pages for 'profiler'.

Page 25/27 | < Previous Page | 21 22 23 24 25 26 27  | Next Page >

  • ASP.Net Web Farm Monitoring

    - by cisellis
    I am looking for suggestions on doing some simple monitoring of an ASP.NET web farm as close to real-time as possible. The objectives of this question are to:

    - Identify the best way to monitor several Windows Server production boxes during short (minutes-long) periods of ridiculous load.
    - Receive near-real-time feedback on a few key metrics about each box. These are simple metrics available via WMI such as CPU, memory and disk paging. I am defining my time constraint as "as soon as possible", with a 120-second delay being the absolute upper limit.
    - Monitor whether any given box is up (with "up" defined as responding to web requests in a reasonable amount of time).

    Here are more details and things I've tried. I am not interested in logging; we have logging solutions in place. I have looked at solutions such as ELMAH, which don't provide much in the way of hardware monitoring and are not visible across an entire web farm. ASP.NET Health Monitoring is too broad, focuses too much on logging and is not acceptable for deep analysis. We are on Amazon Web Services and have looked into CloudWatch. It looks great, but messages in the forum indicate that the metrics are often a few minutes behind, with one thread citing 2 minutes as the absolute soonest you could expect to receive the feedback. That would be good to have for later analysis but does not help us in real time. Tools like the JetBrains profiler are good for testing but, again, not helpful during real-time monitoring.

    The closest out-of-box solution I've seen is Nagios, which is free and appears to measure key indicators on any kind of box, including Windows. However, it appears to require a Linux box to run on and a good deal of manual configuration. I'd prefer not to spend my time mining config files and then be up a creek when it fails in production, since Linux is not my main (or even secondary) environment.

    Are there any out-of-box solutions that I am missing? Obviously a Windows-based solution that is easy to set up is ideal, and I don't require many bells and whistles. In the absence of an out-of-box solution, it seems easy to write something simple to handle what I need. I've been thinking of a simple client-server setup where the server requests a few WMI metrics from each client over HTTP and sticks them in a database. We could then monitor the metrics via a query or a dashboard. If a client doesn't respond, it's effectively down. Any problems with this, best practices, or other ideas? Thanks for any help/feedback.
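    For the roll-your-own direction described above, a minimal sketch of the WMI polling side might look like the following (C#; the host name "webserver01" is a placeholder, and the poller is assumed to have WMI access to each box - the queried classes are standard WMI):

        using System;
        using System.Management; // add a reference to System.Management.dll

        class WmiPoller
        {
            static void Main()
            {
                // "webserver01" stands in for one farm member
                var scope = new ManagementScope(@"\\webserver01\root\cimv2");
                scope.Connect();

                // CPU load, one value per processor
                var cpuQuery = new ObjectQuery("SELECT LoadPercentage FROM Win32_Processor");
                using (var searcher = new ManagementObjectSearcher(scope, cpuQuery))
                {
                    foreach (ManagementObject mo in searcher.Get())
                        Console.WriteLine("CPU: {0}%", mo["LoadPercentage"]);
                }

                // free physical memory, reported in KB
                var memQuery = new ObjectQuery("SELECT FreePhysicalMemory FROM Win32_OperatingSystem");
                using (var searcher = new ManagementObjectSearcher(scope, memQuery))
                {
                    foreach (ManagementObject mo in searcher.Get())
                        Console.WriteLine("Free memory: {0} KB", mo["FreePhysicalMemory"]);
                }
            }
        }

    Run on a timer and written to a table keyed by box and timestamp, this gives the query/dashboard view described, and a box that fails the WMI call (or an HTTP ping) can be flagged as down.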

    Read the article

  • DataTable identity column not set after DataAdapter.Update/Refresh on table with "instead of"-trigger

    - by Arno
    Within our unit tests we use plain ADO.NET (DataTable, DataAdapter) for preparing the database and checking the results, while the tested components themselves run under NHibernate 2.1. The .NET version is 3.5, the SQL Server version is 2005. The database tables have identity columns as primary keys. Some tables apply instead-of-insert/update triggers (this is due to backward compatibility, nothing I can change). The triggers generally work like this:

        create trigger dbo.emp_insert
        on dbo.emp
        instead of insert
        as
        begin
            set nocount on
            insert into emp ...
            select @@identity
        end

    The insert statement issued by the ADO.NET DataAdapter (generated on the fly by a thin ADO.NET wrapper) tries to retrieve the identity value back into the DataRow:

        exec sp_executesql N'
        insert into emp (...) values (...);
        select id, ... from emp where id = @@identity
        '

    But the DataRow's id column is still 0. When I remove the trigger temporarily, it works fine - the id column then holds the identity value set by the database. NHibernate, on the other hand, uses this kind of insert statement:

        exec sp_executesql N'
        insert into emp (...) values (...);
        select scope_identity()
        '

    This works; the NHibernate POCO has its id property correctly set right after flushing. Which seems a little counter-intuitive to me, as I expected the trigger to run in a different scope, hence @@identity should be a better fit than scope_identity(). So I thought no problem, I will apply scope_identity() instead of @@identity under ADO.NET as well. But this has no effect; the DataRow value is still not updated accordingly.

    And now for the best part: when I copy and paste those two statements from SQL Server Profiler into a Management Studio query (that is, including "exec sp_executesql") and run them there, the results seem to be inverse! There the ADO.NET version works, and the NHibernate version doesn't (select scope_identity() returns null). I tried several times to verify, but to no avail. Of course this just shows the result set coming from the database; whatever happens inside NHibernate and ADO.NET is another topic. Also, several session properties defined by T-SQL SET are different in the two scenarios (Management Studio query vs. application at runtime). This is a real puzzle to me. I would be happy about any insights. Thank you!
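    One workaround consistent with the behavior described above is to bypass the DataAdapter's automatic refresh and fetch the identity explicitly, writing it into the row yourself. A sketch (the emp column is hypothetical; whether @@identity or scope_identity() is the right call depends on how the trigger behaves on your server, so test both):

        using System.Data;
        using System.Data.SqlClient;

        static class IdentityFetch
        {
            // insert one row and manually refresh the DataRow's identity column
            public static void InsertWithIdentity(SqlConnection conn, DataRow row)
            {
                const string sql =
                    "insert into emp (name) values (@name); " +
                    "select cast(@@identity as int);"; // @@identity also sees trigger-scope inserts
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@name", row["name"]);
                    row["id"] = (int)cmd.ExecuteScalar();
                }
            }
        }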

    Read the article

  • SQL Server 2000 timeout from C# .NET

    - by Johnny Egeland
    I have run into a strange problem using SQL Server 2000 and two linked servers. For two years now our solution has run without a hitch, but suddenly yesterday a query synchronizing data from one of the databases to the other started timing out. I connect to a server in the production network, which is linked to a server containing orders I need data from. The query contains a few joins, but this basically summarizes what is done:

        INSERT INTO ProductionDataCache (column1, column2, ...)
        SELECT tab1.column1, tab1.column2, tab2.column1, tab3.column1 ...
        FROM linkedserver.database.dbo.Table1 AS tab1
        JOIN linkedserver.database.dbo.Table2 AS tab2 ON (...)
        JOIN linkedserver.database.dbo.Table3 AS tab3 ON (...)
        ...
        WHERE tab1.productionOrderId = @id
        ORDER BY ...

    Obviously my first attempt to fix the problem was to increase the timeout limit from the original 5 minutes. But when I arrived at 30 minutes and still got a timeout, I started to suspect something else was going on; a query just does not go from executing in less than 5 minutes to over 30 minutes overnight. I outputted the SQL query (which was originally in the C# code) to my logs and decided to execute the query in Query Analyzer directly on the database server. To my big surprise, the query executed correctly in less than 10 seconds.

    So I isolated the SQL execution in a simple test program and observed the same query time out both on the server originally running this solution AND when running it locally on the database server. I have also tried to create a stored procedure and execute that from the program, but it also times out, while running it in Query Analyzer works fine in less than a few seconds. It seems the problem only occurs when I execute this query from the C# program. Has anyone seen such behavior before, and found a solution for it?

    UPDATE: I have now used SQL Profiler on the server. The obvious difference is that when executing the query from the .NET program, it shows up in the log as "exec sp_executesql N'INSERT INTO ...'", but when executing from Query Analyzer it appears as a normal query in the log. Further, I tried connecting Query Analyzer using the same SQL user as the program, and this triggered the problem in Query Analyzer as well. So it seems the problem only occurs when connecting via TCP/IP using a SQL user.
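    A classic cause of exactly this symptom (fast in Query Analyzer, timing out via sp_executesql) is that the two sessions get different cached plans because their SET options differ - ARITHABORT in particular defaults to ON in the query tools and OFF on ADO.NET connections - combined with parameter sniffing. A quick experiment, sketched in C# (the connection string and SQL are placeholders for the question's own), is to align that setting on the application's connection before running the query:

        using System.Data.SqlClient;

        static class PlanExperiment
        {
            public static void RunWithArithAbort(string connectionString, string syncSql, int orderId)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();

                    // match the session option Query Analyzer uses by default
                    using (var set = new SqlCommand("SET ARITHABORT ON", conn))
                        set.ExecuteNonQuery();

                    using (var cmd = new SqlCommand(syncSql, conn))
                    {
                        cmd.Parameters.AddWithValue("@id", orderId);
                        cmd.CommandTimeout = 300; // seconds
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }

    If the timeout disappears, the problem is a stale or badly sniffed plan (statistics, recompile hints) rather than anything in the C# code.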

    Read the article

  • Facing Memory Leaks in AES Encryption Method

    - by Mubashar Ahmad
    Can anyone please identify whether there are any possible memory leaks in the following code? I have tried .NET Memory Profiler and it says "CreateEncryptor" and some other functions are leaving unmanaged memory leaks, which I have confirmed using performance monitors. But dispose, clear and close calls are already placed wherever possible. Please advise me accordingly; it's urgent.

        public static string Encrypt(string plainText, string key)
        {
            //Set up the encryption objects
            byte[] encryptedBytes = null;
            using (AesCryptoServiceProvider acsp = GetProvider(Encoding.UTF8.GetBytes(key)))
            {
                byte[] sourceBytes = Encoding.UTF8.GetBytes(plainText);
                using (ICryptoTransform ictE = acsp.CreateEncryptor())
                {
                    //Set up stream to contain the encryption
                    using (MemoryStream msS = new MemoryStream())
                    {
                        //Perform the encryption, storing output into the stream
                        using (CryptoStream csS = new CryptoStream(msS, ictE, CryptoStreamMode.Write))
                        {
                            csS.Write(sourceBytes, 0, sourceBytes.Length);
                            csS.FlushFinalBlock();
                            //sourceBytes are now encrypted as an array of secure bytes
                            encryptedBytes = msS.ToArray(); //.ToArray() is important, don't mess with the buffer
                            csS.Close();
                        }
                        msS.Close();
                    }
                }
                acsp.Clear();
            }
            //return the encrypted bytes as a BASE64 encoded string
            return Convert.ToBase64String(encryptedBytes);
        }

        private static AesCryptoServiceProvider GetProvider(byte[] key)
        {
            AesCryptoServiceProvider result = new AesCryptoServiceProvider();
            result.BlockSize = 128;
            result.KeySize = 256;
            result.Mode = CipherMode.CBC;
            result.Padding = PaddingMode.PKCS7;
            result.GenerateIV();
            result.IV = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
            byte[] RealKey = GetKey(key, result);
            result.Key = RealKey;
            // result.IV = RealKey;
            return result;
        }

        private static byte[] GetKey(byte[] suggestedKey, SymmetricAlgorithm p)
        {
            byte[] kRaw = suggestedKey;
            List<byte> kList = new List<byte>();
            for (int i = 0; i < p.LegalKeySizes[0].MaxSize; i += 8)
            {
                kList.Add(kRaw[(i / 8) % kRaw.Length]);
            }
            byte[] k = kList.ToArray();
            return k;
        }
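    For what it's worth, the stream plumbing above can be collapsed, which leaves fewer objects to reason about when chasing a leak. A sketch (not a drop-in replacement for the author's key/IV handling - the caller must supply a legal-size key and a 16-byte IV):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class AesSketch
        {
            // one TransformFinalBlock call replaces MemoryStream + CryptoStream
            public static string Encrypt(string plainText, byte[] key, byte[] iv)
            {
                byte[] source = Encoding.UTF8.GetBytes(plainText);
                using (var aes = new AesCryptoServiceProvider { Key = key, IV = iv })
                using (ICryptoTransform enc = aes.CreateEncryptor())
                {
                    byte[] encrypted = enc.TransformFinalBlock(source, 0, source.Length);
                    return Convert.ToBase64String(encrypted);
                }
            }
        }

    Separately from the leak question, note that the fixed all-zero IV in GetProvider weakens CBC mode, and the GenerateIV() call just before it is effectively thrown away.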

    Read the article

  • Count number of queries executed by NHibernate in a unit test

    - by Bittercoder
    In some unit/integration tests of the code we wish to check that correct usage of the second-level cache is being employed by our code. Based on the code presented by Ayende here: http://ayende.com/Blog/archive/2006/09/07/MeasuringNHibernatesQueriesPerPage.aspx I wrote a simple class for doing just that:

        public class QueryCounter : IDisposable
        {
            CountToContextItemsAppender _appender;

            public int QueryCount
            {
                get { return _appender.Count; }
            }

            public void Dispose()
            {
                var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger;
                logger.RemoveAppender(_appender);
            }

            public static QueryCounter Start()
            {
                var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger;
                lock (logger)
                {
                    foreach (IAppender existingAppender in logger.Appenders)
                    {
                        if (existingAppender is CountToContextItemsAppender)
                        {
                            var countAppender = (CountToContextItemsAppender) existingAppender;
                            countAppender.Reset();
                            return new QueryCounter { _appender = (CountToContextItemsAppender) existingAppender };
                        }
                    }
                    var newAppender = new CountToContextItemsAppender();
                    logger.AddAppender(newAppender);
                    logger.Level = Level.Debug;
                    logger.Additivity = false;
                    return new QueryCounter { _appender = newAppender };
                }
            }

            public class CountToContextItemsAppender : IAppender
            {
                int _count;

                public int Count
                {
                    get { return _count; }
                }

                public void Close() { }

                public void DoAppend(LoggingEvent loggingEvent)
                {
                    if (string.Empty.Equals(loggingEvent.MessageObject)) return;
                    _count++;
                }

                public string Name { get; set; }

                public void Reset() { _count = 0; }
            }
        }

    With intended usage:

        using (var counter = QueryCounter.Start())
        {
            // ... do something
            Assert.Equal(1, counter.QueryCount); // check the query count matches our expectations
        }

    But it always returns 0 for QueryCount; no SQL statements are being logged. However, if I make use of NHibernate Profiler and invoke this in my test case:

        NHibernateProfiler.Initialize();

    where NHProf uses a similar approach to capture logging output from NHibernate for analysis via log4net etc., then my QueryCounter starts working. It looks like I'm missing something in my code to get log4net configured correctly for logging NHibernate SQL... does anyone have any pointers on what else I need to do to get SQL logging output from NHibernate?
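    The usual missing piece with this technique is that log4net is never marked as configured in the test process, so the "NHibernate.SQL" logger drops events before any appender sees them - which is also the kind of wiring NHProf's Initialize call performs for itself. A sketch of doing it by hand, assuming only a log4net reference (call it before QueryCounter.Start):

        using log4net;
        using log4net.Core;
        using log4net.Repository.Hierarchy;

        static class Log4NetTestSetup
        {
            public static void EnableSqlLogging()
            {
                // mark the repository configured so logging calls stop being no-ops
                var hierarchy = (Hierarchy) LogManager.GetRepository();
                hierarchy.Root.Level = Level.Debug;
                hierarchy.Configured = true;
            }
        }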

    Read the article

  • Should Application_End fire on an automatic App Pool Recycle?

    - by Laramie
    I have read this, this, this and this plus a dozen other posts/blogs. I have an ASP.NET app in shared hosting that is frequently recycling. We use NLog and have the following code in global.asax:

        void Application_Start(object sender, EventArgs e)
        {
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug("\r\n\r\nAPPLICATION STARTING\r\n\r\n");
        }

        protected void Application_OnEnd(Object sender, EventArgs e)
        {
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug("\r\n\r\nAPPLICATION_OnEnd\r\n\r\n");
        }

        void Application_End(object sender, EventArgs e)
        {
            HttpRuntime runtime = (HttpRuntime)typeof(System.Web.HttpRuntime).InvokeMember("_theRuntime",
                BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.GetField, null, null, null);
            if (runtime == null)
                return;
            string shutDownMessage = (string)runtime.GetType().InvokeMember("_shutDownMessage",
                BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null);
            string shutDownStack = (string)runtime.GetType().InvokeMember("_shutDownStack",
                BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null);
            ApplicationShutdownReason shutdownReason = System.Web.Hosting.HostingEnvironment.ShutdownReason;
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug(String.Format("\r\n\r\nAPPLICATION END\r\n\r\n_shutDownReason = {2}\r\n\r\n _shutDownMessage = {0}\r\n\r\n_shutDownStack = {1}\r\n\r\n",
                shutDownMessage, shutDownStack, shutdownReason));
        }

        void Application_Error(object sender, EventArgs e)
        {
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug("\r\n\r\nApplication_Error\r\n\r\n");
        }

    Our log file is littered with "APPLICATION STARTING" entries, but neither Application_OnEnd, Application_End, nor Application_Error is ever fired during these spontaneous restarts. I know they are working because there are entries for touching the web.config or /bin files. We also ran a memory overload test and can trigger an OutOfMemoryException, which is caught in Application_Error. We are trying to determine whether the virtual memory limit is causing the recycling. We have added GC.GetTotalMemory(false) throughout the code, but this is for all of .NET, not just our app's pool, correct? We've also tried

        var oPerfCounter = new PerformanceCounter();
        oPerfCounter.CategoryName = "Process";
        oPerfCounter.CounterName = "Virtual Bytes";
        oPerfCounter.InstanceName = "iisExpress";
        logger.Debug("Virtual Bytes: " + oPerfCounter.RawValue + " bytes");

    but don't have permission in shared hosting. I've monitored the app on a dev server with the same requests that caused the recycles in production, with ANTS Memory Profiler attached, and can't seem to find a culprit. We have also run it with a debugger attached in dev to check for uncaught exceptions in spawned threads that might cause the app to abort. My questions are these:

    1. How can I effectively monitor memory usage in shared hosting to tell how much my application is consuming prior to an application recycle?
    2. Why are the Application_[End/OnEnd/Error] handlers in global.asax not being called?
    3. How else can I determine what is causing these recycles?

    Thanks.
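    One more hook worth trying in shared hosting: HostingEnvironment.RegisterObject gives a Stop callback that ASP.NET invokes on shutdown paths where the global.asax events can be unreliable. A minimal sketch (NLog usage mirrors the handlers above; register the object once in Application_Start):

        using System.Web.Hosting;

        // in Application_Start: HostingEnvironment.RegisterObject(new ShutdownLogger());
        public class ShutdownLogger : IRegisteredObject
        {
            public void Stop(bool immediate)
            {
                NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
                logger.Debug("HOST STOP, immediate=" + immediate +
                             ", reason=" + HostingEnvironment.ShutdownReason);
                HostingEnvironment.UnregisterObject(this);
            }
        }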

    Read the article

  • Execute SQL SP in Excel VBA

    - by TheOCD
    Hi, I am having a problem getting all the columns back when I execute the following code in Excel VBA: I only get 6 of the 23 columns back. The connection, command etc. work fine (I can see the exec command in SQL Profiler), and headers are created for all 23 columns, but I only get data for 6 of them. Side note: this is not production-level code - I have left out error handling on purpose. The stored procedure works fine in SQL Management Studio, ASP.NET and a C# WinForms app. This is Excel 2003 connecting to SQL 2008. Can someone help me troubleshoot it?

        Dim connection As ADODB.connection
        Dim recordset As ADODB.recordset
        Dim command As ADODB.command
        Dim strProcName As String 'Stored Procedure name
        Dim strConn As String ' connection string.
        Dim selectedVal As String

        'Set ADODB requirements
        Set connection = New ADODB.connection
        Set recordset = New ADODB.recordset
        Set command = New ADODB.command

        If Workbooks("Book2.xls").MultiUserEditing = True Then
            MsgBox "You do not have Exclusive access to the workbook at this time." & _
                vbNewLine & "Please have all other users close the workbook and then try again.", _
                vbOKOnly + vbExclamation
            Exit Sub
        Else
            On Error Resume Next
            ActiveWorkbook.ExclusiveAccess
            'On Error GoTo No_Bugs
        End If

        'set the active sheet
        Set oSht = Workbooks("Book2.xls").Sheets(1)

        'get the connection string, if empty just exit
        strConn = ConnectionString()
        If strConn = "" Then
            Exit Sub
        End If

        ' selected value, if <NOTHING> just exit
        selectedVal = selectedValue()
        If selectedVal = "<NOTHING>" Then
            Exit Sub
        End If

        If Not oSht Is Nothing Then
            'Open database connection
            connection.ConnectionString = strConn
            connection.Open

            ' set command stuff.
            command.ActiveConnection = connection
            command.CommandText = "GetAlbumByName"
            command.CommandType = adCmdStoredProc
            command.Parameters.Refresh
            command.Parameters(1).Value = selectedVal

            'Execute stored procedure and return to a recordset
            Set recordset = command.Execute()

            If recordset.BOF = False And recordset.EOF = False Then
                Sheets("Sheet2").[A1].CopyFromRecordset recordset
                ' Create headers and copy data
                With Sheets("Sheet2")
                    For Column = 0 To recordset.Fields.Count - 1
                        .Cells(1, Column + 1).Value = recordset.Fields(Column).Name
                    Next
                    .Range(.Cells(1, 1), .Cells(1, recordset.Fields.Count)).Font.Bold = True
                    .Cells(2, 1).CopyFromRecordset recordset
                End With
            Else
                MsgBox "b4 BOF or after EOF.", vbOKOnly + vbExclamation
            End If

            'Close database connection and clean up
            If CBool(recordset.State And adStateOpen) = True Then recordset.Close
            Set recordset = Nothing
            If CBool(connection.State And adStateOpen) = True Then connection.Close
            Set connection = Nothing
        Else
            MsgBox "oSheet2 is Nothing.", vbOKOnly + vbExclamation
        End If

    Read the article

  • do I need to close an audio Clip?

    - by Michael
    I have an application that processes real-time data and is supposed to beep when a certain event occurs. The triggering event can occur multiple times per second, and if the beep is already playing when another event triggers, the code is supposed to just ignore it (as opposed to interrupting the current beep and starting a new one). Here is the basic code:

        Clip clickClip;

        public void prepareProcess() {
            super.prepareProcess();
            clickClip = null;
            try {
                clipFile = new File("C:/WINDOWS/Media/CHIMES.wav");
                ais = AudioSystem.getAudioInputStream(clipFile);
                clickClip = AudioSystem.getClip();
                clickClip.open(ais);
                fileIsLoaded = true;
            } catch (Exception ex) {
                clickClip = null;
                fileIsLoaded = false;
            }
        }

        public void playSound() {
            if (fileIsLoaded) {
                if ((clickClip == null) || (!clickClip.isRunning())) {
                    try {
                        clickClip.setFramePosition(0);
                        clickClip.start();
                    } catch (Exception ex) {
                        System.out.println("Cannot play click noise");
                        ex.printStackTrace();
                    }
                }
            }
        }

    The prepareProcess method gets run once in the beginning, and the playSound method is called every time a triggering event occurs. My question is: do I need to close the clickClip object? I know I could add a listener to monitor for a stop event, but since the event occurs so frequently I'm worried the extra processing is going to slow down the real-time data collection. The code seems to run fine, but my worry is memory leaks. The code above is based on an example I found while searching the net, and the example used a listener to close the Clip, specifically "to eliminate memory leaks that would occur when the stop method wasn't implemented". My program is intended to run for hours, so any memory leaks I have will cause problems.

    I'll be honest: I have no idea how to verify whether or not I've got a problem. I'm using NetBeans, and running the memory profiler just gave me a huge list of things that I don't know how to read. This is supposed to be the simple part of the program, and I'm spending hours on it. Any help would be greatly appreciated! Michael

    Read the article

  • Report Builder Error (Input string was not in a correct format) when filtering data

    - by JVZ
    The issue: the user gets the error "The requested list could not be retrieved because the query is not valid or a connection could not be made to the data source." while trying to filter a report using Report Builder 2.0. When you expand the details, the message is "Input string was not in a correct format". I verified that when the user clicks to see the filter values, the query is sent to the underlying database (by using Profiler). When I build the same report [I'm a server admin] I have no issues and see all the filtered values, which leads me to believe this is a permissions issue. I believe I granted all the correct permissions; see below.

    Report server setup:

        [Project Name]
            Data Sources
            Models
            [...other folders]

    We are using SQL Server 2005. The user belongs to a power-user domain group which has the following permissions:

        Home: Report Builder, View Folders, Browser
        Home/[Project Name]: Browser, Report Builder, View Folders
        [Data Sources]: Browser, Content Developer
        [Models]: [Inherit roles from parent folder] << should be the same as the [Project Name] folder

    The report server log shows the following:

        w3wp!library!5!04/21/2010-08:47:27:: Call to GetPermissionsAction(/).
        w3wp!library!a!04/21/2010-08:47:27:: Call to GetPropertiesAction(/, PathBased).
        w3wp!library!5!04/21/2010-08:47:27:: Call to GetSystemPermissionsAction().
        w3wp!library!a!04/21/2010-08:47:27:: Call to ListChildrenAction(/, False).
        w3wp!library!a!04/21/2010-08:47:27:: Call to GetSystemPropertiesAction().
        w3wp!library!a!04/21/2010-08:47:27:: Call to GetSystemPropertiesAction().
        w3wp!library!9!04/21/2010-08:47:40:: Call to ListModelPerspectivesAction().
        w3wp!library!9!04/21/2010-08:48:22:: Call to GetUserModelAction(//Models/, ).
        w3wp!library!5!04/21/2010-08:48:45:: Call to GetItemTypeAction(/).
        w3wp!library!9!04/21/2010-08:48:46:: Call to GetItemTypeAction(/).
        w3wp!library!5!04/21/2010-08:48:53:: Call to GetItemTypeAction(/).
        w3wp!library!a!04/21/2010-08:49:21:: Call to ListModelPerspectivesAction().
        w3wp!library!b!04/21/2010-08:53:40:: Call to GetUserModelAction(//Models/, ).
        w3wp!library!5!04/21/2010-08:54:48:: i INFO: Call to RenderFirst( '' )
        w3wp!webserver!5!04/21/2010-08:54:48:: i INFO: Processed report. Report='/', Stream=''
        w3wp!library!9!04/21/2010-08:56:42:: Call to GetItemTypeAction(/).
        w3wp!library!9!04/21/2010-08:56:42:: Call to GetItemTypeAction(/).
        w3wp!library!a!04/21/2010-08:56:50:: Call to GetItemTypeAction(/).

    Any ideas on how to resolve/troubleshoot this issue?

    Read the article

  • CUDA, more threads for same work = Longer run time despite better occupancy, Why?

    - by zenna
    I encountered a strange problem where increasing my occupancy by increasing the number of threads reduced performance. I created the following program to illustrate the problem:

        #include <stdio.h>
        #include <stdlib.h>
        #include <cuda_runtime.h>

        __global__ void less_threads(float *d_out) {
            int num_inliers;
            for (int j = 0; j < 800; ++j) {
                //Do 12 computations
                num_inliers += threadIdx.x*1;
                num_inliers += threadIdx.x*2;
                num_inliers += threadIdx.x*3;
                num_inliers += threadIdx.x*4;
                num_inliers += threadIdx.x*5;
                num_inliers += threadIdx.x*6;
                num_inliers += threadIdx.x*7;
                num_inliers += threadIdx.x*8;
                num_inliers += threadIdx.x*9;
                num_inliers += threadIdx.x*10;
                num_inliers += threadIdx.x*11;
                num_inliers += threadIdx.x*12;
            }

            if (threadIdx.x == -1)
                d_out[blockIdx.x*blockDim.x+threadIdx.x] = num_inliers;
        }

        __global__ void more_threads(float *d_out) {
            int num_inliers;
            for (int j = 0; j < 800; ++j) {
                // Do 4 computations
                num_inliers += threadIdx.x*1;
                num_inliers += threadIdx.x*2;
                num_inliers += threadIdx.x*3;
                num_inliers += threadIdx.x*4;
            }

            if (threadIdx.x == -1)
                d_out[blockIdx.x*blockDim.x+threadIdx.x] = num_inliers;
        }

        int main(int argc, char* argv[]) {
            float *d_out = NULL;
            cudaMalloc((void**)&d_out, sizeof(float)*25000);
            more_threads<<<780, 128>>>(d_out);
            less_threads<<<780, 32>>>(d_out);
            return 0;
        }

    Note both kernels should do the same amount of work in total (the "if (threadIdx.x == -1)" is a trick to stop the compiler optimising everything out and leaving an empty kernel); the work should be the same, as more_threads uses 4 times as many threads but with each thread doing 4 times less work. Significant results from the profiler are as follows:

        more_threads: GPU runtime = 1474 us, reg per thread = 6, occupancy = 1, branch = 83746, divergent_branch = 26, instructions = 584065, gst request = 1084552
        less_threads: GPU runtime = 921 us, reg per thread = 14, occupancy = 0.25, branch = 20956, divergent_branch = 26, instructions = 312663, gst request = 677381

    As I said previously, the run time of the kernel using more threads is longer; this could be due to the increased number of instructions. Why are there more instructions? Why is there any branching, let alone divergent branching, considering there is no conditional code? Why are there any gst requests when there is no global memory access? What is going on here! Thanks

    Read the article

  • Same data being returned by LINQ for 2 different executions of a stored procedure?

    - by Paul
    Hello. I have a stored procedure that I am calling through Entity Framework. The stored procedure has 2 date parameters, and I supply different arguments the 2 times I call it. I have verified using SQL Profiler that the stored procedure is being called correctly and returning the correct results. But when I call my method the second time with different arguments, even though the stored procedure is bringing back the correct results, the table created contains the same data as the first time I called it.

        dtStart = 01/08/2009
        dtEnd = 31/08/2009

        public List<dataRecord> GetData(DateTime dtStart, DateTime dtEnd)
        {
            var tbl = from t in db.SP(dtStart, dtEnd)
                      select t;
            return tbl.ToList();
        }

        GetData(new DateTime(2009, 8, 1), new DateTime(2009, 8, 31)) // tbl.field1 value = 45450 - CORRECT
        GetData(new DateTime(2009, 7, 1), new DateTime(2009, 7, 31)) // tbl.field1 value = 45450 - WRONG, 27456 expected

    Is this a case of Entity Framework being clever and caching? I can't see why it would cache this, though, as it has executed the stored procedure twice. Do I have to do something to close tbl? I am using Visual Studio 2008 + Entity Framework. I also get the message "query cannot be enumerated more than once" a few times every now and then; I am not sure if that is relevant.

    FULL CODE LISTING:

        namespace ProfileDataService
        {
            public partial class DataService
            {
                public static List<MeterTotalConsumpRecord> GetTotalAllTimesConsumption(DateTime dtStart,
                    DateTime dtEnd, EUtilityGroup ug, int nMeterSelectionType, int nCustomerID, int nUserID,
                    string strSelection, bool bClosedLocations, bool bDisposedLocations)
                {
                    dbChildDataContext db = DBManager.ChildDataConext(nCustomerID);
                    var tbl = from t in db.GetTotalConsumptionByMeter(dtStart, dtEnd, (int) ug,
                                  nMeterSelectionType, nCustomerID, nUserID, strSelection,
                                  bClosedLocations, bDisposedLocations, 1)
                              select t;
                    return tbl.ToList<MeterTotalConsumpRecord>();
                }
            }
        }

        /// CALLER
        List<MeterTotalConsumpRecord> _P1Totals;
        List<MeterTotalConsumpRecord> _P2Totals;

        public void LoadData(int nUserID, int nCustomerID, ELocationSelectionMethod locationSelectionMethod,
            string strLocations, bool bIncludeClosedLocations, bool bIncludeDisposedLocations,
            DateTime dtStart, DateTime dtEnd, ReportsBusinessLogic.Lists.EPeriodType durMainPeriodType,
            ReportsBusinessLogic.Lists.EPeriodType durCompareToPeriodType,
            ReportsBusinessLogic.Lists.EIncreaseReportType rptType, bool bIncludeDecreases)
        {
            ///Code for setting properties using parameters..
            _P2Totals = ProfileDataService.DataService.GetTotalAllTimesConsumption(_P2StartDate, _P2EndDate,
                EUtilityGroup.Electricity, 1, nCustomerID, nUserID, strLocations,
                bIncludeClosedLocations, bIncludeDisposedLocations);
            _P1Totals = ProfileDataService.DataService.GetTotalAllTimesConsumption(_StartDate, _EndDate,
                EUtilityGroup.Electricity, 1, nCustomerID, nUserID, strLocations,
                bIncludeClosedLocations, bIncludeDisposedLocations);

            PopulateLines(); //This fills up a list of objects with information for my report ready for the totals to be added
            PopulateTotals(_P1Totals, 1);
            PopulateTotals(_P2Totals, 2);
        }

        void PopulateTotals(List<MeterTotalConsumpRecord> objTotals, int nPeriod)
        {
            MeterTotalConsumpRecord objMeterConsumption = null;
            foreach (IncreaseReportDataRecord objLine in _Lines)
            {
                objMeterConsumption = objTotals.Find(delegate(MeterTotalConsumpRecord t)
                    { return t.MeterID == objLine.MeterID; });
                if (objMeterConsumption != null)
                {
                    if (nPeriod == 1)
                    {
                        objLine.P1Consumption = (double)objMeterConsumption.Consumption;
                    }
                    else
                    {
                        objLine.P2Consumption = (double)objMeterConsumption.Consumption;
                    }
                    objMeterConsumption = null;
                }
            }
        }
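    The symptom (correct rows on the wire, stale objects in memory) is what identity-map caching looks like: if the materialized rows carry the same key values as rows from the first call, the context hands back the already-tracked instances instead of fresh ones. Since these calls are read-only, a hedged experiment is to turn tracking off before querying. Sketched with the question's own names (ObjectTrackingEnabled is a System.Data.Linq.DataContext member, so this applies if SP is a LINQ to SQL function import; the Entity Framework ObjectQuery equivalent is MergeOption.NoTracking):

        public List<dataRecord> GetData(DateTime dtStart, DateTime dtEnd)
        {
            // must be set before the first query on this context
            db.ObjectTrackingEnabled = false;
            var tbl = from t in db.SP(dtStart, dtEnd)
                      select t;
            return tbl.ToList();
        }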

    Read the article

  • MVC Entity Framework Model not returning correct data

    - by quagland
    Hi. I've run into a strange problem while writing an ASP.NET MVC site. I have a view in my SQL Server database that returns a few date ranges. The view works fine when running the query in SSMS. When the view data is returned by the Entity Framework model, it returns the correct number of rows, but some of the rows are duplicated. Here is an example of what I have done.

    SQL Server code:

        CREATE TABLE [dbo].[A](
            [ID] [int] NOT NULL,
            [PhID] [int] NULL,
            [FromDate] [datetime] NULL,
            [ToDate] [datetime] NULL,
            CONSTRAINT [PK_A] PRIMARY KEY CLUSTERED ([ID] ASC)) ON [PRIMARY]
        go

        CREATE TABLE [dbo].[B](
            [PhID] [int] NOT NULL,
            [FromDate] [datetime] NULL,
            [ToDate] [datetime] NULL,
            CONSTRAINT [PK_B] PRIMARY KEY CLUSTERED ([PhID] ASC)) ON [PRIMARY]
        go

        CREATE VIEW C as
        SELECT A.ID,
            CASE WHEN A.PhID IS NULL THEN A.FromDate ELSE B.FromDate END AS FromDate,
            CASE WHEN A.PhID IS NULL THEN A.ToDate ELSE B.ToDate END AS ToDate
        FROM A LEFT OUTER JOIN B ON A.PhID = B.PhID
        go

        INSERT INTO B (PhID, FromDate, ToDate) VALUES (100, '20100615', '20100715')
        INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (1, NULL, '20100101', '20100201')
        INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (1, 100, '20100615', '20100715')
        INSERT INTO B (PhID, FromDate, ToDate) VALUES (101, '20101201', '20101231')
        INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (2, NULL, '20100801', '20100901')
        INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (2, 101, '20101201', '20101231')

    So now, if you select all from C, you get 4 separate date ranges. In the Entity Framework model (which I call 'Core'), the view C is added.

    In the MVC controller:

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                CoreEntities db = new CoreEntities();
                var clist = from c in db.C
                            select c;
                return View(clist.ToList());
            }
        }

    In the MVC view:

        @model List<RM.Models.C>
        @{
            foreach (RM.Models.C c in Model)
            {
                @String.Format("{0:dd-MMM-yyyy}", c.FromDate)
                <span>-</span>
                @String.Format("{0:dd-MMM-yyyy}", c.ToDate)
                <br />
            }
        }

    When I run all this, it outputs:

        01-Jan-2010 - 01-Feb-2010
        01-Jan-2010 - 01-Feb-2010
        01-Aug-2010 - 01-Sep-2010
        01-Aug-2010 - 01-Sep-2010

    when it should output this (this is what the view returns):

        01-Jan-2010 - 01-Feb-2010
        15-Jun-2010 - 15-Jul-2010
        01-Aug-2010 - 01-Sep-2010
        01-Dec-2010 - 31-Dec-2010

    Also, I've run the SQL Profiler over it, and according to that, the query being executed is:

        SELECT [Extent1].[ID] AS [ID],
               [Extent1].[FromDate] AS [FromDate],
               [Extent1].[ToDate] AS [ToDate]
        FROM (SELECT [C].[ID] AS [ID], [C].[FromDate] AS [FromDate], [C].[ToDate] AS [ToDate]
              FROM [dbo].[C] AS [C]) AS [Extent1]

    which returns the correct data. So it seems that the Entity Framework is doing something to the data in the meantime. To me, everything looks fine! Have I missed something? Cheers, Ben
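    The duplicates are consistent with EF's identity resolution: the model infers ID as the view's entity key, the view legitimately returns two rows per ID, and EF resolves every row with a given key to the single entity it already tracks - so the second row per ID comes back as a copy of the first. Reading without tracking sidesteps this; a sketch against the question's CoreEntities context:

        using System.Data.Objects; // MergeOption

        public ActionResult Index()
        {
            CoreEntities db = new CoreEntities();
            // materialize rows as-is instead of resolving them against
            // tracked entities that share the same inferred key (ID)
            db.C.MergeOption = MergeOption.NoTracking;
            var clist = from c in db.C
                        select c;
            return View(clist.ToList());
        }

    The longer-term fix is to give the view an entity key in the model that really is unique per row (for example ID + FromDate + ToDate).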

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live-updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table then insert/update to MDC table' approach (BulkUpsert). I've got another process which then reads this data, computes other data, and saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there is a multitude of users running a C# GUI polling the MDC table and reading updates from it.

    Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing. Here's all the relevant info.

    Table definition:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
            (
                [MDID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    (stackoverflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg)

    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg

    Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction

        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
        and c.mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
        and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
        and mdid not in (select mdid from MarketData
                         where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')

        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction

        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2

        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold. I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
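    Whatever the root cause turns out to be, real-time writers against a hot table usually also wrap the upsert call in a deadlock retry, since SQL Server reports the chosen victim as error 1205 and the transaction is safe to re-run. A sketch:

        using System.Data;
        using System.Data.SqlClient;
        using System.Threading;

        static class DeadlockRetry
        {
            // re-run a proc when this session is chosen as the deadlock victim
            public static void Execute(SqlConnection conn, string procName, int maxAttempts)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        using (var cmd = new SqlCommand(procName, conn))
                        {
                            cmd.CommandType = CommandType.StoredProcedure;
                            cmd.ExecuteNonQuery();
                        }
                        return;
                    }
                    catch (SqlException ex)
                    {
                        if (ex.Number != 1205 || attempt >= maxAttempts)
                            throw;
                        Thread.Sleep(50 * attempt); // brief backoff before retrying
                    }
                }
            }
        }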

    Read the article

  • WPF: Improving Performance for Running on Older PCs

    - by Phil Sandler
    So, I'm building a WPF app and did a test deployment today, and found that it performed pretty poorly. I was surprised, as we are really not doing much in the way of visual effects or animations. I deployed on two machines: the fastest and the slowest that will need to run the application (the slowest PC has an Intel Celeron at 1.80 GHz with 2 GB RAM). The application ran pretty well on the faster machine but was choppy on the slower machine. And when I say "choppy", I mean the cursor jumped even just passing it over any open window of the app that had focus.

    I opened the Task Manager Performance window and could see that the CPU usage jumped whenever the app had focus and the cursor was moving over it. If I gave focus to another app (e.g. Excel), the CPU usage went back down after a second. This happened on both machines, but the choppiness was only noticeable on the slower machine. I had very limited time to tinker on the deployment machines, so I didn't do a lot of detailed testing. The app runs fine on my development machine, but I also see the CPU spiking up to 10% there, just running the cursor over the window.

    I downloaded the WPF performance tool from MS and have been tinkering with it (on my dev machine). The docs say this about the "Frame Rate" metric in the Perforator tool: "For applications without animation, this value should be near 0." The app is not doing any heavy animation, but the frame rate stays near 50 when the cursor is over any window. The screens I tested have column headers in a grid that "highlight" and buttons that change color and appearance when scrolled over, but even moving the mouse over blank areas of the windows causes the same frame rate and CPU usage, so it doesn't seem to be related to these minor animations. (Also, I am unable to figure out how to get anything but the two default tools - Perforator and Visual Profiler - installed into the WPF performance tool. That is probably a separate question.) I also have Red Gate's profiling tool, but I'm not sure if that can shed any light on rendering performance.

    So, I realize this is not an easy thing to troubleshoot without specifics or sample code (which I can't post). My questions are:

    1. What are some general things to look for (or avoid) in the code to improve performance?
    2. What steps can I take using the WPF performance tool to narrow down the problem?
    3. Is the PC spec listed above (Intel Celeron 1.80 GHz with 2 GB RAM) too slow to be running even vanilla WPF applications?
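    One quick check worth adding on the slow Celeron box: WPF falls back to software rendering on low-end video hardware, which produces exactly this "CPU climbs whenever the cursor moves over the window" profile. The rendering tier can be read at runtime; a sketch (tier 0 = all software, tier 2 = full hardware acceleration):

        using System.Windows;
        using System.Windows.Media;

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);
                // the rendering tier lives in the high-order word of Tier
                int renderingTier = RenderCapability.Tier >> 16;
                MessageBox.Show("WPF rendering tier: " + renderingTier +
                                (renderingTier == 0 ? " (software rendering)" : ""));
            }
        }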

    Read the article

  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    We have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented: MSSQL and MySQL. Now, when we use MSSQL (with Native Client 10.0), retrieving records with SELECT is dramatically slow over slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example:

        CRecordset r(&m_db);
        r.Open(CRecordset::snapshot, L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref");
        r.MoveFirst();
        while (!r.IsEOF())
        {
            // Retrieve
            CString strData;
            r.GetFieldValue(L"a.something", strData);
            r.MoveNext();
        }

    Now, with the MySQL driver, everything runs as it should: the query is returned, and everything is lightning fast. However, with the MSSQL Native Client, things slow down, because on every MoveNext() the driver communicates with the server. I think it is due to server-side cursors, but I didn't find a way to disable them. I have tried using:

        ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER);

    but this didn't help either; there are still long-running execs of sp_cursorfetch() et al. in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time too (even if there is only one record in the result set). This, however, only happens on queries with LEFT JOINs, table-valued functions, etc. Note that the query doesn't take that long - executing the same SQL via SQL Studio over the same connection returns in a reasonable time.

    Question 1: Is it possible to somehow get the Native Client to cache all results locally / use local cursors in a similar fashion to how the MySQL driver seems to do it? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT, then never talk to the server again until the next query. We don't care about recordset updates, deletes, etc. or any of that nonsense; we only want to retrieve data. We take that recordset, get all the data, and delete it.

    Question 2: Is there a more efficient way to just retrieve data in MFC with ODBC?

    Read the article

  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes large use of cached data to avoid roundtrips. At startup we get a "large" graph (hundreds of thousands of objects of different kinds). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory usage didn't seem to fit with how much memory we should need "after" we're done initializing) and end up with this report. Now, what we can gather from the report is that:

    1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialization, but now that it's free, I'd like for it to return to the OS).

    2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialization process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation).

    3) I have no clue why the space is fragmented, because when I look at the large object heap there are only 30 instances: 15 object[] are directly attached to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in code.

    So my question is: what can I do to go further with this? I'm not really sure what to look for in debugging/tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET which I can't release. Also, please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to free memory that is already free in .NET to the system, as well as to defragment said memory. Note that a slow solution is fine; if it were possible to manually defragment the large object heap I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes 1 or 2 seconds to run.

    Thanks for your help! A few notes I forgot:

    1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and likewise if I run it in a .NET 4 pool, convert it to .NET 4 and recompile.

    2) These are the results of a release build, so a debug build can not be the issue.

    3) And this is probably quite important: I do not get these issues at all in the webdev server, only in IIS. In webdev I get memory consumption rather close to my actual consumption (well, more, but not 5-10x more!).

    Read the article

  • Object relationships

    - by Hammerstein
    This stems from a recent couple of posts I've made on events and memory management in general. I'm making a new question as I don't think the software I'm using has anything to do with the overall problem, and I'm trying to understand a little more about how to properly manage things. This is ASP.NET.

    I've been trying to understand the need for Dispose/Finalize over the past few days and believe that I've got to a stage where I'm pretty happy with when I should and shouldn't implement Dispose/Finalize. "If I have members that implement IDisposable, put explicit calls to their Dispose in my Dispose method" seems to be my understanding. So, now I'm thinking maybe my understanding of object lifetimes and what holds on to what is just wrong!

    Rather than come up with some sample code that I think will illustrate my point, I'm going to describe actual code as best I can and see if someone can talk me through it. I have a repository class; in it I have a DataContext that I create when the repository is created. I implement IDisposable, and when my calling object is done, I call Dispose on my repository, which explicitly calls DataContext.Dispose(). Now, one of the methods of this class creates and returns a list of objects that's handed back to my front end. The flow is: Front End - Controller - Repository - Controller - Front End. (Using Red Gate Memory Profiler, I take a snapshot of my software when the page is first loaded.)

    My front end creates a controller object on page load and then makes a request to the repository, getting back a list of items. When the page is finished loading, I call Dispose on the controller, which in turn calls Dispose on the context. In my mind, that should mean that my connection is closed and that I have no instances of my controller class. If I then refresh the page, it jumps to two "live" instances of the controller class. If I look at the object retention graph, the objects I created in my call to the list are being held onto, ultimately by what looks like LINQ.

    The controller/repository aside, if I create a list of objects somewhere, or I create an object and return it somewhere, am I safe to just assume that .NET will eventually come and clean things up for me, or is there a best practice? The "live" instances suggest to me that these are still in-memory, active objects; does the fact that RMP apparently forces GC not mean anything?
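    For reference, the "dispose what you own" rule described above is usually written with the standard dispose pattern. A minimal sketch of a repository owning a DataContext (the connection string and class names are placeholders):

        using System;
        using System.Data.Linq;

        public class Repository : IDisposable
        {
            // owned disposable member; disposed exactly once below
            private readonly DataContext _context = new DataContext("connection string here");
            private bool _disposed;

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this); // nothing left for a finalizer to do
            }

            protected virtual void Dispose(bool disposing)
            {
                if (_disposed) return;
                if (disposing)
                {
                    _context.Dispose(); // release the owned managed resource
                }
                _disposed = true;
            }
        }

    Note that disposing releases the unmanaged side (the connection), but the managed instances stay "live" until a garbage collection actually runs; a profiler snapshot taken after a forced GC is the honest measure of what is really being retained.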

    Read the article

  • Execution plan issue requires reset on SQL Server 2005, how to determine cause?

    - by Tony Brandner
    We have a web application that delivers training to thousands of corporate students, running on top of SQL Server 2005. Recently we saw a single specific query in the application go from 1 second to about 30 seconds in execution time, and the application started throwing timeouts in that area. Our first thought was that we might have incorrect indexes, so we reviewed the tables and indexes. However, similar queries elsewhere in the application run quickly, and the indexes were configured as expected. We were able to narrow it down to a single query, not a stored procedure. Running this query in SQL Studio also runs quickly.

    We tried running the application in a different server environment - a different web server with the same query, parameters and database - and the query was still slow. The query is a fairly large one, related to determining a student's current list of training. It includes joins and left joins on a dozen tables and subqueries. A few of the tables are fairly large (hundreds of thousands of rows) and some of the others are small lookup tables. The query uses a grouping clause and a few where conditions. A few of the tables are quite active and their contents change often, but the volume of added rows doesn't seem extreme.

    These symptoms led us to consider the execution plan. First off, as soon as we reset the execution plan cache with the SQL command DBCC FREEPROCCACHE, the problem went away. Unfortunately, the problem started to reoccur within a few days and has continued to plague us for a while now. It's usually the same query, but we did appear to see the problem occur in another single query recently. It happens enough to be a nuisance, and we're having a heck of a time trying to fix it since we can't reproduce it in any environment other than production.

    I have downloaded the High Availability guide from Red Gate and read up more on execution plans. I hope to run the Profiler on the live server, but I'm a bit concerned about the impact. I would like to ask: what is the best way to figure out what is triggering this problem? Has anyone else seen this same issue?
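    The pattern described (fine for days, suddenly 30x slower, cured by DBCC FREEPROCCACHE) is the signature of a stale cached plan, typically from parameter sniffing after data drift. A narrower tool than flushing the whole cache is sp_recompile against the tables involved; sketched in C# (the table name is a placeholder):

        using System.Data;
        using System.Data.SqlClient;

        static class PlanFixes
        {
            // invalidate cached plans that reference one table, not the whole cache
            public static void RecompilePlansFor(SqlConnection conn, string tableName)
            {
                using (var cmd = new SqlCommand("sp_recompile", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@objname", tableName); // e.g. "dbo.StudentTraining"
                    cmd.ExecuteNonQuery();
                }
            }
        }

    Alternatively, appending OPTION (RECOMPILE) to the one problem query trades a little compile time for a fresh plan on every execution.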

    Read the article

  • 'Timeout Expired' error against local SQL Express on only 2 LINQ Methods...

    - by Refracted Paladin
    I am going to sum up my problem first and then offer massive details and what I have already tried.

    Summary: I have an internal WinForms app that uses LINQ to SQL to connect to a local SQL Express database. Each user has their own DB, and the DBs stay in sync with a central DB through merge replication. All DBs are SQL 2005 (SP2 or SP3). We have been using this app for over 5 months now, but recently our users are getting a "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." error.

    Detailed: The strange part is that they get it in two different locations (2 different LINQ methods) and only the first time either fires in a given time period (~5 mins). One LINQ method pulls all records that match a FK ID and then manipulates them to form a hierarchy view for a TreeView. The second pulls all records that match a FK ID and dumps them into a DataGridView. The only things I can find in common between the two are that the first IS an IEnumerable and the second converts itself from IQueryable to IEnumerable to DataTable. I looked at the queries in Profiler and they 'seemed' normal. They are not very complicated queries, pulling back only 10-90 records from one table. Any thoughts, suggestions, hints, whatever would be greatly appreciated. I am at my wit's end on this.

        public IList<CaseNoteTreeItem> GetTreeViewDataAsList(int personID)
        {
            var myContext = MatrixDataContext.Create();
            var caseNotesTree = from cn in myContext.tblCaseNotes
                                where cn.PersonID == personID
                                orderby cn.ContactDate descending, cn.InsertDate descending
                                select new CaseNoteTreeItem
                                {
                                    CaseNoteID = cn.CaseNoteID,
                                    NoteContactDate = Convert.ToDateTime(cn.ContactDate).ToShortDateString(),
                                    ParentNoteID = cn.ParentNote,
                                    InsertUser = cn.InsertUser,
                                    ContactDetailsPreview = cn.ContactDetails.Substring(0, 75)
                                };
            return caseNotesTree.ToList<CaseNoteTreeItem>();
        }

    AND THIS ONE

        public static DataTable GetAllCNotes(int personID)
        {
            using (var context = MatrixDataContext.Create())
            {
                var caseNotes = from cn in context.tblCaseNotes
                                where cn.PersonID == personID
                                orderby cn.ContactDate
                                select new
                                {
                                    cn.ContactDate,
                                    cn.ContactDetails,
                                    cn.TimeSpentUnits,
                                    cn.IsCaseLog,
                                    cn.IsPreEnrollment,
                                    cn.PresentAtContact,
                                    cn.InsertDate,
                                    cn.InsertUser,
                                    cn.CaseNoteID,
                                    cn.ParentNote
                                };
                return caseNotes.ToList().CopyLinqToDataTable();
            }
        }
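    Not a root-cause fix, but worth knowing while investigating: LINQ to SQL exposes the underlying command timeout (default 30 seconds) on the DataContext, so the first-call-after-idle case can be given room to finish instead of throwing. A sketch against the question's own context and helper (CopyLinqToDataTable is the question's extension method):

        public static DataTable GetAllCNotesPatiently(int personID)
        {
            using (var context = MatrixDataContext.Create())
            {
                // CommandTimeout is a System.Data.Linq.DataContext member, in seconds
                context.CommandTimeout = 120;
                var caseNotes = from cn in context.tblCaseNotes
                                where cn.PersonID == personID
                                orderby cn.ContactDate
                                select new { cn.ContactDate, cn.ContactDetails, cn.CaseNoteID };
                return caseNotes.ToList().CopyLinqToDataTable();
            }
        }

    If raising the timeout makes the first call succeed (slowly), the underlying cause is likely plan compilation or a cold cache on the local SQL Express instance rather than the LINQ code itself.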

    Read the article

  • Numpy/Python performing terribly vs. Matlab

    - by Nissl
    Novice programmer here. I'm writing a program that analyzes the relative spatial locations of points (cells). The program gets boundaries and cell type off an array with the x coordinate in column 1, the y coordinate in column 2, and the cell type in column 3. It then checks each cell for cell type and appropriate distance from the bounds. If it passes, it then calculates its distance from each other cell in the array, and if the distance is within a specified analysis range it adds it to an output array at that distance.

    My cell-marking program is in wxPython, so I was hoping to develop this program in Python as well and eventually stick it into the GUI. Unfortunately, right now Python takes ~20 seconds to run the core loop on my machine while MATLAB can do ~15 loops/second. Since I'm planning on doing 1000 loops (with a randomized comparison condition) on ~30 cases times several exploratory analysis types, this is not a trivial difference. I tried running a profiler: array calls are 1/4 of the time, and almost all of the rest is unspecified loop time.

    Here is the Python code for the main loop:

        for basecell in range(0, cellnumber-1):
            if firstcelltype == np.array((cellrecord[basecell,2])):
                xloc = np.array((cellrecord[basecell,0]))
                yloc = np.array((cellrecord[basecell,1]))
                xedgedist = (xbound-xloc)
                yedgedist = (ybound-yloc)
                if xloc>excludedist and xedgedist>excludedist and yloc>excludedist and yedgedist>excludedist:
                    for comparecell in range(0, cellnumber-1):
                        if secondcelltype == np.array((cellrecord[comparecell,2])):
                            xcomploc = np.array((cellrecord[comparecell,0]))
                            ycomploc = np.array((cellrecord[comparecell,1]))
                            dist = math.sqrt((xcomploc-xloc)**2 + (ycomploc-yloc)**2)
                            dist = round(dist)
                            if dist >= 1 and dist <= analysisdist:
                                arraytarget = round(dist*analysisdist/intervalnumber)
                                addone = np.array((spatialraw[arraytarget-1]))
                                addone = addone + 1
                                targetcell = arraytarget - 1
                                np.put(spatialraw, [targetcell, targetcell], addone)

    Here is the MATLAB code for the main loop:

        for basecell = 1:cellnumber;
            if firstcelltype==cellrecord(basecell,3);
                xloc=cellrecord(basecell,1);
                yloc=cellrecord(basecell,2);
                xedgedist=(xbound-xloc);
                yedgedist=(ybound-yloc);
                if (xloc>excludedist) && (yloc>excludedist) && (xedgedist>excludedist) && (yedgedist>excludedist);
                    for comparecell = 1:cellnumber;
                        if secondcelltype==cellrecord(comparecell,3);
                            xcomploc=cellrecord(comparecell,1);
                            ycomploc=cellrecord(comparecell,2);
                            dist=sqrt((xcomploc-xloc)^2+(ycomploc-yloc)^2);
                            if (dist>=1) && (dist<=100.4999);
                                arraytarget=round(dist*analysisdist/intervalnumber);
                                spatialsum(1,arraytarget)=spatialsum(1,arraytarget)+1;
                            end
                        end
                    end
                end
            end
        end

    Thanks!

    Read the article

  • How can I build pyv8 from source on FreeBSD against the v8 port?

    - by Utkonos
    I am unable to build pyv8 from source on FreeBSD. I have installed the /usr/ports/lang/v8 port, and I'm running into the following error. It seems that pyv8 wants to build v8 itself even though v8 is already built and installed. How can I point pyv8 to the already installed location of v8?

        # python setup.py build
        Found Google v8 base on V8_HOME , update it to the latest SVN trunk at
        running build
        ====================
        INFO: Installing or updating GYP...
        --------------------
        INFO: Check out GYP from SVN ...
        DEBUG: make dependencies
        ERROR: Check out GYP from SVN failed: code=2
        DEBUG: "Makefile", line 43: Missing dependency operator
        "Makefile", line 45: Need an operator
        "Makefile", line 46: Need an operator
        "Makefile", line 48: Need an operator
        "Makefile", line 50: Need an operator
        "Makefile", line 52: Need an operator
        "Makefile", line 54: Missing dependency operator
        "Makefile", line 56: Need an operator
        "Makefile", line 58: Missing dependency operator
        "Makefile", line 60: Need an operator
        "Makefile", line 62: Missing dependency operator
        "Makefile", line 64: Need an operator
        "Makefile", line 66: Missing dependency operator
        "Makefile", line 68: Need an operator
        "Makefile", line 70: Missing dependency operator
        "Makefile", line 72: Need an operator
        "Makefile", line 73: Missing dependency operator
        "Makefile", line 75: Need an operator
        "Makefile", line 77: Missing dependency operator
        "Makefile", line 79: Need an operator
        "Makefile", line 81: Missing dependency operator
        "Makefile", line 83: Need an operator
        "Makefile", line 85: Missing dependency operator
        "Makefile", line 87: Need an operator
        "Makefile", line 89: Need an operator
        "Makefile", line 91: Missing dependency operator
        "Makefile", line 93: Need an operator
        "Makefile", line 95: Need an operator
        "Makefile", line 97: Need an operator
        "Makefile", line 99: Missing dependency operator
        "Makefile", line 101: Need an operator
        "Makefile", line 103: Missing dependency operator
        "Makefile", line 105: Need an operator
        "Makefile", line 107: Missing dependency operator
        "Makefile", line 109: Need an operator
        "Makefile", line 111: Missing dependency operator
        "Makefile", line 113: Need an operator
        "Makefile", line 115: Missing dependency operator
        "Makefile", line 117: Need an operator
        Error expanding embedded variable.
        ====================
        INFO: Patching the GYP scripts
        INFO: patch the Google v8 build/standalone.gypi file to enable RTTI and C++ Exceptions
        ====================
        INFO: building Google v8 with GYP for x64 platform with release mode
        --------------------
        INFO: build v8 from SVN ...
        DEBUG: make verifyheap=off component=shared_library visibility=on gdbjit=off liveobjectlist=off regexp=native disassembler=off objectprint=off debuggersupport=on extrachecks=off snapshot=on werror=on x64.release
        ERROR: build v8 from SVN failed: code=2
        DEBUG: "Makefile", line 43: Missing dependency operator
        "Makefile", line 45: Need an operator
        "Makefile", line 46: Need an operator
        "Makefile", line 48: Need an operator
        "Makefile", line 50: Need an operator
        "Makefile", line 52: Need an operator
        "Makefile", line 54: Missing dependency operator
        "Makefile", line 56: Need an operator
        "Makefile", line 58: Missing dependency operator
        "Makefile", line 60: Need an operator
        "Makefile", line 62: Missing dependency operator
        "Makefile", line 64: Need an operator
        "Makefile", line 66: Missing dependency operator
        "Makefile", line 68: Need an operator
        "Makefile", line 70: Missing dependency operator
        "Makefile", line 72: Need an operator
        "Makefile", line 73: Missing dependency operator
        "Makefile", line 75: Need an operator
        "Makefile", line 77: Missing dependency operator
        "Makefile", line 79: Need an operator
        "Makefile", line 81: Missing dependency operator
        "Makefile", line 83: Need an operator
        "Makefile", line 85: Missing dependency operator
        "Makefile", line 87: Need an operator
        "Makefile", line 89: Need an operator
        "Makefile", line 91: Missing dependency operator
        "Makefile", line 93: Need an operator
        "Makefile", line 95: Need an operator
        "Makefile", line 97: Need an operator
        "Makefile", line 99: Missing dependency operator
        "Makefile", line 101: Need an operator
        "Makefile", line 103: Missing dependency operator
        "Makefile", line 105: Need an operator
        "Makefile", line 107: Missing dependency operator
        "Makefile", line 109: Need an operator
        "Makefile", line 111: Missing dependency operator
        "Makefile", line 113: Need an operator
        "Makefile", line 115: Missing dependency operator
        "Makefile", line 117: Need an operator
        Error expanding embedded variable.

    The files installed by the v8 port are the following (in /usr/local):

        bin/d8
        include/v8.h
        include/v8-debug.h
        include/v8-preparser.h
        include/v8-profiler.h
        include/v8-testing.h
        include/v8stdint.h
        lib/libv8.so
        lib/libv8.so.1

    Read the article

  • Michael Crump's notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5

    - by mbcrump
    TIME TO GO PRO! These are my notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5. I created them using several resources (various certification web sites, MSDN, and the official MS 70-548 book). The reasons I created this review are: a) I am taking the exam; b) MS did not create a book for this exam, so use the (MS 70-548) book; c) to make sure I am familiar with each topic before the exam. I hope it provides a good start for your own notes, and that someone finds it useful. At the least, it will give you a starting point of what to expect on the PRO exam. Also, for those wondering, the PRO exam contains very little code. It is basically all theory.

    1. Validation controls – how to prevent users from entering invalid data on forms (MaskedTextBox control and RegEx).
    2. ServiceController – used to start and control the behavior of existing services.
    3. User feedback – know the WinForms StatusBar, ToolTips, Color, ErrorProvider, context-sensitive help, and accessibility.
    4. Specific (derived) exceptions must be handled before general (base class) exceptions. By moving the exception handling for the base type Exception to after the exception handling of ArgumentNullException, all ArgumentNullExceptions thrown by the Helper method will be caught and logged correctly. (See the first sketch after this list.)
    5. A heartbeat method is a method exposed by a web service that allows external applications to check on the status of the service.
    6. New users must master key tasks quickly; giving these tasks context and appropriate detail will help. However, advanced users will demand quicker paths: shortcuts, accelerators, or toolbar buttons will speed things along for the advanced user.
    7. MSBuild uses project files to instruct the build engine what to build and how to build it. MSBuild project files are XML files that adhere to the MSBuild XML schema and contain complete file, build action, and dependency information for each individual project.
    8. Evaluating whether or not to fix a bug involves a triage process: identify the bug's impact, set the priority, categorize it, and assign a developer. Often the person doing the triage assigns the bug to a developer for further investigation; the workflow for the bug work item inside Team System supports this step. Developers are often asked to assess the impact of a given bug, which helps the person doing the triage decide how to proceed. When assessing the impact of a bug, consider the time and resources to fix it, the risk of the fix, and the impacts of the bug itself.
    9. In large projects it is generally impossible and unfeasible to fix all bugs because of the impact on schedule and budget.
    10. Code reviews should be conducted by a technical lead or a technical peer.
    11. Testing applications.
    12. WCF services – application state.
    13. SQL Server 2005/2008 Express Edition – reliable storage of data. Microsoft SQL Server 3.5 Compact database – used on client computers to retrieve and save data from a shared location.
    14. SQL Server 2008 Compact Edition – used for the minimum possible memory footprint and can synchronize data with a corporate SQL Server 2008 database. Supports offline users with minimum dependency on external components.
    15. MDI and SDI forms (specifically IsMdiContainer).
    16. GUID – in the case of data warehousing, it is important to define unique keys.
    17. Encrypting/securing data.
    18. Understanding of isolated storage and the proper location to store items.
    19. LINQ to SQL.
    20. Multithreaded access.
    21. ADO.NET Entity Framework model.
    22. Marshal.ReleaseComObject.
    23. Common user interface layout (ComboBox, ListBox, ListView, MaskedTextBox, TextBox, RichTextBox, SplitContainer, TableLayoutPanel, TabControl).
    24. DataSet class – http://msdn.microsoft.com/en-us/library/system.data.dataset%28VS.71%29.aspx
    25. SQL Server 2008 Reporting Services (SSRS).
    26. SystemIcons.Shield (Vista UAC).
    27. Leveraging stored procedures to perform data manipulation for a database schema that can change.
    28. DataContext.
    29. Microsoft Windows Installer packages, ClickOnce (bootstrapping features), XCopy.
    30. Client application services – will authenticate users by using the same data source as an ASP.NET web application.
    31. SQL Server 2008 caching.
    32. StringBuilder.
    33. Accessibility guidelines for Windows applications: http://msdn.microsoft.com/en-us/library/ms228004.aspx
    34. Logging errors.
    35. Testing performance-related issues.
    36. Role-based security, GenericIdentity and GenericPrincipal. (See the second sketch after this list.)
    37. System.Net.CookieContainer will store session data for web apps (see isolated storage for WinForms).
    38. The .NET CLR Profiler tool will identify objects that cause performance issues.
    39. ADO.NET synchronization (SyncGroup).
    40. Globalization – CultureInfo.
    41. IDisposable interface – expect several questions relating to this.
    42. Adding timestamps to determine whether data has changed or not.
    43. Converting applications to .NET Framework 3.5.
    44. MicrosoftReportViewer.
    45. Composite controls.
    46. Windows Vista Known Folders.
    47. Microsoft Sync Framework.
    48. TypeConverter – provides a unified way of converting types of values to other types, as well as for accessing standard values and subproperties. http://msdn.microsoft.com/en-us/library/system.componentmodel.typeconverter.aspx
    49. Concurrency control mechanisms. The main categories are: Optimistic – delay the checking of whether a transaction meets the isolation rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations, and then abort the transaction if the rules are violated. Pessimistic – block operations of a transaction if they may cause violation of the rules. Semi-optimistic – block operations in some situations and not in others, while delaying rules checking to the transaction's end, as with optimistic.
    50. AutoResetEvent.
    51. Microsoft Message Queuing (MSMQ) 4.0.
    52. Bulk imports.
    53. KeyDown event of controls.
    54. WPF UI components.
    55. UI process layer.
    56. GAC (installing, removing and queuing).
    57. Use a local database cache to reduce the network bandwidth used by applications.
    58. Sound can easily be annoying and distracting to users, so use it judiciously. Always give users the option to turn sound off; because a user might have sound off, never convey important information through sound alone.
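    Two of these items lend themselves to tiny code sketches. First, for item 4, a minimal C# illustration (the Helper method and messages here are hypothetical); the derived ArgumentNullException must be caught before the base Exception, otherwise the later catch clause would be unreachable:

        using System;

        class CatchOrderDemo
        {
            static void Helper(string value)
            {
                if (value == null)
                    throw new ArgumentNullException("value");
            }

            static void Main()
            {
                try
                {
                    Helper(null);
                }
                catch (ArgumentNullException ex)   // specific (derived) type first
                {
                    Console.WriteLine("Null argument: " + ex.ParamName);
                }
                catch (Exception ex)               // general (base) type last
                {
                    Console.WriteLine("Unexpected: " + ex.Message);
                }
            }
        }

    Second, for item 36, the classic GenericIdentity/GenericPrincipal pattern looks roughly like this (the user and role names are made up):

        using System;
        using System.Security.Principal;
        using System.Threading;

        class RoleDemo
        {
            static void Main()
            {
                // Attach a role-aware principal to the current thread...
                GenericIdentity identity = new GenericIdentity("alice");
                GenericPrincipal principal = new GenericPrincipal(identity, new[] { "Admin" });
                Thread.CurrentPrincipal = principal;

                // ...then any code on this thread can perform role checks.
                Console.WriteLine(Thread.CurrentPrincipal.IsInRole("Admin"));   // True
            }
        }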

    Read the article

  • Psychonauts crashes right after entering load save door

    - by user67974
    Psychonauts crashes right after entering the 'Load Save' door. Here is the terminal output:

    Shader assembly time: 0.88 seconds
    Found OpenAL device: 'Simple Directmedia Layer'
    Found OpenAL device: 'ALSA Software'
    Found OpenAL device: 'OSS Software'
    Found OpenAL device: 'PulseAudio Software'
    Opened OpenAL Device: '(null)'
    ERROR: CAudioDrv::CAudioDrv->alGenSources reports AL_INVALID_VALUE error.
    PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonfx.isb' to 'WorkResource/Sounds/commonfx.isb'
    PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonvoice.isb' to 'WorkResource/Sounds/commonvoice.isb'
    PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmusic.isb' to 'WorkResource/Sounds/commonmusic.isb'
    PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmentalfx.isb' to 'WorkResource/Sounds/commonmentalfx.isb'
    PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmenfxmem.isb' to 'WorkResource/Sounds/commonmenfxmem.isb'
    PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonfxmem.isb' to 'WorkResource/Sounds/commonfxmem.isb'
    GameApp::StartUp InitSoundFiles() completed in 0.15 seconds
    GameApp::StartUp Load some common textures completed in 0.00 seconds
    WARN: ENGINE: Lua garbage collection starting
    FreeUnusedBlocksInBuckets released 0 Kb
    GameApp::StartUp InitEntities() completed in 0.02 seconds
    PSYCHONAUTS UNIX FILENAME: corrected 'WorkResource/SavedGames/savegameprefs.ini' to 'WorkResource/SAVEDGAMES/savegameprefs.ini'
    PSYCHONAUTS UNIX FILENAME: corrected 'WorkResource/SavedGames/savegameprefs.ini' to 'WorkResource/SAVEDGAMES/savegameprefs.ini'
    GameApp::StartUp m_pSaveLoadInterface->Startup() completed in 0.00 seconds
    GameApp::StartUp m_UserInterface.Setup() completed in 0.00 seconds
    STUBBED: multisample at EDisplayOptionsWidget (/home/icculus/projects/psychonauts/Source/game/luatest/Game/UIPCDisplayOptions.cpp:97)
    STUBBED: VK_* at CheckVirtualKey (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:1443)
    Game: Engine Running hook startup
    Game: Engine -> SetupGlobalObjects
    Game: Engine -> SetupLevelMenu
    Game: Engine -> InitMath
    GameApp::StartUp InitLua2() completed in 0.00 seconds
    GameApp::StartUp SetupLevelMenu() completed in 0.00 seconds
    STUBBED: do we even use this? at InitSocket (/home/icculus/projects/psychonauts/Source/game/luatest/Game/Gameplaylogger.cpp:210)
    GameApp::StartUp Post-Install total completed in 0.20 seconds
    Start Up completed in 1.57 seconds
    UnixMain: StartUp successful.. Working directory: /opt/psychonauts
    STUBBED: dispatch SDL events at PCMainHandleAnyWindowsMessages (/home/icculus/projects/psychonauts/Source/game/luatest/UnixMain.cpp:56)
    STUBBED: write me at GetJoystickInput (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:428)
    STUBBED: write me at GetJoystickActionValue (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:613)
    PSYCHONAUTS UNIX FILENAME: corrected 'workresource/cutScenes/prerendered/dflogo.bik' to 'WorkResource/cutscenes/prerendered/DFLogo.bik'
    Prerender subtitle file: workresource\cutScenes\prerendered\dflogo.dfs not found
    PSYCHONAUTS UNIX FILENAME: corrected 'workresource/cutScenes/prerendered/dflogo.bik' to 'WorkResource/cutscenes/prerendered/DFLogo.bik'
    STUBBED: fixed function pipeline? at setColorOp (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2097)
    STUBBED: fixed function pipeline? at setColorArg1 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2106)
    STUBBED: fixed function pipeline? at setColorArg2 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2115)
    STUBBED: fixed function pipeline? at setAlphaOp (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2124)
    STUBBED: fixed function pipeline? at setAlphaArg1 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2133)
    STUBBED: fixed function pipeline? at setAlphaArg2 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2142)
    STUBBED: fixed function pipeline? at setProjected (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2223)
    LOC WARN: Could not open Localization file 'Localization/English/_StringTable.lub'
    STUBBED: memory status at UpdateMemoryTracking (/home/icculus/projects/psychonauts/Source/game/luatest/Game/GameApp.cpp:4884)
    WARN: Couldn't resize array to 128; out-of-bounds elements are still in use: Vertex Pool, 188
    Loading new level 'STMU'
    STUBBED: Need multithreaded GL at DisplayLoadingScreen (/home/icculus/projects/psychonauts/Source/game/luatest/Game/LoadingScreen.cpp:83)
    ========================= Memory post unload level =========================
    =========================
    LOC WARN: Could not open Localization file 'Localization/English/ST_StringTable.lub'
    DaveD: Info: Texture pack file contains 137 textures
    Doing a texture readback for locking!
    Game: Engine Saved[GLOBAL]: InstaHintFord_HostileRecord = [table]
    Game: Engine Saved[GLOBAL]: InstaHintFord_HostileOrder = [table]
    WARN: Redundant packfile read: anims\thought_bubble\bubblefirestarting.jan
    WARN: Redundant packfile read: anims\thought_bubble\bubbleintothemind.jan
    WARN: Redundant packfile read: anims\thought_bubble\bubbleinvisibility.jan
    WARN: Redundant packfile read: anims\thought_bubble\bubblepopperfill.jan
    WARN: Redundant packfile read: anims\thought_bubble\bubbletelekinesis.jan
    Initializing level script (if there is one)
    PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/stfx.isb' to 'WorkResource/Sounds/stfx.isb'
    Game: Engine Reloading goals:
    Game: Engine Saved[GLOBAL]: NextEncouragement = '/GLZF014TO/ 10'
    Game: Engine Saved[GLOBAL]: bUsedSalts = 0
    Game: Engine Saved[GLOBAL]: bSTEntered = 1
    Game: Engine Saved[GLOBAL]: memoriesST = 1
    Game: Engine Saved[GLOBAL]: PsiBallColor = 'red'
    Game: Engine Saved[ST]: lastSubLevel = 'STMU'
    Game: Engine LOADING LEVEL st.STMU
    Game: Engine Saved[CA]: CALevelState = 1
    Game: Engine Cutscene progression: CS Script moving from state nil to state nil, resultant state nil. Time: 0.124746672809124.
    * Stack Trace
    1: (null) (line -1, file '(none)) ()
    2: SpawnScript (line -1, file 'C) (global)
    3: onBeginLevel (line -1, file '(none)) (field)
    4: (null) (line -1, file '(none)) ()
    WARN: Cannot call GetDirectoryListing when running from the DVD
    Game: Engine Raz spawning at DartStart startpoint
    VM : LevelScript could not find script 'doorrimlight1'
    * Stack Trace
    1: (null) (line -1, file '(none)) ()
    WARN: (none(-1) SetEntityAlpha LevelScript: NULL script object passed
    Game: Engine Saved[GLOBAL]: bLoadedFromMainMenu = 1
    Game: Engine Saved[GLOBAL]: NextEncouragement = '/GLZF014TO/ 10'
    Game: Engine Saved[GLOBAL]: NeedRankIncrement = 0
    STUBBED: Need multithreaded GL at HideLoadingScreen (/home/icculus/projects/psychonauts/Source/game/luatest/Game/LoadingScreen.cpp:110)
    WARN: ENGINE: Lua garbage collection starting
    FreeUnusedBlocksInBuckets released 0 Kb
    Game: Engine Saved[GLOBAL]: SplineFigmentTVSizex = 4.51434326171875
    Game: Engine Saved[GLOBAL]: SplineFigmentTVSizey = 46.38104248046875
    Game: Engine Saved[GLOBAL]: SplineFigmentTVSizez = 47.08810424804688
    WARN: (none(-1) SetNewAction LevelScript: no string passed
    ====================== Asset load progression ======================
    Initial: 2.518 MB Vertex, 8.688 MB Texture
    Level :  3.719 MB Vertex, 22.535 MB Texture
    Scripts: 3.747 MB Vertex, 22.848 MB Texture
    ======================
    ====================== Memory post level load ======================
    ======================
    WARN: ENGINE: Lua garbage collection starting
    FreeUnusedBlocksInBuckets released 0 Kb
    DaveD: Level loaded in 0.14 seconds
    Anim: anims\objects\tk_arrow_idle.jan: loaded (1 frames latency)
    Anim: anims\dartnew\helmet\darthelmetdn.jan: loaded (1 frames latency)
    Anim: anims\thought_bubble\shieldloop.jan: loaded (1 frames latency)
    Anim: anims\dartnew\standready.jan: loaded (1 frames latency)
    Anim: anims\dartnew\walkmove.jan: loaded (1 frames latency)
    Anim: anims\janitor\hint_end.jan: loaded (1 frames latency)
    Anim: anims\thought_bubble\ballstatic.jan: loaded (1 frames latency)
    Anim: anims\dartnew\actionfall.jan: loaded (1 frames latency)
    Anim: anims\dartnew\standstill.jan: loaded (1 frames latency)
    Anim: anims\dartnew\pack\packbounce_lf_rt.jan: loaded (1 frames latency)
    Anim: anims\dartnew\pack\packbounce_up_dn.jan: loaded (1 frames latency)
    Anim: anims\dartnew\helmet\darthelmetdefpose.jan: loaded (1 frames latency)
    1: 1 (number)
    1: 1 (number)
    STUBBED: This is probably wrong at GetDt (/home/icculus/projects/psychonauts/Source/CommonLibs/DFUtil/Profiler.cpp:181)
    STUBBED: set specular highlights at setSpecularEnable (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Renderer.cpp:2035)
    Anim: anims\dartnew\trnrtcycle.jan: loaded (1 frames latency)
    Anim: anims\dartnew\run.jan: loaded (1 frames latency)
    Anim: anims\dartnew\walk.jan: loaded (1 frames latency)
    Anim: anims\thought_bubble\bubbledoublejump.jan: loaded (1 frames latency)
    Anim: anims\dartnew\longjump.jan: loaded (1 frames latency)
    Anim: anims\menubrain\door1crack.jan: loaded (1 frames latency)
    Anim: anims\menubrain\door1crackedidle.jan: loaded (1 frames latency)
    Anim: anims\menubrain\door1closedidle.jan: loaded (1 frames latency)
    Anim: anims\dartnew\180.jan: loaded (1 frames latency)
    Anim: anims\menubrain\door3crack.jan: loaded (1 frames latency)
    Anim: anims\menubrain\door3crackedidle.jan: loaded (1 frames latency)
    Anim: anims\menubrain\door3closedidle.jan: loaded (1 frames latency)
    Anim: anims\dartnew\railslide45angle.jan: loaded (1 frames latency)
    Anim: anims\dartnew\railslideflat.jan: loaded (1 frames latency)
    Anim: anims\dartnew\trnlfcycle.jan: loaded (1 frames latency)
    WARN: (none(-1) SetNewAction LevelScript: no string passed
    Anim: anims\dartnew\mainmenu_jump.jan: loaded (1 frames latency)
    Anim: anims\menubrain\door1open.jan: loaded (1 frames latency)
    ERROR: Assert in /home/icculus/projects/psychonauts/Source/game/luatest/../../CommonLibs/Include/../DFGraphics/Color.h, line 96
    v.x >= 0.0f && v.x <= 1.0f && v.y >= 0.0f && v.y <= 1.0f && v.z >= 0.0f && v.z <= 1.0f && v.w >= 0.0f && v.w <= 1.0f
    Encountered Error: Psychonauts has encountered an error
    /home/icculus/projects/psychonauts/Source/game/luatest/../../CommonLibs/Include/../DFGraphics/Color.h, line 96
    v.x >= 0.0f && v.x <= 1.0f && v.y >= 0.0f && v.y <= 1.0f && v.z >= 0.0f && v.z <= 1.0f && v.w >= 0.0f && v.w <= 1.0f
    Please contact technical support at http://www.doublefine.com.

    I am currently using Bumblebee for hybrid graphics, if that helps in any way.
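    For reference, the assert that aborts the game checks that every component of a color vector is normalized to [0.0, 1.0]. Purely as an illustration of what that guard enforces (this is a hypothetical C++ reconstruction, not Double Fine's actual code):

        #include <cassert>

        struct Vec4 { float x, y, z, w; };

        // Hypothetical reconstruction of the check in DFGraphics/Color.h line 96:
        // every color component must already lie in [0, 1].
        void AssertNormalizedColor(const Vec4& v)
        {
            assert(v.x >= 0.0f && v.x <= 1.0f &&
                   v.y >= 0.0f && v.y <= 1.0f &&
                   v.z >= 0.0f && v.z <= 1.0f &&
                   v.w >= 0.0f && v.w <= 1.0f);
        }

        int main()
        {
            Vec4 ok  = { 1.0f, 0.0f, 0.0f, 1.0f };
            AssertNormalizedColor(ok);      // passes
            Vec4 bad = { 1.2f, 0.0f, 0.0f, 1.0f };
            AssertNormalizedColor(bad);     // aborts, like the log above
            return 0;
        }

    So somewhere an out-of-range color component reaches that check; given the Bumblebee setup, a driver-related cause seems possible, but that is speculation.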

    Read the article

  • Mocking the Unmockable: Using Microsoft Moles with Gallio

    - by Thomas Weller
    The usual open-source mocking frameworks (e.g. Moq or Rhino.Mocks) can mock only interfaces and virtual methods. In contrast, Microsoft's Moles framework can 'mock' virtually anything: it uses runtime instrumentation to inject callbacks into the MSIL bodies of the moled methods. Therefore, it is possible to detour any .NET method, including non-virtual and static methods in sealed types. This can be extremely helpful when dealing with code that calls into the .NET framework, some third-party or legacy stuff, etc. Some useful collected resources (links to the website, documentation material and some videos) can be found in my toolbox on Delicious under this link: http://delicious.com/thomasweller/toolbox+moles

    A Gallio extension for Moles

    Originally, Moles is a part of Microsoft's Pex framework and thus integrates best with Visual Studio Unit Tests (MSTest). However, the Moles sample download contains some additional assemblies to also support other unit test frameworks. They provide a Moled attribute to ease the usage of mole types with the respective framework (extensions for NUnit, xUnit.net and MbUnit v2 are included with the samples). As there is no such extension for the Gallio platform, I wrote the few required lines myself - the resulting Gallio.Moles.dll is included with the sample download. With this little assembly in place, it is possible to use Moles with Gallio like that:

    [Test, Moled]
    public void SomeTest()
    {
        ...

    What you can do with it

    Moles can be very helpful if you need to 'mock' something other than a virtual or interface-implementing method. This might be the case when dealing with some third-party component, legacy code, or if you want to 'mock' the .NET framework itself. Generally, you need to announce each moled type that you want to use in a test with the MoledType attribute on assembly level. For example:

    [assembly: MoledType(typeof(System.IO.File))]

    Below are some typical use cases for Moles. For a more detailed overview (incl. naming conventions and instructions on how to create the required moles assemblies), please refer to the reference material above.

    Detouring the .NET framework

    Imagine that you want to test a method similar to the one below, which internally calls some framework method:

    public void ReadFileContent(string fileName)
    {
        this.FileContent = System.IO.File.ReadAllText(fileName);
    }

    Using a mole, you would replace the call to the File.ReadAllText(string) method with a runtime delegate like so:

    [Test, Moled]
    [Description("This 'mocks' the System.IO.File class with a custom delegate.")]
    public void ReadFileContentWithMoles()
    {
        // arrange ('mock' the FileSystem with a delegate)
        System.IO.Moles.MFile.ReadAllTextString = (fname => fname == FileName ? FileContent : "WrongFileName");

        // act
        var testTarget = new TestTarget.TestTarget();
        testTarget.ReadFileContent(FileName);

        // assert
        Assert.AreEqual(FileContent, testTarget.FileContent);
    }

    Detouring static methods and/or classes

    A static method like the one below...

    public static string StaticMethod(int x, int y)
    {
        return string.Format("{0}{1}", x, y);
    }

    ...can be 'mocked' with the following:

    [Test, Moled]
    public void StaticMethodWithMoles()
    {
        MStaticClass.StaticMethodInt32Int32 = ((x, y) => "uups");

        var result = StaticClass.StaticMethod(1, 2);

        Assert.AreEqual("uups", result);
    }

    Detouring constructors

    You can do this delegate thing even with a class' constructor.
    The syntax for this is not all too intuitive, because you have to set up the internal state of the mole, but generally it works like a charm. For example, to replace this c'tor...

    public class ClassWithCtor
    {
        public int Value { get; private set; }

        public ClassWithCtor(int someValue)
        {
            this.Value = someValue;
        }
    }

    ...you would do the following:

    [Test, Moled]
    public void ConstructorTestWithMoles()
    {
        MClassWithCtor.ConstructorInt32 =
            ((@class, @value) => new MClassWithCtor(@class) { ValueGet = () => 99 });

        var classWithCtor = new ClassWithCtor(3);

        Assert.AreEqual(99, classWithCtor.Value);
    }

    Detouring abstract base classes

    You can also use this approach to 'mock' the abstract base class of a class that you call in your test. Assume that you have something like this:

    public abstract class AbstractBaseClass
    {
        public virtual string SaySomething()
        {
            return "Hello from base.";
        }
    }

    public class ChildClass : AbstractBaseClass
    {
        public override string SaySomething()
        {
            return string.Format(
                "Hello from child. Base says: '{0}'",
                base.SaySomething());
        }
    }

    Then you would set up the child's underlying base class like this:

    [Test, Moled]
    public void AbstractBaseClassTestWithMoles()
    {
        ChildClass child = new ChildClass();
        new MAbstractBaseClass(child)
            {
                SaySomething = () => "Leave me alone!"
            }
            .InstanceBehavior = MoleBehaviors.Fallthrough;

        var hello = child.SaySomething();

        Assert.AreEqual("Hello from child. Base says: 'Leave me alone!'", hello);
    }

    Setting the mole's behavior to MoleBehaviors.Fallthrough causes the 'original' method to be called if a respective delegate is not provided explicitly - here it causes the ChildClass' override of the SaySomething() method to be called. There are some more possible scenarios where the Moles framework could be of much help (e.g. it is also possible to detour interface implementations like IEnumerable<T> and such...). One other possibility that comes to my mind (because I'm currently dealing with that) is to replace calls from repository classes to the ADO.NET Entity Framework O/R mapper with delegates, to isolate the repository classes from the underlying database - which otherwise would not be possible.

    Usage

    Since Moles relies on runtime instrumentation, mole types must be run under the Pex profiler. This only works from inside Visual Studio if you write your tests with MSTest (Visual Studio Unit Test). While other unit test frameworks generally can be used with Moles, they require the respective tests to be run via the command line, executed through the moles.runner.exe tool. A typical test execution would be similar to this:

    moles.runner.exe <mytests.dll> /runner:<myframework.console.exe> /args:/<myargs>

    So the moled tests can be run through tools like NCover or a scripting tool like MSBuild (which makes them easy to run in a Continuous Integration environment), but they are somewhat unhandy in the usual TDD workflow (which I described in some detail here). To make this a bit more fluent, I wrote a ReSharper live template to generate the respective command line for a test (it is also included in the sample download - moled_cmd.xml). This is just a quick-and-dirty 'solution'; maybe it makes sense to write an extra Gallio adapter plugin (similar to the many others that are already provided) and include it with the Gallio download package, if there's sufficient demand for it.
    As of now, the only way to run tests with the Moles framework from within Visual Studio is by using them with MSTest. From the command line, anything with a managed console runner can be used (provided that the appropriate extension is in place). A typical Gallio/Moles command line (as generated by the mentioned R# template) looks like this:

    "%ProgramFiles%\Microsoft Moles\bin\moles.runner.exe" /runner:"%ProgramFiles%\Gallio\bin\Gallio.Echo.exe" "Gallio.Moles.Demo.dll" /args:/r:IsolatedAppDomain /args:/filter:"ExactType:TestFixture and Member:ReadFileContentWithMoles"

    Note: When using the command line with Echo (Gallio's console runner), be sure to always include the IsolatedAppDomain option, otherwise the tests won't use the instrumentation callbacks!

    License issues

    As I already said, the free mocking frameworks can mock only interfaces and virtual methods. If you want to mock other things, you need the Typemock Isolator tool, which comes with license costs (although these 'costs' are ridiculously low compared to the value that such a tool can bring to a software project, spending money is often a considerable hurdle in real life...). The Moles framework also is not totally free, but comes with the same license conditions as the (closely related) Pex framework: it is free for academic/non-commercial use only; using it in a 'real' software project requires an MSDN subscription (from VS2010 Professional on).

    The demo solution

    The sample solution (VS 2008) can be downloaded from here. It contains the Gallio.Moles.dll which provides the Moled attribute described here, the above-mentioned R# template (moled_cmd.xml), and a test fixture containing the use case scenarios described above. To run it, you need the Gallio framework (download) and Microsoft Moles (download) installed in the default locations. Happy testing...

    Read the article

  • SQL SERVER – SSMS: Disk Usage Report

    - by Pinal Dave
    Let us start with humor! I think with this series on various reports we have come to a logical point. We have covered all the reports at the server level; that is, the reports we saw were targeted towards activities related to instance-level operations. These are mostly like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere:

    Patient: Doc, it hurts when I touch my head.
    Doc: Ok, go on. What else have you experienced?
    Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc.
    Doc: Hmmm ...
    Patient: I feel it hurts when I touch anywhere in my body.
    Doc: Ahh ... now I get it. You need a plaster on your finger, John.

    Sometimes the server level gives an indicator of what is happening in the system, but we need to get to the root cause for a specific database. So this is the first blog in the series where we discuss database-level reports. To launch database-level reports, expand the selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of the database-level reports. In this blog we talk about four "disk" reports, because they are similar:

    Disk Usage
    Disk Usage by Top Tables
    Disk Usage by Table
    Disk Usage by Partition

    Disk Usage

    This report shows multiple pieces of information about the database. Let us discuss them one by one; we have divided the output into 5 different sections.

    Section 1 shows a high-level summary of the database, including the space used by the database files (mdf and ldf). Under the hood, the report uses various DMVs and DBCC commands: sys.data_spaces and DBCC SHOWFILESTATS.

    Sections 2 and 3 are pie charts, one for the data file allocation and another for the transaction log file. The pie chart for "Data Files Space Usage (%)" shows the space consumed by data, by indexes, and the unallocated space which is allocated to the SQL Server database but not yet filled with anything. "Transaction Log Space Usage (%)" uses DBCC SQLPERF(LOGSPACE) and shows how much empty space we have in the physical transaction log file.

    Section 4 shows data from the Default Trace and looks at Event IDs 92, 93, 94 and 95, which are for "Data File Auto Grow", "Log File Auto Grow", "Data File Auto Shrink" and "Log File Auto Shrink" respectively. If the default trace is not enabled, this section is replaced by the message "Trace Log is disabled".

    Section 5 of the report uses DBCC SHOWFILESTATS to get its information and shows the physical layout of the file. In case you have In-Memory objects in the database (from SQL Server 2014), the report shows information about those as well. The In-Memory sections use sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces; the new type for In-Memory OLTP is 'FX' in sys.data_spaces.

    The next set of reports is targeted at information about a table and its storage. These reports can answer questions like:

    Which is the biggest table in the database?
    How many rows do we have in a table?
    Is there any table that has a lot of reserved space that is unused?
    Which partition of the table has the most data?

    Disk Usage by Top Tables

    This report provides detailed data on the utilization of disk space by the top 1000 tables within the database. The report does not provide data for memory-optimized tables.

    Disk Usage by Table

    This report is the same as the previous report, with a few differences:

    The first report shows only 1000 rows.
    The first report orders by values in the DMV sys.dm_db_partition_stats, whereas the second one orders by the name of the table.

    Both reports have an interactive sort facility: we can click on any column header and change the sort order of the data.

    Disk Usage by Partition

    This report shows the distribution of the data in a table based on the partitions in the table. It is similar to the previous output, with the partition details added. Here is the query taken from Profiler:

    SELECT row_number() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number,
           (dense_rank() OVER (ORDER BY a5.name, a2.name)) % 2 AS l1,
           a1.OBJECT_ID,
           a5.name AS [schema],
           a2.name,
           a1.index_id,
           a3.name AS index_name,
           a3.type_desc,
           a1.partition_number,
           a1.used_page_count * 8 AS total_used_pages,
           a1.reserved_page_count * 8 AS total_reserved_pages,
           a1.row_count
    FROM sys.dm_db_partition_stats a1
    INNER JOIN sys.all_objects a2
        ON (a1.OBJECT_ID = a2.OBJECT_ID)
        AND a1.OBJECT_ID NOT IN (SELECT OBJECT_ID FROM sys.tables WHERE is_memory_optimized = 1)
    INNER JOIN sys.schemas a5
        ON (a5.schema_id = a2.schema_id)
    LEFT OUTER JOIN sys.indexes a3
        ON ((a1.OBJECT_ID = a3.OBJECT_ID) AND (a1.index_id = a3.index_id))
    WHERE (SELECT MAX(DISTINCT partition_number)
           FROM sys.dm_db_partition_stats a4
           WHERE (a4.OBJECT_ID = a1.OBJECT_ID)) >= 1
      AND a2.TYPE <> N'S'
      AND a2.TYPE <> N'IT'
    ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number

    Using all of the above reports, you should be able to get the usage of database files and also the space used by tables. I think this is too much disk information for a single blog, and I hope you have used these reports in the past to get data. Do let me know if you found anything interesting using these reports in your environments.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL. Tagged: SQL Reports
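    As a quick addendum to the reports above: if you want the raw numbers without opening SSMS, here is a minimal T-SQL sketch built only from the commands the article mentions (run it in the target database; DBCC SHOWFILESTATS is undocumented, and column layouts vary by version):

    -- Log space per database (what the "Transaction Log Space Usage (%)" chart shows)
    DBCC SQLPERF(LOGSPACE);

    -- Data file allocation for the current database (the Section 1 / Section 5 source)
    DBCC SHOWFILESTATS;

    -- Per-table, per-partition page counts: the raw feed behind the table reports
    -- (pages are 8 KB, hence the * 8 to get kilobytes)
    SELECT OBJECT_NAME(object_id) AS table_name,
           partition_number,
           used_page_count * 8 AS used_kb,
           reserved_page_count * 8 AS reserved_kb,
           row_count
    FROM sys.dm_db_partition_stats
    ORDER BY used_page_count DESC;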

    Read the article
