Search Results

Search found 666 results on 27 pages for 'profiler'.

Page 22 of 27

  • Optimize CUDA with Thrust in a loop

    - by macs
    Given the following piece of code, which generates a kind of code dictionary with CUDA using Thrust (a C++ template library for CUDA):

        thrust::device_vector<float> dCodes(codes->begin(), codes->end());
        thrust::device_vector<int> dCounts(counts->begin(), counts->end());
        thrust::device_vector<int> newCounts(counts->size());

        for (int i = 0; i < dCodes.size(); i++) {
            float code = dCodes[i];
            int count = thrust::count(dCodes.begin(), dCodes.end(), code);
            newCounts[i] = dCounts[i] + count;

            // Did we already have a count in one of the last runs?
            if (dCounts[i] > 0) {
                newCounts[i]--;
            }

            // Remove
            thrust::detail::normal_iterator<thrust::device_ptr<float> > newEnd =
                thrust::remove(dCodes.begin() + i + 1, dCodes.end(), code);
            int dist = thrust::distance(dCodes.begin(), newEnd);
            dCodes.resize(dist);
            newCounts.resize(dist);
        }

        codes->resize(dCodes.size());
        counts->resize(newCounts.size());
        thrust::copy(dCodes.begin(), dCodes.end(), codes->begin());
        thrust::copy(newCounts.begin(), newCounts.end(), counts->begin());

    The problem is that I've noticed multiple copies of 4 bytes each, using the CUDA Visual Profiler. IMO these are generated by the loop counter i, the variables code, count and dist, and every access to i and the variables noted above. This seems to slow everything down (sequentially copying 4 bytes at a time is no fun). So how do I tell Thrust that these variables should be handled on the device? Or are they already? Using thrust::device_ptr doesn't seem sufficient, because I'm not sure whether the surrounding for loop runs on the host or on the device (which could also be another reason for the slowness).

    Read the article

  • How to speed up the reading of innerHTML in IE8?

    - by Dennis Cheung
    I am using jQuery with the DataTables plugin, and now I have a big performance issue on the following line:

        aLocalData[jInner] = nTds[j].innerHTML; // jquery.dataTables.js:2220

    I have an AJAX call whose result is a string in HTML format, and I convert it into HTML nodes; that part is OK:

        var $result = $('<div/>').html(result).find("*:first"); // similar to $result=$(result) but much faster in Fx

    Then I enhance the result from a plain table into a sortable DataTable. The speed is acceptable in Fx (around 4 sec for 900 rows) but unacceptable in IE8 (more than 100 seconds). I checked it in depth using the built-in profiler and found that the single line above takes 99.9% of the time. How can I speed it up? Anything I missed?

        nTrs = oSettings.nTable.getElementsByTagName('tbody')[0].childNodes;
        for ( i=0, iLen=nTrs.length ; i<iLen ; i++ )
        {
            if ( nTrs[i].nodeName == "TR" )
            {
                iThisIndex = oSettings.aoData.length;
                oSettings.aoData.push( {
                    "nTr": nTrs[i],
                    "_iId": oSettings.iNextId++,
                    "_aData": [],
                    "_anHidden": [],
                    "_sRowStripe": ''
                } );
                oSettings.aiDisplayMaster.push( iThisIndex );
                aLocalData = oSettings.aoData[iThisIndex]._aData;
                nTds = nTrs[i].childNodes;
                jInner = 0;
                for ( j=0, jLen=nTds.length ; j<jLen ; j++ )
                {
                    if ( nTds[j].nodeName == "TD" )
                    {
                        aLocalData[jInner] = nTds[j].innerHTML; // jquery.dataTables.js:2220
                        jInner++;
                    }
                }
            }
        }

    Read the article

  • Executing sequential stored procedures; works in query analyzer, doesn't in my .NET application

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit' that passes through to another database on the same machine, MyOtherDB, which also has a stored procedure called 'CreateAudit'. In other words, in MyDb, CreateAudit does the following: EXEC dbo.MyOtherDB.CreateAudit. I call the MyDb CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode):

        Result = CreateAudit(recordId, "Opened")

    One line after that, I call:

        Result2 = CreateAudit(recordId, "Closed")

    The second stored procedure call is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times or so the record is successfully marked closed - the other 19 times nothing happens, but I do not get an error! Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning - any ideas why this is happening? Thanks for any responses, very interested in hearing how this can happen.
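
    A minimal sketch of one way to rule out connection interleaving (an assumption, not part of the original post): with pooled connections, each DAL call may go out on a different connection, so forcing both calls onto one connection inside one transaction guarantees the second call sees the first call's row. All names here (connStr, parameter names) are hypothetical.

        // Hypothetical sketch: run both audit calls on the same connection
        // inside a single transaction, so the "Closed" call is guaranteed
        // to see the row the "Opened" call created.
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                foreach (string status in new[] { "Opened", "Closed" })
                {
                    using (SqlCommand cmd = new SqlCommand("CreateAudit", conn, tx))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        cmd.Parameters.AddWithValue("@recordId", recordId);
                        cmd.Parameters.AddWithValue("@status", status);
                        cmd.ExecuteNonQuery();
                    }
                }
                tx.Commit();
            }
        }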

    Read the article

  • Getting identity from Ado.Net Update command

    - by rboarman
    My scenario is simple. I am trying to persist a DataSet and have the identity column filled in so I can add child records. Here's what I've got so far:

        using (SqlConnection connection = new SqlConnection(connStr))
        {
            SqlDataAdapter adapter = new SqlDataAdapter("select * from assets where 0 = 1", connection);
            adapter.MissingMappingAction = MissingMappingAction.Passthrough;
            adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;
            SqlCommandBuilder cb = new SqlCommandBuilder(adapter);
            var insertCmd = cb.GetInsertCommand(true);
            insertCmd.Connection = connection;
            connection.Open();

            adapter.InsertCommand = insertCmd;
            adapter.InsertCommand.CommandText += "; set ? = SCOPE_IDENTITY()";
            adapter.InsertCommand.UpdatedRowSource = UpdateRowSource.OutputParameters;

            var param = new SqlParameter("RowId", SqlDbType.Int);
            param.SourceColumn = "RowId";
            param.Direction = ParameterDirection.Output;
            adapter.InsertCommand.Parameters.Add(param);

            SqlTransaction transaction = connection.BeginTransaction();
            insertCmd.Transaction = transaction;

            try
            {
                assetsImported = adapter.Update(dataSet.Tables["Assets"]);
                transaction.Commit();
            }
            catch (Exception ex)
            {
                transaction.Rollback();
                // Log an error
            }

            connection.Close();
        }

    The first thing I noticed, besides the fact that the identity value is not making its way back into the DataSet, is that my change to add the scope_identity select statement to the insert command is not being executed. Looking at the query using Profiler, I do not see my addition to the insert command. Questions:

    1) Why is my addition to the insert command not making its way to the SQL being executed on the database?
    2) Is there a simpler way to have my DataSet refreshed with the identity values of the inserted rows?
    3) Should I use the OnRowUpdated callback to add my child records? My plan was to loop through the rows after the Update() call and add children as needed.

    Thank you in advance. Rick
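
    A hedged sketch of one thing worth trying (an assumption, not from the original post): while the SqlCommandBuilder is attached to the adapter it can regenerate the insert command at Update() time, discarding manual edits, and SqlClient uses named parameters rather than the ? placeholder. Cloning the generated command, detaching the builder and using a named output parameter tests both at once:

        // Hypothetical sketch: take a private copy of the generated command,
        // detach the builder so it cannot overwrite it, and use a named
        // parameter (SqlClient does not understand "?" placeholders).
        SqlCommand insert = cb.GetInsertCommand(true).Clone();
        cb.Dispose(); // detach the command builder from the adapter

        insert.CommandText += "; SET @RowId = SCOPE_IDENTITY()";
        insert.UpdatedRowSource = UpdateRowSource.OutputParameters;

        SqlParameter rowId = insert.Parameters.Add("@RowId", SqlDbType.Int);
        rowId.SourceColumn = "RowId";
        rowId.Direction = ParameterDirection.Output;

        adapter.InsertCommand = insert;
        adapter.Update(dataSet.Tables["Assets"]); // RowId flows back into each row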

    Read the article

  • How does loop address alignment affect the speed on Intel x86_64?

    - by Alexander Gololobov
    I'm seeing a 15% performance degradation of the same C++ code compiled to exactly the same machine instructions but located at differently aligned addresses. When my tiny main loop starts at 0x415220 it's faster than when it is at 0x415250. I'm running this on an Intel Core2 Duo, with gcc 4.4.5 on x86_64 Ubuntu. Can anybody explain the cause of the slowdown and how I can force gcc to optimally align the loop? Here is the disassembly for both cases with profiler annotation:

        415220 576 12.56% |XXXXXXXXXXXXXX       48 c1 eb 08     shr    $0x8,%rbx
        415224 110  2.40% |XX                   0f b6 c3        movzbl %bl,%eax
        415227      0.00% |                     41 0f b6 04 00  movzbl (%r8,%rax,1),%eax
        41522c  40  0.87% |                     48 8b 04 c1     mov    (%rcx,%rax,8),%rax
        415230 806 17.58% |XXXXXXXXXXXXXXXXXXX  4c 63 f8        movslq %eax,%r15
        415233 186  4.06% |XXXX                 48 c1 e8 20     shr    $0x20,%rax
        415237 102  2.22% |XX                   4c 01 f9        add    %r15,%rcx
        41523a 414  9.03% |XXXXXXXXXX           a8 0f           test   $0xf,%al
        41523c 680 14.83% |XXXXXXXXXXXXXXXX     74 45           je     415283 ::Run(char const*, char const*)+0x4b3
        41523e      0.00% |                     41 89 c7        mov    %eax,%r15d
        415241      0.00% |                     41 83 e7 01     and    $0x1,%r15d
        415245      0.00% |                     41 83 ff 01     cmp    $0x1,%r15d
        415249      0.00% |                     41 89 c7        mov    %eax,%r15d

        415250 679 13.05% |XXXXXXXXXXXXXXXX     48 c1 eb 08     shr    $0x8,%rbx
        415254 124  2.38% |XX                   0f b6 c3        movzbl %bl,%eax
        415257      0.00% |                     41 0f b6 04 00  movzbl (%r8,%rax,1),%eax
        41525c  43  0.83% |X                    48 8b 04 c1     mov    (%rcx,%rax,8),%rax
        415260 828 15.91% |XXXXXXXXXXXXXXXXXXX  4c 63 f8        movslq %eax,%r15
        415263 388  7.46% |XXXXXXXXX            48 c1 e8 20     shr    $0x20,%rax
        415267 141  2.71% |XXX                  4c 01 f9        add    %r15,%rcx
        41526a 634 12.18% |XXXXXXXXXXXXXXX      a8 0f           test   $0xf,%al
        41526c 749 14.39% |XXXXXXXXXXXXXXXXXX   74 45           je     4152b3 ::Run(char const*, char const*)+0x4c3
        41526e      0.00% |                     41 89 c7        mov    %eax,%r15d
        415271      0.00% |                     41 83 e7 01     and    $0x1,%r15d
        415275      0.00% |                     41 83 ff 01     cmp    $0x1,%r15d
        415279      0.00% |                     41 89 c7        mov    %eax,%r15d

    Read the article

  • Need advice on comparing the performance of 2 equivalent linq to sql queries

    - by uvita
    I am working on a tool to optimize LINQ to SQL queries. Basically it intercepts the LINQ execution pipeline and makes some optimizations, for example removing a redundant join from a query. Of course there is overhead in the execution time before the query gets executed in the DBMS, but the query should then be processed faster. I don't want to use a SQL profiler because I know the generated query will perform better in the DBMS than the original one; I am looking for a correct way of measuring the global time between the creation of the query in LINQ and the end of its execution. Currently I am using the Stopwatch class, and my code looks something like this:

        var sw = new Stopwatch();
        sw.Start();
        const int amount = 100;
        for (var i = 0; i < amount; i++)
        {
            ExecuteNonOptimizedQuery();
        }
        sw.Stop();
        Console.WriteLine("Executing the query {2} times took: {0}ms. On average, each query took: {1}ms",
            sw.ElapsedMilliseconds, sw.ElapsedMilliseconds / amount, amount);

    Basically the ExecuteNonOptimizedQuery() method creates a new DataContext, creates a query and then iterates over the results. I did this for both versions of the query, the normal one and the optimized one. I took the idea from this post from Frans Bouma. Is there any other approach/considerations I should take? Thanks in advance!
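
    One refinement worth considering (a sketch under assumptions, not from the original post): the first iteration pays one-off costs - JIT compilation, connection-pool creation, LINQ expression-tree translation - that can dwarf per-query differences, so an untimed warm-up pass plus a forced GC before timing makes the comparison fairer:

        // Hypothetical sketch: exclude one-off startup costs from the timing.
        ExecuteNonOptimizedQuery(); // warm-up pass, not measured

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        const int amount = 100;
        var sw = Stopwatch.StartNew();
        for (var i = 0; i < amount; i++)
        {
            ExecuteNonOptimizedQuery();
        }
        sw.Stop();
        Console.WriteLine("Average per query: {0}ms",
            (double)sw.ElapsedMilliseconds / amount);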

    Read the article

  • PHP: Profiling code and strict environment ~ Improving my coding

    - by DavidYell
    I would like to make my local working environment stricter in an effort to improve my code. I know that my code is okay, but as with most things there is always room for improvement. I use XAMPP on my local machine - for simplicity's sake, Apache Friends XAMPP (Basic Package) version 1.7.2. So I've updated my php.ini: error_reporting is now E_ALL | E_STRICT to help with the code standard. I've also enabled the XDebug extension (zend_extension = "C:\xampp\php\ext\php_xdebug.dll"), which seems to be working - I tested some broken code and got the nice standard orange error notice. However, having read http://stackoverflow.com/questions/133686/what-is-the-best-way-to-profile-php-code and enabled the profiler, I cannot seem to create a cachegrind file. Many of the guides I've looked at say you need to install XDebug in XAMPP, which leads me to think they are out of date, as XDebug is bundled with XAMPP these days. So I would appreciate it if anyone could help point me in the right direction with configuring XDebug to output grind files, or just share a great set of default settings for the XDebug config in XAMPP. There seems to be very little documentation to go on. If people have tips on integrating these tools with Netbeans, that would be awesomesauce. I'm happy to get suggestions on other things I can do to help tighten up my PHP code, both syntactically and performance-wise. Thanks, and apologies for the rambling question(s)! Ninja edit: I should mention that I'm using named vhosts in my Apache configuration, which I think is why running XDebug on port 9000 isn't working for me. I guess I'd need to edit my vhost to include port 9000.
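
    A hedged sketch, assuming the XDebug 2.x that ships with XAMPP 1.7: profiling is enabled separately from the extension itself, so loading the DLL alone never produces cachegrind files. The output directory below is an assumption - point it anywhere writable. Note also that port 9000 is the remote-debugging port (XDebug connects out to your IDE), so it is unrelated to your Apache vhost ports.

        ; php.ini - hypothetical example, adjust paths to your install
        zend_extension = "C:\xampp\php\ext\php_xdebug.dll"

        ; profiler: writes cachegrind.out.* files into the directory below
        xdebug.profiler_enable = 1
        xdebug.profiler_output_dir = "C:\xampp\tmp"

        ; alternatively, trigger per-request with ?XDEBUG_PROFILE=1
        ;xdebug.profiler_enable_trigger = 1

        ; remote debugging (what Netbeans listens for)
        xdebug.remote_enable = 1
        xdebug.remote_host = "localhost"
        xdebug.remote_port = 9000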

    Read the article

  • NHibernate slow mapping

    - by Rob A
    My question is: what can I do to determine the cause of the slowness, or what can I do to speed it up without knowing the exact cause? I am running a simple query and it appears that the mapping back to the entities is taking forever. The result set is 350 rows, which is not much data in my opinion.

        IRepository repo = ObjectFactory.GetInstance<IRepository>();
        var q = repo.Query<Order>(item => item.Ordereddate > DateTime.Now.AddDays(-40));
        foreach (var order in q)
        {
            Console.WriteLine(order.TransactionNumber);
        }

    The profiler is telling me it is executing the query 7ms / 35257ms; I am assuming the former is the actual response from the db and the latter is the time it takes NH to do its magic. 35 seconds is too long. This is a simple mapping - one table, nested components, using the fluent interface to do the mappings. I just start up a simple console app and run the one query; the slowness is measured after the SessionFactory is initialized, there should only be one session, and I am not using a transaction. Thanks
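
    A sketch of one way to narrow it down (an assumption, not part of the post): time the query execution separately from the iteration. If the time is spent while iterating, the usual suspect is lazy members being hit per row (N+1 selects) rather than the mapping itself.

        // Hypothetical sketch: split round-trip + hydration from iteration.
        Stopwatch sw = new Stopwatch();
        sw.Start();
        var orders = new List<Order>(
            repo.Query<Order>(item => item.Ordereddate > DateTime.Now.AddDays(-40)));
        Console.WriteLine("query + hydration: {0}ms", sw.ElapsedMilliseconds);

        sw.Reset();
        sw.Start();
        foreach (var order in orders)
        {
            // if this loop carries the time, accessing TransactionNumber (or
            // another member) is triggering extra selects per entity
            Console.WriteLine(order.TransactionNumber);
        }
        Console.WriteLine("iteration: {0}ms", sw.ElapsedMilliseconds);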

    Read the article

  • High Runtime for Dictionary.Add for a large amount of items

    - by aaginor
    Hi folks, I have a C# application that stores data from a text file in a Dictionary object. The amount of data to be stored can be rather large, so it takes a lot of time to insert the entries. With many items in the Dictionary it gets even worse because of the resizing of the internal array that stores the data for the Dictionary. So I initialized the Dictionary with the number of items that will be added, but this has no impact on speed. Here is my function:

        private Dictionary<IdPair, Edge> AddEdgesToExistingNodes(HashSet<NodeConnection> connections)
        {
            Dictionary<IdPair, Edge> resultSet = new Dictionary<IdPair, Edge>(connections.Count);
            foreach (NodeConnection con in connections)
            {
                ...
                resultSet.Add(nodeIdPair, newEdge);
            }
            return resultSet;
        }

    In my tests I insert ~300k items. I checked the running time with ANTS Performance Profiler and found that the average time for resultSet.Add(...) doesn't change when I initialize the Dictionary with the needed size; it is the same as when I initialize the Dictionary with new Dictionary() (about 0.256 ms on average for each Add). This is definitely caused by the amount of data in the Dictionary (although I initialized it with the desired size). For the first 20k items, the average time for Add is 0.03 ms for each item. Any idea how to make the add operation faster? Thanks in advance, Frank
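
    A hedged guess, not part of the original post: Add getting slower as the dictionary grows, even when pre-sized, is the signature of hash collisions - i.e. IdPair not providing a good GetHashCode/Equals, so inserts degenerate into linear scans of one overlong bucket chain. A sketch of what a proper implementation might look like; IdPair's fields are assumptions:

        // Hypothetical IdPair with an explicit hash; field names are assumed.
        public struct IdPair : IEquatable<IdPair>
        {
            public readonly int SourceId;
            public readonly int TargetId;

            public IdPair(int sourceId, int targetId)
            {
                SourceId = sourceId;
                TargetId = targetId;
            }

            public bool Equals(IdPair other)
            {
                return SourceId == other.SourceId && TargetId == other.TargetId;
            }

            public override bool Equals(object obj)
            {
                return obj is IdPair && Equals((IdPair)obj);
            }

            public override int GetHashCode()
            {
                // combine both ids so distinct pairs spread across buckets
                return (SourceId * 397) ^ TargetId;
            }
        }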

    Read the article

  • ExecutorService memory leak on exception

    - by TofuBeer
    I am having a hard time tracking this down since the profiler keeps crashing (HotSpot error). Before I go too deep into figuring it out, I'd like to know if I really have a problem or not :-) I have a few thread pools created via Executors.newFixedThreadPool(10). The threads connect to different web sites and, on occasion, I get connection refused and wind up throwing an exception. When I later call Future.get() to get the result, it will then catch the ExecutionException that wraps the exception that was thrown when the connection could not be made. The program uses a fairly constant amount of memory up until the point in time that the exceptions get thrown (they tend to happen in batches when a particular site is overloaded). After that point the memory again remains constant, but at a higher level. So my question is along the lines of: is the memory behaviour (reported by "top" on Unix) expected because the exceptions just triggered something, or do I probably have an actual leak that I'll need to track down? Additionally, when Future.get() throws an exception, is there anything else I need to do besides catch the exception (such as call Future.cancel() on it)?

    Read the article

  • IIS6, ASP.NET MVC 1 and random slowdowns

    - by Mr Snuffle
    I've recently deployed an MVC application to an IIS6 web server. One strange behaviour I've been having is that load times will randomly blow up to 30 sec+ and then return to normal. Our tests have shown this occurring on multiple connections at the same time. Once the wait has passed, the site becomes responsive again. It's completely random when this will occur, but it probably happens about once every 15 minutes or so. My first thought was that the application was being restarted by the web server for some reason, but I determined this wasn't the case because process recycling is set very infrequently, and I placed some logging in the application startup. It's also nothing to do with the database connection: this slowdown happens simply when moving between static pages too. I've watched the database with a SQL profiler, and nothing is hitting it when these slowdowns occur. Finally, I've placed entry and exit logging on my controller actions; the slowdown always happens outside of the controller. The entry and exit time for a controller action is always appropriately fast. Does anyone have any ideas of what could be causing this? I've tried running it locally on IIS7 and I haven't had the issue. I can only think it's something to do with our hosting provider.

    Read the article

  • How can you get the call tree with python profilers?

    - by Oliver
    I used to use a nice Apple profiler that is built into the System Monitor application. As long as your C++ code was compiled with debug information, you could sample your running application and it would print out an indented tree telling you what percent of the parent function's time was spent in this function (and the body vs. other function calls). For instance, if main called function_1 and function_2, function_2 calls function_3, and then main calls function_3:

        main (100%, 1% in function body):
            function_1 (9%, 9% in function body):
            function_2 (90%, 85% in function body):
                function_3 (100%, 100% in function body)
            function_3 (1%, 1% in function body)

    I would see this and think, "Something is taking a long time in the code in the body of function_2. If I want my program to be faster, that's where I should start." Does anyone know how I can most easily get this exact profiling output for a python program? I've seen people say to do this:

        import cProfile, pstats
        prof = cProfile.Profile()
        prof = prof.runctx("real_main(argv)", globals(), locals())
        stats = pstats.Stats(prof)
        stats.sort_stats("time")  # Or cumulative
        stats.print_stats(80)  # 80 = how many to print

    but it's quite messy compared to that elegant call tree. Please let me know if you can easily do this, it would help quite a bit. Cheers!

    Read the article

  • Silverlight performance with many loaded controls

    - by gius
    I have an SL application with many DataGrids (from Silverlight Toolkit), each on its own view. If several DataGrids are open, changing between views (TabItems, for example) takes a long time (a few seconds) and freezes the whole application (UI thread). The more DataGrids are loaded, the longer the change takes. The DataGrids that slow the UI change might be in other places in the app and not even visible at that moment, but once they are opened (and loaded with data), they slow the showing of other DataGrids. Note that the DataGrids are NOT disposed and then recreated again; they remain in memory, and only their parent control is hidden and made visible again. I have profiled the application. It shows that agcore.dll's SetValue function is the bottleneck. Unfortunately, debug symbols are not available for this Silverlight native library responsible for drawing. The problem is not in the DataGrid control - I tried to replace it with XCeed's grid and the performance when changing views is even worse. Do you have any idea how to solve this problem? Why do more opened controls slow down other controls? I have created a sample that shows this issue: http://cenud.cz/PerfTest.zip UPDATE: Using the VS11 profiler on the sample provided suggests that the problem could be in MeasureOverride being called many times (for each DataGridCell, I guess). But still, why is it slower as more controls are loaded elsewhere? Is there a way to improve the performance?

    Read the article

  • SQL Server becomes slow after restart

    - by Tobi DM
    We use SQL Server 2005 on a Windows Server 2008. The server has 48 GB RAM, and SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app besides SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server, performance is great. The server grabs the 40 GB RAM which it is allowed to and then runs fast as hell. But after about 4 weeks the system becomes slower and slower. The execution time of statements (seen in the profiler) rises slowly, but I cannot see that anything is going wrong on the server:

    - CPU usage is at about 20%.
    - I/O also seems to be no problem.
    - The process monitor does not show any strange apps or anything like that.
    - The event log has no interesting messages.
    - There are no open transactions or blockings to see.

    We have already tried the following things without effect: dropped the caches using DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE('ALL') and DBCC DROPCLEANBUFFERS; restarted the app server we are using; restarted the SQL Server service. Nothing helped except restarting the whole server. Any ideas?

    Read the article

  • Refactoring - Speed increase

    - by Michael G
    How can I make this function more efficient? It's currently running at 6 - 45 seconds. I've run the dotTrace profiler on this specific method, and its total time is anywhere between 6,000 ms and 45,000 ms. The majority of the time is spent on the "MoveNext" and "GetEnumerator" calls. An example of the times:

        71.55% CreateTableFromReportDataColumns - 18,533 ms - 190 calls
        -- 55.71% MoveNext - 14,422 ms - 10,775 calls

    What can I do to speed this method up? It gets called a lot, and the seconds add up:

        private static DataTable CreateTableFromReportDataColumns(Report report)
        {
            DataTable table = new DataTable();
            HashSet<String> colsToAdd = new HashSet<String> { "DataStream" };
            foreach (ReportData reportData in report.ReportDatas)
            {
                IEnumerable<string> cols = reportData.ReportDataColumns
                    .Where(c => !String.IsNullOrEmpty(c.Name))
                    .Select(x => x.Name)
                    .Distinct();
                foreach (var s in cols)
                {
                    if (!String.IsNullOrEmpty(s))
                        colsToAdd.Add(s);
                }
            }
            foreach (string col in colsToAdd)
            {
                table.Columns.Add(col);
            }
            return table;
        }
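
    A hedged sketch of one possible rewrite, not from the original post: the Distinct() and the second IsNullOrEmpty check are redundant because the HashSet already de-duplicates, so the LINQ pipeline can be dropped entirely. (If ReportDatas/ReportDataColumns are lazy ORM collections, most of the MoveNext time may actually be database access, which no LINQ rewrite will fix.)

        // Hypothetical rewrite: one pass, no Distinct, let the set de-duplicate.
        private static DataTable CreateTableFromReportDataColumns(Report report)
        {
            DataTable table = new DataTable();
            HashSet<string> colsToAdd = new HashSet<string> { "DataStream" };

            foreach (ReportData reportData in report.ReportDatas)
            {
                foreach (ReportDataColumn col in reportData.ReportDataColumns)
                {
                    if (!string.IsNullOrEmpty(col.Name))
                        colsToAdd.Add(col.Name); // Add ignores duplicates
                }
            }

            foreach (string col in colsToAdd)
                table.Columns.Add(col);

            return table;
        }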

    Read the article

  • Entity Framework and associations between string keys

    - by fredrik
    Hi, I am new to Entity Framework, and to ORMs for that matter. In the project I'm involved in we have a legacy database with all its keys as strings, case-insensitive. We are converting to MSSQL and want to use EF as the ORM, but have run into a problem. Here is an example that illustrates it: TableA has a primary string key, and TableB has a reference to this primary key. In LINQ we write something like:

        var result = from t in context.TableB
                     select t.TableA;
        foreach (var r in result)
            Console.WriteLine(r.someFieldInTableA);

    Say TableA contains a primary key that reads "A", and TableB contains two rows that reference TableA but with different cases in the referencing field, "a" and "A". In our project we want both of the rows to end up in the result, but only the one with the matching case does. Using the SQL Profiler, I have noticed that both of the rows are selected. Is there a way to tell Entity Framework that the keys are case-insensitive? Edit: We have now tested this with NHibernate and come to the conclusion that NHibernate works with case-insensitive keys, so NHibernate might be a better choice for us. I am however still interested in finding out if there is any way to change the behaviour of Entity Framework.

    Read the article

  • 'LINQ query plan' horribly inefficient but 'Query Analyser query plan' is perfect for same SQL!

    - by Simon_Weaver
    I have a LINQ to SQL query that generates the following SQL:

        exec sp_executesql N'SELECT COUNT(*) AS [value]
        FROM [dbo].[SessionVisit] AS [t0]
        WHERE ([t0].[VisitedStore] = @p0) AND (NOT ([t0].[Bot] = 1)) AND ([t0].[SessionDate] > @p1)',
        N'@p0 int,@p1 datetime', @p0=1, @p1='2010-02-15 01:24:00'

    (This is the actual SQL taken from SQL Profiler on SQL Server 2008.) The query plan generated when I run this SQL from within Query Analyser is perfect: it uses an index containing VisitedStore, Bot, SessionDate, and the query returns instantly. However, when I run this from C# (with LINQ) a different query plan is used that is so inefficient it doesn't even return in 60 seconds. This query plan is trying to do a key lookup on the clustered primary key, which contains a couple million rows. It has no chance of returning. What I just can't understand, though, is that the EXACT same SQL is being run - either from within LINQ or from within Query Analyser - yet the query plan is different. I've run the two queries many, many times and they're now running in isolation from any other queries. The date is DateTime.Now.AddDays(-7), but I've even hardcoded that date to eliminate caching problems. Is there anything I can change in LINQ to SQL to affect the query plan or try to debug this further? I'm very, very confused!
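
    A hedged aside, not from the original post: identical SQL text can still hit different cached plans because plan-cache entries are keyed on the connection's SET options, and Query Analyser/SSMS runs with ARITHABORT ON by default while ADO.NET clients run with it OFF. Combined with parameter sniffing, that is a common cause of exactly this symptom. A sketch of one way to test the hypothesis (context and property names assumed):

        // Hypothetical test: align the app connection's SET options with
        // SSMS. If the query is suddenly fast, you are looking at a stale
        // sniffed plan cached under the other SET options, not a LINQ issue.
        using (var context = new MyDataContext(connStr)) // name assumed
        {
            context.ExecuteCommand("SET ARITHABORT ON");
            int count = context.SessionVisits.Count(t =>
                t.VisitedStore == 1 && !t.Bot &&
                t.SessionDate > DateTime.Now.AddDays(-7));
        }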

    Read the article

  • Any useful suggestions to figure out where memory is being free'd in a Win32 process?

    - by LeopardSkinPillBoxHat
    An application I am working with is exhibiting the following behaviour:

    - During a particular high-memory operation, the memory usage of the process under Task Manager (Mem Usage stat) reaches a peak of approximately 2.5GB. (Note: a registry key has been set to allow this, as usually there is a maximum of 2GB for a process under 32-bit Windows.)
    - After the operation is complete, the process size slowly starts decreasing at a rate of 1MB per second.

    I am trying to figure out the easiest way to quickly determine who is freeing this memory, and where it is being free'd. I am having trouble attaching a memory profiler to my code, and I don't particularly want to override the new/delete operators to track the allocations/deallocations (IOW, I want to do this without re-compiling my code). Can anyone offer any useful suggestions of how I could do this via the Visual Studio debugger? Update: I should also mention that it's a multi-threaded application, so pausing the application and analysing the call stack through the debugger is not the most desirable option. I considered freezing different threads one at a time to see if the memory stops reducing, but I'm fairly certain this will cause the application to crash.

    Read the article

  • SQL putting two single quotes around datetime fields and fails to insert record

    - by user82613
    I am trying to INSERT into a SQL database table, but it doesn't work. So I used SQL Server Profiler to see how the query was being built; what it shows is the following:

        declare @p1 int
        set @p1=0
        declare @p2 int
        set @p2=0
        declare @p3 int
        set @p3=1
        exec InsertProcedureName
            @ConsumerMovingDetailID=@p1 output,
            @UniqueID=@p2 output,
            @ServiceID=@p3 output,
            @ProjectID=N'0',
            @IPAddress=N'66.229.112.168',
            @FirstName=N'Mike',
            @LastName=N'P',
            @Email=N'[email protected]',
            @PhoneNumber=N'(254)637-1256',
            @MobilePhone=NULL,
            @CurrentAddress=N'',
            @FromZip=N'10005',
            @MoveInAddress=N'',
            @ToZip=N'33067',
            @MovingSize=N'1',
            @MovingDate=''2009-04-30 00:00:00:000'',  /* Problem here */
            @IsMovingVehicle=0,
            @IsPackingRequired=0,
            @IncludeInSaveologyPlanner=1
        select @p1, @p2, @p3

    As you can see, it puts two pairs of single quotes around the datetime field, which produces a syntax error in SQL. I wonder if there is anything I must configure somewhere? Any help would be appreciated. Environment details: Visual Studio 2008, .NET 3.5, MS SQL Server 2005. Here is the .NET code I'm using:

        //call procedure for results
        strStoredProcedureName = "usp_SMMoverSearchResult_SELECT";
        Database database = DatabaseFactory.CreateDatabase();
        DbCommand dbCommand = database.GetStoredProcCommand(strStoredProcedureName);
        dbCommand.CommandTimeout = DataHelper.CONNECTION_TIMEOUT;
        database.AddInParameter(dbCommand, "@MovingDetailID", DbType.String, objPropConsumer.ConsumerMovingDetailID);
        database.AddInParameter(dbCommand, "@FromZip", DbType.String, objPropConsumer.FromZipCode);
        database.AddInParameter(dbCommand, "@ToZip", DbType.String, objPropConsumer.ToZipCode);
        database.AddInParameter(dbCommand, "@MovingDate", DbType.DateTime, objPropConsumer.MoveDate);
        database.AddInParameter(dbCommand, "@PLServiceID", DbType.Int32, objPropConsumer.ServiceID);
        database.AddInParameter(dbCommand, "@FromAreaCode", DbType.String, pFromAreaCode);
        database.AddInParameter(dbCommand, "@FromState", DbType.String, pFromState);
        database.AddInParameter(dbCommand, "@ToAreaCode", DbType.String, pToAreaCode);
        database.AddInParameter(dbCommand, "@ToState", DbType.String, pToState);
        DataSet dstSearchResult = new DataSet("MoverSearchResult");
        database.LoadDataSet(dbCommand, dstSearchResult, new string[] { "MoverSearchResult" });
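
    A hedged observation, not in the original post: doubled quotes in the trace usually mean the parameter value itself already contains quote characters, i.e. MoveDate is being handed over as a pre-quoted string rather than as a DateTime. Passing a real DateTime lets ADO.NET do its own quoting (sketch; the property type is an assumption):

        // Hypothetical fix: pass a DateTime value, not a string that
        // already carries its own quotes.
        DateTime moveDate = DateTime.Parse(objPropConsumer.MoveDate);
        database.AddInParameter(dbCommand, "@MovingDate", DbType.DateTime, moveDate);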

    Read the article

  • CREATE VIEW called multiple times not creating all views

    - by theninepoundhammer
    I'm noticing strange behavior in SQL 2005, in both the Express and Enterprise Editions. In my code I need to loop through a series of values (about five in a row), and for each value I need to insert the value into a table and dynamically create a new view using that value as part of the WHERE clause and the name of the view. The code runs pretty quickly, but what I'm noticing is that all the values are inserted into the table correctly while only the LAST view is being created. Every time. For example, if the values I'm using are X1, X2, X3, X4, and X5, I'll run the process, open up Mgmt Studio, and see five rows in the table with the correct five values, but only one view, named MyView_x5, that has the correct WHERE clause. At first I had this loop in an SSIS package as part of a larger data flow. When I started noticing this behavior, I created a stored proc that would build the CREATE VIEW statement dynamically after the insert and called EXECUTE to create the view. Same result. Finally, I created some C# code using the Enterprise Library DAAB and did the insert and CREATE VIEW statements from my DLL. Same result every time. Most recently, I turned on Profiler while running against the Enterprise Edition and was able to verify that the Batch Started and Batch Completed events were being fired off for each instance of the view. However, like I said, only the last view is actually being created. Does anyone have any idea why this might be happening? Or any suggestions about what else to check or profile? I've profiled for error messages, exceptions, etc. but don't see any in my trace file. My Express edition is 9.00.1399.06; I'm not sure about the Enterprise edition but think it is SP2.
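
    A hedged sketch of one way to isolate this, not from the original post: CREATE VIEW must be the only statement in its batch, so the INSERT and the CREATE VIEW cannot share one batch, and a reused command or SQL string inside a loop is a classic way to end up executing only the final text. Sending each CREATE VIEW as its own freshly built command makes the per-iteration behavior easy to verify (all names are assumptions):

        // Hypothetical sketch: one INSERT and one CREATE VIEW per value,
        // each CREATE VIEW in its own freshly built command.
        foreach (string value in values)
        {
            using (SqlCommand insert = new SqlCommand(
                "INSERT INTO MyTable (Val) VALUES (@v)", conn))
            {
                insert.Parameters.AddWithValue("@v", value);
                insert.ExecuteNonQuery();
            }

            string createView = string.Format(
                "CREATE VIEW MyView_{0} AS SELECT * FROM MyTable WHERE Val = '{0}'",
                value); // fine for trusted values; escape/whitelist otherwise

            using (SqlCommand create = new SqlCommand(createView, conn))
            {
                create.ExecuteNonQuery(); // its own batch, as CREATE VIEW requires
            }
        }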

    Read the article

  • Optimization of Function with Dictionary and Zip()

    - by eWizardII
    Hello, I have the following function:

        def filetxt():
            word_freq = {}
            lvl1 = []
            lvl2 = []
            total_t = 0
            users = 0
            text = []

            for l in range(0, 500):
                # Open File
                if os.path.exists("C:/Twitter/json/user_" + str(l) + ".json") == True:
                    with open("C:/Twitter/json/user_" + str(l) + ".json", "r") as f:
                        text_f = json.load(f)
                        users = users + 1
                        for i in range(len(text_f)):
                            text.append(text_f[str(i)]['text'])
                            total_t = total_t + 1
                else:
                    pass

            # Filter
            occ = 0
            import string
            for i in range(len(text)):
                s = text[i]  # Sample string
                a = re.findall(r'(RT)', s)
                b = re.findall(r'(@)', s)
                occ = len(a) + len(b) + occ
                s = s.encode('utf-8')
                out = s.translate(string.maketrans("", ""), string.punctuation)

                # Create Wordlist/Dictionary
                word_list = text[i].lower().split(None)
                for word in word_list:
                    word_freq[word] = word_freq.get(word, 0) + 1

                keys = word_freq.keys()
                numbo = range(1, len(keys) + 1)
                WList = ', '.join(keys)
                NList = str(numbo).strip('[]')
                WList = WList.split(", ")
                NList = NList.split(", ")
                W2N = dict(zip(WList, NList))

                for k in range(0, len(word_list)):
                    word_list[k] = W2N[word_list[k]]

                for i in range(0, len(word_list) - 1):
                    lvl1.append(word_list[i])
                    lvl2.append(word_list[i + 1])

    I have used the profiler to find that it seems the greatest CPU time is spent on the zip() function and the join and split parts of the code. I'm looking to see if there is any way I have overlooked that I could potentially clean up the code to make it more optimized, since the greatest lag seems to be in how I am working with the dictionaries and the zip() function. Any help would be appreciated, thanks!

    Read the article

  • NHibernate will not insert a record

    - by Brian Beckham
    I have an application that is now 4+ years old that is exhibiting some odd behavior in our latest deployment. The application uses NHibernate for all inserts, updates, selects, etc. We are currently using .NET 2.0 and NHibernate 1.2 (I know, we need to upgrade). This deployment is on Windows 2008 Server x64 with IIS 7.5. What I have seen so far is that the application runs but is unable to insert or update records in the DB; reads seem fine so far, but writes are a problem. SOME writes actually work (inserts into some small tables), but most never even make it to the DB. Using SQL Profiler, the inserts/updates never make it to the server, and with log4net turned up to DEBUG and show_sql true, the select statements appear, but the insert/update statements never make it into the log at all and never show up at the server. What's even more odd is that the application seems to be oblivious to this - the commandandclose runs without exception (open session in view with an httpmodule), the domain objects come back with uuids generated, etc., but never get persisted. Certainly an upgrade is due, but I would hate to try it during a deployment, and without time to accurately test the app. Any ideas?
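
    A hedged guess, not from the post: NHibernate defers INSERT/UPDATE statements until the session flushes, and uuid-style generators assign ids in memory without touching the database, so everything can look persisted while nothing is ever sent. If the new environment changed when (or whether) the httpmodule flushes, forcing the flush with an explicit transaction will tell you quickly (sketch):

        // Hypothetical sketch: if this persists the row while the module
        // path does not, the end-of-request flush/commit is what broke.
        using (ITransaction tx = session.BeginTransaction())
        {
            session.Save(entity); // uuid is assigned in memory immediately
            tx.Commit();          // the INSERT is only issued at flush/commit
        }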

    Read the article

  • Reporting Services 2005 - Parameter reliant on cascading parameters

    - by sHr0oMaN
    Good day. In a SSRS 2005 report I have three report parameters: FinancialPeriodType ("Month" or "Week" in a DropDownList), FinancialPeriod (a cascading DropDownList populated depending on the first selection) and another parameter, OpeningBalance, of type float. The first two parameters are cascading, i.e. the first parameter is used by the query populating the second's available values. This works fine. What I'm attempting to do is default the value of OpeningBalance to a value from a dataset populated by a stored procedure which takes in the first two parameters. However, as soon as I select a value for the first parameter, I get the following error: "An error occurred during report processing. The value for the report parameter 'OpeningBalance' is not valid for its type." I've tried setting the default of the second parameter to a meaningful value (something like 200901), and defaulting the second parameter in the SQL stored procedure, with no effect. Using SQL Profiler I've noticed that selecting a value for the first parameter doesn't even execute the SQL used to obtain available values for the second parameter.

    Read the article

  • Nhibernate Fluent domain Object with Id(x => x.id).GeneratedBy.Assigned not saveable

    - by urpcor
    Hi there, I am using domain classes with mappings for a legacy database. The ids of the entities are calculated by a stored procedure in the DB which gives back the id for the new row (it's legacy, I can't change this). Now I create the new entity, set the id and call Save. But nothing happens - no exception; even NHibernate Profiler does not say a bit. It's as if the Save call does nothing. I suspect that NH thinks the record is already in the db because it has an id, but I am using Id(x => x.id).GeneratedBy.Assigned() and intentionally the Session.Save(object) method. I am confused. I saw so many samples where it worked. Does anybody have any ideas about it?

        public class Appendix
        {
            public virtual int id { get; set; }
            public virtual AppendixHierarchy AppendixHierachy { get; set; }
            public virtual byte[] appendix { get; set; }
        }

        public class AppendixMap : ClassMap<Appendix>
        {
            public AppendixMap()
            {
                WithTable("appendix");
                Id(x => x.id).GeneratedBy.Assigned();
                References(x => x.AppendixHierachy).ColumnName("appendixHierarchyId");
                Map(x => x.appendix);
            }
        }
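
    A hedged note, not part of the original post: with an assigned id generator, Save() needs nothing from the database, so no SQL is issued (and nothing shows in the profiler) until the session flushes. A minimal sketch of forcing the flush - the sproc-call helper is hypothetical:

        // Hypothetical sketch: assigned ids are written only on flush/commit.
        int newId = GetIdFromLegacyStoredProcedure(); // helper name assumed
        var entity = new Appendix { id = newId, appendix = data }; // data: byte[] payload, assumed

        using (ITransaction tx = session.BeginTransaction())
        {
            session.Save(entity); // queues the insert; nothing hits the db yet
            tx.Commit();          // flush happens here; the INSERT is sent now
        }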

    Read the article
