Search Results

Search found 13249 results on 530 pages for 'virtualized performance'.


  • NedMalloc / DlMalloc experiences

    - by Suma
    I am currently evaluating a few scalable memory allocators, namely nedmalloc and ptmalloc (both built on top of dlmalloc), as a replacement for the default malloc / new because of significant contention seen in a multithreaded environment. Their published performance seems good; however, I would like to hear the experiences of other people who have actually used them. Were your performance goals satisfied? Did you run into any unexpected or hard-to-solve issues (like heap corruption)? If you have tried both ptmalloc and nedmalloc, which of the two would you recommend, and why (ease of use, performance)?

    Read the article

  • AnkhSVN Commits Are Very Slow

    - by jakdep
    Recently, I had to move my SVN repositories to a different server, but I have been experiencing some performance problems since the move. I am using Visual Studio 2005, AnkhSVN 2.1.7819.411 and TortoiseSVN 1.6.6 on my workstation, and VisualSVN Server on the server, which runs Windows Server 2008. Whenever I try to commit a file or view the file history in Visual Studio, it takes twenty-odd seconds. I confirmed that an exception has been made for VisualSVN Server in the server's firewall, yet when I disable the server's firewall the performance is back to normal (1-2 seconds for a commit). When I do a commit or check the log on a file in TortoiseSVN, the performance is fine as well. To ensure that the problem was not related to the moving of the repositories, I am running these tests against a new repository created on the new server. So I reckon the problem lies with AnkhSVN, but I am at a loss as to how to diagnose it further. Any help would be greatly appreciated.

    Read the article

  • Large file upload into WSS v3

    - by Rubens Farias
    I built a WSSv3 application which uploads files in small chunks; as each piece of data arrives, I temporarily keep it in a SQL 2005 image data type field, for performance reasons**. The problem comes when the upload ends: I need to move the data from SQL Server into the SharePoint Document Library through the WSSv3 object model. Right now, I can think of two approaches:

      SPFileCollection.Add(string, (byte[])reader[0]); // OutOfMemoryException

    and

      SPFile file = folder.Files.Add("filename", new byte[]{ });
      using (Stream stream = file.OpenBinaryStream())
      {
          // ... init vars and stuff ...
          while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
          {
              stream.Write(buffer, 0, (int)bytes); // Timeout issues
          }
          file.SaveBinary(stream);
      }

    Is there any other way to complete this task successfully?

    ** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice performance degradation as the file grows (100 MB).

    Read the article

  • Storing varchar(max) & varbinary(max) together - Problem?

    - by Tony Basallo
    I have an app that will have entries of both the varchar(max) and varbinary(max) data types. I was considering putting them both in a separate table, together, even if only one of the two will be used at any given time. The question is whether storing them together has any impact on performance. Considering that they are stored in the heap, I'm thinking that having them together will not be a problem. However, the varchar(max) column will probably have the 'text in row' table option set. I couldn't find any performance testing or profiling while "googling Bing" - probably too specific a question? The SQL Server 2008 table looks like this:

      Id
      ParentId
      Version
      VersionDate
      StringContent - varchar(max)
      BinaryContent - varbinary(max)

    The app will decide which of the two columns to select when the data is queried. The string column will be used much more frequently than the binary column - will this have any impact on performance?
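
    As a sketch of that selective read (the table name ContentVersions and the already-open connection are assumptions for illustration, not from the question): selecting only the column the caller needs means the unused LOB column is never read at all.

      using System.Data.SqlClient;

      // Hypothetical sketch: read only StringContent so BinaryContent is never touched.
      static string LoadStringContent(SqlConnection openConnection, int id)
      {
          using (var cmd = new SqlCommand(
              "SELECT StringContent FROM ContentVersions WHERE Id = @Id",
              openConnection))
          {
              cmd.Parameters.AddWithValue("@Id", id);
              return (string)cmd.ExecuteScalar();
          }
      }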

    Read the article

  • Why darcs instead of git?

    - by Ctrl Alt D-1337
    Using pure functional languages can have a lot of benefits over using impure imperative ones, but low-level systems languages will generally let you achieve much greater performance, especially when they are imperative, because they allow you to specify the exact steps by which the CPU should compute the result. If there were ever a list of tools for which high performance is an absolute must, I would put source version control systems right at the top of it, and git achieves this very well - but performance is not its only advantage over many other types of version control system anyway. The git team handles the unsafe C code very well, and I never worry about the type system or any other feature of the language it is written in. So why is it that a lot of Haskell developers insist on darcs when they will only ever be using the finished product?

    Read the article

  • Touch screens for kiosk applications

    - by Micah
    I'm developing a kiosk-style touchscreen application in Qt. Currently I'm using an Elo Touch surface acoustic wave touchmonitor which works well except for one thing: drag performance is way too poor to provide a good user experience. As this is the case for the cursor in X as well as in my application, it seems to be either the fault of X (probably not) or the touchmonitor. Since mobile platforms are able to achieve very high performance in this regard, it seems like it should be possible for vastly more powerful desktop systems. Does anybody have experience with getting good drag performance out of desktop touchmonitors? What hardware have you used? Is X to blame?

    Read the article

  • FORMSOF Thesaurus in SQL Server

    - by Coolcoder
    Has anyone done any performance measurements with this, in terms of speed, where there is a high number of substitutes for any given word? For instance, I want to use this to store common misspellings, expecting to have 4-10 variations of a word:

      <expansion>
        <sub>administration</sub>
        <sub>administraton</sub>
        <sub>aministraton</sub>
      </expansion>

    When you run a full-text search, how does performance degrade with that number of variations? For instance, I assume it has to run a separate full-text search for each variation, combined with an OR? Also, does having, say, 20-30K entries in the thesaurus XML file impact performance?

    Read the article

  • Database caching on a shared host

    - by tau
    Does anyone have ideas on how to increase MySQL performance on a shared host? My question has less to do with overall database performance and more to do with simply retrieving user-submitted data. Currently my database creates caches at timed intervals, and the PHP then selectively accesses the static files it needs. This has given me a noticeable performance boost, but I am worried that at some point I will have so much data that reading big files into PHP will actually be slower. I am just looking for ideas for shared hosting solutions; I am not going to get my own server anytime soon. Thanks!

    Read the article

  • Sending SMTP e-mail at a high rate in .NET

    - by Martin Liversage
    I have a .NET service that processes a queue on a background thread and, from the items in the queue, sends out a large number of small e-mail messages at a very high rate (say 100 messages per second, if that is even possible). Currently I use SmtpClient.Send(), but I'm afraid that it may hamper performance. Each call to Send() goes through a full cycle of opening the socket, performing the SMTP conversation (HELO, MAIL FROM, RCPT TO, DATA) and closing the socket. In pseudocode:

      for each message {
          open socket
          send HELO
          send MAIL FROM
          send RCPT TO
          send DATA
          close socket
      }

    I would think that the following pseudocode would be more optimal:

      open socket
      send HELO
      for each message {
          send MAIL FROM
          send RCPT TO
          send DATA
      }
      send QUIT
      close socket

    Should I be concerned about the performance of SmtpClient.Send() when sending e-mail at a high rate? What are my options for optimizing the performance?
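
    A minimal sketch of the batched pattern, assuming .NET 4's SmtpClient, which implements IDisposable, keeps the connection open across Send() calls on the same instance, and issues QUIT on Dispose (the host name is a placeholder):

      using System.Collections.Generic;
      using System.Net.Mail;

      static void SendBatch(IEnumerable<MailMessage> queue)
      {
          // One client for the whole batch: the socket and HELO handshake
          // are not repeated per message.
          using (var client = new SmtpClient("smtp.example.com")) // placeholder host
          {
              foreach (MailMessage message in queue)
              {
                  client.Send(message); // reuses the open connection
              }
          } // Dispose sends QUIT and closes the socket
      }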

    Read the article

  • Slowness of Netbeans Platform Apps - how to mitigate?

    - by user559298
    Hi, we are developing a commercial application (pretty complex) in Java using the NetBeans IDE. We have two options in NetBeans for creating it:

      1. A plain Java desktop app
      2. A NetBeans Platform app

    We require that application startup and response times be very fast, that the application be modular, and so on. We did a proof of technology by creating apps using both approaches mentioned above. We found that NetBeans Platform apps are very slow during startup and during screen navigation compared to pure Swing-based desktop apps. We tried to implement the suggestions provided at http://wiki.netbeans.org/Category:Performance:FAQ and in other blogs and forums to improve the app's speed, but were not successful. We feel that for creating a complex desktop app a NetBeans Platform app would be better suited, but it is not meeting our performance requirements (startup and response times, memory footprint, CPU usage guidelines, etc.). Can anyone guide us on how to mitigate our problem and improve the performance of NetBeans Platform apps? Thanks in advance for your help. -bhan

    Read the article

  • Succinct code over verbose?

    - by WeNeedAnswers
    With C# becoming more and more declarative and turning into the new Swiss army knife of programming, is it better to be succinct, thus reducing the actual code base, or long-winded and verbose? Is there a performance cost to being succinct, or does succinctness improve performance because you're putting more of your code in the hands of the compiler (LINQ being an example, when used correctly)? I know that verbosity should win out over succinctness where code would otherwise become less readable, but is succinctness a good idea when your style could affect performance?
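
    As a concrete illustration of the trade-off (a hypothetical example, not from the question), here are succinct and verbose versions of the same filter; the LINQ form hands iteration to compiler-generated machinery at the cost of a small delegate/iterator overhead:

      using System.Collections.Generic;
      using System.Linq;

      // Succinct: one expression, but each element passes through a delegate call.
      static List<int> EvensSuccinct(IEnumerable<int> source)
      {
          return source.Where(n => n % 2 == 0).ToList();
      }

      // Verbose: more lines, but a plain loop with no delegate indirection.
      static List<int> EvensVerbose(IEnumerable<int> source)
      {
          var result = new List<int>();
          foreach (int n in source)
          {
              if (n % 2 == 0)
              {
                  result.Add(n);
              }
          }
          return result;
      }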

    Read the article

  • Curious: Could LLVM be used for Infocom z-machine code, and if so how? (in general)

    - by jonhendry2
    Forgive me if this is a silly question, but I'm wondering if/how LLVM could be used to obtain a higher-performance Z-machine VM for interactive fiction. (If it could be used, I'm just looking for some high-level ideas or suggestions, not a detailed solution.) It might seem odd to desire higher performance from a circa-1978 technology, but apparently Z-machine games produced by the modern Inform 7 IDE can have performance issues due to the huge number of rules that need to be evaluated with each turn. Thanks! FYI: the Z-machine architecture was reverse-engineered by Graham Nelson and is documented at http://www.inform-fiction.org/zmachine/standards/z1point0/overview.html

    Read the article

  • Optimize included files and uses in Delphi

    - by Roland Bengtsson
    I am trying to improve the performance of Delphi 2007 and Code Insight. In the application there are 483 files added to the DPR file. I don't know if it is my imagination, but I feel that I got better performance from Code Insight simply by re-adding all the files to the DPR. I also think (correct me if I'm wrong) that, for best performance, every file referenced in a uses section should also be included in the DPR file. My question is: does a tool exist that scans the whole project and lists which files are missing from the DPR file and which files can be removed? It would also be nice to have a list of uses entries that can be removed from the PAS files. Regards
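
    As a rough illustration of what such a tool would have to do, here is a naive standalone sketch (hypothetical; the simple regexes ignore comments and compiler conditionals, so its output would only be a starting point). It collects the units named in the DPR's "in '...'" clauses, then reports units used by PAS files that are not in that set.

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Linq;
      using System.Text.RegularExpressions;

      class DprScan
      {
          static void Main(string[] args)
          {
              string projectDir = args[0];
              // Units registered in the project file: "UnitName in 'UnitName.pas'"
              string dprText = File.ReadAllText(
                  Directory.GetFiles(projectDir, "*.dpr").First());
              var inDpr = new HashSet<string>(
                  Regex.Matches(dprText, @"(\w+)\s+in\s+'", RegexOptions.IgnoreCase)
                       .Cast<Match>().Select(m => m.Groups[1].Value),
                  StringComparer.OrdinalIgnoreCase);

              foreach (string pas in Directory.GetFiles(
                  projectDir, "*.pas", SearchOption.AllDirectories))
              {
                  // First uses clause only; naive, but enough for a report.
                  Match uses = Regex.Match(File.ReadAllText(pas),
                      @"\buses\b\s+(.*?);",
                      RegexOptions.IgnoreCase | RegexOptions.Singleline);
                  if (!uses.Success) continue;
                  foreach (string entry in uses.Groups[1].Value.Split(','))
                  {
                      string name = Regex.Match(entry.Trim(), @"^\w+").Value;
                      if (name.Length > 0 && !inDpr.Contains(name))
                          Console.WriteLine("{0}: {1} is not in the DPR",
                              Path.GetFileName(pas), name);
                  }
              }
          }
      }

    Standard RTL/VCL units (SysUtils, Classes, etc.) would show up in the report too, so a real tool would need a whitelist for them.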

    Read the article

  • One big call vs. multiple smaller TSQL calls

    - by BrokeMyLegBiking
    I have an ADO.NET/TSQL performance question. We have two options in our application:

      1) One big database call with multiple result sets, then in code step through each result set and populate my objects. This results in one round trip to the database.
      2) Multiple small database calls.

    There is much more code reuse with option 2, which is an advantage of that option. But I would like some input on the performance cost. Are two small round trips twice as slow as one big round trip to the database, or is it just a small, say 10%, performance loss? We are using C# 3.5 and SQL Server 2008 with stored procedures and ADO.NET.
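
    For reference, a minimal sketch of option 1 in ADO.NET, using SqlDataReader.NextResult() to step through the result sets returned by the single round trip (the stored procedure name and the populate steps are placeholders, not from the question):

      using System.Data;
      using System.Data.SqlClient;

      static void LoadAll(string connectionString)
      {
          using (var conn = new SqlConnection(connectionString))
          using (var cmd = new SqlCommand("dbo.GetEverything", conn)) // hypothetical proc
          {
              cmd.CommandType = CommandType.StoredProcedure;
              conn.Open();
              using (SqlDataReader reader = cmd.ExecuteReader())
              {
                  while (reader.Read()) { /* populate the first set of objects */ }
                  reader.NextResult(); // advance to the next result set
                  while (reader.Read()) { /* populate the second set of objects */ }
              }
          }
      }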

    Read the article

  • Int PK inner join Vs Guid PK inner Join on SQL Server. Execution plan.

    - by bigb
    I just did some testing of an int PK join versus a GUID PK join. The table structures and record counts are shown in the attached image. The performance of CRUD operations using EF4 is pretty similar in both cases. As we know, an int PK performs better than a string-based one, and accordingly the SQL Server execution plans for the INNER JOINs are quite different; the execution plan is attached. As I understand the plan from the attached image, the int join performs better because its clustered index scan takes fewer resources and it goes two ways - am I right? Could someone explain this execution plan in more detail?

    Read the article

  • Why does derivative trading position always require C++ knowledge?

    - by Jeffrey
    I’ve never worked in a trading environment before, and I was curious to see that a few of the trading houses seem to use C#, but most of them rely heavily on C++. Why is that? Is it because C++ is better performance-wise? Is it because of legacy code bases? Is it because of cross-platform issues? What about dynamic languages (Ruby, Python)? Are they too slow for this kind of work in terms of performance? Updated: if reliability and performance are important, would Erlang be the "next big thing" in trading platforms?

    Read the article

  • Efficiency of the .NET garbage collector

    - by Jonas B
    OK, here's the deal. There are some people who put their lives in the hands of .NET's garbage collector and some who simply won't trust it. I am one of those who partially trust it, as long as the code is not extremely performance-critical (I know, I know... performance-critical and .NET are not the favored combination), in which case I prefer to manually dispose of my objects and resources. What I am asking is whether there are any facts on how efficient or inefficient, performance-wise, the garbage collector really is. Please don't share any personal opinions or likely assumptions based on experience; I want unbiased facts. I also don't want any pro/con discussions, because they won't answer the question. Thanks
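
    One way to get numbers for your own workload rather than opinions is to measure. A rough sketch (not a rigorous benchmark harness) that brackets an allocation-heavy workload with timing and collection counts:

      using System;
      using System.Diagnostics;

      static void MeasureGc(Action workload)
      {
          int gen0Before = GC.CollectionCount(0);
          int gen2Before = GC.CollectionCount(2);
          var sw = Stopwatch.StartNew();
          workload(); // whatever allocation-heavy code you care about
          sw.Stop();
          Console.WriteLine("Elapsed: {0} ms, Gen0 collections: {1}, Gen2 collections: {2}",
              sw.ElapsedMilliseconds,
              GC.CollectionCount(0) - gen0Before,
              GC.CollectionCount(2) - gen2Before);
      }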

    Read the article

  • Best and safest Java Profiler for production use?

    - by Pete
    I'm looking for a Java profiler for use in a very high-demand production environment, either commercial or free, that meets all of the following requirements:

      - Lightweight integration with code (no recompile with special options, no code hooks, etc.). Dropping some profiler-specific .jars alongside the application code is OK.
      - Able to connect to and disconnect from the JVM without restarting the application.
      - No impact on performance when profiling is not active.
      - Negligible impact on performance when profiling is active; very slight degradation is acceptable.
      - Does all the 'expected' things a profiler does: time spent in each method to find hotspots, object allocation/memory profiling, etc.

    Essentially, I need something that can sit dormant in production when everything is fine, without anyone knowing or caring that it is there, but then let me connect to it, hassle- and degradation-free, to pinpoint hard-to-find problems like hotspots and synchronization issues.

    Read the article

  • Is there a faster way to draw text?

    - by mystify
    Shark complains about a big performance hit on this line, which takes something like 80% of the CPU time. I have a counter that is updated very frequently, and performance seriously suffers. It's a custom UILabel subclass with -drawRect: implemented. Every time the counter value changes, this is used to draw the new text:

      [self.text drawInRect:textRect withFont:correctedFont lineBreakMode:self.lineBreakMode alignment:self.textAlignment];

    When I comment this line out, performance rocks - it's smooth and fast. So Shark isn't wrong about this. But what could I do to improve it? Maybe go a level deeper? Does that make any sense? Is drawing text really so incredibly heavy?

    Read the article

  • Namespacing technique in JavaScript, recommended? performant? issues to be aware of?

    - by Bjartr
    In a project I am working on I am structuring my code as follows:

      MyLib = {
          AField: 0,
          ASubNamespace: {
              AnotherField: "value",
              AClass: function(param) {
                  this.classField = param;
                  this.classFunction = function() {
                      // stuff
                  };
              }
          },
          AnotherClass: function(param) {
              this.classField = param;
              this.classFunction = function() {
                  // stuff
              };
          }
      };

    and so on like that to do stuff like:

      var anInstance = new MyLib.ASubNamespace.AClass("A parameter.");

    Is this the right way to go about achieving namespacing? Are there performance hits, and if so, how drastic? Do performance degradations stack as I nest deeper? Are there any other issues I should be aware of when using this structure? I care about every little bit of performance because it's a library for realtime graphics, so I'm taking any overhead very seriously.

    Read the article

  • Django Project Done and Working. Now What?

    - by Rodrogo
    Hi, I just finished what I would call a small Django project, and pretty soon it's going live. It's only 6 models, but it has a fairly complex view layer and a lot of record saving and retrieving. Setting aside the huge number of bugs that will probably fill my inbox to the top, what would be the next step towards a website with the best performance? What could be tweaked? I have been using JMeter a lot recently and feel confident that I have a good baseline for future performance comparisons, but the thing is: I'm not sure what the best starting point is, since I'm a greedy bastard who wants to work the least possible and gather the best results. For instance, should I take an infrastructure approach, like a distributed database, or should I go after the code itself - and in that case, is there something that specifically results in better performance? In your experience, what pays off more? Personal anecdotes are welcome, but fact-based opinions are even more so. :) Thanks very much.

    Read the article

  • Share Files and Folders and Internet between Guest OS and the Host in Hyper-V

    - by Manesh Karunakaran
    Those who are familiar with the Virtual PC, VMware and VirtualBox environments will be quite irritated to find that there is no direct way to share files from the host machine to the virtualized guest environment in Hyper-V. This is a good thing from a CIO perspective, because it gives the virtualized environments excellent isolation, but for developer junkies like us it is an irritant, especially for those who have nuked their Windows 7 OS and installed Windows Server 2008 R2 for all the SharePoint friendliness it offers. Here's a quick 5-minute how-to on enabling shared folders and Internet access for Hyper-V images, for those who are still struggling with this.

    Step 1: Add a virtual network adapter to your guest OS. Shut down the guest machine, go to its settings and add a virtual network adapter, as shown in the images below.

    Step 2: Enable virtual networking in Hyper-V. Setting this up is very easy. In the Hyper-V Manager, under Actions (right panel), click Virtual Network Manager. In the Virtual Network Manager, in the Create virtual network panel, select Internal and click the Add button. At this point, if you open Control Panel\Network and Internet\Network Connections, you will be able to see the new network adapter. Rename it to something more meaningful than Network Adapter X. Now you can add this network to each of your virtual machines, but at this point, unless you assign an IP address to each connection, you won't be able to do much.

    Step 3: Enable Internet Connection Sharing so that guest OSes can also connect to the Internet. To enable ICS, follow these steps: click on the network icon in the tray of your host machine and select Network and Sharing Center. From there, click Manage network connections. Select the network adapter that you use to access the Internet, right-click it and select Properties. In the properties dialog, select the Sharing tab. On this tab, check the box that says "Allow other network users..." and then set the home networking connection to be the network adapter created above (now you see why I said to rename it to something useful). Now your virtual machines that have this network connection will automatically get an IP address and will be able to connect to the Internet (provided your Internet connection is working). Because each adapter also gets an automatic address, you can now share files and folders between your host and your virtual machines, which is important since you can't just drag and drop files as you can with Virtual PC.

    Step 4: Create a shared folder on the host machine and use it in the guest machine. Right-click on the folder that you want to share, select 'Share with\Specific people' and specify who can access the share. Open the guest OS from Hyper-V, navigate to Start > Run and type in the address of the share (or map a drive to it). Bingo! The share opens! :)

    Now you can share as many files and folders as you want between the host and the guest, and you also have Internet access inside the virtual machines. Hope that helps.

    Read the article

  • Get ListView Visible items

    - by Vincent Piel
    I have a ListView which might contain a lot of items, so it is virtualized and recycles items. It does not use sorting. I need to refresh some displayed values, but when there are too many items, updating everything is too slow, so I would like to refresh only the visible items. How can I get a list of all currently displayed items? I tried looking in the ListView and in the ScrollViewer, but I still have no idea how to achieve this. The solution must NOT iterate over all items to test whether they are visible, because that would be too slow. I'm not sure code or XAML would be useful; it is just a virtualized/recycling ListView with its ItemsSource bound to an array.

    Edit - answer: thanks to akjoshi, I found the way. Get the ScrollViewer of the ListView (with a FindDescendant-style method, which you can write yourself with the VisualTreeHelper). Then use its ScrollViewer.VerticalOffset, which is the index of the first item shown, and ScrollViewer.ViewportHeight, which is the count of items shown. Note: CanContentScroll must be true.
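
    A minimal sketch of that approach (the method names FindScrollViewer and RefreshVisibleItems are illustrative, not from the original answer):

      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Media;

      static ScrollViewer FindScrollViewer(DependencyObject root)
      {
          // Walk the visual tree until a ScrollViewer turns up.
          for (int i = 0; i < VisualTreeHelper.GetChildrenCount(root); i++)
          {
              DependencyObject child = VisualTreeHelper.GetChild(root, i);
              ScrollViewer viewer = child as ScrollViewer ?? FindScrollViewer(child);
              if (viewer != null) return viewer;
          }
          return null;
      }

      static void RefreshVisibleItems(ListView listView)
      {
          ScrollViewer viewer = FindScrollViewer(listView);
          if (viewer == null) return;
          // With CanContentScroll=true, offsets are measured in items, not pixels.
          int first = (int)viewer.VerticalOffset;  // index of the first visible item
          int count = (int)viewer.ViewportHeight;  // number of visible items
          for (int i = first; i < first + count && i < listView.Items.Count; i++)
          {
              // refresh only listView.Items[i]
          }
      }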

    Read the article

  • More than 100,000 articles!

    - by developerit
    In one month, we have already passed 100,000 articles, and we continue to crawl! We plan on hitting 250,000 total articles next month. Due to the large amount of data we are gathering, we are planning to update our SQL stored procedures to improve performance. We may also migrate to SQL Server 2008 Enterprise, as we are currently running on SQL Server 2005 Express Edition... We are at 400 MB of data, getting closer and closer to the 2 GB limit. Stay tuned for more info, and browse fresh articles about web development daily.

    Read the article

  • SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual database

    - by pinaldave
    I recently downloaded Idera's SQL virtual database and tested it. A few things about this tool caught my attention.

    My Scenario

    It is quite common in real life that observing or retrieving older data is sometimes necessary; however, the data changes as time passes. Our full database backup was 40 GB in size, and restoring it on our production server usually takes around 16 to 22 minutes, depending on the load on the server. This time range varies from one server to another according to the configuration of the machine. Some other issues we used to have are the following:

      - To restore the large 40 GB database, we needed at least that much free space on our production server.
      - Once in a while we even had to make changes in the restored database and then use the changed, restored database for our purposes, making the process even more time-consuming.

    My Solution

    I had heard a lot about Idera's SQL virtual database tool. Right after we started to test it, we found that it really delivers what it promises. The software was very easy to use, and we were able to restore our database from backup in less than 2 minutes, sparing us the usual 16-22 minutes; the needful was finished in a total of 10 minutes. Another interesting observation is that no additional space is needed to restore the database: not a single additional MB on the drive is required. We can use the database in the same way as our regular database, with no additional configuration or setup. The most relevant points of this product, based on my initial experience:

      - Quick restoration of the database backup
      - No additional space required for database restoration
      - The virtual database has no physical .MDF or .LDF
      - The restored database is, in fact, the backup file converted into a virtual database
      - DDL and DML queries can be executed against this virtually restored database
      - Regular backup operations can be run against the virtual database, creating a physical .bak file for future use
      - No degradation in performance was observed on either the original database or the restored virtual database
      - Additional T-SQL queries can be run against the virtual database

    Well, this summarizes my quick review. As I was saying, I am very impressed with the product and plan to explore it further. There are many features in this tool which I think can be very useful if properly understood. I took a few screenshots using my demo database afterwards. Let us see what else this tool can do besides the activities mentioned. I am surprised by its performance, so I want to know exactly how this feature works, specifically why it does not create any additional files and yet still allows updates on the virtually restored database. I guess I will have to send an e-mail to the developers at Idera and try to figure this out with them. I think this tool is very useful, and it delivers a level of performance well beyond what I expected. Soon, I will write a review of additional uses of SQL virtual database. If you are using SQL virtual database in your production environment, I am eager to learn more about it and your experience while using it.

    The 'Virtual' Part of virtual database

    When I set out to test this software, I thought virtual database had something to do with Hyper-V or virtualization. In fact, a virtual database is a kind of database which shows up in your SQL Server Management Studio without actually being restored or even created. The tool creates a database in SSMS from the backup of that same database; the backup, however, works virtually the same way as the original database.

    Potential Usage of virtual database

    As soon as I described this tool to my teammate, his very first reaction was, "Hey, if we have this, then there is no need for log shipping." I find his comment very interesting, as log shipping is something where logs are moved to another server without the database being updated from them; I would rather compare a virtual database with snapshot replication. In fact, wherever a snapshot-replicated database can be used, a virtual database can be similarly used and configured. I totally believe we can use it for reporting purposes. In fact, once this database was configured, I came to think the uses of this tool are unlimited. I will have to spend some more time studying it and will get back to you.

    Click on the images to see larger versions. [Screenshots: virtual database console; hard drive space before virtual database setup; Attach Full Backup screen; backup on hard drive; Attach Full Backup screen with settings; virtual database setup in less than 60 seconds; virtual database setup online; hard drive space after virtual database setup; point-in-time recovery option (timeline view); virtual database summary; no performance difference between a regular DB and a virtual DB]

    Please note that all SQL Server MVPs get a free license of this software.

    Reference: Pinal Dave (http://blog.SQLAuthority.com), Idera (virtual database)

    Read the article
