Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 367/837

  • fastest in-memory cache for XslCompiledTransform

    - by rudnev
    I have a set of XSLT stylesheet files. I need the fastest possible performance from XslCompiledTransform, so I want to keep an in-memory representation of these stylesheets. I can load them into an in-memory collection as IXPathNavigable instances on application start, and then load each IXPathNavigable into a singleton XslCompiledTransform on each request. But this works only for stylesheets without xsl:import or xsl:include (xsl:import works only with files). Alternatively, I can cache many XslCompiledTransform instances, one per template. Is that reasonable? Are there other ways? Which is best? And what other tips are there for improving the performance of the MS XSLT processor?
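    One possible direction, shown only as a hedged sketch that is not part of the original question: cache one XslCompiledTransform per stylesheet and pass an XmlUrlResolver to Load so that xsl:import and xsl:include still resolve from disk; the compiled transform can then be reused across requests, since Transform is documented as thread-safe once Load has completed. The cache class and paths below are hypothetical, and ConcurrentDictionary needs .NET 4 (a Dictionary guarded by a lock works on earlier versions).

        // Hedged sketch: one compiled transform per stylesheet, built once and reused.
        using System.Collections.Concurrent;
        using System.Xml;
        using System.Xml.Xsl;

        static class XsltCache
        {
            private static readonly ConcurrentDictionary<string, XslCompiledTransform> Cache =
                new ConcurrentDictionary<string, XslCompiledTransform>();

            public static XslCompiledTransform Get(string stylesheetPath)
            {
                return Cache.GetOrAdd(stylesheetPath, path =>
                {
                    var xslt = new XslCompiledTransform();
                    // The resolver lets xsl:import / xsl:include references load relative to the file.
                    xslt.Load(path, XsltSettings.Default, new XmlUrlResolver());
                    return xslt;
                });
            }
        }

    A per-request call would then be XsltCache.Get(path).Transform(...), paying the compilation cost only once per stylesheet.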

    Read the article

  • ASMX Still slow after 'Generate serialization assembly'

    - by Buzzer
    This question is related to http://stackoverflow.com/questions/784918/asmx-web-service-slow-first-request. I inherited a proxy to a legacy ASMX service. As the post above states, the first call is literally 10 times slower than the subsequent calls. I went ahead and turned on 'Generate serialization assembly' on the project that contains the proxy, and the 'serializers' assembly is actually generated. However, I haven't seen any performance increase at all. Do I need to do anything else beyond making sure the 'serializers' assembly is in the client's bin directory? Do I have to 'link' the proxy to the 'serializers' assembly during proxy generation (wsdl.exe)? I'm stuck at this point. J Saunders, where are you? :)
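    A hedged diagnostic sketch, not from the original post: after making one call through the proxy, list the loaded assemblies and check whether the pre-generated *.XmlSerializers assembly was actually picked up. If it is missing, ASMX fell back to generating a temporary serialization assembly at runtime, which is exactly the slow first call described above. The class name below is hypothetical.

        // Hedged sketch: verify that the pre-generated serialization assembly is loaded.
        using System;
        using System.Linq;

        static class SerializerCheck
        {
            public static void Dump()
            {
                var serializerAssemblies = AppDomain.CurrentDomain.GetAssemblies()
                    .Where(a => a.GetName().Name.EndsWith("XmlSerializers", StringComparison.OrdinalIgnoreCase));

                foreach (var asm in serializerAssemblies)
                {
                    // Expect something like "MyProxyAssembly.XmlSerializers, Version=..."
                    Console.WriteLine(asm.FullName);
                }
            }
        }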

    Read the article

  • What are some best practices and "rules of thumb" for creating database indexes?

    - by Ash
    I have an app which cycles through a huge number of records in a database table and performs a number of SQL and .NET operations on records within that database (currently I am using Castle.ActiveRecord on PostgreSQL). I added some basic btree indexes on a couple of the fields and, as you would expect, the performance of the SQL operations increased substantially. Wanting to make the most of DBMS performance, I want to make better-educated choices about what I should index on all my projects. I understand that there is a detriment to performance when doing inserts (as the database needs to update the index as well as the data), but what suggestions and best practices should I consider when creating database indexes? How do I best select the fields, or combinations of fields, for a set of database indexes (rules of thumb)? Also, how do I best select which index to use as a clustered index? And when it comes to the access method, under what conditions should I use a btree over a hash, a GiST or a GIN index (and what are they, anyway)?
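    As one concrete illustration of the usual rule of thumb ("index the columns you filter and join on, equality columns before range columns"), here is a hedged sketch with hypothetical table and column names, issued through Npgsql since the poster is on .NET; it is not code from the original question.

        // Hedged sketch, hypothetical schema: a composite btree index shaped to match a query
        // that filters by equality on customer_id and by range on created_at.
        using Npgsql;

        class IndexExample
        {
            static void Main()
            {
                using (var conn = new NpgsqlConnection("Server=localhost;Database=mydb;User Id=me;Password=secret"))
                {
                    conn.Open();

                    // Equality column first, range column second, matching the query below.
                    new NpgsqlCommand(
                        "CREATE INDEX ix_orders_customer_created ON orders (customer_id, created_at)",
                        conn).ExecuteNonQuery();

                    // A query of this shape can be satisfied by the index above:
                    //   SELECT id, total FROM orders
                    //   WHERE customer_id = 42 AND created_at >= '2010-01-01';
                    // Running EXPLAIN in psql confirms whether the planner actually uses it.
                }
            }
        }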

    Read the article

  • Avoid implicit conversion from date to timestamp for selects with Oracle using Hibernate

    - by sapporo
    I'm using Hibernate 3.2.7.GA criteria queries to select rows from an Oracle Enterprise Edition 10.2.0.4.0 database, filtering by a timestamp field. The field in question is of type java.util.Date in Java and DATE in Oracle. It turns out that the field gets mapped to java.sql.Timestamp, and Oracle converts all rows to TIMESTAMP before comparing to the passed-in value, bypassing the index and thereby ruining performance. One solution would be to use Hibernate's sqlRestriction() along with Oracle's TO_DATE function. That would fix performance, but requires rewriting the application code (lots of queries). So is there a more elegant solution? Since Hibernate already does type mapping, could it be configured to do the right thing? Update: The problem occurs in a variety of configurations, but here's one specific example: Oracle Enterprise Edition 10.2.0.4.0, Oracle JDBC Driver 11.1.0.7.0, Hibernate 3.2.7.GA, Hibernate's Oracle10gDialect, Java 1.6.0_16.

    Read the article

  • What's wrong with my logic here?

    - by stu
    In Java they say don't concatenate Strings; instead you should create a StringBuffer, keep appending to that, and when you're all done, use toString() to get a String object out of it. Here's what I don't get. They say to do this for performance reasons, because concatenating strings creates lots of temporary objects. But if the goal were performance, you'd use a language like C/C++ or assembly. The argument for using Java is that it is a lot cheaper to buy a faster processor than it is to pay a senior programmer to write fast, efficient code. So on the one hand, you're supposed to let the hardware take care of the inefficiencies, but on the other hand, you're supposed to use StringBuffers to make Java more efficient. While I see that you can do both, use Java and StringBuffers, my question is: where is the flaw in the logic that says you either use a faster chip or you spend extra time writing more efficient software?

    Read the article

  • How to set that compiler flag?

    - by mystify
    Shark told me this: "This instruction is the start of a loop that is not aligned to a 16-byte address boundary. For optimal performance, you should align the start of a hot loop using a compiler directive. With gcc 3.3 or later, use the -falign-loops=16 compiler flag."

    for (int i = 0; i < 4; i++) { // line with the info
        // ...code
    }

    How would I set that flag, and does it really improve performance?

    Read the article

  • IIS 7.0 - Every site suddenly redirecting root request to forms authentication

    - by Pittsburgh DBA
    Suddenly, IIS 7.0 is redirecting every request for the root of any domain hosted on the box to ~/Account/Logon, which is our Forms Authentication redirect. Additionally, some JavaScript and image requests are being similarly redirected, but other .aspx pages are not. This is not desirable. Nobody will admit to changing anything. Any ideas? EDIT: It turns out that something has gone wrong with the disk permissions. Can anyone point me to the way things are supposed to be in Windows Server 2008 for a standard ASP.NET installation? The disk permissions are out of whack now.

    Read the article

  • Managing EntityConnection lifetime

    - by kervin
    There have been many questions on managing EntityContext lifetime, e.g. http://stackoverflow.com/questions/813457/instantiating-a-context-in-linq-to-entities. I've come to the conclusion that the entity context should be considered a unit of work and therefore not reused. Great. But while doing some research into speeding up my database access, I ran into this blog post... Improving Entity Framework Performance. The post argues that EF's poor performance compared to other frameworks is often due to the EntityConnection object being created each time a new EntityContext object is needed. To test this I manually created a static EntityConnection in Global.asax.cs Application_Start(). I then converted all my context using statements to using (MyObjContext currContext = new MyObjContext(globalStaticEFConnection)) { .... } This seems to have sped things up a bit, without any errors so far as far as I can tell. But is this safe? Does using an application-wide static EntityConnection introduce race conditions? Best regards, Kervin
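    For reference, a hedged sketch of the pattern being described, using the poster's placeholder context name plus a hypothetical holder class; it is not an endorsement. EntityConnection is not documented as thread-safe, so sharing one connection across concurrent requests is precisely the race-condition risk being asked about.

        // Hedged sketch of the question's pattern (holder class name is hypothetical).
        using System.Data.EntityClient;

        public static class EfConnectionHolder
        {
            public static EntityConnection StaticConnection;

            // Called once from Application_Start() in Global.asax.cs.
            public static void Initialize()
            {
                // "name=MyObjContext" refers to the EF connection string in web.config;
                // building the connection loads the model metadata a single time.
                StaticConnection = new EntityConnection("name=MyObjContext");
            }
        }

        // Per-request usage, as in the question:
        //   using (var currContext = new MyObjContext(EfConnectionHolder.StaticConnection))
        //   {
        //       // ... queries ...
        //   }

    A common compromise is to keep one context per unit of work and let EF's own per-AppDomain metadata caching absorb the startup cost, rather than sharing the connection object itself.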

    Read the article

  • Map the physical file path in asp.net mvc

    - by rmassart
    Hi, I am trying to read an XSLT file from disk in my ASP.NET MVC controller. What I am doing is the following:

    string filepath = HttpContext.Request.PhysicalApplicationPath;
    filepath += "/Content/Xsl/pubmed.xslt";
    string xsl = System.IO.File.ReadAllText(filepath);

    However, half way down this thread on forums.asp.net there is the following quote: "HttpContext.Current is evil and if you use it anywhere in your mvc app you are doing something wrong because you do not need it." Whilst I am not using "Current", I am wondering what the best way is to determine the absolute physical path of a file in MVC. For some reason (I don't know why!) HttpContext doesn't feel right to me. Is there a better (or recommended/best-practice) way of reading files from disk in ASP.NET MVC? Thanks for your help, Robin
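    A hedged sketch of two common alternatives, not taken from the original post; the controller and action names are hypothetical, while "~/Content/Xsl/pubmed.xslt" is the path from the question.

        // Hedged sketch: resolving an application-relative path without HttpContext.Current.
        using System.Web.Hosting;
        using System.Web.Mvc;

        public class PubmedController : Controller
        {
            public ActionResult Transform()
            {
                // Option 1: the controller's own Server property (HttpServerUtilityBase),
                // which is mockable in unit tests via HttpContextBase.
                string path = Server.MapPath("~/Content/Xsl/pubmed.xslt");

                // Option 2: HostingEnvironment.MapPath, which works even outside a request.
                string samePath = HostingEnvironment.MapPath("~/Content/Xsl/pubmed.xslt");

                string xsl = System.IO.File.ReadAllText(path);
                return Content(xsl, "text/xml");
            }
        }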

    Read the article

  • cannot access new drive through nfs

    - by l.thee.a
    I am running nfs-kernel-server to access my files on my Linux machine (Ubuntu, /share). The disk I have been using is full, so I have added a new disk and mounted it at /share/data. My other PC mounts the /share folder at /mnt/nfs, but cannot see the contents of /mnt/nfs/data. I have tried adding /share/data to /etc/exports, but it did not help. What do I do?

    Read the article

  • Virtual Function Implementation

    - by Gokul
    Hi, I keep hearing this statement: switch..case is evil for code maintenance but provides better performance (since the compiler can inline things, etc.), while virtual functions are very good for code maintenance but incur a performance penalty of two pointer indirections. Say I have a base class with two subclasses (X and Y) and one virtual function, so there will be two virtual tables. The object holds a pointer that selects which virtual table to use. So for the compiler, it is more like:

    switch (object's function ptr)
    {
        case 0x....: X->call(); break;
        case 0x....: Y->call();
    };

    So why should a virtual function cost more, if it can be implemented this way, given that the compiler can do the same inlining and other optimizations here? Or explain to me why it was decided not to implement virtual function execution in this way. Thanks, Gokul.

    Read the article

  • What are all the disadvantages of using files as a means of communicating between two processes?

    - by Manny
    I have legacy code which I need to improve for performance reasons. My application comprises two executables that need to exchange certain information. In the legacy code, one exe writes to a file (the file name is passed as an argument to the exe) and the second executable first checks if such a file exists; if it does not exist, it checks again, and when it finds the file it reads its contents. This is how information is transferred between the two executables. The way the code is structured, the second executable is successful on the first try. Now I have to clean this code up, and I was wondering what the disadvantages are of using files as a means of communication rather than some inter-process communication mechanism like pipes. Is opening and reading a file more expensive than using pipes? Are there any other disadvantages? And how significant do you think the performance degradation would be? The legacy code runs on both Windows and Linux.
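    Purely as an illustration of the pipe idea rather than a drop-in fix (the original executables' language is not stated, and the pipe name below is hypothetical), here is a hedged C# sketch of the same hand-off done over a named pipe instead of a polled file; the consumer blocks on Connect() rather than repeatedly checking whether a file exists.

        // Hedged illustration: producer/consumer hand-off over a named pipe (System.IO.Pipes).
        using System.IO;
        using System.IO.Pipes;

        class PipeDemo
        {
            // Producer side: replaces "write a file and let the other process find it".
            static void Produce(string payload)
            {
                using (var server = new NamedPipeServerStream("legacy-data", PipeDirection.Out))
                {
                    server.WaitForConnection();            // blocks until the consumer attaches
                    using (var writer = new StreamWriter(server))
                    {
                        writer.WriteLine(payload);
                        writer.Flush();
                    }
                }
            }

            // Consumer side: replaces "poll until the file exists, then read it".
            static string Consume()
            {
                using (var client = new NamedPipeClientStream(".", "legacy-data", PipeDirection.In))
                {
                    client.Connect();                      // no existence-polling loop needed
                    using (var reader = new StreamReader(client))
                    {
                        return reader.ReadLine();
                    }
                }
            }
        }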

    Read the article

  • OpenCV to use in memory buffers or file pointers

    - by The Unknown
    The two functions in OpenCV, cvLoadImage and cvSaveImage, accept file paths as arguments. For example, when saving an image it's cvSaveImage("/tmp/output.jpg", dstIpl), and it writes to the disk. Is there any way to feed these functions a buffer already in memory, so that instead of a disk write the output image ends up in memory? I would also like to know this for both cvSaveImage and cvLoadImage (read and write to memory buffers). Thanks! My goal is to store the encoded (JPEG) version of the file in memory. The same goes for cvLoadImage: I want to load a JPEG that's in memory into the IplImage format.

    Read the article

  • How to scale MongoDB

    - by terence410
    I know that MongoDB can scale vertically, but what about when I run out of disk? I am currently using EC2 with EBS. As you know, I have to allocate an EBS volume with a fixed size. What if the MongoDB data grows bigger than the EBS size? Do I have to create a larger EBS volume and copy the files over? Or should I start more MongoDB instances, each connected to a different EBS disk? In that case, I could connect to a different instance for different databases.

    Read the article

  • Refactor/rewrite code or continue?

    - by Dan
    I just completed a complex piece of code. It works to spec, it meets performance requirements, etc., but I feel a bit anxious about it and am considering rewriting and/or refactoring it. Should I do this (spending time that could otherwise be spent on features that users will actually notice)? The reasons I feel anxious about the code are:

    - The class hierarchy is complex and not obvious.
    - Some classes don't have a well-defined purpose (they do a number of unrelated things).
    - Some classes use others' internals (they're declared as friend classes) to bypass the layers of abstraction for performance, but I feel they break encapsulation by doing this.
    - Some classes leak implementation details (e.g., I changed a map to a hash map earlier and found myself having to modify code in other source files to make the change work).
    - My memory management/pooling system is kind of clunky and less than transparent.

    These look like excellent reasons to refactor and clean the code, aiding future maintenance and extension, but it could be quite time-consuming. Also, I'll never be perfectly happy with any code I write anyway... So, what does Stack Overflow think? Clean code or work on features?

    Read the article

  • How to figure the read/write ratio in Sql Server?

    - by Bill Paetzke
    How can I query the read/write ratio in SQL Server 2005? Are there any caveats I should be aware of? Perhaps it can be found in a DMV query, a standard report, a custom report (i.e. the Performance Dashboard), or by examining a SQL Profiler trace; I'm not sure exactly. Why do I care? I'm taking time to improve the performance of my web app's data layer. It deals with millions of records and thousands of users. One of the points I'm examining is database concurrency. SQL Server uses pessimistic concurrency by default, which is good for a write-heavy app. If my app is read-heavy, I might switch it to optimistic concurrency (isolation level: read committed snapshot) like Jeff Atwood did with StackOverflow.
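    One DMV-based way to get a rough answer, shown as a hedged sketch rather than the definitive method: sys.dm_db_index_usage_stats counts user_seeks, user_scans and user_lookups (reads) and user_updates (writes) since the last instance restart, so the ratio is an approximation. The connection string below is a hypothetical placeholder.

        // Hedged sketch: rough read/write ratio for the current database from index usage stats.
        using System;
        using System.Data.SqlClient;

        class ReadWriteRatio
        {
            static void Main()
            {
                const string sql = @"
                    SELECT SUM(user_seeks + user_scans + user_lookups) AS reads,
                           SUM(user_updates)                           AS writes
                    FROM   sys.dm_db_index_usage_stats
                    WHERE  database_id = DB_ID()";

                using (var conn = new SqlConnection("Server=.;Database=MyAppDb;Integrated Security=true"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        if (reader.Read() && reader["reads"] != DBNull.Value)
                        {
                            Console.WriteLine("reads = {0}, writes = {1}", reader["reads"], reader["writes"]);
                        }
                    }
                }
            }
        }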

    Read the article

  • Java object caching, which is faster, reading from a file or from a remote machine?

    - by Kumar225
    I am at a point where I need to decide what to do when the caching of objects reaches the configured threshold. Should I store the objects in an indexed file (like the one provided by JCS) and read them from the file (file I/O) when required, or have the objects stored in a distributed cache (network, serialization, deserialization)? We are using Solaris as the OS. Update: adding some more information. I am asking this question to determine whether I can switch to distributed caching. The remote server that will hold the cache will have more memory and a better disk, and will be used only for caching. One of the reasons we cannot increase the number of locally cached objects is that they are stored in the JVM heap, which has limited memory (we are using a 32-bit JVM). Update: Thanks, we finally ended up choosing Coherence as our cache product. It provides many cache configuration topologies: in-process vs. remote vs. disk, etc.

    Read the article

  • How do I merge a local branch into TFS

    - by Johnny
    Hi, I did a stupid thing and branched my project on my local disk instead of doing it in TFS. So now I have two projects on my disk: the old one, which has TFS bindings, and the new one, which doesn't. I want to merge those changes back into the TFS project. How would I go about doing that? I can't do a Compare because my local branch has no TFS bindings. There should be some way to compare the differences between the two projects locally and then merge the differences into the old project and check in, but I can't find an easy way of doing that. Any other solutions?

    Read the article

  • use hg to synchronize my project between my two computers

    - by hguser
    Hi: I have two computers: the desktop at my company and the portable computer at home. Now I want to use hg to synchronize the project between them using a USB removable disk, so I wonder how to do this. The project on my desktop is at D:\work\mypro. I use the following command to initialize it:

    hg init

    Then I connect the USB disk, whose volume label is H:, and make a clone using:

    cd H:
    hg init
    hg clone D:\work\mypro mypro-usb

    And on my portable computer I use:

    cd D:
    hg clone H:\mypro-usb mypro-home

    However, I do not know what to do if I modify some files (remove, add or modify) in mypro-home: how do I make mypro-usb change in sync, and also the mypro on my desktop? How do I do it?

    Read the article

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component of an autocomplete feature for an HTML input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an AJAX-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly the set of Wikipedia page titles. For this service speed is obviously of the utmost importance, as responsiveness of the web page is important to the user experience. The current implementation simply loads all concepts into memory in a sorted set and performs a simple log(n) lookup on a user keystroke. The tail set is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It is currently running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32-bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution, as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementation?

    Edits: @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself. @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data. @Jason Day: My original implementation of this was to use a trie, but the memory bloat with that was actually worse than the sorted set due to needing a large number of object references. I'll read up on ternary search trees to see if they could be of use.

    Read the article

  • Make a compiled binary run at native speed flawlessly without recompiling from source on another system?

    - by unknownthreat
    I know that many people, at first glance of the question, may immediately yell out "Java", but no, I know Java's qualities. Allow me to elaborate on my question first. Normally, when we want our program to run at native speed on a system, whether it be Windows, Mac OS X, or Linux, we need to compile from source code. If you want to run a program from another system on your system, you need to use a virtual machine or an emulator. While these tools allow you to use the program you need on the non-native OS, they sometimes have performance problems and glitches. We also have JIT compilers, where the compiler translates the bytecode into native machine language before execution. Performance may increase to a very good extent with a JIT compiler, but it is still not the same as running on a native system. Another program on Linux, WINE, is also a good tool for running Windows programs on a Linux system. I have tried running Team Fortress 2 on it and experimented with some settings. I got ~40 fps on Windows at its mid-high settings at 1280 x 1024. On Linux, I need to turn everything down low at 1280 x 1024 to get ~40 fps. There are two notable things though:

    1. Polygon model settings do not seem to affect the framerate, whether I set them low or high.
    2. When there are post-processing effects, or special effects that require manipulation of the drawn pixels of the current frame, the framerate drops to 10-20 fps.

    From this I can see that normal polygon rendering is just fine, but when it comes to newer rendering methods that require the graphics card to do the work, it slows down to a crawl. Anyway, this question is rather theoretical. Is there anything we can do at all? I see that WINE can run Steam and Team Fortress 2. Although there are flaws, they can run at lower settings. Or perhaps I should also ask: is it possible to translate a whole program from one system to another without recompiling from source and still get native speed? I see that we also have AOT compilers; is it possible to use one for something like this? Or are there so many constraints (such as DirectX calls or differences in software architecture) that it is impossible to have a non-native program run flawlessly at native speed?

    Read the article

  • Entity Framework VS LINQ to SQL VS ADO.NET with stored procedures?

    - by BritishDeveloper
    How would you rate each of them in terms of: performance, speed of development, neat/intuitive/maintainable code, flexibility, and overall? I like my SQL and so have always been a die-hard fan of ADO.NET and stored procedures, but I recently had a play with LINQ to SQL and was blown away by how quickly I was writing out my data access layer, and I have decided to spend some time really understanding either LINQ to SQL or EF... or neither? I just want to check that there isn't a great flaw in any of these technologies that would render my research time useless, e.g. the performance is terrible, or it's cool for simple apps but can only take you so far.

    Read the article

  • Red Hat cluster: Failure of one of two services sharing the same virtual IP tears down IP

    - by js.
    I'm creating a 2+1 failover cluster under Red Hat 5.5 with 4 services of which 2 have to run on the same node, sharing the same virtual IP address. One of the services on each node needs a (SAN) disk, the other doesn't. I'm using HA-LVM. When I shut down (via ifdown) the two interfaces connected to the SAN to simulate SAN failure, the service needing the disk is disabled, the other keeps running, as expected. Surprisingly (and unfortunately), the virtual IP address shared by the two services on the same machine is also removed, rendering the still-running service useless. How can I configure the cluster to keep the IP address up?

    Read the article
