Search Results

Search found 5602 results on 225 pages for 'tuning red gate'.

Page 14 of 225

  • C programming getting back into it - the red pill

    - by JavaRocky
    Can someone provide recommended reading, website resources or best practices to follow when programming in C? I am a proficient software developer with strong skills in Java and PHP. Are there standard libraries these days which people use, like what Spring is to Java? And standard design patterns for managing memory, or even standard libraries for that matter? I want to write solid, maintainable C programs. GO THE RED PILL! :P

    Read the article

  • Background not changing to red

    - by Doug
    http://dougymak.com/jquery/# This is my first time playing with jQuery, and I'm kind of stuck already. I'm expecting the background to change to red on hover, but for some reason it doesn't. Can anyone give me a hand? Thanks!

    Read the article

  • get rid of the red X in IE for non-existent images

    - by hubertg
    I have a third-party website (Confluence) which references images that are secured by a login. If the current user is logged in, the image is shown; if not, the image URL redirects to a login form. Example: when you enter this URL in the browser, a redirect to the login page is done. The problem: IE shows the dreaded red X icon for the image even though there should be just nothing (as in Firefox). Does anyone know how to get around this?

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures for implementing a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it. My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. It will also be easier to implement ACID transactions and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing a temporal database possible almost for free, which is quite nice to have, especially for the web and for data analysis (e.g. trends). All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise only to realize it isn't really a good way to do it. Here's my line of thought:
    - Since all writes are done asynchronously, and a persistent data structure makes it possible not to invalidate the previous (and currently valid) structure, write time isn't really a bottleneck.
    - There is some literature on structures like this designed exactly for disk usage, but it seems to me that those techniques add read overhead to achieve faster writes, whereas I'd prefer exactly the opposite. Also, many of those techniques end up with multi-versioned trees that aren't strictly immutable, which is crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and there should be good garbage-collecting logic if not all versions are to be maintained (otherwise the file size will surely grow dramatically). A delta-compression system could also be considered.
    - Of all the search-tree structures, I really think red-black trees are the closest to what I need, since they require the fewest rotations.
    But there are some possible pitfalls along the way:
    - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case with most web applications. When real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that has to be worked on in a more real-time manner.
    - They could also lead to commit conflicts, though I fail to think of a good example of when that could happen. Then again, commit conflicts can occur in a normal RDBMS too, if two threads are working with the same data, right?
    - The overhead of an immutable interface like this will grow exponentially, everything is doomed to fail soon, and this whole thing is a bad idea.
    Any thoughts? Thanks! Edit: there seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
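
    As an illustration of the path-copying idea that persistent trees rely on, here is a minimal sketch in Java. The class and method names are made up for the example, and the red-black rebalancing rotations are omitted for brevity; it only shows why an insert can produce a new version while every older version stays readable.

        // Minimal sketch of path copying in a persistent (immutable) binary search tree.
        // Red-black rebalancing is omitted; names are illustrative, not from the question.
        final class PNode {
            final int key;
            final PNode left, right;

            PNode(int key, PNode left, PNode right) {
                this.key = key;
                this.left = left;
                this.right = right;
            }

            // Returns a new root; only nodes on the search path are copied,
            // everything else is shared with the previous version.
            static PNode insert(PNode root, int key) {
                if (root == null) return new PNode(key, null, null);
                if (key < root.key) return new PNode(root.key, insert(root.left, key), root.right);
                if (key > root.key) return new PNode(root.key, root.left, insert(root.right, key));
                return root; // key already present, the old version is reused unchanged
            }
        }

        class PersistentTreeDemo {
            public static void main(String[] args) {
                PNode v1 = PNode.insert(null, 10);
                PNode v2 = PNode.insert(v1, 5);   // v1 is still a valid, readable snapshot
                PNode v3 = PNode.insert(v2, 20);  // each insert yields a new version
                System.out.println(v1 != v2 && v2 != v3); // true: three distinct versions
            }
        }

    On disk the object references become file offsets and the copies become appends, which is why this style of structure pairs naturally with the asynchronous, append-only writes assumed in the question.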

    Read the article

  • Tuning recommendations from mysqltuner.pl: query_cache_limit

    - by knorv
    The mysqltuner.pl script gives me the following recommendation:

        query_cache_limit (> 1M, or use smaller result sets)

    And the MySQL status output shows:

        mysql> SHOW STATUS LIKE 'Qcache%';
        +-------------------------+------------+
        | Variable_name           | Value      |
        +-------------------------+------------+
        | Qcache_free_blocks      | 12264      |
        | Qcache_free_memory      | 1001213144 |
        | Qcache_hits             | 3763384    |
        | Qcache_inserts          | 54632419   |
        | Qcache_lowmem_prunes    | 0          |
        | Qcache_not_cached       | 6656246    |
        | Qcache_queries_in_cache | 55280      |
        | Qcache_total_blocks     | 122848     |
        +-------------------------+------------+
        8 rows in set (0.00 sec)

    From the status output above, how can I judge whether or not the suggested increase in query_cache_limit is needed?
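
    One conventional way to read those counters is to compute the cache hit ratio and look at the low-memory prunes; the sketch below just does that arithmetic with the numbers shown above (the formula is a common rule of thumb, not an official MySQL metric).

        // Rough query-cache arithmetic from the SHOW STATUS output above.
        public class QcacheRatio {
            public static void main(String[] args) {
                long hits = 3_763_384L;        // Qcache_hits
                long inserts = 54_632_419L;    // Qcache_inserts
                long notCached = 6_656_246L;   // Qcache_not_cached
                long lowmemPrunes = 0L;        // Qcache_lowmem_prunes

                // Share of SELECTs answered straight from the cache.
                double hitRate = 100.0 * hits / (hits + inserts + notCached);
                System.out.printf("hit rate: %.1f%%%n", hitRate);     // ~5.8%
                System.out.println("lowmem prunes: " + lowmemPrunes); // 0 => the cache is not memory-starved
            }
        }

    With zero low-memory prunes and roughly 1 GB of free cache memory there is plenty of headroom, so one thing to check is how much of the large Qcache_not_cached count comes from result sets above the current limit, which is what the recommendation is aimed at.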

    Read the article

  • Tuning OS X Virtual Memory

    - by dcolish
    I've noticed some really odd results from vm_stat on OS X 10.6. According to this, it's barely hitting the cache. Searching pretty much everywhere I could think of turned up little to explain why the rate is so low. I asked a few friends and they're seeing the same thing. What gives, and how can I make it better?

        Mach Virtual Memory Statistics: (page size of 4096 bytes)
        Pages free: 78609.
        Pages active: 553411.
        Pages inactive: 191116.
        Pages speculative: 6198.
        Pages wired down: 153998.
        "Translation faults": 116031508.
        Pages copy-on-write: 2274338.
        Pages zero filled: 33360804.
        Pages reactivated: 264378.
        Pageins: 1197683.
        Pageouts: 43756.
        Object cache: 20 hits of 1550639 lookups (0% hit rate)

    Read the article

  • Sphinx search distributed index tuning

    - by Andriy Bohdan
    I'm deciding how to split 3 large Sphinx indexes between 3 servers. Each of the 3 indexes is searched separately. Which is more effective? Hosting each index on a separate machine, for example:

        machine1 - index1
        machine2 - index2
        machine3 - index3

    Or splitting each index into 3 parts and hosting one part of each index on each machine, for example:

        machine1 - index1_chunk1, index2_chunk1, index3_chunk1
        machine2 - index1_chunk2, index2_chunk2, index3_chunk2
        machine3 - index1_chunk3, index2_chunk3, index3_chunk3

    Read the article

  • Tuning SQL Server 2008 for web applications

    - by Kibbee
    In one of the Stack Overflow podcasts, I remember Jeff Atwood saying that there was a configuration option in SQL Server 2008 which cuts down on locking and is kind of an alternative to using "with (nolock)" in all your queries. Does anybody know how to enable the feature he was talking about, possibly even Jeff himself? I'm looking at deploying SQL Server 2008 and want to see if using a feature like this would help my web application.
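
    For what it's worth, the setting usually pointed to in this context is read committed snapshot isolation (available since SQL Server 2005), which gives readers a row version instead of letting them block on writers, broadly the effect people reach for with NOLOCK but without dirty reads. A minimal JDBC sketch of enabling it follows; the database name, credentials and connection URL are placeholders, not anything from the question.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Sketch: turning on read committed snapshot isolation for one database.
        // "MyWebAppDb" and the connection details are placeholders.
        public class EnableRcsi {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=master;user=sa;password=changeme");
                     Statement st = con.createStatement()) {
                    // The switch needs the database to be free of other active connections.
                    st.execute("ALTER DATABASE MyWebAppDb SET READ_COMMITTED_SNAPSHOT ON");
                }
            }
        }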

    Read the article

  • Performance tuning of a Hibernate+Spring+MySQL project operation that stores images uploaded by user

    - by Umar
    Hi, I am working on a web project that is Spring + Hibernate + MySQL based. I am stuck at a point where I have to store images uploaded by a user in the database. Although I have written some code that works well for now, I believe things will get messy once the project goes live. Here's my domain class that carries the image bytes:

        @Entity
        public class Picture implements java.io.Serializable {
            long id;
            byte[] data;
            ... // getters and setters
        }

    And here's my controller that saves the file on submit:

        public class PictureUploadFormController extends AbstractBaseFormController {
            ...
            protected ModelAndView onSubmit(HttpServletRequest request,
                    HttpServletResponse response, Object command,
                    BindException errors) throws Exception {
                MultipartFile file;
                // getting MultipartFile from the command object
                ...
                // beginning hibernate transaction
                ...
                Picture p = new Picture();
                p.setData(file.getBytes());
                pictureDAO.makePersistent(p); // this method simply calls getSession().saveOrUpdate(p)
                // committing hibernate transaction
                ...
            }
            ...
        }

    Obviously a bad piece of code. Is there any way I could use an InputStream or Blob to save the data, instead of first loading all the bytes from the user into memory and then pushing them into the database? I did some research on Hibernate's support for Blob and found this in the Hibernate in Action book: "java.sql.Blob and java.sql.Clob are the most efficient way to handle large objects in Java. Unfortunately, an instance of Blob or Clob is only useable until the JDBC transaction completes. So if your persistent class defines a property of java.sql.Clob or java.sql.Blob (not a good idea anyway), you’ll be restricted in how instances of the class may be used. In particular, you won’t be able to use instances of that class as detached objects. Furthermore, many JDBC drivers don’t feature working support for java.sql.Blob and java.sql.Clob. Therefore, it makes more sense to map large objects using the binary or text mapping type, assuming retrieval of the entire large object into memory isn’t a performance killer. Note you can find up-to-date design patterns and tips for large object usage on the Hibernate website, with tricks for particular platforms." Now apparently Blob cannot be used, as it is not a good idea anyway, so what else could be used to improve the performance? I couldn't find any up-to-date design pattern or any useful information on the Hibernate website, so any help/recommendations from stackoverflowers will be much appreciated. Thanks
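
    One direction that is often suggested for this, sketched below under the assumption of Hibernate 3.6+ and Spring's MultipartFile: map the column as java.sql.Blob and build it from the upload's InputStream through Session.getLobHelper(), so the bytes can be handed to the JDBC driver as a stream instead of being buffered in a byte[]. Whether the driver truly streams depends on the JDBC driver and dialect, so treat it as a sketch rather than a guaranteed fix; the class and field names just mirror the question.

        import java.sql.Blob;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.Lob;

        import org.hibernate.Session;
        import org.springframework.web.multipart.MultipartFile;

        // Sketch: mapping the image as a Blob created from the upload stream.
        @Entity
        public class Picture implements java.io.Serializable {
            @Id
            private long id;

            @Lob
            private Blob data;

            public Blob getData() { return data; }
            public void setData(Blob data) { this.data = data; }
        }

        class PictureSaver {
            void save(Session session, MultipartFile file) throws Exception {
                Picture p = new Picture();
                // createBlob(InputStream, length) lets the driver pull the bytes itself,
                // avoiding a full copy of the upload in the web tier.
                p.setData(session.getLobHelper()
                                 .createBlob(file.getInputStream(), file.getSize()));
                session.saveOrUpdate(p);
            }
        }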

    Read the article

  • Tuning JVM (GC) for high responsive server application

    - by elgcom
    I am running an application server on 64-bit Linux with 8 CPU cores and 6 GB of memory. The server must be highly responsive. After some inspection I found that the application running on the server creates a rather huge amount of short-lived objects and has only about 200~400 MB of long-lived objects (as long as there is no memory leak). After reading http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html I use these JVM options:

        -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1 -XX:+UseConcMarkSweepGC

    Result: minor GCs take 0.01~0.02 sec, major GCs take 1~3 sec, and minor GCs happen constantly. How can I further improve or tune the JVM? A larger heap size, or will that take more time per GC? Larger NewSize and MaxNewSize (for the young generation)? Another collector, such as the parallel GC? Is it a good idea to let major GCs take place more often, and if so, how?
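
    One thing that can guide further tuning is measuring how much wall-clock time each collector actually consumes under load, so that different flag combinations (for example different NewRatio values) can be compared with numbers rather than by feel. The plain JDK management API exposes this; a small sketch, meant to be called from inside the server (e.g. from a monitoring endpoint):

        import java.lang.management.GarbageCollectorMXBean;
        import java.lang.management.ManagementFactory;

        // Sketch: cumulative GC counts and times for every collector in this JVM.
        public class GcOverhead {
            public static void main(String[] args) {
                long uptimeMs = ManagementFactory.getRuntimeMXBean().getUptime();
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.printf("%s: %d collections, %d ms total (%.2f%% of uptime)%n",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime(),
                            100.0 * gc.getCollectionTime() / uptimeMs);
                }
            }
        }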

    Read the article

  • Need help tuning a SQL statement

    - by jeffself
    I've got a table that has two fields (custno and custno2) that need to be searched from a query. I didn't design this table, so don't scream at me. :-) I need to find all records where either custno or custno2 matches the value returned from a query on the same table based on a titleno. In other words, the user types in 1234 for the titleno. My query searches the table to find the custno associated with that titleno, and it also looks for the custno2 for that titleno. Then it needs to search the same table for all other records that have either of the returned values in their custno or custno2 fields. Here is what I've come up with:

        SELECT BILLYR, BILLNO, TITLENO, VINID, TAXPAID, DUEDATE, DATEPIF, PROPDESC
        FROM TRCDBA.BILLSPAID
        WHERE CUSTNO IN (select custno from trcdba.billspaid where titleno = '1234'
                         union
                         select custno2 from trcdba.billspaid where titleno = '1234' and custno2 != '')
           OR CUSTNO2 IN (select custno from trcdba.billspaid where titleno = '1234'
                          union
                          select custno2 from trcdba.billspaid where titleno = '1234' and custno2 != '')

    The query takes about 5-10 seconds to return data. Can it be rewritten to work faster?

    Read the article

  • Tuning garbage collections for low latency

    - by elec
    I'm looking for arguments on how best to size the young generation (with respect to the old generation) in an environment where low latency is critical. My own testing tends to show that latency is lowest when the young generation is fairly large (e.g. -XX:NewRatio < 3); however, I cannot reconcile this with the intuition that the larger the young generation, the more time it should take to garbage collect. The application runs on Linux, JDK 6 before update 14, i.e. G1 is not available.

    Read the article

  • Recommendations on apache tuning for chat application

    - by sofia
    Hi, I have a chat application (XMPP / MUC) that is going to be served by Apache (we might change to nginx later, but right now that isn't easily done). If a user is in 2 rooms, he'll have between 2 and 4 active connections to the server (long-polling connections), so if we have 200 users per room and we have 5 rooms, what should ServerLimit and MaxClients be set to? For example, these are the default values:

        ServerLimit 256
        MaxClients 256
        MaxRequestsPerChild 4000

    Thanks,
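
    A starting point is the worst-case connection arithmetic implied by the figures in the question; the tiny sketch below just spells it out (it reads "2 to 4 connections for a user in 2 rooms" as at most 2 held connections per user per room, which is an assumption).

        // Back-of-the-envelope worst case for concurrently held long-polling connections.
        public class ChatCapacity {
            public static void main(String[] args) {
                int rooms = 5;
                int usersPerRoom = 200;
                int connectionsPerUserPerRoom = 2; // upper end of "2 to 4 for 2 rooms"

                int worstCase = rooms * usersPerRoom * connectionsPerUserPerRoom;
                System.out.println("worst-case held connections: " + worstCase); // 2000
                // With prefork/worker MPMs each held connection ties up a worker,
                // so ServerLimit and MaxClients would need to sit well above the 256 default.
            }
        }

    The fact that every long-poll holds a worker for its whole lifetime is also the usual argument for putting an event-driven server in front of this kind of traffic, which the question already anticipates with nginx.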

    Read the article

  • Are the usual database performance-tuning tips invalid for a third-party app like Drupal?

    - by Paul Strugger
    When you have a slow database app, the first suggestions that people make are to:
    - track the slow queries
    - add appropriate indexes
    When you are building your own application this is very logical, but when you use a CMS like Drupal, which other people have developed and tuned, is this approach valid? I mean, aren't Drupal's tables already fine-tuned for performance? Even if I find out which queries are the slow ones, what could I do about it? Rewrite Drupal core?!?

    Read the article

  • SQL SERVER – Maximize Database Performance with DB Optimizer – SQL in Sixty Seconds #054

    - by Pinal Dave
    Performance tuning is an interesting concept and everybody evaluates it differently. Every developer and DBA has a different opinion about how one should do performance tuning. I personally believe performance tuning is a three-step process:
    - Understanding the Query
    - Identifying the Bottleneck
    - Implementing the Fix

    When we are working with a large database application and it suddenly starts to slow down, we are all under stress about how we can get the database back to normal speed. Most of the time we do not have enough time to do a deep analysis of what is going wrong, or of what will fix the problem. Our primary goal at that time is simply to fix the database problem as fast as we can. However, one very important thing to keep in mind is that a quick fix should not create any further issues in other parts of the system. When time is of the essence and we still want a deep analysis of our system to arrive at the best solution, we often tend to make mistakes, sometimes simply because we do not have the time to analyze the entire system.

    Here is what I do when I face such a situation: I take the help of DB Optimizer. It is a fantastic tool and does superlative performance tuning of the system. Every time I talk about a performance tuning tool, the initial reaction of people is that they do not want to try it because they believe it requires a lot of learning before they can use it. That is absolutely not true in the case of DB Optimizer. It is a very easy-to-use, intuitive tool; one can get going with the product in no time. Here is a quick video I have built where I demonstrate how we can identify which index is missing for a query and how we can quickly create that index. The entire three steps of query tuning are completed in less than 60 seconds. If you are into performance tuning and query optimization, you should download DB Optimizer and give it a go. Let us see the same concept in the following SQL in Sixty Seconds video. You can download DB Optimizer and reproduce the same Sixty Seconds experience.

    Related Tips in SQL in Sixty Seconds:
    - Performance Tuning – Part 1 of 2 – Getting Started and Configuration
    - Performance Tuning – Part 2 of 2 – Analysis, Detection, Tuning and Optimizing

    What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Oracle Enterprise Linux or Red Hat Enterprise Linux?

    - by peturgretars
    I would highly appreciate hearing some opinions regarding the choice of Linux distribution when it comes to setting up an Oracle 11.2.0.3 RAC. We are about to install 2-node Oracle 11.2.0.3 RACs in data centers A and B. Then we are going to have a standby in B for A and a standby in A for B, using Data Guard in ASYNC transmit mode (long distance). Personally I have more experience with OEL, and I know that, for example, Oracle Smart Flash Cache and zero-downtime patching were only supported on OEL 5. I am not sure about OEL 6 vs RHEL 6, though. My question is: which operating system should we go for and why, Oracle Enterprise Linux 5/6 or Red Hat Enterprise Linux 5/6? Unfortunately the hosting company does not support OEL at the moment, so if OEL is the choice, how would we convince the hosting company to start using and supporting it? Thanks so much!

    Read the article

  • TextMate suddenly highlighting all text dark red...?

    - by AP257
    I'm using TextMate on Snow Leopard and don't know much about how it works. After I hit an unknown keyboard shortcut, it suddenly decided to highlight almost all the text in my Python files dark red, making all my Python virtually unreadable! I must have accidentally pressed a shortcut, but I've no idea what I did or how to turn it off, and I can't find any relevant help in the manual or forum. Even just 'turn off all highlighting' would do. Does anyone know how to turn this highlighting off? Bit desperate!

    Read the article

  • applicationShouldTerminateAfterLastWindowClosed: does not seem to work when the red x is used to close the window

    - by Michael Minerva
    I have a small OS X Cocoa app that just brings up an IKPictureTaker and saves the picture to a file if one is set. I use applicationShouldTerminateAfterLastWindowClosed: to close the application when the picture taker is closed. This all works fine when I either set the picture (this is done when you have picked the picture you want) or hit cancel, but when I click on the red close button in the top left of the window, the application does not quit when the window is closed this way. Is this intended functionality, or am I doing something wrong (not setting some flag)? Also, is there some way to disable this button?

    Read the article
