Search Results

Search found 3828 results on 154 pages for 'mathematical optimization'.

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends adding to query_cache_size, no matter how much I increase the value (I tried up to 512MB). On the other hand it warns that "Increasing the query_cache size over 128M may reduce performance". Here are the latest results:

        >>  MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 6G (Tables: 195)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 51
        -------- Security Recommendations --------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
        [--] Reads / Writes: 89% / 11%
        [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
        [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
        [OK] Slow queries: 0% (2K/254M)
        [OK] Highest usage of available connections: 32% (391/1200)
        [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
        [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
        [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
        [!!] Query cache prunes per day: 1033203
        [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
        [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
        [OK] Thread cache hit rate: 99% (676 created / 5M connections)
        [OK] Table cache hit rate: 22% (1K open / 8K opened)
        [OK] Open file limit used: 0% (49/13K)
        [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
        [OK] InnoDB data size / buffer pool: 6.1G/19.5G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
        Variables to adjust:
          *** MySQL's maximum memory usage is dangerously high ***
          *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 192M) [see warning above]

    The server has 76GB of RAM and dual E5-2650 CPUs. The load is usually below 2. I would appreciate your hints on how to interpret the recommendation and optimize the database configuration.
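
    The prune rate above (over a million prunes per day even at an ~80% hit rate) is the crux: the cache is churning, and since every write to a table invalidates that table's cached results under a single global mutex, a bigger cache often just makes the churn more expensive. A minimal my.cnf sketch of the knobs involved; the values are illustrative assumptions to benchmark against, not measured recommendations:

        # my.cnf (illustrative values only -- benchmark before and after)
        [mysqld]
        query_cache_type  = 1      # ON; set to 0 to test with the cache disabled
        query_cache_size  = 128M   # per the tuner warning, going above this may hurt
        query_cache_limit = 1M     # keep single large results from evicting many small ones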

  • AB failed requests - What can I do about them?

    - by matthewsteiner
    So, in the past I've never had any problems with this app; all benchmarks had a 100% success rate. Yesterday I set up nginx to serve static content and pass other requests on to apache. Now, if I have 1 concurrent user (-c 1), everything is fine. But it seems the more concurrent users I have, the more failed requests I get. Not a lot, maybe about 10 or 15 out of 350. They're reported as "length" failures, whatever that means. Visiting the website with a browser, I don't have any problems at all. How can I find out the cause of these failed requests?
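
    For what it's worth, ab counts a "Length" failure whenever a response body differs in size from the first response it received, so dynamic pages (or compression kicking in under load) trigger them even when nothing is actually wrong. A small shell sketch to check whether the page length really varies; the URL is a placeholder:

        # Fetch the page 20 times and tally the distinct body sizes;
        # more than one size means ab's "length" failures are expected.
        for i in $(seq 1 20); do
            curl -s http://example.com/ | wc -c
        done | sort -n | uniq -c

    Running ab with higher verbosity (e.g. -v 4) also prints response headers, which can show whether nginx and apache are answering with different Content-Length values.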

  • Options to efficiently synchronize 1 million files with remote servers?

    - by Zilvinas
    At a company I work for we have such a thing called "playlists", which are small files of ~100-300 bytes each. There are about a million of them, and about 100,000 of them get changed every hour. These playlists need to be uploaded to 10 other remote servers on different continents every hour, and it needs to happen quickly, ideally in under 2 minutes. It's very important that files deleted on the master are also deleted on all the replicas. We currently use Linux for our infrastructure. I was thinking about trying rsync with the -W option to copy whole files without comparing contents. I haven't tried it yet, but maybe people who have more experience with rsync could tell me if it's a viable option? What other options are worth considering?
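
    A hedged sketch of the invocation being considered; the host and paths are placeholders, and -W mainly pays off when bandwidth is cheaper than the delta-transfer CPU and round trips, which is plausible for files this small:

        # -a        archive mode (recursion, times, permissions)
        # -W        whole-file transfer: skip the rsync delta algorithm
        # --delete  remove files on the replica that are gone on the master
        rsync -aW --delete /srv/playlists/ sync@replica1.example.com:/srv/playlists/

    Note that rsync still has to stat all ~1 million files on both ends to build its file list each run, and that alone may dominate the 2-minute budget regardless of these flags.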

  • Optimize Windows file access over network

    - by Djizeus
    At my company I frequently need to access shared files over a Windows network. These files are located on the other side of the planet, so I guess the file share goes through some kind of VPN over the Internet, but I don't control this and it is supposed to be "transparent" to me. However, it is extremely slow: displaying the contents of a directory in the file explorer takes about 10s. Even over the Internet, I did not expect that retrieving a list of file names would take that long. Are there any settings to optimize this from my Windows XP workstation, or is it mostly down to the way the network is configured? The only thing I have found so far is to cache all file names, whereas by default only short file names are cached (http://support.microsoft.com/kb/843418).

  • Nginx as a reverse proxy + Apache or completely cut out Apache? (WordPress Multisite)

    - by user715564
    I am starting a WordPress multisite network with domain mapping, and I am trying to think through my server setup. Right now I only have one medium-sized VPS, but hopefully I will need to add more servers later, so ideally the solution will also accommodate future growth. My question is: would it be better to set up Nginx as a reverse proxy in front of Apache, or to use only Nginx? It seems like setting up Nginx as a reverse proxy would be easier and leave less room for problems but, on the other hand, would using only Nginx add substantial benefits?
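
    A hedged sketch of the reverse-proxy variant, assuming Apache is moved to 127.0.0.1:8080 (the port, paths, and domain are illustrative):

        # nginx serves static assets itself and proxies everything else to Apache
        server {
            listen 80;
            server_name example.com;

            location ~* \.(css|js|png|jpg|jpeg|gif|ico)$ {
                root /var/www/example;
                expires 7d;
            }

            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    The proxy_set_header lines matter for multisite domain mapping, since the mapped Host header has to survive the hop to Apache.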

  • Optimize Apache performance

    - by Phliplip
    I'm looking for ways to optimize our current web server, hosted in-house. I'm trying to supply as much relevant information below; please let me know if you would require additional information in order to assist. The server runs a single website, an online pizza ordering platform built on Zend Framework (version 1). Traffic stats from the last month show approx. 6,000 page loads per day, concentrated mainly around dinnertime, with peaks of around 1,500 loads/hour in that period. We recently upgraded from a 2/2 Mbit ADSL line to 100/100 Mbit fiber (we had assumed the 2 Mbit line was the issue), and we still have performance issues at dinnertime. The website is pretty snappy in low-load periods.

    Hardware:

        CPU: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz (3000.13-MHz K8-class CPU)
        Mem: 328M Active, 4427M Inact, 891M Wired, 244M Cache, 623M Buf, 33M Free
        Swap: 16G Total, 468K Used, 16G Free (6GB physical, 16GB swap)

        Filesystem   Type    Size   Used   Avail  Capacity  Mounted on
        /dev/ad7s1a  ufs     4.8G   768M   3.7G   17%       /
        devfs        devfs   1.0K   1.0K   0B     100%      /dev
        /dev/ad7s1g  ufs     176G   5.2G   157G   3%        /home
        /dev/ad7s1e  ufs     4.8G   2.8M   4.5G   0%        /tmp
        /dev/ad7s1f  ufs     19G    3.5G   14G    19%       /usr
        /dev/ad7s1d  ufs     4.8G   550M   3.9G   12%       /var

    Server OS: FreeBSD 8.2-RELEASE. Software: apache-2.2.17, php5-5.3.8, mysql-server-5.5.

    Apache footprint (example, taken from # top):

        31140 www  1  45  0  377M  41588K  lockf  2  0:00  0.00%  httpd
        31122 www  1  44  0  375M  35416K  lockf  2  0:00  0.00%  httpd
        31109 www  1  44  0  375M  38188K  lockf  2  0:00  0.00%  httpd
        31113 www  1  44  0  375M  35188K  lockf  2  0:00  0.00%  httpd

    Apache is using the prefork MPM with APC (Alternative PHP Cache). The SSL module is loaded but not utilized (as in, it doesn't really work, thus isn't used). There is a file containing settings for the MPM modules, but as far as I can see it's not included from httpd.conf (the include line is commented out), so I would guess the prefork MPM is working off default values too. Here are some other Apache conf values that I found, which are included in httpd.conf:

        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5
        UseCanonicalName Off
        HostnameLookups Off
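
    Given the top output above (each httpd child holds roughly 35-40MB resident) and 6GB of physical RAM shared with MySQL, a dinnertime burst under prefork defaults can fork enough children to push the box into swap, which would match "snappy at low load, slow at peak". A hedged httpd.conf sketch; the numbers are illustrative assumptions to be sized against your own free-memory headroom, not recommendations:

        # httpd.conf (prefork) -- size MaxClients from:
        #   (RAM left after MySQL and the OS) / (resident size of one httpd child)
        <IfModule mpm_prefork_module>
            StartServers          10
            MinSpareServers       10
            MaxSpareServers       30
            MaxClients           100
            MaxRequestsPerChild 1000
        </IfModule>
        KeepAliveTimeout 2   # 5s keeps a whole child tied up between clicks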

  • Compiling the Linux kernel

    - by user482819
    Just for learning, I have recompiled the Linux kernel with different options, installed it, and booted from it. It was both instructive and straightforward. However, I was overwhelmed by the sheer number of options available. My questions are:

    1. Does it make sense to spend time trying to optimize the Linux kernel for my particular laptop? Will it make a significant improvement?
    2. Is there any tool that can read the configuration of my computer and suggest a config?

    Thanks, H
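
    On question 2, a hedged sketch of one built-in approach: since kernel 2.6.32 the build system itself can shrink a configuration down to just the modules currently loaded on the running machine (the source-tree path is a placeholder):

        # Boot your distro kernel first so the laptop's usual modules are loaded,
        # then generate a config that disables everything not currently in use:
        cd ~/src/linux
        make localmodconfig    # or 'make localyesconfig' to build them in statically
        make -j"$(nproc)"

    Expect a smaller image and much faster builds; a measurable runtime improvement on a typical desktop workload is far less certain.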

  • My system is always disk-bound (the disk light is always on). Why is this?

    - by Scoobie
    I have been given a laptop by the good folks at my company on which to do my work (Java development). I usually use Eclipse as my primary development platform. The laptop is a Dell D830 and runs Windows 7, 32-bit. Although the processor supports the 64-bit instruction set, licensing limits me to running the 32-bit OS. The HDD is a WD1600BEVT (Western Digital). I have noticed that my disk is always very slow. Windows startup is usually pretty quick; however, as soon as I log on, my disk light stays on, and the laptop usually takes about 4 minutes (measured from the moment I get the prompt to press Ctrl + Alt + Del to log in) before it's usable. Questions: Is this expected behavior? What can I do to examine the disk and determine the cause of the problem? What can I do to improve my disk's performance? Any optimizations you may be able to suggest? Other questions: Some have suggested running Process Monitor (from Sysinternals), but how would I capture a log that covers startup? Instead of trying to fix this myself, should I simply push this onto the system administrator? Thanks all.

  • Speed up MySQL for inserts (for testing purposes)

    - by Alex N
    I have a bit of software that needs to do a lot of INSERTs. In the production environment there'll be some serious tweaking and testing and so on, but right now, while I need to test it, I'd like to speed up the inserts as much as possible. Hence my question: is there a way to tweak mysql so that it doesn't do much disk I/O, but keeps everything in RAM and syncs to disk rarely (say, once every n seconds)?
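
    A hedged my.cnf sketch for a disposable test instance, assuming InnoDB tables; these settings deliberately trade crash safety for speed, so they have no place in production:

        [mysqld]
        # Flush the InnoDB log roughly once per second instead of at every
        # commit; a crash can lose up to the last second of transactions.
        innodb_flush_log_at_trx_commit = 0
        sync_binlog             = 0     # don't fsync the binary log
        innodb_buffer_pool_size = 2G    # keep the working set in RAM; size to the box
        innodb_doublewrite      = 0     # skip the doublewrite buffer on test boxes

    Batching many rows per transaction (or using multi-row INSERT ... VALUES (...), (...) syntax) usually buys at least as much as server tuning, since it amortizes the per-commit overhead.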

  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid; or, instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective, I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also welcome: any references to implementations that prove my theories right or wrong. Top of my list of hard-to-optimize features, and my theories (some untested and based on thought experiments):

    1. Runtime method overloading (aka multi-method dispatch or signature-based dispatch). Is it hard to optimize only when combined with features that allow runtime recompilation or method addition, or is it just hard anyway? Call-site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods.

    2. Type morphing / variants (aka value-based typing, as opposed to variable-based). Traditional optimizations simply cannot be applied when you don't know whether the type of something can change within a basic block. Combined with multi-methods, inlining must be done carefully if at all, and probably only below a given threshold of callee size; i.e. it is easy to consider inlining simple property fetches (getters/setters), but inlining complex methods may result in code bloat. The other issue is that I cannot just assign a variant to a register and JIT it to native instructions, because I have to carry the type info around, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved by x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective.

    3. First-class continuations. There are multiple ways to implement them, and I have done so with both of the most common approaches: one being stack copying, the other implementing the runtime with continuation-passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First-class continuations have resource-management issues, i.e. we must save everything in case the continuation is resumed, and I'm not aware of any language that supports leaving a continuation with "intent" (i.e. "I am not coming back here, so you may discard this copy of the world"). Having programmed with both the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and may also hurt cache efficiency (stack locality changes more with the use of continuations and coroutines). The other issue is that they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast and the less-common cases should be correct.

    4. Pointer arithmetic and the ability to mask pointers (storing them in integers, etc.). I had to throw this in, but I could actually live without it quite easily.

    My feeling is that many high-level features, particularly in dynamic languages, just don't map to hardware. Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of them (features like caching, aliasing the top of stack to a register, instruction parallelism, return-address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out the way some developers like to think, and implementing many languages in a VM ends up mapping native ops onto function calls (i.e. the more dynamic a language is, the more we must look up and cache at runtime; nothing can be assumed, so our instruction mix contains a higher percentage of non-local branching than traditional, statically compiled code), and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.
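
    As a concrete illustration of the call-site caching mentioned in point 1, here is a minimal monomorphic inline-cache sketch in C. The type tags, lookup function, and handler are invented for illustration; a real VM patches generated code rather than a struct, and multi-methods would need a tag per argument, which is exactly where the caching gets harder:

        #include <stdio.h>

        typedef int TypeTag;
        typedef double (*Handler)(double, double);

        /* One cache per call site: remembers the last receiver type seen
           and the handler that full dispatch resolved for it. */
        typedef struct {
            TypeTag cached_tag;
            Handler cached_fn;
        } CallSite;

        static double add_f64(double a, double b) { return a + b; }

        /* Stands in for the slow path: a full method-table lookup. */
        static Handler slow_lookup(TypeTag tag) { (void)tag; return add_f64; }

        static double call(CallSite *site, TypeTag tag, double a, double b) {
            if (site->cached_tag != tag) {      /* miss: take the slow path */
                site->cached_fn  = slow_lookup(tag);
                site->cached_tag = tag;
            }
            return site->cached_fn(a, b);       /* hit: one compare, one indirect call */
        }

        int main(void) {
            CallSite site = { -1, 0 };
            printf("%f\n", call(&site, 1, 2.0, 3.0));  /* first call misses */
            printf("%f\n", call(&site, 1, 4.0, 5.0));  /* second call hits  */
            return 0;
        }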

  • Help with optimizing C# function via C and/or Assembly

    - by MusiGenesis
    I have this C# method which I'm trying to optimize:

        // assume arrays are same dimensions
        private void DoSomething(int[] bigArray1, int[] bigArray2)
        {
            int data1;
            byte A1, B1, C1, D1;
            int data2;
            byte A2, B2, C2, D2;

            for (int i = 0; i < bigArray1.Length; i++)
            {
                data1 = bigArray1[i];
                data2 = bigArray2[i];

                A1 = (byte)(data1 >> 0);
                B1 = (byte)(data1 >> 8);
                C1 = (byte)(data1 >> 16);
                D1 = (byte)(data1 >> 24);

                A2 = (byte)(data2 >> 0);
                B2 = (byte)(data2 >> 8);
                C2 = (byte)(data2 >> 16);
                D2 = (byte)(data2 >> 24);

                A1 = A1 > A2 ? A1 : A2;
                B1 = B1 > B2 ? B1 : B2;
                C1 = C1 > C2 ? C1 : C2;
                D1 = D1 > D2 ? D1 : D2;

                bigArray1[i] = (A1 << 0) | (B1 << 8) | (C1 << 16) | (D1 << 24);
            }
        }

    The function basically compares two int arrays. For each pair of matching elements, the method compares each individual byte value and takes the larger of the two. The element in the first array is then assigned a new int value constructed from the 4 largest byte values (irrespective of source). I think I have optimized this method as much as possible in C# (probably I haven't, of course; suggestions on that score are welcome as well). My question is: is it worth it for me to move this method to an unmanaged C DLL? Would the resulting method execute faster (and how much faster), taking into account the overhead of marshalling my managed int arrays so they can be passed to the method? If doing this would get me, say, a 10% speed improvement, then it would not be worth my time for sure. If it were 2 or 3 times faster, then I would probably have to do it. Note: please, no "premature optimization" comments, thanks in advance. This is simply "optimization".
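
    For a sense of the ceiling, a hedged sketch of what the unmanaged side could look like: a per-byte maximum is exactly what the SSE2 intrinsic _mm_max_epu8 computes, 16 bytes (4 ints) per instruction. The function name and the simplifying length assumption are mine:

        #include <emmintrin.h>  /* SSE2 intrinsics */

        /* Per-byte max of two int arrays, result written back into a1.
           Assumes len is a multiple of 4 ints (16 bytes) for brevity. */
        void bytewise_max(int *a1, const int *a2, int len)
        {
            for (int i = 0; i < len; i += 4) {
                __m128i v1 = _mm_loadu_si128((const __m128i *)(a1 + i));
                __m128i v2 = _mm_loadu_si128((const __m128i *)(a2 + i));
                _mm_storeu_si128((__m128i *)(a1 + i), _mm_max_epu8(v1, v2));
            }
        }

    On the marshalling side, pinning the arrays (fixed or GCHandle) lets P/Invoke pass raw pointers without copying, so the interop overhead can stay small relative to a pass over two large arrays.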

  • Algorithm for optimally choosing actions to perform a task

    - by Jules
    There are two data types: tasks and actions. An action costs a certain time to complete and has a set of tasks it depends on. A task has a set of candidate actions, and our job is to choose one of them. So:

        class Task {
            Set<Action> choices;
        }

        class Action {
            float time;
            Set<Task> dependencies;
        }

    For example, the primary task could be "Get a house". The possible actions for this task: "Buy a house" or "Build a house". The action "Build a house" costs 10 hours and has the dependencies "Get bricks" and "Get cement", etcetera. The total time is the sum of the times of all the actions required to perform. We want to choose actions such that the total time is minimal. Note that the dependencies can be diamond shaped. For example, "Get bricks" could require "Get a car" (to transport the bricks) and "Get cement" would also require a car. Even if you do both "Get bricks" and "Get cement", you only have to count the time it takes to get a car once. Note also that the dependencies can be circular, for example "Money" - "Job" - "Car" - "Money". This is no problem for us: we simply select all of "Money", "Job" and "Car", and the total time is simply the sum of the times of these 3 things.

    Mathematical description: let actions be the chosen set of actions.

        valid(task) = ∃ action ∈ task.choices . (action ∈ actions ∧ ∀ task ∈ action.dependencies . valid(task))

        time = Σ { action.time | action ∈ actions }

        minimize time subject to valid(primaryTask)

  • Al Zimmermann's Son of Darts

    - by polygenelubricants
    There's about 2 months left in Al Zimmermann's Son of Darts programming contest, and I'd like to improve my standing (currently in the 60s) to something more respectable. I'd like to get some ideas from the great community of Stack Overflow on how best to approach this problem. The contest problem is known in the literature as the Global Postage Stamp Problem. I don't have much experience with optimization algorithms (I know of hill climbing and simulated annealing in concept only, from college), and in fact the program I have right now is basically sheer brute force, which of course isn't feasible for the larger search spaces. Here are some papers on the subject:

    - A Postage Stamp Problem (Alter & Barnett, 1980)
    - Algorithms for Computing the h-Range of the Postage Stamp Problem (Mossige, 1981)
    - A Postage Stamp Problem (Lunnon, 1986)
    - Two New Techniques for Computing Extremal h-bases Ak (Challis, 1992)

    Any hints and suggestions are welcome. Also, feel free to direct me to the proper site if Stack Overflow isn't it.

  • Error: Can't find common super class of ...

    - by PatlaDJ
    I am trying to process a MS Windows desktop application (Java 6 SE, using the SWT library provided by Eclipse) with ProGuard, and I get the following critical error:

        Unexpected error while performing partial evaluation:
          Class     = [org/eclipse/swt/widgets/DateTime]
          Method    = [<init>(Lorg/eclipse/swt/widgets/Composite;I)V]
          Exception = [java.lang.IllegalArgumentException]
            (Can't find common super class of [java/lang/StringBuffer]
             and [org/eclipse/swt/internal/win32/TCHAR])
        Error: Can't find common super class of [java/lang/StringBuffer]
               and [org/eclipse/swt/internal/win32/TCHAR]

    When I tried to Google the error, it came up in only two spots on the entire web, which astonished me greatly. I am a newbie at using ProGuard, and at Java code optimization tools in general. Any thoughts and suggestions on how to fix this will be appreciated. Thanks in advance.
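
    A hedged guess at a workaround: this kind of error usually appears when ProGuard cannot resolve every class referenced by the code it is analyzing, so the platform-specific SWT jar (which contains org.eclipse.swt.internal.win32.TCHAR) needs to be on its library path. A minimal configuration sketch with illustrative jar names and paths:

        # proguard.cfg (sketch -- adjust jar names and paths to your setup)
        -injars      myapp.jar
        -outjars     myapp-out.jar
        -libraryjars <java.home>/lib/rt.jar
        # Without the win32 SWT jar, ProGuard cannot build the class
        # hierarchy that includes org.eclipse.swt.internal.win32.TCHAR:
        -libraryjars lib/swt-win32.jar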

  • What is the best way to generate fake data for a classification problem?

    - by Berkay
    I'm working on a project and I have a subset of a user's keystroke timing data. This means that the user makes n login attempts, and I will use the recorded timing data from these attempts in various kinds of classification algorithms, so that future login attempts can be verified as being made by the user or by someone else. (Simply put, this is biometrics.) I have timing data from 3 different login attempts by the user; of course, this is a subset of the potentially infinite data. Up to this point it is an easy classification problem. I decided to use WEKA, but as far as I understand I also have to create some fake (negative) data to feed the classification algorithm. Can I use some optimization algorithms? Or is there any way to create this fake data so as to get a minimum of false positives? Thanks

  • Is there any way to get MSVC to pass structs arguments in registers on x64?

    - by Luke
    For a function with the signature:

        struct Pair { void *v1, *v2; };
        void f(Pair p);

    compiled on x64, I would like Pair's fields to be passed via registers, as if the function were:

        void f(void *v1, void *v2);

    Compiling a test with gcc 4.2.1 for x86_64 on OSX 10.6, I can see this is exactly what happens, by examining the disassembly. However, compiling with MSVC 2008 for x64 on Windows, the disassembly shows that Pair is passed on the stack. I understand that platform ABIs can prevent this optimization; does anyone know any MSVC-specific annotations, calling conventions, flags, or other hacks that can get this to work? Thank you!

  • How to optimize MATLAB loops?

    - by striglia
    I have been working lately on a number of iterative algorithms in MATLAB, and have been getting hit hard by MATLAB's performance (or lack thereof) when it comes to loops. I'm aware of the benefit of vectorizing code when possible, but are there any tools for optimization when you need the loop for your algorithm? I am aware of the MEX-file option to write small subroutines in C/C++, although given my algorithms this can be a very painful option, considering the data structures required. I mainly use MATLAB for the simplicity and speed of prototyping, so a syntactically complex, statically typed language is not ideal for my situation. Are there any other suggestions? Even other languages (Python?) which have relatively painless matrix tools are an option.

  • Tips for optimizing C#/.NET programs

    - by Bob
    It seems like optimization is a lost art these days. Wasn't there a time when all programmers squeezed every ounce of efficiency from their code? Often doing so while walking 5 miles in the snow? In the spirit of bringing back a lost art, what are some tips you know of for simple (or perhaps complex) changes to optimize C#/.NET code? Since it's such a broad thing, depending on what one is trying to accomplish, it would help to provide context with your tip. For instance:

    - When concatenating many strings together, use StringBuilder instead. If you're only concatenating a handful of strings, it's OK to use the + operator.
    - Use string.Compare to compare 2 strings instead of doing something like string1.ToLower() == string2.ToLower().

  • Lucene.Net memory consumption and slow search when too many clauses used

    - by Umer
    I have a DB holding text file attributes and text file primary-key IDs, and I have indexed around 1 million text files along with their IDs (the primary keys in the DB). Now I am searching at two levels. First is a straightforward DB search, where I get primary keys as the result (roughly 2 or 3 million IDs). Then I build a Boolean query, for instance:

        +Text:"test*" +(pkID:1 pkID:4 pkID:100 pkID:115 pkID:1041 ....)

    and search it in my index file. The problem is that such a query (having 2 million clauses) takes far too much time to return a result and consumes really too much memory. Is there any optimization solution for this problem?

  • C++ defines for a 'better' Release mode build in VS

    - by darid
    I currently use the following preprocessor defines, and various optimization settings:

        WIN32_LEAN_AND_MEAN
        VC_EXTRALEAN
        NOMINMAX
        _CRT_SECURE_NO_WARNINGS
        _SCL_SECURE_NO_WARNINGS
        _SECURE_SCL=0
        _HAS_ITERATOR_DEBUGGING=0

    My question is: what other things do fellow SOers use, add, or define in order to get a Release mode build from VS C++ (2008, 2010) that is as performant as possible? By the way, I've tried PGO etc.; it does help a bit, but nothing that comes to parity. Also, I'm not using streams; the C++ I'm talking about is more like C, but making use of templates and STL algorithms. As it stands now, very simple code segments flop when compared to what GCC produces on, say, an equivalent x86 machine running Linux (2.6+ kernel) using -O2. Side note: I believe a lot of the issues relate directly to the STL version (Dinkumware) provided by MS. Could people please elaborate on experiences using STLport etc. with VS C++.

  • Optimize C# Code Fragment

    - by Eric J.
    I'm profiling some C# code. The method below is one of the most expensive ones. For the purpose of this question, assume that micro-optimization is the right thing to do. Is there an approach to improve the performance of this method? Changing the input parameter p to ulong[] would create a macro inefficiency.

        static ulong Fetch64(byte[] p, int ofs = 0)
        {
            unchecked
            {
                // Assemble 8 consecutive bytes into a ulong, little-endian
                ulong result = p[0 + ofs]
                    + ((ulong)p[1 + ofs] << 8)
                    + ((ulong)p[2 + ofs] << 16)
                    + ((ulong)p[3 + ofs] << 24)
                    + ((ulong)p[4 + ofs] << 32)
                    + ((ulong)p[5 + ofs] << 40)
                    + ((ulong)p[6 + ofs] << 48)
                    + ((ulong)p[7 + ofs] << 56);
                return result;
            }
        }

  • jQuery Optimizations

    - by aepheus
    I've just come to the end of a large development project. We were on a tight timeline, so a lot of optimization was "deferred". Now that we've met our deadline, we're going back and trying to optimize things. My question is this: what are some of the most important things you look for when optimizing jQuery web sites? Alternately, I'd love to hear of sites/lists that have particularly good advice for optimizing jQuery. I've already read a few articles; http://www.tvidesign.co.uk/blog/improve-your-jquery-25-excellent-tips.aspx was an especially good read.

  • Is it possible to shrink rt.jar with ProGuard?

    - by PatlaDJ
    Is there a procedure by which you can optimize/shrink/select/obfuscate only the classes/methods/fields from Sun's rt.jar that are actually used by your app, using optimization software like ProGuard (or maybe something else)? You would then be able to minimize the download size of your application considerably and make it much more secure, right? Related questions: Do you know whether Sun's "Project Jigsaw", which is expected to come out, is intended to handle this particular issue automatically? Has anybody formed an opinion yet about the Avian Java alternative? Please share it here. Thank you.

  • In C, would !~b ever be faster than b == 0xff ?

    - by James Morris
    From a long time ago I have a memory which has stuck with me, saying that comparisons against zero are faster than comparisons against any other value (ahem, Z80). In some C code I'm writing I want to skip values which have all their bits set. Currently the type of these values is char, but it may change. I have two different alternatives to perform the test:

        if (!~b) /* skip */

    and

        if (b == 0xff) /* skip */

    Apart from the latter making the assumption that b is an 8-bit char whereas the former does not, would the former ever be faster due to the old compare-to-zero optimization trick, or are the CPUs of today way beyond this kind of thing?
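
    One more wrinkle worth noting alongside the speed question: the two tests can disagree about correctness when plain char is signed. A small C sketch illustrating this (the output shown assumes a signed 8-bit char, which is the common case but not guaranteed):

        #include <stdio.h>

        int main(void)
        {
            char b = 0xff;   /* typically -1 where plain char is signed */
            /* b is promoted to int before either test is evaluated */
            printf("!~b       -> %d\n", !~b);        /* ~(-1) == 0, so 1 (true)  */
            printf("b == 0xff -> %d\n", b == 0xff);  /* -1 == 255, so 0 (false)  */
            return 0;
        }

    So on a signed-char platform, only the !~b form actually skips the all-bits-set values; the 0xff comparison would never match.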

  • Compile-time trigonometry in C

    - by lhahne
    I currently have code that looks like:

        while (very_long_loop) {
            ...
            y1 = getSomeValue();
            ...
            x1 = y1*cos(PI/2);
            x2 = y2*cos(SOME_CONSTANT);
            ...
            outputValues(x1, x2, ...);
        }

    The obvious optimization would be to compute the cosines ahead of time. I could do this by filling an array with the values, but I was wondering: would it be possible to make the compiler compute these at compile time?
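
    A hedged sketch of both routes in C; the helper declarations mirror the snippet above and are assumptions, as is the value of SOME_CONSTANT. With literal arguments and optimization enabled, GCC typically constant-folds the cos() calls at compile time anyway, and hoisting them out of the loop guarantees they are evaluated only once regardless:

        #include <math.h>

        #define SOME_CONSTANT 0.123   /* placeholder for the real constant */

        /* Assumed to exist, as in the snippet above */
        double getSomeValue(void);
        void outputValues(double x1, double x2);

        void process(long n)
        {
            /* Evaluated once before the loop; with constant arguments the
               compiler will usually fold these into literals anyway. */
            const double c1 = cos(M_PI / 2.0);
            const double c2 = cos(SOME_CONSTANT);

            for (long i = 0; i < n; i++) {
                double y1 = getSomeValue();
                outputValues(y1 * c1, y1 * c2);
            }
        }

    (M_PI comes from POSIX math.h rather than strict ISO C, so define PI yourself if needed; also note that cos(M_PI/2) is not exactly 0 in floating point, only about 6.1e-17.)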
