Search Results

Search found 3512 results on 141 pages for 'premature optimization'.

  • Does anyone still use Iometer?

    - by Brian T Hannan
    "Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It is used as a benchmark and troubleshooting tool and is easily configured to replicate the behaviour of many popular applications." link text Does anyone still use this tool? It seems helpful, but I'm not sure if it's for the thing I am trying to work on. I am trying create a benchmark computer performance test that can be run before and after a Windows Optimization program does its stuff (ex: PC Optimizer Pro or CCleaner). I want to be able to make a quick statement like CCleaner makes the computer run 50% faster or something along those lines. Are there any newer tools like this one?

    Read the article

  • What are the practical differences between an IP address and a server?

    - by JMC Creative
    My understanding of IPs and other DNS-type, server-related issues really falls short (read: extreme noob). I know a dedicated server would increase speed. What difference in speed, if any, would a dedicated IP make? Am I correct in understanding the Best Practices from Yahoo that I could use the second IP to serve up some content, which would increase the number of parallel downloads for the user? Or will both IPs (purchased from the same hosting account) point to the same server? How does it work? Are there other optimization issues I should be aware of when thinking of purchasing a dedicated IP? Clarification: I am talking about the speed of serving the web pages, i.e. the speed of my website. Yes, I know that IP and server are completely different things, not even opposites, just different. But this, indeed, is my question! The question reformulated: will having a second (dedicated) IP for my website speed up the time it takes to load and display for the user? Or does that have nothing at all to do with the IP and is it only a server issue? I'm sorry if this is still unclear. It is a real question, though; I may just not be wording it well.

    Read the article

  • .NET Runtime Optimization Service

    - by Velika
    I see that the service ".NET Runtime Optimization Service v2.0.50727_X86" (C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorsvw.exe) is disabled. I guess I probably did that myself; I'm not sure. Do I need it? Should it be running?
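
    For what it's worth, mscorsvw.exe is the background service that pre-compiles (NGen) .NET assemblies, and it normally sits idle unless there is queued work. A sketch of draining the queue by hand from an elevated command prompt, so the service has nothing left to do (the path assumes the stock v2.0.50727 install):

        cd %WINDIR%\Microsoft.NET\Framework\v2.0.50727
        ngen.exe executeQueuedItems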

    Read the article

  • Blit Queue Optimization Algorithm

    - by martona
    I'm looking to implement a module that manages a blit queue. There's a single surface, and portions of this surface (bounded by rectangles) are copied elsewhere within the surface:

        add_blt(rect src, point dst);

    Any number of operations can be posted, in order, to the queue. Eventually the user of the queue will stop posting blits and ask for an optimal set of operations to actually perform on the surface. The task of the module is to ensure that no pixel is copied unnecessarily. This gets tricky because of overlaps, of course: a blit could re-blit a previously copied pixel. Ideally, blit operations would be subdivided in the optimization phase in such a way that every block goes to its final place with a single operation. It's tricky but not impossible to put this together; I'm just trying not to reinvent the wheel. I looked around on the net, and the only thing I found was the SDL_BlitPool library, which assumes that the source surface differs from the destination. It also does a lot of grunt work, seemingly unnecessarily: regions and similar building blocks are a given. I'm looking for something higher-level. Of course, I'm not going to look a gift horse in the mouth, and I also don't mind doing actual work... If someone can come forward with a basic idea that makes this problem seem less complex than it does right now, that'd be awesome too.

    EDIT: Thinking about aaronasterling's answer... could this work?

    1. Implement customized region-handler code that can maintain metadata for every rectangle it contains. When the region handler splits up a rectangle, it automatically associates the metadata of that rectangle with the resulting sub-rectangles.
    2. When the optimization run starts, create an empty region handled by the above customized code; call this the master region.
    3. Iterate through the blt queue, and for every entry:
       - Let srcrect be the source rectangle of the blt being examined.
       - Get the intersection of srcrect and the master region into a temp region.
       - Remove the temp region from the master region, so the master region no longer covers it.
       - Promote srcrect to a region (srcrgn) and subtract the temp region from it.
       - Offset the temp region and srcrgn by the vector of the current blt: their union will cover the destination area of the current blt.
       - Add to the master region all rects in the temp region, retaining their original source metadata (step one of adding the current blt to the master region).
       - Add to the master region all rects in srcrgn, adding the source information of the current blt (step two of adding the current blt to the master region).
    4. Optimize the master region by checking whether adjacent sub-rectangles that are merge candidates have the same metadata. Two sub-rectangles are merge candidates if (r1.x1 == r2.x1 && r1.x2 == r2.x2) || (r1.y1 == r2.y1 && r1.y2 == r2.y2). If they do, combine them.
    5. Enumerate the master region's sub-rectangles. Every rectangle returned is an optimized blt operation's destination; the associated metadata is that blt operation's source.
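
    For concreteness, a minimal Python sketch of the rectangle primitives the scheme above leans on (intersection, subtraction into sub-rectangles, offsetting); rectangles are half-open (x1, y1, x2, y2) tuples and all names are made up for illustration:

        # Rectangles are (x1, y1, x2, y2), half-open: x1 <= x < x2, y1 <= y < y2.

        def intersect(a, b):
            """Overlap of two rects, or None if they don't overlap."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

        def subtract(a, b):
            """Split rect a around rect b: up to 4 rects covering a minus b."""
            hole = intersect(a, b)
            if hole is None:
                return [a]
            ix1, iy1, ix2, iy2 = hole
            parts = [
                (a[0], a[1], a[2], iy1),   # band above the hole
                (a[0], iy2, a[2], a[3]),   # band below the hole
                (a[0], iy1, ix1, iy2),     # strip left of the hole
                (ix2, iy1, a[2], iy2),     # strip right of the hole
            ]
            return [r for r in parts if r[0] < r[2] and r[1] < r[3]]

        def offset(r, dx, dy):
            """Translate a rect by the blt vector (dx, dy)."""
            return (r[0] + dx, r[1] + dy, r[2] + dx, r[3] + dy)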

    Read the article

  • PostgreSQL 8.4 - Tablespace Optimization

    - by FloE
    I'm currently running a PostgreSQL database with about 1.5 billion rows / 500 GB of data (including indices). There are several schemata: one for the (read-only, irregularly changed/updated) core model and one for each user (about 20 people). The users can access the core and store data in their own schema, so everything is located in one database. The server runs CentOS and PostgreSQL 8.4, is used for scientific studies, exploration etc., and is running quite well. These days an upgrade of the DB storage hard disks arrives, all with the same performance as the old ones. I'm looking for the best way to distribute the data across these disks. It would be possible to separate frequently used objects (the core data) from the user schemata, but I'm not sure whether this is really worth the effort. It seems a much better idea to move the WAL files (the pg_xlog directory) to their own partition (see http://www.postgresql.org/docs/8.4/static/wal-internals.html). What are your opinions? Are there any tablespace- or partitioning-related performance documentations / benchmarks?
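
    For reference, a minimal sketch of both options on 8.4 (paths and object names are placeholders): tablespaces are plain SQL, while moving pg_xlog is done at the filesystem level with the server stopped.

        -- New disk: create a tablespace and move a hot core table onto it
        -- (the directory must exist and be owned by the postgres OS user).
        CREATE TABLESPACE core_space LOCATION '/mnt/newdisk/pg_core';
        ALTER TABLE core_model.some_table SET TABLESPACE core_space;

        -- The pg_xlog move happens outside SQL, with PostgreSQL shut down:
        --   mv $PGDATA/pg_xlog /mnt/newdisk/pg_xlog
        --   ln -s /mnt/newdisk/pg_xlog $PGDATA/pg_xlog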

    Read the article

  • proxy.pac file performance optimization

    - by Tuinslak
    I reroute certain websites through a proxy with a proxy.pac file. It basically looks like this:

        if (shExpMatch(host, "www.youtube.com")) { return "PROXY proxy.domain.tld:8080; DIRECT"; }
        if (shExpMatch(host, "youtube.com"))     { return "PROXY proxy.domain.tld:8080; DIRECT"; }

    At the moment about 125 sites are rerouted using this method. However, I plan on adding quite a few more domains, and I'm guessing the list will eventually grow to 500-1000 domains. It's important not to reroute all traffic through the proxy. What's the best way to keep this file optimized, performance-wise? Thanks
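
    One common approach (a sketch, untested against your setup): replace the long chain of shExpMatch calls with a single object used as a hash table, so the per-request lookup cost stays roughly constant no matter how long the list gets:

        // Domains to reroute, stored once as object keys.
        var proxied = {
            "youtube.com": 1,
            "www.youtube.com": 1
            // ... the other domains ...
        };

        function FindProxyForURL(url, host) {
            if (proxied[host])
                return "PROXY proxy.domain.tld:8080; DIRECT";
            return "DIRECT";
        }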

    Read the article

  • Using optimization to assign by preference

    - by Aarthi
    I have 100 objects ("candies") that I need to distribute among five people so that each has an equal number of candies (in this case, 20 candies per person). However, each person has also expressed their candy preferences to me in a chart: top-favored candies receive 10 points, least-favored candies receive -10 points, and neutrally favored candies receive 0.5 points. I need to sort the items out so that:

    - each person receives the same number of candies;
    - each person's total "satisfaction" (points) is maximized;
    - my output is a list of each person's assigned items.

    I'm familiar with Excel's in-house Monte Carlo simulation tools (Solver, F9 dice roll, etc.) and would like to stick to those tools. While I know how to set up the chart and how to feed the column summation into Solver, I don't know how to get it to give me the desired output. Furthermore, how do I adjust Solver so it takes into account individual preferences rather than empirical ones? To wit: how do I begin setting up this model?
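
    Sticking with Excel is fine, but as a cross-check, here is the same model written out as a linear program in Python (a sketch; the preference matrix below is random stand-in data). One variable x[p, c] per (person, candy) pair, equality constraints saying "each candy is assigned exactly once" and "each person gets exactly 20"; because this is a transportation-type problem, the LP optimum comes out integral:

        import numpy as np
        from scipy.optimize import linprog

        n_people, n_candies, per_person = 5, 100, 20
        rng = np.random.default_rng(0)
        # Stand-in scores; replace with the real preference chart.
        pref = rng.choice([10.0, 0.5, -10.0], size=(n_people, n_candies))

        # Variables x[p, c], flattened row-major; x[p, c] = 1 means p gets c.
        A_eq, b_eq = [], []
        for c in range(n_candies):            # each candy assigned exactly once
            row = np.zeros(n_people * n_candies)
            row[c::n_candies] = 1
            A_eq.append(row); b_eq.append(1)
        for p in range(n_people):             # each person gets exactly 20
            row = np.zeros(n_people * n_candies)
            row[p * n_candies:(p + 1) * n_candies] = 1
            A_eq.append(row); b_eq.append(per_person)

        # linprog minimizes, so negate the scores to maximize satisfaction.
        res = linprog(-pref.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
                      bounds=[(0, 1)] * (n_people * n_candies))
        assign = res.x.reshape(n_people, n_candies).round().astype(int)
        print("total satisfaction:", (pref * assign).sum())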

    Read the article

  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by Olal'a
    I used to have a VPS (512MB) from Linode and I was running nginx + php5-fpm (which comes with php5.3.3) on Debian Lenny (i686). The total memory usage was about 90-100MB. Now I have another VPS (different hosting company) and I also run nginx + php5-fpm on Debian Lenny (x86_64). The system is 64-bit, so the memory usage is higher now, about 210-230MB, which I think is too much. Here is my php5-fpm.conf:

        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 5
        pm.max_requests = 300

    That's what the top command tells me:

        top - 15:36:58 up 3 days, 16:05,  1 user,  load average: 0.00, 0.00, 0.00
        Tasks: 209 total,   1 running, 208 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:    532288k total,   469628k used,    62660k free,    28760k buffers
        Swap:  1048568k total,      408k used,  1048160k free,   210060k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        22806 www-data  20   0  178m  67m  31m S    1 13.1   0:05.02 php5-fpm
         8980 mysql     20   0  241m  55m 7384 S    0 10.6   2:42.42 mysqld
        22807 www-data  20   0  162m  43m  22m S    0  8.3   0:04.84 php5-fpm
        22808 www-data  20   0  160m  41m  23m S    0  8.0   0:04.68 php5-fpm
        25102 www-data  20   0  151m  30m  21m S    0  5.9   0:00.80 php5-fpm
        10849 root      20   0 44100 8352 1808 S    0  1.6   0:03.16 munin-node
        22805 root      20   0  145m 4712 1472 S    0  0.9   0:00.16 php5-fpm
        21859 root      20   0 66168 3248 2540 S    1  0.6   0:00.02 sshd
        21863 root      20   0 66028 3188 2548 S    0  0.6   0:00.06 sshd
         3956 www-data  20   0 31756 3052  928 S    0  0.6   0:06.42 nginx
         3954 www-data  20   0 31712 3036  928 S    0  0.6   0:06.74 nginx
         3951 www-data  20   0 31712 3008  928 S    0  0.6   0:06.42 nginx
         3957 www-data  20   0 31688 2992  928 S    0  0.6   0:06.56 nginx
         3950 www-data  20   0 31676 2980  928 S    0  0.6   0:06.72 nginx
         3955 www-data  20   0 31552 2896  928 S    0  0.5   0:06.56 nginx
         3953 www-data  20   0 31552 2888  928 S    0  0.5   0:06.42 nginx
         3952 www-data  20   0 31544 2880  928 S    0  0.5   0:06.60 nginx

    So, the question is: is there any way to use less memory? Btw, I have 16 cores and it would be nice to make use of them...
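
    For what it's worth, a sketch of the usual levers in the FPM pool configuration (the values below are guesses for a 512MB box, not drop-in settings): cap how many children can exist, recycle them more often so resident memory can't creep up, and cap per-request memory inside each child.

        ; hypothetical values -- tune against your own RES figures in top
        pm = dynamic
        pm.max_children = 4        ; hard cap: ~4 x 40MB RES stays well under 512MB
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2
        pm.max_requests = 100      ; recycle children sooner to curb growth
        php_admin_value[memory_limit] = 64M   ; per-request ceiling in each child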

    Read the article

  • MySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe that MySQL needs further tuning is that New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results (disclaimer: I restarted MySQL earlier after tweaking some settings, so the results here may not be 100% accurate):

        >>  MySQLTuner 1.3.0 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering
        [OK] Currently running supported MySQL version 5.5.37-35.0
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
        [--] Data in MyISAM tables: 7G (Tables: 332)
        [--] Data in InnoDB tables: 213G (Tables: 8714)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [--] Data in MEMORY tables: 0B (Tables: 353)
        [!!] Total fragmented tables: 5492

        -------- Security Recommendations -------------------------------------------
        [!!] User '@host5.server1.autopartsnetwork.com' has no password set.
        [!!] User '@localhost' has no password set.
        [!!] User 'root@%' has no password set.

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
        [--] Reads / Writes: 95% / 5%
        [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
        [!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
        [OK] Slow queries: 0% (6K/5M)
        [OK] Highest usage of available connections: 5% (61/1024)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
        [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
        [OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
        [!!] Query cache prunes per day: 3486361
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
        [!!] Joins performed without indexes: 1328
        [OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
        [OK] Thread cache hit rate: 99% (61 created / 42K connections)
        [!!] Table cache hit rate: 19% (9K open / 49K opened)
        [OK] Open file limit used: 2% (712/25K)
        [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
        [!!] InnoDB buffer pool / data size: 32.0G/213.4G
        [OK] InnoDB log waits: 0

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increasing the query_cache size over 128M may reduce performance
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
            Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 512M) [see warning above]
            join_buffer_size (> 128.0M, or always use indexes with joins)
            table_cache (> 12288)
            innodb_buffer_pool_size (>= 213G)

    My my.cnf configuration is as follows:

        [client]
        port = 3306

        [mysqld_safe]
        nice = 0

        [mysqld]
        tmpdir = /var/lib/mysql/tmp
        user = mysql
        port = 3306
        skip-external-locking
        character-set-server = utf8
        collation-server = utf8_general_ci
        event_scheduler = 0
        key_buffer = 512M
        max_allowed_packet = 64M
        thread_stack = 512K
        thread_cache_size = 512
        sort_buffer_size = 24M
        read_buffer_size = 8M
        read_rnd_buffer_size = 24M
        # for some nightly processes client sessions set the join buffer to 8 GB
        join_buffer_size = 128M
        auto-increment-increment = 1
        auto-increment-offset = 1
        myisam-recover = BACKUP
        max_connections = 1024
        # max connect errors artificially high to support behaviors of NetScaler monitors
        max_connect_errors = 999999
        concurrent_insert = 2
        connect_timeout = 5
        wait_timeout = 180
        net_read_timeout = 120
        net_write_timeout = 120
        back_log = 128
        # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384
        table_open_cache = 12288
        tmp_table_size = 512M
        max_heap_table_size = 512M
        bulk_insert_buffer_size = 512M
        open-files-limit = 8192
        open-files = 1024
        query_cache_type = 1
        # large query limit supports SOAP and REST API integrations
        query_cache_limit = 4M
        # larger than 512 MB query cache size is problematic; this is typically ~60% full
        query_cache_size = 512M
        # set to true on read slaves
        read_only = false
        slow_query_log_file = /var/log/mysql/slow.log
        slow_query_log = 0
        long_query_time = 0.2
        expire_logs_days = 10
        max_binlog_size = 1024M
        binlog_cache_size = 32K
        sync_binlog = 0
        # SSD RAID10 technically has a write capacity of 10000 IOPS
        innodb_io_capacity = 400
        innodb_file_per_table
        innodb_table_locks = true
        innodb_lock_wait_timeout = 30
        # These servers have 80 CPU threads; match 1:1
        innodb_thread_concurrency = 48
        innodb_commit_concurrency = 2
        innodb_support_xa = true
        innodb_buffer_pool_size = 32G
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 2G
        skip-federated

        [mysqldump]
        quick
        quote-names
        single-transaction
        max_allowed_packet = 64M

    I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
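
    One detail worth spelling out from the tuner output above: the 220.0G "maximum possible memory usage" is just the global buffers plus the per-thread buffers multiplied by max_connections, i.e. roughly

        35.5 GB (global) + 184.5 MB x 1024 threads ≈ 220 GB

    which is why the tuner flags it at 174% of installed RAM. Shrinking the per-thread buffers (join_buffer_size, sort_buffer_size, read_rnd_buffer_size) or max_connections is what brings that worst case back down.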

    Read the article

  • Should we regularly schedule mysqlcheck (or database optimization)?

    - by scatteredbomb
    We run a forum with some 2 million posts, and I've noticed that if left untouched, the overhead in MySQL (as listed in phpMyAdmin) can get quite large (hundreds of megabytes). I'm wondering whether scheduling a regular mysqlcheck to optimize the tables is good practice. Is there any reason not to do it, say, once a week at an off-peak hour? There was a time over the summer when our site was constantly crashing because MySQL was using up all resources. That's when I noticed the huge amount of overhead; I optimized the database and haven't had any stability problems since. I figured that if that helped alleviate the issues, I should just set up a cron job to do it automatically.
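
    A sketch of what that cron job could look like (assumes credentials in ~/.my.cnf; note that OPTIMIZE TABLE locks each table while it runs, which is exactly why the off-peak window matters):

        # user crontab: Sundays at 04:30
        30 4 * * 0  mysqlcheck --optimize --all-databases --silent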

    Read the article

  • How efficient is PHP's substr?

    - by zildjohn01
    I'm writing a parser in PHP which must be able to handle large in-memory strings, so this is a somewhat important issue (i.e., please don't flame me for "premature optimization"). How does the substr function work? Does it make a second copy of the string data in memory, or does it reference the original? Should I worry about calling, for example, $str = substr($str, 1); in a loop?
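
    One way to answer this empirically rather than by reading the engine sources: watch memory_get_usage() around the call (a rough sketch; exact numbers vary by PHP version and build):

        <?php
        // Rough probe: does substr() copy? Compare memory before and after.
        $str = str_repeat('x', 10 * 1024 * 1024);   // ~10 MB string

        $before = memory_get_usage();
        $copy = substr($str, 1);                    // everything but the first byte
        $after = memory_get_usage();

        // If substr copies, the delta is on the order of the string size.
        printf("delta: %.1f MB\n", ($after - $before) / 1048576);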

    Read the article

  • mysql inserts & updates optimized

    - by user271619
    This is mostly an optimization question. I have many forms on my sites that do simple inserts and updates (nothing complicated), but several of the form's input fields are not necessary and may be left empty (again, nothing complicated). However, my SQL query names all columns in the statement. My question: is it best to optimize the insert/update queries appropriately and include only the changed columns in the query? We all hear that we shouldn't use "SELECT *" unless it's absolutely needed for displaying all columns, but what about inserts and updates? Hope this makes sense. I'm sure any amount of optimization is acceptable, but I never really hear about this specifically from anyone.
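
    For illustration, a sketch of the "only the changed columns" variant with PDO (table and column names are hypothetical; the whitelist is what keeps the dynamically built SQL safe). Whether the saving is measurable is exactly the kind of thing worth benchmarking before complicating the code:

        <?php
        // Build an UPDATE from only the submitted, non-empty fields.
        // $pdo is an existing PDO connection; names are illustrative.
        $allowed = array('name', 'email', 'phone');     // whitelist column names
        $changes = array_intersect_key($_POST, array_flip($allowed));
        $changes = array_filter($changes, 'strlen');    // drop fields left empty

        if ($changes) {
            $set = implode(', ', array_map(
                function ($col) { return "`$col` = :$col"; },
                array_keys($changes)
            ));
            $stmt = $pdo->prepare("UPDATE users SET $set WHERE id = :id");
            $stmt->execute($changes + array('id' => $_POST['id']));
        }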

    Read the article

  • Windows 7: Microsoft Desktop Optimization Pack updated, the virtualization and deployment pack

    Windows 7: update to the Microsoft Desktop Optimization Pack, the virtualization and deployment pack. Microsoft has just updated its deployment and virtualization solution pack, the Microsoft Desktop Optimization Pack (MDOP). The MDOP update mainly concerns MED-V (Microsoft Enterprise Desktop Virtualization), which is now available in version 2.0, and App-V 4, whose Service Pack 1 is now available. SP1 for Microsoft App-V 4 makes the application virtualization process easier and faster thanks to the integration of the package ...

    Read the article

  • Tips For Choosing the Best Search Engine Optimization Services For Your Company!

    With the profusion of companies offering SEO optimization services out there, it is important to know what to look for when choosing one. You have to know the salient qualities of an ideal SEO services company; armed with this information, you will be better equipped to make the right choice for your organization.

    Read the article

  • How Does a Landing Page Affect Search Engine Optimization?

    Search engine optimization, or SEO for short, is one of the many things those new to online commerce have to deal with when setting up their web presence. One SEO-related question that seems to come up frequently is: "Does a landing page affect search engine optimization, and if so, to what degree?" The short answer is: yes, it does.

    Read the article

  • Python: scipy optimization

    - by pear
    In scipy's fmin_slsqp (Sequential Least Squares Quadratic Programming), I tried reading the code in 'slsqp.py', provided with the scipy package, to find out what the criteria are for getting exit mode 0. I cannot find which statements in the code produce this exit mode. Please help me. The 'slsqp.py' code is as follows:

        exit_modes = { -1 : "Gradient evaluation required (g & a)",
                        0 : "Optimization terminated successfully.",
                        1 : "Function evaluation required (f & c)",
                        2 : "More equality constraints than independent variables",
                        3 : "More than 3*n iterations in LSQ subproblem",
                        4 : "Inequality constraints incompatible",
                        5 : "Singular matrix E in LSQ subproblem",
                        6 : "Singular matrix C in LSQ subproblem",
                        7 : "Rank-deficient equality constraint subproblem HFTI",
                        8 : "Positive directional derivative for linesearch",
                        9 : "Iteration limit exceeded" }

        def fmin_slsqp(func, x0, eqcons=[], f_eqcons=None, ieqcons=[],
                       f_ieqcons=None, bounds=[], fprime=None, fprime_eqcons=None,
                       fprime_ieqcons=None, args=(), iter=100, acc=1.0E-6,
                       iprint=1, full_output=0, epsilon=_epsilon):

            # Now do a lot of function wrapping

            # Wrap func
            feval, func = wrap_function(func, args)
            # Wrap fprime, if provided, or approx_fprime if not
            if fprime:
                geval, fprime = wrap_function(fprime, args)
            else:
                geval, fprime = wrap_function(approx_fprime, (func, epsilon))

            if f_eqcons:
                # Equality constraints provided via f_eqcons
                ceval, f_eqcons = wrap_function(f_eqcons, args)
                if fprime_eqcons:
                    # Wrap fprime_eqcons
                    geval, fprime_eqcons = wrap_function(fprime_eqcons, args)
                else:
                    # Wrap approx_jacobian
                    geval, fprime_eqcons = wrap_function(approx_jacobian,
                                                         (f_eqcons, epsilon))
            else:
                # Equality constraints provided via eqcons[]
                eqcons_prime = []
                for i in range(len(eqcons)):
                    eqcons_prime.append(None)
                    if eqcons[i]:
                        # Wrap eqcons and eqcons_prime
                        ceval, eqcons[i] = wrap_function(eqcons[i], args)
                        geval, eqcons_prime[i] = wrap_function(approx_fprime,
                                                               (eqcons[i], epsilon))

            if f_ieqcons:
                # Inequality constraints provided via f_ieqcons
                ceval, f_ieqcons = wrap_function(f_ieqcons, args)
                if fprime_ieqcons:
                    # Wrap fprime_ieqcons
                    geval, fprime_ieqcons = wrap_function(fprime_ieqcons, args)
                else:
                    # Wrap approx_jacobian
                    geval, fprime_ieqcons = wrap_function(approx_jacobian,
                                                          (f_ieqcons, epsilon))
            else:
                # Inequality constraints provided via ieqcons[]
                ieqcons_prime = []
                for i in range(len(ieqcons)):
                    ieqcons_prime.append(None)
                    if ieqcons[i]:
                        # Wrap ieqcons and ieqcons_prime
                        ceval, ieqcons[i] = wrap_function(ieqcons[i], args)
                        geval, ieqcons_prime[i] = wrap_function(approx_fprime,
                                                                (ieqcons[i], epsilon))

            # Transform x0 into an array.
            x = asfarray(x0).flatten()

            # Set the parameters that SLSQP will need
            # meq = The number of equality constraints
            if f_eqcons:
                meq = len(f_eqcons(x))
            else:
                meq = len(eqcons)
            if f_ieqcons:
                mieq = len(f_ieqcons(x))
            else:
                mieq = len(ieqcons)
            # m = The total number of constraints
            m = meq + mieq
            # la = The number of constraints, or 1 if there are no constraints
            la = array([1, m]).max()
            # n = The number of independent variables
            n = len(x)

            # Define the workspaces for SLSQP
            n1 = n + 1
            mineq = m - meq + n1 + n1
            len_w = (3*n1+m)*(n1+1)+(n1-meq+1)*(mineq+2) + 2*mineq+(n1+mineq)*(n1-meq) \
                    + 2*meq + n1 + (n+1)*n/2 + 2*m + 3*n + 3*n1 + 1
            len_jw = mineq
            w = zeros(len_w)
            jw = zeros(len_jw)

            # Decompose bounds into xl and xu
            if len(bounds) == 0:
                bounds = [(-1.0E12, 1.0E12) for i in range(n)]
            elif len(bounds) != n:
                raise IndexError, \
                    'SLSQP Error: If bounds is specified, len(bounds) == len(x0)'
            else:
                for i in range(len(bounds)):
                    if bounds[i][0] > bounds[i][1]:
                        raise ValueError, \
                            'SLSQP Error: lb > ub in bounds[' + str(i) + '] ' + str(bounds[4])
            xl = array([b[0] for b in bounds])
            xu = array([b[1] for b in bounds])

            # Initialize the iteration counter and the mode value
            mode = array(0, int)
            acc = array(acc, float)
            majiter = array(iter, int)
            majiter_prev = 0

            # Print the header if iprint >= 2
            if iprint >= 2:
                print "%5s %5s %16s %16s" % ("NIT", "FC", "OBJFUN", "GNORM")

            while 1:
                if mode == 0 or mode == 1:
                    # objective and constraint evaluation required
                    # Compute objective function
                    fx = func(x)
                    # Compute the constraints
                    if f_eqcons:
                        c_eq = f_eqcons(x)
                    else:
                        c_eq = array([eqcons[i](x) for i in range(meq)])
                    if f_ieqcons:
                        c_ieq = f_ieqcons(x)
                    else:
                        c_ieq = array([ieqcons[i](x) for i in range(len(ieqcons))])
                    # Now combine c_eq and c_ieq into a single matrix
                    if m == 0:
                        # no constraints
                        c = zeros([la])
                    else:
                        # constraints exist
                        if meq > 0 and mieq == 0:
                            # only equality constraints
                            c = c_eq
                        if meq == 0 and mieq > 0:
                            # only inequality constraints
                            c = c_ieq
                        if meq > 0 and mieq > 0:
                            # both equality and inequality constraints exist
                            c = append(c_eq, c_ieq)
                if mode == 0 or mode == -1:
                    # gradient evaluation required
                    # Compute the derivatives of the objective function
                    # For some reason SLSQP wants g dimensioned to n+1
                    g = append(fprime(x), 0.0)
                    # Compute the normals of the constraints
                    if fprime_eqcons:
                        a_eq = fprime_eqcons(x)
                    else:
                        a_eq = zeros([meq, n])
                        for i in range(meq):
                            a_eq[i] = eqcons_prime[i](x)
                    if fprime_ieqcons:
                        a_ieq = fprime_ieqcons(x)
                    else:
                        a_ieq = zeros([mieq, n])
                        for i in range(mieq):
                            a_ieq[i] = ieqcons_prime[i](x)
                    # Now combine a_eq and a_ieq into a single a matrix
                    if m == 0:
                        # no constraints
                        a = zeros([la, n])
                    elif meq > 0 and mieq == 0:
                        # only equality constraints
                        a = a_eq
                    elif meq == 0 and mieq > 0:
                        # only inequality constraints
                        a = a_ieq
                    elif meq > 0 and mieq > 0:
                        # both equality and inequality constraints exist
                        a = vstack((a_eq, a_ieq))
                    a = concatenate((a, zeros([la, 1])), 1)
                # Call SLSQP
                slsqp(m, meq, x, xl, xu, fx, c, g, a, acc, majiter, mode, w, jw)
                # Print the status of the current iterate if iprint > 2 and the
                # major iteration has incremented
                if iprint >= 2 and majiter > majiter_prev:
                    print "%5i %5i % 16.6E % 16.6E" % (majiter, feval[0],
                                                       fx, linalg.norm(g))
                # If exit mode is not -1 or 1, slsqp has completed
                if abs(mode) != 1:
                    break
                majiter_prev = int(majiter)

            # Optimization loop complete. Print status if requested
            if iprint >= 1:
                print exit_modes[int(mode)] + "    (Exit mode " + str(mode) + ')'
                print "            Current function value:", fx
                print "            Iterations:", majiter
                print "            Function evaluations:", feval[0]
                print "            Gradient evaluations:", geval[0]
            if not full_output:
                return x
            else:
                return [list(x), float(fx), int(majiter), int(mode),
                        exit_modes[int(mode)]]
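
    Reading the quoted source: the Python wrapper never sets mode to 0 itself. It initializes mode = array(0, int), passes it into the Fortran routine slsqp(...) on every iteration, and the Fortran side rewrites it in place; the loop simply exits when abs(mode) != 1. So exit mode 0 means the underlying Fortran code reported convergence to the requested accuracy acc. A quick way to see it through the public API (a sketch):

        # trivially convergent problem: minimize (x-1)^2 subject to x >= 0
        from scipy.optimize import fmin_slsqp

        out = fmin_slsqp(lambda x: (x[0] - 1.0)**2, [0.0],
                         ieqcons=[lambda x: x[0]], full_output=1)
        x, fx, its, imode, smode = out
        print imode, smode   # 0 "Optimization terminated successfully."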

    Read the article

  • Performance optimization for SQL Server: decrease stored procedure execution time or unload the server?

    - by tim
    We have a web service that provides search over hotels. There is a problem with performance: a single request to the service takes around 5000 ms, and almost all of that time is spent in the database executing stored procedures. During a request, our server (MSSQL 2008) consumes ~90% of the processor time. When 2 requests are made in parallel, the average time grows to around 7000 ms, and as the number of requests increases, the average response time increases as well. We have 20-30 requests per minute. Which kind of optimization is best in this case, keeping in mind that the goal is to provide stable response times for the service: (1) try to decrease the stored procedures' execution time, or (2) try to find a way to unload the server? It would be interesting to hear from people who deal with booking sites.

    Read the article

  • Optimization Techniques in Python

    - by fear-matrix
    Recently I developed a billing application for my company with Python/Django. For a few months everything was fine, but now I am observing that performance is dropping as more and more users use the application. The problem is that the application is now very critical for the finance team, and they are after my life to sort out the performance issue. I have no option but to find a way to increase the performance of the billing application. So, do you guys know any performance optimization techniques in Python that will really help me with this scalability issue?
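
    Before reaching for specific techniques, the usual first step is to profile and find where the time actually goes; a minimal sketch with the standard-library profiler (generate_invoices is a made-up stand-in for one of your slow billing paths):

        import cProfile
        import pstats

        def generate_invoices():
            # stand-in for the real billing routine
            total = 0
            for i in range(100000):
                total += i * i
            return total

        cProfile.run('generate_invoices()', 'billing.prof')
        stats = pstats.Stats('billing.prof')
        stats.sort_stats('cumulative').print_stats(10)   # top 10 offenders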

    Read the article

  • Sharepoint Web performance optimization

    - by hertzel
    We are running over SSL on the following server topology:

    - 1 ISA server (SSL termination / cache / proxy + AD authentication)
    - 1 SharePoint server
    - 1 IBM DB2 database as the enterprise/corporate DB
    - 1 MS SQL Server as the local DB

    We have recently optimized caching, compression, minification, and other ASP.NET best practices such as viewstate and cookie sizes, minimizing round trips, parallel connections/domain sharding, and a lot more... Still, we are not convinced that we are in an optimal position, as the network resources, i.e. bandwidth and especially latency, are out of our control! The client/browser-to-SharePoint path is trans-Atlantic (Asia, USA, Europe). As I understand it, the only ways to improve the network (latency) are: TCP/SSL optimization (hardware/software?) and CDNs (cloud, or our own?). Your opinions and insights would be much appreciated. Best regards, Hertzel

    Read the article

  • Java bytecode optimization

    - by Idob
    This is a basic question. I have code which shouldn't run on metadata beans, and all metadata beans are located under the metadata package. I use the reflection API to find out whether a class is located in the metadata package:

        if (newEntity.getClass().getPackage().getName().contains("metadata"))

    I use this check in several places within the code. The question is: should I instead do it once, with

        boolean isMetadata = false;
        if (newEntity.getClass().getPackage().getName().contains("metadata")) {
            isMetadata = true;
        }

    C++ performs optimizations and knows that such code was already called, so it won't call it again. Does Java make this optimization? I know the reflection API is a bit heavy, and I prefer not to waste expensive runtime.
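
    The JIT is unlikely to elide the repeated calls on its own, since it would have to prove the whole getClass().getPackage().getName().contains(...) chain is side-effect-free; hoisting it yourself is the safe bet. A sketch (hypothetical names, assumes Java 8+) that caches the answer per class, so the reflective lookup runs at most once per distinct class:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        final class MetadataCheck {
            private static final Map<Class<?>, Boolean> CACHE =
                    new ConcurrentHashMap<>();

            // Reflective package lookup runs once per class; then it's a map hit.
            static boolean isMetadata(Object entity) {
                return CACHE.computeIfAbsent(entity.getClass(), cls -> {
                    Package pkg = cls.getPackage();  // null for the default package
                    return pkg != null && pkg.getName().contains("metadata");
                });
            }
        }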

    Read the article

  • Creating an object in the loop

    - by Jacob
        std::vector<double> C(4);
        for (int i = 0; i < 1000; ++i)
            for (int j = 0; j < 2000; ++j) {
                C[0] = 1.0;
                C[1] = 1.0;
                C[2] = 1.0;
                C[3] = 1.0;
            }

    is much faster than

        for (int i = 0; i < 1000; ++i)
            for (int j = 0; j < 2000; ++j) {
                std::vector<double> C(4);
                C[0] = 1.0;
                C[1] = 1.0;
                C[2] = 1.0;
                C[3] = 1.0;
            }

    I realize this happens because std::vector is repeatedly constructed and destroyed in the loop, but I was under the impression this would be optimized away. Is it completely wrong to keep variables local in a loop whenever possible? I was under the (perhaps false) impression that this would provide optimization opportunities for the compiler. EDIT: I use VC++ 2005 (release mode) with full optimization (/Ox).
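
    For what it's worth, a sketch of the usual middle ground when the loop body really wants a fresh local: a fixed-size std::array (or a plain double C[4] on compilers as old as VC++ 2005) is pure stack storage, so there is no heap allocation for the optimizer to fail to hoist in the first place:

        #include <array>

        int main() {
            for (int i = 0; i < 1000; ++i)
                for (int j = 0; j < 2000; ++j) {
                    // No dynamic allocation: plain stack storage, so a
                    // loop-local here costs nothing.
                    std::array<double, 4> C = {1.0, 1.0, 1.0, 1.0};
                    (void)C;  // silence unused-variable warnings in this sketch
                }
            return 0;
        }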

    Read the article
