Search Results

Search found 9079 results on 364 pages for 'general tuning'.


  • HPCM 11.1.2.x - Outline Optimisation for Calculation Performance

    - by Jane Story
    When an HPCM application is first created, it is likely that you will want to carry out some optimisation on the HPCM application’s Essbase outline in order to improve calculation execution times. There are several things you may wish to consider.

    Because at least one dense dimension is required to deploy an application from HPCM to Essbase, “Measures” and “AllocationType”, as the only required dimensions in an HPCM application, are created dense by default. For optimisation reasons, however, you may wish to change this default dense/sparse configuration. In general, calculation scripts in HPCM execute best when they are targeting destinations with one or more dense dimensions. Therefore, consider your largest target stage, i.e. the stage with the most assignment destinations, and choose that as a dense dimension. When optimising an outline in this way, it is not possible to have a dense dimension in every target stage, so testing the dense/sparse settings in every stage is the key to finding the best configuration for each individual application.

    It is not possible to change the dense/sparse setting of individual cloned dimensions from EPMA: when a dimension that is to be repeated in multiple stages, and therefore cloned, is defined in EPMA, every instance of that dimension has the same storage setting. However, once the application has been deployed from EPMA to HPCM and from HPCM to Essbase, it is possible to make the dense/sparse changes to a cloned dimension directly in Essbase. This can be done by editing the properties of the outline in Essbase Administration Services (EAS) and manually changing the dense/sparse settings of individual dimensions. Note that such manual changes may not be preserved in all cases; see the deployment methods below for the full explanation.

    From 11.1.2.1 there are two methods of deployment from HPCM to Essbase: a “replace” deploy method and an “update” deploy method. “Replace” will delete the Essbase application and replace it; if this method is chosen, any changes made directly on the Essbase outline will be lost. If you use the “update” deploy method (with or without archiving and reloading data), the Essbase outline, including any manual changes you have made (i.e. changes to the dense/sparse settings of the cloned dimensions), will be preserved.

    Notes: if you are using the calculation optimisation technique mentioned in a previous blog to calculate multiple POVs (https://blogs.oracle.com/pa/entry/hpcm_11_1_2_optimising) and you are calculating all members of that POV dimension (e.g. all months in the Period dimension), you could consider making that dimension dense. Always review block sizes after all changes! The maximum block size recommended in the Essbase Database Administrator’s Guide is 100k for 32-bit Essbase and 200k for 64-bit Essbase; however, calculations may perform better with a larger than recommended block size, provided that sufficient memory is available on the Essbase server. Test different configurations to determine the optimal solution for your HPCM application.

    Please note that this blog article covers HPCM outline optimisation only. Additional performance tuning can be achieved by methodically testing database settings, i.e. data cache, index cache and/or commit block settings. For more information on Essbase tuning best practices, please review these topics in the Essbase Database Administrator’s Guide. For additional information on the commit block setting, please see the previous PA blog article https://blogs.oracle.com/pa/entry/essbase_11_1_2_commit
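    As a rough illustration of the cache and commit block settings mentioned above, a minimal MaxL sketch follows (the application/database names and sizes are hypothetical placeholders, not recommendations; always test against your own application):

        login admin identified by 'password' on localhost;

        /* data and index caches - illustrative starting sizes only */
        alter database HPCMApp.HPCMDb set data_cache_size 300m;
        alter database HPCMApp.HPCMDb set index_cache_size 100m;

        /* commit block setting discussed in the linked PA article */
        alter database HPCMApp.HPCMDb set implicit_commit after 3000 blocks;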

    Read the article

  • Intel Extreme Tuning Utility options are greyed out

    - by Abhishek Sha
    I have an ASUS K55VM with an Intel Core i7 3610QM (Ivy Bridge) and an NVIDIA GT630M. I'm trying to operate the Intel XTU but, as you can see in the screenshot, all the options are greyed out. Can you please help with this situation? Another issue is CPU Throttling (Intel SpeedStep), which is always shown as 0%, yet in the Intel Turbo Monitor the speed keeps changing dynamically. Why is the CPU Throttling always at 0%?

    Read the article

  • Solaris TCP stack tuning

    - by disserman
    We have a large web project (about 2-3k requests per second), using haproxy (http://haproxy.1wt.eu/) as a frontend and load balancer between the Java application servers. The frontend (haproxy) is running on Linux, but we are going to migrate it to Solaris 10, as all our other servers run Solaris. After switching the traffic over, I see two things: a) the web site loads more slowly (5-10 seconds with images, compared to 2-3 seconds on Linux); b) haproxy sometimes fails to perform a "lifecheck" (fetching a special web page and analyzing the HTTP response code) due to a socket timeout. After switching traffic back to Linux, everything is okay. I've tried to tune all the parameters I could find in /dev/tcp, but no progress. I believe the problem is some open-socket limitation. If someone can point me to the answer, it would be greatly appreciated. P.S. haproxy runs under a Xen DomU on Linux (kernel 2.6.18, Debian 5) and under a zone on Solaris (10 u8). The only thing we changed on Linux was increasing ip_conntrack_max (I believe the Solaris option tcp_conn_req_max_q is the closest equivalent).
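    A hedged sketch of the Solaris tunables most often raised for a busy HTTP frontend (values are illustrative; ndd settings do not survive a reboot, so persist them in an init script):

        # inspect, then raise, the listen queues and shrink TIME_WAIT
        ndd /dev/tcp tcp_conn_req_max_q
        ndd -set /dev/tcp tcp_conn_req_max_q 4096       # completed-connection backlog
        ndd -set /dev/tcp tcp_conn_req_max_q0 8192      # half-open (SYN) backlog
        ndd -set /dev/tcp tcp_time_wait_interval 60000  # ms before TIME_WAIT sockets free up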

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this:

        Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
        Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
        Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
        Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory

    Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters):

        current values are: 45984 61312 91968
        new values are:     183936 245248 367872

    After that, we started seeing a bizarre error message:

        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval!

    Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache. The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad: the CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However, we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster.

    After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hope of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here:

        /proc/sys/net/ipv4/route/gc_elasticity (8)
        /proc/sys/net/ipv4/route/gc_interval (60)
        /proc/sys/net/ipv4/route/gc_min_interval (0)
        /proc/sys/net/ipv4/route/gc_timeout (300)
        /proc/sys/net/ipv4/route/secret_interval (600)
        /proc/sys/net/ipv4/route/gc_thresh (?)
        rhash_entries (kernel parameter, default unknown?)

    We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high-traffic HAProxy instance?
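    For reference, these tunables can be inspected and adjusted live with sysctl; a sketch using the values described in the question (persist them in /etc/sysctl.conf to survive reboots):

        # read current values
        sysctl net.ipv4.route.gc_elasticity net.ipv4.route.secret_interval
        # the adjustments discussed above
        sysctl -w net.ipv4.route.gc_elasticity=4           # prune hash buckets more aggressively
        sysctl -w net.ipv4.route.secret_interval=86400     # full route-cache flush once a day
        sysctl -w net.ipv4.tcp_mem="183936 245248 367872"  # the 4x socket-memory bump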

    Read the article

  • Tuning Windows 7 for use in a VM

    - by intuited
    I'm running Windows 7 in a VirtualBox Virtual Machine, and would like to make it run in a more streamlined fashion. I'll be using the install primarily for testing web apps, and have no need for it to run quickly. I would like it to run with minimal memory requirements, and with minimal changes to its virtual hard drive's contents. Changes to the hard drive contents, for example the paging file, result in larger snapshot sizes. Another recent post of mine seems to be related to this issue, but does not directly address issues with Windows. One concern that I have is that Windows seems to be using 17% of its paging file even with over 900MB of memory marked "Standby" or "Free". My uneducated guess is that this is being used to store indexes or some other data that helps to speed up the system but is not really necessary. I'm also wondering if it's normal for Windows to use over 500 MB of "In Use" memory with no apps running. Will this amount decrease if I reduce the amount of "installed" memory in the VM? What steps can I take to reduce the system's memory footprint without incurring an increase in paging file usage?
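    One way to test whether the "In Use" figure tracks installed memory is simply to shrink the VM's RAM and compare; a minimal VirtualBox sketch (the VM name "Win7Test" is hypothetical, and the VM must be powered off first):

        VBoxManage modifyvm "Win7Test" --memory 1024    # base RAM in MB
        VBoxManage showvminfo "Win7Test" | grep -i memory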

    Read the article

  • postgres memory allocation tuning 2

    - by pstanton
    I've got an Ubuntu Linux system with 12GB of memory, most of which (at least 10GB) can be allocated solely to Postgres. The system also has a 6-disk 15k SCSI RAID 10 setup. The process I'm trying to optimise is twofold. Firstly, a single-threaded, single-connection process will do many inserts into 2-4 tables linked by foreign keys. Secondly, many different complex queries are run against the resulting data, using GROUP BY extensively; this part especially needs to be optimised. I have four of these processes running at once in order to make use of the quad-core CPU, so there will generally be no more than 5 concurrent connections (1 spare for admin tasks). What configuration changes to the default Postgres config would you recommend? I'm looking for the optimum values for things like work_mem, shared_buffers, etc. Relevant doco welcome. Thanks!
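    Not definitive numbers, but a sketch of common community starting points for a 12GB box with very few connections (assumes a Postgres version of that era, where checkpoint_segments still exists; verify every value against your own workload):

        # postgresql.conf - hypothetical starting points, not recommendations
        shared_buffers = 2GB              # ~25% of RAM is the usual opening move
        work_mem = 256MB                  # affordable with <= 5 connections; helps GROUP BY sorts
        maintenance_work_mem = 512MB
        effective_cache_size = 8GB        # planner hint: shared_buffers + OS cache
        wal_buffers = 16MB
        checkpoint_segments = 32          # spread checkpoint I/O during the insert phase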

    Read the article

  • Debian tuning for increasing read/write buffers

    - by Claudiu
    Is there a way to modify Debian settings so that more memory is used for disk read/write caching? I am already using RAID 0, but that's not enough for multiple users, and the disk is struggling to keep up. Torrents use the disk very heavily, and rTorrent doesn't have cache settings.
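    Linux sizes the page cache automatically from free memory, but the dirty-writeback knobs control how much write traffic may buffer in RAM before being forced to disk; a hedged sketch (illustrative values; add the same keys to /etc/sysctl.conf to persist):

        sysctl -w vm.dirty_background_ratio=10   # % of RAM dirty before background writeback starts
        sysctl -w vm.dirty_ratio=40              # % of RAM dirty before writers are forced to block
        sysctl -w vm.vfs_cache_pressure=50       # keep dentry/inode caches around longer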

    Read the article

  • Fine-tuning a LNMP stack

    - by Norman
    I'm in the process of setting up a server with 4GB RAM and 2 CPUs. The stack will be CentOS + NGINX + MySQL + PHP (with APC) and spawn-fcgi. It will be used to serve 10 WordPress blogs, 3 of which receive about 20,000 hits per day. Each WordPress instance is equipped with W3 Total Cache. I have a few variables to play with: NGINX (how many worker_processes, worker_connections, etc.), PHP (what parameters should I change in php.ini? what about APC?), and spawn-fcgi (right now I have 6 php-cgi processes spawned; how many should I have?). I realize it's hard to tell without testing, but if you could provide some ballpark numbers, that would be helpful too.
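    For the NGINX side, a conventional starting point for a 2-CPU box looks like the sketch below (ballpark values only; the PHP and spawn-fcgi numbers depend on per-process memory, which you would have to measure):

        # nginx.conf - hypothetical starting point for 2 CPUs
        worker_processes  2;              # one worker per core
        events {
            worker_connections  1024;     # per worker
        }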

    Read the article

  • Recommendation for tuning 100's of SQL databases

    - by wayne
    Hi, I'm running several SQL Servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? As there are at least 600 or more catalogs, I can't have someone manually profile and index each one according to its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine!
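    One automatable angle (a sketch, not a full answer): both SQL 2005 and 2008 collect missing-index suggestions in DMVs that can be queried per instance and ranked, for example:

        -- rank the engine's own missing-index suggestions by rough estimated benefit
        SELECT TOP 25
            migs.avg_total_user_cost * migs.avg_user_impact
                * (migs.user_seeks + migs.user_scans) AS est_benefit,
            mid.statement AS table_name,
            mid.equality_columns, mid.inequality_columns, mid.included_columns
        FROM sys.dm_db_missing_index_group_stats AS migs
        JOIN sys.dm_db_missing_index_groups AS mig
            ON migs.group_handle = mig.index_group_handle
        JOIN sys.dm_db_missing_index_details AS mid
            ON mig.index_handle = mid.index_handle
        ORDER BY est_benefit DESC;

    Note that these counters reset on instance restart, and the suggestions still need human review before creating indexes across 600 catalogs.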

    Read the article

  • Exchange Server 2007 message tracking log tuning?

    - by Albert Widjaja
    Hi all, what is the best practice if I want a retention of, let's say, 6 months? I'm confused about which parameter should/can be changed.

        Get-ExchangeServer | where {$_.isHubTransportServer -eq $true} | Get-TransportServer |
            select Name, *MessageTracking* | ft -AutoSize

        Name         MessageTrackingLogEnabled  MessageTrackingLogMaxAge  MessageTrackingLogMaxDirectorySize  MessageTrackingLogMaxFileSize  MessageTrackingLogPath
        ----         -------------------------  ------------------------  ----------------------------------  -----------------------------  ----------------------
        ExHTServer1  True                       20.00:00:00               250MB                               10MB                           D:\Program Files\M...
        ExHTServer2  True                       20.00:00:00               250MB                               10MB                           D:\Program Files\M...
        ExHTServer3  True                       20.00:00:00               250MB                               10MB                           D:\Program Files\M...

    Thanks, Albert
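    For what it's worth, MessageTrackingLogMaxAge is the age cap, but MessageTrackingLogMaxDirectorySize also has to be large enough that six months of logs are not purged early for space. A hedged sketch (values are illustrative; size the directory from your own daily log volume):

        Get-ExchangeServer | where {$_.isHubTransportServer -eq $true} | Get-TransportServer |
            Set-TransportServer -MessageTrackingLogMaxAge 180.00:00:00 `
                                -MessageTrackingLogMaxDirectorySize 2GB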

    Read the article

  • Configuration Tuning for PostgreSQL 9.1 PostGIS 1.5 Ubuntu 12.04 Server

    - by Martin
    My server performance is poor. At times SSH, top, and other commands are very slow to respond, taking several seconds or more. A query that normally takes 5 minutes can sometimes take 30 minutes. The database is mostly used to run a spatial query (grid and summarize) on approximately 500GB of stored data spread across 4 tables. Restarting the server works as a temporary fix, but cannot be used as a long-term solution. Any suggestions for how to diagnose and solve my performance issues?

    Hardware and configuration:

        3.3 GHz Intel quad-core i5
        16 GB DDR3 RAM
        6 TB software RAID 10 (6 x 2 TB drives)
        Ubuntu 12.04 64-bit
        Postgres 9.1, PostGIS 1.5
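    A first-pass diagnosis sketch to separate I/O saturation, swapping, and stuck queries (the database name is a placeholder; the pg_stat_activity column names shown are the 9.1 ones, renamed in 9.2):

        iostat -x 5     # %util and await on the RAID 10 volume: is it I/O-bound?
        vmstat 5        # si/so columns show swapping; wa shows I/O wait
        free -m         # how much RAM is free vs. sitting in the page cache
        # long-running queries under Postgres 9.1
        psql -d yourdb -c "SELECT procpid, now() - query_start AS runtime, current_query
                           FROM pg_stat_activity WHERE current_query <> '<IDLE>';"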

    Read the article

  • memory tuning with rails/unicorn running on ubuntu

    - by user970193
    I am running unicorn on Ubuntu 11, Rails 3.0, and Ruby 1.8.7. It is an 8-core EC2 box, and I am running 15 workers. CPU never seems to get pinned, and I seem to be handling requests pretty nicely. My question concerns memory usage, and what concerns I should have with what I am seeing (if any). Here is the scenario: under constant load (about 15 reqs/sec coming in from nginx), over the course of an hour, each server in the 3-server cluster loses about 100MB/hour. This is a linear slope for about 6 hours, then it appears to level out, but still seems to lose about 10MB/hour. If I drop the page caches using the Linux command echo 1 > /proc/sys/vm/drop_caches, the available free memory shoots back up to what it was when I started the unicorns, and the memory-loss pattern begins again over the hours.

    Before:

                     total       used       free     shared    buffers     cached
        Mem:       7130244    5005376    2124868          0     113628     422856
        -/+ buffers/cache:    4468892    2661352
        Swap:     33554428          0   33554428

    After:

                     total       used       free     shared    buffers     cached
        Mem:       7130244    4467144    2663100          0        228      11172
        -/+ buffers/cache:    4455744    2674500
        Swap:     33554428          0   33554428

    My Ruby code does use memoization and I'm assuming Ruby/Rails/Unicorn keeps its own caches... what I'm wondering is: should I be worried about this behaviour? FWIW, my Unicorn config:

        worker_processes 15
        listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog => 1024
        listen 8080, :tcp_nopush => true
        timeout 180
        pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid"
        GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true

        before_fork do |server, worker|
          STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK"
          print_gemfile_location
          defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
          defined?(Resque) and Resque.redis.client.disconnect
          old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin"
          if File.exists?(old_pid) && server.pid != old_pid
            begin
              Process.kill("QUIT", File.read(old_pid).to_i)
            rescue Errno::ENOENT, Errno::ESRCH
              # already killed
            end
          end
          File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w"){|f| f.print($$.to_s)}
        end

        after_fork do |server, worker|
          defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
          defined?(Resque) and Resque.redis.client.connect
        end

    Is there a need to experiment with enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, and when/as the system needs more memory, it will empty the caches by itself, without me manually running that cache command? Basically, is this normal, expected behaviour? TIA
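    One caveat worth checking before worrying: "free" counts the kernel page cache as used memory, and the fact that drop_caches releases it suggests much of the "loss" is just the page cache filling up. A hedged way to watch what the unicorn workers themselves hold is to sum their resident set sizes over time:

        # sum resident memory (RSS) of unicorn processes, in MB
        ps -eo rss,args | grep '[u]nicorn' | awk '{ sum += $1 } END { printf "%.0f MB\n", sum/1024 }'

    If that number stays flat over the hours, the workers are not leaking and the pattern is ordinary page-cache behaviour.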

    Read the article

  • Tuning OpenVZ Containers to work better with Java?

    - by Daniel
    I have an 8 GB RAM dedicated server and currently have KVM virtual machines running on it (successfully); however, I'm considering moving to OpenVZ, as KVM seems a bit overkill, with a lot of overhead, for what I use it for. In the past I have used OpenVZ containers, hosted by myself and by other providers, and Java doesn't seem to work well in them. One example: if I give a container 2 GB RAM (no burst; with or without vswap doesn't matter), a Java instance can only be tuned to use at most about 1500 MB of that RAM (-Xmx, -Xms). Ideally, I wish to be able to create "mini" containers with about 256MB, 512MB, or 768MB RAM and run some Java instances in them. My question is: I'm trying to find an ideal way to tune an OpenVZ container configuration to work better with Java memory. Please don't suggest anything related to Java settings; I'm looking for OpenVZ-specific answers, though I welcome any suggestion that may help. Much appreciated, Daniel
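    A hedged sketch of the usual suspect: on non-vswap kernels, privvmpages caps the container's address space, and a JVM reserves far more virtual memory than its configured heap, so the heap hits the wall early. The container ID and values here are hypothetical:

        # raise the address-space cap well above the intended heap (units are 4 KB pages)
        vzctl set 101 --privvmpages 786432:786432 --save   # 786432 pages = 3 GB
        # on vswap-enabled kernels the simpler form applies instead:
        vzctl set 101 --ram 2G --swap 512M --save
        # inside the container, watch for failcnt increases on privvmpages:
        cat /proc/user_beancounters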

    Read the article

  • Recycle application pool, warm-up scripts - performance tuning on a SharePoint WCM site

    - by joel14141
    I am trying to tune a public-facing WCM site we have in SharePoint, and I have the following doubts. By default, application pools are set to recycle themselves at 2 AM each night, and because of that we need warm-up scripts. But as I googled this topic I found mixed reactions: some MVPs say it is not advisable to recycle the application pool daily, and some say otherwise, so I am confused. If I am not recycling the application pool, then I don't have to use warm-up scripts. But my site is public facing and its audience is all around the globe, so is it advisable to recycle it daily when that will affect the performance of my site? Even though I would run warm-up scripts once, I don't think it would perform as well as it should. Any advice on that?

    Read the article

  • Tuning MySQL to consume less memory

    - by Alex
    I have a VM with 2GB RAM (full specs), and I am setting up a site which has one table in particular with over a million records. There's little or no usage of this particular database (perhaps once or twice a day), but simply running MySQL grinds the whole server to a halt. I've looked through the top results, but nothing is really denting the CPU; the memory seems to be the issue. The site isn't even live or taking requests yet. The memory situation looks like this:

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:          2006       1880        126          0          3         53
        -/+ buffers/cache:       1823        183
        Swap:         2047        345       1702

    Are there any good pointers to tune MySQL to stop it hogging the system memory? Thanks very much. EDIT (requested by 8bit): http://tny.cz/b41a0b12
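    The biggest levers are typically the buffer and connection settings; a hedged my.cnf sketch for a mostly idle 2GB box (illustrative values, and which ones apply depends on the storage engine and MySQL version shown in the linked paste):

        [mysqld]
        innodb_buffer_pool_size = 128M    # usually the single biggest consumer
        key_buffer_size         = 16M     # only matters for MyISAM tables
        max_connections         = 25      # each connection reserves per-thread buffers
        performance_schema      = OFF     # worthwhile savings if this is 5.6+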

    Read the article

  • EPM 11.1.2 - In WebLogic Server, Enable Native IO Performance Pack

    - by Ahmed Awan
    Performance can be improved by enabling native I/O in production mode. WebLogic Server benchmarks show major performance improvements when native performance packs are used on machines that host Oracle WebLogic Server instances. Important note: always enable native I/O, if available, and check for errors at startup to make sure it is being initialized properly. Tip: the NATIVE performance packs are enabled by default in the configuration shipped with your distribution. You can use the Administration Console to verify that performance packs are enabled by clicking on each managed server and then clicking the Tuning tab.
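    For reference, the flag the console's Tuning tab toggles is persisted per server in the domain's config.xml; a sketch of the relevant fragment (the server name is site-specific, and the element should be verified against your WebLogic version's schema). If the pack fails to load, the server log typically shows a warning along the lines of "Unable to load performance pack. Using Java I/O instead."

        <!-- domain config.xml fragment: native I/O enabled for one managed server -->
        <server>
          <name>EPMServer1</name>
          <native-io-enabled>true</native-io-enabled>
        </server>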

    Read the article

  • Ubuntu - Upgrade to 10.04 - general error mounting filesystems

    - by JC Denton
    Hello all, Using Update Manager, I upgraded my 8.x LTS installation to 10.04. After rebooting, the system encountered an error and dropped into the recovery console. It appeared to be a problem caused by ureadahead, as described here: http://ubuntuguide.net/howto-fix-ureadahead-problem-after-upgrading-to-ubuntu-10-04. So I renamed ureadahead.conf to ureadahead.moved (after remounting the partition read-write). This did not help, so I renamed the file back again. After rebooting, the following errors appear:

        ureadahead terminated with status 5
        udev_monitor_new_from_netlink: error getting socket: Invalid argument
        mountall: mountall.c:3204: assertion failed in main: udev_monitor = udev_monitor_new_from_netlink(udev, "udev")
        init: mountall main process (2532) killed by ABRT signal
        General error mounting filesystems

    How can I get my system to boot again properly? Thanks

    Read the article

  • General purpose ticketing/tech support system [closed]

    - by crazybyte
    Possible Duplicate: What’s your favorite ticketing system? I was wondering if somebody could recommend a very user-friendly or simple general-purpose ticketing/tech support system. I need something that is web based, preferably open-source/free software, implemented using PHP, Ruby, Ruby on Rails, or Java (as the back end) with MySQL or PostgreSQL as the database engine. I need something that is not development-management or project-management oriented, like Eventum or similar (random example); something a user can connect to, open a tech support request, and follow until it is solved or dropped. I need it to be open-source so that I can modify or extend it if there is a need. I have tried a number of such systems, and I found that osTicket or eTicket is close to what I need, but the code is somewhat flaky and some of the features work badly or behave strangely. Any thoughts/advice on where to find something similar? Thanks!

    Read the article

  • Getting the general log to work in MySQL 5.6.8

    - by Benjamin
    I can't get the general log to work in this version of MySQL. I added the following lines to /usr/my.cnf:

        general_log = 1
        general_log_file = "/var/log/mysql.log"

    Then restarted the server:

        [root@localhost ~]# service mysql restart
        Shutting down MySQL.. SUCCESS!
        Starting MySQL. SUCCESS!

    The settings seem to be taken into account:

        mysql> SHOW VARIABLES LIKE 'general_log%';
        +------------------+--------------------+
        | Variable_name    | Value              |
        +------------------+--------------------+
        | general_log      | ON                 |
        | general_log_file | /var/log/mysql.log |
        +------------------+--------------------+
        2 rows in set (0.01 sec)

    But the log is never created:

        [root@localhost ~]# mysqladmin flush-logs
        [root@localhost ~]# ls -al /var/log/mysql.log
        ls: cannot access /var/log/mysql.log: No such file or directory

    Any idea why?

    Read the article

  • New Oracle Database Performance and Tuning Specialization version available to all partners!

    - by Catalin Teodor
    Any partner working with the Oracle Database should take advantage of the new Oracle Database Performance and Tuning specialization version. Partners should review the criteria, see what’s new in this specialization update, and plan to take the new version, as the Oracle Database 11g Performance Tuning specialization becomes non-qualifying as of August 1, 2015.

    Read the article
