Search Results

Search found 7217 results on 289 pages for 'jboss cache'.

  • Hash Sum Mismatch using preseed (Ubuntu Server 12.04)

    - by xorma
    My install through preseed fails at around 80%, on "Select and install software". On VT-4 I can see Hash Sum mismatch errors. This may be because I am going through a firewall which is caching files. There is a no-cache option for apt, but I can't seem to get it to work with preseed. I have tried:

        d-i debian-installer/no-cache string true
        d-i apt-setup/no-cache boolean true
        d-i preseed/early_command string mkdir -p /target/etc/apt/apt.conf.d; echo "Acquire::http {No-Cache=True;};" > /target/etc/apt/apt.conf.d/no-cache

    but none of these work. It appears that the early_command runs too early, so its output is overwritten once the install starts. I'm not sure whether the other commands are even correct. Does anyone know the correct way of achieving this through preseed?
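
    For reference, this is the syntax apt itself documents for the option (the echo line above uses a slightly different form); whether a file written this way survives until package installation is exactly what the question is about, so treat this only as a sketch of the option syntax, with an assumed file name:

        # /target/etc/apt/apt.conf.d/99no-cache  (file name assumed)
        # Ask intermediate proxies not to serve cached copies.
        Acquire::http::No-Cache "true";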

  • Local Data Cache - How do I refresh the local db when I add fields to remote db?

    - by Chu
    I'm using a Local Data Cache in an ASP.NET 3.5 environment. I made a change in my main database by adding a new field. I double-click the .SYNC file in my project to start the Local Data Cache wizard again. The wizard starts and I click OK, in the hope that it will re-query my database and add the new field to the local database file. Instead, I get an error saying "Synchronizing the database failed with the message: Unable to enumerate changes at the DbServerSyncProvider..." The only way I know to get things working again is to delete the .SYNC file along with the local database and start from scratch. There's got to be an easier way... anyone know it?

  • Better performance to query the DB or cache small result sets?

    - by user169867
    Say I need to populate 4 or 5 dropdowns with items from a database. Each dropdown will have fewer than 15 items in it, and these items almost never change. Now, I could query the DB each time the page is accessed, or I could grab the values from a custom class that checks whether they already exist in ASP.NET's cache and queries the DB to update the cache only if they don't. It's trivial for me to write, but I'm unsure whether the performance would actually be better. I think it would be (although probably not by anything huge). What do you think?
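
    The pattern described here is a cache-aside read. The asker's context is ASP.NET, but the shape is language-neutral; below is a minimal sketch in Java, with a hypothetical loadFromDb() standing in for the existing query and a ConcurrentHashMap standing in for the application cache:

        import java.util.List;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class DropdownCache {
            private final ConcurrentMap<String, List<String>> cache = new ConcurrentHashMap<>();

            // Cache-aside: return the cached list if present, otherwise load it once and keep it.
            public List<String> getItems(String listName) {
                return cache.computeIfAbsent(listName, this::loadFromDb);
            }

            // Hypothetical DB call; in the real app this is the existing dropdown query.
            private List<String> loadFromDb(String listName) {
                return List.of("item1", "item2"); // placeholder data
            }
        }

    For lists this small the saving per request is microseconds versus a database round-trip, so the main argument for the cache is fewer DB round-trips under load rather than raw page speed.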

  • Observer not clearing cache in Rails 2.3.2 - please help.

    - by Jason
    Hi, we are using Rails 2.3.2, Ruby 1.8 and memcache. In my Posts controller I have:

        cache_sweeper Company::Caching::Sweepers::PostSweeper, :only => [:save_post]

    I have created the following module:

        module Company
          module Caching
            module Sweepers
              class PostSweeper < ActionController::Caching::Sweeper
                observe Post

                def after_save(post)
                  Rails.cache.delete("post_" + post.permalink)
                end
              end
            end
          end
        end

    but when the save_post method is invoked, the cache is never deleted. Just hoping someone can see what I am doing wrong here. Thanks.

  • Coherence Query Performance in Large Clusters

    - by jpurdy
    Large clusters (measured in terms of the number of storage-enabled members participating in the largest cache services) may introduce challenges when issuing queries. There is no particular cluster-size threshold for this; rather, there is a gradually increasing tendency for issues to arise. The most obvious challenges are twofold: a client's perceived query latency will be determined by the slowest responder (more likely to be a factor in larger clusters), and adding additional cache servers will not increase query throughput if the query processing is not compute-bound (which is generally the case for most indexed queries). If the data set can take advantage of the partition affinity features of Coherence, then the application can use a PartitionedFilter to target a query at a single server (using partition affinity to ensure that all of the data is in a single partition). If this cannot be done, then avoiding an excessive number of cache server JVMs will help, as will ensuring that each cache server has sufficient CPU resources available and is properly configured to minimize GC pauses (the most common cause of a slow-responding cache server).
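
    A minimal sketch of the PartitionedFilter approach mentioned above, assuming the classic Coherence (3.x-era) API, a distributed cache, and keys that use key association so one partition holds the whole affinity group; the cache name and extractor are illustrative:

        import java.util.Set;

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.PartitionedService;
        import com.tangosol.net.partition.PartitionSet;
        import com.tangosol.util.Filter;
        import com.tangosol.util.filter.EqualsFilter;
        import com.tangosol.util.filter.PartitionedFilter;

        public class AffinityQuery {
            // Query only the partition that owns the given affinity key, instead of
            // fanning the filter out to every storage-enabled member.
            public static Set queryOnePartition(Object affinityKey) {
                NamedCache cache = CacheFactory.getCache("orders"); // illustrative name
                PartitionedService service = (PartitionedService) cache.getCacheService();

                int owned = service.getKeyPartitioningStrategy().getKeyPartition(affinityKey);
                PartitionSet parts = new PartitionSet(service.getPartitionCount());
                parts.add(owned);

                Filter filter = new EqualsFilter("getCustomerId", affinityKey); // illustrative
                return cache.entrySet(new PartitionedFilter(filter, parts));
            }
        }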

  • How Do I Roll Out WP-Cache To 1000 WordPress Blogs?

    - by Volomike
    My client has 1000 WordPress blogs hosted on a server for customers. Each one is in its own domain through cPanel and suPHP, running in CGI mode on Apache 2.2. Now he wants me (I'm the PHP programmer) to get WP-Cache rolled out to each of these blogs, and not just activated, but enabled. He also wants the timeout value set to 2 days instead of the default setting. I have root on the LAMP server. What is the preferred way to roll out an update to each blog such that, on a page view, it checks whether WP-Cache is enabled; if not, it should copy it out from a central source, activate it, and then enable it along with the different timeout value.

  • Can a proxy server cache SSL GETs? If not, would response body encryption suffice?

    - by Damian Hickey
    Can a (or any) proxy server cache content that is requested by a client over https? As the proxy server can't see the querystring or the http headers, I reckon it can't. I'm considering a desktop application, run by a number of people behind their company's proxy. This application may access services across the internet, and I'd like to take advantage of the built-in internet caching infrastructure for 'reads'. If caching proxy servers can't cache SSL-delivered content, would simply encrypting the content of a response be a viable option? I am considering having all GET requests that we wish to be cacheable performed over http, with the body encrypted using asymmetric encryption, where each client has the decryption key. Any time we wish to perform a GET that is not cacheable, or a POST operation, it will be performed over SSL.
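
    To make the proposal concrete: raw asymmetric (e.g. RSA) encryption is limited to payloads smaller than the key size, so in practice the response body would be encrypted with a symmetric content key that the clients hold (or that an RSA key wraps). Below is a minimal decryption sketch, assuming the client already has a 128-bit AES key and the server sends IV-prefixed AES/CBC ciphertext over plain http; the wire format and names are assumptions, not part of the question:

        import java.io.InputStream;
        import java.net.URL;
        import java.util.Arrays;
        import javax.crypto.Cipher;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.SecretKeySpec;

        public class CacheableGet {
            // Fetch over plain (cacheable) http, then decrypt the IV-prefixed body.
            public static byte[] fetchAndDecrypt(String url, byte[] aesKey) throws Exception {
                byte[] body;
                try (InputStream in = new URL(url).openStream()) {
                    body = in.readAllBytes();
                }
                byte[] iv = Arrays.copyOfRange(body, 0, 16);           // first 16 bytes: IV
                byte[] ciphertext = Arrays.copyOfRange(body, 16, body.length);

                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.DECRYPT_MODE,
                            new SecretKeySpec(aesKey, "AES"), new IvParameterSpec(iv));
                return cipher.doFinal(ciphertext);
            }
        }

    Note the tension in the scheme: for a proxy to cache a response, the bytes must be identical across requests, which means reusing the IV and ciphertext for a given resource version, and the URLs and headers stay visible on the wire.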

  • Basic memcached question

    - by Aadith
    I have been reading up on distributed hashing. I learnt that consistent hashing is used for distributing keys among cache machines. I also learnt that a key is duplicated on multiple caches to handle the failure of cache hosts. But what I have come across on memcached doesn't seem to be in alignment with all this. I read that all cache nodes are independent of each other and that if a cache goes down, requests go to the DB. There's no mention of a cache miss on a host resulting in that host directing the request to another host which could either be holding the key or be nearer to it. Can you please tell me how these two fit together? Is memcached a very preliminary form of distributed hashing which doesn't have much sophistication?
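
    For context, the consistent hashing the asker read about lives in the memcached client, not the servers: the client hashes each key onto a ring of server positions and talks directly to the owner, which is why the nodes can stay ignorant of each other. A minimal sketch of such a ring (without the virtual-node replicas or key duplication that production clients, such as spymemcached in ketama mode, add):

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.SortedMap;
        import java.util.TreeMap;

        public class HashRing {
            private final TreeMap<Long, String> ring = new TreeMap<>();

            public void addServer(String server) throws Exception {
                ring.put(hash(server), server);
            }

            // The owner of a key is the first server at or after the key's hash,
            // wrapping around to the start of the ring.
            public String serverFor(String key) throws Exception {
                SortedMap<Long, String> tail = ring.tailMap(hash(key));
                return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
            }

            private long hash(String s) throws Exception {
                byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
                return ((long) (d[3] & 0xFF) << 24) | ((d[2] & 0xFF) << 16)
                     | ((d[1] & 0xFF) << 8) | (d[0] & 0xFF);
            }
        }

    Replicating keys across nodes is indeed not something memcached does: if a node dies, its share of keys simply miss and are refilled from the DB, which matches what the asker read.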

  • J2EE/EJB + service locator: is it safe to cache EJB Home lookup results?

    - by Guillaume
    In a J2EE application, we are using EJB2 on WebLogic. To avoid losing time building the initial context and looking up the EJB Home interface, I'm considering the Service Locator pattern. But after a little searching on the web, I found that even if this pattern is often recommended for InitialContext caching, there are some negative opinions about EJB Home caching. Questions: Is it safe to cache an EJB Home lookup result? What will happen if one of my cluster nodes stops working? What will happen if I install a new version of the EJB without refreshing the service locator's cache?
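
    A minimal Service Locator sketch for EJB2, with the mitigation usually suggested for the concerns above: cache the Home, but evict the entry and look it up again if a call through it fails (for example after a redeploy or a node failure). The JNDI name and home class are placeholders:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import javax.ejb.EJBHome;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.rmi.PortableRemoteObject;

        public class ServiceLocator {
            private static final Map<String, EJBHome> HOMES = new ConcurrentHashMap<>();

            public static EJBHome getHome(String jndiName, Class<?> homeClass) throws NamingException {
                EJBHome home = HOMES.get(jndiName);
                if (home == null) {
                    Object ref = new InitialContext().lookup(jndiName);
                    home = (EJBHome) PortableRemoteObject.narrow(ref, homeClass);
                    HOMES.put(jndiName, home);
                }
                return home;
            }

            // Call this when an invocation through a cached home fails (stale stub
            // after a redeploy, dead node, ...) so the next call re-looks it up.
            public static void evict(String jndiName) {
                HOMES.remove(jndiName);
            }
        }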

  • In Linux, how can I completely disregard the contents of /etc/ld.so.cache?

    - by BillyBBone
    Hi, for the purposes of prototyping a new set of shared libraries in a development sandbox (to which I don't have root access), I'd like to know how to execute a binary while completely overriding the contents of /etc/ld.so.cache, so that none of the system libraries get loaded. How can this be done? I have looked at mechanisms like setting the LD_LIBRARY_PATH environment variable or launching the program wrapped inside /lib/ld-linux.so, but these methods all seem to supplement the loading of libraries from /etc/ld.so.cache rather than override it completely. Help?

  • Hold most of the objects in cache/memory instead of the database?

    - by feiroox
    Hi all, it just occurred to me: why not hold most of the objects in a cache (memory) when an application starts, if it's not that large a web application? Or have a setting for how much I want to put in the cache/memory. I guess it could require something like 1 GB of RAM, or a lot less. All of this in order to speed up the application even more by not querying the database. Is it a good idea?
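
    The "setting for how much" is usually expressed as a bounded, least-recently-used cache, so memory stays capped no matter how much the application tries to hold. A minimal in-process sketch (the capacity figure is arbitrary):

        import java.util.Collections;
        import java.util.LinkedHashMap;
        import java.util.Map;

        // A tiny LRU cache: once 'capacity' entries are held, the least recently
        // accessed entry is evicted on the next insert.
        public class LruCache<K, V> extends LinkedHashMap<K, V> {
            private final int capacity;

            public LruCache(int capacity) {
                super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
                this.capacity = capacity;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }

            public static <K, V> Map<K, V> threadSafe(int capacity) {
                return Collections.synchronizedMap(new LruCache<>(capacity));
            }
        }

    The other half of the question is staleness: anything held in memory must be invalidated or refreshed when the database changes, which is where most of the complexity of this idea lives.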

  • How to force a Drupal function to not use the DB cache?

    - by Alaa
    Hi all, I have a module and I am using:

        node_load(array('nid' => arg(1)));

    Now the problem is that this function keeps getting its data from the DB cache. How can I force this function not to use the DB cache? For example, my link is http://mydomain.com/node/344983. Now:

        $node = node_load(array('nid' => arg(1)), NULL, TRUE);
        echo $node->nid;

    outputs 435632, which is a random node id (available on the system), and every time I Ctrl+F5 my browser I get a new nid!! Thanks for your help

  • Tell browsers to cache until last modified date changes?

    - by Chad Johnson
    My web site consists of static HTML files which are usually republished once per day, and sometimes more often. I'm using Apache. In the vhost settings for my site, I'd like to tell browsers to cache HTML files indefinitely, until Apache sees that they are modified. As soon as an HTML file is changed, Apache should immediately begin telling browsers it's changed and send the updated file; browsers should never receive old versions of files. Maybe ExpiresByType text/html with a base of "modification" and no "plus x days"? Is something like this possible?
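
    What is described here is revalidation rather than expiry: the browser keeps a copy but asks Apache before each use, and Apache answers 304 Not Modified (based on the Last-Modified/ETag headers it already sends for static files) until the file actually changes. A sketch of one way to express that with mod_headers; the FilesMatch pattern is an assumption about the site's layout:

        # Make browsers revalidate HTML on every use; Apache replies
        # 304 Not Modified (no body) until the file's mtime/ETag changes.
        <FilesMatch "\.html$">
            Header set Cache-Control "no-cache"
        </FilesMatch>

    Strictly, "no-cache" means "don't use without revalidating", not "don't store", which matches the requirement; true "cache indefinitely" semantics can't be combined with instant updates unless the file names change on each publish.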

  • Hide Drive / Avoid Low Diskspace Warning on ReadyBoost Cache?

    - by Simon Richter
    I've just added an SSD as a ReadyBoost cache drive, and have two minor cosmetic issues with it: the drive still shows up in the drive list, and I get a warning balloon every five minutes telling me that the drive is full and that I should empty the Recycle Bin. The former is ignorable (and I guess I can solve it with a group policy); the latter is somewhat getting on my nerves. Are there official buttons for "hide ReadyBoost drives" and "do not warn on low diskspace for ReadyBoost drives" somewhere that I may have missed? If not, I guess I can use the group policy to hide the drive; I'd still need a way for the system not to warn about the drive being full. Also, am I right that I need to assign a drive letter and format the drive with NTFS to use it for ReadyBoost, or is there a way to just use the raw device?

  • Cannot get nscd to run. DNS cache stale as a result

    - by Phunt
    I'm trying to troubleshoot an issue on a MediaTemple server (running CentOS 5) where the DNS cache has grown stale, I think because nscd has crashed. I've tried restarting nscd:

        # service nscd restart
        Stopping nscd:                                    [FAILED]
        Starting nscd:                                    [  OK  ]

    The failed stop makes sense, since I believe nscd has crashed and so shouldn't already be running. But when I view the status of nscd:

        # service nscd status
        nscd dead but subsys locked

    And ps -A returns no processes related to nscd (I assume because it's dead). I've edited /etc/nscd.conf and uncommented the line that defines the location of the log file. The file was created, but nothing is ever written to it. I tried looking at the init script, but found that it's no help, since the script thinks everything is running fine: the service returns that it started up correctly. How do I 'unlock' the subsys that nscd is complaining about?

  • How do I purge or empty Windows Explorer's network username and sharename cache?

    - by Abel
    While troubleshooting a Samba vs. Windows network issue, I noticed that Windows Explorer remembers login credentials of remote shares, even if you ask it not to. For instance, after accessing a share using \\servername\sharename plus entering username/password and then closing Windows Explorer, adding the same share as a network drive gives the following message, regardless of whether the username is the same or not:

        The network folder specified is currently mapped using a different user name and password. To connect using a different user name and password, first disconnect any existing mappings to this network share.

    Using NET USE does not show the share. After restarting the computer, I have no problems accessing the share using different credentials. But restarting just to test other credentials is annoying, especially while troubleshooting. How can I purge this cache, using Windows Vista? Note: using nbtstat -R[R], ipconfig /renew, killing explorer.exe or disabling / re-enabling the network card didn't help.

  • Questions about caching for databases

    - by SeanD
    When we talk about caching with memcached or Redis, is the caching 1:1 between the user and the cache, or can we cache one item and use it for all users? Some items, like a friend list, will be 1:1, since they are unique per user. But if I want to cache the autocomplete list for city lookups, which can be used by any user, will it store one list in the cache used by all users at the same time, or does it need to store one list per user? Is it possible to cache the entire database (all the lookups, all the users, all their photos, etc.) using memcached or Redis? So, from the above example: a friend list will be cleared from the cache when the user logs off, but something like the city autocomplete will stay in the cache 24-7-365. Am I correct?
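
    The split the asker senses is just a key-naming convention: the cache itself is shared by all application processes, and whether an entry is per-user or global depends only on whether a user id is part of the key. A sketch using the spymemcached Java client; the server address, TTLs, key names and values are illustrative:

        import java.net.InetSocketAddress;
        import java.util.List;
        import net.spy.memcached.MemcachedClient;

        public class CacheKeys {
            public static void main(String[] args) throws Exception {
                MemcachedClient mc = new MemcachedClient(new InetSocketAddress("127.0.0.1", 11211));

                // Per-user entry: the user id is in the key; short TTL, or delete on logout.
                mc.set("friends:42", 3600, List.of("alice", "bob"));

                // Global entry: one copy serves every user; long TTL.
                mc.set("cities:autocomplete", 86400, List.of("Berlin", "Boston", "Bombay"));

                @SuppressWarnings("unchecked")
                List<String> cities = (List<String>) mc.get("cities:autocomplete");
                System.out.println(cities);

                mc.delete("friends:42"); // e.g. when user 42 logs off
                mc.shutdown();
            }
        }

    Caching the entire database is technically possible if it fits in RAM, but the cache is not durable: a restart empties it, so it can only ever be a copy of data whose source of truth remains the database.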

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting

    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips

    Here is a simple tick list you can follow in order to tune your lookups:

    - Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    - Only pick the columns that you need; ignore everything else.
    - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column; that is a big waste if the actual values are significantly less than 20 characters in length.
    - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!

    Thinking outside the box

    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:

    - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
    - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you'll be able to achieve it using the same dataflow, simply looping over it using a ForEach loop.
    - Taking the previous data partitioning idea further: a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
    - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
    - Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts

    If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps!

    @Jamiet

  • EclipseLink problem in a Java SE project

    - by arinte
    I am getting this error when running my EclipseLink project:

        [EL Warning]: 2008.12.05 11:47:08.056--java.lang.NoClassDefFoundError: org/jboss/resource/adapter/jdbc/ValidConnectionChecker was thrown on attempt of PersistenceLoadProcessor to load class com.mysql.jdbc.integration.jboss.MysqlValidConnectionChecker. The class is ignored.
        Exception in thread "main" java.lang.NoClassDefFoundError: org/jboss/resource/adapter/jdbc/ValidConnectionChecker
            at java.lang.ClassLoader.defineClass1(Native Method)
            at java.lang.ClassLoader.defineClass(Unknown Source)
            at java.security.SecureClassLoader.defineClass(Unknown Source)
            at java.net.URLClassLoader.defineClass(Unknown Source)
            at java.net.URLClassLoader.access$000(Unknown Source)
            at java.net.URLClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at org.eclipse.persistence.internal.jpa.deployment.PersistenceUnitProcessor.loadClass(PersistenceUnitProcessor.java:261)
            at org.eclipse.persistence.internal.jpa.metadata.MetadataProcessor.initPersistenceUnitClasses(MetadataProcessor.java:247)
            at org.eclipse.persistence.internal.jpa.metadata.MetadataProcessor.processEntityMappings(MetadataProcessor.java:422)
            at org.eclipse.persistence.internal.jpa.deployment.PersistenceUnitProcessor.processORMetadata(PersistenceUnitProcessor.java:299)
            at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:830)
            at org.eclipse.persistence.internal.jpa.deployment.JPAInitializer.callPredeploy(JPAInitializer.java:101)
            at org.eclipse.persistence.internal.jpa.deployment.JPAInitializer.initPersistenceUnits(JPAInitializer.java:149)
            at org.eclipse.persistence.internal.jpa.deployment.JPAInitializer.initialize(JPAInitializer.java:135)
            at org.eclipse.persistence.jpa.PersistenceProvider.createEntityManagerFactory(PersistenceProvider.java:104)
            at org.eclipse.persistence.jpa.PersistenceProvider.createEntityManagerFactory(PersistenceProvider.java:64)
            at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83)
            at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:60)
            at Main.main(Main.java:32)
        Caused by: java.lang.ClassNotFoundException: org.jboss.resource.adapter.jdbc.ValidConnectionChecker
            at java.net.URLClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClassInternal(Unknown Source)
            ... 24 more

    Any ideas? Why would EclipseLink need something from JBoss? What JBoss jar do I need? I would use OpenJPA, but for some reason, after quite a few persists from my app, it starts throwing a bunch of StackOverflowErrors.
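
    A hedged observation, not stated in the question: the warning suggests the persistence provider is scanning every class on the persistence unit's classpath, and the MySQL Connector/J jar ships a com.mysql.jdbc.integration.jboss helper that references JBoss classes, so it is a scanned driver class, not EclipseLink itself, that wants JBoss. One common mitigation is to list entities explicitly and turn off scanning in persistence.xml; the unit and entity names below are placeholders:

        <persistence-unit name="myPU" transaction-type="RESOURCE_LOCAL">
            <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
            <!-- List every entity explicitly... -->
            <class>com.example.MyEntity</class>
            <!-- ...and tell the provider not to scan the rest of the classpath. -->
            <exclude-unlisted-classes>true</exclude-unlisted-classes>
            <!-- JDBC connection properties omitted -->
        </persistence-unit>

    Keeping the MySQL driver jar out of the persistence unit's root (referencing it only on the runtime classpath) is another way to keep it out of the scan.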

  • High-load MySQL on a Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory load is always at maximum, with only 500 MB free. Most of the memory is consumed by MySQL; Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses all the memory, it stops, and nothing works until MySQL is restarted. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400,000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables, which is why I use InnoDB. Here is my MySQL config:

        [mysqld]
        # * Basic Settings
        default-time-zone = "+04:00"
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        default-time-zone='Europe/Moscow'
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        # * Fine Tuning
        #low_priority_updates = 1
        concurrent_insert = ALWAYS
        wait_timeout = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit = 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections = 200
        thread_concurrency = 8
        # * Query Cache Configuration
        #more ignored
        query_cache_limit = 50M
        query_cache_size = 210M
        #on query cache
        query_cache_type = 1
        # * Logging and Replication
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        # Error logging goes to syslog. This is a Debian improvement :)
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index = /var/lib/mysql/mysql-bin.index
        log-error = /var/lib/mysql/mysql-bin.err
        relay-log = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index = /var/lib/mysql/relay-bin.index
        binlog_do_db = 24avia
        expire_logs_days = 10
        max_binlog_size = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache = 2000
        group_concat_max_len = 16M
        #binlog_do_db = gate
        #binlog_ignore_db = include_database_name
        # * BerkeleyDB
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        # * InnoDB
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        # * Security Features
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M
        key_buffer_size = 512M

        # * NDB Cluster
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory usage:

        /etc/mysql # free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory, but MySQL still stops every day. As you can see, 24 GB of the memory is cache. (Thanks to Michael Hampton for the correction.) The load average on the server is 3.5. Maybe it's the HDD, or another problem? Maybe my config is not optimal for 30 GB of InnoDB? I have already tried mysqltuner and tuning-primer.sh, but they mark everything green. Mysqltuner output:

        >>  MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.24-9-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 112G (Tables: 1528)
        [--] Data in InnoDB tables: 39G (Tables: 340)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 344

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B)
        [--] Reads / Writes: 84% / 16%
        [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads)
        [OK] Maximum possible memory usage: 26.3G (83% of installed RAM)
        [OK] Slow queries: 1% (259K/14M)
        [!!] Highest connection usage: 100% (201/200)
        [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G
        [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads)
        [OK] Query cache efficiency: 74.3% (8M cached / 11M selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts)
        [!!] Joins performed without indexes: 106025
        [!!] Temporary tables created on disk: 49% (351K on disk / 715K total)
        [OK] Thread cache hit rate: 99% (249 created / 259K connections)
        [!!] Table cache hit rate: 15% (2K open / 13K opened)
        [OK] Open file limit used: 15% (3K/20K)
        [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks)
        [!!] InnoDB data size / buffer pool: 39.4G/5.9G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce or eliminate persistent connections to reduce connection usage
            Adjust your join queries to always utilize indexes
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            max_connections (> 200)
            wait_timeout (< 600)
            interactive_timeout (< 600)
            join_buffer_size (> 5.0M, or always use indexes with joins)
            table_cache (> 10000)
            innodb_buffer_pool_size (>= 39G)

    MySQL primer output:

        -- MYSQL PERFORMANCE TUNING PRIMER --
        - By: Matthew Montgomery -

        MySQL Version 5.5.24-9-log x86_64
        Uptime = 0 days 8 hrs 20 min 50 sec
        Avg. qps = 478
        Total Questions = 14369568
        Threads Connected = 16

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1.000000 sec.
        You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is enabled
        Binlog sync is not enabled, you could loose binlog records during a server crash

        WORKER THREADS
        Current thread_cache_size = 50
        Current threads_cached = 45
        Current threads_per_sec = 0
        Historic threads_per_sec = 0
        Your thread_cache_size is fine

        MAX CONNECTIONS
        Current max_connections = 200
        Current threads_connected = 11
        Historic max_used_connections = 201
        The number of used connections is 100% of the configured maximum.
        You should raise max_connections

        INNODB STATUS
        Current InnoDB index space = 214 M
        Current InnoDB data space = 39.40 G
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 5.85 G
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 23.46 G
        Configured Max Per-thread Buffers : 15.84 G
        Configured Max Global Buffers : 7.54 G
        Configured Max Memory Limit : 23.39 G
        Physical Memory : 31.40 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 5.61 G
        Current key_buffer_size = 1.47 G
        Key cache miss rate is 1 : 5578
        Key buffer free ratio = 77 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is enabled
        Current query_cache_size = 200 M
        Current query_cache_used = 101 M
        Current query_cache_limit = 50 M
        Current Query cache Memory fill ratio = 50.59 %
        Current query_cache_min_res_unit = 4 K
        MySQL won't cache query results that are larger than query_cache_limit in size

        SORT OPERATIONS
        Current sort_buffer_size = 64 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 5.00 M
        You have had 106606 queries where a join could not use an index properly
        You have had 8 joins without keys that check for key usage after each row
        join_buffer_size >= 4 M
        This is not advised
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.

        OPEN FILES LIMIT
        Current open_files_limit = 20210 files
        The open_files_limit should typically be set to at least 2x-3x
        that of table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_open_cache = 10000 tables
        Current table_definition_cache = 2000 tables
        You have a total of 1910 tables
        You have 2151 open tables.
        The table_cache value seems to be fine

        TEMP TABLES
        Current max_heap_table_size = 2.92 G
        Current tmp_table_size = 2.92 G
        Of 366426 temp tables, 49% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables
        Note! BLOB and TEXT columns are not allow in memory tables.
        If you are using these columns raising these values might not impact your
        ratio of on disk temp tables.

        TABLE SCANS
        Current read_buffer_size = 3 M
        Current table scan ratio = 2846 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 185
        You may benefit from selective use of InnoDB.
        If you have long running SELECT's against MyISAM tables and perform
        frequent updates consider setting 'low_priority_updates=1'

  • Performance of Java on different hardware?

    - by tangens
    In another SO question I asked why my Java programs run faster on AMD than on Intel machines, but it seems that I'm the only one who has observed this. Now I would like to invite you to share the numbers of your local Java performance with the SO community. I observed a big performance difference when watching the startup of JBoss on different hardware, so I set this program as the base for this comparison. To participate, please download JBoss 5.1.0.GA and run:

        jboss-5.1.0.GA/bin/run.sh   (or run.bat)

    This starts a standard configuration of JBoss without any extra applications. Then look for the last line of the start procedure, which looks like this:

        [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221634)] Started in 25s:264ms

    Please repeat this procedure until the printed time is somewhat stable, then post that line together with some comments on your hardware (I used CPU-Z to get the info) and operating system, like this:

        java version: 1.6.0_13
        OS: Windows XP
        Board: ASUS M4A78T-E
        Processor: AMD Phenom II X3 720, 2.8 GHz
        RAM: 2*2 GB DDR3 (labeled 1333 MHz)
        GPU: NVIDIA GeForce 9400 GT
        disc: Seagate 1.5 TB (ST31500341AS)

    Use your votes to bring the fastest configuration to the top. I'm very curious about the results. EDIT: Up to now only a few members have shared their results. I'd really be interested in the results obtained with some other architectures. If someone works with a Mac (desktop) or runs an Intel i7 at less than 3 GHz, please start JBoss once and share your results. It will only take a few minutes.

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds):

        ;; ANSWER SECTION:
        orion.2x.to.    116    IN    A    80.237.201.41
        orion.2x.to.    116    IN    A    87.230.54.12
        orion.2x.to.    116    IN    A    87.230.100.10
        orion.2x.to.    116    IN    A    87.230.51.65

    I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them; some just propagate the first entry; others try to determine which is best (regionally near) by looking at the IP address. However, if the user base is big enough (spread over multiple ISPs, etc.) it balances pretty well: the discrepancy between the highest- and lowest-loaded server hardly ever exceeds 15%. However, now I have the problem that I am introducing more servers into the system, and they do not all have the same capacities. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and also 10 Gbps servers too. So what I want is to introduce a 10 Gbps server with a weight of 100, a 1 Gbps server with a weight of 10, and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely; the bandwidth almost doubled), but adding a 10 Gbit server 100 times to DNS is a bit ridiculous. So I thought about using the TTL. If I give server A a 240-second TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers round up to 120 if a lower TTL is specified, or so I have heard), I think something like this should occur in an ideal scenario:

        first 120 seconds
        50% of requests get server A -> keep it for 240 seconds
        50% of requests get server B -> keep it for 120 seconds

        second 120 seconds
        50% of requests still have server A cached -> keep it for another 120 seconds
        25% of requests get server A -> keep it for 240 seconds
        25% of requests get server B -> keep it for 120 seconds

        third 120 seconds
        25% will get server A (from the 50% of server A that now expired) -> cache 240 sec
        25% will get server B (from the 50% of server A that now expired) -> cache 120 sec
        25% will have server A cached for another 120 seconds
        12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec
        12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec

        fourth 120 seconds
        25% will have server A cached -> cache for another 120 secs
        12.5% will get server A (from the 25% of B that now expired) -> cache 240 secs
        12.5% will get server B (from the 25% of B that now expired) -> cache 120 secs
        12.5% will get server A (from the 25% of A that now expired) -> cache 240 secs
        12.5% will get server B (from the 25% of A that now expired) -> cache 120 secs
        6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 secs
        6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 secs
        12.5% will have server A cached -> cache another 120 secs

    ... I think I lost something at this point, but I think you get the idea. As you can see, this gets pretty complicated to predict, and it will certainly not work out like this in practice, but it should definitely have an effect on the distribution! I know that weighted round robin exists and is controlled on the authoritative server: it cycles through DNS records when responding and returns records with a set probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise; if it doesn't weight perfectly, that's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution, and it doesn't require a DNS server that controls this dynamically, which saves resources; that is, in my opinion, the whole point of DNS load balancing versus hardware load balancers. My question now is: are there any best practices / methods / rules of thumb to weight round robin distribution using the TTL attribute of DNS records?

    Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with Ethernet can handle, so I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous, and it also only increases the width of the bottleneck without eliminating it. The only thing I can think of are anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
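
    The cascade above is easier to check empirically than analytically. Below is a small simulation sketch, assuming idealized resolvers that pick uniformly among the returned records and cache an answer for exactly its TTL; real resolvers (which round TTLs, pin the first record, or re-sort responses) will shift the numbers:

        import java.util.Random;

        public class TtlWeightSim {
            public static void main(String[] args) {
                final int ttlA = 240, ttlB = 120;   // cache lifetimes in seconds
                final int horizon = 86_400;         // one simulated day per resolver
                final int resolvers = 100_000;
                Random rnd = new Random(42);

                long secondsOnA = 0, secondsOnB = 0; // request volume ~ time spent cached
                for (int r = 0; r < resolvers; r++) {
                    int t = 0;
                    while (t < horizon) {
                        // On expiry the resolver re-queries and picks one record uniformly.
                        if (rnd.nextBoolean()) { secondsOnA += ttlA; t += ttlA; }
                        else                   { secondsOnB += ttlB; t += ttlB; }
                    }
                }
                double total = secondsOnA + secondsOnB;
                System.out.printf("share A = %.1f%%, share B = %.1f%%%n",
                        100 * secondsOnA / total, 100 * secondsOnB / total);
            }
        }

    Under these assumptions a 240s/120s split converges to roughly a 2:1 traffic ratio, i.e. the weight scales about linearly with the TTL, so a 100:1 weight would need an impractically long TTL on the big server.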

  • Is JavaEE really portable?

    - by Bozho
    I'm just implementing a JavaEE assignment I was given in an interview. I have some prior experience with EJB, but nothing related to JMS and MDBs. So here's what I find through the numerous examples:

    - application servers bind their topics and queues to different JNDI names; for example topic/queue, jms
    - the activationConfig property is required on JBoss, while in the Sun tutorial it is not
    - after starting my application, JBoss warns me that my topic isn't bound (it isn't, actually; I haven't bound it, but I expect it to be bound automatically, and in fact, in an example for JBoss 4.0, automatic binding does seem to happen). A suggested solution is to map it in some JBoss files, or even to use JBoss-specific annotations.

    This might be just JBoss, but since it is certified to implement the spec, it appears the spec doesn't specify these things. And there the alleged portability vanishes. So I wonder: how come it is claimed that JavaEE is portable and you can take an EAR and deploy it on another application server and it magically runs, when such extremely basic things don't appear to be portable at all? P.S. Sorry for the rant, but I assume I might be doing/getting something wrong, so state your opinions.
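
    To illustrate the portability seam being hit here, a hedged sketch of an EJB3 MDB: the activationConfig mechanism itself is standard, but the value of the destination property, and whether the destination gets created and bound automatically, is left to the vendor, which is exactly where JBoss and the Sun tutorial diverge. Names are illustrative:

        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.Message;
        import javax.jms.MessageListener;

        @MessageDriven(activationConfig = {
            // Standard EJB3: the property mechanism itself is portable...
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Topic"),
            // ...but the destination name ("topic/MyTopic" here, a JBoss-style JNDI
            // name) and whether it is bound automatically are vendor-specific:
            // JBoss of this era expects it pre-bound (e.g. via a *-service.xml),
            // while other servers may create it for you.
            @ActivationConfigProperty(propertyName = "destination",
                                      propertyValue = "topic/MyTopic")
        })
        public class MyTopicListener implements MessageListener {
            @Override
            public void onMessage(Message message) {
                System.out.println("got " + message);
            }
        }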
