Search Results

Search found 16731 results on 670 pages for 'memory limit'.


  • ActionScript - One Button Limit (Exclusive Touch) For Mobile Devices?

    - by TheDarkIn1978
    Two years ago, when I was developing an application for the iPhone, I used the following built-in system method on all of my buttons:

        [button setExclusiveTouch:YES];

    Essentially, if you had many buttons on screen, this method ensured that the application wouldn't be permitted to do crazy things when several button events fire at the same time.

    Problematic: ButtonA and ButtonB are available. Each button has a mouse-up event which fires a specific reorganization/layout of the UI. If both buttons' events are fired at the same time, their events will likely conflict, causing a strange new layout, or perhaps a runtime error.

    Solution: application buttons cancel any currently pending mouse-up events when said button enters mouse-down.

        private function mouseDownEventHandler(evt:MouseEvent):void {
            //if other buttons are currently in a mouse down state ready to fire
            //a mouse up event, cancel them all here.
        }

    Of course it's simple to handle this manually if there are only a few buttons on stage, but managing buttons becomes more and more complicated / bug-prone if there are several / many buttons available. Is there a convenience method available in AIR specifically for this functionality?

    Read the article

  • Can I upload an object in memory to FTP using Python?

    - by fsckin
    Here's what I'm doing now:

        mysock = urllib.urlopen('http://localhost/image.jpg')
        fileToSave = mysock.read()
        oFile = open(r"C:\image.jpg", 'wb')
        oFile.write(fileToSave)
        oFile.close()
        f = file('image.jpg', 'rb')
        ftp.storbinary('STOR ' + os.path.basename('image.jpg'), f)
        os.remove('image.jpg')

    Writing files to disk and then immediately deleting them seems like extra work on the system that should be avoided. Can I upload an object in memory to FTP using Python?
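    A minimal sketch of the in-memory route (host and credentials below are placeholders): ftplib's storbinary only needs a file-like object with read(), so the downloaded bytes can be wrapped in an io.BytesIO buffer and never touch disk. io.BytesIO is available in Python 2.6+ as well as Python 3:

        import io
        import urllib
        from ftplib import FTP

        # Download straight into memory, as in the original code.
        data = urllib.urlopen('http://localhost/image.jpg').read()

        ftp = FTP('ftp.example.com')        # placeholder host
        ftp.login('user', 'password')       # placeholder credentials
        # storbinary reads from any file-like object, so wrap the bytes
        # in a BytesIO buffer instead of a temporary file on disk.
        ftp.storbinary('STOR image.jpg', io.BytesIO(data))
        ftp.quit()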

    Read the article

  • Algorithm to rotate an image 90 degrees in place? (No extra memory)

    - by user9876
    In an embedded C app, I have a large image that I'd like to rotate by 90 degrees. Currently I use the well-known simple algorithm to do this. However, this algorithm requires me to make another copy of the image. I'd like to avoid allocating memory for a copy; I'd rather rotate it in place. Since the image isn't square, this is tricky. Does anyone know of a suitable algorithm?
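    For what it's worth, a 90° rotation is a fixed permutation of the flat pixel buffer, so one in-place approach is to follow the permutation's cycles, carrying a single value around each cycle. Below is a sketch of that idea (in Python for brevity; a C version would be the same loop over the pixel array). One honest caveat: the visited bitmap here still costs extra space per pixel, so a strictly O(1) variant would have to detect cycle leaders arithmetically instead:

        def rotate90_inplace(a, rows, cols):
            """Rotate a rows x cols image, stored row-major in the flat list a,
            90 degrees clockwise by following permutation cycles. The result
            is a cols x rows image in the same buffer."""
            n = rows * cols

            def dest(i):
                # element (r, c) moves to (c, rows - 1 - r) in the rotated image
                r, c = divmod(i, cols)
                return c * rows + (rows - 1 - r)

            visited = bytearray(n)
            for start in range(n):
                if visited[start]:
                    continue
                i, carried = start, a[start]
                while True:
                    j = dest(i)
                    a[j], carried = carried, a[j]  # drop carried value, pick up the next
                    visited[j] = 1
                    i = j
                    if i == start:
                        break

        # Example: the 2x3 image [1,2,3,4,5,6] becomes the 3x2 image [4,1,5,2,6,3].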

    Read the article

  • How to limit EditText lines to 1 by coding (ignoring enter)?

    - by Vahe Musinyan
    I am trying to ignore the enter key, but I do not want to use the onKeyDown() function. There are ways to do this in XML:

        1. android:maxLines = "1"
        2. android:lines = "1"
        3. android:singleLine = "true"

    I actually want to do the last one in code. Does anyone know how to do that?

        for (int i = 0; i < numClass; i++) {
            temp_ll = new LinearLayout(this);
            temp_ll.setOrientation(LinearLayout.HORIZONTAL);
            temp1 = new EditText(this);
            InputFilter[] FilterArray = new InputFilter[1];
            FilterArray[0] = new InputFilter.LengthFilter(12);
            temp1.setFilters(FilterArray); // set edit text length to max 12
            temp1.setHint(" class name ");
            temp1.setSingleLine(true);
            temp_ll.addView(temp1);
            frame.addView(temp_ll);
        }
        ll.addView(frame);

    Read the article

  • collectd does not work

    - by bery
    I have installed collectd-5.0.0 on a Fedora 12 server and would like to run its service to receive data from clients. I have enabled the network plugin and the rrdtool plugin as shown:

    collectd.conf on the server:

        BaseDir "/opt/collectd/var/lib/collectd"
        LoadPlugin "logfile"
        LoadPlugin network
        LoadPlugin rrdtool
        <Plugin network>
          Listen "192.168.8.37" "25826"
        </Plugin>

    collectd.conf on the client:

        LoadPlugin logfile
        LoadPlugin cpu
        LoadPlugin network
        LoadPlugin memory
        <Plugin network>
          Server "192.168.8.37" "25826"
        </Plugin>

    collectd.log on the server:

        [2011-08-03 02:36:04] Exiting normally.
        [2011-08-03 02:36:04] rrdtool plugin: Shutting down the queue thread.
        [2011-08-03 02:36:04] network plugin: Stopping receive thread.
        [2011-08-03 02:36:04] network plugin: Stopping dispatch thread.
        [2011-08-03 02:37:11] Initialization complete, entering read-loop.

    collectd.log on the client:

        [2011-08-02 17:31:44] Initialization complete, entering read-loop.

    Result of executing netstat on the server:

        netstat -ulpn | grep 25826
        udp   0   0   192.168.8.37:25826   0.0.0.0:*   4744/collectd

    Problem: there is nothing in "/opt/collectd/var/lib/collectd" on the server. Yes, I moved to the port number "25826" as you propose (but I think this is the default port for collectd); there are no RRD files received on the server.

    collectd.log on the client:

        [2011-08-03 10:01:36] plugin_read_thread: Handling `memory'.
        [2011-08-03 10:01:36] plugin_read_thread: Handling `cpu'.
        [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = memory; plugin_instance = ; type = memory; type_instance = used;
        [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = cpu; plugin_instance = 0; type = cpu; type_instance = user;
        [2011-08-03 10:01:36] uc_update: uml/memory/memory-used: ds[0] = 280412160.000000
        [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network.
        [2011-08-03 10:01:36] uc_update: uml/cpu-0/cpu-user: ds[0] = 0.100008
        [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network.
        [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = memory; plugin_instance = ; type = memory; type_instance = buffered;
        [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = cpu; plugin_instance = 0; type = cpu; type_instance = nice;
        [2011-08-03 10:01:36] uc_update: uml/memory/memory-buffered: ds[0] = 344182784.000000
        [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network.
        [2011-08-03 10:01:36] uc_update: uml/cpu-0/cpu-nice: ds[0] = 0.000000
        [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network.
        [2011-08-03 10:01:36] network plugin: flush_buffer: send_buffer_fill = 1340
        [2011-08-03 10:01:36] network plugin: network_send_buffer: buffer_len = 1340
        ...
        [2011-08-03 10:01:36] plugin_read_thread: Next read of the cpu plugin at 1312380106.429064774.

    collectd.log on the server:

        [2011-08-03 20:18:08] type = network
        [2011-08-03 20:18:08] type = rrdtool
        [2011-08-03 20:18:08] network plugin: sockent_open: node = 192.168.8.37; service = 25826;
        [2011-08-03 20:18:08] fd = 3; calling `bind'
        [2011-08-03 20:18:08] Done parsing `/opt/collectd//share/collectd/types.db'
        [2011-08-03 20:18:08] interval_g = 10;
        [2011-08-03 20:18:08] timeout_g = 2;
        [2011-08-03 20:18:08] hostname_g = localhost.localdomain;
        [2011-08-03 20:18:08] Initialization complete, entering read-loop.

    It looks like the data is being sent but is not being received. Where is the mistake?

    Read the article

  • High Load mysql on Debian server

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory load is always at maximum, with only 500 MB free. Most of the memory goes to MySQL; Apache is configured for only 70 clients, and the other services have small memory usage. When MySQL uses all the memory it stops, nothing works, and MySQL needs a restart. MySQL is configured to use a maximum of 24 GB of memory. I have heavyweight InnoDB databases (400,000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables, which is why I use InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        default-time-zone = 'Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert = ALWAYS
        wait_timeout = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit = 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections = 200
        thread_concurrency = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit = 50M
        query_cache_size = 210M
        #on query cache
        query_cache_type = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index = /var/lib/mysql/mysql-bin.index
        log-error = /var/lib/mysql/mysql-bin.err
        relay-log = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index = /var/lib/mysql/relay-bin.index
        binlog_do_db = 24avia
        expire_logs_days = 10
        max_binlog_size = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache = 2000
        group_concat_max_len = 16M
        #binlog_do_db = gate
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M
        key_buffer_size = 512M
        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory usage:

        /etc/mysql # free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory, but MySQL stops every day. As you can see, 24 GB of memory is cache. Thanks to Michael Hampton for the correction. Load average on the server is 3.5. Maybe an HDD or another problem? Maybe my config is not optimal for 30 GB of InnoDB?
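    As a rough sanity check on the configuration above (just arithmetic on the posted values, ignoring several smaller per-thread buffers): MySQL's worst-case footprint is roughly the global buffers plus the per-connection buffers multiplied by max_connections, which here already approaches the 24 GB ceiling before in-memory temp tables are counted:

        # Back-of-the-envelope worst case for the my.cnf above (values in MB).
        global_buffers = 2024 + 210 + 5000   # key_buffer + query_cache + innodb_buffer_pool
        per_connection = 64 + 5 + 4 + 8      # sort + join + read buffer + thread_stack
        max_connections = 200

        worst_case_mb = global_buffers + per_connection * max_connections
        print(worst_case_mb)                 # ~23400 MB, and each in-memory temp
        # table may add up to tmp_table_size (3000M) on top of that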

    Read the article

  • Speed up bitstring/bit operations in Python?

    - by Xavier Ho
    I wrote a prime number generator using the Sieve of Eratosthenes and Python 3.1. The code runs correctly and gracefully at 0.32 seconds on ideone.com to generate prime numbers up to 1,000,000.

        # from bitstring import BitString

        def prime_numbers(limit=1000000):
            '''Prime number generator. Yields the series
            2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ...
            using Sieve of Eratosthenes.
            '''
            yield 2
            sub_limit = int(limit**0.5)
            flags = [False, False] + [True] * (limit - 2)
            # flags = BitString(limit)
            # Step through all the odd numbers
            for i in range(3, limit, 2):
                if flags[i] is False:
                # if flags[i] is True:
                    continue
                yield i
                # Exclude further multiples of the current prime number
                if i <= sub_limit:
                    for j in range(i*3, limit, i<<1):
                        flags[j] = False
                        # flags[j] = True

    The problem is, I run out of memory when I try to generate numbers up to 1,000,000,000:

        flags = [False, False] + [True] * (limit - 2)
        MemoryError

    As you can imagine, allocating 1 billion boolean values (4 or 8 bytes each in Python — see comment) is really not feasible, so I looked into bitstring. I figured using 1 bit for each flag would be much more memory-efficient. However, the program's performance dropped drastically: 24 seconds runtime for prime numbers up to 1,000,000. This is probably due to the internal implementation of bitstring. You can comment/uncomment the three lines to see what I changed to use BitString, as in the code snippet above. My question is: is there a way to speed up my program, with or without bitstring?
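    One middle ground between a list of Python bools and bitstring is a plain bytearray: one byte per flag (roughly 1 GB for 10**9 flags instead of 4-8 GB), with composites marked by a C-level slice assignment instead of a Python-level inner loop. A sketch of the same generator rewritten that way (untimed here, so treat the speed claim as an expectation rather than a measurement):

        def prime_numbers(limit=1000000):
            '''Same sieve, but with one byte per flag and slice-assignment
            marking. bytearray(limit) starts zeroed: 0 = candidate prime.'''
            yield 2
            sub_limit = int(limit**0.5)
            composite = bytearray(limit)
            for i in range(3, limit, 2):
                if composite[i]:
                    continue
                yield i
                if i <= sub_limit:
                    # mark every odd multiple of i in one C-level pass
                    composite[i*3::i<<1] = b'\x01' * len(range(i*3, limit, i<<1))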

    Read the article

  • Database not completely updated in rails migration

    - by Aatish Sai
    I am new to Ruby on Rails. I have a migration called CreateUsers:

        class CreateUsers < ActiveRecord::Migration
          def change
            create_table :users do |t|
              t.column :username, :string, :limit => 25, :default => "", :null => false
              t.column :hashed_password, :string, :limit => 40, :default => "", :null => false
              t.column :first_name, :string, :limit => 25, :default => "", :null => false
              t.column :last_name, :string, :limit => 40, :default => "", :null => false
              t.column :email, :string, :limit => 50, :default => "", :null => false
              t.column :display_name, :string, :limit => 25, :default => "", :null => false
              t.column :user_level, :integer, :limit => 3, :default => 0, :null => false
            end
            User.create(:username => 'test', :hashed_password => 'test', :first_name => 'test', :last_name => 'test', :email => '[email protected]', :display_name => 'test', :user_level => 9)
          end
        end

    When I run rake db:migrate the table is created with the columns as mentioned above, but the test data is not there:

        mysql> select * from users;
        Empty set (0.00 sec)

    EDIT: I just dropped the whole database and restarted the migration, and now it is showing the following error:

        rake aborted!
        An error has occurred, all later migrations canceled:
        Can't mass-assign protected attributes: username, hashed_password, first_name, last_name, email, display_name, user_level

    What am I doing wrong? Please help. Thank you.

    Read the article

  • Web Hosting: Any web host that supports files more than 50,000 in number?

    - by Devner
    Hi all, for my PHP & MySQL based application, I am trying to buy website hosting from a host that does not have a limit on the number of files I carry in my hosting account. Almost all the websites have a common limit of 50,000 files (some websites call it 50,000 nodes); the rest (to the extent of my search) are not even close. I have gone through the various websites, Googled a lot of information, and have spoken with the customer service of the hosting companies, and they said that they have a limit of 50,000 files, and that's why they call it the LIMIT.

    Now I have my application, which is a kind of social networking website where people can upload various files of varying file size. So say 50,000 users were to join the website and upload one file each: the limit of 50,000 will be reached very easily, and my 50,001st customer will start facing file upload problems (& so will my account). So I would like to know if there are any website hosting services that do NOT levy such restrictions. In summary, I need the following options:

        1. No maximum file limit (more than 50,000 files in the account).
        2. No maximum file upload limit in server settings (10MB, 12MB, 15MB, 20MB, etc.).
        3. Ability to upload files of various types (zip, flv, jpg, png, etc.).
        4. Ability to stream audio and video (live audio & video not necessary).
        5. Access to .htaccess.
        6. Access to php.ini, my.cnf or my.ini (this would be a plus).
        7. Supports SSL.
        8. Provides dedicated hosting (& IP) as well.
        9. Monthly payments without contracts are a plus.

    If you know of any such website hosting services, please post a reply (a link to the same will be appreciated). Thank you.

    Read the article

  • Boost Netbook Speed with an SD Card & ReadyBoost

    - by Matthew Guay
    Looking for a way to increase the performance of your netbook? Here's how you can use a standard SD memory card or a USB flash drive to boost performance with ReadyBoost.

    Most netbooks ship with 1 GB of RAM, and many older netbooks shipped with even less. Even if you want to add more RAM, often they can only be upgraded to a max of 2 GB. With ReadyBoost in Windows 7, it's easy to boost your system's performance with flash memory. If your netbook has an SD card slot, you can insert a memory card into it and just leave it there to always boost your netbook's memory; otherwise, you can use a standard USB flash drive the same way. Also, you can use ReadyBoost on any desktop or laptop; ones with limited memory will see the most performance increase from using it.

    Please Note: ReadyBoost requires at least 256 MB of free space on your flash drive, and also requires minimum read/write speeds. Most modern memory cards or flash drives meet these requirements, but be aware that an old card may not work with it.

    Using ReadyBoost

    Insert an SD card into your card reader, or connect a USB flash drive to a USB port on your computer. Windows will automatically see if your flash memory is ReadyBoost capable, and if so, you can directly choose to speed up your computer with ReadyBoost. The ReadyBoost settings dialog will open when you select this. Choose "Use this device" and choose how much space you want ReadyBoost to use.

    Click Ok, and Windows will set up ReadyBoost and start using it to speed up your computer. It will automatically use ReadyBoost whenever the card is connected to the computer. When you view your SD card or flash drive in Explorer, you will notice a ReadyBoost file the size you chose before. This will be deleted when you eject your card or flash drive.

    If you need to remove your drive to use elsewhere, simply eject as normal. Windows will inform you that the drive is currently being used. Make sure you have closed any programs or files you had open from the drive, and then press Continue to stop ReadyBoost and eject your drive. If you remove the drive without ejecting it, the ReadyBoost file may still remain on the drive. You can delete this to save space on the drive, and the cache will be recreated when you use ReadyBoost next time.

    Conclusion

    Although ReadyBoost may not make your netbook feel like a Core i7 laptop with 6 GB of RAM, it will still help performance and make multitasking even easier. Also, if you have, say, a memory stick and a flash drive, you can use both of them with ReadyBoost for the maximum benefit. We have even noticed better battery life when multitasking with ReadyBoost, as it lets you use your hard drive less. SD cards and thumb drives are relatively cheap today, and many of us have several already, so this is a great way to improve netbook performance cheaply.

    Read the article

  • Why one loop is performing better than other memory wise as well as performance wise?

    - by Mohit
    I have the following two loops in C#, and I am running these loops for a collection with 10,000 records being downloaded with paging using "yield return".

    First:

        foreach (var k in collection)
        {
            repo.Save(k);
        }

    Second:

        var collectionEnum = collection.GetEnumerator();
        while (collectionEnum.MoveNext())
        {
            var k = collectionEnum.Current;
            repo.Save(k);
            k = null;
        }

    It seems that the second loop consumes less memory and is faster than the first loop. The memory difference, I understand, may be because of k being set to null (even though I am not sure). But how come it is faster than foreach? Following is the actual code:

        [Test]
        public void BechmarkForEach_Test()
        {
            bool isFirstTimeSync = true;
            Func<Contact, bool> afterProcessing = contactItem => { return true; };
            var contactService = CreateSerivce("/administrator/components/com_civicrm");
            var contactRepo = new ContactRepository(new Mock<ILogger>().Object);
            contactRepo.Drop();
            contactRepo = new ContactRepository(new Mock<ILogger>().Object);
            Profile("For Each Profiling", 1, () => {
                var localenumertaor = contactService.Download();
                foreach (var item in localenumertaor)
                {
                    if (isFirstTimeSync)
                        item.StateFlag = 1;
                    item.ClientTimeStamp = DateTime.UtcNow;
                    if (item.StateFlag == 1)
                        contactRepo.Insert(item);
                    else
                        contactRepo.Update(item);
                    afterProcessing(item);
                }
                contactRepo.DeleteAll();
            });
        }

        [Test]
        public void BechmarkWhile_Test()
        {
            bool isFirstTimeSync = true;
            Func<Contact, bool> afterProcessing = contactItem => { return true; };
            var contactService = CreateSerivce("/administrator/components/com_civicrm");
            var contactRepo = new ContactRepository(new Mock<ILogger>().Object);
            contactRepo.Drop();
            contactRepo = new ContactRepository(new Mock<ILogger>().Object);
            var itemsCollection = contactService.Download().GetEnumerator();
            Profile("While Profiling", 1, () => {
                while (itemsCollection.MoveNext())
                {
                    var item = itemsCollection.Current;
                    //if first time sync then ignore and overwrite the stateflag
                    if (isFirstTimeSync)
                        item.StateFlag = 1;
                    item.ClientTimeStamp = DateTime.UtcNow;
                    if (item.StateFlag == 1)
                        contactRepo.Insert(item);
                    else
                        contactRepo.Update(item);
                    afterProcessing(item);
                    item = null;
                }
                contactRepo.DeleteAll();
            });
        }

        static void Profile(string description, int iterations, Action func)
        {
            // clean up
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            // warm up
            func();

            var watch = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                func();
            }
            watch.Stop();
            Console.Write(description);
            Console.WriteLine(" Time Elapsed {0} ms", watch.ElapsedMilliseconds);
        }

    I am using micro benchmarking, from a Stack Overflow question itself: benchmarking-small-code. The times taken are:

        For Each Profiling Time Elapsed 5249 ms
        While Profiling Time Elapsed 116 ms
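    One detail worth noticing before trusting these numbers: in BechmarkWhile_Test the enumerator is created once, outside the profiled action, and Profile invokes the action twice (a warm-up call plus the timed call). The timed pass may therefore be walking an already-exhausted enumerator and doing almost no work. The same pitfall, illustrated in Python:

        items = iter(range(10000))
        print(sum(1 for _ in items))   # 10000: the first pass consumes the iterator
        print(sum(1 for _ in items))   # 0: the second pass has nothing left to do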

    Read the article

  • WebLogic Server Performance and Tuning: Part I - Tuning JVM

    - by Gokhan Gungor
    Each WebLogic Server instance runs in its own dedicated Java Virtual Machine (JVM), which is its runtime environment. Every Admin Server in any domain executes within a JVM, and the same applies to Managed Servers. WebLogic Server can be used for a wide variety of applications and services, all of which use the same runtime environment and resources. Oracle WebLogic ships with two different JVMs, HotSpot and JRockit, but you can choose which JVM you want to use.

    The JVM is designed to optimize itself; however, it also provides some startup options for making small changes. There are default values for its memory and garbage collection. In the real world you will not want to stick with the default values provided by the JVM; rather, you will want to customize these values based on your applications, which can produce large gains in performance by making small changes to the JVM parameters. We can tell the garbage collector how to delete garbage, and we can also tell the JVM how much space to allocate for each generation (of Java objects) or for the heap. Remember that during garbage collection no other process is executed within the JVM or runtime (this is called STOP THE WORLD), which can affect the overall throughput.

    Each JVM has its own memory segment called heap memory, which is the storage for Java objects. These objects can be grouped based on their age, like young generation (recently created objects) or old generation (surviving objects that have lived to some extent), etc. A Java object is considered garbage when it can no longer be reached from anywhere in the running program. Each generation has its own memory segment within the heap. When this segment gets full, the garbage collector deletes all the objects that are marked as garbage to create space. When the old generation space gets full, the JVM performs a major collection to remove the unused objects and reclaim their space. A major garbage collection takes a significant amount of time and can affect system performance.

    When we create a managed server, either on the same machine or on a remote machine, it gets its initial startup parameters from the $DOMAIN_HOME/bin/setDomainEnv.sh/cmd file. By default two parameters are set:

        Xms: the initial heap size
        Xmx: the max heap size

    Try to set equal initial and max heap sizes. The startup time can be a little longer, but for long-running applications it will provide better performance. When we set -Xms512m -Xmx1024m, the physical heap size will be 512m. This means that there are pages of memory (beyond the 512m) that the JVM does not explicitly control; they will be controlled by the OS, which could reserve them for other tasks. In this case, it is an advantage if the JVM claims the entire memory at once and does not have to spend time extending the heap when more memory is needed. You can also use the -XX:MaxPermSize (maximum size of the permanent generation) option for the Sun JVM. You should adjust the size accordingly if your application dynamically loads and unloads a lot of classes, in order to optimize performance.

    You can set the JVM options/heap size from the following places:

        Through the Admin console, in the Server start tab
        In the startManagedWebLogic script for the managed servers ($DOMAIN_HOME/bin/startManagedWebLogic.sh/cmd): JAVA_OPTIONS="-Xms1024m -Xmx1024m" ${JAVA_OPTIONS}
        In the setDomainEnv script for the managed servers and admin server (domain wide): USER_MEM_ARGS="-Xms1024m -Xmx1024m"

    When there is free memory available in the heap but it is too fragmented and not contiguously located to store the object, or when there is actually insufficient memory, we can get java.lang.OutOfMemoryError. We should create a thread dump and analyze it, if that is possible, in case of such an error.

    The second option we can use to produce higher throughput is garbage collection. We can roughly divide GC algorithms into two categories: parallel and concurrent. Parallel GC stops the execution of the whole application and performs the full GC; this generally provides better throughput but also high latency, using all the CPU resources during GC. Concurrent GC, on the other hand, produces low latency but also lower throughput, since it performs GC while the application executes. The JRockit JVM provides some useful command-line parameters to control its GC scheme, like the -XgcPrio command-line parameter, which takes the following options:

        XgcPrio:pausetime (to minimize latency; concurrent GC)
        XgcPrio:throughput (to maximize throughput; parallel GC)
        XgcPrio:deterministic (to guarantee a maximum pause time, for real-time systems)

    The Sun JVM has similar parameters (like -XX:+UseParallelGC or -XX:+UseConcMarkSweepGC) to control its GC scheme. We can add -verbose:gc -XX:+PrintGCDetails to monitor indications of a problem with garbage collection. Try configuring the JVMs of all managed servers to execute in -server mode to ensure that they are optimized for a server-side production environment.

    Read the article

  • SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28

    - by pinaldave
    Locking is a mechanism used by the SQL Server Database Engine to synchronize access by multiple users to the same piece of data at the same time. In simpler words, it maintains the integrity of data by protecting (or preventing) access to the database object. From Book On-Line:

        LCK_M_BU: Occurs when a task is waiting to acquire a Bulk Update (BU) lock.
        LCK_M_IS: Occurs when a task is waiting to acquire an Intent Shared (IS) lock.
        LCK_M_IU: Occurs when a task is waiting to acquire an Intent Update (IU) lock.
        LCK_M_IX: Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock.
        LCK_M_S: Occurs when a task is waiting to acquire a Shared lock.
        LCK_M_SCH_M: Occurs when a task is waiting to acquire a Schema Modify lock.
        LCK_M_SCH_S: Occurs when a task is waiting to acquire a Schema Share lock.
        LCK_M_SIU: Occurs when a task is waiting to acquire a Shared With Intent Update lock.
        LCK_M_SIX: Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock.
        LCK_M_U: Occurs when a task is waiting to acquire an Update lock.
        LCK_M_UIX: Occurs when a task is waiting to acquire an Update With Intent Exclusive lock.
        LCK_M_X: Occurs when a task is waiting to acquire an Exclusive lock.

    LCK_M_XXX Explanation: I think the explanation of this wait type is the simplest. When any task is waiting to acquire a lock on any resource, this particular wait type occurs. The common reason for the task to be waiting to put a lock on the resource is that the resource is already locked and some other operations may be going on within it. This wait also indicates that resources are not available or are occupied at the moment due to some reason. There is a good chance that the waiting queries start to time out if this wait type is very high. The client application's performance may degrade as well.

    You can use various methods to find blocking queries:

        EXEC sp_who2
        SQL SERVER – Quickest Way to Identify Blocking Query and Resolution – Dirty Solution
        DMV – sys.dm_tran_locks
        DMV – sys.dm_os_waiting_tasks

    Reducing LCK_M_XXX waits:

        Check the explicit transactions. If transactions are very long, this wait type can start building up because of other waiting transactions. Keep the transactions small.
        Serializable isolation can build up this wait type. If that is an acceptable isolation level for your business, this wait type may be natural. The default isolation of SQL Server is Read Committed. One of my clients has changed their isolation to Read Uncommitted. I strongly discourage the use of this because it will probably lead to having lots of dirty data in the database.
        Identify blocking queries using the various methods described above, and then optimize them.
        Partitioning can be one of the options to consider, because it will allow transactions to execute concurrently on different partitions.
        If there are runaway queries, use a timeout. (Please discuss this solution with your database architect first, as a timeout can work against you.)
        Check that there is no memory- or IO-related issue using the following counters:

    Checking Memory Related Perfmon Counters

        SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2)
        SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark)
        SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
        SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds)
        Memory: Available Mbytes (information only)
        Memory: Page Faults/sec (benchmark only)
        Memory: Pages/sec (benchmark only)

    Checking Disk Related Perfmon Counters

        Average Disk sec/Read (consistently higher value than 4-8 milliseconds is not good)
        Average Disk sec/Write (consistently higher value than 4-8 milliseconds is not good)
        Average Disk Read/Write Queue Length (consistently higher value than benchmark is not good)

    Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • 3GB RAM Installed and Detected by BIOS, Windows Vista 32bit Only Sees 2GB

    - by Nathan Taylor
    I am attempting to install more RAM on a Windows Vista 32-bit machine which is using a X6DAL-XG motherboard. The RAM amount reported in the BIOS is 3GB+, but Windows is only reporting 2GB installed. The motherboard has 6 RAM bays, which I have populated with various combinations of 4 1GB sticks and 2 512MB sticks, but no matter how I configure them Windows doesn't see more than 2GB. I realize of course 32-bit Windows has a 3GB cap on memory, but that doesn't explain why it will only report 2GB when there are in fact (currently) 5GB installed. I should think I would be able to see at least 3GB. According to the spec list for the motherboard, the minimum RAM requirements are DDR333/266MHz installed in pairs. I have done this exactly, and the BIOS isn't reporting any problems at POST.

    RAM configuration (according to CPU-Z):

        Slot #1: Kingston 128mx72D266C25 - 1024MB PC2100 (133MHz)
        Slot #2: Kingston KVR266X72RC25/1024 - 1024MB PC2100 (133MHz)
        Slot #3: PQI - 512MB PC2700 (166MHz)
        Slot #4: Kingston 128mx72D266C25 - 1024MB PC2100 (133MHz)
        Slot #5: Kingston KVR266X72RC25/1024 - 1024MB PC2100 (133MHz)
        Slot #6: PQI - 512MB PC2700 (166MHz)

    I'm not sure if the memory specs above conflict with this statement in the motherboard manual or not:

        Memory Support: The X6DAL-XG supports up to 12GB/24GB of registered ECC DDR333/266 (PC2700/PC2100) memory. The motherboard was designed to support 4GB (PC2100) modules in each slot, but only the 2GB modules have been tested. When using registered ECC DDR333 (PC2700) memory, installing four pieces of double-banked memory or six pieces of single-banked memory is supported.

    So, am I doing something wrong with the RAM I have now, or is there some sort of compatibility problem which I am missing? Thanks!

    Read the article

  • Bash Script - Traffic Shaping

    - by Craig-Aaron
    Hey all, I was wondering if you could have a look at my script and help me add a few things to it. How do I get it to find how many active ethernet ports I have? And how do I filter more than one ethernet port? How do I get this to do a range of IP addresses? Once I have a few ethernet ports, I need to add traffic control to each one.

        #!/bin/bash

        # Name of the traffic control command.
        TC=/sbin/tc

        # The network interface we're planning on limiting bandwidth.
        IF=eth0             # Network card interface

        # Download limit (in mega bits)
        DNLD=10mbit         # DOWNLOAD Limit

        # Upload limit (in mega bits)
        UPLD=1mbit          # UPLOAD Limit

        # IP address range of the machine we are controlling
        IP=192.168.0.1      # Host IP

        # Filter options for limiting the intended interface.
        U32="$TC filter add dev $IF protocol ip parent 1:0 prio 1 u32"

        start() {
            # Hierarchical Token Bucket (HTB) to shape bandwidth
            $TC qdisc add dev $IF root handle 1: htb default 30          # Creates the root scheduler
            $TC class add dev $IF parent 1: classid 1:1 htb rate $DNLD   # Child scheduler to shape download
            $TC class add dev $IF parent 1: classid 1:2 htb rate $UPLD   # Child scheduler to shape upload
            $U32 match ip dst $IP/24 flowid 1:1    # Filter to match the interface, limit download speed
            $U32 match ip src $IP/24 flowid 1:2    # Filter to match the interface, limit upload speed
        }

        stop() {
            # Stop the bandwidth shaping.
            $TC qdisc del dev $IF root
        }

        restart() {
            # Self-explanatory.
            stop
            sleep 1
            start
        }

        show() {
            # Display status of traffic control.
            $TC -s qdisc ls dev $IF
        }

        case "$1" in
          start)
            echo -n "Starting bandwidth shaping: "
            start
            echo "done"
            ;;
          stop)
            echo -n "Stopping bandwidth shaping: "
            stop
            echo "done"
            ;;
          restart)
            echo -n "Restarting bandwidth shaping: "
            restart
            echo "done"
            ;;
          show)
            echo "Bandwidth shaping status for $IF:"
            show
            echo ""
            ;;
          *)
            pwd=$(pwd)
            echo "Usage: tc.bash {start|stop|restart|show}"
            ;;
        esac

        exit 0

    Thanks
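    On the first question, one common approach on Linux is to enumerate /sys/class/net and keep the eth* devices whose operstate reads "up". A sketch of the check (shown in Python here; the same file reads translate directly into a shell loop in the script above):

        import os

        def active_eth_ports():
            """Return the ethN interfaces whose link is currently up (Linux)."""
            ports = []
            for dev in sorted(os.listdir('/sys/class/net')):
                if dev.startswith('eth'):
                    with open('/sys/class/net/%s/operstate' % dev) as f:
                        if f.read().strip() == 'up':
                            ports.append(dev)
            return ports

        print(active_eth_ports())   # e.g. ['eth0', 'eth1']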

    Read the article

  • Exchange 2003: Unrestrict send mail size for specific users / groups?

    - by Kip
    Good (insert appropriate time of day here) SF folks, I have the following situation: we have a message size limit for sending set at 20MB in Global Settings | Message Delivery, and a limit of 50MB set at an external 3rd-party spam vendor. I need to enable some users to be able to send messages that are upwards of around 40MB in size. However, when I set the sending message size maximum to 50MB within the delivery restrictions of a user's Exchange properties, it would appear that this does not win; it seems that the lowest value wins in this situation.

    I need to be able to allow certain users to send messages larger than the 20MB limit, but to have everyone else keep the 20MB limit in place. How can I do this? The only way I could see was to raise the limit set in Global Settings | Message Delivery to 50MB and then set everyone else's (bar the people who need the increased limit) delivery restrictions max size down. But I cannot see an easy way to do that last bit, hence my post here looking for advice. There are valid reasons we need to send mail this size, and whilst we are putting together other mechanisms for delivering this data, we still need to get this put in place. Thanks in advance, Kip

    Read the article

  • Is this a good starting point for iptables in Linux?

    - by sbrattla
    Hi, I'm new to iptables, and I've been trying to put together a firewall whose purpose is to protect a web server. The rules below are the ones I've put together so far, and I would like to hear whether the rules make sense, and whether I've left out anything essential. In addition to port 80, I also need to have port 3306 (MySQL) and 22 (SSH) open for external connections. Any feedback is highly appreciated!

        #!/bin/sh

        # Clear all existing rules.
        iptables -F

        # ACCEPT connections for loopback network connection, 127.0.0.1.
        iptables -A INPUT -i lo -j ACCEPT

        # ALLOW established traffic
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # DROP packets that are NEW but do not have the SYN bit set.
        iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

        # DROP fragmented packets, as there is no way to tell the source
        # and destination ports of such a packet.
        iptables -A INPUT -f -j DROP

        # DROP packets with all tcp flags set (XMAS packets).
        iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

        # DROP packets with no tcp flags set (NULL packets).
        iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

        # ALLOW ssh traffic (and prevent against DoS attacks)
        iptables -A INPUT -p tcp --dport ssh -m limit --limit 1/s -j ACCEPT

        # ALLOW http traffic (and prevent against DoS attacks)
        iptables -A INPUT -p tcp --dport http -m limit --limit 5/s -j ACCEPT

        # ALLOW mysql traffic (and prevent against DoS attacks)
        iptables -A INPUT -p tcp --dport mysql -m limit --limit 25/s -j ACCEPT

        # DROP any other traffic.
        iptables -A INPUT -j DROP

    Read the article

  • How to force two process to run on the same CPU?

    - by kovan
    Context: I'm programming a software system that consists of multiple processes. It is programmed in C++ under Linux, and the processes communicate among themselves using Linux shared memory.

    Usually in software development, performance optimization is done in the final stage, and here I ran into a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one CPU), it was only able to use 3 cores, thus wasting 25% of the CPU power on the first ones, and more than 60% on the second ones. After much research, and having discarded mutex and lock contention, I found out that the time was being wasted on shmdt/shmat calls (detach and attach to shared memory segments). After some more research, I found out that these CPUs, which usually are AMD Opteron and Intel Xeon, use a memory system called NUMA, which basically means that each processor has its own fast "local memory", and accessing memory from other CPUs is expensive.

    After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread in them. This seems to kill performance, as processes are constantly accessing memory from other processes.

    Question: Now, the question is, is there any way to force pairs of processes to execute on the same CPU? I don't mean to force them to always execute on the same processor, as I don't care which one they are executed on, although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule this "brother" process (which is the process with which it communicates through shared memory) on that same processor, so that performance is not penalized.
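    For what it's worth, Linux exposes exactly this knob through the sched_setaffinity(2) system call (the taskset and numactl utilities wrap it). Below is a minimal sketch of pinning a communicating pair to the same cores, shown in Python, which wraps the same syscall; the C++ processes would call sched_setaffinity directly with a cpu_set_t. The PID and core numbers are placeholders:

        import os

        def pin_pair(pid_a, pid_b, cpus=(0, 1)):
            """Pin two cooperating processes to the same cores so the kernel
            keeps them on one NUMA node (Linux-only; requires permission to
            modify the target process)."""
            os.sched_setaffinity(pid_a, cpus)
            os.sched_setaffinity(pid_b, cpus)

        # Example: pin this process and a hypothetical brother with PID 4321.
        pin_pair(os.getpid(), 4321)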

    Read the article

  • Best practice PHP Form Action

    - by Rob
    Hi there, I've built a new script (from scratch, not a CMS) and I've done a lot of work on reducing memory usage and the time it takes for the page to be displayed (caching HTML etc.). There's one thing that I'm not sure about though.

    Take a simple example of an article with a comments section. If the comment form posts to another page that then redirects back to the article page, I won't have the problem of people clicking refresh and resending the information. However, if I do it that way, I have to load up my script twice, use twice as much memory, and it takes twice as long, whilst I'm still only displaying the page once.

    Here's an example from my load log. The first load of the article is from the cache; the second rebuilds the page after the comment is posted.

    Example 1:

        0 queries using 650856 bytes of memory in 0.018667 - domain.com/article/1/my_article.html
        9 queries using 1325723 bytes of memory in 0.075825 - domain.com/article/1/my_article/newcomment.html
        0 queries using 650856 bytes of memory in 0.029449 - domain.com/article/1/my_article.html

    Example 2:

        0 queries using 650856 bytes of memory in 0.023526 - domain.com/article/1/my_article.html
        9 queries using 1659096 bytes of memory in 0.060032 - domain.com/article/1/my_article.html

    Obviously the time fluctuates, so you can't really compare that. But as you can see, with the first method I use more memory and it takes longer to load. BUT the first method avoids the refresh problem. Does anyone have any suggestions for the best approach, or for alternative ways to avoid the extra load (admittedly minimal, but I'd still like to avoid it) whilst also avoiding the refresh problem?

    Read the article

  • Are programming languages and methods inefficient? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi, for a long time I have been thinking about and studying the output of the C language compiler in assembler form, as well as CPU architecture. I know this may be silly to you, but it seems to me that something is very inefficient. Please don't be angry if I am wrong and there is some reason I do not see for all these principles. I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong; I know the genius minds of the people who put PCs together knew a reason to do so. What exactly, do you ask? I'll tell you right away; I use C as an example:

    1: Stack local scope memory allocation. Typical local memory allocation uses the stack: just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly needed to allocate RAM by default stack values, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, where the address you request is translated further before reaching the actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b, and so on? And access them just by mov 0x00000000, #10? Because you won't actually access memory blocks 0x00000000 and 0x00000004, but those the OS set the paging tables to. Actually, since memory allocation by ebp and esp uses indirect addressing, "my" way would be even faster.

    2: Variable allocation duplicity. When you run an application, the loader loads its code into RAM. When you create a variable, or a string, the compiler generates code that pushes these values onto the top of the stack when created in main. So there is an actual instruction for doing so, and that actual number in memory. So there are two entries of the same value in RAM: one in the form of an instruction, the second in the form of the actual bytes in RAM. But why? Why not just, when declaring a variable, count at which memory block it will be, and then, when it is used, just insert this memory location?

    Read the article
