Search Results

Search found 16731 results on 670 pages for 'memory limit'.


  • How to change the BlackBerry initial message download limit

    - by Mikey B
    BES 4.1.6, BlackBerry 8300 (Curve). Hi guys, I've noticed that handhelds will typically retrieve only the first few KB of a message and then prompt the user to manually retrieve more (or auto-retrieve it as they scroll down). The problem is that I have a BlackBerry app that needs to see the entire message the first time it's opened. Is there a setting on BES that will let me change how much data a handheld initially retrieves per message? Thanks, M

    Read the article

  • How do I limit the CPU share of TrustedInstaller.exe on a Vista system

    - by Dan Neely
    I'm trying to fix a few low-end single-core desktops running Vista. In normal use they're fast enough not to be a problem. The issue is that these machines are only on while being used, primarily for school work, so Windows Update starts installing patches during that time; it launches TrustedInstaller, which in turn hogs 100% of the CPU and renders the machines all but unusable for however long the patching takes.
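
    Stock Vista has no per-process CPU cap, but one possible mitigation is to drop TrustedInstaller's scheduling priority so interactive work preempts it. A sketch using the stock wmic tool (64 is the numeric "idle" priority class; run it while the installer is active):

        REM Run from an elevated command prompt; lowers TrustedInstaller to idle priority
        wmic process where name="TrustedInstaller.exe" CALL setpriority 64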

    Read the article

  • Limit vsftpd uploads to a given set of file names

    - by Chen Levy
    I need to configure an anonymous FTP server that accepts uploads. Given this requirement, I am trying to lock the server down to the bare minimum. One of the restrictions I wish to impose is to allow uploading only a given set of file names. I tried removing write permission from the upload folder and placing in it some empty files that are writable:

        /var/ftp/            [root.root] [drwxr-xr-x]
        |-- upload/          [root.root] [drwxr-xr-x]
        |   |-- upfile1      [ftp.ftp]   [--w-------]
        |   `-- upfile2      [ftp.ftp]   [--w-------]
        `-- download/        [root.root] [drwxr-xr-x]
            `-- ...

    But this approach didn't work: when I tried to upload upfile1, the client tried to delete it and create a new file in its place, and there are no permissions for that. Is there a way to make this work, or perhaps a different approach, like abusing the deny_file option?
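
    For reference, a minimal vsftpd.conf sketch of the pre-created-files variant (an assumption-laden starting point, not a verified recipe: whether an anonymous STOR may write into an existing file differs between vsftpd versions, so test it first):

        # /etc/vsftpd.conf (fragment)
        anonymous_enable=YES
        write_enable=YES
        anon_upload_enable=YES       # permit STOR into the upload dir
        anon_mkdir_write_enable=NO   # no new directories
        anon_other_write_enable=NO   # no DELE/RNFR, so no delete-and-recreate
        # Note: deny_file is a denylist, e.g. deny_file={*.exe,*.mp3} - it
        # cannot express "allow only these names" on its own.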

    Read the article

  • Setting a time limit for a transaction in MySQL/InnoDB

    - by Trevor Burnham
    This sprang from a related question, where I wanted to know how to force two transactions to occur sequentially in a trivial case (where both are operating on only a single row). I got an answer (use SELECT ... FOR UPDATE as the first line of both transactions), but this leads to a problem: if the first transaction is never committed or rolled back, then the second transaction will be blocked indefinitely. The innodb_lock_wait_timeout variable sets the number of seconds after which the client trying to make the second transaction would be told "Sorry, try again"... but as far as I can tell, they'd be trying again until the next server reboot. So:

    1. Surely there must be a way to force a ROLLBACK if a transaction is taking forever?
    2. Must I resort to using a daemon to kill such transactions, and if so, what would such a daemon look like?
    3. If a connection is killed by wait_timeout or interactive_timeout mid-transaction, is the transaction rolled back? Is there a way to test this from the console?

    Clarification: innodb_lock_wait_timeout sets the number of seconds that a transaction will wait for a lock to be released before giving up; what I want is a way of forcing a lock to be released.

    Update: Here's a simple example that demonstrates why innodb_lock_wait_timeout is not sufficient to ensure that the second transaction is not blocked by the first:

        START TRANSACTION;
        SELECT SLEEP(55);
        COMMIT;

    With the default setting of innodb_lock_wait_timeout = 50, this transaction completes without errors after 55 seconds. And if you add an UPDATE before the SLEEP line, then initiate a second transaction from another client that tries to SELECT ... FOR UPDATE the same row, it's the second transaction that times out, not the one that fell asleep. What I'm looking for is a way to force an end to this transaction's restful slumber.
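
    As a sketch of what such a watchdog could look like (assuming MySQL 5.1+ with the InnoDB plugin, which exposes information_schema.INNODB_TRX; the 60-second threshold is arbitrary):

        -- Find transactions that have been open for more than 60 seconds...
        SELECT trx_mysql_thread_id AS thread_id,
               TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_sec
        FROM   information_schema.INNODB_TRX
        WHERE  trx_started < NOW() - INTERVAL 60 SECOND;

        -- ...then terminate each offending connection, which rolls its
        -- transaction back:
        -- KILL <thread_id>;

    A loop around these two statements, run from cron or the event scheduler, is essentially the daemon asked about in question 2.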

    Read the article

  • Oracle Database In-Memory Launch Featuring Larry Ellison – June 10

    - by Roxana Babiciu
    For more than three-and-a-half decades, Oracle has defined database innovation. With our market-leading technologies, customers have been able to out-think and out-perform their competition. Soon they will be able to do that even faster. At a live launch event and simultaneous webcast, Larry Ellison will reveal the future of the database. Promote this strategic event to customers. Registration for the live event begins at 9am PT.

    Read the article

  • Oracle Database In-Memory Launch Featuring Larry Ellison – June 10

    - by Cinzia Mascanzoni
    For more than three-and-a-half decades, Oracle has defined database innovation. With our market-leading technologies, customers have been able to out-think and out-perform their competition. Soon they will be able to do that even faster. At a live launch event and simultaneous webcast, Larry Ellison will reveal the future of the database. Promote this strategic event to partners and customers. Registration for the live event begins at 5pm GMT, 6pm CET.

    Read the article

  • rTorrent, too low memory usage!?

    - by Claudiu
    I want to know from more experienced rTorrent users how to tweak .rtorrent.rc so that rTorrent will cache disk reads and writes (the same as uTorrent does). I have set max_memory_usage = 1GB, but this amount is not used. I run 6 rTorrent instances on a quad-core, 8 GB RAM machine, and the total used memory reported by htop is only ~500 MB. I need to use memory buffers because disk I/O activity is very high.
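
    For comparison, the relevant .rtorrent.rc fragment (a sketch; in the rc syntax I've seen, the value is given in bytes, and as far as I understand max_memory_usage bounds rTorrent's mmap'd address space rather than a uTorrent-style private cache, which may be why htop's RSS stays low - verify against your build):

        # Allow up to ~3 GiB of mapped torrent data per instance (value in bytes)
        max_memory_usage = 3221225472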

    Read the article

  • CopSSH SFTP -- limit users' access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed the instructions in this FAQ many times: http://www.itefix.no/i2/node/37 It does not do what the title claims: it allows every user access to every other user's home directory, as well as access to all subfolders below the CopSSH installation path. I'm only using this for SFTP access, and I need my users to be sandboxed into their home directories only. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution.

    The details: here is exactly what I tried as I followed the FAQ. My CopSSH installation directory is C:\Program Files\CopSSH.

        net localgroup sftp_users /ADD                           **Create a user group to hold all my SFTP users
        cacls c:\ /c /e /t /d sftp_users                         **For that group, deny access at the top level and all levels below
        cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users   **Allow my user group access to the CopSSH installation directory and its subdirectories

    For each SFTP user, I create a new Windows user account, then:

        net localgroup sftp_users sftp_user_1 /add               **Add my user to the group I've created

    Next I open the Activate User wizard in CopSSH, choose the user and "/bin/sftponly", and leave checked: "Remove copssh home directory if it exists", "Create keys for public key authentication", and "Create link to user's real home directory".

    This works; however, every user has access to every other user's home directory as well as the CopSSH root directory. So I tried denying all users access to the user home directory:

        cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users   **Deny access for users to the user home directory

    Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, these permissions were not allowed by Windows, because the deny rule I created at the home directory was inherited and overrode my allow rule. The next step for me would be to remove the deny rule at the home directory and, for each user folder, add a deny rule for every user it doesn't belong to plus an allow rule for the one user it does belong to. However, as my user list gets long, this will become very cumbersome (see the sketch below). Thanks for the help!
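
    That last, cumbersome step can at least be scripted. A hypothetical batch sketch (it assumes each folder under home\ is named exactly after its user, and uses only the cacls switches already shown above):

        @echo off
        REM For every user home folder, deny every *other* user individually,
        REM then grant Change access to the folder's own user.
        for /D %%U in ("C:\Program Files\CopSSH\home\*") do (
            for /D %%O in ("C:\Program Files\CopSSH\home\*") do (
                if /I not "%%~nU"=="%%~nO" cacls "%%~fU" /c /e /t /d "%%~nO"
            )
            cacls "%%~fU" /c /e /t /g "%%~nU":C
        )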

    Read the article

  • Amazon EC2 High-Memory Extra Large Instance

    - by Simpanoz
    I am new to MongoDB and EC2. Suppose I use the following single MongoDB server: High-Memory Extra Large Instance (17.1 GiB memory, 6.5 ECU (2 virtual cores with 3.25 EC2 Compute Units each), 420 GB of local instance storage, 64-bit platform). As a layman, if we quantify I/O as data in MB/sec, how much I/O traffic can the MongoDB server handle easily without being burnt out? Assume the default settings of an EC2 server with Ubuntu and the MongoDB version available in the AWS Marketplace. Thanks in advance

    Read the article

  • Postfix concurrency limit with round-robin DNS

    - by goose
    Take the following internal round-robin DNS setup:

        mymta.com.    IN    A    172.31.1.1
        mymta.com.    IN    A    172.31.1.2
        mymta.com.    IN    A    172.31.1.3
        mymta.com.    IN    A    172.31.1.4
        mymta.com.    IN    A    172.31.1.5
        mymta.com.    IN    A    172.31.1.6
        mymta.com.    IN    A    172.31.1.7
        mymta.com.    IN    A    172.31.1.8
        mymta.com.    IN    A    172.31.1.9
        mymta.com.    IN    A    172.31.1.10

    Now assume the following Postfix setup (assume these are the only tweaks from the defaults in the Debian package):

        # main.cf
        smtp_connection_cache_destinations = mymta.com
        smtp_connection_cache_reuse_limit = 750
        smtp_destination_concurrency_limit = 75

        # transport
        *    :[mymta.com]

    I would expect 75 concurrent connections spread across the 10 A records I've set in DNS. However, I'm seeing more than a few hundred connections to mymta.com, and I'm wondering if Postfix is "smart" enough to set up 75 concurrent connections for each IP address. Thoughts?
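
    One way to pin the ceiling regardless of how many A records the name resolves to (a sketch against stock Postfix, not tested on this setup): give the destination a dedicated transport and cap that transport's process count in master.cf, since actual delivery concurrency can never exceed the number of delivery agent processes:

        # master.cf - dedicated smtp transport, hard-capped at 75 processes
        # service type  private unpriv  chroot  wakeup  maxproc command
        mymta     unix  -       -       n       -       75      smtp

        # main.cf
        mymta_destination_concurrency_limit = 75

        # transport
        *    mymta:[mymta.com]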

    Read the article

  • Apache strace to hunt down a memory leak

    - by Zipp
    We have a server with a memory issue: it keeps allocating memory and doesn't release it. We're running Apache. I set MaxRequestsPerChild to a really low value just so the processes don't hold on to a lot of memory, but has anyone seen calls like these? Am I wrong in thinking that it's probably Drupal pulling too much data back from the cache in the DB?

        read(52, "h_index\";a:2:{s:6:\"weight\";i:1;s"..., 6171) = 1368
        read(52, "\";a:2:{s:6:\"author\";a:3:{s:5:\"la"..., 4803) = 1368
        read(52, ":\"description\";s:19:\"Term name t"..., 3435) = 1368
        read(52, "abel\";s:4:\"Name\";s:11:\"descripti"..., 2067) = 1368
        read(52, "ions\";a:2:{s:4:\"form\";a:3:{s:4:\""..., 16384) = 708
        brk(0x2ab554396000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55f653000
        brk(0x2ab554356000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55f753000
        brk(0x2ab554356000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55f853000
        brk(0x2ab554356000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55f953000
        brk(0x2ab554356000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55fa53000
        brk(0x2ab554356000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55fb53000
        brk(0x2ab554356000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55fc53000
        poll([{fd=52, events=POLLIN|POLLPRI}], 1, 0) = 0 (Timeout)
        write(52, "d\0\0\0\3SELECT cid, data, created, "..., 104) = 104
        read(52, "\1\0\0\1\5E\0\0\2\3def\23drupal_database_nam"..., 16384) = 1368
        read(52, ";s:11:\"granularity\";a:5:{s:4:\"ye"..., 34783) = 1368
        read(52, ":4:\"date\";}s:9:\"datestamp\";a:9:{"..., 33415) = 1368
        read(52, "\";i:0;s:15:\"display_default\";i:0"..., 32047) = 1368
        read(52, "e as an integer value.\";s:8:\"set"..., 30679) = 1368
        read(52, "label' pairs, i.e. 'Fraction': 0"..., 29311) = 1368

    top output (the processes just keep growing in memory):

        PID    USER    PR  NI  VIRT  RES   SHR  S  %CPU  %MEM  TIME+    COMMAND
        12845  apache  15  0   581m  246m  37m  S  0.0   4.1   0:17.39  httpd
        12846  apache  15  0   571m  235m  37m  S  0.0   4.0   0:12.13  httpd
        12833  apache  15  0   420m  117m  37m  S  0.0   2.0   0:06.04  httpd
        12851  apache  15  0   412m  113m  37m  S  0.0   1.9   0:05.32  httpd
        13871  apache  15  0   409m  109m  37m  S  0.0   1.8   0:04.90  httpd
        12844  apache  15  0   407m  108m  37m  S  0.0   1.8   0:04.50  httpd
        13870  apache  15  0   407m  108m  37m  S  0.3   1.8   0:03.50  httpd
        14903  apache  15  0   402m  103m  37m  S  0.3   1.7   0:01.29  httpd
        14850  apache  15  0   397m  100m  37m  S  0.0   1.7   0:02.08  httpd
        14907  apache  15  0   390m  93m   36m  S  0.0   1.6   0:01.32  httpd
        13872  apache  15  0   386m  91m   37m  S  0.0   1.5   0:03.13  httpd
        12843  apache  15  0   373m  81m   37m  S  0.0   1.4   0:02.51  httpd
        14901  apache  15  0   370m  75m   33m  S  0.0   1.3   0:00.78  httpd
        14904  apache  15  0   335m  29m   15m  S  0.0   0.5   0:00.26  httpd
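
    Before pointing at Drupal, it can help to watch which workers grow and by how much between requests. A small sketch with standard procps tools (the interval and counts are arbitrary):

        # Top 10 httpd workers by resident memory, refreshed every 5 seconds
        watch -n 5 'ps -C httpd -o pid=,rss=,vsz= --sort=-rss | head -10'

        # Rough total RSS of all httpd workers, in MB (RSS double-counts
        # pages shared between workers, so treat this as an upper bound)
        ps -C httpd -o rss= | awk '{sum += $1} END {print sum/1024 " MB"}'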

    Read the article

  • crunchbang: it takes up *how* much memory?!?!

    - by Theo Moore
    I've been trying many distros of Linux lately, trying to find something I like for my netbook. I started out with Ubuntu, and I can tell you I am a big fan. Ubuntu is now fast to install, much simpler to administer, and pretty light resource-wise. My original install was the standard 32-bit version of 9.04. I tried the netbook remix version of this release, but it was very, very slow. Even the full-blown version used only about 200 MB. Much better than the almost 800 that the recommended Windows version took. Once the newest release of Ubuntu was out, I decided to try the netbook remix of 10.04. It used even less RAM; only about 150 MB. I thought I'd found my OS. I certainly settled in and prepared to use it forever. Then, someone I know suggested I try CrunchBang. It has the most minimalistic UI I've ever seen, using Openbox rather than Gnome or KDE. Very slick, simple and clean. Since I am using the alpha of the most recent version (based on Debian Squeeze), the apps provided for you are few... although more will be provided soon. You do have a word processor, etc., although not the OpenOffice you would normally get in Ubuntu. But the best part? 48 MB. That's it. 48 MB fully loaded, supporting what I call "hotel services". It's fast, boots quickly, and believe it or not, I can even do Java-based development... on my netbook! Pretty slick. More on it as I use it.

    Read the article

  • PHP-FPM Pool, Child Processes and Memory Consumption

    - by Jhilke Dai
    In my PHP-FPM configuration I have 3 pools; the config is:

        ;;;;;;;;;;;;;;;;;;;;;;;
        ; Pool 1 ;
        ;;;;;;;;;;;;;;;;;;;;;;;
        [www1]
        user = www
        group = www
        listen = /tmp/php-fpm1.sock
        listen.backlog = -1
        listen.owner = www
        listen.group = www
        listen.mode = 0666
        pm = dynamic
        pm.max_children = 40
        pm.start_servers = 6
        pm.min_spare_servers = 6
        pm.max_spare_servers = 12
        pm.max_requests = 250
        slowlog = /var/log/php/$pool.log.slow
        request_slowlog_timeout = 5s
        request_terminate_timeout = 120s
        rlimit_files = 131072

    Pools [www2] and [www3] are identical except for the pool name and socket path (/tmp/php-fpm2.sock and /tmp/php-fpm3.sock).

    I calculated pm.max_children according to some example calculations on the web, like 40 x 40 MB = 1600 MB. I have set aside 4 GB of RAM for PHP; according to the calculations that is 40 child processes per socket, and I have a total of 3 sockets in my Nginx and FPM configuration. My doubt is about the amount of memory consumed by those child processes. I tried to create high load on the server via httperf --hog and siege, but I could not calculate the accurate memory usage of all the PHP processes (other processes like MySQL and Nginx were also running), and all the sockets were in use. So I seek guidance from anyone who has done this before, or who knows exactly how pm.max_children works in PHP. Since I have 3 pools/sockets with 40 child processes each, does that amount to 3 x 40 x 40 MB of memory usage? Or is it just 40 max child processes sharing the 3 sockets (with a total memory usage of just 40 x 40 MB)?
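
    For what it's worth, pm.max_children is a per-pool setting, so the worst case with this config is 3 x 40 workers. The per-worker memory can be measured rather than guessed; a sketch assuming all pool workers run as the www user, as in the config above:

        # Count php-fpm workers and report average/total resident memory.
        # RSS double-counts pages shared with the master process, so the
        # total is an upper bound.
        ps -u www -o rss=,comm= | awk '
            /php-fpm/ { sum += $1; n++ }
            END { printf "workers=%d  avg=%.1f MB  total=%.1f MB\n",
                         n, sum/n/1024, sum/1024 }'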

    Read the article

  • Limit download usage for clients

    - by Kumar P
    I am maintaining a few Windows XP machines behind a RHEL 5 server, and I want to set a quota on download file size. How do I do it? I mean, on the LAN, user A's maximum download file size should be 300 MB and user B's 200 MB, and downloads should be blocked when a user tries to fetch a file larger than their limit (user A should not be allowed to download a file over 300 MB). Alternatively, is it possible to set a quota for the maximum downloaded per day, and how? How can I do this?
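
    Since the XP clients sit behind the RHEL 5 box, one common approach (not mentioned in the question, so consider it a swapped-in suggestion) is to route their HTTP traffic through Squid and cap reply sizes per source address. A sketch with hypothetical IPs; the exact directive form varies between Squid 2.6 (bytes with allow/deny) and Squid 3 (sizes with units), so check your version:

        # /etc/squid/squid.conf (fragment)
        acl userA src 192.168.1.10
        acl userB src 192.168.1.11
        # Deny any single HTTP reply larger than the per-user cap
        reply_body_max_size 314572800 allow userA    # 300 MB
        reply_body_max_size 209715200 allow userB    # 200 MB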

    Read the article

  • Google Chrome sync: limit for bookmarks & extensions?

    - by Lyubomyr Shaydariv
    Actually, Chrome is my favorite web browser, and one of its most powerful features is synchronizing your data into a Google account. Over the last years I have gained a lot of bookmarks, and from time to time I browse the extensions gallery to find new valuable ones. Really, synchronizing between my work and home PCs freed me from manual sync. But in recent months I have experienced strange glitches. I guess they may be caused by the large number of stored bookmarks (potentially about 3K, by my estimate, but please don't ask why :)) and extensions (about 130 installed, but only 10-15 in daily use). I can mention the following strange things:

    - Recently added bookmarks sometimes are not synchronized (e.g. I add a bookmark at work, but it's not guaranteed I'll see it at home that evening), even though about:sync indicates a good sync process.
    - Sometimes recently modified bookmarks appear in either (let's call them) "last at home" or "last at work" bookmark folders.
    - Sometimes bookmarks are not synced at all. (Moreover, Chromium versions may even crash.)
    - Extensions are not synced at all now. Perhaps there's another reason, but Google Mail Checker and Google Reader Notifier do not show indicators of incoming e-mails and news.

    I'm not sure, but it looks like I might be exceeding Chrome's internal sync limits... Is that right? Are there any workarounds, or should I do a massive bookmarks/extensions cleanup (I really don't want to :()? I mostly use Google Chrome Canary builds, and my current one is 12.0.732.0. Thanks in advance.

    Update #1 (2011-04-19): I removed about 50 extensions that I'm not interested in (or that I consider junk), and got some encouraging results:

    - The extension count is below 100 (exactly 97);
    - The chrome://extensions page no longer gets slow (or even frozen) when enabling/disabling/uninstalling extensions;
    - The extensions seem to be synchronized again.

    Read the article

  • VMware ESX licensing limit on vCPUs per VM

    - by maruti
    When a server has more than 8 cores per CPU (16 logical processors in total) and the ESX Standard license is applied, what does that mean for VM performance? VMware ESX/ESXi limits the number of vCPUs per guest VM depending on the license; as far as I know both the Standard and Advanced licenses allow 4 vCPUs per VM, though I don't know the exact numbers. Since each VM on the host is allowed at most 4 vCPUs, is there any need to upgrade to the Advanced edition for performance benefits if none of the VMs have workloads that need more than 4 vCPUs?

    Read the article

  • Out of memory error when enabling the OpenX Market plugin

    - by Jeremy Pippin
    We're trying to enable the OpenX Market plugin on OpenX 2.8.9. Enabling the plugin results in an "Allowed memory exceeded" error:

        Fatal error: Allowed memory size of 201326592 bytes exhausted (tried to allocate 76 bytes) in /home/openx/lib/pear/PEAR.php on line 868

    no matter what we set our PHP memory_limit to; we even tried more than 512 MB. We're running RHEL 5.6 and PEAR 1.9.4. Has anybody else come across this problem?

    Read the article

  • How to test memory in Linux?

    - by sasayins
    Hi, I'm planning to test my Linux box, and I want to start with memory testing. My problem is: what do I need in order to test the memory in my Linux box? Do I need a tool, or are there APIs I can use to build some scripts? Thanks
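
    A hedged example with memtester, a common userspace choice (assuming the memtester package is installed; it locks and pattern-tests the amount of RAM you request, so keep the size below your free memory):

        # Test 1 GiB of RAM for 3 passes (run as root so the memory can be locked)
        sudo memtester 1024M 3

    For the full physical range, including memory the kernel itself occupies, booting memtest86+ from the boot menu is the usual alternative.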

    Read the article

  • XNA Shader Texture Memory

    - by Alex
    I was wondering about texture optimization in XNA 4.0. Will the ContentManager send the texture data to the GPU directly when the texture gets loaded, or do I send the texture data to the GPU when I declare a texture in my shader? If it's the latter, what happens if I have 5 shaders all using the same texture? Does that mean I send 5 instances of that texture data to the GPU, or am I simply telling the GPU which preloaded texture to use? Or does XNA do the heavy lifting in the background?
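
    For context, a sketch of the sharing case in XNA 4.0 (the parameter name "Texture" is whatever the .fx file declares, and myEffects is a hypothetical collection):

        // Load once: the ContentManager creates the GPU-side texture resource here
        Texture2D sharedTex = Content.Load<Texture2D>("stone");

        // Five effects referencing the same Texture2D bind the same GPU resource;
        // SetValue passes a reference, it does not re-upload the pixel data
        foreach (Effect fx in myEffects)
            fx.Parameters["Texture"].SetValue(sharedTex);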

    Read the article

  • Limit of a DVD-ROM drive

    - by user23950
    I have a Lite-On DVD drive, and I'm going to copy lots of files from maybe 40-70 DVDs. I've been using this drive for about 4-7 months now, and I have also burned lots of DVDs with it. I've just copied the 2nd DVD, and the drive is making a sound, a sound that I don't hear often. Does it depend on the DVDs I'm reading, or is my DVD drive getting old? How many DVDs do you think the drive can copy without threatening its health?

    Read the article
