Search Results

Search found 904 results on 37 pages for '4096'.

Page 21/37 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • haproxy access list using path_dir having issues with firefox

    - by user11243
    I'm trying to route all requests containing the path directory /socket.io/ to a separate port with HAProxy. Here is my config file:

        global
            maxconn 4096   # Total Max Connections. This is dependent on ulimit
            nbproc 2

        defaults
            mode http

        frontend all 0.0.0.0:80
            timeout client 86400000
            default_backend web_servers
            acl is_stream path_dir socket.io
            use_backend stream_servers if is_stream

        backend web_servers
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout server 30000
            timeout connect 4000
            server web1 127.0.0.1:4000 weight 1 maxconn 1024 check

        backend stream_servers
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout queue 5000
            timeout server 86400000
            timeout connect 86400000
            server stream1 127.0.0.1:5100 weight 1 maxconn 1024 check

    URL paths containing /socket.io/ are correctly directed to port 5100 in Chrome and Safari, but not in Firefox. I'm running HAProxy locally on my Mac for development, not sure if that has anything to do with it. I'm using HAProxy 1.4.8 and Firefox 3.6.15. I've tried clearing the cache in Firefox and it didn't help, so I'm thinking there's something wrong with the way HAProxy parses the Firefox request headers.
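
    One thing worth ruling out is the ACL itself. path_dir matches a path component anywhere in the URL; a prefix match is stricter and behaves the same regardless of what else is in the path. A minimal sketch (path_beg here is an assumption to test, not something from the question):

        acl is_stream path_beg -i /socket.io/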

    Read the article

  • DDNS Not Creating Journal (Dhcpd and Named)

    - by user130094
    * EDIT 1 * After monkeying with additional debug logging I see some log entries of interest:

        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: journal rollforward failed: no more
        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: not loaded due to errors.

    ^^^ If I can remedy the above messages I think I'll be good to go ^^^

    * EDIT 2 * Grasping at straws, I touched a forward and a reverse zone journal file and restarted named. Boom! Works. Despite documentation stating the files are created automatically, and despite what I have seen before... I don't know why, but that did the trick. I also re-checked perms on the dir the files live in; as certain as I was, they were correct, with named having rw.

    Setup: CentOS 6 (final), dhcpd 4.1.1-P1, named BIND 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6, chroot /var/named/chroot. Basic DHCP and DNS functionality are in place on 192.168.111.2. Clients are assigned addresses as intended and can resolve local DNS names as well as Internet names. My problem is that named's zone journal files are not created.

    I tried placing the zone files in various directories (/var/named/data, /var/named, /var/named/dynamic); no matter which dir, with named owning it and wide-open perms, I now get nowhere. Along the way I, at one point, got a permission denied when named tried to create the journal, and resolved the issue by:

        chown --recursive named:named /var/named
        chmod --recursive 777 /var/named

    The journal was then created, and here's where things fell apart. I attempted to tame permissions to something more sane and broke it. Once changed, and having restarted named, it threw an error indicating the journal was out of sync (or something to that effect)... that didn't matter since this is a new setup, so I deleted the journal, and now it is not recreated. Now, though, I see no errors in /var/log/messages, my chrooted /var/log/named.log, or the chrooted /var/log/named.debug. I increased the debug level with 'rndc trace' - no love. Increased trace to 10, still nothing. SELinux is disabled:

        [root@server temp]# sestatus
        SELinux status: disabled

    dhcpd.conf:

        allow client-updates;
        ddns-update-style interim;
        subnet 192.168.111.0 netmask 255.255.255.224 {
            ...
            key dhcpudpate {
                algorithm hmac-md5;
                secret LDJMdPdEZED+/nN/AGO9ZA==;
            }
            zone example.lan. {
                primary 192.168.111.2;
                key dhcpudpate;
            }
        }

    named.conf:

        key dhcpudpate {
            algorithm hmac-md5;
            secret "LDJMdPdEZED+/nN/AGO9ZA==";
        };
        zone "example.lan" {
            type master;
            file "/var/named/dynamic/example.lan.db";
            allow-transfer { none; };
            allow-update { key dhcpudpate; };
            notify false;
            check-names ignore;
        };

    The following shows /var/log/named.log output of named starting up - no errors:

        27-Jul-2012 21:33:39.349 general: info: zone 111.168.192.in-addr.arpa/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.349 general: info: zone example.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example2.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example3.lan/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.350 general: info: zone example4.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: zone example5.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: managed-keys-zone ./IN/internal: loaded serial 0
        27-Jul-2012 21:33:39.351 general: info: zone example.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example1.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example2.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example3.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.353 general: info: managed-keys-zone ./IN/external: loaded serial 0
        27-Jul-2012 21:33:39.353 general: notice: running
        27-Jul-2012 21:34:03.825 general: info: received control channel command 'trace 10'
        27-Jul-2012 21:34:03.825 general: info: debug level is now 10

    ...and /var/log/messages for a named start:

        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: BIND 9 is maintained by Internet Systems Consortium,
        Jul 27 23:02:04 server named[9124]: Inc. (ISC), a non-profit 501(c)(3) public-benefit
        Jul 27 23:02:04 server named[9124]: corporation. Support and training for BIND 9 are
        Jul 27 23:02:04 server named[9124]: available at https://www.isc.org/support
        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: adjusted limit on open files from 4096 to 1048576
        Jul 27 23:02:04 server named[9124]: found 2 CPUs, using 2 worker threads
        Jul 27 23:02:04 server named[9124]: using up to 4096 sockets
        Jul 27 23:02:04 server named[9124]: loading configuration from '/etc/named.conf'
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv4 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv6 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: listening on IPv4 interface eth0, 192.168.111.2#53
        Jul 27 23:02:04 server named[9124]: generating session key for dynamic DNS
        Jul 27 23:02:04 server named[9124]: sizing zone task pool based on 12 zones
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view internal, file 'dynamic/3bed2cb3a3acf7b6a8ef408420cc682d5520e26976d354254f528c965612054f.mkeys'
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view external, file 'dynamic/3c4623849a49a53911c4a3e48d8cead8a1858960bccdea7a1b978d73ec2f06d7.mkeys'
        Jul 27 23:02:04 server named[9124]: command channel listening on 127.0.0.1#953

    What can I do to troubleshoot this further? It almost seems as though dhcpd is not triggering the update. Maybe I should troubleshoot there and, if so, how? Many thanks.
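
    One way to isolate the problem is to push a dynamic update by hand with the same TSIG key; if this creates the journal, named is healthy and the fault is on the dhcpd side. A sketch using the key from the question (the test record name is made up):

        nsupdate -y hmac-md5:dhcpudpate:LDJMdPdEZED+/nN/AGO9ZA== <<'EOF'
        server 192.168.111.2
        zone example.lan.
        update add nsupdate-test.example.lan. 300 A 192.168.111.200
        send
        EOF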

    Read the article

  • Dovecot and StartSSL problems with issuer

    - by knoim
    I am using Dovecot (v1) and trying to get my StartSSL certificate running. ssl_key_file points to my private key. I tried pointing ssl_cert_file to my public certificate, with and without using the Class 1 intermediate certificate from http://www.startssl.com/certs/sub.class1.server.ca.pem as ssl_ca_file, as well as combining them:

        cat publickey sub.class1.server.ca.pem > chained

    My mail client keeps telling me the certificate has no issuer, but running openssl x509 on my public certificate tells me the issuer is:

        C=IL, O=StartCom Ltd., OU=Secure Digital Certificate Signing, CN=StartCom Class 1 Primary Intermediate Server CA

    My options for the CSR were:

        openssl req -new -newkey rsa:4096 -nodes

    Dovecot's log doesn't mention any problems.

    EDIT: This doesn't seem to be a problem with Dovecot. I am having the same problem with Postfix, and openssl verify gives me the same error.
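
    A quick way to check whether the server is actually presenting the full chain (the hostname and port below are placeholders):

        openssl verify -CAfile sub.class1.server.ca.pem server.crt
        openssl s_client -connect mail.example.com:993 -showcerts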

    Read the article

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower. My question is this: the directory size of the folder has grown from 4096 to over 200000, as seen in this output of ls -l:

        root@ad57rs0b# ls -l 15000PN5AIA3I6_B
        total 232
        drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes

    On ext3, can these large directory sizes slow down performance? Thanks. Aaron
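
    For what it's worth, ext3 only does fast hashed lookups in large directories when the dir_index feature is enabled; a sketch of checking and enabling it (assuming /dev/sda1 is the filesystem in question; e2fsck -D needs it unmounted):

        tune2fs -l /dev/sda1 | grep dir_index   # is hashed b-tree indexing enabled?
        tune2fs -O dir_index /dev/sda1          # enable it for new directories
        e2fsck -fD /dev/sda1                    # rebuild and optimize existing directories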

    Read the article

  • www-data can upload a file but can't move it after the upload action

    - by user70058
    I am currently running Apache and PHP on Ubuntu. I have a page where a user is supposed to upload a profile image. The action on the backend is supposed to work like this:

        1. Upload the file to the user directory -- WORKS!
        2. Refer to the uploaded file and create a thumbnail in the directory thumbs -- DOES NOT WORK

    www-data has write access to the directory thumbs. My guess is that www-data for some reason does not have proper access to the file that was uploaded.

    UPLOADED FILE PERMISSIONS:

        -rw-r--r-- 1 www-data www-data 47057 Feb 8 23:24 0181c6e0973eb19cb0d98521a6fe1d9e71cd6daa.jpg

    THUMBS DIRECTORY PERMISSIONS:

        drwxr-sr-x 2 www-data www-data 4096 Feb 8 23:23 thumbs

    I'm at a loss here. I'm new to Ubuntu as well. Any help would be greatly appreciated!
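
    Going by the listings alone, www-data can read the file and write the thumbs directory, so the failure may lie elsewhere (a wrong path, open_basedir, a missing image library). A sketch to test the permission theory directly as the web user (the upload path is assumed, not from the question):

        sudo -u www-data head -c1 /var/www/uploads/0181c6e0973eb19cb0d98521a6fe1d9e71cd6daa.jpg > /dev/null && echo readable
        sudo -u www-data touch /var/www/uploads/thumbs/.writetest && echo writable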

    Read the article

  • Service haproxy error

    - by user128296
    I want to configure HAProxy for outgoing mail load balancing. My configuration file /etc/haproxy.cfg is:

        global
            maxconn 4096   # Total Max Connections. This is dependent on ulimit
            daemon
            nbproc 4       # Number of processing cores. Dual Dual-core Opteron is 4 cores for example.

        defaults
            mode tcp

        listen smtp_proxy 199.83.95.71:25
            mode tcp
            option tcplog
            balance roundrobin   # Load Balancing algorithm
            ## Define your servers to balance
            server r23.lbsmtp.org 74.117.x.x:25 weight 1 maxconn 512 check
            server r15.lbsmtp.org 199.71.x.x:25 weight 1 maxconn 512 check

    And when I start the haproxy service I get this error:

        Starting HAproxy: [ALERT] 244/172148 (7354) : cannot bind socket for proxy smtp_proxy. Aborting.

    Please tell me where I am making a mistake. Help will be appreciated.
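
    That alert usually means the address:port simply can't be bound - most often another MTA is already listening on port 25, or 199.83.95.71 isn't configured on the host. Two quick checks (a sketch):

        netstat -tlnp | grep ':25 '     # is postfix/sendmail/exim already on port 25?
        ip addr | grep 199.83.95.71     # is the address actually assigned to an interface?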

    Read the article

  • New users' directories owned by root

    - by dotancohen
    On a CentOS server running Plesk, new users are added for each new domain. The users' home directories are in /var/www/vhosts/. New users' home directories are owned by root, and need to have an admin with root access come in and chown them:

        dotan@sh2:~$ echo $HOME
        /var/www/vhosts/someDomain.com
        dotan@sh2:~$ pwd
        /var/www/vhosts/someDomain.com
        dotan@sh2:~$ touch testFile
        touch: cannot touch `testFile': Permission denied
        dotan@sh2:~$ ls -la ../ | grep someDomain
        drwxr-xr-x 13 root root 4096 2012-08-07 19:47 someDomain.com
        dotan@sh2:~$ whoami
        dotan
        dotan@sh2:~$ chown dotan /var/www/vhosts/someDomain.com
        chown: changing ownership of `/var/www/vhosts/someDomain.com': Operation not permitted
        dotan@sh2:~$

    Why might the new users' directories be owned by root, and how might we fix this? Thanks.

    Read the article

  • mini-dinstall chmod 0600 changes file: Operation not permitted

    - by V. Reileno
    I'm getting "Operation not permitted" in the mini-dinstall.log everytime a new debian package has been uploaded on the custom debian repository using dput. The deb file is installed successfuly but the changes file remains in the incoming folder. I can not use a post-install script when the changes file can not be processed. How can I fix this problem? Traceback (most recent call last): File "/usr/bin/mini-dinstall", line 780, in install retval = self._install_run_scripts(changefilename, changefile) File "/usr/bin/mini-dinstall", line 826, in _install_run_scripts do_chmod(changefilename, 0600) File "/usr/bin/mini-dinstall", line 193, in do_chmod do_and_log('Changing mode of "%s" to %o' % (name, mode), os.chmod, name, mode) File "/usr/bin/mini-dinstall", line 176, in do_and_log function(*args) OSError: [Errno 1] Operation not permitted: '/srv/debian-repository/mini-dinstall/incoming/debian-repository_1.3_amd64.changes' The mini-dinstall permissions: ls -lad incoming/ drwxrws--- 2 mini-dinstall debian-repository-uploader 4096 Jun 6 11:45 incoming/ ls -la incoming/debian-repository_1.3_amd64.changes -rw-rw---- 1 uploader-user debian-repository-uploader 1322 Jun 6 11:43 incoming/debian-repository_1.3_amd64.changes groups uploader-user uploader-user : uploader-user adm users debian-repository debian-repository-uploader puppet-client-updater groups mini-dinstall mini-dinstall : mini-dinstall debian-repository-uploader Cheers and thanks V.

    Read the article

  • Root certificate authority works on Windows/Linux but not Mac OS X (malformed)

    - by AKwhat
    I have created a self-signed root certificate authority which, if I install it onto Windows, Linux, or into the certificate store in Firefox (Windows/Linux/Mac OS X), works perfectly with my terminating proxy. On Mac OS X I have installed it into the system keychain and set the certificate to always trust. Within the Chrome certificate details it says:

        The certificate that Chrome received during this connection attempt is not formatted
        correctly, so Chrome cannot use it to protect your information.
        Error type: Malformed certificate

    I used this code to create the certificate:

        openssl genrsa -des3 -passout pass:***** -out private/server.key 4096
        openssl req -batch -passin pass:***** -new -x509 -nodes -sha1 -days 3600 -key private/server.key -out server.crt -config ../openssl.cnf

    If the issue is NOT that it is malformed (because it works everywhere else), then what else could it be? Am I installing it incorrectly?

    Update: I tried changing the certificate attributes, but to no avail:

        openssl genrsa -des -passout pass:***** -out private/server.key 2048
        openssl req -batch -passin pass:***** -new -x509 -nodes -sha256 -days 3600 -key private/server.key -out server.crt -config ../openssl.cnf
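
    It can help to compare what the browser actually receives with what the file contains; a sketch (the hostname is a placeholder):

        openssl x509 -in server.crt -noout -text              # version, serial, extensions
        openssl s_client -connect proxy.example.com:443 -showcerts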

    Read the article

  • Ubuntu 12.04 3.2 Kernel showing incorrect "cache size" in cpuinfo?

    - by Tom G
    2x Xeon E5620, 16 cores altogether. /proc/cpuinfo shows a cache size of only 4096 KB, but according to Intel this CPU should have 12 MB of "smart cache". Searches for E5620 cpuinfo output elsewhere show the correct number:

        cache size : 12288 KB

    However, mine shows this:

        processor       : 15
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 44
        model name      : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
        stepping        : 2
        microcode       : 0x1
        cpu MHz         : 2400.104
        cache size      : 4096 KB
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 11
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx
        bogomips        : 4800.20
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 40 bits physical, 48 bits virtual

    Any idea why it shows as if 8 MB of CPU cache is missing?
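
    The single "cache size" line in /proc/cpuinfo hides the per-level breakdown; sysfs reports each cache level separately, which makes it easier to see what the kernel (or a hypervisor, if this is a VM) actually exposes. A sketch:

        for d in /sys/devices/system/cpu/cpu0/cache/index*; do
            echo "$d: level $(cat $d/level), $(cat $d/size)"
        done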

    Read the article

  • FTP Error 550 when trying to access a folder via symbolic link

    - by OrangeTux
    I'm configuring vsftpd on a Linux machine. At the moment local users can log in via FTP and see their home dir listed, with write access to it. Now I want the users to be able to write in the /var/www/ dir. Therefore I created a new group, apache, added the users to the group, and gave the group write access to /var/www. From the terminal, all users can write to /var/www/. I created a link in the home directory to /var/www via:

        ln -s /var/www/ /home/user/www

    ls gives:

        drwxr-xr-x 2 orangetux orangetux 4096 Jun 23 15:06 ftp
        lrwxrwxrwx 1 orangetux orangetux   21 Jun 23 15:00 www -> /var/www/

    But when I use FTP I see the link but I cannot follow it: error 550, which means file not found or bad access. How can I solve this, so that the users have access to /var/www via their home dir?
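
    If vsftpd chroots users into their home directories, a symlink cannot point outside the chroot; a bind mount is the usual way around that (a sketch, run as root):

        mount --bind /var/www /home/user/www
        # to make it survive reboots, an /etc/fstab line:
        # /var/www  /home/user/www  none  bind  0  0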

    Read the article

  • NFS compound failed for server foosrv: error 7 (RPC: Authentication error)

    - by automatthias
    I'm setting up an Ubuntu NFS server with a Solaris 10 client. The basic configuration looks okay to me, and it was also working for some time. I'm getting an "RPC: Authentication error" message on the client.

    On the server, /etc/exports:

        /export/opencsw-future  192.168.3.0/24(rw,nohide,insecure,no_subtree_check,async)
        /export/opencsw-current 192.168.3.0/24(rw,nohide,insecure,no_subtree_check,async)

        $ ls -ld /export/opencsw-current
        drwxr-xr-x 7 maciej maciej 4096 2012-02-05 14:55 /export/opencsw-current

    On the client:

        $ grep opencsw /etc/vfstab
        foosrv:/opencsw-current - /export/opencsw-current nfs - yes -
        $ sudo mount /export/opencsw-current
        NFS compound failed for server foosrv: error 7 (RPC: Authentication error)
        (...repeated...)
        nfs mount: mount: /export/opencsw-current: Permission denied

    My server host name resolves to both IPv4 and IPv6 addresses.
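
    "NFS compound failed" is NFSv4 wording, and Solaris 10 tries v4 first; forcing v3, or mounting the full exported path, are two cheap experiments (a sketch):

        mount -o vers=3 foosrv:/opencsw-current /export/opencsw-current   # force NFSv3
        mount foosrv:/export/opencsw-current /export/opencsw-current      # full path instead of the nohide short form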

    Read the article

  • /usr/bin/mandb: can't search directory

    - by tfe
    Today I got this email from my Debian server:

        test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        /etc/cron.daily/man-db:
        /usr/bin/mandb: can't search directory /usr/local/share/man/man1/: Permission denied

    Can somebody tell me what this means? I didn't change any permissions:

        drw---S--- 2 root staff 4096 Jun 28 14:05 man1

    P.S. The directory /usr/local/share/man/man1 contains one file: csf.1. Yesterday (Jun 28) CSF/LFD was updated automatically. How do I fix this problem?
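
    The listing shows the likely cause: man1 has no execute (search) bits, so mandb cannot descend into it. Restoring Debian's conventional mode for /usr/local subdirectories should clear the cron error (a sketch):

        chmod 2775 /usr/local/share/man/man1   # drwxrwsr-x root:staff, the usual /usr/local layout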

    Read the article

  • I can't write to a folder which I'm a member of

    - by user3265472
    I'm trying to set up folder access for a group so that all members of that group can create/edit/delete files within the folder.

        # create my group and add a member
        sudo addgroup dev
        sudo adduser martyn dev

    Now, logged in as "martyn", I check that my user has been added to the "dev" group:

        $ groups martyn
        martyn : martyn dev

    Now I want to change the group ownership of my project folder so that all members of the group can edit it and the files/folders within it:

        sudo chgrp -R dev myproject

    Just to check:

        martyn@localhost:/var/www$ ls -l
        total 4
        drwxrwxr-x 3 dev dev 4096 May 31 15:53 myproject

    Now here's where it fails. I want to create a file within myproject (logged in as "martyn", a member of "dev"):

        vi myproject/test

    ...but when I try to save the file I get the following error:

        "myproject/test" E212: Can't open file for writing

    Why, as user "martyn", a member of "dev", can I not write this file? Even if I create the file so it exists, change the ownership to "dev", then try to edit and save, I get the same error.
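
    One thing worth checking: group membership is read at login, so a shell that was already open when adduser ran won't have the new group yet (groups martyn reads the database, not the live session). A sketch:

        id           # shows the groups of the *current* session
        newgrp dev   # pick up the new group in this shell, or simply log out and back in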

    Read the article

  • ext4 loopback device, Buffer I/O Error on reboot

    - by cvb
    I am trying to mount a loopback device on my ext4-formatted SSD drive. I get these errors when I reboot on Linux kernel 2.6.38.8:

        Buffer I/O error on device loop1, logical block 0

    Here is what I do:

        dd if=/dev/zero of=/mnt/s/lodev bs=4096 count=250000
        mkfs.ext4 /mnt/s/lodev
        mount -n -o loop,rw /mnt/s/lodev /mnt/test

    The loopback mount is successful, but on reboot I get the errors mentioned above. Even mounting with 'sync' or 'data=writeback' does not help. I tried to losetup a device, but see the same behavior. I also reformatted the base device and created and mounted the loopback device as above, and I still see these errors. I do not see them when I format the image as vfat. I'd appreciate any suggestions on this problem.
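
    A hedged guess: with mount -n the loop mount is never recorded in /etc/mtab, so the shutdown scripts may unmount the underlying filesystem while loop1 is still attached, producing exactly this kind of I/O error at reboot. Detaching explicitly before rebooting would test that theory:

        umount /mnt/test          # flush and release the loop mount first
        losetup -d /dev/loop1     # then detach the loop device
        reboot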

    Read the article

  • group write permission ignored in ubuntu

    - by NorthPole
    It's probably my stupidity here, but I'm stuck on this and would appreciate the help. I want my user to have full access to the local Apache root folder, and I also want Apache to have full access to the same folder. What I did was create a new group called DevGroup and add my username and www-data to it. I also changed the permissions to 770 to allow full group access, but now it won't allow me or Apache any kind of access to the folder. Here is what I get with ls:

        drwxrwx--- 12 root DevGroup 4096 Sep 27 17:34 testFolder

    which seems perfect, but when I try as a user to access the folder I get this:

        /var/www$ ls testFolder/
        ls: cannot open directory testFolder/: Permission denied

    Also, when I try to access a page in the folder from the browser:

        [Thu Sep 27 17:47:16 2012] [error] [client 127.0.0.1] PHP Fatal error: Unknown: Failed opening required '/var/www/testFolder/foo.php' (include_path='.:/usr/share/php:/usr/share/pear') in Unknown on line 0
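
    As in the similar question above, freshly granted group membership only takes effect in new sessions, and a running Apache keeps the groups it started with. Two quick checks (a sketch):

        groups                         # does DevGroup appear in the current shell session?
        sudo service apache2 restart   # make www-data's new membership effective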

    Read the article

  • APC has no system cache entries

    - by lazzio
    I have 2 web servers that serve PHP websites. One server is Apache + PHP-FPM + APC; the other is Apache with MPM-ITK + APC. On both of these servers APC has no system cache entries, only user cache entries, as you can see on the screenshot (APC with only user cache entries). The APC configuration is:

        apc.cache_by_default        1
        apc.canonicalize            1
        apc.coredump_unmap          0
        apc.enable_cli              0
        apc.enabled                 1
        apc.file_md5                0
        apc.file_update_protection  2
        apc.filters
        apc.gc_ttl                  3600
        apc.include_once_override   0
        apc.lazy_classes            0
        apc.lazy_functions          0
        apc.max_file_size           2
        apc.mmap_file_mask
        apc.num_files_hint          1000
        apc.preload_path
        apc.report_autofilter       0
        apc.rfc1867                 0
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.shm_segments            1
        apc.shm_size                256
        apc.stat                    1
        apc.stat_ctime              0
        apc.ttl                     7200
        apc.use_request_time        1
        apc.user_entries_hint       4096
        apc.user_ttl                7200
        apc.write_lock              1

    Does anyone know why APC acts like this and how to make it work well? Thank you for your help!
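
    One line in that configuration stands out: apc.max_file_size is 2, and without a size suffix the value appears to be taken as 2 bytes, which would exclude virtually every script from the opcode cache and leave only user entries - consistent with the symptom. A hedged fix in apc.ini:

        apc.max_file_size = 1M   ; the shipped default; anything larger is never opcode-cached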

    Read the article

  • How to identify who is using Hardware Reserved Memory in Windows 7

    - by blasteralfred
    I run Windows 7 x86 Home Premium with 4 GB of physical memory installed, of which 2.96 GB is usable (My Computer Properties). I checked the memory usage using Resource Monitor and found 3036 MB / 4096 MB available. I noticed that 1060 MB is unavailable because it is reserved by some "hardware component(s)". I would like to know which hardware component is using this 1060 MB. Is there any way or tool to identify it? Note: I know that Windows 7 Home Premium x86 supports a maximum of 4 GB of RAM.

    Read the article

  • How can I tell how many bits my ssh key is?

    - by yairchu
    I already created an SSH key for myself sometime in the past, and I don't remember "how many bits" it is. How can I tell? I'm wondering because I'm using hosting at nearlyfreespeech.net and their FAQ says:

        Can I configure my ssh connection to use a public key? ... we will not install keys
        that have a length less than 1536 bits ... We prefer that you use a key at least
        2048 bits in length, and if you are generating a new key, the recommended length
        is 4096 bits.
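
    ssh-keygen can report this directly; a sketch (the path assumes the default RSA key location):

        ssh-keygen -l -f ~/.ssh/id_rsa.pub   # the first field of the output is the key length in bits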

    Read the article

  • Why is using a swap file over an SMB/NFS-mounted filesystem not possible in Linux?

    - by Avio
    I'd like to use another machine's unused RAM as swap space for my primary Linux installation. I was just curious about the performance of network ramdisks compared to local (slow) mechanical hard disks. The swapfile is on a tmpfs mountpoint and is shared through Samba. However, every time I try to issue:

        swapon /mnt/ramswap/swapfile

    I get:

        swapon: /mnt/ramswap/swapfile: swapon failed: Invalid argument

    and in dmesg I read:

        [ 9569.806483] swapon: swapfile has holes

    I've tried to allocate the swapfile with dd if=/dev/zero of=swapfile bs=1024 (but also bs=4096 and bs=1048576) and with truncate -s 2G, both followed by mkswap swapfile, but the result is always the same. In this post (dated back to 2002) someone says that using a swapfile over NFS/SMB is not possible in Linux. Is that statement still valid? And if so, what is the reason for this limitation, and is there any workaround to get it working?
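
    For background: swapon has to map the file's blocks on a local block device, which CIFS/NFS cannot provide (and truncate -s produces a genuinely sparse file). The workaround usually cited is to interpose a loop device - a sketch, with the caveat that swapping over the network risks deadlock under memory pressure:

        losetup /dev/loop0 /mnt/ramswap/swapfile   # wrap the remote file in a block device
        mkswap /dev/loop0
        swapon /dev/loop0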

    Read the article

  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine, and after a few seconds it is getting killed. The command is:

        httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1

    I have done this test a number of times before and never faced this error until the last two days. My Ubuntu version is 10.04 and the httperf version is 0.9.0. dmesg shows:

        [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
        [ 2997.180632] Killed process 7977 (apache2)
        [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
        [ 2997.184844] Killed process 7971 (rsyslogd)
        [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
        [ 2997.188829] Killed process 7978 (apache2)
        [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
        [ 2997.192822] Killed process 7973 (atd)
        [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
        [ 2997.196811] Killed process 8102 (httperf)

    Output of the free command:

                     total       used       free     shared    buffers     cached
        Mem:       3862768     163000    3699768          0       2384      13068
        -/+ buffers/cache:     147548    3715220
        Swap:      3905528          0    3905528
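
    Since the free output above was evidently captured after the kills, it may help to record memory continuously during the run; a sketch:

        vmstat 1 | tee /tmp/vmstat.log   # watch free memory collapse in real time while httperf runs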

    Read the article

  • Can Remote Desktop connect only 2 monitors when the client has 3 or 4 monitors connected?

    - by user12931
    I have three monitors (all 1920x1200) connected to my desktop, and I would like to RDP into another computer (running Windows Server 2003) and use only two of my three monitors. The main monitor is used for notifications from the LAN, like email, build status, etc.

    I've tried modifying the .rdp file to have a higher resolution (i.e. 3840x1200), which sort of works, but the bits per pixel goes down to 8 and you have to manually resize the window, which means you need to change it to around 3840x1150. If I supply /span on the command line, it gives me a bit more than 3840 spanned across the three monitors (is there perhaps a limitation of 4096 for the maximum resolution?).

    WORKAROUND: Connect twice to the same computer.

    Here's an interesting post from Microsoft: http://blogs.msdn.com/b/rds/archive/2009/08/21/remote-desktop-connection-7-for-windows-7-windows-xp-windows-vista.aspx

    Read the article

  • I have 4 GB RAM, but the system only reports 3 GB? [closed]

    - by RawR Crew
    Possible Duplicate: Windows 7 64-bit is only using 3 GB out of 4 GB

    My HP HDX 16 laptop has 4 GB of RAM and runs a 64-bit system. I often check dxdiag, and it has always said 4096 MB of RAM. But today I felt the PC was misbehaving, so I opened it again and it said 3128 MB of RAM. Where did that 1 GB go? When I check the computer properties, it says I'm using a 64-bit system with 4.00 GB of RAM, so why does everything else say it has 3?

    Read the article

  • How to calculate proper inode and block sizes for a Linux filesystem

    - by Donatello
    I have an old ReiserFS filesystem which I'm going to convert to ext3. The problem I have is determining the proper block and inode sizes for this partition. The partition is 44 GB and has to hold 3,000,000+ files of sizes between 1 kB and 10 kB. How can I figure out the best ratio of inodes to block size? The following is something I tried, which seems OK but makes copying files incredibly slow:

        mkfs.ext3 -t ext3 -c -c -b 1024 -i 4096 -I 128 -v -j -O sparse_super,filetype,has_journal /dev/sdb1

    Thanks.
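
    A rough sizing sketch (assumptions: ~3M files averaging ~5 kB, same device as above): 44 GB across 3,000,000 files leaves about 15 kB of disk per file, so one inode per 8 kB (-i 8192) still yields roughly 5.7 million inodes, nearly twice the expected file count. Note also that -c -c runs a slow read-write badblocks scan at format time, and 1 kB blocks add metadata overhead for 10 kB files:

        mkfs.ext3 -b 2048 -i 8192 -I 128 -j /dev/sdb1   # sketch only: ~2x inode headroom, less tail waste than 4 kB blocks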

    Read the article

  • Would SSD drives benefit from a non-default allocation unit size?

    - by davebug
    The default allocation unit size recommended when formatting a drive in our current setup is 4096 bytes. I understand the basic pros and cons of larger and smaller sizes (performance boost vs. space preservation), but it seems the benefits of a solid state drive (seek times massively lower than hard disks) may create a situation where a much smaller allocation size is not detrimental. Were this the case, it would at least partially help to overcome the disadvantage of SSDs (massively higher prices per GB).

    Is there a way to determine the 'cost' of smaller allocation sizes specifically related to seek times? Or are there any studies or articles recommending a change from the default based on this newer tech? (Assume the most average scattering of file sizes: program files, OS files, data, mp3s, text files, etc.)
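
    For reference, the current cluster size of an NTFS volume can be read directly, which makes before/after benchmarking straightforward (a sketch, on Windows):

        fsutil fsinfo ntfsinfo C:   # "Bytes Per Cluster" is the allocation unit size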

    Read the article
