Search Results

Search found 22986 results on 920 pages for 'allocation unit size'.

  • Getting an error when mounting LVM snapshot

    - by Sandra
    I have migrated a file-based Xen guest to LVM using dd bs=1M if=/dev/zero of=/dev/vg00/vm10 followed by qemu-img convert ~/vm10.qcow2 -O raw /dev/vg00/vm10, and changed the Xen domain file for the VM to use the LV instead of the old file. The VM boots up, and now, on the Xen host, I would like to make a snapshot of the running VM. # lvcreate --size 10G --snapshot --name vm10-snapshot /dev/vg00/vm10 Logical volume "vm10-snapshot" created # mount /dev/vg00/vm10-snapshot /mnt/snapshot/ mount: you must specify the filesystem type # dmesg |tail EXT3 FS on dm-3, internal journal EXT3-fs: mounted filesystem with ordered data mode. hfs: unable to find HFS+ superblock VFS: Can't find ext3 filesystem on dev dm-4. hfs: unable to find HFS+ superblock hfs: unable to find HFS+ superblock VFS: Can't find ext3 filesystem on dev dm-2. hfs: unable to find HFS+ superblock hfs: unable to find HFS+ superblock hfs: unable to find HFS+ superblock For some reason it can't see that it is an EXT3 filesystem. I have also tried to mount with -t ext3, but it still didn't mount. # lvdisplay --- Logical volume --- LV Name /dev/vg00/vm10 VG Name vg00 LV UUID I1y1vQ-Bac5-5jwW-melh-TY5h-l9NO-qaelKk LV Write Access read/write LV snapshot status source of /dev/vg00/vm10-snapshot [active] LV Status available # open 2 LV Size 8.00 GB Current LE 2048 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:2 --- Logical volume --- LV Name /dev/vg00/vm10-snapshot VG Name vg00 LV UUID GWsOx3-TPpr-GW64-uiMz-u1YN-QU4h-l0Kala LV Write Access read/write LV snapshot status active destination for /dev/vg00/vm10 LV Status available # open 0 LV Size 8.00 GB Current LE 2048 COW-table size 10.00 GB COW-table LE 2560 Allocated to snapshot 0.00% Snapshot chunk size 4.00 KB Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:4 # What could the problem be?
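
    A likely explanation, for reference: because the guest was copied in from a whole-disk image, the LV (and therefore the snapshot) starts with a partition table, and the ext3 filesystem begins inside a partition rather than at offset 0, so mount finds no superblock at the start of the device. A hedged sketch of mounting the partition inside the snapshot (mapper name is a guess; kpartx assumed installed):

        # Map the partitions contained in the snapshot LV
        kpartx -av /dev/vg00/vm10-snapshot
        # kpartx typically creates /dev/mapper/vm10--snapshot1 for partition 1
        mount /dev/mapper/vm10--snapshot1 /mnt/snapshot/

        # Alternative without kpartx, if the partition starts at the usual
        # sector 63 (63 * 512 = 32256 bytes):
        mount -o loop,offset=32256 /dev/vg00/vm10-snapshot /mnt/snapshot/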

  • SQL SERVER – FIX: ERROR Msg 5169, Level 16: FILEGROWTH cannot be greater than MAXSIZE for file

    - by pinaldave
    I am writing this blog post right after resolving this error on one of the systems. Recently one of my friends, an expert in infrastructure as well as private cloud, was working on a SQL Server installation. Please note he is seriously expert at what he does, but he had never worked with SQL Server before and had absolutely no experience with its installation. He was modifying a database file and kept getting the following error. As soon as he saw me he asked me where the max file size setting was so he could change it. Let us quickly re-create the scenario he was facing. Error Message: Msg 5169, Level 16, State 1, Line 1 FILEGROWTH cannot be greater than MAXSIZE for file 'NewDB'. Creating the Scenario: CREATE DATABASE [NewDB] ON PRIMARY (NAME = N'NewDB', FILENAME = N'D:\NewDB.mdf' , SIZE = 4096KB, FILEGROWTH = 1024KB, MAXSIZE = 4096KB) LOG ON (NAME = N'NewDB_log', FILENAME = N'D:\NewDB_log.ldf', SIZE = 1024KB, FILEGROWTH = 10%) GO Now let us see which exact command was producing the error for him. USE [master] GO ALTER DATABASE [NewDB] MODIFY FILE ( NAME = N'NewDB', FILEGROWTH = 1024MB ) GO Workaround / Fix / Solution: The reason for the error is very simple. He was trying to set the filegrowth to a much higher value than the maximum file size specified for the database. There are two ways we can fix it. Method 1: Reduce the filegrowth to a value lower than the maxsize of the file. USE [master] GO ALTER DATABASE [NewDB] MODIFY FILE ( NAME = N'NewDB', FILEGROWTH = 1024KB ) GO Method 2: Increase the maxsize of the file so it is greater than the new filegrowth. USE [master] GO ALTER DATABASE [NewDB] MODIFY FILE ( NAME = N'NewDB', FILEGROWTH = 1024MB, MAXSIZE = 4096MB) GO I think this blog post will help everybody who is facing similar issues. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
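
    Before altering the file, it can help to confirm the current limits; a small hedged sketch (per Books Online, size and max_size in sys.master_files are expressed in 8 KB pages, and max_size = -1 means unlimited growth):

        -- Inspect current size, growth and cap for the database's files
        SELECT name, size, growth, is_percent_growth, max_size
        FROM sys.master_files
        WHERE database_id = DB_ID('NewDB');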

  • gunicorn + django + nginx unix://socket failed (11: Resource temporarily unavailable)

    - by user1068118
    We are running very high-volume traffic on these servers, configured with Django, gunicorn, supervisord and nginx, but a lot of the time I see 502 errors. So I checked the nginx logs to see what the error was, and this is what is recorded: [error] 2388#0: *208027 connect() to unix:/tmp/gunicorn-ourapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream Can anyone help debug what might be causing this to happen? This is our nginx configuration: sendfile on; tcp_nopush on; tcp_nodelay off; listen 80 default_server; server_name imp.ourapp.com; access_log /mnt/ebs/nginx-log/ourapp-access.log; error_log /mnt/ebs/nginx-log/ourapp-error.log; charset utf-8; keepalive_timeout 60; client_max_body_size 8m; gzip_types text/plain text/xml text/css application/javascript application/x-javascript application/json; location / { proxy_pass http://unix:/tmp/gunicorn-ourapp.socket; proxy_pass_request_headers on; proxy_read_timeout 600s; proxy_connect_timeout 600s; proxy_redirect http://localhost/ http://imp.ourapp.com/; #proxy_set_header Host $host; #proxy_set_header X-Real-IP $remote_addr; #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_set_header X-Forwarded-Proto $my_scheme; #proxy_set_header X-Forwarded-Ssl $my_ssl; } We have configured Django to run in Gunicorn as a generic WSGI application. Supervisord is used to launch the gunicorn workers: home/user/virtenv/bin/python2.7 /home/user/virtenv/bin/gunicorn --config /home/user/shared/etc/gunicorn.conf.py daggr.wsgi:application This is what the gunicorn.conf.py looks like: import multiprocessing bind = 'unix:/tmp/gunicorn-ourapp.socket' workers = multiprocessing.cpu_count() * 3 + 1 timeout = 600 graceful_timeout = 40 Does anyone know where I can start digging to see what might be causing the problem? This is what my ulimit -a output looks like on the server: core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 59481 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 50000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 1024 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
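
    One hedged avenue to dig into: errno 11 (EAGAIN) on a unix-socket upstream usually means the socket's accept queue overflowed because every gunicorn worker was busy, so nginx could not hand off the connection. A sketch of raising the queue on both sides (values illustrative, not tuned for this workload):

        # /etc/sysctl.conf -- kernel cap on any listen backlog (default often 128)
        net.core.somaxconn = 4096

        # gunicorn.conf.py -- request a matching backlog on the listening socket
        backlog = 4096

    The ulimit output above also shows max user processes capped at 1024, which can limit how many worker processes and threads the gunicorn user may spawn under load.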

  • What causes "A disk read error occurred, Press Ctrl + Alt + Del to restart"?

    - by Mehrdad
    I have a virtual machine containing Windows XP SP3. When I resized the VHD file (and the embedded partition) and tried booting, I got: A disk read error occurred Press Ctrl + Alt + Del to restart Some notes: FixBoot and FixMBR don't help. ChkDsk doesn't help. The partition is indeed active. The partition starts at sector 63 (it also did so before the problem) of cylinder 1, head 1, and is marked as type 0x07 (NTFS). My host OS reads the VHD and the partition completely fine. I'm interested in knowing the cause rather than the fix. So "re-format the disk", "reinstall Windows", etc. aren't valid solutions. It's a virtual machine after all... I have nothing to lose, so I don't care about fixing it. I just want to know what's causing this problem, in case I run into it again on a physical machine (which I have done before). More info: The layout of the original, dynamic VHD (which works correctly):

        +-----------------------------------------------------------------------------+
        | Disk: 3                  MBR/GPT: MBR                                       |
        | Size: 127.00GB           CHS: 16578 255 63                                  |
        | Sectors: 266338304       Disk Signature: 0xEE3EEE3E                         |
        | Partitions: 1            Partition Order: 1                                 |
        | Media Type: Fixed        Interface: SCSI                                    |
        | Description: Msft Virtual Disk                                              |
        +-----------------------------------------------------------------------------+
        | Pos Idx Type/Name Size Boot Hide   Start Sector  Total Sectors DL Vol Label |
        +---- --- --------- ---- ---- ---- -------------- -------------- -- ----------+
        |  1   1  07-NTFS   1.5G Yes  No               63      3,148,677 F: <None>    |
        +-----------------------------------------------------------------------------+

    The layout of the resized, fixed-size VHD (which doesn't work):

        +-----------------------------------------------------------------------------+
        | Disk: 3                  MBR/GPT: MBR                                       |
        | Size: 1.50GB             CHS: 196 255 63                                    |
        | Sectors: 3149824         Disk Signature: 0xEE3EEE3E                         |
        | Partitions: 1            Partition Order: 1                                 |
        | Media Type: Fixed        Interface: SCSI                                    |
        | Description: Msft Virtual Disk                                              |
        +-----------------------------------------------------------------------------+
        | Pos Idx Type/Name Size Boot Hide   Start Sector  Total Sectors DL Vol Label |
        +---- --- --------- ---- ---- ---- -------------- -------------- -- ----------+
        |  1   1  07-NTFS   1.5G Yes  No               63      3,148,677 F: <None>    |
        +-----------------------------------------------------------------------------+
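
    One classic cause of this exact message, offered as a hypothesis rather than a confirmed diagnosis: the boot code reads geometry and "hidden sectors" values out of the NTFS boot sector's BIOS Parameter Block, and those values can stop matching the disk after a resize or dynamic-to-fixed conversion even when the partition data itself is intact. A hedged way to inspect them (device name illustrative):

        # Dump the partition boot sector (the partition starts at sector 63)
        dd if=/dev/sdX of=bootsect.bin bs=512 skip=63 count=1
        # BPB fields: 0x18 sectors/track (2 bytes), 0x1A heads (2 bytes),
        # 0x1C hidden sectors (4 bytes; should equal the start sector, 63)
        xxd -s 0x18 -l 8 bootsect.bin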

  • java slick2D - problem using ScalableGame class

    - by nellykvist
    I have a problem adjusting the size of the screen using the ScalableGame class from the Slick2D library. What I want to achieve: whenever I change the display size, the background should adjust to the screen size, and objects (images, graphic shapes) should fit (scale). Alright, so this is how the state looks by default. I can change the screen size, but images and graphic shapes do not scale: appGameContainer = new AppGameContainer(     new ScalableGame(new AppStateController(), Settings.video.getWidth(), Settings.video.getHeight(), true) ); appGameContainer.setDisplayMode(Settings.video.getWidth(), Settings.video.getHeight(), Settings.video.isFullScreen()); appGameContainer.start(); If I add 100 to the width/height in the ScalableGame constructor: appGameContainer = new AppGameContainer(     new ScalableGame(new AppStateController(), Settings.video.getWidth() + 100, Settings.video.getHeight() + 100, true) ); appGameContainer.setDisplayMode(Settings.video.getWidth(), Settings.video.getHeight(), Settings.video.isFullScreen()); appGameContainer.start(); If I add 100 to the width/height of the display mode: appGameContainer = new AppGameContainer(     new ScalableGame(new AppStateController(), Settings.video.getWidth(), Settings.video.getHeight(), true) ); appGameContainer.setDisplayMode(Settings.video.getWidth() + 100, Settings.video.getHeight() + 100, Settings.video.isFullScreen()); appGameContainer.start();
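
    A hedged reading of the API, based on how ScalableGame wraps a game: the width/height passed to the ScalableGame constructor are the fixed "native" resolution the art was designed for, while setDisplayMode sets the actual window size; scaling only happens when the two differ. A sketch (design size illustrative):

        // Design resolution stays constant; ScalableGame scales rendering
        // from it to whatever display mode is actually set.
        int designW = 800, designH = 600;   // what the assets were made for
        appGameContainer = new AppGameContainer(
            new ScalableGame(new AppStateController(), designW, designH, true));
        appGameContainer.setDisplayMode(1024, 768, false);  // real window size
        appGameContainer.start();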

  • Django + gunicorn + virtualenv + Supervisord issue

    - by Florian Le Goff
    Dear all, I have a strange issue with my virtualenv + gunicorn setup, only when gunicorn is launched via supervisord. I do realize that it may very well be an issue with my supervisord and I would appreciate any feedback on a better place to ask for help... In a nutshell : when I run gunicorn from my user shell, inside my virtualenv, everything is working flawlessly. I'm able to access all the views of my Django project. When gunicorn is launched by supervisord at the system startup, everything is OK. But, if I have to kill the gunicorn_django processes, or if I perform a supervisord restart, once that gunicorn_django has relaunched, every request is answered with a weird Traceback : (...) File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/__init__.py", line 77, in connection = connections[DEFAULT_DB_ALIAS] File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 92, in __getitem__ backend = load_backend(db['ENGINE']) File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 50, in load_backend raise ImproperlyConfigured(error_msg) TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2' isn't an available database backend. Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' Error was: cannot import name utils Full stack available here : http://pastebin.com/BJ5tNQ2N I'm running... Ubuntu/maverick (up-to-date) Python = 2.6.6 virtualenv = 1.5.1 gunicorn = 0.12.0 Django = 1.2.5 psycopg2 = '2.4-beta2 (dt dec pq3 ext)' gunicorn configuration : backlog = 2048 bind = "127.0.0.1:8000" pidfile = "/tmp/gunicorn-hc.pid" daemon = True debug = True workers = 3 logfile = "/home/hc/prod/log/gunicorn.log" loglevel = "info" supervisord configuration : [program:gunicorn] directory=/home/hc/prod/hc command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py user=hc umask=022 autostart=True autorestart=True redirect_stderr=True Any advice ? I've been stuck on this one for quite a while. It seems like some weird memory limit, as I'm not enforcing anything special : $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 20 file size (blocks, -f) unlimited pending signals (-i) 16382 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) unlimited virtual memory (kbytes, -v) unlimited file locks (-x) unlimited Thank you.
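
    One interaction worth ruling out (a common supervisord pitfall, not a confirmed diagnosis): supervisord expects the program it manages to stay in the foreground, while this gunicorn config sets daemon = True. The daemonized process detaches from supervisord, so a supervisord restart can leave stale workers running alongside freshly spawned ones, which can produce exactly this kind of half-broken import state. A minimal change to test:

        # gunicorn.conf.py -- do not self-daemonize when run under supervisord;
        # supervisord must own the foreground process to manage it correctly
        daemon = False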

  • Where has the memory gone? (no, not buffers or cache)

    - by Marki
    can anyone tell me where the memory is gone: (no, this time neither buffers nor cache) # free total used free shared buffers cached Mem: 3928200 3868560 59640 0 2888 92924 -/+ buffers/cache: 3772748 155452 Swap: 4192956 226352 3966604 top, sorted by memory, descending: top - 13:42:06 up 1 day, 3:47, 2 users, load average: 0.08, 0.12, 0.36 Tasks: 228 total, 1 running, 227 sleeping, 0 stopped, 0 zombie Cpu0 : 2.0%us, 4.0%sy, 0.0%ni, 90.1%id, 0.0%wa, 0.0%hi, 4.0%si, 0.0%st Cpu1 : 0.0%us, 0.0%sy, 0.0%ni, 0.0%id,100.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3928200k total, 3868020k used, 60180k free, 2896k buffers Swap: 4192956k total, 226048k used, 3966908k free, 82068k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3863 root 20 0 902m 199m 3296 S 7 5.2 99:08.77 ndsd 21906 root 20 0 138m 9076 2988 S 0 0.2 0:00.02 sfcbd 2332 root 20 0 126m 4660 1332 S 0 0.1 0:17.72 mono 4243 wwwrun 20 0 683m 4468 668 S 0 0.1 0:07.38 java 2994 root 20 0 202m 2288 1660 S 0 0.1 6:10.02 httpstkd 4338 root 20 0 184m 2240 1112 S 0 0.1 0:00.52 namcd 21898 root 20 0 32368 1832 1256 R 1 0.0 0:00.08 top In fact, some time ago oom kicked in and crashed the system (kernel panic), and I'm afraid we're again not far from that point.... UPDATE # cat /proc/meminfo MemTotal: 3928200 kB MemFree: 51336 kB Buffers: 2964 kB Cached: 72876 kB SwapCached: 29128 kB Active: 233440 kB Inactive: 88040 kB Active(anon): 188920 kB Inactive(anon): 56752 kB Active(file): 44520 kB Inactive(file): 31288 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 4192956 kB SwapFree: 3966824 kB Dirty: 32 kB Writeback: 0 kB AnonPages: 225112 kB Mapped: 11356 kB Shmem: 32 kB Slab: 1624080 kB SReclaimable: 13740 kB SUnreclaim: 1610340 kB KernelStack: 4176 kB PageTables: 10500 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 6157056 kB Committed_AS: 2397684 kB VmallocTotal: 34359738367 kB VmallocUsed: 441372 kB VmallocChunk: 34359246755 kB HardwareCorrupted: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 10240 kB DirectMap2M: 4184064 kB slabtop Active / Total Objects (% used) : 9041019 / 9207548 (98.2%) Active / Total Slabs (% used) : 401132 / 401156 (100.0%) Active / Total Caches (% used) : 91 / 159 (57.2%) Active / Total Size (% used) : 1491537.88K / 1519791.56K (98.1%) Minimum / Average / Maximum Object : 0.02K / 0.17K / 4096.00K OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME 4240470 4240319 99% 0.12K 141349 30 565396K pid 2245140 2219675 98% 0.25K 149676 15 598704K size-256 2238090 2210087 98% 0.12K 74603 30 298412K size-128 ...
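
    For what it's worth, the pasted /proc/meminfo already points at the answer: Slab is about 1.6 GB and almost all of it is SUnreclaim, so the memory has gone to unreclaimable kernel slab caches (note the 4.2 million objects in the pid cache) rather than to any process. A hedged way to keep an eye on it:

        # Kernel-side accounting; SUnreclaim is memory the kernel cannot free
        grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
        # One-shot snapshot of the largest caches, sorted by cache size
        slabtop -o -s c | head -n 15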

  • Installing nGinX Reverse Proxy on CentOS 5

    - by heavymark
    I'm trying to install nGinX as a reverse proxy on CentOS 5 with apache. The instructions to do this are here: http://wiki.mediatemple.net/w/(dv):Configure_nginx_as_reverse_proxy_web_server Note- in the instructions, for the url to get nginx I'm using the following: http://nginx.org/download/nginx-1.0.10.tar.gz Now here is my problem. After installing the required packages and running .configure I get the following: checking for OS + Linux 2.6.18-028stab094.3 x86_64 checking for C compiler ... found + using GNU C compiler + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-51) checking for gcc -pipe switch ... found checking for gcc builtin atomic operations ... found checking for C99 variadic macros ... found checking for gcc variadic macros ... found checking for unistd.h ... found checking for inttypes.h ... found checking for limits.h ... found checking for sys/filio.h ... not found checking for sys/param.h ... found checking for sys/mount.h ... found checking for sys/statvfs.h ... found checking for crypt.h ... found checking for Linux specific features checking for epoll ... found checking for sendfile() ... found checking for sendfile64() ... found checking for sys/prctl.h ... found checking for prctl(PR_SET_DUMPABLE) ... found checking for sched_setaffinity() ... found checking for crypt_r() ... found checking for sys/vfs.h ... found checking for nobody group ... found checking for poll() ... found checking for /dev/poll ... not found checking for kqueue ... not found checking for crypt() ... not found checking for crypt() in libcrypt ... found checking for F_READAHEAD ... not found checking for posix_fadvise() ... found checking for O_DIRECT ... found checking for F_NOCACHE ... not found checking for directio() ... not found checking for statfs() ... found checking for statvfs() ... found checking for dlopen() ... not found checking for dlopen() in libdl ... found checking for sched_yield() ... found checking for SO_SETFIB ... not found checking for SO_ACCEPTFILTER ... not found checking for TCP_DEFER_ACCEPT ... found checking for accept4() ... not found checking for int size ... 4 bytes checking for long size ... 8 bytes checking for long long size ... 8 bytes checking for void * size ... 8 bytes checking for uint64_t ... found checking for sig_atomic_t ... found checking for sig_atomic_t size ... 4 bytes checking for socklen_t ... found checking for in_addr_t ... found checking for in_port_t ... found checking for rlim_t ... found checking for uintptr_t ... uintptr_t found checking for system endianess ... little endianess checking for size_t size ... 8 bytes checking for off_t size ... 8 bytes checking for time_t size ... 8 bytes checking for setproctitle() ... not found checking for pread() ... found checking for pwrite() ... found checking for sys_nerr ... found checking for localtime_r() ... found checking for posix_memalign() ... found checking for memalign() ... found checking for mmap(MAP_ANON|MAP_SHARED) ... found checking for mmap("/dev/zero", MAP_SHARED) ... found checking for System V shared memory ... found checking for POSIX semaphores ... not found checking for POSIX semaphores in libpthread ... found checking for struct msghdr.msg_control ... found checking for ioctl(FIONBIO) ... found checking for struct tm.tm_gmtoff ... found checking for struct dirent.d_namlen ... not found checking for struct dirent.d_type ... found checking for PCRE library ... found checking for system md library ... not found checking for system md5 library ... 
not found checking for OpenSSL md5 crypto library ... found checking for sha1 in system md library ... not found checking for OpenSSL sha1 crypto library ... found checking for zlib library ... found creating objs/Makefile Configuration summary + using system PCRE library + OpenSSL library is not used + md5: using system crypto library + sha1: using system crypto library + using system zlib library nginx path prefix: "/usr/local/nginx" nginx binary file: "/usr/local/nginx/sbin/nginx" nginx configuration prefix: "/usr/local/nginx/conf" nginx configuration file: "/usr/local/nginx/conf/nginx.conf" nginx pid file: "/usr/local/nginx/logs/nginx.pid" nginx error log file: "/usr/local/nginx/logs/error.log" nginx http access log file: "/usr/local/nginx/logs/access.log" nginx http client request body temporary files: "client_body_temp" nginx http proxy temporary files: "proxy_temp" nginx http fastcgi temporary files: "fastcgi_temp" nginx http uwsgi temporary files: "uwsgi_temp" nginx http scgi temporary files: "scgi_temp" It says if you get errors to stop and make sure packages are installed. I didn't get errors but as you can see I got several "not founds". Are those considered errors? If so how do I resolve that. And as noted in the link, I cannot install through yum, because it wont work with plesk then. Thanks!
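
    Most of those "not found" lines are harmless: configure probes for optional, platform-specific features (kqueue and /dev/poll, for instance, are BSD and Solaris mechanisms), and a genuinely missing required library aborts configure instead of printing the summary. If SSL support is wanted, a hedged sketch of pulling in the development headers on CentOS 5 and reconfiguring (package names assumed from the stock repos):

        yum install -y pcre-devel zlib-devel openssl-devel
        ./configure --with-http_ssl_module
        make && make install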

  • Improving TCP performance over a gigabit network with lots of connections and high traffic of small packets

    - by MinimeDJ
    I’m trying to improve my TCP throughput over a “gigabit network with lots of connections and high traffic of small packets”. My server OS is Ubuntu 11.10 Server 64bit. There are about 50.000 (and growing) clients connected to my server through TCP Sockets (all on the same port). 95% of of my packets have size of 1-150 bytes (TCP header and payload). The rest 5% vary from 150 up to 4096+ bytes. With the config below my server can handle traffic up to 30 Mbps (full duplex). Can you please advice best practice to tune OS for my needs? My /etc/sysctl.cong looks like this: kernel.pid_max = 1000000 net.ipv4.ip_local_port_range = 2500 65000 fs.file-max = 1000000 # net.core.netdev_max_backlog=3000 net.ipv4.tcp_sack=0 # net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.core.somaxconn = 2048 # net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 # net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syncookies = 1 net.ipv4.tcp_mem = 50576 64768 98152 # net.core.wmem_default = 65536 net.core.rmem_default = 65536 net.ipv4.tcp_window_scaling=1 # net.ipv4.tcp_mem= 98304 131072 196608 # net.ipv4.tcp_timestamps = 0 net.ipv4.tcp_rfc1337 = 1 net.ipv4.ip_forward = 0 net.ipv4.tcp_congestion_control=cubic net.ipv4.tcp_tw_recycle = 0 net.ipv4.tcp_tw_reuse = 0 # net.ipv4.tcp_orphan_retries = 1 net.ipv4.tcp_fin_timeout = 25 net.ipv4.tcp_max_orphans = 8192 Here are my limits: $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 193045 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1000000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 1000000 [ADDED] My NICs are the following: $ dmesg | grep Broad [ 2.473081] Broadcom NetXtreme II 5771x 10Gigabit Ethernet Driver bnx2x 1.62.12-0 (2011/03/20) [ 2.477808] bnx2x 0000:02:00.0: eth0: Broadcom NetXtreme II BCM57711E XGb (A0) PCI-E x4 5GHz (Gen2) found at mem fb000000, IRQ 28, node addr d8:d3:85:bd:23:08 [ 2.482556] bnx2x 0000:02:00.1: eth1: Broadcom NetXtreme II BCM57711E XGb (A0) PCI-E x4 5GHz (Gen2) found at mem fa000000, IRQ 40, node addr d8:d3:85:bd:23:0c [ADDED 2] ethtool -k eth0 Offload parameters for eth0: rx-checksumming: on tx-checksumming: on scatter-gather: on tcp-segmentation-offload: on udp-fragmentation-offload: off generic-segmentation-offload: on generic-receive-offload: on large-receive-offload: on rx-vlan-offload: on tx-vlan-offload: on ntuple-filters: off receive-hashing: off [ADDED 3] sudo ethtool -S eth0|grep -vw 0 NIC statistics: [1]: rx_bytes: 17521104292 [1]: rx_ucast_packets: 118326392 [1]: tx_bytes: 35351475694 [1]: tx_ucast_packets: 191723897 [2]: rx_bytes: 16569945203 [2]: rx_ucast_packets: 114055437 [2]: tx_bytes: 36748975961 [2]: tx_ucast_packets: 194800859 [3]: rx_bytes: 16222309010 [3]: rx_ucast_packets: 109397802 [3]: tx_bytes: 36034786682 [3]: tx_ucast_packets: 198238209 [4]: rx_bytes: 14884911384 [4]: rx_ucast_packets: 104081414 [4]: rx_discards: 5828 [4]: rx_csum_offload_errors: 1 [4]: tx_bytes: 35663361789 [4]: tx_ucast_packets: 194024824 [5]: rx_bytes: 16465075461 [5]: rx_ucast_packets: 110637200 [5]: tx_bytes: 43720432434 [5]: tx_ucast_packets: 202041894 [6]: rx_bytes: 16788706505 [6]: rx_ucast_packets: 113123182 [6]: tx_bytes: 38443961940 [6]: tx_ucast_packets: 202415075 [7]: rx_bytes: 16287423304 [7]: rx_ucast_packets: 
110369475 [7]: rx_csum_offload_errors: 1 [7]: tx_bytes: 35104168638 [7]: tx_ucast_packets: 184905201 [8]: rx_bytes: 12689721791 [8]: rx_ucast_packets: 87616037 [8]: rx_discards: 2638 [8]: tx_bytes: 36133395431 [8]: tx_ucast_packets: 196547264 [9]: rx_bytes: 15007548011 [9]: rx_ucast_packets: 98183525 [9]: rx_csum_offload_errors: 1 [9]: tx_bytes: 34871314517 [9]: tx_ucast_packets: 188532637 [9]: tx_mcast_packets: 12 [10]: rx_bytes: 12112044826 [10]: rx_ucast_packets: 84335465 [10]: rx_discards: 2494 [10]: tx_bytes: 36562151913 [10]: tx_ucast_packets: 195658548 [11]: rx_bytes: 12873153712 [11]: rx_ucast_packets: 89305791 [11]: rx_discards: 2990 [11]: tx_bytes: 36348541675 [11]: tx_ucast_packets: 194155226 [12]: rx_bytes: 12768100958 [12]: rx_ucast_packets: 89350917 [12]: rx_discards: 2667 [12]: tx_bytes: 35730240389 [12]: tx_ucast_packets: 192254480 [13]: rx_bytes: 14533227468 [13]: rx_ucast_packets: 98139795 [13]: tx_bytes: 35954232494 [13]: tx_ucast_packets: 194573612 [13]: tx_bcast_packets: 2 [14]: rx_bytes: 13258647069 [14]: rx_ucast_packets: 92856762 [14]: rx_discards: 3509 [14]: rx_csum_offload_errors: 1 [14]: tx_bytes: 35663586641 [14]: tx_ucast_packets: 189661305 rx_bytes: 226125043936 rx_ucast_packets: 1536428109 rx_bcast_packets: 351 rx_discards: 20126 rx_filtered_packets: 8694 rx_csum_offload_errors: 11 tx_bytes: 548442367057 tx_ucast_packets: 2915571846 tx_mcast_packets: 12 tx_bcast_packets: 2 tx_64_byte_packets: 35417154 tx_65_to_127_byte_packets: 2006984660 tx_128_to_255_byte_packets: 373733514 tx_256_to_511_byte_packets: 378121090 tx_512_to_1023_byte_packets: 77643490 tx_1024_to_1522_byte_packets: 43669214 tx_pause_frames: 228 Some info about SACK: When to turn TCP SACK off?
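
    Before more sysctl tuning, it may be worth confirming whether connections are actually being dropped at the accept queue; a hedged first check:

        # Cumulative counters for overflowed/dropped listen queues
        netstat -s | grep -i -E 'listen|overflow'
        # Per-socket view: on LISTEN sockets, Recv-Q is the current accept
        # queue depth and Send-Q the configured backlog
        ss -lnt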

  • “Disk /dev/xvda1 doesn't contain a valid partition table”

    - by Simpanoz
    I am a newbie to EC2 and Ubuntu 11 (EC2 free tier Ubuntu). I have run the following commands: sudo mkfs -t ext4 /dev/xvdf6 sudo mkdir /db sudo vim /etc/fstab /dev/xvdf6 /db ext4 noatime,noexec,nodiratime 0 0 sudo mount /dev/xvdf6 /db fdisk -l I got the following output. Can someone tell me what I am doing wrong and how it can be rectified? Disk /dev/xvda1: 8589 MB, 8589934592 bytes 255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/xvda1 doesn't contain a valid partition table Disk /dev/xvdf6: 6442 MB, 6442450944 bytes 255 heads, 63 sectors/track, 783 cylinders, total 12582912 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/xvdf6 doesn't contain a valid partition table.
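
    For reference: nothing is necessarily wrong here. Both /dev/xvda1 and /dev/xvdf6 are filesystem-bearing volumes with no partition table of their own (on EC2 the device name itself plays the role of a partition), so fdisk's warning is expected. A hedged way to confirm the filesystem is healthy:

        sudo file -s /dev/xvdf6    # should report ext4 filesystem data
        sudo blkid /dev/xvdf6      # prints UUID and TYPE if the fs is intact
        df -h /db                  # confirms the mount is live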

  • My experience working with Teradata SQL Assistant

    - by Kevin Shyr
    Originally posted on: http://geekswithblogs.net/LifeLongTechie/archive/2014/05/28/my-experience-working-with-teradata-sql-assistant.aspx To this date, I still haven't figured out how to "toggle" between my query windows. It seems like unless I click on the "new" button at the top, whatever SQL I generate from a right-click just overwrites the current SQL in the window. I'm probably missing a "generate new SQL in new window" setting. By default, Teradata SQL Assistant doesn't execute just the SQL query I highlighted; there is a setting I have to change first. I'm not really happy that the SQL Assistant and SQL Admin are different apps. Still trying to get used to the fact that I can't quickly look up a table's keys/relationships while writing a query; I have to switch between windows. I LOVE the execution plan / explanation. I think that part is better done than in MS SQL in some ways. The error messages could be better. I feel that the Teradata .NET provider sends a smaller query command over than others; I don't have any hard data to support my claim. One of my queries in SSRS was passing multi-valued parameters to another query and got the error "Teradata 3577 row size or sort key size overflow". The search results for this error say the solution is to cast the result column into a smaller data type, but I found that the problem was that the parameter passed into the WHERE clause could not be too large. I wish Teradata SQL Assistant would remember the window size I just adjusted. Every time I execute a query, the result set, query, and exec log panes auto re-adjust back to the default size. In SSMS, if I adjust the result set area to be smaller, it stays like that if I execute a query in the same window.

  • Can we decrease page swapping?

    - by Benjamin
    My system has 5 GB of RAM, and my paging file size is 2 GB. Even though I have plenty of RAM, page swapping still occurs, and I don't want that. I know how to adjust the paging file size. If I shrink the paging file (e.g., to 200 MB), will Windows stop swapping? Are there side effects?
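
    For reference: shrinking or removing the page file does not stop paging altogether, since Windows still pages executables and memory-mapped files in and out; the main side effect of a too-small page file is out-of-memory errors once total commit charge reaches physical RAM plus page file. A hedged sketch of pinning the size from an elevated command prompt (wmic syntax for Vista/7-era Windows assumed):

        wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
        wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=200,MaximumSize=200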

  • Javascript autotab function not working on iPad or iPhone [migrated]

    - by freddy6
    I have this piece of HTML code: <form name="postcode" method="post" onsubmit="return OnSubmitForm();"> <input class="postcode" maxlength="1" size="1" name="c" onKeyup="autotab(this, document.postcode.o)" /> <input class="postcode" maxlength="1" size="1" name="o" onKeyup="autotab(this, document.postcode.d)" /> <input class="postcode" maxlength="1" size="1" name="d" onKeyup="autotab(this, document.postcode.e)" /> <input class="postcode" maxlength="1" size="1" name="e" /> <br /> </form> which uses this JavaScript: <script> /* Auto tabbing script- By JavaScriptKit.com http://www.javascriptkit.com This credit MUST stay intact for use */ function autotab(original,destination){ if (original.getAttribute&&original.value.length==original.getAttribute("maxlength")) destination.focus() } </script><script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script> <script src="js/scripts.js" type="text/javascript"></script> <script type="text/javascript"> function OnSubmitForm() { if(document.postcode.operation[0].checked == true) { document.postcode.action ="plans.php"; } else if(document.postcode.operation[1].checked == true) { document.postcode.action ="plans_gas.php"; } else if(document.postcode.operation[2].checked == true) { document.postcode.action ="plans_duel.php"; } return true; } </script> As soon as you enter one character into one of the text boxes, it automatically tabs across to the next text box. This works fine on a PC or Mac in Safari, and also in all other browsers. But when viewing the webpage on an iPad or iPhone (using Safari), the auto tabbing function does not work. Any ideas on how to make the auto tab work on these mobile devices?
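
    A hedged hypothesis: keyup is unreliable for the iOS soft keyboard in Safari of that era, whereas the HTML5 input event fires on every value change regardless of input method. A sketch of the same autotab logic rewired to oninput (names taken from the original form):

        <input class="postcode" maxlength="1" size="1" name="c"
               oninput="autotab(this, document.postcode.o)" />
        <script>
        // maxLength is the DOM property mirror of the maxlength attribute
        function autotab(original, destination) {
          if (original.value.length >= original.maxLength) destination.focus();
        }
        </script>

    One caveat: on iOS, focus() does not always raise the keyboard unless it runs inside a user-triggered event handler, which oninput is.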

  • SSH_ORIGINAL_COMMAND error with snow leopard client to a gitosis server on debian

    - by Mica
    I have a server running gitosis (installed from the package manager) on debian lenny. I am able to perform all operations from my linux mint laptop, but from my Mac running an up-to-date Snow Leopard gives me the following error: mica@waste Desktop$ git clone [email protected]:Poems.git Initialized empty Git repository in /Users/micas/Desktop/Poems/.git/ ERROR:gitosis.serve.main:Repository read access denied fatal: The remote end hung up unexpectedly mica@waste Desktop$ ssh -v [email protected] OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Applying options for * debug1: Connecting to 192.168.0.156 [192.168.0.156] port 22. debug1: Connection established. debug1: identity file /Users/micas/.ssh/identity type -1 debug1: identity file /Users/micas/.ssh/id_rsa type 1 debug1: identity file /Users/micas/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5 debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '192.168.0.156' is known and matches the RSA host key. debug1: Found key in /Users/mica/.ssh/known_hosts:5 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Offering public key: /Users/mica/.ssh/id_rsa debug1: Remote: Forced command: gitosis-serve mica@waste debug1: Remote: Port forwarding disabled. debug1: Remote: X11 forwarding disabled. debug1: Remote: Agent forwarding disabled. debug1: Remote: Pty allocation disabled. debug1: Server accepts key: pkalg ssh-rsa blen 277 debug1: Remote: Forced command: gitosis-serve micas@waste debug1: Remote: Port forwarding disabled. debug1: Remote: X11 forwarding disabled. debug1: Remote: Agent forwarding disabled. debug1: Remote: Pty allocation disabled. debug1: Authentication succeeded (publickey). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. debug1: Requesting authentication agent forwarding. PTY allocation request failed on channel 0 ERROR:gitosis.serve.main:Need SSH_ORIGINAL_COMMAND in environment. debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0 debug1: channel 0: free: client-session, nchannels 1 Connection to 192.168.0.156 closed. Transferred: sent 2544, received 2888 bytes, in 0.1 seconds Bytes per second: sent 29642.1, received 33650.3 debug1: Exit status 1 Extensive googling of the error isn't returning much-- I changed the /etc/sshd_config file on my Mac as per http://www.schmidp.com/2009/06/23/enable-ssh-agent-key-forwarding-on-snow-leopard/. I still get the same error.
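
    A hedged reading of the verbose output: the server matches the offered key twice, with two different forced commands (gitosis-serve mica@waste and gitosis-serve micas@waste), which suggests the key appears under two gitosis identities, and the identity that wins may not be the one granted read access to Poems in gitosis.conf. The "Need SSH_ORIGINAL_COMMAND" error itself is expected when ssh-ing in interactively, since gitosis only serves git commands. A sketch of pinning the Mac to a single known key (paths assumed):

        # ~/.ssh/config on the Snow Leopard client
        Host 192.168.0.156
            User git
            IdentityFile ~/.ssh/id_rsa
            IdentitiesOnly yes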

  • Thoughts on Thoughts on TDD

    Brian Harry wrote a post entitled Thoughts on TDD that I thought I was going to let lie, but I find that I need to write a response. I find myself in agreement with Brian on many points in the post, but I disagree with his conclusion. Not surprisingly, I agree with the things that he likes about TDD. Focusing on the usage rather than the implementation is really important, and this is important whether you use TDD or not. And YAGNI was a big theme in my Seven Deadly Sins of Programming series. Now, on to what he doesn't like. He says that he finds it inefficient to have tests that he has to change every time he refactors. Here is where we part company. If you are having to do a lot of test rewriting (say, more than a couple of minutes' work to get back to green) *often* when you are refactoring your code, I submit that either you are testing things that you don't need to test (internal details rather than external behavior), your code perhaps isn't as decoupled as it could be, or maybe you need a visit to refactorers anonymous. I also like to refactor like crazy, but as we all know, the huge downside of refactoring is that we often break things. Important things. Subtle things. Which makes refactoring risky. *Unless* we have a set of tests that have great coverage. And TDD (or Example-based Design, which I prefer as a term) gives those to us. Now, I don't know what sort of coverage Brian gets with the unit tests that he writes, but I do know that for the majority of the developers I've worked with (and I count myself in that bucket) the coverage of unit tests written afterwards is considerably inferior to the coverage of unit tests that come from TDD. For me, it all comes down to the answer to the following question: How do you ensure that your code works now and will continue to work in the future? I'm willing to put up with a little inefficiency on the front side to get that benefit later. It's not the writing of the code that's the expensive part, it's everything else that comes after. I don't think that stepping through test cases in the debugger gets you what you want. You can verify what the current behavior is, sure, and do it fairly cheaply, but you don't help the guy in the future who doesn't know what conditions were important if he has to change your code. The second thing he doesn't like is backing into an architecture (go read to see what he means). I've certainly had to work with code that was like this before, and it's a nightmare: the code that nobody wants to touch. But that's not at all the kind of code that you get with TDD, because if you're doing it right you're following the "write a failing test, make it pass, refactor" approach. Now, you may miss some useful refactorings and generalizations this way, but if you do, you can refactor later, because you have the tests that make it safe to do so, and your code tends to be easy to refactor because the same things that make code easy to write unit tests for make it easy to refactor. I also think Brian is missing an important point. We aren't all as smart as he is. I'm reminded a bit of the lesson of Intentional Programming, Charles Simonyi's paradigm for making programming easier. I played around with Intentional Programming when it was young, and came to the conclusion that it was a pretty good thing if you were as smart as Simonyi is, but it was pretty much a disaster if you were an average developer. In this case, TDD gives you a way to work your way into a good, flexible, and functional architecture when you don't have somebody of Brian's talents to help you out. And that's a good thing.

  • Unable to create direct context rendering error when running an OpenGL application

    - by Rodnower
    I am trying to run the Mesa gears example and I get the following error: freeglut (./gears): Unable to create direct context rendering for window 'Gears' This may hurt performance. The application runs successfully, but I guess that in the future I will have performance problems. I am running Linux CentOS 5 on VMware 7. Mesa's version is 6.5. The relevant output of lspci -v gives: 00:0f.0 VGA compatible controller: VMware SVGA II Adapter (prog-if 00 [VGA controller]) Subsystem: VMware SVGA II Adapter Flags: bus master, medium devsel, latency 64, IRQ 9 I/O ports at 10d0 [size=16] Memory at d0000000 (32-bit, non-prefetchable) [size=128M] Memory at d8000000 (32-bit, non-prefetchable) [size=8M] [virtual] Expansion ROM at 30000000 [disabled] [size=32K] Capabilities: [40] Vendor Specific Information Does anyone have an idea? Is there a VMware driver for CentOS? Thanks in advance.
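
    A hedged sketch of the usual checklist for direct rendering inside a VMware guest (the .vmx option name assumes Workstation 7-era syntax):

        # Inside the guest: confirm what GLX actually negotiated
        glxinfo | grep "direct rendering"

        # On the host: 3D acceleration must be enabled for this VM,
        # either in the VM settings UI or in the .vmx file:
        #   mks.enable3d = "TRUE"

    VMware Tools also needs to be installed in the guest, since it provides the accelerated SVGA driver; without it, Mesa falls back to indirect software rendering, which is exactly what the freeglut warning reports.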

  • Resize a RAID 1 volume on OSX Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OSX-related topics, I often don't find any deep-dive technical explanations sufficient enough to feel confident doing dangerous things. Here is my question: I have a Mac Pro, running OSX 10.6.2. I have, as my main root/boot disk, a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because, I originally took a 640GB disk, bought a terabyte disk, mirrored it (using diskutil appleraid enable...), when it synced I removed the 640GB and replaced it with a second 1 TB disk, and synced again. Voila! A single 640 GB replaced by two 1 TB disks in a mirror.. Actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help: -> diskutil list /dev/disk0 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *1.0 TB disk0 1: EFI 209.7 MB disk0s1 2: Apple_RAID 999.9 GB disk0s2 3: Apple_Boot Boot OSX 134.2 MB disk0s3 /dev/disk1 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *1.0 TB disk1 1: EFI 209.7 MB disk1s1 2: Apple_RAID 999.9 GB disk1s2 3: Apple_Boot Boot OSX 134.2 MB disk1s3 /dev/disk2 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *640.1 GB disk2 1: EFI 209.7 MB disk2s1 2: Apple_HFS Mac Disk 2 536.7 GB disk2s2 3: Microsoft Basic Data BOOTCAMP 103.1 GB disk2s3 /dev/disk3 #: TYPE NAME SIZE IDENTIFIER 0: Apple_HFS Mirror1 *639.8 GB disk3 -> diskutil appleraid list AppleRAID sets (1 found) =============================================================================== Name: Macintosh HD Unique ID: 1953F864-B474-4EB6-8E69-41834EBD0247 Type: Mirror Status: Online Size: 639.8 GB (639791038464 Bytes) Rebuild: manual Device Node: disk3 ------------------------------------------------------------------------------- # Device Node UUID Status ------------------------------------------------------------------------------- 0 disk1s2 25109BAE-5697-40EA-B612-0217851444F7 Online 1 disk0s2 11B83AB0-8148-4DB6-8761-DEF08C855F8D Online =============================================================================== Thanks in advance.
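
    A hedged first step, before anything destructive: ask diskutil what its resizer will permit on the RAID set's volume (the limits form is a dry run and changes nothing):

        diskutil resizeVolume disk3 limits
        # if the reported maximum is ~1 TB, growing in place may be possible:
        # diskutil resizeVolume disk3 999G

    If resizeVolume refuses to operate on an AppleRAID member, the remaining route is the riskier one: break the mirror, resize, and rebuild, with a verified backup first.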

  • GlusterFs - high load 90-107% CPU

    - by Sara
    I have tried and tried to improve performance and fix this problem with Gluster; I have tried everything. I serve web pages, PHP files, images, etc. from Gluster. The problem appeared after upgrading from 3.3.0 to 3.3.1. I tried 3.4, thinking it might fix things, but the problem is still the same. I temporarily have 1 brick, but before the upgrade everything was fine. Config: Volume Name: ... Type: Replicate Volume ID: ... Status: Started Number of Bricks: 0 x 2 = 1 Transport-type: tcp Bricks: Brick1: ...:/... Options Reconfigured: cluster.stripe-block-size: 128KB performance.cache-max-file-size: 100MB performance.flush-behind: on performance.io-thread-count: 16 performance.cache-size: 256MB auth.allow: ... performance.cache-refresh-timeout: 5 performance.write-behind-window-size: 1024MB I use FUSE. I have wondered whether the high load is due to the unavailable brick, but I can't find information on how to safely change the type of a volume. Maybe you know how?
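
    Two hedged checks that fit the symptoms: a replicate volume reporting "Number of Bricks: 0 x 2 = 1" is running degraded, and constant attempts to contact or heal toward the missing brick could account for a CPU pinned at 90-107%. (Volume name is a placeholder below, since it is elided above.)

        gluster volume status
        gluster volume heal <VOLNAME> info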

  • Mail server not sending or receiving after removal from barracuda blacklist to white list

    - by user137765
    Mail server not sending or receiving after removal from barracuda blacklist to white list. I've checked against black lists and the ip and domain are clean. 1and1 are saying its Barracuda black list and barracuda are saying its not blacklisted and that its somethign with 1and1 server. section from log file... Sep 20 04:29:25 vegaserve postfix/smtpd[16906]: connect from mta860.chtah.net[63.236.31.146] Sep 20 04:29:25 vegaserve postfix/smtpd[16070]: connect from host81-136-144-117.in-addr.btopenworld.com[81.136.144.117] Sep 20 04:29:27 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: raidon - short names not allowed from @ [201.80.253.153]ERR: 1348111767.185119 LOGOUT, [email protected], ip=[86.143.136.249], top=0, retr=0, time=151, rcvd=18, sent=283, maildir=/var/qmail/mailnames/mbelectrics.net/mb/Maildir Sep 20 04:29:28 vegaserve pop3d: LOGIN FAILED, ip=[201.80.253.153] Sep 20 04:29:28 vegaserve postfix/smtpd[15388]: connect from mta965.emails.itv.com[8.30.201.55] Sep 20 04:29:29 vegaserve postfix/smtpd[18194]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:29 vegaserve postfix/cleanup[24879]: 95CB31E87556C: message-id=<[email protected] Sep 20 04:29:29 vegaserve postfix/qmgr[14378]: 95CB31E87556C: from=, size=975, nrcpt=1 (queue active) Sep 20 04:29:29 vegaserve postfix/smtpd[18194]: disconnect from uspmta172097.emarsys.net[195.54.172.97] Sep 20 04:29:29 vegaserve postfix/smtp[25748]: 95CB31E87556C: to=, orig_to=, relay=none, delay=0.05, delays=0.05/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:29:29 vegaserve postfix/bounce[25897]: warning: 95CB31E87556C: undeliverable postmaster notification discarded Sep 20 04:29:29 vegaserve postfix/qmgr[14378]: 95CB31E87556C: removed Sep 20 04:29:32 vegaserve pop3d: Connection, ip=[201.80.253.153] Sep 20 04:29:37 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: rei - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153] Sep 20 04:29:38 vegaserve pop3d: Connection, ip=[201.80.253.153] Sep 20 04:29:38 vegaserve postfix/smtpd[19328]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:40 vegaserve postfix/smtpd[18331]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:40 vegaserve postfix/smtpd[24464]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:40 vegaserve postfix/cleanup[24825]: BD1A71E87556C: message-id=<[email protected] Sep 20 04:29:40 vegaserve postfix/qmgr[14378]: BD1A71E87556C: from=, size=673, nrcpt=1 (queue active) Sep 20 04:29:40 vegaserve postfix/smtpd[24464]: disconnect from unknown[118.97.212.190] Sep 20 04:29:40 vegaserve postfix/smtp[25748]: BD1A71E87556C: to=, orig_to=, relay=none, delay=0.04, delays=0.04/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:29:40 vegaserve postfix/bounce[25995]: warning: BD1A71E87556C: undeliverable postmaster notification discarded Sep 20 04:29:40 vegaserve postfix/qmgr[14378]: BD1A71E87556C: removed Sep 20 04:29:41 vegaserve postfix/cleanup[24879]: 0A42B1E87556C: message-id=<[email protected] Sep 20 04:29:41 vegaserve postfix/qmgr[14378]: 0A42B1E87556C: from=, size=961, nrcpt=1 (queue active) Sep 20 04:29:41 vegaserve postfix/smtpd[18331]: disconnect from bay0-omc4-s10.bay0.hotmail.com[65.54.190.212] Sep 20 04:29:41 vegaserve postfix/smtp[25748]: 0A42B1E87556C: to=, orig_to=, 
relay=none, delay=0.03, delays=0.03/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:29:41 vegaserve postfix/bounce[25897]: warning: 0A42B1E87556C: undeliverable postmaster notification discarded Sep 20 04:29:41 vegaserve postfix/qmgr[14378]: 0A42B1E87556C: removed Sep 20 04:29:43 vegaserve postfix/smtpd[17511]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:43 vegaserve postfix/cleanup[24825]: 8F8991E87556C: message-id=<[email protected] Sep 20 04:29:43 vegaserve postfix/qmgr[14378]: 8F8991E87556C: from=, size=946, nrcpt=1 (queue active) Sep 20 04:29:43 vegaserve postfix/smtpd[17511]: disconnect from blu0-omc4-s22.blu0.hotmail.com[65.55.111.161] Sep 20 04:29:43 vegaserve postfix/smtp[25748]: 8F8991E87556C: to=, orig_to=, relay=none, delay=0.05, delays=0.02/0/0.02/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:29:43 vegaserve postfix/bounce[25995]: warning: 8F8991E87556C: undeliverable postmaster notification discarded Sep 20 04:29:43 vegaserve postfix/qmgr[14378]: 8F8991E87556C: removed Sep 20 04:29:44 vegaserve postfix/cleanup[24879]: 088641E87556C: message-id=<[email protected] Sep 20 04:29:44 vegaserve postfix/qmgr[14378]: 088641E87556C: from=, size=1078, nrcpt=1 (queue active) Sep 20 04:29:44 vegaserve postfix/smtpd[19328]: disconnect from smtp10.bis7.eu.blackberry.com[178.239.85.15] Sep 20 04:29:44 vegaserve postfix/smtp[25748]: 088641E87556C: to=, orig_to=, relay=none, delay=0.05, delays=0.03/0/0.01/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:29:44 vegaserve postfix/bounce[25995]: warning: 088641E87556C: undeliverable postmaster notification discarded Sep 20 04:29:44 vegaserve postfix/qmgr[14378]: 088641E87556C: removed Sep 20 04:29:44 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: rin - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153] Sep 20 04:29:44 vegaserve pop3d: Connection, ip=[201.80.253.153] Sep 20 04:29:44 vegaserve postfix/smtpd[18965]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:44 vegaserve postfix/cleanup[24825]: 946F51E87556C: message-id=<[email protected] Sep 20 04:29:44 vegaserve postfix/qmgr[14378]: 946F51E87556C: from=, size=1173, nrcpt=1 (queue active) Sep 20 04:29:44 vegaserve postfix/smtpd[18965]: disconnect from hubrelay-rd.bt.com[62.239.224.99] Sep 20 04:29:44 vegaserve postfix/smtp[25748]: 946F51E87556C: to=, orig_to=, relay=none, delay=0.04, delays=0.04/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:29:44 vegaserve postfix/bounce[25897]: warning: 946F51E87556C: undeliverable postmaster notification discarded Sep 20 04:29:44 vegaserve postfix/qmgr[14378]: 946F51E87556C: removed Sep 20 04:29:45 vegaserve postfix/smtpd[14816]: connect from col0-omc2-s12.col0.hotmail.com[65.55.34.86] Sep 20 04:29:47 vegaserve postfix/smtpd[16900]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:47 vegaserve postfix/cleanup[24879]: 961721E87556C: message-id=<[email protected] Sep 20 04:29:47 vegaserve postfix/qmgr[14378]: 961721E87556C: from=, size=1082, nrcpt=1 (queue active) Sep 20 04:29:47 vegaserve postfix/smtpd[16900]: disconnect from mta-35d2.livingsocial.com[199.91.53.210] Sep 20 04:29:47 vegaserve postfix/smtp[25748]: 961721E87556C: to=, orig_to=, relay=none, delay=0.04, delays=0.04/0/0/0, dsn=5.4.6, status=bounced (mail for 
vegaserve.com loops back to myself) Sep 20 04:29:47 vegaserve postfix/bounce[25995]: warning: 961721E87556C: undeliverable postmaster notification discarded Sep 20 04:29:47 vegaserve postfix/qmgr[14378]: 961721E87556C: removed Sep 20 04:29:50 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: rini - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153] Sep 20 04:29:50 vegaserve pop3d: Connection, ip=[201.80.253.153] Sep 20 04:29:52 vegaserve postfix/smtpd[24478]: connect from col0-omc2-s13.col0.hotmail.com[65.55.34.87] Sep 20 04:29:52 vegaserve postfix/smtpd[18923]: connect from www.idbwplan.com[193.181.254.21] Sep 20 04:29:55 vegaserve postfix/smtpd[15968]: connect from 105-48.mta.dotmailer.com[94.143.105.48] Sep 20 04:29:56 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: ringo - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153] Sep 20 04:29:56 vegaserve pop3d: Connection, ip=[201.80.253.153] Sep 20 04:30:00 vegaserve postfix/smtpd[18772]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:30:01 vegaserve postfix/cleanup[24825]: 1DAD71E87556C: message-id=<[email protected] Sep 20 04:30:01 vegaserve postfix/qmgr[14378]: 1DAD71E87556C: from=, size=1022, nrcpt=1 (queue active) Sep 20 04:30:01 vegaserve postfix/smtpd[18772]: disconnect from mail95.us2.mcsv.net[173.231.139.95] Sep 20 04:30:01 vegaserve postfix/smtp[25748]: 1DAD71E87556C: to=, orig_to=, relay=none, delay=0.06, delays=0.05/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:30:01 vegaserve postfix/bounce[25897]: warning: 1DAD71E87556C: undeliverable postmaster notification discarded Sep 20 04:30:01 vegaserve postfix/qmgr[14378]: 1DAD71E87556C: removed Sep 20 04:30:02 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: ritsuko - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153] Sep 20 04:30:02 vegaserve postfix/smtpd[16911]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:30:02 vegaserve pop3d: Connection, ip=[201.80.253.153] Sep 20 04:30:02 vegaserve postfix/cleanup[24879]: 8AADD1E87556C: message-id=<[email protected] Sep 20 04:30:02 vegaserve postfix/qmgr[14378]: 8AADD1E87556C: from=, size=1003, nrcpt=1 (queue active) Sep 20 04:30:02 vegaserve postfix/smtpd[16911]: disconnect from mr133.createsend.com[184.106.86.133] Sep 20 04:30:02 vegaserve postfix/smtp[25748]: 8AADD1E87556C: to=, orig_to=, relay=none, delay=0.02, delays=0.02/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself)
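
    Two distinct failures show up in this log, so a hedged sketch of checking both (postfix paths standard):

        # 1. The content filter on 127.0.0.1:10025 (commonly amavisd) is timing
        #    out, which stalls every inbound SMTP session -- is it running?
        netstat -lnp | grep 10025

        # 2. "mail for vegaserve.com loops back to myself" means this host
        #    receives mail for the domain but is not configured as its final
        #    destination; the domain must appear in main.cf, e.g.:
        #    mydestination = $myhostname, localhost.$mydomain, localhost, vegaserve.com
        postconf mydestination
        postfix reload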


  • limits.conf to set memory limits

    - by Rupert Jipe
    I would like to limit any process from using more than 500 MB of RAM. AFAIK this is done using RSS in /etc/security/limits.conf but the process called gnome-panel apparently is using 618436 kB of VmRSS. How can this be ? /etc/security/limits.conf * hard rss 512000 username@debian:~$ cat /proc/3002/status Name: gnome-panel State: S (sleeping) Tgid: 3002 Pid: 3002 PPid: 2910 TracerPid: 0 Uid: 1000 1000 1000 1000 Gid: 1000 1000 1000 1000 FDSize: 64 Groups: 20 24 25 29 44 46 112 116 117 1000 1002 1003 VmPeak: 916636 kB VmSize: 916636 kB VmLck: 0 kB VmHWM: 618436 kB VmRSS: 618436 kB VmData: 601972 kB VmStk: 104 kB VmExe: 516 kB VmLib: 29232 kB VmPTE: 1760 kB Threads: 1 SigQ: 0/14001 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000020001000 SigCgt: 0000000180000000 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: ffffffffffffffff Cpus_allowed: 3 Cpus_allowed_list: 0-1 Mems_allowed: 00000000,00000001 Mems_allowed_list: 0 voluntary_ctxt_switches: 871965 nonvoluntary_ctxt_switches: 47553 PaX: PeMRs username@debian:~$ cat /proc/3002/limits Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 8388608 unlimited bytes Max core file size 0 0 bytes Max resident set 524288000 524288000 bytes Max processes 100 100 processes Max open files 1024 1024 files Max locked memory 65536 65536 bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 14001 14001 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 Max realtime priority 0 0 Max realtime timeout unlimited unlimited us
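
    For reference: the limit is being set (see "Max resident set" in /proc/3002/limits) but not enforced; Linux kernels since 2.4.30 accept RLIMIT_RSS and then ignore it, so the rss line in limits.conf cannot cap gnome-panel. A memory cgroup does enforce a hard cap; a hedged sketch (cgroup v1 interface, mount point may differ by distribution):

        mkdir /sys/fs/cgroup/memory/cap500m
        echo $((500 * 1024 * 1024)) > /sys/fs/cgroup/memory/cap500m/memory.limit_in_bytes
        echo 3002 > /sys/fs/cgroup/memory/cap500m/tasks   # move gnome-panel in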

  • Textures quality issues with Libgdx

    - by user1876708
    I have drawn several vector objects and characters (in Adobe Illustrator) for my game on Android. They are all scalable to any size without any quality loss (of course, it's vector ^^). I tried to simulate my gameboard directly in Illustrator just before setting up my assets in libGDX to implement them in my game. I set all the objects at the right size so that they fit perfectly on the XHDPI device I am running my tests on. As you can see it works great (for me at least ^^); the PNG quality is good, as expected! So I exported all my PNGs at this size, set up my assets in libGDX and built my game APK. And here is a screenshot of my gameboard (don't pay attention to the differences in placement, but compare the objects present in both screenshots). As you can see, I have a loss of PNG quality in the game. It can be seen clearly on the hedgehog PNG, but also (though less obviously) on the mushroom (check the outline) and the hole PNG. If you really pay attention, on every object you can see pixels that are not visible in my first screenshot. And I just can't figure out why this is happening. If you have any ideas, you are very welcome! Thanks. PS: You can compare the two gameboards more clearly via these two links (look at them at 100%, displayed at high resolution): Good quality link, from Illustrator; Poor quality link, from the game. Second phase of tests: We displayed an object (the hedgehog) on our main menu screen to see how it looks. The thing is, it looks like it is supposed to: high quality, with no pixels showing. The hedgehog PNG comes from an atlas: layer.addActor(hedgehog); No loss of quality with this method. So we think the problem comes from the method we are using to display it on our gameboard: blocks[9][3] = new Block(TextureUtils.hedgehog, new Vector2(9, 3)); The block gets its size from the vector we associate with it, but we have a loss of quality with this method.
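
    A hedged hypothesis that matches these symptoms: libGDX samples textures with Nearest filtering by default, which produces exactly this kind of pixelated outline as soon as an image is drawn at anything other than 1:1 scale, while the Illustrator mock-up is resampled smoothly. A sketch of switching to linear/mipmapped filtering (file name illustrative; for atlas regions the filter is set on the atlas's backing textures):

        // second constructor arg = true builds mipmaps for smooth downscaling
        Texture t = new Texture(Gdx.files.internal("hedgehog.png"), true);
        t.setFilter(Texture.TextureFilter.MipMapLinearLinear,
                    Texture.TextureFilter.Linear);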

    Read the article

  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the new self-tuning feature in DB2 V9.x, many database parameters are set to AUTOMATIC by default in DB2 V9.7 so that DB2 can adjust their values as needed. Most work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance.

    DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 uses a small page size (64 KB) for all memory allocation, expanding and shrinking the memory as needed. To take advantage of the large page sizes (up to 256 MB) supported by the SPARC T3, we need to set the size of DATABASE_MEMORY manually so that DB2 can use 256 MB pages for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems you turn one switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process.

    NUM_IOCLEANERS: This parameter defines the number of page cleaners. Its default value is AUTOMATIC, calculated from the number of available CPUs and the number of logical partitions. On a SPARC T3 system, with over a hundred virtual CPUs and a single DB2 partition, DB2 sets it to the number of CPUs minus one. That creates too many page cleaners competing to flush pages to disk, causing aio mutex lock contention, so the value needs to be decreased. A good practice is to set it to the number of physical devices used by the database table space containers.
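    As a hedged illustration of the two manual settings described above (the database name SAMPLE, the roughly 48 GB figure and the cleaner count of 8 are assumptions, not values from the post; DATABASE_MEMORY is specified in 4 KB pages):

    db2 UPDATE DB CFG FOR SAMPLE USING DATABASE_MEMORY 12500000   # fixed size (~48 GB) instead of AUTOMATIC
    db2 UPDATE DB CFG FOR SAMPLE USING NUM_IOCLEANERS 8           # e.g. one cleaner per physical device behind the containers
    pmap -xs `pgrep db2sysc` | grep 256M                          # verify the buffer pools landed on 256 MB pages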

    Read the article

  • Missing Operating System after trying to upgrade to Ubuntu 11

    - by Mauricio
    Hi there! While trying to upgrade from Ubuntu 10.04 to 11, the upgrade process stopped partway through, and then I got an "out of disk, grub rescue" message when booting. After running Boot Repair, I now get "Missing Operating System" when trying to boot. Below are the results of some commands I gathered from help forums, but I have still reached no solution. Could you please help me? Any enlightenment will be very helpful!

    Disk Utility says "Disk has a few bad sectors". When trying to run the self-test I get "FAILED (Read)".

    Here is what GParted says about the /dev/sda1 partition (ext4):

    Flags: boot
    Status: not mounted
    Warning: e2label: Attempt to read block from filesystem resulted in short read while trying to open /dev/sda1
    Couldn't find valid filesystem superblock
    Unable to read the contents of this filesystem!

    From sudo fdisk -l I got:

    Disk /dev/sda: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000e0596

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048   607428607   303713280   83  Linux
    /dev/sda2       607430654   625141759     8855553    5  Extended
    /dev/sda5       607430656   625141759     8855552   82  Linux swap / Solaris

    Disk /dev/sdb: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000c3c41

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *          63   625137344   312568641    c  W95 FAT32 (LBA)

    From sudo fdisk /dev/sda1 I got:

    fdisk: unable to read /dev/sda1: Inappropriate ioctl for device

    From sudo mount /dev/sda1 /mnt I got:

    mount: wrong fs type, bad option, bad superblock on /dev/sda1,
           missing codepage or helper program, or other error
           In some cases useful info is found in syslog - try dmesg | tail or so

    From sudo update-grub I got:

    error: cannot read from `/dev/sda'.
    /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?).
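    This is not from the original post, but given the unreadable primary superblock and the failed read self-test, a hedged sketch of the usual recovery order would be: image the failing disk first, then try a backup superblock. The offset 32768 assumes an ext4 filesystem with 4 KB blocks, and the backup destination path is illustrative:

    # 1. Image the failing disk before any repair attempt
    sudo ddrescue /dev/sda /media/backup/sda.img /media/backup/sda.log

    # 2. List backup superblock locations without writing anything (-n is a dry run)
    sudo mke2fs -n /dev/sda1

    # 3. Check the filesystem against a backup superblock
    sudo e2fsck -b 32768 /dev/sda1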

    Read the article

  • Acceptable sound quality: stereo needed for an Android game?

    - by Thomas Calc
    I have various short, simple sound effects (damage sound, dying sound, thunderbolt, fanfare, breaking) for a game currently being developed for Android. I use OGG files: 96 kbps VBR, 44.1 kHz, 2 channels (that means stereo, right?). I read the other Stack Exchange topics about "acceptable sound quality", but they are too general and address too many things. My experience is that even at 80 kbps my effects sound OK, but I have tested them on only a limited number of Android devices (including a Sony Ericsson Xperia Neo and an HTC Desire HD).

    My questions: For mobile phones and tablets generally, what parameters are recommended? Won't my 80 kbps sounds be bad on a newer device (such as a modern tablet)? I don't hear any difference between stereo and mono (2 channels vs. 1 channel, right?); is there any noticeable difference at all for mobile phones / tablets in terms of the player experience? Is it worth it at all? I assume that stereo sounds take up much more memory (when they are decoded to PCM), despite the fact that the compressed OGG size is practically the same.

    Reacting to Roy T.'s great comment: I couldn't actually measure the PCM size (Android decodes OGG internally), but I thought stereo would take more space than mono when uncompressed. After throwing out one of the WAV channels in Audacity and re-exporting: the new WAV file is half its previous size, while the OGG file size is practically the same as before.

    The sound effects and game music were recorded by my friend, who is an experienced hobby musician/composer, but he knows little about computers and software, so he just gave me some high-quality WAV files generated by his hardware. These were stereo, but when I check them in Audacity, both channels appear to be exactly the same. Can I consider them identical (and move to mono), or might there be differences too subtle for the human ear to notice?
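    For reference, here is the decoded-PCM arithmetic behind the memory question (standard PCM math assuming 16-bit samples; none of these numbers come from the post):

    bytes per second = sample rate x bytes per sample x channels
    stereo: 44100 x 2 x 2 = 176400 bytes/s
    mono:   44100 x 2 x 1 =  88200 bytes/s

    A 3-second effect therefore decodes to about 517 kB in stereo but 258 kB in mono. This matches the observations above: dropping a duplicated channel halves the WAV, while the OGG barely changes because two identical channels compress to almost no extra data.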

    Read the article
