Search Results

Search found 23939 results on 958 pages for 'block size'.

  • SQL SERVER – FIX: ERROR Msg 5169, Level 16: FILEGROWTH cannot be greater than MAXSIZE for file

    - by pinaldave
    I am writing this blog post right after resolving this error on one of our systems. Recently a friend of mine, who is an expert in infrastructure as well as private cloud, was working on a SQL Server installation. Please note he is seriously expert in what he does, but he had never worked with SQL Server before and had absolutely no experience with its installation. He was modifying a database file and kept getting the following error. As soon as he saw me, he asked me where the max file size setting is so he could change it. Let us quickly re-create the scenario he was facing.

    Error message:

        Msg 5169, Level 16, State 1, Line 1
        FILEGROWTH cannot be greater than MAXSIZE for file 'NewDB'.

    Creating the scenario:

        CREATE DATABASE [NewDB] ON PRIMARY
            (NAME = N'NewDB', FILENAME = N'D:\NewDB.mdf',
             SIZE = 4096KB, FILEGROWTH = 1024KB, MAXSIZE = 4096KB)
        LOG ON
            (NAME = N'NewDB_log', FILENAME = N'D:\NewDB_log.ldf',
             SIZE = 1024KB, FILEGROWTH = 10%)
        GO

    Now let us see the exact command that was producing the error for him:

        USE [master]
        GO
        ALTER DATABASE [NewDB] MODIFY FILE (NAME = N'NewDB', FILEGROWTH = 1024MB)
        GO

    Workaround / Fix / Solution: the reason for the error is very simple. He was trying to set FILEGROWTH to a much higher value than the maximum file size specified for the database. There are two ways we can fix it.

    Method 1: reduce FILEGROWTH to a value lower than the file's MAXSIZE:

        USE [master]
        GO
        ALTER DATABASE [NewDB] MODIFY FILE (NAME = N'NewDB', FILEGROWTH = 1024KB)
        GO

    Method 2: increase the file's MAXSIZE so it is greater than the new FILEGROWTH:

        USE [master]
        GO
        ALTER DATABASE [NewDB] MODIFY FILE (NAME = N'NewDB', FILEGROWTH = 1024MB, MAXSIZE = 4096MB)
        GO

    I think this blog post will help everybody who is facing similar issues. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
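
    As a side note, you can inspect both current values before choosing between the two methods by querying sys.master_files, where sizes are reported in 8 KB pages. A minimal sketch using Python with pyodbc (the connection string is illustrative; any SQL client works just as well):

        import pyodbc

        # Connection string is illustrative - point it at your own server.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
        )
        # size/max_size/growth are in 8 KB pages; max_size = -1 means unlimited,
        # and growth is a percentage instead when is_percent_growth = 1.
        rows = conn.execute(
            "SELECT name, size, max_size, growth, is_percent_growth "
            "FROM sys.master_files WHERE database_id = DB_ID('NewDB')"
        ).fetchall()
        for name, size, max_size, growth, is_pct in rows:
            print(name,
                  "size=%d KB" % (size * 8),
                  "max_size=%s" % ("unlimited" if max_size == -1 else "%d KB" % (max_size * 8)),
                  "growth=%s" % ("%d%%" % growth if is_pct else "%d KB" % (growth * 8)))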

  • What causes "A disk read error occurred, Press Ctrl + Alt + Del to restart"?

    - by Mehrdad
    I have a virtual machine containing Windows XP SP3. When I resized the VHD file (and the embedded partition) and tried booting, I got:

        A disk read error occurred
        Press Ctrl + Alt + Del to restart

    Some notes:

    - FixBoot and FixMBR don't help.
    - ChkDsk doesn't help.
    - The partition is indeed active.
    - The partition starts at sector 63 (it also did so before the problem) of cylinder 1, head 1, and is marked as type 0x07 (NTFS).
    - My host OS reads the VHD and the partition completely fine.

    I'm interested in knowing the cause rather than the fix. So "re-format the disk", "reinstall Windows", etc. aren't valid solutions. It's a virtual machine after all... I have nothing to lose, so I don't care about fixing it. I just want to know what's causing this problem, in case I run into it again on a physical machine (which I have done before).

    More info - the layout of the original, dynamic VHD (which works correctly):

        Disk: 3                MBR/GPT: MBR
        Size: 127.00GB         CHS: 16578 255 63
        Sectors: 266338304     Disk Signature: 0xEE3EEE3E
        Partitions: 1          Partition Order: 1
        Media Type: Fixed      Interface: SCSI
        Description: Msft Virtual Disk

        Pos Idx Type/Name Size Boot Hide Start Sector Total Sectors DL Vol Label
        1   1   07-NTFS   1.5G Yes  No   63           3,148,677     F: <None>

    The layout of the resized, fixed-size VHD (which doesn't work):

        Disk: 3                MBR/GPT: MBR
        Size: 1.50GB           CHS: 196 255 63
        Sectors: 3149824       Disk Signature: 0xEE3EEE3E
        Partitions: 1          Partition Order: 1
        Media Type: Fixed      Interface: SCSI
        Description: Msft Virtual Disk

        Pos Idx Type/Name Size Boot Hide Start Sector Total Sectors DL Vol Label
        1   1   07-NTFS   1.5G Yes  No   63           3,148,677     F: <None>
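
    A frequently cited cause for this exact message after a resize is a stale geometry/size field in the NTFS boot sector: the BPB carries its own sectors-per-track, heads, hidden-sector and total-sector values, and the XP boot code trusts them even when the MBR and partition table are fine. As a sketch (not a verified diagnosis), those fields can be dumped from the resized image and compared against the partition table above. It assumes a fixed-size VHD, whose data area is a plain raw image with the 512-byte VHD footer at the end; the file name is hypothetical:

        import struct

        SECTOR = 512
        PART_START = 63   # partition start sector, per the tables above

        with open("resized.vhd", "rb") as f:
            f.seek(PART_START * SECTOR)
            bs = f.read(SECTOR)   # the partition's NTFS boot sector

        # Offsets per the published NTFS BPB layout.
        print("bytes/sector :", struct.unpack_from("<H", bs, 0x0B)[0])
        print("sectors/track:", struct.unpack_from("<H", bs, 0x18)[0])
        print("heads        :", struct.unpack_from("<H", bs, 0x1A)[0])
        print("hidden sect. :", struct.unpack_from("<I", bs, 0x1C)[0])  # expect 63
        print("total sectors:", struct.unpack_from("<Q", bs, 0x28)[0])  # vs 3,148,677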

  • gunicorn + django + nginx unix://socket failed (11: Resource temporarily unavailable)

    - by user1068118
    We run very high-volume traffic on these servers, configured with Django, gunicorn, supervisor and nginx, but a lot of the time I see 502 errors. So I checked the nginx logs, and this is what is recorded:

        [error] 2388#0: *208027 connect() to unix:/tmp/gunicorn-ourapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream

    Can anyone help debug what might be causing this to happen? This is our nginx configuration:

        sendfile on;
        tcp_nopush on;
        tcp_nodelay off;
        listen 80 default_server;
        server_name imp.ourapp.com;
        access_log /mnt/ebs/nginx-log/ourapp-access.log;
        error_log /mnt/ebs/nginx-log/ourapp-error.log;
        charset utf-8;
        keepalive_timeout 60;
        client_max_body_size 8m;
        gzip_types text/plain text/xml text/css application/javascript application/x-javascript application/json;
        location / {
            proxy_pass http://unix:/tmp/gunicorn-ourapp.socket;
            proxy_pass_request_headers on;
            proxy_read_timeout 600s;
            proxy_connect_timeout 600s;
            proxy_redirect http://localhost/ http://imp.ourapp.com/;
            #proxy_set_header Host $host;
            #proxy_set_header X-Real-IP $remote_addr;
            #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header X-Forwarded-Proto $my_scheme;
            #proxy_set_header X-Forwarded-Ssl $my_ssl;
        }

    We have configured Django to run in gunicorn as a generic WSGI application. Supervisord is used to launch the gunicorn workers:

        home/user/virtenv/bin/python2.7 /home/user/virtenv/bin/gunicorn --config /home/user/shared/etc/gunicorn.conf.py daggr.wsgi:application

    This is what the gunicorn.conf.py looks like:

        import multiprocessing
        bind = 'unix:/tmp/gunicorn-ourapp.socket'
        workers = multiprocessing.cpu_count() * 3 + 1
        timeout = 600
        graceful_timeout = 40

    Does anyone know where I can start digging to see what might be causing the problem? This is what my ulimit -a output looks like on the server:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 59481
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 50000
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited
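
    For what it's worth, errno 11 (EAGAIN) from connect() on a unix socket usually means the socket's listen backlog was full at that instant - all workers busy, queue full. Two knobs that are commonly raised together for this symptom, shown as a hedged sketch of gunicorn.conf.py (values illustrative, not known-good): gunicorn's backlog defaults to 2048 and is capped by the kernel's net.core.somaxconn, so raising one without the other has no effect.

        import multiprocessing

        bind = 'unix:/tmp/gunicorn-ourapp.socket'
        workers = multiprocessing.cpu_count() * 3 + 1
        backlog = 4096          # default is 2048; also raise net.core.somaxconn to match
        timeout = 600
        graceful_timeout = 40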

  • java slick2D - problem using ScalableGame class

    - by nellykvist
    I have a problem adjusting the size of the screen using the ScalableGame class from the Slick2D library. What I want to achieve: whenever I change the display size, the background should adjust to the screen size, and objects (images, graphic shapes) should fit (scale). Alright, so this is how the state looks by default. I can change the screen size, but images and graphic shapes do not scale:

        appGameContainer = new AppGameContainer(
            new ScalableGame(new AppStateController(), Settings.video.getWidth(), Settings.video.getHeight(), true)
        );
        appGameContainer.setDisplayMode(Settings.video.getWidth(), Settings.video.getHeight(), Settings.video.isFullScreen());
        appGameContainer.start();

    If I add 100 to the width/height in the ScalableGame constructor:

        appGameContainer = new AppGameContainer(
            new ScalableGame(new AppStateController(), Settings.video.getWidth() + 100, Settings.video.getHeight() + 100, true)
        );
        appGameContainer.setDisplayMode(Settings.video.getWidth(), Settings.video.getHeight(), Settings.video.isFullScreen());
        appGameContainer.start();

    If I add 100 to the width/height of the display mode:

        appGameContainer = new AppGameContainer(
            new ScalableGame(new AppStateController(), Settings.video.getWidth(), Settings.video.getHeight(), true)
        );
        appGameContainer.setDisplayMode(Settings.video.getWidth() + 100, Settings.video.getHeight() + 100, Settings.video.isFullScreen());
        appGameContainer.start();
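
    For reference, below is roughly the arithmetic a wrapper like ScalableGame has to apply - sketched in Python rather than Java, and an assumption about its internals rather than Slick2D source: the constructor's width/height act as the game's native size, and every draw is scaled to the container's display mode. If both sizes always change together, the factors stay at 1.0 and nothing visibly scales.

        # Toy model of scale-to-display logic; not Slick2D code.
        def scale_factors(native_w, native_h, display_w, display_h):
            """Per-axis factors a scaling wrapper must apply to every draw call."""
            return display_w / float(native_w), display_h / float(native_h)

        # Native size == display size (first snippet): nothing scales.
        print(scale_factors(800, 600, 800, 600))   # (1.0, 1.0)
        # Display 100 px larger than native (third snippet): everything scales up.
        print(scale_factors(800, 600, 900, 700))   # (1.125, 1.1666...)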

  • where is memory gone (no, not buffers or cache)

    - by Marki
    Can anyone tell me where the memory has gone? (No, this time it is neither buffers nor cache.)

        # free
                     total       used       free     shared    buffers     cached
        Mem:       3928200    3868560      59640          0       2888      92924
        -/+ buffers/cache:    3772748     155452
        Swap:      4192956     226352    3966604

    top, sorted by memory, descending:

        top - 13:42:06 up 1 day, 3:47, 2 users, load average: 0.08, 0.12, 0.36
        Tasks: 228 total, 1 running, 227 sleeping, 0 stopped, 0 zombie
        Cpu0 : 2.0%us, 4.0%sy, 0.0%ni, 90.1%id, 0.0%wa, 0.0%hi, 4.0%si, 0.0%st
        Cpu1 : 0.0%us, 0.0%sy, 0.0%ni, 0.0%id, 100.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 3928200k total, 3868020k used, 60180k free, 2896k buffers
        Swap: 4192956k total, 226048k used, 3966908k free, 82068k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
         3863 root   20  0  902m 199m 3296 S    7  5.2 99:08.77 ndsd
        21906 root   20  0  138m 9076 2988 S    0  0.2  0:00.02 sfcbd
         2332 root   20  0  126m 4660 1332 S    0  0.1  0:17.72 mono
         4243 wwwrun 20  0  683m 4468  668 S    0  0.1  0:07.38 java
         2994 root   20  0  202m 2288 1660 S    0  0.1  6:10.02 httpstkd
         4338 root   20  0  184m 2240 1112 S    0  0.1  0:00.52 namcd
        21898 root   20  0 32368 1832 1256 R    1  0.0  0:00.08 top

    In fact, some time ago the OOM killer kicked in and crashed the system (kernel panic), and I'm afraid we're again not far from that point...

    UPDATE

        # cat /proc/meminfo
        MemTotal:        3928200 kB
        MemFree:           51336 kB
        Buffers:            2964 kB
        Cached:            72876 kB
        SwapCached:        29128 kB
        Active:           233440 kB
        Inactive:          88040 kB
        Active(anon):     188920 kB
        Inactive(anon):    56752 kB
        Active(file):      44520 kB
        Inactive(file):    31288 kB
        Unevictable:           0 kB
        Mlocked:               0 kB
        SwapTotal:       4192956 kB
        SwapFree:        3966824 kB
        Dirty:                32 kB
        Writeback:             0 kB
        AnonPages:        225112 kB
        Mapped:            11356 kB
        Shmem:                32 kB
        Slab:            1624080 kB
        SReclaimable:      13740 kB
        SUnreclaim:      1610340 kB
        KernelStack:        4176 kB
        PageTables:        10500 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     6157056 kB
        Committed_AS:    2397684 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:      441372 kB
        VmallocChunk:   34359246755 kB
        HardwareCorrupted:     0 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:       10240 kB
        DirectMap2M:     4184064 kB

    slabtop:

        Active / Total Objects (% used)    : 9041019 / 9207548 (98.2%)
        Active / Total Slabs (% used)      : 401132 / 401156 (100.0%)
        Active / Total Caches (% used)     : 91 / 159 (57.2%)
        Active / Total Size (% used)       : 1491537.88K / 1519791.56K (98.1%)
        Minimum / Average / Maximum Object : 0.02K / 0.17K / 4096.00K

           OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
        4240470 4240319  99%    0.12K 141349       30    565396K pid
        2245140 2219675  98%    0.25K 149676       15    598704K size-256
        2238090 2210087  98%    0.12K  74603       30    298412K size-128
        ...
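
    For readers hitting the same thing: in this dump the standout is Slab: 1624080 kB, of which SUnreclaim is 1610340 kB - kernel slab-cache memory (the huge pid and size-256 caches in the slabtop output) that free(1) counts neither as buffers nor as cache. A small sketch that does this bookkeeping from /proc/meminfo:

        # Sum the usual /proc/meminfo buckets and see which one ate the RAM.
        meminfo = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                meminfo[key] = int(value.split()[0])   # values are in kB

        accounted = sum(meminfo.get(k, 0) for k in
                        ("MemFree", "Buffers", "Cached", "AnonPages", "Slab",
                         "PageTables", "KernelStack"))
        print("MemTotal :", meminfo["MemTotal"], "kB")
        print("accounted:", accounted, "kB")
        print("Slab     :", meminfo["Slab"], "kB",
              "(SUnreclaim:", meminfo["SUnreclaim"], "kB)")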

  • Django + gunicorn + virtualenv + Supervisord issue

    - by Florian Le Goff
    Dear all, I have a strange issue with my virtualenv + gunicorn setup, but only when gunicorn is launched via supervisord. I do realize that it may very well be an issue with my supervisord, and I would appreciate any feedback on a better place to ask for help...

    In a nutshell: when I run gunicorn from my user shell, inside my virtualenv, everything works flawlessly; I'm able to access all the views of my Django project. When gunicorn is launched by supervisord at system startup, everything is also OK. But if I have to kill the gunicorn_django processes, or if I perform a supervisord restart, then once gunicorn_django has relaunched, every request is answered with a weird traceback:

        (...)
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/__init__.py", line 77, in
            connection = connections[DEFAULT_DB_ALIAS]
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 92, in __getitem__
            backend = load_backend(db['ENGINE'])
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 50, in load_backend
            raise ImproperlyConfigured(error_msg)
        TemplateSyntaxError: Caught ImproperlyConfigured while rendering:
        'django.db.backends.postgresql_psycopg2' isn't an available database backend.
        Try using django.db.backends.XXX, where XXX is one of:
        'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3'
        Error was: cannot import name utils

    Full stack available here: http://pastebin.com/BJ5tNQ2N

    I'm running...

        Ubuntu/maverick (up-to-date)
        Python = 2.6.6
        virtualenv = 1.5.1
        gunicorn = 0.12.0
        Django = 1.2.5
        psycopg2 = '2.4-beta2 (dt dec pq3 ext)'

    gunicorn configuration:

        backlog = 2048
        bind = "127.0.0.1:8000"
        pidfile = "/tmp/gunicorn-hc.pid"
        daemon = True
        debug = True
        workers = 3
        logfile = "/home/hc/prod/log/gunicorn.log"
        loglevel = "info"

    supervisord configuration:

        [program:gunicorn]
        directory=/home/hc/prod/hc
        command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py
        user=hc
        umask=022
        autostart=True
        autorestart=True
        redirect_stderr=True

    Any advice? I've been stuck on this one for quite a while. It seems like some weird memory limit, as I'm not enforcing anything special:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 20
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Thank you.
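
    Not an answer, but a way to corner it: the real failure is the ImportError at the bottom ("cannot import name utils"), and the difference between "works from my shell" and "breaks after a supervisord restart" is usually the process environment. A hedged diagnostic (paths illustrative) that could be run as a one-off program under supervisord, to see which interpreter and packages the relaunched process actually gets:

        import sys

        # Which Python and which search path did supervisord give us?
        print(sys.executable)
        for p in sys.path:
            print("  ", p)

        # Reproduce the backend import outside the web stack to surface the
        # underlying ImportError instead of Django's wrapped message.
        try:
            import psycopg2
            print("psycopg2:", psycopg2.__version__, psycopg2.__file__)
            from django.db.backends.postgresql_psycopg2 import base  # noqa: F401
            print("backend import OK")
        except Exception as exc:
            print("backend import failed:", exc)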

  • Installing nGinX Reverse Proxy on CentOS 5

    - by heavymark
    I'm trying to install nginx as a reverse proxy on CentOS 5 with Apache. The instructions for doing this are here: http://wiki.mediatemple.net/w/(dv):Configure_nginx_as_reverse_proxy_web_server

    Note: in the instructions, for the URL to get nginx I'm using the following: http://nginx.org/download/nginx-1.0.10.tar.gz

    Now here is my problem. After installing the required packages and running ./configure, I get the following:

        checking for OS
         + Linux 2.6.18-028stab094.3 x86_64
        checking for C compiler ... found
         + using GNU C compiler
         + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-51)
        checking for gcc -pipe switch ... found
        checking for gcc builtin atomic operations ... found
        checking for C99 variadic macros ... found
        checking for gcc variadic macros ... found
        checking for unistd.h ... found
        checking for inttypes.h ... found
        checking for limits.h ... found
        checking for sys/filio.h ... not found
        checking for sys/param.h ... found
        checking for sys/mount.h ... found
        checking for sys/statvfs.h ... found
        checking for crypt.h ... found
        checking for Linux specific features
        checking for epoll ... found
        checking for sendfile() ... found
        checking for sendfile64() ... found
        checking for sys/prctl.h ... found
        checking for prctl(PR_SET_DUMPABLE) ... found
        checking for sched_setaffinity() ... found
        checking for crypt_r() ... found
        checking for sys/vfs.h ... found
        checking for nobody group ... found
        checking for poll() ... found
        checking for /dev/poll ... not found
        checking for kqueue ... not found
        checking for crypt() ... not found
        checking for crypt() in libcrypt ... found
        checking for F_READAHEAD ... not found
        checking for posix_fadvise() ... found
        checking for O_DIRECT ... found
        checking for F_NOCACHE ... not found
        checking for directio() ... not found
        checking for statfs() ... found
        checking for statvfs() ... found
        checking for dlopen() ... not found
        checking for dlopen() in libdl ... found
        checking for sched_yield() ... found
        checking for SO_SETFIB ... not found
        checking for SO_ACCEPTFILTER ... not found
        checking for TCP_DEFER_ACCEPT ... found
        checking for accept4() ... not found
        checking for int size ... 4 bytes
        checking for long size ... 8 bytes
        checking for long long size ... 8 bytes
        checking for void * size ... 8 bytes
        checking for uint64_t ... found
        checking for sig_atomic_t ... found
        checking for sig_atomic_t size ... 4 bytes
        checking for socklen_t ... found
        checking for in_addr_t ... found
        checking for in_port_t ... found
        checking for rlim_t ... found
        checking for uintptr_t ... uintptr_t found
        checking for system endianess ... little endianess
        checking for size_t size ... 8 bytes
        checking for off_t size ... 8 bytes
        checking for time_t size ... 8 bytes
        checking for setproctitle() ... not found
        checking for pread() ... found
        checking for pwrite() ... found
        checking for sys_nerr ... found
        checking for localtime_r() ... found
        checking for posix_memalign() ... found
        checking for memalign() ... found
        checking for mmap(MAP_ANON|MAP_SHARED) ... found
        checking for mmap("/dev/zero", MAP_SHARED) ... found
        checking for System V shared memory ... found
        checking for POSIX semaphores ... not found
        checking for POSIX semaphores in libpthread ... found
        checking for struct msghdr.msg_control ... found
        checking for ioctl(FIONBIO) ... found
        checking for struct tm.tm_gmtoff ... found
        checking for struct dirent.d_namlen ... not found
        checking for struct dirent.d_type ... found
        checking for PCRE library ... found
        checking for system md library ... not found
        checking for system md5 library ... not found
        checking for OpenSSL md5 crypto library ... found
        checking for sha1 in system md library ... not found
        checking for OpenSSL sha1 crypto library ... found
        checking for zlib library ... found
        creating objs/Makefile

        Configuration summary
         + using system PCRE library
         + OpenSSL library is not used
         + md5: using system crypto library
         + sha1: using system crypto library
         + using system zlib library

        nginx path prefix: "/usr/local/nginx"
        nginx binary file: "/usr/local/nginx/sbin/nginx"
        nginx configuration prefix: "/usr/local/nginx/conf"
        nginx configuration file: "/usr/local/nginx/conf/nginx.conf"
        nginx pid file: "/usr/local/nginx/logs/nginx.pid"
        nginx error log file: "/usr/local/nginx/logs/error.log"
        nginx http access log file: "/usr/local/nginx/logs/access.log"
        nginx http client request body temporary files: "client_body_temp"
        nginx http proxy temporary files: "proxy_temp"
        nginx http fastcgi temporary files: "fastcgi_temp"
        nginx http uwsgi temporary files: "uwsgi_temp"
        nginx http scgi temporary files: "scgi_temp"

    It says if you get errors, to stop and make sure packages are installed. I didn't get errors, but as you can see I got several "not found"s. Are those considered errors? If so, how do I resolve that? And as noted in the link, I cannot install through yum, because it won't work with Plesk then. Thanks!

  • Improving TCP performance over a gigabit network with lots of connections and high traffic of small packets

    - by MinimeDJ
    I'm trying to improve my TCP throughput over a "gigabit network with lots of connections and high traffic of small packets". My server OS is Ubuntu 11.10 Server 64-bit. There are about 50,000 (and growing) clients connected to my server through TCP sockets (all on the same port). 95% of my packets have a size of 1-150 bytes (TCP header and payload); the remaining 5% vary from 150 up to 4096+ bytes. With the config below my server can handle traffic up to 30 Mbps (full duplex). Can you please advise on best practice to tune the OS for my needs?

    My /etc/sysctl.conf looks like this:

        kernel.pid_max = 1000000
        net.ipv4.ip_local_port_range = 2500 65000
        fs.file-max = 1000000
        #
        net.core.netdev_max_backlog = 3000
        net.ipv4.tcp_sack = 0
        #
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.core.somaxconn = 2048
        #
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216
        #
        net.ipv4.tcp_synack_retries = 2
        net.ipv4.tcp_syncookies = 1
        net.ipv4.tcp_mem = 50576 64768 98152
        #
        net.core.wmem_default = 65536
        net.core.rmem_default = 65536
        net.ipv4.tcp_window_scaling = 1
        #
        net.ipv4.tcp_mem = 98304 131072 196608
        #
        net.ipv4.tcp_timestamps = 0
        net.ipv4.tcp_rfc1337 = 1
        net.ipv4.ip_forward = 0
        net.ipv4.tcp_congestion_control = cubic
        net.ipv4.tcp_tw_recycle = 0
        net.ipv4.tcp_tw_reuse = 0
        #
        net.ipv4.tcp_orphan_retries = 1
        net.ipv4.tcp_fin_timeout = 25
        net.ipv4.tcp_max_orphans = 8192

    Here are my limits:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 193045
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1000000
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1000000

    [ADDED] My NICs are the following:

        $ dmesg | grep Broad
        [2.473081] Broadcom NetXtreme II 5771x 10Gigabit Ethernet Driver bnx2x 1.62.12-0 (2011/03/20)
        [2.477808] bnx2x 0000:02:00.0: eth0: Broadcom NetXtreme II BCM57711E XGb (A0) PCI-E x4 5GHz (Gen2) found at mem fb000000, IRQ 28, node addr d8:d3:85:bd:23:08
        [2.482556] bnx2x 0000:02:00.1: eth1: Broadcom NetXtreme II BCM57711E XGb (A0) PCI-E x4 5GHz (Gen2) found at mem fa000000, IRQ 40, node addr d8:d3:85:bd:23:0c

    [ADDED 2]

        ethtool -k eth0
        Offload parameters for eth0:
        rx-checksumming: on
        tx-checksumming: on
        scatter-gather: on
        tcp-segmentation-offload: on
        udp-fragmentation-offload: off
        generic-segmentation-offload: on
        generic-receive-offload: on
        large-receive-offload: on
        rx-vlan-offload: on
        tx-vlan-offload: on
        ntuple-filters: off
        receive-hashing: off

    [ADDED 3] sudo ethtool -S eth0 | grep -vw 0 - NIC statistics (nonzero counters, grouped per queue):

        queue  rx_bytes     rx_ucast_pkts rx_discards rx_csum_err tx_bytes     tx_ucast_pkts other
        [1]    17521104292  118326392     -           -           35351475694  191723897
        [2]    16569945203  114055437     -           -           36748975961  194800859
        [3]    16222309010  109397802     -           -           36034786682  198238209
        [4]    14884911384  104081414     5828        1           35663361789  194024824
        [5]    16465075461  110637200     -           -           43720432434  202041894
        [6]    16788706505  113123182     -           -           38443961940  202415075
        [7]    16287423304  110369475     -           1           35104168638  184905201
        [8]    12689721791  87616037      2638        -           36133395431  196547264
        [9]    15007548011  98183525      -           1           34871314517  188532637     tx_mcast_packets: 12
        [10]   12112044826  84335465      2494        -           36562151913  195658548
        [11]   12873153712  89305791      2990        -           36348541675  194155226
        [12]   12768100958  89350917      2667        -           35730240389  192254480
        [13]   14533227468  98139795      -           -           35954232494  194573612     tx_bcast_packets: 2
        [14]   13258647069  92856762      3509        1           35663586641  189661305

        totals:
        rx_bytes: 226125043936
        rx_ucast_packets: 1536428109
        rx_bcast_packets: 351
        rx_discards: 20126
        rx_filtered_packets: 8694
        rx_csum_offload_errors: 11
        tx_bytes: 548442367057
        tx_ucast_packets: 2915571846
        tx_mcast_packets: 12
        tx_bcast_packets: 2
        tx_64_byte_packets: 35417154
        tx_65_to_127_byte_packets: 2006984660
        tx_128_to_255_byte_packets: 373733514
        tx_256_to_511_byte_packets: 378121090
        tx_512_to_1023_byte_packets: 77643490
        tx_1024_to_1522_byte_packets: 43669214
        tx_pause_frames: 228

    Some info about SACK: When to turn TCP SACK off?
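
    One generic first check with 50,000 clients on a single port is whether the accept backlog ever overflows - those drops show up only in the kernel's TcpExt counters, not in application logs. A sketch that reads them from /proc/net/netstat:

        # /proc/net/netstat holds pairs of lines: a header row of counter names
        # followed by a row of values, for TcpExt and IpExt.
        def tcp_ext_counters():
            with open("/proc/net/netstat") as f:
                lines = f.readlines()
            for header, values in zip(lines[::2], lines[1::2]):
                if header.startswith("TcpExt:"):
                    return dict(zip(header.split()[1:], map(int, values.split()[1:])))
            return {}

        c = tcp_ext_counters()
        print("ListenOverflows:", c.get("ListenOverflows"))
        print("ListenDrops    :", c.get("ListenDrops"))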

  • Inverted schedctl usage in the JVM

    - by Dave
    The schedctl facility in Solaris allows a thread to request that the kernel defer involuntary preemption for a brief period. The mechanism is strictly advisory - the kernel can opt to ignore the request. Schedctl is typically used to bracket lock critical sections. That, in turn, can avoid convoying -- threads piling up on a critical section behind a preempted lock-holder -- and other lock-related performance pathologies. If you're interested, see the man pages for schedctl_start() and schedctl_stop() and the schedctl.h include file. The implementation is very efficient. schedctl_start(), which asks that preemption be deferred, simply stores into a thread-specific structure -- the schedctl block -- that the kernel maps into user-space. Similarly, schedctl_stop() clears the flag set by schedctl_start() and then checks a "preemption pending" flag in the block. Normally, this will be false, but if set, schedctl_stop() will yield to politely grant the CPU to other threads. Note that you can't abuse this facility for long-term preemption avoidance, as the deferral is brief. If your thread exceeds the grace period, the kernel will preempt it and transiently degrade its effective scheduling priority. Further reading: US05937187 and various papers by Andy Tucker.

    We'll now switch topics to the implementation of the "synchronized" locking construct in the HotSpot JVM. If a lock is contended, then on multiprocessor systems we'll spin briefly to try to avoid context switching. Context switching is wasted work and inflicts various cache and TLB penalties on the threads involved. If context switching were "free" then we'd never spin to avoid switching, but that's not the case. We use an adaptive spin-then-park strategy. One potentially undesirable outcome is that we can be preempted while spinning. When our spinning thread is finally rescheduled, the lock may or may not be available. If not, we'll spin and then potentially park (block) again, thus suffering a second context switch. Recall that the reason we spin is to avoid context switching.

    To avoid this scenario, I've found it useful to enable schedctl to request deferral while spinning. But while spinning I've arranged for the code to periodically check or poll the "preemption pending" flag. If that's found set, we simply abandon our spinning attempt and park immediately. This avoids the double context-switch scenario above.

    One annoyance is that the schedctl blocks for the threads in a given process are tightly packed on special pages mapped from kernel space into user-land. As such, writes to the schedctl blocks can cause false sharing on other adjacent blocks. Hopefully the kernel folks will make changes to avoid this by padding and aligning the blocks to ensure that one cache line underlies at most one schedctl block at any one time.
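
    For readers who want the shape of the algorithm rather than the HotSpot sources, here is a toy model of spin-then-park with the polled "preemption pending" flag. It is Python purely for illustration (schedctl has no Python binding), and preempt_pending stands in for the flag in the kernel-mapped schedctl block:

        import threading

        class SpinThenParkLock:
            """Toy spin-then-park lock; not HotSpot or schedctl code."""

            def __init__(self, spin_limit=1000):
                self._lock = threading.Lock()
                self.spin_limit = spin_limit

            def acquire(self, preempt_pending=lambda: False):
                for _ in range(self.spin_limit):
                    if self._lock.acquire(False):   # try once without blocking
                        return
                    if preempt_pending():           # about to lose the CPU anyway:
                        break                       # stop wasting the spin budget
                self._lock.acquire()                # park (block) instead

            def release(self):
                self._lock.release()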

  • “Disk /dev/xvda1 doesn't contain a valid partition table”

    - by Simpanoz
    I am a newbie to EC2 and Ubuntu 11 (EC2 free-tier Ubuntu). I have run the following commands:

        sudo mkfs -t ext4 /dev/xvdf6
        sudo mkdir /db
        sudo vim /etc/fstab
            /dev/xvdf6 /db ext4 noatime,noexec,nodiratime 0 0
        sudo mount /dev/xvdf6 /db
        fdisk -l

    I got the following output. Can someone guide me on what I am doing wrong and how it can be rectified?

        Disk /dev/xvda1: 8589 MB, 8589934592 bytes
        255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/xvda1 doesn't contain a valid partition table

        Disk /dev/xvdf6: 6442 MB, 6442450944 bytes
        255 heads, 63 sectors/track, 783 cylinders, total 12582912 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/xvdf6 doesn't contain a valid partition table.
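
    For what it's worth, that particular fdisk message is expected here rather than an error: /dev/xvda1 and /dev/xvdf6 hold filesystems written directly onto the devices, so there is no MBR for fdisk to find. A sketch of the two signature checks involved (run as root, device name as in the post):

        # fdisk looks for the 0x55AA boot-record signature at the end of sector 0;
        # an ext4 superblock instead starts 1024 bytes in, with magic 0xEF53 at
        # byte 56 of the superblock.
        with open("/dev/xvdf6", "rb") as f:
            head = f.read(2048)

        print("MBR signature present :", head[510:512] == b"\x55\xaa")          # False here
        print("ext4 superblock magic :", head[1024 + 56:1024 + 58] == b"\x53\xef")  # True here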

  • My experience working with Teradata SQL Assistant

    - by Kevin Shyr
    Originally posted on: http://geekswithblogs.net/LifeLongTechie/archive/2014/05/28/my-experience-working-with-teradata-sql-assistant.aspx

    - To this date, I still haven't figured out how to "toggle" between my query windows. It seems like unless I click that "new" button on top, whatever SQL I generate from a right-click just overrides the current SQL in the window. I'm probably missing a "generate new SQL in new window" setting.
    - By default, Teradata SQL Assistant doesn't execute just the SQL query I highlighted. There is a setting I have to change first.
    - I'm not really happy that SQL Assistant and SQL Admin are different apps. Still trying to get used to the fact that I can't quickly look up a table's keys/relationships while writing a query; I have to switch between windows.
    - LOVE the execution plan / explanation. I think that part is better done than MS SQL in some ways.
    - The error messages could be better.
    - I feel that the Teradata .NET provider sends a smaller query command over than others. I don't have any hard data to support my claim.
    - One of my queries in SSRS was passing multi-valued parameters to another query and got the error "Teradata 3577 row size or sort key size overflow". Searching on this error says the solution is to cast the result column into a smaller data type, but I found that the problem was that the parameter passed into the WHERE clause could not be too large.
    - I wish Teradata SQL Assistant would remember the window size I just adjusted to. Every time I execute a query, the result set, query, and exec log panes auto-readjust back to the default size. In SSMS, if I adjust the result set area to be smaller, it stays like that if I execute a query in the same window.

  • Can we decrease page swapping?

    - by Benjamin
    My system has 5 GB of RAM, and my paging file size is 2 GB. Even though I have plenty of RAM, page swapping still occurs, but I don't want that. I know how to adjust the paging file size. If I shrink the paging file (to, say, 200 MB), will Windows stop doing any swapping? Are there side effects?

  • Javascript autotab function not working on iPad or iPhone [migrated]

    - by freddy6
    I have this piece of HTML code:

        <form name="postcode" method="post" onsubmit="return OnSubmitForm();">
          <input class="postcode" maxlength="1" size="1" name="c" onKeyup="autotab(this, document.postcode.o)" />
          <input class="postcode" maxlength="1" size="1" name="o" onKeyup="autotab(this, document.postcode.d)" />
          <input class="postcode" maxlength="1" size="1" name="d" onKeyup="autotab(this, document.postcode.e)" />
          <input class="postcode" maxlength="1" size="1" name="e" />
          <br />
        </form>

    which uses this JavaScript:

        <script>
        /* Auto tabbing script - By JavaScriptKit.com
           http://www.javascriptkit.com
           This credit MUST stay intact for use */
        function autotab(original, destination) {
          if (original.getAttribute && original.value.length == original.getAttribute("maxlength"))
            destination.focus();
        }
        </script>
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
        <script src="js/scripts.js" type="text/javascript"></script>
        <script type="text/javascript">
        function OnSubmitForm() {
          if (document.postcode.operation[0].checked == true) {
            document.postcode.action = "plans.php";
          }
          else if (document.postcode.operation[1].checked == true) {
            document.postcode.action = "plans_gas.php";
          }
          else if (document.postcode.operation[2].checked == true) {
            document.postcode.action = "plans_duel.php";
          }
          return true;
        }
        </script>

    As soon as you enter one character into a text box, it automatically tabs across to the next text box. This works fine on a PC or Mac, in Safari and also in all other browsers. But when viewing the web page on an iPad or iPhone (using Safari), the auto-tabbing function does not work. Any ideas on how to make the autotab work on these mobile devices?

  • Unable to create context rendering error when running an OpenGL application

    - by Rodnower
    Hello, I try to run the Mesa gears example and I get the following error:

        freeglut (./gears): Unable to create direct context rendering for window 'Gears'
        This may hurt performance.

    The application runs successfully anyway, but I guess that in the future I will have performance problems. I run Linux CentOS 5 on VMware 7. Mesa's version is 6.5. The relevant output of lspci -v gives:

        00:0f.0 VGA compatible controller: VMware SVGA II Adapter (prog-if 00 [VGA controller])
            Subsystem: VMware SVGA II Adapter
            Flags: bus master, medium devsel, latency 64, IRQ 9
            I/O ports at 10d0 [size=16]
            Memory at d0000000 (32-bit, non-prefetchable) [size=128M]
            Memory at d8000000 (32-bit, non-prefetchable) [size=8M]
            [virtual] Expansion ROM at 30000000 [disabled] [size=32K]
            Capabilities: [40] Vendor Specific Information

    Does anyone have an idea? Is there a VMware guest video driver for CentOS? Thank you in advance.

  • Resize a RAID 1 volume on OSX Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OSX-related topics, I often don't find any deep-dive technical explanations sufficient enough to feel confident doing dangerous things. Here is my question:

    I have a Mac Pro, running OSX 10.6.2. I have, as my main root/boot disk, a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because I originally took a 640 GB disk, bought a terabyte disk, mirrored it (using diskutil appleraid enable...), and when it synced I removed the 640 GB disk, replaced it with a second 1 TB disk, and synced again. Voila! A single 640 GB disk replaced by two 1 TB disks in a mirror...

    Actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640 GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help:

        -> diskutil list
        /dev/disk0
           #:                      TYPE NAME        SIZE      IDENTIFIER
           0:     GUID_partition_scheme             *1.0 TB   disk0
           1:                       EFI             209.7 MB  disk0s1
           2:                Apple_RAID             999.9 GB  disk0s2
           3:                Apple_Boot Boot OSX    134.2 MB  disk0s3
        /dev/disk1
           #:                      TYPE NAME        SIZE      IDENTIFIER
           0:     GUID_partition_scheme             *1.0 TB   disk1
           1:                       EFI             209.7 MB  disk1s1
           2:                Apple_RAID             999.9 GB  disk1s2
           3:                Apple_Boot Boot OSX    134.2 MB  disk1s3
        /dev/disk2
           #:                      TYPE NAME        SIZE      IDENTIFIER
           0:     GUID_partition_scheme             *640.1 GB disk2
           1:                       EFI             209.7 MB  disk2s1
           2:                 Apple_HFS Mac Disk 2  536.7 GB  disk2s2
           3:      Microsoft Basic Data BOOTCAMP    103.1 GB  disk2s3
        /dev/disk3
           #:                      TYPE NAME        SIZE      IDENTIFIER
           0:                 Apple_HFS Mirror1     *639.8 GB disk3

        -> diskutil appleraid list
        AppleRAID sets (1 found)
        ===============================================================================
        Name:                 Macintosh HD
        Unique ID:            1953F864-B474-4EB6-8E69-41834EBD0247
        Type:                 Mirror
        Status:               Online
        Size:                 639.8 GB (639791038464 Bytes)
        Rebuild:              manual
        Device Node:          disk3
        -------------------------------------------------------------------------------
        #  Device Node  UUID                                  Status
        -------------------------------------------------------------------------------
        0  disk1s2      25109BAE-5697-40EA-B612-0217851444F7  Online
        1  disk0s2      11B83AB0-8148-4DB6-8761-DEF08C855F8D  Online
        ===============================================================================

    Thanks in advance.

  • GlusterFs - high load 90-107% CPU

    - by Sara
    I have tried and tried to tune performance and fix this problem with Gluster; I've tried everything. I serve web pages, PHP files, images, etc. from Gluster. The problem appeared after an update from 3.3.0 to 3.3.1. I tried 3.4, thinking it might fix the issue, but the problem is still the same. I temporarily have one brick, but before the upgrade everything was fine. Config:

        Volume Name: ...
        Type: Replicate
        Volume ID: ...
        Status: Started
        Number of Bricks: 0 x 2 = 1
        Transport-type: tcp
        Bricks:
        Brick1: ...:/...
        Options Reconfigured:
        cluster.stripe-block-size: 128KB
        performance.cache-max-file-size: 100MB
        performance.flush-behind: on
        performance.io-thread-count: 16
        performance.cache-size: 256MB
        auth.allow: ...
        performance.cache-refresh-timeout: 5
        performance.write-behind-window-size: 1024MB

    I use FUSE. "Maybe the high load is due to the unavailable brick" - I've thought about that, but I can't find information on how to safely change the type of the volume. Maybe you know how?

  • Tracking down rogue disk usage

    - by Amadan
    I found several other questions regarding the theory behind my problem (e.g. this, this), but I don't know how to apply the answers to my machine.

        # du -hsx /
        11000283    /

        # df -kT /
        Filesystem               Type 1K-blocks      Used Available Use% Mounted on
        /dev/mapper/csisv13-root ext4 516032952 361387456 128432532  74% /

    There is a big difference between 11G (du) and 345G (df). Where are the remaining 334G? It's not in deleted files. There was only one, it was short, and I truncated it just in case. This is what remains:

        # lsof -a +L1 /
        COMMAND    PID   USER FD  TYPE DEVICE SIZE/OFF NLINK     NODE NAME
        zabbix_ag 4902 zabbix 1w   REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4902 zabbix 2w   REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4906 zabbix 1w   REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4906 zabbix 2w   REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4907 zabbix 1w   REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4907 zabbix 2w   REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4908 zabbix 1w   REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4908 zabbix 2w   REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4909 zabbix 1w   REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4909 zabbix 2w   REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4910 zabbix 1w   REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4910 zabbix 2w   REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)

    I rebooted to see if fsck does anything. But, from /var/log/boot.log, it seems there are no issues:

        /dev/mapper/server-root: clean, 3936097/32768000 files, 125368568/131064832 blocks

    Thinking maybe someone overzealously reserved root space, I checked the master record:

        # tune2fs -l /dev/mapper/server-root
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   <none>
        Last mounted on:          /
        Filesystem UUID:          86430ade-cea7-46ce-979c-41769a41ecbe
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    user_xattr acl
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131064832
        Reserved block count:     6553241
        Free blocks:              5696264
        Free inodes:              28831903
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        Flex block group size:    16
        Filesystem created:       Fri Feb  1 13:44:04 2013
        Last mount time:          Tue Aug 19 16:56:13 2014
        Last write time:          Fri Feb  1 13:51:28 2013
        Mount count:              9
        Maximum mount count:      -1
        Last checked:             Fri Feb  1 13:44:04 2013
        Check interval:           0 (<none>)
        Lifetime writes:          1215 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        First orphan inode:       28836028
        Default directory hash:   half_md4
        Directory Hash Seed:      bca55ff5-f530-48d1-8347-25c004f66d43
        Journal backup:           inode blocks

    The system is:

        # uname -a
        Linux server 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        # cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"

    Does anyone have any tips on what exactly to do to find and hopefully reclaim the missing space?
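
    One more standard move for a du/df gap this size is to look for space pinned by deleted-but-open files straight in /proc, as a cross-check on lsof (which here was restricted to link-count-zero files on one filesystem). A sketch that flags anything over 100 MB (run as root):

        import os

        # Walk every process's fd directory; stat() follows the descriptor to
        # the underlying file even after it was unlinked.
        for pid in filter(str.isdigit, os.listdir("/proc")):
            fd_dir = "/proc/%s/fd" % pid
            try:
                fds = os.listdir(fd_dir)
            except OSError:
                continue   # process exited, or permission denied
            for fd in fds:
                path = os.path.join(fd_dir, fd)
                try:
                    target = os.readlink(path)
                    size = os.stat(path).st_size
                except OSError:
                    continue
                if target.endswith("(deleted)") and size > 100 * 1024 * 1024:
                    print(pid, size, target)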

  • Mail server not sending or receiving after removal from barracuda blacklist to white list

    - by user137765
    Our mail server is not sending or receiving after its removal from the Barracuda blacklist onto the whitelist. I've checked against blacklists and the IP and domain are clean. 1and1 are saying it's the Barracuda blacklist, and Barracuda are saying it's not blacklisted and that it's something with the 1and1 server. Section from the log file:

        Sep 20 04:29:25 vegaserve postfix/smtpd[16906]: connect from mta860.chtah.net[63.236.31.146]
        Sep 20 04:29:25 vegaserve postfix/smtpd[16070]: connect from host81-136-144-117.in-addr.btopenworld.com[81.136.144.117]
        Sep 20 04:29:27 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: raidon - short names not allowed from @ [201.80.253.153]ERR: 1348111767.185119 LOGOUT, [email protected], ip=[86.143.136.249], top=0, retr=0, time=151, rcvd=18, sent=283, maildir=/var/qmail/mailnames/mbelectrics.net/mb/Maildir
        Sep 20 04:29:28 vegaserve pop3d: LOGIN FAILED, ip=[201.80.253.153]
        Sep 20 04:29:28 vegaserve postfix/smtpd[15388]: connect from mta965.emails.itv.com[8.30.201.55]
        Sep 20 04:29:29 vegaserve postfix/smtpd[18194]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out
        Sep 20 04:29:29 vegaserve postfix/cleanup[24879]: 95CB31E87556C: message-id=<[email protected]
        Sep 20 04:29:29 vegaserve postfix/qmgr[14378]: 95CB31E87556C: from=, size=975, nrcpt=1 (queue active)
        Sep 20 04:29:29 vegaserve postfix/smtpd[18194]: disconnect from uspmta172097.emarsys.net[195.54.172.97]
        Sep 20 04:29:29 vegaserve postfix/smtp[25748]: 95CB31E87556C: to=, orig_to=, relay=none, delay=0.05, delays=0.05/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself)
        Sep 20 04:29:29 vegaserve postfix/bounce[25897]: warning: 95CB31E87556C: undeliverable postmaster notification discarded
        Sep 20 04:29:29 vegaserve postfix/qmgr[14378]: 95CB31E87556C: removed
        Sep 20 04:29:32 vegaserve pop3d: Connection, ip=[201.80.253.153]
        Sep 20 04:29:37 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: rei - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153]
        Sep 20 04:29:38 vegaserve pop3d: Connection, ip=[201.80.253.153]
        Sep 20 04:29:38 vegaserve postfix/smtpd[19328]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out
        Sep 20 04:29:40 vegaserve postfix/smtpd[18331]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out
        Sep 20 04:29:40 vegaserve postfix/smtpd[24464]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out
        Sep 20 04:29:40 vegaserve postfix/cleanup[24825]: BD1A71E87556C: message-id=<[email protected]
        Sep 20 04:29:40 vegaserve postfix/qmgr[14378]: BD1A71E87556C: from=, size=673, nrcpt=1 (queue active)
        Sep 20 04:29:40 vegaserve postfix/smtpd[24464]: disconnect from unknown[118.97.212.190]
        Sep 20 04:29:40 vegaserve postfix/smtp[25748]: BD1A71E87556C: to=, orig_to=, relay=none, delay=0.04, delays=0.04/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself)
        Sep 20 04:29:40 vegaserve postfix/bounce[25995]: warning: BD1A71E87556C: undeliverable postmaster notification discarded
        Sep 20 04:29:40 vegaserve postfix/qmgr[14378]: BD1A71E87556C: removed
        Sep 20 04:29:41 vegaserve postfix/cleanup[24879]: 0A42B1E87556C: message-id=<[email protected]
        Sep 20 04:29:41 vegaserve postfix/qmgr[14378]: 0A42B1E87556C: from=, size=961, nrcpt=1 (queue active)
        Sep 20 04:29:41 vegaserve postfix/smtpd[18331]: disconnect from bay0-omc4-s10.bay0.hotmail.com[65.54.190.212]
        Sep 20 04:29:41 vegaserve postfix/smtp[25748]: 0A42B1E87556C: to=, orig_to=, relay=none, delay=0.03, delays=0.03/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself)
        Sep 20 04:29:41 vegaserve postfix/bounce[25897]: warning: 0A42B1E87556C: undeliverable postmaster notification discarded
        Sep 20 04:29:41 vegaserve postfix/qmgr[14378]: 0A42B1E87556C: removed
        Sep 20 04:29:43 vegaserve postfix/smtpd[17511]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out
        Sep 20 04:29:43 vegaserve postfix/cleanup[24825]: 8F8991E87556C: message-id=<[email protected]
        Sep 20 04:29:43 vegaserve postfix/qmgr[14378]: 8F8991E87556C: from=, size=946, nrcpt=1 (queue active)
        Sep 20 04:29:43 vegaserve postfix/smtpd[17511]: disconnect from blu0-omc4-s22.blu0.hotmail.com[65.55.111.161]
        Sep 20 04:29:43 vegaserve postfix/smtp[25748]: 8F8991E87556C: to=, orig_to=, relay=none, delay=0.05, delays=0.02/0/0.02/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself)
        Sep 20 04:29:43 vegaserve postfix/bounce[25995]: warning: 8F8991E87556C: undeliverable postmaster notification discarded
        Sep 20 04:29:43 vegaserve postfix/qmgr[14378]: 8F8991E87556C: removed
        Sep 20 04:29:44 vegaserve postfix/cleanup[24879]: 088641E87556C: message-id=<[email protected]
        Sep 20 04:29:44 vegaserve postfix/qmgr[14378]: 088641E87556C: from=, size=1078, nrcpt=1 (queue active)
        Sep 20 04:29:44 vegaserve postfix/smtpd[19328]: disconnect from smtp10.bis7.eu.blackberry.com[178.239.85.15]
        Sep 20 04:29:44 vegaserve postfix/smtp[25748]: 088641E87556C: to=, orig_to=, relay=none, delay=0.05, delays=0.03/0/0.01/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself)
        Sep 20 04:29:44 vegaserve postfix/bounce[25995]: warning: 088641E87556C: undeliverable postmaster notification discarded
        Sep 20 04:29:44 vegaserve postfix/qmgr[14378]: 088641E87556C: removed
        Sep 20 04:29:44 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: rin - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153]
        Sep 20 04:29:44 vegaserve pop3d: Connection, ip=[201.80.253.153]
        Sep 20 04:29:44 vegaserve postfix/smtpd[18965]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out
        Sep 20 04:29:44 vegaserve postfix/cleanup[24825]: 946F51E87556C: message-id=<[email protected]
        Sep 20 04:29:44 vegaserve postfix/qmgr[14378]: 946F51E87556C: from=, size=1173, nrcpt=1 (queue active)
        Sep 20 04:29:44 vegaserve postfix/smtpd[18965]: disconnect from hubrelay-rd.bt.com[62.239.224.99]
        Sep 20 04:29:44 vegaserve postfix/smtp[25748]: 946F51E87556C: to=, orig_to=, relay=none, delay=0.04, delays=0.04/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself)
        Sep 20 04:29:44 vegaserve postfix/bounce[25897]: warning: 946F51E87556C: undeliverable postmaster notification discarded
        Sep 20 04:29:44 vegaserve postfix/qmgr[14378]: 946F51E87556C: removed
        Sep 20 04:29:45 vegaserve postfix/smtpd[14816]: connect from col0-omc2-s12.col0.hotmail.com[65.55.34.86]
        Sep 20 04:29:47 vegaserve postfix/smtpd[16900]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out
        Sep 20 04:29:47 vegaserve postfix/cleanup[24879]: 961721E87556C: message-id=<[email protected]
        Sep 20 04:29:47 vegaserve postfix/qmgr[14378]: 961721E87556C: from=, size=1082, nrcpt=1 (queue active)
        Sep 20 04:29:47 vegaserve postfix/smtpd[16900]: disconnect from mta-35d2.livingsocial.com[199.91.53.210]
        Sep 20 04:29:47 vegaserve postfix/smtp[25748]: 961721E87556C: to=, orig_to=, relay=none, delay=0.04, delays=0.04/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself)
        Sep 20 04:29:47 vegaserve postfix/bounce[25995]: warning: 961721E87556C: undeliverable postmaster notification discarded
        Sep 20 04:29:47 vegaserve postfix/qmgr[14378]: 961721E87556C: removed
        Sep 20 04:29:50 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: rini - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153]
        Sep 20 04:29:50 vegaserve pop3d: Connection, ip=[201.80.253.153]
        Sep 20 04:29:52 vegaserve postfix/smtpd[24478]: connect from col0-omc2-s13.col0.hotmail.com[65.55.34.87]
        Sep 20 04:29:52 vegaserve postfix/smtpd[18923]: connect from www.idbwplan.com[193.181.254.21]
        Sep 20 04:29:55 vegaserve postfix/smtpd[15968]: connect from 105-48.mta.dotmailer.com[94.143.105.48]
        Sep 20 04:29:56 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: ringo - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153]
        Sep 20 04:29:56 vegaserve pop3d: Connection, ip=[201.80.253.153]
        Sep 20 04:30:00 vegaserve postfix/smtpd[18772]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out
        Sep 20 04:30:01 vegaserve postfix/cleanup[24825]: 1DAD71E87556C: message-id=<[email protected]
        Sep 20 04:30:01 vegaserve postfix/qmgr[14378]: 1DAD71E87556C: from=, size=1022, nrcpt=1 (queue active)
        Sep 20 04:30:01 vegaserve postfix/smtpd[18772]: disconnect from mail95.us2.mcsv.net[173.231.139.95]
        Sep 20 04:30:01 vegaserve postfix/smtp[25748]: 1DAD71E87556C: to=, orig_to=, relay=none, delay=0.06, delays=0.05/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself)
        Sep 20 04:30:01 vegaserve postfix/bounce[25897]: warning: 1DAD71E87556C: undeliverable postmaster notification discarded
        Sep 20 04:30:01 vegaserve postfix/qmgr[14378]: 1DAD71E87556C: removed
        Sep 20 04:30:02 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: ritsuko - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153]
        Sep 20 04:30:02 vegaserve postfix/smtpd[16911]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out
        Sep 20 04:30:02 vegaserve pop3d: Connection, ip=[201.80.253.153]
        Sep 20 04:30:02 vegaserve postfix/cleanup[24879]: 8AADD1E87556C: message-id=<[email protected]
        Sep 20 04:30:02 vegaserve postfix/qmgr[14378]: 8AADD1E87556C: from=, size=1003, nrcpt=1 (queue active)
        Sep 20 04:30:02 vegaserve postfix/smtpd[16911]: disconnect from mr133.createsend.com[184.106.86.133]
        Sep 20 04:30:02 vegaserve postfix/smtp[25748]: 8AADD1E87556C: to=, orig_to=, relay=none, delay=0.02, delays=0.02/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself)

  • How can I simulate blocking RTMP over port 80 on Windows?

    - by Christian Nunciato
    It seems like this should be so simple, but since this isn't my area of expertise, I'm having a hell of a time figuring out how to do it. Basically, I have a Flash app and I'm connecting to a Flash Media Server to stream some content. The URL I'm using to do this, for example, looks like this:

        rtmp://someserver.com/some/path/mp3:somefile

    Everything works -- but that's sort of the problem. What I'm trying to do is simulate my users attempting to play back my media under more restrictive conditions than the ones I have here (i.e., none) -- namely being stuck behind firewalls or proxy servers that block access to RTMP streams. Flash, according to Adobe, is equipped to handle proxy servers and firewalls automatically, like so (from the docs):

        When you do not specify a port number in an RTMP address, Flash will attempt to connect to port
        1935. If it fails it will then try to connect to port 443; if that fails, it will try port 80.
        [And if that fails, it will attempt to connect via RTMPT (i.e., HTTP tunneling) on port 80.]
        So no coding is required to access ports 1935, 443, or port 80 if you do not specify a port in
        the RTMP address.

    The problem I'm having is setting up a reliable environment in which to test that this behavior actually happens. I'm on a Windows machine, for example, so with Windows Firewall I can block certain ports and protocols (1935, 443), but I don't want to block port 80, because the final fallback protocol (RTMPT) is supposed to run on port 80, and Windows Firewall only gives me enough granularity (as far as I know, anyway) to block "all outbound TCP traffic to remote port 80" -- that is, I can't, apparently, block "all outbound RTMP traffic to port 80" while leaving RTMPT traffic to port 80 unaffected.

    My understanding thus far is that I'll probably need to set up a proxy server to do this. Is this correct? Or is there a simpler way (on Win 7, at least) to filter out RTMP to 1935, RTMP to 443, RTMP to 80, but still allow RTMPT to 80 (where all four hostnames are identical)? And if I do have to set up a proxy server, what's the simplest way to go on Windows? I've set up WinProxy, which seems a bit janky but apparently works -- but then what I can't figure out is how to tell Windows to force all TCP traffic (including RTMP, RTMPT and HTTP) through this proxy server so I can turn around and reject the requests for RTMP. Any help would be hugely appreciated. This isn't my realm of expertise and I've already spent more time on it than I probably should. :)
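
    In case it helps later readers: the "reject RTMP, pass everything else" gateway can be tiny, because plain RTMP is distinguishable from RTMPT/HTTP at the very first byte - the RTMP handshake opens with version byte 0x03, while RTMPT is ordinary HTTP POST traffic. A hedged sketch (hostnames and ports illustrative, not production code):

        import socket
        import threading

        UPSTREAM = ("someserver.com", 80)   # the real media server

        def pump(src, dst):
            try:
                while True:
                    data = src.recv(8192)
                    if not data:
                        break
                    dst.sendall(data)
            except OSError:
                pass
            finally:
                try:
                    dst.shutdown(socket.SHUT_WR)
                except OSError:
                    pass

        def handle(client):
            first = client.recv(1, socket.MSG_PEEK)   # peek without consuming
            if first == b"\x03":                      # RTMP handshake C0: block it
                client.close()
                return
            upstream = socket.create_connection(UPSTREAM)
            threading.Thread(target=pump, args=(client, upstream)).start()
            pump(upstream, client)

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", 8080))
        server.listen(5)
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle, args=(conn,)).start()

    Pointing the player at this gateway while Windows Firewall blocks outbound 1935 and 443 would approximate the restrictive network: RTMP fails on all three ports, RTMPT still gets through.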

  • limits.conf to set memory limits

    - by Rupert Jipe
    I would like to prevent any process from using more than 500 MB of RAM. AFAIK this is done using rss in /etc/security/limits.conf, but the process called gnome-panel is apparently using 618436 kB of VmRSS. How can this be?

    /etc/security/limits.conf:

        * hard rss 512000

    Process status:

        username@debian:~$ cat /proc/3002/status
        Name:   gnome-panel
        State:  S (sleeping)
        Tgid:   3002
        Pid:    3002
        PPid:   2910
        TracerPid:  0
        Uid:    1000 1000 1000 1000
        Gid:    1000 1000 1000 1000
        FDSize: 64
        Groups: 20 24 25 29 44 46 112 116 117 1000 1002 1003
        VmPeak:   916636 kB
        VmSize:   916636 kB
        VmLck:         0 kB
        VmHWM:    618436 kB
        VmRSS:    618436 kB
        VmData:   601972 kB
        VmStk:       104 kB
        VmExe:       516 kB
        VmLib:     29232 kB
        VmPTE:      1760 kB
        Threads:    1
        SigQ:   0/14001
        SigPnd: 0000000000000000
        ShdPnd: 0000000000000000
        SigBlk: 0000000000000000
        SigIgn: 0000000020001000
        SigCgt: 0000000180000000
        CapInh: 0000000000000000
        CapPrm: 0000000000000000
        CapEff: 0000000000000000
        CapBnd: ffffffffffffffff
        Cpus_allowed:   3
        Cpus_allowed_list:  0-1
        Mems_allowed:   00000000,00000001
        Mems_allowed_list:  0
        voluntary_ctxt_switches:    871965
        nonvoluntary_ctxt_switches: 47553
        PaX:    PeMRs

    Process limits:

        username@debian:~$ cat /proc/3002/limits
        Limit                     Soft Limit           Hard Limit           Units
        Max cpu time              unlimited            unlimited            seconds
        Max file size             unlimited            unlimited            bytes
        Max data size             unlimited            unlimited            bytes
        Max stack size            8388608              unlimited            bytes
        Max core file size        0                    0                    bytes
        Max resident set          524288000            524288000            bytes
        Max processes             100                  100                  processes
        Max open files            1024                 1024                 files
        Max locked memory         65536                65536                bytes
        Max address space         unlimited            unlimited            bytes
        Max file locks            unlimited            unlimited            locks
        Max pending signals       14001                14001                signals
        Max msgqueue size         819200               819200               bytes
        Max nice priority         0                    0
        Max realtime priority     0                    0
        Max realtime timeout      unlimited            unlimited            us
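
    One detail that may explain the observation: the cap is being applied - /proc/3002/limits shows "Max resident set 524288000" - but Linux kernels since 2.4.30 accept and then ignore RLIMIT_RSS, so it never constrains a process. Limiting address space instead ("as" in limits.conf, i.e. RLIMIT_AS) is the enforceable alternative. A sketch for reading the limits a process actually received, from inside Python:

        import resource

        # Same numbers as /proc/<pid>/limits; -1 (RLIM_INFINITY) means unlimited.
        for name in ("RLIMIT_RSS", "RLIMIT_AS", "RLIMIT_DATA"):
            soft, hard = resource.getrlimit(getattr(resource, name))
            print(name, soft, hard)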

  • Textures quality issues with Libgdx

    - by user1876708
    I have drawn several vector objects and characters (in Adobe Illustrator) for my game on Android. They are all scalable to any size without any quality loss (of course - it's vector ^^). I simulated my game board directly in Illustrator just before putting my assets into libgdx to implement them in my game. I set all the objects to the right size, so that they fit perfectly on the XHDPI device I am running my tests on. As you can see, it works great (for me at least ^^); the PNG quality is good, as expected! So I exported all my PNGs at this size, set up my assets in libgdx and built my game APK.

    And here is a screenshot of my game board (don't pay attention to the differences in placement and objects, but compare the objects present in both screenshots). As you can see, I have a loss of PNG quality in the game. It can be seen clearly on the hedgehog PNG, but also (though less obviously) on the mushroom (check the outline) and the hole PNG. If you really pay attention, on every object you can see pixels that are not visible in my first screenshot. And I just can't figure out why this is happening Oo If you have any ideas, you are very welcome! Thanks.

    PS: You can compare the two game boards more clearly at these links (look at them at 100%, displayed at high resolution):
        Good quality link, from Illustrator
        Poor quality link, from the game

    Second phase of tests: we display an object (the hedgehog) on our main menu screen to see how it looks. The thing is, it looks like it is supposed to, which means high quality with no pixels. The hedgehog PNG comes from an atlas:

        layer.addActor(hedgehog);

    No loss of quality with this method. So we think the problem comes from the method we are using to display it on our game board:

        blocks[9][3] = new Block(TextureUtils.hedgehog, new Vector2(9, 3));

    The block gets its size from the vector we associate with it, but we have a loss of quality with this method.

    Read the article

  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the self-tuning feature introduced in DB2 V9.x, many database parameters are set to AUTOMATIC by default in DB2 V9.7 so that DB2 can adjust their values as needed. Most work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance.

    DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 allocates all memory with a small page size (64 KB) and expands and shrinks the memory as needed. To take advantage of the large page sizes (up to 256 MB) supported by the SPARC T3, we need to set DATABASE_MEMORY to a fixed size manually, so that DB2 can use the 256 MB page size for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn one switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process.

    NUM_IOCLEANERS: This parameter defines the number of page cleaners. Its default value is AUTOMATIC, calculated from the number of available CPUs and the number of logical partitions. On a SPARC T3 system with over a hundred virtual CPUs and a single DB2 partition, DB2 sets it to the number of CPUs minus one. That leads to too many page cleaners competing to flush pages to disk, causing aio mutex contention, so we need to decrease the value. A good practice is to set it to the number of physical devices backing the database table space containers.
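    A hedged sketch of the two manual overrides described above; the database name MYDB and both values are illustrative, not recommendations, and DATABASE_MEMORY is specified in 4 KB pages:

    # Illustrative values only: size DATABASE_MEMORY (in 4 KB pages) to cover
    # your buffer pools, and set NUM_IOCLEANERS to the number of physical
    # devices behind the table space containers. Changes typically take
    # effect when the database is next activated.
    db2 update db cfg for MYDB using DATABASE_MEMORY 16000000
    db2 update db cfg for MYDB using NUM_IOCLEANERS 8

    # Verify the page sizes the db2sysc process actually received (Solaris):
    pmap -s `pgrep -x db2sysc` | grep -i ism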

    Read the article

  • Missing Operating System after trying to upgrade to Ubuntu 11

    - by Mauricio
    Hi there! After trying to upgrade from Ubuntu 10.04 to 11, the upgrade process stopped while running, and then I got an "out of disk, grub rescue" message when booting. After running Boot Repair, I got these results. Now I get "Missing Operating System" when trying to boot. Below I show the output of some commands I gathered from help forums, but I still have not reached a solution. Could you please help me? Any enlightenment will be very helpful!

    Disk Utility says "Disk has a few bad sectors". When trying to run the self-test I get "FAILED (Read)".

    Here is what GParted says about the /dev/sda1 partition (ext4):

    Flags: boot
    Status: not mounted
    Warning: e2label: Attempt to read block from filesystem resulted in short read while trying to open /dev/sda1
    Couldn't find valid filesystem superblock
    Unable to read the contents of this filesystem!

    From sudo fdisk -l I got:

    Disk /dev/sda: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000e0596

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *         2048   607428607   303713280   83  Linux
    /dev/sda2        607430654   625141759     8855553    5  Extended
    /dev/sda5        607430656   625141759     8855552   82  Linux swap / Solaris

    Disk /dev/sdb: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000c3c41

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *           63   625137344   312568641    c  W95 FAT32 (LBA)

    and from sudo fdisk /dev/sda1 I got:

    fdisk: unable to read /dev/sda1: Inappropriate ioctl for device

    From sudo mount /dev/sda1 /mnt I got:

    mount: wrong fs type, bad option, bad superblock on /dev/sda1,
    missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try dmesg | tail or so

    From sudo update-grub I got:

    error: cannot read from `/dev/sda'.
    /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?).
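    Given the failed read self-test and the short read on the primary superblock, a hedged recovery sketch (the backup paths are illustrative, and on a disk with bad sectors imaging should come before any repair attempt):

    # Image the failing disk to healthy storage first; the /mnt/backup/*
    # paths are illustrative.
    sudo ddrescue -d /dev/sda /mnt/backup/sda.img /mnt/backup/sda.log

    # Print (without formatting: -n) where ext4 keeps its backup superblocks...
    sudo mke2fs -n /dev/sda1
    # ...then point e2fsck at one of them; 32768 is the usual first backup
    # for a filesystem with 4 KB blocks.
    sudo e2fsck -b 32768 /dev/sda1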

    Read the article

  • My Intel Core 2 Duo processor is not detected

    - by walid
    My Intel Core 2 Duo processor is not detected. When I type uname -m -p I get:

    i686 unknown

    I have Ubuntu 10.10 Netbook Remix, but cat /proc/cpuinfo correctly identifies the two processors, as below:

    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 15
    model name      : Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz
    stepping        : 6
    cpu MHz         : 1826.000
    cache size      : 2048 KB
    physical id     : 0
    siblings        : 2
    core id         : 0
    cpu cores       : 2
    apicid          : 0
    initial apicid  : 0
    fdiv_bug        : no
    hlt_bug         : no
    f00f_bug        : no
    coma_bug        : no
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 10
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts tpr_shadow
    bogomips        : 3657.99
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 36 bits physical, 48 bits virtual
    power management:

    processor       : 1
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 15
    model name      : Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz
    stepping        : 6
    cpu MHz         : 1826.000
    cache size      : 2048 KB
    physical id     : 0
    siblings        : 2
    core id         : 1
    cpu cores       : 2
    apicid          : 1
    initial apicid  : 1
    fdiv_bug        : no
    hlt_bug         : no
    f00f_bug        : no
    coma_bug        : no
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 10
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts tpr_shadow
    bogomips        : 3657.53
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 36 bits physical, 48 bits virtual
    power management:

    The problem is with programs that use more than one core, like VirtualBox and Bitcoin, which refuse to use more than one core. Is there anything wrong, or anything I can do? I installed from a live ISO on a USB stick.
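    As a hedged aside: uname -p printing "unknown" is usually cosmetic on Linux and does not by itself mean the scheduler ignores a core. A quick sketch of checks that show what the kernel actually exposes:

    # Both of these should print 2 if the kernel sees both cores.
    nproc
    grep -c '^processor' /proc/cpuinfo

    # Affinity mask of the current shell; "0-1" means child processes
    # (e.g. VirtualBox) may run on either core.
    taskset -pc $$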

    Read the article

< Previous Page | 163 164 165 166 167 168 169 170 171 172 173 174  | Next Page >