Search Results

Search found 6311 results on 253 pages for 'limit clause'.

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge latency writes (I have seen 10 seconds and above) without any real explanation.

    Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging.

    We have tried multiple things to try and solve this that have been gleaned from a few websites, e.g.:

      - Making the first part of file names unique to aid 8.3 name generation
      - Writing files to multiple directories
      - Changing Intel Disk Write Caching
      - Windows File/Printer Sharing setting: Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications (System > Advanced > Performance > Advanced)
      - NtfsDisableLastAccessUpdate - use "fsutil behavior set disablelastaccess 1"
      - Disable 8.3 name generation - use "fsutil behavior set disable8dot3 1" + restart
      - Enable a large file system cache
      - Disable paging of the kernel code
      - IO Page Lock Limit
      - Turn off (or on) the Indexing Service

    But nothing seems to make much difference. There's a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason, and a solution (programmatic or not)?

    We can reproduce the problem using IOMeter and a simple setup:

      1. Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
      2. Select the worker thread and put a cross in the box next to the disk you want to use in the Disk Targets tab, and put '2000000' in Maximum Disk Size (NOTE: must have at least 1GB free space; sector size is 512 bytes).
      3. Create a new access specification and add it to the worker thread: Transfer Request Size = 22kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
      4. Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds, and both 'Number of Workers to Spawn Automatically' settings to zero.
      5. Select the worker thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
      6. Hit the 'Go' button (green flag) and monitor the Results tab.

    The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine. Our machine isn't exactly slow (Xeon 8-core rack server with 4GB and onboard RAID). I'd be interested to see what other people get.

    We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAMDisk and it's not as severe on a RAID10 array.

    Any help is much appreciated.

    Richard

    Right-click and select 'Open Link in New Tab': ProcMon Result
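
    For reference, the two fsutil tweaks listed above as a console session - a minimal sketch assuming the Windows Server 2003 fsutil syntax (the query form verifies that a setting stuck):

        rem Stop NTFS updating the last-access timestamp on every read
        fsutil behavior set disablelastaccess 1
        rem Stop 8.3 short-name generation (needs a restart to take effect)
        fsutil behavior set disable8dot3 1
        rem Verify the current value
        fsutil behavior query disable8dot3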

  • Veewee, Vagrant, Puppet, Erlang and RabbitMQ

    - by Tobias
    I am kinda stuck with a problem I have been trying to wrap my head around for days now. Here is what I am doing: using Veewee, I create a VirtualBox image and then create a Vagrant box from it (see here, here). Finally I run Puppet from Vagrant to install RabbitMQ (see here). Veewee, Vagrant and VirtualBox all run on Mac OS X 10.7.4. The Vagrant box itself is CentOS 6.2.

    This worked fine for quite some time, until I recreated the VirtualBox image a couple of days ago. During installation of the rabbitmq-plugins during my Puppet run I now get the following error:

        /Stage[main]/Rabbitmq/Exec[rabbitmq-plugins]/returns: erlexec: HOME must be set

    My RabbitMQ Puppet configuration can be found in my GitHub repo for that project, but here is the most important part:

        $version = "2.8.7"
        $url = "http://www.rabbitmq.com/releases/rabbitmq-server/v${version}/rabbitmq-server-${version}-1.noarch.rpm"

        package{"erlang":
          ensure => "present",
        }

        package{"rabbitmq-server":
          provider => "rpm",
          source   => $url,
          require  => Package["erlang"]
        }

        exec{"rabbitmq-plugins":
          path    => "/usr/bin:/usr/sbin:/bin",
          command => "rabbitmq-plugins enable rabbitmq_management",
          require => Package["rabbitmq-server"]
        }

    My additional repositories, e.g. epel, are defined in Veewee's postinstall.sh right at the top of the file. Finally, this is what I get when I do '/etc/init.d/rabbitmq-server status':

        [{pid,2834},
         {running_applications,[{rabbit,"RabbitMQ","2.8.7"},
                                {ssl,"Erlang/OTP SSL application","4.1.6"},
                                {public_key,"Public key infrastructure","0.13"},
                                {crypto,"CRYPTO version 2","2.0.4"},
                                {mnesia,"MNESIA CXC 138 12","4.5"},
                                {os_mon,"CPO CXC 138 46","2.2.7"},
                                {sasl,"SASL CXC 138 11","2.1.10"},
                                {stdlib,"ERTS CXC 138 10","1.17.5"},
                                {kernel,"ERTS CXC 138 10","2.14.5"}]},
         {os,{unix,linux}},
         {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:30] [kernel-poll:true]\n"},
         {memory,[{total,24993120},
                  {processes,10328496},
                  {processes_used,10321296},
                  {system,14664624},
                  {atom,1175905},
                  {atom_used,1143841},
                  {binary,17192},
                  {code,11416020},
                  {ets,766168}]},
         {vm_memory_high_watermark,0.4},
         {vm_memory_limit,205851852},
         {disk_free_limit,1000000000},
         {disk_free,7089795072},
         {file_descriptors,[{total_limit,924},
                            {total_used,4},
                            {sockets_limit,829},
                            {sockets_used,2}]},
         {processes,[{limit,1048576},{used,131}]},
         {run_queue,0},
         {uptime,6}]

    Sources on the web suggest that I have to set HOME. Of course HOME was set when I logged into the box: for user vagrant it was '/home/vagrant' and for root it was '/root'. As always, any hints/ideas/suggestions/assumptions are more than welcome. Thanks a lot! Cheers, Tobi
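
    A quick way to test the HOME hypothesis outside of Puppet - a hedged sketch to run inside the box (via vagrant ssh; env -u and env VAR=value are standard coreutils):

        # Reproduce: strip HOME and expect "erlexec: HOME must be set"
        sudo env -u HOME rabbitmq-plugins enable rabbitmq_management
        # Retry with HOME pinned
        sudo env HOME=/root rabbitmq-plugins enable rabbitmq_management

    If the second form works, the equivalent manifest-side fix would be adding an environment parameter (e.g. environment => ["HOME=/root"]) to the exec resource, since Puppet execs run with a minimal environment.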

  • Nginx and PHP-FPM running out of connections

    - by E3pO
    I keep running into errors like these:

        [02-Jun-2012 01:52:04] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 19 idle, and 49 total children
        [02-Jun-2012 01:52:05] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 19 idle, and 50 total children
        [02-Jun-2012 01:52:06] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 19 idle, and 51 total children
        [02-Jun-2012 03:10:51] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 18 idle, and 91 total children

    I changed my settings for php-fpm to these (max_children was at 100; I got a "max_children reached" warning and upped it to 150):

        pm.max_children = 150
        pm.start_servers = 75
        pm.min_spare_servers = 20
        pm.max_spare_servers = 150

    Resulting in:

        [02-Jun-2012 01:39:19] WARNING: [pool www] server reached pm.max_children setting (150), consider raising it

    I've just launched a new website that is getting a considerable amount of traffic. This traffic is legitimate, and users are getting 504 gateway timeouts when the limit is reached. I have limited connections to my server with iptables, I'm running fail2ban, and I'm keeping track of the nginx access logs. The traffic is all legitimate; I'm just running out of room for users. I'm currently running on a dual-core box with Ubuntu 64-bit.

    free:

                     total       used       free     shared    buffers     cached
        Mem:       6114284    5726984     387300          0     141612    4985384
        -/+ buffers/cache:     599988    5514296
        Swap:       524284       5804     518480

    My php.ini:

        max_input_time = 60

    My nginx config is:

        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 19000;
            # multi_accept on;
        }

        worker_rlimit_nofile 20000;  # each connection needs a filehandle (or 2 if you are proxying)

        client_max_body_size 30M;
        client_body_timeout 10;
        client_header_timeout 10;
        keepalive_timeout 5 5;
        send_timeout 10;

        location ~ \.php$ {
            try_files $uri /er/error.php;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 180;
            fastcgi_read_timeout 180;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 16k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_max_temp_file_size 0;
            fastcgi_intercept_errors on;
            fastcgi_pass unix:/tmp/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }

    What can I do to stop running out of connections? Why does this keep occurring? I'm monitoring my traffic on Google Analytics realtime, and when the user count gets above about 120 my php-fpm.log is full of these warnings.
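
    A rough way to sanity-check pm.max_children against the 6GB of RAM before raising it again - a sketch assuming the FPM workers are named php5-fpm (match the process name to your system; RSS includes shared pages, so this overestimates a little):

        # Count workers and the average resident memory per worker, in MB
        ps -C php5-fpm -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%d workers, avg %.1f MB\n", n, sum/n/1024}'

    Dividing free memory by that average gives an upper bound for pm.max_children; past that point, raising the limit just trades 504s for swapping.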

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this:

        Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
        Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
        Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
        Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory

    Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters):

        current values are: 45984  61312  91968
        new values are:     183936 245248 367872

    After that, we started seeing a bizarre error message:

        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval!

    Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache:

        The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one).

    While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check:

        gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes.

    We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here:

        /proc/sys/net/ipv4/route/gc_elasticity    (8)
        /proc/sys/net/ipv4/route/gc_interval      (60)
        /proc/sys/net/ipv4/route/gc_min_interval  (0)
        /proc/sys/net/ipv4/route/gc_timeout       (300)
        /proc/sys/net/ipv4/route/secret_interval  (600)
        /proc/sys/net/ipv4/route/gc_thresh        (?)
        rhash_entries (kernel parameter, default unknown?)

    We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high-traffic HAProxy instance?
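
    For reference, the adjustments described above expressed as sysctl commands - a sketch using standard Linux sysctl paths; the values are the ones quoted in the question, not recommendations:

        # Inspect current values
        sysctl net.ipv4.tcp_mem net.ipv4.route.gc_elasticity net.ipv4.route.secret_interval
        # Apply at runtime
        sysctl -w net.ipv4.tcp_mem="183936 245248 367872"
        sysctl -w net.ipv4.route.gc_elasticity=4
        # Persist across reboots by adding the same keys to /etc/sysctl.conf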

  • MySQL daemon keeps terminating unexpectedly

    - by Yehia A.Salam
    The MySQL daemon on my CentOS server keeps crashing. I got the logs from /var/logs/mysqld, but I am still not sure how to fix this:

        121114 16:22:56 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
        121114 21:55:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121114 21:55:11 [Note] Plugin 'FEDERATED' is disabled.
        121114 21:55:11 InnoDB: The InnoDB memory heap is disabled
        121114 21:55:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121114 21:55:11 InnoDB: Compressed tables use zlib 1.2.3
        121114 21:55:11 InnoDB: Using Linux native AIO
        121114 21:55:11 InnoDB: Initializing buffer pool, size = 128.0M
        121114 21:55:11 InnoDB: Completed initialization of buffer pool
        121114 21:55:11 InnoDB: highest supported file format is Barracuda.
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        121114 21:55:11 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        121114 21:55:12 InnoDB: Waiting for the background threads to start
        121114 21:55:13 InnoDB: 1.1.6 started; log sequence number 77177262
        121114 21:55:13 [Note] Event Scheduler: Loaded 0 events
        121114 21:55:13 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.5.12'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL) by Remi
        121115 00:19:44 mysqld_safe Number of processes running now: 0
        121115 00:19:44 mysqld_safe mysqld restarted
        121115  0:19:47 [Note] Plugin 'FEDERATED' is disabled.
        121115  0:19:47 InnoDB: The InnoDB memory heap is disabled
        121115  0:19:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121115  0:19:47 InnoDB: Compressed tables use zlib 1.2.3
        121115  0:19:47 InnoDB: Using Linux native AIO
        121115  0:19:47 InnoDB: Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        121115  0:19:47 InnoDB: Completed initialization of buffer pool
        121115  0:19:47 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121115  0:19:47 [ERROR] Plugin 'InnoDB' init function returned error.
        121115  0:19:47 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121115  0:19:47 [ERROR] Unknown/unsupported storage engine: InnoDB
        121115  0:19:47 [ERROR] Aborting

    Edit #1:

                     total       used       free     shared    buffers     cached
        Mem:           496        370        126          0         24        110
        -/+ buffers/cache:        234        261
        Swap:         1023          9       1014

    Edit #2: Also, the largest table in my MySQL is 20MB, so the memory used should be pretty moderate.

        SELECT CONCAT(table_schema, '.', table_name),
               CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows,
               CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
               CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx,
               CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
               ROUND(index_length / data_length, 2) idxfrac
        FROM   information_schema.TABLES
        ORDER  BY data_length + index_length DESC
        LIMIT  10;
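
    The fatal line above is InnoDB failing to mmap a 128MB buffer pool on a box showing only ~126MB free, so the restart dies on memory, not on table size (the earlier "Number of processes running now: 0" suggests something, possibly the OOM killer, took mysqld down first). A hedged sketch of the usual workaround - shrink the pool so it fits; the file path follows common CentOS/Remi packaging, and the size is illustrative:

        # /etc/my.cnf
        [mysqld]
        innodb_buffer_pool_size = 64M

    Then restart with "service mysqld restart" and watch whether the mmap error goes away.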

  • DPM 2010 "Disk failed or disk not found"

    - by SysAdmin
    I have an HP ProLiant ML110 G5 server with Windows Server 2008 R2, dedicated to DPM 2010. This server has a hard limit of 8TB of storage, which has already been met. I'm now stuck in a situation where my disk keeps failing with "Disk failed or disk not found" in Disk Management. Only after I reboot the system does the disk come back up.

    Today I was running my monthly tape backup on a certain protection group and the disk failed again while the tape job was running (so the job wasn't completed). This is the description of the error in the alerts:

        "The disk Disk 1 - Hitachi HDS722020ALA330 SCSI Disk Device cannot be detected or has stopped responding. All subsequent protection activities that use this disk will fail until the disk is brought back online. (ID 3120)"

    My backup system is becoming useless! I don't think this is a hardware issue (please correct me if I'm wrong), since the disk works fine for a certain period of time, which is becoming shorter and shorter. I basically have no more options to fix this problem. I tried to fix every error that came up in the Event Viewer, with no luck (including one regarding the SQL 2008 compatibility issue). The disk keeps failing!

    Now I'm only trying to recover/migrate the data from the disk that is having problems, but my issue is that I cannot add any drives to the server, since I already have the maximum storage capacity of 8TB installed. I thought about two simple options. Please tell me what you guys think about them:

      1. Unplug one of the two storage pool disks (disk0, the one without problems) from the machine and install a new one, in order to migrate the data with the Migration tool for DPM. Then remove the defective disk (disk1), put disk0 back, and run the synchronization/consistency check on all the groups to recreate replicas and recovery points.
      2. Run diskpart.exe and clean up the disk (losing all data), hoping it will work after I sync all the protection groups.

    Both solutions are not elegant, but I have no better options at the moment. Please, I need some help.

    Thanks for your time
    Angelo
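
    For option 2 above, the clean-up itself is a short diskpart session - a sketch using standard diskpart commands (destructive: clean wipes the selected disk's partition information, so double-check the disk number against Disk Management first):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> clean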

  • Django + gunicorn + virtualenv + Supervisord issue

    - by Florian Le Goff
    Dear all,

    I have a strange issue with my virtualenv + gunicorn setup, but only when gunicorn is launched via supervisord. I do realize that it may very well be an issue with my supervisord, and I would appreciate any feedback on a better place to ask for help...

    In a nutshell: when I run gunicorn from my user shell, inside my virtualenv, everything works flawlessly. I'm able to access all the views of my Django project. When gunicorn is launched by supervisord at system startup, everything is OK. But if I have to kill the gunicorn_django processes, or if I perform a supervisord restart, then once gunicorn_django has relaunched, every request is answered with a weird traceback:

        (...)
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/__init__.py", line 77, in <module>
          connection = connections[DEFAULT_DB_ALIAS]
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 92, in __getitem__
          backend = load_backend(db['ENGINE'])
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 50, in load_backend
          raise ImproperlyConfigured(error_msg)
        TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2' isn't an available database backend.
        Try using django.db.backends.XXX, where XXX is one of:
          'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3'
        Error was: cannot import name utils

    Full stack available here: http://pastebin.com/BJ5tNQ2N

    I'm running...

      - Ubuntu/maverick (up-to-date)
      - Python = 2.6.6
      - virtualenv = 1.5.1
      - gunicorn = 0.12.0
      - Django = 1.2.5
      - psycopg2 = '2.4-beta2 (dt dec pq3 ext)'

    gunicorn configuration:

        backlog = 2048
        bind = "127.0.0.1:8000"
        pidfile = "/tmp/gunicorn-hc.pid"
        daemon = True
        debug = True
        workers = 3
        logfile = "/home/hc/prod/log/gunicorn.log"
        loglevel = "info"

    supervisord configuration:

        [program:gunicorn]
        directory=/home/hc/prod/hc
        command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py
        user=hc
        umask=022
        autostart=True
        autorestart=True
        redirect_stderr=True

    Any advice? I've been stuck on this one for quite a while. It seems like some weird memory limit, as I'm not enforcing anything special:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 20
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Thank you.
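
    One thing worth ruling out: supervisord starts programs with a minimal environment, which can change how the virtualenv resolves packages compared to an interactive shell. A hedged sketch of pinning the environment in the [program:gunicorn] section (environment= is a real supervisord option; the values are assumptions for this layout):

        [program:gunicorn]
        directory=/home/hc/prod/hc
        command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py
        environment=HOME="/home/hc",PATH="/home/hc/prod/venv/bin:/usr/local/bin:/usr/bin:/bin"
        user=hc

    If the behavior changes after "supervisorctl reread && supervisorctl restart gunicorn", the difference between shell and supervisord environments is the culprit.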

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time, and I like what I see for various reasons relating to my potential use case.

    We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. The servers are also automatically built, and therefore generic, with standard tools and not bespoke per location. The data is many hundreds of files per day.

    I want to avoid a situation where I need to provision more VPS storage, or additional servers every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, since we do not know what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync only over ssh to known MAC-address directories and a known public key. There are plenty of holes to pick in this, I know.

    Let's say I write or use a script like s3cmd/s3sync to potentially push up the files:

      - Would I need to manage hundreds of access keys and have each server customized to include its own (doable, but key management becomes nightmarish)?
      - Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only access to any client running the script? (I could deal with a flood of data if someone got into a system.)
      - Having a bucket per remote machine does not seem feasible due to bucket limits?
      - I don't think I want to use a single common key: if one machine were breached, then potentially a malicious hack could get access to the filestore key and start deleting data for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong...

    I've written more than I should here, so perhaps it can be summarised thus: in a perfect world I just want to have one of our techs install a new remote server into a location, and it automagically starts sending files home with little or no intervention, and minimises risk. Pipedream or feasible?

    TIA, Aitch
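
    On the key-management worry above: one common pattern is per-server credentials that can only write beneath that server's own key prefix, so a breached box can upload but never read or delete other sites' data. A hedged sketch of such an IAM policy (bucket and prefix names are invented; check which IAM features existed at the time you deploy):

        {
          "Version": "2012-10-17",
          "Statement": [{
            "Sid": "WriteOnlyToOwnPrefix",
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::sensor-uploads/site-0042/*"
          }]
        }

    This keeps one bucket for all machines (sidestepping the bucket limit) while the per-prefix resource ARN does the isolation.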

  • Why is my cron daemon being killed every few minutes? OpenVZ?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (varying anywhere between 1 - 30 minutes before the process is killed again).

    Using strace -f service cron start, I can see that the process is being killed for some reason:

        nanosleep({56, 0}, 0x7fffa7184c80)      = 0
        stat("crontabs", {st_mode=S_IFDIR|S_ISVTX|0730, st_size=4096, ...}) = 0
        stat("/etc/crontab", {st_mode=S_IFREG|0644, st_size=1100, ...}) = 0
        stat("/etc/cron.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        stat("/etc/cron.d/php5", {st_mode=S_IFREG|0644, st_size=475, ...}) = 0
        stat("/etc/cron.d/anacron", {st_mode=S_IFREG|0644, st_size=244, ...}) = 0
        rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
        rt_sigaction(SIGCHLD, NULL, {0x4036f0, [CHLD], SA_RESTORER|SA_RESTART, 0x2b0e8465f230}, 8) = 0
        rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
        nanosleep({60, 0}, <unfinished ...>
        +++ killed by SIGKILL +++

    There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the process is dying. The system has about 500 MB of free memory, and cat /proc/loadavg returns 0.10 0.21 0.45, so resources shouldn't be the issue. I also tried removing and reinstalling the cron package using apt-get.

    What else should I check? How do I find out what's killing my crond?

    Edit: I'm on a virtual machine under OpenVZ (and as such, I have no swap). With cron running, free -m reports:

                     total       used       free     shared    buffers     cached
        Mem:          1024        465        558          0          0          0
        -/+ buffers/cache:        465        558
        Swap:            0          0          0

    My OpenVZ user beancounters via cat /proc/user_beancounters:

        Version: 2.5
               uid  resource      held      maxheld     barrier       limit     failcnt
            172087: kmemsize      8275718   25561636    51200000     51200000          0
                    lockedpages         0        968        2048         2048          0
                    privvmpages    113442     266465      262200       262200    3740757
                    shmpages          788       4004      128000       128000          0
                    dummy               0          0           0            0          0
                    numproc            39         98         600          600          0
                    physpages       50521     208434           0  9223372036854775807  0
                    vmguarpages         0          0      512000       512000          0
                    oomguarpages    50521     208447      512000       512000          0
                    numtcpsock          7        323        4096         4096          0
                    numflock            7         64        2048         2048          0
                    numpty              1          4          32           32          0
                    numsiginfo          0         23        1024         1024          0
                    tcpsndbuf      137984   17878480    20480000     20480000          0
                    tcprcvbuf      114688    6983504    20480000     20480000          0
                    othersockbuf   162960    1074440    20480000     20480000          0
                    dgramrcvbuf         0      24208    10240000     10240000          0
                    numothersock      101        353        2048         2048          0
                    dcachesize     459171     747444    10240000     10240000          0
                    numfile          1010       4221       50000        50000          0
                    dummy               0          0           0            0          0
                    dummy               0          0           0            0          0
                    dummy               0          0           0            0          0
                    numiptent          39        424        2048         2048          0
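
    When hunting OpenVZ resource kills, the failcnt column is the quickest signal - a sketch of a one-liner that prints only counters whose fail count (last column) is non-zero, given the column layout shown above:

        awk 'NR > 2 && $NF > 0' /proc/user_beancounters

    In the output above that would flag privvmpages (failcnt 3740757), which would point at the container hitting its private-memory barrier - consistent with a silent SIGKILL and nothing in the container's own logs.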

  • Tutorial for configuring OpenVPN [on hold]

    - by user2699451
    I have been through 10+ tutorials on setting up OpenVPN, and each tutorial gives a different problem... Does anyone know of a decent and helpful website/tutorial which I could follow to get it set up? I have been battling through it for almost 2 months now. Yes, I have also bugged forums.openvpn, but I think I have "reached my post limit" with them. I have to configure it remotely via ssh.

    UPDATE: okay, I have been asked to be more clear on the topic. I followed this tutorial (as an example) - http://www.servermom.com/how-to-build-openvpn-server-on-centos-6-x/732/ - and I had no issues setting up, except that when I boot into Windows and run the OpenVPN GUI client, it connects and gives this error:

        WARNING: Bad encapsulated packet length from peer (21331), which must be > 0 and <= 1576 -- please ensure that --tun-mtu or --link-mtu is equal on both peers -- this condition could also indicate a possible active attack on the TCP link -- [Attemping restart...]

    Here is my server config:

        port 1194    #- port
        proto udp    #- protocol
        dev tun
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1450
        reneg-sec 0
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        plugin /usr/lib64/openvpn/plugin/lib/openvpn-auth-pam.so /etc/pam.d/login    #- Co$
        #plugin /etc/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf    #- Uncomment$
        client-cert-not-required
        username-as-common-name
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        keepalive 5 30
        comp-lzo
        persist-key
        persist-tun
        status 1194.log
        verb 3

    and my client config:

        client
        dev tun
        proto udp
        remote [server ip] 1194    # - Your server IP and OpenVPN Port
        resolv-retry infinite
        nobind
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1450
        persist-key
        persist-tun
        ca ca.crt
        auth-user-pass
        comp-lzo
        reneg-sec 0
        verb 3

    OpenVPN client log:

        Thu Oct 31 11:51:29 2013 OpenVPN 2.0.9 Win32-MinGW [SSL] [LZO] built on Oct 1 2006
        Thu Oct 31 11:51:44 2013 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
        Thu Oct 31 11:51:44 2013 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
        Thu Oct 31 11:51:44 2013 LZO compression initialized
        Thu Oct 31 11:51:44 2013 Control Channel MTU parms [ L:1576 D:140 EF:40 EB:0 ET:0 EL:0 ]
        Thu Oct 31 11:51:44 2013 Data Channel MTU parms [ L:1576 D:1450 EF:44 EB:135 ET:32 EL:0 AF:3/1 ]
        Thu Oct 31 11:51:44 2013 Local Options hash (VER=V4): '2547efd2'
        Thu Oct 31 11:51:44 2013 Expected Remote Options hash (VER=V4): '77cf0943'
        Thu Oct 31 11:51:44 2013 Attempting to establish TCP connection with x.x.x.x:1194
        Thu Oct 31 11:51:44 2013 TCP connection established with x.x.x.x:1194
        Thu Oct 31 11:51:44 2013 TCPv4_CLIENT link local: [undef]
        Thu Oct 31 11:51:44 2013 TCPv4_CLIENT link remote: x.x.x.x:1194
        // after this it just hangs, nothing happens

    So I don't know what I am doing wrong, but I am getting a bit impatient, and on each forum I post this, I get stupid/unrelated/unhelpful answers...
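
    One mismatch visible above that's worth ruling out: both configs say proto udp, yet the client log reports "Attempting to establish TCP connection", and the bad-packet-length warning is the classic symptom of a protocol or MTU mismatch between peers. A hedged sketch of checking what the server is actually bound to (CentOS 6 era netstat; flags are standard):

        # Is OpenVPN listening on UDP 1194, TCP 1194, or both?
        netstat -anup | grep 1194
        netstat -antp | grep 1194

    If only UDP shows up while the client insists on TCP, the installed GUI client config (or the very old 2.0.9 client itself) is the place to look.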

  • Optical SPDIF audio from motherboard not working with receiver

    - by simon b
    Hi, I hope someone can help; I can't get my SPDIF optical out working through my receiver, and all the responses I can see on the web assume you have a sound card, while I settled for the (seemingly high-end) sound on my motherboard (Asus P7P55D-E PRO), which appears to limit some of my options. My setup is a "new out of the box" one and is:

      - Windows 7 PC (using PowerDVD 10 for DVDs/Blu-rays and Windows Media Player for music)
      - Asus P7P55D-E PRO motherboard - has 8-channel audio TRS jacks and SPDIF optical and coaxial out
      - An old Yamaha receiver, whose only multi-channel input options are optical in and 6-channel RCA in; however, it can still handle DTS and DD
      - Boston Acoustics Soundware XS 5.1 speakers

    I've currently got the SPDIF optical out from the motherboard connected to the in on my receiver, have SPDIF enabled in the sound menu, and the light is glowing red down the fibre. But I'm getting no sound at all.

    What I want is to be able to play DVDs/Blu-rays in 5.1, but also to be able to play music in multi-channel mode (even though I know this will be "fake" multichannel; it's more about where I sit in the room and my requirement to use the sub, because the Boston is a satellite/sub setup). My questions are:

      - Will optical work at all for multi-channel? The latest posts I can see suggest it does, but some people seem to say optical only outputs stereo. Whom to believe?
      - Even if it does work, I've read that I have to disable AC-3 decoding, or make various other changes, which don't seem to be possible without the menu options that a sound card brings. Is the motherboard-only option just too inflexible?
      - Although my SPDIF device is enabled in the sound menu, it insists under "Jack information" that it is a "rear panel RCA jack", when of course it is not (both TOSLINK and RCA jacks do exist). Has the PC just forgotten that it has an optical?
      - I think I could relatively easily connect the 8-channel 3.5mm TRS jacks to my receiver's 6-channel input jacks by way of TRS/RCA cables, but would that not stop me from being able to play music from Media Player in multi-channel mode, as I'm not sure the motherboard can cope?
      - Or do I need to bite the bullet and buy a sound card? And if so, how can I be sure the one I get doesn't have the same problem?

    Any thoughts gratefully received,

    Cheers, simon

  • Python Django sites on Apache+mod_wsgi with nginx proxy: highly fluctuating performance

    - by Halfgaar
    I have an Ubuntu 10.04 box running several dozen Python Django sites using mod_wsgi (embedded mode; the faster mode, if properly configured). Performance fluctuates wildly: sometimes fast, sometimes several seconds of delay. The smokeping graphs are all over the place.

    Recently, I also added an nginx proxy for the static content, in the hopes it would cure the highly fluctuating performance. But even though it reduced the number of requests Apache has to process significantly, it didn't help with the main problem. When clicking around on websites while running htop, it can be seen that sometimes requests are almost instant, whereas sometimes Apache consumes 100% CPU for a few seconds. I really don't understand where this fluctuation comes from.

    I have configured the mpm_worker for Apache like this (1 server with 50 threads, max 50 clients):

        StartServers            1
        MinSpareThreads        50
        MaxSpareThreads        50
        ThreadLimit            64
        ThreadsPerChild        50
        MaxClients             50
        ServerLimit             1
        MaxRequestsPerChild     0
        MaxMemFree           2048

    Munin and apache2ctl -t both show a consistent presence of workers; they are not destroyed and created all the time. Yet, it behaves as such. This tells me that once a sub-interpreter is created, it should remain in memory, yet it seems sites have to reload all the time.

    I also have a nginx+gunicorn box, which performs quite well. I would really like to know why Apache is so random. This is a virtual host config:

        <VirtualHost *:81>
            ServerAdmin [email protected]
            ServerName example.com
            DocumentRoot /srv/http/site/bla

            Alias /static/ /srv/http/site/static
            Alias /media/ /srv/http/site/media
            WSGIScriptAlias / /srv/http/site/passenger_wsgi.py

            <Directory />
                AllowOverride None
            </Directory>
            <Directory /srv/http/site>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Versions: Ubuntu 10.04, Apache 2.2.14, mod_wsgi 2.8, nginx 0.7.65.

    Edit: I've put some code in the settings.py file of a site that writes the date to a tmp file whenever it's loaded. I can now see that the site is not randomly reloaded all the time, so Apache must be keeping it in memory. So, that's good, except it doesn't bring me closer to an answer...

    Edit: I just found an error that might also be related to this:

        File "/usr/lib/python2.6/subprocess.py", line 633, in __init__
            errread, errwrite)
        File "/usr/lib/python2.6/subprocess.py", line 1049, in _execute_child
            self.pid = os.fork()
        OSError: [Errno 12] Cannot allocate memory

    The server has 600 of 2000 MB free, which should be plenty. Is there a limit that is set on Apache or WSGI somewhere?
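
    Since the sites run in embedded mode, one experiment that often evens out per-request latency is mod_wsgi daemon mode, which gives each site a fixed set of long-lived processes instead of competing for interpreter slots inside the Apache workers. A minimal sketch for one vhost (WSGIDaemonProcess/WSGIProcessGroup are standard mod_wsgi directives; the process/thread counts are assumptions to tune):

        WSGIDaemonProcess bla processes=2 threads=15 maximum-requests=10000
        WSGIProcessGroup bla
        WSGIScriptAlias / /srv/http/site/passenger_wsgi.py

    Daemon mode also makes the fork-time memory cost more predictable, which is relevant to the Errno 12 above: os.fork() momentarily needs to commit the parent's address space, so one bloated embedded process can fail to fork even with 600 MB free.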

  • How can I prevent an unintentional DDOS running ColdFusion 8 with IIS 6?

    - by Eric Belair
    We had an interesting outage today on one of our client's websites. Out of nowhere, the website was inaccessible. The website runs by itself on a dedicated physical Windows 2000 server (probably overkill, I know, but that's a discussion for a different day). After restarting IIS and the ColdFusion Application Service, the problem came back several times.

    My initial thought was that it was a DNS issue, which happens occasionally - the last time was after Hurricane Sandy, when our ISP was out and we had to make some network config changes. But it was not a DNS issue. My second thought was that it was a DDOS attack, but there's very little reason anyone would want to take this site down. When we called our ISP, the operator on the other end noted that traffic was spiking significantly. As it turned out, the client had unintentionally caused a DDOS on the website after they FTPed a very large video file and then mass-emailed a link to it. Hundreds of people clicked the link and brought the site to its knees.

    I am primarily a website programmer, but I often have to contribute to server administration. Sadly, I'm the resident ColdFusion and IIS expert, but I don't have a lot of experience with this issue. What are some basic steps that I can take to prevent this from happening in the future, since we cannot always control what files the client posts to the website?

    Here are some ideas I had, but I'm unsure of the impact:

      1. Limit the number of connections in IIS (see the sketch below).
      2. Put media files on a separate server (like an Amazon site, etc.).
      3. File requests of this type currently go through a server script (i.e. /www.site.com/viewFile.cfm?fileId=1424545, where the fileId references a file off the webroot) that logs requests and pushes the file to the browser using CFCONTENT. I could edit this script to reject requests when they exceed a certain amount in a given time frame (i.e. a 5MB file can be accessed globally 10 times in an hour). This may cause some users frustration, but if hundreds of users are attempting to view the file, the site is going to crash anyway, as it did today, which is way more frustrating, since there is no "pretty" message explaining why they can't get to the file.

    I'm open to any suggestions, as I'm continuing my research to report to the CTO with the best options, so that we can put a solution into effect. Thank you.
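
    For idea 1, IIS 6 exposes a per-site connection cap in the metabase - a hedged sketch using the stock admin script (the AdminScripts path and the site ID of 1 are assumptions; W3SVC/1 is the first website):

        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/MaxConnections 500
        iisreset /noforce

    A cap like this sheds excess requests with an error instead of letting the worker process hit its processing-time limit and recycle, which is roughly what took the site down.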

  • Win XP error 0x80041003 using GetObject/winmgmts

    - by John Lewis
    My computer is called "neil" and I want to set some values using WMI in VBScript. I adapted the script below from one supplied by Microsoft. When I run it in my browser I get:

        Error Type: (0x80041003)
        /dressage/30/pdf2.asp, line 8

    I suspect it is some registry/security setting. Any advice?

    John Lewis

    FULL SCRIPT:

        call Print_HTML_Page("http://neil/dressage/ascii.asp", "ascii")

        Sub SetPDFFile(strPDFFile)
            Const HKEY_LOCAL_MACHINE = &H80000002
            strKeyPath = "SOFTWARE\Dane Prairie Systems\Win2PDF"
            strComputer = "."
            Set objReg = GetObject( _
                "winmgmts:{impersonationLevel=impersonate}!\\" & _
                strComputer & "\root\default:StdRegProv")
            strValueName = "PDFFileName"
            objReg.SetExpandedStringValue HKEY_LOCAL_MACHINE, _
                strKeyPath, strValueName, strPDFFile
        End Sub

        Sub Print_HTML_Page(strPathToPage, strPDFFile)
            SetPDFFile(strPDFFile)
            Set objIE = CreateObject("InternetExplorer.Application")
            'From http://www.tek-tips.com/viewthread.cfm?qid=1092473&page=5
            On Error Resume Next
            strPrintStatus = objIE.QueryStatusWB(6)
            If Err.Number <> 0 Then
                MsgBox "Cannot find a printer. Operation aborted."
                objIE.Quit
                Set objIE = Nothing
                Exit Sub
            End If
            With objIE
                .visible = 0
                .left = 200
                .top = 200
                .height = 400
                .width = 400
                .menubar = 0
                .toolbar = 1
                .statusBar = 0
                .navigate strPathToPage
            End With
            'Wait until IE has finished loading
            Do While objIE.busy
                WScript.Sleep 100
            Loop
            On Error Goto 0
            objIE.ExecWB 6, 2
            'Wait until IE has finished printing
            WScript.Sleep 2000
            objIE.Quit
            Set objIE = Nothing
        End Sub

    Update: Thanks for your reply. The line breaks seem to have been introduced in the process of pasting into this form. Well spotted - I was using a PDF file name of "ascii". I added a .pdf extension but still get the error. I suspect you're right that it's to do with admin rights. Here's more about the setup and what I'm trying to achieve.

    Win2PDF is a product for writing PDFs that works by simulating a Windows printer. You "print" the page, select Win2PDF in the print dialog, and it then asks for a file name. I have it installed on my PC (called Neil) and it works fine in this conventional way. My aim is to write an HTML page to a PDF file using Win2PDF - but via ASP/VBScript/JavaScript rather than with manual intervention. The script for doing this was provided by Win2PDF's tech support, but when it did not work, that was the limit of their understanding.

    In the sample script, the file ascii.asp just produces a table of ASCII codes/characters. The URL given is on my own PC, which has IIS set up to run scripts, which it does fine. The error I get occurs on about the fourth line executed. I am logged in with full admin rights - I think! But I'm no expert. I hope this helps to give some more specific suggestions about how to check/fix the admin rights.
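
    0x80041003 is WMI's access-denied result, and a page run through IIS executes as the web server's anonymous account rather than as the logged-in user - so a first check is which identity the ASP page actually runs under. A hedged one-line diagnostic (classic ASP; if it prints something like IUSR_NEIL, that account would explain denied HKLM registry writes):

        <% Response.Write CreateObject("WScript.Network").UserName %>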

  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-Port PCI-X to SATA RAID Controller with 12 7200 RPM 1.5 TB disks and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet Controller.

    uname -a output:

        2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux

    The server is a file server running Nginx with the stub status module enabled, so I can see the current number of connections. The problem presents itself when I have a high number of simultaneous connections in a writing state. Usually around 350; at this very moment it's at 590 and the server is almost unusable and stuck at 230mbit/s.

    If I run top and hit 1 to see per-core CPU usage, all 4 cores sit at around 99% iowait. If I run iotop, the nginx workers are the only processes producing any read load, currently at around 25MB/s. I have each of the workers bound to its own core.

    Initially I figured it was just the disks being bugged. But I've run fsck and smartmontools checks and found no errors. I also ran an iozone test, which you can see the result of here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa

    Additionally, when the number of connections is low I have no problem getting good speed. If I wget over the local network it easily hits 60MB/sec.

    Right now I just tried putting a file in /dev/shm, then symlinked a file from the public dir to it and used wget over the local network, and only got 50KB/s. Also, if I try to cp /dev/shm/test /root/test, it quickly copies around 740MB and then slows down HEAVILY, again with iotop reporting 99% iowait.

    I'm not really sure how to go about figuring out what the problems are. It could be a natural disk limitation, but then the file from /dev/shm ought to transfer, so there seems to be a network limit; but that's fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that.

    Any suggestions on how to proceed with debugging would be very welcome. If additional information is required then let me know and I'll try to get it. Thanks.
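
    When iowait pegs all cores, per-device latency and utilization are the next things to look at - a sketch using sysstat's iostat (the package on Fedora is sysstat; flags are standard):

        # Extended per-device stats every 5 seconds; watch await, svctm and %util
        iostat -dxk 5

    A device stuck near 100% %util with await in the hundreds of milliseconds would confirm the array (or controller cache settings) as the bottleneck, rather than the network or TCP stack.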

  • HP MSR 30-10a Router - Route Traffic over Default Route

    - by SteadH
    We have a brand new HP MSR 30-10a router. We have a fairly simple routing situation - we have two IP blocks, one of which has a route out. We need things on the first block to go through the router, and out. I have an old Cisco 2801 router doing the job right now.

    For our example:

      - IP Block 1: 50.203.110.232/29, router interface on this block at 50.203.110.237, route out via 50.203.110.233.
      - IP Block 2: 50.202.219.1/27, router interface on this block at 50.202.219.20.

    I have a static route created for:

        0.0.0.0 0.0.0.0 50.203.110.233

    The router seems to understand this. When on the CLI via serial cable, I can ping 8.8.8.8 and hear responses from Google DNS. Woo hoo!

    The issue arrives when any client sits on the IP Block 2 side. I configured my client with a static IP of 50.202.219.15/27, default gateway 50.202.219.20. I can ping myself. I can ping the near side of the router (50.202.219.20), and I can ping the far side of the router (50.203.110.237). I cannot ping anything else in IP Block 1, nor can I ping 8.8.8.8.

    Here is my configuration file:

        <HP>display current-configuration
        #
        version 5.20.106, Release 2507, Standard
        #
        sysname HP
        #
        dar p2p signature-file flash:/p2p_default.mtd
        #
        port-security enable
        #
        undo ip http enable
        #
        password-recovery enable
        #
        vlan 1
        #
        domain system
         access-limit disable
         state active
         idle-cut disable
         self-service-url disable
        #
        user-group system
         group-attribute allow-guest
        #
        local-user admin
         password cipher $c$3$40gC1cxf/wIJNa1ufFPJsjKAof+QP5aV
         authorization-attribute level 3
         service-type telnet
        #
        cwmp
         undo cwmp enable
        #
        interface Aux0
         async mode flow
         link-protocol ppp
        #
        interface Cellular0/0
         async mode protocol
         link-protocol ppp
        #
        interface Ethernet0/0
         port link-mode route
         ip address 50.203.110.237 255.255.255.248
        #
        interface Ethernet0/1
         port link-mode route
         ip address 50.202.219.20 255.255.255.224
        #
        interface NULL0
        #
        ip route-static 0.0.0.0 0.0.0.0 50.203.110.233 permanent
        #
        load xml-configuration
        #
        load tr069-configuration
        #
        user-interface tty 12
        user-interface aux 0
        user-interface vty 0 4
         authentication-mode scheme
        #

    My guess right now is that there is some sort of "permission" needed to use the default route. The manuals haven't turned up a lot in this area that doesn't make the situation much more complicated (but maybe it needs to be more complicated?).

    Background: we use HP switches, and I love the CLI. I bought HP thinking the command line interface would be similar, or at least speak the same language. Whoops! I'd be happy to provide more information or perform any additional tests. Thanks in advance!

    Update 1: The manual mentions routing rules. I hadn't previously added these (since our Cisco 2801 seems to route anything by default). I added:

        ip ip-prefix 1 permit 0.0.0.0 0 less-equal 32

    Alas, still no dice.
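
    A couple of Comware checks that narrow this down - a hedged sketch (display and ping are standard Comware commands; -a sets the ping's source address, which exercises the return path for each block):

        display ip routing-table
        ping -a 50.202.219.20 50.203.110.233
        ping -a 50.202.219.20 8.8.8.8

    If pings sourced from the Block 2 interface fail while router-sourced (Block 1) pings succeed, the symptom matches the upstream gateway at 50.203.110.233 having no return route for 50.202.219.0/27 - something no amount of local router configuration would fix.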

  • How to enable caching on Apache / Ubuntu Linux?

    - by Jim Mischel
    I have a large (several megabytes) XML file that's updated rather frequently (every 10 minutes or less) and gets a lot of traffic. I'd like to implement some caching to reduce bandwidth and server load.

    Looking at the Apache documentation, I see a dizzying array of configuration options that involve various combinations of mod_expires, mod_headers, and mod_cache (and variants). I end up running in circles and the results aren't what I expect. I'm comfortable editing the various configuration files if I have some idea what I'm supposed to change. But at the moment I'm poking around in the dark, and that's never a comfortable feeling. So, perhaps if I describe what I want, somebody here can take me by the hand and say, "This is what you need to do."

    Periodically this file, call it "stuff.xml", is updated and a new version copied to the directory. The external URL would be, for example, http://example.com/stuff.xml. Understand, this part works. Whenever I request the file, I get the expected result.

    But the file is big and I want to save bandwidth, so first I'd like to implement conditional GET semantics with the If-Modified-Since header. How do I do this? I've enabled mod_headers and mod_expires and added the <FilesMatching> section to my httpd.conf, as recommended in countless examples I've seen online, but that didn't change the behavior when I made a conditional GET request. I always get a status 200 with the entire document. So how the heck do I implement this? That'll cut down on needless transfers.

    I'd also like to limit the amount of data transferred. Seeing as this is XML, gzipping it should save me 50% or more. My next step would be to somehow gzip the file and, if it's not too difficult, store it in memory. That'll cut down on per-access data transfer, and also reduce disk transfers. So how do I implement this type of caching? Thanks in advance.
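
    A minimal sketch of the two pieces described above. Conditional GET usually needs no extra configuration for plain files on disk (Apache sends Last-Modified and honors If-Modified-Since for its static handler), so the sketch covers expiry and on-the-fly compression; directives are standard mod_expires/mod_deflate, and the path is an assumption:

        <Directory /var/www/html>
            ExpiresActive On
            ExpiresByType application/xml "access plus 10 minutes"
            AddOutputFilterByType DEFLATE application/xml
        </Directory>

    If conditional GETs still return a full 200, it's worth checking that the URL is really served by Apache's static file handler and not by an application or proxy layer, since dynamic responses carry no Last-Modified header by default.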

  • Hyper-V causes boot loop/failure on a non-Gigabyte Win8 Pro system

    - by Nick
    Hardware:

      - Intel i7 2600K (not overclocked, SLAT compatible, virtualization features enabled in BIOS)
      - Asus Maximus IV Extreme-Z (Z68)
      - 16Gb RAM
      - 256Gb SSD
      - Other non-trivial working parts

    Adding Hyper-V is causing a boot loop, resulting in an attempt at automatic repair by Windows 8 after the second or third loop.

    I'm trying to get the Windows Phone 8 SDK installed and I've narrowed my troubles down to the Hyper-V feature in Win8. This is required to run the WP8 emulator, and there are no install options to omit this feature. My first attempt completely borked the OS, as I did not have a recent restore point or system image, so I did a completely clean install and made plenty of backups/restore points. I skipped the SDK install and went straight for the Windows feature add-on for Hyper-V. This confirmed that Hyper-V is the issue, as the same behavior resulted.

    I cannot find any hint in the Event Logs. Cancelling automatic recovery causes the same behavior to repeat. I don't have any other VM products installed. My only recourse is to use a restore point, try something else, install it again, and see what happens. No luck so far. I'm on my 10th attempt here. Any help would be much appreciated.

    EDIT: I found a collection of tips here: http://social.msdn.microsoft.com/Forums/en-US/wptools/thread/b06cc9f2-aa5e-4cb3-9df1-0c273e1dfd68

    I've been attempting various BIOS settings to resolve this issue, with no luck. I've tried setting 'CPUID Limit' to disabled. This seems to work partly, as Win8 boots but no USB devices work at all. I also attempted disabling the USB 3.0 controller, as the MSDN topic lists an issue with USB controllers on Gigabyte boards. This also doesn't work: the USB devices light up, but no input is received by the OS. All of my other BIOS CPU settings are in line with the info in the post. I'm totally stumped.

    BIOS screenshots:

        http://i.imgur.com/yKN5u.png
        http://i.imgur.com/Y9wI4.png
        http://i.imgur.com/F6EuO.png
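
    One way to break the loop without burning another restore point, if a recovery command prompt is reachable: keep the Hyper-V role installed but stop the hypervisor from loading at boot. A sketch using the real bcdedit option (run from an elevated prompt):

        bcdedit /set hypervisorlaunchtype off
        rem Re-enable later, once the BIOS interaction is sorted out:
        rem bcdedit /set hypervisorlaunchtype auto

    If Windows boots cleanly with the hypervisor off, that isolates the loop to the hypervisor/firmware interaction rather than the role installation itself.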

  • init never reaping zombie/defunct processes

    - by st9
    Hi,

    On my Fedora Core 9 webserver with kernel 2.6.18.8, init isn't reaping zombie processes. This would be bearable if it wasn't for the process table eventually reaching an upper limit where no new processes can be allocated.

    Sample output of ps -el | grep 'Z':

        F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY          TIME CMD
        5 Z     0  2648     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        1 Z    51  2656     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        1 Z     0  2670     1  0  75   0 -     0 exit   ?        00:00:02 crond <defunct>
        4 Z     0  2874     1  0  82   0 -     0 exit   ?        00:00:00 mysqld_safe <defunct>
        5 Z     0 28104     1  0  76   0 -     0 exit   ?        00:00:00 httpd <defunct>
        5 Z     0 28716     1  0  76   0 -     0 exit   ?        00:00:06 lfd <defunct>
        5 Z    74 10172     1  0  75   0 -     0 exit   ?        00:00:00 sshd <defunct>
        5 Z     0 11199     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11202     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11205     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11208     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11211     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11240     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11246     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11249     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11252     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        1 Z     0 14106     1  0  80   0 -     0 exit   ?        00:00:00 anacron <defunct>
        5 Z     0 14631     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>

    Is this an OS bug? Misconfiguration? I'm looking for inspiration as to the source of this problem. Thanks
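
    Zombies are reaped by their parent's wait() call, so the first question is whether these processes were really re-parented to PID 1 - a sketch listing each zombie alongside its parent (standard procps options):

        # List zombies with their parent PID and command
        ps -eo pid,ppid,stat,comm | awk '$3 ~ /Z/'

    In the output above the PPID column is already 1, so if init still never reaps them, suspicion falls on init itself (or on a patched/containerized init, as in OpenVZ/Virtuozzo guests); if the PPID were something else, the real bug would be that parent never calling wait().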

  • PPTP VPN Not Working - Peer failed CHAP authentication, PTY read or GRE write failed

    - by armani
    Brand-new install of CentOS 6.3. I followed this guide: http://www.members.optushome.com.au/~wskwok/poptop_ads_howto_1.htm

    And I got PPTPd running [v1.3.4]. I got the VPN to authenticate users against our Active Directory using winbind, smb, etc. All my tests to see if I'm still authenticated to the AD server pass ["kinit -V [email protected]", "smbclient", "wbinfo -t"].

    VPN users were able to connect for like... an hour. I tried connecting from my Android phone using domain credentials and saw that I got an IP allocated for internal VPN users (which I've since changed the range of, but even setting it back to the initial one doesn't work). Ever since then, no matter what settings I try, I pretty much consistently get this in my /var/log/messages (and the VPN client fails):

        [root@vpn2 ~]# tail /var/log/messages
        Aug 31 15:57:22 vpn2 pppd[18386]: pppd 2.4.5 started by root, uid 0
        Aug 31 15:57:22 vpn2 pppd[18386]: Using interface ppp0
        Aug 31 15:57:22 vpn2 pppd[18386]: Connect: ppp0 <--> /dev/pts/1
        Aug 31 15:57:22 vpn2 pptpd[18385]: GRE: Bad checksum from pppd.
        Aug 31 15:57:24 vpn2 pppd[18386]: Peer armaniadm failed CHAP authentication
        Aug 31 15:57:24 vpn2 pppd[18386]: Connection terminated.
        Aug 31 15:57:24 vpn2 pppd[18386]: Exit.
        Aug 31 15:57:24 vpn2 pptpd[18385]: GRE: read(fd=6,buffer=8059660,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Aug 31 15:57:24 vpn2 pptpd[18385]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Aug 31 15:57:24 vpn2 pptpd[18385]: CTRL: Client 208.54.86.242 control connection finished

    Now before you go blaming the firewall (all other forum posts I find seem to go there), this VPN server is on our DMZ network. We're using a Juniper SSG-5 gateway, and I've assigned a WAN IP to the VPN box itself, zoned into the DMZ zone. Then I have full "Any IP / Any Protocol" open traffic rules between DMZ<--Untrust Zone and DMZ<--Trust Zone. I'll limit this later to just the authenticating traffic it needs, but for now I think we can rule out the firewall blocking anything.

    Here's my /etc/pptpd.conf (omitting comments):

        option /etc/ppp/options.pptpd
        logwtmp
        localip [EXTERNAL_IP_ADDRESS]
        remoteip [ANOTHER_EXTERNAL_IP_ADDRESS, AND HAVE TRIED AN ARBITRARY GROUP LIKE 5.5.0.0-100]

    Here's my /etc/ppp/options.pptpd.conf (omitting comments):

        name pptpd
        refuse-pap
        refuse-chap
        refuse-mschap
        require-mschap-v2
        require-mppe-128
        ms-dns 192.168.200.42     # This is our internal domain controller
        ms-wins 192.168.200.42
        proxyarp
        lock
        nobsdcomp
        novj
        novjccomp
        nologfd
        auth
        nodefaultroute
        plugin winbind.so
        ntlm_auth-helper "/usr/bin/ntlm_auth --helper-protocol=ntlm-server-1"

    Any help is GREATLY appreciated. I can give you any more info you need to know, and it's a new test server, so I can perform any tests/reboots required to get it up and going. Thanks a ton.
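
    Since the AD binding tests pass but CHAP still fails, it's worth exercising the exact helper pppd's winbind plugin calls - a hedged sketch (these ntlm_auth and wbinfo options are standard winbind tooling; domain and username are placeholders):

        # Simulate the authentication the plugin performs (prompts for the password)
        /usr/bin/ntlm_auth --request-nt-key --domain=EXAMPLE --username=armaniadm
        # Re-confirm the machine trust account while you're at it
        wbinfo -t

    If ntlm_auth succeeds at the shell but fails under pppd, the usual suspects are permissions on the winbindd privileged pipe directory for the user pppd runs as.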

  • Server currently under DDOS, not sure what to do.

    - by Volex
    Hi,

    My web server is currently under a DDOS attack, I believe. The messages log is full of these kinds of messages:

        May 13 15:51:19 kernel: nf_conntrack: table full, dropping packet.
        May 13 15:51:19 last message repeated 9 times
        May 13 15:51:24 kernel: __ratelimit: 78 callbacks suppressed
        May 13 15:51:24 kernel: nf_conntrack: table full, dropping packet.
        May 13 15:52:06 kernel: possible SYN flooding on port 80. Sending cookies.

    and netstat shows a huge number of the following:

        tcp    0    0 my.host.com:http   bb176da0.virtua.com.br:4998    SYN_RECV
        tcp    0    0 my.host.com:http   187.0.43.109:2694              SYN_RECV
        tcp    0    0 my.host.com:http   109.229.4.145:1722             SYN_RECV
        tcp    0    0 my.host.com:http   189-84-163-244.sodobr:63267    SYN_RECV
        tcp    0    0 my.host.com:http   bd66839d.virtua.com.br:3469    SYN_RECV
        tcp    0    0 my.host.com:http   69.101.56.190.dsl.int:52552    SYN_RECV
        tcp    0    0 my.host.com:http   pc-62-230-47-190.cm.vt:2262    SYN_RECV
        tcp    0    0 my.host.com:http   189-84-163-244.sodobr:63418    SYN_RECV
        tcp    0    0 my.host.com:http   pc-62-230-47-190.cm.vt:1741    SYN_RECV
        tcp    0    0 my.host.com:http   zaq3d739320.zaq.ne.jp:2141     SYN_RECV
        tcp    0    0 my.host.com:http   netacc-gpn-4-80-73.po:52676    SYN_RECV

    tcpdump shows:

        17:11:08.564510 IP 187-4-1xx-4.xxx.ipd.brasiltelecom.net.br.54821 > my.host.com.http: S 999692166:999692166(0) win 65535
        17:11:08.566347 IP 114-44-171-67.dynamic.hinet.net.1129 > my.host.com.http: S 605369055:605369055(0) win 65535
        17:11:08.570210 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5590 > my.host.com.http: S 2813379182:2813379182(0) win 16384
        17:11:08.571290 IP dsl-189-143-30-99-dyn.prod-infinitum.com.mx.1615 > my.host.com.http: S 281542700:281542700(0) win 65535
        17:11:08.583847 IP dsl-189-143-30-99-dyn.prod-infinitum.com.mx.1617 > my.host.com.http: S 499413892:499413892(0) win 65535
        17:11:08.588680 IP 170.51.229.112.2569 > my.host.com.http: S 2195084898:2195084898(0) win 65535
        17:11:08.588773 IP gw2-1.211.ru.3180 > my.host.com.http: F 2315901786:2315901786(0) ack 2620913033 win 64240
        17:11:08.590656 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5614 > my.host.com.http: S 2813715032:2813715032(0) win 16384
        17:11:08.591212 IP 203.82.82.54.15848 > my.host.com.http: S 4070423507:4070423507(0) win 16384
        17:11:08.591254 IP 203.82.82.54.2545 > my.host.com.http: S 1790910784:1790910784(0) win 16384
        17:11:08.591289 IP 203.82.82.54.28306 > my.host.com.http: S 578615626:578615626(0) win 16384
        17:11:08.591591 IP gw2-1.211.ru.3191 > my.host.com.http: F 2316435991:2316435991(0) ack 2634205972 win 64240
        17:11:08.591790 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5593 > my.host.com.http: S 2813659017:2813659017(0) win 16384
        17:11:08.593691 IP gw2-1.211.ru.3203 > my.host.com.http: F 2316834420:2316834420(0) ack 2629074987 win 64240

    I'm not sure what I can do to limit/mitigate this. Currently no webpages are being served. Any help gratefully appreciated.
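
    Two kernel-side knobs that map directly to the log lines above - a sketch (key names vary slightly by kernel generation: older kernels use net.ipv4.netfilter.ip_conntrack_max instead; the table size is illustrative, and a bigger conntrack table costs RAM):

        # Grow the connection-tracking table ("nf_conntrack: table full, dropping packet")
        sysctl -w net.netfilter.nf_conntrack_max=262144
        # SYN cookies are already kicking in ("Sending cookies"); make sure they stay enabled
        sysctl -w net.ipv4.tcp_syncookies=1

    This buys headroom while the flood is filtered upstream; a full table drops legitimate packets along with the attack's.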

  • what is Remote Desktop Services in Windows Server 2008 R2 all about?

    - by fejesjoco
    Seriously, I'm lost in all that sales mumbo-jumbo. Let's say I want 1 or 2 users to be able to remotely log on to a server and run Word, Visual Studio, Firefox, and whatever. Do I gain anything at all if I install Remote Desktop Services? Or do I just install the Desktop Experience feature pack, enable Remote Desktop, and voila, nobody will ever notice the difference?

    Here's what TechNet says about Remote Desktop Session Host:

        A Remote Desktop Session Host (RD Session Host) server is the server that hosts Windows-based programs or the full Windows desktop for Remote Desktop Services clients. Users can connect to an RD Session Host server to run programs, to save files, and to use network resources on that server. Users can access an RD Session Host server by using Remote Desktop Connection or by using RemoteApp.

    The good old simple Remote Desktop can also host a full Windows desktop for remote clients so that they can run programs, save files, and do all that stuff. Why do they write about it like it's such a great new invention, besides that they want to sell it? RDSH doesn't seem all that different. What do I actually install when I install RDSH, since all those features are already there in Windows?

    What's even more confusing is that you need to take special care when you want to install applications on an RDSH so that they will be usable by many concurrent users. Why? All modern applications install their program files in one directory, store some common settings in the ProgramData folder and the HKLM hive, and store user-specific settings in the Users folder and the HKCU hive. They are designed to be usable by many users on the same machine; 2 or 2000 users can use them concurrently without any effort. I can sign in with 2 users on a server with only Remote Desktop enabled, and both of us can run Word or anything else without any problems, can't we? So what changes if I set RDSH to install mode, and what happens if I don't? Why is the feature to switch between install and execute mode there at all?

    Yes, I know of some advantages of Remote Desktop Services: there's no 2-user limit, it supports virtualization, video acceleration and so on, and it has a whole infrastructure with gateway, web access, connection broker, etc. But I don't need those, so if you take these away, how are these two technologies different? From the articles it seems like they are completely different technologies, whereas it looks to me like they are completely the same at the core, and Remote Desktop Services just adds some additional features without reinventing anything.
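
    On the install/execute question specifically: the mode switch is just a console command, so it can be inspected directly - a short sketch (change user is a real Terminal Services / RDSH command):

        change user /install
        rem ... run the application's installer here ...
        change user /execute
        change user /query

    The rough point of install mode, as documented, is that Terminal Services shadows per-user registry and INI writes made during setup so each later user gets their own copy at first run; that per-session bookkeeping is part of what the RDSH role adds beyond the 2-session Remote Desktop for Administration.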

    Read the article

  • Windows Server 2003 W3SVC Failing, Brute Force attack possibly the cause

    - by Roaders
    This week my website has disappeared twice for no apparent reason. I logged onto my server (Windows Server 2003 Service Pack 2) and restarted the World Wide Web Publishing Service; the website was still down. I tried restarting a few other services like DNS and ColdFusion, and the website was still down. In the end I restarted the server and the website reappeared.

    Last night the website went down again. This time I logged on and looked at the event log. SCARY STUFF! There were hundreds of these, at a frequency of around 3-5 a minute:

        Event Type: Information
        Event Source: TermService
        Event Category: None
        Event ID: 1012
        Date: 30/01/2012
        Time: 15:25:12
        User: N/A
        Computer: SERVER51338
        Description: Remote session from client name a exceeded the maximum allowed failed logon attempts. The session was forcibly terminated.

    At about the time my website died there was one of these:

        Event Type: Information
        Event Source: W3SVC
        Event Category: None
        Event ID: 1074
        Date: 30/01/2012
        Time: 19:36:14
        User: N/A
        Computer: SERVER51338
        Description: A worker process with process id of '6308' serving application pool 'DefaultAppPool' has requested a recycle because the worker process reached its allowed processing time limit.

    which is obviously what killed the web service. There were then a few of these:

        Event Type: Error
        Event Source: TermDD
        Event Category: None
        Event ID: 50
        Date: 30/01/2012
        Time: 20:32:51
        User: N/A
        Computer: SERVER51338
        Description: The RDP protocol component "DATA ENCRYPTION" detected an error in the protocol stream and has disconnected the client.
        Data:
        0000: 00 00 04 00 02 00 52 00   ......R.
        0008: 00 00 00 00 32 00 0a c0   ....2..À
        0010: 00 00 00 00 32 00 0a c0   ....2..À
        0018: 00 00 00 00 00 00 00 00   ........
        0020: 00 00 00 00 00 00 00 00   ........
        0028: 92 01 00 00               ...

    with no more of the first error type after that.

    I am concerned that someone is trying to brute-force their way into my server. I have disabled all the accounts apart from the IIS ones and Administrator (which I have renamed), and I have also changed the password to an even more secure one. I don't know why this brute-force attack caused the web service to stop, and I don't know why restarting the service didn't fix the problem. What should I do to make sure my server is secure, and what should I do to make sure the web server doesn't go down any more? Thanks.
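
    One commonly suggested first step for this pattern, assuming the failed logons are arriving over RDP from the internet, is to move the RDP listener off the default port 3389 so automated brute-force scripts stop finding it. A sketch (3390 is an arbitrary example port; Terminal Services must be restarted, or the server rebooted, for the change to take effect, and the firewall has to allow the new port):

        rem move the RDP listener to a non-standard port (decimal value)
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 3390 /f

    Restricting which source addresses can reach that port at the firewall is stronger still, if the set of admin locations is known.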

    Read the article

  • Wireless traffic stops when downloading large files at high speed: packets lost (Linksys WRT120N router)

    - by Torious
    The problem

    Note: first I'd like to understand WHY this is happening. Of course, a solution would be nice too. :)

    When downloading a large file over HTTP at high speed, my wireless traffic basically stops: I can't open webpages and the download itself pauses. It pauses pretty much immediately after starting; sometimes at 800 KB, sometimes at a few MB. After some time, the download (and other traffic) resumes, but the problem keeps recurring during the same download. The problem does not occur when using a wired connection through the same router (Linksys WRT120N). Also note that the connection is not dropped when this happens; it's just that the traffic stops and I can't browse to web pages, etc. (SYN packets are sent but nothing is received, and so on.)

    Inspection with Wireshark shows that the following happens:

    1. The server sends data packets, which are acknowledged by the client.
    2. The server sends a packet, but the SEQ indicates some packets were lost (6 packets in one occurrence).
    3. The server sends a few more packets and the client acknowledges these using "selective acknowledgement".
    4. The server stops sending data for a while (since the lost packets were not acknowledged, or because the router stops forwarding them?).
    5. Eventually, the server does a retransmission and traffic resumes as normal.

    This all seems like normal behavior to me when packet loss occurs. It's the consistent packet loss throughout a large, high-speed download that puzzles me. What might cause this?

    My own idea is the following: my internet is pretty fast (100 mbps), so when starting a large-file download, the router buffers the incoming data (since wireless introduces some slight delay / lower speed, in part due to other networks), but the buffer overflows and the router drops packets to regulate traffic (and because it has no choice). But how could that happen? Doesn't the TCP window size limit the amount of data that can go unacknowledged? So how can the router's buffer overflow if there can only be something like 64 KB waiting to be acknowledged?

    Note: I've disabled TCP window scaling and dynamic window size through netsh options in an attempt to fix this, but it doesn't seem to matter. Also, Wireshark shows a pattern of the server sending 2 packets (of 1514 bytes) and the client sending an ACK, so does that rule out a possible buffer overflow? And a few more subsequent packets are received... I'm at a loss here. Thanks for any insights.

    Things that are (probably) NOT the cause / that I have experimented with:

    - The browser
    - Various TCP options in Windows 7 (netsh etc.)
    - Router settings such as MTU, beacon interval, UPnP, ...
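
    For reference, a sketch of the netsh commands typically behind the "TCP options in Windows 7" experiments mentioned above. Run from an elevated prompt; "disabled" turns off receive-window auto-tuning (the "dynamic window size" feature) and "normal" restores the default:

        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp show global
        rem to restore the default later:
        netsh interface tcp set global autotuninglevel=normal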

    Read the article

  • UPS vs Solar Power in case of power failure for a server [on hold]

    - by Zen 8000k
    I am looking for a low-power, low-end PC able to run 24/7 without overheating, plus a way to keep it running in case of power failure. Power failures can last up to 72 hours. The PC doesn't need a monitor or keyboard. A modem must also be protected in case of power failure. When I say low end, I don't mean crap: the CPU needs to be x86 and score at least 1000 in this chart: http://www.cpubenchmark.net/index.php

    What's the best way to do this?

    EDIT: more info. I need to run a home server. The server will mainly perform light tasks. An x86 CPU sadly is the only route for my use. I want to be able to run the server and the router/modem in case of power failure. Now, regarding how long the power will fail:

    1) 1 hour is OK for most situations (say 90%).
    2) 3 hours is OK (say 98%).
    3) 6 hours is more than OK (say 99.5%).
    4) In extreme cases the power might fail for days. I believe this is very unlikely to happen.

    More is great but, really, how often will power fail for more than 3 hours? I believe once a year at most. Well, that's too rare to care about. Given the above, I am looking for a cost-effective way to achieve 1-3 hours of power, or 6 hours if possible.

    Solutions. You guys gave me great ideas:

    1) Power generator: no good, as power will still fail for the 10 seconds before the generator kicks in. Also, I read online that "clean" power generators cost $1.5k+, so that's out of budget. A non-clean generator might damage electronics, right?
    2) Solar power: I don't know for sure about this. It sounds like a great idea, almost too good to be true, honestly. For only $200 I get 100+ W? What are the drawbacks here?
    3) UPS: this seems to be the best. The only problem is the cost. Cost < $200 = great; $400 = budget limit.
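
    To sanity-check UPS sizing against the 1-3 hour target, here is a back-of-the-envelope runtime estimate. Every figure is an assumption for illustration only: two 12 V / 9 Ah batteries, 85% inverter efficiency, and a 35 W combined load for the PC plus modem:

        # runtime (hours) ~= battery watt-hours x inverter efficiency / load watts
        echo "scale=1; (2 * 12 * 9 * 0.85) / 35" | bc
        # -> 5.2

    Since runtime scales inversely with load, the lower the idle draw of the PC, the cheaper the UPS that reaches the 3-hour mark.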

    Read the article

< Previous Page | 223 224 225 226 227 228 229 230 231 232 233 234  | Next Page >