Search Results

Search found 4310 results on 173 pages for 'terminate handler'.

Page 90/173 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • Way to speed up load-balanced ssl using nginx?

    - by paulnsorensen
    So the setup for our website is 4 nodes running Rails 3 and nginx 1 that all use the same GoDaddy certificate. Because we are a paid site, we have to maintain PCI-DSS compliance and thus have to use the more expensive SSL ciphers -- we also force SSL using Rack. I've recently switched over to Linode's NodeBalancer (which I've read is an HA cluster), and we're not getting the performance we'd ideally like. From what I've read, it looks like terminating the SSL on the nodes using the high-cost cipher is what is causing the poor performance, but I'd like to be thorough. Is there anything I can do? I've read about other ways to terminate the SSL before the NodeBalancer (like using stud), but I don't know enough about these solutions. We certainly don't want to do anything experimental or anything that has a single point of failure. If there really isn't anything I can do to speed up the SSL handshake, my alternative would be to serve certain pages of the Rails app over a secure subdomain and the rest over an insecure one. I've found a few guides that walk through that, but my resulting question is: in this situation, would it be better to have nginx handle forcing SSL on the secure subdomain instead of Rails? Thanks!
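    If SSL stays terminated by nginx on each node, one low-risk way to cut handshake cost is to make most handshakes resumed rather than full ones (a sketch of the relevant directives; the names are standard nginx, the values are illustrative and not PCI advice):

        # inside the server block that terminates SSL on each node
        ssl_session_cache    shared:SSL:10m;   # cache sessions so returning clients skip the full handshake
        ssl_session_timeout  10m;
        keepalive_timeout    70;               # amortize one handshake over many requests

    Terminating in front of the NodeBalancer with something like stud would also work, but as the post says, it adds a component you would have to make redundant yourself.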

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components: Apache 2.2.3-31 on CentOS 5.4, PHP 5.2.10, Xdebug 2.0.5 with remote debugging enabled, APC 3.0.19, Doctrine ORM for PHP 1.2.1 (using query caching and result caching via APC), and MySQL 5.0.77 using query caching. I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output:

         PID  USER   PR NI VIRT  RES  SHR S %CPU %MEM TIME+   COMMAND
        1471  apache 16  0 626m  201m 18m S  0.0 10.2 1:11.02 httpd
        1470  apache 16  0 622m  198m 18m S  0.0 10.1 1:14.49 httpd
        1469  apache 16  0 619m  197m 18m S  0.0 10.0 1:11.98 httpd
        1462  apache 18  0 622m  197m 18m S  0.0 10.0 1:11.27 httpd
        1460  apache 15  0 622m  195m 18m S  0.0 10.0 1:12.73 httpd
        1459  apache 16  0 618m  191m 18m S  0.0  9.7 1:13.00 httpd
        1461  apache 18  0 616m  190m 18m S  0.0  9.7 1:14.09 httpd
        1468  apache 18  0 613m  190m 18m S  0.0  9.7 1:12.67 httpd
        7919  apache 18  0 116m   75m 15m S  0.0  3.8 0:19.86 httpd
        9486  apache 16  0 97.7m  56m 14m S  0.0  2.9 0:13.51 httpd

    I have no long-running scripts (they all terminate eventually; the longest takes maybe 2 minutes), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated. (Maybe someone can correct me on that.) My hunch is that it could be APC, since it stores data between requests, but at the same time it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?
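    One blunt mitigation while hunting the leak (a sketch for Apache 2.2's prefork MPM; the values are illustrative) is to recycle each child after a fixed number of requests. It doubles as a diagnostic: if a child's memory still climbs steeply within its shortened lifetime, the growth is per-request state that PHP isn't freeing, not gradual accumulation:

        # httpd.conf -- prefork MPM section
        <IfModule prefork.c>
            StartServers            8
            MaxClients            150
            MaxRequestsPerChild  1000   # 0 means "never recycle"; a finite value caps per-child growth
        </IfModule>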

    Read the article

  • Help Email Account Management among multiple users

    - by CogitoErgoSum
    So I preface this by saying this may belong in IT Security; I'm not too sure, so feel free to move it. Currently we have an email account, [email protected], hosted via Google Apps (as is all our email). We had an incident where we had to terminate an employee. This employee, however, had the password for this account, as we have 20-30 people utilizing it at any given point to manage customer emails etc. Thinking on this, I feel there must be a better way to manage access. With Google you can associate up to 10 email accounts to another; the problem is we have more like 20-30 people going. We were evaluating tools such as SalesForce and Assistly, where people have their own login credentials and the system contains the appropriate SMTP information for the [email protected] address, so emails are sent from it rather than from a user's personal account. Aside from those options, does anyone have any other thoughts? One suggestion floated was moving everyone to desktop clients and saving the password info there, so they could only log in from their physical workstation, but we may have situations where we'd like employees to work remotely. Does anyone have experience with this sort of system, where ~20-30 people respond from one email box, and with how to manage security and access?

    Read the article

  • Terminating multi-mode fiber

    - by murisonc
    I'm looking at the feasibility of terminating multi-mode fiber connections ourselves. We would be using LC connectors. I've done some research and found two different methods: one requires polishing the ends and using epoxy, while the other doesn't. I like the idea of not having to polish the ends, but there doesn't seem to be much information on quality or ease of use. I've found two vendors (3M and Corning) that offer kits for terminating fiber without polishing or using epoxy. Does anyone have experience with both methods who can offer some advice? Copper is easy, but fiber seems to be a whole different animal. EDIT: After looking into the fusion splicing suggested in the answer, I've determined it's not for us. It's my understanding that it is primarily used for outside plant and is better suited to single-mode fiber. It's a good answer but doesn't address the question directly. Some more information about our situation: we will only be terminating multi-mode fiber inside a building, and only doing between 4 and 20 pairs a year. Hiring an outside person won't work due to our location. There are currently a couple of people on-site who can terminate fiber (working for another company and charging large fees), but they can only do ST and SC connectors and we only use LC. So once again: does anyone have experience with terminating using both epoxy-type connectors and the other type (similar to the Corning Unicam)?

    Read the article

  • Nagios notifications definitions

    - by Colin
    I am trying to monitor a web server by searching for a particular string on a page via HTTP. The command is defined in command.cfg as follows:

        # 'check_http-mysite' command definition
        define command {
                command_name check_http-mysite
                command_line /usr/lib/nagios/plugins/check_http -H mysite.example.com -s "Some text"
        }

        # 'notify-host-by-sms' command definition
        define command {
                command_name notify-host-by-sms
                command_line /usr/bin/send_sms $CONTACTPAGER$ "Nagios - $NOTIFICATIONTYPE$: Host $HOSTALIAS$ is $HOSTSTATE$ ($OUTPUT$)"
        }

        # 'notify-service-by-sms' command definition
        define command {
                command_name notify-service-by-sms
                command_line /usr/bin/send_sms $CONTACTPAGER$ "Nagios - $NOTIFICATIONTYPE$: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ ($OUTPUT$)"
        }

    Now, if nagios doesn't find "Some text" on the home page of mysite.example.com, it should notify a contact via SMS through the Clickatell HTTP API, which I have a script for that I have tested and found to work fine. Whenever I change the command definition to search for a string which is not on the page and restart nagios, I can see in the web interface that the string was not found. What I don't understand is why the notification isn't sent, though I have defined the host, hostgroup, contact, contactgroup, service and so forth. What am I missing? These are my definitions; in the web interface I can see that notifications have been defined and enabled, yet I get neither email nor SMS notifications during hard state changes.

    host.cfg:

        define host {
                use            generic-host
                host_name      HAL
                alias          IBM-1
                address        xxx.xxx.xxx.xxx
                check_command  check_http-mysite
        }

    *hostgroups_nagios2.cfg*:

        # my website
        define hostgroup {
                hostgroup_name  my-servers
                alias           All My Servers
                members         HAL
        }

    *contacts_nagios2.cfg*:

        define contact {
                contact_name                   colin
                alias                          Colin Y
                service_notification_period    24x7
                host_notification_period       24x7
                service_notification_options   w,u,c,r,f,s
                host_notification_options      d,u,r,f,s
                service_notification_commands  notify-service-by-email,notify-service-by-sms
                host_notification_commands     notify-host-by-email,notify-host-by-sms
                email                          [email protected]
                pager                          +254xxxxxxxxx
        }

        define contactgroup {
                contactgroup_name  site_admin
                alias              Site Administrator
                members            colin
        }

    *services_nagios2.cfg*:

        # check for particular string in page via http
        define service {
                hostgroup_name         my-servers
                service_description    STRING CHECK
                check_command          check_http-mysite
                use                    generic-service
                notification_interval  0 ; set > 0 if you want to be renotified
                contacts               colin
                contact_groups         site_admin
        }

    Could someone please tell me where I'm going wrong? Here are the generic-host and generic-service definitions:

    *generic-service_nagios2.cfg*:

        # generic service template definition
        define service {
                name                          generic-service ; The 'name' of this service template
                active_checks_enabled         1 ; Active service checks are enabled
                passive_checks_enabled        1 ; Passive service checks are enabled/accepted
                parallelize_check             1 ; Active service checks should be parallelized (disabling this can lead to major performance problems)
                obsess_over_service           1 ; We should obsess over this service (if necessary)
                check_freshness               0 ; Default is to NOT check service 'freshness'
                notifications_enabled         1 ; Service notifications are enabled
                event_handler_enabled         1 ; Service event handler is enabled
                flap_detection_enabled        1 ; Flap detection is enabled
                failure_prediction_enabled    1 ; Failure prediction is enabled
                process_perf_data             1 ; Process performance data
                retain_status_information     1 ; Retain status information across program restarts
                retain_nonstatus_information  1 ; Retain non-status information across program restarts
                notification_interval         0 ; Only send notifications on status change by default.
                is_volatile                   0
                check_period                  24x7
                normal_check_interval         5
                retry_check_interval          1
                max_check_attempts            4
                notification_period           24x7
                notification_options          w,u,c,r
                contact_groups                site_admin
                register                      0 ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL SERVICE, JUST A TEMPLATE!
        }

    *generic-host_nagios2.cfg*:

        define host {
                name                          generic-host ; The name of this host template
                notifications_enabled         1 ; Host notifications are enabled
                event_handler_enabled         1 ; Host event handler is enabled
                flap_detection_enabled        1 ; Flap detection is enabled
                failure_prediction_enabled    1 ; Failure prediction is enabled
                process_perf_data             1 ; Process performance data
                retain_status_information     1 ; Retain status information across program restarts
                retain_nonstatus_information  1 ; Retain non-status information across program restarts
                max_check_attempts            10
                notification_interval         0
                notification_period           24x7
                notification_options          d,u,r
                contact_groups                site_admin
                register                      1 ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL HOST, JUST A TEMPLATE!
        }
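    A quick way to isolate whether the fault is in the check or in the notification path is to run the notification command by hand, then watch the log during the next hard state change (a sketch assuming Debian-style nagios3 paths; adjust to your install):

        # run the SMS script exactly as Nagios would, using the pager number from contacts_nagios2.cfg
        /usr/bin/send_sms +254xxxxxxxxx "Nagios - PROBLEM: HAL/STRING CHECK is CRITICAL (test)"

        # then make the check fail and watch whether Nagios even attempts a notification
        tail -f /var/log/nagios3/nagios.log | grep -i notification

    If no NOTIFICATION lines appear at all, the problem is in the object definitions (templates, periods, options) rather than in the SMS script itself.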

    Read the article

  • Programmatically add/delete users in Exchange

    - by Terry Gamble
    I've got the following set up: an ASP.NET site that allows my internal employees to add new-hire information (no secure data, just stuff like name/address/phone), and when they submit, it goes into a SQL database. Every few minutes a service runs that checks the database and, if there are new entries, adds them into Exchange. The issue is that I'm not happy with the way the service does things (it's not putting the address, etc. in), and as I don't have the source code for it, I'm thinking of recreating it. My issue, though, is finding even a starting point. I know I'll have to create the scripts through code, where data like this is retrieved from SQL:

        Joe Smith
        123 Main Street
        Nowhere, USA 19999

    and put into a PowerShell cmdlet (I'm not sure of the exact syntax, but I can get that figured out unless someone already has it) where the user is created in Active Directory as a normal user and the mailbox is created simultaneously. From there I just need to fill out fields in Active Directory with the person's address, etc. Finally, I need a deletion routine for when we terminate someone; however, I'm sure that will simply be a cmdlet that is easily shelled out to, much like the initial one, once I can figure out how to start that... Anyone have some good reference points, or already done it and can share?
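    A rough sketch of the Exchange Management Shell side (these are the stock Exchange 2007/2010 cmdlets; the name, OU, UPN and field values are placeholders to adapt):

        # create the AD user and the mailbox in one step
        $pass = ConvertTo-SecureString "TempP@ssw0rd!" -AsPlainText -Force
        New-Mailbox -Name "Joe Smith" -Alias jsmith `
            -UserPrincipalName jsmith@example.com `
            -OrganizationalUnit "example.com/Staff" -Password $pass

        # fill in the address fields on the AD object
        Set-User -Identity jsmith -StreetAddress "123 Main Street" `
            -City "Nowhere" -PostalCode "19999" -Phone "555-0100"

        # termination routine: removes the mailbox and the AD account together
        Remove-Mailbox -Identity jsmith -Confirm:$false

    A C# service can run these through a PowerShell runspace, which fits the existing poll-SQL-every-few-minutes design.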

    Read the article

  • Running Jackd on Ubuntu for my External Firewire Sound card

    - by Asaf
    Hello, I'm running Ubuntu 10.04 and I have an external sound card: a Phonic Firefly 302. I've connected the device, installed Jackd, and added the following lines to /etc/security/limits.conf:

        @audio - rtprio 99
        @audio - memlock 500000
        @audio - nice -10

    I logged out, logged back in, ran qjackctl (sudo qjackctl to be exact), opened the settings and chose "firewire" as the driver option, then pressed "Start". This was the output:

        20:10:19.450 Patchbay deactivated.
        20:10:19.578 Statistics reset.
        20:10:19.601 ALSA connection graph change.
        20:10:19.828 ALSA connection change.
        20:10:21.293 Startup script...
        20:10:21.293 artsshell -q terminate
        sh: artsshell: not found
        20:10:21.695 Startup script terminated with exit status=32512.
        20:10:21.695 JACK is starting...
        20:10:21.695 /usr/bin/jackd -dfirewire -r44100 -p1024 -n3
        jackd 0.118.0
        Copyright 2001-2009 Paul Davis, Stephane Letz, Jack O'Quinn, Torben Hohn and others.
        jackd comes with ABSOLUTELY NO WARRANTY
        This is free software, and you are welcome to redistribute it
        under certain conditions; see the file COPYING for details
        20:10:21.704 JACK was started with PID=22176.
        no message buffer overruns
        JACK compiled with System V SHM support.
        loading driver ..
        libffado 2.0.0 built Mar 31 2010 14:47:42
        firewire ERR: Error creating FFADO streaming device
        cannot load driver module firewire
        no message buffer overruns
        20:10:21.819 JACK was stopped successfully.
        20:10:21.819 Post-shutdown script...
        20:10:21.822 killall jackd
        jackd: no process found
        20:10:22.230 Post-shutdown script terminated with exit status=256.
        20:10:23.865 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info.
        Error: "/tmp/kde-asaf" is owned by uid 1000 instead of uid 0.

    Read the article

  • fedora apache/nginx pylons

    - by microchasm
    I'm trying to wrap my head around Pylons and how it works. So far... it's been confusing. I'm using EC2 with Fedora 8. Everything is working so far (i.e. I have Pylons/Python et al. installed, and after creating a test app and running paster serve I can access the default page via my domain name). As the Pylons docs explain, and as I understand it, the built-in paster serve server is not suited for a production environment. What I am not clear on, then, is what to do next. It seems like nginx is a good option, but I am more familiar with Apache (like .0002%). I plan on having virtual hosts (which nginx says it can accommodate). However, I am totally unclear on how the big picture is supposed to work. In order to serve an app, does paster serve need to be running? Does nginx/Apache then basically just act as a proxy to shuttle connections to the paster server? How do I start it so it doesn't terminate after closing the SSH connection? If running multiple apps, what do I set as the host/port in development.ini to differentiate the apps? Or if this is not the right way, how do I differentiate between apps? I am more familiar with MySQL, but willing to negotiate PostgreSQL if it's a better fit. Is it? Is virtualenv a prerequisite to running multiple apps on the same machine? Thanks in advance for any tips.
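    In this pattern, yes: each app runs under its own paster serve process bound to a local port, and nginx (or Apache) just proxies to it. A sketch with placeholder names and ports; each app gets its own port in its own development.ini:

        # /etc/nginx/conf.d/app1.conf -- one server block per virtual host
        server {
            listen 80;
            server_name app1.example.com;

            location / {
                proxy_pass http://127.0.0.1:5000;   # must match the host/port in app1's development.ini
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    To keep paster running after the SSH session closes, start it detached, e.g. paster serve --daemon development.ini, or run it under a process supervisor. virtualenv isn't strictly required for multiple apps, but it keeps each app's dependencies isolated, which makes running several on one machine much easier.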

    Read the article

  • How do I get the F1-F12 keys to switch screens in gnu screen in cygwin when connecting via SSH?

    - by Mikey
    I'm connecting to a desktop running cygwin via SSH from the terminal app in Mac OS X. I have already started screen on the cygwin side and can connect to it over the SSH session. Furthermore, I have the following in the .screenrc file:

        bindkey -k k1 select 1 # F1 = screen 1
        bindkey -k k2 select 2 # F2 = screen 2
        bindkey -k k3 select 3 # F3 = screen 3
        bindkey -k k4 select 4 # F4 = screen 4
        bindkey -k k5 select 5 # F5 = screen 5
        bindkey -k k6 select 6 # F6 = screen 6
        bindkey -k k7 select 7 # F7 = screen 7
        bindkey -k k8 select 8 # F8 = screen 8
        bindkey -k k9 select 9 # F9 = screen 9
        bindkey -k F1 prev     # F11 = prev
        bindkey -k F2 next     # F12 = next

    However, when I start multiple windows in screen and attempt to switch between them via the function keys, all I get is a beep. I have tried various settings for $TERM (e.g. ansi, cygwin, xterm-color, vt100) and they don't really seem to affect anything. I have verified that the terminal app is in fact sending the escape sequence for the function key that I'm expecting and that my bash shell (running inside screen) is receiving it. For example, for F1 it sends the following (hexdump is a Perl script I wrote that takes STDIN in binmode and outputs it as a hexadecimal/ASCII dump):

        % hexdump
        [press F1 and then hit ^D to terminate input]
        00000000: 1b4f50                                       .OP

    If things were working correctly, I don't think bash should receive the escape sequence, because screen should have caught it and turned it into a command. How do I get the function keys to work?
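    One thing worth trying (a sketch, not a guaranteed fix): bind the literal sequences the terminal sends instead of relying on the k1..k9 termcap names, which only match if screen's idea of the terminal agrees with what the terminal actually emits. The hexdump above shows F1 arriving as 1b 4f 50, i.e. ESC O P, so in .screenrc:

        # bind the raw xterm-style sequences; \033 is ESC
        bindkey "\033OP" select 1   # F1 = ESC O P
        bindkey "\033OQ" select 2   # F2 = ESC O Q
        bindkey "\033OR" select 3   # F3 = ESC O R
        bindkey "\033OS" select 4   # F4 = ESC O S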

    Read the article

  • Ubuntu 13.04 to 13.10: Filesystem check or mount failed [migrated]

    - by SamHuckaby
    I attempted to upgrade from Ubuntu 13.04 to 13.10 today, and mid-upgrade the system started flaking out and eventually locked up entirely. I was forced to restart the computer, and am now unable to get the computer to boot up at all. When I boot currently, it takes me to the GRUB menu, and I can choose to boot normally or boot an older version. I have tried several things, which I list below, but no matter what, when I try to finish booting into Ubuntu, I receive the following error:

        Filesystem check or mount failed.
        A maintenance shell will now be started.
        CONTROL-D will terminate this shell and continue booting after re-trying filesystems.
        Any further errors will be ignored
        root@ubuntu-computername:~#

    I have run fsck -f and everything appears correct: no errors are reported, and it passes all 5 checks. If I run fdisk -l, I get the following information:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 4096 bytes / 4096 bytes
        Disk identifier: 0x00010824

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   608456703   304227328   83  Linux
        /dev/sda2       608458750   625141759     8341505    5  Extended
        Partition 2 does not start on physical sector boundary.
        /dev/sda5       608458752   625141759     8341504   82  Linux swap / Solaris

        Disk /dev/sdb: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0fb4b7e8

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            8192   625139711   312565760    7  HPFS/NTFS/exFAT

    I am considering just installing a new OS on the other disk, which currently has nothing on it, and then attempting to scrape my data off the old disk (thankfully I didn't encrypt the files). Really my question is this: can I salvage this Ubuntu install, or should I give up and just reinstall?
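    From the maintenance shell, one concrete check (a sketch; it assumes /dev/sda1 is the root partition, per the fdisk output above) is whether /etc/fstab still matches the UUIDs the devices actually report, since a stale UUID after a botched upgrade produces exactly this drop to a maintenance shell:

        blkid /dev/sda1 /dev/sda5     # the UUIDs the devices actually have
        cat /etc/fstab                # the UUIDs the boot process expects
        mount -o remount,rw /         # make the root fs writable so fstab can be edited if they differ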

    Read the article

  • Incorrect deployment of WSGI app to AWS using Elastic Beanstalk

    - by Dzmitry Zhaleznichenka
    cross-link to AWS forums. I have developed a simple Python web service using WSGI and would like to deploy it to the AWS cloud using Elastic Beanstalk. My problem is that not all of the options I specify in the Elastic Beanstalk configuration end up correctly configured in the cloud. For deployment, I use the Elastic Beanstalk CLI utility. I ran the eb init command and set up the required parameters. After this, a directory named .elasticbeanstalk was created in my source tree. It has two config files that are used for deployment, namely config and optionsettings. The latter, among other options, contains the WSGI configuration that is supposed to update /etc/httpd/conf.d/wsgi.conf on the instances. After some adjustments on my part, the file has the following settings:

        [aws:elasticbeanstalk:application:environment]
        DJANGO_SETTINGS_MODULE =
        PARAM1 =
        PARAM2 =
        PARAM4 =
        PARAM3 =
        PARAM5 =

        [aws:elasticbeanstalk:container:python]
        WSGIPath = handler.py
        NumProcesses = 2
        StaticFiles = /static=
        NumThreads = 10

        [aws:elasticbeanstalk:container:python:staticfiles]
        /static = static/

        [aws:elasticbeanstalk:hostmanager]
        LogPublicationControl = false

        [aws:autoscaling:launchconfiguration]
        InstanceType = t1.micro
        EC2KeyName = zmicier-aws

        [aws:elasticbeanstalk:application]
        Application Healthcheck URL =

        [aws:autoscaling:asg]
        MaxSize = 10
        MinSize = 1
        Custom Availability Zones =

        [aws:elasticbeanstalk:monitoring]
        Automatically Terminate Unhealthy Instances = true

        [aws:elasticbeanstalk:sns:topics]
        Notification Endpoint =
        Notification Protocol = email

    It turns out that not all of these options are honored when I start the environment or update it. When I update NumThreads or NumProcesses, the respective parameters get changed in wsgi.conf as expected. But whatever I write to the WSGIPath and StaticFiles parameters, I'm not able to change the respective values in wsgi.conf automatically; they remain

        Alias /static /opt/python/current/app/
        WSGIScriptAlias / /opt/python/current/app/application.py

    which drives me nuts. Moreover, when I deploy my application using git aws.push with the following contents in the .ebextensions/python.config file, none of the options I specify in it affect the deployment either:

        option_settings:
          - namespace: aws:elasticbeanstalk:container:python
            option_name: WSGIPath
            value: mysite/wsgi.py
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumProcesses
            value: 5
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumThreads
            value: 25
          - namespace: aws:elasticbeanstalk:container:python:staticfiles
            option_name: /static/
            value: app/static/

    I wonder what I should do to force AWS to use all the parameters I specify in the configuration, namely the WSGIPath and the path to my static data.

    Read the article

  • IIS FTP 7.5 Data Channel Problem (SSL)

    - by user59050
    Hey there, I wonder if anyone can point me in the right direction. I am setting up both an FTPS client and server, the FTPS server using Microsoft's IIS FTP 7.5. The client side will be running on Linux, and I am using M2Crypto for the OpenSSL wrapping (Python). I am worried the problem is on the server side (IIS 7.5) due to the following discovery: if I host using FileZilla with BOTH the control and data channel forced to be encrypted, it works 100% (100% file transmission); if I use IIS as the server, everything works up to the point when the data channel takes over... i.e. all the data of the retrieved file is already received correctly in my basket! The FTP server just won't send the final '226 Transfer complete.' on the command socket. Why? If I force the client or server to close the connection, the file is 100% intact. If I use IIS 7.5 with forced encryption on the control channel only, everything works 100%, as long as I don't force the data channel... Here are some screenshots to demo this (client view after killing the client): http://forums.iis.net/p/1172936/1960994.aspx#1960994. Summary: we can establish the connection, do directory listings, start the upload, and see the file (0 bytes) created on the server, but then the client hangs. If we terminate the client, the uploaded file on the server suddenly jumps up to full size.

    Read the article

  • Debian - Secure system from current administrator

    - by netadmin
    Hello, I am the Network and Systems Administrator in an organization of just under 500 users. We have a number of Windows servers, and that is certainly my area of expertise. We also have a very small handful of Debian servers. We are about to terminate the sysadmin of these Debian systems. Short of powering down the systems, I would like to know how I can ensure that the previous admin does not have control of these systems in the future, at least until we hire a replacement Linux sysadmin. I have physical/virtual-console access to each of the systems, so I can reboot them in various user modes. I just don't know what to do. Please assume that I do not currently have root access to all of these systems (an oversight on my part that I now recognize). I have some experience in Linux, and use it on my desktop on a daily basis, but I must admit that I am a competent user of Linux, not a systems admin. I have no fear of the command line, however... Is there a list of steps that one should take to "secure" a system from somebody else? Again, I assure you that this is legit: I am re-taking control of my employer's systems, at the request of my employer. I hope to not have to shut the systems down permanently and still be reasonably certain that they are secure. Thanks for your time.
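    As a first pass, a lockdown from the console might look like this (a sketch; the username is a placeholder, and each item should be audited rather than run blindly). Root access can be regained from the console by booting into single-user mode or with init=/bin/bash on the kernel command line:

        passwd root                                  # set a new root password
        passwd -l jdoe                               # lock the departing admin's account
        usermod --expiredate 1 jdoe                  # and expire it, for good measure
        cat /root/.ssh/authorized_keys /home/*/.ssh/authorized_keys 2>/dev/null   # audit SSH keys
        crontab -l -u jdoe                           # look for scheduled jobs left behind
        grep -v '^#' /etc/sudoers                    # review sudo rights (edit with visudo)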

    Read the article

  • Real benefits of tcp TIME-WAIT and implications in production environment

    - by user64204
    SOME THEORY: I've been doing some reading on TCP TIME-WAIT (here and there) and what I read is that it's a value set to 2 x MSL (maximum segment lifetime) which keeps a connection in the "connection table" for a while to guarantee that, "before you're allowed to create a connection with the same tuple, all the packets belonging to previous incarnations of that tuple will be dead". Since segments received (apart from SYN, under specific circumstances) while a connection is either in TIME-WAIT or no longer existing would be discarded anyway, why not close the connection right away?

    Q1: Is it because there is less processing involved in dealing with segments from old connections, and less processing to create a new connection on the same tuple, when in TIME-WAIT (i.e. are there performance benefits)?

    If the above explanation doesn't stand, the only reason I see TIME-WAIT being useful would be if a client sends a SYN for a connection before it sends the remaining segments for an old connection on the same tuple, in which case the receiver would re-open the connection but then get bad segments and would have to terminate it.

    Q2: Is this analysis correct?

    Q3: Are there other benefits to using TIME-WAIT?

    SOME PRACTICE: I've been looking at the munin graphs on a production server that I administer. Here is one: as you can see, there are more connections in TIME-WAIT than ESTABLISHED, around twice as many most of the time, and on some occasions four times as many.

    Q4: Does this have an impact on performance?

    Q5: If so, is it wise/recommended to reduce the TIME-WAIT value (and to what)?

    Q6: Is this ratio of TIME-WAIT to ESTABLISHED connections normal? Could this be related to malicious connection attempts?
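    On the practice side, a few Linux-specific points can be checked directly (a sketch; treat the tunable as an illustration rather than a recommendation, since tcp_tw_reuse only affects outgoing connections and has caveats):

        ss -tan state time-wait | wc -l       # count sockets currently in TIME-WAIT
        cat /proc/sys/net/ipv4/tcp_tw_reuse   # 1 = allow reuse of TIME-WAIT tuples for outbound connects
        sysctl -w net.ipv4.tcp_tw_reuse=1

    Note that on Linux the TIME-WAIT duration itself is a compile-time constant (60 seconds), not a sysctl, so "reducing the TIME-WAIT value" in practice means safe reuse rather than shortening the timer.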

    Read the article

  • maximum number of connections Squid

    - by Isaac
    I have a Squid proxy server that controls all internet traffic for my network. I need a way to stop users from downloading big files (say 50 MB) on my network. I banned some well-known ports (e.g. torrent), but some downloads are still possible over the HTTP port, and obviously I cannot ban port 80! A simple solution is to limit the maximum number of simultaneous connections for each IP (e.g. 3 connections). That's possible in Squid with this config:

        acl ACCOUNTSDEPT 192.168.5.0/24
        acl limitusercon maxconn 3
        http_access deny ACCOUNTSDEPT limitusercon

    But this solution has a really bad impact on web browsing, because any smart browser fetches different parts of a website over several simultaneous connections to speed things up. With a hard cap on connections, the browser will fail to get some parts, and the website will be shown partially, with some parts/images/frames missing. So, can we limit the maximum number of persistent connections instead? I think this policy would work: specify a maximum number of connections that stay alive for 10 seconds, but leave the number of simultaneous connections for every IP unlimited. But how can we implement this policy in Squid? With which config?

    UPDATE: artifex and Tom Newton offered a bandwidth-limiting approach to fight downloaders. But bandwidth limiting in Squid has a shortcoming: it's static and cannot change dynamically, so a person gets the same limited bandwidth no matter how many people are using the internet (maybe nobody!). Also, this solution cannot stop people from downloading; they can still download, just at a lower speed. But if we find a way to terminate persistent connections (or any connection that stays alive longer than a specific time), downloading big files will become almost impossible (there's always some way!).
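    For the original goal of blocking big downloads outright, Squid can also refuse large response bodies directly, which sidesteps the connection-count problem (a sketch assuming Squid 3.x syntax; 2.x uses an allow/deny form of the same directive, and responses without a Content-Length header can slip through):

        acl localnet src 192.168.5.0/24
        reply_body_max_size 50 MB localnet   # deny response bodies over 50 MB for the LAN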

    Read the article

  • How can I cause Task Scheduler to "fail" if a dialog box returns a certain result?

    - by Roger
    I'm working on a VBScript to do a weekly reboot of all machines on our network. I want to run this script via Task Scheduler. The script runs at 3:00 AM, but there is a small chance that users may still be on the network at that time, and I need to give them the option to terminate the reboot. If they do so, I would like the reboot to occur the next night at 3:00 AM. I've set Task Scheduler up to repeat in this way. So far, so good. The problem is that if the user selects "Cancel" in my script, Task Scheduler does not see my task as failed, and won't run it again the next night. Any ideas? Can I pass an error code to Task Scheduler or otherwise abort the task via VBScript? My code is below:

        Option Explicit

        Dim objShell, intShutdown
        Dim strShutdown, strAbort

        ' -r = restart, -t 600 = 10 minutes, -f = force programs to close
        strShutdown = "shutdown.exe -r -t 600 -f"
        Set objShell = CreateObject("WScript.Shell")
        objShell.Run strShutdown, 0, False

        ' go to sleep so message box appears on top
        WScript.Sleep 100

        ' Input box to abort shutdown
        intShutdown = (MsgBox("Computer will restart in 10 minutes. Do you want to cancel computer restart?", vbYesNo + vbExclamation + vbApplicationModal, "Cancel Restart"))

        If intShutdown = vbYes Then
            ' Abort shutdown
            strAbort = "shutdown.exe -a"
            Set objShell = CreateObject("WScript.Shell")
            objShell.Run strAbort, 0, False
        End If

        WScript.Quit

    Appreciate any thoughts.
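    One approach (a sketch; it assumes the task action runs the script via wscript.exe or cscript.exe so the exit code propagates): exit with a nonzero code when the user cancels, so the run shows a nonzero Last Result in Task Scheduler. XP's Task Scheduler 1.0 won't automatically re-run a task with a failed Last Result, so the cancel branch may also need to re-arm tomorrow's run itself (schtasks can do that), but the exit-code part would look like this:

        If intShutdown = vbYes Then
            ' Abort the shutdown, wait for it to finish, then exit 1 so the
            ' task's Last Result in Task Scheduler is nonzero ("failed")
            strAbort = "shutdown.exe -a"
            objShell.Run strAbort, 0, True
            WScript.Quit 1
        End If
        WScript.Quit 0   ' success: the reboot is going ahead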

    Read the article

  • How to interrupt software raid resync?

    - by Adam5
    I want to interrupt a running resync operation on a Debian Squeeze software RAID. (This is the regular scheduled compare resync; the RAID array is still clean in such a case. Do not confuse this with a rebuild after a disk has failed and been replaced.) How do I stop this scheduled resync operation while it is running? Another RAID array is "resync pending", because they all get checked on the same day (Sunday night), one after another. I want a complete stop of this Sunday-night resyncing. [Edit: sudo kill -9 1010 doesn't stop it; 1010 is the PID of the md2_resync process.] I would also like to know how I can control the intervals between resyncs and the remaining time till the next one. [Edit 2: What I did now was to make the resync go very slowly, so it no longer disturbs anything:

        sudo sysctl -w dev.raid.speed_limit_max=1000

    (taken from http://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html). During the night I will set it back to a high value, so the resync can terminate. This workaround is fine for most situations; nonetheless it would be interesting to know whether what I asked is possible. For example, it does not seem to be possible to grow an array while it is resyncing or resync "pending".]
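    For reference, the running check can be stopped through sysfs rather than by signalling the process; kill -9 has no effect because md resync runs as a kernel thread (a sketch for md2):

        cat /sys/block/md2/md/sync_action                     # shows "check" while the scheduled resync runs
        echo idle | sudo tee /sys/block/md2/md/sync_action    # ask it to stop
        cat /proc/mdstat                                      # confirm the resync is gone

    On Debian, the Sunday-night trigger is the checkarray cron job in /etc/cron.d/mdadm, which is also where the schedule can be changed or disabled.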

    Read the article

  • Production deployment to EC2 with minimal downtime

    - by jensendarren
    I have a simple web application deployed on a large instance with EC2. I now want to deploy the latest code to this server, but I want to do it in a way which minimizes downtime and is as smooth as possible for the end user. Here is my plan:

        1. Fire up another large instance
        2. Install all the software layers on that instance
        3. Restore and attach an EBS drive to the instance
        4. Deploy our latest production-ready code on the new instance
        5. Run all tests (including manual testing of the application)
        6. (If tests pass) Put a "Site Under Maintenance" notice on the live site
        7. Back up the EBS instance on the live site
        8. Detach the EBS instance from the new server and replace it with the latest backup
        9. Use ec2-associate-address to move the IP address to the new instance
        10. Sit back and wait for traffic to start flowing through the new instance
        11. Terminate the old instance

    Does this seem like a good strategy? Are there any tutorials or books that might cover this topic? I have already read Cloud Application Architectures by George Reese, which is an excellent book, but it does not cover deployment. Additionally, I know there are tools that can help with this, like RightScale or enStratus, which I will use when I start using more than one instance.
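    For step 9, the classic EC2 API tools mentioned above would look something like this (the instance IDs and elastic IP are placeholders):

        # repoint the elastic IP at the new instance; in-flight connections drop, new ones hit the new box
        ec2-associate-address -i i-4f2a1b3c 203.0.113.10

        # once traffic on the new instance looks healthy (step 11)
        ec2-terminate-instances i-9d8e7f6a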

    Read the article

  • Standards for documenting/designing infrastructure

    - by Paul
    We have a moderately complex solution for which we need to construct a production environment. There are around a dozen components (and here I'm using a definition of "component" which means "can fail independently of other components" - e.g. an Apache server, a Weblogic web app, an ftp server, an ejabberd server, etc). There are a number of weblogic web apps - and one thing we need to decide is how many weblogic containers to run these web apps in. The system needs to be highly available, and communications in and out of the system are typically secured by SSL Our datacentre team will handle things like VLAN design, racking, server specification and build. So the kinds of decisions we still need to make are: How to map components to physical servers (and weblogic containers) Identify all communication paths, ensure all are either resilient or there's an "upstream" comms path that is resilient, and failover of that depends on all single-points of failure "downstream". Decide where to terminate SSL (on load balancers, or on Apache servers, for instance). My question isn't really about how to make the decisions, but whether there are any standards for documenting (especially in diagrams) the design questions and the design decisions. It seems odd, for instance, that Visio doesn't have a template for something like this - it has templates for more physical layout, and for more logical /software architecture diagrams. So right now I'm using a basic Visio diagram to represent each component, the commms between them with plans to augment this with hostnames, ports, whether each comms link is resilient etc, etc. This all feels like something that must been done many times before. Are there standards for documenting this?

    Read the article

  • Is there a list of Windows services that can be turned off to save resources?

    - by scunliffe
    Every time I open up my task manager and look at the tasks running, it appears to me that there has got to be a bunch of junk running in there that I don't want or need and would be better off turning off (e.g. setting the service to Manual vs. Automatic). In particular I'm running Windows XP, but I'd be interested in services for any version. For example:

        Service: ThinkPad PM Service
        What is it: The ibmpmsvc.exe process is installed by default on IBM Notebook computers. It allows
        various functions of your IBM notebook to be controlled using the blue Fn keys. If you use the blue
        Fn keys on your Notebook, you should leave this process running. Otherwise, if it is causing problems
        for your system you should terminate the ibmpmsvc.exe process. (source)

    Thus for me (having never used, or intending to use, these buttons) I want to turn it off. There's lots that just seems painfully unnecessary to me... I'm just not sure which ones I can "safely" stop/disable without issues: "Alerter", "ClipBook", "Messenger", "Telnet", "ATI Hotkey Poller", "Office Source Engine", etc. I'd appreciate any info on services that are truly unnecessary, or only useful to certain people/types of users.
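    When you do decide to switch one off, the built-in sc.exe tool does it from a command prompt (a sketch; note that sc takes the internal service name, not the display name, so check it first with sc query or services.msc):

        REM flip the startup type to Manual -- the space after "start=" is required
        sc config Alerter start= demand

        REM stop the running instance now
        sc stop Alerter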

    Read the article

  • SQL Server log backups "stalling"

    - by MattK
    I have inherited a box running SQL Server 2008 on Windows 2003, and have had a few events where largish (35 GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log-ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35 GB (on a DB with about 17 GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes. This server has a single RAID-1 volume, thus the source database files and destination backup files are on the same volume. However, I cannot determine if another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic process for this situation?
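    While a backup is wedged like this, the request DMVs can at least show what it is waiting on, from another session (a sketch for SQL Server 2005 and later):

        -- watch the stalled backup's wait state and reported progress
        SELECT session_id, command, status, wait_type, wait_time, percent_complete
        FROM sys.dm_exec_requests
        WHERE command LIKE 'BACKUP%';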

    Read the article

  • Mongo daemon restarts after replica set issue

    - by Matt Beckman
    We had a recent election in our replica set (2 read nodes; 1 write node) that changed the primary node. Curious as to why this occurred, I started looking through the logs to find out what happened. It appears that mongoNode2 could not communicate with mongoNode3. When both nodes could not communicate, it appears that this caused the services on mongoNode2 and mongoNode3 to restart, eventually resulting in a new primary after the services had been started again.

        Thu Jun 23 08:27:28 [ReplSetHealthPollTask] DBClientCursor::init call() failed
        Thu Jun 23 08:27:28 [ReplSetHealthPollTask] replSet info mongoNode3:27017 is down (or \
            slow to respond): DBClientBase::findOne: transport error: mongoNode3:27017 query: { \
            replSetHeartbeat: "myReplSet", v: 3, pv: 1, checkEmpty: false, from: \
            "mongoNode2:27017" }
        Thu Jun 23 08:27:29 got kill or ctrl c or hup signal 15 (Terminated), will \
            terminate after current cmd ends
        Thu Jun 23 08:27:29 [interruptThread] now exiting
        Thu Jun 23 08:27:29 dbexit:

    Is there any reason that the mongo service would restart due to a DBClientCursor::init call() failure? Is this a known bug? It should be noted that mongoNode2 and mongoNode3 are VMs on the same VMware host. MongoNode1 is not on the same host, and it did not have any issues with the service. However, I did not have any other reports of issues with other VMs on the VMware host.

    Read the article

  • Routing public IPs (each a /32) through a VPN to another server

    - by Lee S
    Hopefully the title makes sense. I have a server currently in a colo facility, with many IP addresses routed to it. They are individual IPs, not a contiguous block. Due to vastly improved connectivity (fibre) at home, I am slowly bringing my infrastructure in-house for manageability and, eventually, cost savings. What I would like to do, though, is use the IP addresses allocated to my existing server at home. I have an IP block allocated to me on my new ISP connection, but for a couple of reasons I'd like to make use of the colo ones for now: 1) ease of transition -- lots of domains, DNS, hard-coded IPs in programs, etc.; 2) connectivity fallback -- if my primary line goes down and switches to fallback 1 (DSL) or fallback 2 (4G), I lose access to the ISP-allocated block of IPs that are only presented on the primary WAN interface. What I'd like to achieve is for my home virtualisation server (Proxmox/Debian-based) to "dial in" to the colo server (also Proxmox/Debian) via VPN or similar, and get to make use of the IP addresses that currently terminate on the colo box. If the primary connection to my ISP goes down and one of the fallback routes kicks in, the VPN tunnel will just time out and then be re-established over the backup connection instead. I'm sure this is doable, but I have no idea how. I'm not afraid to get my hands dirty; I just don't really know where to start.
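    One common shape for this is a tunnel plus /32 routes (a sketch with placeholder addresses; it assumes the colo provider keeps routing the public IPs to the colo box, which then forwards them down the tunnel):

        # on the colo server (public 203.0.113.1): route one colo-routed public IP down a GRE tunnel
        ip tunnel add gre1 mode gre local 203.0.113.1 remote 198.51.100.2 ttl 255
        ip link set gre1 up
        ip route add 203.0.113.50/32 dev gre1
        sysctl -w net.ipv4.ip_forward=1

        # on the home server (WAN 198.51.100.2): bring up the tunnel and own the public IP
        ip tunnel add gre1 mode gre local 198.51.100.2 remote 203.0.113.1 ttl 255
        ip link set gre1 up
        ip addr add 203.0.113.50/32 dev gre1

    An OpenVPN tunnel in place of plain GRE may fit the failover requirement better, since the home end can re-dial from whichever uplink is active and the colo end simply follows the new source address.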

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >