Search Results

Search found 4835 results on 194 pages for 'practice'.


  • duplicity fails: not prompting for password: "Running 'sftp user@host' failed"

    - by Thr4wn
    I have two Linode VPS accounts and I want to back up one onto the other (the reasons are mainly for fun and to practice server administration).

    The short version: duplicity isn't even asking for my password, but immediately says "Invalid SSH password" (yet I can ssh into the other server). Why?

    The long version: when I run

        duplicity /home/me scp://[email protected]//root/backup

    I get

        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #1)
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #2)
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #3)

    and it says "Invalid SSH password" immediately, with no opportunity for me to actually type the password. When I type

        duplicity full -v9 --num-retries 4 /home/me scp://[email protected]//root/backup

    I get

        Main action: full
        Running 'sftp [email protected]' (attempt #1)
        State = sftp, Before = 'Connecting to 97.107.129.67... [email protected]'s'
        State = sftp, Before = ''
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #1)

    I can ssh into [email protected] fine, and in fact had the IP in known_hosts before I tried any of this. Server 1 (from which I'm running the duplicity command) is Linode's default Ubuntu 8 setup with only a handful of programs installed via apt-get. Server 2 (represented by x.x.x.x) is literally only Linode's default Ubuntu 8 setup. I previously tried using SystemImager -- would that have changed settings in a destructive way? (I have removed it and rebooted since then.) Isn't duplicity supposed to prompt for the password? Am I using it wrong? Are there common mistakes/dependencies I need to know about? Is there any way that x.x.x.x could be set up that could make this not work (I used Linode's default Ubuntu 8 setup and barely )?
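    One hedged suggestion worth checking (not from the original thread): duplicity drives sftp non-interactively, so in many setups it cannot prompt for a password at all. The usual workaround is key-based authentication, roughly along these lines (root@x.x.x.x is the placeholder host from above):

        # on server 1: create a key and install it on server 2 (x.x.x.x)
        ssh-keygen -t rsa                 # accept the defaults, empty passphrase
        ssh-copy-id root@x.x.x.x          # or append ~/.ssh/id_rsa.pub to x.x.x.x:~/.ssh/authorized_keys
        # the backup should then run without any password prompt
        duplicity /home/me scp://root@x.x.x.x//root/backup

    Newer duplicity releases also have an --ssh-askpass option for interactive passwords; whether your version supports it is worth verifying with duplicity --help.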

    Read the article

  • Proper 16:9 video size for non-HD 4:3 video (for youtube/vimeo)

    - by Xeoncross
    Since High Definition video came out on all the online sites it has changed the default aspect ratio of the player from 4:3 to 16:9. This means that for people posting SD video you have to resize some of your videos to get them to fit right. For example, NTSC DVD quality (aka 480i/p) is 720x480 pixels (width x height). However, low-end High Definition (720i/p) is 1280x720. Resolution Chart Anyway, now that the video players are built for HD you will find that uploading standard quality videos will result in videos that are "letter boxed" which means they have extra black bars on the top and bottom (or sides). Correct me if I'm wrong, but in order to get a 720x480 video to fit a box that is designed for HD the best practice would be to crop some of it off so that it fits as 720x404 since: 16/9 = 1.78 (1.7777777777778) 720/405 = 1.78 405x1.78 = 720.9 The same would stand for 640x480 (old TV quality) video that would need to be 640x360 correct? I'm asking because I'm not sure about all this and whether this is the proper way to fix these letter-boxing/display problems.
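    For what it's worth, a quick check of the arithmetic above (treating the pixels as square, which strictly speaking SD DVD video is not):

        720 x 9/16 = 405   ->  720x405 (often rounded to 720x404, since 405 is odd and many codecs require even dimensions)
        640 x 9/16 = 360   ->  640x360

    So yes, 640x360 is the 16:9 frame that matches a 640-pixel width.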

    Read the article

  • Deploying workstations - best practices?

    - by V. Romanov
    Hi guys,

    I've been researching the subject of workstation deployment for a while, and found a ton of info and dozens of different methods and tools, but no "best practice" method that doesn't lack at least one feature that I consider required for the solution to be perfect. I'm currently interested in Windows workstation deployment, but if the tools can be extended to Linux, then that's added value.

    I want the deployment tools I use to be able to do the following:

    - hardware independent - I want my image or installation to have a minimum of hardware and driver dependency, so that I can use a single image/package for all workstations
    - easily updatable - I want to be able to update my image as easily as possible without redeploying/rebuilding/reimaging all configurations (see the offline-servicing sketch at the end of this question)
    - PXE bootable deployment - I want the tools to be bootable off the network so that I don't need a boot CD/DOK
    - scriptable for minimum human input - ideally, the tool should run automatically after being booted and perform a "default" deployment (including partitioning) unless prompted otherwise, i.e. take a PC, hook it up, power on, PXE boot and forget about it until the OS is deployed.

    I found no single product or environment that does all this. The closest I came to is Windows Deployment Services and the WIM image format. I also checked out numerous imaging and deployment tools including Clonezilla, Ghost, g4u, WPKG and others, but most of them lack the hardware independence and updatability features. We currently have a Symantec Ghost server setup that does imaging over the network, but I'm not satisfied with it as it has all the drawbacks I listed above.

    Do you have suggestions how to optimize the process of workstation deployment? How do you deploy them in your organization? Thanks!

    Vadim.
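    On the "easily updatable" point, a hedged sketch of the WIM offline-servicing approach (the paths are made up for the example; double-check the DISM switches against your WAIK/ADK version):

        rem mount the image, inject drivers, then commit - no reimaging of existing configurations needed
        Dism /Mount-Wim /WimFile:D:\images\install.wim /Index:1 /MountDir:D:\mount
        Dism /Image:D:\mount /Add-Driver /Driver:D:\drivers /Recurse
        Dism /Unmount-Wim /MountDir:D:\mount /Commit

    Combined with WDS for the PXE part and an unattend.xml for the scripted part, this covers most of the list above, which may be why WDS/WIM came out as the closest match.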

    Read the article

  • copying folder and file permissions from one user to another after switching domains [closed]

    - by emptyspaces
    Please excuse the title, this was the best way I could think to describe this scenario without an entire paragraph. I am using C#.

    Currently I have a file server running Windows Server 2003 set up on a domain, we will call this oldDomain, and I have about 500 user accounts with various permissions on this server. Because of restrictions out of my control we are abandoning this domain and using another one that is more dominant within the organization, we will call this newDomain. All of the users that have accounts on oldDomain also have accounts on newDomain, but the usernames are completely different and there is no link between the two.

    What I am hoping to do is generate a list of all user accounts and their appropriate SIDs from AD on the oldDomain; I already have this part done using dsquery and dsget. Then I will have someone go through and match all of the accounts from oldDomain to the correct username on newDomain, ultimately leaving me with a list of SIDs from oldDomain and the appropriate username from newDomain. Now I am hoping to copy the file and folder permissions from the old user from oldDomain to the new user on newDomain once I join the server to newDomain.

    Can anyone tell me what the best way is to copy permissions from the old SID to the matching user on newDomain? There are a bunch of articles out there about copying permissions from user A to user B, but I wanted to check and see what the recommended practice is here since there are a ton of directories.
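    Not from the original post, but one hedged option for the "old SID to new account" step: the subinacl tool from the Windows Server 2003 Resource Kit can rewrite ACEs in place. The share path and SID below are placeholders, and the switches should be verified against "subinacl /help" before running it across 500 accounts:

        rem run once per (old SID, new user) pair from the matching list
        subinacl /subdirectories "D:\Shares\*.*" /replace=S-1-5-21-1111111111-222222222-333333333-1001=NEWDOMAIN\jsmith

    The same loop could be driven from the C# side by shelling out per pair, or done natively with System.Security.AccessControl; the CLI version is just the quickest thing to test first.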

    Read the article

  • script calling script as other user

    - by viktor tron
    Using CentOS, I want to run a script as user 'training' as a system service. I use daemontools to monitor the process, which needs a launcher script that is run as root:

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        setuidgid training training_command

    This last line is not good enough, since for training_command we need the environment for the training user to be set.

        su - training -c 'training_command'

    gives 'standard in must be a tty', as su makes sure a tty is present to potentially accept a password. I know I could make this disappear by modifying /etc/sudoers (as in "Bash & 'su' script giving an error 'standard in must be a tty'"), but I am reluctant and unsure of the consequences.

        runuser - training -c 'training_command'

    gives "runuser: cannot set groups: Connection refused". I found no sense or resolution to this message. I am stuck. Is this something so hard to achieve? I appreciate all insight and guidance to best practice.
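    A hedged variant of the launcher that stays with daemontools' setuidgid but sets up the user's environment explicitly (training_command and the home directory are the placeholders from the question):

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        export HOME=/home/training
        export USER=training
        cd "$HOME"
        # -l makes bash read the login environment (/etc/profile, ~/.bash_profile)
        # after setuidgid has already dropped privileges, so no tty or password is involved
        exec setuidgid training /bin/bash -l -c 'training_command'

    This sidesteps su/runuser (and their PAM session setup, which is where the tty requirement and the group-setup message tend to come from) entirely.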

    Read the article

  • How do I find funny pictures?

    - by Hanno Fietz
    No, not lolcats. And I'm not really looking for a specific site, either.

    I have often wished that I had some funny picture to illustrate a presentation, a website, a post, an email, or something else. Google image search and stock photo services have hardly ever helped me, although that may be because I'm doing something wrong. Jeff Atwood seems to have no problem finding funny pictures for his codinghorror and stackoverflow blogs, as well as for the error messages on the trilogy sites. One of my favourites was this elephant. Other bloggers also seem to be quite good at it. I'm wondering if I simply lack the creativity or if there are sources or methods I don't know about.

    I could think of the following ways to get pictures, but I'm not sure whether this is really "how they do it":

    - keep a collection of pictures that you stumbled upon and liked (takes quite some time to build up to a proper library); when you need a picture, there's one in there
    - maybe have pictures on paper, too, like from magazines or ads
    - when you are looking for a picture, search online (Flickr, Google, stock photos) -- this has never really worked for me, I don't know why
    - produce the pictures yourself, i.e. have a good library of source material or find some online and apply some creativity and suitable software; I could imagine that this could work well once you have the practice.

    Read the article

  • SharePoint Search: processing filenames containing underscores

    - by Todd Owen
    We use SharePoint Server 2007 to allow employees to search network file shares, but it seems that underscores in filenames are not treated as word separators when indexing the files. As a result, a search for chocolate will:

    - match "chocolate milkshake.doc"
    - but not match "chocolate_cake.doc"

    (Of course, this is a simplified example; in practice the content of the second file might include the word "chocolate" and match on that instead of the filename. But the problem itself is real enough, because a common scenario in a corporate environment is that a user knows the partial name of the file they are looking for and expects to see matching filenames at the top of the search results. And using underscores in filenames is a widely used convention within our company.)

    Underscores are not treated as word separators in the file content either, although this is less of a concern for us. The root cause of this problem is possibly related to the behaviour of the word breakers that SharePoint uses (i.e. the language-specific DLLs that implement the IWordBreaker interface), although I haven't confirmed this yet.

    Does anyone know of a workaround for this issue? I have tested with Search Server 2008 Express too (which is based on the same technology), and it is also affected. I do not know whether the problem is fixed in SharePoint 2010 or not.

    Read the article

  • Is basing storage requirements on IOPS sufficient?

    - by Boden
    The current system in question is running SBS 2003, and is going to be migrated on new hardware to SBS 2008. Currently I'm seeing on average 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of activity is a 6 disk 7200RPM RAID 6 and it struggles to keep up during high traffic times (idle time often only 10-20%; response times peaking 20-50+ ms). Based on some rough calculations this makes sense (avg ~245 IOPS on this array at 70/30 read to write ratio). I'm considering using a much simpler disk configuration using a single RAID 10 array of 10K disks. Using the same parameters for my calculations above, I'm getting 583 average random IOPS / sec. Granted SBS 2008 is not the same beast as 2003, but I'd like to make the assumption that it'll be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disk and there's no ISA server). Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something? I've read so much about recommended disk configurations for various products like Exchange, and they often mention things like dedicating spindles to logs, etc. I understand the reasoning behind this, but if I've got more than enough random I/O overhead, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity if I just had a single, good performing array. So as not to make you guys do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (given similar technology, not going from DAS to SAN or anything)
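    To make the "rough calculations" explicit, here is the usual back-of-envelope formula with assumed per-spindle figures (roughly 100 IOPS for a 7200 RPM disk, 125 for a 10K disk) and the standard write penalties (6 for RAID 6, 2 for RAID 10); these assumptions are mine, not from the original post, and a 6-disk RAID 10 is assumed for the comparison:

        effective IOPS ~= (spindles x per-disk IOPS) / (read% + write% x write penalty)

        RAID 6,  6 x 7200 RPM: (6 x 100) / (0.7 + 0.3 x 6) = 600 / 2.5 ~= 240
        RAID 10, 6 x 10K RPM:  (6 x 125) / (0.7 + 0.3 x 2) = 750 / 1.3 ~= 577

    That lines up with the ~245 and ~583 figures quoted above, so the arithmetic itself looks sound; the open question is only whether raw random IOPS should be the sole sizing criterion.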

    Read the article

  • MYSQL - Multiple set values in one update statement [migrated]

    - by Maurzank
    MySQL - multiple SET values in one UPDATE statement, using two tables as reference and storing values in one of those tables with a specific logic.

    Hello people,

    A problem came up while making an UPDATE. The example issue is as follows:

        CURRENUSRTABLE
        +------------+-------+
        | ID         | STATE |
        +------------+-------+
        | 123        |   3   |
        | 456        |   3   |
        | 789        |   3   |
        +------------+-------+

        HISTORYTABLE
        +------------+------------+-----+
        | ID         | TRDATE     | ACT |
        +------------+------------+-----+
        | 123        | 2013-11-01 |  5  |
        | 456        | 2013-11-01 |  5  |
        | 789        | 2013-11-01 |  5  |
        | 123        | 2013-11-02 |  4  |
        | 456        | 2013-11-02 |  4  |
        | 789        | 2013-11-02 |  4  |
        | 123        | 2013-11-03 |  3  |
        | 456        | 2013-11-03 |  3  |
        | 789        | 2013-11-03 |  3  |
        +------------+------------+-----+

    I'm using these variables: @BA=3, @DE=5, @BL=4.

    What I'm trying to do is an update on CURRENUSRTABLE.STATE using HISTORYTABLE.ACT with the following logic: the STATE value will be updated to the ACT value, except when STATE is 4 and ACT is 3, in which case STATE will be set to 5.

    I made this statement:

        UPDATE CURRENUSRTABLE
        RIGHT OUTER JOIN HISTORYTABLE
            ON HISTORYTABLE.ID=CURRENUSRTABLE.ID
        SET CURRENUSRTABLE.STATE=
        (
            SELECT CASE HISTORYTABLE.ACT
                WHEN @DE THEN @DE
                WHEN @BL THEN @BL
                WHEN @BA THEN
                    CASE CURRENUSRTABLE.STATE
                        WHEN @BL THEN @DE
                        ELSE @BA
                    END
            END
            ORDER BY HISTORYTABLE.TRDATE,FIELD(HISTORYTABLE.ACT,@DE,@BL,@BA)
        )
        WHERE HISTORYTABLE.TRDATE BETWEEN '2013-11-01' AND '2013-11-01'

    I'm intentionally using RIGHT OUTER JOIN and "HISTORYTABLE.TRDATE BETWEEN" because I'd like to change the values in CURRENUSRTABLE using a timeframe of more than one day. If I execute this statement many times using only one day (i.e. "BETWEEN '2013-11-01' AND '2013-11-01'", then "BETWEEN '2013-11-02' AND '2013-11-02'", etc.) it works perfectly, but if it is executed using the dates "BETWEEN '2013-11-01' AND '2013-11-03'" the result in CURRENUSRTABLE.STATE is 3, which is wrong; it should be 5.

    I think the problem lies in "CASE CURRENUSRTABLE.STATE" when using "HISTORYTABLE.TRDATE BETWEEN '2013-11-01' AND '2013-11-03'", because it reads the STATE 9 times and the new value has not been committed yet until the statement ends:

        Query OK, 9 rows affected (0.00 sec)
        Rows matched: 9  Changed: 9  Warnings: 0

    Maybe the solution is very simple, but unfortunately I haven't had much practice with MySQL, since I've worked with it for less than 2 months :) Are there any suggestions to solve this issue?

    PS: the MySQL version is 4.1.22. I know it is very old and EOL; unfortunately I have to run these statements on this version. Thanks!
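    Since the per-day version of the statement is already known to work, one low-risk workaround (a sketch only; the database name and credentials are placeholders, and it sidesteps rather than fixes the single-statement problem) is to replay it one day at a time from a small shell script, so each day's STATE change is committed before the next day's ACT is evaluated:

        #!/bin/bash
        # run the existing one-day UPDATE once per day in the range, oldest first
        for d in 2013-11-01 2013-11-02 2013-11-03; do
            mysql -u someuser -p'somepass' somedb -e "
                SET @BA=3; SET @DE=5; SET @BL=4;
                -- the UPDATE statement from above, with both BETWEEN dates set to '$d'
            "
        done

    A pure-SQL solution inside one statement is harder on 4.1, precisely because, as noted above, the CASE on CURRENUSRTABLE.STATE reads values that the same statement has not yet written back.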

    Read the article

  • How can I manage AWS VPC ssh access accounts and keys across multiple instances?

    - by deitch
    I am setting up a standard AWS VPC structure: a public subnet, some private subnets, hosts on each, ELB, etc. Operational network access will be via either an ssh bastion host or an openvpn instance. Once on the network (bastion or openvpn), admins use ssh to access the individual instances.

    From what I can tell, all of the docs seem to depend on a single user with sudo rights and a single public ssh key. But is that really best practice? Isn't it much better to have each user access each host under their own name? I could deploy accounts and ssh public keys to each server, but that rapidly gets unmanageable. How do people recommend managing user accounts? I've looked at:

    - IAM: it doesn't look like IAM has a method for automatically distributing accounts and ssh keys to VPC instances.
    - IAM via LDAP: IAM doesn't have an LDAP API.
    - LDAP: set up my own LDAP servers (redundant, of course). A bit of a pain to manage, but still better than managing on every host, especially as we grow.
    - Shared ssh key: rely on the VPN/bastion to track user activities. I don't love it, but...

    What do people recommend?

    NOTE: I moved this over after accidentally posting it on StackOverflow.

    Read the article

  • Is this a File Header / Magic Number?

    - by Hammer Bro.
    I've got 120,000 files (way more, actually; this is just an arbitrary subset) of an unknown type. Linux file does not identify them (not that they're necessarily Linux files), nor do any other methods I've tried.

    There are only two hints about them that I currently have. One is that I suspect some compression is employed -- I have metadata that claims the file sizes are always some amount larger than what I observe. The other is that in 100,000 of these files, the first 16 bytes are always:

        ff ee ee dd 00 00 00 00 01 00 00 00 00 00 00 00

    That really looks like a file header/magic number to me, but I just can't place it. Does anyone know what kind of files this would indicate? Alternatively, can anyone convince me that these suspiciously common bytes certainly do not indicate a specific file type?

    UPDATE: I don't know the exact reverse-engineering details, but most of the files in our case are zips after the first 29 (? or so) bytes are ignored. So in practice the problem is solved (we know how to process the files), but in theory the question is still unanswered -- I don't know which application routinely prepends about 29 bytes to its zips. [I'm not sure if I should leave the question open or not at this point.]
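    For anyone poking at similar blobs, a quick hedged way to test the "zip with a prefix" theory from a shell (the 29-byte offset is the one mentioned above; adjust as needed, and the filename is a placeholder):

        # show where the zip local-file-header signature 'PK\x03\x04' first appears
        grep -obam1 $'PK\x03\x04' mystery_file

        # strip the first 29 bytes (tail -c +N starts at byte N) and see if the rest is a valid zip
        tail -c +30 mystery_file > stripped.zip
        unzip -l stripped.zip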

    Read the article

  • Limiting access in Silverlight/PivotViewer

    - by sparaflAsh
    I'm going to deploy a PivotViewer application. As some of you might know, this Silverlight application loads a .cxml index file for a group of images. My need is to make the .cxml file and the image files not accessible to the user.

    Until now, when I didn't have this requirement, I would simply code it like this in C#, with the file hosted in the document root:

        _cxml = new CxmlCollectionSource(new Uri("http://www.myurl.it/Collection.cxml", UriKind.Absolute));

    This means that my cxml, and therefore the images, are available over http to everyone who knows the URI. I'm a newbie to server configuration, so any help/hint would be deeply appreciated. Someone suggested taking the files out of the root, but it seems like I can't go and pick them up if they are not a URL in Silverlight; at least I didn't manage to understand how. Someone else suggested playing with the web.config file to hide the URLs, but I don't really know where to start.

    My question is: what's the best practice to hide my stuff? Obviously I can edit the question if you need more details.
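    As a starting point on the web.config idea, a hedged sketch only: it assumes the site already uses some form of ASP.NET authentication and that requests for the .cxml are handled by the ASP.NET pipeline (e.g. IIS 7 integrated mode with runAllManagedModulesForAllRequests). The file name is the one from the snippet above. The idea is to deny anonymous requests, so the Silverlight client can only fetch the file through the browser's authenticated session:

        <configuration>
          <location path="Collection.cxml">
            <system.web>
              <authorization>
                <deny users="?" />
              </authorization>
            </system.web>
          </location>
        </configuration>

    This hides the file from casual URL guessing but not from a logged-in user saving it, which may or may not be enough for your case.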

    Read the article

  • Cache updates when migrating DNS from one provider to another

    - by JohnCC
    This may be a Windows DNS specific question or a general DNS best practice question - I'm not sure!

    We migrated our 3rd party DNS provision from provider A to provider B. I noticed that our internal recursive Windows DNS servers still had NS records cached for our domains pointing to provider A's servers, even though I changed the nameservers with our registrar several days ago, and even though selecting the properties of the cached records showed a TTL of 1 day.

    After 24 hours, when the NS records in this cache have expired, will the DNS server go back to the TLD server for an update on the authority, or will it go by preference to dns1.providera.com since that is what it has cached? In this case I arranged to leave provider A's servers up for a week to allow changes to propagate, so dns1.providera.com is still active and would still provide NS and SOA records that said that dns1.providera.com. was in charge of this domain. Given this fact, would the Windows DNS server ever go back to the TLD and pick up the authority changes, or would it just assume all was well and renew timestamps on its cached NS records?

    I wonder what would be the best approach to ensuring that caches pick this up. Should I:

    1. Leave provider A's servers in place and active and wait for caches to catch up - basically what we're doing now, which seems to have issues, perhaps specifically for Windows servers, or perhaps more widely.
    2. Leave provider A's servers in place but change the NS and/or SOA information they provide to tell caches that new servers are in charge.
    3. Remove provider A's servers after 2*TTL to force remaining caches to update.

    The issue with (2) is that on provider A's system I can't seem to change the NS or SOA information to anything other than their servers. The issue with (3) is that I'm not sure how a DNS server would behave in this case. When it couldn't reach the cached name servers, would it flush its cache and try a full recursive lookup, or would it just return an error, forcing the user to clear the cache manually?

    Thanks in advance!
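    Two hedged commands that tend to help in this situation (server address and domain below are placeholders): on the internal Windows DNS servers the cache can simply be flushed instead of waiting out the TTL, and any resolver's current belief can be inspected without triggering a fresh recursive lookup:

        dnscmd /clearcache
        dig @192.168.0.10 example.com NS +norecurse

    The first is run on each internal Windows DNS server (the DNS console's "Clear Cache" action does the same thing); the second, from any box with dig installed, shows what that resolver has cached for the NS set right now.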

    Read the article

  • Running multiple sites on a LAMP with secure isolation

    - by David C.
    Hi everybody,

    I have been administering a few LAMP servers with 2-5 sites on each of them. These are basically owned by the same user/client, so there are no security issues except for attacks through vulnerable daemons or scripts. I am building my own server and would like to start hosting multiple sites.

    My first concern is... ISOLATION. How can I avoid that a c99 script could deface all the virtual hosts? Also, should I prevent that c99 from being able to write/read the other sites' directories? (It is easy to "cat" a config.php from another site and then get into the MySQL database.)

    My server is a VPS with 512M burstable to 1G. Among the free hosting managers, is there any small one which works for my VPS (and which maybe is compatible with the security approach I would like to have)? Currently I am not planning to host over 10 sites, but I would not accept that a client/hacker could navigate into unwanted directories or, worse, run malicious scripts. FTP management would be fine; I don't want to complicate things with SSH isolation.

    What is the best practice in this case? Basically, what do hosting companies do to sleep well? :)

    Thanks very much!
    David
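    One common first step, offered as a sketch rather than a full answer (the vhost paths are examples, and this only applies when PHP runs as mod_php): give each vhost its own open_basedir so a PHP shell dropped into one docroot cannot read another site's config.php.

        <VirtualHost *:80>
            ServerName site1.example.com
            DocumentRoot /var/www/site1
            # PHP may only touch this site's tree and its private tmp dir
            php_admin_value open_basedir "/var/www/site1:/var/tmp/site1"
            php_admin_value upload_tmp_dir "/var/tmp/site1"
        </VirtualHost>

    Stronger isolation usually means running each site's PHP as its own system user (suPHP, mod_fcgid or PHP-FPM pools), which is essentially what shared-hosting companies do.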

    Read the article

  • What privilege level is required on a Windows client workstation on an Active Directory domain to break file locks?

    - by Mike Burton
    I'm not sure if I should be asking this here or on StackOverflow, but here goes: I'm part of a team maintaining a document management application, and I'm trying to figure out Windows file locking permissions. We use a utility somebody downloaded years ago called psunlock to remotely close all locks on a file. We recently discovered that this does not work across different domains on our VPN.

    A little bit of digging led me to the Samba manual's discussion of file locking. I still don't really "get it", though. Does anyone have any insight to share into how the process of locking and breaking locks on files works in a network context? My thinking is that privileges are required both on the file appliance and on the client workstations which hold locks. Is that accurate? Can anyone give a more specific version? Ideally I'm looking for something along the lines of "A user must have privilege level X in order to break locks held from a client workstation." In practice I'd be happy with a link to a good white paper on the subject.
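    A hedged illustration of where the privilege actually lives for SMB shares: open files (and their locks) are tracked on the server hosting the share, and can be listed and force-closed there by an account that server treats as an administrator; the client workstation holding the lock does not have to cooperate. The server name and file ID below are examples:

        rem run on (or against) the file server, not the client
        rem list open files and their IDs
        net file
        rem close the file with ID 42, releasing its locks
        net file 42 /close
        rem remote equivalent; openfiles /disconnect can close by ID
        openfiles /query /s fileserver01

    Across domains the account used must still be recognized as an administrator by the target server, which is probably why psunlock stops working there.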

    Read the article

  • How to deal with the extremely big *.ost files in a Terminal Server environment which is running out of space

    - by Wolfgang Kuehne
    Our Terminal Server is running out of hard disk space, and the files which occupy most of the space are the *.ost files of Outlook, which come from the users who work on the Terminal Server all the time through Remote Desktop. Outlook is installed on the Terminal Server and various users can use it.

    What would be a solution in this case? Is there a way to limit the size of the *.ost files? I read in forums that having Outlook 2010 set up in Cached Exchange Mode isn't the best practice for an environment where HDD space is a major constraint.

    The first thing that came to my mind was folder redirection, placing the OST files (together with the AppData folder) on a network share, but this does not help, because the OST files are saved in a part of the AppData folder that cannot be redirected.

    Then I thought: is it possible to limit the size of the OST file? Or limit how long it keeps email cached, say just emails from the last 6 months?

    Another solution that came to my mind is moving the OST files somewhere else; this requires the old OST file to be removed and a new one created. I am not quite sure whether the new OST file will still have the emails which were cached in the old OST, or whether it will start caching from where the other one left off.

    What do you suggest?
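    One knob that does exist for this, offered as a hedged pointer (these are the documented registry values for capping Outlook data files; 14.0 is Outlook 2010, and the 2 GB figure below is just an example to test on one session first): Group Policy or the registry can cap OST/PST size via MaxLargeFileSize and WarnLargeFileSize.

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\14.0\Outlook\PST]
        "MaxLargeFileSize"=dword:00000800
        "WarnLargeFileSize"=dword:00000733

    The values are in MB (0x800 = 2048). Outlook 2013 and later add a "mail to keep offline" slider that limits cache age directly; on 2010 a size cap plus mailbox quotas is about as close as it gets.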

    Read the article

  • The Server Fault Wiki of recommended practices [migrated]

    - by Avery Payne
    So I've noticed that there are several recommendations on basic practices on Server Fault, but there doesn't seem to be a cohesive view as to how those recommendations would all fit together. So I thought I would lump these together as a kind of mental exercise to see what the "ServerFault Community IT Department" would look like if it were implemented. This would give a few things: it would make a reasonable wiki (in the true wiki spirit of many contributions), it would provide several links to well-vetted practices, and it would be kind of fun to see what the amalgamation would look like. And who knows, it may even point out some interesting issues between different forms of "best practices", although I would be stunned if there was a conflict hidden in there someplace...

    Add your favorites from Server Fault as answers, and I'll re-edit this section with the results. Here are a few categories to collect different ideas together.

    Hardware Configuration(s)
    - Server room configuration
    - Server room temperature
    - Firmware Updates and Scheduling

    Storage Configuration(s)
    - Selecting a NAS box
    - Linux: Dealing with /tmp
    - Linux: Install apps in /var or /opt?

    Network Configuration(s)
    - checking DNS health and compliance

    Security Practice(s)
    - Password (General) Best Practices
    - Password sharing methods
    - Windows Update
    - Updating Windows Servers that are hosts for VMs

    Network Service(s)

    User Service(s)
    - User Naming & Deletion

    Upgrade Process(es)

    Disaster Recovery
    - Checking Backups
    - Documenting an outage for a post-mortem review

    Last Edit: 2010-02-17

    Read the article

  • Securing SSH/SFTP and best practices on security

    - by MultiformeIngegno
    I'm on a fresh VPS with Ubuntu Server 12.04. I wanted to ask you about the good practices to apply to enhance security over a stock Ubuntu server. This is what I did up to now: I added Google Authenticator to SSH, then I created a new user (whom I'll use instead of 'root' for SSH & SFTP access) which I added to my /etc/sudoers list below 'root', so now it's:

        # User privilege specification
        root      ALL=(ALL:ALL) ALL
        new_user  ALL=(ALL:ALL) ALL

    Then I edited sshd_config, set PermitRootLogin to 'no' and restarted the ssh service. Is this ok? There are a few things I'd like to ask you though:

    1) What's the point of adding a new (sudoer) user whilst the root user still exists (ok, it can't log in directly, but it's still there..)?

    2) System files are owned by 'root'.. I want to use my new_user to access via SFTP, but with it I can't edit those files!! Should I mass-CHMOD them so that new_user has write permissions too? What's the good practice on this?

    Thanks in advance, I hope you'll tell me if I did something wrong and/or other ways to secure the system. :)
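    On question 2, a hedged note rather than an authoritative answer: mass-chmod'ing system files is usually the thing to avoid; the common pattern is to keep system files owned by root and edit them through sudo (sudoedit) rather than over SFTP. A few sshd_config lines that are often added on top of PermitRootLogin no (keep an existing session open while testing so you don't lock yourself out):

        # /etc/ssh/sshd_config additions - adjust the username to yours
        # keys (plus the OTP PAM setup) only, no plain password logins
        PasswordAuthentication no
        # nobody but this account can even attempt to log in
        AllowUsers new_user
        LoginGraceTime 30
        MaxAuthTries 3

    On Ubuntu 12.04 the restart is "sudo service ssh restart".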

    Read the article

  • Real benefits of tcp TIME-WAIT and implications in production environment

    - by user64204
    SOME THEORY

    I've been doing some reading on TCP TIME-WAIT (here and there) and what I read is that it's a value set to 2 x MSL (maximum segment lifetime) which keeps a connection in the "connection table" for a while, to guarantee that "before you're allowed to create a connection with the same tuple, all the packets belonging to previous incarnations of that tuple will be dead". Since segments received (apart from SYN under specific circumstances) while a connection is either in TIME-WAIT or no longer existing would be discarded, why not close the connection right away?

    Q1: Is it because there is less processing involved in dealing with segments from old connections, and less processing to create a new connection on the same tuple when in TIME-WAIT (i.e. are there performance benefits)?

    If the above explanation doesn't stand, the only reason I see for TIME-WAIT being useful would be if a client sends a SYN for a new connection before it sends the remaining segments of an old connection on the same tuple, in which case the receiver would re-open the connection but then get bad segments and would have to terminate it.

    Q2: Is this analysis correct?

    Q3: Are there other benefits to using TIME-WAIT?

    SOME PRACTICE

    I've been looking at the munin graphs on a production server that I administrate. Here is one: As you can see, there are more connections in TIME-WAIT than ESTABLISHED, around twice as many most of the time, and on some occasions four times as many.

    Q4: Does this have an impact on performance?

    Q5: If so, is it wise/recommended to reduce the TIME-WAIT value (and to what)?

    Q6: Is this ratio of TIME-WAIT / ESTABLISHED connections normal? Could this be related to malicious connection attempts?
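    For the "SOME PRACTICE" part, a quick hedged way to reproduce the munin numbers directly on the box (either command works; ss is the more modern one):

        # count sockets per TCP state
        ss -ant | awk 'NR>1 {count[$1]++} END {for (s in count) print s, count[s]}'
        netstat -ant | awk '/^tcp/ {count[$6]++} END {for (s in count) print s, count[s]}'

    A large TIME-WAIT count mostly costs a little memory and local port space; on Linux the wait itself is a fixed 60 seconds in the kernel rather than a tunable "2 x MSL" value, which is worth knowing before reaching for sysctls.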

    Read the article

  • Multiple LDAP servers with mod_authn_alias: failover not working when the first LDAP is down?

    - by quanta
    I've been trying to set up redundant LDAP servers with Apache 2.2.3.

    /etc/httpd/conf.d/authn_alias.conf:

        <AuthnProviderAlias ldap master>
            AuthLDAPURL ldap://192.168.5.148:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

        <AuthnProviderAlias ldap slave>
            AuthLDAPURL ldap://192.168.5.199:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

    /etc/httpd/conf.d/authz_ldap.conf:

        #
        # mod_authz_ldap can be used to implement access control and
        # authenticate users against an LDAP database.
        #
        LoadModule authz_ldap_module modules/mod_authz_ldap.so

        <IfModule mod_authz_ldap.c>
            <Location />
                AuthBasicProvider master slave
                AuthzLDAPAuthoritative Off
                AuthType Basic
                AuthName "Authorization required"

                AuthzLDAPMemberKey member
                AuthUserFile /home/setup/svn/auth-conf
                AuthzLDAPSetGroupAuth user
                require valid-user
                AuthzLDAPLogLevel error
            </Location>
        </IfModule>

    If I understand correctly, mod_authz_ldap will try to search for users in the second LDAP if the first server is down or OpenLDAP on it is not running. But in practice, it does not happen. Testing by stopping LDAP on the master, I get a "500 Internal Server Error" when accessing the Subversion repository. The error_log shows:

        [11061] auth_ldap authenticate: user quanta authentication failed; URI / [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server]

    Did I misunderstand? Does "AuthBasicProvider ldap1 ldap2" only mean that if mod_authz_ldap can't find the user in ldap1 it will continue with ldap2, without any failover feature (i.e. ldap1 must be running and working fine)?
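    Not an authoritative answer, but one thing that may be worth testing (hedged: verify against the mod_authnz_ldap documentation for your exact 2.2.x build): the LDAP URL itself can carry a space-separated list of hosts, which the LDAP connection layer tries in order, so failover can be expressed inside a single provider instead of across two aliases:

        <AuthnProviderAlias ldap failover>
            AuthLDAPURL "ldap://192.168.5.148 192.168.5.199/dc=domain,dc=vn?cn"
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

        # then: AuthBasicProvider failover

    The provider-list form (AuthBasicProvider master slave) is generally described as moving on when a provider cannot find the user, which is not quite the same thing as moving on when the provider's backend is unreachable.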

    Read the article

  • Windows Server 2008 R2 - VPN Folder Sharing Permissions

    - by daveywc
    I have set up VPN access to my Windows Server 2008 R2 server using RRAS. Clients can connect, run applications, view shares, etc. My problem is that one of the applications that they use relies on some network shares. The application is not able to access the shares unless the user first goes into Windows Explorer and accesses the share, providing their user name and password (the same one that they use to connect via the VPN). Previously, on a different Windows Server 2008 (not R2), this was not necessary, i.e. the application and user could access the share without providing another user name and password.

    I have tried giving the Everyone group full control over the shared folder, both on the Security tab and in the Permissions area under Advanced Sharing on the Sharing tab. This still did not resolve the issue. (I don't really want to give Everyone access anyway; I was hoping that granting access to a group that the VPN users had membership of would be enough.) I have also turned off password protected sharing in the Advanced Sharing Settings area of the Network and Sharing Center (under both Home or Work and Public).

    So my question is: what is preventing my VPN users from having access to these folders without having to re-supply the same login and password that they use to access the VPN? And what is the best practice in this type of scenario?
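    As a stop-gap while the underlying cause is investigated (a hedged suggestion rather than a fix; the server, share and account names are placeholders), the share can be mapped once per user profile with explicit credentials so the application finds it already authenticated:

        rem map the share persistently; the trailing * prompts for the password
        net use X: \\fileserver01\appshare * /user:DOMAIN\jsmith /persistent:yes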

    Read the article

  • how to edit source files and commit the changes to the new website?

    - by ajsie
    I've got Ubuntu installed with LAMP. I'm using WebDAV to upload/download files to/from the Ubuntu web server after I have edited the PHP source files in NetBeans. However, I wonder what the best practice is for editing source files and committing these changes to the new website.

    Because if we are 2-3 developers, I guess we have to use SVN. But I have never used it before, so I wonder how it works. Should I install it and then select /var/www (Apache's webroot) as the repository folder? Then when I check in, will all the changes apply immediately?

    Could someone please explain the following steps: how to download and edit the source files, upload the files, and see the new changes on the website. I have only worked with a local Apache before, and it was only me. Now there will be some more programmers, so I have to set up a decent, central environment for this, and have to know how NetBeans, SVN, WebDAV and Apache work all together. Thanks!
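    A hedged sketch of the usual layout (paths and the repository name are examples): the repository lives outside the webroot, each developer works in their own checkout, and the live site in /var/www is itself just a checkout that gets updated on every commit via a post-commit hook.

        # one-time setup on the server (run with sudo/root)
        svnadmin create /var/svn/mysite
        svn import /path/to/current/site file:///var/svn/mysite -m "initial import"
        svn checkout file:///var/svn/mysite /var/www

        # make the live checkout follow every commit: create /var/svn/mysite/hooks/post-commit
        # with the two lines below and chmod +x it
        #!/bin/sh
        /usr/bin/svn update /var/www --non-interactive -q

    Developers then check out over svn+ssh or http into NetBeans (which has built-in Subversion support) and no longer need WebDAV for the site itself.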

    Read the article

  • load average in top and procs in vmstat

    - by Mingfei.hua
    As far as I know, the load average in top is the number of processes (threads) in running or uninterruptible sleep state. So it should be equal to (procs r + 1) + procs b in vmstat, but in practice these two numbers always show a big gap. Is there anything wrong in my understanding? I would appreciate some guidance on this.

        top - 05:34:50 up 1 day, 20:56,  5 users,  load average: 2.83, 2.67, 1.62
        Tasks:  79 total,   1 running,  78 sleeping,   0 stopped,   0 zombie
        Cpu(s):  6.8%us,  1.8%sy,  0.0%ni, 91.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.4%st
        Mem:   1758000k total,   582636k used,  1175364k free,   103932k buffers
        Swap:   917500k total,        0k used,   917500k free,   180868k cached

        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
         r  b   swpd    free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
         0  0      0 1182524 103784 180860    0    0     1     9    6    53  7  2 91  0  0
         0  0      0 1182524 103784 180860    0    0     0    36   70   117  0  0 100 0  0
         0  0      0 1182516 103784 180860    0    0     0     0   73   132  0  1 100 0  0
         0  0      0 1182516 103784 180860    0    0     0     0   60   127  0  0 100 0  0
         1  0      0 1182516 103784 180860    0    0     0     0   62   102  0  0 100 0  0
         0  0      0 1182628 103784 180860    0    0     0     0  289   238  1  2 97  0  0
         2  0      0 1152160 103784 180892    0    0     0     8 1481  2371 54 12 34  0  0
         1  0      0 1182192 103784 180860    0    0     0     0  681   834 19  4 78  0  0
         0  0      0 1182200 103784 180860    0    0     0     0   80   147  0  1 100 0  0
         0  0      0 1182200 103784 180860    0    0     0     0   53   107  0  0 100 0  0
         0  0      0 1182208 103788 180856    0    0     0    72   64   123  0  0 100 1  0
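    A small hedged aside on checking this by hand: the load average counts tasks in state R (runnable) and D (uninterruptible sleep), and those can be listed directly at any instant, which makes comparison against a single vmstat sample easier:

        # list tasks currently in R or D state
        ps -eo state,pid,cmd | awk '$1 ~ /^(R|D)/'

    Keep in mind the three load-average figures are averages over the last 1, 5 and 15 minutes, while each vmstat line (and its r/b columns) is an instantaneous sample, so the two are not expected to match moment to moment.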

    Read the article

  • CDN Rerouting on 404 (file not yet in synch with original storage)

    - by Alan Ristic
    Here is the problem. I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to be able to serve static files (CDN-style) from my 'home' server, so I wrote a script that syncs from S3. But there is a window of (at least) one minute before things are in sync.

    Now I see two solutions to the problem of pics not yet being available on the 'home' server:

    1. I write a script on EC2 (where the app resides) to fetch from the DB the pics that have a status of "not-yet-synched", which is the default state when a user uploads a picture. The script then pings each picture and, if it gets an OK response, updates the DB from "not-yet-synched" to "synched".
    2. The preferred solution would be to let Apache (in this case) redirect a request for an image to S3 if it sees a 404 (i.e. doesn't find the requested image). This way I wouldn't need the script from solution 1.

    So what approach do you suggest I take in solving this redundancy problem? What is the practice in production environments? To further clarify: I'd like to serve images first from the 'home' server, and if that fails, serve them from S3.

    Tnx,
    Alan
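    For option 2, a hedged mod_rewrite sketch (the bucket name and the /uploads/ prefix are placeholders; it sends a redirect to the client rather than proxying, which keeps the home server out of the data path during the not-yet-synched minute):

        # inside the relevant <VirtualHost>
        RewriteEngine On
        # if the requested image does not exist locally yet...
        RewriteCond %{REQUEST_FILENAME} !-f
        # ...redirect the client to the same object on S3
        RewriteRule ^/uploads/(.*)$ http://mybucket.s3.amazonaws.com/uploads/$1 [R=302,L]

    Swapping [R=302,L] for [P,L] would proxy instead of redirect (it needs mod_proxy), at the cost of tying up the home server for the transfer.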

    Read the article
