Search Results

Search found 5762 results on 231 pages for 'clients'.

Page 69/231 | < Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >

  • Using our own certificate authority for business email encryption

    - by LumenAlbum
    I've read the similar questions available on Server Fault but haven't quite found a definite answer to the security aspect, hence my question: I'm the administrator of an office working with tax data and we want to start using certificate-based email encryption with our clients. Considering the prices VeriSign & Co charge for issued certificates, I was wondering whether we couldn't issue the necessary certificates with a certificate authority of our own. I realize that it would not offer the trust hierarchy that commercial certificates do, but I don't see why we would need that. Most of our clients are small businesses and only 20% of them even exchange data with us via email. So if we were to issue certificates for those 20% and for our employees, that would enable us to use encrypted email. Of course they would have to trust our certificate authority, and thus receive our public root certificate once. But if we hand it out (or install it) in person, they'd know that it really is our certificate. Is there a huge security risk that I am missing here? As long as nobody has access to our certificate authority server, nobody should be able to interfere with security, right? And the client certificates would be generated and handed out by us as well... Please advise me if I am making an error in judgement here, and thank you in advance.
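
    A minimal OpenSSL sketch of what running such an internal CA would involve (organization names and file names are placeholders; a production setup would also add the appropriate X.509v3 extensions, e.g. an emailProtection EKU, via an extensions file):

      # create the CA key and a self-signed root certificate (distribute ca.crt to clients in person)
      openssl genrsa -aes256 -out ca.key 4096
      openssl req -new -x509 -days 3650 -key ca.key -out ca.crt -subj "/O=Example Tax Office/CN=Example Tax Office Root CA"

      # issue an S/MIME certificate for one client and bundle it for import into their mail program
      openssl genrsa -out client.key 2048
      openssl req -new -key client.key -out client.csr -subj "/CN=Client Name/emailAddress=client@example.com"
      openssl x509 -req -days 730 -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt
      openssl pkcs12 -export -in client.crt -inkey client.key -certfile ca.crt -out client.p12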

    Read the article

  • Dynamic ARP Entries turning into Static ARP entries

    - by Zach
    I recently acquired a client that has a strange ARP caching issue on one of their servers. I have a server that eventually starts turning its dynamic ARP entries into static ARP entries. This causes problems because when a machine that has a static ARP entry on this server receives a new IP via DHCP, the server is no longer able to communicate with it. Clearing the ARP cache resolves the issue and the server is fine for about a week, and then it slowly starts turning ARP entries into static ones again. I haven't narrowed down when it starts or how many entries it affects, but slowly you start seeing 1 static ARP entry, then 5, then 10. The server in question is Windows Server 2003 SP2. It is a DC, DHCP, and DNS server. I've checked the DHCP scope options and there's nothing in there that would have anything to do with static ARP entries. The only difference between this DNS server and our other DNS server is that 'Dynamically update DNS A and PTR records for DHCP clients that do not request updates' is checked on the problematic server. I've done a bit of research and it seems this can happen if any PXE-type services are running, but from what I can tell nothing is running a PXE server. I'm a bit lost, as I have never seen dynamic ARP entries turn into static ARP entries. Right now my workaround is a scheduled task that runs every 24 hours to clear the ARP cache (arp -d *). I would like to not rely on this scheduled task. Has anybody seen this before, or have any suggestions on how to troubleshoot it?
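
    For reference, a sketch of how the interim workaround and a quick inspection might look (task name and run time are arbitrary; the HH:MM:SS start-time format is for the 2003-era schtasks):

      REM see which entries have gone static (Type column)
      arp -a
      REM flush the cache, equivalent to arp -d *
      netsh interface ip delete arpcache
      REM the existing daily workaround as a scheduled task
      schtasks /Create /TN "Flush ARP cache" /TR "cmd /c arp -d *" /SC DAILY /ST 03:00:00 /RU SYSTEM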

    Read the article

  • Adding new SPNs to existing service ids

    - by jmh
    We have a Tomcat server using spring-security Kerberos to authenticate users to the web page against Active Directory. There are around 25 domain controllers. The site has two CNAME-based DNS aliases. The site currently has one service ID with SPNs registered for the DNS A record as well as for each of the CNAMEs. While everything is working right now, I don't know how to reliably change this configuration without possible downtime. The reason is that clients cache Kerberos tickets: http://www.juniper.net/techpubs/en_US/uac4.2/topics/concept/user-role-active-directory-about.html "The 'kerbtray.exe' program is helpful for viewing and deleting Kerberos tickets on the endpoint. Old tickets must be purged from the endpoint if SPNs are updated or passwords are changed (assuming the endpoint still has a cached copy of the ticket from a prior SPNEGO request to the MAG Series device). During testing, you should purge tickets before each authentication request." Description of the "klist" program used to inspect/delete cached tickets: http://technet.microsoft.com/en-us/library/hh134826.aspx So if each of the clients (users running Windows) who connect to my web server has Kerberos tickets that become invalid as soon as I update the SPNs or passwords, how do I ensure changes are seamless? Are there any operations that can be done safely? I can't just ask all of the users to install klist and delete their old tickets.
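
    The ticket-side and SPN-side commands involved would look roughly like this (a sketch; the service account and alias names are placeholders, klist ships with Windows 7/Server 2008 R2 and later, and the duplicate-checking -S switch needs the 2008-era setspn - older versions use -A):

      REM on an endpoint: list and purge cached Kerberos tickets
      klist tickets
      klist purge

      REM on a DC or admin workstation: inspect the service account's SPNs and add one for a new alias
      setspn -L DOMAIN\tomcat-svc
      setspn -S HTTP/newalias.example.com DOMAIN\tomcat-svc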

    Read the article

  • Changing default openVPN IP in linux server

    - by Lamboo
    The problem is that we run a public OpenVPN service. Pay €9.95 and you get an OpenVPN account on currently half a dozen servers for a month. This means there always are, and always will be, some people who create a certain amount of abuse or trouble. In the long run, the external IP every OpenVPN user gets assigned ends up prohibited from editing Wikipedia, and it might be banned by e-gold and on some popular web forums, one-click hosters, etc. Not a pleasant experience for the 97% of our customers who use our service responsibly and legitimately to regain their privacy. So if I could change the assigned external IP even just every few months - e.g. from 216.xx.xx.164 to 216.xx.xx.170 - it would help us a lot in combating this abuse and providing our paying clients with "fresh" IP addresses that aren't yet banned or restricted on popular Internet sites and services. Does anybody know how to change the first IP address assigned to the public interface in CentOS? So that, e.g., OpenVPN in future doesn't give our OpenVPN clients the external IP 123.xx.xx.164 but rather 123.xx.xx.170?
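
    A minimal sketch of adding an additional public address on CentOS and sending the VPN clients' traffic out through it (the interface name, netmask and the 10.8.0.0/24 tunnel subnet are assumptions to adjust):

      # non-persistent test
      ip addr add 216.xx.xx.170/24 dev eth0

      # persistent alias: /etc/sysconfig/network-scripts/ifcfg-eth0:0
      #   DEVICE=eth0:0
      #   IPADDR=216.xx.xx.170
      #   NETMASK=255.255.255.0
      #   ONBOOT=yes
      service network restart

      # NAT the OpenVPN clients out of the new address instead of the primary one
      iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to-source 216.xx.xx.170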

    Read the article

  • In spite of correct DNS, Exchange sending to wrong destination server for single outbound domain

    - by beporter
    My company uses an SBS 2003 server and makes use of Exchange to host our own email. We also have a linux server hosting domains for some of our clients. In order for us to send to those clients, we had internal DNS set up to shadow the client domains to provide "correct" MX records inside our network. For example, public DNS for a domain abc.com might point to 1.2.3.4, but internally we have MX records set up to route mail for abc.com to 172.16.0.4, which is the linux email server. This setup was entirely functional; this is just back story. We've recently moved one of our client domains from our internal linux server to an external email provider. When we did that, we naturally deleted our internal shadow DNS records so our Exchange server would fetch correct (public) DNS records and route mail out to the new external host. This has NOT had any effect on Exchange though. Even after rebooting the Exchange server and completely flushing the DNS cache (nslookups on the Exchange machine itself correctly resolve to the new external address) Exchange still attempts to deliver messages for the domain to our internal server! Exchange correctly routes to all other internal and external domains when sending email. Somehow Exchange is trying to deliver to a machine that by all accounts it has no business trying to use for just this one domain. Is there a DNS cache that Exchange uses internally? Is there a way to flush that internal cache? What else could I be missing?
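
    If it helps, the caches usually flushed in this situation can be cleared like this (a sketch; dnscmd is part of the Windows Support Tools, and SMTPSVC is the SMTP service Exchange 2003 uses for outbound routing):

      ipconfig /flushdns
      dnscmd /clearcache
      REM Exchange 2003's SMTP engine keeps its own routing/DNS state; bouncing the service forces a fresh lookup
      net stop SMTPSVC
      net start SMTPSVC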

    Read the article

  • Millions of files in PHP's tmp folder - how to delete them?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is - it's not like I can ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; command, but that didn't work. Neither did ls 'sess_0*' | xargs rm. I'm currently running rm -rf tmp, but after two hours the folder appears to be the same size. REFERENCE INFO: I suddenly encountered an error where sessions could no longer be written to disk:

      [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0
      [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

    I ran:

      $ df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/md0              457G  126G  308G  29% /
      tmpfs                 1.8G     0  1.8G   0% /lib/init/rw
      udev                   10M  664K  9.4M   7% /dev
      tmpfs                 1.8G     0  1.8G   0% /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

      kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

    This pointed to a full directory, obviously, but since my web folder only has 60k files (I counted them), I guessed it was the tmp folder (the local one, for this instance of PHP) that messed things up. Some commands I ran:

      $ sudo ls sess_a* | xargs rm -f
      bash: /usr/bin/sudo: Argument list too long

      $ find . -exec rm {} \;
      rm: cannot remove directory '.'
      find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, php5, ISPConfig, SuEXEC and Fast-CGI.
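
    A sketch of the usual way to clear out a directory this large without building a huge argument list (the path is the session directory from the error above; -delete needs GNU find 4.2.3 or newer, and the -exec ... + form batches arguments as a fallback):

      find /var/www/clients/client1/web1/tmp -maxdepth 1 -type f -name 'sess_*' -delete

      # older find without -delete
      find /var/www/clients/client1/web1/tmp -maxdepth 1 -type f -name 'sess_*' -exec rm -f {} +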

    Read the article

  • weird routes automatically being added to windows routing table

    - by simon
    On our Windows 2003 domain, with XP clients, we have started seeing routes appearing in the routing tables on both the servers and the clients. The route is a /32 for another computer on the domain. The route gets added when one Windows computer connects to another computer and needs to authenticate. For example, if computer A with IP 10.0.1.5/24 browses the C: drive of computer B with IP 10.0.2.5/24, a static route will get added on computer B like so:

      dest       netmask          gateway    interface
      10.0.1.5   255.255.255.255  10.0.2.1   10.0.2.5

    This also happens on Windows-authenticated SQL Server connections. It does not happen when computers A and B are on the same subnet. None of the servers have RIP or any other routing protocols enabled, and there are no batch files etc. setting routes automatically. There is another Windows domain that we manage with a near-identical configuration that is not exhibiting this behaviour. The only difference with that domain is that it is not up to date with its patches. Is this meant to be happening? Has anyone else seen this? Why is it needed when I have perfectly good default gateways set on all the computers on the domain?!
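
    One thing worth checking - purely a guess at the cause - is whether the gateways are planting these host routes via ICMP redirects, which Windows honours by default. A quick look, as a sketch:

      route print 10.0.1.5
      REM the value is usually absent, which means redirects are enabled (treated as 1)
      reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableICMPRedirect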

    Read the article

  • Poor SSL performance with vsftpd

    - by petrus
    I'm trying to tweak vsftpd to achieve maximum performance for my usage: I have only one or two clients that connect to the server. File size is between ~15MB and 1GB. A typical transfer batch represents between 1 and 2GB of data. For testing purposes, I'm using a tmpfs on both sides (thus eliminating any disk bottleneck) with a single 1GB file. When SSL is disabled, performance is good, with a transfer rate of ~120MB/s (reaching the limits of gigabit networking). With SSL enabled only for control traffic (and not data traffic), performance drops to about 112MB/s, which is still within acceptable limits. However, when SSL is enabled for data flows, the transfer speed drops dramatically:

      6.7MB/s using 3DES & SHA (ssl_ciphers=DES-CBC3-SHA in vsftpd.conf)
      16MB/s  using DES & SHA  (ssl_ciphers=DES-CBC-SHA)

    I haven't tested other ciphers, but from what I can see of the CPU usage during the transfer, it seems that vsftpd only uses a single CPU/core per client. While this may be fine for large FTP sites with hundreds of clients, I'd like to avoid this behavior and use more of the server's resources. On a side note, if you have any ideas regarding other OpenSSL ciphers...
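
    Since the bottleneck is the cipher itself, one cheap thing to try is a faster algorithm for the data channel. A sketch of the relevant vsftpd.conf lines (any name accepted by openssl ciphers works; AES128-SHA is usually much faster than 3DES, and an AES-NI-capable CPU with a recent OpenSSL helps further):

      ssl_enable=YES
      ssl_ciphers=AES128-SHA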

    Read the article

  • Copying compressed files from Server 2008 R2 network share to XP client via VPN fails

    - by Dejan Janjuševic
    At first sight this question looks similar to this one. I have experienced odd behavior while trying to copy a certain file from a Windows Server 2008 R2 network share to a Windows XP Professional client via VPN. The VPN was set up using RRAS on the server machine. I will try to provide as much information as possible in order to make the issue clearer. When trying to copy a compressed file sized ~2.5 MB (via Explorer or CMD, it doesn't matter), the process stalls after some 20% and produces an error message after a few seconds: Cannot copy filename: The specified network name is no longer available. If I start the command ping -t 192.168.2.1 (where the IP address specified belongs to the server) side by side with the copy command, I can clearly see that the ping command times out for a few seconds as the copy process stalls. When this happens, all network activity is frozen. After a few seconds the network recovers and ping continues to run normally, but the copy process stands still until it displays the above error message. Copying other files (I tried 4-5 files), some larger and some smaller, succeeds. It seems I can copy all uncompressed files; as soon as I try to copy an archive, the process freezes. Even a 707 KB archive can't be copied. I can only reproduce this behavior on 2 machines, both Windows XP Professional, one with SP2 and the other with SP3. Other XP clients don't have this problem, and neither do Windows 7 clients. If I connect to the server using Remote Desktop Connection without the VPN from either of these 2 machines (using the same user account), I can copy anything I want normally, even these "problematic" files. Does anyone have any clue about what could possibly be going on?
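
    One factor often behind "small or compressible data works, archives fail" over a VPN is fragmentation: compressed files produce full-size packets that may exceed the tunnel's effective MTU. This is only a guess, but it is cheap to check from the XP client (1472 is 1500 minus 28 bytes of ICMP/IP headers; step the size down until the don't-fragment ping stops failing):

      ping -f -l 1472 192.168.2.1
      ping -f -l 1400 192.168.2.1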

    Read the article

  • How to find the cause of locked user account in Windows AD domain

    - by Stephane
    After a recent incident with Outlook, I was wondering how I would most efficiently resolve the following problem: Assume a fairly typical small-to-medium-sized AD infrastructure: several DCs, a number of internal servers and Windows clients, several services using AD and LDAP for user authentication from within the DMZ (SMTP relay, VPN, Citrix, etc.) and several internal services all relying on AD for authentication (Exchange, SQL Server, file and print servers, terminal services servers). You have full access to all systems, but they are a bit too numerous (counting the clients) to check individually. Now assume that, for some unknown reason, one (or more) user account gets locked out by the password lockout policy every few minutes. What would be the best way to find the service/machine responsible for this? Assuming the infrastructure is pure, standard Windows with no additional management tools and few changes from the defaults, is there any way the process of finding the cause of such a lockout could be accelerated or improved? What could be done to improve the resilience of the system against such an account-lockout DoS? Disabling account lockout is an obvious answer, but then you run into the issue of users having far too easily exploitable passwords, even with complexity enforced.
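
    A sketch of the usual starting point: lockouts are recorded on the DC that processed the bad passwords and on the PDC emulator as Security event 644 on Server 2003 (4740 on 2008 and later), and the event names the calling computer. On 2003 the built-in eventquery.vbs can pull them, and Microsoft's free LockoutStatus.exe/EventCombMT tools automate the search across DCs:

      cscript //nologo %windir%\system32\eventquery.vbs /L Security /FI "ID eq 644"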

    Read the article

  • Add static route through DHCP

    - by MathieuK
    I'm trying to get an OS X Lion Server to provide a static route to its clients (all OS X Lion) over DHCP. I can't get the client to actually apply the static route. So far, I've managed to get the DHCP server (bootpd) to serve DHCP option 33 (static_route) in its DHCP offers by editing /etc/bootpd.plist and adding something like:

      <key>dhcp_option_33</key>
      <data>[some base64 goes here]</data>

    ...and restarting the DHCP service. On the client I've managed to get it to actually request the option by adding 33 to the DHCPRequestedParameterList key:

      <key>DHCPRequestedParameterList</key>
      <array>
        ... keys snipped for brevity ...
        <integer>33</integer>
      </array>

    ...and rebooting the client. This makes the client request the static_route option from the DHCP server (I can see the proper output in ipconfig getpacket en0), but it doesn't actually apply the route. Has anyone ever succeeded in applying static_route options on OS X clients through DHCP?
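
    For what it's worth, the option 33 payload is just concatenated destination-IP/router-IP pairs (classful routes, so no netmask is carried - one reason clients often prefer option 121, classless static routes, instead). A sketch of generating the base64 for a single route, with example addresses:

      # route to 192.168.2.0 via 192.168.1.1 -> 8 raw bytes, base64-encoded for bootpd.plist
      printf '\xc0\xa8\x02\x00\xc0\xa8\x01\x01' | openssl base64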

    Read the article

  • iptables forwarding to a dummy interface

    - by madinc
    Hi, I'm trying to accomplish the following: I have a box with a service listening on a dummy interface (say 172.16.0.1), UDP port 5555. What I'd like to do is take packets that arrive on interfaces eth0 (1.1.1.1:5555) and eth1 (2.2.2.2:5555), forward them to the service on the dummy interface, and have replies go back to clients out the same physical interface they came in on. Clients must think they're talking to 1.1.1.1:5555 or 2.2.2.2:5555. I think I need a mix of iptables rules and packet marking, plus some iproute rules (if it's possible at all). What I tried is to catch packets coming in from eth0 and eth1 on UDP port 5555, mark them with 1 and 2 respectively, and --save-mark into the connmark. Then I used a DNAT to 172.16.0.1. The service seems to be getting the packets. Now I'm not sure how to do the reverse. It seems that for packets originating from the box you can't do anything before the routing decision, but that would be the place to restore the marks and thus make a routing decision based on them. Here's what I have so far:

      iptables -t mangle -A PREROUTING -d 1.1.1.1 -p udp --dport 5555 -j MARK --set-mark 1
      iptables -t mangle -A PREROUTING -d 2.2.2.2 -p udp --dport 5555 -j MARK --set-mark 2
      iptables -t mangle -A PREROUTING -d 1.1.1.1 -p udp --dport 5555 -j CONNMARK --save-mark
      iptables -t mangle -A PREROUTING -d 2.2.2.2 -p udp --dport 5555 -j CONNMARK --save-mark
      iptables -t nat    -A PREROUTING -m mark --mark 1 -j DNAT --to-destination 172.16.0.1
      iptables -t nat    -A PREROUTING -m mark --mark 2 -j DNAT --to-destination 172.16.0.1
      # What next?

    As I said, I'm not even sure it can be done. To give a bit of background, it's an old OpenVPN installation that cannot be upgraded (otherwise I'd install a recent version that supports multihoming natively). Thanks for any help.
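
    A sketch of the missing half, under the assumption that conntrack will un-DNAT the replies automatically: restore the connmark onto locally generated reply packets in mangle OUTPUT (which is traversed before the re-route check), then use policy routing to pick the uplink by mark. The gateway addresses are placeholders:

      # copy the saved connection mark back onto outgoing reply packets
      iptables -t mangle -A OUTPUT -p udp --sport 5555 -j CONNMARK --restore-mark

      # route by mark: replies for connections that came in via eth0 leave via eth0, etc.
      ip rule add fwmark 1 table 101
      ip rule add fwmark 2 table 102
      ip route add default via 1.1.1.254 dev eth0 table 101
      ip route add default via 2.2.2.254 dev eth1 table 102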

    Read the article

  • FreeRADIUS Default Answer

    - by jinanwow
    We are using FreeRADIUS with a MySQL database to authenticate users. We ran into an issue where our MySQL database was slow, causing the maximum number of threads to be reached. The problem with this is that when the server couldn't answer requests because no threads were available, it sent Access-Reject responses to the clients. Our devices cache client connections and periodically check with the server to see whether they should still be allowed or be removed. The equipment is designed so that if there is no response from the server and a client is connected, it remains connected. The issue is that when the RADIUS server is at its maximum thread count, its default answer is to send Access-Reject (verified via packet capture); however, we would like to change the default behavior to just ignore the request (keeping the clients connected). We have fixed the MySQL database issue for now, but I would like to change the default from Access-Reject to just ignoring the client altogether. I have done some research but have not been able to find an answer to this question. Thanks in advance.

    Read the article

  • Is it worth hiring a hacker to perform some penetration testing on my servers?

    - by Brann
    I'm working in a small IT company with paranoid clients, so security has always been an important consideration for us. In the past, we've already commissioned two penetration tests from independent companies specialized in this area (Dionach and GSS). We've also run some automated penetration tests using Nessus. Those two auditors were given a lot of insider information, and found almost nothing*... While it feels comfortable to think our system is perfectly secure (and it was certainly comfortable to show those reports to our clients when they performed their due diligence), I have a hard time believing that we've achieved a perfectly secure system, especially considering that we have no security specialist in our company (security has always been a concern, and we're completely paranoid, which helps, but that's as far as it goes!). If hackers can hack into companies that probably employ at least a few people whose sole task is to ensure their data stays private, surely they could hack into our small business, right? Does someone have any experience in hiring an "ethical hacker"? How do you find one? How much would it cost? *The only recommendation they made was to upgrade our remote desktop protocols on two Windows servers, which they were able to access because we gave them the correct non-standard port and whitelisted their IP.

    Read the article

  • Options for small windows network setup without dedicated server?

    - by Mitch
    I'm very weak on networking and hope someone can point me in the right direction: I have written some Windows client/server software which incorporates a database located on a Windows server. I have a test installation running at a customer's office where the server has a static IP address. In this case it's easy for the clients to access the database because of the fixed IP address. Also, customers with network servers generally have specialist support staff to set up my software, so it's not such a problem for me. However, I also need to offer the software to customers who have small offices with fewer than 10 PCs and no dedicated network server. In this case I want the customer to be able to nominate one PC as the database "server", install my software on it, and have the clients access it. But in this situation I believe the "server" PC may not have a fixed IP address. Q1: What is the best way to set this up simply and make it work? Can I reliably reference the "server" by its name, or is there a way to assign dummy fixed IP addresses? Ideally this needs to be workable on small networks running a mixture of XP/Vista/Windows 7, as my target market may well have mixed OSes, etc. I guess this would be akin to home networking? Many thanks, Mitch
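
    Two hedged building blocks for that scenario: referencing the nominated PC by its hostname generally works on a single-subnet office LAN, and if a fixed address is still wanted it can be set outside the router's DHCP pool. The adapter name, addresses and metric below are examples only:

      REM check that clients can resolve and reach the nominated "server" PC by name
      ping SERVERPC

      REM or give that PC a static address outside the router's DHCP range
      netsh interface ip set address name="Local Area Connection" static 192.168.1.250 255.255.255.0 192.168.1.1 1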

    Read the article

  • Can I make TCP/IP sessions last less than 60 seconds?

    - by par
    Our server is overloaded with TCP/IP sessions; we have 1200-1500 of them. Most of them are hanging in the TIME_WAIT state. It turns out that a connection in TIME_WAIT occupies a socket until a 60-second timeout has elapsed. The problem is that the server becomes unresponsive and many clients are not getting served. I made a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I see that the TCP/IP connection hangs in TIME_WAIT for 60 seconds. Is there any way to get rid of the TIME_WAIT delay, or shorten it, to free the socket for new connections? I understand why a TCP/IP connection enters TIME_WAIT, but I don't understand why Internet Explorer does not close the connection after the XML file download is over. The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). OS: Ubuntu. The Apache "KeepAlive" option is set to 0.
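
    Two knobs commonly discussed here, as a sketch (the TIME_WAIT interval itself is fixed in the Linux kernel; tcp_tw_reuse only helps outbound connections, and re-enabling HTTP keep-alive reduces how many short-lived connections are created in the first place):

      # Apache (httpd.conf / apache2.conf): reuse TCP connections across requests
      #   KeepAlive On

      # kernel: allow sockets in TIME_WAIT to be reused for new connections
      sysctl -w net.ipv4.tcp_tw_reuse=1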

    Read the article

  • Migrate users from one Active Directory domain to another?

    - by Matt
    I work for a company that hosts desktops for a number of different companies. At the moment, all the clients access a single domain controller called HOSTING. Under that are groups for each company. Each of the hosting servers exists on the same network and is therefore potentially browsable by other terminal servers. This has raised some security issues and I've found it a little tricky to manage the security. As well, it's possible to see who the other hosted companies are, even though other users cannot see their data. What I'd like to do is isolate each client's terminal server(s) into its own VLAN. In addition, I'm thinking that each TS would have its own DC, which could just run on the TS for that company. Overhead for a DC is fairly minimal. This would completely isolate users on that TS from seeing the other companies. Firstly, does this sound like a sensible plan? Second... if it is sensible, how would I go about pulling the accounts from the HOSTING domain into a new domain? Ideally without the need for users to change their passwords?

    Read the article

  • Using WebDAV for automated downloads

    - by Geo Ego
    I currently manage a number of sites (at one point about a dozen, currently four, but soon growing into the dozens or hundreds) that serve a piece of software to clients at their remote locations. Our web server is Windows SBS Server 2k3, and the remote servers are Windows Server 2k3. When we have a new version of the software, I upload it to a specific directory and rename it; each time the clients boot, they pull their software from that directory. With just a few sites, it's no problem for me to RDP in and copy the files over. As the number grows, this will quickly become quite unwieldy. So I'm thinking that WebDAV could be part of a solution, so that I could simply push the newest version to our server (Windows SBS Server 2003) and make it available for the sites to grab. However, on the remote server side, what are some suggestions for automating the download? I only want the servers to download the files during downtime (between 3 AM and 9 AM), and only if there is a new version available. I had thought of writing a program that checks the files on the WebDAV server at a regular interval, compares a hash of the current software to a hash of the software on the server, and only downloads if they differ, but I'm wondering if there is something I am unaware of that can automate the process.
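
    One low-tech sketch that covers both requirements with built-in tools: a scheduled task in the 3-9 AM window that maps the share (the WebClient service lets net use map an HTTP/WebDAV URL) and runs xcopy /D, which only copies files newer than the local copy. The server name, paths and credentials are placeholders:

      net use X: http://sbs-server.example.com/software /user:DOMAIN\deploy PASSWORD
      xcopy X:\latest\*.* C:\App\ /D /Y
      net use X: /delete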

    Read the article

  • Using OSX home directories from linux

    - by Steffen
    I'm running an OS X (Snow Leopard) Server with Open Directory, which is essentially a modified OpenLDAP with some Apple-specific schemas. However, I want to reuse this directory on some of my Linux (Debian Squeeze) boxes. It's no problem to authenticate against OS X's LDAP server; that works fine already. What I struggle with is the way the home folders are specified in OS X. If I query the passwd config on one of my Linux machines, the entries imported from OS X look like this:

      myaccount:x:1034:1026:Firstname Lastname:/Network/Servers/hostname.example.com/Volumes/MyShare/Users/myaccount:/bin/bash

    While those network home folders might be fine for OS X clients, I don't want those server-based paths on my Linux machines. I saw that there is an NFSHomeDirectory attribute in the OS X user inspector, but if I change it, the whole user home path gets changed. Since my users should be able to log in on both systems, OS X and Linux, that is not what I want. Does anyone have an idea how I must configure OS X so that my Linux machines use home folders like /net/myaccount while the configuration for OS X clients stays untouched?
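
    One approach that leaves the directory itself untouched is to rewrite the attribute on the Linux side. A sketch assuming the Debian boxes use libnss-ldapd (nslcd), whose map directive in recent versions accepts an expression, so the server-supplied home path can be replaced with a local template (server and base DN are placeholders):

      # /etc/nslcd.conf (fragment)
      uri  ldap://osx-server.example.com/
      base dc=example,dc=com
      # ignore the /Network/Servers/... value from Open Directory and build a local path from the login name
      map passwd homeDirectory "/net/$uid"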

    Read the article

  • Auto Log-Off Windows users - Windows 2003 domain

    - by thehatter
    Hi! I am trying to make Windows clients automatically log off after some idle time. I have been trying to use winexit.scr, which I have seen working elsewhere in a similar environment. After working through these instructions (I did read the comments and noticed the original ADM provided is buggy), I've had no joy whatsoever! Winexit.scr refuses to read any settings from the registry, even though with a test account I can access the required registry key(s) and edit, add, and remove values. Essentially winexit.scr always uses its default values: a 30-second timeout and no forced log-off. What I really want is a 30-minute timeout with a forced log-off, closing all the user's apps, etc. I've tried removing and re-adding the ADM template, creating the GPO from scratch several times, and giving various registry permissions - including full control to "Everybody", just for fun! Oh, clients are all Win XP SP3, the DC is Win 2003 R2 SP2. So, can anybody suggest something? Cheers!

    Read the article

  • Can I use a Windows Server 2003 Domain Controller but my home router for DNS?

    - by NetworkingWannabie
    Hi all. Probably easiest to start with a description of my current setup, which works (oh, and this is a home setup, not an office or anything): I have an ADSL modem with a static IP address (192.168.128.1), and its DHCP capability is disabled. I have a permanently powered-up Windows Server 2003 machine with a fixed IP (192.168.128.2) which provides my domain controller, DHCP, and DNS. The default gateway for everything is the ADSL modem, and everything is set up to use the WS2003 machine as primary DNS with the ADSL modem as secondary DNS just in case the server goes down ("everything" includes the server itself). Lastly, just in case it's relevant, I have my DHCP leases set to infinite (or whatever the right term is). Everything is pretty hunky dory. Except, that is, for the fact that my server is ALWAYS on, and it isn't always used, so I'm burning juice that I don't need to - my server burns around 120W, which isn't immense but isn't irrelevant either - so I'd like to put it into a standby state when it isn't being used (the deeper the standby the better) and then have the clients wake it up. Am I correct in assuming that this won't work at the moment - a given client would need an IP address to wake the machine up, but it needs the machine to be awake to get an IP - catch-22? Assuming I'm correct, can I move to using my router (which is always on) for DHCP? What impact will this have on the DC and DNS? Alternatively, does anyone have a better way for me to achieve this? Can I get the server to wake up when it sees clients looking for a DHCP server, etc.? Wow, that came out longer than expected! Thanks for your help.

    Read the article

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories to use for my clients. I need to control access to each repository individually - not just push access, but clone access as well. I've got an .htaccess set up which requires authentication globally:

      AuthUserFile /path/to/hgweb.passwd
      AuthGroupFile /dev/null
      AuthName "Chris Lawlor Client Mercurial Repositories"
      AuthType Basic
      <Limit GET POST PUT>
      Require valid-user
      </Limit>
      <FilesMatch "\.(htaccess|passwd|config|bak)$">
      Order Allow,Deny
      Deny from all
      </FilesMatch>

    Then in each repository, I've got a .hg/hgrc file restricting push access to specific users:

      [web]
      allow_push = <comma separated user list>

    This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that, though; it seems a little convoluted. I can already specify a list of authorized users for each repository in that repo's hgrc file with the allow_push setting. If only there were an allow_clone setting as well... All the documentation I've found for hgwebdir.cgi is incomplete. I've read:

      http://mercurial.selenic.com/wiki/HgWebDirStepByStep
      http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi
      http://hgbook.red-bean.com/read/collaborating-with-other-people.html

    And others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question as a Mercurial question. Unless I can find a better approach, I'll be going with a separate .htaccess and .passwd file for each repo. This is a virtual host on Webfaction, if it matters - set up roughly like this: http://docs.webfaction.com/software/mercurial.html
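
    For what it's worth, hgweb does have a per-repository read/clone control to go with allow_push - worth checking against the hgrc man page for the Mercurial version in use. A sketch for one repository's .hg/hgrc, with example usernames:

      [web]
      # users allowed to see and clone this repository (checked against the HTTP auth username)
      allow_read = alice, bob
      # users allowed to push
      allow_push = alice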

    Read the article

  • I want my logs sent to my mail with logrotate

    - by lericson
    Not strictly a question about programming as such, more of a log handling question. Anyway: my company has multiple clients, and each of these clients has a set of logs that I'd very much like to have sent to me by e-mail. Another prerequisite is that they're highlighted with simple HTML. That part is fine - I've managed to make a highlighter for the given log types. So, what I do is use logrotate's prerotate stuff to send the logs as an e-mail message. Example:

      /var/log/a.log /var/log/b.log {
          daily
          missingok
          copytruncate
          prerotate
              /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
          endscript
      }

    The problem with this approach is basically that logrotate sucks: it'll run the command for every log file specified in the stanza, and to my knowledge there's no way to know which of the log files is currently being handled. (Which wouldn't really help anyway.) Short of repeating the exact same logrotate configuration up to 10 times on different machines, all I can do is get bogged down with log spam every night. And I grew tired of it today, so I ask.
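
    As a sketch of one way around the "which file is this?" problem: when sharedscripts is not set, logrotate runs prerotate/postrotate once per matching file and, per the logrotate(8) man page in reasonably recent versions, passes that file's absolute path as the script's first argument, so each log can be mailed individually:

      /var/log/a.log /var/log/b.log {
          daily
          missingok
          copytruncate
          prerotate
              # "$1" is the log file currently being rotated
              /usr/bin/python /home/foo/hilight_logs "$1" | /usr/sbin/sendmail -F"Log mailer" [email protected] [email protected]
          endscript
      }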

    Read the article

  • SVN hangs on commit - any suggestions for troubleshooting?

    - by Richard Beier
    We're having a problem with SVN... Subversion clients such as TortoiseSVN hang when we commit any more than a few files at a time to our server. Everything appears to actually be committed successfully to the repository; but the client hangs after all the data has been transmitted. We're using version 1.4.4 of the SVN server. We use the svn:// protocol rather than http to connect. We've reproduced this problem with several clients: TortoiseSVN (1.6.10), AnkhSVN (2.1), and the Silk command-line client (1.6.12). This is happening for everyone on the team, though some people seem to be more affected than others. If someone commits only a few files, it often works; but with more than half a dozen files, it usually hangs. Does anyone have troubleshooting suggestions? This has been happening sporadically for a while, but it's become pretty consistent lately. We've been working around the issue by killing the hung SVN client, doing "svn cleanup", and then doing "svn up"; but sometimes that causes tree conflicts. Another workaround is to blow away the workspace and check it out again after every commit; but of course that's pretty annoying. Are there any diagnostics that could help us troubleshoot this? We're considering upgrading to SVN 1.6 server, and installing the server on a new machine; but we're wondering if there's an easier solution. Thanks for your help, Richard

    Read the article

  • IPtables - Accept Arbitrary Packets

    - by Asad Moeen
    I've achieved a lot in blocking attacks on game servers, but I'm stuck on something. I've handled the main game-server queries, which it accepts in the form "\xff\xff\xff\xff" followed by the actual query, like getstatus or getinfo, making something like "\xff\xff\xff\xff getstatus ". But I see that other queries sent to the game server will cause it to reply with a "disconnect" packet at the same rate as the input, so if the input rate is high, the high output of "disconnect" packets might lag the server. Hence I want to block all queries except the ones actual clients use, which I suppose are in the form "\xff\xff\xff\xff" or .... So I tried using these rules:

      -A INPUT -p udp -m udp -m u32 ! --u32 0x1c=0xffffffff -j ACCEPT
      -A INPUT -p udp -m udp -m recent --set --name Total --rsource
      -A INPUT -p udp -m udp -m recent --update --seconds 1 --hitcount 20 --name Total --rsource -j DROP

    These rules accept the clients, but they only limit requests in the form "\xff\xff\xff\xff getstatus " (to which the game server replies with its status) and not the bare "getstatus " (to which the game server replies with a disconnect packet). So I suppose the ACCEPT rule is accepting the plain string queries as well. I actually want to block the non-\xff queries too. How do I modify the rules?
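
    A sketch of one way to drop the bare queries, assuming the kernel has the xt_string match available (0x1c is the start of the UDP payload for a 20-byte IPv4 header plus an 8-byte UDP header, as in the existing rule; adding the server's port would narrow the match further):

      # drop query strings that are NOT prefixed with \xff\xff\xff\xff
      iptables -A INPUT -p udp -m u32 ! --u32 "0x1c=0xffffffff" -m string --algo bm --string "getstatus" -j DROP
      iptables -A INPUT -p udp -m u32 ! --u32 "0x1c=0xffffffff" -m string --algo bm --string "getinfo"  -j DROP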

    Read the article
