Search Results

Search found 4851 results on 195 pages for 'hosting'.

Page 182/195 | < Previous Page | 178 179 180 181 182 183 184 185 186 187 188 189  | Next Page >

  • How do I get the latest FastCGI and PHP versions to peacefully coexist on IIS 6?

    - by BHelman
    I have been going round and round trying to get any sort of PHP running on IIS 6. I somehow managed to get version 5.1.4 running using the php5isapi.dll file. However, I want to upgrade a website to begin using a Content Management System. I have never dug into CMS before, so I'm open to programs that are easy to use; I am currently looking into TomatoCMS and ImpressCMS - but that's beside the point.

    I have never done an installation with PHP before, though I think I'm getting familiar with how it works. Microsoft's Web Platform Installer 2.0 installed FastCGI for me, and I need to upgrade to PHP 5.3.1 for a CMS system, so I downloaded the Windows installer and let it go at it. After consulting several other blog articles, I believe I know how it is supposed to work, but I am currently not having luck.

    THE SETUP: *.php is a registered extension in IIS 6 for all websites (on Win 2k3). The application that it calls is C:\Windows\system32\inetsrv\fcgiext.dll, like it should be. The fcgiext.ini config has the proper lines:

        [Types]
        php=PHP
        [PHP]
        ext=C:\program files\PHP\php-cgi.exe

    The php.ini file also has the correct configs: all extensions are disabled, I changed the correct things for FastCGI, and everything is registered correctly with the PATH variable. Everything is exactly how it should be.

    BUT when I launch the info.php page from another computer, I get the following error:

        FastCGI Error
        The FastCGI Handler was unable to process the request.
        Error Details:
        * Section [PHP] not found in config file.
        * Error Number: 1413 (0x80070585).
        * Error Description: Invalid index.
        HTTP Error 500 - Server Error.
        Internet Information Services (IIS)

    A quick Google search suggests that I have it all set up correctly as far as the INIs go and the mapping of the php extension. I am completely at a loss. Does anyone have any suggestions? Although the server is hosting three small websites, I don't really care what I have to do to it to get it to work.
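
    For comparison, a minimal fcgiext.ini of the shape the FastCGI extension documents is sketched below. Two things worth checking against it: the per-section executable key is documented as ExePath rather than ext, and when the *.php mapping was registered for a single site rather than globally, the [Types] entry takes the form php:<siteID>=PHP. The paths and site ID here are assumptions, not your values:

        ; Hypothetical fcgiext.ini - verify the key names against the
        ; FastCGI extension documentation; ExePath is the documented key.
        [Types]
        php=PHP
        ;php:123456=PHP        ; the form used for a per-site registration

        [PHP]
        ExePath=C:\PHP\php-cgi.exe
        InstanceMaxRequests=10000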

    Read the article

  • How to move Mailboxes over from old Exchange 2007 to new EBS 2008 network?

    - by Qwerty
    This question is similar to: http://serverfault.com/questions/39070/how-to-move-exchange-2003-mailbox-or-store-from-2003-to-2007-on-separate-networks

    Basically, I am trying to move our Exchange mailboxes over to a test domain that is hosting EBS 2008 with Exchange 2007; we plan to cut over as soon as our Exchange data is across. I have tried moving a database with mailboxes over, but I cannot get it to mount in the new Exchange in any way, including mounting it onto a recovery store. From my understanding, the ONLY prerequisite for moving Exchange databases across is that the organizational name must be the same (unlike previous versions of Exchange). If anyone has any insight as to why I cannot mount and simply reattach the mailboxes, please give me an idea as to what could be wrong - it should be as simple as this. Note that the databases I have are in a clean state.

    I cannot use ExMerge because I am not running any mailboxes on 2003. I have also tried using a 32-bit Vista machine with the Export-Mailbox cmdlet to extract mailboxes, but anything I do results in permission errors that I have been unable to troubleshoot. I am running as a full admin with the proper Exchange roles, and yet it still gives me access denied errors:

        Export-Mailbox : MapiExceptionNetworkError: Unable to make admin interface connection to server. (hr=0x80040115, ec=-2147221227)

    Some errors also show in the management console:

        get-MailboxDatabase
        Completed
        Warning:
        ERROR: Could not connect to the Microsoft Exchange Information Store service on server TATOOINE.baytech.local. One of the following problems may be occurring:
        1- The Microsoft Exchange Information Store service is not running.
        2- There is no network connectivity to server TATOOINE.baytech.local.
        3- You do not have sufficient permissions to perform this command. The following permissions are required to perform this command: Exchange View-Only Administrator and local administrators group for the target server.
        4- Credentials have been cached for an unpriviledged user. Try removing the entry for this server from Stored User Names and Passwords.

    Why I have to use a 32-bit machine to export a simple .pst file is beyond me... So yeah, I am now out of ideas and any help would be great! Thanks in advance.
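
    For what it's worth, a typical export run from the 32-bit Exchange 2007 management tools (which also need Outlook installed) looks like the sketch below; the mailbox, admin account, and path are placeholders. Explicitly granting FullAccess first sometimes clears exactly this kind of access-denied error:

        # Grant the migration account explicit rights on the mailbox first:
        Add-MailboxPermission -Identity "jdoe" -User "DOMAIN\migadmin" -AccessRights FullAccess

        # Then export to a PST from the 32-bit console:
        Export-Mailbox -Identity "jdoe" -PSTFolderPath "C:\PSTs"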

    Read the article

  • How might I stop BACKSCATTER using Qmail?

    - by alecb
    New to ServerFault, so please pardon if my details are too much. A Linux box is acting as a virtual host for domain hosting; it runs CentOS with Parallels Plesk 9.x. Regardless of everything below, the SPAM keeps flowing in at 1-3 messages per second.

    An explanation of the problem: "The xinetd service listens for SMTP connections and forwards to qmail-smtpd. The qmail service only processes the queue, but does not control messages coming into the queue... that's why stopping it has no effect. If you stop xinetd AND qmail, then kill any open qmail-smtpd processes, all mail flow comes to a stop SOMETIMES. The problem is, qmail-smtpd is not smart enough to check for valid mailboxes on the localhost before accepting the mail. So it accepts bad mail with a forged reply-to address, which gets processed in the queue by qmail. Qmail cannot deliver locally and bounces to the forged reply-to address."

    We believe the fix is to patch qmail-smtpd to give it the intelligence to check for the existence of local mailboxes BEFORE accepting the message. The problem is that when we try to compile the chkuser patch, we run into failures due to the Plesk control panel. Is anyone aware of something we could do differently or better?

    Other things that have NOT worked thus far:
    - Turning off any and all mail processes (to check whether an individual account has been compromised; this has been verified as NOT the case).
    - Turning off mail AND http server processes (in case of a compromised formmail).
    - Running Exim in lieu of qmail (an easy/quick install, but xinetd forces Exim to close and restarts qmail on its own).
    - Turning on SPF protection via the Plesk GUI. Does not help.
    - Turning on greylisting via the Plesk GUI. Does not help.
    - Disabling bounce notifications via the command line.

    Things that MIGHT work but have complications:
    - Using POSTFIX instead of qmail (I have no knowledge of Postfix and don't want to bother with it unless anyone knows it has the potential to handle backscatter WELL, before investing the time).
    - As mentioned above, compiling the chkuser patch, which we believe will stop this problem - but with Plesk in the mix, the compile fails every time, and Parallels Plesk support is unresponsive unless I cough up MONEY.

    If I don't clear the SPAM out of the outgoing mail queue nightly, it clogs up with millions of SPAMs and brings down the OUTGOING email services. Any and all help welcome and appreciated!
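
    On the Postfix question: this class of backscatter is largely designed out of Postfix, because it rejects unknown local recipients during the SMTP dialogue instead of accepting the message and bouncing later. A minimal sketch of the relevant main.cf lines (these are close to the stock defaults; how well they survive a Plesk-managed configuration is another matter):

        # Reject mail for nonexistent local users at SMTP time,
        # so no bounce is ever generated:
        local_recipient_maps = proxy:unix:passwd.byname $alias_maps
        unknown_local_recipient_reject_code = 550
        smtpd_recipient_restrictions =
            permit_mynetworks,
            reject_unauth_destination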

    Read the article

  • Is it possible to configure a CDN so that it will step out of the way for a subset of regional IPs?

    - by rwired
    We have a website which targets customers in China, both expat and local Chinese, and an ICP license which allows us to host in a datacenter inside China. Internet in China is actually as fast as anywhere else (faster than most places, actually), so long as the content is served up within the boundaries of the Great Firewall; anything that crosses the wall is horribly slow. The problem is that most expats have some sort of VPN installed so that they can access all the blocked stuff. What this means is that when they access our site, the traffic first has to go out of China through the firewall to their VPN, and then back in. The performance is terrible - worse than if we were just hosting outside of China directly (which we used to do before the ICP was issued).

    So I want to use a global CDN to mirror the site automatically, but I only want to deliver the content via the CDN if the user's request IP address is outside of China. Inside China, I would like the content to be served by our own server.

    I also want to be careful with the domain names. We currently use www.xxx.com and www.xxx.cn for language selection purposes, as these perform well in SEO on Google (which the expats use) and Baidu (which the locals use). If possible, I would like to avoid having one domain on the outside and the other on the inside, since not all expats use a VPN, some Chinese speakers also use VPNs, and some of our legitimate customers in both languages are from outside of China. I also don't want to resort to something like www2.xxx.com/cn for the outside connection if at all possible, since I have worries about duplicate content and canonical URLs ruining our SEO (unless you know of a quick fix for that).

    CDNs I'm considering are Google PageSpeed, CloudFlare, and Amazon CloudFront - none of which have datacenters inside China. I have complete control of the .com DNS zone records, but the .cn zones are under the control of the domain issuing body in China (they no longer allow outside registrars like they used to), and I'm not sure at this time whether they would allow even a CNAME to point to an IP outside of China (although I don't see why not).
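
    One way to make the CDN "step out of the way" without involving the CDN at all is to answer DNS differently depending on where the resolver sits, so in-China clients get the in-country A record and everyone else gets the CDN CNAME, all under the same hostname. A sketch using BIND views follows; note the geoip ACL element only exists in newer BIND builds compiled with GeoIP support, so older versions need a hand-maintained list of Chinese prefixes instead, and the zone file names here are made up:

        // named.conf sketch - split answers by client location
        acl cn-clients { geoip country CN; };  // or an explicit prefix list

        view "china" {
            match-clients { cn-clients; };
            zone "xxx.com" { type master; file "db.xxx.com.china"; };
        };

        view "world" {
            match-clients { any; };
            zone "xxx.com" { type master; file "db.xxx.com.cdn"; };
        };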

    Read the article

  • Setting up a home server - what to use? (ZFS vs btrfs, BSD vs Linux, misc other requirements)

    - by monch1962
    I need to get all our home content off individual machines and onto a central server; what I'd like to have is the metaphorical "server under the stairs".

    Stuff we need:
    - Expandable storage. I want to be able to add extra discs as we go along, with minimal maintenance required. Currently we've got about 3 TB of files to host, and that's likely to grow by another terabyte every 6-12 months based on recent history, so I need to be able to add additional discs with minimal pain.
    - Storage for all the media (photos, video, music) we have, and services to serve the various playback devices in the house (e.g. DAAP so we can play stuff through iTunes, ccxstream so we can play stuff over XBMC). DAAP and ccxstream are needed now, but we also need to support new standards as they emerge (so a closed-box solution isn't going to work).
    - RAID 5, or something broadly equivalent (e.g. RAID-Z).
    - BitTorrent client.
    - ssh, NFS, Samba access.
    - Snapshot capability (as in ZFS), so we can snapshot individual file systems regularly and roll back when my kids delete their school assignments the day before they're due...
    - The ability to recover quickly from power outages (it's not unusual for us to have power outages that last longer than our UPS's batteries).
    - FOSS software.
    - A modern distributed version control system running on the box, such as Mercurial.

    Stuff I'd like to have on the server, but can live without:
    - PVR capability, so I could record TV to the box.
    - Web server. We currently run a small web server on a very old box, and I'd ideally like to turn the old box off and move the content to the new server just to save some electricity.
    - Nagios + mrtg.

    I've been looking at using an EEE Box as the server, primarily because I can get them cheap and they don't consume much power. The choice of OS and file system is more difficult, from what I've found:
    - I've got the most experience with various Linux distros, but am happy to use another Unix.
    - FreeBSD and OpenSolaris seem to be the best choices for hosting ZFS.
    - OpenSolaris' hardware support is nowhere near as good as e.g. Ubuntu's.
    - btrfs, while looking very good, doesn't seem ready for prime time yet.
    - ZFS doesn't let you (easily?) add new discs to a RAID5 or RAID-Z.
    - Reading around, it seems that ZFS is a bit short of tools for recovering lost data.

    At the moment I'm leaning towards running FreeNAS+ZFS, but I'm concerned about the requirement to add new discs fairly regularly to an existing RAID-Z. Can anyone provide some recommendations, or share experiences? Thanks in advance.
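
    To make the RAID-Z limitation concrete: you cannot grow an existing raidz vdev one disc at a time, but you can add a whole new vdev alongside it, and snapshots and rollbacks are one-liners. A sketch with example device names:

        # Grow the pool by adding a second raidz vdev (three new discs):
        zpool add tank raidz c1t4d0 c1t5d0 c1t6d0

        # Nightly snapshot, and the rollback after a deleted assignment:
        zfs snapshot tank/home@2010-04-21
        zfs rollback tank/home@2010-04-21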

    Read the article

  • Exchange 2007 and migrating only some users under a shared domain name

    - by DomoDomo
    I'm in the process of moving two law firms to hosted Exchange 2007, a service that the consulting company I work for offers. Let's call these two firms Crane Law and Poole Law. These two firms were ONE firm just six months ago, but split, so they have three email domains:
    - Old firm: craneandpoole.com
    - New firm 1: cranelaw.com
    - New firm 2: poolelaw.com

    Both firms use craneandpoole.com email addresses; as for the other two domains, only people who work at the respective firm use that firm's domain name, natch. Currently these two firms are still using the same pre-split internal Exchange 2007 server, where the MX records for all three domains point.

    Here's the problem: I'm not moving both companies at the same time. I'm moving Crane Law two weeks before Poole Law. During those two weeks, both companies need to be able to:
    1. Continue to receive emails addressed to craneandpoole.com
    2. Send emails between firms, using cranelaw.com and poolelaw.com accounts

    I also have a third problem: I'd like to set up all three domains in my hosting infrastructure well ahead of time, to make my own life easier.

    What would solve all my problems is some way to tell Exchange 2007: even though this domain exists locally, forward the message to the outside world using the public MX record as the basis for where to send it (or some way to create a static route for it, which would work too).

    If this doesn't work, then to address point #1 when I migrate Crane Law, I will delete all local references to cranelaw.com on their current Exchange server and set up individual forwards for each of their craneandpoole.com mailboxes to our hosted Exchange server. This also takes care of point #2: since cranelaw.com won't exist locally, when poolelaw.com tries to send to cranelaw.com, public MX records will be used for mail routing decisions and the mail will go to my hosted Exchange. The bummer of that, though, is that I won't be able to set up poolelaw.com ahead of time in hosted Exchange - I'll have to wait and do it on the day of the move :(

    Sorry for the long and confusing post. I'm just wondering if there is a better or simpler way to do what I want? Three-tier forests and that kind of thing are out; this is just a two-week window where they won't be in the same place.
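
    What's described here sounds a lot like Exchange 2007's "internal relay" accepted-domain type: the domain still exists locally, but mail for recipients that can't be resolved is handed to a send connector instead of being bounced. A hedged Management Shell sketch - the connector name and smart host are placeholders, and pointing at a smart host (rather than DNS routing) avoids a loop while the public MX still points at the old server:

        # Stop treating the shared domain as authoritative:
        Set-AcceptedDomain -Identity "craneandpoole.com" -DomainType InternalRelay

        # Route unresolved recipients onward (smart host is a placeholder):
        New-SendConnector -Name "To hosted Exchange" `
            -AddressSpaces "craneandpoole.com" `
            -SmartHosts "smtp.hostedprovider.example" `
            -DNSRoutingEnabled:$false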

    Read the article

  • Why can I view my site over a 3G connection but not through my wifi?

    - by Jonathan
    So, I am sitting in my office with four computers on the same network and internet connection. Two of the computers can visit this particular website. The other two computers get a "could not find" message from Google Chrome; I have tried FF and IE also, with the same problem. I can view the site 90% of the time on the two working computers, although the site seems slow, and sometimes I also get the same errors as on the other two computers. I have flushed the DNS, reset the router, and tested the site on other people's computers with success. Is this likely to be a site issue, an ISP issue, a hosting issue? Any advice is greatly appreciated.

    Here is the ping from a working machine:

        C:\Users\Jon>ping www.balihaicruises.com
        Pinging www.balihaicruises.com [208.113.173.102] with 32 bytes of data:
        Reply from 208.113.173.102: bytes=32 time=331ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=327ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=326ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=329ms TTL=47
        Ping statistics for 208.113.173.102:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 326ms, Maximum = 331ms, Average = 328ms

    And the traceroute:

        Tracing route to www.balihaicruises.com [208.113.173.102] over a maximum of 30 hops:
          1     1 ms    17 ms     3 ms  192.168.1.1
          2    42 ms    37 ms    36 ms  180.254.224.1
          3    39 ms    47 ms    40 ms  180.252.1.69
          4    36 ms   616 ms    57 ms  61.94.115.221
          5    84 ms    76 ms    80 ms  180.240.191.98
          6    73 ms    80 ms    72 ms  180.240.191.97
          7   157 ms   143 ms   116 ms  180.240.190.82
          8   115 ms   113 ms   120 ms  ae1-123.hkg11.ip4.tinet.net [183.182.80.93]
          9   331 ms   332 ms   335 ms  xe-3-2-1.was14.ip4.tinet.net [89.149.184.30]
         10   327 ms   330 ms   331 ms  internap-gw.ip4.tinet.net [77.67.69.254]
         11   437 ms   415 ms   350 ms  border10.pc2-bbnet2.wdc002.pnap.net [216.52.127.73]
         12   322 ms   823 ms   398 ms  dreamhost-2.border10.wdc002.pnap.net [216.52.125.74]
         13   328 ms   336 ms   326 ms  ip-208-113-156-4.dreamhost.com [208.113.156.4]
         14   326 ms   328 ms   336 ms  ip-208-113-156-14.dreamhost.com [208.113.156.14]
         15   327 ms   331 ms   333 ms  apache2-udder.crisp.dreamhost.com [208.113.173.102]

    And then from the machine that doesn't work:

        C:\Users\Microsoft>ping www.balihaicruises.com
        Ping request could not find host www.balihaicruises.com. Please check the name and try again.

        C:\Users\Microsoft>tracert www.balihaicruises.com
        Unable to resolve target system name www.balihaicruises.com.
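
    Given that one machine can't resolve the name at all while another can, the first thing to compare is which DNS servers each box is using, and whether the name resolves through a third-party resolver. A quick sketch from the failing machine (8.8.8.8 is just an arbitrary external resolver):

        ipconfig /all
        nslookup www.balihaicruises.com
        nslookup www.balihaicruises.com 8.8.8.8

    If the third command works while the second fails, the configured resolver (often the router itself) is the culprit rather than the site or the host.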

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed: these sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool; after that, the sites start working again. This only ever affects this one app pool - never any others on the same server.

    A couple of times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule - which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory; the only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels: as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:
    - Move the admin section into a separate site, give the client a new admin URL, and run that separate site in its own app pool. Then, in the web.config shared by all the other sites, remove the WindowsAuthenticationModule, so there is no possibility of a hang within it.
    - Try running all these sites in the classic pipeline instead of the integrated pipeline; they were working fine on our old IIS6 server...
    - (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard
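
    For the first workaround, the module can be dropped in the shared web.config so requests can never enter it under the integrated pipeline - a sketch:

        <!-- web.config sketch: remove Windows auth from the pipeline
             for these anonymous sites -->
        <system.webServer>
          <modules>
            <remove name="WindowsAuthentication" />
          </modules>
        </system.webServer>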

    Read the article

  • How to upgrade a 1.4.3 TortoiseSVN-created repository to 1.6.x?

    - by SiegeX
    A few years ago, TortoiseSVN 1.4.3 was deployed to our software development team, and we are now looking at upgrading the client to the latest 1.6.x version. I had hoped this upgrade would be transparent, with the additional features and modifications being client-side. For the most part this was true, except for a very important feature: merging. When I try to merge a feature branch back into trunk, I get a show-stopping "Merge tracking not supported" error.

    Here are some facts worth noting:
    - When the repo was first created (before I was on board), it was created via the TortoiseSVN client itself. We do not have an svn server daemon per se; rather, the repository folders/database reside on a share folder that is accessible from our workstation machines via file:///. This was actually an eye-opener for me - I had always thought there was some SVN server daemon we were talking to.
    - We do not have any access to the underlying machine hosting the SVN share, other than the ability to read/write to the share itself. I don't even know what OS the machine is running. This share server was chosen because its drives are backed up nightly by our IT group.
    - In all honesty, we don't really need the merge tracking feature, although it would be nice to have. For the time being, it would be sufficient to use a 1.6.x TortoiseSVN client on the 1.4.3 repository and have it merge (sans tracking) without error.

    So now the question becomes: how does one upgrade a client-created 1.4.3 repo to a 1.6.x-compatible version without access to the underlying machine the repo resides on? I was hoping the TortoiseSVN client itself had the ability to do this, but that does not appear to be the case. Will I be forced to copy the entire repo over to my local drive, run some svn commands to upgrade the repo locally, then copy the repo back to the share point? If so, will doing this break compatibility with the 1.4.3 clients, in case we can't upgrade them all at the same time? Thanks for the help.
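
    The copy-out / upgrade / copy-back cycle being alluded to looks roughly like this, run from a machine with 1.6.x command-line tools; the paths are examples. One caveat worth knowing up front: once the on-disk format is upgraded, 1.4.3 clients that open the repository directly over file:/// will no longer be able to read it, so the clients would need to move in lockstep:

        # In-place format upgrade (svnadmin 1.5+ provides this):
        svnadmin upgrade Z:\repo

        # Or the more conservative dump/load into a fresh repository:
        svnadmin dump Z:\repo > repo.dump
        svnadmin create C:\repo-16
        svnadmin load C:\repo-16 < repo.dump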

    Read the article

  • iptables management tools for large scale environment

    - by womble
    The environment I'm operating in is a large-scale web hosting operation (several hundred servers under management, almost-all-public addressing, etc. - so anything that talks about managing ADSL links is unlikely to work well), and we're looking for something that will be comfortable managing both the core ruleset (around 12,000 entries in iptables at the current count) plus the host-based rulesets we manage for customers. Our core router ruleset changes a few times a day, and the host-based rulesets change maybe 50 times a month (across all the servers, so maybe one change per five servers per month).

    We're currently using filtergen (which is balls in general, and super-balls at our scale of operation), and I've used shorewall in the past at other jobs (which would be preferable to filtergen, but I figure there's got to be something out there that's better than that).

    The "musts" we've come up with for any replacement system are:
    - Must generate a ruleset fairly quickly (a filtergen run on our ruleset takes 15-20 minutes; this is just insane) - this is related to the next point:
    - Must generate an iptables-restore style file and load it in one hit, not call iptables for every rule insert
    - Must not take down the firewall for an extended period while the ruleset reloads (again, a consequence of the above point)
    - Must support IPv6 (we aren't deploying anything new that isn't IPv6 compatible)
    - Must be DFSG-free
    - Must use plain-text configuration files (as we run everything through revision control, and using standard Unix text-manipulation tools is our SOP)
    - Must support both RedHat and Debian (packaged preferred, but at the very least mustn't be overtly hostile to either distro's standards)
    - Must support the ability to run arbitrary iptables commands to support features that aren't part of the system's "native language"

    Anything that doesn't meet all these criteria will not be considered. The following are our "nice to haves":
    - Should support config file "fragments" (that is, you can drop a pile of files in a directory and say to the firewall "include everything in this directory in the ruleset"; we use configuration management extensively and would like to use this feature to provide service-specific rules automatically)
    - Should support raw tables
    - Should allow you to specify particular ICMP types, both in incoming packet matches and in REJECT rules
    - Should gracefully support hostnames that resolve to more than one IP address (we've been caught by this one a few times with filtergen; it's a rather royal pain in the butt)

    The more optional/weird iptables features that the tool supports (either natively or via existing or easily-writable plugins), the better. We use strange features of iptables now and then, and the more of those that "just work", the better for everyone.
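
    Whatever generator wins, the load step itself can be both atomic and pre-validated; the sketch below is the pattern most tools in this space converge on (the generator command is a stand-in, and --test requires a reasonably recent iptables):

        #!/bin/sh
        # Generate, syntax-check, then load the whole ruleset in one hit.
        generate-ruleset > /etc/iptables/rules.new   # your generator here
        iptables-restore --test < /etc/iptables/rules.new || exit 1
        iptables-restore < /etc/iptables/rules.new
        mv /etc/iptables/rules.new /etc/iptables/rules
        # Repeat with ip6tables-restore for the v6 ruleset.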

    Read the article

  • Apache server completely freezes until it gets restarted

    - by nbv4
    My server does this every few days, and what sucks is that it always seems to happen right after I go to bed, so when I wake up I'm greeted with the fact that my server has been down for the past 6 or 7 hours. When I first noticed this, I added a cronjob that tries to restart the server every 15 minutes, but I guess that didn't fix it. Once I noticed the server was down, I ran this command:

        /etc/init.d/apache2 restart
         * Restarting web server apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        ... waiting ...........................................................apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        httpd (pid 17597) already running

    ...which is odd, because a restart should restart the server even if it's already running, correct? I eventually had to "stop" and then "start" to get it working again.

    I then looked through the logs and found something very weird. Around the time the server crashed, the logs have entries that are wildly out of order. It looks a little like this:

        xx.xxx.xxx.x - - [21/Apr/2010:06:32:05 -0400] "GET / blah"
        xx.xxx.xxx.x - - [21/Apr/2010:06:51:25 -0400] "GET / blah"
        x.xx.xxx.xxx - - [21/Apr/2010:06:38:23 -0400] "GET / blah"
        xxx.xx.xx.xx - - [21/Apr/2010:06:31:56 -0400] "GET / blah"
        xxx.xx.xx.xx - - [21/Apr/2010:06:51:49 -0400] "GET / blah"
        xx.xx.xxx.xx - - [21/Apr/2010:06:33:20 -0400] "GET / blah"

    I don't think the problem is memory, because my monitoring graphs tell me that right before the crash, memory usage is fine. I'm running Apache with the worker MPM; here are the settings for that:

        <IfModule mpm_worker_module>
            StartServers          1
            MaxClients          100
            MinSpareThreads       5
            MaxSpareThreads      10
            ThreadsPerChild      10
            MaxRequestsPerChild 3000
        </IfModule>

    This Apache server is running a bunch of stuff, but most of the traffic comes from a Django project I'm hosting, which uses mod_wsgi. There is also a Simple Machines forum running off of mod_fcgid; those settings are below:

        <IfModule mod_fcgid.c>
            MaxRequestsPerProcess 500
            MaxProcessCount 3
            AddHandler fcgid-script .php .fcgi
            AddHandler cgi-script .cgi .pl
            FCGIWrapper "/usr/bin/php-cgi" .php
        </IfModule>

    Does anyone know of anything else I can check? I've tweaked just about every setting I can think of, yet these freezes still happen.
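
    Before the next freeze, it may be worth enabling a local-only scoreboard so the hang can be inspected (which workers are stuck, and in what state) instead of being restarted blind. A sketch for Apache 2.2, assuming mod_status is loaded:

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    Then, when it wedges, curl -s http://127.0.0.1/server-status shows where every thread is sitting.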

    Read the article

  • Apache refusing to change DocumentRoot

    - by mingos
    I've installed Zend Server CE 5.1.0 on Windows 7 Ultimate 64-bit in its default location, meaning the path to my htdocs is C:\Program Files (x86)\Zend\Apache2\htdocs. Not something that I would like to type each time I check out a project from SVN in Eclipse or something. I'd like to set the DocumentRoot to a different folder, namely D:\www.

    What I've done:

    I edited conf/httpd.conf, with the significant lines being:

        DocumentRoot "D:\www"
        <Directory "D:\www">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
        Include conf/extra/httpd-vhosts.conf

    I edited conf/extra/httpd-vhosts.conf to add a virtual host:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot D:\www
            ServerName localhost
            ServerAlias localhost
            SetEnv APPLICATION_ENV development
            SetEnv APPLICATION_DOMAIN localhost
        </VirtualHost>
        <VirtualHost *:80>
            DocumentRoot D:\www\UmbraCMS
            ServerName umbracms.local
            ServerAlias umbracms.local
            SetEnv APPLICATION_ENV development
            SetEnv APPLICATION_DOMAIN umbracms.local
        </VirtualHost>

    I edited C:\Windows\System32\drivers\etc\hosts to add this line:

        127.0.0.1 umbracms.local

    I also added a PHP project to D:\www\UmbraCMS. And restarted Apache. Actually, I restarted the computer too, just in case.

    What's supposed to happen:

    After typing http://umbracms.local/ in the browser's address bar, I want to see my PHP project launch, obviously.

    What's actually happening:

    No matter whether I type http://umbracms.local/ or http://localhost/, I'm taken to the Zend test page, located in C:\Program Files (x86)\Zend\Apache2\htdocs\index.html, as if neither the DocumentRoot change nor name-based virtual hosting worked. Interestingly, when I put another project in C:\Program Files (x86)\Zend\Apache2\htdocs\bugraid\ and then typed http://localhost/bugraid in the browser, the project actually opened - or at least tried to, as it completely ignored the project's .htaccess file.

    Extra considerations:
    - Zend Server's Apache version is 2.2.16, PHP version is 5.3.0
    - I've installed MySQL CE 5.5.13 separately, and it works, both from the command line and via MySQL Workbench.
    - I have XAMPP installed, but none of its components are started up. It's got its own install of Apache 2.2.17 and MySQL 5.5.1. PHP version is 5.3.5 (I think).

    Question: Have you had a similar situation before? What else might need taking care of in order to have Zend Server's Apache use D:\www as the document root for my PHP projects?
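
    A quick way to see which vhost Apache actually selects for each hostname - and whether some other included Zend Server config defines an earlier-matching default - is the vhost dump (the path below assumes the default install location):

        "C:\Program Files (x86)\Zend\Apache2\bin\httpd.exe" -t -D DUMP_VHOSTS

    If localhost isn't listed under the D:\www virtual hosts, another included config file (for instance a Zend Server default vhost) is matching first, and the fix lies in the include order rather than in the vhosts file itself.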

    Read the article

  • Nginx Proxying to Multiple IP Addresses for CMS' Website Preview

    - by Matthew Borgman
    First-time poster, so bear with me. I'm relatively new to Nginx, but have managed to figure out what I've needed... until now. Nginx v1.0.15 is proxying to PHP-FPM v5.3.10, which is listening at http://127.0.0.1:9000. [Knock on wood] everything has been running smoothly in terms of hosting our CMS and many websites.

    Now, we've developed our CMS and configured Nginx such that each supported website has a preview URL (e.g. http://[WebsiteID].ourcms.com/) where the site can be, you guessed it, previewed in those situations where DNS doesn't yet resolve to our server, etc. Specifically, we use Nginx's Map module (http://wiki.nginx.org/HttpMapModule) and a regular expression in the server_name of the CMS' server{ } block to 1) look up a website's primary domain name from its preview URL and then 2) forward the request to the "matched" primary domain.

    The corresponding Nginx configuration:

        map $host $h {
            123.ourcms.com www.example1.com;
            456.ourcms.com www.example2.com;
            789.ourcms.com www.example3.com;
        }

    and

        server {
            listen [OurCMSIPAddress]:80;
            listen [OurCMSIPAddress]:443 ssl;
            root /var/www/ourcms.com;
            server_name ~^(.*)\.ourcms\.com$;
            ssl_certificate /etc/nginx/conf.d/ourcms.com.chained.crt;
            ssl_certificate_key /etc/nginx/conf.d/ourcms.com.key;
            location / {
                proxy_pass http://127.0.0.1/;
                proxy_set_header Host $h;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    (Note: I do realize that the regex in the server_name should be "tighter" for security reasons and match only the format of a website ID - a UUID in our case.)

    This configuration works for 99% of our sites - except those that have a dedicated IP address for an installed SSL certificate. A "502 Bad Gateway" is returned for these, and I'm unsure as to why.

    This is how I think the current configuration works for any request that matches the regex (e.g. http://123.ourcms.com/): Nginx looks up the website's primary domain from the mapping and, via the proxy_pass http://127.0.0.1/ directive, passes the request back to Nginx itself; since the proxied request carries the website's primary domain name in the Host header (the proxy_set_header Host $h directive), Nginx handles it as if it were a direct request for that hostname. Please correct me if I'm wrong in this understanding.

    Should I be proxying to those websites' dedicated IP addresses? I tried this, but it didn't seem to work. Is there a setting in the Proxy module that I'm missing? Thanks for the help. MB
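
    One hedged guess at the 502: proxy_pass http://127.0.0.1/ assumes every site's server block is reachable on 127.0.0.1:80, which the dedicated-IP SSL sites may not be if they only listen on their own address. A sketch of proxying to a per-host backend address instead - this second map and its addresses are hypothetical, not your config:

        map $host $h_backend {
            default        127.0.0.1;
            456.ourcms.com 192.0.2.10;   # site bound only to its dedicated IP
        }

        # ...and in the preview server block:
        location / {
            proxy_pass http://$h_backend;
            proxy_set_header Host $h;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }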

    Read the article

  • mac + parallels and https site test = router restarts

    - by Erik
    OK, I have an interesting and very frustrating problem happening; I'm going to explain it the best I can. I work as a graphic designer and web designer on a Mac, and have a Comcast internet connection that comes through a Comcast-branded router (SMC8014), which then ties into an Airport Extreme Base Station that runs my office network. I run OS 10.5.7 and also run Parallels 4.0.3 (running Windows XP) for testing websites in Internet Explorer and so on. OK, so that's the basic background.

    Here's my issue. I've been collaborating on an ecommerce website with another designer/developer, and when testing the site on the PC side we have started to run into some sort of network problem. The site is https, if that matters at all - I suspect it may. Basically, when I run Parallels for testing, I am constantly having to restart the router in order to connect to the (hosted) test site. The funny thing is that I can access the rest of the internet fine, just not this site I'm working on, until I restart the router (it's sort of like the site is timing out). This never happens when just running the Mac side of things; it only becomes an issue when Parallels is open and I am doing page refreshes while making CSS or HTML edits via something like Coda or CSSEdit (connected to the hosting server via ftp). The real problem is that once the problem starts, I only get about 2 or 3 page loads before I have to restart the router again. It's absolutely crippling - I cannot get any work done when I have to restart the router every couple of minutes.

    And if you think this problem is isolated to me, the answer is no. The designer/developer I'm collaborating with has an office a couple of miles away and experiences very similar problems under a slightly different setup. He also has Comcast as his internet provider, connects his router to an Airport, and primarily works on a Mac. The main difference is that rather than using a virtualizer like Parallels to test the website on the PC, he uses a real live PC on his network. Once he fires up the PC to do testing, he runs into the same issue described above: after a couple of page refreshes in Internet Explorer or another browser on the PC, the site becomes unresponsive and the router has to be restarted.

    Any thoughts on what is going on here would be greatly appreciated. Thanks in advance.
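
    One way to narrow this down is to watch the wire on the Mac while reproducing the hang, to see whether the TLS handshakes simply go unanswered - a classic sign of a consumer router exhausting its NAT/connection table under many short-lived HTTPS connections. A sketch (the interface name and server IP are placeholders):

        # Watch traffic to the hosting server while refreshing pages:
        sudo tcpdump -i en0 -n host 203.0.113.10 and port 443

        # A rough sense of how many connections are open at the time:
        netstat -an | grep ESTABLISHED | wc -l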

    Read the article

  • sub domains with /etc/hosts and apache for gitorious

    - by QLands
    I managed to get a local install of Gitorious working. Now I need to finalize the Apache integration using a virtual host, but nothing seems to work. See for example my /etc/hosts file:

        127.0.0.1 localhost
        172.26.17.70 darkstar.ilri.org darkstar
        172.26.17.70 git.darkstar.ilri.org

    My vhosts.conf has the following entries:

        #
        # Use name-based virtual hosting.
        #
        NameVirtualHost *:80

        <VirtualHost *:80>
            <Directory /srv/httpd/htdocs>
                Options Indexes FollowSymLinks ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ServerName darkstar.ilri.org
            DocumentRoot /srv/httpd/htdocs
            ErrorLog /var/log/httpd/error_log
            AddHandler cgi-script .cgi
        </VirtualHost>

        <VirtualHost *:80>
            <Directory /srv/httpd/git.darkstar.ilri.org/gitorious/public>
                Options FollowSymLinks ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from All
            </Directory>
            AddHandler cgi-script .cgi
            DocumentRoot /srv/httpd/git.darkstar.ilri.org/gitorious/public
            ServerName git.darkstar.ilri.org
            ErrorLog /var/www/git.darkstar.ilri.org/log/error.log
            CustomLog /var/www/git.darkstar.ilri.org/log/access.log combined
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css application/x-javascript
            BrowserMatch ^Mozilla/4 gzip-only-text/html
            BrowserMatch ^Mozilla/4\.0[678] no-gzip
            BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                ExpiresActive On
                ExpiresDefault "access plus 1 year"
            </FilesMatch>
            FileETag None
            RewriteEngine On
            RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
            RewriteCond %{SCRIPT_FILENAME} !maintenance.html
            RewriteRule ^.*$ /system/maintenance.html [L]
        </VirtualHost>

    Now, when I go with Firefox to darkstar.ilri.org, it shows the default Apache screen: "It works!". But when I go to git.darkstar.ilri.org, it waits for a few seconds, then falls back to darkstar.ilri.org and the default Apache page. No error is reported. If I run httpd -S I get:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80    is a NameVirtualHost
            default server darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
            port 80 namevhost darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
            port 80 namevhost git.darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:37)
        Syntax OK

    The funny thing is that if I configure Gitorious on a host called gitrepository, add "127.0.0.1 gitrepository" to /etc/hosts, and go with Firefox to gitrepository... Gitorious works. But why not with git.darkstar.ilri.org? Many thanks in advance.
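
    Taking DNS and the browser out of the picture may help isolate this; asking Apache directly which vhost answers for each Host header is one line per name, run from any machine that can reach the box:

        curl -v -H "Host: git.darkstar.ilri.org" http://172.26.17.70/
        curl -v -H "Host: darkstar.ilri.org"     http://172.26.17.70/

    If the first command also returns the default "It works!" page, the name-based vhost itself isn't matching; if it returns Gitorious, the problem is in name resolution on the client rather than in Apache.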

    Read the article

  • Internet Explorer cannot display page from apache with single SSL virtual host

    - by P.scheit
    I have a question that has come up somehow in different questions, but I still can't find the solution. My problem is that I'm hosting a site on Apache 2.4 on Debian with SSL, and Internet Explorer 7 on Windows XP shows "Internet Explorer cannot display the webpage". I have only ONE virtual host that uses SSL, but DIFFERENT virtual hosts that use http.

    Here is my config for the site with SSL enabled (etc/sites-available/default-ssl is NOT linked):

        <VirtualHost xx.yyy.86.193:443>
            ServerName www.my-certified-domain.de
            ServerAlias my-certified-domain.de
            DocumentRoot "/var/local/www/my-certified-domain.de/current/www"
            Alias /files "/var/local/www/my-certified-domain.de/current/files"
            CustomLog /var/log/apache2/access.my-certified-domain.de.log combined
            <Directory "/var/local/www/my-certified-domain.de/current/www">
                AllowOverride All
            </Directory>
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/www.my-certified-domain.de.crt
            SSLCertificateKeyFile /etc/ssl/private/www.my-certified-domain.de.key
            SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
            SSLCertificateChainFile /etc/apache2/ssl.crt/www.my-certified-domain.de.ca
            BrowserMatch "MSIE [2-8]" nokeepalive downgrade-1.0 force-response-1.0
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.my-certified-domain.de
            ServerAlias my-certified-domain.de
            CustomLog /var/log/apache2/access.my-certified-domain.de.log combined
            Redirect permanent / https://www.my-certified-domain.de/
        </VirtualHost>

    My ports.conf looks like this:

        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    The output from apache2ctl -S is like this:

        xx.yyy.86.193:443      www.my-certified-domain.de (/etc/apache2/sites-enabled/020-my-certified-domain.de:1)
        wildcard NameVirtualHosts and _default_ servers:
        *:80                   is a NameVirtualHost
                 default server phpmyadmin.my-certified-domain.de (/etc/apache2/conf.d/phpmyadmin.conf:3)
                 port 80 namevhost phpmyadmin.my-certified-domain.de (/etc/apache2/conf.d/phpmyadmin.conf:3)
                 port 80 namevhost staging.my-certified-domain.de (/etc/apache2/sites-enabled/010-staging.my-certified-domain.de:1)
                 port 80 namevhost testing.my-certified-domain.de (/etc/apache2/sites-enabled/015-testing.my-certified-domain.de:1)
                 port 80 namevhost www.my-certified-domain.de (/etc/apache2/sites-enabled/020-my-certified-domain.de:31)

    I included the solution from this question: Internet explorer cannot display the page, other browsers can, possibly htaccess / server error. And I understand the answer from this question: How to setup Apache NameVirtualHost on SSL?

    In fact, I only have one SSL certificate for the domain, and I only want to run ONE virtual host with SSL, so I just want to use the one IP for the SSL virtual host. But still (after rebooting / restarting / testing), Internet Explorer will not show the page. When I interpret the apache2ctl -S output, I already have only one SSL host, and this should respond to the initial SSL handshake, shouldn't it? What is wrong in this setup?

    Thank you so much
    Philipp
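
    To see the handshake exactly as IE7 on XP would attempt it (SSLv3/TLS 1.0, no SNI), openssl can impersonate both cases; if the first form fails where the second succeeds, the server is effectively requiring SNI, which XP never sends:

        # No SNI, as IE on XP behaves:
        openssl s_client -connect www.my-certified-domain.de:443

        # With SNI, as modern browsers behave:
        openssl s_client -connect www.my-certified-domain.de:443 \
            -servername www.my-certified-domain.de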

    Read the article

  • POSTFIX bouncing when destination is my domain

    - by ZeC
    I am using my provider's mail hosting to send emails. On my web server I also have Postfix running and configured. Here is my main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = yes
        readme_directory = no
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        myhostname = 2-5-8.bih.net.ba
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = bhcom.info, 2-5-8.bih.net.ba, localhost.bih.net.ba, localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_command =
        mailbox_size_limit = 10485760
        recipient_delimiter = +
        inet_interfaces = 80.65.85.114

    When I try sending email to my hosted domain name, every message gets bounced with this error:

        Nov 4 20:38:34 2-5-8 postfix/pickup[802]: 1492A3E0C6C: uid=0 from=<[email protected]>
        Nov 4 20:38:34 2-5-8 postfix/cleanup[988]: 1492A3E0C6C: message-id=<[email protected]>
        Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 1492A3E0C6C: from=<[email protected]>, size=348, nrcpt=1 (queue active)
        Nov 4 20:38:34 2-5-8 postfix/local[990]: 1492A3E0C6C: to=<[email protected]>, relay=local, delay=0.12, delays=0.08/0.01/0/0.04, dsn=5.1.1, status=bounced (unknown user: "info")
        Nov 4 20:38:34 2-5-8 postfix/cleanup[988]: 28ED53E0C6D: message-id=<[email protected]>
        Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 28ED53E0C6D: from=<>, size=2056, nrcpt=1 (queue active)
        Nov 4 20:38:34 2-5-8 postfix/bounce[991]: 1492A3E0C6C: sender non-delivery notification: 28ED53E0C6D
        Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 1492A3E0C6C: removed
        Nov 4 20:38:34 2-5-8 postfix/local[990]: 28ED53E0C6D: to=<[email protected]>, relay=local, delay=0.06, delays=0.03/0/0/0.02, dsn=5.1.1, status=bounced (unknown user: "razvoj")
        Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 28ED53E0C6D: removed

    However, when I send to @gmail.com, the message goes out without problems; here is that log. What might be the issue?

        Nov 4 20:41:23 2-5-8 postfix/pickup[802]: B2EC63E0C6C: uid=0 from=<[email protected]>
        Nov 4 20:41:23 2-5-8 postfix/cleanup[1022]: B2EC63E0C6C: message-id=<[email protected]>
        Nov 4 20:41:23 2-5-8 postfix/qmgr[803]: B2EC63E0C6C: from=<[email protected]>, size=350, nrcpt=1 (queue active)
        Nov 4 20:41:23 2-5-8 postfix/smtp[1024]: connect to gmail-smtp-in.l.google.com[2a00:1450:4001:c02::1a]:25: Network is unreachable
        Nov 4 20:41:24 2-5-8 postfix/smtp[1024]: B2EC63E0C6C: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[173.194.70.26]:25, delay=0.97, delays=0.08/0.01/0.27/0.62, dsn=2.0.0, status=sent (250 2.0.0 OK 1352058066 f7si2180442eeo.46)
        Nov 4 20:41:24 2-5-8 postfix/qmgr[803]: B2EC63E0C6C: removed
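
    Since mail for bhcom.info actually lives at the provider, the usual fix is to take the domain out of mydestination, so Postfix stops treating those recipients as local Unix users and instead routes the mail out via DNS/MX like any other remote domain. A sketch of the changed line (verify that no other setting re-lists the domain):

        # main.cf - bhcom.info removed from the local delivery list:
        mydestination = 2-5-8.bih.net.ba, localhost.bih.net.ba, localhost

    followed by a postfix reload.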

    Read the article

  • Some domain names not resolving on local network

    - by Solignis
    I am not really sure where to start with this one... I have a small network set up with some Linux servers (Ubuntu 11.04 Server). Two servers are running BIND 9 (NS01, NS02), configured as master and slave respectively. One server is running Zimbra ZCS 7.1.1 (MX01); it has a private BIND 9 server running to achieve a split-DNS configuration. This DNS server does not interact with the other two: it forwards queries it cannot resolve to the other two, and that is it - no zone transfers. Zimbra is hosting three domains at the moment: solignis.local, solignis.com, and campbellsurvey.net.

    The problem: from within my network, I cannot connect to mail.campbellsurvey.net. By "cannot connect" I mean that if I open Firefox and type https://mail.campbellsurvey.net, I go nowhere - the address is supposed to bring up my Zimbra webmail, but it goes nowhere. The odd thing is that if I try the same thing from outside the network, the website comes up like normal. If I try to create an account in Thunderbird to connect to the same server using IMAP4 or POP3, I get an error saying that Thunderbird cannot find the domain name; even the Zimbra client fails. It is like, from within my own walls, campbellsurvey.net does not exist - but if I step outside, I can get it to work with no problem at all.

    I had thought maybe the problem was with the DNS server (BIND 9), so just to eliminate it as a possibility, I configured a Windows server I use for VMware vCenter as a DNS server to see what would happen. The result was the same. It's as if something is preventing connections to those domains, but I have checked various firewalls and such - port forwards, etc. So I am running out of ideas.

    I know this is not a lot of information to work from, and I can give more details about particular things as needed. I am just trying to figure out what could be going wrong. Any help you could offer would be much appreciated.
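
    Comparing answers from each resolver in the chain usually localizes this quickly; a sketch (the addresses for NS01, NS02, and the Zimbra split-DNS server are placeholders):

        dig @192.168.1.10 mail.campbellsurvey.net A   # NS01
        dig @192.168.1.11 mail.campbellsurvey.net A   # NS02
        dig @192.168.1.20 mail.campbellsurvey.net A   # Zimbra's private BIND
        dig mail.campbellsurvey.net A                 # whatever this client uses

    If the internal servers return nothing (or an external address that the firewall then fails to hairpin back inside), the zone is missing from the internal view - a DNS problem rather than anything blocking connections.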

    Read the article

  • Is this distributed database server idea feasible?

    - by David
    I often use SQLite for creating simple programs in companies. The database is placed on a file server. This works fine as long as there are not more than about 50 users working against the database concurrently (though it depends on the mix of reads and writes). Once there are more than this, they will notice a slowdown if there is a lot of concurrent writing on the server, as lots of time is spent on locks, and there is nothing like a cache because there is no database server.

    The advantage of not needing a database server is that the time to set up something like a company wiki or similar can be reduced from several months to just days. It often takes several months because some IT department needs to order the server, it needs to conform with the company policies and security rules, and it needs to be placed at the outsourced server hosting facility, which screws up and places it in the wrong location, etc. etc.

    Therefore, I thought of an idea to create a distributed database server. The process would be as follows: a user on a company computer edits something on a wiki page (which uses this database as its backend). To do this, he reads a file on the local hard disk stating the IP address of the last desktop computer to act as the database server. He then tries to contact this computer directly via TCP/IP. If it does not answer, he reads a file on the file server stating the IP address of the last desktop computer to act as the database server. If this server does not answer either, his own desktop computer becomes the database server and registers its IP address in the same file. The SQL update statement can then be executed, and other desktop computers can connect to his machine directly.

    The point of this architecture is that the higher the load, the better it will function, as each desktop computer will always know the IP address of the database server. Also, using this setup, I believe that a database placed on a file server could serve hundreds of desktop computers instead of the current 50 or so. I also do not believe that the load on the single desktop computer which has become the database server will ever be noticeable, as there will be no hard disk operations on that desktop, only on the file server.

    Is this idea feasible? Does it already exist? What kind of database could support such an architecture?
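
    As a thought experiment, the discovery logic described above comes out to something like the following sketch in Python (file locking, races between two clients volunteering at once, and security are all deliberately ignored):

        import socket

        def find_server(local_hint, shared_hint, my_ip, port=5000):
            """Return the IP of a live DB server, volunteering if none answers."""
            for hint in (local_hint, shared_hint):
                try:
                    ip = open(hint).read().strip()
                    with socket.create_connection((ip, port), timeout=1):
                        return ip              # an existing server answered
                except OSError:
                    continue                   # missing hint file or dead server
            # Nobody answered: this desktop becomes the server and
            # advertises itself via the shared hint file.
            with open(shared_hint, "w") as f:
                f.write(my_ip)
            return my_ip

    Even in sketch form, the hard part shows up immediately: two clients can pass both checks simultaneously and each declare itself the server, which is the classic leader-election problem any system built on this idea would have to solve.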

    Read the article

  • Windows Server 2008 R2 Firewall - Interface specific rules

    - by Mehmet Ergut
    I'm trying to define per-interface rules, much like it was in Server 2003. We will be replacing our old 2003 server with a new 2008 R2 server. The server runs IIS and SQL Server; it's a dedicated server at the hosting company. We use an OpenVPN connection from the office to access SQL Server, Remote Desktop, FTP, and other administrative services. Only http and ssh are listening on the public interface.

    On the old server running 2003, I was able to define global rules for http and ssh, and allow other services only on the VPN interface. I can't find a way to do the same on 2008 R2. I understand that there is the Network Location Awareness service, and that firewall rules are applied according to the current network location, but I don't understand the purpose of this on a server.

    The closest solution I have found is to set the scope on the firewall rule and restrict remote IP addresses to the private subnet of the office - but the ports will still be listening on the public interface. So how can I restrict a firewall rule to connections coming from the VPN interface?

    A note on this page states that scoping a rule to an interface no longer exists: "In earlier versions of Windows, many of these commands accepted a parameter called interface. This parameter is not supported in the firewall context in Windows Vista or later versions of Windows." I can't believe that they simply decided to remove a core firewall functionality that every firewall has. There must be a way to restrict a rule to an interface. Any ideas?

    I'm still unable to find an adequate solution to my problem, so for now my workaround is this:
    - Administrative services listen on the VPN IP address
    - Firewall rules restrict the scope to the local IP address of the VPN
    - Public services listen on all interfaces, with no scope restriction on the firewall rules

    This is not optimal: if I change the IP address of the VPN, I need to edit the firewall rules too. That wasn't the case when rules were bound to the interface.
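
    For the interim, netsh can at least express the "only on the VPN address" scoping in one line; the local address below is a placeholder for whatever the VPN interface is bound to:

        rem Allow SQL Server only via the VPN-bound local address:
        netsh advfirewall firewall add rule name="SQL via VPN only" ^
            dir=in action=allow protocol=TCP localport=1433 localip=10.8.0.1

    This carries the same caveat noted above: the rule follows the address, not the interface, so it must be edited whenever the VPN address changes.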

    Read the article

  • FTP server questions

    - by Brad
    I'm currently trying to set up a home FTP server using Debian and proftpd, and I've run into a problem that has me confused. I have most things set up already, I believe, but I cannot access my FTP server using my external IP. I've forwarded the correct port on my router, and I've checked http://www.yougetsignal.com/tools/open-ports/ to be sure that it is, in fact, open. I've used telnet locally on my server to check that the port accepts connections, and I am able to use FTP via the LAN. But I still cannot access anything externally.

    I'm thinking that there's still some router configuration to be done in order to fix this, such as routing all connections on my FTP port to my server via the internal IP, but I can't find any option on my router to do this. Is this a necessary step? There is an option to use DMZ hosting, but I'd rather avoid it if possible. I can provide additional information as requested; please let me know any information that you think could help at all. Thanks. -Brad

    PS - I have a Telus Actiontec modem/router.

    Update - Trying my FTP server out at work worked! I guess I did set it up correctly after all. What is confusing me, though, is why the server no longer allows me to connect locally - that seems very weird to me. Also, I don't really understand why I am denied outright when I attempt to connect from the same network using the external address. I'll look into it more when I get home, but thank you guys for your help.

    Update 2 - I found the problem with not being able to connect locally anymore. I was setting the masquerade address to my external IP, and for some reason that was causing it to hang on MLSD when I connected using my LAN address. I've removed the masquerade address, and tomorrow I'm going to check whether I need it at work.
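
    For the record, the two proftpd.conf directives that usually matter behind NAT are the masquerade address and a pinned passive-port range that the router forwards; the values below are examples. Using a hostname (e.g. a dynamic-DNS name) for MasqueradeAddress avoids hardcoding the external IP, though as the second update shows, advertising the external address can still confuse LAN clients:

        # proftpd.conf - advertise the public address in PASV replies and
        # keep passive data connections in a forwardable range:
        MasqueradeAddress myhome.example.dyndns.org
        PassivePorts 49152 50000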

    Read the article
