Search Results

Search found 21546 results on 862 pages for 'infrastructure as a service'.


  • Different Servers for incoming mails

    - by André
    Hi everybody, I'm not sure if what I want is possible, so I'd appreciate any pointers. I have full control over the infrastructure (DNS and servers). Currently I receive mail for domain.tld; the MX record for domain.tld is gw.domain.tld. gw then does some spam and virus checking and forwards the mail to the internal Exchange server. gw is a Proxmox Mail Gateway box (free license). Now what I want is to distribute mail for different recipients to other mail servers. Basically, I only want [email protected] and [email protected] to go to Exchange as before, while all others go to a different (Linux-based) mail server. Any idea how I could achieve this?
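
    Proxmox Mail Gateway is built on Postfix, so one common approach, sketched here with placeholder user names and hostnames, is a per-recipient transport map on gw:

        # /etc/postfix/transport (run "postmap /etc/postfix/transport" after editing)
        user1@domain.tld    smtp:[exchange.domain.tld]
        user2@domain.tld    smtp:[exchange.domain.tld]
        domain.tld          smtp:[linuxbox.domain.tld]

    With transport_maps = hash:/etc/postfix/transport in main.cf, the most specific entry wins, so the two named users keep routing to Exchange while everything else for the domain goes to the Linux server. (On PMG specifically, check how its templating system manages main.cf before editing it by hand.)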

    Read the article

  • Ganglia multicast with clustering

    - by luckytaxi
    Let's say I have two hosts. One acts as the server, where gmetad and a local gmond reside; it also hosts the web interface. I then have a client that only has gmond, configured as follows. Everything works fine if I remove the mcast_join line from the udp_recv_channel; if I leave it as is, the UI doesn't show any hosts. I'm following the quick start guide found here. In my gmond.conf file I have the following:

        udp_send_channel {
          mcast_join = host1
          port = 8661
          ttl = 1
        }
        udp_recv_channel {
          port = 8661
          retry_bind = true
          mcast_join = host1
          bind = host1
        }

    In my gmetad.conf file I have:

        data_source "Infrastructure" host1:8661 host2:8661
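
    Two hedged things worth checking, based on typical Ganglia setups: mcast_join expects a multicast group address (the guide's examples use 239.2.11.71), not an ordinary hostname, so joining "host1" can fail and leave the receive channel dead. For a simple two-host setup, plain unicast avoids multicast entirely; a sketch:

        # gmond.conf on both hosts; host1 is the collector
        udp_send_channel {
          host = host1      # send metrics straight to the collector
          port = 8661
        }
        udp_recv_channel {
          port = 8661       # no mcast_join or bind needed for unicast
          retry_bind = true
        }

    Also, an assumption since the tcp_accept_channel isn't shown: gmetad's data_source line polls gmond's TCP XML port (tcp_accept_channel, default 8649), not the UDP channel, so 8661 there may be pointing at the wrong listener.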

    Read the article

  • Block SMTP connections from mail domains which don't themselves accept SMTP connections

    - by bignose
    I'm administering a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the original sender refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
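
    For reference, this is close to Postfix's built-in sender address verification, which probes the declared sender's MX during the SMTP dialogue and rejects the session if the probe fails. A minimal sketch for main.cf (the reject code choice is an assumption to tune):

        smtpd_sender_restrictions =
            permit_mynetworks,
            reject_unverified_sender
        # 550 makes the rejection permanent; the default 450 asks the client to retry
        unverified_sender_reject_code = 550

    Note the probe is a recipient callout to the sender's domain rather than a literal "can I connect back" test, but it catches the same class of one-way senders.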

    Read the article

  • How to remotely connect using perfmon?

    - by user36914
    I'm surprised there isn't a ton of information on Google when I search for this, but there isn't. Lots of people ask the question, but none of them get any good answers. I have a remote computer running Hyper-V (server) with a Windows 7 x64 guest (guest). Occasionally I can't remote desktop to guest. I then remote to server and see that the guest instance is constantly using about 25% of the CPU. When I try to connect directly from server, I get the login screen, but as soon as I type the password it just stays at the Windows 7 login screen; the account names disappear and it never logs in. It responds to pings, though. I don't know how else to diagnose this other than trying to run perfmon remotely. It only happens about every 3 weeks, and I run it 24/7. So I'm trying to run perfmon remotely. I tested this out on a local VM I have running under VMware. When I try to connect to my local VM using perfmon I get this error: "When attempting to connect to the remote computer, the following system error occurred: The network path was not found." I found in another post that I should start the Remote Registry service, but when I start that service I get this error: "No such interface supported". Anyway, how do I remotely connect to another machine with perfmon? Or, if anyone has a better idea of how to diagnose the problem above, let me know.
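
    Remote perfmon relies on the Remote Registry service plus a couple of firewall rule groups on the target machine, so a sketch of the usual prerequisites (run on the VM you want to watch; it assumes Windows Firewall is in play) looks like:

        sc config RemoteRegistry start= auto
        sc start RemoteRegistry
        netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes
        netsh advfirewall firewall set rule group="Performance Logs and Alerts" new enable=yes

    "The network path was not found" usually points at the firewall or a stopped Remote Registry service; "No such interface supported" when starting the service suggests broken COM registrations on that VM, which is a separate repair.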

    Read the article

  • Upgrading PHP, MySQL old-passwords issue

    - by Rushyo
    I've inherited a Windows 2k3 server running an XAMPP installation from the stone age. I needed to upgrade PHP to facilitate an upgrade to MediaWiki to facilitate a new MediaWiki extension (to facilitate some documentation to facilitate doing my job to facilitate getting paid to facilit... you get the idea). However, installing a new version of PHP resulted in PHP's MySQL libraries refusing to communicate using MySQL's 'old style' 152-bit passwords. Not a problem in theory: the MySQL installation is post-4.1, so it should have the functionality to upgrade the user's passwords from 152-bit to 328-bit (what a weird hashing algorithm...). I ran the following on MySQL: SET PASSWORD = PASSWORD('foo'); but querying SELECT user, password FROM mysql.user; returned just the same password I started out with, still 152-bit. Now, I suspect you're thinking 'AHA! old-passwords is on!'. Unfortunately it's not: I've disabled it in the configuration (explicitly set it to 0), made doubly sure I have an absolute reference to that configuration file, and ensured the service isn't using the --old-passwords flag. The service was restarted after each and every operation. So I went onto another system, generated the 328-bit hash there, and copied the hash over to the first MySQL instance. Unfortunately, that didn't work either (I did remember to FLUSH PRIVILEGES). The application error is: "mysqlnd cannot connect to MySQL 4.1+ using the old insecure authentication. Please use an administration tool [...snip...]". Is there anything else I can try to get PHP to recognise MySQL as not using the 'old insecure authentication'? MySQL seems to be stuck in 'old-passwords' mode and I can't get it out of it.
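
    One thing worth ruling out (a sketch, assuming you can get a mysql shell on the box): old_passwords can be pinned at the session level or by a stray configuration file regardless of what your my.ini says, and SET PASSWORD honours the session value.

        -- what does the running server actually think?
        SHOW VARIABLES LIKE 'old_passwords';
        -- force the long hash format for this session, then reset the password
        SET SESSION old_passwords = 0;
        SET PASSWORD FOR 'webuser'@'localhost' = PASSWORD('foo');
        -- a 41-character value starting with '*' means the new format took
        SELECT user, LENGTH(password) FROM mysql.user;

    The account name 'webuser'@'localhost' is a placeholder for whichever user PHP connects as.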

    Read the article

  • High availability virtual machines

    - by Jeremy
    I've been reading a lot about high availability virtualization, either via Hyper-V or VMware. In that context, high availability essentially means that the VM is hosted by a cluster of physical servers (nodes), so if one of the physical servers goes down, the VM can still be served by the other physical servers. So far so good: the physical cluster and the VM itself are highly available. However, the service being provided, let's say SQL Server, MSDTC, or any other service, is actually provided by the VM image and the virtualized operating system. So I imagine there is still a point of failure at the virtual layer that isn't accounted for. Something could happen within the virtual machine itself that the physical cluster cannot account for, correct? In that instance the physical failover cluster (Hyper-V) or VMware host cannot fail over, because the issue is not with one of the servers in the physical cluster; failing over a physical node would not do any good. Does this necessitate building a virtual failover cluster on top of the physical one, or is this not necessary? Alternatively, I suppose you could skip the physical clustering and just cluster at the virtual layer (child-based failover clustering), because that should still survive a physical failure. See the image below showing parent-based (left), child-based (right) and a combination (center). Is parent-based as far as you need to go, or is child-based more appropriate?
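
    For reference, a child-based (guest) cluster is just an ordinary failover cluster whose nodes happen to be VMs, ideally pinned to different physical hosts. A minimal sketch, assuming Windows Server 2012+ guests with the Failover Clustering feature installed (names and the address are placeholders):

        # run inside one of the guest VMs
        New-Cluster -Name SQLGUEST -Node VM-SQL1, VM-SQL2 -StaticAddress 10.0.0.50
        # the clustered service (e.g. a SQL Server failover instance) is then
        # installed on top of this guest cluster, so an OS or application
        # failure inside one VM fails the service over to the other VM

    The parent-level (Hyper-V/VMware) cluster still handles physical host loss; the guest cluster covers the failure domain inside the VM.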

    Read the article

  • PHP application failed to connect after the network was plugged back in

    - by tntu
    My data center appears to have had some issues with their network, and thus my server suffered from on-and-off connectivity for about an hour. After the connection was completely re-established, my code still kept reporting the same issue over and over until I restarted the service. The code is a simple PHP script that loops forever, checking the Apple feedback server, sleeping for a few minutes, and then starting all over again. Now, I understand the error being generated while the network was down, but once it came back up, why did the error continue until I restarted the code? Does PHP have something that needs to be re-initialized?

    Messages log:

        Dec 20 08:57:22 server kernel: r8169: eth0: link down
        Dec 20 08:57:28 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:29 server kernel: r8169: eth0: link down
        Dec 20 08:57:33 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:33 server kernel: r8169: eth0: link down
        Dec 20 08:57:37 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:38 server kernel: r8169: eth0: link down
        Dec 20 08:57:44 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:44 server kernel: r8169: eth0: link down
        Dec 20 08:57:52 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:52 server kernel: r8169: eth0: link down
        Dec 20 09:10:58 server kernel: r8169 0000:06:00.0: eth0: link up

    PHP error:

        PHP Warning: stream_socket_client(): php_network_getaddresses: getaddrinfo failed: Name or service not known in /home/push/feedback.php on line 36

    Code, line 36:

        $apns = stream_socket_client('ssl://feedback.sandbox.push.apple.com:2196', $errcode, $errstr, 60, STREAM_CLIENT_CONNECT, $stream_context);
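
    A sketch of the usual defensive shape for such a loop (an assumption, since the full script isn't shown): treat a failed connect or a dropped socket as a signal to loop back and dial again, since each fresh stream_socket_client() call re-runs the getaddrinfo() lookup that was failing during the outage, instead of reusing a handle created before it.

        <?php
        // hypothetical reconnect loop; certificate path is a placeholder
        $stream_context = stream_context_create(array('ssl' => array(
            'local_cert' => '/home/push/cert.pem',
        )));
        while (true) {
            $apns = @stream_socket_client(
                'ssl://feedback.sandbox.push.apple.com:2196',
                $errcode, $errstr, 60, STREAM_CLIENT_CONNECT, $stream_context);
            if ($apns === false) {
                error_log("feedback connect failed ($errcode): $errstr; retrying");
                sleep(60);      // back off, then retry with a fresh DNS lookup
                continue;
            }
            while ($data = fread($apns, 38)) {
                // ... process one 38-byte feedback tuple ...
            }
            fclose($apns);
            sleep(300);         // normal pause between polls
        }

    If the old code kept looping on a connection or cached resolver state from before the outage, that would explain why only a restart cleared it.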

    Read the article

  • Port(s) not forwarding?

    - by user11189
    I have cable internet service through Charter Communications and feed two desktop computers through a Linksys RP614v3 router. One system is my wife's, running WinXP Home Edition, and the other is mine, running Vista Home Premium (SP1). I have port forwarding configured in the Linksys so I can access the Vista system remotely using TightVNC. Initially, it worked great and I was able to remotely tend email and access local files while out of town for work. Lately, the cable internet service appears to flicker intermittently, and afterwards my Mailwasher program loses the ability to access the net and I've been unable to make the remote connection. When I reset the forwarded email port in the router control panel, Mailwasher functionality returns; but as I'm home when that happens, I have no easy way to check remote access until the next time I'm on the road or at work. I'm at my wit's end: the TightVNC client connects fine from my wife's system behind the modem/router setup, but I don't know how to maintain whatever gets reset when I fiddle with the control panel, and the need to do so at all is new. I accessed it fine, off and on, for a week while out of town a month ago, and now I can't leave home and access it from work an hour later.

    Read the article

  • Establish direct cable connection between Windows 8 PCs in home network

    - by Marie. P.
    I'm running two PCs, a desktop and a laptop, with Windows 8 Release Preview ("Build 8400"). They are connected to the same router in infrastructure mode and so have wireless internet. Because of frequent file synchronization between the machines, I want to establish a cable connection that allows direct file transfer without going over the wireless. When I plug in the cable (a normal one, not a crossover), I see "Ethernet - unidentified network" on both PCs in "Control Panel\Network and Internet\Network Connections". Transferring a file between the two still only uses the Wi-Fi via the router. I noticed that when I turn off the Wi-Fi on one PC, I can set up a shared internet connection that works over the Ethernet cable; but since sometimes only one PC is running, and sometimes the other, I don't want one machine's internet to depend on the other being switched on. I don't have a crossover cable, but since I have already connected the PCs successfully (just without both being on the internet), I'm sure this should also work with a normal Ethernet cable.
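
    A sketch of the usual approach (adapter name and addresses are placeholders): give the wired adapters static addresses in a private subnet the router doesn't use, with no gateway, so Windows routes only that subnet over the cable and everything else over Wi-Fi.

        :: on the desktop (elevated command prompt)
        netsh interface ip set address name="Ethernet" static 192.168.50.1 255.255.255.0
        :: on the laptop
        netsh interface ip set address name="Ethernet" static 192.168.50.2 255.255.255.0

    Then reach the other machine by that address (e.g. \\192.168.50.1\share) and the transfer stays on the cable while internet continues over the router.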

    Read the article

  • My new SSD is causing issues. How can I solve them?

    - by Allan
    Computer specs:

        1 TB hard disk
        120 GB Intel 520 SSD
        8 GB DDR3 RAM
        AMD Phenom II X4 955, 3.2 GHz
        DK DFI LANParty FX7900 M3H3 motherboard
        ASUS ATI Radeon HD6970 2 GB

    I bought a new SSD (Intel 520, 120 GB) and wanted to use it as my system disk. I removed the other hard drive, installed the SSD with the newest firmware, and then installed Windows 7. I updated Windows 7 with no problems and then put my old hard drive back. I formatted that old hard drive, just to clean up at the same time. So at this stage everything was perfect: my new SSD was set as Master 0 Primary, the machine boots from it, and I have a 1 TB empty hard drive I can use for whatever I want. So far, no errors at all. Now here is the problem: I installed a few games, and every time I tried to play, the computer would say Windows must restart because the DCOM Server Process Launcher service terminated, or Windows must now restart because the Plug and Play service terminated unexpectedly. Most commonly this error is caused by a rootkit virus; well, I have tried formatting my entire computer and running every antivirus I could find, so that shouldn't be it. I've also read somewhere it might happen when there are hardware issues. That, on the other hand, would make sense, as I just put in a new SSD. I don't expect you to know this error; I haven't found anyone who does yet. But maybe you can guide me through what might have gone wrong when I put in the SSD? What have I checked regarding the SSD? It displays the right name when the computer starts up. It has the newest firmware. I did an 'sfc /scannow', which told me everything was fine. I don't know what to do from here. Everything seems to work great with the drive, yet when I start playing games my computer crashes.

    Read the article

  • Routing based on source address in Windows Server 2008 R2

    - by rocku
    I'm implementing a direct-routing load-balanced solution using Windows Server 2008 R2 as the back-end server. I've configured a loopback interface with the external IP address. This works: I receive packets addressed to the external IP and respond to them appropriately. However, our infrastructure requires that load-balanced traffic go through a different gateway than any other traffic originating from the server (i.e. updates etc.). So basically I need to route packets based on source address (the external IP) to another gateway. The built-in Windows 'route' command allows routing based on destination address only. I've tried setting a default gateway on the loopback interface and fiddled with the weak/strong host send/receive parameters on the interfaces, but this didn't work. Is there any way around this, possibly using third-party tools?
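
    For reference, the weak-host switches mentioned above look like this (interface names are placeholders); they control whether an interface will send or accept traffic for addresses it doesn't own, but they still don't provide per-source routing tables:

        netsh interface ipv4 set interface "Local Area Connection" weakhostsend=enabled weakhostreceive=enabled
        netsh interface ipv4 set interface "Loopback Adapter" weakhostsend=enabled weakhostreceive=enabled

    Since the built-in route table really is destination-only, source-policy routing generally has to happen upstream (on the load balancer, or a router/firewall in front of the server) or via a third-party filter driver.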

    Read the article

  • Not able to connect to ports other than 22 - OpenVPN

    - by t8h7gu
    I have an OpenVPN network with 5 clients: an Arch Linux computer that hosts the OpenVPN server and also hosts a CentOS virtual machine, itself connected to the OpenVPN subnet; a Windows 8 machine hosting a CentOS virtual machine, both of which are connected to OpenVPN; and finally a CentOS virtual machine hosted by an Ubuntu 14 computer (the host itself is not connected to OpenVPN). All physical computers are on different networks. The problem is that when I use nmap to scan the Windows machine and its guest virtual machine, it says the host seems down. When I force nmap to scan a specific port, it shows a filtered state:

        nmap -Pn -p 50010 n3
        Starting Nmap 6.46 ( http://nmap.org ) at 2014-06-07 19:49 CEST
        Nmap scan report for n3 (10.8.0.3)
        Host is up (0.11s latency).
        rDNS record for 10.8.0.3: node3.com
        PORT      STATE    SERVICE
        50010/tcp filtered unknown

    Telnet also cannot connect to this port:

        telnet n3 50010
        Trying 10.8.0.3...
        telnet: Unable to connect to remote host: No route to host

    But ss on the target host shows the proper state of this port:

        ss -anp | grep 50010
        LISTEN 0 50 10.8.0.3:50010 *:* users:(("java",12310,271))

    What might be the reason for that, and how can I fix it? EDIT: I've found that I am able to connect via telnet to the SSH port:

        telnet n3 22
        Trying 10.8.0.3...
        Connected to n3.
        Escape character is '^]'.
        SSH-2.0-OpenSSH_5.3

    So it seems it's not a problem with the Windows firewall, but I have no idea what it might be. Also, the nmap result for the first thousand ports:

        nmap -Pn -p 1-1000 n3
        Starting Nmap 6.46 ( http://nmap.org ) at 2014-06-07 20:08 CEST
        Nmap scan report for n3 (10.8.0.3)
        Host is up (0.49s latency).
        rDNS record for 10.8.0.3: node3.com
        Not shown: 999 filtered ports
        PORT   STATE SERVICE
        22/tcp open  ssh
        Nmap done: 1 IP address (1 host up) scanned in 77.87 seconds
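
    One hedged observation from those outputs: "22 open, everything else filtered" is exactly what a stock CentOS iptables policy looks like, since the default ruleset accepts SSH and rejects the rest with icmp-host-prohibited, which is also what produces telnet's "No route to host". The filter would be on the CentOS guest (n3) itself, not Windows, and ss still shows the listener because packets are dropped before reaching it. A sketch of testing that theory:

        # on the CentOS VM (n3); CentOS 6-style iptables
        iptables -I INPUT -p tcp --dport 50010 -j ACCEPT
        service iptables save      # persist the rule across reboots

    If telnet from the VPN succeeds after that, the guest firewall was the culprit.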

    Read the article

  • What methods are there to configure puppet to serve resources for multiple environments?

    - by cclark
    I seem to come across two ways of using Puppet in multiple environments: 1) Install a puppetmaster in each environment, and only update the recipes from source control for that environment when ready to deploy them there. 2) Use one puppetmaster, use a variable in each client's puppet.conf to specify the environment, and then on the puppetmaster specify a different modulepath for each environment, with each path updated to the branch of the recipe repository intended for that environment (e.g. dev, staging, production). Running only one puppetmaster seems like one less piece of infrastructure to keep running, but there is some additional complexity in the configuration. Are there additional pros or cons to one of these methods, or something I'm missing entirely?
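
    A sketch of option 2, assuming the older config-file environment syntax (pre-directory-environments, as in Puppet 2.x/3.x; paths are placeholders):

        # puppet.conf on the single puppetmaster
        [dev]
        modulepath = /etc/puppet/environments/dev/modules
        [staging]
        modulepath = /etc/puppet/environments/staging/modules
        [production]
        modulepath = /etc/puppet/environments/production/modules

        # puppet.conf on each client
        [agent]
        environment = dev

    Each environment directory is then checked out to the matching branch of the recipe repository, so promoting a change is a merge plus an update of that one checkout.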

    Read the article

  • Slow DB Performance. Seems to be memory related.

    - by David
    I am seeing a poorly performing web app with a SQL Server 2005 back end. The DB is on a W2k3 machine with 4 GB RAM. When I run perfmon on it I see the following: Page Life Expectancy is low, consistently under 300, while the Buffer Cache Hit Ratio is always 99%+. Target Server Memory is always 1618304 KB and Total Server Memory is always a number just below that, so it seems that SQL Server isn't grabbing enough of the available memory. I have AWE enabled, with the Lock Pages in Memory right granted to the SQL service account, and have set a maximum of 2.25 GB... but it doesn't go near that. When I restart the SQL service, Page Life Expectancy goes much higher, 1000+, and Total Server Memory starts at 0 and slowly works its way back up to the previous limit. Then it hits the limit, and Page Life Expectancy drops massively to under 300. So I'm guessing something is limiting the amount of memory. Any ideas what that would be and how I can fix it?
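
    A sketch for confirming what the instance itself believes (SQL Server 2005 syntax; the 2304 MB value is just the 2.25 GB from above):

        EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
        EXEC sp_configure 'awe enabled';               -- should report 1
        EXEC sp_configure 'max server memory (MB)';    -- should report 2304
        -- if the cap is lower than expected, raise it:
        EXEC sp_configure 'max server memory (MB)', 2304;  RECONFIGURE;

    If both look right yet Total Server Memory still stalls around 1.6 GB, check the SQL Server error log from startup for a message like "Address Windowing Extensions is enabled"; without it, AWE never took effect and the buffer pool stays inside the 2 GB user address space, which would match the numbers above.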

    Read the article

  • MongoDB and GridFS. What are the best storage options in the range of 1 TB?

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files. I am pondering the different options for hosting the database, but since I am inexperienced at deployment and this is my first time with MongoDB, I need your experience. Criteria: I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer and I do not like to mess with server configuration, hence I would like a fully managed hosting solution; but I would like to know about any other option if you think it is worth it. It should be able to scale, cloud style, pay as you go. The lower the price, the better. So far I know of these services: https://mongohq.com/pricing https://mongomachine.com/pricing https://mongolab.com/about/pricing/ http://cloudcontrol.com/add-ons/mongodb/ And they seem to be OK for common needs, that is, no file storage. But I am going to use GridFS, so size matters, and these services seem to scale quite poorly in price. MongoHQ: the largest plan's max storage is 20 GB, which seems like very little storage for GridFS. MongoMachine: flat price, $2.50 per GB; I didn't find a limit. Seems like a good price compared to the others. MongoLab: 3.984 GB max, which I don't think I will hit, so perfect; $8 per GB, quite costly. CloudControl: the largest plan is 20 GB; the custom service starts at 250€ plus some unspecified charge per GB. What is your experience with these services? Any downtimes? Other possibilities?

    Read the article

  • Rate of UDP packet loss over WLAN

    - by Martin
    While testing something with TFTP I noticed lots of timeouts (and slow speed as a result) when I used my WLAN, and no problems when using a network cable. A quick test program sending and receiving UDP revealed that about 3-5% of packets are lost. While it's obvious that WLAN has to be less reliable than wired LAN, I have no idea what loss rates are considered 'normal', or when there is a need to investigate the network infrastructure further. Are there 'typical' packet loss rates for WLAN (and other network technologies, e.g. PowerLAN, WAN, ...)? Thanks
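
    For anyone reproducing this kind of measurement, a sketch with iperf (assuming it is installed on both ends) gives a loss figure comparable to a homegrown test program:

        # on the wired receiver
        iperf -s -u
        # on the WLAN sender; the final report prints the datagram loss rate
        iperf -c <receiver-host> -u -b 1M -t 30

    Note that UDP loss on Wi-Fi is rate-dependent, so it's worth testing at the bandwidth the application actually uses.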

    Read the article

  • Limit WSUS replication to only certain product classifications

    - by MDMarra
    I have four WSUS 3.0 SP2 servers that are geographically distributed. The server at our main site (we'll call it WSUS1), is the main WSUS server. All manual and auto-approvals happen here. The other three WSUS servers are replicas of this server. Currently, we are only controlling desktop OS updates through WSUS. I would like to control server OS updates through WSUS as well. There is no need for all of these server updates to be on WSUS servers at the remote sites. The only server that would need a copy of them is WSUS1. Is there a way to keep my current infrastructure as-is and add server OS updates only to WSUS1, even though the others are set up as replicas, or will I need to configure an additional WSUS server that's not replicated?

    Read the article

  • RabbitMQ Management console not working

    - by rrejc
    I have started with RabbitMQ. I have a (Windows) machine on which I installed two RabbitMQ nodes as services; I chose the node name, port and service name for each of them. The services are running normally (I see them listening in netstat -a). I also installed the management plugin with "rabbitmq-plugins enable rabbitmq_management" and restarted both services. But the plugin isn't running: I don't see it listening in netstat, and I can't connect to the management console via a browser. Any idea what could be wrong? Is there any log to see what is going on? Update: when I do rabbitmq-plugins list I get:

        c:\RabbitMq\sbin>rabbitmq-plugins list
        [e] amqp_client                       3.0.1
        [ ] cowboy                            0.5.0-rmq3.0.1-git4b93c2d
        [ ] eldap                             3.0.1-gite309de4
        [e] mochiweb                          2.3.1-rmq3.0.1-gitd541e9a
        [ ] rabbitmq_auth_backend_ldap        3.0.1
        [ ] rabbitmq_auth_mechanism_ssl       3.0.1
        [ ] rabbitmq_consistent_hash_exchange 3.0.1
        [ ] rabbitmq_federation               3.0.1
        [ ] rabbitmq_federation_management    3.0.1
        [ ] rabbitmq_jsonrpc                  3.0.1
        [ ] rabbitmq_jsonrpc_channel          3.0.1
        [ ] rabbitmq_jsonrpc_channel_examples 3.0.1
        [E] rabbitmq_management               3.0.1
        [e] rabbitmq_management_agent         3.0.1
        [ ] rabbitmq_management_visualiser    3.0.1
        [e] rabbitmq_mochiweb                 3.0.1
        [ ] rabbitmq_mqtt                     3.0.1
        [ ] rabbitmq_old_federation           3.0.1
        [ ] rabbitmq_shovel                   3.0.1
        [ ] rabbitmq_shovel_management        3.0.1
        [ ] rabbitmq_stomp                    3.0.1
        [ ] rabbitmq_tracing                  3.0.1
        [ ] rabbitmq_web_stomp                3.0.1
        [ ] rabbitmq_web_stomp_examples       3.0.1
        [ ] rfc4627_jsonrpc                   3.0.1-git7ab174b
        [ ] sockjs                            0.3.3-rmq3.0.1-git92d4ba4
        [e] webmachine                        1.9.1-rmq3.0.1-git52e62bc
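
    One Windows-specific gotcha worth checking (an assumption based on the behaviour of 3.0.x-era releases): when RabbitMQ runs as a Windows service, plugin changes only take effect after the service is reinstalled, not merely restarted, e.g.:

        c:\RabbitMq\sbin>rabbitmq-service.bat stop
        c:\RabbitMq\sbin>rabbitmq-service.bat remove
        c:\RabbitMq\sbin>rabbitmq-service.bat install
        c:\RabbitMq\sbin>rabbitmq-service.bat start

    As for logs, the node logs default to %APPDATA%\RabbitMQ\log, which is the first place to look for the management listener's startup line.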

    Read the article

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at the various methods of distributed Nagios checks, and I think DNX comes out closest. DNX handles failure of worker nodes; that's fine. What happens if the main DNX server fails, though? Is there a way to replicate that server too? I'm using AWS EC2 primarily, so I can use Elastic Load Balancing for the web UI, but I need to handle failure of the AZ where the monitoring server lives and essentially have a second server pick up the checking load (active/passive or active/active, so long as it doesn't fail completely). The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not the NRPE checks, as they're pretty self-explanatory, but things more like check_ping: I often have routing issues out of AWS to certain datacenters, so Nagios can report bad/no ping or timeouts as a critical issue even though the machine in question is working fine. Would it be possible to have a setup where one worker complains that a service check is critical, and a second worker node (positioned in another datacenter/AZ) must also report the service as critical, before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous?), but surely someone must have thought of this scenario when developing DNX?

    Read the article

  • overload environment

    - by Richo
    I've recently switched to keeping my home directory in an svn repo across all my machines, meaning that my utility scripts, configuration (irssi, vim, zsh, screen, etc.) as well as my .profile and so forth are easier to keep up to date across all the places I log in. I use a set of sourced .local files to override them on a per-site basis as required. As it stands, many of my scripts inherit some form of configuration; for the most part I've been setting an environment variable in .profile and then, if needed, overriding it per site in .profile.local. This works great, but are there pitfalls to having a stack of environment variables? Comparing against the default environment of an X session before any of my personal configuration, I haven't even increased its size by 50%, but some of the machines I work on are low on resources. Am I bloating my system unnecessarily, or being needlessly paranoid? Should I start moving this config into separate flat files that are loaded as needed? That means extra infrastructure; alternatively, I could write a single module for storing config that all of my utilities can inherit.
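
    A sketch of the load-as-needed idea (the ~/.env.d layout is an assumption), which keeps the login environment small without a config framework:

        # in .profile / .zshrc: a tiny loader instead of dozens of exports
        load_env() {
            [ -r "$HOME/.env.d/$1" ] && . "$HOME/.env.d/$1"
        }
        # each utility script then pulls in just its own settings:
        load_env irssi

    In practice the memory cost of a modest environment is negligible even on small machines; the stack of overrides usually costs more in debuggability than in resources.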

    Read the article

  • Any ideas why Ettercap filters aren't seeing packet data?

    - by Bryan
    I'm using an Ettercap filter to detect a query response coming back from a particular service on a remote machine. When I see a response from the service, I search through the data in the packet to see if one offset holds a specific value, and if so I change the value at another offset. Trouble is, when I try this on a new virtual machine I built, my Ettercap filter no longer gets any data in the DATA.data variable available to it.

        if (ip.proto == TCP && tcp.src == 17867) {
            msg("Response seen!\n");
            if (DATA.data + 2 == "\0x01") {
                msg("Flag detected!\n");
                DATA.data + 5 = 0x09;
            }
        }

    The filter is getting applied to the traffic, because "Response seen!" messages get printed by Ettercap. However, "Flag detected!" messages do not. I think DATA.data is indeed empty, because if I change my second "if" statement to check for DATA.data == "" then the "Flag detected!" message does get printed. Any ideas why this may be happening? Also, if this is the wrong site to be asking questions like this, please let me know; I wasn't sure if it fit better here or somewhere like Super User or Server Fault. By the way, this is a cross-post from StackOverflow... I should have posted on this forum instead, I think. :)
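
    A hedged guess based on a common Ettercap pitfall: on VMs, NIC checksum and segmentation offload means captured packets often carry invalid checksums, and Ettercap can decline to process their payloads, which would leave DATA.data empty on the new machine while the old one worked. A sketch of ruling that out on a Linux guest (interface name is a placeholder):

        # disable offloads so captured packets carry final, valid checksums
        ethtool -K eth0 tx off rx off tso off gso off

    If the filter starts matching afterwards, the virtual NIC's offload settings were the difference between the two machines.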

    Read the article

  • Distributed website server redundancy

    - by Keith Lion
    Assume a website infrastructure is very complicated and fully distributed (probably like most large web companies). Am I right in thinking that although there are all these extra web servers to handle multiple client requests, there is still a single "machine" through which users must enter? I'm guessing this machine will be the one physically associated with the IP address. I ask because I need to know whether, in places where distributed systems exist, there is still a single point of failure: usually the control node or, in this example, the machine connected to the public internet. Surely there cannot be two machines connected to the internet, as they would have to have different IP addresses? This "machine" may not be a server per se; maybe it is a piece of Cisco equipment. I just need to know whether, in the real world, these distributed systems still have a particular section where they depend on the integrity of one electronic device.

    Read the article

  • Recommendations for good Unix MTA / groupware solutions? [closed]

    - by Jez
    Possible Duplicate: Exchange server replacement that runs on Linux I'm setting up a Debian server, and one of the things I need on it is an MTA. I don't want to use something like Exim or Postfix because I want something that ties in SMTP, POP3, and IMAP all in one (a la Microsoft Exchange). Most MTAs also seem to be hellishly difficult to configure. Try and read the Exim documentation; you could do a university degree on it (I'm not kidding). When you can get an HTTP server like Cherokee which is easy to configure and has a nice web interface, do MTAs or groupware solutions need to be that hard? I'm aware that some people think "the Unix way" is to have lots of different interacting pieces of software (like maybe an SMTP MTA, POP3 service, webmail service, and overarching manager to tie them all together), but I think this is a situation where that just makes things a lot harder to deal with and one large software suite fits in much more nicely. So, I'm looking for good open source software suites that will run on Debian that: Combine (at least) SMTP, POP3, and IMAP Are easy(ish) to configure Have a nice configuration web interface or GUI Are not defunct projects I don't mind if it's groupware and offers calendaring too, but I would only be using the e-mail functionality for now. Another nice-to-have would be built-in webmail (if we're combining a bunch of functionality, why not?) Note however that I do NOT need Outlook support. I am not really looking for an "Exchange replacement drop-in". The suites I've found so far that seem to match the above criteria (and have appropriate licenses) are Citadel, Kolab, and Zimbra. I'd appreciate anyone who has experience with any of these giving me the pros and cons of them, such as how easy they are to configure and what their performance is like. I'd also appreciate any other suggestions for solutions that fulfil my criteria that I may have missed out.

    Read the article

  • What is the "real" difference between a NAS and NFS?

    - by warren
    From an end-user perspective, what is the difference between a NAS device and using NFS exports from a file server? They seem to accomplish the same end result. The difference between a SAN and other file storage is related (in my experience) to how they are connected to the server infrastructure. However, the difference between a NAS, connecting over standard ethernet, and NFS (sharing storage off specific servers, also over the network), seems more nebulous. Is there a good reason to pick a NAS filer over NFS on servers?

    Read the article

  • Limited connections to Ubuntu 12.04 server

    - by Luis M. Valenzuela
    I'm having a weird problem with my server. The server is inside my network, connected to a 3Com switch which is connected to the router that handles the internet connection. The main purpose of the server is to host a PHP application. What's happening is that users 1 to 15 on the private network have no problem connecting to the server, but when user 16 tries to connect, a timeout occurs and they are unable to connect. It's not just the PHP application; it's any service on the server. When the 15 users are using the application, the server doesn't even answer pings. I haven't set any special limit in Apache's configuration or MySQL, and the firewall is turned off because the server only serves the internal network. Is there a parameter in any of the network card configuration files that might be causing this? Or should I suspect the router's or switch's configuration? UPDATE: Tomorrow I'm going to run some tests on the server, modifying two kernel params in /etc/sysctl.conf: net.core.somaxconn, which caps the backlog of pending connections to the server, and kernel.shmmax, which sets the maximum shared-memory segment size.
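
    For reference, a sketch of the planned change (the value is illustrative, not a recommendation):

        # /etc/sysctl.conf
        net.core.somaxconn = 1024      # listen() backlog cap; the default is 128
        # apply without a reboot:
        #   sysctl -p

    Worth noting: somaxconn only matters once connections are queuing at an accept backlog, and kernel.shmmax has nothing to do with connection counts; so if the box stops answering pings entirely at 16 users, the bottleneck is more likely the NIC, switch port, or router (e.g. a session limit on a consumer router) than these kernel settings.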

    Read the article
