Search Results

Search found 8849 results on 354 pages for 'cloud hosting'.

Page 308 of 354

  • Fail2Ban adds iptables rules but they are not working?

    - by EApubs
    Fail2Ban just blocked my IP after 3 failed SSH attempts. It added the iptables rule, and I can see it using the "sudo iptables -L -n" command. But I can still access the site and log in through SSH! What might be the problem? Is it because I'm using CloudFlare? I have set Nginx to write the real visitor IPs to the access logs instead of the CloudFlare IPs. Isn't that enough?

        Chain fail2ban-ssh (1 references)
        target     prot opt source           destination
        DROP       all  --  119.235.14.8     0.0.0.0/0
        RETURN     all  --  0.0.0.0/0        0.0.0.0/0

    The INPUT chain:

        Chain INPUT (policy DROP)
        target                    prot opt source     destination
        fail2ban-NoAuthFailures   tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:80
        fail2ban-nginx-dos        tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 80,8090
        fail2ban-postfix          tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 25,465
        fail2ban-ssh-ddos         tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 22
        fail2ban-ssh              tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 22
        ufw-before-logging-input  all  --  0.0.0.0/0  0.0.0.0/0
        ufw-before-input          all  --  0.0.0.0/0  0.0.0.0/0
        ufw-after-input           all  --  0.0.0.0/0  0.0.0.0/0
        ufw-after-logging-input   all  --  0.0.0.0/0  0.0.0.0/0
        ufw-reject-input          all  --  0.0.0.0/0  0.0.0.0/0
        ufw-track-input           all  --  0.0.0.0/0  0.0.0.0/0
        LOG                       all  --  0.0.0.0/0  0.0.0.0/0   LOG flags 0 level 4
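
    A useful first check is whether the DROP rule is matching any packets at all; the per-rule counters will show it. A minimal sketch (the chain and address are taken from the output above):

        # Show packet/byte counters for each rule in the fail2ban-ssh chain
        sudo iptables -L fail2ban-ssh -n -v

    If the counters stay at zero while the "banned" client connects, the traffic is not arriving from the banned address. For web traffic behind CloudFlare that is expected, since connections reach the server from CloudFlare's proxy addresses (rewriting the IPs in nginx logs changes what fail2ban sees, not what iptables sees). SSH, however, does not pass through CloudFlare, so the banned address has to match the address you actually connect from.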

  • Nginx: Serve static files out of a given directory - one level too deep

    - by Joe J
    I'm pretty new to nginx configs and I'm having some difficulty with a fairly basic problem. I'd like to host some static files at /doc (index.html, some images, etc.). The files are located in a directory called /sites/mysite/proj/doc/. The problem is that with the nginx config below, nginx tries to look for a directory called "/sites/mysite/proj/doc/doc". Perhaps this could be fixed by setting the root to /sites/mysite/proj/, but I don't want to potentially expose other (non-static) assets in the proj/ directory, and for various reasons I can't really move the doc/ directory from where it is. I think there is a way to use a rewrite rule to solve this situation, but I don't really understand all the parts, so I'm having some difficulty formulating the rule:

        rewrite ^/doc/(.*)$ /$1 permanent;

    I've also included a working example of hosting files out of a /sites/mysite/htdocs/static/ directory.

        > vim locations.conf

        location /static {
            root /sites/mysite/htdocs/;
            access_log off;
            autoindex on;
        }

        location /doc {
            root /sites/mysite/proj/doc/;
            access_log on;
            autoindex on;
        }

    The error log shows:

        2011/11/19 23:49:00 [error] 2314#0: *42 open() "/sites/mysite/proj/doc/doc"
        failed (2: No such file or directory), client: 100.100.100.100, server: ,
        request: "GET /doc HTTP/1.1", host: "myhost.com"

    Does anyone have any ideas how I might go about serving this static content? Any help is much appreciated. Thanks, Joe
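
    The doc/doc path is the expected behaviour of the root directive, which appends the full request URI to the configured path. The alias directive substitutes the location prefix instead, which is usually the fix for exactly this situation. A minimal sketch:

        location /doc/ {
            alias /sites/mysite/proj/doc/;   # /doc/x.png -> /sites/mysite/proj/doc/x.png
            access_log on;
            autoindex on;
        }

    With alias, nothing outside doc/ is exposed and no rewrite rule is needed.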

  • Setting up MongoDB in a High Performance Computing LSF Linux cluster

    - by Dnaiel
    I am trying to run MongoDB in an LSF cluster computing environment where I have no admin control. Our sysadmin installed mongodb, but it is not running. Any ideas on what I should ask the server admin to do for it to run? Or could I run it locally?

        [node1382]allelix> mongod --dbpath /users/dnaiel/ma/mongodb/
        Tue Oct 2 21:33:48 [initandlisten] MongoDB starting : pid=22436 port=27017 dbpath=/seq/epigenome01/allelix/ma/mongodb/ 64-bit host=node1382
        Tue Oct 2 21:33:48 [initandlisten]
        Tue Oct 2 21:33:48 [initandlisten] ** WARNING: You are running on a NUMA machine.
        Tue Oct 2 21:33:48 [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
        Tue Oct 2 21:33:48 [initandlisten] **              numactl --interleave=all mongod [other options]
        Tue Oct 2 21:33:48 [initandlisten]
        Tue Oct 2 21:33:48 [initandlisten] db version v2.2.0, pdfile version 4.5
        Tue Oct 2 21:33:48 [initandlisten] git version: f5e83eae9cfbec7fb7a071321928f00d1b0c5207
        Tue Oct 2 21:33:48 [initandlisten] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49
        Tue Oct 2 21:33:48 [initandlisten] options: { dbpath: "/users/dnaiel/ma/mongodb/" }
        Tue Oct 2 21:33:48 [initandlisten] journal dir=/users/dnaiel/ma/mongodb/journal
        Tue Oct 2 21:33:48 [initandlisten] recover begin
        Tue Oct 2 21:33:48 [initandlisten] info no lsn file in journal/ directory
        Tue Oct 2 21:33:48 [initandlisten] recover lsn: 0
        Tue Oct 2 21:33:48 [initandlisten] recover /seq/epigenome01/allelix/ma/mongodb/journal/j._0
        Tue Oct 2 21:33:48 [initandlisten] recover cleaning up
        Tue Oct 2 21:33:48 [initandlisten] removeJournalFiles
        Tue Oct 2 21:33:48 [initandlisten] recover done
        Tue Oct 2 21:33:48 [websvr] admin web console waiting for connections on port 28017
        Tue Oct 2 21:33:48 [initandlisten] waiting for connections on port 27017

    It basically waits there forever and never returns to the prompt, so I can't get mongodb started as a service. These servers are not web servers, but they do have network access; it's a cloud computing LSF environment. Any advice would be welcome, thanks in advance.
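
    One detail worth noting from the log: mongod did start successfully. "waiting for connections on port 27017" is its normal foreground state, and the prompt never returns simply because the process is not daemonized. A hedged sketch of running it detached instead, applying the NUMA advice from the warning (paths are the ones from the question):

        numactl --interleave=all mongod \
            --dbpath /users/dnaiel/ma/mongodb \
            --logpath /users/dnaiel/ma/mongodb/mongod.log \
            --fork

    After that, a mongo shell on the same node should be able to connect to localhost:27017. On a shared LSF cluster it may also be worth submitting mongod as a long-running job rather than running it on a login node, but that depends on local policy.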

  • Port forwarding + shared connection with Ubuntu

    - by Joey Adams
    Because my wireless router's ethernet ports are defective, I set up a shared wireless connection from my laptop (which has wifi) to my eMac (which does not) via a crossover ethernet cable. The laptop is behind a router as 192.168.1.131, and the eMac is behind the laptop as 10.42.43.1. The laptop is running Ubuntu 9.10 (Karmic). I achieved the shared connection through the NetworkManager applet: I right-clicked on the network icon at the top right, went to Edit Connections, selected the wired connection named "Auto eth0", clicked "Edit...", went to the "IPv4 Settings" tab, and selected the method "Shared to other computers". The eMac can now access the Internet.

    Now I want to enable port forwarding. There's a game I want to play that needs port 6112 forwarded (both TCP and UDP) in order to host games. I set up the router to enable port forwarding to 192.168.1.131 (the laptop), but port forwarding still isn't available on the eMac. I suppose I need to pretend my laptop is a router and configure port forwarding on it, indicating that incoming connections to the laptop (192.168.1.131) should be forwarded to the eMac on the shared connection (10.42.43.1). Thus, packets coming into the router on port 6112 would be redirected to the laptop (by the router), then to the eMac (by the laptop). My question is, how would I do that on Ubuntu (in light of NetworkManager's presence)? Also, if I can't get this to work, does anyone mind hosting a comp stomp? :D
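
    NetworkManager's "Shared to other computers" mode already sets up NAT with iptables, so the missing piece is a DNAT rule on the laptop for the game port. A sketch, assuming the wifi uplink is wlan0 (interface names would need checking with ifconfig):

        # Redirect game traffic arriving on the wifi side to the eMac
        sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 6112 -j DNAT --to-destination 10.42.43.1
        sudo iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 6112 -j DNAT --to-destination 10.42.43.1

        # Make sure the forwarded traffic is allowed through
        sudo iptables -A FORWARD -d 10.42.43.1 -p tcp --dport 6112 -j ACCEPT
        sudo iptables -A FORWARD -d 10.42.43.1 -p udp --dport 6112 -j ACCEPT

    These rules do not survive a reboot (or NetworkManager reconfiguring the connection), so they would need to be reapplied, for example from a small startup script.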

  • On what should I not be penny-wise when buying a machine for SQL Server 2008?

    - by Michel
    Hi, I'm going to do a project for a client, and I'll be hosting the database server myself. Normally it would be on my dev machine, but data will also be pushed into it during development and testing, so I would like to set up a dedicated test SQL Server machine. But, as you might guess, I can't afford to go to Dell and buy one mega 16-core, 16 GB, 10 TB RAID 5 machine (wow, that sounds cool). So I have to save the money somewhere. The hardware only has to live for a year (longer is nice, of course), and the SQL Server won't be hit too hard: I guess the average server would only see it as a cough once in a while. But I do want the machine to be somewhat performant: if it does get some data, it must be reasonably responsive. So my question is: where can I leave out the expensive parts? Is 2 GB enough, or must I take 4 GB? Is an average processor enough, or should it be top of the line? Is SQL Server a heavy resource user, or is a simple desktop PC good enough? It will run on Windows Server 2008, by the way.

  • Can any postfix guru help me determine how emails are still being sent via my server from unauthorized sources?

    - by Dave
    Hi all, I'm getting a little concerned, as I run a small server hosting a number of websites and manage the email for a few dozen people. Just recently I've had a couple of notifications from SpamCop alerting me that spam has been sent from my server, and when I look over the logs from time to time I can indeed see many repeated attempts at mail being sent from my server. Most of the time it gets knocked back by the destination servers, but sometimes it gets through. Unfortunately I'm not a Linux or Postfix expert; I can get by, but I had thought I had my machine locked down quite securely. I don't allow relaying, and when I check the online DNS/MX tools they tend to report my server as OK, so I'm not sure where to take it now and am hoping someone might be able to throw me a few pointers. I get lots of entries like this in my MAIL.INFO log:

        Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 66B88257C12F: from=<>, size=3116, nrcpt=1 (queue active)
        Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 614C2257C1BC: from=<[email protected]>, size=2490, nrcpt=3 (queue active)

    and

        Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6471]: 0A316257C204: to=<[email protected]>, relay=none, delay=384387, delays=384384/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)
        Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6470]: 5848C257C20D: to=<[email protected]>, relay=none, delay=384373, delays=384370/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)

    Then there tend to be connection timeouts. So, from what I can see, even though I have relaying disabled, something is getting by and trying to send. Any help will be greatly appreciated, and I can supply further logging/config info on request. Thanks
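
    A good first diagnostic is to look at how one of the queued messages entered the system, since the first Received: header distinguishes mail injected locally (for example by a compromised PHP script calling sendmail) from mail accepted over SMTP with a stolen password. A sketch, using a queue ID from the log above:

        # List what is sitting in the queue
        mailq | head

        # Dump one queued message, headers included
        postcat -q 66B88257C12F | less

        # If SMTP authentication abuse is suspected, see which user sent it
        grep sasl_username /var/log/mail.log | tail

    Locally injected mail with no SASL user and a web-server UID in the Received: line usually points at a vulnerable web script rather than an open relay.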

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (on the Xen hypervisor) for typical hosting workloads (mail, web, a couple of VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes.

    Since we're an all-Dell shop, I've done some research and found that the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts to a single storage unit, and I'm wondering if there is any clear advantage to one or the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single-controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual-controller option). Both setups should let us implement live VM migration, since all hosts can access all the LUNs at the same time, along with a shared filesystem like GFS2 or OCFS2. Both setups should also allow full redundancy of the whole system (assuming dual controllers in the storage).

    One difference I can see is that the DAS solution is limited to 4 hosts, while the iSCSI one should be able to grow to more hosts (by adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add more SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage, which I think is a good thing (getting rid of any local storage). On the other hand, I would have thought the SAS option would be cheaper, but it seems an MD3200 actually costs a little less than an MD3200i (?).

    (Please note: I've used Dell gear in my examples since that's what we're looking at, but I assume the same goes for other vendors.) I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

  • Setting up WHM nameservers

    - by Miskone
    I am new to server administration and I've just got a dedicated root server from Hetzner. First, in Hetzner's Robot I registered the nameservers:

        ns1.raybear.com  88.198.32.57
        ns2.raybear.com  88.198.32.57

    Under DNS entries I have buzz-buzz.me pointing to 88.198.32.57 (my server's IP address), and in WHM I have this DNS zone for buzz-buzz.me:

        ; cPanel first:11.42.1.17 (update_time):1402062640 Cpanel::ZoneFile::VERSION:1.3 hostname:hosting.raybear.com latest:11.42.1.17
        ; Zone file for buzz-buzz.me
        $TTL 14400
        buzz-buzz.me.   86400   IN   SOA   ns1.raybear.com. miskone.gmail.com. (
                        2014060605  ;Serial Number
                        86400       ;refresh
                        7200        ;retry
                        3600000     ;expire
                        86400 )
        buzz-buzz.me.   86400   IN   NS    ns1.raybear.com.
        buzz-buzz.me.   86400   IN   NS    ns2.raybear.com.
        buzz-buzz.me.   14400   IN   A     88.198.32.57
        localhost       14400   IN   A     127.0.0.1
        buzz-buzz.me.   14400   IN   MX    0   buzz-buzz.me.
        mail            14400   IN   CNAME buzz-buzz.me.
        www             14400   IN   CNAME buzz-buzz.me.
        ftp             14400   IN   CNAME buzz-buzz.me.
        agent           14400   IN   A     88.198.32.57
        src             14400   IN   A     88.198.32.57
        platform        14400   IN   A     88.198.32.57

    But I still have problems accessing buzz-buzz.me, agent.buzz-buzz.me and platform.buzz-buzz.me. I also have a problem with mail to my Google account: I can send but not receive emails. How do I solve this? As I said, I am completely new to this and need urgent help. :(
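
    One detail worth noting for the mail half of the question: the zone's MX record points at buzz-buzz.me itself, so inbound mail is delivered to this server, not to Google. If the intent is to receive the domain's mail in a Google account, the MX records would need to point at Google's mail servers instead. Roughly, using the classic Google Apps MX set (worth verifying against Google's current documentation):

        buzz-buzz.me.   14400   IN   MX   1    aspmx.l.google.com.
        buzz-buzz.me.   14400   IN   MX   5    alt1.aspmx.l.google.com.
        buzz-buzz.me.   14400   IN   MX   5    alt2.aspmx.l.google.com.
        buzz-buzz.me.   14400   IN   MX   10   aspmx2.googlemail.com.
        buzz-buzz.me.   14400   IN   MX   10   aspmx3.googlemail.com.

    Separately, both registered nameservers share one IP (88.198.32.57), which most registries accept only grudgingly; two distinct addresses are the usual recommendation.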

  • What is /etc/apache2/sites-available used for and is it necessary?

    - by Mariane
    I have 3 sites, each with a specific IP, running on Apache 2 (up-to-date Ubuntu). To put a site online, I just created a file in /etc/apache2/sites-enabled, and in this file I told Apache which directory was the root directory for the site and which IP it should correspond to. So I have:

        000-default
        001-www.lapf.eu
        002-www.felkin.info
        003-www.seidhr.fr

    in that directory. My first site, lapf, suddenly lost contact with its database after the domain name was transferred from another registrar to the registrar that is also hosting the site's data. Then I did an update, reinstalled mysql-server and mysql-common, and reinstalled the locales (utf8 and such) which had vanished for some reason. This fixed my first site.

    Now I've noticed that the other 2 sites are offline. Pointing a browser at them just hangs until timeout. They used to work, and their domain names did not move; they are still registered at the same place. The files are still in /etc/apache2/sites-enabled. I noticed another directory, /etc/apache2/sites-available, with just default and default-ssl in it. Why are there 2 directories, sites-enabled and sites-available? Should I copy the files from "sites-enabled" into "sites-available"? Or should I put a modified version of each in "sites-available"? Output of "apache2ctl -S":

        VirtualHost configuration:
        92.243.20.169:80  Charlotte (/etc/apache2/sites-enabled/001-www.lapf.eu:1)
        92.243.21.141:80  xvm-21-141.ghst.net (/etc/apache2/sites-enabled/002-www.felkin.info:1)
        92.243.4.114:80   xvm-4-114.ghst.net (/etc/apache2/sites-enabled/003-www.seidhr.fr:1)
        wildcard NameVirtualHosts and default servers:
        *:80 is a NameVirtualHost
             default server Charlotte (/etc/apache2/sites-enabled/000-default:1)
             port 80 namevhost Charlotte (/etc/apache2/sites-enabled/000-default:1)
        Syntax OK
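
    For background: the Debian/Ubuntu convention is that sites-available holds every vhost definition, and sites-enabled holds only symlinks to the ones that should be active, managed with a2ensite/a2dissite rather than by editing sites-enabled directly. Apache itself only reads sites-enabled, so real files there do work, as the apache2ctl -S output confirms (all three vhosts are loaded). A sketch of moving one site over to the conventional layout:

        sudo mv /etc/apache2/sites-enabled/001-www.lapf.eu /etc/apache2/sites-available/
        sudo a2ensite 001-www.lapf.eu        # recreates it as a symlink in sites-enabled
        sudo apache2ctl configtest           # verify syntax before reloading
        sudo /etc/init.d/apache2 reload

    Since the vhosts are loaded, the timeouts on the other two sites are more likely a DNS or firewall issue than a sites-available problem.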

  • Cannot resolve Hostname to IP, but IP to hostname works

    - by blade
    Hi, I have deployed a bunch of Windows Server VMs on a cloud hosting service. These machines are all joined to a domain controller on the same service, which also hosts DNS. All of the domain-joined machines have dynamic IPs (as does the DC). If I try to resolve any of the hostnames remotely, it fails. For example, in SQL Server Reporting Services I need to connect to a remote server: providing the hostname of the desired target server fails, but providing the IP works. How can I get the hostname to resolve to an IP? Is there anything I should look for in the DNS server? It has records of the hostnames (in the forward lookup zone, I think), but the reverse zone is empty. My understanding is that forward lookup resolves hostnames to IPs and reverse lookup resolves IPs to hostnames, is that right? Also, I don't know the subnet mask because this is not in my control, so the machines may not be in the same subnet. Could that be the cause? Where is the problem? Thanks
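
    A quick way to localize this kind of failure is to test resolution explicitly against the DC's DNS service, since a client pointed at some other resolver will never see the Active Directory zone. A sketch (the server name and the DC address are placeholders):

        REM Which DNS servers is this client actually using?
        ipconfig /all

        REM Query the target name directly against the DC's DNS service
        nslookup targetserver.mydomain.local 10.0.0.5

    If the direct query works but a plain "nslookup targetserver" fails, the fix is usually to point the client's DNS at the DC and/or to set the DNS suffix search list; the empty reverse zone only affects IP-to-name lookups and would not cause this symptom.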

  • How to route public static IP to a virtual machine on a vmware ESXi host?

    - by Kevin Southworth
    I have 5 static IPs from my ISP (Comcast), and I have a physical machine running VMware ESXi 4.0 that hosts multiple virtual machines. Right now I am just using the default VMware virtual network (vSwitch0) with DHCP from the Comcast IP gateway router, and everything works fine: each virtual machine can access the internet, etc.

    One of my virtual machines is a web server (Windows Server 2008), and I want to assign it one of my 5 static IPs so it's accessible from the public internet, while leaving the other VMs on the internal LAN still using DHCP. If I plug my laptop directly into the Comcast IP gateway (it has 4 ports on the back) and assign the laptop a static IP using the Windows networking dialogs, I can reach the laptop from the public internet and it works great. However, if I follow the same steps to set a static IP config on my Windows Server 2008 VM, it does not work: the VM cannot access the internet (opening Firefox and visiting google.com fails), and I cannot see the VM from the public internet either. I'm assuming I'm missing something in the ESXi config somewhere, but I'm pretty new to ESXi and I'm not sure how to configure it to work this way.
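
    For reference, a standard vSwitch in ESXi behaves like a plain layer-2 switch, so no ESXi-side routing configuration is needed for a public IP; the VM just has to carry the same static settings that worked on the laptop. A hedged sketch of setting those inside the Windows Server 2008 VM (all addresses are placeholders for the values Comcast assigned):

        netsh interface ipv4 set address name="Local Area Connection" source=static address=203.0.113.10 mask=255.255.255.248 gateway=203.0.113.9
        netsh interface ipv4 set dnsservers name="Local Area Connection" source=static address=75.75.75.75

    If that still fails, the usual suspects are the VM sitting on a vSwitch whose physical uplink is not the NIC cabled to the Comcast gateway, or the gateway having pinned the static IP to the first MAC address it saw (the laptop's), which a gateway reboot typically clears.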

  • Can many addon domains slow down cPanel or create problems?

    - by Marco Demaio
    I actually resell hosting plans using one single cPanel account and many addon domains under it. Basically, for each new user I don't create a new cPanel account; I simply create a new addon domain and give him the necessary space. I know that this way the final user won't be able to manage his email and won't have access to all the cPanel features, but that's OK. My only question is: is there a limit on the number of addon domains that can be added? I know my account allows unlimited addon domains, but is there anything that might slow down cPanel or create problems? I mean, is there any advice you could give me about the way I'm using cPanel, which might not be the usual way of using it? (For instance: "be aware that AWStats could run very slowly or crash", or "be aware that mail from different addon domains under the same cPanel account could create many problems", etc.) Many thanks!

  • Win 2003 SBS - secure enough by default?

    - by Pekka
    I have to set up a Windows 2003 Small Business Server to work as a Subversion repository and possibly as an e-mail server later. The machine is a virtual one, hosted with a hosting company, and freshly initialized. I used the Security Configuration Wizard to deactivate all server roles. After I install Subversion, I will open the necessary ports for the service; in addition, obviously, RDP will stay open so I can remote-control the machine. Automatic updates are activated, and I will set up e-mail notification every time somebody logs on to the server. I'm a programmer, not a professional systems administrator, so I would like to know whether you would regard this as a sane and secure setup for a (publicly available) box to host sensitive code and/or e-mail. Is there anything else I should do to make the machine secure? Is there anything I can do on a long-term basis to keep the machine secure, apart from monitoring the event log (as far as I can make sense of it) and seeing that any hotfixes are installed properly?

  • Why can't I update Adobe Creative Suite?

    - by Lynda
    I am attempting to check for updates using the Adobe Application Manager on Windows 8 and I get this error: "The update server is not responding. The server might be offline temporarily, or the Internet or firewall settings may be incorrect." I have seen this error before on Windows 7 and remembered that I needed to disable the Microsoft Virtual WiFi Miniport Adapter. I navigated to the correct place, but the Miniport Adapter was not there. After a bit of searching across several forums I saw a recommendation to change <logLevel>2</logLevel> to <logLevel>10</logLevel>. I attempted that, and again it did not work. I have also tried disabling my anti-virus protection and disabling Windows Firewall; neither worked. I am not sure what else to try or do at this point. Are there any other recommendations to solve this issue? Notes:

        - I am running Windows 8 Pro using the Creative Cloud version of CS6.
        - I check for updates by choosing Help > Updates from within any Adobe application.
        - This issue was not present in Windows 7 before I upgraded to Windows 8.
        - I originally posted this question on the Adobe Forums and received no help there.
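
    One more thing that may be worth ruling out (an assumption, not something from the original question): leftover entries in the Windows hosts file are a known cause of this exact error, since various tools and guides add lines there that block Adobe's update and activation hosts. A quick check from a command prompt:

        REM Look for leftover Adobe entries in the hosts file
        findstr /i adobe C:\Windows\System32\drivers\etc\hosts

    Any lines that match would need to be removed (editing the file requires administrator rights).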

  • Looking for a host-based network monitoring solution

    - by Ole Martin Handeland
    Hi all!

    Problem: My hosting company has a network usage graph for my dedicated server. It seems that one day earlier this month, my network usage suddenly spiked, with several hundred megabytes transferred (usually it's in the tens, not hundreds). It was probably me, but I just can't be sure who or what it was.

    Question: Does anyone know of a host-based solution for monitoring network usage that would tell me the client's IP address and the port/service they used?

    What I don't want: I'm guessing someone will suggest Nagios, Munin, Zabbix, Cacti, or MRTG. I've also looked at those, but a graph of network usage will not give me the answers I'm looking for. :-)

    Almost there: I've already looked at a lot of monitoring solutions, and I've tried ntop (http://www.ntop.org/), darkstat (http://unix4lyfe.org/darkstat/) and others. Darkstat just didn't give me the answers: although it lists a lot of statistics, and I could list the clients, it doesn't show the network usage for a particular period. Ntop is by far the best I've seen so far, but I think it mostly shows current network usage, not the historical part. I could run apt-get upgrade and download a whole bunch of software, yet not see it in the log afterwards.
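
    Given the actual requirement, attributing a past spike to a client address and port, one low-tech option is to keep a rolling packet capture of headers only and inspect it after the fact. A sketch with tcpdump (the interface name and sizes are assumptions):

        # Ring buffer: rotate at 100 MB, keep 20 files, overwrite the oldest;
        # -s 96 captures headers only, so the files stay small
        sudo tcpdump -i eth0 -s 96 -C 100 -W 20 -w /var/log/pcap/trace

        # Later, read a rotated file back (files get numeric suffixes)
        # and filter for whatever period or host is in question
        tcpdump -nr /var/log/pcap/trace00 | head

    This trades disk space for exactly the historical, per-connection view that the graphing tools lack.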

  • Notebook Operating System with extreme support cycles/security updates

    - by leto
    Hello there. After reading the announcements about Mac OS X "Lion" and Apple's political decisions, I've had enough. I'm a longtime Apple user, since 1992, and have always felt at home there, but I have been trying to switch to an alternative operating system for a year. I've also been working with Unix machines since 2001, so I'm looking at one of the free Unices or a Linux. Since I last looked at the desktop in 2002, *choke*, much has changed, it seems, so I'm lost once more in the war between desktop environments and software. To be honest, I don't care what its name is; I want to get my job done. Here's what I set as landmarks for an operating system/software to be considered:

        - Has to be at least four years old
        - Has to supply security updates for the current release for at least a year
        - Production-quality stability for the whole desktop environment (!)
        - No f****g commercial stuff that tends to supply me with a privacy-invading App Store or Cloud space

    So far I'm running a MacBook from 2007 (4 GB memory, 250 GB disk) and I need:

        - IMAPS for mail (using it since 1995)
        - A web browser
        - A shell
        - Keeping current with updates/upgrades with no more than 5 minutes spent entering commands (makes it hard for OpenBSD ;-) )
        - A desktop file manager would be nice, but is a bonus

    What can you suggest as an operating system? The one with the longest support cycles and the best chance to survive the next 10 years will win a new user, who will even send patches when needed :-) Greets

  • How to unblacklist an IP at Google?

    - by DJRayon
    I own a small business with two servers for web hosting. When setting up the primary server (CentOS 5.5 + WHM; the secondary is WHM DNS Only) I messed up the firewall, so hackers could send mail from my server. My primary IP is x.y.29.218. Anyway, I got blacklisted in several places. Those blacklistings have now been gone for a week or so, but Google still has my IP blacklisted, and I'm suffering serious damage because of it: many clients want to switch away from my hosting, etc. I've fixed the hole with the CSF firewall SMTP_BLOCK option and have also enabled the WHM SMTP Tweak. Currently, all I see in the Main > Email > View Mail Statistics (Errors section) in WHM is rows and rows of the following message:

        removed-the-email-address-for-security R=lookuphost T=remote_smtp: SMTP error
        from remote mail server after end of data: host aspmx.l.google.com [a.b.39.27]:
        550-5.7.1 [x.y.29.218 1] Our system has detected an unusual rate of
        550-5.7.1 unsolicited mail originating from your IP address. To protect our
        550-5.7.1 users from spam, mail sent from your IP address has been blocked.
        550-5.7.1 Please visit http://www.google.com/mail/help/bulk_mail.html to review
        550 5.7.1 our Bulk Email Senders Guidelines. h24si3868764fas.171

    What are my options? I have one IP free: how can I configure Exim to send mail from that IP? My brain is constantly blowing up because of this problem. If anyone has any knowledge of how to deal with this situation, please give me some help: any help, suggestions, etc. I've tried everything I know, and I still don't know much, because this is the first time I've dealt with real physical servers (I just started web hosting) rather than some pre-set-up VPS solution. Many, many thanks to whoever has time to offer some help.
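
    On the specific question of sending from the spare IP: Exim's remote_smtp transport has an "interface" option that fixes the local address used for outbound deliveries. A minimal sketch (the address is a placeholder; on a WHM/cPanel box this is better applied through WHM's Exim Configuration Editor than by hand-editing, since cPanel regenerates the config):

        remote_smtp:
          driver = smtp
          interface = 203.0.113.20

    Changing the sending IP only helps once the spam hole is genuinely closed, and the new IP's reverse DNS should match the HELO name, or Google will throttle it as well.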

  • What is a long-term strategy to deal with CPU fan dust in my home office?

    - by PaulG
    There are numerous discussions of CPU overheating and how it can sometimes be corrected by removing the dust from the CPU fan. I have read many of these, but I can't find anyone expressing a long-term strategy for dealing with the problem. There are some suggestions here, for example, about how often the inside of the computer should be dusted, but I find this generally unsatisfactory.

    As it stands, in my rather dusty house (heated by a wood stove, with no central air circulation), I need to vacuum out the CPU fan every 3 to 4 months. At high CPU load, this can make the difference between 65C and 100C. I'm tired of hauling out the vacuum every time I anticipate high CPU load. What steps can I take to deal with this systematically for the long term? Moving my high-CPU-load computing to the cloud is not a realistic option; neither is vacuuming my home office more than once a week! (Details: my computer sits on the floor in a Cooler Master HAF 922 case and uses an Intel CPU fan on an i7 chip.)

    EDIT: While submerging the motherboard in mineral oil would definitely solve the problem, it is a bit of an expensive solution.

  • Designing a software-based load balancer

    - by Kishore pandey
    Hello to all Server Fault users. I am new to this website but have constantly been using the mother site, Stack Overflow. To begin with, I would like to design a load balancer for the organization I am working for. As I am very new to the whole idea of load balancing and networks, I am finding it very difficult to start my project. I did a lot of research on existing load balancers and found some (HAProxy, nginx) that could solve my problems, but the point is, I am still in a dilemma about whether they can satisfy the following requirements:

        1. The client and server in my architecture are distributed.
        2. The load balancer should take care of the firewall.
        3. The LB server should balance the load among all servers present in the WWW cloud.
        4. The LB server should have some sort of configuration file with which it is possible to configure the servers.
        5. Heartbeat: it should be possible to check whether any server is down; if a server is down, requests should be passed to some other server.
        6. Various load-balancing algorithms for the incoming requests.
        7. Easy error handling.
        8. It should be fairly possible to prioritize the incoming requests.

    Is there any load-balancer solution already on the market that satisfies these requirements (see the sketch below)? If not, is there any base code available with which I could develop my own load balancer? If not, where should I start from scratch? I am practically new to everything. Any help from a load-balancer expert is very much appreciated. Thanx a ton in advance. Cheers and regards, Kishore
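
    For what it's worth, most of the numbered requirements map directly onto stock HAProxy features: health checks (5), pluggable balancing algorithms (6), and ACL-based prioritization and routing (8). A minimal sketch of an haproxy.cfg illustrating those three (server names, addresses, and the /api prefix are placeholders):

        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        frontend www
            bind *:80
            # Requirement 8: route priority traffic to a dedicated backend
            acl is_priority path_beg /api
            use_backend fast_pool if is_priority
            default_backend web_pool

        backend web_pool
            balance roundrobin           # requirement 6: balancing algorithm
            option httpchk GET /health   # requirement 5: heartbeat/health check
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check

        backend fast_pool
            balance leastconn
            server fast1 10.0.0.21:80 check

    The firewall requirement (2) is usually handled beside the balancer (iptables on the same host) rather than by it.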

  • Write permissions on uploaded files - PHP & Linux

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is at a hosting service). The development server I have just set up (I am a novice at servers) is Debian Lenny with Apache 2, PHP 5, and MySQL 5. The file transfer works correctly, but once the file has been written to the server it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that does not solve the problem. The only thing that works is to chmod 777 thefile.jpg after it has been transferred, which is not a workable solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
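
    The 600 permissions point at the FTP daemon's umask rather than at the PHP script: files created over FTP get the daemon's default mode, and a umask of 077 yields exactly 600. Assuming the dev box runs vsftpd (an assumption; proftpd and pure-ftpd have equivalent settings), the fix would look like:

        # /etc/vsftpd.conf -- let uploaded files come out world-readable (644)
        local_umask=022

        # then restart the daemon
        sudo /etc/init.d/vsftpd restart

    This explains why the hosting service "just worked": its FTP daemon was already configured with a permissive umask.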

  • SQL Server Remote Connections

    - by Barry
    Hi, I am at my wits' end trying to access a remote SQL Server 2008 R2 Express instance. Here is what I have tried:

        1. I enabled remote connections in the instance properties.
        2. I enabled SQL Server and Windows authentication mode and created an account to log in using SQL Server authentication.
        3. I started the SQL Server Browser service.
        4. I forwarded ports 1433 and 1434 on the router to the IP address of the machine hosting SQL Server.
        5. I turned off the firewalls on both the machine running the instance and the router.
        6. I used http://www.yougetsignal.com/tools/open-ports/ to check whether the two ports were open, and it says they are closed.

    I have the SQL Server Express instance running and the Browser running. I have configured it to allow remote connections, yet the tool tells me both ports are closed. I'm pretty confused at this stage. On the client machine I am trying to connect using the format machineip\SQLEXPRESS with SQL Server Management Studio Express. Thanks in advance.
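
    Two Express-specific details are worth checking here. Out of the box, SQL Server Express does not listen on TCP/IP at all, and once TCP/IP is enabled, a named instance like SQLEXPRESS listens on a dynamic port rather than 1433, so the forwarding in step 4 can be correct yet the port still shows as closed. A quick verification on the server:

        REM Is anything actually listening on 1433?
        netstat -an | findstr :1433

    If nothing is listening, open SQL Server Configuration Manager, enable TCP/IP under "SQL Server Network Configuration > Protocols for SQLEXPRESS", then in the TCP/IP properties clear the dynamic port and set TCP Port to 1433 for IPAll, and restart the service.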

  • nginx projects in subfolders

    - by Timothy
    I'm getting frustrated with my nginx configuration, so I'm asking for help in writing my config file to serve multiple projects from sub-directories under the same root. This isn't virtual hosting, as they all use the same host value. Perhaps an example will clarify my attempt:

        request 192.168.1.1/       should serve index.php from /var/www/public/
        request 192.168.1.1/wiki/  should serve index.php from /var/www/wiki/public/
        request 192.168.1.1/blog/  should serve index.php from /var/www/blog/public/

    These projects use PHP over FastCGI. My current configuration is very minimal:

        server {
            listen 80 default;
            server_name localhost;
            access_log /var/log/nginx/localhost.access.log;
            root /var/www;
            index index.php index.html;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I've tried various things with alias and rewrite but was not able to get things set correctly for FastCGI. It seems there should be a more elegant way than writing location blocks and duplicating root, index, SCRIPT_FILENAME, etc. Any pointers to get me headed in the right direction are appreciated.
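
    One pattern that avoids duplicating SCRIPT_FILENAME per project is to use alias with a nested PHP location and $request_filename, which resolves against the aliased path. A sketch for one of the sub-projects (untested here, and nginx has known quirks combining alias with regex locations, so it would need verifying):

        location /wiki/ {
            alias /var/www/wiki/public/;
            index index.php;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $request_filename;
                include fastcgi_params;
            }
        }

    Using $request_filename instead of a hard-coded root prefix is what lets the inner block be copied between projects unchanged.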

  • Dedicated virtual setup is slow with WordPress

    - by kovshenin
    Hey. I'm running a Fedora Linux server on the Amazon EC2 platform, and I'm pretty sure there's something wrong with my configuration, as it seems very slow. SSH sometimes takes over 30 seconds to connect. A WordPress-generated web page might take 5 seconds to load, or it might take 20, which is pretty awkward. MySQL queries all execute in less than a second, so I don't think that's the cause. I'm not really sure where the issue lies: a simple page written in PHP loads instantly, but a fresh WordPress installation starts lagging. The same setup works perfectly on grid hosting at MediaTemple, for instance, so I'm pretty sure I've missed something. Could you please direct me to the right tools and articles to help me work this out? Thanks so much! Fedora Core 8, PHP 5.2.6, MySQL 5.0.45, OpenSSH 4.7p1, OpenSSL 0.9.8b. PHP is configured as an Apache 2.2.9 module; all websites are based on virtual hosts. I have some ongoing PHP scripts running from time to time in the background via cron. Thanks.
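
    The slow SSH connects in particular are a classic symptom of the SSH daemon doing reverse-DNS lookups on connecting clients, which stalls when the resolver is slow; that is cheap to rule out before digging into the web stack. Assuming a standard sshd setup:

        # /etc/ssh/sshd_config
        UseDNS no

        # then reload sshd
        sudo /etc/init.d/sshd reload

    A slow resolver would also fit the WordPress symptom: unlike a plain PHP page, WordPress makes outbound HTTP requests (update and feed checks) during page generation, and those block when DNS or outbound connectivity is slow.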
