Search Results

Search found 3558 results on 143 pages for 'hosted'.


  • Exchange 2010 certificate errors

    - by Frederik Nielsen
    I have a problem with my newly set up Exchange environment for our hosted customers. First off, when configuring the Outlook client, it gives a certificate warning although the certificate has been bought and set up. I am using a setup like this:

        autodiscover.CUSTOMERDOMAIN.TLD CNAME autodiscover.exchange.COMPANYDOMAIN.TLD

    (COMPANYDOMAIN is our company that hosts the Exchange servers; CUSTOMERDOMAIN is the customer's domain.) Shouldn't that work? I know that Microsoft does something like that for Office 365, but I really don't think they buy a certificate for every customer, so I guess some redirection should be set up somehow. Any guidance?

    Next thing: when we accept that error and actually start Outlook, it states that the certificate is not valid for the RPC proxy server exchange.COMPANYDOMAIN.TLD. That domain is not right, as it is not included in the certificate; I would instead like it to be mail.exchange.COMPANYDOMAIN.TLD. I tried to run this script setting both internal and external URLs to be the same, with no luck. Any guidance on this one? I am running Exchange 2010 SP2, with CAS, HT and MBX split up on 3 different servers.
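
    A hedged sketch of the usual multi-tenant approach, not a verified fix (Exchange Management Shell assumed; every name below is a placeholder from the question): publish an SRV record in each customer zone instead of buying per-customer certificates, and tell Outlook to expect the name that is actually on the certificate.

        ; DNS in each customer zone; no per-customer certificate needed:
        _autodiscover._tcp.CUSTOMERDOMAIN.TLD. IN SRV 0 0 443 autodiscover.exchange.COMPANYDOMAIN.TLD.

        # Exchange Management Shell, on the CAS:
        Set-OutlookProvider EXPR -CertPrincipalName "msstd:mail.exchange.COMPANYDOMAIN.TLD"
        Set-OutlookAnywhere -Identity "CASSERVER\Rpc (Default Web Site)" -ExternalHostname "mail.exchange.COMPANYDOMAIN.TLD"

    Outlook falls back to the SRV lookup when the CNAME would produce a certificate mismatch, which is the usual way to avoid one certificate per customer; the second command addresses the RPC proxy name complaint, assuming Outlook Anywhere is enabled.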


  • Remove directory from URL IIS 7.5

    - by xalx
    I've tried to find a solution to this and found some guides out there, but none seem to work. I have the following URL: http://www.mysite.com/aboutus.html. However, some other sites link to my old hosted site and point to http://www.mysite.com/nw/aboutus.html. My issue here is trying to remove the 'nw' directory from the URLs. I have set up the following URL Rewrite in IIS, but it does not seem to do anything:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Redirect all to root folder" enabled="true" stopProcessing="true">
                  <match url="^nw$|^/nw/(.*)$" />
                  <conditions>
                  </conditions>
                  <action type="Redirect" url="nw/{R:1}" />
                </rule>
                <rule name="RewriteToFile">
                  <match url="^(?!nw/)(.*)" />
                  <conditions>
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                  </conditions>
                  <action type="Rewrite" url="/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Any insight would be appreciated.
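
    A hedged reading of why it does nothing, offered as a sketch rather than a verified fix: the url pattern in IIS URL Rewrite is matched against the path without a leading slash, so ^/nw/(.*)$ can never match, and the action sends the request back under nw/, which would loop even if it did. Keeping the same module, the first rule might instead look like this:

        <rule name="Redirect nw to root" enabled="true" stopProcessing="true">
          <match url="^nw(/(.*))?$" />
          <action type="Redirect" url="/{R:2}" redirectType="Permanent" />
        </rule>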


  • Painless deployment of a Django app (port from Drupal). Do I have to switch to a VPS?

    - by Monden
    I'm about to complete porting my Drupal-based community site to Django. My Drupal site has been hosted on shared hosting (Dreamhost) for the last 4 years, and stability and performance have been satisfactory. The site gets around 5k unique visitors with 70-80k page views a day. This will be my first deployment of a Django application, and I'm not comfortable with managing my own VPS. I use Ubuntu as a dev server, but I don't have experience with it in a production environment. I have an unrelated internal CRM app (Django) that I host with Webfaction; however, security and performance aren't an issue there, as it's only accessed by 5 people. Unfortunately, I don't have much time to learn and maintain a VPS at this moment. I would like to know if I can host a site with this much traffic in Webfaction's shared environment, and how performance would differ in comparison to Linode or Slicehost. Google App Engine isn't an option at the moment, as I'll be using my current PostgreSQL database.


  • Windows/IIS Hosting :: How much is too much?

    - by bsisupport
    I have 4 Windows 2003 servers running IIS 6. These servers host a bunch of unique web sites (in that they are all different in build/architecture/etc.). The code behind these sites ranges from straight HTML to classic ASP to the 1.1/2.0/3.x flavors of .NET. Some (most) of the sites use a SQL backend, which is hosted on one or two different servers, not the IIS servers themselves. There is no virtualization on these servers and no load balancing for these particular sites. The problem I'm running into is coming up with baseline metrics, basically a "baseline score", to know when a web server has reached its hosting limit. Today, some basic information about each server is used: how much bandwidth the server pumps out, hard drive space availability, and basic (very basic) RAM and CPU utilization (what it looks like at peak traffic times). I would be grateful if those of you who are 1000x smarter than I am could indulge me with your methods of managing IIS environments, whether performance monitoring specifics, "score" determination as I'm trying to do, or the obvious combination of both. Thanks in advance.
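
    Not an answer on the scoring question, just a minimal sketch of collecting a baseline with the built-in tools (the log name, path, and 15-second interval are arbitrary; the counters are the stock Windows 2003/IIS 6 ones):

        logman create counter IIS-Baseline -si 00:00:15 -f csv -o C:\PerfLogs\iis-baseline -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\Web Service(_Total)\Current Connections" "\Web Service(_Total)\Bytes Total/sec" "\ASP.NET\Requests Queued"
        logman start IIS-Baseline

    Collected over a few representative days per server, the same counter set gives the comparable per-box numbers a "score" would be built from.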


  • pptp server 2003 hands out gateway from nic not dhcp server

    - by Pete
    I have created a PPTP RRAS server for a handful of clients to connect to. I would like them to use the server's default gateway (.1) for internet access. They are able to connect successfully (and see the LAN), but connecting then cuts them off from the internet. I understand that all internet traffic would be routed through the PPTP server, but that's OK since I have enough pipe. The problem seems to be that the client's gateway shows up as their assigned RAS IP, and the client's assigned DNS settings are what is set on the server's NIC, not what I have specified in DHCP (which is the same server). The DHCP relay agent properties point to the NIC that DHCP is running on (192.168.100.163); .1 is the gateway in both the NIC hardware properties and DHCP. I have different secondary and third DNS entries in my NIC properties than what DHCP is configured for. The result is that I have a 10.10.1.x network that people cannot see if they uncheck the gateway option, but with it unchecked they are then unable to see our other hosted sites on the internet.
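
    One common workaround, sketched with the question's subnet and hedged because the right gateway argument depends on the client's routing table: leave "Use default gateway on remote network" unchecked so internet traffic stays local, and push a static route for the private subnet through the tunnel instead.

        :: on each client after dialing; RAS_ASSIGNED_IP is a placeholder for the
        :: address the client received from the PPTP server
        route add 10.10.1.0 MASK 255.255.255.0 RAS_ASSIGNED_IP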


  • Not able to access Silverlight.net and ONLY Silverlight.net - All other domains work!

    - by Sootah
    Alrighty folks, I have an extremely odd problem. I am able to surf the web fine with one odd (and really annoying at the moment) exception: Microsoft's Silverlight.net. Every other site that I go to works just fine. This is quite frustrating because I'm in the middle of programming a web app in Silverlight 4.0, and whenever I do a search for any code examples, tutorials, or whatnot, at least 50% of the results are hosted in the silverlight.net forums. The error message that I get is:

        Oops! Google Chrome could not find www.silverlight.net

    It doesn't work in my other browsers either (both IE and FireFox). What's odd is that while the error message would lead me to assume it's a DNS error, I can ping the URL just fine:

        C:\Users\The Doot>ping silverlight.net

        Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
        Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
        Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
        Reply from 206.72.125.201: bytes=32 time=106ms TTL=106

        Ping statistics for 206.72.125.201:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 103ms, Maximum = 110ms, Average = 106ms

    I've checked my HOSTS file, and there's nothing that refers to ANY Microsoft URL in there. What could be causing this!?? More importantly, how do I fix it? Just for kicks, I've even included the results of a traceroute here for your enjoyment. OS: Windows 7 Ultimate. Thanks in advance! -Sootah
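
    One detail hiding in the transcript: the browser fails on www.silverlight.net while the ping succeeds against silverlight.net, a different record. A hedged first test with stock tools (8.8.8.8 is simply Google's public resolver, used as an outside reference):

        ipconfig /flushdns
        nslookup www.silverlight.net
        nslookup www.silverlight.net 8.8.8.8

    If only the second lookup answers, the local resolver is dropping the www record, and pointing DNS elsewhere is the fix.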


  • Server side url scanner for malware, spyware , viruses and protect my visitors

    - by Vangel
    I have a forum/groups site that contains a lot of external URLs, sometimes direct download links. I want to protect my visitors from possible attacks from malware sites, as they are most likely to click on these links. Currently I implement the Spamhaus DBL, but that's not enough. I want to run a background task to check the outgoing links first. I have looked at similar questions on Stack Overflow (wrongly posted there) and here, but fail to find a question the same as mine or a good answer. People have suggested ClamAV; I don't believe it can detect web-hosted malware sites, and it has a lot of missed detections. I have looked at the Google Safe Browsing service (http://code.google.com/apis/safebrowsing/developers_guide_v2.html - very complicated to implement or maintain, plus midway I get lost :S). I can go for a commercial solution, anything to protect the visitors and my site brand, but I would like to hear the opinion of server admins and whether anyone has implemented such a service. My server is a basic CentOS LAMP stack. Thank you very much in advance.
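
    For what it's worth, the linked API's simpler sibling was the Lookup API; a sketch from its v2-era documentation, with the client name and key as placeholders (the checked URL must be percent-encoded):

        curl -s -w "%{http_code}\n" "https://sb-ssl.google.com/safebrowsing/api/lookup?client=myforum&apikey=YOUR_KEY&appver=1.0&pver=3.0&url=http%3A%2F%2Fexample.com%2F"
        # 204 = not on a list; 200 with "malware" or "phishing" in the body = listed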


  • Error in mysql "max_allowed_packet" VPS godaddy

    - by focusmantra
    I am getting the error "Serious session error detected. Please notify administrator, this problem is most probably caused by small value in max_allowed_packet MySQL setting." This error generally comes up every 20-25 minutes; when it does, it logs the user out and in again, and after some time the same issue occurs. I tried changing the max_allowed_packet setting but got the error "access denied; You need SUPER privilege for this operation". I even tried SET SESSION, but got "SESSION variable 'max_allowed_packet' is read-only. Use SET GLOBAL to assign the value". I have hosted the website on a GoDaddy VPS running CentOS and access it via PuTTY or cPanel. The website is built with Moodle 2.0.3, i.e. PHP. My developers used to fix this, but warned that the problem will return when the server restarts. The GoDaddy people say I should move to a dedicated server, which I would, but I can't at present as I don't have the money. I am trying to find out how the developers applied the temporary fix, the one that lasts until the server restarts.
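
    Given root on the VPS, a minimal sketch of both fixes (64M is an arbitrary example size): SET GLOBAL as the MySQL root user is exactly the kind of until-restart fix the question describes, while the config-file route survives restarts.

        # temporary, lost on mysqld restart (the likely "developer fix"):
        mysql -u root -p -e "SET GLOBAL max_allowed_packet = 67108864;"

        # permanent: add under the [mysqld] section of /etc/my.cnf ...
        #   max_allowed_packet = 64M
        # ... then restart:
        service mysqld restart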


  • While using an ntfs smb share for mac users, do symbolic links and extended attributes work?

    - by scape
    We have a majority of Mac users, but we'd rather support their file sharing using a Windows server with an NTFS drive, or at least a Linux server with ext3. We've had trouble, much trouble, with the OS X Server software, and after all these years we are looking to abandon it. What's mostly holding us back is the fact that the Mac users very often use symbolic links and other special features that exist on an HFS+ partition. The shared locations are mostly primary storage, not just archive storage. While there is an option to create symbolic links under NTFS, I'm curious whether there is anything I need to look out for if I were to move the files from the HFS+ partition over to a new partition hosted on a Windows server, and, in addition, how well creating a symbolic link from a Mac might work. I am also worried about Windows backup software ruining these special symlinks, and about how placing permissions on sub-folders will work. Alternatively, I could remotely back up the files using a Mac and Bru; nonetheless, I still want to get away from a Mac server for hosting the shares.
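
    The symlink and xattr questions can be answered empirically from one Mac before migrating, sketched here with a hypothetical mount point and file names:

        cd /Volumes/newshare
        ln -s target.txt link.txt                          # will the server store a client-made symlink?
        ls -l link.txt
        xattr -w com.example.note "hello" target.txt       # do extended attributes survive the share?
        xattr -l target.txt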


  • Apache + Tomcat: Which one should handle SSL? IP-based proxy forwarding?

    - by delirial
    We currently have a Tomcat application running with SSL on port 443. Right now we have an Apache server that accepts HTTP requests on port 80 and redirects to the Tomcat instance:

        <VirtualHost *:80>
            ServerName domain.com
            ServerAlias domain.com
            <LocationMatch "/">
                Redirect permanent / https://domain.com/
            </LocationMatch>
        </VirtualHost>

    Tomcat is handling SSL, because there's no proxy, just a simple redirect to the SSL port:

        <Connector port="443" maxThreads="200"
                   scheme="https" secure="true" SSLEnabled="true"
                   keystoreFile="/app/ssl/domain_com.jks" keystorePass="ourpassword"
                   clientAuth="false" sslProtocol="TLS"/>

    We want to begin using the Apache web server as a proxy and, additionally, do per-IP redirects to certain apps that should only be used by hosts in a pre-determined IP range. We would also like to redirect IPs that don't match the pre-determined list to a static HTML page hosted on the Apache server. My first question is: should I continue to handle SSL on Tomcat's end, or should I use Apache with SSL while forwarding to an "unprotected" Tomcat port? Is there any way to redirect to different apps (and potentially hosts) depending on the incoming IP? thanks, del
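
    A minimal sketch of the Apache-terminates-SSL layout (mod_ssl and mod_proxy_http assumed loaded; certificate paths, ports, and the trusted range are placeholders): Apache owns the certificate and the per-IP policy, and Tomcat listens on a plain HTTP connector.

        <VirtualHost *:443>
            ServerName domain.com
            SSLEngine on
            SSLCertificateFile    /app/ssl/domain_com.crt
            SSLCertificateKeyFile /app/ssl/domain_com.key

            # serve the fallback page locally instead of proxying it
            ProxyPass /static !

            # only the trusted range reaches this app; everyone else gets the static page
            <Location /restricted-app>
                Order deny,allow
                Deny from all
                Allow from 10.1.3.0/24
                ErrorDocument 403 /static/forbidden.html
            </Location>

            ProxyPass        / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
        </VirtualHost>

    One design note: terminating SSL in Apache keeps the IP rules, redirects, and static fallback in one place, at the cost of Tomcat seeing the proxy as the client unless it reads the X-Forwarded-For header mod_proxy adds.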


  • IP to IP forwarding with iptables [centos]

    - by FunkyChicken
    I have 2 servers: server 1 with IP 1.1.1.1 and server 2 with IP 2.2.2.2. My domain example.com points to 1.1.1.1 at the moment, but very soon I'm going to switch to IP 2.2.2.2. I have already set up a low TTL for example.com, but some people will still hit the old IP after I change the domain's address. Both machines run CentOS 5.8 with iptables, and nginx as a webserver. I want to forward all traffic that still hits server 1.1.1.1 to 2.2.2.2 so there won't be any downtime. Now, I found this tutorial, http://www.debuntu.org/how-to-redirecting-network-traffic-a-new-ip-using-iptables, but I cannot seem to get it working. I have enabled IP forwarding:

        echo "1" > /proc/sys/net/ipv4/ip_forward

    After that I ran these 2 commands:

        /sbin/iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE

    But when I load http://1.1.1.1 in my browser, I still get the pages hosted on 1.1.1.1 and not the content from 2.2.2.2. What am I doing wrong?
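
    A hedged correction rather than a tested answer: -s matches the packet's source address, so a rule with -s 1.1.1.1 never sees visitor traffic; PREROUTING should match the destination. Something like:

        /sbin/iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -d 2.2.2.2 -p tcp --dport 80 -j SNAT --to-source 1.1.1.1

    The SNAT makes replies from 2.2.2.2 travel back through server 1, which visitors still believe they are talking to.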


  • Google Apps, SPF, softfail problem (validates with validation tools, but still softfails otherwise)

    - by mq.chen
    Hi, I guess this is probably a commonly asked and boring question, but I'm really at a loss and I don't know what else to do. This might be a duplicate of other questions, but none of the solutions worked for me. I've Googled around and read just about anything I could find, but I'm still puzzled as to why it doesn't work. The gist of my problem is that I have set up Google Apps for a client of mine with the domain fintan.dk. Everything works just excellently, except that emails sent from *@fintan.dk (either with the Gmail web interface or a desktop client) to a non-Google Apps address get a softfail (I have sent to my university email, an email hosted at MediaTemple, and even Hotmail). The emails get a pass when sent to a Google Apps or Gmail address, though. (All emails from that domain are sent via email clients.) So this is what I have done so far: I've added the SPF record Google recommended (v=spf1 include:_spf.google.com ~all) and waited several days, hoping it was a DNS update delay problem. Now, three days later, there is no change. I have verified the settings in the desktop clients several times. I have validated the records with validation tools like the SPF Query Tool, [email protected] and [email protected]. All of them validate and give a pass, saying there shouldn't be a problem, but strangely there still is. So, I really don't know what else to do. Any help is very much appreciated. Thank you in advance!
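
    Two quick checks worth running (dig is stock BIND tooling; the domain is the one from the question): confirm that exactly one SPF-bearing TXT record is published, since duplicate or conflicting TXT records are a classic softfail cause, and read the Received-SPF header of a softfailed message to see which sending IP the receiver actually evaluated.

        dig +short TXT fintan.dk
        # expect a single line: "v=spf1 include:_spf.google.com ~all"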


  • Strange issue with 74.125.79.118

    - by Domenic
    I'm facing a strange issue on a Linux server. After frequent crashes, analysis found that the server is led to collapse by a huge number of connections to the IP 74.125.79.118, departing from PHP scripts of the hosted web sites. After an in-depth analysis of the files, I found that no malware infections are present. IP 74.125.79.118 is Google. I realized after a Google search that the connections to this IP are generated by YouTube videos embedded on the web sites, among other Google features like SafeSearch. But I don't understand how this type of behavior can lead to the collapse of the server, and the uniqueness of the situation leads me to think that it is far from being attributable only to Google and YouTube. Also, I've found that blocking connections from eth0 to 74.125.79.118:80 doesn't solve the issue, but if I stop DNS traffic from eth0 to the internet, the connections to 74.125.79.118 stop. I'm really confused about this. Any suggestions? Cheers.
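
    A hedged way to pin down which processes hold those connections, using tools stock on CentOS (the IP is the one from the question):

        netstat -tnp | grep 74.125.79.118
        lsof -nP -i @74.125.79.118

    The PID column points back at the specific Apache/PHP workers, which narrows the hunt to the vhosts they were serving at the time.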


  • Licensing SQL Server 2012 Reporting Services w/ SharePoint 2010

    - by Evan M.
    Here's my situation: I have 1 VM that is running SharePoint 2010 SP1. I have a different physical server running SQL Server 2008 R2 that hosts all the configuration and content databases for SharePoint. Now, we want to start providing BI capabilities to our users with SharePoint and SQL Server. With its new features, 2012 is the obvious way to go. To support this, I'm looking to build a new VM that will have SQL Server 2012 installed with Analysis Services and SSIS; this will be the platform that gets our data from our Oracle databases, puts it in a warehouse hosted by the SQL 2012 instance, and loads it into cubes. What's getting me about the platform is licensing for Reporting Services and PowerPivot. My plan was to install SSRS and PowerPivot on the current SharePoint server, but my understanding of the licensing is that instead of only the new SQL server being licensed, I'd have to license both the new server and the SharePoint server. Conversely, I could install SharePoint onto the SQL server and only have to get a second SharePoint license, but then I'd give up the separate application server and combine my data and application tiers. Is my licensing understanding correct, or can I have SSRS and PowerPivot installed separately without incurring additional licensing costs?


  • How do I change the default ftp folder in MacOS X 10.6?

    - by Wild_Eep
    I'm running WordPress 2.9.1 on a Mac running 10.6.3. WordPress is installed in the /Library/WebServer/Documents folder. WordPress has a feature called AutoUpdate: clicking an AutoUpdate button will download and install updated versions of the WordPress software or of third-party plugin tools. It's a convenient way to keep things up to date. WordPress uses FTP to download the files. I've enabled FTP, set up a user account, and opened the requisite ports in my firewall for FTP traffic. This doesn't seem to be enough for my self-hosted installation, though. I'm sure this feature was originally designed for someone who has access to a remote shared webserver, and that it's merely a configuration challenge related to the FTP setup. I feel that if I can adjust the initial directory that the FTP service presents to the AutoUpdate feature, everything else will work properly. So, my question is: how do I adjust what folder is presented when a given user connects to a Mac running 10.6.3 via FTP?
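
    One possible lever, offered as a sketch rather than a recommendation, under the assumption that 10.6's bundled ftpd starts sessions in the user's home directory: point the FTP user's home at the web root (the user name here is hypothetical).

        sudo dscl . -create /Users/wpftp NFSHomeDirectory /Library/WebServer/Documents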


  • Apache vhosts config: Host Name instead of IP Address

    - by Johe Green
    I have a domain (example.com) hosted at an external provider. I directed the subdomain sub.example.com to my Ubuntu server (12.04 with apache2). On my Ubuntu server I have a vhost setup like this; the rest of the config is basically the apache2 standard:

        <VirtualHost *:80>
            ServerName sub.example.com
            ServerAlias sub.example.com
            ServerAdmin [email protected]
            DocumentRoot /var/www/sub.example.com

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            WSGIScriptAlias / /home/application/sub.example.com/wsgi.py
            <Directory /home/application/sub.example.com>
                <Files wsgi.py>
                    Order allow,deny
                    Allow from all
                </Files>
            </Directory>
        </VirtualHost>

    When I enter http://sub.example.com in my browser, my application shows up fine, but the domain is replaced by the IP address of my server. Do I have to configure my server somewhere else to deliver all its content under my domain sub.example.com?
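
    A neutral first check with a stock tool: see whether the hostname is lost to an HTTP redirect, and which layer issues it.

        curl -I http://sub.example.com/
        # a 301/302 whose Location: header already carries the IP points at
        # the WSGI application (or another vhost), not at this config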


  • Linux: Send mail to external mail box from a server with that user's hostname

    - by dtbarne
    I've got sendmail running on a Linux box. Let's say the hostname of the box is bar.com. If I run the following command, I don't receive the email (which is hosted with a third party), presumably due to the hostname pointing to the local machine:

        echo "Test Body" | mail -s "Test Subject" [email protected]

    Is there any way to get this to work so that I can receive emails at my third-party email address even though it has the same hostname? Do I have to change the hostname of this server (not preferred)? It may be worth noting that I created a user "foo" on my machine and noticed that the mailbox for that account is empty. I noticed these log entries, which may or may not be relevant:

        Jun 28 01:09:48 bar sendmail[14338]: p5S59min014338: from=apache, size=80, class=0, nrcpts=1, msgid=<[email protected]>, relay=apache@localhost
        Jun 28 01:09:48 bar sendmail[14339]: p5S59mIA014339: from=<[email protected]>, size=293, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.$
        Jun 28 01:09:48 bar sendmail[14338]: p5S59min014338: [email protected], ctladdr=apache (48/48), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30080, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5S59mIA$
        Jun 28 01:09:48 bar sendmail[14340]: p5S59mIA014339: to=<[email protected]>, ctladdr=<[email protected]> (48/48), delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30495, dsn=2.0.0, stat=Sent
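
    The mailer=local in the last log line is the telltale: sendmail considers bar.com a local domain and delivers into the empty local mailbox. A sketch of the canonical fix, assuming the stock file layout:

        # stop treating bar.com as local, so mail for it is relayed to its MX:
        sed -i '/^bar\.com$/d' /etc/mail/local-host-names
        service sendmail restart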


  • Is it possible to use Google Docs Viewer to view files already in Google Docs?

    - by john2x
    The title is a little confusing, so I'll elaborate. As far as I can tell, the Google Docs Viewer tool accepts a link to a raw document file (e.g. .doc, .pdf, et al.) and renders its contents in the browser. For example, this URL to a PDF:

        http://research.google.com/archive/bigtable-osdi06.pdf

    when passed to Viewer, returns this link:

        http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf

    What I'm trying to achieve is to use the Viewer to view a document already hosted in Google Docs (i.e. no longer a raw document file). When passing a link to a Google Docs document to the Viewer, the result is not as expected: it renders the link's HTML source instead of the document's contents. The reason I want to do this is that I want to be able to use the "embed" feature of Viewer to view Google Docs documents. Does Google Docs have a "link to embeddable view" feature? P.S. Here is a sample snippet of an embedded document. This is what I want, but pointing to an existing Google Docs document.

        <iframe src="http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf&embedded=true"
                width="600" height="780" style="border: none;"></iframe>


  • How large is the performance loss for a 64-bit VirtualBox guest running on a 32-bit host?

    - by IllvilJa
    I have a 64-bit VirtualBox guest running Gentoo Linux (amd64), currently hosted on a 32-bit Gentoo laptop. I've noticed that the performance of the VM is very slow compared to the performance of the 32-bit host itself. Also, when I compare with another 32-bit Linux VM running on the same host, performance is significantly lower in the 64-bit VM. I know that running a 64-bit VM on a 32-bit host incurs some performance penalties for the VM, but does anyone have any deeper knowledge of how large a penalty one might expect in this scenario, roughly speaking? Is a 10% slowdown something to expect, or should it be a slowdown in the 90% range (running at 1/10 the normal speed)? Or to phrase it another way: would it be reasonable to expect the performance improvement for the 64-bit VM to be so large that it is worth reinstalling the host machine to run 64-bit Gentoo instead? I'm currently seriously considering that upgrade, but am curious about other people's experience of the current scenario. I am aware that the host OS will require more RAM when running 64-bit, but that's OK for me. Also, I do know that one usually doesn't run a 64-bit VM on a 32-bit server (I'm surprised I even got the VM started in the first place), but things turned out that way when I tried to future-proof the VM I was setting up and decided to make it 64-bit anyway.


  • Connecting SVN from Remote Server

    - by Ashish
    I have hosted my repository at Assembla and it works fine. Now I want to write a script that can automate the build process: (1) take the code from the Assembla repository, and (2) make a dump and copy it onto my web server. What I have researched on the net suggests using commands like

        svn co svn+ssh://[email protected]/home/svn/test

    I believe I need to open a shell on my server and type these commands, but shell access has been disabled by my server admin. I tried to run the same from PHP using exec; the admin has disabled that too. (I am using shared hosting and want to do an automated deployment using these simple steps; I don't want to bring my local system into this process.) Now I am not sure, even if I get shell access to my server, whether commands like svn will work there, as I don't have SVN installed on my server (it's installed on Assembla). Kindly let me know if any more explanation is required regarding the same, or if I am on the wrong track. I'm a newbie, so please be descriptive in answering :) Thanks in advance, Ace
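
    One point worth a sketch, hedged because it depends on what the host allows: the svn client must be installed on whichever machine runs the checkout; Assembla only hosts the repository itself. Given any box with shell or cron access and the client present, the deployment can be a single scheduled export (the URL and paths are the question's placeholders):

        # nightly at 2am: export (no .svn metadata) the latest code into the web root
        0 2 * * * svn export --force svn+ssh://[email protected]/home/svn/test /var/www/site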


  • How to fix GMail time stamps in Outlook?

    - by SWB
    One of my email accounts is hosted at an ISP with unreliable IMAP support, and I can't change it. Fortunately, I have my personal email set up on Google Apps for Domains, so I created another GMail account there and turned on GMail's features that allow me to send and receive mail through the ISP account using GMail ("Send mail as" and "Get mail from other accounts" in GMail settings on the Accounts tab). I'm now using Outlook to retrieve mail from the GMail account through IMAP, which in turn is retrieving mail from the ISP account through POP3. This basically works great, except for one very significant issue: prior to setting this up, I already had several months of mail in the ISP account that I had been accessing via IMAP. GMail grabbed all of this mail via POP3 at, let's say, noon on April 5. In GMail's web interface (and on my iPod touch, and in Mozilla Thunderbird), all is well: the messages are all shown with their original time stamps. But when Outlook downloads these messages from GMail via IMAP, the time stamps are all set to noon on April 5 (the time GMail downloaded them from the ISP via POP3). That's not good, especially since we're talking about hundreds of messages here over a time span of several months. How can I fix this and get Outlook to display the original time stamps?


  • Proper configuration for Windows SMTP Virtual Server to only send email from localhost, and tracking down source of spam emails

    - by ilasno
    We manage a server that is hosted on Amazon EC2, which has web applications that need to be able to send outgoing email. Recently we received a notice from Amazon about possible email abuse on that server, so I've been looking into it. It's Windows Server Datacenter (2003, I guess), and uses the SMTP Virtual Server (you know, the one that requires IIS 6 for admin). The settings on the Access tab are as follows:

        Authentication: Anonymous
        Connection: Only from 3 IP addresses (127.0.0.1 and 2 others that refer to that server)
        Relay: Only from 3 IP addresses (127.0.0.1 and 2 others that refer to that server)

    In the SMTP logs there are many entries like the following:

        2012-02-08 23:43:56 64.76.125.151 OutboundConnectionCommand SMTPSVC1 FROM: 0 0 4 0 26364 SMTP - - - -
        2012-02-08 23:43:56 64.76.125.151 OutboundConnectionResponse SMTPSVC1 250+ok 0 0 6 0 26536 SMTP - - - -
        2012-02-08 23:43:56 64.76.125.151 OutboundConnectionCommand SMTPSVC1 TO: 0 0 4 0 26536 SMTP - - - -
        2012-02-08 23:43:56 64.76.125.151 OutboundConnectionResponse SMTPSVC1 250+ok 0 0 6 0 26707 SMTP - - - -

    ([email protected] is sending quite a lot of emails :-/) Can anyone confirm whether the SMTP server settings seem correct? I'm also wondering if a web application on the machine could be exposing a contact form or something that would allow this sort of abuse; I'm looking into that (and how to look into that) further.
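
    A hedged relay test, run by hand from a non-whitelisted host (telnet only; the server placeholder and both probe addresses are made up):

        telnet SERVER-IP 25
        HELO probe.test
        MAIL FROM:<probe@outside-a.test>
        RCPT TO:<probe@outside-b.test>

    The answer you want on that last command is "550 5.7.1 Unable to relay"; a 250 there means the box relays for anyone despite the Access-tab settings. If relay is genuinely closed, the next suspect is the one named above: a web form or script on the box itself submitting through the allowed local addresses.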


  • DirectAdmin Centos4 server has virus

    - by Rogier21
    Hello all, I have a problem with a webserver that runs CentOS 4 with DirectAdmin. For a few weeks, some websites hosted on it have not been redirecting properly from search engines; they are redirected to some malware site, resulting in a ban from Google. Now I have used 3 virus scanners:

        ClamAV: didn't find anything.
        Bitdefender: found 2-3 files with a JS infection; deleted them.
        AVG: finds lots of files, but doesn't have the option to clean! The viruses it finds are JS/Redir and JS/Dropper.

    Still, the strange thing is: website a (www.aa.com) does not have any infected files (I have gone through all the files manually; it's a custom PHP app, nothing special) but still shows the same infection. Website b (www.bb.com) was the only one with the infected files. I deleted all of those files and suspended the account, but no luck; still the same behavior. I do get the log entries on the website from the search engines, so the DNS entries have not been changed. I have now gone through the httpd files but cannot find anything. Where can I start looking for this?
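
    Given that the redirect fires only for search-engine visitors, a hedged place to look is conditional rewrites and injected PHP rather than binaries; referer-conditioned RewriteRules in .htaccess are the classic mechanism behind "only Google traffic gets redirected". The paths below follow the usual DirectAdmin layout and may need adjusting:

        grep -rn "HTTP_REFERER" /home/*/domains/*/public_html --include=".htaccess"
        grep -rln "base64_decode" /home/*/domains/*/public_html --include="*.php"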


  • IPTABLES syntax help to forward Remote Desktop requests to a VM [CentOS host]

    - by NVRAM
    I've a VM running MS Windows XP hosted on my CentOS 5.4 machine. I can rdesktop into it from the hosting machine and work just fine using the private address (192.168.122.65), but I now need to allow Remote Desktop access from other computers (not just the machine hosting the VM). [Edit] I only need to allow access for a day or so, so I don't want to add a NIC (for XP activation reasons). Could someone help me with the iptables syntax? The VM is on a private/virtual network, 192.168.122.65, and my CentOS machine is on a physical network, at 10.1.3.38 (with 192.168.122.1 as the gateway for the virtual net). I found this question, but none of the answers seemed to work, and I'm a bit timid at blindly trying variations. My FORWARD rules are as listed below. Thanks in advance.

        # iptables -L FORWARD
        Chain FORWARD (policy ACCEPT)
        target               prot opt source            destination
        ACCEPT               all  --  anywhere          192.168.122.0/24  state RELATED,ESTABLISHED
        ACCEPT               all  --  192.168.122.0/24  anywhere
        ACCEPT               all  --  anywhere          anywhere
        REJECT               all  --  anywhere          anywhere          reject-with icmp-port-unreachable
        REJECT               all  --  anywhere          anywhere          reject-with icmp-port-unreachable
        RH-Firewall-1-INPUT  all  --  anywhere          anywhere

    [Edit] If I do play "blindly", is there a simple way to reset the settings on CentOS (a la service network restart)?
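
    A minimal sketch with the addresses from the question (it assumes IP forwarding is already on, which libvirt setups normally enable): DNAT the host's RDP port to the guest, and insert the FORWARD accept above the REJECT rules with -I.

        /sbin/iptables -t nat -A PREROUTING -d 10.1.3.38 -p tcp --dport 3389 -j DNAT --to-destination 192.168.122.65:3389
        /sbin/iptables -I FORWARD -d 192.168.122.65 -p tcp --dport 3389 -j ACCEPT

    On the reset question: service iptables restart reloads the last ruleset saved in /etc/sysconfig/iptables, so experiments are reversible as long as service iptables save is never run on the experimental state.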


  • High availability virtual machines

    - by Jeremy
    I've been reading a lot about high-availability virtualization, via either Hyper-V or VMware. In that context, high availability essentially means that the VM is hosted by a cluster of physical servers (nodes), so if one of the physical servers goes down, the VM can still be served by the other physical servers. So far so good: the physical cluster and the VM itself are highly available. However, the service being provided, let's say SQL Server, MSDTC, or any other service, is actually provided by the VM image and the virtualized operating system. So I imagine that there is still a point of failure at the virtual layer that isn't accounted for. Something could happen within the virtual machine itself that the physical cluster cannot account for, correct? In that instance the physical failover cluster (Hyper-V) or VMware host cannot fail over, because the issue is not with one of the servers in the physical cluster; failing over a physical node would not do any good. Does this necessitate building a virtual failover cluster on top of the physical one, or is this not necessary? Alternatively, I suppose you could skip the physical clustering and just cluster at the virtual layer (child-based failover clustering), because that should still survive a physical failure. See the image below showing parent-based (left), child-based (right), and a combination (center). Is parent-based as far as you need to go, or is child-based more appropriate?

