Search Results

Search found 12497 results on 500 pages for 'linked servers'.

  • Why would a PCI scan fail because of components that are not even installed?

    - by Brandon
    Recently a PCI scan was run against a web server and the result was a failure. Some of the issues could be fixed, but others simply make no sense to me. The machine was a clean install; there are only two things running: the .NET 3.5 website and the dotDefender web application firewall. Yet there are several errors similar to:

        Web server vulnerability
        Impact: /servlet/SessionServlet: JRun or Netware WebSphere default servlet found. All default code should be removed from servers.
        Risk Factor: Medium / CVSS2 Base Score: 6.4
        CVE: CVE-2000-0539

    I'm not sure what this is, but I can't find anything on the server that looks anything like it.

        Web server vulnerability
        Impact: /some.php?=PHPE9568F35-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests that contain specific QUERY strings.
        Risk Factor: Medium / CVSS2 Base Score: 5.0

    PHP is not installed. Adding that query string to any page does nothing, because the application ignores it, and doing that PHP version check results in a 404. Similarly, there are dozens of errors related to JSP and Oracle, which are also not installed.

        Web server vulnerability
        Impact: /admin/database/wwForum.mdb: Web Wiz Forums pre 7.5 is vulnerable to Cross-Site Scripting attacks. Default login/pass is Administrator/letmein
        Risk Factor: Medium / CVSS2 Base Score: 4.0

    There are several errors like this, telling me that Web Wiz Forums, Alan Ward A-Cart 2.0, IlohaMail, etc. are all vulnerable. None of these is installed or referenced anywhere I can find. There are even references to pages that simply don't exist, like OpenAutoClassifieds.

    Can anyone point me in the right direction as to why these errors are showing up, or where I might look to find these components if they are in fact installed?

    Note: this website and server are for a subdomain of the main website. The main website runs on a server running Apache/PHP, but I don't have access to that server. The report says the subdomain was the site being scanned, but is it possible for it to have scanned the main site as well?
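
    One quick way to sanity-check findings like these is to request the flagged paths yourself and compare status codes; this is a hedged diagnostic sketch, with the hostname standing in for the scanned subdomain:

        # Expect 404 if the flagged components really aren't there
        curl -sk -o /dev/null -w "%{http_code}\n" "https://sub.example.com/servlet/SessionServlet"
        curl -sk -o /dev/null -w "%{http_code}\n" "https://sub.example.com/admin/database/wwForum.mdb"

    If the same paths return 200 when pointed at the main site's address, the scanner most likely wandered onto the Apache/PHP server rather than staying on the subdomain.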

  • CentOS 5.8 dig is not resolving ip-address

    - by travisbotello
    I'm running CentOS 5.8 on a local machine at home. Today I was trying to analyze the DNS lookup via dig:

        $ dig +trace -t A www.heise.de.

    This gives me a response like this:

        de.  172800  IN  NS  f.nic.de.
        de.  172800  IN  NS  z.nic.de.
        de.  172800  IN  NS  s.de.net.
        de.  172800  IN  NS  n.de.net.
        de.  172800  IN  NS  a.nic.de.
        de.  172800  IN  NS  l.de.net.
        ;; Received 344 bytes from 192.58.128.30#53(192.58.128.30) in 49 ms

    In contrast, my dedicated CentOS machine returns the following:

        de.  172800  IN  NS  a.nic.de.
        de.  172800  IN  NS  n.de.net.
        de.  172800  IN  NS  f.nic.de.
        de.  172800  IN  NS  z.nic.de.
        de.  172800  IN  NS  l.de.net.
        de.  172800  IN  NS  s.de.net.
        ;; Received 344 bytes from 192.58.128.30#53(j.root-servers.net) in 32 ms

    As you can see, the last line is different. Any idea why my dedicated machine gives me the host name of the responding DNS server while my local machine only returns the IP address? Thanks in advance.

    UPDATE: The reverse DNS lookup works without any problems. Also, I just checked this on my local Mac and exactly the same problem occurs. Is it possible that this has to do with the local router/modem/ISP?
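
    For what it's worth, dig can only print the responder's name if its own reverse (PTR) lookup of that address succeeds, so a hedged first check is to run that lookup by hand on both machines (192.58.128.30 is the root server quoted above):

        $ dig -x 192.58.128.30 +short
        j.root-servers.net.

    If the home machine's resolver (often the router's DNS proxy) fails or filters that PTR query, dig falls back to printing the bare IP, which would explain the difference.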

  • Strange traceroute to msdn.microsoft.com

    - by Jasper
    The problem is that I can't view any msdn.microsoft.com/* site, or the main site itself, on my Ubuntu box in the Google Chrome browser. The error is:

        Error 101 (net::ERR_CONNECTION_RESET): The connection was reset.

    When I run traceroute I get different results. Here is a simple one:

        traceroute msdn.microsoft.com
        traceroute to msdn.microsoft.com (65.55.11.235), 30 hops max, 60 byte packets
         1  10.0.0.138 (10.0.0.138)  0.121 ms  0.131 ms  0.128 ms
         2  192.168.0.1 (192.168.0.1)  1.730 ms  1.724 ms  2.024 ms
         3  bzq-179-37-1.static.bezeqint.net (212.179.37.1)  18.314 ms  19.277 ms  20.694 ms
         4  bzq-218-227-250.red.bezeqint.net (81.218.227.250)  22.806 ms  23.651 ms  24.820 ms
         5  bzq-179-75-198.static.bezeqint.net (212.179.75.198)  26.650 ms  27.533 ms  28.791 ms
         6  * * *
         7  bzq-179-124-122.static.bezeqint.net (212.179.124.122)  76.032 ms  72.968 ms  74.660 ms
         8  igblmdistc7504.uk.msft.net (195.66.224.140)  75.708 ms  76.797 ms  78.257 ms
         9  ge-5-1-0-0.lts-64cb-1a.ntwk.msn.net (207.46.42.227)  80.125 ms  81.336 ms  82.671 ms
        10  ge-7-0-0-0.nyc-64cb-1a.ntwk.msn.net (207.46.47.20)  179.232 ms  so-7-1-0-0.ash-64cb-1b.ntwk.msn.net (213.199.144.158)  162.508 ms  163.223 ms
        11  xe-0-0-1-0.co1-96c-1b.ntwk.msn.net (207.46.45.29)  227.964 ms  ge-7-0-0-0.co1-64c-1b.ntwk.msn.net (207.46.40.90)  228.226 ms  xe-0-0-1-0.co1-96c-1b.ntwk.msn.net (207.46.45.29)  212.781 ms
        12  10.22.8.54 (10.22.8.54)  215.046 ms  xe-5-2-0-0.co1-96c-1a.ntwk.msn.net (207.46.40.167)  214.825 ms  10.22.8.58 (10.22.8.58)  213.251 ms
        13  10.22.8.62 (10.22.8.62)  212.745 ms  213.827 ms  10.22.8.50 (10.22.8.50)  215.655 ms
        14  10.22.8.62 (10.22.8.62)  211.665 ms !X  10.22.8.50 (10.22.8.50)  214.491 ms !X  10.22.8.54 (10.22.8.54)  218.471 ms !X

    Lines 1-2: that's me. Lines 3-7: my Internet provider. Line 8 and on: I think I hit MS servers. But what on earth are lines 12-14, with those 10.22.8.x addresses?

    Then I ran this traceroute:

        sudo traceroute -T msdn.microsoft.com
        traceroute to msdn.microsoft.com (65.55.11.235), 30 hops max, 60 byte packets
         1  10.0.0.138 (10.0.0.138)  0.109 ms  0.127 ms *
         2  * * *
         3  * * *
         4  * * *
         5  * * *
         6  * 65.55.11.235 (65.55.11.235)  16.019 ms  17.364 ms

    So I hit the MSDN web site already at hop 6! This is host -a msdn.microsoft.com from my machine:

        host -a msdn.microsoft.com
        Trying "msdn.microsoft.com"
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19522
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;msdn.microsoft.com.            IN      ANY

        ;; ANSWER SECTION:
        msdn.microsoft.com.        3274  IN  CNAME  msdn.microsoft.akadns.net.
        msdn.microsoft.akadns.net.  600  IN  A      65.55.11.235

        Received 91 bytes from 127.0.0.1#53 in 108 ms

    Could someone help me understand and fix this?
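
    A note on reading the traces: the 10.22.8.x hops are RFC 1918 private addresses, so they are presumably internal Microsoft infrastructure, and !X marks an ICMP "communication administratively prohibited" reply, i.e. a filter rather than a fault. The default trace uses UDP probes, which that filter rejects, while -T sends TCP SYNs that the first TCP-terminating device answers; that is why the second trace "arrives" at hop 6. A hedged follow-up is to compare TCP traces against both web ports to see whether the resets are port-specific:

        sudo traceroute -T -p 80 msdn.microsoft.com
        sudo traceroute -T -p 443 msdn.microsoft.com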

  • Baseline / Benchmark Physical and virtual server performance

    - by EyeonTech
    I am setting up a new server and there are some options. I want to perform some benchmarks, and I need your help in determining the best tools and, if possible, pre-configured benchmarks designed for SQL servers on Windows Server 2008/2012.

    Step 1: Run a performance monitor on the current live SQL server (a Windows Server 2008 virtual machine running on ESXi).

    New server hardware rundown:

        Intel Server System R1304BTLSHBN - 1U Rack, LGA1155 (http://ark.intel.com/products/53559/Intel-Server-System-R1304BTLSHBN)
        Intel Xeon E3-1270V2
        2x Intel SSD 330 Series 240GB 2.5in SATA 6Gb/s 25nm
        1x WD 2TB WD2002FAEX 2TB 64M SATA3 Caviar Black
        4x 8GB 1333MHz DDR3 ECC CL9 DIMM

    There are several options for configurations, and I want to benchmark some of them and share the results.

    Option 1: Configure the 2x SSDs as RAID 0. Install Windows Server 2008 directly to the 2TB WD Caviar HDD. Store database files on the RAID 0 volume. Benchmark the OS directly on the hardware as an SQL server. Store SQL backup databases on the 2TB WD Caviar HDD.

    Option 2: Configure the 2x SSDs as RAID 0. Install Windows Server 2012 directly to the 2TB WD Caviar HDD. Install Hyper-V. Install the SQL server (Server 2008) as a virtual machine. Store the virtual hard disks on the SSDs.

    Option 3: Configure the 2x SSDs as RAID 0. Install VMware ESXi on a partition of the 2TB WD Caviar HDD. Install the SQL server (Server 2008) as a virtual machine. Store the virtual hard disks on the SSDs.

    I have a few tools in mind from http://technet.microsoft.com/en-us/library/cc768530(v=bts.10).aspx. Any tools with pre-configured tests would be fantastic; specifically, pre-configured perfmon sets would be ideal. Any opinions on the setup that would gain the best results are welcome. Thanks in advance.
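
    For the perfmon side, a minimal scripted counter set can be created from the command line; the counter names below are standard, but treat the exact list as a starting sketch rather than a tuned SQL baseline:

        rem Create and start a disk/CPU/memory baseline collector
        logman create counter SQLBaseline -c "\Processor(_Total)\% Processor Time" "\PhysicalDisk(*)\Avg. Disk sec/Read" "\PhysicalDisk(*)\Avg. Disk sec/Write" "\Memory\Available MBytes" -si 15 -o C:\PerfLogs\SQLBaseline
        logman start SQLBaseline

    Running the same collector against the old VM and each new configuration gives directly comparable logs.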

  • Mercurial not receiving push

    - by Jeffrey04
    I have a Mercurial web frontend (hgwebdir.cgi) installed on a server, with nginx installed in front of it as a reverse proxy, as my friend suggested. However, whenever a large changeset is pushed (via a script), it fails. I found an issue ticket at google-code that describes a similar problem, and there is a solution that says (#39):

        So the server side answer is: don't send the 401 back early. Be as slow/dumb as 'hg serve' and make the hg client send the bundle twice.

    How do I do that? My current nginx config:

        location /repo/testdomain.com {
            rewrite ^(.*) http://bpj.kkr.gov.my$1/hgwebdir.cgi;
        }
        location /repo/testdomain.com/ {
            rewrite ^(.*) http://bpj.kkr.gov.my$1hgwebdir.cgi;
        }
        location /repo/testdomain.com/hgwebdir.cgi {
            proxy_pass http://localhost:81/repo/testdomain.com/hgwebdir.cgi;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering on;
            client_max_body_size 4096M;
            proxy_read_timeout 30000;
            proxy_send_timeout 30000;
        }

    From the access log we keep seeing 408 entries:

        incoming.ip.address - - [18/Nov/2009:08:29:31 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"
        incoming.ip.address - - [18/Nov/2009:08:37:14 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"

    Is there anything else I can do on the server? Solving it on the server side is preferable.

    Further findings: Bitbucket seems to have this solved on the server side (check the liquidhg Bitbucket project and its Diagnosis wiki page), though I can't find the config anywhere:

        What happens next varies depending on your server. Some servers refuse the BODY, simply closing the pipe from the client and causing Mercurial to fail. Some, like Apache (at least the way I configure it, and that could be part of the problem) and nginx (the way BitBucket.org configures it), accept the BODY, though it may take a few retries. Bottom line: if Mercurial doesn't fail the push, it sends the changeset data at least once to a server that has already told it it lacks credentials (more on this at Blame). Assuming Mercurial is still running, it resends the "unbundle" request and data, this time with authentication. Finally, Apache accepts the data successfully. Nginx, OTOH, at least under BitBucket's configuration, seems to reassemble the previous body (the one that lacked authentication) and somehow keep Mercurial from re-sending the whole body.
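
    The 408s in the access log are nginx request-timeout responses, so a hedged first experiment (the directives are standard nginx; the values are guesses to be tuned) is simply to give the client more time to deliver the resent bundle body:

        location /repo/testdomain.com/hgwebdir.cgi {
            proxy_pass http://localhost:81/repo/testdomain.com/hgwebdir.cgi;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 4096M;
            client_body_timeout 600;    # more time for large unbundle bodies
            proxy_read_timeout 30000;
            proxy_send_timeout 30000;
        }

    This does not reproduce whatever body-replay trick BitBucket's nginx does, but it rules out the plain timeout case before digging further.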

  • Best way to handle PHP sessions across Apache vhost wildcard domains

    - by joshholat
    I'm currently running a site that allows users to use custom domains (i.e. instead of mysite.com/myaccount, they can have myaccount.com). They just change the A record of their domain, and we then use a wildcard vhost in Apache to catch the requests from the custom domains. The setup is basically as seen below: the first vhost catches the mysite.com/myaccount requests, and the second is used for myaccount.com. As you can see, they have the exact same path and PHP cookie_domain.

    I've noticed some weird behavior surrounding the line below "# The line below me". When it is active, the custom domains get a new session_id on every page load (one that isn't the same as the non-custom-domain session). However, when I comment that line out, the user keeps the same session_id on each page load, but that session_id is not the same as the one they'd see on the non-custom-domain site either, despite everything being on the same server.

    There is a sort of "hack" workaround involving redirecting the user to mysite.com/myaccount, getting the session ID, redirecting back to myaccount.com, and then using that ID on myaccount.com. But that can get kind of messy (i.e. if the user logs out of mysite.com/myaccount, how does myaccount.com know?).

    For what it's worth, I'm using a database to manage the sessions (so there would be no issues with being on different servers, etc., but that's irrelevant since we currently use only one server to handle all requests anyway). I'm fairly certain this is related to some sort of CSRF browser protection thing, but shouldn't it be smart enough to know it's on the same server?

    Note: these aren't subdomains; they're separate domains entirely (but on the same server).

        <VirtualHost *:80>
            DocumentRoot "/opt/local/www/mysite.com"
            ServerName mysite.local
            ErrorLog "/opt/local/apache2/logs/mysite.com-error.log"
            CustomLog "/opt/local/apache2/logs/mysite.com-access.log" common
            <Directory "/opt/local/www/mysite.com">
                AllowOverride All
                #php_value session.save_path "/opt/local/www/mysite.com/sessions"
                php_value session.cookie_domain "mysite.local"
                php_value auto_prepend_file "/opt/local/www/mysite.com/core.php"
            </Directory>
        </VirtualHost>

        # Wildcard (custom domain) vhost
        <VirtualHost *:80>
            DocumentRoot "/opt/local/www/mysite.com"
            ServerName default
            ServerAlias *
            ErrorLog "/opt/local/apache2/logs/mysite.com-error.log"
            CustomLog "/opt/local/apache2/logs/mysite.com-access.log" common
            <Directory "/opt/local/www/mysite.com">
                AllowOverride All
                #php_value session.save_path "/opt/local/www/mysite.com/sessions"
                # The line below me
                php_value session.cookie_domain "mysite.local"
                php_value auto_prepend_file "/opt/local/www/mysite.com/core.php"
            </Directory>
        </VirtualHost>
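
    A quick way to see exactly which cookie domain each vhost hands out (and therefore why the custom domain keeps getting a fresh session) is to compare the Set-Cookie headers directly; the hostnames below are the local test names from the configs, with "myaccount.local" standing in for a custom domain:

        curl -sI http://mysite.local/myaccount | grep -i set-cookie
        curl -sI http://myaccount.local/       | grep -i set-cookie

    With session.cookie_domain forced to "mysite.local" in both vhosts, a browser visiting myaccount.local will refuse the cookie because the domain doesn't match the request host, which matches the new-session-per-request symptom.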

  • Windows Server 2008 Migration - Did I miss something?

    - by DevNULL
    I'm running into a few complications in my migration process. My main role has been Linux / Sun administration for 15 years, so the Windows Server 2008 environment is a bit new to me, but understandable. Here's our situation and reason for migrating: we have a group of developers who develop VERY low-level software in Visual C with some inline assembler. All the workstations were separate from each other, which caused consistency problems with development libraries, versions, etc. Our goal was to put them all on a Windows domain where we can control workstation installations, hot fixes (which can cause enormous problems), software versions, and so on. All development workstations are running Windows XP x32 (SP3) and x64 (SP2).

    I'm running into user permission problems, and I was wondering whether I missed one, two, or a handful of things during my deployment. Here is what I have done so far:

        Installed and activated Windows Server 2008
        Added roles for DNS and Active Directory
        Configured DNS with WINS for NetBIOS name usage
        Added developers to AD and mapped their shared folders to their profiles
        Added roles for IIS7 and configured the developers' SVN
        Installed MySQL Enterprise Edition for development usage

    Not having a firm understanding of Group Policy, I haven't delved deeply into that realm yet. Problems and questions I'm encountering:

    1. When I configure any XP workstation to log on to our domain, once a user uses their new AD login, everything goes well, except they have very restrictive permissions. (E.g., if a user opens any existing file, they don't have write access, except in their Documents folder.) Since these guys are working on low system-level events, they need r/w access to all files. All I'm looking to restrict is software installations.

    2. Am I correct to assume that I can use WSUS to maintain the domain's hot fixes and updates pushed to the workstations?

    3. I need to map a centralized shared development drive upon the user's login. This is open to EVERYONE. Right now I have the users' folders mapped upon login through their AD profile. But how do I map a share if I've already defined one within their profile in AD? (See the sketch after this list.)

    4. Do I have to configure and define a group policy for the domain users?

    5. Can I use volume mirroring to mirror/sync two drives on two separate servers, or should I just script an rsync or MS Synctool? The drives simply store nightly system images.

    I'd be very grateful for any responses.
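
    A minimal sketch for the shared-drive mapping from point 3, done with a logon script (assigned on the user's Profile tab or via Group Policy) rather than through the AD profile's home-folder field; the server and share names are placeholders:

        rem logon.bat - map the common development share for all domain users
        net use S: \\fileserver\devshare /persistent:no

    Keeping the shared mapping in a script leaves the per-user home-folder mapping in AD untouched, so the two don't conflict.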

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    OK, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me too. But it only timed out in Google Chrome and IE, not in Firefox.

    Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats.

    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus would hit every one of the machines to which I have access in exactly the same manner.

    My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages. Firefox 2, Firefox 3: no timeouts, and Firebug shows no errors or even lengthy periods serving the pages.

    I've spent two days searching for any tech knowledge I can find, and my search parameters are all too general; everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?
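
    Since the hangs only appear with SSL in the mix, one browser-independent check worth running from an affected client is a raw handshake against the site (hostname as in the example above):

        openssl s_client -connect www.mysite.com:443 -state

    If the handshake states complete cleanly every time, the stall is more likely in how IIS6 hands the POST body to PHP over SSL (e.g. renegotiation or buffering behaviour) than in the certificate layer itself.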

  • Subversion/Hudson/Sonar/Artifactory - too much for my little server to handle! Help!

    - by Ricket
    I have a little dedicated server. It's at a cheap price and has a simple AMD 1800+ (1.5 GHz), 256 MB DDR RAM... need I continue? And I think I'm overloading it already. It's running CentOS 5.4, and I have installed the following:

        Webmin
        Apache
        MySQL
        Subversion as an Apache module
        Hudson (standalone)
        Sonar (standalone, runs with a standalone Jetty install)
        Artifactory (standalone)

    That's pretty much it. But I'm having problems; pages are loading quite slowly. Network speed of the server is excellent, but I think I'm just running out of CPU and/or memory. A side effect of the pages loading slowly is that sometimes Hudson times out, not being able to start Maven or contact Sonar within a certain amount of time.

    I think the next step to speed things up might be to move to an application server and use the WAR versions of Hudson, Sonar and Artifactory together on that server. I don't know that it will help, but it just seems to make sense, especially with Sonar running on its own Jetty install and the other two probably running their own mini application servers as well. Am I correct in thinking this? Is this the right course of action? Any other tips on how to make the server run faster? I can post more data if you'd like; just let me know what else would help you answer my question.

    Oh, and just to cure any suspicions: I don't have any sort of virus or spyware. I protect my SSH access with DenyHosts (which has blocked 300+ brute forcers in the past few months), and I have confirmed that the top four processes in terms of memory and CPU usage are Sonar, Artifactory, Hudson, and MySQL.

    Edit: I just thought of another thing I'd like you to comment on as well: Apache currently has 8 spawned slave processes, taking 42 MB of RAM apiece, and this is not my web server. Is everything else able to function if I shut down Apache? Can you point me towards a tutorial or something on migrating Subversion from Apache into something that might work along with the other three applications, maybe even making Subversion a WAR file or something?
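
    On the Subversion part of the question: it doesn't ship as a WAR, but it can run without Apache entirely via its own lightweight daemon; a minimal sketch (the repository parent path is an assumption):

        # serve every repository under /var/svn on the default port 3690
        svnserve -d -r /var/svn

    Clients would then switch from http:// to svn:// URLs, which frees all of Apache's resident memory if nothing else on the box needs it.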

  • Changing Physical Path gives blank homepage

    - by Julie
    I have two ASP Classic websites: www.company.com and www.companytesting.com. At this time of year, company.com points to a folder called website2012 and companytesting.com points to a folder called website2013. The contents of those two folders are almost identical, with just minor changes for our season change (which I was supposed to do today, lol).

    Up until a couple of weeks ago, I was running Windows Server 2003. To update the "live" website, I'd make a copy of the test site's folder, rename it website2013R1, point the test site there, then point the live site at website2012. We now have Windows Server 2008 R2 64-bit. (I had someone migrate the websites to the new server for me.)

    The companytesting.com site, when I pointed it to website2013R1, worked fine. The company.com site, when I pointed it to website2013 (which worked just before, for the companytesting.com site), gives an empty page (i.e. view source = nothing there). There is nothing in the failed request log when this happens. I can use the Explore button/link (upper right) in IIS 7.5 and see all of the files there. If I use the Browse button (either in general or on the index.asp page) I get the blank page again.

    One weirdness about how these are set up is that companytesting.com uses a login (which I think is Windows authentication; it's simply a single username and password for staff, and to keep the GoogleBots out of it). Obviously, company.com does not. But redirecting to website2013R1 kept the login in place. (So I'm not absolutely clear whether that's attached to the folder or to the site. Hitting the company.com site after changing the path did not yield a password request.)

    The permissions on the folders all seem to be the same, but obviously I'm missing something. Why isn't changing the physical path working? As is probably obvious, I'm not knowledgeable about servers. I did OK in 2003, but since it's not my main task and I'm buried right now, I have barely looked at 2008. So I may have really stupid questions when you ask me to check something.

  • SMB returns the entire file instead of header info

    - by billdlawson
    A section of our code starts by checking for access to many data files (flat files, so each table is a file). When I do a packet capture on our setup, only the header info is sent by the server to the client. However, I have one customer, using a SAN, whose server sends the whole file instead of just the header info; besides just being slower, this is causing file access issues. They have already turned off oplocks at the server and at the workstations.

    This is not client/server: the data files and the application reside on the server, but the users run the application locally via a shortcut with a mapped drive or UNC path. So when I simply select an option that prompts for a vehicle number (not trying to select a record, just verifying that the data files are accessible), that window opens in 1-2 seconds for me. When they do the same thing, it takes 6-15 seconds once several users are running the program. The maximum number of users is 15. The program has a lot of small modules, 800 .cob modules, so it is very chatty, but these are data files.

    We have Wireshark captures, their capture vs. ours, showing that their server pulls the whole file while ours sends just the header. We suspect the SAN. Has anyone ever heard of a SAN improperly interpreting runtime requests, i.e. an SMB request? This is AcuCOBOL-GT (now Micro Focus); the application is written in COBOL. This is not a new program, just a new problem, and this is one customer out of over a thousand who are otherwise running smoothly. We are totally stumped.

    All users are on XP; the server is Windows 2003 (with Virtual Server), and I don't yet know the SAN details. Also, we have many installations running virtual servers, but only a few on SANs (or we just don't know it). This is not a network throughput issue: the load is less than 5% on the server, and there are no timeouts or retransmits.

    PS: If it weren't for Wireshark I'd still be chasing my tail; an application trace file on their installation just looks like they run slower. If you want the Wireshark trace file I can make it available. Thanks in advance, and please excuse my verbosity.

  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David Boike
    We have our servers at the server farm in a domain; let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away; let's call it CORP. We have a large central storage unit (Unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server.

    The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc.

    This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect", and the Win32 status means "Logon failure: unknown user name or bad password." This would be all well and good if it were anywhere close to accurate. I double- and triple-checked it, and tried multiple known-good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out.

    I even tried going to the virtual directory's Directory Security tab, and under Authentication and Access Control, I tried using the same LIVE domain username for the anonymous access credential. No luck.

    I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to load static images, CSS, and JS files. If anyone has some bright ideas I would be most appreciative!

  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtpd daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite... and others (including the possibility of custom components). Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page.

    My question: does anyone have access to a resource that describes these hooks, the components/delivery stages that each hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support?

    For example, given the requirement "if the recipient primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice": my software would need to 1) integrate into the proper part of the SMTP process, 2) use some method to perform the address check (TCP map server? regular expressions? MySQL?), and 3) implement the required action (connection termination). Additionally, there will probably be several methods to accomplish this, and another requirement would be to find the one that best fits (e.g. a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible).

    Real-world example: the apolicy policy server (which performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that, when Postfix first receives an item of mail to be delivered, it creates a TCP connection to the policy server's address:port to determine whether the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action is then taken based on the server's response.

    Notes: 1) The Postfix architecture page describes some of this information in ASCII art; what I am hoping for is distilled, condensed reference material. 2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;) Thanks!
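
    As a concrete sketch of the policy-server hook described above (the address and the restriction stage are taken from the apolicy example; the surrounding list entries are assumptions):

        # main.cf: consult an external policy daemon when a client connects
        smtpd_client_restrictions =
            permit_mynetworks,
            check_policy_service inet:127.0.0.1:10001

    For each check, Postfix opens a connection to the daemon, writes the request attributes as name=value lines followed by a blank line, and expects a reply beginning with "action="; the same protocol applies no matter which smtpd_*_restrictions stage the hook is attached to.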

  • php-fpm or nginx: bad gateway

    - by John Tate
    I'm getting a bad gateway error all of a sudden for a site. I didn't change the configuration for the site; I just added a new server config under /etc/nginx/servers, and this one stopped working. The new server works, and there is no conflict between the php-fpm listen addresses.

        server {
            listen 80;
            server_name obfuscated.onion;
            location = / {
                root /var/www/sites/obfuse;
                index index.php;
            }
            location / {
                root /var/www/sites/obfuse;
                index index.php;
                if (!-f $request_filename) {
                    rewrite ^(.*)$ /index.php?q=$1 last;
                    break;
                }
                if (!-d $request_filename) {
                    rewrite ^(.*)$ /index.php?q=$1 last;
                    break;
                }
            }
            error_page 404 /index.php;
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
                root /var/www/sites/obfuse;
                access_log off;
                expires 30d;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/sites/obfuse$fastcgi_script_name;
                fastcgi_param QUERY_STRING $query_string;
                fastcgi_param REQUEST_METHOD $request_method;
                fastcgi_param CONTENT_TYPE $content_type;
                fastcgi_param CONTENT_LENGTH $content_length;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                include fastcgi_params;
            }
        }

    There is nothing unusual in php-fpm's log, even after I raised the level to debug:

        [24-Jun-2013 09:10:37.357943] DEBUG: pid 6756, fpm_scoreboard_init_main(), line 40: got clock tick '100'
        [24-Jun-2013 09:10:37.358950] DEBUG: pid 6756, fpm_event_init_main(), line 333: event module is kqueue and 1 fds have been reserved
        [24-Jun-2013 09:10:37.358978] NOTICE: pid 6756, fpm_init(), line 83: fpm is running, pid 6756
        [24-Jun-2013 09:10:37.359009] DEBUG: pid 6756, main(), line 1832: Sending "1" (OK) to parent via fd=5
        [24-Jun-2013 09:10:37.389215] DEBUG: pid 6756, fpm_children_make(), line 421: [pool cyruserv] child 22288 started
        [24-Jun-2013 09:10:37.391343] DEBUG: pid 6756, fpm_children_make(), line 421: [pool cyruserv] child 21911 started
        [24-Jun-2013 09:10:37.391914] DEBUG: pid 6756, fpm_event_loop(), line 362: 5776 bytes have been reserved in SHM
        [24-Jun-2013 09:10:37.391941] NOTICE: pid 6756, fpm_event_loop(), line 363: ready to handle connections
        [24-Jun-2013 09:10:38.393048] DEBUG: pid 6756, fpm_pctl_perform_idle_server_maintenance(), line 379: [pool cyruserv] currently 0 active children, 2 spare children, 2 running children. Spawning rate 1
        [24-Jun-2013 09:10:39.403032] DEBUG: pid 6756, fpm_pctl_perform_idle_server_maintenance(), line 379: [pool cyruserv] currently 0 active children, 2 spare children, 2 running children. Spawning rate 1
        [24-Jun-2013 09:10:40.413070] DEBUG: pid 6756, fpm_pctl_perform_idle_server_maintenance(), line 379: [pool cyruserv] currently 0 active children, 2 spare children, 2 running children. Spawning rate 1

    I don't know why this has started happening, and the logs are not telling me anything. Please ask for more information; you'll probably need it.
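
    Given that the pool starts cleanly in the log above, a first hedged check is whether anything is actually listening where fastcgi_pass points, run from the same host:

        # confirm something is bound to the address nginx proxies to
        netstat -an | grep 9000

    If the new pool definition took over that port, or this pool silently moved to a different address or socket, nginx's "bad gateway" is exactly the symptom you'd see, even though the configs don't conflict on paper.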

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500 GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html. In those examples, they don't show the scenario where data is uploaded, never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html. But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately, it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage / transfer costs, but I'm not sure if duplicity allows bucket-to-bucket transfers directly, without bringing data down to the controlling client first.

    So, I guess there are a couple questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
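
    On the first question specifically: with versioning enabled, deleting a never-modified object only places a delete marker on top of the single original version, and that original stays retrievable; removing the delete marker (a delete-object call with the marker's VersionId) restores the object. A hedged sketch with the AWS CLI, with bucket and key names as placeholders:

        # turn on versioning, then inspect versions and delete markers
        aws s3api put-bucket-versioning --bucket my-bucket \
            --versioning-configuration Status=Enabled
        aws s3api list-object-versions --bucket my-bucket --prefix uploads/

    So versioning does cover the write-once-then-deleted case, independently of the Glacier question.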

  • What's the best way to do user profile/folder redirect/home directory archiving?

    - by tpederson
    My company is in dire need of a redesign of how we handle user account administration, and I've been tasked with automating the process. The end goal is to have the whole works triggered by the business, with IT only looking in when an error is reported. The interim phase is going to be semi-manual; that is, a level 2 tech inputs the user's info and supervises the process.

    The current hurdle I'm facing is user profile archiving. Our security team requires us to archive the profile directories of any terminated user for 60 days, in case the legal team requires access to their files. Our AD is as much a mess as everything else, so there are some users with home directories and some with profiles. Anyone who has a profile dir in AD also has a good deal of their profile redirected to our file servers over DFS.

    To complete the process manually, you find the user in AD, disable them, find their home/profile dir, go there and take ownership, create an archive folder, move all their files over, then delete the old dir. Some users have many, many gigs of nonsense, and this can take quite some time. Even automated, the process would not be a quick one.

    I'm thinking that I need a client-side C# GUI for the quick stuff and some server-side batch script or console app to offload this long-running process. I have a batch script that works decently using takeown and robocopy, but I wonder if a C# console app would do a better job.

    So, my question at long last is: what do you think is the best way to handle this? I can't imagine this is a unique problem; how do other admins get this done? The last place I worked was easily 10x larger than the place I'm in now; if we had been doing this manual crap there, they'd have needed a team of at least 30 full-time workers to keep up. I have decent skills in C#/.NET and batch scripting, but I'm a quick study and have used most every language once or twice. Thank you for reading this, and I look forward to seeing what imaginative solutions you all can come up with.
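
    For comparison with a possible C# console app, the core of the batch approach mentioned above fits in two commands (paths are placeholders; the flags are standard takeown/robocopy):

        rem take ownership recursively, then archive by moving
        takeown /f "D:\Profiles\%username%" /r /d y
        robocopy "D:\Profiles\%username%" "E:\Archive\%username%" /E /MOVE /COPYALL /R:1 /W:1 /LOG+:E:\Archive\archive.log

    robocopy's /MOVE removes the source as it copies, which collapses the move-then-delete steps; a C# version mostly buys better error handling and reporting, not faster copying.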

  • Why is Windows Task Scheduler trying to launch multiple instances?

    - by Paul H
    We have a number of Windows scheduled tasks that run on one Server 2008 webserver (not R2) which is in a cluster. We recently moved from the original webserver cluster to a new webserver cluster (also Server 2008, not R2). The new webserver running the Windows tasks is set up the same as the original, we believe. BUT we now find that on the new server, Task Scheduler seems to want to instantly start each task three times.

    If we set the option to queue up a new task, we get:

        Event ID 324
        Task Scheduler queued instance "{9a1a8411-b042-45ff-8e6b-89874df230d7}" of task "\Client Reporting" and will launch it as soon as instance "{2bcc3df6-ea3b-4453-90c2-75b8b1946388}" completes.

    If we set the option to stop an existing task, we get:

        Event ID 323
        Task Scheduler stopped instance "{e685a910-b32b-414e-85fd-96bbe54314a2}" of task "\Client Reporting" in order to launch new instance "{4db66265-1f51-4ede-8535-ac7c3cb5c4c1}".

    Ticked settings:

        Allow task to be run on demand.
        Run task as soon as possible after a scheduled start is missed.
        Stop the task if running for longer than 1 hour.
        If the running task does not end when requested, force it to stop.
        Start the task only if the computer is on AC power.
        Stop the task if the computer switches to battery power.

    Selected option: if the task is already running, stop the existing instance.

    Note: we moved the tasks from one server to another in the cluster to see whether it was the Task Scheduler on the particular server we'd picked causing the problem; same behaviour. Could it be something to do with the build of the new servers? We have very similar tasks set up on another server cluster that work fine without all this multiple starting. Comparing those tasks to the ones here, there does not seem to be anything obviously different in the settings available to us through the Task Scheduler options.

    Trigger: the task is scheduled to be triggered daily, once an hour, and to be stopped if it exceeds this time. Action: runs a .bat file.

    What could be causing this, and where can we look to see what logic is causing the tasks to start multiple times in this way?
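
    When comparing the two clusters, dumping the full definition of a misbehaving task on each side can expose differences the GUI hides (task name taken from the events above):

        schtasks /query /tn "Client Reporting" /v /fo LIST

    One thing worth checking in that output, offered as a guess: a repetition interval defined on the trigger itself (separate from the daily schedule) would make the task re-launch immediately and collide with its own running instance.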

  • Are random packets normal?

    - by TheLQ
    About a month ago, one of my servers started receiving random packets from IPs all over the world. So I did the smart thing and stopped putting off installing an IDS. This IDS is a ClearOS gateway, which comes with Snort and SnortSam. I enabled it and checked things over. There are a total of 4 ports open, two of which forward to the server I'm talking about. These ports are 3724 and 8085, so they aren't going to be easily detected in a port scan. However, checking some logs on this server, I found that the attack is resuming. I found this:

        Accepting connection from '75.166.155.122'
        [Auth] got unknown packet from '75.166.155.122'
        Accepting connection from '98.164.154.93'
        [Auth] got unknown packet from '98.164.154.93'
        Ping MySQL to keep connection alive
        Accepting connection from '70.241.195.129'
        [Auth] got unknown packet from '70.241.195.129'
        Accepting connection from '67.182.229.169'
        [Auth] got unknown packet from '67.182.229.169'
        Accepting connection from '69.137.140.38'
        [Auth] got unknown packet from '69.137.140.38'
        Accepting connection from '76.31.72.55'
        [Auth] got unknown packet from '76.31.72.55'
        Accepting connection from '97.88.139.39'
        [Auth] got unknown packet from '97.88.139.39'
        Accepting connection from '173.35.62.112'
        [Auth] got unknown packet from '173.35.62.112'
        Accepting connection from '187.15.10.73'
        [Auth] got unknown packet from '187.15.10.73'
        Accepting connection from '66.66.94.124'
        [Auth] got unknown packet from '66.66.94.124'
        Accepting connection from '75.159.219.124'
        [Auth] got unknown packet from '75.159.219.124'
        Accepting connection from '99.102.100.82'
        [Auth] got unknown packet from '99.102.100.82'
        Accepting connection from '24.128.240.45'
        [Auth] got unknown packet from '24.128.240.45'
        Accepting connection from '99.231.7.39'
        [Auth] got unknown packet from '99.231.7.39'
        Accepting connection from '206.255.79.56'
        [Auth] got unknown packet from '206.255.79.56'
        Accepting connection from '68.97.106.235'
        [Auth] got unknown packet from '68.97.106.235'
        Accepting connection from '69.134.67.251'
        [Auth] got unknown packet from '69.134.67.251'
        Accepting connection from '63.228.138.186'
        [Auth] got unknown packet from '63.228.138.186'
        Accepting connection from '184.39.146.193'
        [Auth] got unknown packet from '184.39.146.193'
        Accepting connection from '69.171.161.102'
        [Auth] got unknown packet from '69.171.161.102'
        Accepting connection from '76.0.47.228'
        [Auth] got unknown packet from '76.0.47.228'
        Ping MySQL to keep connection alive
        Accepting connection from '126.112.201.14'
        [Auth] got unknown packet from '126.112.201.14'
        Ping MySQL to keep connection alive

    Now that scares me. Why isn't Snort detecting this? How were they able to find this specific port? More importantly, what would these packets normally contain? Is this something I should be worried about? How can I stop this?
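
    A couple of hedged observations: 3724 is the port World of Warcraft clients use for login traffic, and "[Auth] got unknown packet" looks like a game/auth daemon logging stray or scripted clients; that would also explain why Snort stays quiet, since valid TCP connections carrying unexpected payloads don't match typical attack signatures. Until the source is identified, connections to that port can be rate-limited at the gateway (the thresholds here are illustrative):

        # drop sources opening more than 10 new connections per minute to 3724
        iptables -A INPUT -p tcp --dport 3724 -m state --state NEW \
            -m recent --set --name AUTHFLOOD
        iptables -A INPUT -p tcp --dport 3724 -m state --state NEW \
            -m recent --update --seconds 60 --hitcount 10 --name AUTHFLOOD -j DROP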

  • What is good usage scenario for Rackspace Cloud Files CDN (powered by AKAMAI) [closed]

    - by Andrew Smith
    I have just set up my website as a static page via Rackspace CDN / Akamai:

        www.example.co.uk is an alias for d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com.
        d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com is an alias for a61.rackcdn.com.
        a61.rackcdn.com is an alias for a61.rackcdn.com.mdc.edgesuite.net.
        a61.rackcdn.com.mdc.edgesuite.net is an alias for a63.dscg10.akamai.net.
        a63.dscg10.akamai.net has address 63.166.98.41
        a63.dscg10.akamai.net has address 63.166.98.40
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ecb9
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ed09

    The HTTP header:

        HTTP/1.0 200 OK
        Last-Modified: Fri, 19 Oct 2012 23:27:41 GMT
        ETag: fdf9e14b77def799e09e8ce815a521da
        X-Timestamp: 1350689261.23382
        Content-Type: text/html
        X-Trans-Id: tx457979be3bd746c2b4e5403a1189cdbc
        Cache-Control: public, max-age=900
        Expires: Sat, 27 Oct 2012 22:18:56 GMT
        Date: Sat, 27 Oct 2012 22:03:56 GMT
        Content-Length: 7124
        Connection: keep-alive

    I am wondering whether this is really the fastest solution to power the website. Investigating it through http://www.just-ping.com/, it seems that from many places the ping is very high, and during a quick investigation I found that they use GeoIP to resolve addresses based on WHOIS, which is not accurate; because of that, from many places the ping stays above 300 ms (for example, if the ISP is registered in Bangalore, requests keep being routed to Bangalore even when that costs 300 ms, for a period of a month). Meanwhile, just using Amazon Web Services with Route 53 anycast DNS servers and only 4 EC2 instances, India for example is always below 100 ms, while with Akamai it goes above 300 ms in some cases; this is because Route 53 uses BGP.

    Quickly checking Akamai, it seems that they are not getting feedback from the traffic: the high ping stays constant even if I keep downloading large files and videos, which is the opposite of what they say on their website. They state that they optimize performance by taking feedback from the requests, while it seems they just use GeoIP with per-city resolution (mostly big cities). Because of this, AWS with Route 53 / anycast DNS seems much more reliable, as does EdgeCast, which uses BGP, but I don't know what it costs to deploy a static website there. Actually, I don't know if EdgeCast's claims hold up either, because from isolated places there are many errors; their performance comes at the cost of delivery quality, because of BGP switching routes during transfers of large files.

    So I was wondering: what is Akamai really good for? They don't seem to show particular strength in any area I understand so far, except that they offer some software-based WAF on their website; but what I really care about is the core distribution. So the question is: is Akamai really good for videos? For static websites? So far I have found AWS the most usable, with the most consistent ping and stable transfers.

  • pam_unix(sshd:session) session opened for user NOT ROOT by (uid=0), then closes immediately when using TortoiseSVN

    - by codewaggle
    I'm having problems accessing an SVN repository using TortoiseSVN 1.7.8. The SVN repository is on a CentOS 6.3 box and appears to be functioning correctly:

        # svnadmin --version
        # svnadmin, version 1.6.11 (r934486)

    I can access the repository from another CentOS box with this command:

        svn list svn+ssh://[email protected]/var/svn/joetest

    But when I attempt to browse the repository using TortoiseSVN from a Windows 7 workstation, I'm unable to do so using the following path:

        svn+ssh://[email protected]/var/svn/joetest

    I am able to log in via SSH from the workstation using PuTTY, and the results are the same if I attempt access as root. I've given ownership of the repository to USER:USER and run chmod 2700 -R /var/svn/. Because I can access the repository via ssh from another Linux box, permissions don't appear to be the problem.

    When I watch the log file using tail -fn 2000 /var/log/secure, I see the following each time TortoiseSVN asks for the password:

        Sep 26 17:34:31 dev sshd[30361]: Accepted password for USER from xx.xxx.xx.xxx port 59101 ssh2
        Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session opened for user USER by (uid=0)
        Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session closed for user USER

    I'm actually able to log in, but the session is then closed immediately. It caught my eye that the session is being opened for USER by root (uid=0), which may be correct, but I'll mention it in case it has something to do with the problem.

    I looked into modifying svnserve.conf, but as far as I can tell it's not used when accessing the repository via svn+ssh; a private svnserve instance is created for each login via this method. From the manual:

        There's still a third way to invoke svnserve, and that's in "tunnel mode", with the -t option. This mode assumes that a remote-service program such as RSH or SSH has successfully authenticated a user and is now invoking a private svnserve process as that user. The svnserve program behaves normally (communicating via stdin and stdout), and assumes that the traffic is being automatically redirected over some sort of tunnel back to the client. When svnserve is invoked by a tunnel agent like this, be sure that the authenticated user has full read and write access to the repository database files. (See Servers and Permissions: A Word of Warning.) It's essentially the same as a local user accessing the repository via file:/// URLs.

    The only non-default settings in sshd_config are:

        Protocol 2 # to disable Protocol 1
        SyslogFacility AUTHPRIV
        ChallengeResponseAuthentication no
        GSSAPIAuthentication yes
        GSSAPICleanupCredentials yes
        UsePAM yes
        AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
        AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
        AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
        AcceptEnv XMODIFIERS
        X11Forwarding no
        Subsystem sftp /usr/libexec/openssh/sftp-server

    Any thoughts?
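
    One way to take TortoiseSVN out of the equation is to run, from the same workstation, the tunnel command it would effectively issue (plink ships with PuTTY; the user and IP placeholders match the log excerpt above):

        plink -ssh USER@xx.xxx.xx.xxx svnserve -t

    If this session also closes immediately, the problem is server-side (the account's shell, a forced command, or PAM); if it stays open, attention shifts to how TortoiseSVN's SSH client is configured.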

  • Cygwin's RSYNC for large data transfer

    - by Tim Brigham
    I'm using rsync from Cygwin to do a large-scale data transfer from an aging HP MSA 1000 to a new DAS attached to a different server. I have a daemon running on the remote server in read-only mode and a local copy writing the files to disk.

    One of my servers is an image repository with over a million files spread across about 300 directories. Each file averages only a couple hundred kilobytes. More so than any other box, this one is proving problematic. The rsync process will work for a while (sometimes 20 minutes, sometimes an hour) and then it simply quits and sits idle at a given file name. I have verified that the file isn't corrupt on the remote server and that the file is successfully created on the local drive. I ran the rsync client in -vv mode, which returns nothing. I checked the logs created by the daemon. I looked at the network utilization on the interface, which is sitting idle. I looked at the AV settings to see if anything could pose a problem there. I even updated to the latest release of Cygwin. What do I need to do in order to keep this connection up?

    EDIT: The client system is using the command

        rsync.exe server::Drives/f/Repo/ /cygdrive/T/Repo --archive -P -vv

    The server is using the command

        rsync.exe --daemon --no-detach --config "rsyncd.conf"

    The contents of rsyncd.conf:

        use chroot = false
        strict modes = false
        hosts allow = 192.168.100.9
        log file = c:/rsyncd.log
        uid = 0
        gid = 0

        [Drives]
        path = /cygdrive
        read only = yes

    EDIT: The file server is 2003, the disk type on the array is GPT, and the size of the array is about 4 TB.

    EDIT: Stranger... it looks like the process reliably errors out at about 175,000 files. Rsync runs fine when I pick the directories it has problems with one at a time.

    EDIT:

        rsync version 3.0.9 protocol version 30
        Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others.
        Web site: http://rsync.samba.org/
        Capabilities:
            64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
            no socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
            append, ACLs, xattrs, iconv, symtimes

    A similar failure occurred when going from the same set of files with Cygwin to a Linux install, though it didn't happen until several hours later than normal.
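
    Since per-directory runs succeed where the full run stalls, one hedged workaround is to drive the transfer one top-level directory at a time with rsync's own stall detector, so a hung transfer aborts and can be retried instead of sitting idle forever (paths are the ones from the commands above; the loop assumes the ~300 directory names contain no spaces):

        # one rsync invocation per top-level directory, aborting after 120 s of no I/O
        for d in $(rsync server::Drives/f/Repo/ | awk '{print $NF}' | grep -v '^\.$'); do
            rsync --archive -P --timeout=120 "server::Drives/f/Repo/$d" "/cygdrive/T/Repo/"
        done

    --timeout makes rsync exit with an error on a stall, which also helps narrow down whether the hang is file-count related (always near 175,000) or time related.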

  • PXE net install of CentOS with static IP

    - by Stu2000
    I seem to be unable to perform a kickstart installation of CentOS 5.8 with a netinstall. It correctly gets into the text installer, but keeps sending out requests to the DHCP server and failing. I have tried to manually set the IP everywhere. Here is my pxelinux.cfg file:

        DEFAULT menu
        PROMPT 0
        MENU TITLE Ubuntu MAAS
        TIMEOUT 200
        TOTALTIMEOUT 6000
        ONTIMEOUT local

        LABEL centos5.8-net
            kernel /images/centos5.8-net/vmlinuz
            MENU LABEL centos5.8-net
            append initrd=/images/centos5.8-net/initrd.img ip=192.168.1.163 netmask=255.255.255.0 hostname=client101 gateway=192.168.1.1 ksdevice=eth0 dns=8.8.8.8 ks=http://192.168.1.125/cblr/svc/op/ks/profile/centos5.8-net

        MENU end

    And here is my kickstart file:

        # Kickstart file for a very basic Centos 5.8 system
        # Assigns the server ip: 192.211.48.163
        # DNS 8.8.8.8, 8.8.4.4
        # London TZ
        install
        url --url http://mirror.centos.org/centos-5/5.8/os/i386
        lang en_US.UTF-8
        keyboard us
        network --device=eth0 --bootproto=static --ip=192.168.1.163 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=8.8.8.8,8.8.4.4 --hostname=client1-server --onboot=on
        rootpw --iscrypted $1$Snrd2bB6$CuD/07AX2r/lHgVTPZyAz/
        firewall --enabled --port=22:tcp
        authconfig --enableshadow --enablemd5
        selinux --enforcing
        timezone --utc Europe/London
        bootloader --location=mbr --driveorder=xvda --append="console=xvc0"
        # The following is the partition information you requested
        # Note that any partitions you deleted are not expressed
        # here so unless you clear all partitions first, this is
        # not guaranteed to work
        part /boot --fstype ext3 --size=100 --ondisk=xvda
        part pv.2 --size=0 --grow --ondisk=xvda
        volgroup VolGroup00 --pesize=32768 pv.2
        logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=528 --grow --maxsize=1056
        logvol / --fstype ext3 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow

        %packages
        @base
        @core
        @dialup
        @editors
        @text-internet
        keyutils
        iscsi-initiator-utils
        trousers
        bridge-utils
        fipscheck
        device-mapper-multipath
        sgpio
        emacs

    Here is my dhcp file:

        ddns-update-style interim;
        allow booting;
        allow bootp;
        ignore client-updates;
        set vendorclass = option vendor-class-identifier;

        subnet 192.168.1.0 netmask 255.255.255.0 {
            host tower {
                hardware ethernet 50:E5:49:18:D5:C6;
                fixed-address 192.168.1.163;
                option routers 192.168.1.1;
                option domain-name-servers 8.8.8.8,8.8.4.4;
                option subnet-mask 255.255.255.0;
                filename "/pxelinux.0";
                default-lease-time 21600;
                max-lease-time 43200;
                next-server 192.168.1.125;
            }
        }

    Is it impossible to prevent it asking for a dynamic IP before trying to install from the net? Perhaps there is an error in one of my files? My DHCP server is set to ignore client-updates, and is set to only work with one MAC address whilst testing.

  • VirtualBox - multiple guests, each with a single bridged adapter?

    - by Martin
    I am running a dedicated server (located at Hetzner, Germany) that runs VirtualBox in order to virtualize several services across multiple virtual guests. Those guests are supposed to:

        1. communicate with each other (for instance, a virtual web server has to access a virtual database server);
        2. be reachable from the dedicated server (for instance, for SSH access); and
        3. access the Internet via the dedicated server (for instance, to download security updates).

    Currently, this is achieved by a host-only adapter vboxnet0 on the dedicated server and two virtual interfaces on each guest. There, virtual adapter eth0 is attached to vboxnet0 (to achieve 1 and 2), and virtual adapter eth1 is attached to VirtualBox's NAT (to achieve 3). Via eth0, the guests have access to a DHCP and a DNS server, both running on the dedicated server (and bound to vboxnet0 there); this allows me to assign custom IP addresses and names. Via eth1, VirtualBox pushes a proper route that enables each guest to access the Internet (via eth0 on the dedicated server).

    This setup with two virtual adapters frequently leads to problems and at the very least complicates many things. For instance, the dedicated server runs OpenVPN, which allows access to the virtual machines via the Internet, and Shorewall, which controls the incoming and outgoing network traffic between the Internet, the dedicated server, and the individual virtual machines. Not to mention automatic installation of servers via PXE...

    Therefore, I would prefer to have only one single virtual adapter on each guest, used for both incoming and outgoing connections. As far as I understand, one would basically use a bridged interface for that very purpose. Now the question arises: which interface on the dedicated server would the bridge use? eth0 on the host server is not an option, as this is prohibited by the provider. A virtual interface eth0:0 would not make any sense, as a bridge always uses a physical interface (eth0 in this case). Would it be possible to create a bridged interface in each virtual machine that would "dangle in the air", i.e. without a counterpart on the dedicated server? How would I have to set up the routing on the host server then?

    Please note that the host / dedicated server has only one network adapter (eth0), which is connected to the provider's network.

    Regards, Martin
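
    One pattern used on hosts where bridging to eth0 is forbidden is to keep a single host-only adapter per guest and let the dedicated server do the routing and NAT itself; a hedged sketch, where vboxnet0's 192.168.56.0/24 range is an assumption:

        # on the host: forward between vboxnet0 and the uplink, NAT outbound traffic
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 192.168.56.0/24 -o eth0 -j MASQUERADE

    Each guest then keeps only eth0 on vboxnet0, with the host's vboxnet0 address as its default gateway; that collapses the two-adapter setup into one interface per guest and gives Shorewall a single place to filter it all.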

  • Dangers of Running Computers w/o Air Conditioning

    - by Daniel Bingham
    I recently moved into an apartment without air conditioning. This is fine most of the time, as I am in upstate New York; it only ever gets above the high 70s during the hottest of the summer months, and when it does, I'm stubborn enough that I'll just deal with wearing minimal clothing around the house. However, I'm worried about my computers.

    I'm a software developer and gamer, so many of my machines are very high-powered, and at least one of them is a server that must be left on 24/7 (not just a game server; it also serves multiple websites). I've never before had to worry about the heat too much, as I always lived in buildings with central air. The in-building temperature rarely got much above 70 F, and all of the machines I built had good enough air cooling that I never saw a problem.

    Now the temperature in the building is pushing 100 F, and I'm worried that the machines will not be able to keep themselves cool enough by simply blowing already hot air over themselves. The hottest of them I've turned off; the server, however, I cannot. It's an old Dell (not a custom build) that runs on a Pentium 4 (2.2 GHz). It has only a single hard drive and integrated video, and it's not running any processor-intensive services, just basic LAMP. It used to run a MUD server, but that's off for now, so it should be idling most of the time.

    I haven't been able to find any sort of built-in temperature sensors in the hardware... at least not any that the programs I've found in the Debian repository can read. And it's an inherited machine for which I do not have the full specs, so I don't know the tolerances anyway. How worried should I be about it melting down on me? How worried should I be about the hard drive melting or becoming corrupted?

    To generalize the question for other people: what are the safe temperature tolerances for most machines? How widely does it vary, and how does one go about determining when a machine is running too hot and needs to be shut down?
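
    On the sensor question, for a Debian box the stock tools are lm-sensors for board/CPU probes and the drive's own SMART data for disk temperature; a minimal sketch:

        apt-get install lm-sensors hddtemp smartmontools
        sensors-detect          # probe for readable chips (answer the prompts)
        sensors                 # board/CPU temperatures, if any were found
        hddtemp /dev/sda        # drive temperature via SMART
        smartctl -A /dev/sda    # full SMART attributes, including temperature

    Old Pentium 4-era Dells often expose little to lm-sensors, but the drive temperature is usually readable, and it is the number that matters most for data safety; most desktop drives of that era are specced to an operating ceiling of roughly 55-60 °C.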

  • Troubleshooting an NFS server hanging after authenticated mount request

    - by Christoph
    I need some advice on troubleshooting an NFS server problem on Scientific Linux (RHEL) 6.1. The log on the server shows that an authenticated mount request was made:

        Jan 13 16:30:02 ??? rpc.mountd[3996]: authenticated mount request from ????:784 for /shared-storage/cm/shared (/shared-storage/cm/shared)

    But after that, it does not continue; on the client, it is also hanging. The interesting thing is that I have two NFS servers which should be identical: one works perfectly, while the other exhibits the behaviour above. The problem is also not completely persistent, i.e. sometimes the mount request succeeds.

    I assume the problem must be related to the server rather than the client, because mounting works perfectly against the other server. My question is where I should look for the problem. I have already re-created the exports using exportfs -r, restarted the NFS server, and compared the rpcinfo outputs of both servers, all with no success. The problem even survives a reboot. Any other ideas are appreciated.

    In answer to Tim's question: I sporadically see the following in dmesg, but do not know whether it is related:

        e1000e 0000:0c:00.0: eth4: Detected Hardware Unit Hang:
          TDH                  <24>
          TDT                  <25>
          next_to_use          <25>
          next_to_clean        <24>
        buffer_info[next_to_clean]:
          time_stamp           <1c3d12940>
          next_to_watch        <24>
          jiffies              <1c3d12940>
          next_to_watch.status <0>
        MAC Status             <80383>
        PHY Status             <792d>
        PHY 1000BASE-T Status  <7800>
        PHY Extended Status    <3000>
        PCI Status             <10>

    Further edit: the problem above does not occur on the machine that is working, so it probably is related.

    Again an edit: the error is not on the (software) device that is used for NFS, but on another one. The NFS mount also does not trigger the message.
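
    The e1000e "Detected Hardware Unit Hang" points at a NIC/driver-level stall rather than NFS itself, and even though eth4 may not be the NFS-facing interface, the message only appears on the broken server. A frequently cited workaround for testing that theory (the interface name comes from the dmesg output; this trades some CPU for stability) is to disable the offloads involved:

        # turn off segmentation and scatter-gather offloads on the affected NIC
        ethtool -K eth4 tso off gso off sg off

    If the hangs and the message stop together, the longer-term fix is usually an updated e1000e driver or NIC firmware rather than anything in the NFS configuration.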
