Search Results

Search found 15578 results on 624 pages for 'place and route'.


  • TFS2010 Hangs “Waiting for Build Agent”

    - by Qpirate
    I have already asked this question on SO (the link to the question is here), but I am hoping this is a better place for it. I have 3 VMs, each running the TFS Build Host Service: one has 1 controller and 1 agent, and the other two have 2 build agents each. Most of the time (7 out of 10 builds) the build comes back with the following error message:
      TF215097: An error occurred while initializing a build for build definition BUILD_DEFINITION: There was no endpoint listening at http://MACHINE1:9191/Build/v3.0/Services/Controller/14 that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.
    No errors are logged when I do get this message. The following is the config file I have created:
      <configuration>
        <appSettings>
          <add key="traceWriter" value="true"/>
        </appSettings>
        <system.diagnostics>
          <switches>
            <add name="BuildServiceTraceLevel" value="4"/>
            <add name="API" value="4"/>
            <add name="Authentication" value="4"/>
            <add name="Authorization" value="4"/>
            <add name="Database" value="4"/>
            <add name="General" value="4"/>
            <add name="traceLevel" value="4"/>
          </switches>
          <trace autoflush="true" indentsize="4">
            <listeners>
              <add name="myListener" type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener,Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" initializeData="c:\logs\TFSBuildServiceHost.exe.log" />
              <remove name="Default" />
            </listeners>
          </trace>
        </system.diagnostics>
      </configuration>
    I do have my own custom activities in my build process, but they do not seem to be the problem, because sometimes the build does go through. I have tried refreshing the template, as some sites suggest. Has anyone come across a solution to this problem, or can anyone tell me how to catch these errors when they happen?

  • SSL support with Apache and Proxytunnel

    - by whuppy
    I'm inside a strict corporate environment. HTTPS traffic goes out via an internal proxy (for this example it's 10.10.04.33:8443) that's smart enough to block ssh'ing directly to ssh.glakspod.org:443, but I can get out via proxytunnel. I set up an apache2 VirtualHost at ssh.glakspod.org:443 like this:
      ServerAdmin [email protected]
      ServerName ssh.glakspod.org
      <!-- Proxy Section -->
      <!-- Used in conjunction with ProxyTunnel -->
      <!-- proxytunnel -q -p 10.10.04.33:8443 -r ssh.glakspod.org:443 -d %host:%port -->
      ProxyRequests on
      ProxyVia on
      AllowCONNECT 22
      <Proxy *>
          Order deny,allow
          Deny from all
          Allow from 74.101
      </Proxy>
    So far so good: I hit the Apache proxy with a CONNECT, then PuTTY and my ssh server shake hands and I'm off to the races. There are, however, two problems with this setup:
    1. The internal proxy server can sniff my CONNECT request and also see that an SSH handshake is taking place. I want the entire connection between my desktop and ssh.glakspod.org:443 to look like HTTPS traffic no matter how closely the internal proxy inspects it.
    2. I can't get the VirtualHost to be a regular HTTPS site while proxying. I'd like the proxy to coexist with something like this:
      SSLEngine on
      SSLProxyEngine on
      SSLCertificateFile /path/to/ca/samapache.crt
      SSLCertificateKeyFile /path/to/ca/samapache.key
      SSLCACertificateFile /path/to/ca/ca.crt
      DocumentRoot /mnt/wallabee/www/html
      <Directory /mnt/wallabee/www/html/>
          Options Indexes FollowSymLinks MultiViews
          AllowOverride None
          Order allow,deny
          allow from all
      </Directory>
      <!-- Need a valid client cert to get into the sanctum -->
      <Directory /mnt/wallabee/www/html/sanctum>
          SSLVerifyClient require
          SSLOptions +FakeBasicAuth +ExportCertData
          SSLVerifyDepth 1
      </Directory>
    So my question is: how do I enable SSL support on the ssh.glakspod.org:443 VirtualHost in a way that will work with ProxyTunnel? I've tried various combinations of proxytunnel's -e, -E, and -X flags without any luck. The only lead I've found is Apache Bug No. 29744, but I haven't been able to find a patch that will install cleanly on Ubuntu Jaunty's Apache version 2.2.11-2ubuntu2.6. Thanks in advance.

  • Kernel-mode Authentication: 401 errors when accessing site from remote machines

    - by CJM
    I have several Classic ASP sites that use Integrated Windows Authentication and Kerberos delegation. They work OK on the live servers (recently moved to a Server 2008/IIS7 servers), but do not work fully on my development PC or my development server. The IIS on both machines were configured through an IIS web deployment tool package which was exported from an old machine; the deployment didn't work perfectly, and I had to tinker a bit to get the sites working. When accessing the apps locally on either machine, they work fine; when accessing from another machine, the user is prompted by a username/password dialog, and regardless of what you enter, ultimately it results in a 401 (Unauthorised) error. I've tried comparing the configuration of these machines against similar live servers (that all work fine), and they seem generally comparable (given that none of the live servers are yet on IIS7.5 (Windows 7/Server 2008 R2). These applications run in a common application pool which uses a special domain user as it's identity - this user has similar permissions on the live and development machines. On IIS6 platforms, to enable kerberos delegation, I needed to set up some SPNs for this user, and they are still in place (even though I don't believe they are needed any longer for IIS7+ due to kernel-mode authentication), Furthermore, this account is enabled for Kerberos delegation in Active Directory, as is each machine I am dealing with. I'm considering the possibility that the deployment might have made changes/failed to make changes to the IIS configuration thus causing this problem. Perhaps a complete rebuild (minus another web deployment attempt) would solve the problem, but I'd rather fix (thus understand) the current problem. Any ideas so far? I've just had another attempt at fixing this issue, and I've made some progress, but I don't have a complete fix...yet. I've discovered that if I access the sites via IP address (than via NetBIOS name), I get the same dialog, except that it accepts my credentials and thus the application works - not quite a fix, but a useful step. More interestingly, I discovered that if I disable Kernel-mode authentication (in IIS Manager Website Authentication Advanced Settings), the applications work perfectly. My foggy understanding is that this is effectively working in the pre-IIS7 way. A reasonable short-term solution, but consider the following explicit advice from IIS on this issue: By default, IIS enables kernel-mode authentication, which may improve authentication performance and prevent authentication problems with application pools configured to use a custom identity. As a best practice, do not disable this setting if Kerberos authentication is used in your environment and the application pool is configured to use a custom identity. Clearly, this is not the way my applications should be working. So what is the issue?
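
    One setting that is often tied to exactly this symptom, offered here as an assumption rather than a confirmed diagnosis: when kernel-mode authentication stays enabled but the application pool runs under a custom domain identity (with SPNs registered to that account), IIS 7.x needs to be told to decrypt Kerberos tickets with the pool identity instead of the machine account, which is what the useAppPoolCredentials attribute does. A minimal sketch of the relevant applicationHost.config fragment:
      <!-- sketch only; the exact site/location element depends on your setup -->
      <system.webServer>
        <security>
          <authentication>
            <windowsAuthentication enabled="true" useKernelMode="true" useAppPoolCredentials="true" />
          </authentication>
        </security>
      </system.webServer>
    With that in place, kernel-mode authentication can stay enabled, which keeps the performance benefit the IIS guidance quoted above refers to.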

  • Recommendations or advice for shared computer control

    - by Telemachus
    Basic scenario: we are a school (overwhelmingly Mac, some Windows machines via BootCamp), and we are considering using DeepFreeze to guard the state of our shared machines. We have roughly 250 machines that are either shared laptops (which move around quite a bit) or common desktops in public spaces. Obviously, we spend a lot of time maintaining the machines and trying to reverse the inevitable drift as people make changes to the computers. We would like to control the integrity of the build we initially put onto the machines without handcuffing users and especially without using Mac's Parental Control software. (We've had nothing but bad experiences with it.) We've been testing DeepFreeze, and so far it's very impressive. But I'm curious to hear if people who have used DeepFreeze or any similar software have any advice or tips. To get things started, I will post my own pros and cons. Pros: The state of the machine is frozen in our chosen state. All changes made to the machine after that disappear upon restart. (This frozen state really appears to cover everything. I have yet to do something to a test machine that isn't instantly healed.) Tons of trivial but time-consuming maintenance is gone in an instant. Also, lots of not-so-trivial breakage should be avoided. There are good options, however, that allow you to create storage spaces either globally or per user. (Otherwise, stored files disappear upon reboot. For some machines, this is a good option itself. Simply warn people: save externally or else; this machine is a kiosk, not your storage space.) Cons: Anytime we actually need to make a change (upgrade basic software, add a printer or an airport permanently, add new software), the process is a bit more complex. Reboot into a special mode (thaw state), make changes, reboot back into frozen mode. If (when?) we forget this, we will end up making changes that disappear after the next reboot. Users will forget to save files correctly (in the right place or externally), and we will have loud, unpleasant conversations explaining that we can't recover the document they worked on all afternoon yesterday. The machine rebooted. The file is gone. These are my initial thoughts, but I would love to hear from other people who have experience with DeepFreeze or any similar software. What should we be careful about? Do the pros outweigh the cons? What gains or problems am I not seeing? Thanks.

  • Preventing 'Reply-All' to Exchange Distribution Groups

    - by Larold
    This is another question in a short series regarding a challenging Exchange project my co-workers have been asked to implement. (I'm helping even though I'm primarily a Unix guy because I volunteered to learn powershell and implement as much of the project in code as I could.) Background: We have been asked to create many distribution groups, say about 500+. These groups will contain two types of members. (Apologies if I get these terms wrong.) One type will be internal AD users, and the other type will be external users that I create Mail Contact entries for. We have been asked to make it so that a "Reply All" is not possible to any messages sent to these groups. I don't believe that is 100% possible to enforce for the following reasons. My question is - is my following reasoning sound? If not, please feel free to educate me on if / how things can properly be implemeneted. Thanks! My reasoning on why it's impossible to prevent 100% of potential reply-all actions: An interal AD user could put the DL in their To: field. They then click the '+' to expand the group. The group contains two external mail contacts. The message is sent to everyone, including those external contacts. External user #1 decides to reply-all, and his mail goes to, at least, external user #2, which wouldn't even involve our Exchange mail relays. An internal AD user could place the DL in their Outlook To: field, then click the '+' button to expand the DL. They then fire off an email to everyone that was in the group. (But the individual addresses are listed in the 'To:' field.) Because we now have a message sent to multiple recipients in the To: field, the addresses have been "exposed", and anyone is free to reply-all, and the messages just get sent to everyone in the To: field. Even if we try to set a Reply-To: field for all of these DLs, external mail clients are not obligated to abide by it, or force users to abide by it. Are my two points above valid? (I admit, they are somewhat similar.) Am I correct to tell our leadership "It is not possible to prevent 100% of the cases where someone will want to Reply-All to these groups UNLESS we train the users sending emails to these groups that the Bcc: field is to be used at all times." I am dying for any insight or parts of the equation I'm not seeing clearly. Thank you!!!

  • ASA 5505 Vlan question

    - by Wayne
    I am setting up a Cisco ASA 5505 with the base license. I can communicate from inside-outside, outside-inside, and inside-home, which is my desired traffic security. I can get http, ssh, and other access from inside to home, but I can't ping from inside to home (192.168.110.0 host to 192.168.7.1 or a 192.168.7.0 host). Can someone explain? My config is listed below:
      interface Vlan1
       nameif inside
       security-level 100
       ip address 192.168.110.254 255.255.255.0
      !
      interface Vlan2
       nameif outside
       security-level 0
       pppoe client vpdn group birdie
       ip address removedIP 255.255.255.255 pppoe
      !
      interface Vlan3
       no forward interface Vlan1
       nameif home
       security-level 50
       ip address 192.168.7.1 255.255.255.0
      !
      interface Ethernet0/0
       switchport access vlan 2
      !
      interface Ethernet0/1
      !
      interface Ethernet0/2
      !
      interface Ethernet0/3
      !
      interface Ethernet0/4
       switchport access vlan 3
      !
      interface Ethernet0/5
       shutdown
      !
      interface Ethernet0/6
       shutdown
      !
      interface Ethernet0/7
       shutdown
      !
      ftp mode passive
      clock timezone EST -5
      clock summer-time EDT recurring
      access-list Outside-In extended permit icmp any any
      access-list Outside-In extended permit tcp any any eq www
      access-list Outside-In extended permit tcp any any eq https
      access-list Outside-In extended permit tcp any any eq 5969
      access-list inside_nat0_outbound extended permit ip any 192.168.111.0 255.255.255.224
      access-list standardUser_splitTunnelAcl1 extended permit ip 192.168.111.0 255.255.255.0 any
      access-list standardUser_splitTunnelAcl1 extended permit ip 192.168.110.0 255.255.255.0 any
      access-list inside_in extended permit icmp any any
      access-list inside_in extended permit ip any any
      access-list home_in extended permit icmp any any
      access-list home_in extended permit ip any any
      pager lines 24
      logging enable
      logging asdm informational
      mtu inside 1492
      mtu outside 1492
      mtu home 1500
      ip local pool vpnuser 192.168.111.5-192.168.111.20
      icmp unreachable rate-limit 1 burst-size 1
      asdm image disk0:/asdm-524.bin
      no asdm history enable
      arp timeout 14400
      nat-control
      global (outside) 1 interface
      nat (inside) 0 access-list inside_nat0_outbound
      nat (inside) 1 0.0.0.0 0.0.0.0
      nat (home) 1 192.168.7.0 255.255.255.0
      static (inside,outside) tcp interface https 192.168.110.6 https netmask 255.255.255.255
      static (inside,outside) tcp interface www 192.168.110.6 www netmask 255.255.255.255
      static (inside,outside) tcp interface 5969 192.168.110.12 5969 netmask 255.255.255.255
      static (inside,home) 192.168.110.0 192.168.110.0 netmask 255.255.255.0
      access-group inside_in in interface inside
      access-group Outside-In in interface outside
      access-group home_in in interface home
      route outside 0.0.0.0 0.0.0.0 RemovedIP 1
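
    One thing that may explain the one-way ping in a setup like this (an assumption, not something stated in the post): ICMP is not tracked statefully unless ICMP inspection is enabled, so echo replies coming back from the home VLAN can be treated as new home-to-inside traffic and dropped, even though TCP services work fine. A sketch of enabling it on ASA 7.x:
      policy-map global_policy
       class inspection_default
        inspect icmp
      ! then save the config (write mem)
    If inspection is already on, the same symptom can also come from the Base license "no forward interface Vlan1" restriction on the third VLAN.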

  • How to conform to update-rc.d with LSB standard?

    - by user34881
    This is a question migrated from Stack Overflow, as I was told this is the place for it: http://stackoverflow.com/questions/2263567/how-to-conform-to-update-rc-d-with-lsb-standard
    I have set up a simple script to back up some directories. While I haven't had any problems setting up the functionality, I'm stuck with adding the script to the rcX.d dirs using update-rc.d. My script:
      #! /bin/sh
      ### BEGIN INIT INFO
      # Provides:          backup
      # Required-Start:    backup
      # Required-Stop:
      # Should-Stop:
      # Default-Start:     0 6
      # Default-Stop:
      # Description:       Backs up some dirs
      ### END INIT INFO
      check_mounted() {
          # Check if HD is mounted
      }
      do_backup() {
          if check_mounted; then
              # Some rsync statements.
          fi
      }
      case "$1" in
        start)
          do_backup
          ;;
        restart|reload|force-reload)
          echo "Error: argument '$1' not supported" >&2
          exit 3
          ;;
        stop|"")
          # No-op
          ;;
        *)
          echo "Usage: backup [start]" >&2
          exit 3
          ;;
      esac
      :
    Using update-rc.d backup start 10 0 6 . I get the following warnings and errors:
      update-rc.d: warning: backup start runlevel arguments (none) do not match LSB Default-Start values (0 6)
      update-rc.d: warning: backup stop runlevel arguments (0 6.) do not match LSB Default-Stop values (none)
      update-rc.d: error: start|stop arguments not terminated by "."
    The syntax I am trying to use is: update-rc.d [-n] <basename> start|stop NN runlvl [runlvl] [...] .
    Google wasn't much help in finding a solution. How can I correctly set up a script and add it via update-rc.d? I'm using Ubuntu 9.10.
    UPDATE: Using update-rc.d backup start 10 0 6 . stop 10 0 . the error disappears, but the warnings about default values persist:
      update-rc.d: warning: backup start runlevel arguments (none) do not match LSB Default-Start values (0 6)
      update-rc.d: warning: backup stop runlevel arguments (0 6 0 6) do not match LSB Default-Stop values (none)
    It even gets added to the appropriate rcX.d dirs, but it still does not get executed...
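
    For what it's worth, a minimal sketch of the conventional shape for a "run at shutdown/reboot" script, under the assumption that the backup is meant to fire when entering runlevels 0 and 6: those runlevels go in Default-Stop rather than Default-Start, init calls the script with the "stop" argument on the way down, and the registration arguments then match the header, which is exactly what the two warnings are complaining about.
      #! /bin/sh
      ### BEGIN INIT INFO
      # Provides:          backup
      # Required-Start:
      # Required-Stop:
      # Default-Start:
      # Default-Stop:      0 6
      # Description:       Backs up some dirs before halt/reboot
      ### END INIT INFO
      case "$1" in
        stop)
          # K* links in rc0.d/rc6.d are invoked with "stop" at shutdown,
          # so the actual work goes here (do_backup from the original script)
          do_backup
          ;;
        start|"")
          : # nothing to do at boot
          ;;
      esac
      # register it, assuming the script lives in /etc/init.d/backup:
      #   update-rc.d backup stop 10 0 6 .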

  • Start a software company offshore

    - by Mascarpone
    Hello Everybody, I own a small, very young, EU based (Italy) company, and among other things, we sell IT solutions. I have a degree in applied mathematics, and I mainly deal with user interfaces, embedded systems, automation and web applications. You can say that I'm an enlightened entrepreneur because I work only with open source software (OS, IDE, I release under BSD , ... everything is free as in freedom), I give high importance to post sales services and customer satisfaction, plus I think I'm the best boss someone could desire (LOL), as I have google in mind when I think about IT workers rights. But the most beautiful thing is that, although everybody advised us not to use open source, is that we are quite profitable!!! (for the sixth trimester in a row). Now I offshore most of the work to an Indian company. I divide the work in modules and I outsource the longer or more trivial ones. I spend a lot of time defining the specifications and I leave the hard work to them. Using productivity bonuses, a lot of prototypes and third-party audits I think that my software has reached a very good quality level. I would like to start my own software development company, in order to improve control over process and cut costs. Obviously I can't afford the cost of labor in the EU, so I thought about opening a company in Asia. What I need Is: 1) Cheap labor - I can afford to give productivity bonuses and higher than average wages and stay profitable just because labor is cheap. 2) Many talents - I need a good level of tertiary education, and a good number of graduates, so I can hire junior developers and train and teach them according to my needs and philosophies (e.g.: open source mind) 3) Good infrastructure - buildings, transport, internet, .... everything that a company might need. I thought about 3 possible candidates: 1) India - I already work with indian people, I know that they are realiable and speak a good english. Big cities are too expensive, but maybe a small city like lucknow http://en.wikipedia.org/wiki/Lucknow could suits my needs. 2) China - They say it's cheaper than India, but I everytime I worked with a chineese company the language was a big barrier. They work hard, are somewhat skilled and cheap but maybe it's a risky path. Plus I feel a little uncofortable with their lack of human rights. 3) Philippines - Same as china: cheaper than india, but maybe less educated. Where do you think it's the best place to start a software company? Any reading or book to advise? thank you very much

  • Cannot Send Item error in Outlook - permissions to registry?

    - by Tim Alexander
    The issue I am trying to solve is to do with users getting a Cannot Send Item error in Outlook 2007 connecting to Exchange 2007. Basically if there is an image in the email (either one they have pasted in or one from another email in the chain) they get a "Cannot Send Item" error. Initially thought it was a citrix issue but users get it when they RDP to a server as well. Changing the message to Rich Text works 80% of the time but I do not think this is a solution but more of a temporary workaround. After some troubleshooting we found that the error can be fixed by adding the user as a member of the local power users group. of course this is not really a fix. My thoughts were that the ability of a power user to add/remove software may give them more access to the registry which might allow them to get round a restriction that is in place for a normal user. I have tried going through a procmon but the wealth of information is confusing. It initially looked like it may be an Outlook 2007 email security setting but this does not change between power user and normal user (set to 1 in the registry, "Use the security setting from Outlook Security Settings Public Folders"). I am struggling to fine tune my troubleshooting to work out exactly what is blocking it. Has anyone had an experience with an error similar to this? Or are there any tips for trying to track down issues via procmon as I must admit my approach seems somewhat lacking :) EDIT: So I have trawled through the two logs we have from process monitor (one as a power user and one a normal user). annoyingly I can find no obvious difference where something is denied access. There are more access denied events in the normal user log but these are quickly followed by sucessful entries to the same path fractions of a second later. The only thing that does stand out is an access denied to HKCR.html. This does not even appear in the power user version of the log. From what I understand this helps determine the default browser which ties in nicely with the fact that 9 out of 10 times you can send the message as Rich Text. EDIT: Looks like KB2509470 was causing the issue. Not really sure why but when I can work out what it does and why it causes the problem will post here unless anyone beats me to it!

  • My network drive disappears from Mac OS Finder

    - by Mariusz
    I have recently bought a Netgear WNDR3800 router to use it in my home network. But just the same day I installed it, I noticed a strange behaviour of Finder and iTunes. Let me explain it further. There is a Synology DS111 NAS attached to that router and two Macs with Mac OS X Lion. One of them is connected by a cable and the second one wirelessly. Before I changed my router to the new one I mentioned above, Finder always used to display my NAS on its sidebar. So I could just click its network name to access shared folders existing on it. But after I installed WNDR3800, I can no longer access the NAS that way. It is no longer displayed. I always have to mount it manually by typing its IP address using the Finder's 'connect to server' option. The same NAS supports TimeMachine backups and has an inbuilt DLNA server. And the same situation here. I can't perform a backup because my NAS is no longer accessible in TimeMachine preferences. iTunes does not display it as well (as a multimedia server) even though it used to before I installed that router. What's important, everything works fine for a couple of minutes after I restart the router or the NAS. Or even when I change the NAS's IP address it becomes accessible again in Finder, TimeMachine and iTunes, but only for some time. Both the Mac computers I mentioned behave the same way. And all those issues have been taking place sice I installed that new router. Before I did that, everything had worked fine. My old router was Netgear WGR614v10. Would you be so kind to tell me what you think could possibly be the reason of that behaviour? What settings of the router should I look closer at? I'm not a network specialist, but is it possible that some network packets are blocked for some reason? I will be grateful for any clues you give me. Thank you.

  • How to Configure Source NAT (Private IP => Public IP Outbound)

    - by DavidScherer
    I'm running VMWare ESXi Free and have Zentyal SBS 3.2 running as a Gateway. I have 5 Public IPS (CIDR/29, let's call them 69.1.1.1 - 69.1.1.5) and currently Zentyal is bound to 69.1.1.1 as the Gateway, with the other 4 Public IPs set as Virtual Interfaces in Zentyal (wan2-wan5) I have machines sitting on the Private Network (10.34.251.x) that, when going Outbound (to Google for instance) should be seen by the Internet as an IP other than the Gateway (69.1.1.1), this is because our machines need to be able to communicate with 3rd party APIs that expect these requests to come from a specific IP. From what I could find, SNAT (Source NAT) in Zentyal is used to achieve this, but I'm not sure how to configure it and cannot find a specific piece of Documentation for it at Zentyal. I've tried setting this up a couple different ways, with no results and at this point I have no idea if I'm going about this completely wrong, or my lack of experience with networking and the associated terminology is preventing me from placing the correct values in the correct fields. I get the following form to set up "SNAT" rules in Zentyal: Perhaps someone can offer some guidance and definitions for the fields above? SNAT Address Is this the Public IP I want to masquerade? Outgoing Interface Should this by my External NIC (one connected to Public 'Net), or is it the "Private" interface? It sounds as though this should be the External interface as I want the traffic from the internal network sent Out over this Interface (using a different IP than normal, anyway) Source Is the the Source on the internal network (one of the private IPs?), a public IP I want to masquerade as, or something else entirely? Destination Is this a place on the Internet (eg, "Only do this for the Site Google.com"/IP) or am I allowing myself to become confused again? Service I'm assuming this allows me to restrict which services this rule will apply to, but is it for a service on the internal network or a service being accessed on the external network? If I can offer any further details or information to make what I'm trying to do more clear, I will happily do so. Honestly any kind of help here would be very appreciated. I'm not a NetOps or anything even close, I spend most of my day writing code and my entire "team" at this company consists of "me, myself, and I" so while I try to broaden my KB at every possible opportunity, I can only learn so much, so fast and I feel like with networking especially there's just so much, coupled with a learning curve for each solution that likes to (from my limited perspective) use slightly different terminology that what I'm used to (and I don't exactly have the necessary experience to cross reference this stuff with the stuff I already know in context).
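
    Underneath Zentyal's form this is ordinary netfilter source NAT, which may make the fields easier to map: the SNAT address is the public IP the traffic should appear to come from, the outgoing interface is the external (Internet-facing) NIC, the source is the internal host or subnet whose traffic should be rewritten, and destination/service optionally narrow the rule to particular remote hosts or ports. A bare-bones sketch of the equivalent iptables rule (the interface name is an assumption; the addresses reuse the example ranges from the post):
      # rewrite traffic from the private LAN leaving via the external interface
      # (assumed eth0) so it leaves as 69.1.1.2 instead of the gateway's 69.1.1.1
      iptables -t nat -A POSTROUTING -s 10.34.251.0/24 -o eth0 -j SNAT --to-source 69.1.1.2
    Zentyal normally generates and manages these rules itself; the command is only meant to show what the form is describing.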

  • Virtual Private Hosting DNS configuration

    - by Ciel
    I did a great deal of reading here before posting this because I didn't want to post a duplicate - but I'm on a bit of a deadline and getting frustrated, so here goes... I very, very, very sincerely apologize if this is long winded or hard to read. Please - please just ask for any information or clarification and I will give it as quickly as I possibly can. This has become very frustrating to me and this is the last place I know to turn. I have no experience with setting up DNS, no experience with nameservers, and no peers to go to for help. So this is kind of my last ditch effort. The task of setting up a private server has, through circumstances beyond my control, fallen into my lap. I own a domain (hereafter referred to as yyy.com) and have always used shared hosting - I buy a package and just point it to the domain nameservers they give me. It's always been simple. yyy.com is registered with Network Solutions. Now I have purchased a Virtual Private Hosting package from GoDaddy.com, and it comes with Plesk 11. I have no earthly idea how to begin to get the right nameserver for yyy.com. I have gone through the instructions and have wound up exceedingly frustrated. I have 2 IP addresses from GoDaddy for the server. This is what I have so far, and I cannot tell if it is working (since propagation takes so long, it is extremely hard for me to test):
      IP 1: XX.XX.XX.XX
      IP 2: YY.YY.YY.YY
      (obviously hidden for privacy)
    After going through the documentation setup and waiting a few days, this is the setup I have, and so far it does not appear to be working:
      Host                 Record type   Value
      XX.XX.XX.XX / 24     PTR           yyy.com.
      yyy.com.             NS            ns1.yyy.com.
      yyy.com.             A             XX.XX.XX.XX
      yyy.com.             MX (10)       mail.yyy.com.
      ftp.yyy.com.         CNAME         yyy.com.
      ipv4.yyy.com.        A             XX.XX.XX.XX
      mail.yyy.com.        A             XX.XX.XX.XX
      mssql.yyy.com.       A             XX.XX.XX.XX
      ns1.yyy.com.         A             XX.XX.XX.XX
      ns2.yyy.com.         A             YY.YY.YY.YY
      webmail.yyy.com.     A             XX.XX.XX.XX
      www.yyy.com.         CNAME         yyy.com.
    yyy.com is pointing to both ns1.yyy.com and ns2.yyy.com. Can anyone give me some assistance here? This is a learning experience for me and days of documentation have left me very confused.
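
    To check what the outside world actually sees once records like these are in place (a generic verification sketch, not something from the post), the nameservers can be queried directly so caching does not muddy the result:
      # ask the delegated nameserver directly
      dig @ns1.yyy.com yyy.com A +short
      dig @ns1.yyy.com yyy.com NS +short
      # compare with a public resolver to see what has propagated
      dig @8.8.8.8 yyy.com A +short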

  • how to setup kismet.conf on Ubuntu

    - by Registered User
    I installed Kismet on my Ubuntu 10.04 machine with apt-get install kismet, and everything seems to work fine, but when I launch it I see the following error:
      kismet
      Launching kismet_server: //usr/bin/kismet_server
      Suid priv-dropping disabled.  This may not be secure.
      No specific sources given to be enabled, all will be enabled.
      Non-RFMon VAPs will be destroyed on multi-vap interfaces (ie, madwifi-ng)
      Enabling channel hopping.
      Enabling channel splitting.
      NOTICE: Disabling channel hopping, no enabled sources are able to change channel.
      Source 0 (addme): Opening none source interface none...
      FATAL: Please configure at least one packet source. Kismet will not function if no packet sources are defined in kismet.conf or on the command line. Please read the README for more information about configuring Kismet.
      Kismet exiting.
      Done.
    I followed this guide: http://www.ubuntugeek.com/kismet-an-802-11-wireless-network-detector-sniffer-and-intrusion-detection-system.html#more-1776
    However, in kismet.conf I am not clear about the following line, or what I should change it to:
      source=none,none,addme
    lspci -vnn shows:
      0c:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g [14e4:4315] (rev 01)
          Subsystem: Dell Device [1028:000c]
          Flags: bus master, fast devsel, latency 0, IRQ 17
          Memory at f69fc000 (64-bit, non-prefetchable) [size=16K]
          Capabilities: [40] Power Management version 3
          Capabilities: [58] Vendor Specific Information <?>
          Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
          Capabilities: [d0] Express Endpoint, MSI 00
          Capabilities: [100] Advanced Error Reporting <?>
          Capabilities: [13c] Virtual Channel <?>
          Capabilities: [160] Device Serial Number
          Capabilities: [16c] Power Budgeting <?>
          Kernel driver in use: wl
          Kernel modules: wl, ssb
    and iwconfig shows:
      lo        no wireless extensions.
      eth0      no wireless extensions.
      eth1      IEEE 802.11bg  ESSID:"WIKUCD"
                Mode:Managed  Frequency:2.462 GHz  Access Point: <00:43:92:21:H5:09>
                Bit Rate=11 Mb/s   Tx-Power:24 dBm
                Retry min limit:7   RTS thr:off   Fragment thr:off
                Encryption key:off
                Power Management mode:All packets received
                Link Quality=1/5  Signal level=-81 dBm  Noise level=-90 dBm
                Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                Tx excessive retries:169  Invalid misc:0   Missed beacon:0
    So what should I be putting in place of source=none,none,addme, given the output above?
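
    For reference, the placeholder line follows the pattern source=<capture driver>,<interface>,<descriptive name>, so the middle field here would be eth1. The catch (an assumption based on the lspci output above, not something tested on this machine): the proprietary Broadcom wl driver currently bound to the card generally cannot be put into the monitor mode Kismet needs, so the card usually has to be switched to the in-kernel driver first, and the first field must then name the matching capture driver from Kismet's README. A hypothetical example only:
      # kismet.conf - hypothetical; verify the driver keyword against the
      # capture-source table in the README of the Kismet version installed
      source=b43,eth1,addme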

  • amazon ec2 ubuntu with gitlab and nginx - cant load?

    - by thebluefox
    Ok, so I've spooled up an Amazon EC2 server running Ubuntu, and then followed the instructions below to install GitLab; http://doc.gitlab.com/ce/install/installation.html The only step I've not been able to complete is running the following check on the status; sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production I get the following error; rake aborted! Errno::ENOMEM: Cannot allocate memory - whoami Which I presume is becuase my EC2 is just running a free tier setup, so isn't that well spec'd. Regardless, I've been trying to access this through my browser. I've set up the elastic IP and pointed my domain at it (for the purpose of this, lets say its git.mydom.co.uk). Doing a whois on this domain shows me its pointing to the right place. For some reason though, I get the "Oops, Chrome could not connect to git.mydom.co.uk". Now - for a period of time I was getting the Nginx holding page (telling me I still needed to perform configuration). This though disappeared after removing the default file from /etc/nginx/sites-enabled/ (after reading this could be issue on a troubleshooting page). Since then, I've had nothing, even when I symlinked the file back in from /sites-available. I've tried changing the owner of the git.mydom.co.uk file sat inside /sites-enabled and /sites-available to www-data, as suggested here, but I could only change the permission of the file in /sites-available, and not the symlinked one in /sites-enabled. The content of this file is as follows; upstream gitlab { server unix:/home/git/gitlab/tmp/sockets/gitlab.socket; } server { listen *:80 default_server; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea server_name git.mydom.co.uk; # e.g., server_name source.example.com; server_tokens off; # don't show the version number, a security best practice root /home/git/gitlab/public; # Increase this if you want to upload large attachments # Or if you want to accept large git objects over http client_max_body_size 20m; # individual nginx logs for this gitlab vhost access_log /var/log/nginx/gitlab_access.log; error_log /var/log/nginx/gitlab_error.log; location / { # serve static files from defined root folder;. # @gitlab is a named location for the upstream fallback, see below try_files $uri $uri/index.html $uri.html @gitlab; } All the paths mentioned in here look ok...I'm about at the end of my knowledge now!
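
    One hedged observation before digging further into nginx: Errno::ENOMEM from a rake task on a free-tier instance usually just means the box ran out of memory, and a common workaround is to give it some swap so the GitLab checks and asset precompilation can finish. A sketch (size and path are arbitrary choices):
      # create and enable a 1 GB swap file
      sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
      sudo chmod 600 /swapfile
      sudo mkswap /swapfile
      sudo swapon /swapfile
      free -m   # confirm the swap is visible
    If the rake check then passes, it becomes easier to tell whether the remaining connection problem is really on the nginx side.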

  • Ubuntu cannot access internet, LAN is fine

    - by Kevin Southworth
    I have an Ubuntu 8.04 LTS server that is directly connected to our Comcast Business Gateway modem and I have configured it with 1 of our 5 allotted Static IPs. My other machines on our LAN can connect to this server (via ssh, web, ping, etc.) but I cannot access this server from outside our network, and this machine cannot get out to the internet either (ping google.com fails with unknown host). Here is my /etc/networking/interfaces file: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 173.162.54.19 netmask 255.255.255.248 broadcast 173.162.54.23 gateway 173.162.54.22 and my /etc/resolv.conf: nameserver 68.87.77.130 nameserver 68.87.72.130 output from sudo route -n: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 173.162.54.16 0.0.0.0 255.255.255.248 U 0 0 0 eth0 0.0.0.0 173.162.54.22 0.0.0.0 UG 100 0 0 eth0 I have a Windows 2008 machine with an almost identical Static IP, static DNS setup and it works correctly, can access it within the LAN and also from public internet, the Windows machine and the Ubuntu machine are both directly connected to the Comcast Business Gateway. I have tried rebooting Ubuntu, rebooting my Comcast modem, but nothing seems to make it work. I'm an Ubuntu noob, is there some other config I need to apply to make this work? UPDATE: Yes I am able to ping my default gateway 173.162.54.22 output of iptables --list -n: Chain INPUT (policy DROP) target prot opt source destination ufw-before-input all -- 0.0.0.0/0 0.0.0.0/0 ufw-after-input all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) target prot opt source destination ufw-before-forward all -- 0.0.0.0/0 0.0.0.0/0 ufw-after-forward all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) target prot opt source destination ufw-before-output all -- 0.0.0.0/0 0.0.0.0/0 ufw-after-output all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-after-forward (1 references) target prot opt source destination LOG all -- 0.0.0.0/0 0.0.0.0/0 limit: avg 3/min burst 10 LOG flags 0 level 4 prefix `[UFW BLOCK FORWARD]: ' RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-after-input (1 references) target prot opt source destination RETURN udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:137 RETURN udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:138 RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:139 RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:445 RETURN udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:67 RETURN udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68 LOG all -- 0.0.0.0/0 0.0.0.0/0 limit: avg 3/min burst 10 LOG flags 0 level 4 prefix `[UFW BLOCK INPUT]: ' RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-after-output (1 references) target prot opt source destination RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-before-forward (1 references) target prot opt source destination ufw-user-forward all -- 0.0.0.0/0 0.0.0.0/0 RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-before-input (1 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED DROP all -- 0.0.0.0/0 0.0.0.0/0 ctstate INVALID ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 3 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 4 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 11 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 12 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 8 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:67 dpt:68 ufw-not-local all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 224.0.0.0/4 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 224.0.0.0/4 ufw-user-input all -- 0.0.0.0/0 0.0.0.0/0 RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-before-output 
(1 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW,RELATED,ESTABLISHED ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW,RELATED,ESTABLISHED ufw-user-output all -- 0.0.0.0/0 0.0.0.0/0 RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-not-local (1 references) target prot opt source destination RETURN all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL RETURN all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type MULTICAST RETURN all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type BROADCAST LOG all -- 0.0.0.0/0 0.0.0.0/0 limit: avg 3/min burst 10 LOG flags 0 level 4 prefix `[UFW BLOCK NOT-TO-ME]: ' DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-user-forward (1 references) target prot opt source destination RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-user-input (1 references) target prot opt source destination ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:80 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:22 RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-user-output (1 references) target prot opt source destination RETURN all -- 0.0.0.0/0 0.0.0.0/0

  • Cisco PIX firewall blocking inbound Exchange email

    - by sumsaricum
    [Cisco PIX, SBS2003] I can telnet server port 25 from inside but not outside, hence all inbound email is blocked. (as an aside, inbox on iPhones do not list/update emails, but calendar works a charm) I'm inexperienced in Cisco PIX and looking for some assistance before mails start bouncing :/ interface ethernet0 auto interface ethernet1 100full nameif ethernet0 outside security0 nameif ethernet1 inside security100 hostname pixfirewall domain-name ciscopix.com fixup protocol dns maximum-length 512 fixup protocol ftp 21 fixup protocol h323 h225 1720 fixup protocol h323 ras 1718-1719 fixup protocol http 80 fixup protocol rsh 514 fixup protocol rtsp 554 fixup protocol sip 5060 fixup protocol sip udp 5060 fixup protocol skinny 2000 no fixup protocol smtp 25 fixup protocol sqlnet 1521 fixup protocol tftp 69 names name 192.168.1.10 SERVER access-list inside_outbound_nat0_acl permit ip 192.168.1.0 255.255.255.0 192.168.1.96 255.255.255.240 access-list outside_cryptomap_dyn_20 permit ip any 192.168.1.96 255.255.255.240 access-list outside_acl permit tcp any host 213.xxx.xxx.xxx eq 3389 access-list outside_acl permit tcp any interface outside eq ftp access-list outside_acl permit tcp any host 213.xxx.xxx.xxx eq https access-list outside_acl permit tcp any host 213.xxx.xxx.xxx eq www access-list outside_acl permit tcp any interface outside eq 993 access-list outside_acl permit tcp any interface outside eq imap4 access-list outside_acl permit tcp any interface outside eq 465 access-list outside_acl permit tcp any host 213.xxx.xxx.xxx eq smtp access-list outside_cryptomap_dyn_40 permit ip any 192.168.1.96 255.255.255.240 access-list COMPANYVPN_splitTunnelAcl permit ip 192.168.1.0 255.255.255.0 any access-list COMPANY_splitTunnelAcl permit ip 192.168.1.0 255.255.255.0 any access-list outside_cryptomap_dyn_60 permit ip any 192.168.1.96 255.255.255.240 access-list COMPANY_VPN_splitTunnelAcl permit ip 192.168.1.0 255.255.255.0 any access-list outside_cryptomap_dyn_80 permit ip any 192.168.1.96 255.255.255.240 pager lines 24 icmp permit host 217.157.xxx.xxx outside mtu outside 1500 mtu inside 1500 ip address outside 213.xxx.xxx.xxx 255.255.255.128 ip address inside 192.168.1.1 255.255.255.0 ip audit info action alarm ip audit attack action alarm ip local pool VPN 192.168.1.100-192.168.1.110 pdm location 0.0.0.0 255.255.255.128 outside pdm location 0.0.0.0 255.255.255.0 inside pdm location 217.yyy.yyy.yyy 255.255.255.255 outside pdm location SERVER 255.255.255.255 inside pdm logging informational 100 pdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_outbound_nat0_acl nat (inside) 1 0.0.0.0 0.0.0.0 0 0 static (inside,outside) tcp 213.xxx.xxx.xxx 3389 SERVER 3389 netmask 255.255.255.255 0 0 static (inside,outside) tcp 213.xxx.xxx.xxx smtp SERVER smtp netmask 255.255.255.255 0 0 static (inside,outside) tcp 213.xxx.xxx.xxx https SERVER https netmask 255.255.255.255 0 0 static (inside,outside) tcp 213.xxx.xxx.xxx www SERVER www netmask 255.255.255.255 0 0 static (inside,outside) tcp interface imap4 SERVER imap4 netmask 255.255.255.255 0 0 static (inside,outside) tcp interface 993 SERVER 993 netmask 255.255.255.255 0 0 static (inside,outside) tcp interface 465 SERVER 465 netmask 255.255.255.255 0 0 static (inside,outside) tcp interface ftp SERVER ftp netmask 255.255.255.255 0 0 access-group outside_acl in interface outside route outside 0.0.0.0 0.0.0.0 213.zzz.zzz.zzz timeout xlate 0:05:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h225 
1:00:00 timeout h323 0:05:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00 timeout sip-disconnect 0:02:00 sip-invite 0:03:00 timeout uauth 0:05:00 absolute aaa-server TACACS+ protocol tacacs+ aaa-server TACACS+ max-failed-attempts 3 aaa-server TACACS+ deadtime 10 aaa-server RADIUS protocol radius aaa-server RADIUS max-failed-attempts 3 aaa-server RADIUS deadtime 10 aaa-server RADIUS (inside) host SERVER *** timeout 10 aaa-server LOCAL protocol local http server enable http 217.yyy.yyy.yyy 255.255.255.255 outside http 192.168.1.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server community public no snmp-server enable traps floodguard enable sysopt connection permit-ipsec crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac crypto dynamic-map outside_dyn_map 20 match address outside_cryptomap_dyn_20 crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-MD5 crypto dynamic-map outside_dyn_map 40 match address outside_cryptomap_dyn_40 crypto dynamic-map outside_dyn_map 40 set transform-set ESP-3DES-MD5 crypto dynamic-map outside_dyn_map 60 match address outside_cryptomap_dyn_60 crypto dynamic-map outside_dyn_map 60 set transform-set ESP-3DES-MD5 crypto dynamic-map outside_dyn_map 80 match address outside_cryptomap_dyn_80 crypto dynamic-map outside_dyn_map 80 set transform-set ESP-3DES-MD5 crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map crypto map outside_map client authentication RADIUS LOCAL crypto map outside_map interface outside isakmp enable outside isakmp policy 20 authentication pre-share isakmp policy 20 encryption 3des isakmp policy 20 hash md5 isakmp policy 20 group 2 isakmp policy 20 lifetime 86400 telnet 217.yyy.yyy.yyy 255.255.255.255 outside telnet 0.0.0.0 0.0.0.0 inside telnet timeout 5 ssh 217.yyy.yyy.yyy 255.255.255.255 outside ssh 0.0.0.0 255.255.255.0 inside ssh timeout 5 management-access inside console timeout 0 dhcpd address 192.168.1.20-192.168.1.40 inside dhcpd dns SERVER 195.184.xxx.xxx dhcpd wins SERVER dhcpd lease 3600 dhcpd ping_timeout 750 dhcpd auto_config outside dhcpd enable inside : end I have Kiwi SysLog running but could use some pointers in that regard to narrow down the torrent of log messages, if that helps?!

  • a couple of questions about proxy server,vpn & how they works

    - by Q8Y
    I have a couple of questions that are related to security. Correct me if i'm wrong :) If I want to request something (ex: visiting www.google.com): my computer will request that then it will to the ISP then to my ISP proxy server that will take the request and act as a middle man in this situation ask for the site (www.google.com) and retrieve it then the proxy will send it back to me. I know that its being done like that. So, my question is that, in this situation my ISP knows everything and what I did request, and the proxy server is set by default (when I ask for an internet subscription). So, if I use here another proxy (lets assume that is a highly anonymous and my ISP can't detect my IP address from it), would I visit my ISP and then from my ISP it will redirect me to the new proxy server that I provide? Will it know that there is someone using another proxy? Or will it go to another network rather than my ISP? Because I didn't get the view clearly. This question is related to the first one. When I use a VPN, I know that VPN provides for me a tunneling, encryption and much more features that a proxy can't. So my data is travelling securely and my ISP can't know what I'm doing. But my questions are: From where is the tunneling started? Does it start after I visit the ISP network (since they are the one that are responsible for forwarding my data and requests)? If so, then not all my connection is tunneled in this way, there is a part that is not being tunneled. Since, every time I need to do anything I have to go to my ISP and ask to do that. Correct me if I misunderstand this. I know that VPN can let my computer be virtually in another place and access its resources (ex: be like in my office while I'm in my home. This is done via VPN). If I use a VPN service provider so that I can access the internet securely and without being monitored by my ISP. In this case, where is my encrypted data saved? Is it saved in my ISP or in the VPN service provider? If I use a VPN, does anyone on the internet know what I'm doing or who I am? Even the VPN service provider? Can they know me? I think they should know the person that is asking for this VPN service, am I right?

  • Secure copy uucp style

    - by Alexander Janssen
    I often have the case that I have to make a lot of hops to the remote host, just because there is no direct routing between my client and the remote host. When I need to copy files from a remote host two or more hops away, I always have to: client$ ssh host1 host1$ ssh host2 host2$ scp host3:/myfile . host2$ exit host1$ scp host2:myfile . host1$ exit client$ scp host1:myfile . Back when uucp still was being used this would be as simple as a uucp host1!host2!host3 /myfile . I know that there's uucp over ssh, but unfortunately I don't have the proper privileges on those machines to set it up. Also, I'm not sure if I really want to fiddle around with customer's machines. Does anyone know of a method doing this tasks without the need to setup a lot of tunnels or deploying new software to remote hosts? Maybe some kind of recursive script which clones itself to all the remote hosts, doing the hard work for me? Assume that authentication takes place with public keys and that all hosts do SSH Agent Forwarding. Edit: I'm not looking for a way to automatically forwarding my interactive sesssion to the nexthop host. I want a solution to copy files bangpath-style using scp via multiple hops without the need to install uucp on any of those machines. I don't have the (legal) rights or the privileges to make permanent changes to the ssh-config. Also, I'm sharing this username and hosts with a lot of other people. I'm willing to hack up my own script, but I wanted to know if anyone knows something which already does it. Minimum-invasive changes to hosts on the bangpath, simple invocation from the client. Edit 2: To give you an impression of how it's properly been done in interactive sessions, have a look at the GXPC clustershell. This is basically a Python-script, which spwans itself over to all remote hosts which have connectivity and where your ssh-key is installed. The great thing about it is, that you can tell "I can reach HostC via HostB via HostA." It just works. I want to have this for scp.
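
    A sketch of one way to get a uucp-style one-liner without installing anything on the intermediate hosts, assuming a reasonably recent OpenSSH client on your side: chain the hops with ProxyCommand so every ssh in the chain runs on the client and the copy is driven entirely from there.
      # client-side ~/.ssh/config only; nothing changes on host1/host2/host3
      Host host2
          ProxyCommand ssh host1 -W %h:%p
      Host host3
          ProxyCommand ssh host2 -W %h:%p
    Then a single command on the client walks the whole path:
      scp host3:/myfile .
    The same ProxyCommand options can be passed with -o on the scp/ssh command line if even the local config file is off-limits, and public-key authentication from the local agent works at each hop because every ssh process in the chain is running on the client.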

  • Nginx not working properly on subdomains [SOLVED]

    - by javipas
    I've been trying to set up a SugarCRM instance. I've got a domain whose main site is on one server (www.domain.com) and I've created a subdomain (sugar.domain.com), but I want this subdomain to be hosted on another server. This second server has nginx installed, and there's a working WordPress blog there on a virtualhost, so I need to set up a second site. To do this I've created the directory structure and a /etc/nginx/sites-enabled/sugar.domain.com configuration file with the following:
      server {
          listen 80;
          server_name sugar.domain.com *.domain.com;
          access_log /var/www/sugar/log/access.log;
          error_log /var/www/sugar/log/error.log info;
          location / {
              root /var/www/sugar;
              index index.php;
          }
          location ~ .php$ {
              fastcgi_split_path_info ^(.+\.php)(.*)$;
              fastcgi_pass backend;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /var/www/sugar/$fastcgi_script_name;
              include fastcgi_params;
              fastcgi_param QUERY_STRING $query_string;
              fastcgi_param REQUEST_METHOD $request_method;
              fastcgi_param CONTENT_TYPE $content_type;
              fastcgi_param CONTENT_LENGTH $content_length;
              fastcgi_intercept_errors on;
              fastcgi_ignore_client_abort on;
              fastcgi_read_timeout 180;
          }
          ## Disable viewing .htaccess & .htpassword
          location ~ /\.ht {
              deny all;
          }
      }
      upstream backend {
          server 127.0.0.1:9000;
      }
    As far as I know, I need the *.domain.com parameter in the "server_name" directive, but something is breaking here: I get either a 403 Forbidden error, or I get PHP code (I can read the PHP file source in the browser, like normal text) that somehow is not executed. I've tried setting permissions to 755 inside the /var/www/sugar/ directory, and I've also set the owner:group with chown -R www-data:www-data /var/www/sugar/. The thing is, I don't know if my mistake is in the nginx site configuration, in my folder permissions, or somewhere else :( Could it be because the main domain (www.domain.com) is hosted on the other server? Do they have to be together necessarily?
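
    Two details in that configuration may be worth a second look (hedged suggestions, not a confirmed diagnosis): the PHP location uses an unescaped dot (.php$ matches more than intended; \.php$ is the usual form), and root is defined only inside location /, so it is not inherited by the PHP location; both are common causes of 403s or of PHP source being served as plain text. A sketch of the usual shape, reusing the upstream from the config above:
      server {
          listen 80;
          server_name sugar.domain.com;
          root /var/www/sugar;          # server-level root is visible to every location
          index index.php;

          location ~ \.php$ {
              include fastcgi_params;
              fastcgi_pass backend;     # the upstream defined in the original file
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          }
      }
    Whether the main domain lives on another server should not matter, as long as sugar.domain.com resolves to this machine and matches server_name here.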

  • Single domain name potentially resolving to multiple servers

    - by Jace
    first time here at Server Fault, and I apologize in advance that this domain stuff is not really my strength. Any and all suggestions are much appreciated. I am completely lost and incredibly tired! I've inherited an incredibly convoluted system from my predecessor, and I'm trying to find a way to solve it - or I need to be told that it just isn't possible. I've got an old site on ServerA (some kind of Linux distribution), with the domain SomeDomain.com There is a new site sitting on ServerB (Ubuntu), with the intention of having SomeDomain.com to serve it in the future (it is replacing the old site) ServerA also has a web app that is currently in use by other departments within the company (accessible at SomeDomain.com/web-app/) The goal: To have SomeDomain.com and all extensions of this domain name (sub-domains, URL's etc.) serve the new site on ServerB. BUT, the URL SomeDomain.com/web-app/ must serve the Web App on ServerA. The Catch: The ServerA is a shared server with a hosting company with VERY limiting restrictions in place - I cannot adjust DNS settings (apart from Name servers - but cannot set A records or anything, I have full access to ServerB to do as I wish). Therefore the web-app MUST be served from SomeDomain.com/web-app/ and not from a sub-domain or anything. These limitations make migrating the web-app from Server A to Server B rather undesirable, AND this web-app will be replaced in the near future, so it isn't worth the effort right now. Therefore, ultimately I will want 1 domain name to resolve to Server B's IP address most of the time, but in the event that the URL is SomeDomain.com/web-app/, it should resolve to Server A's IP. Note: The domain names don't, technically, have to resolve to one IP or another - but ultimately the URL's must stay consistent Some things I have tried: I've looked into mod_rewrite and .htaccess to try and achieve this effect, but it doesn't look like it's going to work for me - but I may have done it wrong (On Server B, I just checked if the request URI was /web-app/ and tried to serve the /web-app/ folder on Server A) I do have the ability to modify the name servers on both servers I am not able to make a sub domain on Server A that points back to Server A (I assume because the hosting company's servers use the URL to determine what site the serve). I figured this could be good as I'd could set an A record on Server B to point to the web app on Server A - but alas, Server A requires SomeDomain.com. If there is any more information I can give, please let me know. I need a nudge in the right direction, ideas or a solution.

  • Have a server, need to figure out a method of backup

    - by PolishHurricane
    My company has an older Dell 2650 server running ArchLinux x64: http://www.dell.com/downloads/global/products/pedge/en/2650_specs.pdf (2 x 2.4GHz Intel Xeon w/around 3287 RAM according to "free -m") We use it to host our internal company site and to post some information from our orders to and we'd like the ability to keep it up as much as possible. What we require: - It needs to always be functional from 8am to 4pm for our data entry person to use it and others to do other things required on it. - If it goes down, we need a quick way to get the machine running again. - If it goes down, we would like to have the data backed up. Some of the major problems include: - The servers old and it may have memory issues - We don't know when one of the hard drives could fail - Our power goes out here once in a while We have a battery backup, but that's pretty much it and it's not for long term. If the server does go down, we have another system in place to store order information that comes in while it's down and repost it when it's back, but we need it up during the day. So we're wondering, what should we get for options? These are the things we thought of, sort of: Setup RAID 1, but that would involve wiping everything right? If we do that, how would we transfer the data over without messing up the server? We could buy an extra server or 2 off eBay for $100, the same model, is that practical or should we get something else? Should we buy a PC or another better server and host off that because it is if anything easier to exchange parts? Should we keep extra parts handy incase it implodes? Should we buy/use backup software? We hear drobo's are cool, but suck. Perhaps there is a software solution to this problem that backs up to another machine or gets us up and running again quickly. Also, if we are to purchase hardware, what is decent? Does anybody know of one for ArchLinux/Linux? We both know a ton about computers but we're kind of unsure what step to take with this, especially with this type of server. Thanks

    Read the article

  • ZFS Recover from Faulted Pool State

    - by nickv2002
    I have a six disk ZFS raidz1 pool and had a recent failure requiring a disk replacement. No problem normally, but this time my server hardware died before I could do the replacement (but after and unrelated to the drive failure as far as I can tell). I was able to get another machine from a friend to rebuild the system, but in the process of moving my drives over I had to swap their cables around a bunch until I got the right configuration where the remaining 5 good disks were seen as online. This process seems to have generated some checksum errors for the pool/raidz.
    I have the 5 remaining drives set up now and a good drive installed and ready to take the place of the drive that died. However, since my pool state is FAULTED I'm unable to do the replacement:

        root@zfs:~# zpool replace tank 1298243857915644462 /dev/sdb
        cannot open 'tank': pool is unavailable

    Is there any way to recover from this error? I would think that having 5 of the 6 drives online would be enough to rebuild the right data, but that doesn't seem to be enough now. Here's the status log of my pool:

        root@zfs:~# zpool status tank
          pool: tank
         state: FAULTED
        status: One or more devices could not be used because the label is missing or invalid.
                There are insufficient replicas for the pool to continue functioning.
        action: Destroy and re-create the pool from a backup source.
           see: http://zfsonlinux.org/msg/ZFS-8000-5E
          scan: none requested
        config:

                NAME                     STATE     READ WRITE CKSUM
                tank                     FAULTED      0     0     1  corrupted data
                  raidz1-0               ONLINE       0     0     8
                    sdd                  ONLINE       0     0     0
                    sdf                  ONLINE       0     0     0
                    sdh                  ONLINE       0     0     0
                    1298243857915644462  UNAVAIL      0     0     0  was /dev/sdb1
                    sde                  ONLINE       0     0     0
                    sdg                  ONLINE       0     0     0

    Update (10/31): I tried to export and re-import the array a few times over the past week and wasn't successful. First I tried:

        zpool import -f -R /tank -N -o readonly=on -F tank

    That produced this error immediately:

        cannot import 'tank': I/O error
        Destroy and re-create the pool from a backup source.

    I added the '-X' option to the above command to try to make it check the transaction log. I let that run for about 48 hours before giving up because it had completely locked up my machine (I was unable to log in locally or via the network). Now I'm trying a simple zpool import tank command and that seems to run for a while with no output. I'll leave it running overnight to see if it outputs anything.
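    A couple of diagnostic directions that may be worth trying before destroying the pool - offered only as guesses, since the -F/-X route has already been attempted. The cable shuffling means the sdX names have probably moved, so checking each disk's on-disk label and importing by stable device IDs can rule out a simple device-path mismatch:

        # Inspect the ZFS label a disk actually carries (repeat for each drive)
        zdb -l /dev/sdd

        # Ask ZFS to scan by persistent IDs rather than the shuffled sdX names
        zpool import -d /dev/disk/by-id

        # If the pool shows up there, attempt a forced, read-only recovery import
        zpool import -d /dev/disk/by-id -o readonly=on -f -F tank

    A read-only import, if it succeeds, at least allows the data to be copied off before any destructive repair is attempted.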

    Read the article

  • Which hardware to VM ratio for Build-Server virtualization?

    - by Martin
    Let's start by saying that I'm a total noob wrt. server virtualization. That is, I use VMs often during development, but they're simple desktop-machine things for me.
    Now to my problem: we have two (physical) build servers, one master, one slave, running Jenkins to do daily tasks and build (Visual C++ builds) the release packages for our software. As such these machines are critical to our company, because we do lots of releases and without a controlled environment to create them we can't ship fixes. (Currently there's no proper backup of these machines in place, because they don't hold any data as such - it would just be a major pain to set them up again should they go bust. But setting up a backup that I'd know would work in case of HW failure would be even more pain, so we have skipped that until now.)
    Therefore (and for scaling purposes) we would like to go virtual with these machines. Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts. Each build server (master or slave) is a fully configured (installs, licenses, shares in the case of the master, ...) Windows Server box. I would ideally like to just convert the two existing physical nodes to VM images and run them, and later add more VM slave instances as clones of the existing ones (a P2V sketch follows below). And here begin my questions:
    - Should I go for one VM per hardware box, or for a setup where a single piece of hardware runs multiple VMs? The latter would mean a single point of failure hardware-wise, which doesn't seem like a good idea ... or does it?
    - Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores + disk) will be fully utilized, so running more than one build node per physical host doesn't seem to make much sense??
    - Wrt. hardware options, does it make any difference which VM software we use (VMware, MS, VirtualBox, ...)? (We're using Windows exclusively for our builds.)
    Regarding budget: we have a normal small-company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$, it's going to cost. If it's free, the better. I strongly prefer solutions where there are no multi-k$ maintenance costs per year.
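    For the "convert the existing physical nodes to VM images" step, one commonly used low-effort path on Windows is a physical-to-virtual capture with Sysinternals Disk2vhd, assuming the target ends up being Hyper-V (VMware's vCenter Converter fills the same role for ESXi hosts). The share path below is purely a placeholder, and the exact command-line options should be checked against the Disk2vhd version in use:

        rem Hypothetical run on the master build server: capture all local
        rem volumes into a single VHDX on a network share while Windows is live.
        disk2vhd.exe * \\storage\p2v\build-master.vhdx

    The resulting image can then be attached to a new VM, which also doubles as the missing "backup" of a fully configured build box.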

    Read the article

  • How to open a server port outside of an OpenVPN tunnel with a pf firewall on OSX (BSD)

    - by Timbo
    I have a Mac mini that I use as a media server running XBMC; it serves media from my NAS to my stereo and TV (which has been color calibrated with a Spyder3Express, happy). The Mac runs OSX 10.8.2 and the internet connection is tunneled for general privacy over OpenVPN through Tunnelblick. I believe my anonymous VPN provider pushes "redirect_gateway" to OpenVPN/Tunnelblick, because when it's on it effectively tunnels all non-LAN traffic in- and outbound. As an unwanted side effect that also exposes the box's server ports unprotected to the outside world and bypasses my firewall-router (Netgear SRX5308). I have run nmap from outside the LAN against the VPN IP and the server ports on the mini are clearly visible and connectable. The mini has the following ports open: ssh/22, ARD/5900 and 8080+9090 for the XBMC iOS client Constellation.
    I also have a Synology NAS which, apart from LAN file serving over AFP and WebDAV, only serves up an OpenVPN/1194 and a PPTP/1723 server. When outside of the LAN I connect to this from my laptop over OpenVPN and from my iPhone over PPTP. I only want to connect through AFP/548 from the mini to the NAS. The border firewall (SRX5308) just works excellently, stable and with a very high throughput when streaming from various VOD services. My connection is a 100/10 with close to theoretical max throughput. The ruleset is as follows:
    Inbound:
    - PPTP/1723: Allow always to 10.0.0.40 (NAS/VPN server) from a restricted IP range corresponding to the possible cell provider range
    - OpenVPN/1194: Allow always to 10.0.0.40 (NAS/VPN server) from any
    Outbound:
    - Default outbound policy: Allow Always
    - OpenVPN/1194 TCP: Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider)
    - OpenVPN/1194 UDP: Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider)
    - Block always from NAS to any
    On the mini I have disabled the OSX application-level firewall because it throws popups which don't remember my choices from one time to another, and that's annoying on a media server. Instead I run Little Snitch, which controls outgoing connections nicely on an application level. I have configured the excellent OSX builtin firewall pf (from BSD) as follows - pf.conf (Apple App firewall tie-ins removed, # replaced with % to avoid formatting errors):

        ### macro name for external interface.
        eth_if = "en0"
        vpn_if = "tap0"
        ### wifi_if = "en1"
        ### %usb_if = "en3"
        ext_if = $eth_if
        LAN="{10.0.0.0/24}"

        ### General housekeeping rules ###
        ### Drop all blocked packets silently
        set block-policy drop
        ### all incoming traffic on external interface is normalized and fragmented
        ### packets are reassembled.
        scrub in on $ext_if all fragment reassemble
        scrub in on $vpn_if all fragment reassemble
        scrub out all
        ### exercise antispoofing on the external interface, but add the local
        ### loopback interface as an exception, to prevent services utilizing the
        ### local loop from being blocked accidentally.
        ###
        set skip on lo0
        antispoof for $ext_if inet
        antispoof for $vpn_if inet
        ### spoofing protection for all interfaces
        block in quick from urpf-failed
        #############################
        block all
        ### Access to the mini server over ssh/22 and remote desktop/5900 from LAN/en0 only
        pass in on $eth_if proto tcp from $LAN to any port {22, 5900, 8080, 9090}
        ### Allow all udp and icmp also, necessary for Constellation. Could be tightened.
        pass on $eth_if proto {udp, icmp} from $LAN to any
        ### Allow AFP to 10.0.0.40 (NAS)
        pass out on $eth_if proto tcp from any to 10.0.0.40 port 548
        ### Allow OpenVPN tunnel setup over unprotected link (en0) only to VPN provider IPs
        ### and port ranges
        pass on $eth_if proto tcp from any to a.b.8.0/24 port 1194:1201
        ### OpenVPN Tunnel rules. All traffic allowed out, only in to ports 4100-4110
        ### Outgoing pings ok
        pass in on $vpn_if proto {tcp, udp} from any to any port 4100:4110
        pass out on $vpn_if proto {tcp, udp, icmp} from any to any

    So what are my goals, and what does the above setup achieve (until you tell me otherwise :)?
    1) Full LAN access to the above ports on the mini/media server (including through my own VPN server)
    2) All internet traffic from the mini/media server is anonymized and tunneled over VPN
    3) If OpenVPN/Tunnelblick on the mini drops the connection, nothing is leaked, both because of pf and the router's outgoing ruleset. It can't even do a DNS lookup through the router.
    So what do I have to hide with all this? Nothing much really, I just got carried away trying to stop port scans through the VPN tunnel :) In any case this setup works perfectly and it is very stable.
    The problem at last! I want to run a Minecraft server, and I installed it under a separate user account on the mini (user=mc) to keep things partitioned. I don't want this server accessible through the anonymized VPN tunnel, because there are lots more port scans and hacking attempts through that than over my regular IP, and I don't trust Java in general. So I added the following pf rules on the mini:

        ### Allow Minecraft public through user mc
        pass in on $eth_if proto {tcp,udp} from any to any port 24983 user mc
        pass out on $eth_if proto {tcp, udp} from any to any user mc

    And these additions on the border firewall:
    - Inbound: Allow always TCP/UDP from any to 10.0.0.40 (NAS)
    - Outbound: Allow always TCP port 80 from 10.0.0.40 to any (needed for online account checkups)
    This works fine, but only when the OpenVPN/Tunnelblick tunnel is down. When it's up, no connection is possible to the Minecraft server from outside the LAN; inside the LAN it's always OK. Everything else functions as intended. I believe the redirect_gateway push is close to the root of the problem, but I want to keep this specific VPN provider because of the fantastic throughput, price and service.
    The solution? How can I open up the Minecraft server port outside of the tunnel so it's only available over en0, not the VPN tunnel? Should I add a static route? But I don't know which IPs will be connecting... stumbles. How secure would you estimate this setup to be, and do you have other improvements to share? I've searched extensively in the last few days to no avail... If you've read this far I bet you know the answer :)
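    One idea worth testing, offered only as a sketch: the symptom (reachable with the tunnel down, unreachable with it up) looks like classic asymmetric routing - the connection arrives on en0 but the replies follow the redirect_gateway default route out through tap0. pf's reply-to option can pin replies for a matching rule back to the interface and gateway the traffic came in on, sidestepping the routing table for that port. The 10.0.0.1 below is a guess at the SRX5308's LAN address; substitute the real default gateway on en0:

        ### Hypothetical replacement for the Minecraft rule: force replies back out en0
        ### via the LAN gateway instead of the VPN tunnel's default route.
        pass in on $eth_if reply-to ($eth_if 10.0.0.1) proto {tcp, udp} from any to any port 24983 user mc

    Whether the user-based match and reply-to combine cleanly on OSX's pf build is something to verify with pfctl -nf /etc/pf.conf (and a test from outside the LAN) before relying on it.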

    Read the article
