Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • Building nginx 1.0.4 on Amazon EC2 micro - perl and python problems

    - by digitaltoast
    I'd like to run nginx as a reverse proxy with apache2 on my EC2 micro instance. yum install nginx gives me nginx-0.8.53-1.2.amzn1.x86_64.rpm The current nginx is 1.0.4 I found and followed this guide: http://kdn2.info/2011/05/install-nginx-on-amazon-ec2/ It works fine up to and including "make". When I get to checkinstall --fstrans=no I get ERROR: ld.so: object '/usr/lib/installwatch.so' from LD_PRELOAD cannot be preloaded: ignored. test -d '/var/log/nginx' || mkdir -p '/var/log/nginx' ERROR: ld.so: object '/usr/lib/installwatch.so' from LD_PRELOAD cannot be preloaded: ignored. make[1]: Leaving directory `/root/src/nginx-1.0.4' ======================== Installation successful ========================== Copying documentation directory... ./ ./CHANGES ./LICENSE ./README cp: cannot stat `//var/tmp/gRWoVgIcdbmjfTjoVGBM/newfiles.tmp': No such file or directory Copying files to the temporary directory...OK Striping ELF binaries and libraries...OK Compressing man pages...OK Building file list...OK Building RPM package... FAILED! *** Failed to build the package ...and the logfile is full of: Building target platforms: x86_64 Building for target x86_64 Processing files: nginx-1.0.4-1.x86_64 error: File not found: /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/usr error: File not found: /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/usr/doc There IS /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/ but no /usr Following further down the page, it says: "If we want to use, for example, PHP 5.2 we can download PHP and Nginx compatible with Amazon Kernel(Xen Kernel) from the CentosALT Repository." So I install the two repositories, but when I yum install http://centos.alt.ru/pub/nginx/1.0/RPMS/x86_64/nginx-stable-1.0.4-1.el5.x86_64.rpm I get Error: Package: nginx-stable-1.0.4-1.el5.x86_64 (/nginx-stable-1.0.4-1.el5.x86_64) Requires: perl(:MODULE_COMPAT_5.8.8) You could try using --skip-broken to work around the problem but that doesn't fix it. When I do yum update, I get --> Finished Dependency Resolution Error: Package: python-distribute-0.6.19-10.1.x86_64 (devel_languages_python) Requires: python < 2.5 Installed: 1:python-2.6-1.19.amzn1.noarch (@amzn-main) python = 1:2.6-1.19.amzn1 Error: Package: python-distribute-0.6.19-10.1.i586 (devel_languages_python) Requires: python < 2.5 Installed: 1:python-2.6-1.19.amzn1.noarch (@amzn-main) python = 1:2.6-1.19.amzn1 I've tried everything - yum clean all and various other suggestions found on other sites. If anyone has any suggestions or a known package of the current 1.04 nginx working on EC2 Micro (Linux ip-10-56-63-85 2.6.35.11-83.9.amzn1.x86_64 #1 SMP Sat Feb 19 23:42:04 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux - which I think is RHEL 5?) then I'd be grateful. Incidentally, does this repolist look right? 
repo id repo name status CentALT CentALT Packages for Enterprise Linux 5 - x86_64 enabled: 112+157 amzn-main amzn-main-Base enabled: 2,706 amzn-main-debuginfo amzn-main-debuginfo disabled amzn-main-nosrc amzn-main-nosrc disabled amzn-updates amzn-updates-Base enabled: 328 amzn-updates-debuginfo amzn-updates-debuginfo disabled amzn-updates-nosrc amzn-updates-nosrc disabled devel_languages_python Python and Python Modules (SLE_10) enabled: 1,452+768 epel Extra Packages for Enterprise Linux 5 - x86_64 enabled: 5,892+604 epel-debuginfo Extra Packages for Enterprise Linux 5 - x86_64 - Debug disabled epel-source Extra Packages for Enterprise Linux 5 - x86_64 - Source disabled epel-testing Extra Packages for Enterprise Linux 5 - Testing - x86_64 disabled epel-testing-debuginfo Extra Packages for Enterprise Linux 5 - Testing - x86_64 - Debug disabled epel-testing-source Extra Packages for Enterprise Linux 5 - Testing - x86_64 - Source disabled s3tools Tools for managing Amazon S3 - Simple Storage Service (RHEL_6) enabled: 2+1 repolist: 10,492
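
    If checkinstall keeps failing, one fallback is to skip RPM packaging entirely and install straight from the source tree. A minimal sketch, assuming the stock nginx 1.0.4 tarball and that losing yum/rpm tracking of the files is acceptable (the paths and module flags below are illustrative, not the guide's exact options):

        # Build and install nginx 1.0.4 from source, bypassing checkinstall/RPM
        cd /root/src/nginx-1.0.4
        ./configure \
            --prefix=/usr/local/nginx \
            --conf-path=/etc/nginx/nginx.conf \
            --error-log-path=/var/log/nginx/error.log \
            --http-log-path=/var/log/nginx/access.log \
            --with-http_ssl_module
        make
        make install                      # installs outside of package management
        /usr/local/nginx/sbin/nginx -V    # confirm version and compile options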

    Read the article

  • Ubuntu web server 11.10 ftp/server issue

    - by Nate
    I was wondering if I could get some help with FTP, at least I'm pretty sure it has to do with FTP. Although it could have to do with something else, I'm not 100% sure. Now, for fair warning, I'm no ubuntu dominator, I'm pretty newb. Anyway, I've attempted to build a webserver to test php and what not for a site I'm building. Now everything works, the php, the sql etc. By the way, I built this in VMware, so it's virtual, over a network, so I can access stuff from anywhere. I'm in a college right now so yeah. The one problem I have is this. I go into the terminal, and do ifconfig to find my IP. I get it and go to a browser on a different machine and type that IP in. I get the "index of/" page, where I can browse the website I'm making. I can click through folders and what not. I can click on things and they open up. Now let's say I'm working on my desktop and open up an FTP client and drag and drop something into there, go to the IP in the browser again and try to open it. I either get "Server error The website encountered an error while retrieving http://my_server_ip/phpinfo.php. It may be down for maintenance or configured incorrectly. Here are some suggestions: Reload this webpage later." or "Forbidden You don't have permission to access /html.html on this server." But, let's say I make it on the server itself, and try, bam, magic, it works. I'm sure I set the permissions to let everyone open and view the files, but maybe I didn't? I'm not sure, and this is where I was hoping I could get some help. By the way, I followed a tutorial on changing the www folder (apache) from /var/www to home/"user"/www. I can't recall how I did that, but it's there and my ftp goes to the home/"user"/www folder. But yeah, any and all help is appreciated. Like I said, I'm really new to this, but I do enjoy attempting to make these servers and learning how they work, so it's not like making this webserver is a project for a class; it's just assisting me in testing stuff for another class and possibly other websites later on down the road. Anyway, anyone who decides to help, thanks so much, I'd really appreciate it. Nate. P.S. I'm using Ubuntu 11.10 desktop edition with a LAMP server
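
    For what it's worth, "Forbidden" and 500 errors on files that only misbehave when they arrive over FTP usually come down to ownership or permissions on the uploaded files rather than Apache itself. A minimal sketch, assuming the web root is /home/user/www and Apache runs as www-data (adjust the names to your setup):

        # Let the FTP user own the files, Apache's group read them, and normalise modes
        sudo chown -R user:www-data /home/user/www
        sudo find /home/user/www -type d -exec chmod 755 {} \;
        sudo find /home/user/www -type f -exec chmod 644 {} \;
        # Compare a file created on the server with one uploaded via FTP
        ls -l /home/user/www/phpinfo.php /home/user/www/html.html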

    Read the article

  • Connectivity issues with dual NIC machine in EC2

    - by Matt Sieker
    I'm trying to get some servers set up in EC2 in a Virtual Private Cloud. To do this, I have two subnets: 10.0.42.0/24 - Public subnet 10.0.83.0/24 - Private subnet To bridge these two, I have a Funtoo instance with a pair of NICs: eth0 10.0.42.10 eth1 10.0.83.10 Which has the following routing table: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.83.0 * 255.255.255.0 U 0 0 0 eth1 10.0.83.0 * 255.255.255.0 U 203 0 0 eth1 10.0.42.0 * 255.255.255.0 U 202 0 0 eth0 loopback * 255.0.0.0 U 0 0 0 lo default 10.0.42.1 0.0.0.0 UG 0 0 0 eth0 default 10.0.42.1 0.0.0.0 UG 202 0 0 eth0 An elastic IP is attached to the eth0 interface, and I can connect to it fine remotely. However, I cannot ping anything in the 10.0.83.0 subnet. For now iptables is not set up on the box, so there's no rules that would get in the way (Eventually this will be managed by Shorewall, but I should get basic connectivity done first) Subnet details from the VPC interface: CIDR: 10.0.83.0/24 Destination Target 10.0.0.0/16 local 0.0.0.0/0 [ID of eth1 on NAT box] Network ACL: Default Inbound: Rule # Port (Service) Protocol Source Allow/Deny 100 ALL ALL 0.0.0.0/0 ALLOW * ALL ALL 0.0.0.0/0 DENY Outbound: Rule # Port (Service) Protocol Destination Allow/Deny 100 ALL ALL 0.0.0.0/0 ALLOW * ALL ALL 0.0.0.0/0 DENY   CIDR: 10.0.83.0/24 VPC: Destination Target 10.0.0.0/16 local 0.0.0.0/0 [Internet Gateway ID] Network ACL: Default (replace) Inbound: Rule # Port (Service) Protocol Source Allow/Deny 100 ALL ALL 0.0.0.0/0 ALLOW * ALL ALL 0.0.0.0/0 DENY Outbound: Rule # Port (Service) Protocol Destination Allow/Deny 100 ALL ALL 0.0.0.0/0 ALLOW * ALL ALL 0.0.0.0/0 DENY I've been trying to work this out most of the evening, but I'm just stuck. I'm either missing something obvious, or am doing something very wrong. I would think I'd be able to ping from either interface on this box without issue. Hopefully some more pairs of eyes on this configuration will help. EDIT: I am an idiot. After I bothered to install nmap to run some more tests, I discover I can see the ports, and connect to them, pings are just being blocked.
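
    As the edit notes, traffic was flowing and only ICMP echo was being dropped. A quick way to check reachability without relying on ping is to probe TCP ports directly; a sketch, with an assumed host and ports in the private subnet:

        # -Pn skips the ICMP "is it up?" probe and goes straight to the TCP ports
        sudo nmap -Pn -p 22,80 10.0.83.20
        # or, with no extra tools, test a single port from the NAT box
        nc -zv 10.0.83.20 22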

    Read the article

  • DNS problems with Google on Windows 7

    - by awishformore
    Hello dear superuser community. I had no idea where to post this because it is a problem that completely baffles me. I have a lot of experience with network configuration, but I am completely out of ideas on how to fix this. I have a Fritz!Box branded router on my ISP 1&1 in Germany. My computer is connected to it with a normal Ethernet cable. I always manually set my IP on the computer and use the Google DNS servers for name resolution. I also tried OpenDNS and the result is the same. With that configuration the following happens: Google search responds with big delay Gmail, Google Calendar & Google Drive requests time out the majority of the time In order to troubleshoot, I set the network connection to DHCP for both IP & DNS. At that point, what happens is the following: Google search times out most of the time Gmail, Google Calendar & Google Drive work most of the time Sometimes, it happens that the sites that time out will come up, but weirdly enough, the pictures on the buttons will be missing. For instance, the magnifying glass on Google will be gone or the circle arrow on Gmail (but all buttons of course). All other websites load just fine - and very quickly. All other network functionality is completely unimpacted. The behaviour of fixed IP & Google DNS vs automatic IP & DNS is easily reproducible. I am going crazy trying to fix this as I have no idea what the hell is going on at this point. Here a list of the things I have tried thus far: Flushed the DNS Tried on different browsers (Works fine on my laptop by the way) Tried disabling Teredo & IPv6 stack Emptied all caches Checked the HOSTS file Rebooted the router Reset the router Reinstalled the network adapter Tracert displayes normal route until timing out at one point Ping usually doesn't work for the unreachable sites either Ran both complete Norton 360 & Kaspersky 2012 scan Ran Kaspersky Virus Removal Tool in safe mode Tried connection in safe mode & networking enabled If you have any ideas, please let me know. I'm getting desperate...

    Read the article

  • autocomplete not working on one server, works on others

    - by dogmatic69
    I have Ubuntu 10.10 x64 and x86 running on various servers and auto complete works on all of them bar one. The issue: apt-<tab> would show a list of options but sudo apt-<tab> would not. After fiddling with it for a few hours I've found that /etc/bash_autocomplete did not exist on the broken server. After copying the one from a working server it now works, but still not properly. sudo apt-get ins<tab> does not do anything. Listing the files in /etc/bash_autocomplete.d/ on the working server shows about 50 files, and the broken one only two or three. I don't think that I can just copy these files though, as they might show commands for things that are not even installed. TL;DR: autocomplete is broken, how can I fix it? It seems like it's disabled somewhere - why is this? EDIT: Ok, it was never installed... $ sudo apt-get install bash-completion Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed bash-completion 0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded. Need to get 140kB of archives. After this operation, 1,061kB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu/ maverick-updates/main bash-completion all 1:1.2-2ubuntu1.1 [140kB] Fetched 140kB in 0s (174kB/s) Selecting previously deselected package bash-completion. (Reading database ... 23808 files and directories currently installed.) Unpacking bash-completion (from .../bash-completion_1%3a1.2-2ubuntu1.1_all.deb) ... Processing triggers for man-db ... Setting up bash-completion (1:1.2-2ubuntu1.1) ... It's now kinda working, but still wonky... apt-get ins<tab> gives sudo apt-get insserv as the option. Also apt-get install php5<tab> gives apt-get install php5/ not php5-* options.
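
    One thing worth checking after installing the package: bash-completion is only picked up by new login shells, so an existing session keeps the old behaviour until the completion machinery is sourced by hand. A small sketch:

        # Source the completions into the current shell and verify they registered
        [ -f /etc/bash_completion ] && . /etc/bash_completion
        complete -p apt-get     # should print the registered _apt_get completion
        # Ubuntu's stock bashrc files normally source /etc/bash_completion;
        # make sure that block is present and not commented out
        grep -n "bash_completion" ~/.bashrc /etc/bash.bashrc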

    Read the article

  • Local, Multiple-Blog (ie Dashboard) Blogging Software as Alternative to Blogger [closed]

    - by Synetech inc.
    FOR RE-OPENING: I don’t see how it is “too localized”. Plenty of people like to run their own web-apps instead of relying on third-party services. If that were not true, then WordPress, phpBB, Apache, PHP, etc. would not be available for general use. As for “Internet audience at large”, I must have missed the part where it was a rule that you are only allowed to ask for help for things that applies to everyone else too; I thought you were allowed to ask for help. Besides, if someone knows of software that fulfills the question, then it is relevant to whomever would download it, and so is not only applicable to an “extraordinarily narrow situation”. (Besides, the reason that I was asking was because Google had announced that it was discontinuing FTP support for Blogger and so many people were affected—read NOT TOO LOCALIZED—and were trying to find alternatives.) Hi, I am trying to find software (for Windows, PHP, MySQL/SQLite/flat, free, open-source) to localize all of my software and service so that I can keep my files and host when needed from my own system instead of some remote computer. I’ve already selected things like web, FTP, and db servers. I’ve chosen forum and wiki software, as well as an RCS system. At this point, all I’m still looking for—actually, I still need to choose bug-tracking software, but besides that—is blogging software. I still use Blogger and am trying to find something that I can use to import my Blogger stuff and store on (and publish to) my home system. I have read of various blogging software including WordPress, MovableType, and TextPattern. The problem is that I am trying to find something that is like Blogger (which from what I can tell is not available on Google Code as open-source). What I specifically need is multiple-blog support. That is, multiple blogs ala the Blogger Dashboard, not multiple user accounts (although that is important as well). The closest thing that I have been able to find is using Wordpress categories to simulate multiple blogs, but that’s not really what I want. I want software that I can run locally that has a multi-blog dashboard like Blogger. Any ideas? Thanks a lot!

    Read the article

  • Indirect Postfix bounces create new user directories

    - by hheimbuerger
    I'm running Postfix on my personal server in a data centre. I am not a professional mail hoster and not a Postfix expert, it is just used for a few domains served from that server. IIRC, I mostly followed this howto when setting up Postfix. Mails addressed to one of the domains the server manages are delivered locally (/srv/mail) to be fetched with Dovecot. Mails to other domains require usage of SMTPS. The mailbox configuration is stored in MySQL. The problem I have is that I suddenly found new mailboxes being created on the disk. Let's say I have the domain 'example.com'. Then I would have lots of new directories, e.g. /srv/mail/example.com/abenaackart /srv/mail/example.com/abenaacton etc. There are no entries for these addresses in my database, neither as a mailbox nor as an alias. It's clearly spam from auto-generated names. Most of them start with 'a', a few with 'b' and a couple of random ones with other letters. At first I was afraid of an attack, but all security restrictions seem to work. If I try to send mail to these addresses, I get an "Recipient address rejected: User unknown in virtual mailbox table" during the 'RCPT TO' stage. So I looked into the mails stored in these mailboxes. Turns out that all of them are bounces. It seems like all of them were sent from a randomly generated name to an alias that really exists on my system, but pointed to an invalid destination address on another host. So Postfix accepted it, then tried to redirect it to another mail server, which rejected it. This bounced back to my Postfix server, which now took the bounce and stored it locally -- because it seemed to be originating from one of the addresses it manages. Example: My Postfix server handles the example.com domain. [email protected] is configured to redirect to [email protected]. [email protected] has since been deleted from the Hotmail servers. Spammer sends mail with FROM:[email protected] and TO:[email protected]. My Postfix server accepts the mail and tries to hand it off to hotmail.com. hotmail.com sends a bounce back. My Postfix server accepts the bounce and delivers it to /srv/mail/example.com/bob. The last step is what I don't want. I'm not quite sure what it should do instead, but creating hundreds of new mailboxes on my disk is not what I want... Any ideas how to get rid of this behaviour? I'll happily post parts of my configuration, but I'm not really sure where to start debugging the problem at this point.
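
    One way to stop these bounces from ever being generated is to verify the forwarded-to destination while the sending client is still connected, so mail for a dead remote mailbox is rejected at RCPT time instead of being accepted and bounced afterwards. A sketch using Postfix recipient address verification - this is a suggested technique, not something from the poster's existing config, and the probe traffic has its own cost on busy servers:

        # Reject recipients whose final destination cannot be verified by a probe
        postconf -e "address_verify_map = btree:/var/lib/postfix/verify_cache"
        postconf -e "smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination, reject_unverified_recipient"
        postfix reload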

    Read the article

  • Ubuntu 12.04: apt-get "failed to fetch"; apt is trying to fetch via old static IP

    - by gabe
    Sample error: W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/precise-security/universe/i18n/Translation-en Unable to connect to 192.168.1.70:8118: Now this was working just fine until I changed the IP this morning. I have the server set to a static IP of 10.0.1.70 and for years it has been 192.168.1.70 - the IP apt-get is trying to use right now. I use privoxy and tor, thus the 8118 port. Like I said, it all worked until I changed the static IP from 192.168.1.70 to 10.0.1.70. I was forced to do so because of router issues. (Long and involved story; I didn't really want to change the IP because I knew something like this would happen.) The setup for TOR/Privoxy requires that you point Privoxy at TOR via 127.0.0.1:9050, then point curl, etc. to Privoxy via $HOME/.bashrc. Typically you would set the listen-to IP for Privoxy to 127.0.0.1, but if you want it accessible to the rest of the LAN you set the IP to the server's LAN IP. I did that a long time ago and it was working fine until this morning. I have changed all instances of 192.168.1.70 to 10.0.1.70 in both /etc/privoxy/config and $HOME/.bashrc. What makes this really strange for me is that curl is working fine. I curl icanhazip.com and voila, I get a new IP every 10 minutes or so. I curl CNN.com and I get the short but sweet permanently-moved-to-www.cnn.com message I expect. Firefox works fine. Ping works fine. And I've tested all of this via Remote Desktop over my LAN. So the connection appears to be fine for everything except apt. I've also rebooted, hoping that would clear 192.168.1.70 from apt. So the connection to the internet and DNS aren't an issue for these programs, and they are, as far as I can tell, using Privoxy/TOR just fine. The real irony here is that I've tried to open up Privoxy to go to Ubuntu's servers directly without going through TOR to speed up the downloads from Ubuntu (did this months ago). So somewhere that I have not been able to find, apt has stored the IP 192.168.1.70, and 192.168.1.70 is no longer valid. Thanks for the help.
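
    Note that apt does not read the proxy variables from $HOME/.bashrc when run under sudo; it normally takes its proxy from its own configuration under /etc/apt/. So the stale 192.168.1.70 is most likely stored there. A sketch of finding and replacing it (the 01proxy filename is just a conventional choice):

        # Find where the old address is configured for apt...
        grep -rn "192.168.1.70" /etc/apt/apt.conf /etc/apt/apt.conf.d/ /etc/environment 2>/dev/null
        # ...and point apt at the new Privoxy address instead
        echo 'Acquire::http::Proxy "http://10.0.1.70:8118/";' | sudo tee /etc/apt/apt.conf.d/01proxy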

    Read the article

  • Multi-Role Domain Controllers for Small Offices (< 50 clients)

    - by kce
    Warning: I'm a Linux/*NIX admin so this is all new to me. I understand that it's not considered a good idea to have only a single domain controller, and that it is also probably a good idea for a domain controller to only do AD/DHCP/DNS (Here). We have two offices, location A with 30 users and location B with 10 users. Our two offices are separated by a WAN that is not particularly robust, so I have been instructed that we need to have standalone services in each office. This means that according to "best practices" we will need to build a domain controller and a separate file server in each office. Again, I am not knowledgeable in the ways of Windows but this seems a little unnecessary for an organization of 40 users. People have commented that I could "get away with" running file services on the domain controller as long as the "load is light". That just seems to generate more questions than it answers. What constitutes light load? What are the potential consequences of mixing these roles? Ideally I would prefer to only have one physical machine at each location. The one in location A (the location with IT staff) can act as the primary domain controller and the one in the smaller office can act as the backup domain controller. If either domain controller fails we can still use the other one for authentication (albeit with some latency), and if the WAN connection fails each office still has access to their respective "local" domain controller. If the file services are ALSO run on each server (and synchronized with something like DFS), a similar arrangement in terms of redundancy can be had without having to purchase, build and install two additional separate servers. It's not that I'm averse to that (well, any more averse than I am to the whole thing to begin with), but to my simple mind it just seems, well, a bit overkill. I can definitely see the benefits of functional separation when we're talking larger organizations, but I need to consider the additional overhead too. None of this excludes having a DRP setup for the domain controller/s. I assume you can lose two domain controllers just as easily as one.

    Read the article

  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David
    We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server. The problem is in development. On my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and can not be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentails for LIVE\MediaUser, click OK, verify the password, etc. This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect" and the Win32 status means "Logon failure: unknown user name or bad password." This would be all and good if it were anywhere close to accurate. I double- and trouble-checked it. Tried multiple known good logins. The IIS manager allows me to view the file tree in its window, it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab, and under Authentication and Access Control, I tried using the same LIVE domain username for the anonymous access credential. No luck. I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to load static images, css, and js files. If anyone has some bright ideas I would be most appreciative!

    Read the article

  • dnsmasq acts as the DHCP server for selected nodes overriding the existing DHCP server on the same LAN?

    - by user183394
    I am trying to set up a small "lab" at home. Like many modern homes, I have a regular DSL service which comes with a 2Wire 3600HGV router, which also acts as a DHCP server. Since I would like to PXE boot a few computers in my "lab", and the 2Wire is inflexible to the adjustments that I want to make, I would like to use dnsmasq (which I have used at work) as the DHCP server for the few nodes in my "lab", if feasible. In the dnsmasq man page, there is the following: [...] -K, --dhcp-authoritative (IPv4 only) Should be set when dnsmasq is definitely the only DHCP server on a network. It changes the behaviour from strict RFC compliance so that DHCP requests on unknown leases from unknown hosts are not ignored. This allows new hosts to get a lease without a tedious timeout under all circumstances. It also allows dnsmasq to rebuild its lease database without each client needing to reacquire a lease, if the database is lost. [...] As far as I know, the ISC DHCP server can use the following to do what I would like to accomplish: authoritative; [...] subnet 192.168.1.0 netmask 255.255.255.0 { host nb0 { # only give DHCP information to this computer: hardware ethernet e8:9a:8f:17:70:42; fixed-address 192.168.1.10; option subnet-mask 255.255.255.0; option routers 192.168.1.254; option domain-name-servers 192.168.1.254; # Non-essential DHCP options filename "/pxelinux.0"; } [...] But I much prefer dnsmasq's "all-in-one-ness". My question: do I have to couple the -K option with something else? As shown in the example above, the ISC DHCP server requires the MAC addresses of managed nodes to be explicitly specified. Does dnsmasq have something similar? FYI, the machine on which I plan to run dnsmasq runs CentOS 6.3 64bit. It has a statically assigned IP address: 192.168.1.3.
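
    dnsmasq does have an equivalent of ISC's per-host stanzas: dhcp-host pins a MAC address to a fixed lease, and dhcp-ignore=tag:!known makes dnsmasq ignore DHCP requests from any host that is not listed, so the 2Wire keeps serving everything else on the LAN. A sketch of /etc/dnsmasq.conf along those lines (the interface name and TFTP paths are assumptions):

        interface=eth0
        bind-interfaces
        # lab range, but only "known" hosts (those with a dhcp-host line) get an answer
        dhcp-range=192.168.1.10,192.168.1.100,12h
        dhcp-ignore=tag:!known
        dhcp-host=e8:9a:8f:17:70:42,nb0,192.168.1.10
        dhcp-option=option:router,192.168.1.254
        dhcp-option=option:dns-server,192.168.1.254
        # PXE boot pieces
        dhcp-boot=pxelinux.0
        enable-tftp
        tftp-root=/var/lib/tftpboot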

    Read the article

  • Virtual Network Interface and NAT disables localhost access for MySQL and Apache

    - by Interarticle
    I'm running an Ubuntu Server 12.04, and recently I configured it to do NAT for my laptop. Since the server has only one NIC, I followed instructions online to create a virtual network device (eth0:0) that has a LAN IP address, then further configured iptables and UFW to allow internet sharing. However, just a few days ago, I discovered that one of the PHP pages hosted on the server failed for no apparent reason. A little digging revealed that the MySQL server started refusing connections from localhost. The same happened with a page (PhpMyAdmin) that was configured to be accessible only from localhost (in Apache2). The error, as shown by $mysql --protocol=tcp -u root -p looks like ERROR 1130 (HY000): Host '<host name of eth0>' is not allowed to connect to this MySQL server However, the funny thing is, I configured the mysql server to allow root access from localhost (only). Moreover, the mysql server listens only on 127.0.0.1:3306, as shown by: sudo netstat -npa | head Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 1029/mysqld which means that the connection could have only come from 127.0.0.1 (Note that MySQL is working because I can still connect to it via unix domain sockets) In effect, it seems that all tcp connections originating from 127.0.0.1 to 127.0.0.1 appear to any local daemon to come from the eth0 IP address. Indeed, apache2 allowed me to access PhpMyAdmin after I added allow <eth0 IP address>. The following are my network configurations (redacted): /etc/hosts: 127.0.0.1 localhost 211.x.x.x <host name of eth0> <server name> #IPv6 Defaults follows .... /etc/network/interfaces: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 211.x.x.x netmask 255.255.255.0 gateway 211.x.x.x dns-nameservers 8.8.8.8 # dns-* options are implemented by the resolvconf package, if installed dns-search xxxxxxx.com hwaddress ether xx:xx:xx:xx:xx:xx auto eth0:0 iface eth0:0 inet static address 192.168.57.254 netmask 255.255.254.0 broadcast 192.168.57.255 network 192.168.57.0 /etc/ufw/sysctl.conf: #Uncommented the following lines net/ipv4/ip_forward=1 net/ipv6/conf/default/forwarding=1 /etc/default/ufw: DEFAULT_FORWARD_POLICY="ACCEPT" #Changed DROP to ACCEPT /etc/init/internet-sharing.conf (upstart script I wrote), section pre-start script: iptables -A FORWARD -o eth0 -i eth0:0 -s 192.168.57.22 -m conntrack --ctstate NEW -j ACCEPT iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT iptables -A POSTROUTING -t nat -j MASQUERADE Note again that my problem here is that programs cannot access localhost tcp services, from the server itself, and that access is blocked because the services have access control allowing only 127.0.0.1. I have no problem connecting (as in TCP connections) to services via tcp, even if the services listen only on 127.0.0.1. I do NOT want to connect to the services from another computer.
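
    Whether or not it turns out to be the root cause, the MASQUERADE rule in the upstart script is broader than it needs to be: with no outgoing-interface match it applies to connections leaving on every interface, loopback included. Restricting it to traffic that actually leaves through eth0 is a cheap thing to try (a sketch, assuming eth0 is the only internet-facing NIC):

        # Remove the catch-all rule and re-add it limited to the internet interface
        iptables -t nat -D POSTROUTING -j MASQUERADE
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        # then check what source address a local TCP connection really arrives with
        mysql --protocol=tcp -h 127.0.0.1 -u root -p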

    Read the article

  • Need help making an ODBC MySQL Connection

    - by Andy Moore
    Short Version: How do I connect from PowerShell to an ODBC 5.1 MySQL Driver? I can't seem to find any connection strings that accurately have a "Provider" field for this particular instance. (See bottom of this question for examples/errors) ===== Long Version: I'm not a server guy, and I've been handed the task of setting up PowerGadgets on our network. I have a MySQL server running on a Linux box, that is configured for remote access and has a user defined for remote access as well. On my windows desktop PC, I have PowerGadgets installed. I installed the MySQL ODBC 5.1 connector, and went to Control Panel Data Sources and set up a User DSN connection to the database. The connection, user, and pass seem to be correct because it lists the tables of the database in my windows control panel. Where I'm running into trouble is in 3 places in PowerGadgets: When selecting a data source, I can select "SQL Server". Inputting the servers IP address does not work and I can't get this option to work at all. When selecting a data source, I can select "OleDB". This screen has a wizard on it, that appears to populate all the correct information (including database table names!) for me. "Test Connection" runs great. But if I try to complete the wizard, I get the error "The .NET Framework data provider for OLEDB does not support the MS Ole DB provider for ODBC Drivers." When selecting a data source, I can select "ODBC". This screen does not have a wizard and I cannot figure out a "connection string" that works. Typically it will respond with the error "The field 'Provider' is missing". Googling ODBC connection strings doesn't reveal any examples with a "provider" field and have no idea what to put in here. The connection string (for #2) above contains "SQLOLEDB" as a provider, and upon inputting that value into this connection string I get the same connection error that #2 gets. I believe I can solve my problems by figuring out a connection string for #3 but don't know where to get started. (PowerGadgets also allows for PowerShell support but I believe I will run into the same problem there) == Here's my current PowerShell connection that doesn't work: invoke-sql -connection "Driver={MySQL ODBC 5.1 Driver};Initial Catalog=hq_live;Data Source=HQDB" -sql "Select * FROM accounts" Spits back the error: "Invoke-Sql : An OLE DB Provider was not specified in the ConnectionString. An example would be, 'Provider=SQLOLEDB;'. == Another string that doesn't work: invoke-sql -connection "Provider=MSDASQL.1;Persist Security Info=False;Data Source=HQDB;Initial Catalog=hq_live" -sql "select * from accounts" And the error: The .Net Framework Data Provider for OLEDB (System.Data.OleDb) does not support the Microsoft OLE DB Provider for ODBC Drivers (MSDASQL). Use the .Net Framework Data Provider for ODBC (System.Data.Odbc).

    Read the article

  • Initial Cisco ASA 5510 Config

    - by Brendan ODonnell
    Fair warning, I'm a but of a noob so please bear with me. I'm trying to set up a new ASA 5510. I have a pretty simple set up with one /24 on the inside NATed to a DHCP address on the outside. Everything on the inside works and I can ping the outside interface from external devices. No matter what I do I can't get anything internal to route across the border to the outside and back. To try and eliminate ACL issues as a possibility I added permit any any rules to the incoming access lists on the inside and outside interfaces. I'd appreciate any help I can get. Here's the sh run. : Saved : ASA Version 8.4(3) ! hostname gateway domain-name xxx.local enable password xxx encrypted passwd xxx encrypted names ! interface Ethernet0/0 nameif outside security-level 0 ip address dhcp setroute ! interface Ethernet0/1 nameif inside security-level 100 ip address 10.x.x.x 255.255.255.0 ! interface Ethernet0/2 shutdown no nameif no security-level no ip address ! interface Ethernet0/3 shutdown no nameif no security-level no ip address ! interface Management0/0 nameif management security-level 100 ip address 192.168.1.1 255.255.255.0 management-only ! ftp mode passive dns domain-lookup inside dns server-group DefaultDNS name-server 10.x.x.x domain-name xxx.local same-security-traffic permit inter-interface same-security-traffic permit intra-interface object network inside-network subnet 10.x.x.x 255.255.255.0 object-group protocol TCPUDP protocol-object udp protocol-object tcp access-list outside_access_in extended permit ip any any access-list inside_access_in extended permit ip any any pager lines 24 logging enable logging buffered informational logging asdm informational mtu management 1500 mtu inside 1500 mtu outside 1500 no failover icmp unreachable rate-limit 1 burst-size 1 icmp permit any inside icmp permit any outside no asdm history enable arp timeout 14400 ! object network inside-network nat (any,outside) dynamic interface access-group inside_access_in in interface inside access-group outside_access_in in interface outside timeout xlate 3:00:00 timeout pat-xlate 0:00:30 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 timeout floating-conn 0:00:00 dynamic-access-policy-record DfltAccessPolicy user-identity default-domain LOCAL aaa authentication ssh console LOCAL aaa authentication http console LOCAL http server enable http 192.168.1.0 255.255.255.0 management http 10.x.x.x 255.255.255.0 inside http authentication-certificate management http authentication-certificate inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart telnet timeout 5 ssh 192.168.1.0 255.255.255.0 management ssh 10.x.x.x 255.255.255.0 inside ssh timeout 5 ssh version 2 console timeout 0 dhcp-client client-id interface outside dhcpd address 192.168.1.2-192.168.1.254 management dhcpd enable management ! threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept webvpn username xxx password xxx encrypted ! class-map inspection_default match default-inspection-traffic ! ! 
policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect ip-options inspect icmp ! service-policy global_policy global prompt hostname context no call-home reporting anonymous Cryptochecksum:fe19874e18fe7107948eb0ada6240bc2 : end no asdm history enable

    Read the article

  • gunicorn + django + nginx unix://socket failed (11: Resource temporarily unavailable)

    - by user1068118
    Running very high volume traffic on these servers configured with django, gunicorn, supervisor and nginx. But a lot of times I tend to see 502 errors. So I checked the nginx logs to see what error and this is what is recorded: [error] 2388#0: *208027 connect() to unix:/tmp/gunicorn-ourapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream Can anyone help debug what might be causing this to happen? This is our nginx configuration: sendfile on; tcp_nopush on; tcp_nodelay off; listen 80 default_server; server_name imp.ourapp.com; access_log /mnt/ebs/nginx-log/ourapp-access.log; error_log /mnt/ebs/nginx-log/ourapp-error.log; charset utf-8; keepalive_timeout 60; client_max_body_size 8m; gzip_types text/plain text/xml text/css application/javascript application/x-javascript application/json; location / { proxy_pass http://unix:/tmp/gunicorn-ourapp.socket; proxy_pass_request_headers on; proxy_read_timeout 600s; proxy_connect_timeout 600s; proxy_redirect http://localhost/ http://imp.ourapp.com/; #proxy_set_header Host $host; #proxy_set_header X-Real-IP $remote_addr; #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_set_header X-Forwarded-Proto $my_scheme; #proxy_set_header X-Forwarded-Ssl $my_ssl; } We have configure Django to run in Gunicorn as a generic WSGI application. Supervisord is used to launch the gunicorn workers: home/user/virtenv/bin/python2.7 /home/user/virtenv/bin/gunicorn --config /home/user/shared/etc/gunicorn.conf.py daggr.wsgi:application This is what the gunicorn.conf.py looks like: import multiprocessing bind = 'unix:/tmp/gunicorn-ourapp.socket' workers = multiprocessing.cpu_count() * 3 + 1 timeout = 600 graceful_timeout = 40 Does anyone know where I can start digging to see what might be causing the problem? This is what my ulimit -a output looks like on the server: core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 59481 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 50000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 1024 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
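
    Errno 11 (EAGAIN, "Resource temporarily unavailable") on a unix-socket connect usually means the socket's listen backlog is full, i.e. requests are arriving faster than the gunicorn workers can accept them. Two knobs worth checking, sketched below with illustrative values: the kernel's cap on listen backlogs and gunicorn's own backlog setting.

        # net.core.somaxconn silently caps every listener's backlog (often 128 by default)
        sysctl net.core.somaxconn
        sudo sysctl -w net.core.somaxconn=2048
        echo 'net.core.somaxconn = 2048' | sudo tee -a /etc/sysctl.conf
        # in gunicorn.conf.py the matching setting is:
        #   backlog = 2048
        # rough view of how many connections are sitting on the socket during a spike
        ss -x | grep gunicorn-ourapp | wc -l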

    Read the article

  • libvirt qemu/kvm migration problem

    - by Panda
    I am using kvm and libvirt on my Dell server. Now i am trying to migrate one virtual machine from a physical server to another. However, I failed everytime. In virsh on physicalServer1, I typed: virsh # migrate virtualmachine1 qemu+ssh://username@physicalServer2/system error: operation failed: migration to 'tcp:physicalServer2:49163' failed: migration failed Then I searched FAQ part on libvirt.org. It says: error: operation failed: migration to '...' failed: migration failed This is an error often encountered when trying to migrate with QEMU/KVM. This typically happens with plain migration, when the source VM cannot connect to the destination host. You will want to make sure your hosts are properly configured for migration (see the migration section of this FAQ) I managed to ssh physicalServer2 from a shell on virtualmachine1 so the above red part did not explain my failure. I also open ports on physicalServer2, iptables -L shows following information: Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- anywhere anywhere udp dpt:domain ACCEPT tcp -- anywhere anywhere tcp dpt:domain ACCEPT udp -- anywhere anywhere udp dpt:bootps ACCEPT tcp -- anywhere anywhere tcp dpt:bootps ACCEPT udp -- anywhere anywhere udp dpt:domain ACCEPT tcp -- anywhere anywhere tcp dpt:domain ACCEPT udp -- anywhere anywhere udp dpt:bootps ACCEPT tcp -- anywhere anywhere tcp dpt:bootps ACCEPT tcp -- anywhere anywhere state NEW tcp dpts:49152:49215 Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere 192.168.122.0/24 state RELATED,ESTABLISHED ACCEPT all -- 192.168.122.0/24 anywhere ACCEPT all -- anywhere anywhere REJECT all -- anywhere anywhere reject-with icmp-port-unreachable REJECT all -- anywhere anywhere reject-with icmp-port-unreachable ACCEPT all -- anywhere 192.168.122.0/24 state RELATED,ESTABLISHED ACCEPT all -- 192.168.122.0/24 anywhere ACCEPT all -- anywhere anywhere REJECT all -- anywhere anywhere reject-with icmp-port-unreachable REJECT all -- anywhere anywhere reject-with icmp-port-unreachable Chain OUTPUT (policy ACCEPT) target prot opt source destination The /var/log/libvirt/qemu/virtualmachine1.log on physicalServer2: 2011-05-06 13:37:30.708: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.14 -enable-kvm -m 2048 -smp 1,sockets=1,cores=1,threads=1 -name openjudge-test -uuid a8c704bc-a4f9-90db-3e57-40e60b00aac1 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/virtualmachine1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot c -drive file=/media/nfs/virtualmachine1.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=20,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=00:16:36:8a:22 :a0,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 127.0.0.1:2 -vga cirrus -incoming tcp:0.0.0.0:49163 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 char device redirected to /dev/pts/0 2011-05-06 13:37:30.915: shutting down The /var/log/libvirt/qemu/virtualmachine1.log on physicalServer1 is empty. Both physical servers are using Ubuntu 11.04. The libvirt and kvm used are installed by apt-get. The libvirt version is 0.8.8.
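
    The destination log shows qemu starting with "-incoming tcp:0.0.0.0:49163" and shutting down 200 ms later, so the next step is usually to squeeze more detail out of both ends. A sketch of things to run (the libvirtd.log path assumes log_outputs is configured in /etc/libvirt/libvirtd.conf; otherwise check syslog):

        # retry with verbose output and live migration from physicalServer1
        virsh migrate --live --verbose virtualmachine1 qemu+ssh://username@physicalServer2/system
        # meanwhile, on physicalServer2, watch libvirt and qemu while the attempt runs
        tail -f /var/log/libvirt/libvirtd.log /var/log/libvirt/qemu/virtualmachine1.log
        # both hosts should also see the shared disk at the same path
        ls -l /media/nfs/virtualmachine1.img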

    Read the article

  • Using nginx's proxy_redirect when the response location's domain varies

    - by Chalky
    I am making an web app using SoundCloud's API. Requesting an MP3 to stream involves two requests. I'll give an example. Firstly: http://api.soundcloud.com/tracks/59815100/stream This returns a 302 with a temporary link to the actual MP3 (which varies each time), for example: http://ec-media.soundcloud.com/xYZk0lr2TeQf.128.mp3?ff61182e3c2ecefa438cd02102d0e385713f0c1faf3b0339595667fd0907ea1074840971e6330e82d1d6e15dd660317b237a59b15dd687c7c4215ca64124f80381e8bb3cb5&AWSAccessKeyId=AKIAJ4IAZE5EOI7PA7VQ&Expires=1347621419&Signature=Usd%2BqsuO9wGyn5%2BrFjIQDSrZVRY%3D The issue I had was that I am attempting to load the MP3 via JavaScript's XMLHTTPRequest, and for security reasons the browser can't follow the 302, as ec-media.soundcloud.com does not set a header saying it is safe for the browser to access via XMLHTTPRequest. So instead of using the SoundCloud URL, I set up two locations in nginx, so the browser only interacts with the server my app is hosted on and no security errors come up: location /soundcloud/tracks/ { # rewrite URL to match api.soundcloud.com's URL structure rewrite \/soundcloud\/tracks\/(\d*) /tracks/$1/stream break; proxy_set_header Host api.soundcloud.com; proxy_pass http://api.soundcloud.com; # the 302 will redirect to /soundcloud/media instead of the original domain proxy_redirect http://ec-media.soundcloud.com /soundcloud/media; } location /soundcloud/media/ { rewrite \/soundcloud\/media\/(.*) /$1 break; proxy_set_header Host ec-media.soundcloud.com; proxy_pass http://ec-media.soundcloud.com; } So myserver/soundcloud/tracks/59815100 returns a 302 to /myserver/soundcloud/media/xYZk0lr2TeQf.128.mp3...etc, which then forwards the MP3 on. This works! However, I have hit a snag. Sometimes the 302 location is not ec-media.soundcloud.com, it's ak-media.soundcloud.com. There are possibly even more servers out there and presumably more could appear at any time. Is there any way I can handle an arbitrary 302 location without having to manually enter each possible variation? Or is it possible for nginx to handle the redirect and return the response of the second step? So myserver/soundcloud/tracks/59815100 follows the 302 behind the scenes and returns the MP3? The browser automatically follows the redirect, so I can't do anything with the initial response on the client side. I am new to nginx and in a bit over my head so apologies if I've missed something obvious, or it's beyond the scope of nginx. Thanks a lot for reading.
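
    nginx can be made to follow the 302 itself, so the browser only ever sees the MP3 and the redirect host no longer matters. The usual pattern is to intercept the 3xx from the API and re-proxy to whatever host the Location header names. A sketch under the assumption that a DNS resolver is reachable from the server (required because proxy_pass with a variable is resolved at runtime):

        location /soundcloud/tracks/ {
            rewrite ^/soundcloud/tracks/(\d+)$ /tracks/$1/stream break;
            proxy_set_header Host api.soundcloud.com;
            proxy_pass http://api.soundcloud.com;
            # hand any 3xx from the API to the named location below
            proxy_intercept_errors on;
            error_page 301 302 307 = @follow_redirect;
        }

        location @follow_redirect {
            resolver 8.8.8.8;                      # assumption: any resolver you trust
            set $saved_location $upstream_http_location;
            proxy_pass $saved_location;            # fetch the MP3 from whichever *.soundcloud.com host answered
        }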

    Read the article

  • Double MySQL Running on Mountain Lion

    - by Norris
    After I've done a full restart, my Apache PHP server doesn't connect to Local MySQL ( connecting via 127.0.0.1 because localhost for some reason fails always ). So I did this today: ? ~ mysqladmin shutdown -u root -p Enter password: ? ~ mysqladmin shutdown -u root -p Enter password: mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)' Check that mysqld is running and that the socket: '/tmp/mysql.sock' exists! Which basically means that I succeeded in shutting down mysql. But as soon as I did - Apache PHP successfully connect to MySQL and my local sites work without a hiccup until the next restart. Here are a few other details: (as you can tell - I've installed MySQL via brew) ? ~ sudo ps aux | grep mysql N 4774 0.0 0.0 2432768 620 s000 S+ 9:53AM 0:00.00 grep mysql N 4772 0.0 2.6 3030168 440688 ?? S 9:51AM 0:00.29 /usr/local/Cellar/mysql/5.6.13/bin/mysqld --basedir=/usr/local/Cellar/mysql/5.6.13 --datadir=/usr/local/var/mysql --plugin-dir=/usr/local/Cellar/mysql/5.6.13/lib/plugin --bind-address=127.0.0.1 --log-error=/usr/local/var/mysql/N.local.err --pid-file=/usr/local/var/mysql/N.local.pid N 4686 0.0 0.0 2433432 1000 ?? S 9:51AM 0:00.01 /bin/sh /usr/local/opt/mysql/bin/mysqld_safe --bind-address=127.0.0.1 N 4362 0.0 2.7 3120276 458728 ?? S 9:47AM 0:00.45 mysqld ? ~ lsof -i | grep mysql mysqld 4362 N 16u IPv6 0x76959e40691f9f93 0t0 TCP *:mysql (LISTEN) This is the weird thing: ? ~ killall -9 mysqld MySQL Is dead! Apache doesn't connect. Then, when I run: ? ~ sudo mysqladmin shutdown -u root -p Enter password: Apache is (again) able to successfully connect to MySQL. As far as I understand this means that I have two mysql servers setup and both of them are trying to start up at the same time, but I don't have the slightest idea on how to fix it. I've tried brew reinstalling but that didn't help. ? ~ which mysqladmin /usr/local/bin/mysqladmin ? ~ whence -p mysql /usr/local/bin/mysql
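
    On OS X, two mysqld processes after a reboot usually mean two launch mechanisms are both starting MySQL at boot - for example Homebrew's LaunchAgent plus an older LaunchDaemon or startup item left behind by a previous install. A sketch of tracking them down and keeping only one (the plist names are common defaults and are assumptions here):

        # which launchd jobs mention mysql?
        launchctl list | grep -i mysql
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null | grep -i mysql
        # keep the Homebrew one and disable the other, e.g.:
        sudo launchctl unload -w /Library/LaunchDaemons/com.mysql.mysql.plist
        launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist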

    Read the article

  • The server rejected the session-establishment request: WCF hosted on IIS

    - by Dave Hanna
    Background: I'm working on a project where we have about a dozen distinct WCF services implemented in an IIS application, communicating over net.tcp on the default port (808), using the Microsoft Net.Tcp Port Sharing Service. I recently added a self-test method to the base class of each of these services so that I could remotely hit the service and get back a status string verifying that it was in operation. We implement this app in a ladder of environments - Development, QA, UAT, and finally production. My problem: My test program, which instantiates a connection to each service in turn and invokes the self-test method, works fine on all the environments below production. We recently moved the app to production, and I'm getting a weird error that I can't explain: On the first of the services that I hit, I get back an exception: "The server at [URL] rejected the session-establishment request". All the other services respond fine. I initially thought there was something wrong with the particular service that was failing, but I tried rearranging the list of services into a different order, and it SEEMS to always be the first service that I hit that fails. (I say SEEMS because I think once in the early iterations of testing, I saw it happen on the second service that it hit. But I haven't been able to reproduce that.) I've looked at application startup delays, and that doesn't seem to be the problem, because I can come back and run the test again as soon as it finishes - a delay of only a minute or two - and get the same error. Also, in the lower level environments, there is a start up delay of probably 30 seconds to a minute, but the result still comes back as expected. I've tried accessing the services over http from INetManager, and I get intermittent failures on all the services - a particular service will return a yellow screen of death on one invocation, then come up with the expected link to the WSDL on the next one, seconds later. I'm completely at a loss to explain this behavior, or how to resolve it. It may be a configuration issue - the production servers are newly provisioned VMs, and we may not have the config exactly right (whereas all the lower level environments have been running this and other similar apps for some time), but I have no idea what to look for. I've looked at the properties of the app pool that the app is running on and compared it to the lower level environments without finding any differences. If somebody can point me in the right direction, you would have my undying gratitude.

    Read the article

  • DHCP server with multiple interfaces on Ubuntu destroys default gateway

    - by Henrik Kjus Alstad
    I use Ubuntu, and I have many interfaces. eth0, which is my internet connection, and it gets its info from a DHCP-server totally outisde of my control. I then have eth1,eth2,eth3 and eth4 which I have created a DHCP-server for.(ISC DHCP-Server) It seems to work, and I even get an IP-address from the foreign DHCP-server on the internet facing interface. However, for some reason it seems my gateway for eth0 became screwed after I installed my local DHCP-server for eth1-eth4. (I think so because I got an IP for eth0, and I can ping other stuff on the local network, but I cannot get access to the internet). My eth0-specific info in /etc/network/interfaces: auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp auto eth1 iface eth1 inet static address 10.0.1.1 netmask 255.255.255.0 network 10.0.1.0 broadcast 10.0.1.255 gateway 10.0.1.1 mtu 8192 auto eth2 iface eth2 inet static address 10.0.2.1 netmask 255.255.255.0 network 10.0.2.0 broadcast 10.0.2.255 gateway 10.0.2.1 mtu 8192 My /etc/default/isc-dhcp-server: INTERFACES="eth1 eth2 eth3 eth4" So why does my local DHCP-server fuck up the gateway for eth0, when I tell it not to listen to eth0? Anyone see the problem or what I can do to fix it? The problem seems indeed to be the gateways. "netstat -nr" gives: 0.0.0.0 --- 10.X.X.X ---- 0.0.0.0 --- UG 0 0 0 eth3 It should have been 0.0.0.0 129.2XX.X.X 0.0.0.0 UG 0 0 0 eth0 So for some reason, my local DHCP-server overrides the gateway I get from the network DHCP. Edit: dhcp.conf looks like this(I included info only for eth1 subnet): ddns-update-style none; not authoritative; subnet 10.0.1.0 netmask 255.255.255.0 { interface eth1; option domain-name "example.org"; option domain-name-servers ns1.example.org, ns2.example.org; default-lease-time 600; max-lease-time 7200; range 10.0.1.10 10.0.1.100; host camera1_1 { hardware ethernet 00:30:53:11:24:6E; fixed-address 10.0.1.10; } host camera2_1 { hardware ethernet 00:30:53:10:16:70; fixed-address 10.0.1.11; } } Also, it seems that the gateway is correctly set if I run "/etc/init.d/networking restart" in a terminal, but that's not helpful for me, I need the correct gateway to be set during startup, and i'd rather find the source of the problem
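
    One thing that stands out in /etc/network/interfaces, independent of the DHCP server: eth1 and eth2 each declare a "gateway" line, and every gateway line asks ifupdown to install a default route on that interface. The stray "default ... eth3" entry in netstat -nr suggests one of those routes is beating the DHCP-supplied route on eth0. Dropping the gateway lines from the locally served interfaces is worth trying; a sketch of the eth1 stanza (eth2-eth4 follow the same pattern):

        auto eth1
        iface eth1 inet static
            address 10.0.1.1
            netmask 255.255.255.0
            network 10.0.1.0
            broadcast 10.0.1.255
            mtu 8192
            # no "gateway" line here - only eth0 should provide the default route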

    Read the article

  • Setting up a very mixed Active Directory network to work with PowerShell Remote Administration

    - by erictheavg
    Summary: I want to be able to monitor the computers on my network, but don't need it to be automated. We're too small to purchase anything like MOM, but too big to do anything manually (~100 machines in two locations). I just keep running into issues, and was wondering if there's a master list of Group Policy settings I can distribute to my environment to get Remote Powershell working. Environment: Our AD network is pretty mixed. The end users have XP SP3, Win 7, and Win 7 x64. The servers include Win2k3 SP2, Win2k8, Win2k8 x64, Win2k8 R2, and Win2k8 R2 x64. Details: I'm trying to get it to work with Remote Powershell, but I run into errors like the following: Connecting to remote server failed with the following error message : The WinRM client cannot process the request. Default authentication may be used with an IP address under the following conditions: the transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials are provided. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. For more information on how to set TrustedHosts run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic. + CategoryInfo : OpenError: (:) [], PSRemotingTransportException + FullyQualifiedErrorId : PSSessionStateBroken Then I go to the computer (Win2k3 SP2 server) and run winrm quickconfig per the recommendations via google, and it says: Make these changes [y/n]? y WinRM has been updated to receive requests. WinRM service started. WSManFault Message = The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig". Error number: -2144108526 0x80338012 The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig". That's right. It tells me to remedy my winrm quickconfig failure by running winrm quickconfig. I don't want to band-aid this project one google search at a time. I'm sure there is a step-by-step tutorial out there on how to set up a network for powershell remote administration. Does anyone know of one? Books are acceptable. Thanks in advance! I didn't think my question would get this long.

    Read the article

  • Why do Ping and Dig provide a different IP address than nslookup?

    - by user1032531
    When pinging my domain name which points to my home public IP from two different servers on my LAN, it shows them pinging different IP. Further investigation shows dig and nslookup providing different results. See below. A little history. My IP used to be 11.22.33.444 and is hosted by Comcast. I changed routers, and it somehow got changed to 55.66.77.888. I've since updated my 1and1 domain name to point to the 55.66.77.888. desktop is a basic server, runs the web server, and connects wirelessly to my LAN. laptop is a GUI and connected via CAT5. Both operate Centos6.4. My old router was a D-Link, and used their "Virtual Server" feature to pass port 80 to desktop. My new router is a Linksys, and I use their "Port Forwarding" feature to pass port 80 to desktop (however, I haven't gotten this part working yet). What is going on??? Why the different IPs? Obviously, it most somehow be stored on the server, but why does the actual machine even know the public IP since it is on a LAN? How do I purge the old IP? [root@desktop etc]# dig +short myDomain.com 11.22.33.444 [root@desktop etc]# nslookup www.myDomain.com Server: 8.8.8.8 Address: 8.8.8.8#53 Non-authoritative answer: Name: www.myDomain.com Address: 55.66.77.888 [root@desktop etc]# dig myDomain.com ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> myDomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13822 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;myDomain.com. IN A ;; ANSWER SECTION: myDomain.com. 16031 IN A 11.22.33.444 ;; Query time: 21 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Mon Oct 21 04:36:52 2013 ;; MSG SIZE rcvd: 44 [root@desktop etc]# [root@laptop ~]# dig +short myDomain.com 55.66.77.888 [root@laptop ~]# nslookup www.myDomain.com Server: 192.168.0.1 Address: 192.168.0.1#53 Non-authoritative answer: Name: www.myDomain.com Address: 55.66.77.888 [root@laptop ~]#
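
    The differing answers are consistent with caches still holding the old A record until its TTL (the 16031 seconds shown in the dig output) runs out. A quick way to see the current truth is to ask the domain's authoritative nameservers directly and compare that with what the caching resolvers return; a sketch:

        # ask an authoritative nameserver (whichever "dig NS" returns) for the A record
        dig NS myDomain.com +short
        dig "@$(dig NS myDomain.com +short | head -1)" myDomain.com A +short
        # compare with the cached answers and their remaining TTLs
        dig @8.8.8.8 myDomain.com A
        dig @192.168.0.1 myDomain.com A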

    Read the article

  • How to grow to be global sysadmin of an organization?

    - by user64729
    Bit of a non-technical question, but I have seen career-development questions on here before so hopefully it is fine. I work for a fast-growing but still small organization (~65 employees). I have been their external sysadmin for a while now, looking after hosted Linux servers and infrastructure. In the past 12 months I have been transforming into the internal sysadmin for our office too, and I'm currently studying Cisco CCNA to cover the demands of looking after the office LAN, routers, switches and VPNs. Now they want me to look after the global sysadmin function of the organization as a whole.

    The organization has 3 offices in total, 2 in the UK and 1 in the US; I work in one of the UK offices. The other offices are primarily Windows-desktop, AD-domain shops. My office is primarily a Linux shop with a file server and NFS/NIS (no AD domain for the Windows desktops yet, but it's in the works). Each other office has a sysadmin whom in theory I am supposed to supervise, but in reality each is independent. I have a very competent junior sysadmin working with me who shares the day-to-day tasks and does some of the longer-term projects under my supervision.

    My boss has asked me how to grow from being the external sysadmin to the global sysadmin. I am to ponder this and then report back to him on how to achieve it. My current thoughts are:

    - Management training or professional development - e.g. reading books such as "Influencer" and "7 Habits". I also feel I should take steps to improve my communication skills, since a senior person is expected to talk and speak out more often.
    - Learn more about Windows and Active Directory - I'm an LPI-certified guy with a lot of experience in Linux (Ubuntu on the desktop, Debian/Ubuntu as server). Since the other offices are mainly Windows domains, it makes sense to skill up in that area so I can understand what the other admins are talking about.
    - Talk to previous colleagues who are in this role already, to try and get the benefit of their experience.
    - Produce an "IT Roadmap" or similar that maps out where we want the organization to be and when, plotted out over the next couple of years with regard to internal and external infrastructure. I have already produced a "Security Roadmap" which covers some of these things.

    I guess this can be summed up as "thinking more strategically"? I'd appreciate comments from anyone who has been through a similar situation, thanks.

    Read the article

  • Intermittent internet access on a flat network - Router is connected

    - by Naveed
    I'm looking for some help with network settings. I've just started a new job (non-IT!) and we have problems with our office network. I'm the most IT-literate person in the organisation (15 permanent employees) and so have been dealing with IT issues. Our main bit of software is web-based, so we need constant web access, but it sometimes goes down for between 20 minutes and 3 hours despite everything seemingly working fine.

    It's a flat network with wireless APs, a BT Business Broadband 8Mbit connection, and that's about it. We have no servers and no standard settings, and staff are encouraged to bring in their own laptops and connect! The network basically exists to provide internet access. We also have students accessing the wireless (and I know there's a whole list of access and content issues etc., but right now we just need internet access stabilised). This is what we have:

    Building 1
    - Cisco SLM-224P 24-port PoE 10/100 switch with 2 gigabit ports
    - 3 x ZyXEL NWA-3160 wireless APs
    - Samsung OfficeServ 7100 phone server, which borrows the building's wiring

    Building 2
    - Netgear GS605-UK 5-port 10/100/1000 switch
    - 1 x ZyXEL NWA-3160 wireless AP
    - 1 x BT Business Hub (2Wire BT2700HGV), which is the DHCP server

    We have 2 link cables between the buildings. One connects the two switches on a gigabit port. The second (oddly) connects the switch in building 2 to the OfficeServ server in building 1. When the internet goes down I can still access the router through a wireless connection. I can also ping websites and get a response; Firefox just says "Cannot connect" etc. The system then heals itself when it feels like it.

    (Sorry if this is asking too much, but) these are my immediate questions:

    1. Why would browser-based internet go down? I don't know enough about protocols etc., but I can try to standardise settings.
    2. The WAPs have a DNS server setting and I don't know whether it should be "None" or "From DHCP".
    3. What should be the DHCP server? The router or the Cisco switch? Or something else?!
    4. Would there be any problem in connecting the second link from switch to switch? Is that good practice?
    5. Is it worth swapping the Netgear GS605 with either a Cisco SG200-08 or Netgear GS108T-200?
    6. Is it worth upgrading the router to, for instance, a Cisco RV042G Dual Gigabit router which would also act as a switch? Or is it better to have a separate router and switch in Building 2?
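
    Since ping succeeds while the browser fails, it may help to separate the layers the next time the outage hits; a rough diagnostic sketch, assuming a machine with dig and curl available (the IP literal is only an illustration of "reach a site by IP, skipping DNS", not an address anyone needs to use):

        # 1. Does name resolution work right now, and which server answered?
        dig www.bbc.co.uk

        # 2. Can you make a plain HTTP connection by name (DNS + TCP + HTTP)?
        curl -v --connect-timeout 5 http://www.bbc.co.uk/ -o /dev/null

        # 3. Can you make one by raw IP (bypasses DNS entirely)?
        curl -v --connect-timeout 5 http://212.58.244.18/ -o /dev/null    # illustrative IP only

    If step 1 fails or stalls while step 3 succeeds, the problem is DNS, which would make the WAP and DHCP DNS settings the first thing to standardise; if steps 2 and 3 both fail while ping still works, something is dropping TCP port 80 and the BT hub is the prime suspect.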

    Read the article

  • iftop Shows Lots of Mysterious Connections - Not Showing in netstat

    - by HOLOGRAPHICpizza
    I've just stopped pretty much all services except sshd on my server (Ubuntu Server 10.04), and when I run iftop I get output that looks like this:

                          12.5Kb        25.0Kb        37.5Kb        50.0Kb   62.5Kb
        flash.gateway.2wire.net:ssh   <=> 172.16.1.151:60405      1.75Kb  1.54Kb  2.22Kb
        flash.gateway.2wire.net:21095 <=> 69.127.29.20:32582       536b    107b     27b
        flash.gateway.2wire.net:21095 <=> 190.164.122.134:13557      0b    105b     26b
        flash.gateway.2wire.net:21095 <=> 79.165.212.195:45138       0b    105b     26b
        flash.gateway.2wire.net:21095 <=> 151.42.15.151:9031         0b     72b     18b
        flash.gateway.2wire.net:21095 <=> 88.185.120.179:51413       0b      0b     49b
        flash.gateway.2wire.net:21095 <=> 178.120.152.97:25924       0b      0b     29b
        flash.gateway.2wire.net:21095 <=> 109.110.217.77:27868       0b      0b     26b
        flash.gateway.2wire.net:21095 <=> 84.13.201.90:16509         0b      0b     26b
        flash.gateway.2wire.net:21095 <=> 171.7.125.224:11777        0b      0b     26b
        flash.gateway.2wire.net:21095 <=> 115.177.164.170:21360      0b      0b     26b
        flash.gateway.2wire.net:21095 <=> 50.88.126.18:25540         0b      0b     25b
        flash.gateway.2wire.net:21095 <=> 223.206.230.163:13431      0b      0b     25b
        flash.gateway.2wire.net:21095 <=> 78.144.187.26:24515        0b      0b     25b
        flash.gateway.2wire.net:21095 <=> 83.20.61.211:27572         0b      0b     25b
        flash.gateway.2wire.net:21095 <=> 82.134.151.42:18448        0b      0b     18b
        flash.gateway.2wire.net:21095 <=> 126.117.95.247:25316       0b      0b     18b
        flash.gateway.2wire.net:21095 <=> 116.202.65.230:9044        0b      0b     18b
        flash.gateway.2wire.net:21095 <=> 88.120.63.205:51413        0b      0b     17b
        TX:    cumm:  61.6KB   peak:  8.00Kb   rates:  1.59Kb  1.38Kb  2.04Kb
        RX:           18.4KB          1.64Kb            696b    549b    640b
        TOTAL:        80.0KB          9.64Kb           2.27Kb  1.92Kb  2.66Kb

    This is the first part (not the Unix socket part) of the output of netstat -a:

        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address            Foreign Address      State
        tcp        0      0 *:ssh                    *:*                  LISTEN
        tcp        0      0 *:55677                  *:*                  LISTEN
        tcp        0      0 flash.gateway.2wire:ssh  172.16.1.151:60405   ESTABLISHED
        tcp        0     48 flash.gateway.2wire:ssh  172.16.1.151:60661   ESTABLISHED
        tcp6       0      0 [::]:ssh                 [::]:*               LISTEN
        udp        0      0 *:37790                  *:*

    What could all those strange connections on port 21095 be? And why would they not show up in netstat? Any advice would be greatly appreciated.
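
    A hedged reading, not a confirmed answer: that swarm of tiny, short-lived flows to one local port looks like connectionless UDP traffic (BitTorrent DHT chatter aimed at a port once used or forwarded for a peer-to-peer client is a common culprit), and netstat only lists UDP sockets that are actually bound locally - it has no notion of a UDP "connection" - so inbound packets to an unbound port never appear there. The interface name below is an assumption; adjust it to whatever the machine actually uses.

        # Show all TCP and UDP sockets with the owning process (needs root):
        sudo netstat -tunap
        sudo ss -tunap            # ss gives similar output if available

        # Is anything on this box actually bound to 21095?
        sudo lsof -i :21095

        # Capture a few packets on that port to confirm protocol and source:
        sudo tcpdump -n -i eth0 -c 20 port 21095

    If nothing owns the port, the packets are just background noise hitting the host and being dropped, which iftop still counts because it sniffs the wire rather than asking the kernel's socket table.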

    Read the article

< Previous Page | 495 496 497 498 499 500 501 502 503 504 505 506  | Next Page >