Search Results

Search found 14745 results on 590 pages for 'setting'.

  • IIS FTP 7.5 Data Channel Problem (SSL)

    - by user59050
    Hey there, I wonder if anyone can point me in the right direction. I am setting up both an FTPS client and server, with the server using Microsoft's IIS FTP 7.5. The client runs on Linux and uses M2Crypto for the OpenSSL wrapping (Python). I suspect the problem is on the server side (IIS 7.5) because of the following: if I host using FileZilla with BOTH the control and data channels forced to be encrypted, it works 100% (complete file transmission). If I use IIS as the server, everything works up to the point where the data channel takes over, i.e. all data of the retrieved file is already received correctly on my side, but the FTP server just won't send the final '226 Transfer complete.' on the command socket. Why? If I force the client or server to close the connection, the file is 100% intact. If I use IIS 7.5 with forced encryption on the control channel only, everything works as long as I don't force the data channel. Here are some screenshots to demo this (client view after killing the client): pics @ http://forums.iis.net/p/1172936/1960994.aspx#1960994 Summary: we can establish the connection, do directory listings, start the upload and see the file (0 bytes) created on the server, but then the client hangs. If we terminate the client, the uploaded file on the server suddenly jumps up to full size.
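    One cheap cross-check for this symptom is to reproduce the transfer with Python's standard-library ftplib instead of the M2Crypto wrapper; the sketch below is only illustrative (it assumes a Python 2.7+/3.x interpreter and uses a placeholder host and credentials), not the poster's actual client:

      from ftplib import FTP_TLS

      ftps = FTP_TLS('ftp.example.com')      # placeholder host
      ftps.login('user', 'password')         # placeholder credentials
      ftps.prot_p()                          # encrypt the data channel as well as the control channel
      print(ftps.retrlines('LIST'))          # prints the final response; expect '226 Transfer complete.'
      ftps.quit()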

    Read the article

  • Asterisk terminating outbound call when picked up, sends 'BYE' message

    - by vo
    I'm running Asterisk 1.6.1.10 / FreePBX 2.5.2.2 and I've got an outbound trunk set up. Everything used to work fine until recently (perhaps due to the upgrade to FC12, or other things, I'm not sure). Anyway, the setup does not appear to have issues registering and setting up the call; RTP packets go both ways and you can hear the ringing from the other side. However, when the call is picked up (or thereabouts), the incoming RTP packets cease. Upon closer inspection with Wireshark, these particular packets seem to be the cause:
      trunk->asterisk   SIP/SD Status: 200 OK, with session description
      asterisk->trunk   SIP Request: ACK sip:<phone>@trunk:6889
      asterisk->trunk   SIP Request: BYE sip:<phone>@trunk:6889
      [..about a dozen RTP packets in/outbound..]
      trunk->asterisk   SIP Status: 200 OK, CSeq: 104 Bye
      [..outbound RTP continues, phone is silent..]
    Then the inbound RTP packets cease, but the Asterisk logs don't show any activity at this point; the last entry reads 'SIP/ is answered SIP/'. Then, when you hang up the extension, you get:
      asterisk->trunk   SIP Request: BYE sip:<phone>@trunk:6889
      trunk->asterisk   SIP Status: 481 Call Leg/Transaction does not exist
    My trunk peer settings in FreePBX are:
      username=<user>
      fromuser=<user>
      canreinvite=no
      type=friend
      secret=<pass>
      qualify=no   [qualify yes produces 401/forbidden messages]
      nat=yes
      insecure=very
      host=<sip trunk gateway>
      fromdomain=<sip trunk gateway>
      disallow=all
      context=from-pstn
      allow=ulaw
      dtmfmode=inband
    Under sip_general_custom.conf I have:
      stunaddr=stun.xten.com
      externrefresh=120
      localnet=192.168.1.1/255.255.255.0
      nat=yes
    What's causing Asterisk to prematurely end the call and still think the call is in progress? I have no idea where to look next.
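    Not a fix in itself, but Asterisk's own SIP trace usually shows which side of the dialog triggers that early BYE, and it can be run alongside the Wireshark capture. A small sketch using the 1.6-era CLI syntax (the peer name and log path are assumptions based on FreePBX defaults):

      asterisk -rx 'sip set debug peer mytrunk'   # mytrunk = the FreePBX trunk peer name
      asterisk -rx 'core set verbose 5'
      tail -f /var/log/asterisk/full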

    Read the article

  • ASP.NET sending email through exchange problem

    - by Solmead
    I have an Exchange 2010 server running on Windows 2008 R2, and a remote web server running Windows 2003 with multiple sites on it (all ASP.NET MVC 2 sites). I set up a Transport in Exchange, and all the websites on the remote web server can send email without a problem, both to anyone on the Exchange server and to any external domain. Now for my problem: I am having issues with that web server, so I moved one of the websites to run on the Exchange server itself. It runs well (it's a low-traffic site) except that email doesn't work from that site. I tried changing the Transport in Exchange to add the IP address of the local machine and 127.0.0.1, and it still isn't sending any email. Any ideas on how to get this working? The remote websites can still send email without a problem, and the copy of the site that I had to move, still running on the remote server, can still email, but on the Exchange server email from that website does not send. I would guess it is a Transport issue; since it is running on the same server, a firewall shouldn't be the problem. I changed the SMTP setting in web.config to localhost, and now I do receive email to my account on the Exchange server, but no email reaches outside addresses. For more detail: this is a custom-developed ASP.NET MVC 2 website, and no errors are generated in the code when sending the email in either case.
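    For reference, the web.config block being discussed is the standard system.net mail settings; a hedged sketch with placeholder host and sender addresses (whether localhost or the Exchange server's own FQDN is the right target depends on which receive connector permits relay for 127.0.0.1):

      <system.net>
        <mailSettings>
          <smtp deliveryMethod="Network" from="site@example.com">
            <network host="mail.example.com" port="25" defaultCredentials="true" />
          </smtp>
        </mailSettings>
      </system.net>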

    Read the article

  • Strange windows boot problem - My computer won't boot from hard disk

    - by user29779
    Hello, until yesterday I had Windows XP installed on my computer. After installing the OS and setting the BIOS to boot from the hard disk first, I discovered a strange problem: the OS wasn't booting, and the only way I could get it to load was to insert a Windows installation disk, enter repair mode, run "fixboot" and restart. The problem only occurred when the computer was booted after a shut-down; if I only restarted it, everything worked fine. Yesterday I upgraded from XP to Windows 7 and the problem persists. I tried the same "trick" I used with XP to get it to load, entering repair mode and running "bootrec /fixmbr" and "bootrec /fixboot", but that didn't work (and when I ran "scanos" it didn't find any Windows installations). Eventually I got it to load by changing the BIOS settings to boot from CD first and HD second, removing the installation disk from the drive, letting it fail to boot from CD and then re-inserting the disk. Does anyone have any idea what the cause may be, or how I can investigate this issue? Thanks! Marina

    Read the article

  • Using gentoo, how does one stick -9999 ebuild to a specific svn revision?

    - by hurikhan77
    As an example, given the django-9999 ebuild, to match the developers' environment I need to check out r12120 from trunk. Installing Django manually is not an option for package-management reasons, but there is also no ebuild in portage for the 1.2 beta versions. So I did the following:
      ESVN_OPTIONS="-r12120" emerge -1a django
    which installed the required revision from svn. But this is cumbersome in a way. Is there some way to define this statically per ebuild, e.g. something like DJANGO_SVN_REV="12120" in make.conf? That would be much cleaner in my eyes, because next time I need to rebuild django for whatever reason I'll need to remember "oh, I wanted this to stick to a specific revision", and the next question will be "err, f&!#$?%, what was it again?" What's the best way to go here? Keep in mind:
    - Manually installing packages without package-manager knowledge is not an option.
    - Working around with manual emerge variable prefixing is not an option.
    - Setting up /etc/portage/package.env would be a way to go (as described here), but that seems pretty unsupported and kludgy to me and thus unpreferable (a sketch of that variant follows below).
    - Modifying make.conf would be a way to go.
    - Keeping the ebuild in an overlay would be an option.
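    For completeness, a sketch of what the per-package env-file route looks like; the package atom and file names here are assumptions:

      # /etc/portage/env/django-r12120.conf
      ESVN_OPTIONS="-r12120"

      # /etc/portage/package.env
      dev-python/django django-r12120.conf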

    Read the article

  • IIS 7.5 FTP Service crashes after installation of Advanced Logging 1.0 Module

    - by Jeremy
    I've recently been tasked with setting up two new production servers for an ASP.NET application. The servers sit behind an F5 load balancer, which forwards the end user's IP address via the standard X-Forwarded-For HTTP header. All of the reading I have done suggests that I need to install the IIS Advanced Logging module in order to take advantage of that header. Some quick background: both web servers are Windows 2008 R2 Standard (x64), with IIS 7.5 installed and configured. The FTP role has also been installed, configured and is operational. The issue: after installing the IIS Advanced Logging module via the Web Platform Installer, I noticed the following error in the Event Viewer:
      The FTP Service encountered an error trying to read configuration data from file \\?\C:\Windows\system32\inetsrv\config\applicationHost.config, line number 374. The error message is: Unrecognized element 'advancedLogging'
    Trying to connect over FTP to either of the web servers results in a 530. I've spent two hours scouring Google trying to find a solution, short of uninstalling the Advanced Logging module. As far as I can tell, there is no way to turn off Advanced Logging on a per-site basis. Help would be appreciated.

    Read the article

  • Cannot find "IIS APPPOOL\{application pool name}" user account in Windows Server 2008

    - by MacGyver
    Normally when setting up IIS 7, I'm used to allowing permissions for the user IIS APPPOOL\{application pool name} on the root folder of my web application(s). I also give permissions to IUSR (or the IIS_IUSRS user group; note that in Windows Server 2008 I found that IUSR isn't in that group by default, so I added it). In Windows Server 2008, I cannot find the user IIS APPPOOL\{application pool name} under Security in the Windows folder properties. I'm using Windows authentication in ASP.NET. I'm receiving a 401.1 on the page in Internet Explorer 8 after getting the authentication prompt. Mozilla Firefox also gave me a Windows authentication prompt and got me into the site fine; same with Google Chrome. How can I solve this one?
      HTTP Error 401.1 - Unauthorized
      You do not have permission to view this directory or page using the credentials that you supplied.
      Specific page information:
      Module: WindowsAuthenticationModule
      Notification: AuthenticateRequest
      Handler: PageHandlerFactory-ISAPI-4.0_32bit
      Error Code: 0x8009030e
      Requested URL: http://.....aspx
      Physical Path: C:\.........aspx
      Logon Method: Not yet determined
      Logon User: Not yet determined
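    The IIS APPPOOL\{application pool name} accounts don't appear in the object picker's search results, but they can still be granted rights by typing the name in directly (with the location set to the local machine) or from an elevated command prompt. A sketch with a placeholder pool name and path:

      icacls "C:\inetpub\wwwroot\MyApp" /grant "IIS APPPOOL\MyAppPool":(OI)(CI)RX /t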

    Read the article

  • Multi-IP address zimbra server DNS PTR records and spam

    - by David Fraser
    We have a mail server running Zimbra (ZCS 6.0.8). The server has 5 active public IP addresses in the same subnet (.226-.230). I currently have A records for each of these (host0.domain.com..host4.domain.com), with the main host.domain.com of the machine pointing to .226. Our host has ended up listed on the SORBS DUHL list (even though it's in a server farm). According to them, you can get removed quickly by checking that your host has an MX record, an A record, and a PTR record that points back to the hostname given in the MX record. I tried setting the PTR records so that each of these addresses resolved back to its A record (i.e. .228 had a PTR to host2.domain.com). However, I then got mail rejected by other servers, because when Postfix (under Zimbra control) sends out mail it uses the main hostname for the HELO; there doesn't seem to be any way to override it. So the PTR records currently say host.domain.com for all 5 IP addresses. What's the correct way to handle this? Should I have an A record for the domain that points to all the IP addresses (for round-robin handling)? I'm nervous about changes that could cause problems, so I'm wondering what the standard way to handle a multi-IP-address mail server is.
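    Whatever layout is chosen, the MX/A/PTR alignment the delisting advice asks for can be checked from outside with dig; a quick sketch (the full .226 address below is a placeholder):

      dig +short mx domain.com            # the MX target...
      dig +short a host.domain.com        # ...must resolve to the address mail is sent from...
      dig +short -x 203.0.113.226         # ...and the PTR for that address must point back at the same name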

    Read the article

  • Gnome 3 gdm fails to start after preupgrade from fedora 14 to 15

    - by digital illusion
    I'm not able to boot Fedora 15 into runlevel 5. After all services start, when the login screen should appear, gdm just shows a waiting mouse cursor and keeps restarting itself. From /var/log/gdm/\:0-greeter.log:
      Gtk-Message: Failed to load module "pk-gtk-module"
      /usr/bin/gnome-session: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type
      /usr/libexec/gnome-setting-daemon: symbol lookup error: /usr/lib/gtk-3.0modules/libatk-bridge.so: undefined symbol: atk_plug_get_type
    Where should atk_plug_get_type be defined? Edit: here is a better description of the error:
      (system-config-network-gui:2643): Gnome-WARNING **: Accessibility: failed to find module 'libgail-gnome' which is needed to make this application accessible
      /usr/bin/python: symbol lookup error: /usr/lib/gtk-2.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type
    Why are there still references to gtk2? Did preupgrade fail? Attaching the upgrade log... it seems gdm was not added, but it is present in the users and groups list.
      May 26 11:25:52 sysimage sendmail[1076]: alias database /etc/aliases rebuilt by root
      May 26 11:25:52 sysimage sendmail[1076]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
      May 26 11:46:09 sysimage useradd[1793]: failed adding user 'dbus', data deleted
      May 26 11:53:37 sysimage systemd-machine-id-setup[2443]: Initializing machine ID from D-Bus machine ID.
      May 26 11:55:28 sysimage useradd[2835]: failed adding user 'apache', data deleted
      May 26 11:55:38 sysimage useradd[2842]: failed adding user 'haldaemon', data deleted
      May 26 11:55:43 sysimage useradd[2848]: failed adding user 'smolt', data deleted
      May 26 11:57:32 sysimage sendmail[3032]: alias database /etc/aliases rebuilt by root
      May 26 11:57:32 sysimage sendmail[3032]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
      May 26 11:57:46 sysimage groupadd[3066]: group added to /etc/group: name=cgred, GID=482
      May 26 11:57:47 sysimage groupadd[3066]: group added to /etc/gshadow: name=cgred
      May 26 11:57:47 sysimage groupadd[3066]: new group: name=cgred, GID=482
      May 26 11:58:42 sysimage useradd[3086]: failed adding user 'ntp', data deleted
      May 26 12:00:13 sysimage dbus: avc: received policyload notice (seqno=2)
      May 26 12:15:08 sysimage useradd[4950]: failed adding user 'gdm', data deleted
      May 26 12:24:39 sysimage dbus: avc: received policyload notice (seqno=3)
      May 26 12:25:24 sysimage useradd[5522]: failed adding user 'mysql', data deleted
      May 26 12:25:37 sysimage useradd[5533]: failed adding user 'rpcuser', data deleted
      May 26 12:26:31 sysimage useradd[5592]: failed adding user 'tcpdump', data deleted
    Any suggestions before I revert the installation to F14?

    Read the article

  • Weird fluctuating time on a XEN linux guest

    - by Vin-G
    I have a weird problem with some servers here at work. We have a few Xen guests whose current time fluctuates:
      # date;date;date;date;date;date;date
      Thu Feb 25 16:00:40 PHT 2010
      Thu Feb 25 16:00:48 PHT 2010
      Thu Feb 25 16:00:40 PHT 2010
      Thu Feb 25 16:00:48 PHT 2010
      Thu Feb 25 16:00:40 PHT 2010
      Thu Feb 25 16:00:48 PHT 2010
      Thu Feb 25 16:00:40 PHT 2010
    As seen above, the time fluctuates between 16:00:48 and 16:00:40, which is a problem for us because computing time differences in some of our scripts becomes inaccurate (e.g. what should be a difference of a few milliseconds becomes a difference of several seconds, sometimes even a negative difference). The problematic servers are Linux guests on a Xen host. The time fluctuates on the guest systems, but it is fine on the host itself. I've ruled out ntpd, since this happens regardless of whether ntpd is running on the guest systems. The guests are fully virtualised. The time on both the host and the guest matches, except that the time in the guest fluctuates by a few seconds around the host's time, and the host's time does not fluctuate. /proc/sys/xen/independent_wallclock is 0 on the host and does not exist in the guest. The ntpd service was stopped and disabled. Setting independent_wallclock to 1 on the host has no effect (that is, the time still fluctuates in the guest), though I was not able to restart the guest as it is a production server; I might be able to do that over the weekend. Any ideas on what to check and how to resolve this problem?
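    One thing worth checking inside the fully virtualised guest is which clocksource the kernel is using; the sysfs paths below are standard, but switching sources is offered as a hedged suggestion rather than a guaranteed fix (it takes effect immediately and can be reverted the same way):

      cat /sys/devices/system/clocksource/clocksource0/current_clocksource
      cat /sys/devices/system/clocksource/clocksource0/available_clocksource
      echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource   # as root, if acpi_pm is listed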

    Read the article

  • One Comcast Business Gateway, One Router, Two Web Servers

    - by Kevin Scheidt
    I have a Comcast business account with a router and a web server (info) attached. Behind the router there are multiple computers and a second web server (info), which also serves as a file server. (info) has two NICs in it: one direct to Comcast and one connected to the router. It needs to serve its websites to the world; it also needs, however, to be able to see all the internal computers and (com)'s served files. With just one NIC (the one connected to the router, not Comcast), (info) works fine but no one outside can see it. (com) serves port 80, and (info) needs to handle port 80 as well. I have two domain names registered and 5 static IPs from Comcast. Right now http://www.graceamazing.com, handled by (com), works fine, and http://www.graceamazing.com:1307, handled by (info), works fine. But as soon as I enable the 2nd NIC in (info), http://www.graceamazing.info runs extremely slowly (horribly slow); however, http://www.graceamazing.com:1307 and .com still work fine. (com) has an IP address via the router, 70.89.233.41; (info) has an IP address of 70.89.233.46 via Comcast (2nd NIC) and an internal IP of 192.168.x.100, set statically, behind the router. Any suggestions or changes that would make http://www.graceamazing.info perform with the same speed it has when going through http://graceamazing.com:1307? Is there a setting I should check or could have missed?

    Read the article

  • How to get a new-pssession in PowerShell to talk to my ICS-connected laptop for Remoting

    - by Scott Bilas
    If I have my laptop on the LAN, then Powershell remoting works fine from my workstation to the laptop. However, the LAN is wireless, and so sometimes I will connect on a wire to my workstation. It has two ethernet ports so I have the secondary wired up to share to the laptop using Win7's Internet Connection Sharing. (Btw I know that avoiding ICS would solve the problem, but that's not an option right now.) So my question is: what magic registry bits or command line options do I need to flip to get remoting to work to my laptop through ICS? Here's what happens when I try it:
      new-pssession -computername 192.168.137.161
      [192.168.137.161] Connecting to remote server failed with the following error message : The WinRM client cannot process the request. Default authentication may be used with an IP address under the following conditions: the transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials are provided. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. For more information on how to set TrustedHosts run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic.
      + CategoryInfo : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException
      + FullyQualifiedErrorId : PSSessionOpenFailed
    I'm having a hard time understanding the documentation for PowerShell and WinRM. I've tried messing with allowing ports in the firewall and setting TrustedHosts to * on my workstation (don't think this is a good idea on the laptop). I have no idea where to go from here, would appreciate any help.
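    The error itself points at the client-side TrustedHosts requirement for IP-address targets; a sketch of satisfying it on the workstation (the account name is a placeholder, run from an elevated PowerShell):

      Set-Item WSMan:\localhost\Client\TrustedHosts -Value '192.168.137.161' -Concatenate -Force
      Get-Item WSMan:\localhost\Client\TrustedHosts          # confirm the entry
      New-PSSession -ComputerName 192.168.137.161 -Credential (Get-Credential laptop\username)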

    Read the article

  • PFSense VPN Routing

    - by SvrGuy
    We use pfSense firewalls at three installations, with the following LAN networks:
    1.) Datacenter #1: 10.0.0.0/16
    2.) Datacenter #2: 10.1.0.0/16
    3.) HQ: 10.2.0.0/16
    All of these locations are linked via IPsec tunnels that work properly: hosts in any of the above networks can communicate with hosts in any other of the above networks. Now, for our laptops etc. we established a road-warrior network, 10.3.0.0/16, and have implemented OpenVPN to link the laptops to Datacenter #1. This works great too, so our laptops can connect and communicate with any host in Datacenter #1 (anything on 10.0.0.0/16). The problem is that the laptops can't communicate with any hosts that Datacenter #1 can reach over its IPsec tunnel to Datacenter #2 (and/or the HQ, for that matter). Does anyone know what to configure on the pfSense box in Datacenter #1 to route packets received on the OpenVPN tunnel to Datacenter #2 over the IPsec tunnel? It could be a setting on the OpenVPN side or some sort of static route or some such. Any ideas?
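    A sketch of the idea in raw OpenVPN/IPsec terms rather than exact pfSense UI fields: push the remote LANs to the road warriors, and make sure the IPsec tunnels at both ends have Phase 2 entries covering 10.3.0.0/16 so they will carry and return that traffic:

      # OpenVPN server (Datacenter #1), advanced/custom options:
      push "route 10.1.0.0 255.255.0.0"
      push "route 10.2.0.0 255.255.0.0"
      # IPsec: add 10.3.0.0/16 <-> 10.1.0.0/16 (and 10.2.0.0/16) as additional Phase 2 entries
      # on the Datacenter #1 tunnels, and mirror them on the far ends.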

    Read the article

  • Restricting memory area for linux kernel

    - by user1066789
    I am running LTIB Linux on the P1022RDK (P1022 core) platform. I have 512 MB = 0x20000000 of memory. I want my Linux kernel to use the second half of the board memory (i.e. from 256 MB to 512 MB) and want the first half of memory to be reserved for some other purpose. For this I am building the Linux kernel using LTIB, and for that purpose I am setting the following kernel configuration. Please suggest if I am doing it the right way:
      CONFIG_LOWMEM_SIZE = 0x10000000      # 256 MB
      CONFIG_PHYSICAL_START = 0x10000000   # starting from 256 MB (second half of memory)
    In U-Boot I am loading the kernel the following way:
      setenv loadaddr 0x11000000   # kernel base = 0x10000000 + 0x01000000 (offset)
      setenv fdtaddr 0x10c00000    # kernel base = 0x10000000 + 0x00c00000 (offset)
      bootm $loadaddr - $fdtaddr
    My kernel load address is 0x10000000 and the kernel entry point is 0x10000000. With the above configuration / steps, my kernel gets stuck at the following point in U-Boot:
      ## Booting kernel from Legacy Image at 11000000 ...
      Image Name: Linux-2.6.32.13
      Image Type: PowerPC Linux Kernel Image (gzip compressed)
      Data Size: 3352851 Bytes = 3.2 MB
      Load Address: 10000000
      Entry Point: 10000000
      Verifying Checksum ... OK
      ## Flattened Device Tree blob at 10c00000
      Booting using the fdt blob at 0x10c00000
      Uncompressing Kernel Image ... OK
      ================ It should uncompress FDT here & continue ==============
    Any thoughts?
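    One hedged thought: the kernel sizes physical memory from the device tree, so if the DTB passed at 0x10c00000 still describes RAM as starting at 0 with 512 MB, the relocated kernel will still try to map the lower half. A sketch of the memory node the board .dts would need in that case (addresses taken from the question, offered as an assumption rather than a known fix):

      memory {
          device_type = "memory";
          reg = <0x10000000 0x10000000>;   /* start = 256 MB, size = 256 MB */
      };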

    Read the article

  • Azure's Ubuntu 12.04 fails to install PHP5

    - by Alex Kennberg
    Similar to this article from Azure themselves (http://www.windowsazure.com/en-us/manage/linux/common-tasks/install-lamp-stack/), I am trying to install PHP5 on an Ubuntu 12.04 virtual machine. However, it fails installing ssl-cert:
      $ sudo apt-get install php5
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      php5 is already the newest version.
      0 upgraded, 0 newly installed, 0 to remove and 49 not upgraded.
      1 not fully installed or removed.
      After this operation, 0 B of additional disk space will be used.
      Do you want to continue [Y/n]? y
      Setting up ssl-cert (1.0.28) ...
      Could not create certificate. Openssl output was:
      Generating a 2048 bit RSA private key
      ............................+++
      ...................................................................................................................+++
      writing new private key to '/etc/ssl/private/ssl-cert-snakeoil.key'
      -----
      problems making Certificate Request
      140320238503584:error:0D07A097:asn1 encoding routines:ASN1_mbstring_ncopy:string too long:a_mbstr.c:154:maxsize=64
      dpkg: error processing ssl-cert (--configure):
      subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
      ssl-cert
      E: Sub-process /usr/bin/dpkg returned an error code (1)
    Any tips appreciated.
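    The openssl complaint ("string too long ... maxsize=64") suggests the certificate CN, which is taken from the VM's hostname, exceeds 64 characters, and Azure-generated names can be long. A hedged sketch of one way to recover (the shorter name is a placeholder):

      hostname                        # check the length of the current name
      sudo hostname shorter-name      # anything well under 64 characters
      sudo make-ssl-cert generate-default-snakeoil --force-overwrite
      sudo apt-get -f install         # let dpkg finish configuring ssl-cert and php5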

    Read the article

  • JVM system time runs faster than HP UNIX OS system time

    - by winston
    Hello, I have the following output from a simple debug JSP:
      Weblogic Startup Since: Friday, October 19, 2012, 08:36:12 AM
      Database Current Time: Wednesday, December 12, 2012, 11:43:44 AM
      Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:45:38 AM
    Line 1 is a variable recorded during WebLogic webapp startup. Line 2 is the output of the database query select sysdate from dual. Line 3 is the output of the Java code new Date(). I have checked with the shell date command that the line 2 output matches the OS time. The output of line 3 is mysterious; I don't know how the Java VM arrives at it. On another machine with the same setup, the same JSP outputs:
      Weblogic Startup Since: Tuesday, December 11, 2012, 02:29:06 PM
      Database Current Time: Wednesday, December 12, 2012, 11:51:48 AM
      Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:51:50 AM
    And on another machine:
      Weblogic Startup Since: Monday, December 10, 2012, 05:00:34 PM
      Database Current Time: Wednesday, December 12, 2012, 11:52:03 AM
      Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:52:07 AM
    Findings: the pattern shows that the longer WebLogic has been up, the larger the discrepancy between OS time and JVM time. Can anybody help with the HP JVM? On HP-UX, NTP is run daily. Anyway, here are the server versions:
      HP-UX machinex B.11.31 U ia64 2426956366 unlimited-user license
      java version "1.6.0.04"
      Java(TM) SE Runtime Environment (build 1.6.0.04-jinteg_28_apr_2009_04_46-b00)
      Java HotSpot(TM) Server VM (build 11.3-b02-jre1.6.0.04-rc2, mixed mode)
      WebLogic Server Version: 10.3.2.0
    Java properties:
      java.runtime.name=Java(TM) SE Runtime Environment
      java.runtime.version=1.6.0.04-jinteg_28_apr_2009_04_46-b00
      java.vendor=Hewlett-Packard Co.
      java.vendor.url=http\://www.hp.com/go/Java
      java.version=1.6.0.04
      java.vm.name=Java HotSpot(TM) 64-Bit Server VM
      java.vm.info=mixed mode
      java.vm.specification.vendor=Sun Microsystems Inc.
      java.vm.vendor="Hewlett-Packard Company"
      sun.arch.data.model=64
      sun.cpu.endian=big
      sun.cpu.isalist=ia64r0
      sun.io.unicode.encoding=UnicodeBig
      sun.java.launcher=SUN_STANDARD
      sun.jnu.encoding=8859_1
      sun.management.compiler=HotSpot 64-Bit Server Compiler
      sun.os.patch.level=unknown
      os.name=HP-UX
      os.version=B.11.31

    Read the article

  • VirtualBox problems writing to shared folders (Guest Additions installed)

    - by vincent
    I am trying to set up a shared folder from the host (Ubuntu 10.10) to mount on a virtualized CentOS 5.5 guest with Guest Additions (4.0.0) installed (Guest Additions features are working, e.g. seamless mode etc.). I am able to successfully mount the share with:
      mount -t vboxsf -o rw,exec,uid=48,gid=48 sf_html /var/www/html/
    (the uid and gid belong to the apache user/group). The only problem is that once it is mounted and I try to write or create directories and files, I get the following:
      mkdir: cannot create directory `/var/www/html/test': Protocol error
    I am using the proprietary version of VirtualBox, version 4.0.0 r69151. Has anyone had the same problem and been able to fix it, or has any idea how to potentially fix this? Another question: the reason for setting this up is this. Our production servers are on CentOS 5.5; however, I am a great fan of Ubuntu and would like to develop on Ubuntu rather than CentOS. In order to stay as close to the production environment as possible, I would like to virtualize CentOS to run the web server and use the shared folder as the web root. Does anyone know whether this is a bad idea? Has anyone successfully been able to set this up? Thanks guys, your help is always much appreciated, and if you need any more information please let me know.
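    Two low-risk things to try, offered as assumptions rather than a known fix: mount with explicit directory/file modes, and confirm the Guest Additions inside the CentOS guest actually match the 4.0.0 host:

      mount -t vboxsf -o rw,exec,uid=48,gid=48,dmode=0775,fmode=0664 sf_html /var/www/html/
      VBoxControl --version     # inside the CentOS guest
      VBoxManage --version      # on the Ubuntu host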

    Read the article

  • HAProxy "503 Service Unavailable" for webserver running on a KVM virtual machine

    - by Menda
    I'm setting up a server with KVM (IP 192.168.0.100) and I have created inside of it one virtual machine, using network bridging, at 192.168.0.194. This virtual machine has an nginx instance running, which I can access from the server or from any computer in the internal network just by typing http://192.168.0.194 in the browser. However, I am trying to configure HAProxy on the same server that hosts KVM, and looking at the HAProxy status page it always shows the virtual machine as "DOWN". If I go to http://localhost on the server, it should behave the same as going to http://192.168.0.194. My goal is to build a reverse proxy, but I tried this little example and it won't work. What am I doing wrong? This is my config file on the server:
      # /etc/haproxy/haproxy.cfg
      global
        maxconn 4096
        user haproxy
        group haproxy
        daemon
      defaults
        log global
        mode http
        option httplog
        option dontlognull
        retries 3
        option redispatch
        maxconn 2000
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000
      listen ServerStatus *:8081
        mode http
        stats enable
        stats auth haproxy:haproxy
      listen Server *:80
        mode http
        balance roundrobin
        cookie JSESSIONID prefix
        option httpclose
        option forwardfor
        option httpchk HEAD /check.txt HTTP/1.0
        server mv1 192.168.0.194:80 cookie A check
    Thanks.
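    With this config, the backend is only marked UP if the guest answers the health check: "option httpchk HEAD /check.txt HTTP/1.0" plus "check" on the server line means a missing /check.txt is enough to produce a permanent DOWN and a 503. A quick sketch to test it by hand from the KVM host:

      curl -I http://192.168.0.194/check.txt     # the check passes on a 2xx/3xx response
      # or, to confirm the proxying path itself, temporarily drop the health check:
      #   server mv1 192.168.0.194:80 cookie A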

    Read the article

  • BackupPC - why does it use rsync --sender --server ... ?

    - by Jakobud
    I'm in the process of experimenting with BackupPC on a CentOS 5.5 server. I have everything pretty much set up with default values. I tried setting up a basic backup of a host's /www directory. The backup fails with the following errors:
      full backup started for directory /www
      Running: /usr/bin/ssh -q -x -l root target /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/
      Xfer PIDs are now 30395
      Read EOF: Connection reset by peer
      Tried again: got 0 bytes
      Done: 0 files, 0 bytes
      Got fatal error during xfer (Unable to read 4 bytes)
      Backup aborted (Unable to read 4 bytes)
      Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 0)
    First of all, yes, I have my SSH keys set up so that I can ssh to the target server without requiring a password. In the process of troubleshooting, I tried the above ssh command directly from the command line, and it hangs. Looking at the end of the SSH debug messages I get:
      debug1: Sending subsystem: /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/
      Request for subsystem '/usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/' failed on channel 0
    Next I started looking at the rsync flags. I did not recognize --server and --sender. Looking at the rsync man pages, sure enough, I don't see anything about --server or --sender in there. What are those for? Looking at the BackupPC config I have this:
      RsyncClientPath = /usr/bin/rsync
      RsyncClientCmd = $sshPath -q -x -l root $host $rsyncPath $argList+
    And for the arguments, I have the following listed:
      --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive
    Notice there is no --server, --sender or --ignore-times. Why are these getting added in? Is this part of the problem?
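    On the flags themselves: --server and --sender are rsync's internal options; a normal rsync client adds them to the remote command it runs over ssh, and BackupPC's rsync transport does the same, so their presence is expected even though the man page doesn't list them among the ordinary options. A couple of hedged sanity checks from the BackupPC host ("target" is the placeholder hostname from the log):

      ssh -q -x -l root target 'which rsync && rsync --version | head -1'
      # run the exact remote command by hand; a healthy setup starts rsync in server mode and
      # waits silently for protocol input (Ctrl-C to get out) instead of failing on channel 0:
      ssh -l root target /usr/bin/rsync --server --sender --numeric-ids --recursive . /www/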

    Read the article

  • How to run the CPU at something like 75%?

    - by Tobias Kienzler
    My notebook is too old for me to invest in a new fan (it will simply be replaced by a new one when the final heat death occurs), but when it runs at full speed I feel like I'm sitting in front of a vacuum cleaner with integrated cooking... I'm currently using NHC, whose Max Battery mode lets the CPU run at 50% (~800 MHz). That's fine for most applications, and both temperature and noise remain low. However, on some occasions I need a bit more speed, maybe around 75%. Can I set the power-saving settings somehow so that the CPU won't surpass 75% of its capability, achieving an acceptable compromise between power and noise? I can't set the CPU frequency in the BIOS, and since on rare occasions I'd like to be able to switch to 100% without much hassle, hardware solutions like setting jumpers are not an option. This answer to a similar (Linux!) question mentions that NHC should be able to offer these options, but for me they are all greyed out: the notebook is an Asus Z9200K, and I guess NHC doesn't support its chipset well enough for these advanced options.

    Read the article

  • LDAP Authentication fails with 500 or 401 depending on bind for Apache2

    - by Erik
    I'm setting up LDAP authentication for our Subversion repository, hosted through Apache on a RHEL 5 system. I run into two different issues when I try to authenticate against Active Directory.
      <Location /svn/>
        Dav svn
        SvnParentPath /srv/subversion
        SVNListParentPath On
        AuthType Basic
        AuthName "Subversion Repository"
        AuthBasicProvider ldap
        AuthLDAPBindDN "cn=userfoo,ou=Service Accounts,ou=User Accounts,dc=my,dc=example,dc=com"
        AuthLDAPBindPassword "mypass"
        AuthLDAPUrl "ldap://my.example.com:389/ou=User Accounts,dc=my,dc=example,dc=com?sAMAccountName?sub?(objectClass=user)" NONE
        Require valid-user
      </Location>
    If I use the above configuration, it continually prompts me with the Basic authentication dialog and I eventually have to select Cancel, which returns a 401 (Authorization Required). If I comment out the bind parts, it returns a 500 (Internal Server Error), griping that authentication failed:
      [Mon Nov 02 12:00:00 2009] [warn] [client x.x.x.x] [10744] auth_ldap authenticate: user myuser authentication failed; URI /svn [ldap_search_ext_s() for user failed][Operations error]
    When I perform the bind using ldapsearch and filter for a simple attribute, it returns correctly:
      ldapsearch -h my.example.com -p 389 -D "cn=userfoo,ou=Service Accounts,ou=User Accounts,dc=my,dc=example,dc=com" -b "ou=User Accounts,dc=my,dc=example,dc=com" -w - "&(objectClass=user)(cn=myuser)" sAMAccountName
    Unfortunately I have no control over, or insight into, the AD part of the system, only the RHEL server. Does anyone know what the hang-up is here?
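    One common culprit when mod_ldap searches Active Directory is referral chasing: AD returns referrals for its other naming contexts, the OpenLDAP client library follows them anonymously, and the search dies with "Operations error". A hedged workaround on the RHEL box is to turn referral chasing off for the client library and retest:

      echo "REFERRALS off" >> /etc/openldap/ldap.conf
      service httpd restart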

    Read the article

  • PhpMyAdmin Missing parameter:

    - by Ali
    Everything was working fine until this moment: I want to create a new database and I'm receiving this error:
      db_create.php: Missing parameter: new_db (FAQ 2.8)
    Also, when I try to export my database I receive the following errors:
      export.php: Missing parameter: what (FAQ 2.8)
      export.php: Missing parameter: export_type (FAQ 2.8)
    When I looked at FAQ 2.8 from the link suggested in phpMyAdmin ("2.8 I get 'Missing parameters' errors, what can I do?"), here are the points it lists to check:
    - In config.inc.php, try to leave the $cfg['PmaAbsoluteUri'] directive empty. See also FAQ 4.7.
    - Maybe you have a broken PHP installation or you need to upgrade your Zend Optimizer. See http://bugs.php.net/bug.php?id=31134.
    - If you are using Hardened PHP with the ini directive varfilter.max_request_variables set to the default (200) or another low value, you could get this error if your table has a high number of columns. Adjust this setting accordingly. (Thanks to Klaus Dorninger for the hint.)
    - In the php.ini directive arg_separator.input, a value of ";" will cause this error. Replace it with "&;".
    - If you are using Hardened-PHP, you might want to increase request limits.
    - The directory specified in the php.ini directive session.save_path does not exist or is read-only.
    I did check php.ini to make sure that I have session.save_path = "/tmp". I'm using a Mac Xserve and running with MAMP. I tried restarting my server and nothing helped. Please, can anyone give me some suggestions? Apologies if I posted this in the wrong place.

    Read the article

  • Starting AirPlay from the command line, to send the output of the Mac OS X 'say' command to AirPlay

    - by Fabien
    OK, Sunday question :) Trying to make a little joke... 1) If you open a terminal and type "say -a ?", Mac OS X will give you the list of devices it can send spoken words to. On mine, it says:
      39 AirPlay
      47 Built-in Output
    2) I have a Denon AirPlay-ready receiver in my living room and I'm trying to send spoken words to my wife downstairs... I can send music without any problem using iTunes, so from an infrastructure standpoint I'm all set. 3) I want my computer to say (out of the blue) "Honey, why don't you bring me a cup of coffee". I can make it say that locally on my internal laptop speakers, but I can't seem to send it to device 39 successfully. I suspect that there are a few other things that need to be set up before it works, i.e. setting the AirPlay output to "Denon", maybe opening a channel and reserving it. I don't know. Has anyone played with this? Is there a way to set up AirPlay from the command line? That would be awesome :)
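    For the basic piece, say can target the device directly by the id or name that "say -a ?" printed; a sketch (whether the AirPlay destination also has to be selected or awake first is an open assumption):

      say -a 39 "Honey, why don't you bring me a cup of coffee"
      say -a AirPlay "Honey, why don't you bring me a cup of coffee"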

    Read the article

  • VPN issue: SSTP Service service started and then stopped

    - by Ampersand
    When I was trying to set up a VPN connection on my laptop running Windows 7 Ultimate, I got this error:
      Network Connections
      Cannot load the Remote Access Connection Manager service. Error 711: The operation could not finish because it could not start the Remote Access Connection Manager service in time. Please try the operation again.
    I traced through some service dependencies and discovered that the Secure Socket Tunneling Protocol Service was set to Manual. However, when I try to manually start the service, I get:
      Services
      The Secure Socket Tunneling Protocol Service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs.
    Setting all the services involved to Automatic did not help; SSTP just showed Automatic and Stopped in the Services panel. I found a solution that involved booting into Safe Mode and deleting the contents of C:\Windows\System32\LogFiles\WMI\RtBackup. This solution worked and I could set up a VPN connection, but only until I rebooted again. TL;DR: I'm looking for a way to permanently enable the Secure Socket Tunneling Protocol Service and the other VPN-related services so I don't have to reboot into Safe Mode and delete files every time I need to connect to a VPN.
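    Since clearing RtBackup keeps helping only temporarily, one hedged avenue is to fix ownership and permissions on that directory so the trace sessions that write there can start cleanly (the usual explanation for why deleting the folder works), then set the VPN services back to Automatic; run from an elevated command prompt:

      takeown /f C:\Windows\System32\LogFiles\WMI\RtBackup /r /d y
      icacls C:\Windows\System32\LogFiles\WMI\RtBackup /grant "NT AUTHORITY\SYSTEM":(OI)(CI)F /t
      sc config SstpSvc start= auto
      sc config RasMan start= auto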

    Read the article
