Search Results

Search found 5793 results on 232 pages for 'requests'.

Page 12 of 232

  • IIS 7.5 SiteMinder does not protect ASP.NET MVC requests

    - by HariM
    We are trying to use ASP.NET MVC with SiteMinder for single sign-on. This is on Windows Server 2008 R2 with IIS 7.5, SiteMinder agent version 6QMR6. Problem: SiteMinder protects physical files that exist, but it does not protect the folder when we try to access a file that does not exist. It should redirect to the login page even when the file doesn't exist, as long as the user is accessing a protected folder. How do I configure IIS 7.5 not to verify that a file exists before SiteMinder authenticates the request? SiteMinderWebAgent is a handler (wildcard script map) we created using ISAPI6WebAgent.dll. How do I protect ASP.NET MVC requests with SiteMinder? (Added this because my previous question did not solve the problem.) The MVC request shows up in the IIS log but not in the SiteMinder log. Update: Microsoft Support says that IIS 7.5, and earlier versions as well, does not support wildcard mappings on two ISAPI handlers that both use a bare * wildcard. In my case SiteMinder has a * wildcard and ASP.NET MVC (the aspnet_isapi handler) has a * wildcard to handle the requests, and ordered priority does not work when both mappings are just *. I am not convinced by that answer, but will wait until tomorrow for them to get back to me.
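
    A quick way to see how the two wildcard script maps are registered and ordered on the site (sketch only; the site name below is a placeholder, and appcmd should be run from an elevated prompt):

        rem List the handler mappings IIS has for the site, including both wildcard ISAPI entries
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:system.webServer/handlers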

    Read the article

  • Microsoft FTP fails to connect after the client requests the list of features (FEAT)

    - by Max
    This is a really weird problem. The first few times I try to connect, FileZilla just hangs on the line "211-Extended features supported:" for a while before giving up and saying "Error: Could not connect to server". The FileZilla log:

        Command: PASS ***********
        Response: 230 User logged in.
        Command: FEAT
        Response: 211-Extended features supported:
        Error: Could not connect to server

    The weird thing is that if I keep trying to connect, eventually it just works and connects fine. Once FileZilla knows which features the server supports it stops asking for a while, which lets you connect first time, until FileZilla decides it wants to double-check the feature list again. I'm at a loss on how to debug this. Has anyone experienced something similar?
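
    A simple way to narrow this down is to talk to the FTP control port by hand and see whether the multi-line FEAT reply arrives intact; firewalls and FTP inspection layers sometimes swallow part of it. A sketch (hostname and credentials are placeholders):

        # placeholders: ftp.example.com, myuser, mypassword
        telnet ftp.example.com 21
        USER myuser
        PASS mypassword
        FEAT
        QUIT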

    Read the article

  • How to direct reverse proxy requests using wildcard vhosts

    - by HonoredMule
    I'm interested in running a reverse proxy with 2-3 virtual machines behind it. Each internal server will run multiple virtual hosts, and rather than manually configuring each individual vhost on the proxy (a variety of vhosts come and go too often for this to be practical), I would like to use something that can apply pattern matching in a sequential order to find the appropriate back-end server. For example:

        Server 1: *.dev.mysite.com
        Server 2: *.stage.mysite.com
        Server 3: *.mysite.com, dev.mysite.com, stage.mysite.com, mysite.com
        Server 4: *

    In the above configuration, task.dev.mysite.com would go to Server 1, dev.mysite.com to Server 3, yoursite.stage.mysite.com to Server 2, www.mysite.com to Server 3, and yoursite.com to Server 4. I've looked into using Squid, Varnish, and nginx so far. I have my opinions regarding their respective desirability and general suitability, but it's not readily apparent whether any of them can handle dynamic server selection in this manner without per-vhost configuration. Apache, on the other hand, can do this handily and simply, but otherwise (aside from being well-known and familiar) seems poorly suited to a role that is partly about serving performance. Performance isn't actually a major concern yet, but it seems foolish to use Apache if another system will perform far better and can also handle the desired hands-free configuration. Then again, so does frequently having to adjust the gateway for all production services and risking a network-wide outage, and so does setting oneself up for longer downtime later if Apache becomes a too-small bottleneck. Which of these (or other) reverse proxies can do it, and which would do it best? And maybe I should post this as a separate question, but if Apache is the only practical option, how safe/reliable/predictable is apache-mpm-event in Apache 2.2 (Ubuntu 12.04.1), particularly for a dedicated reverse proxy? As I understand it the event MPM was declared stable as of 2.4, but it's unclear whether reaching stability in 2.4 has any implications for the older 2.2 versions available in the official/stable package channels of various distros.
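
    For what it's worth, nginx's server_name matching (exact name first, then the longest wildcard that starts with an asterisk) already gives the precedence described above, so a sketch along these lines would route without per-vhost blocks on the proxy. Upstream addresses are hypothetical:

        # nginx: exact server_name wins, then the longest leading-asterisk wildcard
        server {
            listen 80;
            server_name *.dev.mysite.com;          # task.dev.mysite.com -> Server 1
            location / { proxy_pass http://192.168.0.11; proxy_set_header Host $host; }
        }
        server {
            listen 80;
            server_name *.stage.mysite.com;        # yoursite.stage.mysite.com -> Server 2
            location / { proxy_pass http://192.168.0.12; proxy_set_header Host $host; }
        }
        server {
            listen 80;
            server_name mysite.com *.mysite.com dev.mysite.com stage.mysite.com;   # -> Server 3
            location / { proxy_pass http://192.168.0.13; proxy_set_header Host $host; }
        }
        server {
            listen 80 default_server;              # everything else -> Server 4
            server_name _;
            location / { proxy_pass http://192.168.0.14; proxy_set_header Host $host; }
        }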

    Read the article

  • Requests are taken to the wrong directory

    - by gidireich
    Hi, I use XAMPP on Windows 7 64-bit to test my PHP web application locally, and I use virtual hosts so I can reach my code under different domain names. Now I have created a new version of the code and want to access both versions using different domain names. I added a new virtual host, newversion.mysite.local, to httpd-vhosts.conf and pointed it at the directory of the new version. I also updated the Windows hosts file with the line 127.0.0.4 newversion.mysite.local. A strange thing happens: when browsing to newversion.mysite.local I'm taken to the old version, which is located in a different directory. How on earth does this happen? Please help me with ideas. Thank you, Gidi
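
    Two quick checks usually explain this kind of mix-up: dump Apache's parsed vhost table (a request whose Host matches no ServerName falls through to the first vhost listed, which would be the old site), and make sure Windows isn't answering from a cached hosts lookup. A sketch, with the typical XAMPP path assumed:

        rem Show which vhost Apache maps each ServerName to, and which one is the default
        C:\xampp\apache\bin\httpd.exe -S
        rem Flush the Windows resolver cache so the new hosts-file entry takes effect
        ipconfig /flushdns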

    Read the article

  • Mac OS X Client With Static DHCP Assignment Requests Wrong IP via Option 50

    - by Starchy
    I have a number of Mac (and a few Linux) laptops getting DHCP from a Force10 layer 3 switch, the only DHCP server on the subnet. There's a global dynamic pool, and for each full-time employee's laptop I have a single IP static pool set by MAC address. One and only one of the clients, running OS X 10.7.5, consistently fails to get a static assignment. The MAC address in the static pool definition has been carefully re-checked. Running tcpdump on a mirrored port when the laptop connects, I see that it is specifically requesting 10.100.0.252 (a dynamic address):

        11:32:10.108280 IP (tos 0x0, ttl 255, id 28293, offset 0, flags [none], proto UDP (17), length 328)
            0.0.0.0.bootpc > broadcasthost.bootps: [udp sum ok] BOOTP/DHCP, Request from 3c:07:54:xx:xx:xx (oui Unknown), length 300, xid 0x1399da89, Flags [none] (0x0000)
            Client-Ethernet-Address 3c:07:54:xx:xx:xx (oui Unknown)
            Vendor-rfc1048 Extensions
              Magic Cookie 0x63825363
              DHCP-Message Option 53, length 1: Request
              Parameter-Request Option 55, length 9:
                Subnet-Mask, Default-Gateway, Domain-Name-Server, Domain-Name
                Option 119, LDAP, Option 252, Netbios-Name-Server
                Netbios-Node
              MSZ Option 57, length 2: 1500
              Client-ID Option 61, length 7: ether 3c:07:54:xx:xx:xx
              Requested-IP Option 50, length 4: 10.100.0.252
              Lease-Time Option 51, length 4: 7776000
              Hostname Option 12, length 10: "host-name"
              END Option 255, length 0
              PAD Option 0, length 0, occurs 8

    I haven't been able to find any extra system prefs or unusual software on the laptop. Disabling the interface and rebooting or temporarily setting the IP manually both fail to make any difference. Any suggestions appreciated.
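
    On the 10.7 client, the stale Requested-IP in Option 50 typically comes from the stored lease file, so clearing it and forcing a fresh DHCP negotiation is a cheap first test. A sketch, assuming the interface in question is en0 (the lease file name pattern is approximate):

        # interface name assumed to be en0; lease file name pattern is approximate
        sudo rm /var/db/dhcpclient/leases/en0*
        sudo ipconfig set en0 DHCP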

    Read the article

  • Need troubleshooting advice for intermittent DNS problems with requests on ISP nameservers

    - by Mnebuerquo
    I've been having some intermittent DNS problems with a web server, where certain ISPs' DNS servers don't have my hostnames in cache and fail to look them up. At the same time, queries to OpenDNS for those hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to identify the problem when someone reports connectivity problems to my site. My website is on a server running Linux with Plesk. My DNS records are configured with Plesk (so my server is its own DNS master). The domain name is registered with GoDaddy. I'm not very knowledgeable about DNS, so I don't really know how to begin troubleshooting. I've started learning to use dig, but while I can read the man page to learn the syntax, I don't really know what questions to ask. Since the problem is intermittent I haven't been able to catalog many symptoms. Symptoms I have observed: certain people repeatedly reported intermittent problems connecting to my website, and only from certain networks (e.g. one guy could connect reliably from his office but not from home); sometimes I notice my browser taking a long time looking up the hostname for my site (Firefox shows a message in the status bar at the bottom), in the ten-second range for me; and ssh connections from anywhere to my server take a long time to connect but then seem to work fine once connected. So hopefully the folks on Server Fault can point me to a good beginner tutorial for understanding DNS, and suggest troubleshooting questions to ask next time one of my users reports connectivity problems.
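
    Given that the server is its own DNS master, a few dig queries run from an affected network usually separate "my zone or delegation is broken or slow" from "the ISP resolver is flaky". A sketch, with example.com standing in for the real domain and ns1.example.com for the Plesk box:

        # Query the authoritative server directly, no recursion: tests your own DNS service
        dig @ns1.example.com example.com A +norecurse
        # Follow the delegation from the root down: shows whether the registrar's NS records
        # actually point at the Plesk server
        dig example.com A +trace
        # Compare with a public resolver (OpenDNS)
        dig @208.67.222.222 example.com A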

    Read the article

  • HTTP requests fail for WordPress sites after upgrading from PHP 5.2 to PHP 5.3

    - by anjanesh
    I've upgraded from Ubuntu 9.10 to 10.04 beta 1, and PHP got upgraded from 5.2 to 5.3. Now none of my WordPress and Magento sites are working. I tried retrieving the URL headers from the command line, but the HTTP request fails. Using get_headers in a small script:

        PHP Warning: get_headers(http://local.vhosts1.com): failed to open stream: HTTP request failed! in get_headers.php on line 12

    But the HTTP request fails only for WordPress- and Magento-based sites, not custom-written ones. Could this have something to do with an .htaccess directive?
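
    Two quick checks that separate a PHP 5.3 configuration change from a vhost or rewrite problem (the hostname below is the one from the warning; the settings named are standard php.ini directives):

        # Does the vhost answer at all, independently of PHP's HTTP stream wrapper?
        curl -I http://local.vhosts1.com/
        # Confirm the stream-wrapper settings PHP 5.3 is actually running with
        php -r 'var_dump(ini_get("allow_url_fopen"), ini_get("default_socket_timeout"));'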

    Read the article

  • Website requests not reaching IIS?

    - by pete the pagan-gerbil
    To start off with a confession: I am not a server admin, just a developer tasked with getting to the root of a problem, so please be gentle! I have an intranet ASP.NET website running in IIS on a virtual machine. The website is not accessed very often (the last IIS log file was modified nearly six months ago). Both the IP address and the host header value are now failing to return the website, and the IIS log still doesn't show any recent activity. The virtual machine was moved to a different physical location a few months ago, and its IP address has changed. Could this be what has broken access to the site? What else should I be checking to solve this? I don't have totally unrestricted access to the building's network settings, structures, etc. I would be grateful for any advice; even if I can't use it myself, it'll improve my knowledge of what's going on behind the scenes!
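
    If this is IIS 7 or later, the quickest checks are whether the site is still started, whether its bindings still match the machine's new IP address, and whether the host name still resolves to that IP. A sketch (the host name is a placeholder; run appcmd in an elevated prompt on the server):

        rem List all sites with their state and bindings
        %windir%\system32\inetsrv\appcmd.exe list sites
        rem From a client, check what the intranet name resolves to now
        nslookup intranet.example.local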

    Read the article

  • Selecting which IP address to use for outgoing requests from behind a NAT

    - by iamrohitbanga
    Our organization has several external IP addresses. I am behind two layers of NAT, and the servers choose which IP address to route my traffic through. Can I specify which external IP address is used when my traffic finally leaves the organization's network? I know that source routing can be done in IPv4 by adding some options to the header, but can I configure my PC to add these options automatically? I have both a Windows and a Linux machine.
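
    In practice the external address is chosen by the NAT device, not by the client: from the PC you can only pick among your locally configured (private) source addresses, and the gateway then maps that to an external one. So the reliable place to pin the mapping is the gateway itself. A sketch, assuming a Linux gateway and entirely hypothetical addresses:

        # On the NAT gateway: always translate this internal range to one chosen external IP
        iptables -t nat -A POSTROUTING -s 10.1.2.0/24 -o eth0 -j SNAT --to-source 203.0.113.7
        # On a Linux client, the most you can influence is the local source address used
        ip route change default via 10.1.2.1 dev eth0 src 10.1.2.50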

    Read the article

  • How can I get FreeNAS to respond to libvirt shutdown requests

    - by ptomli
    I have a KVM VM of FreeNAS 0.7.1 Shere (revision 5127) running on Ubuntu Server 10.04 and I'm unable to convince the VM to shutdown from the host virsh shutdown freenas I would expect this to send some ACPI? trigger to the VM and FreeNAS then do what it's told. I'm not a FreeBSD fundi so I don't really know what packages or processes to poke to get this running. I have tried to convince powerd to run, but the VM cpus don't have the required freq entry Sysctl HW $ sysctl hw hw.machine: amd64 hw.model: QEMU Virtual CPU version 0.12.3 hw.ncpu: 1 hw.byteorder: 1234 hw.physmem: 523116544 hw.usermem: 463806464 hw.pagesize: 4096 hw.floatingpoint: 1 hw.machine_arch: amd64 hw.realmem: 536850432 hw.aac.iosize_max: 65536 hw.amr.force_sg32: 0 hw.an.an_cache_iponly: 1 hw.an.an_cache_mcastonly: 0 hw.an.an_cache_mode: dbm hw.an.an_dump: off hw.ata.to: 15 hw.ata.wc: 1 hw.ata.atapi_dma: 1 hw.ata.ata_dma_check_80pin: 1 hw.ata.ata_dma: 1 hw.ath.txbuf: 200 hw.ath.rxbuf: 40 hw.ath.regdomain: 0 hw.ath.countrycode: 0 hw.ath.xchanmode: 1 hw.ath.outdoor: 1 hw.ath.calibrate: 30 hw.ath.hal.swba_backoff: 0 hw.ath.hal.sw_brt: 10 hw.ath.hal.dma_brt: 2 hw.bce.msi_enable: 1 hw.bce.tso_enable: 1 hw.bge.allow_asf: 0 hw.cardbus.cis_debug: 0 hw.cardbus.debug: 0 hw.cs.recv_delay: 570 hw.cs.ignore_checksum_failure: 0 hw.cs.debug: 0 hw.cxgb.snd_queue_len: 50 hw.cxgb.use_16k_clusters: 1 hw.cxgb.force_fw_update: 0 hw.cxgb.singleq: 0 hw.cxgb.ofld_disable: 0 hw.cxgb.msi_allowed: 2 hw.cxgb.txq_mr_size: 1024 hw.cxgb.sleep_ticks: 1 hw.cxgb.tx_coalesce: 0 hw.firewire.hold_count: 3 hw.firewire.try_bmr: 1 hw.firewire.fwmem.speed: 2 hw.firewire.fwmem.eui64_lo: 0 hw.firewire.fwmem.eui64_hi: 0 hw.firewire.phydma_enable: 1 hw.firewire.nocyclemaster: 0 hw.firewire.fwe.rx_queue_len: 128 hw.firewire.fwe.tx_speed: 2 hw.firewire.fwe.stream_ch: 1 hw.firewire.fwip.rx_queue_len: 128 hw.firewire.sbp.tags: 0 hw.firewire.sbp.use_doorbell: 0 hw.firewire.sbp.scan_delay: 500 hw.firewire.sbp.login_delay: 1000 hw.firewire.sbp.exclusive_login: 1 hw.firewire.sbp.max_speed: -1 hw.firewire.sbp.auto_login: 1 hw.mfi.max_cmds: 128 hw.mfi.event_class: 0 hw.mfi.event_locale: 65535 hw.pccard.cis_debug: 0 hw.pccard.debug: 0 hw.cbb.debug: 0 hw.cbb.start_32_io: 4096 hw.cbb.start_16_io: 256 hw.cbb.start_memory: 2281701376 hw.pcic.pd6722_vsense: 1 hw.pcic.intr_mask: 57016 hw.pci.honor_msi_blacklist: 1 hw.pci.enable_msix: 1 hw.pci.enable_msi: 1 hw.pci.do_power_resume: 1 hw.pci.do_power_nodriver: 0 hw.pci.enable_io_modes: 1 hw.pci.host_mem_start: 2147483648 hw.syscons.kbd_debug: 1 hw.syscons.kbd_reboot: 1 hw.syscons.bell: 1 hw.syscons.saver.keybonly: 1 hw.syscons.sc_no_suspend_vtswitch: 0 hw.usb.uplcom.interval: 100 hw.usb.uvscom.interval: 100 hw.usb.uvscom.opktsize: 8 hw.wi.debug: 0 hw.wi.txerate: 0 hw.xe.debug: 0 hw.intr_storm_threshold: 1000 hw.availpages: 127714 hw.bus.devctl_disable: 0 hw.ste.rxsyncs: 0 hw.busdma.total_bpages: 32 hw.busdma.zone0.total_bpages: 32 hw.busdma.zone0.free_bpages: 32 hw.busdma.zone0.reserved_bpages: 0 hw.busdma.zone0.active_bpages: 0 hw.busdma.zone0.total_bounced: 0 hw.busdma.zone0.total_deferred: 0 hw.busdma.zone0.lowaddr: 0xffffffff hw.busdma.zone0.alignment: 2 hw.busdma.zone0.boundary: 65536 hw.clockrate: 2808 hw.instruction_sse: 1 hw.apic.enable_extint: 0 hw.kbd.keymap_restrict_change: 0 hw.acpi.supported_sleep_state: S3 S4 S5 hw.acpi.power_button_state: S5 hw.acpi.sleep_button_state: S3 hw.acpi.lid_switch_state: NONE hw.acpi.standby_state: S1 hw.acpi.suspend_state: S3 hw.acpi.sleep_delay: 1 hw.acpi.s4bios: 0 hw.acpi.verbose: 0 
hw.acpi.disable_on_reboot: 0 hw.acpi.handle_reboot: 0 hw.acpi.cpu.cx_lowest: C1 Processes $ ps ax PID TT STAT TIME COMMAND 0 ?? DLs 0:00.00 [swapper] 1 ?? ILs 0:00.00 /sbin/init -- 2 ?? DL 0:00.08 [g_event] 3 ?? DL 0:00.29 [g_up] 4 ?? DL 0:00.33 [g_down] 5 ?? DL 0:00.00 [crypto] 6 ?? DL 0:00.00 [crypto returns] 7 ?? DL 0:00.00 [xpt_thrd] 8 ?? DL 0:00.00 [kqueue taskq] 9 ?? DL 0:00.00 [acpi_task_0] 10 ?? RL 34:12.42 [idle: cpu0] 11 ?? WL 0:01.13 [swi4: clock sio] 12 ?? WL 0:00.00 [swi3: vm] 13 ?? WL 0:00.00 [swi1: net] 14 ?? DL 0:00.04 [yarrow] 15 ?? WL 0:00.00 [swi6: task queue] 16 ?? WL 0:00.00 [swi2: cambio] 17 ?? DL 0:00.00 [acpi_task_1] 18 ?? DL 0:00.00 [acpi_task_2] 19 ?? WL 0:00.00 [swi5: +] 20 ?? DL 0:00.01 [thread taskq] 21 ?? WL 0:00.00 [swi6: Giant taskq] 22 ?? WL 0:00.00 [irq9: acpi0] 23 ?? WL 0:00.09 [irq14: ata0] 24 ?? WL 0:00.11 [irq15: ata1] 25 ?? WL 0:00.57 [irq11: ed0 uhci0] 26 ?? DL 0:00.00 [usb0] 27 ?? DL 0:00.00 [usbtask-hc] 28 ?? DL 0:00.00 [usbtask-dr] 29 ?? WL 0:00.01 [irq1: atkbd0] 30 ?? WL 0:00.00 [swi0: sio] 31 ?? DL 0:00.00 [sctp_iterator] 32 ?? DL 0:00.00 [pagedaemon] 33 ?? DL 0:00.00 [vmdaemon] 34 ?? DL 0:00.00 [idlepoll] 35 ?? DL 0:00.00 [pagezero] 36 ?? DL 0:00.01 [bufdaemon] 37 ?? DL 0:00.00 [vnlru] 38 ?? DL 0:00.14 [syncer] 39 ?? DL 0:00.01 [softdepflush] 1221 ?? Is 0:00.00 /sbin/devd 1289 ?? Is 0:00.01 /usr/sbin/syslogd -ss -f /var/etc/syslog.conf 1608 ?? Is 0:00.00 /usr/sbin/cron -s 1692 ?? Ss 0:00.03 /usr/local/sbin/mDNSResponderPosix -b -f /var/etc/mdn 1730 ?? S 0:00.43 /usr/local/sbin/lighttpd -f /var/etc/lighttpd.conf -m 1882 ?? DL 0:00.00 [system_taskq] 1883 ?? DL 0:00.00 [arc_reclaim_thread] 4139 ?? S 0:00.03 /usr/local/bin/php /usr/local/www/exec.php 4144 ?? S 0:00.00 sh -c ps ax 4145 ?? R 0:00.00 ps ax 1816 v0 Is 0:00.01 login [pam] (login) 1818 v0 I+ 0:00.03 -tcsh (csh) 1817 v1 Is+ 0:00.00 /usr/libexec/getty Pc ttyv1 1402 con- I 0:00.00 /usr/local/sbin/afpd -F /var/etc/afpd.conf 1404 con- S 0:00.00 /usr/local/sbin/cnid_metad 1682 con- I 0:02.78 /usr/local/sbin/mt-daapd -m -c /var/etc/mt-daapd.conf 1789 con- S 0:00.18 /usr/local/bin/fuppesd --config-dir /var/etc --config Libvert snippet <domain type='kvm'> <name>freenas</name> <uuid>********-****-****-****-************</uuid> <memory>524288</memory> <currentMemory>524288</currentMemory> <vcpu>1</vcpu> <os> <type arch='x86_64' machine='pc-0.12'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/bin/kvm</emulator> Is this possible? Ideally I'd like to be able to stop the host without having to manually deal with shutting down the VM.
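
    A minimal way to check whether the ACPI power-button event is reaching the guest at all, using only the names already in the question (the log path is the FreeBSD default):

        # On the Ubuntu host
        virsh shutdown freenas
        # At the same time, on the FreeNAS console
        tail -f /var/log/messages
        # hw.acpi.power_button_state is already S5 in the sysctl output above, so if the
        # power-button event shows up here the kernel should start an ACPI poweroff.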

    Read the article

  • Apache denying requests with VirtualHosts

    - by Ross
    This is the error I get in my log:

        Permission denied: /home/ross/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    My VirtualHost is pretty simple:

        <VirtualHost 127.0.0.1>
            ServerName jotter.localhost
            DocumentRoot /home/ross/www/jotter/public
            DirectoryIndex index.php index.html
            <Directory /home/ross/www/jotter/public>
                AllowOverride all
                Order allow,deny
                allow from all
            </Directory>
            CustomLog /home/ross/www/jotter/logs/access.log combined
            ErrorLog /home/ross/www/jotter/logs/error.log
            LogLevel warn
        </VirtualHost>

    Any ideas why this is happening? I can't see why Apache is looking for a .htaccess there and don't know why this should stop the request. Thanks.
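
    For context: wherever overrides are allowed, Apache stats .htaccess in each directory on the path down to the requested file, so it needs execute (search) permission on /home/ross itself, and a "Permission denied" on that check is fatal for the request. A quick check and the usual fix, assuming the Apache user is www-data:

        # Can the Apache user traverse every directory in the path?
        sudo -u www-data ls -ld /home /home/ross /home/ross/www /home/ross/www/jotter /home/ross/www/jotter/public
        # Typical fix: grant search permission on the home directory
        chmod o+x /home/ross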

    Read the article

  • Squid closing the connection on long HTTP GET requests

    - by Rhys
    Hello. When running a database query on a specific external site we use, Squid seems to cut off the connection after a consistent period of time (just over a minute). The query is submitted through a standard web form that uses GET to query their database. Firefox 3 just displays a blank page; Internet Explorer throws a 'Page Cannot Be Displayed' error (tested in v6 and v8). When we perform the same query on the same machine but bypass the Squid proxy, it works fine. The query takes about two and a half minutes to complete. There are a few timeout settings in Squid, but I honestly don't know which one to be looking at. Any possible solutions would be much appreciated. Cheers
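
    The squid.conf directives worth checking first are the ones that bound how long Squid waits on the server side of the request; the values below are only illustrative (the shipped defaults are well above one minute, so an explicitly lowered setting, or a device in front of Squid, may be the real culprit):

        # squid.conf
        read_timeout 5 minutes      # max time between reads from the origin server
        connect_timeout 1 minute    # only covers establishing the TCP connection
        # apply with: squid -k reconfigure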

    Read the article

  • DNS requests failing from computers that can ping DNS server

    - by dunxd
    I have a situation where computers in some of our remote offices from time to time lose the ability to use our DNS server (in the head office) to resolve hostnames. The offices are connected via VPN using Cisco ASA 5505s (VPN client config rather than site-to-site). Ping to the IP address of the DNS server works, but nslookup gets a "no response from server" message. Computers in other locations can use DNS fine. This is an intermittent problem: one day/hour it works, another it doesn't, and other offices connected in the same way work while another doesn't. No config changes have been made on the routers around the times we see the problem. Some users have reported that the problem goes away after doing a repair connection in Windows XP. I think this could be caused by the DNS cache being flushed as part of that; the Windows DNS cache makes the problem look less intermittent than it is, because it caches failed lookups as well as successful ones. However, it is possible some other aspect of Windows is involved. Windows 7 clients have also had the same problem. Any pointers on deeper troubleshooting, or has anyone else seen this?
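
    When it happens from an affected office, comparing UDP against TCP and default against small EDNS sizes will show whether larger UDP DNS replies are being dropped or mangled across the tunnel (the server address and name below are placeholders for the head-office DNS server and an internal record):

        # Plain UDP query
        dig @10.0.0.53 server.example.internal A
        # Same query over TCP: if this works when UDP fails, suspect fragmentation/MTU
        # or DNS/UDP inspection on the ASA
        dig @10.0.0.53 server.example.internal A +tcp
        # Keep the reply small enough to avoid fragmentation
        dig @10.0.0.53 server.example.internal A +bufsize=512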

    Read the article

  • Redirect all YouTube video requests to a specific one

    - by iTayb
    I'm on the IT team at my company and I would like to block YouTube for users. I don't want to just deny access to the whole YouTube domain, but only to replace the .flv/.mp4 request with one that I choose. That way, if someone tries to watch YouTube videos on the network, he'll get a video explaining why using our expensive bandwidth for pleasure is a no-no. I thought about using a packet manipulation program and just replacing the video ID with something I want, but I didn't manage to do it right.
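
    One common way to do this kind of substitution, assuming the traffic already flows through a Squid proxy and the video URLs are plain HTTP, is a URL rewrite helper rather than raw packet manipulation. Everything below, including URLs and paths, is illustrative only:

        #!/bin/sh
        # Minimal Squid rewrite helper: Squid sends "URL client/fqdn ident method" per line
        # and we answer with the URL to fetch instead. Enable it in squid.conf with:
        #   url_rewrite_program /usr/local/bin/youtube-rewrite.sh
        while read url rest; do
            case "$url" in
                *youtube.com/videoplayback*|*googlevideo.com*)
                    # hypothetical internally hosted clip explaining the policy
                    echo "http://intranet.example.com/bandwidth-notice.mp4" ;;
                *)
                    echo "$url" ;;
            esac
        done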

    Read the article

  • I need a server or service that reroutes DNS requests

    - by Relentim
    We have two external servers, Dev and Prod. We are a software house, and in the code we have a subdomain, metrics.company.com, that points to Prod. Development is continuous, and our internal and external developers and testers will need to switch from Dev to Prod and back again. It is not an option to have a different subdomain in the code during development and change it for production. The way we wish to switch between Dev and Prod is via DNS: we need a public DNS server that behaves normally apart from routing metrics.company.com to Dev. The users will be able to swap their DNS back and forth to hit the different servers. What is the easiest way to do this? Is there a company that hosts this service, or am I going to have to rent a server and set it up myself? Any help would be much appreciated.
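
    This is exactly the job a small dnsmasq instance does well: answer one name with an override and forward everything else upstream. A sketch for the "dev DNS" box (the address is hypothetical):

        # /etc/dnsmasq.conf
        # Answer this one name with the Dev server's address...
        address=/metrics.company.com/203.0.113.20
        # ...and forward every other query to a normal resolver
        no-resolv
        server=8.8.8.8

    Testers then point their machine's DNS at this box to hit Dev, and back at their normal resolver to hit Prod.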

    Read the article

  • Associate DHCP requests with subdomains in dnsmasq

    - by Dezra
    I have dnsmasq running as a DNS server, with a number of Linux boxes using static IPs that run several virtual hosts on subdomains. I currently have the following address lines in my dnsmasq.conf to map each box's subdomain to its static IP:

        address=/.devbox1.mydomain.com/192.168.1.3
        address=/.devbox2.mydomain.com/192.168.1.4

    For example: site1.devbox1.mydomain.com maps to devbox1's static IP (site1 virtual host), site2.devbox1.mydomain.com maps to devbox1's static IP (site2 virtual host), and site3.devbox2.mydomain.com maps to devbox2's static IP (site3 virtual host). I was wondering if I can change the machines over to DHCP addresses (instead of static) and have dnsmasq use the DHCP IP instead of the static one. Can I modify the address line to refer to the DHCP address (obviously, I can't hardcode the address)? I know I could add MAC-address-to-IP allocation, but I want to avoid this if possible.
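
    For what it's worth, if dnsmasq is also the DHCP server it can pin a lease to the client-supplied hostname rather than to a MAC address, which keeps the existing address= lines working unchanged. A sketch using the addresses from the question (note this is still a reservation, just keyed on hostname instead of MAC):

        # dnsmasq.conf
        dhcp-range=192.168.1.100,192.168.1.200,12h
        # Reserve fixed addresses by the hostname the client sends, not its MAC
        dhcp-host=devbox1,192.168.1.3
        dhcp-host=devbox2,192.168.1.4
        # The wildcard mappings then keep pointing at those reserved addresses
        address=/.devbox1.mydomain.com/192.168.1.3
        address=/.devbox2.mydomain.com/192.168.1.4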

    Read the article

  • Running Sonatype Nexus in Tomcat 7.0, Tomcat blocking PUT requests

    - by gdm
    I was previously running Nexus 1.8 on OS X and uploading jars for releases without any issues. The OS X box died, so I moved to a FreeBSD server. Since Nexus doesn't have binaries for FreeBSD, I decided to run it in my Tomcat container. Now I have set up Nexus 1.9 in Tomcat 7.0 on FreeBSD. Everything is working well, except that I can't upload jars to my release or snapshot repositories. If I try via Hudson, I get a 401 error (and no further details). If I try manually via curl, I get an error message back from Tomcat: "This request requires HTTP authentication." Why is Tomcat giving this error, and how do I stop it? If I look in the Nexus logs I can see that the PUT request doesn't even reach Nexus; Tomcat is intercepting it.
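
    A verbose manual deploy helps show which layer is answering the 401, since the realm string in the WWW-Authenticate header usually indicates whether the response came from Tomcat's own authentication or from Nexus. A sketch with placeholder host, repository path and credentials:

        # -v prints the response headers; check the realm in WWW-Authenticate
        curl -v -u deployer:secret --upload-file myapp-1.0.jar \
          "http://buildbox:8080/nexus/content/repositories/releases/com/example/myapp/1.0/myapp-1.0.jar"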

    Read the article

  • Apache22 on FreeBSD - Starts, does not respond to requests

    - by NuclearDog
    Hey folks! I'm running Apache 2.2.17 with the peruser MPM on FreeBSD 8.2-RC1 on Amazon's EC2 (so it's Xen). It was installed from ports. My problem is that although Apache is running, listening for, and accepting connections, it doesn't actually respond to any of them or show them in the log at all. If I telnet to the port it's listening on and type out an HTTP request:

        GET / HTTP/1.1
        Host: asdfasdf

    and hit enter a couple of times, it just sits there... nothing. No response when requesting with a browser either. There doesn't appear to be anything helpful in the error log:

        [Sun Jan 09 16:56:24 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        [Sun Jan 09 16:56:25 2011] [notice] Digest: generating secret for digest authentication ...
        [Sun Jan 09 16:56:25 2011] [notice] Digest: done
        [Sun Jan 09 16:56:25 2011] [notice] Apache/2.2.17 (FreeBSD) mod_ssl/2.2.17

    The access log stays empty:

        root:/var/log# wc httpd-access.log
        0 0 0 httpd-access.log
        root:/var/log#

    I've tried with accf_http and accf_data both enabled and disabled, and with both the stock configuration and my customized config. I also tried uninstalling apache22-peruser-mpm and just installing straight apache22... still no luck. I tried removing all of the LoadModule lines from httpd.conf and re-enabled only the ones necessary to parse the config. Ended up with only the following loaded:

        root:/usr/local/etc/apache22# /usr/local/sbin/apachectl -M
        Loaded Modules:
         core_module (static)
         mpm_peruser_module (static)
         http_module (static)
         so_module (static)
         authz_host_module (shared)
         log_config_module (shared)
         alias_module (shared)
        Syntax OK
        root:/usr/local/etc/apache22#

    Same results. Apache is definitely what's listening on port 80:

        root:/usr/local/etc/apache22# sockstat -4 | grep httpd
        root httpd 43789 3 tcp4 6 *:80 *:*
        root httpd 43789 4 tcp4 *:* *:*
        root:/usr/local/etc/apache22#

    And I know it's not a firewall issue, as there is nothing running locally and connecting from the local box to 127.0.0.1:80 results in the same issue. Does anyone have any idea what's going on? Why would it be doing this? I've exhausted all of my debugging expertise. :/ Thanks for any suggestions!
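
    Two cheap next steps: watch what the httpd process is actually doing while a request is pending, and take the kernel accept filters out of the picture from Apache's side (the AcceptFilter directive is standard Apache 2.2 core; if the hang persists with filters disabled, the experimental peruser MPM itself is the next suspect):

        # Attach to the listening process (PID taken from the sockstat output above)
        # while a request is sitting there
        truss -p 43789
        # In httpd.conf, disable accept filters entirely, then restart:
        #   AcceptFilter http none
        #   AcceptFilter https none
        /usr/local/sbin/apachectl restart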

    Read the article

  • 502 Bad Gateway error after failed requests using Passenger

    - by Nicolas Buduroi
    I've got a staging server running nginx 1.0.5, using Rails 3.1 under Passenger 3.0.9. The problem is that a request sent just after one where there's an application error returns 502 Bad Gateway. To test it I've set up a simple controller with an action that just raises a dummy exception. One request will show the Rails error message and the next one will show nginx's 502 Bad Gateway error, then it goes back to the Rails application error, and so on. While investigating this problem I've found that load testing the application (which increases the number of application processes) makes the issue disappear; that is, until the extra processes are shut down, then it reappears. I've tried setting the passenger_min_instances option, but doing so doesn't change anything, and in this case each time an application error happens one instance is killed, whereas after load testing all instances are kept alive. P.S.: Some people on my team told me that they've seen the 502 error even when there's no application error, but I've not been able to reproduce that. Update: Just found out how to display the response status codes using ab, and most of them are 502!
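
    Watching Passenger's process table and nginx's error log while reproducing the failure usually shows whether the application process is being killed after the exception and the next request is proxied to a dead socket. A sketch (the URL and log path are placeholders):

        # Watch nginx's side of the 502s
        tail -f /var/log/nginx/error.log &
        # Reproduce against the failing action and count status codes
        ab -n 20 -c 1 http://staging.example.com/broken_action
        # See whether Passenger is killing and respawning the application process
        sudo passenger-status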

    Read the article

  • Overriding the default scheduler for blkio requests in cgroups

    - by Aamir Mushtaq
    I am trying to optimize a set of servers that have to reside on a single machine, i.e. I can have multiple application servers, a DB server, and of course a Samba server as well in the same instance. I looked into the several optimization options available to me and, in that quest, did my tuning of the network stack. Coming to the CPU, memory and blkio tweaks, I am using cgroups. The problem I am facing is that, for the nature of the applications I am running, the CFQ scheduler that the blkio subsystem uses is not optimal; a deadline scheduler would serve my purpose better. My question is whether it is possible to change the scheduler for blkio to deadline in the kernel compilation itself, and whether that will be reflected in my usage of cgroup hierarchies, since when running the cgconf service a new fs is mounted and I don't want it to revert to the CFQ scheduler. I also welcome any suggestions that would give me more control over my resources.
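
    For reference, the I/O scheduler is selected per block device (at boot or at runtime) and is independent of cgconfig mounting the blkio controller; a sketch, assuming the disk is sda:

        # Show the current and available schedulers for the device
        cat /sys/block/sda/queue/scheduler
        # Switch that device to deadline at runtime
        echo deadline > /sys/block/sda/queue/scheduler
        # Or make deadline the default for all devices with the kernel boot parameter:
        #   elevator=deadline
        # Caveat: blkio.weight proportional sharing only works with CFQ, while the
        # blkio.throttle.* limits keep working under deadline.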

    Read the article
