Search Results

Search found 5259 results on 211 pages for 'interrupt handling'.


  • Problems with X11GraphicsDevice on Suse 11

    - by Daniel
    Hi, on servers running Suse 11 I'm experiencing hangups in sun.awt.X11GraphicsDevice.getDoubleBufferVisuals(Native Method) when connecting via Citrix (and setting DISPLAY to localhost:11.0). Running exactly the same code in exactly the same environment, except through Exceed (with DISPLAY set to my workstation's IP), it runs like clockwork.
    - The error is not intermittent; it happens every time.
    - Reinstalling the OS does not help.
    - I cannot reproduce it on Suse 10.
    This is what the main thread stack looks like:
        [junit] "main" prio=10 tid=0x0000000040112000 nid=0x6acc runnable [0x00002b9f909ae000]
        [junit]   java.lang.Thread.State: RUNNABLE
        [junit]   at sun.awt.X11GraphicsDevice.getDoubleBufferVisuals(Native Method)
        [junit]   at sun.awt.X11GraphicsDevice.makeDefaultConfiguration(X11GraphicsDevice.java:208)
        [junit]   at sun.awt.X11GraphicsDevice.getDefaultConfiguration(X11GraphicsDevice.java:182)
        [junit]   - locked <0x00002b9fed6b8e70> (a java.lang.Object)
        [junit]   at sun.awt.X11.XToolkit.<clinit>(XToolkit.java:92)
        [junit]   at java.lang.Class.forName0(Native Method)
        [junit]   at java.lang.Class.forName(Class.java:169)
        [junit]   at java.awt.Toolkit$2.run(Toolkit.java:834)
        [junit]   at java.security.AccessController.doPrivileged(Native Method)
        [junit]   at java.awt.Toolkit.getDefaultToolkit(Toolkit.java:826)
        [junit]   - locked <0x00002b9f94b8ada0> (a java.lang.Class for java.awt.Toolkit)
        [junit]   at java.awt.Toolkit.getEventQueue(Toolkit.java:1676)
        [junit]   at java.awt.EventQueue.invokeLater(EventQueue.java:954)
        [junit]   at javax.swing.SwingUtilities.invokeLater(SwingUtilities.java:1264)
        ...
    Has anyone experienced something similar? Could this be a problem in Suse 11's display handling? I'm thankful for any input at this point - I'm fresh out of ideas :)
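    If the JUnit run never actually needs to show a window, one workaround worth trying is to keep AWT from initializing the X11 pipeline at all by running the tests headless. A minimal sketch (the ant invocation and test class name are assumptions, not taken from the question):

        # run the build/tests headless so sun.awt.X11GraphicsDevice is never touched
        ANT_OPTS="-Djava.awt.headless=true" ant test
        # or, for a plain java invocation:
        java -Djava.awt.headless=true -cp build:lib/junit.jar org.junit.runner.JUnitCore com.example.SomeTest

    SwingUtilities.invokeLater still works in headless mode as long as no frame is actually realized, so this only helps if the tests never render UI.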

    Read the article

  • linux routing bug?

    - by Balázs Pozsár
    I have been struggling with this not-easily-reproducible issue for a while. I am using Linux kernel v3.1.0, and sometimes routing to a few IP addresses does not work. What seems to happen is that instead of sending the packet to the gateway, the kernel treats the destination address as local and tries to get its MAC address via ARP. For example, right now my IP address is 172.16.1.104/24 and the gateway is 172.16.1.254:
        # ifconfig eth0
        eth0  Link encap:Ethernet  HWaddr 00:1B:63:97:FC:DC
              inet addr:172.16.1.104  Bcast:172.16.1.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:230772 errors:0 dropped:0 overruns:0 frame:0
              TX packets:171013 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:191879370 (182.9 Mb)  TX bytes:47173253 (44.9 Mb)
              Interrupt:17
        # route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         172.16.1.254    0.0.0.0         UG    0      0        0 eth0
        172.16.1.0      0.0.0.0         255.255.255.0   U     1      0        0 eth0
    I can ping a few addresses, but not 172.16.0.59:
        # ping -c1 172.16.1.254
        PING 172.16.1.254 (172.16.1.254) 56(84) bytes of data.
        64 bytes from 172.16.1.254: icmp_seq=1 ttl=64 time=0.383 ms
        --- 172.16.1.254 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms
        root@pozsybook:~# ping -c1 172.16.0.1
        PING 172.16.0.1 (172.16.0.1) 56(84) bytes of data.
        64 bytes from 172.16.0.1: icmp_seq=1 ttl=63 time=5.54 ms
        --- 172.16.0.1 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 5.545/5.545/5.545/0.000 ms
        root@pozsybook:~# ping -c1 172.16.0.2
        PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
        64 bytes from 172.16.0.2: icmp_seq=1 ttl=62 time=7.92 ms
        --- 172.16.0.2 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 7.925/7.925/7.925/0.000 ms
        root@pozsybook:~# ping -c1 172.16.0.59
        PING 172.16.0.59 (172.16.0.59) 56(84) bytes of data.
        From 172.16.1.104 icmp_seq=1 Destination Host Unreachable
        --- 172.16.0.59 ping statistics ---
        1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
    When trying to ping 172.16.0.59, I can see in tcpdump that an ARP request was sent:
        # tcpdump -n -i eth0|grep ARP
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
        15:25:16.671217 ARP, Request who-has 172.16.0.59 tell 172.16.1.104, length 28
    and /proc/net/arp has an incomplete entry for 172.16.0.59:
        # grep 172.16.0.59 /proc/net/arp
        172.16.0.59    0x1    0x0    00:00:00:00:00:00    *    eth0
    Please note that 172.16.0.59 is accessible from this LAN from other computers. Does anyone have any idea of what's going on? Thanks.
    Update (replies to the comments below):
    - There are no interfaces besides eth0 and lo.
    - The ARP request cannot be seen on the other end, but that's how it should work; the main problem is that an ARP request should not even be sent in the first place.
    - The problem persists even if I add an explicit route with the command "route add -host 172.16.0.59 gw 172.16.1.254 dev eth0".
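    One diagnostic that usually pins this down is asking the kernel directly which route it would choose for the problem address while the issue is happening (a sketch; nothing here changes state):

        ip route get 172.16.0.59                   # "via 172.16.1.254 dev eth0" vs. "dev eth0 scope link" shows what the kernel decided
        ip route show table all | grep 172.16.0    # any cached or policy-routing entry covering that host?
        ip rule show                               # policy rules that could bypass the main table

    If "ip route get" reports the host as directly reachable on eth0, the ARP request is at least consistent with the routing decision, and the question becomes where that (possibly cached) route came from.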

    Read the article

  • IIS 7.0 404 Custom Error Page and web.config

    - by Colin
    I am having trouble with a custom 404 error page. I have a domain running a .NET project with its own error handling, and a web.config for the domain which contains:
        <customErrors mode="RemoteOnly">
          <error statusCode="500" redirect="/Error"/>
          <error statusCode="404" redirect="/404"/>
        </customErrors>
    On a sub-directory of that domain I am ignoring all routes by doing routes.IgnoreRoute("Assets/{*pathInfo}"); in the .NET project, and I want to put a custom 404 error page on that and any sub-directories of Assets. The sub-directory contains static content like images, css, js, etc. So in the Error Pages section of IIS I put a redirect to an absolute URL. The web.config for that directory looks like the following:
        <system.webServer>
          <httpErrors>
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" prefixLanguageFilePath="" path="http://mydomain.com/404" responseMode="Redirect" />
          </httpErrors>
        </system.webServer>
    But when I navigate to an unknown URL under that directory I still see the default IIS 404 page. I am also seeing an alert in IIS that reads: "You have configured detailed error messages to be returned for both local and remote requests. When this option is selected, custom error configuration is not used." Does this have anything to do with the customErrors mode="RemoteOnly" in the site web.config? I have tried to override the customErrors in the sub-directory web.config but nothing changes. Any help would be appreciated. Thanks.
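    The IIS alert suggests the effective httpErrors errorMode for that directory is "Detailed", in which case the custom <error> entries are ignored. A hedged sketch of the sub-directory web.config with the mode pinned explicitly (attribute values are illustrative, not taken from a working setup):
        <system.webServer>
          <httpErrors errorMode="Custom" existingResponse="Replace">
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" path="http://mydomain.com/404" responseMode="Redirect" />
          </httpErrors>
        </system.webServer>
    Note that the httpErrors section can be locked at the server level in applicationHost.config; if it is, directory-level overrides are rejected until the section is unlocked.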

    Read the article

  • unable to sniff traffic despite network interface being in monitor or promiscuous mode

    - by user65126
    I'm trying to sniff out my network's wireless traffic but am having issues. I'm able to put the card in monitor mode, but am unable to see any traffic except broadcasts, multicasts and probe/beacon frames. I have two network interfaces on this laptop. One is connected normally to 'linksys' and the other is in monitor mode. The interface in monitor mode is on the right channel. I'm not associated with the access point because, as I understand it, I don't need to be if using monitor mode (vs. promiscuous). When I try to ping the router IP, I'm not seeing that traffic show up in Wireshark.
    Here are my ifconfig settings:
        daniel@seasonBlack:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:1f:29:9e:b2:89
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:112 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:112 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:8518 (8.5 KB)  TX bytes:8518 (8.5 KB)
        wlan0     Link encap:Ethernet  HWaddr 00:21:00:34:f7:f4
                  inet addr:192.168.1.116  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::221:ff:fe34:f7f4/64 Scope:Link
                  UP BROADCAST RUNNING  MTU:1500  Metric:1
                  RX packets:9758 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4869 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:3291516 (3.2 MB)  TX bytes:677386 (677.3 KB)
        wlan1     Link encap:UNSPEC  HWaddr 00-02-72-7B-92-53-33-34-00-00-00-00-00-00-00-00
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:1500  Metric:1
                  RX packets:112754 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:101 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:18569124 (18.5 MB)  TX bytes:12874 (12.8 KB)
        wmaster0  Link encap:UNSPEC  HWaddr 00-21-00-34-F7-F4-00-00-00-00-00-00-00-00-00-00
                  UP RUNNING  MTU:0  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
        wmaster1  Link encap:UNSPEC  HWaddr 00-02-72-7B-92-53-00-00-00-00-00-00-00-00-00-00
                  UP RUNNING  MTU:0  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    Here are my iwconfig settings:
        daniel@seasonBlack:~$ iwconfig
        lo        no wireless extensions.
        eth0      no wireless extensions.
        wmaster0  no wireless extensions.
        wlan0     IEEE 802.11bg  ESSID:"linksys"
                  Mode:Managed  Frequency:2.437 GHz  Access Point: 00:18:F8:D6:17:34
                  Bit Rate=54 Mb/s   Tx-Power=27 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=68/70  Signal level=-42 dBm  Noise level=-69 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0
        wmaster1  no wireless extensions.
        wlan1     IEEE 802.11bg  Mode:Monitor  Frequency:2.437 GHz  Tx-Power=27 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0
    Here's how I know I'm on the right channel:
        daniel@seasonBlack:~$ iwlist channel
        lo        no frequency information.
        eth0      no frequency information.
        wmaster0  no frequency information.
        wlan0     11 channels in total; available frequencies :
                  Channel 01 : 2.412 GHz  Channel 02 : 2.417 GHz  Channel 03 : 2.422 GHz  Channel 04 : 2.427 GHz
                  Channel 05 : 2.432 GHz  Channel 06 : 2.437 GHz  Channel 07 : 2.442 GHz  Channel 08 : 2.447 GHz
                  Channel 09 : 2.452 GHz  Channel 10 : 2.457 GHz  Channel 11 : 2.462 GHz
                  Current Frequency=2.437 GHz (Channel 6)
        wmaster1  no frequency information.
        wlan1     11 channels in total; available frequencies :
                  Channel 01 : 2.412 GHz  Channel 02 : 2.417 GHz  Channel 03 : 2.422 GHz  Channel 04 : 2.427 GHz
                  Channel 05 : 2.432 GHz  Channel 06 : 2.437 GHz  Channel 07 : 2.442 GHz  Channel 08 : 2.447 GHz
                  Channel 09 : 2.452 GHz  Channel 10 : 2.457 GHz  Channel 11 : 2.462 GHz
                  Current Frequency=2.437 GHz (Channel 6)
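    Two things commonly explain "I only see beacons and broadcasts": the monitor interface drifts off the AP's channel at capture time, or the network is WPA/WPA2-encrypted, in which case unicast data frames are captured but appear as encrypted 802.11 data rather than readable IP traffic. A quick check from the shell (interface name, channel and BSSID taken from the question; everything else is a sketch):

        sudo iwconfig wlan1 channel 6
        sudo tcpdump -i wlan1 -e -n 'wlan addr1 00:18:f8:d6:17:34 or wlan addr2 00:18:f8:d6:17:34'

    If data frames to/from the AP do appear, the capture itself is fine and the missing piece is decryption (Wireshark can decrypt WEP and WPA-PSK traffic if given the key, and for WPA it also needs to have captured the association handshake).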

    Read the article

  • Development on Windows 7; Web server on Linux - How to share Apache web root?

    - by TheKeys
    I've got a LAMP server that I want to use as a local web server, and a Windows 7 machine that I want to use as my development machine. The machines will be on the same LAN (or the Windows box will be VPNed into the LAN). My question is: what is the best way of sharing the web root of the LAMP server so that I can edit the files on the remote Windows 7 machine, and how do I go about configuring this on the Linux machine? (Fedora 16) I would like the solution to be as easy to use as possible, with preferably no extra steps required to save/edit/upload files from my IDE on my Windows 7 machine. I'm thinking either a Samba or NFS share is the way to go, but I'm concerned I'm going to run into issues with permissions and Unix/Windows file handling. Is one better than the other for my use case, or is there a better alternative solution? I'm currently using Windows 7 Professional, which doesn't have NFS support, but I would upgrade to Ultimate (which does have NFS support) if that's the best solution.
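    For what it's worth, a Samba share works with any Windows 7 edition, so a minimal sketch of the Linux side might look like this (path, group name and service name are assumptions for a stock Fedora 16 install):

        sudo tee -a /etc/samba/smb.conf >/dev/null <<'EOF'
        [webroot]
           path = /var/www/html
           valid users = @devs
           writeable = yes
           create mask = 0664
           directory mask = 0775
           force group = apache
        EOF
        sudo systemctl restart smb.service

    The create/directory masks and force group are there so files saved from Windows stay readable by the web server; SELinux contexts on anything newly created under the web root may still need attention.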

    Read the article

  • APACHE - PHP - Bounce emails

    - by user1179459
    I want to improve our mailing lists by handling all the bounces we get from our websites. I have a website which has over 8000 users and another which has over 1500 users; they are emailed various notifications constantly, i.e. job alerts, email alerts. I am using POP connections with Exim on an Apache server, and most scripts are PHP, generating email on the fly.
    Problems I have:
    - Some users registered a long time ago and by now quite a few have bouncing email addresses.
    - Some users register with dummy emails like [email protected] which never existed but look like valid addresses. Is there any chance of stopping this without asking them to log in to the email account and click links, which don't work most of the time (too annoying for the end user)?
    - The server sending unnecessary emails could be avoided if I knew the addresses don't exist.
    Solutions I need:
    - Is there a way I can download the bounce email list somewhere (WHM/cPanel)? I know Exim has it but it's not readable (I need a file like CSV or something similar to scan over and write a PHP script to delete the users from the database).
    - Is there any function in PHP that can check the existence of an email address on the fly, so that I can make the send function in the mailer class check before it sends out?
    - Will bouncing emails eat up a lot of server resources (memory/CPU) on processing, or is the cost minimal enough that we don't have to worry about it at all?
    - Maybe an open-source or Linux tool to capture them, view them as a report and clean them up?
    I am not a Linux expert or server admin but I do a lot of PHP coding, so please be descriptive with the solutions, especially if they are Linux commands. Thank you!
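    On the Exim side, the queue can already tell you quite a lot about bounces without any extra software. A sketch (the message ID is hypothetical):

        exiqgrep -z -i                       # list the IDs of frozen messages (typically bounces) sitting in the queue
        exim -Mvh 1a2b3c-000000-AB           # inspect the headers of one of them
        exim -bt someuser@example.com        # ask Exim how it would route an address before you send to it
        dig +short MX example.com            # cheap pre-send sanity check: does the domain publish MX records at all?

    None of this proves a mailbox exists (only the receiving server knows that), but it catches dead domains and lets you script a CSV of frozen-bounce recipients for your PHP cleanup job.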

    Read the article

  • CouchDB Errors: "undefined symbol: js_fgets" on Ubuntu 10.04

    - by MattEzell
    Hello, ServerFault community. I have been wrestling with this problem for weeks without resolution and was hoping that someone here might be able to help. As indicated above, this is on Ubuntu 10.04 (x86) using CouchDB 0.11.0. I have built and installed CouchDB 0.11.0 from source. Everything with the dependencies and with the CouchDB install itself 'goes off without a hitch' - no errors or complaints in the configure, make or install... CouchDB seems to be running 'fine': I can access Futon without issue and utilize all CouchDB functionality found in Futon. Unfortunately, when I attempt to use any shows/views for an installed CouchApp, I get the above js_fgets error before the terminal (and the Couch log) fills up with TONS of JSON. Nothing ever renders in the browser, though Firebug reports. I have used the official instructions (paying special attention to the 10.04 instructions) and have followed pretty much every Google thread that I can find on similar issues. I have chased SpiderMonkey (and Rhino) as well as Erlang as the culprit, but despite reinstalls and tests with these components, I still cannot get past this CouchDB issue on my system... Ideas? Pointers? Suggestions? Has anyone successfully installed and used CouchDB 0.11.0 on an Ubuntu 10.04 system to RUN APPLICATIONS? I have come across multiple individuals who immediately respond 'yes, I have it installed - it works great', only to have them realize in the end (as I did) that just because Futon thinks things are working doesn't mean CouchDB is properly handling ALL requests. Thank you for your time and assistance!
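    Since "undefined symbol: js_fgets" is a dynamic-linking complaint, it is worth confirming which SpiderMonkey library couchjs is actually loading and whether that library exports the symbol. A diagnostic sketch (paths assume a default source-install prefix of /usr/local and a distro libmozjs; adjust to your layout):

        ldd /usr/local/lib/couchdb/bin/couchjs | grep -i mozjs
        nm -D /usr/lib/libmozjs.so | grep -i js_fgets

    If the symbol is missing from the library that ldd points at, couchjs was built against a different SpiderMonkey than the one it loads at runtime.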

    Read the article

  • What is the fastest way to resize a large partition?

    - by Jook
    Due to a new HDD configuration I am currently handling larger backup/resize tasks, with partitions of around 900GB which are 70-90% full.
    Some background: the first thing I noticed was that the Acronis/Western Digital TrueImage was extremely slow while running under Windows 7, even on high priority. To create a normal backup of 650GB of data (a 900GB partition), it would have taken 3 days! The same task done with the boot-CD version of the same Acronis release took about 2 hours (SATA3 copy from one disk to another, both around 110MB/s).
    Now, after I have done all my backups, I want to remove some obsolete partitions and resize the leftovers to the full HDD size. Of course, this usually takes quite some time - in this case, extending this 900GB partition to 931GB (30GB+ from the front, 1GB+ from the end) will take around 6 hours (using gparted)! Had I known that earlier, I would have just restored the image. But no - first it showed a reasonable time of 1:45h and 0 of 1 operations, but after finishing 1:45h it started again, only this time with 4h to go, still 0 of 1 operations, but now it was copying instead of moving.
    Question: why does it have to be this slow to resize a partition? I am asking for a good explanation. This has bugged me since I started partitioning - why does it need to copy all the data around; can't it just stay in place?!
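    For background: growing a partition only at its end just moves the end boundary and then enlarges the filesystem, so almost nothing is copied; growing at the front moves the start of the filesystem, which forces essentially every block to be relocated (hence the hours of "copying"). A sketch of the cheap case, assuming ext3/ext4 on /dev/sdb1 with the free space sitting after it:

        parted /dev/sdb unit s print      # note the partition's start sector
        # extend the partition's end with your partitioning tool, keeping the same start sector, then:
        resize2fs /dev/sdb1               # grows the filesystem in place, typically in minutes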

    Read the article

  • Haproxy not properly passing on X-Forwarded-For header

    - by JesseP
    I have backend web servers that receive requests by way of haproxy -> nginx -> fastcgi. The web app used to see multiple IPs coming through in the X-Forwarded-For header, chained together with commas (most original IP on the left). At some point in the recent past (just noticed, so not sure what caused it) something changed, and now I'm only seeing a single IP passed in the header to my web application. I've tried with haproxy 1.4.21 and 1.4.22 (recent upgrade) with the same behavior.
    Haproxy has the forwardfor header set:
        option forwardfor
    The nginx fastcgi_params config defines this header to be passed to the app:
        fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;
    Anyone have any ideas on what might be going wrong here?
    EDIT: I just started logging the $http_x_forwarded_for variable in the nginx logs, and nginx is only ever seeing a single IP, which shouldn't ever be the case, as we should always see our haproxy IP added in there, right? So the issue must either be in nginx's handling of the variable coming in, or haproxy not building it properly. I'll keep digging...
    EDIT #2: I enabled request and response header logging in HAProxy, and it is not spitting anything out for X-Forwarded-For, which seems very odd:
        Oct 10 10:49:01 newark-lb1 haproxy[19989]: 66.87.95.74:47497 [10/Oct/2012:10:49:01.467] http service/newark2 0/0/0/16/40 301 574 - - ---- 4/4/3/0/0 0/0 {} {} "GET /2zi HTTP/1.1" O
    Here are the options I set for this in my frontend:
        mode http
        option httplog
        capture request header X-Forwarded-For len 25
        capture response header X-Forwarded-For len 25
        option httpclose
        option forwardfor
    EDIT #3: It really seems like haproxy is munging the header and just passing a single one on to the backend. This is fairly impacting to our production service, so if anyone has any ideas it would be greatly appreciated. I'm stumped... :(
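    One detail in the EDIT #2 log line: with "capture request header" configured, captured values are printed between the {} braces, and both pairs are empty there, which would mean haproxy saw no X-Forwarded-For on that incoming request at all. A way to test end to end (URL, backend port and interface are assumptions):

        curl -s -o /dev/null -H 'X-Forwarded-For: 203.0.113.9' http://your-site.example/
        sudo ngrep -q -W byline -d lo 'X-Forwarded-For' tcp port 8080   # watch what actually reaches nginx

    If the injected value never shows up in haproxy's capture, the header is being lost upstream of haproxy (or the client path never adds one); if it shows up in the capture but arrives at nginx as a single value, the munging is between haproxy and nginx.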

    Read the article

  • Throughput; capacity planning help for C10K like design

    - by z8000
    I am designing a network service in which clients connect and stay connected -- the model is not far off from IRC, less the s2s connections. I could use some help understanding how to do capacity planning, in particular with the system resource costs associated with handling messages from/to clients. There's an article that tried to get 1 million clients connected to the same server [1]. Of course, most of these clients were completely idle in the test. If the clients sent a message every 5 seconds or so, the system would surely be brought to its knees. But... how do you do less hand-waving and, you know, measure such a breaking point? We're talking about messages being sent by a client over a TCP socket, into the kernel, and read by an application. The data is shuffled around in memory from one buffer to another. Do I need to consider memory throughput ("5 GT/s" [2], etc.)? I'm pretty sure I have the ability to measure the basic memory requirements due to TCP/IP buffers, expected bandwidth, and CPU resources required to process messages. I'm a little dim on what I'm calling "throughput". Help! Also, does anyone really do this? Or do most people sort of hand-wave, see what the real world offers, and then react appropriately?
    [1] http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3/
    [2] http://en.wikipedia.org/wiki/GT/s
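    In practice this usually ends up being measured rather than derived: drive the server with synthetic clients at an increasing message rate and watch where a resource saturates. A sketch of the observation side (standard tools, nothing app-specific):

        ss -s              # how many sockets/TCP connections the box is holding
        sar -n DEV 1       # per-second NIC throughput while the load runs
        mpstat -P ALL 1    # %sys and %soft (softirq) climbing is the usual early warning
        vmstat 1           # context switches and run-queue length under load

    For small messages, per-message CPU (syscalls, copies, wakeups) and socket buffer memory tend to give out long before raw memory bandwidth does, which is exactly what the counters above expose.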

    Read the article

  • Dynamic forwarding with SOCKS5 proxy [on hold]

    - by bh3244
    I'm building my own SOCKS5 client and HTTP library and am having trouble figuring out how things work with dynamic port forwarding. So far I can connect successfully with my SOCKS5 client, but from there on I am stuck. I am using the ssh -D command. Considering I have my local machine "home" and my server "server", and I wanted to use "server" as the proxy for all connections, I understand I would type ssh -D "localport" "serverhostname" on my local machine "home". As I understand it, this command has ssh accept connections using the SOCKS5 protocol. So now if I want to connect to google.com (74.125.224.72:80) and issue a GET for the front page, I assume I would send the SOCKS5 client request, the server would respond with a 0x00 "succeeded", and from then on I am connected; I would send the HTTP GET request and the server would respond back accordingly with the data. Now if I want to navigate to a different website, must I issue another SOCKS5 connection request for that site's IP/hostname? I'm confused about whether this is the way it is done, or whether there is a program listening on the local port of the "server" and handling outgoing and incoming data.
    To reiterate: do SOCKS5 proxies work by sending repeated SOCKS5 connection requests for different addresses, or is there just one connection to a local port on "server", with another program on "server" handling the outgoing connection to the internet by using that local port to send and receive data to/from "home"?
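    A concrete way to see the behaviour is to run the tunnel and push two different sites through it (hostnames are placeholders):

        ssh -D 1080 user@server.example.com                                  # local SOCKS5 listener on home:1080, carried over ssh
        curl --socks5-hostname 127.0.0.1:1080 http://www.google.com/ -o /dev/null
        curl --socks5-hostname 127.0.0.1:1080 http://www.example.com/ -o /dev/null

    Each curl opens its own TCP connection to the local port and sends its own SOCKS5 greeting and CONNECT request for its destination; ssh then opens a matching forwarded channel from "server" to that destination. So a new destination means a new connection to the SOCKS port with a new CONNECT, not a reconfiguration of a single long-lived tunnel.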

    Read the article

  • Multi-IP address zimbra server DNS PTR records and spam

    - by David Fraser
    We have a mail server running Zimbra (ZCS 6.0.8). The server has 5 active public IP addresses in the same subnet. (.226-.230). I currently have A records for each of these (host0.domain.com..host4.domain.com), with the main host.domain.com of the machine pointing to .226. Our host has ended up being listed on the SORBS DUHL list (even though it's in a server farm). According to them you can get removed quickly by checking that your host has an MX record, an A record, and a PTR record that points back to the hostname given in the MX record. I tried setting the PTR records so that each of these addresses resolved back to their A record (i.e. .228 had a PTR to host2.domain.com). However, I then got mail being rejected from other servers because when Postfix (under Zimbra control) sends out mail, it uses the main hostname for the HELO - there doesn't seem to be any way to override it. So the PTR records currently say host.domain.com for all 5 IP addresses. What's the correct way to handle this? Should I have an A record for the domain that points to all the IP addresses (for round-robin handling)? I'm nervous of changes that could cause problems, so I'm wondering what the standard way to handle a multiple-IP-address mail server is.
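    A quick way to check the alignment receivers like SORBS care about, for the address mail actually leaves from (IPs/names below are placeholders following the pattern in the question):

        dig +short -x 203.0.113.226          # PTR should give the name the server uses in HELO
        dig +short host.domain.com A         # and that name should resolve back to the same IP

    If outbound mail should always leave from one fixed IP and name, Postfix's smtp_bind_address and smtp_helo_name can pin both; under Zimbra those settings generally need to be applied through Zimbra's own configuration tooling rather than edited into main.cf directly, or they get overwritten (worth verifying for ZCS 6.0.8).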

    Read the article

  • How to stop my VPS from picking up ARP reqs it is not supposed to?

    - by Charles Stewart
    Machine: Xen-3.0 image running stable Debian Linux 2.6.18, pretty vanilla. My VPS provider asks me to deal with some trouble my image is causing, namely handling IP addresses it is not supposed to:
        The problem is that your server seems to be configured to use IPs that have not been appointed to you. Your server responds to ARP requests for the IPs 81.171.111.219 and 81.171.111.218. But you are not allowed to use those.
    Not explicitly, as far as I can tell! At least, nothing under /etc or /var/tmp mentions these IP addresses. But arp -v says something I can't make sense of:
        Address       HWtype  HWaddress          Flags Mask  Iface
        81.171.111.1  ether   00:0C:DB:E3:80:00  C           eth0
        Entries: 1    Skipped: 0    Found: 1
    What is it listening to? The possibilities seem to be:
    - It's not my fault: my VPS providers have overlooked something. What might that be?
    - 81.171.111.1 means I'm happy listening in on ARP requests that I shouldn't be: how do I change this? In any case, what does this mean?
    - I'm looking in completely the wrong place for information on what my image is doing. Where should I be looking?
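    A few things worth checking from inside the guest (the two flagged IPs are quoted from the provider's complaint):

        ip addr show                                                        # any secondary addresses configured that ifconfig's short view hides?
        sysctl net.ipv4.conf.all.proxy_arp net.ipv4.conf.eth0.proxy_arp     # a value of 1 makes the box answer ARP for addresses it can route
        ip route show table all | grep -E '81\.171\.111\.21[89]'            # any local or host routes involving the disputed addresses?

    The arp -v output shown is just the ARP cache (an entry for 81.171.111.1, presumably the gateway, and its MAC), which is normal; it does not by itself explain why the VM answers ARP for .218/.219.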

    Read the article

  • KVM + Cloudmin + IpTables

    - by Alex
    I have KVM virtualization on a machine. I use Ubuntu Server + Cloudmin (in order to manage virtual machine instances). On the host system I have these network interfaces:
        ebadmin@saturn:/var/log$ ifconfig
        br0       Link encap:Ethernet  HWaddr 10:78:d2:ec:16:38
                  inet addr:192.168.0.253  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::1278:d2ff:feec:1638/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:589337 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:334357 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:753652448 (753.6 MB)  TX bytes:43385198 (43.3 MB)
        br1       Link encap:Ethernet  HWaddr 6e:a4:06:39:26:60
                  inet addr:192.168.10.1  Bcast:192.168.10.255  Mask:255.255.255.0
                  inet6 addr: fe80::6ca4:6ff:fe39:2660/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:16995 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:13309 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2059264 (2.0 MB)  TX bytes:1763980 (1.7 MB)
        eth0      Link encap:Ethernet  HWaddr 10:78:d2:ec:16:38
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:610558 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:332382 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:769477564 (769.4 MB)  TX bytes:44360402 (44.3 MB)
                  Interrupt:20 Memory:fe400000-fe420000
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:239632 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:239632 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:50738052 (50.7 MB)  TX bytes:50738052 (50.7 MB)
        tap0      Link encap:Ethernet  HWaddr 6e:a4:06:39:26:60
                  inet6 addr: fe80::6ca4:6ff:fe39:2660/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:17821 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:13703 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:500
                  RX bytes:2370468 (2.3 MB)  TX bytes:1782356 (1.7 MB)
    br0 is connected to a real network; br1 is used to create a private network shared between guest systems. Now I need to configure iptables for network access. First of all I allow ssh sessions on port 8022 on the host system, then I allow all connections in state RELATED,ESTABLISHED. This is working OK. I installed another system as a guest, its IP address is 192.168.10.2, and now I have two problems:
    1. I want to allow access from this host to the outside world; I cannot accomplish this. I can ssh from the host.
    2. I want to be able to ssh to the guest from the outside world using port 8023. Cannot accomplish this.
    The full iptables configuration follows:
        ebadmin@saturn:/var/log$ sudo iptables --list
        [sudo] password for ebadmin:
        Chain INPUT (policy DROP)
        target  prot opt source    destination
        ACCEPT  all  --  anywhere  anywhere
        ACCEPT  tcp  --  anywhere  anywhere    tcp dpt:8022
        ACCEPT  all  --  anywhere  anywhere    state RELATED,ESTABLISHED
        LOG     all  --  anywhere  anywhere    LOG level warning
        Chain FORWARD (policy ACCEPT)
        target  prot opt source    destination
        LOG     all  --  anywhere  anywhere    LOG level warning
        Chain OUTPUT (policy ACCEPT)
        target  prot opt source    destination
        LOG     all  --  anywhere  anywhere    LOG level warning
        ebadmin@saturn:/var/log$ sudo iptables -t nat --list
        Chain PREROUTING (policy ACCEPT)
        target  prot opt source    destination
        DNAT    tcp  --  anywhere  anywhere    tcp spt:8023 to:192.168.10.2:22
        Chain INPUT (policy ACCEPT)
        target  prot opt source    destination
        Chain OUTPUT (policy ACCEPT)
        target  prot opt source    destination
        Chain POSTROUTING (policy ACCEPT)
        target  prot opt source    destination
    Worst of all, I don't know how to interpret the iptables logs; I don't see the final decision of the firewall. Need help urgently.
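    A couple of observations, plus a sketch of the usual NAT plumbing for this layout (interface names and addresses are taken from the question; the rules themselves are illustrative). The existing PREROUTING rule matches "tcp spt:8023", i.e. source port, so it will never catch inbound connections aimed at port 8023; and outbound guest traffic needs IP forwarding plus source NAT:

        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o br0 -j MASQUERADE
        iptables -t nat -A PREROUTING -i br0 -p tcp --dport 8023 -j DNAT --to-destination 192.168.10.2:22
        iptables -A FORWARD -d 192.168.10.2 -p tcp --dport 22 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT

    For reading the LOG output, adding a distinct prefix per chain (e.g. -j LOG --log-prefix "FWD: ") makes it much easier to see in syslog/kern.log which chain a packet hit before the policy applied.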

    Read the article

  • linux hardware raid 10 / lvm / virtual machine partition alignment and filesystem optimization

    - by Jason Ward
    I've been reading everything I can find about partition alignment and filesystem optimization (ext4 and xfs) but still don't know enough to be confident in setting up my current configuration. My remaining confusion comes from the LVM layer and whether I should use RAID parameters on the filesystems in guest OSes. My main questions are:
    1. When I use 'pvcreate --dataalignment', do I use the stripe-width as calculated for a filesystem on RAID (128kB for ext4 in my situation), the stripe size of the RAID set (256kB), something else altogether, or do I not need this at all?
    2. When I create ext2/3/4 or xfs filesystems in guests on the Logical Volumes, should I add the settings for the underlying RAID (e.g. mkfs.ext4 -b 4096 -E stride=64,stripe-width=128)?
    3. Does anyone see any glaring errors in my setup below?
    I'm running some benchmarks now but haven't done enough to start comparing results. I have four drives in RAID 10 on a 3ware 9750-4i controller (more details on the settings below) giving me a 6.0TB device at /dev/sda. Here is my partition table:
        Model: LSI 9750-4i DISK (scsi)
        Disk /dev/sda: 5722024MiB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Number  Start      End         Size        File system     Name      Flags
        1       1.00MiB    257MiB      256MiB      ext4            BOOTPART  boot
        2       257MiB     4353MiB     4096MiB     linux-swap(v1)
        3       4353MiB    266497MiB   262144MiB   ext4
        4       266497MiB  4460801MiB  4194304MiB
    Partition 1 is to be the /boot partition for my Xen host. Partition 2 is swap. Partition 3 is to be the root (/) for my Xen host. Partition 4 is to be the (only) physical volume used by LVM (for those who are counting, I left about 1.2TB unallocated for now).
    For my Xen guests, I usually create a Logical Volume of the needed size and present it to the guests for them to partition as needed. I know there are other ways of handling that, but this method works best for my situation.
    Here's the hardware of interest on my CentOS 6.3 Xen host:
    - 4x Seagate Barracuda 3TB ST3000DM001 drives (sector size: 512 logical / 4096 physical)
    - 3ware 9750-4i w/BBU (sector size reported: 512 logical / 512 physical)
    - All four drives make up a RAID 10 array. Stripe: 256kB. Write Cache enabled. Read Cache: intelligent. StoreSave: Balance.
    Thanks!
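    For what it's worth, one consistent reading of the numbers in the question (256kB chunk, RAID 10 over 4 drives = 2 data spindles, 4kB blocks) looks like this; treat it as a sketch to sanity-check, not a recommendation:

        pvcreate --dataalignment 512k /dev/sda4                              # one full stripe = 256kB chunk x 2 data disks
        mkfs.ext4 -b 4096 -E stride=64,stripe-width=128 /dev/vg0/somelv      # stride = 256k/4k, stripe-width = stride x 2 (LV name is hypothetical)

    mkfs's stride/stripe-width are expressed in filesystem blocks, while --dataalignment takes a byte size, which is where the 128-vs-256-vs-512 confusion usually comes from. Inside guests the same mkfs options only help if the guest's virtual disk starts stripe-aligned on the LV, which the PV alignment above is meant to guarantee.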

    Read the article

  • Yahoo marked my mail as spam and says domainkey fails

    - by mGreet
    Hi, Yahoo is marking our mail as spam. We are using the PHP Zend Framework to send the mail. The mail header says that the DomainKeys check failed:
        Authentication-Results: mta160.mail.in.yahoo.com from=mydomain.com; domainkeys=fail (bad sig); from=mydomain.com; dkim=pass (ok)
    We configured our SMTP server (the same server used to send mail from the Zend Framework) in Outlook and sent mail to Yahoo. This time Yahoo says DomainKeys passes:
        Authentication-Results: mta185.mail.in.yahoo.com from=speedgreet.com; domainkeys=pass (ok); from=speedgreet.com; dkim=pass (ok)
    The DomainKey is added to the mail header on our server, which is used by both the Outlook client and the PHP client. Yahoo accepts the DomainKeys signature on mail sent from Outlook but not on mail from the PHP client. As far as I know, signing the email is done on the server side with the domain key, and PHP and Outlook use the same server to sign the mail. So why is Yahoo handling them differently? What am I missing here? Any idea? Can anyone help me?
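    Two checks that usually narrow this down (selector and domain are placeholders; take the real ones from the DomainKey-Signature / DKIM-Signature headers of a received message):

        dig +short selector._domainkey.mydomain.com TXT     # is the public key published correctly?

    Then compare the full headers of one message sent through Outlook and one sent through the Zend/PHP path. DomainKeys signs the headers as they were at signing time, so anything the PHP path adds, reorders or re-folds after the signature is applied can produce exactly this "domainkeys=fail (bad sig), dkim=pass" split, since DKIM with relaxed canonicalization tends to be more tolerant of such changes.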

    Read the article

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which caches via Memcached):
    - 100 concurrent calls to the API call (http): 115ms (median), 260ms (max)
    - 100 concurrent calls to the API call (https): 6.1s (median), 11.9s (max)
    I've done a bit of research, disabled the most expensive SSL ciphers and enabled SSL caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it. There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL) seem to do anything. Sample results below:
        $ ab -k -n 100 -c 100 https://URL
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/
        Benchmarking URL.com (be patient).....done
        Server Software:        nginx/1.0.15
        Server Hostname:        URL.com
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256
        Document Path:          /PATH
        Document Length:        73142 bytes
        Concurrency Level:      100
        Time taken for tests:   12.204 seconds
        Complete requests:      100
        Failed requests:        0
        Write errors:           0
        Keep-Alive requests:    0
        Total transferred:      7351097 bytes
        HTML transferred:       7314200 bytes
        Requests per second:    8.19 [#/sec] (mean)
        Time per request:       12203.589 [ms] (mean)
        Time per request:       122.036 [ms] (mean, across all concurrent requests)
        Transfer rate:          588.25 [Kbytes/sec] received
        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       65  168   64.1    162    268
        Processing:   385 6096 3438.6   6199  11928
        Waiting:      379 6091 3438.5   6194  11923
        Total:        449 6264 3476.4   6323  12196
        Percentage of the requests served within a certain time (ms)
          50%   6323
          66%   8244
          75%   9321
          80%   9919
          90%  11119
          95%  11720
          98%  12076
          99%  12196
         100%  12196 (longest request)
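    It helps to separate handshake cost from application cost before tuning further. A sketch using the openssl benchmark client against the same host (URL.com as in the test above):

        openssl s_time -connect URL.com:443 -new -time 10     # full TLS handshakes per second the server sustains
        openssl s_time -connect URL.com:443 -reuse -time 10   # the same with session reuse

    A large gap between the two points at handshake overhead (100 ab workers each doing a fresh 2048-bit RSA handshake is substantial on its own); a small gap means the time is going into the proxied request path behind nginx rather than into SSL itself.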

    Read the article

  • Free tiered storage automation in linux?

    - by NginUS
    I have a couple virtualized fileservers running in QEMU/KVM on ProxmoxVE. The physical host has 4 storage tiers with significant performance variances. They're attached both locally and via NFS. These will be provided to the fileserver(s) as local disks, abstracted into pools, and handling multiple streams of data for the network. My aim is for this abstraction layer to intelligently pool the tiers. There's a similar post on the site here: Home-brew automatic tiered storage solutions with Linux? (Memory - SSD - HDD - remote storage) in which the accepted answer was a suggestion to abandon a linux solution for NexentaStor. I like the idea of running NexentaStor. It almost fits the bill. NexentaStor provides Hybrid Storage Pools, and I love the idea of checksumming. 16TB without incurring licensing fees is a huge plus as well. After the expense of the hardware, free is about all my budget can handle. I don't know if zfs pools are adaptive or dynamically allocated based on load, but it becomes irrelevant since NexentaStor doesn't support virtio network or block drivers, which is a must in my environment. Then I saw a commercial solution called SmartMove: http://www.enigmadata.com/smartmove.html And it looks like a step in the right direction, but I'm so broke I'd be wasting their time to even ask for a quote, so I'm looking for another option. I'm after a linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.

    Read the article

  • vconfig created virtual interface and trunking - is the interface untagged or tagged for that VLAN ID?

    - by kce
    I am trying to set up an additional VLAN on our Debian-based router/firewall (which exists as a virtual machine on Hyper-V), our core switch (an HP ProCurve 5406) and a remote HP ProCurve 2610 that is connected via a WAN Transparent LAN Service (TLS) link.
    Let's work backwards from the network edge: the Debian server has an external connection attached to eth0. The internal interface is eth1, which is connected directly from our Hyper-V host to the 5406. The port that eth1 is attached to is set up as Trk12. The 2610 is attached to Trk9 (which trunks a whole slew of VLANs - Trk9 is our TLS head). I can successfully ping the management IP addresses for my VLAN from both switches, but I cannot ping, from either switch, the virtual interface for my new VLAN on the Debian-based router and firewall. The existing VLAN works fine. What gives?
    The port eth1 is attached to is a trunk; the existing VLAN (ID 98) is untagged on the trunk, the new VLAN (ID 198) is tagged. VLAN 198 is tagged on Trk9 on the 5406 and on the 2610. I can ping the other switch's management IP (10.100.198.2 and 10.100.198.3) from the other respective switch. That leg of the VLAN works - however I cannot communicate with eth1.198's 10.100.198.1. I feel like I'm missing something elementary, but what it is remains elusive to me. I suspect the issue is with the vconfig-created eth1.198. It should pass the tagged VLAN 198 packets, correct? But they cannot seem to get any further than the 5406. Communication on the existing VLAN 98 works fine.
    From the Debian box:
        eth1      Link encap:Ethernet  HWaddr 00:15:5d:34:5e:03
                  inet addr:10.100.0.1  Bcast:10.100.255.255  Mask:255.255.0.0
                  inet6 addr: fe80::215:5dff:fe34:5e03/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:12179786 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:20210532 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1586498028 (1.4 GiB)  TX bytes:26154226278 (24.3 GiB)
                  Interrupt:9 Base address:0xec00
        eth1.198  Link encap:Ethernet  HWaddr 00:15:5d:34:5e:03
                  inet addr:10.100.198.1  Bcast:10.100.198.255  Mask:255.255.255.0
                  inet6 addr: fe80::215:5dff:fe34:5e03/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1496  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:72 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:3528 (3.4 KiB)
        # cat /proc/net/vlan/eth1.198
        eth1.198  VID: 198  REORDER_HDR: 0  dev->priv_flags: 1
                  total frames received            0
                  total bytes received             0
                  Broadcast/Multicast Rcvd         0
                  total frames transmitted        72
                  total bytes transmitted       3528
                  total headroom inc               0
                  total encap on xmit             39
        Device: eth1
        INGRESS priority mappings: 0:0  1:0  2:0  3:0  4:0  5:0  6:0  7:0
        EGRESS priority mappings:
        # ip route
        10.100.198.0/24 dev eth1.198  proto kernel  scope link  src 10.100.198.1
        206.174.64.0/20 dev eth0  proto kernel  scope link  src 206.174.66.14
        10.100.0.0/16 dev eth1  proto kernel  scope link  src 10.100.0.1
        default via 206.174.64.1 dev eth0
        # iptables -L -v
        Chain INPUT (policy DROP 6875 packets, 637K bytes)
         pkts bytes target  prot opt in       out  source         destination
           41  4320 ACCEPT  all  --  lo       any  anywhere       anywhere
        11481 1560K ACCEPT  all  --  any      any  anywhere       anywhere    state RELATED,ESTABLISHED
          107  8058 ACCEPT  icmp --  any      any  anywhere       anywhere
            0     0 ACCEPT  tcp  --  eth1     any  10.100.0.0/24  anywhere    tcp dpt:ssh
          701  317K ACCEPT  udp  --  eth1     any  anywhere       anywhere    udp dpts:bootps:bootpc
        Chain FORWARD (policy DROP 1 packets, 40 bytes)
         pkts bytes target  prot opt in        out       source    destination
         156K   25M ACCEPT  all  --  eth1      any       anywhere  anywhere
         215K  248M ACCEPT  all  --  eth0      eth1      anywhere  anywhere    state RELATED,ESTABLISHED
            0     0 ACCEPT  all  --  eth1.198  any       anywhere  anywhere
            0     0 ACCEPT  all  --  eth0      eth1.198  anywhere  anywhere    state RELATED,ESTABLISHED
        Chain OUTPUT (policy ACCEPT 13048 packets, 1640K bytes)
         pkts bytes target  prot opt in        out       source    destination
    From the 5406:
        # show vlan ports trk12 detail
        Status and Counters - VLAN Information - for ports Trk12
        VLAN ID  Name                 | Status      Voice  Jumbo  Mode
        -------  -------------------- + ----------  -----  -----  --------
        98       WIFI                 | Port-based  No     No     Untagged
        198      VLAN198              | Port-based  No     No     Tagged
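    A useful first split is to confirm whether tagged VLAN 198 frames ever reach eth1 at all, since they have to cross the Hyper-V virtual switch before Linux sees them (commands are a sketch; run the ping from the 5406 while capturing):

        tcpdump -e -n -i eth1 vlan 198       # look for 802.1Q-tagged ARP/ICMP aimed at 10.100.198.1
        cat /proc/net/vlan/config            # confirm the eth1.198 -> VID 198 mapping the kernel is using

    If nothing tagged ever arrives, the frames are being dropped upstream of the vconfig interface (virtual switch or trunk configuration) rather than by eth1.198 itself; the zero RX counters on eth1.198 in the output above are consistent with that.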

    Read the article

  • Using Varnish (only) for DDoS mitigation

    - by Martin Kanters
    My VPS is suffering from a (D)DoS - a SYN flood with spoofed IPs. Right now I'm looking for ways to defend against it, at least a bit. It's running a DirectAdmin Apache2 web server, mainly used for serving PHP and MySQL. We are using CloudFlare, which claims to be able to mitigate (D)DoS at some level, but the attacker knows our real IP address, so CloudFlare isn't helping a bit. I've done some searching on the net and found out about enabling SYN cookies to defend against it; I've checked my settings and it seems they were enabled all along. I've also read that Varnish is able to defend against SYN flooding and Slowloris attacks, and now I'm pretty interested in using that. The thing is that CloudFlare is already caching a lot for us, and I don't wish to spend too many resources on Varnish. Is it possible and smart to set up Varnish only for better handling of requests? Are there perhaps better ways which I've missed? Thanks in advance, Martin
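    One caveat before investing in Varnish for this particular attack: a SYN flood is absorbed (or not) by the kernel's TCP stack before any userland proxy ever sees a connection, so the usual mitigations live in sysctl and netfilter rather than in the cache layer. A sketch of the common knobs (thresholds are guesses to adapt, not recommendations):

        sysctl -w net.ipv4.tcp_syncookies=1
        sysctl -w net.ipv4.tcp_max_syn_backlog=4096
        sysctl -w net.ipv4.tcp_synack_retries=2
        iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 50 -j DROP

    Varnish (or nginx in front of Apache) still helps against Slowloris-style slow-request attacks and plain HTTP floods, which is a separate problem from the spoofed-SYN traffic.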

    Read the article

  • NRPE: Unable to read output with check_connections plugin

    - by Wlodzimierz
    I'm using a plugin which gives me warnings or criticals based on established connections. If I run it on the local machine it gives:
        root@graber:/usr/lib/nagios/plugins# ./check_connections -w 1 -c 5 -C sshd
        CRITICAL Established connections: 6
    I know, I ran it as root. But:
    Permissions on the file:
        root@graber:/usr/lib/nagios/plugins# ls -all check_connections
        -rwxr-xr-x 1 nagios nagios 5459 2012-07-06 10:19 check_connections
    /etc/sudoers:
        root@graber:/usr/lib/nagios/plugins# cat /etc/sudoers
        Defaults env_reset
        root    ALL=(ALL:ALL) ALL
        %admin  ALL=(ALL) ALL
        nagios  ALL=(ALL) NOPASSWD: /usr/bin/lsof
        nagios  ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/
    /etc/nagios/nrpe.cfg:
        nrpe_user=nagios
        nrpe_group=nagios
        dont_blame_nrpe=1
        command_prefix=/usr/bin/sudo
        command[check_connections]=/usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd
    Log from the remote host:
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Handling the connection...
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host address is in allowed_hosts
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host is asking for command 'check_connections' to be run...
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Running command: /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd
        2012-07-06T11:19:11+02:00 graber nrpe[26100]: Return Code: 2, Output: NRPE: Unable to read output
    Why is this happening? I'm out of ideas; I've searched Google for 2 days now :)
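    Since NRPE prefixes the command with /usr/bin/sudo, the quickest way to find the failure is to reproduce exactly that invocation as the nagios user (a sketch, run as root on the monitored host):

        sudo -u nagios /usr/bin/sudo -l                     # what sudo actually grants the nagios user
        sudo -u nagios /usr/bin/sudo /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd
        grep -i requiretty /etc/sudoers                     # "Defaults requiretty" breaks sudo from daemons like NRPE

    "NRPE: Unable to read output" simply means the command produced nothing on stdout, which is what you get when sudo itself dies on a password prompt or a tty requirement before the plugin runs.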

    Read the article

  • md/raid:md2: cannot start dirty degraded array, kernel panic

    - by nl-x
    After using a remote power switch, my server did not come back online. When I went to the datacenter and rebooted the computer on the spot, I saw the server booting (the CentOS progress bar runs almost all the way to the end) and eventually giving the following messages:
        md/raid:md2: cannot start dirty degraded array.
        md/raid:md2: failed to run raid set.
        md: pers->run() failed ...
        md/raid:md2: cannot start dirty degraded array.
        md/raid:md2: failed to run raid set.
        md: pers->run() failed ...
        Kernel panic - not syncing: Attempted to kill init!
        Pid: 1, comm: init not tainted 2.6.32-279.1.1.el6.i686 #1
        Call Trace:
        [<c083bfbc>] ? panic+0x68/0x11c
        [<c045a501>] ? do_exit+0x741/0x750
        [<c045a54c>] ? do_group_exit+0x3c/0xa0
        [<c045a5c1>] ? sys_exit_group+0x11/0x20
        [<c083eba4>] ? syscall_call+0x7/0xb
        [<c083007b>] ? cmos_wake_setup+0x62/0x112
    The server runs CentOS and has software RAID, and I don't have backups of the RAID settings. The only backup I have is of /home and the database dumps. (Glad to at least have those, though.) Since the server is an old Dell PowerEdge 1750 with no CD-ROM drive, I have no way of booting the machine from a boot disk. I also remember that in the past the server wouldn't boot from a bootable USB disk either. So the only way I know how to boot the server is to go to the datacenter, pick up the server and take it to the office, screw open the server, attach a CD-ROM drive to an IDE slot on the motherboard, and then boot it. I am hoping you guys can help me avoid this.
    I have looked a bit through the boot options. When CentOS is about to boot and I interrupt the boot countdown, I see:
    - CentOS (2.6.32-279.1.1.el63.i686)
    - CentOS Linux (2.6.32-71.29.1.el6.i686)
    - centos (2.6.32-71.el6.i686)
    I think the first configuration is the default one, because choosing it gets me to the above-mentioned kernel panic. The other ones end with something like "Sleeping forever". I can press 'e' to edit boot commands, 'a' to modify kernel arguments and 'c' for a grub command line. The command line gives a grub prompt. But I have no idea how to get the system to boot without (trying to) access the dirty partitions. What I want to do is of course:
    - boot the machine
    - check the hard drive for errors
    - mark the drive as clean
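    Two approaches people commonly use to get past "cannot start dirty degraded array" without extra boot media (both are sketches; the member device names are guesses, so check mdadm --examine output first if you can reach any shell):

        # 1) at the grub menu, press 'a' and append this to the kernel arguments for one boot:
        #       md-mod.start_dirty_degraded=1
        # 2) from any shell you can reach (e.g. single-user/rescue), force-assemble and let md resync:
        mdadm --assemble --force --run /dev/md2 /dev/sda3 /dev/sdb3
        mdadm --detail /dev/md2

    The start_dirty_degraded parameter tells the md driver to start the array anyway; the kernel documentation warns it can surface stale data on the out-of-sync member, so treat it as a "get it booting to take backups" step rather than a fix.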

    Read the article

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    Hi, we have an Ubuntu 10.04 server with a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba.
    The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes. Say I have two users, UserA and UserB, both in a group called Staff. UserA creates a file that is readable/writeable by the group (i.e. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission - however, they can't, as Samba won't allow it.
    Is there a way to force Samba to allow users within the same group to remove the "Read Only" flag from a file not created by them?
    Edit: The share is defined in smb.conf as follows:
        [global]
        log file = /var/log/samba/log.%m
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        obey pam restrictions = yes
        map to guest = bad user
        encrypt passwords = true
        passwd program = /usr/bin/passwd %u
        passdb backend = tdbsam
        dns proxy = no
        netbios name = ubsrv
        server string = ubsrv
        unix password sync = yes
        os level = 20
        syslog = 0
        usershare allow guests = yes
        panic action = /usr/share/samba/panic-action %d
        max log size = 1000
        pam password change = yes
        workgroup = workgroup
        [Projects]
        valid users = @Staff
        writeable = yes
        user = @Staff
        create mode = 0777
        path = /srv/samba/Projects
        directory mode = 0777
        store dos attributes = Yes
    The folder itself looks like this:
        ls -l /srv/samba/
        drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects
    Thanks in advance, Matt
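    One smb.conf parameter worth testing here is "dos filemode": with it enabled, a user who has write access to a file (e.g. via the group bits) is allowed to change its permission bits over SMB even though they are not the owner, which is exactly what clearing the Read Only attribute amounts to. A sketch, added to the [Projects] share (or to [global] to apply everywhere):

        dos filemode = yes

    After adding it, testparm -s should show the parameter under the share, and smbd needs a reload for existing sessions to pick it up.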

    Read the article

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    We have a Ubuntu 10.04 Server that has a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba. The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes. Say I have two users, UserA and UserB, both a group called Staff - UserA creates a file that is readable/writeable by the group (ie. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission - however, they can't as Samba won't allow it. Is there a way to force Samba to allow users within the same group to remove the "Read Only" from a file not created by them? Edit: The Samba smb.conf is as follows: The share is defined in the smb.conf as: [global] log file = /var/log/samba/log.%m passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . obey pam restrictions = yes map to guest = bad user encrypt passwords = true passwd program = /usr/bin/passwd %u passdb backend = tdbsam dns proxy = no netbios name = ubsrv server string = ubsrv unix password sync = yes os level = 20 syslog = 0 usershare allow guests = yes panic action = /usr/share/samba/panic-action %d max log size = 1000 pam password change = yes workgroup = workgroup [Projects] valid users = @Staff writeable = yes user = @Staff create mode = 0777 path = /srv/samba/Projects directory mode = 0777 store dos attributes = Yes The folder itself looks like this: ls -l /srv/samba/ drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects Thanks in advance, Matt

    Read the article

  • .htaccess redirect to error page if port is not 80

    - by Momo
    I'm running a portable server from a USB stick. The thing is, I also have WAMP installed on my local machine and its Apache somehow gets started on Windows startup, for some reason I don't recall now, and it can't be changed. I want to prepare my portable server for situations like this, so killing httpd.exe in the process list and starting my portable server is not an option. Anyway, because of the already-active httpd.exe, my portable server's WordPress site can only be accessed through localhost:81 - this is a problem as the WP site is very dependent on the URL and I don't want to store the URL with the port in the WP database.
    Here is what I want to do through .htaccess:
    - On any path except the error.php file, check whether the port is not 80.
    - If it is not port 80, redirect to /error.php?code=port.
    Is it possible for this to have priority over WP's redirection or URL handling? In error.php I provide info on how to manually close httpd.exe and such, so my family and friends can access the portable site. It's sort of like a gallery and calendar application for events and other such stuff... Please help? I can't figure it out at all. I know others may not have Apache already running, but I want to prepare for such a situation. Something like the following, but the following doesn't work:
        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        <If "%{SERVER_PORT} = 80">
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </If>
        <Else>
        RewriteEngine On
        RewriteRule ^(error.php)($|/) - [L]
        RewriteRule ^(.*)$ /error.php?code=port [L]
        </Else>
        </IfModule>
        # END WordPress
    By the way, the portable server Server2Go automatically generates vhosts based on the hostname set in its config file, and changes ports if the port (e.g. 80) is already open.
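    A side note on the snippet above: <If>/<Else> blocks need Apache 2.4, which portable WAMP-style stacks often predate. The same idea can be expressed with mod_rewrite alone; a sketch, placed above the WordPress rules so it wins, with error.php excluded to avoid a loop:

        RewriteEngine On
        RewriteCond %{SERVER_PORT} !^80$
        RewriteCond %{REQUEST_URI} !^/error\.php
        RewriteRule ^ /error.php?code=port [L]

    Depending on UseCanonicalName/UseCanonicalPhysicalPort settings, SERVER_PORT may reflect the Host header rather than the listening socket, so test with both localhost and localhost:81.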

    Read the article
