Search Results

Search found 3817 results on 153 pages for 'extjs direct'.


  • X11 performance problem after upgrading from Centos3 to Centos5 with an ATI Rage XL

    - by Marcelo Santos
    After upgrading a computer from CentOS 3 to CentOS 5, an application that does a lot of scrolling took a very high performance hit. top tells me that X is using a lot of CPU, which was not happening before. The machine has an ATI Rage XL with 8MB, and X is using the ati driver as there is no proprietary ATI driver for this board on Linux. The xorg.conf: Section "Device" Identifier "Videocard0" Driver "ati" EndSection Section "Screen" Identifier "Screen0" Device "Videocard0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1024x768" "800x600" "640x480" EndSubSection EndSection Section "DRI" Group 0 Mode 0666 EndSection A similar machine that still has CentOS 3 installed is able to start DRI on the X server while this one is not; this is the Xorg.0.log for the CentOS 5 machine: drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed [drm] failed to load kernel module "mach64" (II) ATI(0): [drm] drmOpen failed (EE) ATI(0): [dri] DRIScreenInit Failed (II) ATI(0): Largest offscreen areas (with overlaps): (II) ATI(0): 1024 x 1279 rectangle at 0,768 (II) ATI(0): 768 x 1280 rectangle at 0,768 (II) ATI(0): Using XFree86 Acceleration Architecture (XAA) Screen to screen bit blits Solid filled rectangles 8x8 mono pattern filled rectangles Indirect CPU to Screen color expansion Solid Lines Offscreen Pixmaps Setting up tile and stipple cache: 32 128x128 slots 10 256x256 slots (==) ATI(0): Backing store disabled (==) ATI(0): Silken mouse enabled (II) ATI(0): Direct rendering disabled (==) RandR enabled I also tried using EXA instead of XAA and setting: Option "AccelMethod" "XAA" Option "XAANoOffscreenPixmaps" "true" uname -a Linux sir5.erg.inpe.br 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:20:55 EDT 2009 i686 i686 i386 GNU/Linux rpm -qa | grep xorg-x11-server xorg-x11-server-utils-7.1-4.fc6 xorg-x11-server-sdk-1.1.1-48.52.el5 xorg-x11-server-Xvfb-1.1.1-48.52.el5 xorg-x11-server-Xnest-1.1.1-48.52.el5 xorg-x11-server-Xorg-1.1.1-48.52.el5 The drmOpenDevice error continues when using the suggested Option "AIGLX" "true".
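
    A note on the DRI failure above: the Xorg log shows the server unable to load a "mach64" DRM kernel module, and to the best of my knowledge the mach64 DRM driver is not shipped with stock CentOS 5 kernels, which would explain why the CentOS 3 box can start DRI and this one cannot. A quick diagnostic sketch (read-only checks, nothing here changes the configuration):

        # Is any mach64 DRM module available for the running kernel?
        find /lib/modules/$(uname -r) -name 'mach64*'
        # Did the kernel create a DRI device node at all?
        ls -l /dev/dri/
        lsmod | grep -E 'drm|mach64'
        # What the X server itself concluded about DRI:
        grep -iE 'drm|dri' /var/log/Xorg.0.log

    If the find command returns nothing, DRI cannot be enabled with the stock kernel, and the remaining tuning options are 2D ones such as the XAA/EXA settings already being tried.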


  • IPCop Packet Mangling

    - by Zenham
    I've found myself in a pickle replacing an old firewall for a client this afternoon. I'm configuring their new IPCop firewall (1.4.21); the Zerina OpenVPN addon is installed. What I need to do: there are three network interfaces, currently set up as red (WAN), green (LAN, 192.168.20.0/24) and orange (remote network 10.1.20.0/24). The orange interface is a direct fiber link to another organization. Simple description: traffic and networks appear to be properly configured at this point, but I have many (150+) specific IPs on the LAN which, when accessing resources on the 10.1.20.x network, need to be rewritten to appear to be coming from the 10.1.20.0/24 network (with return traffic properly delivered). The routing on the far side was configured earlier and should be fine, but I need to redirect any packets coming across destined for those IPs so they end up at their proper destination. The addressing is fixed and predictable (i.e. 192.168.20.125 maps to 10.1.20.125). I need to insert whatever rules I come up with into the IPCop ruleset through /etc/rc.local, I know; I'm just not sure how I should structure this. There are CUSTOMOUTPUT and CUSTOMINPUT chains, both of which currently consist of just a single rule redirecting packets to the OVPNOUTPUT/OVPNINPUT chains, so I'm guessing I should insert a rule matching outbound packets destined for the 10.1.20.x network that jumps to a new chain (maybe called TO-ORANGE), and a rule at the top of CUSTOMINPUT which jumps to a FROM-ORANGE chain. In those chains, I would have rules which do the IP matching and rewriting. Am I approaching this right? If so, I'm not very familiar with mangle and would appreciate seeing examples of how to write that source-IP rewrite. If not, how would you suggest doing this? TIA! Edit: I notice additionally that the nat table has CUSTOMPREROUTING and CUSTOMPOSTROUTING chains, so I guess I could alternatively put the rules in there....
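
    If the mapping really is a straight one-to-one swap of the network prefix (192.168.20.N appears as 10.1.20.N), this is a job for the nat table rather than mangle, and iptables' NETMAP target rewrites the prefix while preserving the host part. A sketch only, assuming rules appended to IPCop's CUSTOMPOSTROUTING chain from /etc/rc.local are evaluated as expected (worth checking with iptables -t nat -L -v):

        # Rewrite the source prefix for LAN hosts talking to the orange network
        iptables -t nat -A CUSTOMPOSTROUTING -s 192.168.20.0/24 -d 10.1.20.0/24 \
                 -j NETMAP --to 10.1.20.0/24

        # Per-host alternative if only some addresses should be translated
        iptables -t nat -A CUSTOMPOSTROUTING -s 192.168.20.125 -d 10.1.20.0/24 \
                 -j SNAT --to-source 10.1.20.125

    Connection tracking de-NATs the return traffic automatically, so no matching rule is needed on the inbound side for established flows.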


  • Add Network Printer drivers in Windows 7/Server 2008 R2?

    - by Matias Nino
    I'm running a 64-bit Windows 7 / Windows 2008 R2 workstation that I just installed. I need to add a printer that is shared on the network from a 32-bit Windows 2000 print server. This is an HP LaserJet 5Si printer, the drivers for which HP tells me are automatically built into Windows 7/R2. However, whenever I connect to the printer or try to add it, I get the following screen: Upon clicking OK, I get this screen asking me to locate the driver: How can I possibly locate a driver that is SUPPOSED TO BE NATIVELY SUPPORTED on Windows 7/R2? The tough part is that this printer is one of many shared on a server and does not have a direct IP address. Even worse: I have no access to the print server, so I cannot put the 64-bit drivers on there. Any ideas? UPDATE: HP doesn't make a Vista driver either. It claims the printer is natively supported by Vista and 7, which is true, because I am able to create a local printer on a fake TCP/IP port and Windows lets me pick the proper driver. However, when adding from the network, Windows does not let me select a driver and demands an INF. I tried searching the entire sub-structure of the C:\Windows directory and could not find any INF files that contain HP information. The INF might be located somewhere on the Windows installation DVD, but all the files on the DVD are compressed and unrecognizable. UPDATE #2: I installed the proper printer driver as a local printer (with no printer attached) and it installed fine. However, this did not change the fact that it STILL asks me to provide drivers when connecting to the networked printer.


  • Address (url) forwarding with Vyatta

    - by Trikks
    Got this kind of noob question, I suppose. I have a very basic network setup and need help setting up some address forwarding. As seen in my illustration below, all traffic enters via the eth0 interface (85.123.32.23). The external DNS is set up to direct all hosts to this IP as well. Now, how on earth do I route the incoming requests to each box? The IPs are static! My network layout: I do not wish to solve this by assigning tons of ports, etc. In my wishful thinking something like this would be nice :) set service nat rule 10 type destination set service nat rule 10 inbound-interface eth0 set service nat rule 10 destination address ftp.myhost.com set service nat rule 10 inside-address address 192.168.100.20 This way ALL traffic to the address ftp.myhost.com (at eth0) would be routed to the internal IP, 192.168.100.20. Right, is there anyone who could point me in some direction? Maybe it's wrong to use NAT? Please help me! :)
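
    One caveat to the wishful-thinking version: NAT on the router only ever sees IP addresses and ports, never the hostname the client asked for, so two different names that resolve to the same public IP cannot be split by destination NAT alone. HTTP hosts need a reverse proxy that routes on the Host header; services that live on distinct ports can be forwarded with port-based destination NAT. A rough sketch in the same Vyatta syntax as above, assuming the FTP box is 192.168.100.20 and the syntax matches your Vyatta release:

        set service nat rule 10 type destination
        set service nat rule 10 inbound-interface eth0
        set service nat rule 10 protocol tcp
        set service nat rule 10 destination address 85.123.32.23
        set service nat rule 10 destination port 21
        set service nat rule 10 inside-address address 192.168.100.20
        commit

    Repeat with different rule numbers and ports for the other boxes; for several web servers sharing port 80, forward port 80 to one box running nginx or Apache as a name-based reverse proxy.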


  • exim4 - disable autoreplies about "SMTP error from remote mail server after RCPT"

    - by osgx
    Hello. I have a setup with exim4 on domain1 in front of another server, domain2 (running sendmail). The second server has no direct access to the internet, so domain1 is the MX for domain2, and domain2 is set as a hubbed_host in the exim4 configuration on domain1. When a spammer sends a message to no_such_user@domain2, domain2's sendmail rejects it: 550 5.1.1 <no_such_user@domain2>... User unknown Then exim4 on domain1 generates an auto-reply (bounce) like this: This message was created automatically by mail delivery software. A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed: no_such_user@domain2 SMTP error from remote mail server after RCPT TO:<no_such_user@domain2>: host 10.0.0.1 [10.0.0.1]: 550 5.1.1 <no_such_user@domain2>... User unknown The spammers use a fake "From" field, and the generated bounces are frozen by exim for a long time. How can I disable some or all autoreplies from exim4? Ideally, I want a filter: if a message was not delivered because of a "User unknown" error, then don't generate any autoreply from the mailer-daemon. Thanks!
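
    A common way to avoid generating these bounces at all is to make exim4 on domain1 verify the recipient against domain2 at SMTP time (a callout), so the sending server gets the 550 directly and no local bounce to the forged sender is ever created. A sketch of the ACL condition, assuming you can add it to the RCPT-time ACL of your exim4 configuration (on Debian's split configuration this usually means a snippet under /etc/exim4/conf.d/acl/):

        # in the ACL that runs at RCPT TO time
        deny    message = unknown user
                !verify = recipient/callout=30s,defer_ok

    The defer_ok option keeps mail flowing if domain2 is temporarily unreachable; note that callouts add an SMTP probe to domain2 for each unknown recipient.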


  • Modem doesn't seem to pass internet connection to router

    - by Vian Esterhuizen
    I was supplied a Cisco DPC3825 modem by my ISP and use a Cisco E4200 as my main router. Today the Internet just stopped working, out of the blue; I can't think of anything that would have triggered it. It had been working pretty well for the past month except for a few random blips here and there. After some troubleshooting I realized that a direct wired connection to the modem would get me Internet access, but if I was wired to the router as before I would have no connection. Assuming it was the router, I connected a TP-Link WR841N, but it had the exact same problem. Also, connecting to the router via Wi-Fi from multiple devices will connect me to the router, but I still can't get access to the Internet. From these tests, it seems that the modem just won't pass the Internet connection through to the router, but it is clearly connecting and able to connect directly to my PC. What I've tried: a full reset and factory reset on the E4200; a full reset on the modem (but I believe my ISP has remote access to the modem, because the passwords and so on are always set back); my ISP has remotely reconfigured the modem. What should I try next? What else can I try? What should I do to narrow down the issue? Can you figure out what the problem might be based on this information?


  • weblogic plug-in apache http server location directive question

    - by user39510
    We are using WebLogic Portal and the Apache 2.x HTTP server with the WebLogic plug-in for Apache for load balancing. We have an application that right now can only be accessed from one of our managed servers. What I would like to do is use the Location directive to direct all requests for that page to that one managed server, and I can't get it to work. The context that the portal tries to forward to is something like /MyWebApp?portalusername=<username> (where <username> is a legitimate user, for example /MyWebApp?portalusername=joesmith). All other applications are load balancing through the plug-in as expected, because every now and then you'll get sent to the second managed server for this particular application, where it's not deployed. I tried various things in the Apache httpd.conf like the following but can't seem to get it to work. Any suggestions? The following is a snippet of the httpd.conf. It's a standard out-of-the-box httpd.conf file with the WebLogic plug-in configuration. <Location /MyWebApp> SetHandler weblogic-handler WebLogicCluster myserver:7011 </Location> <Location /?> SetHandler weblogic-handler WebLogicCluster myserver:7011, myserver2:7012 </Location>
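
    One thing that may be biting here: <Location> blocks match by path prefix (with ? and * treated as wildcards), so <Location /?> only matches single-character paths like /a rather than acting as a catch-all, and when several Location sections match the same request, the one that appears later in httpd.conf wins. A sketch using only the directives already shown (hostnames and ports are taken from the snippet above):

        # general rule first: everything is load balanced across the cluster
        <Location />
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011,myserver2:7012
        </Location>

        # more specific rule afterwards: pin this context to the one managed server
        <Location /MyWebApp>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011
        </Location>

    The query string (?portalusername=...) plays no part in Location matching, so the /MyWebApp prefix is enough.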


  • How to tunnel a local port onto a remote server

    - by Trevor Rudolph
    I have a domain that I bought from DynDNS. I pointed the domain at my IP address so I can run servers. The problem I have is that I don't live near the server computer... Can I use an SSH tunnel? As I understand it, this would give me access to my servers. I want the remote computer to direct traffic from its port 8080 over the SSH tunnel to the SSH client, i.e. my laptop's port 80. Is this possible? EDIT: verbose output of the tunnel macbookpro:~ trevor$ ssh -R *:8080:localhost:80 -N [email protected] -v OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /Users/trevor/.ssh/config debug1: Reading configuration data /etc/ssh_config debug1: Connecting to site.com [remote ip address] port 22. debug1: Connection established. debug1: identity file /Users/trevor/.ssh/identity type -1 debug1: identity file /Users/trevor/.ssh/id_rsa type -1 debug1: identity file /Users/trevor/.ssh/id_dsa type 2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'site.com' is known and matches the RSA host key. debug1: Found key in /Users/trevor/.ssh/known_hosts:9 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /Users/trevor/.ssh/identity debug1: Trying private key: /Users/trevor/.ssh/id_rsa debug1: Offering public key: /Users/trevor/.ssh/id_dsa debug1: Authentications that can continue: publickey,password debug1: Next authentication method: password [email protected]'s password: debug1: Authentication succeeded (password). debug1: Remote connections from *:8080 forwarded to local address localhost:80 debug1: Requesting [email protected] debug1: Entering interactive session. debug1: remote forward success for: listen 8080, connect localhost:80 debug1: All remote forwarding requests processed
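
    The command is essentially right, and the log shows the remote forward being accepted; the usual catch with -R '*:8080:...' is that OpenSSH only binds the remote listener to interfaces other than loopback if the server allows it, and otherwise it silently falls back to 127.0.0.1 even though the client reports success. A sketch, assuming you can edit sshd_config on the remote machine (user@site.com stands in for the real login):

        # on the remote server: /etc/ssh/sshd_config, then restart sshd
        GatewayPorts clientspecified

        # on the laptop: remote port 8080 forwards back to local port 80
        ssh -N -R '*:8080:localhost:80' user@site.com

        # on the remote server: confirm the listener is on 0.0.0.0, not 127.0.0.1
        netstat -tln | grep 8080

    With that in place, visitors hitting the DynDNS hostname on port 8080 reach the laptop's local web server; also check that the remote machine's own firewall permits inbound 8080.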


  • How to configure Hyper-V failover cluster to live migrate when dynamic memory runs out?

    - by Matt Johnson
    Apologies in advance that this is not a direct programming question, but I have a feeling that the solution involves custom PowerShell scripts (maybe), so this is as good a place to ask as any. I maintain a website that has a large Hyper-V cluster for SQL Servers. We are using Windows 2008 R2 SP1 and the new "dynamic memory" feature. I've already reviewed the Best Practices Guide and implemented its suggested configuration. Everything works well, except that when SQL demand pushes memory pressure beyond the memory available on the physical machine, the memory status goes into the "Warning" state and stays there. I assume the hypervisor is using a swapfile on the host to fulfill the memory requirement, thus slowing the virtual machine down. When this happens, there are plenty of other nodes in the cluster that have available resources. I can live-migrate the virtual server over there and everything works, and the warnings go away. Now, how can I automate this? I see no menu options in either Hyper-V or the Failover Cluster Manager for performing a migration or shutdown when dynamic memory goes into the warning state. Any ideas about how to script this, or monitor it and invoke the action directly, would be helpful. If the solution involves coding, PowerShell would be ideal, but I could envision this as a .NET service that monitors for this state and kicks off the migration request. I just don't know what objects are involved in doing the monitoring or kicking off the live migration. Thanks in advance.
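
    There is no built-in trigger for this, so it usually ends up as a scheduled PowerShell script on each host: sample the dynamic-memory pressure counters and hand any VM that stays too high to another node with a live migration. The sketch below is only an outline and makes several assumptions worth verifying first: that the "Hyper-V Dynamic Memory VM" counter set is present on your 2008 R2 SP1 hosts, that the counter instance names line up with the cluster group names, and that a simple threshold is an acceptable trigger.

        Import-Module FailoverClusters

        # Current Pressure around 100 means demand roughly equals assigned memory;
        # sustained higher values suggest the host can no longer keep up.
        $samples = Get-Counter '\Hyper-V Dynamic Memory VM(*)\Current Pressure'

        foreach ($s in $samples.CounterSamples) {
            if ($s.CookedValue -gt 110) {
                # pick any other node; a real script would choose by free memory
                $target = Get-ClusterNode |
                          Where-Object { $_.Name -ne $env:COMPUTERNAME } |
                          Select-Object -First 1
                Move-ClusterVirtualMachineRole -Name $s.InstanceName -Node $target.Name
            }
        }

    Run it from Task Scheduler every few minutes, or wrap the same logic in a small service; averaging several samples before migrating avoids bouncing VMs around on short spikes.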


  • Update to Lion, Cannot boot into Bootcamp partitions, but can use in Parallels

    - by Jon Jester
    Under Snow Leopard I had Boot Camp partitions for both XP and Windows 7. These were both accessible through Parallels 7 or by booting directly through Boot Camp. Each is on a separately partitioned hard drive. After upgrading to Lion, both are still accessible through Parallels, but I have not been able to boot directly into either. Unfortunately it is important to me to be able to boot into at least the Windows 7 partition. I have tried virtually everything I can find online. I've seen similar issues, but nothing where the partitions were usable virtually but not directly. Nothing works: rEFIt, correcting the master boot records in Windows from the command line, wiping the Windows 7 partition clean and reinstalling Windows 7 several times (first using Boot Camp 4 drivers, then Boot Camp 3 drivers), and resizing the Boot Camp partitions. When booting into the Boot Camp partitions directly, it goes all the way to showing the desktop before it fails, where I get a Windows error screen. I can see all the disks and their appropriate partitions both in OS X Disk Utility and in the Windows installer utility.


  • Setting up multiple servers for one domain

    - by Joseph Torraca
    So I am starting up a new website and I was wondering how to set up 5 servers to host the site. I have already purchased 5 Apple Xserves; one will be used as a test server and the other 4 will be for the live site. I have read some websites on the internet, and they all reference using one server, installing software onto it, and having that server do the load balancing. I have also read that you could use a hardware, rack-mounted system and plug the servers into that; the load balancer would then distribute the load. So I have a few questions about each: 1) How do you set up the software version, with the other servers as "slaves" and one "master" to direct traffic? 2) Which of the two options above is more reliable and better suited for a startup that doesn't have many users per month yet (hopefully)? 3) Is there a theoretical maximum number of servers that can be connected to a software load-balancing system? Note: obviously this will change from software to software, but in terms of the server being able to handle it? 4) In your own opinion, what are you using for your sites? Have you had any problems setting up that system or operating it once it's running? Are there any things you would stay away from if you had to start over? 5) I also purchased an Apple RAID system, so if you are familiar with it, is there any way to connect it to multiple Xserves so they all serve the same data? I'm a little confused on this, so thanks for all your help and for being patient with me. Note: take it easy on me, I am learning this as I go along, so I may have used terms incorrectly or explained things that don't really make sense. Sorry. P.S. If you need me to supply the specs on the servers to determine which system makes the most sense, I can post them for you.
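
    On question 1, the software "master" is simply a reverse proxy that all DNS points at; it forwards each request to one of the backend web servers and takes unhealthy ones out of rotation. A minimal sketch with nginx as the balancer (the backend IPs are placeholders for the four Xserves; HAProxy or Apache's mod_proxy_balancer follow the same idea):

        http {
            upstream xserve_backends {
                server 10.0.1.11;   # Xserve 1
                server 10.0.1.12;   # Xserve 2
                server 10.0.1.13;   # Xserve 3
                server 10.0.1.14;   # Xserve 4
            }
            server {
                listen 80;
                location / {
                    proxy_pass http://xserve_backends;
                }
            }
        }

    On question 3, the practical limit is the balancer itself (CPU, bandwidth, and the fact that it is a single point of failure), not the length of the backend list; at four servers either the software or the hardware approach is comfortably within range.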


  • Windows Server 2003 R2 SP2 GPO Conditional Terminal Services Client Redirection

    - by caleban
    We have a lot of mobile/home users with different client-side printers attached. Most of these users don't need to print on the client side; we don't want all of these users' Terminal Services sessions trying to map their client-side printers, and we don't want all of those drivers on the Terminal Server. What is the best way to set up around 90 users with no client-side printer redirection and 10 users with client-side printer redirection (to the printers attached to their home computers)? Do I need to create two separate OUs in AD, one for redirection and one for no redirection, and create two different policies, one for each OU? That is, one GPO with "Client/Server data redirection > Do not allow client printer redirection" disabled and one with it enabled? Is it preferable instead to change each user's AD User Properties > Environment > Client devices > "Connect client printers at logon" setting? Is there any way for me to direct "ALL HP Printers" to a single HP Universal Printer Driver, "ALL Canon Printers" to a single Canon Universal Printer Driver, etc., without specifying hundreds of unique printer names in the printsub.inf file? Thanks in advance.


  • switchless Infiniband between two servers on RHEL 6.3

    - by exfizik
    I have 2 servers running RHEL 6.3 which have two-port InfiniBand cards: >lspci | grep -i infini 07:00.0 InfiniBand: QLogic Corp. IBA7322 QDR InfiniBand HCA (rev 02) I'm interested in connecting them directly to each other, bypassing an InfiniBand switch (which I don't have). Quick googling showed that at least in some configurations it's possible. I installed all the Red Hat InfiniBand packages with yum groupinstall "Infiniband Support". However, ibv_devinfo shows that both ports on each card are down, which indicates that cables are not connected. But the cable is connected, although the LEDs are off on the cards (not a good sign). Another source of confusion for me is that, according to this, Red Hat doesn't come with OFED packages, and I'm slightly hesitant to install them from source due to the lack of Red Hat support for them... So where am I going with this? The questions I have are: Is it possible to have a switchless/direct InfiniBand connection between two servers the way I described above? If it's possible, do I have to use the OFED packages, or can I configure everything with just the packages coming with RHEL? Why are the LEDs off on my servers even though the cable is connected? Any additional input/advice/pointers would be appreciated. P.S. I followed this guide for installation instructions. The InfiniBand cards are clearly recognized by my OS and the rdma service is running. Update: I have opensm installed. When I run it, it says: OpenSM 3.3.13 Command Line Arguments: Log File: /var/log/opensm.log ------------------------------------------------- OpenSM 3.3.13 Entering DISCOVERING state Using default GUID 0x1175000076e4c8 SM port is down and it stays at that point.
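
    Back-to-back InfiniBand links without a switch do work, but only once a subnet manager is running on one of the two hosts; with no switch there is nothing else to move the ports from Down/Initializing to Active, which matches the opensm output above. A rough checklist, assuming only the RHEL-bundled packages (no source-built OFED needed for this part):

        # on both hosts (the "Infiniband Support" group should already cover these)
        chkconfig rdma on && service rdma start

        # on ONE host only: run the subnet manager
        chkconfig opensm on && service opensm start

        # then verify on both hosts
        ibstat                                   # look for State: Active, Physical state: LinkUp
        ibv_devinfo | grep -E 'state|active_mtu'
        ibping -S                                # server side; ibping -G <port GUID> from the other host

    If ibstat keeps reporting the physical state as Down or Polling even with opensm running, the problem is below the software: dark LEDs usually point at the cable (or one of the ports) rather than the RHEL packages, so building OFED from source is unlikely to change anything.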


  • Apache/Mongrel/Redmine installation problem (VirtualHost/ProxyPass)

    - by Riddler
    I am installing Redmine as per this step-by-step instruction: http://justnotes.co.cc/2010/02/11/how-to-install-redmine-on-ubuntu/ I am using Ubuntu 10.04.1, Apache 2.2.14, Mongrel 1.1.5. At the VirtualHost configuration stage, I am using this: <VirtualHost *:80> ServerName myserver.lv ProxyPass /redmine/ http://localhost:8000/ ProxyPassReverse /redmine/ http://localhost:8000 ProxyPreserveHost on <Proxy *> Order allow,deny Allow from all </Proxy> </VirtualHost> But when I direct my browser to http://<my-server's-ip>/redmine/ what I see is not the Redmine web application but "Index of /redmine" with, well, an index of the files from the root directory of Redmine. Any idea how to fix that? P.S. I tried removing the VirtualHost stuff altogether and instead adding the following simple clauses to apache2.conf: <Proxy *> Order allow,deny Allow from all </Proxy> ProxyPass /redmine/ http://localhost:8000/ ProxyPassReverse /redmine/ http://localhost:8000/ ProxyPreserveHost on As a result, the behavior changes! Now http://<my-server's-ip>/redmine/ produces the source code of Redmine's start page, so it is served, but apparently not rendered. At the same time, http://<my-server's-ip>:8000/ still works perfectly fine, so Mongrel is serving the Redmine application as it should; it's just that something is wrong with my VirtualHost/proxying clauses in the .conf file.
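
    "Index of /redmine" means Apache answered that URL from the filesystem, i.e. the ProxyPass was never consulted; that usually comes down to the proxy modules not being enabled or the directives living in a vhost that is not the one actually serving the request. A sketch of the things to check on Ubuntu 10.04 (stock Apache 2.2 module and site names assumed):

        sudo a2enmod proxy proxy_http
        # make sure the vhost that holds the ProxyPass lines is the one being hit,
        # e.g. disable the default site if it is catching the requests instead
        sudo a2dissite default
        # inside the active vhost, roughly:
        #   ProxyRequests Off
        #   ProxyPass        /redmine/  http://localhost:8000/
        #   ProxyPassReverse /redmine/  http://localhost:8000/
        sudo apache2ctl configtest && sudo /etc/init.d/apache2 restart

    Redmine also needs to know it lives under /redmine (otherwise its links and assets point at /), which on this Redmine/Rails generation is usually a relative URL root setting; check the install notes for your Redmine version for the exact option.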


  • Node.js, Nginx and Varnish with WebSockets

    - by Joe S
    I'm in the process of architecting the backend of a new Node.js web app that I'd like to be pretty scalable, but not overkill. In all of my previous Node.js deployments, I have used Nginx to serve static assets such as JS/CSS and to reverse proxy to Node (as I've heard Nginx does a much better job of this, and Express is not really production-ready). However, Nginx does not support WebSockets. I am making extensive use of Socket.IO for the first time and discovered many articles detailing this limitation. Most of them suggest using Varnish to direct the WebSocket traffic directly to Node, bypassing Nginx. This is my current setup: Varnish: port 80 - routing HTTP requests to Nginx and WebSockets directly to Node; Nginx: port 8080 - serving static assets like CSS/JS; Node.js Express: port 3000 - serving the app, over HTTP + WebSockets. However, there is now the added complexity that Varnish doesn't support HTTPS, which requires Stunnel or some other solution, and it's also not load balanced yet (perhaps I will use HAProxy or something). The complexity is stacking up! I would like to keep things simpler than this if possible. Is it still necessary to reverse proxy Node.js using Nginx when Varnish is also present? Even if Express is slow at serving static files, they should theoretically be cached by Varnish. Or are there better ways to implement this?
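
    One simplification worth considering: nginx 1.3.13 and later can proxy WebSocket upgrades itself, which removes the need for Varnish purely as a WebSocket bypass and lets nginx terminate HTTPS too. A sketch assuming nginx moves to port 80 in front of the Express app on port 3000, and that Socket.IO is at its default /socket.io/ path:

        upstream node_app {
            server 127.0.0.1:3000;
        }
        server {
            listen 80;

            location /socket.io/ {
                proxy_pass http://node_app;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
            }

            location / {
                proxy_pass http://node_app;
            }
        }

    With that, Varnish becomes optional (add it back, or use nginx's own proxy_cache, if response caching turns out to matter); whether the extra hop in front of Node is worth keeping mostly depends on how much static content and TLS termination you want handled outside Express.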


  • VMware Workstation update stopped autofit guest working

    - by u8sand
    VMware one day offered an update (8.0.1 build-528992, my current version) from 8.0. I accepted it, because updates usually fix problems. Not in this case, however... Previously it worked very well. Now it still "works", but the one glitch I'm getting makes it too hard to deal with. This screenshot will explain my problem: as you can see, my virtual PC is not resizing correctly. This happens with any operating system; autofit guest just doesn't work - it only results in things like this happening. Thanks to VMware Tools, it becomes very hard NOT to autofit the guest. I've tried uninstalling 8.0.1 completely and installing 8.0 again, but with the same results. I don't really understand what the new update has done to VMware Workstation or to my virtual machines. I do believe this isn't VMware Workstation's direct fault but VMware Tools', which would explain why going back to 8.0 didn't help, since VMware Tools has its own updates. How can I fix this? The host is running Windows 7 Ultimate 64-bit.


  • Forward differing hostnames to different internal IPs through NAT router

    - by abrereton
    Hi, I have one public IP address, one router and multiple servers behind the router. I would like to forward differing domains (all using HTTP) through the router to different servers. For example: example1.com => 192.168.0.110 example2.com => 192.168.0.120 foo.example2.com => 192.168.0.130 bar.example2.com => 192.168.0.140 I understand that this could be accomplished using port forwarding, but I need all hosts running on port 80. I found some information about IP masquerading, but I found it difficult to understand and I am not sure if it is what I am after. Another solution I have found is to direct all traffic to a reverse proxy server, which forwards the requests on to the appropriate server. What about iptables? I am using a Billion 7404 VNPX router; is there a feature on this router that can accomplish this? Are these my only options? Have I missed something completely? Is one recommended over the others? I have searched around but I don't think I am hitting the correct keywords. Thanks in advance.
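
    The reverse-proxy option is the usual answer here: the router forwards port 80 to a single internal box, and that box picks the backend from the Host header, since the router's NAT never sees hostnames. A minimal nginx sketch using the mappings listed above (Apache with name-based vhosts and mod_proxy works the same way):

        server {
            listen 80;
            server_name example1.com;
            location / {
                proxy_pass http://192.168.0.110;
                proxy_set_header Host $host;
            }
        }
        server {
            listen 80;
            server_name example2.com;
            location / {
                proxy_pass http://192.168.0.120;
                proxy_set_header Host $host;
            }
        }
        # repeat the same pattern for foo.example2.com -> 192.168.0.130
        # and bar.example2.com -> 192.168.0.140

    iptables/IP masquerading cannot do this by itself for the same reason the router cannot: the decision has to happen at the HTTP layer.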


  • SSH dynamic port forwarding, "Connection refused"

    - by crodjer
    I am trying to do dynamic port forwarding using OpenSSH through a remote computer with this command: ssh -D 6789 rohan@<remote_ip> -p <remote_port> As I understand it, this should set up a SOCKS server on my computer. I am able to use it for normal browsing, but I can't connect to IRC or to a remote ssh host (through proxychains). I get this error: channel 3: open failed: connect failed: Connection refused A higher-verbosity output of the error: $ debug1: Connection to port 6789 forwarding to socks port 0 requested. debug2: fd 9 setting TCP_NODELAY debug2: fd 9 setting O_NONBLOCK debug3: fd 9 is O_NONBLOCK debug1: channel 3: new [dynamic-tcpip] debug2: channel 3: pre_dynamic: have 0 debug2: channel 3: pre_dynamic: have 4 debug2: channel 3: decode socks5 debug2: channel 3: socks5 auth done debug2: channel 3: pre_dynamic: need more debug2: channel 3: pre_dynamic: have 0 debug2: channel 3: pre_dynamic: have 10 debug2: channel 3: decode socks5 debug2: channel 3: socks5 post auth debug2: channel 3: dynamic request: socks5 host 4.2.2.2 port 53 command 1 debug3: Wrote 96 bytes for a total of 3335 channel 3: open failed: connect failed: Connection refused debug2: channel 3: zombie debug2: channel 3: garbage collecting debug1: channel 3: free: direct-tcpip: listening port 6789 for 4.2.2.2 port 53, connect from 127.0.0.1 port 33694, nchannels 4 debug3: channel 3: status: The following connections are open: #2 client-session (t4 r0 i0/0 o0/0 fd 6/7 cfd -1) debug3: channel 3: close_fds r 9 w 9 e -1 c -1 I googled for this too, but couldn't find any solutions.


  • Weird Apache Crash (with Dump) zend_hash_find (), libphp5.so

    - by Jacob84
    To be honest, I don't have experience working with Apache. I'm doing my best to solve this and don't know if I'm going about it right, so any help will be greatly appreciated. We have a PHP page which is throwing the following message in the browser: Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data. The logs in /var/log/httpd don't seem to help, because it seems that Apache is unable to write any information; the exception or error is preventing the writing (maybe occurring at some stage of the process that makes it impossible to log?). I've read about the procedure to take core dumps of Apache, and here is the content: Reading symbols from /lib64/libgpg-error.so.0...(no debugging symbols found)...done. Loaded symbols for /lib64/libgpg-error.so.0 Reading symbols from /usr/lib64/php/modules/zip.so...(no debugging symbols found)...done. Loaded symbols for /usr/lib64/php/modules/zip.so Core was generated by `/usr/sbin/httpd'. Program terminated with signal 11, Segmentation fault. 0 0x00007fb828fff712 in zend_hash_find () from /etc/httpd/modules/libphp5.so Missing separate debuginfos, use: debuginfo-install httpd-2.2.15-15.el6.centos.1.x86_64 I've been looking in the PHP files and haven't found any direct call to zend_hash_find (which seems to be causing the error). I've looked on Google but found nothing related. Can somebody please help? Is there any step I can take to learn more? Thanks a lot, as always!
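
    The single zend_hash_find frame says little by itself; the usual next step is to install the matching debug symbols and pull a full backtrace from the same core, which normally reveals which PHP extension or opcode cache was on the stack (a stale APC/eAccelerator cache after a PHP update is a common culprit for segfaults inside libphp5.so, though that is only a guess here). A sketch, assuming the CentOS debuginfo repositories are reachable and yum-utils is installed (the core file path is a placeholder):

        # package names follow the hint gdb already printed
        debuginfo-install httpd php php-common

        gdb /usr/sbin/httpd /path/to/core
        (gdb) bt full            # full stack with arguments
        (gdb) info sharedlibrary # which modules/extensions were loaded

    A much blunter test is to disable suspect extensions in /etc/php.d/ one at a time and see whether the empty-response error goes away.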


  • How to determine which ports are open/closed on a FIREWALL?

    - by Rahl
    It seems no one has asked this question before (most questions concern host-based firewalls). Anyone familiar with port scanning tools (e.g. nmap) knows all about SYN scanning, FIN scanning, and the like to determine open ports on a host machine. The question, though, is how do you determine the open ports on a firewall itself (disregarding whether the host behind the firewall that you're trying to connect to has those particular ports open or closed)? This assumes the firewall is blocking connections from your IP. Example: we all communicate with serverfault.com through port 80 (web traffic). A scan of the host would reveal that port 80 is open. If serverfault.com is behind a firewall and still allows this traffic through, then we can assume the firewall has port 80 open as well. Now let's assume the firewall is blocking you (e.g. your IP address is on the deny list or missing from the allow list). You know port 80 has to be open (it works for the appropriate IP addresses), but when you (the disallowed IP) attempt any scanning, the firewall drops every probe packet (including those to port 80, which we know to be open). So, how might we accomplish a direct firewall scan to reveal open/closed ports on the firewall itself, while still using the disallowed IP?
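
    nmap does have modes aimed at the filtering device rather than the host behind it: an ACK scan classifies ports as filtered versus unfiltered purely by whether the firewall passes the probe, and varying the source port sometimes exposes rules a plain SYN scan never sees. A sketch (firewall.example.com is a placeholder; only probe equipment you are authorized to test):

        # ACK scan: "unfiltered" means the firewall let the probe through
        nmap -sA -Pn -p 1-1024 firewall.example.com

        # retry interesting ports from a "friendly" source port such as 53
        nmap -sS -Pn -g 53 -p 22,80,443 firewall.example.com

    If the firewall drops everything from your address regardless of port, no scan from that address can tell open from closed; the remaining options are scanning from an allowed network (or a host you control elsewhere) or reading the device's own ruleset.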


  • Changing Domain Name DNS to Redirect web traffic to one server, and leave mail to original server

    - by David S
    Hi there. OK, I'm quite the idiot with DNS... apart from the basics. I have a domain name hosted with a domain registrar, which seems to give full DNS control (i.e. the ability to view/edit A records, mail records, etc.). We have recently set up a server at Rackspace which hosts the new website. The original/existing server (where the old website still is, along with mail) is on another shared hosting company's server. I went to the domain name registrar and checked out the DNS management [DNS screenshot omitted]. So obviously the A record is pointing to the actual server where the website/mail is, I figure, and the CNAME is pointing (an alias?) to the website URL. So my question is this: if I want the web traffic to go to the Rackspace/new server, but keep the mail going where it goes now, what do I have to change? Also, should I even change this info at the domain registrar? The Rackspace server account has full DNS, which seems to suggest I can point to their nameservers and then redirect the MX (mail) traffic to where the mail server is. Sorry if that was a bit confusing... obviously in need of DNS training ;) Any help very much appreciated. David.
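
    There is no need to move the nameservers for this: leave DNS where it is and change only the web records, keeping the mail records pointed at the old host. Roughly what the zone ends up looking like (the IPs below are placeholders for the Rackspace and old-host addresses):

        yourdomain.com.        A      203.0.113.10    ; new Rackspace web server
        www.yourdomain.com.    CNAME  yourdomain.com.
        mail.yourdomain.com.   A      198.51.100.20   ; old shared-hosting server
        yourdomain.com.        MX 10  mail.yourdomain.com.

    The key rule is that the MX target must stay an A record aimed at the old server; only the bare-domain/www A record moves to Rackspace. Lowering the TTL a day before the change makes the switch propagate faster.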


  • Routing WIFI and LAN for specific traffic

    - by jakebird451
    I have two network devices aboard my MacBook Pro: Wi-Fi (en1), used for general traffic, which gets an IP of 192.168.19.* via DHCP; and LAN (en0), used for specific traffic, which has a static IP of 192.168.2.10 and does not connect to a router, only a switch, for a direct connection. I have 4 IP addresses I need to access on the LAN: 192.168.2.1 192.168.2.21 192.168.2.20 192.168.2.30 The rest of the traffic needs to go over Wi-Fi. I have tried setting up routes for the specific IP addresses, but I only managed to mess up my network. I do not venture out into the world of networking too often, but this was the latest command I tried: sudo route add -host 192.168.2.30 -interface en0 This command killed my ability to use ping: it told me that ping could not allocate memory (is that even possible?). It also killed my Wi-Fi access. Logging out and back in fixed the issue. I do not need to make this solution permanent, so I am fine with temporary routes. EDIT: this is what I have been trying lately: sudo route flush sudo route add default 192.168.19.1 This gets everything to work for about a minute, but after that minute it "forgets" the route to Wi-Fi while retaining the LAN (en0) route. If I unplug and replug my LAN (en0) cable, it works for another minute.
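
    Flushing the table and re-adding a default route is heavier than needed; the usual approach is to leave the Wi-Fi default route alone and pin just the four hosts to the wired interface, and to make sure the wired service never supplies a competing default route (empty Router field for en0 in Network preferences, Wi-Fi above Ethernet in the service order), which is a likely reason the setup "forgets" Wi-Fi after a minute. A sketch of the host routes (they do not survive a reboot):

        # keep the Wi-Fi default route; send only these hosts out the wired NIC
        sudo route -n add -host 192.168.2.1  -interface en0
        sudo route -n add -host 192.168.2.20 -interface en0
        sudo route -n add -host 192.168.2.21 -interface en0
        sudo route -n add -host 192.168.2.30 -interface en0

        # verify which interface each destination will use
        route -n get 192.168.2.21
        route -n get 8.8.8.8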


  • Outside VPN traffic not able to ping site-to-site VPN remote site

    - by Siriss
    We have two ASA 5510s running 8.4 in a site-to-site VPN setup. All internal traffic is working smoothly. Site/Subnet A: 192.100.0.0 - local Site/Subnet B: 192.200.0.0 - remote VPN users: 192.100.40.0 - assigned by the ASA When you VPN into the network, all traffic hits Site A, and everything on subnet A is accessible. Site B, however, is completely inaccessible to VPN users. All machines on subnet B, the firewall itself, etc. are not reachable by ping or otherwise. I know I am missing a NAT rule; in 8.2 it was easy as pie to set up using ASDM, but now I can't get it for the life of me, as 8.4 apparently made a lot of changes to the NAT rules. I am not too comfortable on the ASA command line, but if there is a command I need to add, or if you could direct me to where I can add this in the 8.4 ASDM, I would really appreciate it. I have tried NAT Exempt, Static NAT, Static NAT Policies, etc. I think I tried all the options. I also might have my interfaces confused with the new look and feel of ASDM. Thank you much in advance and I hope I have been thorough enough.
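
    In 8.4 the old "NAT exempt" checkbox becomes a manual (twice) NAT identity rule, and VPN clients reaching the far side of the tunnel usually need three things on Site A's ASA: identity NAT between the VPN pool and Site B, hairpinning allowed, and the VPN pool included in the crypto ACLs on both ends. A CLI sketch only; the object names are made up, "outside" is assumed to be the interface terminating both the tunnel and the VPN clients, and the masks need adapting to your real subnets:

        object network OBJ-VPN-POOL
         subnet 192.100.40.0 255.255.255.0
        object network OBJ-SITE-B
         subnet 192.200.0.0 255.255.255.0

        ! identity (exempt) NAT for VPN-client traffic headed to Site B
        nat (outside,outside) source static OBJ-VPN-POOL OBJ-VPN-POOL destination static OBJ-SITE-B OBJ-SITE-B

        ! VPN clients enter and leave on the same interface
        same-security-traffic permit intra-interface

    The mirror image matters just as much: Site B's ASA must list 192.100.40.0/24 in its crypto ACL and its own exemption rule, otherwise the tunnel never negotiates that network and the pings die silently.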


  • connect server to server on secondary NIC

    - by microchasm
    Hi, I have a CentOS box with multiple NICs running Apache. I also have another box running RHEL that will be the MySQL server. I'm trying to use the secondary NIC on the Apache box to connect directly to the MySQL server, but so far no luck. I want to isolate the MySQL box as much as possible, which is why I'm going for a direct connection as opposed to running through a switch. I have a crossover cable running between them. IP configs: Apache box eth0 [to LAN] ip addr: 192.168.200.100 netmask: 255.255.0.0 gateway: 192.168.111.1 eth1 [to MySQL] ip addr: 192.168.200.101 netmask: 255.255.0.0 gateway: [blank] MySQL box eth0 [to Apache] ip addr: 192.168.200.203 netmask: 255.255.0.0 gateway: 192.168.200.201 The rest of our network is on the 192.168.111.0/24 subnet. Ping only returns Destination Host Unreachable. I've tried several variations of this setup (including a straight-through cable), and I can't seem to get them to talk to each other. Any help appreciated.
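
    The 255.255.0.0 masks are the likely culprit: with a /16, both NICs on the Apache box (and the 192.168.111.x LAN) all fall inside the same 192.168.0.0/16 network, so the kernel can pick the wrong interface for 192.168.200.203, and the MySQL box's gateway of 192.168.200.201 does not correspond to anything. Giving the crossover link its own small subnet keeps the routing unambiguous; a sketch with arbitrary private addresses:

        # Apache box, eth1 (crossover link)
        ifconfig eth1 10.10.10.1 netmask 255.255.255.252 up

        # MySQL box, eth0 (point-to-point link, no gateway needed)
        ifconfig eth0 10.10.10.2 netmask 255.255.255.252 up

        # test from the Apache box
        ping -c 3 10.10.10.2

    Once it works, make it permanent in /etc/sysconfig/network-scripts/ifcfg-eth1 on the CentOS box and ifcfg-eth0 on the RHEL box, and point the application at 10.10.10.2 for MySQL.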


  • Linksys WAP54G v3.1 no access, power and link LED solid

    - by user142113
    I'm managing the network of a small enterprise. A Linksys WAP54G v3.1 used to provide the WiFi network. I was called because the device no longer provides a WiFi network. I first tried to ping the device via LAN, but there was no response. I've reconnected the AP to the mains repeatedly, and the POWER and LINK LEDs always stay solid, even if no network cable is connected. What I've done so far: Reset as documented: pressed the RESET button for 10 seconds. After that I tried to access the AP with a direct cable connection to my computer, which I set to a static IP of 192.168.1.240, but I got no ping response on the default IP 192.168.1.245. Furthermore, ipconfig reports "media disconnected". The more complex reset method described here http://bruceshankle.blogspot.de/2005/12/how-to-reset-linksys-wap54g.html also had no effect. I also tried to ping 192.168.1.1 without success. Tried this method: http://www.daniweb.com/hardware-and-software/networking/threads/142437/linksys-wireless-access-point-problem#post680245 but there was no ping response when powering up, and the TFTP transfer timed out. Finally, I tried to short pins 15 and 16 of the flash chip on the bottom side of the AP mainboard while booting, to provoke a checksum error. That should make it possible to upload firmware with TFTP, since the AP stops booting and waits for a TFTP connection on 192.168.1.1. But I've had no success. I also tried pulling pins 15 and 16 to ground while booting, again without effect. After all that I still can't ping the AP, and ipconfig still tells me "media disconnected". The POWER and LINK LEDs are solid. I would appreciate your answers.

