Search Results

Search found 19525 results on 781 pages for 'say'.


  • Frequent and weird wifi disconnections

    - by Sidou
    How would you explain, troubleshoot (and solve) the following problem? A D-Link 2640R wifi ADSL modem/router is installed in the living room at about 1.8 m height. It works fine, synchronising and getting/serving a stable internet connection.

    First situation:
    - Laptop 01 is at the other end of the house, say in room 01, about 15 m south of the living room. It gets a stable signal of good to very good quality. No disconnections.
    - Laptop 02 is in room 02, opposite room 01 (5 m west), which puts it at almost the same distance and direction from the router, 15 m to the north. It also gets a stable signal of good to very good quality. No disconnections.

    Second situation:
    - Laptop 01 is moved to room 03, north of the living room (just 3 m behind the wall where the router sits). It gets a stable signal of excellent quality. No disconnections.
    - Laptop 02 is still in room 02, but now experiences frequent disconnections. In fact it is almost impossible to get on the Internet even though the signal level is still very good: either there is no Internet while the wifi icon shows a connection to the access point, or no connection is established at all. This happens every 2 minutes, which means virtually no Internet, since I only get a window of a minute or so to load any website or even reach the router's web-based control panel.

    If Laptop 01 is completely shut down, has its wifi adapter disabled, or keeps running but with its wifi MAC address blocked, then Laptop 02 has no problem at all. If Laptop 02 is moved closer to the router, into the living room for instance, there is no connection problem even if Laptop 01 is also connected. Moving Laptop 01 back to its original location (room 01) also removes the problem.

    I'm completely lost and don't know how to address this issue. I tried changing the wifi channel and even the auto channel scan, but that didn't solve it. I suspect the problem comes from Laptop 01 being in its new location, or from some sort of interference, since it only occurs under the conditions described, but I have no idea how to solve it. I also scanned the neighborhood for wifi congestion using InSSIDer; there are a few other access points, but they don't seem to affect the situation. Any ideas about the steps to follow or tools to use?

    Read the article

  • High CPU usage - symptoms moving from server to server after bouncing

    - by grt3kl
    First off, I apologize if I didn't include enough information to properly troubleshoot this issue. This sort of thing isn't my specialty, so it is a learning process. If there's something I need to provide, please let me know and I'll be happy to do what I can. The images associated with my question are at the bottom of this post.

    We are dealing with a clustered environment of four WebLogic 9.2 Java application servers. The cluster utilizes a round-robin load algorithm. Other details include:

    - Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_12-b04)
    - BEA JRockit(R) (build R27.4.0-90_CR352234-91983-1.5.0_12-20071115-1605-linux-x86_64, compiled mode)

    Basically, I started looking at the servers' performance because our customers are seeing lots of lag at various times of the day. Our servers should easily handle the loads they are given, so it's not clear what's going on. Using HP Performance Manager, I generated some graphs that indicate that the CPU usage is completely out of whack. It seems that, at any given point, one or more of the servers has a CPU utilization of over 50%. I know this isn't particularly high, but I would say it is a red flag based on the CPU utilization of the other servers in the WebLogic cluster.

    Interesting things to note:

    - The high CPU utilization was occurring only on server02 for several weeks. The server crashed (extremely rare; we are not sure if it's related to this) and upon starting it back up, the CPU utilization was normal on all 4 servers.
    - We restarted all 4 managed servers and the application server (on server01) yesterday, on 2/28. As you can see, server03 and server04 picked up the behavior that was seen on server02 before.
    - The CPU utilization is a Java process owned by the application user (appown).
    - The number of transactions is consistent across all servers. It doesn't seem like any one server is actually handling more than another.

    If anyone has any ideas or can at least point me in the right direction, that would be great. Again, please let me know if there is any additional information I should post. Thanks!
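    A minimal diagnostic sketch for this kind of problem, assuming shell access to an affected node and that JRockit's jrcmd tool is on the PATH (the pid 12345 below is a placeholder for the busy Java process):

        # Find the busiest Java process owned by the application user (appown)
        top -b -n 1 -u appown | head -20

        # Show per-thread CPU for that JVM so the hot thread ids can be noted
        top -b -n 1 -H -p 12345 | head -40

        # Capture a JRockit thread dump to match the hot thread ids against Java stacks
        jrcmd 12345 print_threads > /tmp/threads.$(date +%s).txt

    Comparing a few dumps taken a minute or two apart usually shows whether the CPU is going to application code, garbage collection, or a spinning thread.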

    Read the article

  • Too many Tunnel Adapter Interfaces

    - by Tomas Lycken
    If I open a command prompt on my machine and type ipconfig /all, I see lots of entries like this:

        Tunnel adapter Local Area Connection* 9:

           Media state . . . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . . . :
           Description . . . . . . . . . . . . . : Microsoft 6to4 Adapter #5
           Physical address. . . . . . . . . . . : 00-00-00-00-00-00-00-E0
           DHCP Enabled. . . . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . . . : Yes

    In fact, there are so many that my "real" adapters are pushed out of the visible output and can't be seen anymore. Is there any flag I can use on ipconfig to hide all virtual interfaces? Or is there some other way around this problem? Since they always say "Media disconnected", I suppose disabling them could be an option, but if possible I'd rather not turn any functionality off. I just want to control what output I get from ipconfig.

    Also, I know these are related to IPv6. However, most of what I find on Google merely states what these adapters are and that they're harmless -- nothing about hiding or removing them.
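    For reference, a short sketch of two approaches from an elevated command prompt: the first only changes what you look at, the second disables the transition-technology adapters outright, which goes further than the question wants, so treat it as optional:

        :: List only the real IPv4 interfaces instead of the full ipconfig dump
        netsh interface ipv4 show interfaces

        :: Disable the 6to4, ISATAP and Teredo tunnel adapters entirely (reversible with "set state default")
        netsh interface 6to4 set state disabled
        netsh interface isatap set state disabled
        netsh interface teredo set state disabled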

    Read the article

  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Here's the situation. I have a few git projects with a directory structure laid out more or less like this:

        simpleproj
          app
          www
            admin
            demo
          lib
            model
            orm
            view
          model
            user
            blah
            ...

        storeproj
          app
          www
            about
            mobile
            fbapp
          lib
            model
            orm
            view
          model
            user
            message
            cart
            product
            merchant

    Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward eclipse + egit, because some of our Windows guys not used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, egit doesn't support submodules.

    So, my solution has been a rather crude one involving symlinks... let's say my directory structure on my dev box is generally laid out like this:

        ~/projects/
          bigproj
            .git
            app
            lib
              model (-> ~/lib/model/src/)
              orm (-> ~/lib/orm/src/)
          neatproj
            .git
            app
            lib
              view (-> ~/lib/view/src/)
          oldproj
            .git
            app
            lib
              orm (-> ~/lib/orm/src/)

        ~/lib/
          model
            .git
            src
            README.md
          orm
            .git
            src
            COPYING
          view
            .git
            src

    ...the symlinks point inside the directory holding the git repo, so eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again, of course). Each project stores a separate copy of the contents of the symlinked directories within "lib" -- but only when staged from within eclipse. After committing from eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable also, probably more so than keeping a separate history of the libs for each project... but eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing.

    So this brings me to my question. I'd like to know how to either:

    - get eclipse+egit to see the symlinks as symlinks, if git will somehow handle them properly*, or
    - get the CLI git to treat them as non-symlinks.

    Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D

    Note: tried to tag this as git-submodules, but was not allowed :(

    * Should I make them relative or absolute? Either way it's a mess. Also, will symlinks work on Windows? I know there's something similar, but you need a 3rd-party tool to manage them AFAIK; I doubt these would translate well.
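    In case the submodule route becomes viable again, a minimal sketch of what it would replace the symlinks with -- the repository URLs are placeholders, and it assumes the libs are reachable from every dev box:

        # inside bigproj: record lib/orm as a submodule instead of a symlink
        git submodule add git@example.com:company/orm.git lib/orm
        git commit -m "Track orm as a submodule"

        # on a fresh clone, pull the submodule contents down
        git clone git@example.com:company/bigproj.git
        cd bigproj
        git submodule update --init --recursive

    The trade-off is the one described above: each project pins a specific commit of the lib rather than sharing one live working copy.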

    Read the article

  • A proper way to create non-interactive accounts?

    - by AndreyT
    In order to use password-protected file sharing in a basic home network, I want to create a number of non-interactive user accounts on a Windows 8 Pro machine in addition to the existing set of interactive accounts. The users that correspond to those extra accounts will not use this machine interactively, so I don't want their accounts to be available for logon and I don't want their names to appear on the welcome screen.

    In older versions of Windows Pro (up to Windows 7) I did this by first creating the accounts as members of the "Users" group, and then including them in the "Deny log on locally" list in the Local Security Policy settings. This always had the desired effect. However, my question is whether this is the right/best way to do it.

    The reason I'm asking is that even though this method works in Windows 8 Pro as well, it has one little quirk: interactive users from the "Users" group are still able to see these extra user names when they go to the Metro screen and hit their own user name in the top-right corner (i.e. open the "Sign out/Lock" menu). The command list that drops down contains the "Sign out" and "Lock" commands as well as the names of other users (for the "switch user" functionality). For some reason that list includes the extra users from the "Deny log on locally" list. It is interesting to note that this happens when the current user belongs to the "Users" group, but it does not happen when the current user is from "Administrators".

    For example, let's say I have three accounts on the machine: "Administrator" (from "Administrators", can log on locally), "A" (from "Users", can log on locally), and "B" (from "Users", denied logon locally). When "Administrator" is logged in, he can only see user "A" listed in his Metro "Sign out/Lock" menu, i.e. all works as it should. But when user "A" is logged in, he can see both "Administrator" and user "B" in his "Sign out/Lock" menu. Expectedly, in the above example trying to switch from user "A" to user "B" by hitting "B" in the menu does not work: Windows jumps to the welcome screen, which lists only "Administrator" and "A".

    Anyway, on the surface this appears to be an interface-level bug in Windows 8. However, I'm wondering if going through the "Deny log on locally" setting is the right way to do it in Windows 8. Is there any other way to create a hidden non-interactive user account?
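    For comparison, a sketch of one commonly cited alternative: create the account normally and then hide it from the welcome screen via the SpecialAccounts registry key. The account name "ShareUser" is a placeholder, the commands assume an elevated prompt, and note that this hides the account rather than denying logon:

        net user ShareUser * /add
        net localgroup Users ShareUser /add
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v ShareUser /t REG_DWORD /d 0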

    Read the article

  • Exchange-Server Query

    - by Rudi Kershaw
    First, a little background. I've recently been taken on as a web and software developer for a small company, which has no other in-house IT support. They've been asking my opinion on lots of IT subjects that are quite far out of my comfort zone. I'm definitely not a network admin.

    Their IT consultancy contractor is pushing them to upgrade their dedicated Exchange server, even though it seems like the one they currently have has a lot of life left in it and is running problem-free. They say it's "coming to the natural end of its life". They want to install a monster with a Xeon E5-2420, 32GB RAM, 2x 1TB HDDs, Windows Server 2012 and Microsoft Exchange 2010, and they want to charge a small fortune for it. Basically, this system seems massively over the top, seeing as it won't be doing anything other than running as an Exchange server for a company with fewer than 25 email accounts.

    My employers also have an in-house file server that hosts three web apps, an SQL server, their local domain, a print server and shared folders. That machine has the same specs as the proposed new one, and it is barely using any of its potential. I asked if Microsoft Exchange 2010 could be installed on their file server, but they said that MS Exchange can't run on the same system as an SQL server because for some reason they will eat up each other's resources (even though the SQL server isn't touching 1% of the current system's CPU or RAM).

    My question really is: are they trying to rip my employers off? Could MS Exchange be installed on the other server (on a virtual instance or not), or does the old one even need replacing at all? Going with their current suggestion will cost the company in excess of £6k, and it seems entirely unnecessary.

    I apologise, because I know this is probably a little thin on details, but if I carry on I could end up writing a massive essay that no one will want to read. I've been doing my research, but I'm not knowledgeable enough to make any hard decisions. Let me know if you need any more details. Thank you for any help you can offer.

    Further details: the new Exchange server would need to support Outlook Web App, 25 users, a few public mailboxes, and email exchange with BlackBerries.

    Read the article

  • SSH Connection Refused - Debug using Recovery Console

    - by olrehm
    Hey everyone,

    I have found a ton of questions answered about debugging why one cannot connect via SSH, but they all seem to require that you can still access the system -- or say that without that, nothing can be done. In my case, I cannot access the system directly, but I do have access to the filesystem using a recovery console.

    So this is the situation: my provider made a kernel update today and in the process also rebooted my server. For some reason, I cannot connect via SSH anymore, but instead get

        ssh: connect to host mydomain.de port 22: Connection refused

    I do not know whether sshd is just not running, or whether something (e.g. iptables) blocks my ssh connection attempts. I looked at the logfiles; none of the files in /var/log contain any mention of ssh, and /var/log/auth.log is empty. Before the kernel update, I could log in just fine and used certificates so that I would not need a password every time I connect from my local machine.

    What I tried so far: I looked in /etc/rc*.d/ for a link to the /etc/init.d/ssh script and found none. So I am expecting that sshd is not started properly on boot. Since I cannot run any programs in my system, I cannot use update-rc.d to change this. I tried to make a link manually using ln -s /etc/init.d/ssh /etc/rc6.d/K09sshd and restarted the server -- this did not fix the problem. I do not know whether it is at all possible to do it like this, whether it is correct to create it in rc6.d, and whether the K09 is correct. I just copied that from apache.

    I also tried to change my /etc/iptables.rules file to allow everything:

        # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009
        *mangle
        :PREROUTING ACCEPT [7468813:1758703692]
        :INPUT ACCEPT [7468810:1758703548]
        :FORWARD ACCEPT [3:144]
        :OUTPUT ACCEPT [7935930:3682829426]
        :POSTROUTING ACCEPT [7935933:3682829570]
        COMMIT
        # Completed on Thu Dec 10 18:05:32 2009
        # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009
        *filter
        :INPUT ACCEPT [7339662:1665166559]
        :FORWARD ACCEPT [3:144]
        :OUTPUT ACCEPT [7935930:3682829426]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 993 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 143 -j ACCEPT
        -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 8080 -s localhost -j ACCEPT
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        -A INPUT -j ACCEPT
        -A FORWARD -j ACCEPT
        -A OUTPUT -j ACCEPT
        COMMIT
        # Completed on Thu Dec 10 18:05:32 2009
        # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009
        *nat
        :PREROUTING ACCEPT [101662:5379853]
        :POSTROUTING ACCEPT [393275:25394346]
        :OUTPUT ACCEPT [393273:25394250]
        COMMIT
        # Completed on Thu Dec 10 18:05:32 2009

    I am not sure this is done correctly or has any effect at all. I also did not find any mention of iptables in any file in /var/log. So what else can I do? Thank you for your help.
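    For reference, a sketch of how the start links would normally look under Debian-style SysV init, assuming the recovery console exposes the server's root filesystem at its usual paths: runlevel 6 is the reboot runlevel and K-prefixed links are stop scripts, so a K09sshd link in rc6.d never starts sshd at boot; start links are S-prefixed in the runlevels the system boots into (2-5 on Debian), and the sequence number 16 below is an arbitrary example:

        ln -s ../init.d/ssh /etc/rc2.d/S16ssh
        ln -s ../init.d/ssh /etc/rc3.d/S16ssh
        ln -s ../init.d/ssh /etc/rc4.d/S16ssh
        ln -s ../init.d/ssh /etc/rc5.d/S16ssh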

    Read the article

  • MS DPM 2007: Testing the Recovery for a Production Domain

    - by NewToDPM
    Hi everybody!

    MS DPM 2007 is a new technology in my company, and so am I to the product. We have a classic Microsoft domain with two DCs, Exchange 2007 and a couple of Web/MS SQL servers. I deployed DPM on the domain one month ago, and after fixing the various issues I had with replica inconsistencies and adapting the schedule and retention range to the server storage pool size, I can say the backup system is working correctly (no errors) as of today.

    However, there is one problem: we have not attempted to restore from the backups yet, which is a big no-no of course. I'm not sure how I should handle this, my main concern being Exchange and the System State of the DCs. From my understanding, DPM can only protect AND restore data on a server which is part of the same domain as the backup server. If I restore the System State (containing Active Directory) and the Exchange Storage Groups on a testing server, I am afraid it would completely disturb the functioning of the domain (for example, by having two primary DCs on the domain).

    I am thinking about building a second DPM server on a separate testing domain which would mirror the replicas, and then restoring onto testing servers in this new domain. Is that the right way to handle data recovery testing? How did you do it on your domain when you first deployed DPM? I'd be grateful for any link/documentation or advice. Thank you in advance for your help!

    EDIT: Two options seem possible so far:

    i. Create another DC/Exchange server in the alternate location;
    ii. Create a separate domain in the alternate location and set up a trust between this domain and the production one.

    Option i is certainly the best, but it implies setting up a secondary Exchange server with a dedicated public IP address, so that if Exchange #1 dies we can still send emails with Exchange #2. I don't know how complex that can be and would need to discuss it with my colleagues. Option ii would only fit the testing purposes. My only question regarding it is: if my production and DPM servers are part of domain A, and there is a trust between domains A and B, can I restore domain A content to any domain B server?

    Read the article

  • How to make Windows 7 use the internet connection that I specify

    - by user138957
    I have a LAN adapter and a USB wireless internet connection. When both are connected, Windows 7 always uses the USB. I tried changing the metric values, but no luck. Let me explain the steps I took.

    Currently automatic metric is set on all adapters. With the LAN connected, ipconfig shows that it is connected with the correct IP/DNS/gateway etc., and the IPv4 route table shows metric 24. Then I connected the USB: ipconfig shows the USB connection first, then the LAN, in that order, and internet traffic now goes through the USB. The IPv4 route table shows metric 4249 for the LAN and 41 for the USB; the gateway for the USB shows "on-link". netstat -rn shows the USB device on top.

    I changed the LAN metric to 5, and the route table now shows the LAN as 9 (not sure why it added 4) and the USB as 41. netstat shows LAN then USB, and ipconfig shows LAN then USB. But the connection still goes through the USB. How do I know? Task Manager shows utilization only on the USB, and the speed is around 1 Mbps rather than the LAN's 10 Mbps.

    How can I get Windows 7 to use the LAN while the USB is connected? I am just trying to use the USB as a backup in case I lose the LAN connection. Please help!

    I then thought I would set the USB metric manually to, say, 10, but it says I have to reconnect for it to take effect. Currently the USB still shows below the LAN and the table still has 9 and 41. I disconnected the USB: the table shows the LAN metric as 24 (not sure why it changed from 9 -- the setting got reverted back to automatic). I reconnected the USB: the setting still shows 10, and the route table shows 11 for the USB and 4249 for the LAN (the settings show 4245, 4 less). For some reason reconnecting the USB resets the LAN setting. Thanks
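    A short sketch of how the interface metrics can be pinned from an elevated command prompt instead of the adapter dialog -- the interface names below ("Local Area Connection", "Wireless Network Connection 2") are placeholders for whatever the first command reports on the machine:

        :: See the current interface metrics and pick out the two adapters
        netsh interface ipv4 show interfaces

        :: Give the LAN a lower (preferred) metric and the USB link a higher one
        netsh interface ipv4 set interface "Local Area Connection" metric=5
        netsh interface ipv4 set interface "Wireless Network Connection 2" metric=50

        :: Confirm which route wins for internet-bound traffic
        route print 0.0.0.0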

    Read the article

  • Is this iptables NAT exploitable from the external side?

    - by Karma Fusebox
    Could you please have a short look at this simple iptables/NAT setup? I believe it has a fairly serious security issue (due to being too simple).

    On this network there is one internet-connected machine (running Debian Squeeze/2.6.32-5 with iptables 1.4.8) acting as NAT/gateway for the handful of clients in 192.168/24. The machine has two NICs:

    - eth0: internet-facing
    - eth1: LAN-facing, 192.168.0.1, the default GW for 192.168/24

    The routing table is the two-NICs default, without manual changes:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
        (externalNet)   0.0.0.0         255.255.252.0   U     0      0        0 eth0
        0.0.0.0         (externalGW)    0.0.0.0         UG    0      0        0 eth0

    The NAT is then enabled only and merely by these actions; there are no more iptables rules (all iptables policies are ACCEPT):

        echo 1 > /proc/sys/net/ipv4/ip_forward
        /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    This does the job, but I miss several things here which I believe could be a security issue: there is no restriction on allowed source interfaces or source networks at all, and there is no firewalling part such as (with policies set to DROP):

        /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        /sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

    And thus, the questions of my sleepless nights are:

    1. Is this NAT service available to anyone in the world who sets this machine as his default gateway? I'd say yes it is, because there is nothing indicating that an incoming external connection (via eth0) should be handled any differently from an incoming internal connection (via eth1) as long as the output interface is eth0 -- and routing-wise that holds true for both external and internal clients that want to access the internet. So if I am right, anyone could use this machine as an open proxy by having his packets NATted here. Please tell me whether that's right, or why it is not.
    2. As a "hotfix" I have added a "-s 192.168.0.0/24" option to the NAT-starting command. I would like to know whether not using this option was indeed a security issue, or just irrelevant thanks to some mechanism I am not aware of.
    3. As the policies are all ACCEPT, there is currently no restriction on forwarding eth1 to eth0 (internal to external). But what are the effective implications of currently NOT having the restriction that only RELATED and ESTABLISHED states are forwarded from eth0 to eth1 (external to internal)? In other words, should I rather change the policies to DROP and apply the two "firewalling" rules I mentioned above, or does the lack of them not affect security?

    Thanks for clarification!
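    For completeness, a sketch of what the tightened version discussed above could look like -- the same interface names, combining the -s restriction with the DROP-policy forwarding rules; a sketch, not an audited drop-in ruleset:

        echo 1 > /proc/sys/net/ipv4/ip_forward

        /sbin/iptables -P FORWARD DROP
        /sbin/iptables -A FORWARD -i eth1 -o eth0 -s 192.168.0.0/24 -j ACCEPT
        /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT

        /sbin/iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE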

    Read the article

  • Be your own cloud [closed]

    - by Jedi
    I have reasonably many electronic gadgets that can go on the LAN or Wi-Fi. But how do you share and/or synchronize all your files among them?

    Between my laptops and my desktop I use Dropbox -- a nice way to share files among computers. But what if the HDD in your laptop is not large enough to carry music, pictures and films? Normally you would buy an external USB HDD and store them there, but then you cannot reach the files from other computers which are not connected to the USB device. Many would say I should use something like a cloud solution with a disk station, but my needs are as follows:

    - Mass storage which can be reached from all devices (laptops, desktops, iPhone, Android phone, Xbox or PlayStation).
    - Low power requirements and silent operation.
    - Reachable inside the home, and it would be nice if it could be reached outside the home as well.
    - Cheap.

    I have looked around and found a wireless router which can share a USB device: the D-Link Wireless N HD Media Router. I thought it would be an interesting option for a simple local cloud. D-Link uses a little program called SharePort Plus which mounts the USB device on your computer. Unfortunately the transfer rate to the USB storage device is rather disappointing: 5.8 Mbps, even though the distance between the laptop and the router was 2 metres. The same happens when I use a cable from the computer to the router. Another thing is that SharePort Plus only allows one computer to be connected to the device at a time. That last point is something I could live with.

    I have searched the Internet for other solutions and found this video from Synology. I'm not sure if their solution is the right one; I think a disk station connected to my home LAN could be the right answer.

    What have you done in your home to store and share files among your computers and game consoles?

    Read the article

  • Setting up Virtual Hosts with Apache on Windows 2008 server for multiple sites. Complicated setup, including subversion

    - by Roeland
    I am setting up Apache on my Windows 2008 server at home. It will serve two functions:

    - Subversion hosting, to allow me and some others to manage company documents with version control.
    - Local website hosting for web development. It will need to run several websites, since I generally work on more than one site at a time.

    Here's what I have done so far. I set up Subversion and Apache 2.2 using some walkthroughs. I changed the default port to 1337 (I'm a nerd). Using dyndns.com I created a domain that forwards to my home IP, which is dynamic (company.gotdns.org). I then went into my DNS for company.com and added a record to point repo.company.com to company.gotdns.org. At this point, people who need access to my file repository can get to it by going to repo.company.com/repo, which is good so far.

    My question comes with the next step: setting up virtual hosts with Apache. Ideally I would like my local websites to be viewable by some others in the company from their homes. So, say I am working on site1; I would like them to be able to view it by going to site1.roeland.bythepixel.com. At the same time, I would like site10.wouter.bythepixel.com to go to his local setup for site10. What I have done for this: I went into my DNS for company.com and added a record to point roeland.company.com to company.gotdns.org (which translates to my IP), I added code to my httpd-vhosts.conf (listed at the bottom), and I added code to my hosts file (listed at the bottom).

    Hah, so of course this doesn't work as expected: going to site1.roeland.bythepixel.com doesn't bring up my test1 site. Could anyone point out where I may be going wrong? Thanks!

    hosts:

        127.0.0.1 localhost
        127.0.0.1 sensenich.roeland.bythepixel.com
        ::1 localhost

    httpd-vhosts.conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "F:/Current Projects/sensenich.com"
            ServerName sensenich.roeland.bythepixel.com
            ErrorLog "logs/sensenich.roeland.bythepixel.com-error.log"
            CustomLog "logs/sensenich.roeland.bythepixel.com-access.log" common
        </VirtualHost>
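    For what it's worth, a minimal sketch of how additional name-based vhosts would sit alongside the one above in Apache 2.2 -- the NameVirtualHost directive has to appear once for *:80, and the ServerName/DocumentRoot values here are placeholders:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName site1.roeland.bythepixel.com
            DocumentRoot "F:/Current Projects/site1"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName site10.roeland.bythepixel.com
            DocumentRoot "F:/Current Projects/site10"
        </VirtualHost>

    With name-based vhosts, the first <VirtualHost> block also acts as the default for requests whose Host header matches none of the ServerNames.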

    Read the article

  • PHP `virtual()` with Apache MultiViews not working after upgrade to Ubuntu 12.04

    - by Izzy
    I use PHP's virtual() directive quite a lot on one of my sites, including for central elements. This worked fine for the last ~10 years -- but after upgrading (or rather moving, as it is on a new machine) to Ubuntu 12.04 it somehow got broken.

    Example setup (simplified): to make it easier to understand, I'll simplify the contents. Say I need an HTML fragment like

        <P>For further instructions, please look <A HREF='foobar'>here</P>

    in multiple pages. Ten years ago I used SSI for that, so it is put into a file in a central place -- if e.g. the targeted URL changes, I only need to update it in one place. To serve multiple languages, I have Apache's MultiViews enabled, and at $DOCUMENT_ROOT/central/ there are the files:

    - foobar.html (English variant, and the default)
    - foobar.html.de (German variant)

    Now in the PHP code, I simply placed

        <? virtual("/central/foobar"); ?>

    and let Apache take care of delivering the correct language variant.

    The problem: as said, this worked fine for about 10 years. German visitors got the German variant, all others the English one (depending on their preferred language). But after upgrading to Ubuntu 12.04, it no longer works: either nothing is delivered by the virtual() command, or (in connection with framesets) it even ends up as binary gibberish.

    Trying to figure out what happens, I played with a lot of things. I first thought MultiViews was (somehow) not available anymore -- but calling http://<server>/central/foobar showed the right variant, depending on the configured language preferences. This also proved there was nothing wrong with file permissions. The error.log gave no clues either (no error message thrown). Finally, as a "last resort", I changed the PHP command to

        <? virtual("central/foobar.html"); ?>

    and that very same file was in fact included. So PHP's virtual() function basically works -- but the language-dependent stuff obviously no longer works together with it as it did before. Of course I tried to find some change (most likely in PHP's virtual() command), using Google a lot and also searching the questions here -- unfortunately to no avail.

    Finally, the question: putting "design questions" aside (surely today I would design things differently -- but at least currently I don't have the time to change that for a quite huge number of pages), what can be done to make it work again? I surely missed something -- but I cannot figure out what...

    Read the article

  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these ip rules:

        $ sudo ip rule show
        0:      from all lookup local
        32766:  from all lookup main
        32767:  from all lookup default

    I can attach an elastic network interface (ENI) to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and to the local routing table:

        $ sudo ip rule show
        0:      from all lookup local
        32765:  from 10.1.1.190 lookup 10003
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table local
        broadcast 10.1.1.0 dev eth3  proto kernel  scope link  src 10.1.1.190
        local 10.1.1.190 dev eth3  proto kernel  scope host  src 10.1.1.190
        broadcast 10.1.1.255 dev eth3  proto kernel  scope link  src 10.1.1.190
        broadcast 10.1.3.0 dev eth0  proto kernel  scope link  src 10.1.3.12
        local 10.1.3.12 dev eth0  proto kernel  scope host  src 10.1.3.12
        broadcast 10.1.3.255 dev eth0  proto kernel  scope link  src 10.1.3.12
        broadcast 127.0.0.0 dev lo  proto kernel  scope link  src 127.0.0.1
        local 127.0.0.0/8 dev lo  proto kernel  scope host  src 127.0.0.1
        local 127.0.0.1 dev lo  proto kernel  scope host  src 127.0.0.1
        broadcast 127.255.255.255 dev lo  proto kernel  scope link  src 127.0.0.1

    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to. This is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with src 10.1.3.12 and destination 10.1.1.190, but didn't see even a single ACK. In my understanding, if ACKs were being sent from the ENI they'd have destination 10.1.3.12, i.e. the same as the local machine's IP, and such packets will now be routed as local packets matching the local routing policy:

        local 10.1.3.12 dev eth0  proto kernel  scope host  src 10.1.3.12

    I'd like all the packets originating from 10.1.1.190 (my ENI) to go back out on the same interface, i.e. eth3 in this case. The contents of the new table 10003 are:

        $ sudo ip route show table 10003
        default via 10.1.1.1 dev eth3

    I think I can do the following:

    1. I don't know if it's possible, but probably decrease the priority of the local table so the packets match table 10003.
    2. Use iptables to mangle these packets and update the local table route to include the mark information.

    But I'm not sure if these are the right approaches.

    Read the article

  • SIGINT and SIGTSTP ignored by most common applications

    - by Vašek Potocek
    After the last upgrade to my Fedora, strange behaviour started occurring in X terminal applications. I can't seem to stop any process using Ctrl+C; it just results in printing ^C to the console. Similarly, Ctrl+Z prints ^Z and the process goes on. Both work well in non-graphical virtual consoles. I checked stty -a and it seems perfectly normal:

        speed 38400 baud; rows 24; columns 80; line = 0;
        intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = M-^?; eol2 = M-^?;
        swtch = M-^?; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W;
        lnext = ^V; flush = ^O; min = 1; time = 0;
        -parenb -parodd cs8 hupcl -cstopb cread -clocal -crtscts
        -ignbrk brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff
        -iuclc ixany imaxbel iutf8
        opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
        isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
        echoctl echoke

    This is independent of the terminal (gnome-terminal, XFCE4 terminal, xterm). I later noticed that it may not be caused by the terminal at all: INT or TSTP sent directly to the respective process are ignored too.

    This affects various applications I used to terminate with Ctrl+C on a regular basis (and which often don't have any better means of exiting): cat, find, tail -f, java, ping, mplayer when stuck on a broken file... Even bash ignores Ctrl+C when I want to abandon a command line I have been entering and then changed my mind (no ^C is printed in this case). I need to delete it character by character (of which there may be hundreds if filename completion has been used) or intentionally run the unwanted command. Strangely enough, vim does recognize Ctrl+C -- just to say "use :quit", of course.

    This is extremely annoying and prevents me from working efficiently. Everything had been working until recently, maybe a week ago or so. I cannot find any possible causes with Google; perhaps I'm trying the wrong search terms or misidentifying the main problem. What could it be, and how could I restore the standard behaviour, please?

    Update: Ctrl+Z works sometimes. It seems that in the very first terminal I launch after logging in it stops the running command, but it stops working after that.
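    One quick check that narrows this down, assuming the shell in the affected terminal is bash: compare the signal masks the kernel reports for the shell and for a child it spawns. If the bits for SIGINT (signal 2) and SIGTSTP (signal 20) show up in SigIgn, the dispositions are being inherited as "ignored" from whatever launched the terminal, which would explain why the stty settings look fine:

        # Signal masks of the current shell (SigIgn/SigBlk are hex bitmasks over signal numbers)
        grep -E 'Sig(Blk|Ign|Cgt)' /proc/$$/status

        # Compare with a child process started from this shell
        sleep 300 &
        grep -E 'Sig(Blk|Ign|Cgt)' /proc/$!/status
        kill %1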

    Read the article

  • Why does pinging a local router return "Destination Host Unreachable"?

    - by Matt H
    I have two Tomato routers; one is bridged wirelessly with the other. I have a new server on the network running Ubuntu Server 11.04. The hosts are:

    - A - Linux PC
    - B - New Server
    - C - Mac Mini
    - D - MacBook
    - T1 - Tomato 1
    - T2 - Tomato 2

    They are connected like so:

        A -----+-T1 ==== wireless bridge ==== T2 ----- ADSL modem
               |                               |
        B -----+                               C & D connected wirelessly to T2

    A, C and D do not experience any issues. I have an active SSH session from A to B and it's not experiencing any loss. B, the new server, occasionally cannot ping T2 and therefore cannot connect to the internet. However, A can always contact B, and B can ping A. When the network is lost, B can still ping T1, but not T2; yet at the same time as B has lost its connection to T2, A can still ping T2.

    Any ideas on what this could be? There is nothing that gives any clues in any of the logs on either router or on the Linux server. One thing that is interesting is that I set up a ping running from B to T2. T2 has the IP address 192.168.1.1. Here is what I am seeing:

        From 192.168.1.1 icmp_seq=26 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=27 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=28 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=29 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=30 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=31 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=33 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=34 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=35 Destination Host Unreachable
        64 bytes from 192.168.1.1: icmp_req=36 ttl=63 time=3.40 ms
        64 bytes from 192.168.1.1: icmp_req=37 ttl=63 time=5.70 ms
        64 bytes from 192.168.1.1: icmp_req=38 ttl=63 time=2.25 ms
        64 bytes from 192.168.1.1: icmp_req=39 ttl=63 time=2.18 ms
        64 bytes from 192.168.1.1: icmp_req=40 ttl=63 time=3.12 ms
        64 bytes from 192.168.1.1: icmp_req=41 ttl=63 time=2.15 ms
        64 bytes from 192.168.1.1: icmp_req=42 ttl=63 time=1.97 ms
        64 bytes from 192.168.1.1: icmp_req=43 ttl=63 time=

    And it cycles between being reachable and not. So I guess you could say the question is: why is the router responding that it cannot be reached?
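    Since "Destination Host Unreachable" for a host on the local subnet usually points at the ARP step failing rather than the ICMP echo itself, a small sketch of what could be watched from B while the ping cycles (eth0 is assumed as B's interface name):

        # Watch the neighbour (ARP) entry for the gateway flip between REACHABLE/STALE and FAILED
        watch -n 1 'ip neigh show 192.168.1.1'

        # Capture ARP traffic to see whether B's who-has requests for 192.168.1.1 go unanswered
        sudo tcpdump -n -i eth0 arp and host 192.168.1.1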

    Read the article

  • Expendable, Redundant, Easily recoverable

    - by MeIr
    I am desperate at this point. I have been looking for a "big storage" solution for a while on my own, and I can't find anything that would suit my needs. But now push has come to shove.

    Current situation: I have about 6TB of data storage (already full) on a Drobo. Yesterday the Drobo died on me, and it put me in a bad situation -- I can't recover my data without buying another Drobo. From extensive research online I realized that Drobo is not the safest bet, and by now it seems a very poor choice. I ordered a new Drobo to try to get my data back; however, I don't want to be in the same situation later, and continuing to use Drobo promises this event will re-occur.

    What I am looking for:

    1. Inexpensive setup.
    2. Dynamically extendable -- add more drives and/or replace a drive with a bigger one.
    3. Redundant -- set up against 1-3 drive failures, depending on the total number of drives. For the sake of argument, let's assume that for every 4 drives one should be able to fail without data loss.
    4. Easy data recovery -- let's say the unforeseen happens; I would like to be able to recover information without buying new tools or replacements (example: a new Drobo).
    5. Should be USB or network-attached storage.
    6. No demand on speed. It doesn't have to be fast; I am not doing video editing on the setup. However, if the option exists, a decent speed would be nice.

    After thoughts: I reviewed a few options and FreeNAS looks nice, but it doesn't have #2 -- dynamic extendability. There are workarounds with pools, but it seems a bit complicated and unnecessary. Moreover, data safety seems to be a big question -- I saw some horror stories.

    Please advise on what options I have and what seems like an optimal solution (if any). I don't care if it has to be a Windows or Linux box or any other OS and/or software that has to run on top, but a simple solution is more attractive. Thank you!

    P.S.: Feel free to ignore "After thoughts".

    Read the article

  • Holding off Windows 2000/3 Server in Shutdown

    - by user1668993
    We have a C# VS2010 application running on a Windows 2000 Server box (there is also a Windows 2003 Server box) as pretty much the only application running. We remove power from the box. There is a short-duration battery (maybe 3 minutes of power) which then waits 10 seconds, decides things are coming down, and notifies Windows that it needs to shut down. Windows sends a CTRL_SHUTDOWN_EVENT to the application, which fields it and tries to keep Windows from going down for a while, to give another computer that communicates with this one time to do some file work on the first computer. It does this with a timing loop; after the loop is over, it exits gracefully and the computer shuts down.

    Nice plan, but it doesn't work. The application gets to maybe 20 seconds, then it is forcibly killed by Windows and Windows shuts down. At 90 seconds, the hardware firmware running the battery turns off power to the computer.

    I have searched for how to hold off Windows for a bit of time. I tried creating (it wasn't there) the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\WaitToKillAppTimeout registry value, setting it to 60000. Though it seemed to keep the popup from happening, Windows itself died at about the same point -- we think without having the opportunity to shut itself down gracefully. Maybe the registry value worked but wasn't enough.

    Basically I have an "ill-mannered" application which is refusing to shut down (for the best of reasons); without the registry value, Windows eventually shuts it down anyway and then shuts itself down. With the registry change, we think what is happening is that Windows doesn't shut down the application, but Windows itself is killed suddenly without shutting down, while power is still not pulled for about another minute, and then power is pulled.

    So maybe we have layers here. First there is how long the application tries to stay open. Then there is how long Windows is prepared to allow it to stay open. Then there is ... something ... which kills Windows. Then there is the power loss.

    Anyone have any ideas how we can get Windows to stay open and in operation to, say, 70 seconds instead of about 20? Is our registry key right, but not enough? Is there some additional key we need to set to determine how long after Windows is notified of a shutdown before it just kills itself? Thanks in advance.

    Read the article

  • Windows 7 reboot and freezing, possible power problems?

    - by mikelbring
    My Gateway LX series desktop is about 6-8 months old. When I bought it, it had Windows Vista; I then put the RC version of Windows 7 on it. About 3 months after I bought it, it would randomly reboot -- actually, just shut off. I monitored the temperature levels and they seemed normal. So I installed a fresh Windows 7 Ultimate OEM 64-bit. It actually got worse and would reboot more frequently. I then contacted Gateway, and they said my machine was built for Windows Vista (made me chuckle) and told me to update my BIOS. So I did, and it was fixed for a good couple of months.

    Recently it started doing it again. I noticed early on that it happened most often, if not every time, when I was either watching a flash video or playing a flash game. So I downloaded the drivers again, and I also downloaded my motherboard drivers. Things seemed to be okay. A week later it started again, and now it's doing it even more frequently. Sometimes I would turn it on, log in to Windows, and BAM -- it would shut off.

    Now I am at the point where I can hardly get it to turn on. It freezes at the point where it says "Starting Windows", with the Windows logo. Sometimes it says "Checking disk for consistency" (or whatever) and freezes there (not shutting off, just freezing). I even got the prompt to launch Startup Repair, but that also hangs when it says "Starting Windows" -- it doesn't really freeze, it just never loads.

    I am kind of lost as to what's going on. I have a few ideas, but nothing I want to pursue (graphics card? hard drive?). Another thing I tried was booting into a live disk of Ubuntu, launching every program I could and getting on the internet, but I never got it to reboot. So it sounds to me like it's a Windows thing, but I have no idea. I am just stuck and would like to see if anyone has any ideas or could lead me in the right direction.

    Read the article

  • 20GB+ worth of emails in my /home what is a better solution for that?

    - by Skinkie
    My email storage requirements are outgrowing anything reasonable with respect to local mail storage. As we speak, 99% of my home partition is filled with personal mail in Thunderbird's mail dirs. Needless to say, this is just painful and badly searchable, and history has proven to me that backups work, but Thunderbird is capable of losing a lot of mail very easily.

    Currently I have a remote IMAPS server (Dovecot) running for my daily mail, accessible from anywhere, which in my own practice works efficiently up to about 1000 emails. Beyond that, archive directories have to be used to move mail around. I have been looking into DBMail, but I wonder whether such a solution makes my case worse or better. None of the supported databases employ string deduplication or string compression out of the box, so is this going to help me with 20GB+ of mail? What about falling back to a plain old IMAP server? A filesystem like ZFS would support things like gzip transparently, which could help.

    Could someone share their thoughts? The 20GB mostly consists of mailing lists and normal mail -- not things like attachments.

    To add some clarifications: as we speak, my mail is not server-side indexed at all -- only my new mail arrives at the remote IMAP server. It is all local storage from former POP3 accounts and locally mirrored Gmail and IMAP accounts. In my perspective it is not Thunderbird that sucks, it's its file format that sucks. Regarding the 1000 mails: on the road I am using Alpine and MobileMail, and I'm quite happy with both of them, but some management is required to actually manage the mail. Sieve helps a lot with that, but browsing through 10,000 e-mails is not fun, especially not on a mobile client.

    I am quite happy with Dovecot; I've never had any issues with it. I just wonder if this is the way to go, or if there are other, better solutions. My question is: what is the best-practice solution that allows 20GB+ of mail and is remotely accessible on demand, easy to back up and archive-worthy? It doesn't need to be available 24x7.

    The final approach I took was installing a local IMAP server (Dovecot), configured as my archive, using the following guide: http://en.gentoo-wiki.com/wiki/Dovecot/InstallThunderbird
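    As a rough illustration of the archive angle, a Dovecot 2.x configuration sketch with transparent compression of stored mail via the zlib plugin -- the Maildir path is a placeholder and the exact option names vary a little between Dovecot versions:

        # /etc/dovecot/conf.d/10-mail.conf
        mail_location = maildir:~/MailArchive
        mail_plugins = $mail_plugins zlib

        # /etc/dovecot/conf.d/90-plugin.conf
        plugin {
          zlib_save = gz
          zlib_save_level = 6
        }

    Compressed Maildir storage addresses the disk-size concern without adding a database layer, and IMAP clients see nothing different.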

    Read the article

  • Mouse Error Code 24. Windows 7

    - by Cj.
    I've had the same mouse for a while, and it was working fine until one day it started giving me a message about a device not working properly. I tried updating the drivers and re-installing; I even deleted old drivers in case my computer was a little confused. It never made a difference, and my mouse seemed to work just fine despite the permanent error in Device Manager. I looked it up several times online, but I never found anything I could actually use; when I go to official websites, I always get the same responses ("plug so-and-so into a different place", "drivers", "install Silverlight before you can watch this tutorial", "try it on a different machine"), so I gave up on that.

    But now I have a real problem: lately my strange little error has evolved into a full-blown error Code 24, and my mouse is starting to turn on and off randomly -- especially when it is being used, though I do hear it go "badum..dadum" when I'm off doing something else. When I looked up error Code 24, I really didn't find much other than its meaning:

        Code 24: This device is not present, is not working properly, or does not have all its drivers installed. (Code 24)
        Cause: The device is installed incorrectly. The problem could be a hardware failure, or a new driver might be needed. Devices stay in this state if they have been prepared for removal. After you remove the device, this error disappears.

    But I have tried uninstalling the device entirely several times, and it goes right back to its previous state, with error 24 and the random turning on and off. What do I do? I cannot afford to take it to a repair place, and I can't really afford a new mouse either; I refuse to buy cheap ones, as I am a gamer in need of more than 3 buttons, and a good grip is important. Could there possibly be some confusion in the registry? I do remember getting some early problems after I converted my Vista to Windows 7. But I hardly dare go in there unless I'm 100% certain of what I'm going for, and I can honestly say I am at a loss here.

    Edit: it is a USB mouse we're talking about here -- an MX™518 Optical Gaming Mouse (Logitech).

    Edit 2: I see no rupture, so it must be inside the mouse, or inside the rubber protecting the cable; that would be really inconvenient to search for.

    Read the article

  • HP Proliant G7 hardware RAID configuration automation with ribcl

    - by karthik
    I have been trying to automate hardware RAID configuration of HP ProLiant machines before OS installation (so I cannot use hpacucli). SSH into iLO 3 doesn't have an option for RAID configuration. I use RIBCL, but there is no command for RAID config; however, I see this under the command GET_EMBEDDED_HEALTH:

        <STORAGE>
          <CONTROLLER>
            <LABEL VALUE="Controller on System Board"/>
            <STATUS VALUE="OK"/>
            <CONTROLLER_STATUS VALUE="OK"/>
            <SERIAL_NUMBER VALUE="50014380215F0070"/>
            <MODEL VALUE="HP Smart Array P420i Controller"/>
            <FW_VERSION VALUE="3.41"/>
            <DRIVE_ENCLOSURE>
              <LABEL VALUE="Port 1I Box 1"/>
              <STATUS VALUE="OK"/>
              <DRIVE_BAY VALUE="04"/>
            </DRIVE_ENCLOSURE>
            <DRIVE_ENCLOSURE>
              <LABEL VALUE="Port 2I Box 0"/>
              <STATUS VALUE="OK"/>
              <DRIVE_BAY VALUE="01"/>
            </DRIVE_ENCLOSURE>
            <LOGICAL_DRIVE>
              <LABEL VALUE="01"/>
              <STATUS VALUE="OK"/>
              <CAPACITY VALUE="68 GB"/>
              <FAULT_TOLERANCE VALUE="RAID 0"/>
              <PHYSICAL_DRIVE>
                <LABEL VALUE="Port 1I Box 1 Bay 3"/>
                <STATUS VALUE="OK"/>
                <SERIAL_NUMBER VALUE="6TA0N3SZ0000B231CYDT"/>
                <MODEL VALUE="EH0072FAWJA"/>
                <CAPACITY VALUE="68 GB"/>
                <LOCATION VALUE="Port 1I Box 1 Bay 3"/>
                <FW_VERSION VALUE="HPDH"/>
                <DRIVE_CONFIGURATION VALUE="Configured"/>
              </PHYSICAL_DRIVE>
            </LOGICAL_DRIVE>
          </CONTROLLER>
        </STORAGE>

    My question is: is there a way I can modify/create this XML piece (say I want 2 logical drives with one spare) so that it takes effect when I reboot the server? If this approach is not correct, are there any other ways to automate hardware RAID config?
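    For context, the health block above is returned by a read-only RIBCL request along these lines (the credentials are placeholders), which is part of why it seems unlikely to be writable through the same channel:

        <RIBCL VERSION="2.0">
          <LOGIN USER_LOGIN="admin" PASSWORD="password">
            <SERVER_INFO MODE="read">
              <GET_EMBEDDED_HEALTH/>
            </SERVER_INFO>
          </LOGIN>
        </RIBCL>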

    Read the article

  • How should we serve files in a small bioinformatics cluster?

    - by cespinoza
    We have a small cluster of six Ubuntu servers. We run bioinformatics analyses on this cluster. Each analysis takes about 24 hours to complete, each Core i7 server can handle 2 at a time, and each takes as input about 5GB of data and outputs about 10-25GB of data. We run dozens of these a week. The software is a hodgepodge of custom Perl scripts and 3rd-party sequence alignment software written in C/C++.

    Currently, files are served from two of the compute nodes (yes, we're using compute nodes as file servers). Each of those nodes has 5 1TB SATA drives mounted separately (no RAID) and pooled via glusterfs 2.0.1, and each has 3 bonded Intel gigabit PCI ethernet cards attached to a D-Link DGS-1224T switch ($300, 24-port, consumer-level). We are not currently using jumbo frames (not sure why, actually). The two file-serving compute nodes are then mirrored via glusterfs, and each of the four other nodes mounts the files via glusterfs.

    The files are all large (4GB+) and are stored as bare files (no database/etc.), if that matters. As you can imagine, this is a bit of a mess that grew organically without forethought, and we want to improve it now that we're running out of space. Our analyses are I/O-intensive and it is a bottleneck -- we're only getting 140mB/sec between the two fileservers, and maybe 50mb/sec from the clients (which only have single NICs). We have a flexible budget which I can probably get up to $5k or so.

    How should we spend our budget? We need at least 10TB of storage fast enough to serve all nodes. How fast/big does the CPU/memory of such a file server have to be? Should we use NFS, ATA over Ethernet, iSCSI, glusterfs, or something else? Should we buy two or more servers and create some sort of storage cluster, or is one server enough for such a small number of nodes? Should we invest in faster NICs (say, PCI Express cards with multiple connectors)? The switch? Should we use RAID? If so, hardware or software? And which RAID level (5, 6, 10, etc.)?

    Any ideas appreciated. We're biologists, not IT gurus.

    Read the article

  • nginx rewrite for wikkawiki

    - by Hans
    I have just set up WikkaWiki on my server and have been trying to get the links to go from wiki.mysite.info/wikka.php?wakka=Start to wiki.mysite.info/DotMG. I tried following the guide at http://docs.wikkawiki.org/ModRewrite, however it seems incomplete and outdated. Furthermore, as of version 1.3.2 base_url isn't even manually configurable from the wikka.config.php file. I am using version 1.3.2 of WikkaWiki. My nginx virtual hosts file contains:

        server {
            listen 80;
            server_name wiki.mysite.info;
            root /usr/share/nginx/wikka/;
            access_log /usr/share/nginx/.access/wikka;
            error_log /usr/share/nginx/.error/wikka error;

            location / {
                index index.php;
                try_files $uri $uri/ @wikka;
            }

            location @wikka {
                rewrite ^(.*/[^\./]*[^/])$ $1/ last;
                rewrite ^(.*)$ /wikka.php?wakka=$1 last;
            }

            location ~* \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
            }
        }

    So far it partly works: I can go to wiki.mysite.info/APage and it will display that page. However it doesn't work on all pages; sometimes the browser simply downloads the page (for some reason it always downloads the Start page). Also, when I go to wiki.mysite.info/ it downloads the wikka.php file... Furthermore, the links on the wiki still contain wikka.php?wakka=, so whenever I navigate around the wiki it goes back to wiki.mysite.info/wikka.php?wakka=APage. I think something is wrong with my rewrite, but I can't say for sure.

    Contents of the fastcgi_params:

        fastcgi_param  QUERY_STRING       $query_string;
        fastcgi_param  REQUEST_METHOD     $request_method;
        fastcgi_param  CONTENT_TYPE       $content_type;
        fastcgi_param  CONTENT_LENGTH     $content_length;
        fastcgi_param  SCRIPT_FILENAME    $request_filename;
        fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
        fastcgi_param  REQUEST_URI        $request_uri;
        fastcgi_param  DOCUMENT_URI       $document_uri;
        fastcgi_param  DOCUMENT_ROOT      $document_root;
        fastcgi_param  SERVER_PROTOCOL    $server_protocol;
        fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
        fastcgi_param  SERVER_SOFTWARE    nginx/$nginx_version;
        fastcgi_param  REMOTE_ADDR        $remote_addr;
        fastcgi_param  REMOTE_PORT        $remote_port;
        fastcgi_param  SERVER_ADDR        $server_addr;
        fastcgi_param  SERVER_PORT        $server_port;
        fastcgi_param  SERVER_NAME        $server_name;
        fastcgi_param  HTTPS              $server_https;

        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param  REDIRECT_STATUS    200;

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it".

    Right now, I have the hits-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn for the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1200 static files per second (benchmarked using Apache ab against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets.

    However, when I benchmark with millions of hits, I notice a few things:

    - No disk usage -- this is expected, because I've turned off all nginx logs, and my custom code doesn't do anything but save the request details into Redis.
    - Non-constant memory usage -- presumably due to Redis' memory management, my memory usage will gradually climb up and then drop back down, but it has never once been my bottleneck.
    - System load hovers around 2-4; the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
    - If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250.

    My questions:

    a. Does it look like I'm maxing out this server yet? Is 1,200/s static-file nginx performance comparable to what others have experienced?
    b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much.
    c. Are there any linux-level settings that could be limiting my incoming connections?
    d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil.

    Thanks in advance, all :)
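    On questions (b) and (c), a sketch of the knobs that usually get checked first for raw request-rate benchmarks -- the values are illustrative starting points, not measured recommendations:

        # /etc/nginx/nginx.conf
        worker_processes  4;              # typically one per core rather than 64
        events {
            worker_connections  4096;
            multi_accept        on;
        }

        # /etc/sysctl.conf (apply with: sysctl -p)
        net.core.somaxconn = 4096
        net.ipv4.ip_local_port_range = 10240 65535
        net.ipv4.tcp_tw_reuse = 1

        # per-process file descriptor ceiling for the nginx/gunicorn users
        ulimit -n 65535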

    Read the article
