Search Results

Search found 19539 results on 782 pages for 'pretty print'.


  • I have a perl script that is supposed to run indefinitely. It's being killed... how do I determine who or what kills it?

    - by John O
    I run the perl script in screen (I can log in and check debug output). Nothing in the logic of the script should be capable of killing it quite this dead. I'm one of only two people with access to the server, and the other guy swears that it isn't him (and we both have quite a bit of money riding on it continuing to run without a hitch). I have no reason to believe that some hacker has managed to get a shell or anything like that. I have very little reason to suspect the admins of the host operation (bandwidth/cpu-wise, this script is pretty lightweight). Screen continues to run, but at the end of the output of the perl script I see "Killed" and it has dropped back to a prompt. How do I go about testing what is whacking the damn thing? I've checked crontab, nothing in there that would kill random/non-random processes. Nothing in any of the log files gives any hint. It will run from 2 to 8 hours, it would seem (and on my mac at home, it will run well over 24 hours without a problem). The server is running Ubuntu version something or other, I can look that up if it matters.
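
    A first diagnostic pass (my sketch, not from the question): a bare "Killed" at the prompt means the process received SIGKILL, and on a Linux box the usual unattended sender is the kernel's OOM killer, which leaves a trace in the kernel log. Worth checking before suspecting people:

      # kernel-initiated kills (OOM killer) show up here
      dmesg | grep -iE 'killed process|out of memory'
      grep -i 'out of memory' /var/log/syslog /var/log/kern.log 2>/dev/null

      # per-session resource limits that can also terminate long-running jobs
      ulimit -a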

    Read the article

  • Sending emails in an external subnet in VMware ESXi

    - by user80658
    This might be a bit hard for me to explain, and it is a pretty individual situation. I have a native server at Hetzner (www.hetzner.de). The public IP is 88.[...].12. I have ESXi running on this server. I can access the ESXi console via the public IP, but none of the virtual machines. That's why I bought a public subnet with 8 (6 usable) IPs (46.[...]) and an additional public IP (88.[...].26). This additional public IP belongs to the first virtual machine, a firewall appliance, which is connected to the WAN. It needs to be done this way, since it is the official way recommended by Hetzner. My 46. subnet is behind the firewall. I have a Virtualmin server with a Dovecot IMAP/POP3 server. When sending an email, most providers (Gmail) will accept those mails, but a lot will put them into spam (AOL). My theory is: the MX record of my domain points, of course, to the IP of the virtual machine (46.[...]), but the raw email says the mail was sent from the IP of the firewall (88.[...].26), which doesn't look trustworthy. A solution would be if the firewall could handle mail, but it simply can't. How can I prevent this problem? Thanks.
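
    One mitigation (a sketch, not something from the question) is to publish an SPF record that explicitly authorizes the firewall's outbound address as a permitted sender for the domain. The domain name and addresses below are placeholders, since the real ones are masked above:

      ; hypothetical SPF TXT record authorizing both the mail VM's subnet
      ; and the firewall's outbound public IP (replace with the real values)
      example.com.   IN   TXT   "v=spf1 ip4:46.0.0.0/29 ip4:88.0.0.26 ~all"

    Once published, `dig +short TXT example.com` shows what receiving servers will see. Setting a matching reverse DNS (PTR) record for the firewall's IP usually helps deliverability as well.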

    Read the article

  • "Dictionary problem." Error with VMPlayer

    - by George Mauer
    I'm pretty new to using VMware virtualization (I've been a VirtualBox user), so I'm hoping you guys can help me out. I recently got an external USB disk containing a VM for a client, downloaded VMware Player, set it up with "Open a Virtual Machine", ran it, easy as pie. After working with it a bit this morning, I shut the VM down, and now trying to start it back up again I get this: I tried removing the VM from my library; now it happens whenever I try to add it back in. In the meantime, I can still access other virtual machines, so it seems like the problem might be with the virtual disk. So, two questions: (1) This is obviously not a very helpful error message. Where can I go to get more information? My Application event log doesn't contain anything from VMware. (2) What steps can I take to fix the problem? Edit: A couple more pieces of information. I did not take any snapshots. I don't think VMware Player even has that ability. I have a zip file of (what I assume is) the state of the VM when it was sent to me. I cannot unzip it, as it is huge and simply requires more HD space than I have available, but I did extract the vmx file and examine it. Other than the UUIDs and the fact that mine reads cleanShutdown = "FALSE", they are identical. The log contains the following lines:

      Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead: Unable to load dict from 'E:....\MachineName.vmsd'.
      Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead failed for file 'E:....\MachineName.vmx': Dictionary problem (6)
      Jun 23 10:11:18.082: vmx| SNAPSHOT: Snapshot_TimeStampTiers failed: Dictionary problem (6)
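
    A hedged sketch, not from the question: the log points at the snapshot metadata file (.vmsd). Since no snapshots were ever taken, a commonly reported fix is to move that file aside and let VMware Player recreate it on the next start. The path below is a placeholder for the elided one above; back the file up rather than deleting it.

      rem move the possibly corrupt snapshot dictionary out of the way (path is hypothetical)
      ren "E:\path\to\vm\MachineName.vmsd" MachineName.vmsd.bak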

    Read the article

  • Is it possible to do a 301 redirect AND redirect to the requested resource?

    - by Pure.Krome
    For one of our projects, we're doing a rebranding of the website name, logo, etc. As such, we need to 301 (Moved Permanently) redirect all users from the old domain to the new domain. With IIS7, that's pretty simple. We just create a new website that redirects all traffic for a host-headered domain to the new one. But this loses their original destination resource, e.g.:

      Old Domain: www.OldDomain.com
      New Domain: www.NewDomain.com
      User: www.OldDomain.com/user/PureKrome -> 301 -> www.NewDomain.com

    Notice how it's going to the new domain BUT not to /user/PureKrome? How can I do this so it goes to the new domain and keeps the original resource request? I'm guessing URL Rewrite for IIS7 might help? Also, what happens if I want to do this:

      Current Domain 1: Domain.com
      Correct Domain 1: www.Domain.com
      Current Domain 2: AnotherDomain.com
      Correct Domain 2: www.AnotherDomain.com

    Is it also possible to have those in the same IIS website, so any URL to domain.com will 301 to www.domain.com? Right now I'm making 2 IIS websites, with a 301 hardcoded (which still means I lose the original resource request, too). Help!
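
    A minimal sketch of the URL Rewrite approach (it assumes the IIS URL Rewrite module is installed; the domain names are placeholders). The rule matches any path, checks the Host header, and issues a permanent redirect that carries the captured path across:

      <system.webServer>
        <rewrite>
          <rules>
            <rule name="Redirect old host, keep path" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^(www\.)?olddomain\.com$" />
              </conditions>
              <action type="Redirect" url="http://www.newdomain.com/{R:1}"
                      redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>

    The same pattern, with a condition matching the bare domain, covers the domain.com to www.domain.com case inside a single site.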

    Read the article

  • Is there an objective way to measure slowness of PC/WINDOWS?

    - by ekms
    We have a lot of users who complain that their PC is "slow" (we use Windows XP). We usually check startup programs, viruses, fragmentation, disk health and other common causes of slowness (Symantec AV dropping the disk to 1 MB/s, or a Seagate HD firmware error in certain models), but in those cases the slowness is pretty evident. On the other hand, the most common case is a user complaining about his PC when it looks OK to us, even on 6-year-old desktops. People sometimes even complain about the speed of their new quad-core desktops! So, we are asking if there's a way to OBJECTIVELY check that a computer hasn't dropped in performance, compared with similar machines or with previous measurements, especially for office work (I don't think a 3DMark benchmark or similar would help). The only thing I found that was useful is HDTune, but it only checks hard disk performance. Basically, what we want is something that enables us to say to our users: "See? Your PC is as slow as it was three years ago! Stop complaining! It's all in your head!"

    Read the article

  • How to connect computers to a network printer behind a router?

    - by kokbira
    General question: how do I connect computers to an IP printer behind a router? Particular question: how do I connect C-1 and C-2 to PRI? What is where:

      [ISP]
        |
        | -> IPs: 200.X.X.X / other configs: DC
        |
      [R-1]
        |
        | -> IPs: 10.1.X.X locked by MAC, M: 255.0.0.0, G: 10.1.0.1
        |--------------------------|
        |                          |
      [PRI]                      [R-2]
      IP: 10.1.7.7               IP: 10.1.0.1, MAC: A
                                   |
                                   | -> IPs: 192.168.1.X, M: 255.255.255.0, G: 192.168.1.1
                                   |--------------------------|
                                   |                          |
                                 [C-1]                      [C-2]
                                 IP: 192.168.1.2            IP: 192.168.1.3, MAC: A

    Glossary and details:
      - IP: IP address.
      - IPs: an IP range.
      - M: mask.
      - G: gateway.
      - MAC:A: a MAC address that I will not reveal :)
      - DC: don't care.
      - ISP: Internet service provider (not many details about it in this case).
      - R-1: a real router, or several concatenated, so the IP range below that block is 10.1.X.X and above it is the ISP. IPs are assigned by MAC address; as all available addresses are in use, you must clone an existing one to join a new device (and disconnect the cloned one).
      - PRI: a network printer (some people here call it an IP printer).
      - R-2: a TP-LINK TL-WR340G, my wireless router (since my computer has no Ethernet port, it is my Ethernet-to-WiFi adapter :), admin access, MAC address cloned from C-2 (MAC:A). I have to configure 10.0.1.1 and 10.0.1.2 as DNS addresses, otherwise I cannot connect C-1 and C-2 to the Internet.
      - C-1: my computer, a CCE XLE-425 (remember: no Ethernet port), Windows 7, admin access.
      - C-2: another computer with better specs than mine, MAC:A, Windows XP.

    Requirements: I want to print, to access the Internet, and to do it myself (no need to call the network admin men in black). Pay attention to the MAC clones and DNS info.
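
    A sketch of one way to reach PRI from behind R-2 (an assumption, not from the question): as long as R-2 NATs outbound traffic from 192.168.1.X to its 10.1.X.X side, C-1 only needs a standard TCP/IP printer port aimed at 10.1.7.7. On Windows 7 this can be scripted with the built-in prnport.vbs; the port name below is arbitrary.

      cscript %WINDIR%\System32\Printing_Admin_Scripts\en-US\prnport.vbs -a -r IP_10.1.7.7 -h 10.1.7.7 -o raw -n 9100

    Then add the printer through the Add Printer wizard, picking that port and the printer's driver. On C-2 (Windows XP) the Standard TCP/IP Port wizard does the same job by hand.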

    Read the article

  • What could be the reason for this slowness (see details in message body)?

    - by Ivan
    I've got a really weird situation that I'm struggling to solve: a performance problem that looks very much like an empty wait loop somewhere in the code (while it probably isn't one). I've got a pretty powerful dedicated server (10 GB RAM, eight Xeon cores, etc.) running Ubuntu 10.04, with all the services (except the OpenVPN server used to provide secure access to clients) deployed in separate VirtualBox (vboxheadless) machines: one for the company e-mail server, one for the web server, and one for the accounting/CRM server (Firebird plus a proprietary app server working with Delphi-made clients). CPU load (as "top" reports) is almost always near zero. Host RAM is close to 100% usage but not overloaded (very little swap gets used, and memory freed by stopping one of the VMs doesn't get reused quickly). Approximately 50% of the guests' RAM is used. iostat usually shows near-zero %util. Network bandwidth seems to be underused. Yet the accounting/CRM client (a Win32 Delphi application run on Windows XP machines) works hellishly slowly against this server (and works much better against an inside-LAN Windows server). I just can't imagine what could make it slow when there are so many CPU, RAM, HDD and bandwidth resources available on the clients and on the server, even at their busiest moments. When I say bandwidth is underused, I don't just mean that the clients and the server are connected to the Internet with bigger pipes than they actually use (which still leaves a chance of a bottleneck somewhere on the route between them); I've also tested bandwidth between the clients and the server by copying files among them.

    Read the article

  • How can I make a copy of a printer in Win7?

    - by hawbsl
    Has anyone been able to copy an existing printer in Win7? I know that we were able to do this in XP (not as a direct copy/paste, but by installing the printer twice), but in Win7 it doesn't seem to be possible. Googling the answer is hopeless, because searching for "copy printer" or "duplicate printer" gets you a bunch of posts about printer-copiers, or people complaining about duplicate printers getting created in the background (precisely what I'd like to be able to do). It'd be good to know how to do it in general, but if it depends on the printer type, then in our case we are trying to make a copy of an HP LaserJet.
      - Tried installing from the CD, but the CD is too old for Win7.
      - Tried installing via Add Printer, and that seems to install the printer, but it's marked with an error.
      - Tried installing via the .exe installer from the HP site, and that does result in a successfully installed printer, but it won't let you install the same printer twice (it stalls on the "insert USB cable now" step and simply won't enable the greyed-out "Next" button).
    The reason this is required is so that we can print to the feeder and to the tray separately (one queue each).

    Read the article

  • How to speed up request/response to django using apache or another solution?

    - by jbcurtin
    Hey all, I'm mainly a developer, but every now and again I jump into the sysadmin position. For the most part I've gotten away with deploying PHP and Python apps using Apache. I write today because I'm starting to research faster alternatives to Apache, yet ones that still have some of the core features I require, like PUT and DELETE methods and the ability to connect to a socket via Apache. (This I have not tried, but it might be a nice extra if I ever employ Comet on my apps.) As you've probably guessed, I use JavaScript exclusively for all my websites, utilizing deep linking for SEO support. The main area where I'm looking to increase performance is the path from the Django apps and the web server to the client response. Every day I work my best to keep the smallest memory footprint possible; however, I am getting to the end of my rope when it comes to working with Apache. In general, keep in mind that I'm just starting this research, so I'm looking more for material to read than for solutions at this moment. My main questions: Am I missing something about Apache that makes it faster than everything else? What would be a good server environment to deploy just static files on? What are some of the leading open-source and paid alternatives?

    Read the article

  • Mac OS X will only upload zero-byte files through FTP

    - by tabacitu
    I'm using Mac OS X Lion and I've been having this problem with FTP (any FTP client, mind you; I tried Transmit, FileZilla, Cyberduck and the Terminal, all with the same result). I can browse files in my FTP client, but when I upload files, the client hangs for a few seconds, then thinks it uploaded the files successfully, but it only creates a new file with one blank line in it. Sometimes it manages to upload 4-5 lines. It then returns:

      226 - Error during read from data connection
      226 Transfer aborted

    But 2xx is a success message. It is not a server issue, since any Windows machine will upload just fine using the same network. Can anybody figure out what the problem is? It renders my Mac useless for web development. The problem persists with SFTP and FTP with SSL/TLS. Later edit: Solved! OK, it turns out the problem goes away when I take out my router and connect directly through PPPoE. So the problem is with the router, I thought. But no, the problem is with the Mac that connects through a router that connects through PPPoE and tries to upload using FTP. Pretty specific, I know. The problem is with the MTU (maximum transmission unit). Apparently, Mac OS X breaks the file into chunks that are too large for the router to send, because the router's MTU was set lower than Mac OS X's. My router's was 1492, which is OK, but my Mac's MTU was 1500, which is unacceptable. I don't even understand why it works directly over PPPoE. Anyway, if you encounter the same problem, this is how you diagnose and fix it:

      1. In Terminal, run: ifconfig | grep mtu   (to see what the MTU is for en0; or en1, mine was en0)
      2. If it's 1500, run: sudo ifconfig en0 mtu 1300

    This should solve it. If so, it may only last until the next restart. You can also change the MTU in System Preferences > Network > Ethernet > Advanced > Hardware.

    Read the article

  • Debian Wheezy (testing) df reported volume size

    - by TheRoadrunner
    I am a bit confused about the /dev/sda* references since I installed Wheezy instead of Squeeze on a testing box. fdisk -l returns:

      Disk /dev/sda: 250.1 GB, 250059350016 bytes
      255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000e9623

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *        2048   480278527   240138240   83  Linux
      /dev/sda2       480280574   488396799     4058113    5  Extended
      /dev/sda5       480280576   488396799     4058112   82  Linux swap / Solaris

    This seems correct. But df -h /dev/sda (and /dev/sda1, /dev/sda2 and /dev/sda5) returns:

      Filesystem      Size  Used Avail Use% Mounted on
      udev             10M     0   10M   0% /dev

    The same happens with every entry under /dev/disk/by-id and /dev/disk/by-path. Only one of two entries under /dev/disk/by-uuid returns the correct volume size:

      df -h /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796
      Filesystem                                              Size  Used Avail Use% Mounted on
      /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796  229G   22G  196G  11% /

    Contents of /etc/fstab:

      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point>   <type>  <options>       <dump>  <pass>
      # / was on /dev/sda1 during installation
      UUID=cacdbad6-7e6b-4e80-84ba-e3c77ef48796 /    ext4    errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=45840d13-ee36-4e77-8e73-16cbdff25eb1 none swap    sw              0 0
      /dev/sr0  /media/cdrom0   udf,iso9660 user,noauto  0 0
      /dev/fd0  /media/floppy0  auto        rw,user,noauto 0 0

    It seems all references other than the UUID point to the swap partition. Is this because Wheezy is in testing, and should it be reported as an error?
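
    A short note on why this happens (my reading, not from the post): df reports the filesystem containing the path you hand it unless it can match that path to a mount table entry. Since Wheezy records the root device by its by-uuid path, /dev/sda1 matches no mount entry, and df falls back to the filesystem holding the device node itself, which is the udev tmpfs on /dev. Asking about the mount point gives the expected answer:

      # the root filesystem, i.e. what /dev/sda1 is mounted as
      df -h /
      # or list all mounted filesystems and how the root device is recorded
      df -h
      mount | grep ' / '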

    Read the article

  • Deploying multiple identical copies of a virtual machine for compute tasks

    - by Reid
    I have a compute task which has a large number of library dependencies. I would like to deploy it on some of my company's large Linux clusters, where I do not have root. I could probably track down, compile, and install the right versions of all the libraries, but this looks to be quite tedious and would have to be repeated if I deployed it again somewhere else. On the other hand, it's pretty easy to install on current Ubuntu. This led me to wonder about a virtual machine approach. Could I put together a virtual machine which booted up, ran the computation (with parameters from and results to the host), and then shut down? In other words, I'd like a command like this that I could run on the host:

      $ ./run-vm --ram N --task /path/on/host/foo.sh --results /another/host/dir/

    This would boot the VM, run foo.sh, and put the (relatively small) results of the computation in /another/host/dir/. It's important to start up many instances of the VM simultaneously, both on a single node and multiple nodes of the cluster. So it would be nice if I didn't have to make many copies of the VM virtual disk and metadata. As the task instances are completely independent, the VMs would not need any network support once deployed, or any outside communications beyond reading and writing the host filesystem. Is this possible, and if so, how might I go about doing it? Are there assumptions I've made above which are bogus?
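
    One way to sketch this without root (an assumption on my part, not the poster's method) is QEMU with copy-on-write overlays, so every instance shares a single read-only base image, plus a 9p share for passing results back to the host. File names and paths below are placeholders.

      # one thin overlay per instance; all of them back onto the same base image
      qemu-img create -f qcow2 -b base.qcow2 instance1.qcow2

      # boot headless; expose a host directory to the guest as a 9p share
      qemu-system-x86_64 -m 2048 -nographic -hda instance1.qcow2 \
        -virtfs local,path=/another/host/dir,mount_tag=results,security_model=none

      # inside the guest, the task script mounts the share and drops its output there
      mount -t 9p -o trans=virtio results /mnt/results

    Without access to /dev/kvm this runs fully emulated and is slow; whether the cluster exposes KVM to unprivileged users is worth checking first.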

    Read the article

  • IIS6 site using integrated authentication (NTLM) fails when accessed with Win7 / IE8

    - by Ciove
    Hi, I'm having pretty similar problems to those described in case 139099, but the fix there doesn't seem to work for me. Here are the details:

    Server:
      Win2003Srv R2 SP2 (standalone, not a member of a domain), IIS6, TCP/443 (HTTPS).
      Anonymous access disabled. Integrated Windows authentication enabled. Local user accounts.
      Each user account has its own virtual folder with change access, plus read access to the site root.
      The 'adsutil NTAuthenticationProviders "NTLM"' setting applied to the site root and to the user account's virtual folder.

    Client:
      Win7 Enterprise, member of an AD domain, IE8.
      Allows three login attempts, then fails. Using [webservername]\[username] in the logon window (Windows Security).
      Logon using other browsers (Chrome and Firefox) works OK.

    The web services log shows one 401.2 and two 401.1 events. The Security event log shows two events; the first is a Failure Audit (680), the second is a Failure Audit (529) with these details:

      Logon Failure:
        Reason:                 Unknown user name or bad password
        User Name:              [username]
        Domain:                 [webservername]
        Logon Type:             3
        Logon Process:          NtLmSsp
        Authentication Package: NTLM
        Workstation Name:       [MyWorkstation]
        Caller User Name:       -
        Caller Domain:          -
        Caller Logon ID:        -
        Caller Process ID:      -
        Transited Services:     -
        Source Network Address: [999.999.999.999]
        Source Port:            20089

    Any ideas appreciated.
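
    Not from the question, but a cheap first check for this symptom: a mismatch between the client's and server's LAN Manager authentication levels is a frequently reported cause of this three-prompts-then-401 pattern with local accounts, since Windows 7 defaults to a stricter level than older standalone servers. Comparing the setting on both machines narrows it down quickly:

      reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel

    If the value (or its absence, meaning the OS default) differs sharply between client and server, the corresponding "Network security: LAN Manager authentication level" policy is the knob to experiment with.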

    Read the article

  • Windows 7 Explorer: how to show total size of all files in current folder?

    - by matt wilkie
    In Windows XP Explorer one can turn on the status bar, which shows, among other things, the total size of all the files in the current folder, or the cumulative size of the selected files. How do I get the same at-a-glance information in Windows 7? Selecting files doesn't count, as it stops after 15 files, and it's rare that I'm concerned about total size with that few files (it's pretty easy to estimate in my head). Thanks. UPDATE: Information derived from the context menu (select, right-click, Properties) isn't "at a glance", and not as smooth as selecting files and clicking the details link at the bottom in any case. Thank you for fleshing out more of the available routes, though. Yes, Q19232 is similar to this one, though it is not a duplicate. That question is about easy free-space-on-disk stats and this one is about easy used-space-by-folder-contents stats. The answer to both is the same though: you can't! Hopefully someone will figure out how to get this lost feature back with a shell extension or something.
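
    Not an Explorer status bar, but as a stopgap (my suggestion, not from the post) a one-liner in PowerShell 2.0, which ships with Windows 7, totals the current folder:

      # sum the sizes of all files under the current folder, in MB
      (Get-ChildItem -Recurse -Force | Where-Object { -not $_.PSIsContainer } |
        Measure-Object -Property Length -Sum).Sum / 1MB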

    Read the article

  • DVI output only working on Windows, not during booting or on Linux

    - by Mononofu
    So yesterday I booted my laptop up and the external monitor I have it connected to just stayed black. At first, I thought the problem would go away when Ubuntu was loaded, but it didn't. I tried to reboot a few times, to no avail. Then I decided to give Windows 7 a try, and suddenly (at the login screen) my external monitor turned on and worked like normal. I have connected the monitor via DVI, and this only seems to work with Windows now. I don't even get a signal in my BIOS! Mind you, everything was working fine before that, and I didn't change a single thing. I then tried to connect the monitor via VGA (from my DVI jack, which can output VGA using an adaptor), and it worked again. However, 1920x1200 over VGA looks like crap; black print on a white background is basically illegible. Do you have any ideas how to fix this peculiar problem? I only use Windows for gaming, so it's no real help that it still works normally there. Please also excuse any spelling mistakes, I am practically typing this blindly. Edit: I only have one graphics card in my laptop, and I can't select anything related to that in my BIOS. In fact, I can do pretty much nothing there. My laptop is a Nexoc Osiris E703; the graphics card is a GeForce Go 7900 GTX. As I mentioned before, DVI output during booting and on Ubuntu had worked fine for years before yesterday!

    Read the article

  • MySQL based authentication with crypt()ed password fails in Apache 2.2

    - by Fester Bestertester
    I'm trying to set up a simple CalDAV/CardDAV server with a Radicale backend and an Apache 2.2 frontend. So far, it's all nice and simple, but I can't get the MySQL-based authentication to work. I'd like to authenticate users against an existing MySQL database, and I need the REMOTE_USER variable to be set (pretty much like in the configuration examples for Radicale). I've tried mod_auth_mysql, which authenticated the users nicely but failed to set the REMOTE_USER variable. The newer alternative seems to be mod_authn_dbd, which doesn't seem to like the crypted passwords in the MySQL database. According to the documentation, crypted passwords should work, so maybe I'm just missing a simple parameter. The configuration looks like this:

      DBDriver mysql
      DBDParams "sock=/var/run/mysqld/mysqld.sock dbname=myAuthDB user=myAuthUser pass=myAuthPW"

      <Directory />
          AllowOverride None
          Order allow,deny
          allow from all
          AuthName 'CalDav'
          AuthType Basic
          AuthBasicProvider dbd
          require valid-user
          AuthDBDUserPWQuery "SELECT crypt FROM myAuthTable WHERE id=%s"
      </Directory>

    I've tested the query; it works fine. And as mentioned before, mod_auth_mysql worked nicely against the same database but didn't set the required variables. Am I just missing some configuration parameter? Or is mod_authn_dbd just not the right tool to achieve what I want?
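
    A quick sanity check worth adding (my suggestion, not part of the question): mod_authn_dbd hands whatever the query returns to apr_password_validate, which understands crypt(), $apr1$ (Apache MD5) and {SHA} hashes, but not MySQL's own PASSWORD() format. Generating a known-good hash and storing it for a throw-away user shows whether the stored format is the problem; the user name and password below are placeholders.

      # print a crypt()-format hash to stdout (-n), taking the password from the
      # command line (-b) and forcing crypt() (-d)
      htpasswd -nbd testuser testpass

    If logging in as that test user works, the hashes in the existing column are in a format the module cannot validate.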

    Read the article

  • Fixed ruby/mysql connection with new libmysql.dll, and broke Apache in the process

    - by jmtoporek
    OK, so a bit of background: all my development has been on a local Windows 7 machine. I had Apache with PHP/MySQL running with no issues. I've been using Ruby (1.9.3 and the latest Rails release, 3.2.9) with the built-in WEBrick server, but had a devil of a time connecting to MySQL. Did some research, updated my libmysql.dll file in C:/ruby/bin, and it worked! Very happy... except now Apache stopped working. In my attempt to resolve the issue I found an older copy of libmysql.dll, renamed the new file, copied the old file back to C:/ruby/bin, and Apache works, Ruby does not. So I can take this ass-backwards approach, but obviously this seems pretty stupid. I was surprised that Apache was using the DLL file in the ruby/bin folder. I presume this is related to PATH variables, perhaps? I guess I was hoping someone could tell me how I can use one DLL file for Apache and another for Ruby. Or if you have some other, smarter approach: I'm smart enough to follow directions to install Apache from scratch and enable PHP on Windows as well as Ubuntu, but I'm not much of a sysadmin, just a semi-competent web developer.
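
    A sketch of how to untangle this (an assumption about the setup, since the Apache install path isn't given): Windows looks for a DLL next to the loading executable before it walks PATH, so giving Apache/PHP its own copy of the old libmysql.dll lets the new one stay in C:/ruby/bin for Ruby alone.

      rem list every libmysql.dll that the PATH search would find, in order
      where libmysql.dll

      rem hypothetical paths: keep the new DLL in ruby\bin and drop the old one
      rem next to httpd.exe so Apache/PHP picks it up first
      copy C:\old\libmysql.dll C:\Apache2\bin\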

    Read the article

  • SSH Keys Authentication keeps asking for password

    - by Rhyuk
    I'm trying to set up access from ServerA (SunOS) to ServerB (some custom Linux with keyboard-interactive login) with SSH keys. As a proof of concept I was able to do it between 2 virtual machines. Now, in my real-life scenario, it isn't working. I created the keys on ServerA, copied them to ServerB, and chmod'd the .ssh folders to 700 on both ServerA and ServerB. Here is the log of what I get:

      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: Peer sent proposed langtags, ctos:
      debug1: Peer sent proposed langtags, stoc:
      debug1: We proposed langtags, ctos: en-US
      debug1: We proposed langtags, stoc: en-US
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: dh_gen_key: priv key bits set: 125/256
      debug1: bits set: 1039/2048
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Host 'XXX.XXX.XXX.XXX' is known and matches the RSA host key.
      debug1: Found key in /XXX/.ssh/known_hosts:1
      debug1: bits set: 1061/2048
      debug1: ssh_rsa_verify: signature correct
      debug1: newkeys: mode 1
      debug1: set_newkeys: setting new keys for 'out' mode
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: newkeys: mode 0
      debug1: set_newkeys: setting new keys for 'in' mode
      debug1: SSH2_MSG_NEWKEYS received
      debug1: done: ssh_kex2.
      debug1: send SSH2_MSG_SERVICE_REQUEST
      debug1: got SSH2_MSG_SERVICE_ACCEPT
      debug1: Authentications that can continue: publickey,keyboard-interactive
      debug1: Next authentication method: publickey
      debug1: Trying private key: /XXXX/.ssh/identity
      debug1: Trying public key: /xxx/.ssh/id_rsa
      debug1: Authentications that can continue: publickey,keyboard-interactive
      debug1: Trying private key: /xxx/.ssh/id_dsa
      debug1: Next authentication method: keyboard-interactive
      Password:
      Password:

    ServerB allows pretty limited actions, since it's a custom proprietary Linux. What could be happening? EDIT WITH ANSWER: The problem was that I didn't have those settings enabled in sshd_config (refer to the accepted answer) AND that, while pasting the key from ServerA to ServerB, it would be interpreted as 3 separate lines. What I did, in case you can't use ssh-copy-id like I couldn't: paste the first line of your key into ServerB's authorized_keys file WITHOUT the last 2 characters, then type the missing characters from line 1 and the first one from line 2 yourself; this prevents adding a "new line" between the first and second lines of the key. Repeat with the 3rd line.
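
    For reference, a minimal checklist of the server-side bits that commonly block public-key logins (assumptions about ServerB's sshd_config, not something quoted from it):

      # in /etc/ssh/sshd_config on ServerB (then restart sshd)
      PubkeyAuthentication yes
      AuthorizedKeysFile .ssh/authorized_keys

      # permissions sshd insists on before it will trust the key material
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys

    The key must also sit on a single line of authorized_keys, which is exactly the line-wrapping problem described in the edit above.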

    Read the article

  • Network topology for both direct and routed traffic between two nodes

    - by IndigoFire
    Despite its small size, this is the most difficult network design problem I've faced. There are three nodes in this network:
      - PC running Windows XP with an internal WiFi adapter.
      - Base station with both WiFi and a wireless modem (WiModem).
      - Mobile device with both WiFi and WiModem.
    The modem is a low-bandwidth but high-reliability connection. We'd like to use WiFi for high-bandwidth stuff like file transfers when the mobile is nearby, and the modem for control information. Here's the tricky part: we'd like the WiFi traffic to go directly from the mobile to the PC, as rebroadcasting packets on the same WiFi channel takes up double the bandwidth. We can do that with a manual configuration by giving both the PC and the base station two IP addresses for their WiFi interfaces: one on a subnet shared with the mobile, and one on their own subnet. The routes on the PC are set up so that any traffic going to the mobile via WiModem goes through the secondary IP address, so that return traffic from the mobile also goes through the WiModem. Here's what that looks like:

      PC
        WiFi 1: 192.168.2.10/24
        WiFi 2: 192.168.3.10/24
        Default route: 192.168.2.1
      Base Station
        WiFi 1: 192.168.2.1/24
        WiFi 2: 192.168.3.1/24
        WiModem: 192.168.4.1/24
      Mobile
        WiFi: 192.168.3.20/24
        WiModem: 192.168.4.20/24

    We'd like to move to having the base station automatically configure the mobile and the PC, as the manual setup is problematic when you start having multiple mobiles and PCs. This means that the PC can only have 1 IP address and needs to be treated as being pretty simple. Is it possible to have a setup, driven by DHCP on the base station, that is efficient with bandwidth?

    Read the article

  • Can we do a DNSSEC 101? [closed]

    - by PAStheLoD
    Please share your opinions, FAQs, HOWTOs, best practices (or links to the ones you think are the best) and your fears and thoughts about the whole migration (or should I just call it a new piece of tech?). Is DNSSEC just for DNS providers (name server operators)? What ought John Doe to do, who hosts johndoe.com at some random provider (GoDaddy, DreamHost and such)? Also, what if the provider's name server doesn't do automatic signing magic; can John do it manually? In a fire-and-forget way, without touching KSK and ZSK rollovers and updates and headaches? Does it bring any change regarding CERT records? Do browsers support it? How come it became so complex? Why didn't they just merge it with SSL? DKIM is pretty straightforward; IANA/IETF could've opted for something like that. (Yes, I know that creating a trust anchor would still be problematic, but browsers are already full of CA certs. So they could've just let anyone get a cert for a domain for shiny green padlocks, or just generate one for a poor blue lock, put it into a TXT record, encrypt the other records and let the parent zone sign the whole thing for you with its cert.) Thanks! And for disclosure (it seemed like the customary thing to do around here), I've asked the same on the netsec subreddit.
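
    As a concrete starting point (my addition, not part of the question), the signing and validation chain can be inspected with dig against any signed zone; RRSIG records in the answer and the "ad" (authenticated data) flag from a validating resolver are the visible artifacts of DNSSEC:

      # ask for DNSSEC records on a zone that is known to be signed
      dig +dnssec +multi isc.org SOA

      # query through a validating resolver (Google's public resolver validates)
      # and look for the "ad" flag in the response header
      dig @8.8.8.8 +dnssec isc.org SOA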

    Read the article

  • Should I use nginx exclusively, or have it as a proxy to Tomcat (performance related)?

    - by Kevin
    I've planned to create a website that'll be pretty heavy on dynamic content, and want to know what would be the wisest choice for part of my web stack. Right now I'm trying to decide whether I should develop on nginx, using PHP to deliver the dynamic content, or use nginx as a proxy to Tomcat and use servlets to deliver the dynamic content. I have a good amount of experience with Java, JSP, and servlets, so that's a plus right off the bat. Also, since Java is a compiled language, it will execute faster than PHP (it is implied here that Java is around 37x faster than PHP), and will create the web pages faster. I have no experience with PHP; however, I'm under the impression that it is easy to pick up. It's slower than Java, but since the client will only be communicating with nginx, I'm thinking that serving the dynamically created web pages to the client will be faster this way. Considering these things, I'd like to know: Are my assumptions correct? Where does the bottleneck occur: creating pages or serving them back to the client? Will proxying Tomcat with nginx give me any of nginx's performance benefits if I'm going to be using Tomcat to generate the dynamic content (keeping in mind my site is going to be heavy in this aspect)? I don't mind learning PHP if, in the end, it's going to give me the best performance. I just want to know what would be the best choice from that standpoint.
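
    For reference, the usual division of labour in the proxy setup looks like this minimal sketch (paths, port and names are assumptions): nginx answers for static assets itself and only hands dynamic requests to Tomcat, so Tomcat's threads are spent on servlet work rather than on slow clients.

      server {
          listen 80;

          # static assets served directly by nginx
          location /static/ {
              root /var/www/site;
              expires 7d;
          }

          # everything else goes to Tomcat's HTTP connector
          location / {
              proxy_pass http://127.0.0.1:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
          }
      }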

    Read the article

  • Bypass network stack. Which options do we have? Pros and cons of each option [on hold]

    - by javapowered
    I'm writing a trading application. I want to bypass the network stack in Linux, but I don't know how this can be done. I'm looking for a complete list of options with the pros and cons of each of them. The only option I know of is to buy a Solarflare network card, which supports OpenOnload. What other options should I consider, and what are the pros and cons of each? Well, the question is pretty simple: what is the best way to bypass the network stack? Update: OpenOnload "achieves performance improvements in part by performing network processing at user-level, bypassing the OS kernel entirely on the data path." Intel DDIO allows "Intel® Ethernet Controllers and adapters to talk directly with the processor cache of the Intel® Xeon® processor E5." What's the key difference between these technologies? Do they do roughly the same thing? I like Intel DDIO much better because it's much easier to use, while OpenOnload requires a lot of installation and tuning. Is a good OpenOnload application much faster than a good Intel DDIO application?

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old. We have two hardware RAIDs set up on it, with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS. We use XFS with LVM over that to create 100 TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small compute cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with GlusterFS running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3Par storage solutions. Any suggestions or experiences are appreciated.

    Read the article

  • Firefox: Clear History Is SUPER EFFECTIVE?

    - by acidzombie24
    I'm seeing a performance problem on certain sites (like gmail) which clearing the history should not affect. Is this a website problem or a firefox problem and what can i do to fix it w/o clearing my history? Also as a webdeveloper i am interested in how to make this happen (or not happen). I'm using firefox 8 and i confirmed the problem by copying my profile to firefox 11 (portable). To reproduce go to gmail.com and sign in. Have your task manager open. Once you click signin or hit enter gmail will bring up your emails. Keep your eye on the CPU usage. I checked and right now on this machine its using all my CPU for 22seconds!!!! Yes. 22 seconds. Once i cleared my "browser & download history" Its <6seconds. WTF. I have no idea why or how the size of history and CPU usage when loading up gmail are correlated. I have firefox setup so it never clears the history. But... 22seconds is a disaster. Can someone explain why this is happening or a fix that isnt clearing my history? I tried visiting a few websites and only gmail eats up that much CPU. Most websites only take <5sec of max CPU. So maybe this is a gmail problem? Or a firefox problem that gmail happens to hit? I still dont understand why it happens. -edit- I forgot to mention places.sqlite is 90mb. I dont think that matters. I have a sqlite file 400mb which is pretty much 2 large tables. It has no performance issues

    Read the article
