Search Results

Search found 3407 results on 137 pages for 'happy'.

  • JSON element detection

    - by user3614570
    I’ve created a string:

        {"atts": [{"name": "wedw"}, {"type": "---"}]}

    I pile a bunch of these together in an array based on user input and attach them to another string to complete a JSON object that tests out as valid. So I end up with a global array called fields with a bunch of these little snippets. So how do I replace the name "wedw" with a new name? I’ve tried:

        function changefieldname(pos){
            var obj = JSON.parse(jsonstring);
            var oldname = obj.tracelog.fields[pos].atts[0]["name"];
            var newname = document.getElementById("newlogfieldname"+pos).value;
            fields[pos].replace(oldname, newname);
            //writejson();
        }

    And a bunch of variations. I know everything is checking out correctly in terms of the variables pos, oldname, and newname. I also know that fields[pos] returns the string in the array I want to correct, but it’s not happy. I also tried converting fields[pos] to a string, but the replace function doesn't work on it. I’m sure there is a good reason.
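
    One thing to note is that the replace call returns a new string rather than changing fields[pos] in place, so its result has to be assigned back; parsing the snippet, editing the field, and re-serializing avoids string surgery altogether. A rough Python sketch of that parse-edit-dump idea (the array and field names mirror the post; everything else is illustrative):

        import json

        # One snippet as it might sit in the global `fields` array (assumed shape).
        fields = ['{"atts": [{"name": "wedw"}, {"type": "---"}]}']

        def change_field_name(pos, new_name):
            # Parse the stored string into a structure instead of editing the text.
            snippet = json.loads(fields[pos])
            snippet["atts"][0]["name"] = new_name
            # Serialize it back and store the result; strings are immutable,
            # so the replacement has to be assigned, not done in place.
            fields[pos] = json.dumps(snippet)

        change_field_name(0, "newname")
        print(fields[0])   # {"atts": [{"name": "newname"}, {"type": "---"}]}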

    Read the article

  • 3Ware 9650SE RAID-6, two degraded drives, one ECC, rebuild stuck

    - by cswingle
    This morning I came in the office to discover that two of the drives on a RAID-6, 3ware 9650SE controller were marked as degraded and it was rebuilding the array. After getting to about 4%, it got ECC errors on a third drive (this may have happened when I attempted to access the filesystem on this RAID and got I/O errors from the controller). Now I'm in this state:

        > /c2/u1 show
        Unit   UnitType  Status      %RCmpl  %V/I/M  Port  Stripe  Size(GB)
        ------------------------------------------------------------------------
        u1     RAID-6    REBUILDING  4%(A)   -       -     64K     7450.5
        u1-0   DISK      OK          -       -       p5    -       931.312
        u1-1   DISK      OK          -       -       p2    -       931.312
        u1-2   DISK      OK          -       -       p1    -       931.312
        u1-3   DISK      OK          -       -       p4    -       931.312
        u1-4   DISK      OK          -       -       p11   -       931.312
        u1-5   DISK      DEGRADED    -       -       p6    -       931.312
        u1-6   DISK      OK          -       -       p7    -       931.312
        u1-7   DISK      DEGRADED    -       -       p3    -       931.312
        u1-8   DISK      WARNING     -       -       p9    -       931.312
        u1-9   DISK      OK          -       -       p10   -       931.312
        u1/v0  Volume    -           -       -       -     -       7450.5

    Examining the SMART data on the three drives in question, the two that are DEGRADED are in good shape (PASSED without any Current_Pending_Sector or Offline_Uncorrectable errors), but the drive listed as WARNING has 24 uncorrectable sectors. And the "rebuild" has been stuck at 4% for ten hours now. So:

    How do I get it to start actually rebuilding? This particular controller doesn't appear to support /c2/u1 resume rebuild, and the only rebuild command that appears to be an option is one that wants to know what disk to add (/c2/u1 start rebuild disk=<p:-p...> [ignoreECC] according to the help). I have two hot spares in the server, and I'm happy to engage them, but I don't understand what it would do with that information in the current state it's in.

    Can I pull out the drive that is demonstrably failing (the WARNING drive) when I have two DEGRADED drives in a RAID-6? It seems to me that the best scenario would be for me to pull the WARNING drive and tell it to use one of my hot spares in the rebuild. But won't I kill the thing by pulling a "good" drive in a RAID-6 with two DEGRADED drives?

    Finally, I've seen reference in other posts to a bad bug in this controller that causes good drives to be marked as bad, and that upgrading the firmware may help. Is flashing the firmware a risky operation given the situation? Is it likely to help or hurt with respect to the rebuilding-but-stuck-at-4% RAID? Am I experiencing this bug in action?

    Advice outside the spiritual would be much appreciated. Thanks.
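
    For keeping an eye on which members are actually deteriorating while the controller is wedged, a small sketch along these lines can poll the SMART attributes for each port. It assumes smartmontools can reach the drives through the 3ware controller via its 3ware,N addressing against the twa device node, which may need adjusting for this particular setup:

        import subprocess

        # Port numbers taken from the tw_cli listing above (p1..p11).
        PORTS = [5, 2, 1, 4, 11, 6, 7, 3, 9, 10]
        WATCH = ("Current_Pending_Sector", "Offline_Uncorrectable", "Reallocated_Sector_Ct")

        for port in PORTS:
            # smartmontools can usually address drives behind a 3ware controller
            # as -d 3ware,<port> against /dev/twa0 on Linux.
            cmd = ["smartctl", "-A", "-d", "3ware,%d" % port, "/dev/twa0"]
            out = subprocess.run(cmd, capture_output=True, text=True).stdout
            for line in out.splitlines():
                if any(attr in line for attr in WATCH):
                    print("port %2d  %s" % (port, line.strip()))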

    Read the article

  • Dig returns "status: REFUSED" for external queries?

    - by Mikey
    I can't seem to work out why my DNS isn't working properly. If I run dig from the nameserver, it functions correctly:

        # dig ungl.org

        ; <<>> DiG 9.5.1-P2.1 <<>> ungl.org
        ;; global options:  printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24585
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1

        ;; QUESTION SECTION:
        ;ungl.org.                    IN      A

        ;; ANSWER SECTION:
        ungl.org.             38400   IN      A       188.165.34.72

        ;; AUTHORITY SECTION:
        ungl.org.             38400   IN      NS      ns.kimsufi.com.
        ungl.org.             38400   IN      NS      r29901.ovh.net.

        ;; ADDITIONAL SECTION:
        ns.kimsufi.com.       85529   IN      A       213.186.33.199

        ;; Query time: 1 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Sat Mar 13 01:04:06 2010
        ;; MSG SIZE  rcvd: 114

    but when I run it from another server in the same datacenter I receive:

        # dig @87.98.167.208 ungl.org

        ; <<>> DiG 9.5.1-P2.1 <<>> @87.98.167.208 ungl.org
        ; (1 server found)
        ;; global options:  printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 18787
        ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; WARNING: recursion requested but not available

        ;; QUESTION SECTION:
        ;ungl.org.                    IN      A

        ;; Query time: 1 msec
        ;; SERVER: 87.98.167.208#53(87.98.167.208)
        ;; WHEN: Sat Mar 13 01:01:35 2010
        ;; MSG SIZE  rcvd: 26

    My zone file for this domain is:

        $ttl 38400
        ungl.org.    IN    SOA    r29901.ovh.net. mikey.aol.com. (
                           201003121
                           10800
                           3600
                           604800
                           38400 )
        ungl.org.    IN    NS     r29901.ovh.net.
        ungl.org.    IN    NS     ns.kimsufi.com.
        ungl.org.    IN    A      188.165.34.72
        localhost.   IN    A      127.0.0.1
        www          IN    A      188.165.34.72

    The server is running Ubuntu 9.10 and BIND 9. If anyone can shed some light on this for me it'd make me very happy! Thanks.
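
    Since the refusal only shows up for queries arriving from other hosts, comparing the same query from several vantage points can help confirm whether the server's allow-query/allow-recursion restrictions are what's turning outside clients away. A rough sketch of that comparison (the server addresses are the ones from the post; it just shells out to dig and reports the status line):

        import subprocess

        DOMAIN = "ungl.org"
        SERVERS = ["127.0.0.1", "87.98.167.208"]   # loopback vs. public address of the same box

        for server in SERVERS:
            out = subprocess.run(["dig", "@" + server, DOMAIN, "A"],
                                 capture_output=True, text=True).stdout
            status = next((line.strip() for line in out.splitlines() if "status:" in line),
                          "no header line returned")
            has_answer = "ANSWER SECTION" in out
            print("%-15s %s  answer=%s" % (server, status, has_answer))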

    Read the article

  • Vista install works on one computer, but bluescreens another (on which Vista is known to work)

    - by Ken
    I hope my explanations make some sense -- please ask for clarification if they don't. I had a computer running Windows Vista (Ultimate, 64-bit). All was well! Then one day there was a nasty power surge at the office, and it died. (We didn't have surge protectors at the office, unfortunately. I assumed our lines were conditioned elsewhere, or was not an issue here. Oops.) After some testing, it was determined that the PSU, motherboard, and RAM were bad. While waiting for new hardware to arrive, I put my hard disk in a spare PC which had identical parts (mobo/CPU/RAM/PSU/video). Everything worked perfectly. The only way I could even tell it wasn't my computer is because Vista asked to re-activate itself with the new hardware, which worked fine, too. So the hard disk seems OK. Then the new parts arrived. The old motherboard model is no longer manufactured, so it's a new one with the same CPU/RAM/videocard/etc. slots. The PSU is also new, while the RAM I'm using is from the spare PC mentioned above. When I put it together and tried booting with my old hard disk, it starts to boot Windows, and then (fairly early in the process) gives a bluescreen and immediately reboots (so I can't see whatever the bluescreen is trying to tell me). I tried "safe mode", which also bluescreened. I tried booting the Vista DVD and running the repair utility, which found a Vista install, confirmed that it would not boot, and, eventually, declared that it was unable to repair it. I installed Vista fresh on a new hard disk, with the new mobo/etc., and it works perfectly. (That's what I'm running now.) I've also booted a Linux CD here, which ran great, and I've run Memtest86+ for a while, which found no errors. So all the hardware apart from the old hard disk seems OK, too. I don't think the problem is with my old Vista hard disk, since I used that with another mobo/CPU just fine. I don't think it's any other part of the new hardware, since I'm able to use it (and test it) with no trouble. It's just the combination of my old Vista install plus the new PC hardware that's not happy. I can get my data off my old hard disk and onto my new hard disk, and reinstall my apps, but it would be nice if I could fix things so I could continue to use my old hard disk as before. The latest hypothesis I've heard is that Vista had trouble with the new hardware (i.e., motherboard), but we have no idea what to do about that (except Safe Mode, which didn't work). Suggestions? Hypotheses for what's not right about this combination of Vista install and motherboard? Thanks!

    Read the article

  • ssh tunnel error "ssh_exchange_identification: Connection closed by remote host"

    - by Jacob Ewing
    I'm trying to use an ssh tunnel from my office machine to my home machine, and get an error when I try to use it. What I'm doing is starting one shell like so:

        ssh -gL 12345:my.home.domain:22 my.home.domain

    This is giving me a proper shell, no problem. What I normally do then is ssh to my home machine through this office machine, like so:

        ssh -p 12345 127.0.0.1

    This has always worked for me, until last week, when I set up a new system on my home machine (switching from Ubuntu to Debian). Now I get an error. I can still open up my initial ssh connection, but when I try to use that tunnel, I get (on the office machine) this error:

        ssh_exchange_identification: Connection closed by remote host

    Also, when that happens, the open shell that I have the tunnelling set up through gets this line spat out at it:

        channel 3: open failed: connect failed: Connection timed out

    At which point, I'm at a loss. If any more info is needed, I'll be happy to post it.

    ============= further to that ==============

    After fiddling around further, I've found that I'm getting a different response from the server (my home machine, that is) when I try to telnet in on the various ports. If I try:

        telnet my.home.domain 22

    I get this back:

        Trying <my ip address>...
        Connected to <my domain>.
        Escape character is '^]'.
        SSH-2.0-OpenSSH_5.5p1 Debian-6+squeeze2

    Which is what I would expect. After setting up the tunnel though, and then telnetting to that, I see this response:

        Trying 127.0.0.1...
        Connected to 127.0.0.1.
        Escape character is '^]'.

    ============== and further still ==================

    As per kbulgrien's suggestion, here is the output from the client machine with the -v option:

        ssh -vp 24600 127.0.0.1
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to 127.0.0.1 [127.0.0.1] port 24600.
        debug1: Connection established.
        debug1: identity file /home/jacob/.ssh/id_rsa type -1
        debug1: identity file /home/jacob/.ssh/id_rsa-cert type -1
        debug1: identity file /home/jacob/.ssh/id_dsa type -1
        debug1: identity file /home/jacob/.ssh/id_dsa-cert type -1
        debug1: identity file /home/jacob/.ssh/id_ecdsa type -1
        debug1: identity file /home/jacob/.ssh/id_ecdsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host
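
    The telnet test above is the telling one: the direct connection returns an SSH banner while the tunnelled one connects but never produces one. A tiny sketch like the following can automate that banner check against both endpoints (host name and forwarded port are the ones from the post; adjust to taste):

        import socket

        TARGETS = [("my.home.domain", 22), ("127.0.0.1", 12345)]  # direct vs. through the tunnel

        for host, port in TARGETS:
            try:
                with socket.create_connection((host, port), timeout=10) as s:
                    s.settimeout(10)
                    banner = s.recv(128).decode("ascii", "replace").strip()
                    print("%s:%d  banner: %s" % (host, port, banner or "<none received>"))
            except OSError as exc:
                print("%s:%d  connection failed: %s" % (host, port, exc))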

    Read the article

  • Business Owners - What Remote Desktop Solution Do You Use To Service Your Clients PCs?

    - by Sootah
    Howdy fellow computer geeks, I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service the machines in the office we generally have one of our techs pick the computer up while they are out and about and bring it back with them. Only rarely will we require the customer to bring us the computer themselves. In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required; I am looking at ways that we can service our clients remotely whenever possible. What we're in need of is a solid remote desktop application that will be incredibly easy for our customers to connect to, as well as be robust enough that we don't need the client babysitting the computer during the entire repair. Ideally I would like to use a web-based solution so that we don't have to walk the customers through installing, connecting, and configuring it over the phone. This would be unacceptable because of the level of service they are used to. Effectively we'd want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (Obviously the option to just email them a link that'd do all this for them would be what we'd be aiming for) Along with the ease of use factor, we would need the product to not require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot - so auto-reconnect is an absolute must. The only product I know of right now that does any of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see to LogMeIn Rescue is that it's $120.00/month per technician. While we'd ultimately end up saving far more than that per month just in fuel costs alone, I'd like to explore any other options out there that I may not have come across. So - Are there any equally good products out there? If so what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost? Thanks in advance for your help! -Sootah

    Read the article

  • Apache2 cgi's crash on odbc db access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an apache2 + ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash ruby on the database connect line when run as a cgi from apache2. Ruby cgi's that don't try to access the ODBC datasource work fine. And (again) ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use perl instead of ruby. So, the issue seems to be with the environment provided for ruby (perl) by apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these cgi scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail of any aspect if that will help.

    Details:

        Mac OS X Server 10.5.8
        Xserve 2 x 2.66 Dual-Core Intel Xeon (12 GB)
        Apache 2.2.13
        ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0]
        ruby-odbc 0.9997
        dbd-odbc (0.2.5)
        dbi (0.4.3)
        mod_ruby 1.3.0
        Perl -- 5.8.8
        DBI -- 1.609
        DBD::ODBC -- 1.23
        odbc driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib)
        odbc datasource: FileMaker Server 10 (v10.0.2.206)

    A minimal version of a script (anonymized) that will crash in apache but run successfully from a shell:

        #!/usr/bin/ruby
        require 'cgi'
        require 'odbc'

        cgi = CGI.new("html3")

        aConnection = ODBC::connect('DBFile', "username", 'password')
        aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044")
        aQuery.execute
        aRecord = aQuery.fetch_hash.inspect
        aQuery.drop
        aConnection.disconnect

        # aRecord = '{"zzz_kP_ID"=>81044.0}'

        cgi.out{
          cgi.html{
            cgi.body{
              "<pre>Primary Key: #{aRecord}</pre>"
            }
          }
        }

    Example of running this from a shell:

        gamma% ./minimal.rb
        (offline mode: enter name=value pairs on standard input)
        Content-Type: text/html
        Content-Length: 134

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>%
        gamma%

    Typical crash log lines:

        Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236]
        Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
        Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253]
        Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
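
    Since the same script succeeds in a shell and dies under Apache, one way to narrow it down is to diff the two environments; variables like ODBCINI, DYLD_LIBRARY_PATH and HOME often differ between an interactive login and the Apache worker. A throwaway CGI along these lines, dropped next to the failing script, prints what Apache actually passes (sketch only, not part of the original setup):

        #!/usr/bin/env python
        # Minimal CGI that dumps its environment so it can be diffed against
        # `env | sort` from an interactive shell on the same box.
        import os

        print("Content-Type: text/plain")
        print("")
        for key in sorted(os.environ):
            print("%s=%s" % (key, os.environ[key]))

    Comparing its output with the shell's environment usually makes the missing or different variable obvious.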

    Read the article

  • What Remote Desktop Solution Do You Use To Service Your Clients' PCs? [closed]

    - by Sootah
    Possible Duplicate: What’s the best Remote Desktop Application? I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service the machines in the office we generally have one of our techs pick the computer up while they are out and about and bring it back with them. Only rarely will we require the customer to bring us the computer themselves. In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required; I am looking at ways that we can service our clients remotely whenever possible. What we're in need of is a solid remote desktop application that will be incredibly easy for our customers to connect to, as well as be robust enough that we don't need the client babysitting the computer during the entire repair. Ideally I would like to use a web-based solution so that we don't have to walk the customers through installing, connecting, and configuring it over the phone. This would be unacceptable because of the level of service they are used to. Effectively we'd want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (Obviously the option to just email them a link that'd do all this for them would be what we'd be aiming for) Along with the ease of use factor, we would need the product to not require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot - so auto-reconnect is an absolute must. The only product I know of right now that does any of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see to LogMeIn Rescue is that it's $120.00/month per technician. While we'd ultimately end up saving far more than that per month just in fuel costs alone, I'd like to explore any other options out there that I may not have come across. Are there any equally good products out there? If so what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost?

    Read the article

  • backup and restoration of a freeipa infrastructure

    - by Sirex
    I'm finding the documentation on IPA server backup and restoration sadly lacking, and since it's so centrally critical it's not something I'm really happy about shooting in the dark with - could some kind soul more knowledgeable in the matter please attempt to provide an idiot-proof guide to backing up and restoring IPA server(s)? Particularly the main server (the cert-signing one).

    We're looking towards rolling out IPA in a two-server setup (1 master, 1 replica). I'm using DNS SRV records to handle failover, hence a loss of the replica isn't a big deal, as I could make a new one and force a resync to happen - it's losing the master that bothers me. The thing I'm really struggling with is locating a step-by-step procedure for backing up and restoring the master server. I'm aware that a whole-VM snapshot is the recommended way of doing IPA server backup, but that isn't an option at this time for us. I'm also aware that FreeIPA 3.2.0 includes some sort of backup command built in, but that isn't in the IPA version in CentOS, and I don't expect it will be for some time yet.

    I've been trying many different methods, but none of them seem to restore cleanly. Amongst others, I've tried a command similar to:

        db2ldif.pl -D "cn=directory manager" -w - -n userroot -a /root/userroot.ldif

    and the script from here to produce three ldif files -- one for the domain ({domain}-userroot), and two for the ipa server (ipa-ipaca and ipa-userroot). Most of the restores I've tried have been similar to the form of:

        ldif2db.pl -D "cn=directory manager" -w - -n userroot -i userroot.ldif

    which seems to work and reports no errors, but totally borks the IPA install on the machine, and I can no longer log in with either the admin password on the backed-up server or the one I set it to on installation before attempting the ldif2db command (I'm installing ipa-server and running ipa-server-install, then attempting the restore).

    I'm not overly bothered about losing the CA, having to rejoin the domain, losing replication, etc. (although it'd be awesome if that could be avoided), but in the instance of the main server dropping I'd really like to avoid having to re-enter all the user/group information. I guess in the instance of losing the main server I could promote the other one and replicate in the other direction, but I've not tried that either. Has anyone done that?

    tl;dr: Can someone provide an idiot's guide to backing up and restoring an IPA server (preferably on CentOS 6) in a clear enough way that'd make me feel confident it'll actually work when the dreaded time comes? Crayons are optional, but appreciated ;-) I can't be the only person struggling with this, seeing how widely used IPA is, surely?
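
    Until a proper restore procedure is nailed down, a scheduled LDIF export at least captures the user/group data the post most wants to keep. A rough sketch wrapping the same db2ldif.pl invocation quoted above; the instance path, backend names, destination and password handling are all assumptions to adapt:

        import datetime
        import subprocess

        # Hypothetical 389-ds instance path and backends -- adjust to the real install.
        SCRIPT = "/usr/lib64/dirsrv/slapd-EXAMPLE-COM/db2ldif.pl"
        BACKENDS = ["userRoot", "ipaca"]
        DEST = "/root/ipa-backups"

        stamp = datetime.date.today().isoformat()
        for backend in BACKENDS:
            out = "%s/%s-%s.ldif" % (DEST, backend, stamp)
            cmd = [SCRIPT, "-D", "cn=directory manager", "-w", "-",
                   "-n", backend, "-a", out]
            # "-w -" makes the script prompt for the directory manager password,
            # so feed it on stdin here (or use a password file option if this
            # version of the script supports one).
            subprocess.run(cmd, input="directory-manager-password\n", text=True, check=True)
            print("exported %s to %s" % (backend, out))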

    Read the article

  • Excel on Mac Mini OS X Version 10.7.2 - Create text file named after a cell containing other cell data from other cells

    - by user143041
    I tried using the code below for the Excel program on my Mac Mini using OS X Version 10.7.2 and it keeps saying Error due to file name / path. (The Excel file I am creating is going to be a template with my formulas and macros installed, which will be used over and over.)

        Sub CreateFile()
            Do While Not IsEmpty(ActiveCell.Offset(0, 1))
                MyFile = ActiveCell.Value & ".txt"
                fnum = FreeFile()
                Open MyFile For Output As fnum
                Print #fnum, ActiveCell.Offset(0, 1) & " " & ActiveCell.Offset(0, 2)
                Close #fnum
                ActiveCell.Offset(1, 0).Select
            Loop
        End Sub

    What I'm trying to do:

    1st Objective: I would like to have the following data used to create a text file. A:A is what I need the name of the file to be; B2 is the content I need in the text file. So, A2 - "repair-video-game-Glassboro-NJ-08028.txt" is the file name and B2 is the content in the file. Next, A3 is the file name and B3 is the content for the file, etc. Once the content reads what is in cells A16 and B16 (length will vary), the file creation should stop; if not, I can delete the additional files created. This sheet will never change. Is there a way to establish the Excel macro to always go to this sheet instead of having to select it with the mouse to identify the starting point?

    2nd Objective: I would like to have the following data used to create a text file. A1 is what I need the name of the file to be; B:B is the content I want in the file. So, A2 is the file name "geo-sitemap.xml" and B:B is the content in the file (ignore the .xml file extension in the photo). Once the content cell reads what is in cell B16 (length will vary), the file creation should stop; if not, I can adjust the cells that need content (the formulated content you see in the image is preset for 500 rows). This sheet will never change. Is there a way to establish the Excel macro to always go to this sheet instead of having to select it with the mouse to identify the starting point?

    I can provide the content in the cells that are filled in by Excel formulas that are not to be included in the .txt files. It is OK if it is not possible. I can delete the extra cells that are not populated (based on the data sheet). Please let me know if you need any more additional information or clarity and I will be happy to provide it. I really appreciate your help! Thank you
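
    Outside of VBA, the same "column A is the file name, column B is the content" loop can be sketched in Python with openpyxl. This assumes the workbook is saved as .xlsx, the sheet name and paths are placeholders, and it writes into an explicit output folder rather than relying on whatever Excel considers the current directory:

        import os
        from openpyxl import load_workbook

        # Hypothetical paths and sheet name -- adjust to the real workbook.
        WORKBOOK = "/Users/me/Documents/template.xlsx"
        SHEET = "Sheet1"
        OUT_DIR = "/Users/me/Documents/txt-output"

        wb = load_workbook(WORKBOOK, data_only=True)   # data_only=True reads formula results
        ws = wb[SHEET]

        os.makedirs(OUT_DIR, exist_ok=True)
        for name_cell, content_cell in ws.iter_rows(min_row=2, max_row=16, max_col=2):
            if name_cell.value is None:
                break   # stop at the first empty name, like the Do While in the macro
            path = os.path.join(OUT_DIR, str(name_cell.value))
            if not path.endswith(".txt"):
                path += ".txt"
            with open(path, "w") as fh:
                fh.write("" if content_cell.value is None else str(content_cell.value))
            print("wrote", path)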

    Read the article

  • Output php mail calls to log file

    - by Tom McQuarrie
    This question relates to the question found here: Find the php script thats sending mails

    Trying to do the exact same thing but can't get the log to output what I need. Not too experienced with serverfault and ideally I'd post my followup on the original question, or PM adam to see if he ever found a solution, but looks as though server fault doesn't work that way. I can post an "answer" but that's definitely not what this is.

    I have a script located at /usr/local/bin/sendmail-php-logged, with the following:

        #!/bin/sh
        logger -p mail.info sendmail-php: site=${HTTP_HOST}, client=${REMOTE_ADDR}, script=${SCRIPT_NAME}, filename=${SCRIPT_FILENAME}, docroot=${DOCUMENT_ROOT}, pwd=${PWD}, uid=${UID}, user=$(whoami)
        /usr/sbin/sendmail -t -i $*

    This is logging to /var/log/maillog, but as Adam mentions in his question, none of the server variables work. Output I'm getting is:

        Oct 4 12:16:21 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
        Oct 4 12:16:21 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
        Oct 4 12:17:03 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
        Oct 4 12:17:05 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root
        Oct 4 12:17:11 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
        Oct 4 12:17:14 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root
        Oct 4 12:17:29 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root
        Oct 4 12:17:41 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root

    User ID, current user, and pwd are all working, probably because they're globally accessible script resources, and not specific to PHP, like all the others are. I've tried using other server variables as per labradort's instructions, but no joy. Here's some sample tests:

        logger -p mail.info sendmail-php SCRIPT_NAME: ${SCRIPT_NAME}
        logger -p mail.info sendmail-php SCRIPT_FILENAME: ${SCRIPT_FILENAME}
        logger -p mail.info sendmail-php PATH_INFO: ${PATH_INFO}
        logger -p mail.info sendmail-php PHP_SELF: ${PHP_SELF}
        logger -p mail.info sendmail-php DOCUMENT_ROOT: ${DOCUMENT_ROOT}
        logger -p mail.info sendmail-php REMOTE_ADDR: ${REMOTE_ADDR}
        logger -p mail.info sendmail-php SCRIPT_NAME: $SCRIPT_NAME
        logger -p mail.info sendmail-php SCRIPT_FILENAME: $SCRIPT_FILENAME
        logger -p mail.info sendmail-php PATH_INFO: $PATH_INFO
        logger -p mail.info sendmail-php PHP_SELF: $PHP_SELF
        logger -p mail.info sendmail-php DOCUMENT_ROOT: $DOCUMENT_ROOT
        logger -p mail.info sendmail-php REMOTE_ADDR: $REMOTE_ADDR

    And the output:

        Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_NAME:
        Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_FILENAME:
        Oct 4 12:58:02 fluke logger: sendmail-php PATH_INFO:
        Oct 4 12:58:02 fluke logger: sendmail-php PHP_SELF:
        Oct 4 12:58:02 fluke logger: sendmail-php DOCUMENT_ROOT:
        Oct 4 12:58:02 fluke logger: sendmail-php REMOTE_ADDR:
        Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_NAME:
        Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_FILENAME:
        Oct 4 12:58:02 fluke logger: sendmail-php PATH_INFO:
        Oct 4 12:58:02 fluke logger: sendmail-php PHP_SELF:
        Oct 4 12:58:02 fluke logger: sendmail-php DOCUMENT_ROOT:
        Oct 4 12:58:02 fluke logger: sendmail-php REMOTE_ADDR:

    I'm running php 5.3.10. Unfortunately register_globals is on, for compatibility with legacy systems, but you wouldn't think that would cause the environment variables to stop working. If someone can give me some hints as to why this might not be working I'll be a very happy man :)
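
    Even with the PHP-specific variables coming through empty, the pwd/uid/user fields already narrow down which tree the mail is coming from. A small sketch like this can tally the wrapper's log lines by working directory and user, using the log path and line format shown above:

        import re
        from collections import Counter

        LOG = "/var/log/maillog"
        pattern = re.compile(r"sendmail-php: .*pwd=([^,]*), uid=(\d+), user=(\S+)")

        counts = Counter()
        with open(LOG) as fh:
            for line in fh:
                m = pattern.search(line)
                if m:
                    counts[m.groups()] += 1

        for (pwd, uid, user), n in counts.most_common():
            print("%5d  pwd=%s uid=%s user=%s" % (n, pwd, uid, user))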

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this:

        Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
        Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
        Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
        Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory

    Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters):

        current values are: 45984 61312 91968
        new values are: 183936 245248 367872

    After that, we started seeing a bizarre error message:

        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval!

    Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache:

        The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one).

    While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check:

        gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes.

    We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here.

        /proc/sys/net/ipv4/route/gc_elasticity (8)
        /proc/sys/net/ipv4/route/gc_interval (60)
        /proc/sys/net/ipv4/route/gc_min_interval (0)
        /proc/sys/net/ipv4/route/gc_timeout (300)
        /proc/sys/net/ipv4/route/secret_interval (600)
        /proc/sys/net/ipv4/route/gc_thresh (?)
        rhash_entries (kernel parameter, default unknown?)

    We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high-traffic HAProxy instance?
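
    For keeping track of what each box is actually running with before and after changes, a quick read-only sketch that reports the tunables listed above straight out of /proc (it only prints current values, it changes nothing):

        import os

        TUNABLES = [
            "/proc/sys/net/ipv4/tcp_mem",
            "/proc/sys/net/ipv4/route/gc_elasticity",
            "/proc/sys/net/ipv4/route/gc_interval",
            "/proc/sys/net/ipv4/route/gc_min_interval",
            "/proc/sys/net/ipv4/route/gc_timeout",
            "/proc/sys/net/ipv4/route/secret_interval",
            "/proc/sys/net/ipv4/route/gc_thresh",
        ]

        for path in TUNABLES:
            try:
                with open(path) as fh:
                    value = fh.read().strip()
            except OSError:
                value = "<not present on this kernel>"
            print("%-20s %s" % (os.path.basename(path), value))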

    Read the article

  • Can you set up a gaming LAN using OpenVPN installed in a VMware guest OS and be playing the game on the host OS?

    - by Coder
    I would like to set up a gaming VPN. I.e., I have some games that work over LAN and would like to play them with people that are not on my LAN. I know I can do this with OpenVPN. My ultimate goal would be to run OpenVPN portably on my host OS and not even need any virtualization. As such I don't want to install it on my host, but I'm fine with running it portably. I'm even fine with temporarily adding registry keys, and then running a .reg file to remove these entries once I'm done.

    To this effect I have installed OpenVPN on a virtual machine and diffed the registry. I then manually (using a .reg file) added all the keys that seem important on my host OS and copied the installation folder of OpenVPN onto my host machine. Then I try to run OpenVPN GUI 1.0.3 as a test and it says "Error opening registy for reading (HKLM\SOFTWARE\OpenVPN). OpenVPN is probably not installed". I verified that that key is indeed in the registry with all subkeys and it looks correct. I have tried running the GUI as an administrator and in compatibility mode with no success. I am running Windows 7.

    If this fails, then I would be happy with installing OpenVPN on a virtual machine in VMWare, but the key is that I will be running the game installed on my host machine. The first question for this option is whether this is even possible. The second is that I can't get the VM to have internet access if I use bridging, but I can if I use NAT. Is it possible to do this game VPN setup with the VMWare guest OS running using NAT?

    Summary of questions:

    - Is it possible to run OpenVPN portably, and if so what did I miss above?
    - If it's not possible to run it portably, then can I set up a gaming LAN by installing OpenVPN in a guest OS with NAT, and how can I do this?
    - If the above is not possible, then can I install OpenVPN in a guest using bridging, and if so how can I set this up with a Windows 7 host and Windows XP guest? Currently I can't get the guest to be able to access the internet in bridging mode, but it works in NAT mode.
    - In general, is there any good documentation on setting up a gaming LAN with OpenVPN (I am using 2.1.4)? I have never set up a VPN of any sort before, so any help would be much appreciated. Thanks!
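
    One thing worth checking on a 64-bit Windows 7 host is which registry view the imported key ended up in: a .reg import done from 64-bit regedit lands in the 64-bit view, while a 32-bit program has HKLM\SOFTWARE redirected to the WOW6432Node view, so the two can disagree even though regedit shows the key. A small sketch that probes both views for the key named in the error message, using Python's winreg module:

        import winreg

        KEY_PATH = r"SOFTWARE\OpenVPN"
        VIEWS = {
            "64-bit view": winreg.KEY_WOW64_64KEY,
            "32-bit view (WOW6432Node)": winreg.KEY_WOW64_32KEY,
        }

        for label, view in VIEWS.items():
            try:
                with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                                    winreg.KEY_READ | view) as key:
                    num_values = winreg.QueryInfoKey(key)[1]
                    print("%-28s present, %d value(s):" % (label, num_values))
                    for i in range(num_values):
                        name, value, _ = winreg.EnumValue(key, i)
                        print("    %s = %r" % (name or "(Default)", value))
            except OSError:
                print("%-28s key not found" % label)

    If the key only exists in one view, exporting it again and importing it into the other view (or with the matching 32/64-bit regedit) is the likely next experiment.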

    Read the article

  • How to disable hiddev96 in linux (or tell it to ignore a specific device)

    - by Miky D
    I'm having problems with a CentOS 5.0 system when using a certain USB device. The problem is that the device advertises itself as a HID device and linux is happy to try to provide support for it: In /ver/log/messages I see a line that reads: hiddev96: USB HID 1.11 Device [KXX USB PRO] on usb-0000:00:1d.0-1 My question comes down to: Is there a way to tell linux to not use hiddev96 for that device in particular? If yes, how? If not, what are my options - can I turn hiddev96 off completely? UPDATE I should probably have been a bit more specific about what is going on. The machine is running Centos 5.0, and on top of it I'm running VMWare workstation with Windows XP - which is where the USB device is actually supposed to operate. All works fine for other USB devices (i.e. VMWare successfully connects the USB device to the guest OS and the OS can use it, but for this particular device VMWare connects it to the guest OS, but the OS can't read/write to it) Every attempt locks up the application that is trying to communicate with the device. I've reason to believe that it is because the device is a HID device and there's some contention between the Linux host and the Windows guest OS in accessing the device. Below is the output from modprobe -l|grep -i hid as requested by @Karolis: # modprobe -l | grep -i hid /lib/modules/2.6.18-53.1.14.el5/kernel/net/bluetooth/hidp/hidp.ko /lib/modules/2.6.18-53.1.14.el5/kernel/drivers/usb/misc/phidgetservo.ko /lib/modules/2.6.18-53.1.14.el5/kernel/drivers/usb/misc/phidgetkit.ko And here is the output of lsmod # lsmod Module Size Used by udf 76997 1 vboxdrv 65696 0 autofs4 24517 2 hidp 23105 2 rfcomm 42457 0 l2cap 29633 10 hidp,rfcomm tun 14657 0 vmnet 49980 16 vmblock 20512 3 vmmon 945236 0 sunrpc 144253 1 cpufreq_ondemand 10573 1 video 19269 0 sbs 18533 0 backlight 10049 0 i2c_ec 9025 1 sbs button 10705 0 battery 13637 0 asus_acpi 19289 0 ac 9157 0 ipv6 251393 27 lp 15849 0 snd_hda_intel 24025 2 snd_hda_codec 202689 1 snd_hda_intel snd_seq_dummy 7877 0 snd_seq_oss 32577 0 nvidia 7824032 31 snd_seq_midi_event 11073 1 snd_seq_oss snd_seq 49713 5 snd_seq_dummy,snd_seq_oss,snd_seq_midi_event snd_seq_device 11725 3 snd_seq_dummy,snd_seq_oss,snd_seq snd_pcm_oss 42945 0 snd_mixer_oss 19009 1 snd_pcm_oss snd_pcm 72133 3 snd_hda_intel,snd_hda_codec,snd_pcm_oss joydev 13313 0 sg 36061 0 parport_pc 29157 1 snd_timer 24645 2 snd_seq,snd_pcm snd 52421 13 snd_hda_intel,snd_hda_codec,snd_seq_oss,snd_seq,snd_seq_device,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer ndiswrapper 170384 0 parport 37513 2 lp,parport_pc hci_usb 20317 2 ide_cd 40033 1 tg3 104389 0 i2c_i801 11469 0 bluetooth 53925 8 hidp,rfcomm,l2cap,hci_usb soundcore 11553 1 snd cdrom 36705 1 ide_cd serio_raw 10693 0 snd_page_alloc 14281 2 snd_hda_intel,snd_pcm i2c_core 23745 3 i2c_ec,nvidia,i2c_i801 pcspkr 7105 0 dm_snapshot 20709 0 dm_zero 6209 0 dm_mirror 28741 0 dm_mod 58201 8 dm_snapshot,dm_zero,dm_mirror ahci 23621 4 libata 115833 1 ahci sd_mod 24897 5 scsi_mod 132685 3 sg,libata,sd_mod ext3 123337 3 jbd 56553 1 ext3 ehci_hcd 32973 0 ohci_hcd 23261 0 uhci_hcd 25421 0
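
    Whatever mechanism ends up being used to make the host ignore the device (a quirk, a blacklist, or detaching it before VMware grabs it), the first thing usually needed is the device's vendor:product ID. A small sketch that walks the standard sysfs locations and prints the IDs alongside the product strings (read-only, nothing here is specific to this particular device):

        import glob
        import os

        def read(path):
            try:
                f = open(path)
                try:
                    return f.read().strip()
                finally:
                    f.close()
            except (IOError, OSError):
                return ""

        # Each USB device shows up as a directory with idVendor/idProduct files;
        # interface directories without those files are skipped.
        for dev in sorted(glob.glob("/sys/bus/usb/devices/*")):
            vid = read(os.path.join(dev, "idVendor"))
            pid = read(os.path.join(dev, "idProduct"))
            if not vid or not pid:
                continue
            product = read(os.path.join(dev, "product"))
            print("%s  %s:%s  %s" % (os.path.basename(dev), vid, pid, product))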

    Read the article

  • where on disk is space allocated for new files inside LVM lv with ext4 file system?

    - by Jost
    I run a multi-disk server with LVM2. Several large disks serve as LVM2 physical volumes for one volume group, containing one logical volume formatted with ext4. Nothing fancy, just your standard linear setup. Recently an additional, very small disk was added as physical volume to that volume group and I expanded both the logical volume, and the ext4 file system therein onto that disk. This lv is used to store incremental backups using rsync and is only about 30% full, there have rarely been any files deleted from it, only incremental writes. Now this new HDD I added to the pre-existing volume group has unexpectedly died on me, and the volume group won't come up because it is missing one physical volume. As fate will have it, this WAS the "in an event of catastrophic failure on the primary server"-backup, the event happened, the boss is not happy, so this kinda has to work... According to this (Part 3): http://www.novell.com/coolsolutions/appnote/19386.html it is possible to trick LVM into starting anyway by creating a new pv with identical metadata to the failed disk, which will make the volume accessible, but of course leave giant holes in the file system. I have'n tried it yet, because it involves repairing (writing to) the file system which eliminates the possibility of trying other things if it fails. Now my question is: How does this setup actually allocate disk space for new data? Is it allocated linearly from beginning to end of PVs, in the order they were added to the vg? Is it striped somehow in order to increase performance/balance load? since this defective disk was added only later to an existing lvm2 vg and lv, containing a half-empty ext4, what are the chances that there was never any data written to the defective disk? In other words: what are the chances of recovering all my data, even without the defective disk, by just starting the volume group as-is? Am I about to go spend $1500 on having 250GB of empty space recovered when I send the defective disk in for repair? Is there a way to check without mounting the file system and opening the files, hoping they contain something other than zeros? (comparing addresses of used data blocks inside ext4 to address ranges that were on the missing pv, something like that, preferably easy to automate) I know bitwise-copying the entire lv into an image file before trying to repair the ext4 would probably be a good idea, but since this lv is very large and I just suffered major file system failure on several systems it is probably a luxury I don't have... Any suggestions?
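
    On the question of whether any extents were ever placed on the small PV: LVM can report the segment-to-PV mapping for the logical volume directly, which shows whether the linear LV ever grew onto the dead disk. A rough sketch wrapping lvdisplay (the VG/LV names are placeholders, the parsing assumes the usual "Logical extents ... / Physical volume ..." segment listing, and with a PV missing the command may need to be run with --partial):

        import subprocess

        LV = "/dev/backupvg/backuplv"   # hypothetical VG/LV path

        # -m / --maps prints one stanza per segment, including which PV backs it.
        out = subprocess.run(["lvdisplay", "-m", LV],
                             capture_output=True, text=True).stdout
        extents = "?"
        for line in out.splitlines():
            line = line.strip()
            if line.startswith("Logical extents"):
                extents = line
            elif line.startswith("Physical volume"):
                print("%s  <-  %s" % (extents, line))
                extents = "?"

    If no segment maps to the failed PV, the data should all still be on the surviving disks; if some do, comparing those extent ranges against what ext4 actually has allocated is the next step.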

    Read the article

  • What are some fast methods for navigating to frequently used folders in Windows 7?

    - by fostandy
    (This is a followup question from my previous question.) In windows XP I used to be able to quickly navigate to frequently used folders by making use of the 'Favorites' menu item and the hotkey behaviour. In certain conditions it could be set up so that getting to a particular folder was as easy as alt-a x (and without a file explorer window open it was as fast as win-e alt-a x). I am struggling to get anywhere near this speed in Windows 7 and would like to solicit advice from others regarding fast folder navigation to see if I am missing any methods. My current way to navigate quickly is basically move hand to mouse move cursor to navigation pane/pain. scroll all the way to the top (because normally I the panel is focused on whatever deep directory structure I am already in). sift through my 50+ favorites to get the one I want, or click a link to a folder that contains further links in some sort of 'pseudo-tree' functionality. select it. This is slower than my previous method by upwards of an order of magnitude. There are a couple of things I've contemplated: add expandable folders, not just direct links, to the favorites menu. add expandable folders, not just direct links, to the start menu. add links of my favorite folders to a submenu of the start menu so that they come up when I search them. They do but this still rather cumbersome started using 7stacks - url here (I cannot link the url directly due to lack of reputation but http://www.alastria.com/index.php?p=software-7s). This is about the closest I've gotten to some sort of compact, customizeable, easy to access, tree based navigation structure. How do you power users quickly navigate to your favorite folders? Are there keyboard shortcuts I am missing? Can someone recommend other apps or addon or extensions that can achieve this sort of functionality? The Current solution (thanks to the answers below) I am going to use is a combination of Autohotkey and 7stacks - autohotkey to launch 7stacks, 7stacks with the 'menu' stack type for fast, key-enabled navigation to folders organised in a tree structure. This solves about 90% of the issue, the only issues are (note that these are really minor, I am really splitting hairs more than anything here) Can't use this for existing folder navigation (ie already have a explorer window open, want to go to another directory) A bit more cumbersome to add/remove entries to compared to xp favorites. A little slower than xp favorites. Whatever. I'm happy. Thanks guys. I think the answer is a split to John T and Kelbizzle - I've elected to give the answer to John T and +1 to Kelbizzle as I had already mentioned 7stacks.

    Read the article

  • How to configure DNS so that www.example.com goes to one server, *.example.com to another

    - by fishwebby
    I'm trying to set up my domain as follows, but I'm not actually sure if it's possible. I have a domain where I would like the base and www addresses to go to my static site, but others to go to my application server. For example:

    My domain is registered with Dreamhost, and my application is on a VPS at Webbynode. I've set up the domain in Dreamhost to use Webbynode's nameservers:

        ns1.dnswebby.com
        ns2.dnswebby.com
        ns3.dnswebby.com

    And in Webbynode I've set up a wildcard A record to point to the IP address of my VPS:

        *    1.2.3.4    A

    and this works nicely: if I go to app.example.com it resolves to my application server at Webbynode.

    However, what I'd like to do is have example.com and www.example.com go to my static site, hosted back at Dreamhost, whilst still having any other domain go to my app. What I've done to try and achieve this is set up these DNS "NS" entries at Webbynode, trying to get Dreamhost to resolve these domain names:

        (empty)    ns1.dreamhost.com    NS
        (empty)    ns2.dreamhost.com    NS
        (empty)    ns3.dreamhost.com    NS
        www        ns1.dreamhost.com    NS
        www        ns2.dreamhost.com    NS
        www        ns3.dreamhost.com    NS

    (I don't have a fixed IP address at Dreamhost so I can't just set up simple A records.) However, this doesn't work... does anyone have any idea if this is possible, and if so how it could be done?

    Update: I've got this working now, as above for the domain (i.e. registered with Dreamhost, but using Webbynode's nameservers). To delegate the DNS for www.example.com to Dreamhost, I've got the following DNS entries set up:

        www.example.com.    ns1.dreamhost.com.    NS
        www.example.com.    ns2.dreamhost.com.    NS
        www.example.com.    ns3.dreamhost.com.    NS

    (note the full stops at the end)

    And to get example.com to resolve to my static site, I set up a CNAME record:

        example.com.    www.example.com.    CNAME

    So now example.com and www.example.com go to my static site on Dreamhost, and if they change the IP address of my shared hosting it won't affect me, and all other subdomains go to my application server. This seems to work nicely, but if anyone knows a better way to do it I'd be happy to hear it. Thanks to all who replied.

    Read the article

  • Python install issue on Mac OS X

    - by Michael Waterfall
    I have been using the standard python that comes with OS X Lion (2.7.2), but I wanted to build a UCS-4 version to handle 4-byte unicode characters better. I had already installed pip and packages like pytz, virtualenv and virtualenvwrapper, etc., and these are installed in /Library/Python/2.7/site-packages. My $PATH is /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin.

    To build a new version of python on the machine (outside of any project-specific virtual environments, that will come later), I followed the instructions on this article and managed to build it in /usr/local/bin. The problem is that when I launched a new bash window, I got the following virtualenvwrapper error:

        Traceback (most recent call last):
          File "<string>", line 1, in <module>
        ImportError: No module named virtualenvwrapper.hook_loader
        virtualenvwrapper.sh: There was a problem running the initialization hooks.

        If Python could not import the module virtualenvwrapper.hook_loader,
        check that virtualenv has been installed for
        VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python
        and that PATH is set properly.

    The instructions said to move /usr/local/bin to the top of the /etc/paths file, and since then I've noticed some strange issues. I installed pip into /usr/local/bin, and I assumed that since I'm working in /usr/local/bin, and the newly installed python's site-packages is now located in /usr/local/lib/python2.7/site-packages, when I do pip freeze it should be empty, as nothing is installed there yet. However, pip freeze still reports things installed in the old (OS X) site-packages folder.

    Here's some info after the build:

        $ which python
        /usr/local/bin/python
        $ which pip
        /usr/local/bin/pip
        $ echo $PATH
        /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin

    When I uninstall a python package with pip, it removes it from the old site-packages folder as expected. When I install it again, instead of installing it in /usr/local/lib/python2.7/site-packages, it installs it in /Library/Python/2.7/site-packages (verified by attempting to install it again and receiving "Requirement already satisfied (use --upgrade to upgrade): pytz in /Library/Python/2.7/site-packages").

    How is it getting that path for the old site-packages folder? Why won't it install into the correct location for the python install it's using? I'm getting several other issues since promoting /usr/local/bin, but I think if I understand this I'll be able to get somewhere. Can anyone see what's happening? If you need any more info I'll be happy to provide it.
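
    A quick way to see which interpreter and which site-packages each command is actually bound to is to have both python and the interpreter named on the first line of the pip script print their own view of the world. A small sketch that can be run with either one (nothing in it is specific to this setup):

        import sys

        print("executable :", sys.executable)
        print("version    :", sys.version.split()[0])
        print("prefix     :", sys.prefix)
        print("maxunicode :", sys.maxunicode)   # 1114111 for a UCS-4 build, 65535 for UCS-2
        print("site-packages entries on sys.path:")
        for entry in sys.path:
            if "site-packages" in entry:
                print("   ", entry)

    Running it via python and then via the interpreter in pip's shebang line (head -1 $(which pip) shows it) usually makes it clear which install pip is really tied to.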

    Read the article

  • Windows 8.1 Insufficient storage available to create shadow copy

    - by Bob.at.SBS
    [Note: After I entered the problem statement, I found this question, which is apparently the same problem. Maybe one of us will get a good answer...]

    I have used the "Windows 7 File Recovery" tool under Windows 8 to create system image backups to an external USB hard drive. I built a new Windows 8.1 machine, and I want to create my first system image backup of that machine to the same USB hard drive. The "Windows 7 File Recovery" tool is gone in Windows 8.1, but wbAdmin is alive and well:

        wbAdmin start backup -backupTarget:\\?\Volume{2a2b...994f} -allCritical -quiet

    fails with this text displayed:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2013 Microsoft Corporation. All rights reserved.

        Retrieving volume information...
        This will back up (EFI System Partition),(C:),Recovery (300.00 MB) to \\?\Volume{2a2b1255-3a86-11e3-be86-b8ca3a83994f}.
        The backup operation to F: is starting.
        Creating a shadow copy of the volumes specified for backup...

        Summary of the backup operation:
        The backup operation stopped before completing.
        The backup operation stopped before completing.
        Detailed error: ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f)
        Insufficient storage available to create either the shadow copy storage file or other shadow copy data.

    The EFI System Partition is 100 MB. The Recovery Partition is 300 MB. The C partition is 1.72 TB, NTFS, 218 GB used, 1.51 TB free. The destination drive is 1.81 TB, NTFS, 678 GB used, 1.15 TB free.

    I've fiddled with vssadmin resize shadowstorage, with no change in the error. vssadmin list shadowstorage displays:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{37a0...263}\
           Shadow Copy Storage volume: (C:)\\?\Volume{37a0...263}\
           Used Shadow Copy Storage space: 2.39 GB (0%)
           Allocated Shadow Copy Storage space: 2.81 GB (0%)
           Maximum Shadow Copy Storage space: 531 GB (30%)

        Shadow Copy Storage association
           For volume: (F:)\\?\Volume{2a2...94f}\
           Shadow Copy Storage volume: (F:)\\?\Volume{2a2...94f}\
           Used Shadow Copy Storage space: 334 GB (17%)
           Allocated Shadow Copy Storage space: 337 GB (18%)
           Maximum Shadow Copy Storage space: UNBOUNDED (922154758%)

    (Yeah, the "percent calculation" for UNBOUNDED is seriously bogus.)

    I've run SFC /verifyonly and it seems happy. I've verified that the new "Volume Shadow Copy" service starts when I start the backup operation. Any suggestions?
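
    Since the error is raised while creating the shadow copy, one knob the post already fiddles with is the shadow storage ceiling. A small sketch that dumps the current associations and keeps the resize call commented out until the volumes and size are confirmed (run from an elevated prompt; the drive letters and size shown are examples only):

        import subprocess

        def run(args):
            print("> " + " ".join(args))
            out = subprocess.run(args, capture_output=True, text=True).stdout
            print(out)

        # Show what VSS currently has allocated per volume.
        run(["vssadmin", "list", "shadowstorage"])

        # Example resize (commented out): give the source volume's shadow storage
        # an explicit, larger ceiling instead of a percentage. Adjust before use.
        # run(["vssadmin", "resize", "shadowstorage",
        #      "/For=C:", "/On=C:", "/MaxSize=60GB"])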

    Read the article

  • RAID 50 24Port Fast Writes Slow Reads - Ubuntu

    - by James
    What is going on here?! I am baffled.

        serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/dev/zero of=/Volumes/MercuryInternal/test/test.fs bs=4096k count=10000
        10000+0 records in
        10000+0 records out
        41943040000 bytes (42 GB) copied, 57.0948 s, 735 MB/s
        serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/Volumes/MercuryInternal/test/test.fs of=/dev/null bs=4096k count=10000
        10000+0 records in
        10000+0 records out
        41943040000 bytes (42 GB) copied, 116.189 s, 361 MB/s

    OF NOTE: My RAID50 is 3 sets of 8 disks. - This might not be the best config for SPEED.

        OS: Ubuntu 12.04.1 x64
        Hardware Raid: RocketRaid 2782 - 24 Port Controller
        HardDriveType: Seagate Barracuda ES.2 1TB
        Drivers: v1.1 Open Source Linux Drivers.

    So 24 x 1TB drives, partitioned using parted. Filesystem is ext4. I/O scheduler WAS noop but I have changed it to deadline with no seeming performance benefit/cost.

        serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo gdisk -l /dev/sdb
        GPT fdisk (gdisk) version 0.8.1

        Partition table scan:
          MBR: protective
          BSD: not present
          APM: not present
          GPT: present

        Found valid GPT with protective MBR; using GPT.
        Disk /dev/sdb: 41020686336 sectors, 19.1 TiB
        Logical sector size: 512 bytes
        Disk identifier (GUID): 95045EC6-6EAF-4072-9969-AC46A32E38C8
        Partition table holds up to 128 entries
        First usable sector is 34, last usable sector is 41020686302
        Partitions will be aligned on 2048-sector boundaries
        Total free space is 5062589 sectors (2.4 GiB)

        Number  Start (sector)    End (sector)  Size       Code  Name
           1            2048     41015625727   19.1 TiB    0700  primary

    To me this should be working fine. I can't think of anything that would be causing this other than fundamental driver errors? I can't seem to get much, if any, higher than the 361 MB a second. Is this hitting the "SATA2" link speed, which it shouldn't, given it is a PCIe 2.0 card? Or maybe some caching quirk - I do have Write Back enabled. Does anyone have any suggestions? Tests for me to perform? Or if you require more information, I am happy to provide it! This is a video fileserver for editing machines, so we have a preference for FAST reads over writes. I was just expecting more from RAID 50 and 24 drives together...

    EDIT: (hdparm results)

        serveradmin@FILESERVER:/Volumes/MercuryInternal$ sudo hdparm -Tt /dev/sdb

        /dev/sdb:
         Timing cached reads:   17458 MB in  2.00 seconds = 8735.50 MB/sec
         Timing buffered disk reads: 884 MB in  3.00 seconds = 294.32 MB/sec

    EDIT2: (config details) Also, I am using a RAID block size of 256K. I was told a larger block size is better for larger (in my case large video) files.

    EDIT3: (Bonnie++ Results. Would love some guidance with this!)
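
    One cheap experiment for a sequential-read workload like this is to try a few different read-ahead settings on the array device and re-run the same dd read test each time. A sketch of that loop (run as root; the device path, test file and the values to try are assumptions, and it temporarily changes a live setting before restoring it):

        import subprocess
        import time

        DEV = "/dev/sdb"
        TEST_FILE = "/Volumes/MercuryInternal/test/test.fs"   # same file as the dd test above
        READAHEAD_SECTORS = [256, 1024, 4096, 16384]          # in 512-byte sectors

        original = subprocess.run(["blockdev", "--getra", DEV],
                                  capture_output=True, text=True).stdout.strip()
        try:
            for ra in READAHEAD_SECTORS:
                subprocess.run(["blockdev", "--setra", str(ra), DEV], check=True)
                # Drop the page cache so each run actually hits the disks.
                with open("/proc/sys/vm/drop_caches", "w") as fh:
                    fh.write("3\n")
                start = time.time()
                subprocess.run(["dd", "if=" + TEST_FILE, "of=/dev/null", "bs=4M"], check=True)
                elapsed = time.time() - start
                print("read-ahead %5d sectors: %.1f s" % (ra, elapsed))
        finally:
            subprocess.run(["blockdev", "--setra", original, DEV])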

    Read the article

  • Internet connection sharing between Windows XP and Windows 7

    - by Dave
    I bought my lil sister's netbooks for Christmas and I've been having a heck of a time trying to get Internet Connection Sharing to work. The host computer is a Windows XP box and it uses a US Cellular 3G modem dongle thingy to set it's Internet access. Additionally I have a hard wire plugged into the LAN1 port of the router described below. (I tried the WAN port out of desperation but things didn't seem happy that way.) Additionally they have a linksys router (can't remember specific model number, I will find this out) that I was using to take advantage of it's wireless capabilities. Originally thought about updating the router to use dd-wrt, but after reading the instructions it looked like to much of a pita (had to downgrade firmware, then install dd-wrt) to set up, eventually I caved, out of desperation, and ended up successfully installing dd-wrt on the router. I have DHCP turned off on the router, actually all I could select was DHCP forwarder. The netbooks both have windows 7 starter installed on them. Initially, I had the networks joined to a homegroup but I dropped that and everyone is able to see everyone in their respective network explorers. When I turn on Internet Connection Sharing on the host, its IP on the LAN changed to 192.168.0.1, so I arbitrarily decided to assign the router to port 192.168.0.100. When I connect the netbooks they get IPs dynamically. As I stated before, everyone can see everyone in the network explorer, and shares can be accessed. The weird thing is that everyone can ping the router but they cannot ping each others IPs. The status on the netbooks says that there is no Internet Connectivity. Another thing I tried was manually setting the DNS servers on the netbooks to the DNS servers that the host computer has. The funny thing is when I ping an outside domain such as google.com the IP address resolves, however I get no responses from the pings. When I tried plugging the host into the WAN port I could ping the router, nor could I access the router's web access admin. Another thing I tried was turning off the firewall on the netbooks and the firewall off on the host computer for the LAN connection, and they still could not ping each other. Also I thought I should be able to start a remote desktop connection but I couldn't do that either, I also checked to make sure that computers would in fact accept a request for remote desktop connections.

    Read the article

  • GeForce 8800GT not even giving basic output

    - by Sam
    My Dad bought a GeForce 8800GT graphics card quite a long time ago now. It has never worked in his PC. Print out from a dxdiag: System Information Time of this report: 4/13/2010, 19:52:40 Machine name: USER-PC Operating System: Windows Vista™ Home Premium (6.0, Build 6001) Service Pack 1 (6001.vistasp1_gdr.091208-0542) Language: English (Regional Setting: English) System Manufacturer: To Be Filled By O.E.M. System Model: To Be Filled By O.E.M. BIOS: Default System BIOS Processor: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz (4 CPUs), ~2.3GHz Memory: 2046MB RAM Page File: 1045MB used, 3296MB available Windows Dir: C:\Windows DirectX Version: DirectX 10 DX Setup Parameters: Not found DxDiag Version: 6.00.6001.18000 32bit Unicode DxDiag Notes Display Tab 1: No problems found. Sound Tab 1: No problems found. Sound Tab 2: No problems found. Sound Tab 3: No problems found. Input Tab: No problems found. DirectX Debug Levels Direct3D: 0/4 (retail) DirectDraw: 0/4 (retail) DirectInput: 0/5 (retail) DirectMusic: 0/5 (retail) DirectPlay: 0/9 (retail) DirectSound: 0/5 (retail) DirectShow: 0/6 (retail) Display Devices Card name: ATI Radeon HD 2400 PRO Manufacturer: ATI Technologies Inc. Chip type: ATI Radeon Graphics Processor (0x94C3) DAC type: Internal DAC(400MHz) Device Key: Enum\PCI\VEN_1002&DEV_94C3&SUBSYS_00000000&REV_00 Display Memory: 1012 MB Dedicated Memory: 245 MB Shared Memory: 767 MB Current Mode: 1280 x 960 (32 bit) (75Hz) Monitor: Generic PnP Monitor Driver Name: atiumdag.dll,atiumdva.dat,atitmmxx.dll Driver Version: 7.14.0010.0523 (English) DDI Version: 10 Driver Attributes: Final Retail Driver Date/Size: 8/22/2007 02:43:14, 3021312 bytes That info is from the current card that is installed in it and has been installed since its purchase roughly 3-4 years ago. When I physically install the card I put it into a purple slot on the motherboard that the old card was in (if I go into the device manager and select properties on the current card it confirms that the slot is a "PCI Slot 16 (PCI bus 2, device 0, function 0)") and boot up the computer but get absolutely no output. The screen that we have registers that it is connected to something (by not displaying the screen it does when the cable is unplugged) but just remains blank, no output at all. I recently took the card to my University and one of my friends who is better with hardware issues than I am tried it in his system and it worked perfectly. No issues whatsoever. I do not have a spec list for his system but I could get one if you need it. If you need any more information on this issue I will be happy to supply you with it as I am starting to get very annoyed with this problem ^_^

    Read the article

  • How to disable hiddev96 in linux (or tell it to ignore a specific device)

    - by Miky D
    I'm having problems with a CentOS 5.0 system when using a certain USB device. The problem is that the device advertises itself as a HID device and Linux is happy to try to provide support for it. In /var/log/messages I see a line that reads:

        hiddev96: USB HID 1.11 Device [KXX USB PRO] on usb-0000:00:1d.0-1

    My question comes down to: is there a way to tell Linux not to use hiddev96 for that device in particular? If yes, how? If not, what are my options? Can I turn hiddev96 off completely?

    UPDATE: I should probably have been a bit more specific about what is going on. The machine is running CentOS 5.0, and on top of it I'm running VMware Workstation with Windows XP, which is where the USB device is actually supposed to operate. All works fine for other USB devices (i.e. VMware successfully connects the USB device to the guest OS and the OS can use it), but for this particular device VMware connects it to the guest OS and the OS can't read from or write to it. Every attempt locks up the application that is trying to communicate with the device. I have reason to believe that it is because the device is a HID device and there's some contention between the Linux host and the Windows guest OS in accessing the device.

    Below is the output from modprobe -l | grep -i hid, as requested by @Karolis:

        # modprobe -l | grep -i hid
        /lib/modules/2.6.18-53.1.14.el5/kernel/net/bluetooth/hidp/hidp.ko
        /lib/modules/2.6.18-53.1.14.el5/kernel/drivers/usb/misc/phidgetservo.ko
        /lib/modules/2.6.18-53.1.14.el5/kernel/drivers/usb/misc/phidgetkit.ko

    And here is the output of lsmod:

        # lsmod
        Module  Size  Used by
        udf  76997  1
        vboxdrv  65696  0
        autofs4  24517  2
        hidp  23105  2
        rfcomm  42457  0
        l2cap  29633  10 hidp,rfcomm
        tun  14657  0
        vmnet  49980  16
        vmblock  20512  3
        vmmon  945236  0
        sunrpc  144253  1
        cpufreq_ondemand  10573  1
        video  19269  0
        sbs  18533  0
        backlight  10049  0
        i2c_ec  9025  1 sbs
        button  10705  0
        battery  13637  0
        asus_acpi  19289  0
        ac  9157  0
        ipv6  251393  27
        lp  15849  0
        snd_hda_intel  24025  2
        snd_hda_codec  202689  1 snd_hda_intel
        snd_seq_dummy  7877  0
        snd_seq_oss  32577  0
        nvidia  7824032  31
        snd_seq_midi_event  11073  1 snd_seq_oss
        snd_seq  49713  5 snd_seq_dummy,snd_seq_oss,snd_seq_midi_event
        snd_seq_device  11725  3 snd_seq_dummy,snd_seq_oss,snd_seq
        snd_pcm_oss  42945  0
        snd_mixer_oss  19009  1 snd_pcm_oss
        snd_pcm  72133  3 snd_hda_intel,snd_hda_codec,snd_pcm_oss
        joydev  13313  0
        sg  36061  0
        parport_pc  29157  1
        snd_timer  24645  2 snd_seq,snd_pcm
        snd  52421  13 snd_hda_intel,snd_hda_codec,snd_seq_oss,snd_seq,snd_seq_device,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer
        ndiswrapper  170384  0
        parport  37513  2 lp,parport_pc
        hci_usb  20317  2
        ide_cd  40033  1
        tg3  104389  0
        i2c_i801  11469  0
        bluetooth  53925  8 hidp,rfcomm,l2cap,hci_usb
        soundcore  11553  1 snd
        cdrom  36705  1 ide_cd
        serio_raw  10693  0
        snd_page_alloc  14281  2 snd_hda_intel,snd_pcm
        i2c_core  23745  3 i2c_ec,nvidia,i2c_i801
        pcspkr  7105  0
        dm_snapshot  20709  0
        dm_zero  6209  0
        dm_mirror  28741  0
        dm_mod  58201  8 dm_snapshot,dm_zero,dm_mirror
        ahci  23621  4
        libata  115833  1 ahci
        sd_mod  24897  5
        scsi_mod  132685  3 sg,libata,sd_mod
        ext3  123337  3
        jbd  56553  1 ext3
        ehci_hcd  32973  0
        ohci_hcd  23261  0
        uhci_hcd  25421  0
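    For reference, one host-side option here is to detach just that interface from the kernel's HID driver so the guest can claim it. Below is a minimal Python sketch of that idea; it assumes the interface is claimed by the usbhid driver and that the kernel exposes the sysfs bind/unbind files, and it must be run as root. The interface ID in it is a placeholder, not the real ID of the KXX USB PRO.

        # Minimal sketch: detach one USB interface from the usbhid driver via sysfs,
        # so the kernel stops driving it as a HID device. Assumes a kernel that
        # exposes /sys/bus/usb/drivers/usbhid/{bind,unbind}; run as root.
        import os

        DRIVER_DIR = "/sys/bus/usb/drivers/usbhid"
        INTERFACE = "1-1:1.0"   # placeholder bus-port:config.interface ID, not the real device

        def list_claimed():
            # Interfaces currently claimed by usbhid show up as symlinks like "1-1:1.0"
            return [e for e in os.listdir(DRIVER_DIR) if ":" in e]

        def unbind(interface):
            f = open(os.path.join(DRIVER_DIR, "unbind"), "w")
            try:
                f.write(interface)
            finally:
                f.close()

        if __name__ == "__main__":
            print("claimed by usbhid: %s" % ", ".join(list_claimed()))
            unbind(INTERFACE)
            print("unbound %s" % INTERFACE)

    A udev rule doing the same thing at plug-in time would be the persistent version; the script is just enough to test whether the host/guest contention theory holds.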

    Read the article

  • HP MSR 30-10a Router - Route Traffic over Default Route

    - by SteadH
    We have a brand new HP MSR 30-10a router. We have a fairly simple routing situation: two IP blocks, one of which has a route out. We need things on the first block to go through the router and out. I have an old Cisco 2801 router doing the job right now.

    For our example: IP Block 1 is 50.203.110.232/29, the router interface on this block is 50.203.110.237, and the route out is 50.203.110.233. IP Block 2 is 50.202.219.1/27, with the router interface on this block at 50.202.219.20. I have a static route created for:

        0.0.0.0 0.0.0.0 50.203.110.233

    The router seems to understand this. When on the CLI via serial cable, I can ping 8.8.8.8 and hear responses from Google DNS. Woo hoo!

    The issue arrives when any client sits on the IP Block 2 side. I configured my client with a static IP of 50.202.219.15/27, default gateway 50.202.219.20. I can ping myself. I can ping the near side of the router (50.202.219.20), and I can ping the far side of the router (50.203.110.237). I cannot ping anything else in IP Block 1, nor can I ping 8.8.8.8. Here is my configuration file:

        <HP>display current-configuration
        #
        version 5.20.106, Release 2507, Standard
        #
        sysname HP
        #
        domain default enable system
        #
        dar p2p signature-file flash:/p2p_default.mtd
        #
        port-security enable
        #
        undo ip http enable
        #
        password-recovery enable
        #
        vlan 1
        #
        domain system
         access-limit disable
         state active
         idle-cut disable
         self-service-url disable
        #
        user-group system
         group-attribute allow-guest
        #
        local-user admin
         password cipher $c$3$40gC1cxf/wIJNa1ufFPJsjKAof+QP5aV
         authorization-attribute level 3
         service-type telnet
        #
        cwmp
         undo cwmp enable
        #
        interface Aux0
         async mode flow
         link-protocol ppp
        #
        interface Cellular0/0
         async mode protocol
         link-protocol ppp
        #
        interface Ethernet0/0
         port link-mode route
         ip address 50.203.110.237 255.255.255.248
        #
        interface Ethernet0/1
         port link-mode route
         ip address 50.202.219.20 255.255.255.224
        #
        interface NULL0
        #
        ip route-static 0.0.0.0 0.0.0.0 50.203.110.233 permanent
        #
        load xml-configuration
        #
        load tr069-configuration
        #
        user-interface tty 12
        user-interface aux 0
        user-interface vty 0 4
         authentication-mode scheme
        #

    My guess right now is that there is some sort of "permission" needed to use the default route. The manuals haven't turned up a lot in this area that doesn't make the situation much more complicated (but maybe it needs to be more complicated?).

    Background: we use HP switches, and I love the CLI. I bought HP thinking the command line interface would be similar, or at least speak the same language. Whoops! I'd be happy to provide more information or perform any additional tests. Thanks in advance!

    Update 1: The manual mentions routing rules. I hadn't previously added these (since our Cisco 2801 seems to route anything by default). I added:

        ip ip-prefix 1 permit 0.0.0.0 0 less-equal 32

    Alas, still no dice.
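    For reference, a quick sanity check of the addressing above (a small Python 3 sketch, standard library only; the values are copied from the description and nothing in it is router-specific):

        # Sanity check of the addressing plan: which block does each address belong to?
        import ipaddress

        block1 = ipaddress.ip_network("50.203.110.232/29")
        # 50.202.219.1/27 has host bits set, so relax strictness -> 50.202.219.0/27
        block2 = ipaddress.ip_network("50.202.219.1/27", strict=False)

        addresses = {
            "router Ethernet0/0": "50.203.110.237",
            "upstream gateway": "50.203.110.233",
            "router Ethernet0/1": "50.202.219.20",
            "test client": "50.202.219.15",
        }

        for name, addr in addresses.items():
            ip = ipaddress.ip_address(addr)
            print("%-20s %-16s in block1: %-5s in block2: %s"
                  % (name, addr, ip in block1, ip in block2))

    It just confirms that the test client and the upstream gateway sit in different networks and rules out a subnet-mask typo, nothing more.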

    Read the article

  • Windows cannot find the host name "download.microsoft.com" using DNS

    - by joedotnot
    When trying to download a file found on the Microsoft downloads center that starts with, for example, http://download.microsoft.com/download/6/8/7/(some_GUID)/(some_file_name.ext), I get a timeout with "Internet Explorer cannot display the webpage". More information says:

        Internet connectivity has been lost.
        The website is temporarily unavailable.
        The Domain Name Server (DNS) is not reachable.
        The Domain Name Server (DNS) does not have a listing for the website's domain.
        If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section.

    Diagnose Connection Problems says: Windows cannot find the host name "download.microsoft.com" using DNS.

    Bear with me while I expand on the problem. It all started when I tried to download Windows XP Mode for my Windows 7 machine. I went to the Virtual PC site, then through the motions of Windows Genuine Advantage, which validated OK, but when it redirects to grab the file it just times out with the above error. (NB: I also tried with the latest Chrome and Firefox, but no use due to the Genuine Advantage stuff, so I decided to stick with IE.)

    I am behind an ADSL2+ modem router connecting via wireless (Win 7 Pro laptop), so I hop over to the desktop connected via ethernet (Vista Business), and get the same result; I begin to think the download.microsoft.com site is down. So I give it a break and read up on EDNS, flushing the cache, hosts file, etc.

    I try again an hour later on the Win 7 machine, still no go; so I turn off the Win 7 (software) firewall, and lo and behold, I can connect and grab any files from download.microsoft.com (...nice, so we have a Micro$0ft firewall preventing access to a Micro$0ft website; no wonder my auto-updates kept failing, but that's another story).

    But I still am not happy that the desktop connected via ethernet cannot get to download.microsoft.com, even though I turned off all firewalls, defenders, anti-virus, etc. What is so special/specific about the URL download.microsoft.com? Any other site is OK, including www.microsoft.com. Any networking guru know what's REALLY going on, and how can I get the desktop to connect?

    Ping download.microsoft.com - Ping request could not find host download.microsoft.com. Please check the name and try again. Ping google.com or even www.microsoft.com works and gives me an IP address. NB: On the wireless laptop, ping download.microsoft.com works; I get xxxx.ms.akamai.net [202.7.177.33].
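    For reference, a small check run on the misbehaving desktop would confirm whether this is purely a name-resolution failure or something further along. A rough Python sketch (the hostnames are the ones from the question, everything else is generic):

        # Compare DNS resolution and a plain TCP connect for the hosts in question.
        import socket

        HOSTS = ["download.microsoft.com", "www.microsoft.com", "google.com"]

        for host in HOSTS:
            try:
                ip = socket.gethostbyname(host)          # uses the system resolver
            except socket.gaierror as e:
                print("%-26s DNS lookup failed (%s)" % (host, e))
                continue
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(5)
            try:
                s.connect((ip, 80))                      # plain HTTP port
                print("%-26s -> %-15s port 80 connects" % (host, ip))
            except (socket.timeout, socket.error) as e:
                print("%-26s -> %-15s port 80 fails (%s)" % (host, ip, e))
            finally:
                s.close()

    If only download.microsoft.com fails at the DNS step while the others resolve and connect fine, the problem is in that machine's resolver path rather than in its general connectivity.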

    Read the article

< Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >