Search Results

Search found 4691 results on 188 pages for 'ran biron'.

Page 164/188

  • Installing the Newest KeePass for MacOSX from Source

    - by firebush
    I've transitioned our passwords to KeePass. LastPass looks cool, but I prefer a system where we control the database locally rather than it being kept in the cloud. I have a Windows and a Linux system, and both are able to access our KeePass database easily. On my Linux system (Ubuntu), I simply installed KeePass via Synaptic and it just worked.

    So everything was working great, until my wife tried to set things up on her MacBook to access the database. Huge problems. It was so easy on Linux that I didn't expect there to be issues on the Mac. In case it's helpful, she's running a fresh install of Mac OS X 10.5.8, Leopard. We simply went to the download site for KeePass (http://keepass.info/download.html) and clicked on the link titled "KeePass 2.x for Mac OS X", from which we retrieved Mono 2.10.5 and KeePass 2.18 (the packages posted there at the time of this writing). Mono installed without problems (at least, none that we saw). She opened the KeePass image and dragged it to the Applications folder, unpacking it there. Following the KeePass installation instructions, she opened a terminal, changed to the directory in /Applications containing KeePass.exe, and ran it through Mono:

        mono KeePass.exe

    No application opens at all - we see a blip for it, but then it immediately goes away, indicating to us that it is crashing. Also disconcerting, I see that people are throwing fits about copy-and-paste not working for KeePass 2.18 on Mac OS X. From what I can tell, 2.19 addresses the copy/paste issue, and I'm hoping that version will solve all our issues. So here's my question: how can I try out 2.19 on her system? It doesn't seem to be packaged the way 2.18 is, but we're not scared of building it. I see that the source for 2.19 is available (at the bottom of the download page). Can I just download that to her machine somewhere and run something to build it? I'm familiar with automake but not with building .NET source, so please answer gently if this is stupidly easy. :^)

    btw: tomorrow's my wife's birthday, and this is getting her down. If you know how to navigate these issues, it would be a nice birthday gift for her. Thanks in advance!

    Update: I'll post this since it might be helpful to someone else: I got KeePass 2.18 to run by updating Mono to 2.10.9 (rather than the 2.10.5 given by the site above). With the later version of Mono, it runs without crashing. And yet, I do see the copy and paste issue that others see: I can open a database on her machine, but the incorrect data gets copied. So again, can someone help me install KeePass 2.19? Thanks!
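
    Building KeePass 2.x from source on the Mac mostly means compiling the C# solution with Mono's build tool. Below is a minimal, untested sketch - the zip file name, the solution file's location, and the output path under Build/ are all assumptions about how the 2.19 source archive is laid out, so adjust to whatever you actually find after unzipping:

        # assumes the 2.19 source zip and Mono >= 2.10.9 are already on the machine
        cd ~/Downloads
        unzip KeePass-2.19-Source.zip -d keepass-src    # archive name is a guess
        cd keepass-src
        xbuild /p:Configuration=Release KeePass.sln     # Mono's MSBuild-compatible builder
        # the compiled KeePass.exe should land somewhere under Build/;
        # locate it and launch it the same way as the packaged version:
        find . -name KeePass.exe
        mono ./Build/KeePass/Release/KeePass.exe        # exact path is a guess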

    Read the article

  • centos 6 ps aux hangs up

    - by Guntis
    I have a problem with my server. The server is running CentOS 6 (CloudLinux Server release 6.2); uname -a: 2.6.32-320.4.1.lve1.1.4.el6.x86_64. It is a KVM guest; the host runs Debian 6. If I run the command ps aux, it gets stuck on a random process (it shows only some of the processes). The top command works fine; htop doesn't work either (black screen).

        top - 12:11:51 up 34 min, 1 user, load average: 4.26, 6.71, 16.15
        Tasks: 201 total, 7 running, 192 sleeping, 0 stopped, 2 zombie
        Cpu(s): 7.9%us, 2.8%sy, 0.0%ni, 87.5%id, 1.6%wa, 0.0%hi, 0.2%si, 0.0%st
        Mem: 9862044k total, 2359484k used, 7502560k free, 171720k buffers
        Swap: 10485720k total, 0k used, 10485720k free, 1336872k cached

    The server has one Intel(R) Xeon(R) CPU E5606 @ 2.13GHz. free -m:

                     total       used       free     shared    buffers     cached
        Mem:          9630       2336       7293          0        170       1324
        -/+ buffers/cache:        841       8789
        Swap:        10239          0      10239

    php -v:

        PHP 5.3.19 (cli) (built: Nov 28 2012 10:03:07)
        Copyright (c) 1997-2012 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
            with the ionCube PHP Loader v4.2.2, Copyright (c) 2002-2012, by ionCube Ltd.,
            and with Zend Guard Loader v3.3, Copyright (c) 1998-2010, by Zend Technologies
            with Suhosin v0.9.33, Copyright (c) 2007-2012, by SektionEins GmbH

    mysql: Server version: 5.1.63-cll. From php -i:

        disable_functions => apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_exec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, openlog, passthru, php_uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, posix_mkfifo, posix_setpgid, posix_setsid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exec, syslog, system, xmlrpc_entity_decode, xmlrpc_server_create, putenv, show_source, mail => [the same list repeated as the master value]
        ...
        suhosin.executor.disable_eval => Off => Off
        suhosin.executor.eval.blacklist => include, include_once, require, require_once, curl_init, fpassthru, base64_encode, base64_decode, mail, exec, system, proc_open, leak, syslog, pfsockopen, shell_exec, ini_restore, symlink, stream_socket_server, proc_nice, popen, proc_get_status, dl, pcntl_exec, pcntl_fork, pcntl_signal, pcntl_waitpid, pcntl_wexitstatus, pcntl_wifexited, pcntl_wifsignaled, pcntl_wifstopped, pcntl_wstopsig, pcntl_wtermsig, socket_accept, socket_bind, socket_connect, socket_create, socket_create_listen, socket_create_pair, link, register_shutdown_function, register_tick_function, gzinflate => [the same list repeated as the master value]

    Sometimes I cannot kill an httpd process: I run kill -9 PID several times, and nothing happens. PHP runs via suPHP. I read somewhere that this can indicate a trojan. I ran strace ps aux and it stops on:

        open("/proc/PID/cmdline", O_RDONLY)

    If I reboot the server, the problem is gone, but after some time it comes back again... :( Thanks.
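
    A process whose /proc/PID/cmdline read blocks forever is usually stuck in uninterruptible sleep ("D" state), which is also why kill -9 does nothing to it. A hedged way to list the stuck processes without touching cmdline - a minimal sketch; comm fields containing spaces will mis-parse - is to read /proc/PID/stat instead, which normally stays readable:

        # list PIDs in uninterruptible sleep; these usually point at the blocked
        # I/O (or the CloudLinux/KVM layer) that is making ps hang
        for d in /proc/[0-9]*; do
          read -r pid comm state _ < "$d/stat" 2>/dev/null
          [ "$state" = "D" ] && echo "$pid $comm"
        done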

    Read the article

  • Why is my new Phenom II 965 BE not significantly faster than my old Athlon 64 X2 4600+?

    - by Software Monkey
    I recently rebuilt my 5-year-old computer. I upgraded all core components, in particular from an Athlon 64 X2 4600+ at 2.4 GHz with DDR2 800 to a Phenom II 965 BE (quad core) at 3.6 GHz with DDR3 1333 (actually 1600, but testing consistently detected memory errors at 1600). The motherboard is also much newer and better. The HDDs (x3), DVD writer and card reader are the same. The BIOS memory config is auto-everything except the base timing, which I overrode to 1T instead of 2T. The BIOS CPU multiplier is slightly overclocked to 3.6 GHz from the stock 3.4 GHz.

    I noticed compiling Java is slower than I expected. As it happens, I have some (single-threaded) Java pattern-matching code which is CPU and memory bound, and for which I have performance numbers recorded on a number of hardware platforms, including my old system. So I did a test run on the new equipment and was stunned to find that the numbers are only slightly better than on my old system, about 25%.

    The data set it operates on is a 148,975-character array, which should easily fit in caches, but in any event the new CPU has larger caches all around. The system was, of course, otherwise idle for the test, and the test run is a timed 10 seconds to eliminate scheduling anomalies. A long while ago, when I upgraded only the memory from DDR2 667 to DDR2 800, there was no change in the performance of this test, which subjectively supports that the test cycle does not need to (significantly) access main memory, though it is creating and garbage-collecting a large number of objects in the process (low millions of matches are found for the pattern set).

    I am about 99.999% certain the code hasn't changed since I last ran it on 2009-03-17 - but I can't easily retest the old hardware, because it is currently in pieces on my work-bench waiting to be built into a new computer for my kids. Note that Windows (XP) reports a CPU speed of 795 MHz unless I have something running. With stuff running, it seems to jump all over the place each time I use Alt+Pause to display the system properties: everywhere from 795 MHz to 3.4 GHz.

    So why might my shiny new hardware be under-performing so badly?

    EDIT: The old memory was Mushkin DDR2 800 with timings set to auto, which should have been 5-5-5-12. The new memory is Corsair DDR3 1600, running at 1333 with timings also on auto, which are 9-9-9-21. In both cases they are a paired set of dual-channel DIMMs. I was waiting to ensure my system was stable before tweaking memory timings.

    Read the article

  • HAProxy causing delay

    - by user1221444
    I am trying to configure HAProxy to do load balancing for a custom webserver I created. Right now I am noticing an increasing delay with HAProxy as the size of the response message increases. For example, I ran four different tests; here are the results:

        Response 15kb through HAProxy:    Avg. response time: .34 secs, Transaction rate: 763 trans/sec,  Throughput: 11.08 MB/sec
        Response 2kb through HAProxy:     Avg. response time: .08 secs, Transaction rate: 1171 trans/sec, Throughput: 2.51 MB/sec
        Response 15kb directly to server: Avg. response time: .11 secs, Transaction rate: 1046 trans/sec, Throughput: 15.20 MB/sec
        Response 2kb directly to server:  Avg. response time: .05 secs, Transaction rate: 1158 trans/sec, Throughput: 2.48 MB/sec

    All transactions are HTTP requests. As you can see, the gap in response times is much bigger when the response is large than when it is small. I understand there will be a slight delay when using HAProxy. Not sure if it matters, but the tests were run using siege, and during the tests there was only one server behind HAProxy (the same one used in the direct-to-server tests). Here is my haproxy.config file:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 10000
            user haproxy
            group haproxy
            daemon
            #debug

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            option httpclose
            maxconn 10000
            contimeout 10000
            clitimeout 50000
            srvtimeout 50000
            balance roundrobin
            stats enable
            stats uri /stats

        listen lb1 10.1.10.26:80
            maxconn 10000
            server app1 10.1.10.200:8080 maxconn 5000

    I couldn't find much in terms of options in this file that would help my problem. I have heard suggestions that I may have to adjust a few of my sysctl settings, but I could not find a lot of information on this; most documentation covers sysctl tuning for Linux 2.4 and 2.6, and I am running 3.2 (Ubuntu Server 12.04), which seems to do auto-tuning, so I have no clue what I should or shouldn't be changing. Most settings changes I tried had no effect, or a negative effect, on performance.

    Just a notice: this is a very preliminary test, and my hope is that at deployment time my HAProxy will be able to balance 10k-20k requests/sec to many servers, so if anyone could provide information to help me reach that goal, it would be much appreciated. Thank you very much for any information you can provide. And if you need any more information from me, please let me know; I will get you anything I can.
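
    On the sysctl side, a few commonly cited starting points for proxy boxes are sketched below. These are hedged suggestions, not measured tunings for this workload - test each one separately and check the documentation for your 3.2 kernel before keeping any of them:

        # deeper listen/accept backlogs for connection bursts
        sysctl -w net.core.somaxconn=4096
        sysctl -w net.core.netdev_max_backlog=4096
        # let the proxy reuse TIME_WAIT sockets for new outbound connections
        sysctl -w net.ipv4.tcp_tw_reuse=1
        # widen the ephemeral port range for proxy-to-backend connections
        sysctl -w net.ipv4.ip_local_port_range="1024 65535"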

    Read the article

  • 64-bit linux kernel only seeing 3 of 4GB after upgrade...

    - by Blaine
    Hey everyone. I am running Ubuntu 9.04 64-bit on my MacBook. I had 2GB of RAM before, and everything ran great. I just upgraded to 2x2GB (4GB), but my system only sees 3GB of it. OS X, which I am dual booting, sees all 4GB. Also, my video performance is incredibly lacking: before the upgrade my compiz benchmark was a full 80fps, and now it is at 22fps with very choppy window dragging. Has anyone ever heard of this on a 64-bit kernel? I just don't quite understand what could be the issue.

        $ uname -a
        Linux macbook 2.6.28-15-generic #49-Ubuntu SMP Tue Aug 18 19:25:34 UTC 2009 x86_64 GNU/Linux

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:          2953       1031       1921          0        114        427
        -/+ buffers/cache:        489       2463
        Swap:         7812          0       7812

        $ lsmod
        Module                  Size  Used by
        i915                   77960  2
        drm                   123232  3 i915
        binfmt_misc            18572  1
        ppdev                  16904  0
        btusb                  21784  2
        bridge                 63776  0
        stp                    11140  1 bridge
        bnep                   22912  2
        vboxnetadp            109356  0
        vboxnetflt            116972  0
        vboxdrv              1721612  1 vboxnetflt
        uvcvideo               69640  0
        compat_ioctl32         18304  1 uvcvideo
        videodev               45184  2 uvcvideo,compat_ioctl32
        v4l1_compat            23940  2 uvcvideo,videodev
        lp                     19588  0
        parport                49584  2 ppdev,lp
        snd_hda_intel         557492  3
        snd_pcm_oss            52352  0
        snd_mixer_oss          24960  1 snd_pcm_oss
        snd_pcm                99464  2 snd_hda_intel,snd_pcm_oss
        arc4                   10240  2
        snd_seq_dummy          11524  0
        ecb                    11392  2
        snd_seq_oss            41984  0
        snd_seq_midi           15744  0
        snd_rawmidi            33920  1 snd_seq_midi
        snd_seq_midi_event     16512  2 snd_seq_oss,snd_seq_midi
        snd_seq                66272  6 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
        ath9k                 310584  0
        snd_timer              34064  2 snd_pcm,snd_seq
        snd_seq_device         16276  5 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
        mac80211              251528  1 ath9k
        iTCO_wdt               21712  0
        iTCO_vendor_support    12420  1 iTCO_wdt
        joydev                 20992  0
        video                  29204  0
        snd                    78920  15 snd_hda_intel,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq_oss,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        applesmc               37700  0
        output                 11648  1 video
        soundcore              16800  1 snd
        pcspkr                 11136  0
        cfg80211               43680  1 mac80211
        appletouch             19972  0
        isight_firmware        11520  0
        input_polldev          12688  1 applesmc
        intel_agp              39408  1
        snd_page_alloc         18704  2 snd_hda_intel,snd_pcm
        led_class              13064  2 ath9k,applesmc
        hid_apple              15872  0
        usbhid                 47040  0
        ohci1394               42164  0
        ieee1394              108288  1 ohci1394
        sky2                   63364  0
        fbcon                  49792  0
        tileblit               11264  1 fbcon
        font                   17024  1 fbcon
        bitblit                14464  1 fbcon
        softcursor             10368  1 bitblit

    Some information from dmesg:

        [  795.820163] ACPI: EC: GPE storm detected, transactions will use polling mode
        [ 1762.709516] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 1763.078130] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 2362.760889] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 2416.352084] ACPI: EC: missing confirmations, switch off interrupt mode.
        [ 3718.721095] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 3719.108914] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 4318.773266] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 9513.813066] CE: hpet increasing min_delta_ns to 15000 nsec
        [ 9693.815684] npviewer.bin[6736]
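
    A hedged first check: see what memory map the firmware actually handed to the kernel. If the BIOS-provided e820 map itself stops at roughly 3GB, the firmware (not the 64-bit kernel) is hiding the RAM remapped above the PCI hole, and a firmware update or boot-loader workaround is where to look:

        # the e820 lines near the top of the boot log show the firmware's memory map
        dmesg | grep -i -e e820 -e "BIOS-provided"
        # confirm both DIMMs are detected by the firmware at all
        sudo dmidecode -t memory | grep -i size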

    Read the article

  • Pinning based on origin of a reprepro repository.

    - by Shtééf
    I'm on Ubuntu 10.04, and trying to set up a repository using reprepro. I'd also like to pin everything in that repository to be preferred over anything else, even if the packages are older versions. (It will only contain a select set of packages.) However, I cannot seem to get the pinning to work, and believe it has something to do with the repository side of things rather than the apt configuration on the client. I've taken the following steps to set up my repository:

        1. Installed a web server (my personal choice here is Cherokee),
        2. Created the directory /var/www/apt/,
        3. Created the file conf/distributions, like so:

           Origin: Shteef
           Label: Shteef
           Suite: lucid
           Version: 10.04
           Codename: lucid
           Architectures: i386 amd64 source
           Components: main
           Description: My personal repository

        4. Ran reprepro export from the /var/www/apt/ directory.

    Now on any other machine, I can add this (empty) repository over HTTP to my /etc/apt/sources.list, and run apt-get update without any errors:

        Ign http://archive.lan lucid Release.gpg
        Ign http://archive.lan/apt/ lucid/main Translation-en_US
        Get:1 http://archive.lan lucid Release [2,244B]
        Ign http://archive.lan lucid/main Packages
        Ign http://archive.lan lucid/main Sources
        Ign http://archive.lan lucid/main Packages
        Ign http://archive.lan lucid/main Sources
        Hit http://archive.lan lucid/main Packages
        Hit http://archive.lan lucid/main Sources

    In my case, now I want to use an old version of Asterisk, namely Asterisk 1.4. I rebuilt the asterisk-1:1.4.21.2~dfsg-3ubuntu2.1 package from Ubuntu 9.04 (with some small changes to fix dependencies) and uploaded it to my repository. At this point I can see the new package in aptitude, but it naturally prefers the newer Asterisk 1.6 currently in the Ubuntu 10.04 repositories. To try and fix that, I have created /etc/apt/preferences.d/personal like so:

        Package: *
        Pin: release o=Shteef
        Pin-Priority: 1000

    But when I try to install the asterisk package, it will still prefer the 1.6 version over my own 1.4 version. This is what apt-cache policy asterisk shows:

        asterisk:
          Installed: (none)
          Candidate: 1:1.6.2.5-0ubuntu1
          Version table:
             1:1.6.2.5-0ubuntu1 0
                500 http://nl.archive.ubuntu.com/ubuntu/ lucid/universe Packages
             1:1.4.21.2~dfsg-3ubuntu2.1shteef1 0
                500 http://archive.lan/apt/ lucid/main Packages

    Clearly, it is not picking up my pin. In fact, when I run just apt-cache policy, I get the following:

        Package files:
         100 /var/lib/dpkg/status
             release a=now
         500 http://archive.lan/apt/ lucid/main Packages
             origin archive.lan
         500 http://security.ubuntu.com/ubuntu/ lucid-security/multiverse Packages
             release v=10.04,o=Ubuntu,a=lucid-security,n=lucid,l=Ubuntu,c=multiverse
             origin security.ubuntu.com
        [...]

    Unlike Ubuntu's repository, apt doesn't seem to pick up a release line at all for my own repository. I suspect this is why I can't pin on release o=Shteef in my preferences file, but I can't find any noticeable difference between my repository's Release files and Ubuntu's that would cause this. Is there a step I've missed or a mistake I've made in setting up my repository?
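
    One hedged workaround while the missing release line is being tracked down: apt-cache policy does print an "origin archive.lan" entry for the repository, and apt can pin on the site hostname (lowercase "origin", which matches the host the packages come from rather than the Release file's Origin field). Assuming the hostname stays archive.lan, this variant of /etc/apt/preferences.d/personal should take effect even without a usable Release file:

        Package: *
        Pin: origin "archive.lan"
        Pin-Priority: 1000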

    Read the article

  • IE8 Refuses to run Javascript from Local Hard Drive

    - by Josh Stodola
    I have a problem that just started at work recently, and the network manager is certain he did not change anything with the group policy. Anyway, here is a detailed description of the problem. My machine is Windows XP SP3, and I use IE8 to browse. We have McAfee anti-virus software that I am unable to configure. I use the following file to test...

        <!DOCTYPE html>
        <html>
        <head>
            <title>Javascript Test</title>
        </head>
        <body>
            <script type="text/javascript">
                document.write("<h1>PASS</h1>");
            </script>
            <noscript>
                <h1>FAIL</h1>
            </noscript>
        </body>
        </html>

    When I open this file from the C: drive, it fails every time. If I execute it anywhere else (local/remote web server or on a mapped network drive), it works just fine. When I am simply browsing the Internet, Javascript on web sites works just fine. It is only failing on files running from my C: drive. Additionally, I have had a couple of other programmers in the department try this file on their C: drives, and it works fine for them. So I don't believe it is a group policy thing. I need to fix this because I do extensive testing from my C: drive, and I am accustomed to doing so. I don't want to get into the habit of moving files to a different drive just to test. Things I have tried:

        - Enabled "Allow Active Content to Run Files on My Computer" in Options | Advanced | Security
        - Enabled "Allow Active Scripting" in Options | Security | Custom Level
        - Verified that "Script" was not checked as disabled in the Developer Toolbar
        - Added localhost to Trusted Sites in Options
        - Disabled McAfee completely (momentarily, with help from the network admin)
        - Used an older DOCTYPE in my test HTML page
        - Re-installed IE8 completely
        - Ran regsvr32 on JScript.dll
        - Slammed keyboard

    I am sure that there is a setting somewhere that will fix this problem, possibly in the registry. I would not be surprised if it was related to the Developer Toolbar. At this point I do not know where else to look. Can anyone help me resolve this problem? EDIT: Regardless of the bounty, this issue is still ongoing.
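
    One thing worth ruling out (a hedged guess, since by itself it doesn't explain why your coworkers' machines behave differently): IE's Local Machine Zone Lockdown blocks active content in files opened from the local drive unless the page carries a Mark of the Web. Adding this comment as the very first line of the test file tells IE8 to treat it as Internet-zone content:

        <!-- saved from url=(0014)about:internet -->

    If the file then shows PASS, the lockdown configuration on your machine (the FEATURE_LOCALMACHINE_LOCKDOWN feature control in the registry) is the place to compare against a working machine.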

    Read the article

  • PC monitors shut off and system hangs while playing 3D games, but sound continues - Diagnosis?

    - by Jon Schneider
    Two days ago, I started running into a problem with my Windows PC:

        - The PC's two connected monitors simultaneously lose signal and go black (as though the PC had been powered off).
        - The keyboard's Num Lock, Caps Lock, and Scroll Lock lights become "stuck" in their current state, as though the PC is hung. (For example, the Num Lock light on the keyboard remains lit regardless of me pressing the Num Lock key repeatedly.) No keyboard input does anything (Ctrl+Alt+Del, Ctrl+Shift+Esc, Ctrl+C, etc.).
        - However, whatever sound/music the PC was playing continues to play, and the PC's fans continue running, so the PC hasn't powered itself off or rebooted itself.

    Opening up the case, the graphics card is pretty hot to the touch. I had this happen 3 times in one evening. In all cases, I was playing a game with 3D graphics when the problem occurred (Torchlight, Minecraft, Magic: The Gathering 2012, Avadon: The Black Fortress demo). I have yet to have the problem happen when I'm not playing a game. This system had been running stable for about 2.5 years prior to this, and I didn't make any changes to the system before the problem started occurring.

    System specs:

        OS: Windows 7 64-bit
        Processor: Intel Core 2 Duo E7200 Wolfdale 2.53GHz
        Video Card: XFX GeForce 9800 GT 512 MB
        Motherboard: Foxconn P45A-S LGA 775 Intel ATX
        RAM: Corsair 4 GB (2x 2GB) DDR2-800 (PC2 6400)
        Full specs: New PC 2008

    Troubleshooting tried so far (the problem occurred again after taking each of these steps, one at a time):

        - Updated the video drivers with the latest drivers from NVIDIA's site.
        - Opened up the case and cleaned out the video card and processor fans (both were pretty dirty).
        - Installed and ran temperature monitoring software. The processor idles at about 50 degrees C, and goes up to about 63 degrees C while playing a game (seems on the warm side, but not excessively so?). The software wasn't able to report the temperature of the GPU - not sure this particular GPU supports software temperature readout?

    My initial diagnosis is that maybe the GPU is on its last legs (given that it seems to be running pretty hot, and the problem only occurs while playing 3D games). Does this seem likely? Or is it likely that this problem is caused by the processor, RAM, or motherboard? Or could this be a software issue of some kind? Thanks for any advice!

    Read the article

  • Win7 x64 unresponsive for a minute or so. HD failing?

    - by Gaia
    On a fully updated Win7 x64, every so often the system stalls for a minute or so. This has been going on for a couple of months now. By stalling I mean the mouse responds and I can move windows around, but any window, any program, that is open becomes whitish when I select it AND any new programs will not open. It doesn't matter what kind of program it is. When the stall stops, all the clicks I made (to open new programs, for example) take effect.

    Nothing shows up consistently (as in every time this happens) in the event log. Today, though, I was able to find something, but it doesn't reveal much other than "the system was unresponsive": a 7009 for "A timeout was reached (30000 milliseconds) while waiting for the Windows Error Reporting Service service to connect."

    It doesn't matter whether I have any USB devices plugged in or not. I've run Microsoft Security Essentials and Malwarebytes. While the machine is unresponsive, I've noticed that drive D (the other partition on the single internal HD in this laptop) is displayed oddly in Explorer (screenshot not reproduced here); this never occurs with drive C or any other drive on the machine. A SMART report for the physical drive and a read benchmark by HD Tune 5 Pro - probably the most telling piece of the puzzle - were attached as screenshots. Isn't the benchmark alone enough to see there is a problem with the drive, regardless of whether the unresponsiveness is caused by that purported problem?

    Here is a short hardware report:

        Computer: LENOVO ThinkPad T520
        CPU: Intel Core i5-2520M (Sandy Bridge-MB SV, J1) 2500 MHz (25.00x100.0) @ 797 MHz (8.00x99.7)
        Motherboard: LENOVO 423946U
        Chipset: Intel QM67 (Cougar Point) [B3]
        Memory: 8192 MBytes @ 664 MHz, 9.0-9-9-24
        - 4096 MB PC10600 DDR3 SDRAM - Samsung M471B5273CH0-CH9
        - 4096 MB PC10600 DDR3 SDRAM - Patriot Memory (PDP Systems) PSD34G13332S
        Graphics: Intel Sandy Bridge-MB GT2+ - Integrated Graphics Controller [D2/J1/Q0] [Lenovo]
        Intel HD Graphics 3000 (Sandy Bridge GT2+), 3937912 KB
        Drive: ST320LT007, 312.6 GB, Serial ATA 3Gb/s
        Sound: Intel Cougar Point PCH - High Definition Audio Controller [B2]
        Network: Intel 82579LM (Lewisville) Gigabit Ethernet Controller
        Network: Intel Centrino Advanced-N 6205 AGN 2x2 HMC
        OS: Microsoft Windows 7 Professional (x64) Build 7601

    The drive is less than 1 year old. Do I have a defective drive? Seagate Tools diag says there is nothing wrong with the drive...

    UPDATE: I noticed that the Windows Error Reporting service entered the running state and then the stopped state, and the gap between the two events was exactly 2 minutes. Which error it was trying to report, I don't know. I checked the "Reliability Monitor" and it shows no errors to be reported. I've disabled the Windows Error Reporting service to see if the problem stops.
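
    If you want a second opinion on the drive beyond the vendor diagnostic, smartmontools runs on Windows too. A hedged sketch - it assumes smartmontools is installed and that the disk enumerates as /dev/sda in its device naming, which may differ on your machine:

        smartctl -a /dev/sda
        rem Nonzero raw values for Reallocated_Sector_Ct (attribute 5),
        rem Current_Pending_Sector (197) or Offline_Uncorrectable (198)
        rem would corroborate a failing disk even when the vendor tool passes it.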

    Read the article

  • usb_modeswitch not switching

    - by deniz
    After I upgraded from kernel 2.6.18 to 3.5.3, usb_modeswitch stopped working for me. Although lsusb shows my USB modem, usb_modeswitch does not switch it; instead of switching my modem, it says "No devices in default mode found. Nothing to do. Bye." Can you offer a solution? My system information and the output of lsusb, dmesg, usb-devices and usb_modeswitch are below.

        Kernel: Linux 3.5.3
        usb_modeswitch: 1.2.3-1
        usb_modeswitch-data: 20120120-1
        usbutils: 006-1
        libusb: 1.0.8-0.1

        root@localhost$ lsusb
        Bus 002 Device 029: ID 12d1:1446 Huawei Technologies Co., Ltd.

        root@localhost$ dmesg
        [70112.477080] usb 2-1.4: new high-speed USB device number 30 using ehci_hcd
        [70112.567757] scsi49 : usb-storage 2-1.4:1.0
        [70112.567842] scsi50 : usb-storage 2-1.4:1.1
        [70113.571433] scsi 49:0:0:0: CD-ROM HUAWEI Mass Storage 2.31 PQ: 0 ANSI: 2
        [70113.572304] scsi 50:0:0:0: Direct-Access HUAWEI TF CARD Storage PQ: 0 ANSI: 2
        [70113.574169] sr0: scsi-1 drive
        [70113.574223] sr 49:0:0:0: Attached scsi CD-ROM sr0
        [70113.574250] sr 49:0:0:0: Attached scsi generic sg1 type 5
        [70113.574350] sd 50:0:0:0: Attached scsi generic sg2 type 0
        [70113.577173] sd 50:0:0:0: [sdb] Attached SCSI removable disk

        root@localhost$ usb-devices
        T: Bus=02 Lev=02 Prnt=02 Port=03 Cnt=01 Dev#= 30 Spd=480 MxCh= 0
        D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P: Vendor=12d1 ProdID=1446 Rev=00.00
        S: Manufacturer=Huawei Technologies
        S: Product=HUAWEI Mobile
        C: #Ifs= 2 Cfg#= 1 Atr=c0 MxPwr=500mA
        I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage
        I: If#= 1 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage

        root@localhost$ cat /etc/usb_modeswitch.d/12d1\:1446
        # Huawei, newer modems
        TargetVendor= 0x12d1
        TargetProductList="1001,1406,140b,140c,1412,141b,1433,1436,14ac,1506"
        MessageContent="55534243123456780000000000000011062000000100000000000000000000"

        root@localhost$ usb_modeswitch -c /etc/usb_modeswitch.d/12d1:1446 -v 12d1 -p 1446 -W
        * usb_modeswitch: handle USB devices with multiple modes
        * Version 1.2.3 (C) Josua Dietze 2012
        * Based on libusb0 (0.1.12 and above)
        ! PLEASE REPORT NEW CONFIGURATIONS !

        DefaultVendor=  0x12d1
        DefaultProduct= 0x1446
        TargetVendor=   0x12d1
        TargetProduct=  not set
        TargetClass=    not set
        TargetProductList="1001,1406,140b,140c,1412,141b,1433,1436,14ac,1506"
        DetachStorageOnly=0
        HuaweiMode=0
        SierraMode=0
        SonyMode=0
        QisdaMode=0
        GCTMode=0
        KobilMode=0
        SequansMode=0
        MobileActionMode=0
        CiscoMode=0
        MessageEndpoint=  not set
        MessageContent="55534243123456780000000000000011062000000100000000000000000000"
        NeedResponse=0
        ResponseEndpoint= not set
        InquireDevice enabled (default)
        Success check disabled
        System integration mode disabled

        usb_set_debug: Setting debugging level to 15 (on)
        usb_os_find_busses: Skipping non bus directory devices
        usb_os_find_busses: Skipping non bus directory drivers
        usb_os_find_busses: Skipping non bus directory uevent
        usb_os_find_busses: Skipping non bus directory drivers_probe
        usb_os_find_busses: Skipping non bus directory drivers_autoprobe
        Looking for target devices ...
        No devices in target mode or class found
        Looking for default devices ...
        No devices in default mode found. Nothing to do. Bye.

    Thanks in advance.
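
    One hedged experiment while debugging: bypass the config file entirely and pass both the default and a target ID on the command line (1506 as the target product is an assumption, picked from the TargetProductList; -W just keeps the output verbose):

        usb_modeswitch -W -v 12d1 -p 1446 -V 12d1 -P 1506 \
          -M '55534243123456780000000000000011062000000100000000000000000000'
        # then re-check which ID the modem now reports:
        lsusb | grep 12d1

    If even this still reports "No devices in default mode found", then usb_modeswitch's libusb scan isn't seeing the device at all, which would point at the libusb 0.1/1.0 layering on the new kernel rather than at the configuration.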

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out that the performance problems with a Mumble server, which I described in a previous question, are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this or how to further debug it, I'm asking for your ideas on the topic.

    I'm running a Hetzner EX4S root server as KVM hypervisor. The server is running Debian Wheezy Beta 4, and KVM virtualisation is utilized through libvirt. The server has two different 3TB hard drives, as one of the hard drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices: one for the unencrypted boot partition and one as container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES-encrypted LUKS container. Inside the LUKS container there is an LVM physical volume. The hypervisor's VFS is split over three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack:

        sda (Physical HDD)
        - md0 (RAID1)
        - md1 (RAID1)
        sdb (Physical HDD)
        - md0 (RAID1)
        - md1 (RAID1)

        md0 (Boot RAID)
        - ext4 (/boot)

        md1 (Data RAID)
        - LUKS container
          - LVM physical volume
            - LVM volume hypervisor-root
            - LVM volume hypervisor-home
            - LVM volume hypervisor-swap
            - ... (Virtual machine volumes)

    The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too. We have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume, too. The volumes are accessed through virtio drivers in native writethrough mode. The I/O scheduler (elevator) on both the hypervisor and the guests is set to deadline instead of the default cfq, as that happened to be the most performant setup according to our bonnie++ test series.

    The I/O latency problem is experienced not only inside the guest systems but also affects services running on the hypervisor system itself. The setup seems complex, but I'm sure that the basic structure is not what causes the latency problems, as my previous server ran for four years with almost the same basic setup, without any of the performance problems. On the old setup the following things were different:

        - Debian Lenny was the OS for both hypervisor and almost all guests
        - Xen software virtualisation (therefore no virtio, also)
        - no libvirt management
        - Different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore)
        - We had no IPv6 connectivity
        - Neither in the hypervisor nor in guests did we have noticeable I/O latency problems

    According to the datasheets, the current hard drives and the ones in the old machine have an average latency of 4.12 ms.
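
    A hedged first step for narrowing down the layer: watch per-device latency on the hypervisor while a stall is actually happening. If await/%util spikes on one physical disk but not the other, a single failing drive is suspect; if only the dm-* devices show high latency while sda/sdb look fine, the LUKS/LVM layers deserve the attention:

        # from the sysstat package; extended stats, 1-second samples
        iostat -xd 1 /dev/sda /dev/sdb /dev/md1 /dev/dm-*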

    Read the article

  • Expectations for NTFS file recovery

    - by Fred Hamilton
    Yesterday I booted my XP system, and as I looked up a minute later I saw the light blue screen and tail end of that pre-boot disk check Windows sometimes does if it finds an error (or was previously told to run a disk check during the next boot). I didn't worry about it at the moment... But then I looked at my "scratch" disk, which was a 70% full, 750GB hard disk... and it now looks like it has been freshly formatted. It doesn't have a single file on it, just the hidden "System Volume Information" folder and 750GB of freedom from data.

    I looked at some of the recovery tools from the "Free NTFS partition recovery" question and decided to try PC INSPECTOR™ File Recovery 4.x initially. It ran overnight and afterwards returned a list of thousands of files it could recover. The odd thing was that the filenames were lost, but the file extensions were not (WTF?). And all of the files were exactly 1,472kB in size. I recovered a dozen PDFs as a test, and 80% of them displayed OK despite being padded out to 1.5MB (though I assume any files over 1,472kB are hosed).

    My primary question is: is this the best I can expect from any file recovery software when trying to recover NTFS files? Or is there perhaps something better out there? I assume this is as good as it gets, but wanted to check in with the experts first.

    Bonus questions:

        - What might have happened to my drive? I didn't intentionally format it. I've never seen a disk error cause the drive to suddenly become a clean, reformatted drive. Could some malicious/confused software have told my PC to format my disk on reboot? Is that even a function Windows XP has?
        - Why can the file extensions be recovered but not the filenames? Does NTFS really treat them as separate entities? I thought I had 8.3 naming turned off, but maybe that had something to do with it. Or maybe it looks at the data in the file and guesses the extension?
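
    For comparison, the ntfsprogs tools on a Linux live CD read the NTFS metadata directly, so when the MFT survives they recover real filenames instead of guessing extensions from file contents. A hedged sketch - it assumes the old volume still begins where /dev/sdb1 would be; if the partition entry is gone, TestDisk can usually locate the old start sector first:

        ntfsundelete /dev/sdb1 --scan --percentage 50              # list files >= 50% recoverable
        ntfsundelete /dev/sdb1 --undelete --match '*.pdf' --dest /recovered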

    Read the article

  • Powershell: Cannot connect via SSL

    - by JSWork
    I am following "Secrets to PowerShell Remoting" to set up SSL remoting and seem to be missing a step. I ran:

        winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="redacted";CertificateThumbprint="redacted"}

    and got:

        PS WSMan:\localhost> dir wsman:\localhost\listener\Listener_1184937132

           WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener\Listener_1184937132

        Name                     Value        Type
        ----                     -----        ----
        Address                  *            System.String
        Transport                HTTP         System.String
        Port                     5985         System.String
        Hostname                              System.String
        Enabled                  true         System.String
        URLPrefix                wsman        System.String
        CertificateThumbprint                 System.String
        ListeningOn_756355952    10.0.0.54    System.String
        ListeningOn_1201550598   127.0.0.1    System.String

        PS WSMan:\localhost> dir wsman:\localhost\listener\Listener_1187163138

           WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener\Listener_1187163138

        Name                     Value        Type
        ----                     -----        ----
        Address                  *            System.String
        Transport                HTTP         System.String
        Port                     80           System.String
        Hostname                              System.String
        Enabled                  true         System.String
        URLPrefix                wsman        System.String
        CertificateThumbprint                 System.String
        ListeningOn_756355952    10.0.0.54    System.String
        ListeningOn_1201550598   127.0.0.1    System.String

        PS WSMan:\localhost> dir wsman:\localhost\listener\Listener_220862350

           WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener\Listener_220862350

        Name                     Value        Type
        ----                     -----        ----
        Address                  *            System.String
        Transport                HTTPS        System.String
        Port                     5986         System.String
        Hostname                 redacted     System.String
        Enabled                  true         System.String
        URLPrefix                wsman        System.String
        CertificateThumbprint    redacted     System.String
        ListeningOn_756355952    10.0.0.54    System.String
        ListeningOn_1201550598   127.0.0.1    System.String

    The trouble is, when I do this:

        PS C:\Users\redacted> Enter-PSSession -ComputerName redacted -Credential redacted\redacted -UseSSL

    I get this:

        Enter-PSSession : Connecting to remote server failed with the following error message : The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig". For more information, see the about_Remote_Troubleshooting Help topic.
        At line:1 char:16
        + enter-pssession <<<< -Computername redacted -Credential redacted\redacted -UseSSL
            + CategoryInfo : InvalidArgument: (redacted:String) [Enter-PSSession], PSRemotingTransportException
            + FullyQualifiedErrorId : CreateRemoteRunspaceFailed

    This happens even when the firewall is off completely and when the machine tries to connect to itself locally. On top of that, despite the listeners being listed under WSMan:, when I run Get-PSSessionConfiguration I get:

        Name                      PSVersion  StartupScript  Permission
        ----                      ---------  -------------  ----------
        Microsoft.PowerShell      2.0

    Any ideas what I'm missing/doing wrong? edit: Windows 2003, PowerShell v2.0.
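
    One hedged way to split the problem in half (assuming the HTTPS listener really is on the default port 5986, as the listing above suggests): probe the listener with Test-WSMan, which talks to WinRM without creating a full session, and then be explicit about the port when connecting:

        # If this fails, the listener/certificate side is broken and no session
        # cmdlet will work; if it succeeds, look at the session configuration.
        Test-WSMan -ComputerName redacted -UseSSL -Port 5986

        # Being explicit about the port rules out a non-default listener:
        Enter-PSSession -ComputerName redacted -UseSSL -Port 5986 -Credential redacted\redacted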

    Read the article

  • Our VPS is being used as a Warez mule

    - by Mikuso
    The company I work for runs a series of ecommerce stores on a VPS. It's a WAMP stack with 50GB of storage. We use an archaic piece of ecommerce software which operates almost entirely client-side. When an order is taken, it writes it to disk, and a scheduled task downloads the orders once every 10 minutes.

    A few days ago, we ran out of disk space, which caused orders to fail to be written. I quickly hopped on to delete some old logs from the mailserver and freed up a couple of GB pretty quickly, but I wondered how we could fill up 50GB with nothing much more than logs. Turns out, we didn't. Hidden deep within the C:\System Volume Information directory, we have a stack of pirated videos, which seem to have appeared (looking at the timestamps) over the past three weeks. Porn, American sports, Australian cooking shows. A very odd collection - it doesn't look like an individual's personal tastes, more like the VPS is being used as a mule.

    We have a five-attempts-and-you're-blocked policy on our FTP server (plus, there is no FTP account with access to that directory), and the Windows user account has had its password changed recently. The main avenues are sealed - and logs can verify that. I thought I'd watch and see if it happened again, and yes, another cooking show has appeared this morning.

    I am the only one at my company who knows of this problem, and only one of two people with access to the VPS (the other being my boss, but no - it's not him). So how is this happening? Is there a vulnerability in some of the software on the VPS? Are the VPS owners peddling warez across our rented space? (Can they do this?) I don't want to delete the warez in case it is seen as a hostile action against this outside force, and they choose to retaliate. What should I do? How do I troubleshoot this? Has this happened to anyone else before?
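
    A hedged first troubleshooting step: check who can actually write to that directory. On a default install, System Volume Information is restricted to SYSTEM, so files landing there suggest that whatever is writing runs with SYSTEM or administrator rights (or that the ACL has been loosened):

        rem on Server 2003; icacls is available from SP2 onward, cacls otherwise
        cacls "C:\System Volume Information"

    Enabling object-access auditing on that folder (Local Security Policy plus an auditing ACE) would then record which account and process creates the next file.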

    Read the article

  • RFC 1918 address on open internet?

    - by longneck
    In trying to diagnose a failover problem with my Cisco ASA 5520 firewalls, I ran a traceroute to www.btfl.com and, much to my surprise, some of the hops came back as RFC 1918 addresses. Just to be clear: this host is not behind my firewall and there is no VPN involved. I have to connect across the open internet to get there. How/why is this possible?

        asa# traceroute www.btfl.com

        Tracing the route to 157.56.176.94

         1  <redacted>
         2  <redacted>
         3  <redacted>
         4  <redacted>
         5  nap-edge-04.inet.qwest.net (67.14.29.170) 0 msec 10 msec 10 msec
         6  65.122.166.30 0 msec 0 msec 10 msec
         7  207.46.34.23 10 msec 0 msec 10 msec
         8  * * *
         9  207.46.37.235 30 msec 30 msec 50 msec
        10  10.22.112.221 30 msec 10.22.112.219 30 msec 10.22.112.223 30 msec
        11  10.175.9.193 30 msec 30 msec 10.175.9.67 30 msec
        12  100.94.68.79 40 msec 100.94.70.79 30 msec 100.94.71.73 30 msec
        13  100.94.80.39 30 msec 100.94.80.205 40 msec 100.94.80.137 40 msec
        14  10.215.80.2 30 msec 10.215.68.16 30 msec 10.175.244.2 30 msec
        15  * * *
        16  * * *
        17  * * *

    and it does the same thing from my FiOS connection at home:

        C:\>tracert www.btfl.com

        Tracing route to www.btfl.com [157.56.176.94] over a maximum of 30 hops:

          1     1 ms    <1 ms    <1 ms  myrouter.home [192.168.1.1]
          2     8 ms     7 ms     8 ms  <redacted>
          3    10 ms    13 ms    11 ms  <redacted>
          4    12 ms    10 ms    10 ms  ae2-0.TPA01-BB-RTR2.verizon-gni.net [130.81.199.82]
          5    16 ms    16 ms    15 ms  0.ae4.XL2.MIA19.ALTER.NET [152.63.8.117]
          6    14 ms    16 ms    16 ms  0.xe-11-0-0.GW1.MIA19.ALTER.NET [152.63.85.94]
          7    19 ms    16 ms    16 ms  microsoft-gw.customer.alter.net [63.65.188.170]
          8    27 ms    33 ms     *     ge-5-3-0-0.ash-64cb-1a.ntwk.msn.net [207.46.46.177]
          9     *        *        *     Request timed out.
         10    44 ms    43 ms    43 ms  207.46.37.235
         11    42 ms    41 ms    40 ms  10.22.112.225
         12    42 ms    43 ms    43 ms  10.175.9.1
         13    42 ms    41 ms    42 ms  100.94.68.79
         14    40 ms    40 ms    41 ms  100.94.80.193
         15     *        *        *     Request timed out.

    Read the article

  • Can't get virtual desktops to show up on RDWeb for Server 2012 R2

    - by Scott Chamberlain
    I built a test lab using the Windows Server 2012 R2 Preview. (I have replaced our name with "OurCompanyName" because I would like it if Google searches for our name did not lead people to this site; please do the same in any responses.) The initial test lab has the following configuration:

        - Physical hardware running Windows Server 2012 R2 Preview, full GUI, acting as Hyper-V host, joined to the test domain as testVwHost.testVw.OurCompanyName.com, with the following VMs running on it:
        - A VM running 2012 R2 Core acting as domain controller for the forest testVw.OurCompanyName.com (testDC.testVw.OurCompanyName.com)
        - A VM running 2012 R2 Core with nothing running on it, joined to the test domain as testIIS.testVw.OurCompanyName.com
        - A clean install of Windows 7; all that was done to it was loading all Windows updates and running sysprep /generalize /oobe /shutdown /mode:vm
        - A clean install of Windows 8; all that was done to it was loading all Windows updates and running sysprep /generalize /oobe /shutdown /mode:vm

    I then ran "Add Roles and Features" from testVwHost and chose "Remote Desktop Services Installation", "Standard Deployment", "Virtual machine-based desktop deployment". I chose testIIS for the roles "RD Connection Broker" and "RD Web Access", and testVwHost as "RD Virtualization Host".

    The install of the roles went fine. I then went to Remote Desktop Services in Server Manager and opened Deployment Properties. I set the certificate for all three roles to our certificate signed by a CA for *.OurCompanyName.com. I then created a new Virtual Desktop Collection each for Windows 7 and Windows 8, and both were created without issue. On the Windows 7 pool I added a RemoteApp to launch WordPad; for Windows 8 I did not add any RemoteApp programs.

    Everything now appears to be fine from a setup perspective. However, if I go to https://testIIS.testVw.OurCompanyName.com/RDWeb and log in as the user Administrator (or any other user), I don't see the virtual desktops I created, nor the RemoteApp publishing of WordPad. I tried adding a licensing server, using testDC as the server, but that made no difference. What step did I miss in setting this up that is causing this not to show up on RDWeb? If any additional information is needed, please let me know. I have tried every possible thing I can think of and I am just groping around in the dark now.

    (Screenshots were attached showing the virtual machines running on testVwHost, the configuration screen for RD Services, the Windows 7 pool, and the Windows 8 pool.)

    Read the article

  • multiple puppet masters set up using inventory

    - by Oli
    I have managed to set up multiple Puppet masters, with one Puppet master acting as a CA; clients are able to get a certificate from this CA server but use their designated Puppet master to get their manifests (see my earlier question, "multiple puppet masters", for more info). However, there are a couple of things I have had to do to get this working correctly, and I have an error, which I'll get to.

    First of all, to get inventory working for a puppet client (PC) connecting to its designated puppet master (PM), I had to copy the CA certs on PM1 to the PM2 ca directory. I ran this command:

        scp [email protected]:/var/lib/puppet/ssl/ca/* [email protected]:/var/lib/puppet/ssl/ca/.

    Once I had done that, I was able to uncomment the SSLCertificateChainFile, SSLCACertificateFile & SSLCARevocationFile section of my rack.conf VH file on PM2, and inventory started to work. Does this sound like an acceptable way to do things?

    Secondly, in the puppet.conf file, I am setting the designated PM server for each client. Unless there is a better way, this is how it'll work in my production setup: PC1 will talk to PM1 and PC2 will talk to PM2. This is where I have an error. When PC2 first requests a cert from the CA on PM1, the cert appears and I sign it on the CA on PM1. When I then do puppet agent --test on PC2 (which has server = PM2 in puppet.conf), I get this error:

        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: puppet-master2.test.net(10.1.1.161) access to /certificate_revocation_list/ca [find] at :112

    However, if I change the PC2 puppet.conf file to specify server = PM1 and then rerun puppet agent --test, I do not get any errors. I can then revert the change in puppet.conf back to server = PM2 and everything seems to run normally.

    Do I have to set up some kind of ProxyPassMatch on PM2 for requests made from clients to /certificate_revocation_list/* and redirect them to PM1? Or how else can I fix this error? Cheers, Oli
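
    A ProxyPassMatch along those lines is one plausible fix. Here is a hedged sketch for PM2's Apache vhost - the regex assumes the usual /{environment}/certificate... URL layout of Puppet's REST API, so verify it against the paths your agents actually request before relying on it:

        # forward every CA-related endpoint (certs, cert requests, the CRL)
        # from PM2 to the CA master
        SSLProxyEngine On
        ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1

    The other commonly used approach is to leave PM2 alone and point the agents' CA traffic at PM1 directly, with ca_server = puppet-master1.test.net in the [agent] section of each client's puppet.conf.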

    Read the article

  • Recover harddrive data

    - by gameshints
    I have a Dell laptop that recently "died" (it would get the blue screen of death upon starting) and whose hard drive was making weird cyclic clicking noises. I wanted to see if I could use some tools on my Linux machine to recover the data, so I plugged the drive into there. If I run fdisk I get:

        Disk /dev/sdb: 20.0 GB, 20003880960 bytes
        64 heads, 32 sectors/track, 19077 cylinders
        Units = cylinders of 2048 * 512 = 1048576 bytes
        Disk identifier: 0x64651a0a

        Disk /dev/sdb doesn't contain a valid partition table

    Fine, the partition table is messed up. However, when I run testdisk in an attempt to fix the table, it freezes at this point, making the same cyclic clicking noises:

        Disk /dev/sdb - 20 GB / 18 GiB - CHS 19078 64 32
        Analyse cylinder 158/19077: 00%

    I don't really care about the hard drive working again, just the data, so I ran gpart to figure out where the partitions used to be. I got this:

        dev(/dev/sdb) mss(512) chs(19077/64/32)(LBA) #s(39069696) size(19077mb)
        * Warning: strange partition table magic 0x2A55.
        Primary partition(1)
           type: 222(0xDE)(UNKNOWN)
           size: 15mb #s(31429) s(63-31491)
           chs:  (0/1/1)-(3/126/63)d (0/1/32)-(15/24/4)r
           hex:  00 01 01 00 DE 7E 3F 03 3F 00 00 00 C5 7A 00 00
        Primary partition(2)
           type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX) (BOOT)
           size: 19021mb #s(38956987) s(31492-38988478)
           chs:  (4/0/1)-(895/126/63)d (15/24/5)-(19037/21/31)r
           hex:  80 00 01 04 07 7E FF 7F 04 7B 00 00 BB 6F 52 02

    So I tried to mount just the old NTFS partition, but got an error:

        sudo mount -o loop,ro,offset=16123904 -t ntfs /dev/sdb /mnt/usb
        NTFS signature is missing.

    Ugh. Okay. Then I tried to get a raw data dump by running:

        dd if=/dev/sdb of=/home/erik/brokenhd skip=31492 count=38956987

    But the file only got up to 59885568 bytes, with the same cyclic clicking noises. Obviously there is a bad sector, but I don't know what to do about it! The data is still there: if I view that 57MB file in TextPad, I can see raw data from files. How can I get my data back? Thanks for any suggestions.

    Solution: I was able to recover about 90% of my data:

        1. Froze the hard drive in the freezer
        2. Used ddrescue to make a copy of the drive
        3. Since ddrescue wasn't able to get enough of my drive for testdisk to recover my partitions/file system, I ended up using photorec to recover most of my files
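
    For anyone following the same path, a hedged sketch of the ddrescue step (GNU ddrescue; device name and destination paths are assumptions):

        # first pass: grab everything readable, skipping bad areas quickly
        ddrescue -n /dev/sdb /mnt/space/drive.img /mnt/space/drive.map
        # second pass: go back and retry just the bad sectors a few times
        ddrescue -d -r3 /dev/sdb /mnt/space/drive.img /mnt/space/drive.map

    The map file records which sectors have been copied, so the run can be interrupted and resumed without re-reading good areas; testdisk and photorec can then be pointed at drive.img instead of the failing disk.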

    Read the article

  • Hard freeze on new computer

    - by mphair
    My new system:

        OCZ Gold 3x2GB 240-Pin DDR3 SDRAM PC3-12800
        Palit NE5T240SFHD01 GeForce GT240 1GB 128-bit GDDR5
        ASUS P7P55D-E LGA 1156 P55 SATA 6Gb/s USB 3.0 ATX Intel Motherboard
        Intel Core i7-860 Lynnfield 2.8GHz 8MB L3 Cache LGA 1156 95W Quad-Core Processor
        SAMSUNG 22X Optical drive (DVD+-R/RW)
        CORSAIR CMPSU-620HX 620W ATX12V V2.2
        Windows 7 Ultimate x64

    Brand new system (got it from Newegg two days ago); it booted up, installed Windows, and ran for a day just fine. Yesterday, I turned it on in the afternoon and ran various games at full graphics for most of the day. I turned on WoW and played for a few hours, and it hard-stalled: no Num Lock switching, no mouse feedback, but nothing going wrong on the screen. No BSOD. I waited a bit to see if the stall was just temporary, then forced the computer to shut down.

    Upon reboot, everything seemed fine; Windows saw that it didn't shut down properly, but I went into normal boot and restarted WoW. I was able to load it up and start running around when it froze again. This time, when I restarted, it didn't even get to the BIOS: it started (power went on) and just hung with no output to the monitors. I shut it off and went to bed.

    This morning, I turned it on and went into BIOS setup. I'm not terribly experienced with messing with BIOS settings, but I checked them over the best I could. Everything seemed fine, so I booted into Windows and browsed the internet for a bit looking for a solution - hard freeze within 10 minutes. I restarted, went into the BIOS and checked the CPU temperature: 40C.

    I'm kinda stumped here. Some people say it might be a memory issue, but why would it take so long to show up? Could it have been slowly accessing one memory stick at a time until it just got to a bad one, and that's what is causing it to fail? It seems odd that I don't get a BSOD from a hardware failure. Having the screen just halt with no input or output change seems like a software thing to me. Any thoughts?

    Read the article

  • Unable to delete all partitions on flash drive using Windows 7 OS??

    - by irrational John
    Recently I purchased an ADATA C802 8GB flash drive. Since the drive was new, I decided to run some of the HD Tune Pro (v4.50) performance tests on it, mostly just for the heck of it. To avoid accidentally destroying data, HD Tune refuses to write to a drive unless there are no partitions on it; if you do attempt a write test on a drive with partitions, it posts the message "Writing is disabled. To enable writing please remove all partitions."

    As you would expect, the ADATA came formatted with a single primary FAT32 partition in the Master Boot Record. But a number of unexpected things happened when I attempted to delete that partition:

        - The first thing I tried was the Windows 7 (64-bit) Disk Management tool (diskmgmt.msc). It would not let me: the context menu choice to delete that volume was not available.
        - Next I opened a command prompt window with Admin authority and ran diskpart, which deleted the volume for me. However, when I then attempted to run an HD Tune write test on the drive, I still got the "Writing is disabled" message. Huh?
        - So I fired up a utility I have which allows viewing drives at the sector level, and verified that the partition table in the Master Boot Record was empty. No partitions. Yet HD Tune still thought there were partitions on the drive?

    So why was I still getting the "Writing is disabled" message from HD Tune Pro? And why wouldn't the Windows 7 Disk Management tool let me change the partitions on this drive?

    After doing the above, I plugged the ADATA into my MacBook. I was then able to format it as either a GPT or MBR partitioned drive with no problems. I am not looking for suggestions on how to format this drive - I can do that. What I do not understand, and was hoping I might get insight into, is why this drive behaves so strangely under Windows 7. And BTW, what's up with HD Tune Pro?

    BTW, if I plug the drive I formatted on my MacBook back into my Windows 7 64-bit system, I still run into roadblocks with the Disk Management tool. For example, I cannot delete all the GPT partitions on the ADATA so I can convert it into an MBR drive. I followed Microsoft's instructions, but they just do not work with this ADATA flash drive. Does anyone know what's up with this? It makes no sense to me. Has something changed in Windows 7 (Vista)?
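
    For what it's worth, a hedged sketch of wiping the stick completely from diskpart (disk 1 is an assumption - confirm it against list disk first, because clean erases whichever disk is selected):

        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> clean                    (zeroes the MBR/GPT structures on the whole device)
        DISKPART> convert mbr
        DISKPART> create partition primary

    If even clean fails, or the partitions reappear, some flash drives enforce a write-protected or fixed "CD" region in the controller itself, which no partitioning tool on Windows can override.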

    Read the article

  • rsync -c -i flags identical files as different

    - by Scott
    My goal: given a list of files on the local server, show any differences relative to the files with the same absolute path on the remote server; e.g. compare the local /etc/init.d/apache to the same file on the remote server. "Difference" for me means a different checksum. I don't care about file modification times. I also do not want to sync the files (yet); only show the diffs.

    I have rsync 3.0.6 on both local and remote servers, which should be able to do what I want. However, it is claiming that local and remote files, even with identical checksums, are still different. Here's the command line:

        $ rsync --dry-run -avi --checksum --files-from=/home/me/test.txt --rsync-path="cd / && rsync" / me@remote:/

    where:

        - "me" = my username; "remote" = remote server hostname
        - the current working directory is '/'
        - test.txt contains one line reading "/etc/init.d/apache"
        - OS: Linux 2.6.9

    Running cksum on /etc/init.d/apache on both servers yields the same result. The files are the same. However, the rsync output is:

        me@remote's password:
        building file list ... done
        .d..t...... etc/
        cd+++++++++ etc/init.d/
        <f+++++++++ etc/init.d/apache

        sent 93 bytes  received 21 bytes  20.73 bytes/sec
        total size is 2374  speedup is 20.82 (DRY RUN)

    The output codes (see http://www.samba.org/ftp/rsync/rsync.html) mean that rsync thinks:

        - /etc is identical except for mod time
        - /etc/init.d needs to be changed
        - /etc/init.d/apache will be sent to the remote server

    I don't understand how, with the --checksum option and the files having identical checksums, rsync could think they're different. (I've tried with other files having identical mod times, and those files are not flagged as different.) I did run this in /, and made sure (AFAIK) that it's run remotely in /, so even relative pathnames should still be correct. I ran rsync with -avvvi for more debug info, but saw nothing remarkable. I'm wondering:

        - is rsync still looking at file mod times, even with --checksum?
        - am I somehow not setting up the path(s) right?
        - what am I doing wrong?
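
    One hedged reading of that output: the plus signs in <f+++++++++ are rsync's "item is being created" markers, i.e. the receiving side reported that etc/init.d/apache does not exist at the path it resolved, so no checksum comparison ever took place (which would also explain why --checksum changes nothing). A quick way to test that theory:

        # ask the remote side directly what it sees at the expected path
        ssh me@remote 'ls -l /etc/init.d/apache'
        # and compare content hashes by hand
        md5sum /etc/init.d/apache; ssh me@remote 'md5sum /etc/init.d/apache'

    If the file is plainly there, the next suspect is the --rsync-path="cd / && rsync" trick interacting badly with --files-from path resolution.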

    Read the article

  • pam_unix(sshd:session) session opened for user NOT ROOT by (uid=0), then closes immediately using TortoiseSVN

    - by codewaggle
    I'm having problems accessing an SVN repository using TortoiseSVN 1.7.8. The SVN repository is on a CentOS 6.3 box and appears to be functioning correctly:

        # svnadmin --version
        # svnadmin, version 1.6.11 (r934486)

    I can access the repository from another CentOS box with this command:

        svn list svn+ssh://[email protected]/var/svn/joetest

    But when I attempt to browse the repository using TortoiseSVN from a Win 7 workstation, I'm unable to do so using the following path:

        svn+ssh://[email protected]/var/svn/joetest

    I am able to log in via SSH from the workstation using PuTTY. The results are the same if I attempt access as root. I've given ownership of the repository to USER:USER and ran chmod 2700 -R /var/svn/. Because I can access the repository via ssh from another Linux box, permissions don't appear to be the problem.

    When I watch the log file using tail -fn 2000 /var/log/secure, I see the following each time TortoiseSVN asks for the password:

        Sep 26 17:34:31 dev sshd[30361]: Accepted password for USER from xx.xxx.xx.xxx port 59101 ssh2
        Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session opened for user USER by (uid=0)
        Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session closed for user USER

    I'm actually able to log in, but the session is then closed immediately. It caught my eye that the session is being opened for USER by root (uid=0), which may be correct, but I'll mention it in case it has something to do with the problem.

    I looked into modifying svnserve.conf, but as far as I can tell, it's not used when accessing the repository via svn+ssh; a private svnserve instance is created for each login via this method. From the manual:

        There's still a third way to invoke svnserve, and that's in "tunnel mode", with the -t option. This mode assumes that a remote-service program such as RSH or SSH has successfully authenticated a user and is now invoking a private svnserve process as that user. The svnserve program behaves normally (communicating via stdin and stdout), and assumes that the traffic is being automatically redirected over some sort of tunnel back to the client. When svnserve is invoked by a tunnel agent like this, be sure that the authenticated user has full read and write access to the repository database files. (See Servers and Permissions: A Word of Warning.) It's essentially the same as a local user accessing the repository via file:/// URLs.

    The only non-default settings in sshd_config are:

        Protocol 2 # to disable Protocol 1
        SyslogFacility AUTHPRIV
        ChallengeResponseAuthentication no
        GSSAPIAuthentication yes
        GSSAPICleanupCredentials yes
        UsePAM yes
        AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
        AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
        AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
        AcceptEnv XMODIFIERS
        X11Forwarding no
        Subsystem sftp /usr/libexec/openssh/sftp-server

    Any thoughts?
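
    Since the SSH login itself succeeds, a hedged next step is to mimic what the svn+ssh tunnel actually does - run svnserve non-interactively over SSH (USER@SERVER below is a placeholder for the real account and host):

        # if this prints a version banner, the tunnel can start svnserve;
        # if it exits silently, svnserve probably isn't on the PATH that a
        # non-interactive shell gets
        ssh USER@SERVER "svnserve --version"
        ssh USER@SERVER 'echo $PATH'

    TortoiseSVN also needs its SSH client set to plink.exe (or TortoisePlink.exe) under Settings | Network, so running the same test via plink from the workstation would isolate the Windows side.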

    Read the article

  • Cygwin's RSYNC for large data transfer

    - by Tim Brigham
    I'm using rsync from Cygwin to do a large-scale data transfer from an aging HP MSA 1000 to a new DAS attached to a different server. I have a daemon running on the remote server in read-only mode and a local copy writing the files to disk. One of my servers is an image repository with over a million files spread across about 300 directories; each file averages only a couple hundred kilobytes. More so than any other box, this one is proving problematic. The rsync process will work for a while - sometimes 20 minutes, sometimes an hour - and then it simply quits and sits idle at a given file name. I have verified that the file isn't corrupt on the remote server and that the file is successfully created on the local drive. I ran the rsync client in -vv mode, which returns nothing. I checked the logs created by the daemon. I looked at the network utilization on the interface, which is sitting idle. I looked at the AV settings to see if anything could pose a problem there. I even updated to the latest release of Cygwin. What do I need to do in order to keep this connection up?

    EDIT: The client system is using the command:

        rsync.exe server::Drives/f/Repo/ /cygdrive/T/Repo --archive -P -vv

    The server is using the command:

        rsync.exe --daemon --no-detach --config "rsyncd.conf"

    The contents of rsyncd.conf:

        use chroot = false
        strict modes = false
        hosts allow = 192.168.100.9
        log file = c:/rsyncd.log
        uid=0
        gid=0
        [Drives]
        path = /cygdrive
        read only = yes

    EDIT: The file server is 2003, the disk type on the array is GPT, and the size of the array is about 4 TB.

    EDIT: Stranger... It looks like the process is reliably erroring out at about 175,000 files. Rsync runs fine when I sync the directories it has problems with one at a time.

    EDIT: rsync version 3.0.9, protocol version 30. Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others. Web site: http://rsync.samba.org/ Capabilities: 64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints, no socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace, append, ACLs, xattrs, iconv, symtimes. A similar failure occurred when going from the same set of files with Cygwin to a Linux install; it didn't happen until several hours later than normal, however.
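    Since per-directory runs reportedly succeed where the full-tree run stalls, one workaround (a sketch of my own, not from the question; the module path and destination are taken from the commands above) is to drive rsync one top-level directory at a time and retry a failed directory a few times before moving on. It assumes the top level of Repo contains only directories with no whitespace in their names:

        #!/bin/sh
        # Work around the stall by syncing one top-level directory at a
        # time, retrying each failed directory up to 3 times.
        SRC='server::Drives/f/Repo'
        DST='/cygdrive/T/Repo'

        # List remote entries, keep only directories (mode starts with
        # 'd'), and drop the "." entry for the directory itself.
        for d in $(rsync "$SRC/" | awk '$1 ~ /^d/ {print $NF}' | grep -v '^\.$'); do
            tries=0
            until rsync --archive -P "$SRC/$d/" "$DST/$d/"; do
                tries=$((tries + 1))
                if [ "$tries" -ge 3 ]; then
                    echo "giving up on $d" >&2
                    break
                fi
            done
        done

    This also keeps each rsync process's in-memory file list small, which can matter with a million-file tree.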

    Read the article

  • Port forwarding using a BT Home Hub 2.0 (Supplied to new BT Infinity Customers in the UK)

    - by Jasarien
    I don't usually have trouble with port forwarding; I've done it successfully on a number of different routers, including Linksys, Belkin, Netgear and Apple (Time Capsule / AirPort Extreme). So I'm quite confused here. I had been using my Apple Time Capsule as my router for a few years, with several port mappings all working fine. But it died recently, so I've had to resort to using the BT Home Hub 2.0 that was supplied with my BT Infinity broadband subscription.

    The forwarding interface for the Home Hub is simplified for the most part, allowing you to select an application or game and assign it to a particular computer on the network, chosen from a list that the Home Hub has 'discovered'. My Mac Pro has a manually assigned static IP 192.168.1.4 and my router is static at 192.168.1. I have chosen SSH from the list of applications and assigned it to my Mac Pro (the only computer in the list currently). The Home Hub also has a feature to keep a DNS service updated, and I have set it to keep my external IP address updated on my hostname. This is how I had it set up in the past with other routers and never had trouble before. I am able to ping my hostname (and external IP) from outside the network and get a response, but when I try to connect using SSH, the connection times out.

    The Home Hub also has "Firewall settings". The currently selected setting is:

        Default: Allow all outgoing connections and block all incoming traffic. Games and application sharing is allowed.

    But I've tried changing it to:

        Disabled: All traffic is allowed to pass through your BT Home Hub to your devices. Note: you'll still need to use the games and application sharing feature to make sure that certain applications work properly.

    And the connection still times out... So frustrating. The OS X firewall on my Mac is disabled, so I don't think that's in the way. I have tried setting the port forwarding manually, instead of relying on the preset "SSH" option (in case it's not using the port I expect). So I set up my own "application" (as the Home Hub calls it) and forwarded external port 22 TCP to internal port 22 TCP to 192.168.1.4 - but that just gives the same result - unable to connect.

    Next, with the router's firewall disabled and OS X's firewall disabled, I ran the Shields Up test (https://www.grc.com/x/ne.dll?bh0bkyd2) and the result was that all my service ports (0 - 1055) are in 'Stealth' mode, i.e. nothing even exists at my IP as far as any outsider is concerned... Strange. The only thing that seems to work is setting my Mac Pro as the DMZ - which I don't want to do for obvious reasons. Any help with this would be extremely appreciated, thanks.
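    When a forward like this fails, it helps to test each hop separately: is sshd listening on the Mac, is it reachable from inside the LAN, and is the public side reachable at all? These commands are a sketch of my own (not from the question); "your.hostname.example" stands in for the actual dynamic-DNS hostname:

        # On the Mac Pro: confirm sshd is listening on port 22.
        sudo lsof -nP -iTCP:22 -sTCP:LISTEN

        # From another machine on the LAN: confirm the forwarded target works.
        nc -vz 192.168.1.4 22

        # From an external host (e.g. a phone tether): test the public side.
        nc -vz your.hostname.example 22

    If the LAN test succeeds but the external one times out while DMZ mode works, the forwarding rule itself is the failure point; it can also be worth forwarding a nonstandard external port (say 2222) to internal 22, in case the ISP or hub treats port 22 specially.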

    Read the article

  • hyperv machine guest loads slow

    - by Dani Avni
    This is by far one of the strangest things I have seen. I have a Win 2008 R2 cluster with a CSV. The CSV itself is on iSCSI storage (Hitachi HUS 110). The basic config of the two hosts in the cluster is:

        Dell R610
        Win 2008 R2 with all patches
        64GB RAM
        1 NIC for host access
        2 NICs for guest access
        2 NICs for iSCSI

    These machines work great, and I can load a 2008 R2 test guest machine on them in less than 90 seconds. After the above config had been running for over a year, I now need to add a new host. The new host is:

        Dell R620 (still Intel, but a different CPU)
        Win 2008 R2 with all patches
        64GB RAM
        1 NIC for host access
        2 NICs for guest access
        2 NICs for iSCSI

    I added this new host to the domain and to the cluster, gave it access to the CSV, and tried loading the same guest machine that loads in 90 seconds on the other hosts. The machine loads in about 6 minutes. No matter how many times I try this, the old hosts load the machine in about 90 seconds and this new host in around 6 minutes.

    To eliminate any problems with the iSCSI connection, I added a new LUN and accessed it directly from the new host, and I was working at around 300 MB/s, so no problem there. I also tested the connection between the other hosts and the new one, and the network is working fine there too. To eliminate problems in Hyper-V, I copied the machine to the local disk of the new host, and it loaded in less than 20 seconds.

    Now is the point where things get a lot stranger: in my tests I tried installing a fresh Windows guest machine to the CSV from the new host. I noticed that while the fresh Windows was installing, my test guest was loading in less than 90 seconds even on the new host (I repeated this a few times). If I paused the fresh-install guest and tried loading the test guest again, it loaded in 6 minutes; and again, after I resumed the guest installation, the test guest loaded fast. After the fresh Windows was also loaded, I ran tests loading the fresh Windows guest and my test machine. Each one of them loaded in about 5 minutes when I tried loading them separately, yet when I started both of them at the same time they both loaded in around 2.5 minutes.

    It seems that the iSCSI disk access only performs well when it is under some load (although I never got above 10% utilization according to Task Manager). Does anyone have any idea what could be the problem?
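    One pattern that matches these symptoms is a path that only performs well under sustained queue depth (for example, NIC or iSCSI link power management settling once traffic is steady). A way to quantify it (my own suggestion, not from the question) is to baseline the CSV path from each host with Microsoft's DiskSpd and compare idle vs. loaded throughput; the flags below are from memory, so verify them against the tool's help before relying on them:

        :: Sequential read baseline against the CSV from the new host;
        :: run once while the host is otherwise idle, then again while
        :: another guest is booting, and compare the MB/s figures.
        diskspd.exe -c1G -d60 -b512K -o8 -t4 -w0 C:\ClusterStorage\Volume1\speedtest.dat

    If the idle numbers on the new host are far below the old hosts' but converge under load, the problem is in the new host's iSCSI/NIC configuration (power saving, MPIO policy, offload settings) rather than in Hyper-V or the array.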

    Read the article
