Search Results

Search found 2343 results on 94 pages for 'masterchris 99'.


  • Making sense of S.M.A.R.T

    - by James
    First of all, I think everyone knows that hard drives fail a lot more often than the manufacturers would like to admit. Google did a study indicating that certain raw data attributes reported in a drive's S.M.A.R.T. status can correlate strongly with the future failure of the drive:

        We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in reallocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabilities. Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever.

    Seagate seems to be trying to obscure this information about its drives: it claims that only its own software can accurately determine the status of a drive, and by the way, that software will not tell you the raw data values for the S.M.A.R.T. attributes. Western Digital has made no such claim to my knowledge, but its status-reporting tool does not appear to report raw data values either. I've been using HDTune and smartctl from smartmontools to gather the raw data values for each attribute, and I've found that I am indeed comparing apples to oranges for certain attributes. For example, most Seagate drives report many millions of read errors, while Western Digital drives show 0 read errors 99% of the time. Seagate likewise reports many millions of seek errors, while Western Digital always seems to report 0.

    Now for my question: how do I normalize this data? Is Seagate producing millions of errors while Western Digital is producing none? Wikipedia's article on S.M.A.R.T. says that manufacturers have different ways of reporting this data.

    Here is my hypothesis, which I think is a way to normalize (is that the right term?) the data. Seagate drives have an additional attribute that Western Digital drives do not have: Hardware ECC Recovered. When you subtract the read error count from the ECC Recovered count, you'll probably end up with 0, and that seems to be equivalent to Western Digital's reported "Read Error" count. In other words, Western Digital only reports read errors that it cannot correct, while Seagate counts up all read errors and also tells you how many of them it was able to fix. I once had a Seagate drive whose ECC Recovered count was less than its read error count, and I noticed that many of my files were becoming corrupt; that is how I came up with my hypothesis. The millions of seek errors that Seagate reports are still a mystery to me. Please confirm or correct my hypothesis if you have additional information.
    Here is the SMART status of my Western Digital drive, just so you can see what I'm talking about:

        james@ubuntu:~$ sudo smartctl -a /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD1001FALS-00E3A0
        Serial Number:    WD-WCATR0258512
        Firmware Version: 05.01D05
        User Capacity:    1,000,204,886,016 bytes
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   8
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Thu Jun 10 19:52:28 2010 PDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           0
          3 Spin_Up_Time            0x0027 179   175   021    Pre-fail Always  -           4033
          4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           270
          5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
          7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0032 098   098   000    Old_age  Always  -           1468
         10 Spin_Retry_Count        0x0032 100   100   000    Old_age  Always  -           0
         11 Calibration_Retry_Count 0x0032 100   100   000    Old_age  Always  -           0
         12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           262
        192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           46
        193 Load_Cycle_Count        0x0032 200   200   000    Old_age  Always  -           223
        194 Temperature_Celsius     0x0022 105   102   000    Old_age  Always  -           42
        196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
        197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
        198 Offline_Uncorrectable   0x0030 200   200   000    Old_age  Offline -           0
        199 UDMA_CRC_Error_Count    0x0032 200   200   000    Old_age  Always  -           0
        200 Multi_Zone_Error_Rate   0x0008 200   200   000    Old_age  Offline -           0
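    To put numbers behind the hypothesis, a small shell sketch can print both raw counters side by side. This is a minimal sketch, assuming smartctl is installed and the drive is /dev/sda; it matches the attribute name against column 2 of smartctl -A, whose last column is the raw value, and on drives without Hardware_ECC_Recovered (like the WD above) the second iteration simply prints nothing:

        # Minimal sketch: print the raw read-error and ECC-recovered counters.
        for attr in Raw_Read_Error_Rate Hardware_ECC_Recovered; do
            sudo smartctl -A /dev/sda | awk -v a="$attr" '$2 == a { print a ": " $NF }'
        done

    Subtracting the two values (where both exist) gives the uncorrected-read count the hypothesis is about.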


  • Crash in audio resampler with some audio rates - FFMPEG PHP (Solved!)

    - by Olaf Erlandsen
    I have a problem with this command (FFMPEG PHP):

        ffmpeg -i 62f76f050494f0ed6a5997967c00c0c0.wmv -ss 0 -t 99 -y -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 -f flv 62f76f050494f0ed6a5997967c00c0c0.flv

    Output:

        FFmpeg version 0.6.5, Copyright (c) 2000-2010 the FFmpeg developers
          built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6)
          configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdc1394 --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab
          libavutil     50.15. 1 / 50.15. 1
          libavcodec    52.72. 2 / 52.72. 2
          libavformat   52.64. 2 / 52.64. 2
          libavdevice   52. 2. 0 / 52. 2. 0
          libavfilter    1.19. 0 /  1.19. 0
          libswscale     0.11. 0 /  0.11. 0
          libpostproc   51. 2. 0 / 51. 2. 0
        [asf @ 0xe81670]max_analyze_duration reached
        Input #0, asf, from '/var/www/resources/tmp/62f76f050494f0ed6a5997967c00c0c0.wmv':
          Metadata:
            WMFSDKVersion   : 12.0.7601.17514
            WMFSDKNeeded    : 0.0.0.0000
            IsVBR           : 0
          Duration: 00:00:50.87, bitrate: 2467 kb/s
            Stream #0.0: Audio: wmapro, 44100 Hz, stereo, flt, 256 kb/s
            Stream #0.1: Video: vc1, yuv420p, 950x460 [PAR 1:1 DAR 95:46], 25 fps, 25 tbr, 1k tbn, 25 tbc
        Output #0, flv, to '/var/www/resources/media/62f76f050494f0ed6a5997967c00c0c0.flv':
          Metadata:
            encoder         : Lavf52.64.2
            Stream #0.0: Video: flv, yuv420p, 950x460 [PAR 1:1 DAR 95:46], q=2-31, 200 kb/s, 1k tbn, 29.97 tbc
            Stream #0.1: Audio: libmp3lame, 11025 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.1 -> #0.0
          Stream #0.0 -> #0.1
        Press [q] to stop encoding
        frame=   72 fps=  0 q=5.0 size=       0kB time=10.91 bitrate=   0.0kbits/s
        Multiple frames in a packet from stream 0
        Warning, using s16 intermediate sample format for resampling
        frame=  141 fps=139 q=5.0 size=     103kB time=8.15 bitrate= 103.2kbits/s
        frame=  220 fps=144 q=5.0 size=     875kB time=10.92 bitrate= 656.6kbits/s
        frame=  290 fps=143 q=5.0 size=    1525kB time=13.74 bitrate= 909.1kbits/s
        frame=  356 fps=141 q=5.0 size=    2153kB time=15.99 bitrate=1103.1kbits/s
        frame=  427 fps=141 q=5.0 size=    2847kB time=18.70 bitrate=1247.0kbits/s
        frame=  497 fps=141 q=5.0 size=    3771kB time=21.16 bitrate=1460.0kbits/s
        frame=  575 fps=142 q=5.0 size=    4695kB time=24.61 bitrate=1563.0kbits/s
        frame=  639 fps=141 q=5.0 size=    5301kB time=26.80 bitrate=1620.2kbits/s
        frame=  703 fps=139 q=5.0 size=    5829kB time=29.36 bitrate=1626.2kbits/s
        frame=  774 fps=139 q=5.0 size=    6659kB time=32.39 bitrate=1684.0kbits/s
        frame=  842 fps=139 q=5.0 size=    7915kB time=35.27 bitrate=1838.6kbits/s
        frame=  911 fps=139 q=5.0 size=    9011kB time=37.98 bitrate=1943.4kbits/s
        frame=  975 fps=138 q=5.0 size=    9788kB time=40.59 bitrate=1975.3kbits/s
        frame= 1041 fps=138 q=5.0 size=   10904kB time=43.83 bitrate=2037.9kbits/s
        frame= 1115 fps=138 q=5.0 size=   11795kB time=46.24 bitrate=2089.8kbits/s
        frame= 1183 fps=138 q=5.0 size=   12678kB time=48.74 bitrate=2130.7kbits/s
        frame= 1247 fps=137 q=5.0 size=   13964kB time=51.36 bitrate=2227.5kbits/s
        frame= 1271 fps=136 q=5.0 Lsize=   15865kB time=58.86 bitrate=2208.1kbits/s
        video:15366kB audio:462kB global headers:0kB muxing overhead 0.238956%

    Problem:

        Warning, using s16 intermediate sample format for resampling

    I've also tried changing the parameter from -ar 44100 to -ar 11025. Thanks!

    Solution: read this link: http://en.wikipedia.org/wiki/MP3#Bit_rate
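    Since the fix came from the MP3 bit-rate table, the practical takeaway seems to be pairing -ar with a bitrate that is actually legal at that sample rate for FLV's MP3 audio. A hedged sketch, with generic filenames and 22050 Hz / 96 kbps chosen as one legal pair from the table (not values taken from the original post):

        # FLV + MP3 accepts 44100/22050/11025 Hz, and each rate family only
        # allows certain MP3 bitrates; pass an explicit, matching -ab so the
        # encoder isn't left to pick an incompatible default.
        ffmpeg -i input.wmv -ss 0 -t 99 -y -ar 22050 -ab 96k -ac 2 \
               -r 29.970 -qscale 5 -f flv output.flv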


  • smartctl -t long isn't finishing

    - by xenoterracide
    I have been running smartctl -t long on a drive for about 2 days now, and it seems to be stalled at 10% remaining. The short and conveyance tests both passed. I have to send one of the two drives I purchased back because I found bad blocks on it with badblocks (there are none on this drive, and it's made over a full pass already). I'm just wondering if I should be concerned about this.

        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD10EARS-00Y5B1
        Serial Number:    WD-WMAV51582123
        Firmware Version: 80.00A80
        User Capacity:    1,000,204,886,016 bytes
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   8
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Mon May 10 22:19:52 2010 EDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        General SMART Values:
        Offline data collection status:  (0x82) Offline data collection activity
                                                was completed without error.
                                                Auto Offline Data Collection: Enabled.
        Self-test execution status:      ( 241) Self-test routine in progress...
                                                10% of test remaining.
        Total time to complete Offline
        data collection:                 (20100) seconds.
        Offline data collection
        capabilities:                    (0x7b) SMART execute Offline immediate.
                                                Auto Offline data collection on/off support.
                                                Suspend Offline collection upon new command.
                                                Offline surface scan supported.
                                                Self-test supported.
                                                Conveyance Self-test supported.
                                                Selective Self-test supported.
        SMART capabilities:            (0x0003) Saves SMART data before entering
                                                power-saving mode.
                                                Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported.
                                                General Purpose Logging supported.
        Short self-test routine
        recommended polling time:        (   2) minutes.
        Extended self-test routine
        recommended polling time:        ( 231) minutes.
        Conveyance self-test routine
        recommended polling time:        (   5) minutes.
        SCT capabilities:              (0x3031) SCT Status supported.
                                                SCT Feature Control supported.
                                                SCT Data Table supported.

        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           2
          3 Spin_Up_Time            0x0027 131   131   021    Pre-fail Always  -           6408
          4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           12
          5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
          7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0032 100   100   000    Old_age  Always  -           148
         10 Spin_Retry_Count        0x0032 100   253   000    Old_age  Always  -           0
         11 Calibration_Retry_Count 0x0032 100   253   000    Old_age  Always  -           0
         12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           10
        192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           7
        193 Load_Cycle_Count        0x0032 200   200   000    Old_age  Always  -           174
        194 Temperature_Celsius     0x0022 106   102   000    Old_age  Always  -           41
        196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
        197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
        198 Offline_Uncorrectable   0x0030 200   200   000    Old_age  Offline -           0
        199 UDMA_CRC_Error_Count    0x0032 200   200   000    Old_age  Always  -           0
        200 Multi_Zone_Error_Rate   0x0008 200   200   000    Old_age  Offline -           0

        SMART Error Log Version: 1
        No Errors Logged

        SMART Self-test log structure revision number 1
        Num  Test_Description     Status                    Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Conveyance offline   Completed without error         00%        99         -
        # 2  Extended offline     Interrupted (host reset)        10%        30         -
        # 3  Short offline        Completed without error         00%         0         -

        SMART Selective self-test log data structure revision number 1
         SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
            1        0        0  Not_testing
            2        0        0  Not_testing
            3        0        0  Not_testing
            4        0        0  Not_testing
            5        0        0  Not_testing
        Selective self-test flags (0x0):
          After scanning selected spans, do NOT read-scan remainder of disk.
        If Selective self-test is pending on power-up, resume after 0 minute delay.
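    If the counter truly never moves, a couple of smartctl invocations let you watch the status block and, failing that, abort the test instead of waiting; a minimal sketch, with /dev/sdb as a placeholder for the actual device:

        # Re-poll the self-test status every 10 minutes; on a 231-minute test
        # the "remaining" percentage should tick down roughly every ~23 minutes.
        sudo watch -n 600 "smartctl -a /dev/sdb | grep -A1 'Self-test execution'"

        # If it sits at 10% for hours, abort the stalled test:
        sudo smartctl -X /dev/sdb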


  • What was scientifically shown to support productivity when organizing/accessing files and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in the same folder; that folder could be seen as a kind of inbox where most files (non-installations) enter my system. This way I end up with a big collection of files that is hard to organize properly. I mostly end up making folders that match the file type, but then I still have several gigabytes of data per folder, which doesn't make it efficient enough for me to use the folder productively. I'd rather do a few clicks than have to search through the files, whether by some software product or by looking through the folder. Often the file names themselves are not proper, so it would be easier to recognize files if there were few of them in a folder rather than thousands.

    "Scaling in the structure of directory trees in a computer cluster" summarizes this problem as follows:

        The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information.

        [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004).
        [2] S. Lawrence and C. L. Giles, Nature 400, 107-109 (1999).
        [3] R. F. i Cancho and R. V. Solé, Proc. R. Soc. London, Ser. B 268, 2261 (2001).
        [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002).

    The paper goes on to explain how the data is usually organized by taking a general look at it, but judging from the abstract and conclusion it does not arrive at a conclusion or approach that results in a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific solution to it.

    Searching further, I don't seem to find any useful or free papers that approach this problem, so it might be that I'm looking in the wrong place. I've also noticed that there are different ways to phrase this problem, which lead to different sets of papers. Perhaps a paper is out there, but I'm just not using the same (often more scientific) terms that it uses.

    I once heard a story about a lawyer with a laptop who simply outperformed a lawyer with tons of paper, which shows how proper organization leads to productivity; but that story didn't share details on how he used the laptop or how he had organized his data. In any case, it was way more useful than how most of us organize our data these days...

    Don't advise me on how I should organize my data; I'm not looking for suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm that an organization actually helps me reach my goal.


  • Linux bcm43224 wifi adapter slows down a couple minutes after boot

    - by Blubber
    I just installed Ubuntu on my mid-2012 MacBook Air. Everything worked out of the box, but the wifi is showing some weird behavior. When I first log in it's really fast: loading google.com is near instant, and browsing in general feels at least as smooth as it did on Mac OS. However, after a couple of minutes the connection slows down dramatically; sometimes it takes over 5 s to load google.com. A simple reboot fixes the problem for another couple of minutes.

    Specs:

        Wifi:   02:00.0 Network controller: Broadcom Corporation BCM43224 802.11a/b/g/n (rev 01)
        Driver: open-source brcmsmac driver
        Kernel: Linux wega 3.8.0-21-generic #32-Ubuntu SMP Tue May 14 22:16:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
        Distro: Ubuntu 13.04 (up to date)

    I tried a number of things, none of which actually helped:

    - Used the proprietary sta driver from Broadcom
    - Installed firmware into /lib/firmware/brcms (which, as far as I can tell from the logs, does not get loaded at all)
    - Switched the router to only use 2.4 OR 5 GHz
    - Set the router to only use a OR g OR n
    - Set the router to use AES encryption only
    - Turned off power management on the adapter
    - Set the regulatory region to the correct value (NL) on both router and laptop
    - Disabled IPv6

    Nothing seems to help; the slowdown always occurs. I did notice that the latency (ping google.com) stays roughly the same (around 9 ms). Below is some more information that might be of use.

        $ lspci -nnk | grep -iA2 net
        02:00.0 Network controller [0280]: Broadcom Corporation BCM43224 802.11a/b/g/n [14e4:4353] (rev 01)
                Subsystem: Apple Inc. Device [106b:00e9]
                Kernel driver in use: bcma-pci-bridge

        $ rfkill list
        0: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no
        1: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no

        $ lsmod
        Module                  Size  Used by
        dm_crypt               22820  1
        arc4                   12615  2
        brcmsmac              550698  0
        coretemp               13355  0
        kvm_intel             132891  0
        parport_pc             28152  0
        kvm                   443165  1 kvm_intel
        ppdev                  17073  0
        cordic                 12574  1 brcmsmac
        brcmutil               14755  1 brcmsmac
        mac80211              606457  1 brcmsmac
        cfg80211              510937  2 brcmsmac,mac80211
        bnep                   18036  2
        rfcomm                 42641  12
        joydev                 17377  0
        applesmc               19353  0
        input_polldev          13896  1 applesmc
        snd_hda_codec_hdmi     36913  1
        microcode              22881  0
        snd_hda_codec_cirrus   23829  1
        nls_iso8859_1          12713  1
        uvcvideo               80847  0
        btusb                  22474  0
        snd_hda_intel          39619  3
        videobuf2_vmalloc      13056  1 uvcvideo
        snd_hda_codec         136453  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec_cirrus
        bcm5974                17347  0
        bluetooth             228619  22 bnep,btusb,rfcomm
        snd_hwdep              13602  1 snd_hda_codec
        lpc_ich                17061  0
        videobuf2_memops       13202  1 videobuf2_vmalloc
        videobuf2_core         40513  1 uvcvideo
        videodev              129260  2 uvcvideo,videobuf2_core
        bcma                   41051  1 brcmsmac
        snd_pcm                97451  3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
        snd_page_alloc         18710  2 snd_pcm,snd_hda_intel
        snd_seq_midi           13324  0
        snd_seq_midi_event     14899  1 snd_seq_midi
        snd_rawmidi            30180  1 snd_seq_midi
        snd_seq                61554  2 snd_seq_midi_event,snd_seq_midi
        snd_seq_device         14497  3 snd_seq,snd_rawmidi,snd_seq_midi
        snd_timer              29425  2 snd_pcm,snd_seq
        snd                    68876  16 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_hda_codec_cirrus
        mei                    41158  0
        soundcore              12680  1 snd
        apple_bl               13673  0
        mac_hid                13205  0
        lp                     17759  0
        parport                46345  3 lp,ppdev,parport_pc
        usb_storage            57204  0
        hid_apple              13237  0
        hid_generic            12540  0
        ghash_clmulni_intel    13259  0
        aesni_intel            55399  399
        aes_x86_64             17255  1 aesni_intel
        xts                    12885  1 aesni_intel
        lrw                    13257  1 aesni_intel
        gf128mul               14951  2 lrw,xts
        ablk_helper            13597  1 aesni_intel
        cryptd                 20373  4 ghash_clmulni_intel,aesni_intel,ablk_helper
        i915                  600351  3
        ahci                   25731  3
        libahci                31364  1 ahci
        video                  19390  1 i915
        i2c_algo_bit           13413  1 i915
        drm_kms_helper         49394  1 i915
        usbhid                 47074  0
        drm                   286313  4 i915,drm_kms_helper
        hid                   101002  3 hid_generic,usbhid,hid_apple

        $ dmesg | egrep 'b43|bcma|brcm|[F]irm'
        [    0.055025] [Firmware Bug]: ioapic 2 has no mapping iommu, interrupt remapping will be disabled
        [    0.152336] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
        [    2.187681] pci_root PNP0A08:00: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-99] only partially covers this bridge
        [   12.553600] bcma-pci-bridge 0000:02:00.0: enabling device (0000 -> 0002)
        [   12.553657] bcma: bus0: Found chip with id 0xA8D8, rev 0x01 and package 0x08
        [   12.553688] bcma: bus0: Core 0 found: ChipCommon (manuf 0x4BF, id 0x800, rev 0x22, class 0x0)
        [   12.553715] bcma: bus0: Core 1 found: IEEE 802.11 (manuf 0x4BF, id 0x812, rev 0x17, class 0x0)
        [   12.553764] bcma: bus0: Core 2 found: PCIe (manuf 0x4BF, id 0x820, rev 0x0F, class 0x0)
        [   12.605777] bcma: bus0: Bus registered
        [   12.852925] brcmsmac bcma0:0: mfg 4bf core 812 rev 23 class 0 irq 17
        [   13.085176] brcmsmac bcma0:0: brcms_ops_bss_info_changed: qos enabled: false (implement)
        [   13.085186] brcmsmac bcma0:0: brcms_ops_config: change power-save mode: false (implement)
        [   20.862617] brcmsmac bcma0:0: brcmsmac: brcms_ops_bss_info_changed: associated
        [   20.862622] brcmsmac bcma0:0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 0 (implement)
        [   20.862625] brcmsmac bcma0:0: brcms_ops_bss_info_changed: qos enabled: true (implement)
        [   20.897957] brcmsmac bcma0:0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement)

        $ iwconfig
        lo        no wireless extensions.
        wlan0     IEEE 802.11abgn  ESSID:"wlan"
                  Mode:Managed  Frequency:5.22 GHz  Access Point: E0:46:9A:4E:63:9A
                  Bit Rate=65 Mb/s   Tx-Power=17 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=63/70  Signal level=-47 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:13  Invalid misc:56   Missed beacon:0
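    One cheap experiment, offered only as a hedged diagnostic sketch (the interface name is assumed, and this narrows the fault down rather than fixing it): if bouncing just the driver restores full speed the way a reboot does, the problem is state inside brcmsmac rather than the router or the rest of the stack.

        # Double-check power saving really is off at the interface level.
        sudo iw dev wlan0 get power_save

        # When the slowdown hits, reload only the wifi driver; NetworkManager
        # should reassociate within a few seconds.
        sudo modprobe -r brcmsmac && sudo modprobe brcmsmac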


  • Magento hosting on a budget

    - by spa
    I have to do a setup for Magento. My constraints are primarily ease of setup, fault tolerance/failover, and cost. I have three identical physical servers to get the job done. Each server node has an i7 quad core, 16 GB RAM, and 2x3 TB HD in a software RAID 1 configuration. Each node runs Ubuntu 12.04 right now. I have an additional IP address which can be routed to any of these nodes.

    The Magento shop has at most 1000 products, 50% of them bundle products. I would estimate that at most 100 users are active at once. This leads me to the conclusion that performance is not the top priority here.

    My first setup idea: one node (lb) runs nginx as a load balancer (a config sketch follows below). The additional IP is used with the domain name and routed to this node by default. Nginx distributes the load equally to the other two nodes (shop1, shop2). Shop1 and shop2 are configured identically: each runs Apache2 and MySQL, and the MySQLs are configured with master/slave replication. My failover strategy:

    - lb fails: route the IP to shop1 (the MySQL master), continue.
    - shop1 fails: lb handles that automatically; promote the MySQL slave on shop2 to master, reconfigure Magento to use shop2 for writes, continue.
    - shop2 fails: lb handles that automatically, continue.

    Is this a sane strategy? Has anyone done a similar setup with Magento?

    My second setup idea: use DRBD for storing the MySQL data files on shop1 and shop2. I understand that in this scenario only one node/MySQL instance can be active while the other is a hot standby. So in case shop1 fails, I would start up MySQL on shop2, route the IP to shop2, and continue. I like this because the MySQL setup is easier and the nodes can be configured 99% identically. In this case the load balancer becomes useless and I have a spare server.

    My third setup idea: master-master replication of the MySQL databases. However, in my opinion this might be tricky, as Magento isn't built for this scenario (e.g. conflicting ids for new rows). I would not do that until I've heard of a working example.

    Could you give me advice on which route to follow? There seems to be no single "good" way to do it. For example, I've read blog posts that describe a MySQL master/slave setup for Magento, but elsewhere I read that data might get duplicated when the slave lags behind the master (e.g. when an order is placed, a customer might get created twice). I'm kind of lost here.
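    For the first idea, the nginx side of the lb node can stay very small. This is a hedged sketch only - the internal host names, the public domain, and the health-check thresholds are placeholders, not details from the question:

        # /etc/nginx/sites-available/magento-lb -- hedged sketch
        upstream magento_backend {
            server shop1.internal:80 max_fails=3 fail_timeout=30s;
            server shop2.internal:80 max_fails=3 fail_timeout=30s;
        }

        server {
            listen 80;
            server_name shop.example.com;

            location / {
                proxy_pass http://magento_backend;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    The max_fails/fail_timeout pair is what gives the automatic "lb will handle that" behavior for a dead shop node.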


  • Facing issues in setting up a VPN connection (IKEv1) using an iPhone (default Cisco VPN client) and a strongSwan 4.5.0 server

    - by Kushagra Bhatnagar
    I am facing issues in setting up a VPN connection (IKEv1) between an iPhone (default Cisco VPN client) and a strongSwan 4.5.0 server. The strongSwan server runs on Ubuntu Linux, which is connected to a wifi hotspot. This is the guide which was used. I generated the CA, server, and client certificates, with only one difference: while generating the server certificate, instead of CN=vpn.strongswan.org as per the guide, I changed the CN to CN=192.168.43.212.

    Once the certificates were generated, clientCert.p12 and caCert.pem were sent to the phone via mail and installed on the iPhone. After installation I noticed that the certificates are considered trusted. Below are the IP addresses assigned to the various interfaces:

        Linux server wlan0 interface (where the server runs): 192.168.43.212
        iPhone eth0 interface (attached to the same wifi hotspot): 192.168.43.72

    Here is a snapshot of the client configuration:

        Description      Strong swan
        Server           192.168.43.212
        Account          ipsecvpn
        Password         *****
        Use certificate  ON
        Certificate      client

    The username and password above are in sync with the ipsec.secrets file. I am using the following ipsec.conf configuration:

        # basic configuration
        config setup
                plutodebug=all
                # crlcheckinterval=600
                # strictcrlpolicy=yes
                # cachecrls=yes
                nat_traversal=yes
                # charonstart=yes
                plutostart=yes

        # Add connections here.
        # Sample VPN connections
        conn ios1
                keyexchange=ikev1
                authby=xauthrsasig
                xauth=server
                left=%defaultroute
                leftsubnet=0.0.0.0/0
                leftfirewall=yes
                leftcert=serverCert.pem
                right=192.168.43.72
                rightsubnet=10.0.0.0/24
                rightsourceip=10.0.0.2
                rightcert=clientCert.pem
                pfs=no
                auto=add

    With these settings, when I enable the VPN on the iPhone it says it could not verify the server certificate. I ran Wireshark on the Linux server and observed that some ISAKMP message exchanges between client and server initially succeed, but before authorization the client sends an informational message, and soon after that the client shows the "could not verify server certificate" popup. The capture logs on the strongSwan server show the errors below. From auth.log:

        Apr 25 20:16:08 Linux pluto[4025]: | ISAKMP version: ISAKMP Version 1.0
        Apr 25 20:16:08 Linux pluto[4025]: | exchange type: ISAKMP_XCHG_INFO
        Apr 25 20:16:08 Linux pluto[4025]: | flags: ISAKMP_FLAG_ENCRYPTION
        Apr 25 20:16:08 Linux pluto[4025]: | message ID: 9d 1a ea 4d
        Apr 25 20:16:08 Linux pluto[4025]: | length: 76
        Apr 25 20:16:08 Linux pluto[4025]: | ICOOKIE: f6 b7 06 b2 b1 84 5b 93
        Apr 25 20:16:08 Linux pluto[4025]: | RCOOKIE: 86 92 a0 c2 a6 2f ac be
        Apr 25 20:16:08 Linux pluto[4025]: | peer: c0 a8 2b 48
        Apr 25 20:16:08 Linux pluto[4025]: | state hash entry 8
        Apr 25 20:16:08 Linux pluto[4025]: | state object not found
        Apr 25 20:16:08 Linux pluto[4025]: packet from 192.168.43.72:500: Informational Exchange is for an unknown (expired?) SA
        Apr 25 20:16:08 Linux pluto[4025]: | next event EVENT_RETRANSMIT in 8 seconds for #8
        Apr 25 20:16:16 Linux pluto[4025]: |
        Apr 25 20:16:16 Linux pluto[4025]: | *time to handle event
        Apr 25 20:16:16 Linux pluto[4025]: | event after this is EVENT_RETRANSMIT in 2 seconds
        Apr 25 20:16:16 Linux pluto[4025]: | handling event EVENT_RETRANSMIT for 192.168.43.72 "ios1" #8
        Apr 25 20:16:16 Linux pluto[4025]: | sending 76 bytes for EVENT_RETRANSMIT through wlan0 to 192.168.43.72:500:
        Apr 25 20:16:16 Linux pluto[4025]: | a6 a5 86 41 4b fb ff 99 c9 18 34 61 01 7b f1 d9
        Apr 25 20:16:16 Linux pluto[4025]: | 08 10 06 01 e9 1c ea 60 00 00 00 4c ba 7d c8 08
        Apr 25 20:16:16 Linux pluto[4025]: | 13 47 95 18 19 31 45 30 2e 22 f9 4d 85 2c 27 bc
        Apr 25 20:16:16 Linux pluto[4025]: | 9e 9b e1 ae 1e 35 51 6f ab 80 f5 73 3c 15 8d 20
        Apr 25 20:16:16 Linux pluto[4025]: | 4b 46 47 86 50 24 3f 13 15 7d d5 17
        Apr 25 20:16:16 Linux pluto[4025]: | inserting event EVENT_RETRANSMIT, timeout in 40 seconds for #8
        Apr 25 20:16:16 Linux pluto[4025]: | next event EVENT_RETRANSMIT in 2 seconds for #10
        Apr 25 20:16:16 Linux pluto[4025]: | rejected packet:
        Apr 25 20:16:16 Linux pluto[4025]: |
        Apr 25 20:16:16 Linux pluto[4025]: | control:
        Apr 25 20:16:16 Linux pluto[4025]: | 30 00 00 00 00 00 00 00 00 00 00 00 0b 00 00 00
        Apr 25 20:16:16 Linux pluto[4025]: | 6f 00 00 00 02 03 03 00 00 00 00 00 00 00 00 00
        Apr 25 20:16:16 Linux pluto[4025]: | 02 00 00 00 c0 a8 2b 48 00 00 00 00 00 00 00 00
        Apr 25 20:16:16 Linux pluto[4025]: | name:
        Apr 25 20:16:16 Linux pluto[4025]: | 02 00 01 f4 c0 a8 2b 48 00 00 00 00 00 00 00 00
        Apr 25 20:16:16 Linux pluto[4025]: ERROR: asynchronous network error report on wlan0 for message to 192.168.43.72 port 500, complainant 192.168.43.72: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)]

    Can anybody shed some light on this error and how to solve the issue?
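    One thing worth ruling out (an assumption on my part, not something these logs prove): Apple's built-in client generally insists that the server certificate carry a subjectAltName matching the address the phone dials, and a bare CN is not always enough. Re-issuing the server certificate with a SAN, along the lines of the guide's ipsec pki commands, would look roughly like this:

        # Hedged sketch: file names follow the guide's conventions;
        # --san 192.168.43.212 is the important addition.
        ipsec pki --pub --in serverKey.pem | \
            ipsec pki --issue --cacert caCert.pem --cakey caKey.pem \
                      --dn "C=CH, O=strongSwan, CN=192.168.43.212" \
                      --san 192.168.43.212 > serverCert.pem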


  • opennms postgres connection slow

    - by krisdigitx
    I am running the OpenNMS application server on a physical server and the database on an ESXi VM. Recently the OpenNMS web console has been very slow to load, so I deleted most of the events from the database table. Now both servers have no load at all, and a psql connection from the application server to the database server is also very fast, but somehow the OpenNMS web console is still slow. This is the strace output for the OpenNMS process:

        18629 futex(0x2aaac77d8a84, FUTEX_WAIT_PRIVATE, 453, NULL <unfinished ...>
        3015  futex(0x2aaabc4a2ee4, FUTEX_WAIT_PRIVATE, 323, NULL <unfinished ...>
        10863 futex(0x2aaabbebaa94, FUTEX_WAIT_PRIVATE, 395, NULL <unfinished ...>
        25260 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10859 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10982 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        3011  <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        25260 futex(0x2aaae098fc28, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        10982 futex(0x2aaac0eaf928, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        3011  futex(0x2aaab0cb1728, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        10859 futex(0x2aaac062c328, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        25260 <... futex resumed> ) = 0
        10982 <... futex resumed> ) = 0
        3011  <... futex resumed> ) = 0
        10859 <... futex resumed> ) = 0
        25260 futex(0x2aaabc38b6b4, FUTEX_WAIT_PRIVATE, 443, NULL <unfinished ...>
        10982 futex(0x2aaabc5d7b94, FUTEX_WAIT_PRIVATE, 99, NULL <unfinished ...>
        3011  futex(0x2aaac7c55334, FUTEX_WAIT_PRIVATE, 183, NULL <unfinished ...>
        10859 futex(0x2aaabbb8c9d4, FUTEX_WAIT_PRIVATE, 347, NULL <unfinished ...>
        10846 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10846 futex(0x2aaae9022428, FUTEX_WAKE_PRIVATE, 1) = 0
        10846 futex(0x2aaabe0030b4, FUTEX_WAIT_PRIVATE, 251, NULL <unfinished ...>
        20281 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        14100 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        2925  <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10843 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        20281 futex(0x2aaac7e93628, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        14100 futex(0x2aaac04e8c28, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        2925  futex(0x2aaaec085528, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        10843 futex(0x2aaab20b0528, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>

    It shows lots of connection timeouts. I think it's the connection between the Java application and the database that is causing the issues. Any ideas how to troubleshoot this?
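    For what it's worth, futex waits that end in ETIMEDOUT are usually just JVM threads parking with a timeout, so strace alone can't say which thread is blocked on what. A JVM thread dump is the more telling artifact; a hedged sketch, where the opennms service user and the bootstrap process name are assumptions about a stock install:

        # Take 2-3 dumps ~10 s apart and compare the BLOCKED/WAITING threads
        # (jstack ships with the JDK).
        sudo -u opennms jstack -l $(pgrep -f opennms_bootstrap) > /tmp/opennms-threads.txt

        # Or ask the JVM to print a dump into its own output log:
        sudo kill -3 $(pgrep -f opennms_bootstrap)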


  • RAID 1 array won't assemble after power outage. How do I fix this ext4 mirror?

    - by Forkrul Assail
    Two ext4 drives in RAID 1 with mdadm won't reassemble after the power went out for an extended period (the UPS drained). After turning the machine back on, mdadm said that the array was degraded, after which it took about 2 days for a full resync, which completed without problems. On trying to remount the array I get:

        mount: you must specify the filesystem type

    The relevant line of /etc/fstab:

        /dev/md127 /media/mediapool ext4 defaults 0 0

    dmesg | tail (on trying to mount) says:

        [ 1050.818782] EXT3-fs (md127): error: can't find ext3 filesystem on dev md127.
        [ 1050.849214] EXT4-fs (md127): VFS: Can't find ext4 filesystem
        [ 1050.944781] FAT-fs (md127): invalid media value (0x00)
        [ 1050.944782] FAT-fs (md127): Can't find a valid FAT filesystem
        [ 1058.272787] EXT2-fs (md127): error: can't find an ext2 filesystem on dev md127.

    cat /proc/mdstat says:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md127 : active (auto-read-only) raid1 sdj[2] sdi[0]
              2930135360 blocks super 1.2 [2/2] [UU]

        unused devices: <none>

    fsck /dev/md127 says:

        fsck from util-linux 2.20.1
        e2fsck 1.42 (29-Nov-2011)
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/md127

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    mdadm -E /dev/sdi gives me:

        /dev/sdi:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 37ac1824:eb8a21f6:bd5afd6d:96da6394
                   Name : sojourn:33
          Creation Time : Sat Nov 10 10:43:52 2012
             Raid Level : raid1
           Raid Devices : 2

         Avail Dev Size : 5860271016 (2794.40 GiB 3000.46 GB)
             Array Size : 2930135360 (2794.39 GiB 3000.46 GB)
          Used Dev Size : 5860270720 (2794.39 GiB 3000.46 GB)
            Data Offset : 262144 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 3e6e9a4f:6c07ab3d:22d47fce:13cecfd0

            Update Time : Tue Nov 13 20:34:18 2012
               Checksum : f7d10db9 - correct
                 Events : 27

           Device Role : Active device 0
           Array State : AA ('A' == active, '.' == missing)

        boot@boot ~ $ sudo mdadm -E /dev/sdj
        /dev/sdj:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 37ac1824:eb8a21f6:bd5afd6d:96da6394
                   Name : sojourn:33
          Creation Time : Sat Nov 10 10:43:52 2012
             Raid Level : raid1
           Raid Devices : 2

         Avail Dev Size : 5860271016 (2794.40 GiB 3000.46 GB)
             Array Size : 2930135360 (2794.39 GiB 3000.46 GB)
          Used Dev Size : 5860270720 (2794.39 GiB 3000.46 GB)
            Data Offset : 262144 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 7fb84af4:e9295f7b:ede61f27:bec0cb57

            Update Time : Tue Nov 13 20:34:18 2012
               Checksum : b9d17fef - correct
                 Events : 27

           Device Role : Active device 1
           Array State : AA ('A' == active, '.' == missing)

        machine@user ~ $ dmesg | tail
        [   61.785866] init: alsa-restore main process (2736) terminated with status 99
        [   68.433548] eth0: no IPv6 routers present
        [  534.142511] EXT4-fs (sdi): ext4_check_descriptors: Block bitmap for group 0 not in group (block 2838187772)!
        [  534.142518] EXT4-fs (sdi): group descriptors corrupted!
        [  546.418780] EXT2-fs (sdi): error: couldn't mount because of unsupported optional features (240)
        [  549.654127] EXT3-fs (sdi): error: couldn't mount because of unsupported optional features (240)

    Since this is RAID 1, it was suggested that I try to mount or fsck the drives separately. After a long fsck on one drive, it ended with this as the tail:

        Illegal double indirect block (2298566437) in inode 39717736. CLEARED.
        Illegal block #4231180 (2611866932) in inode 39717736. CLEARED.
        Error storing directory block information (inode=39717736, block=0, num=1092368): Memory allocation failed
        Recreate journal? yes
        Creating journal (32768 blocks): Done.
        *** journal has been re-created - filesystem is now ext3 again ***

    The drive however still doesn't want to mount:

        $ dmesg | tail
        [  170.674659] md: export_rdev(sdc)
        [  170.675152] md: export_rdev(sdc)
        [  195.275288] md: export_rdev(sdc)
        [  195.275876] md: export_rdev(sdc)
        [ 1338.540092] CE: hpet increased min_delta_ns to 30169 nsec
        [26125.734105] EXT4-fs (sdc): ext4_check_descriptors: Checksum for group 0 failed (43502!=37987)
        [26125.734115] EXT4-fs (sdc): group descriptors corrupted!
        [26182.325371] EXT3-fs (sdc): error: couldn't mount because of unsupported optional features (240)
        [27083.316519] EXT4-fs (sdc): ext4_check_descriptors: Checksum for group 0 failed (43502!=37987)
        [27083.316530] EXT4-fs (sdc): group descriptors corrupted!

    Please help me fix this. I never in my wildest nightmares thought a complete mirror would die this badly. Am I missing something? Any suggestions on fixing this? Could someone explain why it would resync after the power outage, only to seemingly nuke the drive? Thanks for reading; any help is much appreciated. I've tried everything I can think of, including booting and filesystem-checking with SystemRescue and Ubuntu live-boot discs.
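    Before anything writes to the array again, it may be worth pointing e2fsck at a backup superblock on the md device itself, not the raw members - with 1.2 metadata the filesystem starts 262144 sectors into each disk, which is one reason fsck and mount against /dev/sdi directly report garbage. A hedged sketch; 32768 and a 4 KiB block size are common defaults, not values recovered from this particular array:

        # List where mke2fs *would* place superblock backups; -n simulates
        # and writes nothing to the device.
        sudo mke2fs -n /dev/md127

        # Preview repairs read-only against a backup superblock first.
        sudo e2fsck -n -b 32768 -B 4096 /dev/md127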


  • How to resolve high CPU + excessive stat("/etc/localtime") and clock_gettime(CLOCK_REALTIME) calls

    - by Yemster
    I've been experiencing really high CPU on a Ruby on Rails app (see stack below) and have been trying to diagnose the possible causes, to no avail.

    Stack:

        ruby 1.9.3
        rails 3.2.6
        Apache/2.2.21 (Debian)
        Phusion Passenger 3.0.11

    Whenever I run strace against the spiking Rack process PID (see the top excerpt below), I see a tonne of stat("/etc/localtime") and clock_gettime(CLOCK_REALTIME) calls and have no idea how to stop these. Excerpt from top showing the running PID:

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        11674 www-user  20   0  313m 182m 5076 R   99  2.3  63:04.60 Rack: /var/www/my_rails_app/current
        11634 www-user  20   0  411m 216m 5144 S   10  2.7 197:55.63 Rack: /var/www/my_rails_app/current

    strace snippet below:

        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 141474018}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 141577456}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 143073982}) = 0
        [pid 11674] poll([{fd=15, events=POLLIN|POLLPRI}], 1, 0) = 0 (Timeout)
        [pid 11674] write(15, "b\0\0\0\3SELECT `images`.* FROM `ima"..., 102) = 102
        [pid 11674] read(15, "\1\0\0\1\0229\0\0\2\3def\23myappy_productio"..., 16384) = 2063
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 144138035}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        ...
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 154076443}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 154189429}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 157185700}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 157298770}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 165076003}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 165212572}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 167542679}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 167683436}) = 0
        ...
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 62052248}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 62182486}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 62919948}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 63057266}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 63751707}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 73730686}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 75874687}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 76077133}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 78205019}) = 0
        ...
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 89370879}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 89583247}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 91637614}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 91782149}) = 0

    I've Googled around and come across a number of suggestions, which I've tried with no success. Things tried so far:

    - Setting the time zone as recommended here. It made no difference, and the issue still persists. Content of my /etc/localtime:

        TZif2UTCTZif2UTC UTC0

    - The recommended fix for the leap-second bug: date -s 'date'. No joy so far.

    I'm fresh out of ideas, so any help/advice on how to diagnose or resolve this would be greatly appreciated.
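    One suggestion that isn't on the list above: when the TZ environment variable is unset, glibc (and Ruby's Time on top of it) re-reads /etc/localtime on essentially every time lookup, which matches the strace pattern exactly. Pinning TZ for the Apache/Passenger processes is a cheap experiment; a hedged sketch assuming the stock Debian apache2 layout:

        # Tell glibc/Ruby to trust a fixed zone file instead of stat()ing it
        # on every Time call; the leading colon means "this file".
        echo 'export TZ=:/etc/localtime' | sudo tee -a /etc/apache2/envvars
        sudo service apache2 restart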


  • Possible to give one connection to each IP?

    - by Alice
    I am having overloading problems: too many connections, and some IPs have more than 20 connections at once. I run this command to get the totals per IP:

        netstat -anp | grep 'tcp\|udp' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n

    This is the output:

              1 106.3.98.81
              1 106.3.98.82
              1 108.171.251.2
              1 110.85.103.207
              1 111.161.30.217
              1 113.53.103.55
              1 119.235.237.20
              1 124.106.19.34
              1 157.55.32.166
              1 157.55.33.49
              1 157.55.34.28
              1 175.141.103.239
              1 180.76.5.59
              1 180.76.5.61
              1 188.235.165.216
              1 205.213.195.70
              1 216.157.222.25
              1 218.93.205.100
              1 222.77.209.105
              1 27.153.148.109
              1 27.159.194.242
              1 27.159.253.71
              1 54.242.122.201
              1 61.172.50.99
              1 65.55.24.239
              1 71.179.78.5
              1 74.125.136.27
              1 74.125.182.30
              1 74.125.182.36
              1 79.112.225.39
              1 93.190.139.208
              2 124.227.191.67
              2 157.55.33.84
              2 157.55.35.34
              2 190.66.3.107
              2 203.87.153.38
              2 220.161.119.3
              2 221.6.15.156
              2 27.153.148.116
              2 27.159.197.0
              2 96.47.224.42
              3 202.14.70.1
              3 218.6.15.42
              3 222.77.218.226
              3 222.77.224.187
              3 37.59.66.100
              3 46.4.181.244
              3 87.98.254.192
              3 91.207.8.62
              4 188.143.233.222
              4 218.108.168.166
              4 221.12.154.18
              4 93.182.157.8
              4 94.142.128.183
              5 180.246.170.187
              5 8.21.6.226
              6 178.137.94.87
              6 218.93.205.112
              7 199.15.234.222
              9
              9 125.253.97.6
             10 178.137.17.196
             11 46.118.192.179
             12 212.79.14.14
             21 72.201.187.135
             27 0.0.0.0

    Can anyone give me some directions? My server crashed a few times this week because of this. Thanks.

    EDIT: Alright, my error log says:

        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4842 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4843 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4855 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4856 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4861 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4869 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4872 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4873 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4874 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4875 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4876 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4880 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4882 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4885 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4897 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4900 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4901 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4906 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4907 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4925 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4926 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4927 exit, attempting to continue anyway
        [Thu Oct 18 12:17:39 2012] [error] could not make child process 4931 exit, attempting to continue anyway
        [Thu Oct 18 12:17:40 2012] [notice] caught SIGTERM, shutting down
        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20060613+lfs/curl.iso' - /usr/lib/php5/20060613+lfs/curl.iso: cannot open shared object file: No such file or directory in Unknown on line 0
        [Thu Oct 18 12:17:45 2012] [notice] Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny10 with Suhosin-Patch configured -- resuming normal operations

    And I have over a thousand lines saying (each with a different process id):

        [Thu Oct 18 12:17:38 2012] [error] child process 4906 still did not exit, sending a SIGKILL

    I also have lines saying:

        [Wed Oct 17 09:44:58 2012] [error] server reached MaxClients setting, consider raising the MaxClients setting

    My prefork settings:

        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      50
            MaxClients          300
            MaxRequestsPerChild 5000
        </IfModule>
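    To literally cap connections per IP - the question in the title - the iptables connlimit match can do it at the firewall level. A hedged sketch, assuming a web server on port 80 and a limit of 20:

        # Reject new TCP connections to port 80 from any single IP that
        # already holds 20; existing connections are untouched.
        sudo iptables -A INPUT -p tcp --syn --dport 80 \
             -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset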


  • How to delete files and folders that cannot be deleted?

    - by glenneroo
    I have a backup copy of a previous Windows Documents and Settings folder which contains only my original user and, within it, 2 more directories: Favorites and Local Settings. When I try to delete Local Settings I get one error; when I try to delete Favorites, I get a different one (the screenshots did not survive). I ran this in a cmd shell:

        attrib *.* -r -a -s -h /s

    ...but it did not help, nor did it return any errors/warnings. I used Unlocker v1.8.5 and LockHunter repeatedly at multiple levels to see if any files were in use, but both always say: No Files Locked.

    Update #1: I was able to rename the directory, which now gives me a warning before (trying to) delete; if I press Yes (or Yes to All) then I get another error.

    Update #2: I let chkdsk /f run, which required a reboot since it's on my primary system partition. During stage 2 scanning, I received about 40 of these:

        Deleting an index entry from index $0 of file 25.

    ...followed by:

        Deleting index entry cookies in index $I30 of file 37576.

    ...but I still get the first error dialog above when trying to delete. I ran chkdsk again, this time chkdsk /f /r. It produced no messages, and deleting gives the same result.

    Update #3: Digging deeper, the 99 is the name of one of many directories located deep in here:

        C:\Documents and Settings.OLD\User\Local Settings\Application Data\Microsoft\Messenger\[email protected]\SharingMetadata\[email protected]\DFSR\Staging\CS{D4E4AE55-B5E2-F03B-5189-6C4DA6E41788}\

    Inside each of those directories were files with names such as:

        2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-Downloaded.frx

    I noticed that, unlike all the directories, I couldn't rename any of these files. I also noticed that the file and directory names were extremely long: the original directory is 194 characters, and the filenames are 100+ characters. Together the length exceeds the 255-character limit, which is bad and would explain the error message I posted in Update #1.

    Partial solution: rename all directories until the total path length is less than 100 characters. Afterwards I was able to rename the .frx files, not to mention delete everything inside the Local Settings directory. This is only a partial solution, because these (empty) directories are still not deletable, with the same error as above:

        C:\1\2\Favorites\Wien\What To Do..
        C:\1\2\Favorites\Photography\FIRE

    Explorer's properties dialogs for both folders show nothing unusual (screenshots not preserved).

    Update #4 (another partial solution): Using harrymc's answer, combined with a thorough read of the amazing and inconspicuously titled MS KB article "You cannot delete a file or a folder on an NTFS file system volume" (which contains nearly everyone's ideas and then some), I was able to delete the second folder, C:\1\2\Favorites\Photography\FIRE - the problem being that there was an invisible trailing space at the end. I got lucky during an auto-complete while playing around with the del "\\?\<path>" command which he suggested. NOTE: a normal del did NOT work, nor did deleting from Explorer.

    Now all that is left is the first directory, C:\1\2\Favorites\Wien\What To Do.. (and yes, I tried endlessly with multiple combinations of the above solution ;) Keep 'em coming! =)
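    For the remaining folder, the same \\?\ trick extended to directory removal may be worth a shot. A hedged sketch in the entry's own cmd syntax - the exact trailing characters after "What To Do.." are precisely what's in doubt, so tab-completing the name inside the quotes is the safest way to get them right:

        :: The \\?\ prefix turns off Win32 name normalization, so trailing
        :: dots/spaces are passed to NTFS as-is instead of being stripped.
        rd /s /q "\\?\C:\1\2\Favorites\Wien\What To Do.."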


  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID 6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine.

    About 2 weeks ago, I had one of the disks (disk #5; disk #2 had gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to a data recovery company that I can't afford.

    After some digging through old log files, resetting the disk to factory default, etc., I found a few errors that were made during this re-create - I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks.

    The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using" but am a little confused about what the order should be for a proper RAID 6. RAID 5 is pretty well documented in a number of places, but RAID 6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them the way a native 7-disk RAID 6 would have been?

    Thanks! Some more mdadm output that might be helpful. mdadm version:

        [~] # mdadm --version
        mdadm - v2.6.3 - 20th August 2007

    mdadm details from one of the disks in the array:

        [~] # mdadm --examine /dev/sda3
        /dev/sda3:
                  Magic : a92b4efc
                Version : 1.0
            Feature Map : 0x0
             Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                   Name : 0
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
           Raid Devices : 7

          Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
             Array Size : 29286975360 (13965.12 GiB 14994.93 GB)
              Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
           Super Offset : 5857395368 sectors
                  State : clean
            Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af

            Update Time : Tue Jun 10 13:01:06 2014
               Checksum : d275c82d - correct
                 Events : 7036

             Chunk Size : 64K

             Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)
            Array State : Uu_u_uu 2 failed

    mdadm details for the array in the current disk order (based on my best guess, reconstructed from old log files):

        [~] # mdadm --detail /dev/md0
        /dev/md0:
                Version : 01.00.03
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
             Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
          Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
           Raid Devices : 7
          Total Devices : 5
        Preferred Minor : 0
            Persistence : Superblock is persistent

            Update Time : Tue Jun 10 13:01:06 2014
                  State : clean, degraded
         Active Devices : 5
        Working Devices : 5
         Failed Devices : 0
          Spare Devices : 0

             Chunk Size : 64K

                   Name : 0
                   UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                 Events : 7036

            Number   Major   Minor   RaidDevice State
               0       8        3        0      active sync   /dev/sda3
               1       8       19        1      active sync   /dev/sdb3
               2       0        0        2      removed
               3       8       51        3      active sync   /dev/sdd3
               4       0        0        4      removed
               5       8       99        5      active sync   /dev/sdg3
               6       8       83        6      active sync   /dev/sdf3

    Output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc.; the one I'm after is md0):

        [~] # more /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0]
              14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU]

        md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0]
              530048 blocks [2/2] [UU]

        md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [8/7] [UUUUUUU_]
              bitmap: 21/57 pages [84KB], 4KB chunk

        md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1]
              530048 blocks [8/7] [UUUUUUU_]
              bitmap: 37/65 pages [148KB], 4KB chunk

        unused devices: <none>
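    If it comes down to brute-forcing the order, this is the usual shape of the loop - but treat it strictly as a hedged sketch: mdadm --create rewrites the superblocks even with --assume-clean, so it should be run against overlay devices or dd images of the disks, and the candidate list below holds placeholder orders, not a real enumeration of all permutations of the five present members:

        #!/bin/bash
        # Hedged sketch: try candidate disk orders for the 7-slot RAID 6 and
        # report any whose filesystem passes a read-only check.
        # WARNING: work on overlays/images, never the original disks.
        candidates=(
            "/dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3"
            "/dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdf3 /dev/sdg3"
            # ... remaining permutations of the five present members ...
        )
        for order in "${candidates[@]}"; do
            mdadm --stop /dev/md0 2>/dev/null
            mdadm --create /dev/md0 --level=6 --raid-devices=7 --chunk=64 \
                  --metadata=1.0 --assume-clean --run $order
            fsck.ext4 -n /dev/md0 >/dev/null 2>&1 && echo "plausible order: $order"
        done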


  • Capistrano + Nginx + Passenger = 403

    - by slimchrisp
I asked this over at stackoverflow as well, but still haven't received any answers that have helped me to solve this problem. I have spent almost a week at this point trying to solve the issue, and I'm just not making any headway. It seems that this issue is pretty common, but none of the solutions I found online work for me. A buddy of mine is actually creating the same setup, and he is having the same issue. After a few days stuck with the 403 error I started over using this tutorial: http://blog.ninjahideout.com/posts/a-guide-to-a-nginx-passenger-and-rvm-server I had hoped starting from scratch using this tutorial would work, but no dice. Either way, if you view the tutorial you can see what steps I have taken. Here is essentially what I have going on:
- I have a VPS account on linode.com
- Server OS is Ubuntu 10.04
- Local OS (shouldn't matter, but just so you know) used to deploy with Capistrano is Snow Leopard 10.6.6
- I use RVM on the server; the version is 1.2.2
- I was previously on ruby-1.9.2-p0 [ i386 ], but per the tutorial listed above I switched to ree-1.8.7-2010.02 [ i386 ]. Running 'which ruby' from the command line verifies that I am using 1.8.7, with the following output: /usr/local/rvm/rubies/ree-1.8.7-2010.02/bin/ruby
- passenger -v prints the following: Phusion Passenger version 3.0.2
- Running 'nginx -v' gives me a message that the command nginx could not be found. The server is definitely there and running, as I can use nginx to serve static files, but this could have something to do with my problem.
- I have two users dealing with the install: root, which I used to install everything, and deployer, a user I created specifically for deploying my applications
- My web app directory is in the deployer user's home directory, as follows: /home/deployer/webapps/mysite.com/public
- Per Capistrano's default deploy, a symbolic link called current is created in the public folder, and points to /home/deployer/webapps/mysite.com/public/releases/most_current_release
- I have chmodded the deployer directory recursively to 777
- /opt/nginx permissions: rwxr-xr-x
- /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2 permissions: rwxrwsrwx
My nginx config file has gone through just short of an eternity of variations, but currently looks like this:
==================================================================================
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    passenger_root /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2;
    passenger_ruby /usr/local/rvm/bin/passenger_ruby;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        # listen *:80;
        server_name mysite.com www.mysite.com;
        root /home/deployer/webapps/mysite.com/public/current;
        passenger_enabled on;
        passenger_friendly_error_pages on;
        access_log logs/mysite.com/server.log;
        error_log logs/mysite.com/error.log info;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
==================================================================================
I bounce nginx, hit the site, and boom: 403, and the logs say directory index of /home/deployer... is forbidden. As others with a similar problem have said, you can drop an index.html into public/releases/current_release and it will render, but Rails doesn't work. That's basically it. At this point I have just about completely exhausted every possible solution attempt I can think of.
I am a programmer and definitely not a sysadmin, so I am 99% sure this has something to do with permissions that I have hosed, but for the life of me I just can't figure out where. If anyone can help I would really, really appreciate it. If there are any specific permissions you want me to check (i.e. groups/permissions), please include the commands to do so as well; hopefully this will help others who read this post in the future. Let me know if there is any other information I can provide, and thanks in advance!
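One hedged thing to double-check, given the description above (paths assumed from the question, not confirmed): with a stock Capistrano layout the current symlink lives at the application root, so nginx's root usually points at current/public rather than public/current. From the shell:

    # Is the symlink where nginx expects it, and can the worker traverse every component?
    ls -ld /home/deployer/webapps/mysite.com/current
    namei -l /home/deployer/webapps/mysite.com/current/public
    # If the layout is the Capistrano default, the nginx directive would read:
    #   root /home/deployer/webapps/mysite.com/current/public;

namei -l prints the owner and permissions of every path component on the way down, which is usually the fastest way to spot the directory that is blocking the nginx worker.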

    Read the article

  • Is my SMTP server being abused to send spam?

    - by Milos
I have a server with Postfix on it. For the past several days I have noticed a lot of processes running there; when I checked, a lot of emails were being sent. Here is an example from the mail log:
Aug 18 11:54:56 mem postfix/smtpd[9963]: connect from dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167]
Aug 18 11:54:56 mem postfix/smtpd[9301]: connect from unknown[186.113.45.4]
Aug 18 11:54:56 mem postfix/smtpd[9963]: 525E7114012D: client=dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167]
Aug 18 11:54:56 mem postfix/cleanup[9970]: 525E7114012D: message-id=<B55835C9027BFA9D16CCBB556DB2F48BB82DF004000480BA-db0c3ce8aa74446411898d0d2feb3001@email.filmforthoughtinc.com>
Aug 18 11:54:56 mem postfix/qmgr[2581]: 525E7114012D: from=<[email protected]>, size=10702, nrcpt=1 (queue active)
Aug 18 11:54:56 mem postfix/smtpd[9301]: EC52711401DC: client=unknown[186.113.45.4]
Aug 18 11:54:57 mem postfix/smtpd[9963]: disconnect from dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167]
Aug 18 11:54:57 mem postfix/cleanup[8597]: EC52711401DC: message-id=<4C905D97606B436FE50C6F738DE014D9D84F2185BA815D81-1a4dbe6fc2bfcc8183f5faf901cfa15e@email.manguerasespecializadas.com>
Aug 18 11:54:57 mem postfix/smtp[9971]: 525E7114012D: to=<[email protected]>, relay=mail.mdpi.com[209.237.236.228]:25, delay=1.2, delays=0.55/0/0.45/0.16, dsn=5.1.1, status=bounced (host mail.mdpi.com[209.237.236.228] said: 550 5.1.1 <[email protected]>: Recipient address rejected: mdpi.com (in reply to RCPT TO command))
Aug 18 11:54:57 mem postfix/cleanup[10067]: 8B1E11140268: message-id=<[email protected]>
Aug 18 11:54:57 mem postfix/bounce[10001]: 525E7114012D: sender non-delivery notification: 8B1E11140268
Aug 18 11:54:57 mem postfix/qmgr[2581]: 8B1E11140268: from=<>, size=12693, nrcpt=1 (queue active)
Aug 18 11:54:57 mem postfix/qmgr[2581]: 525E7114012D: removed
Aug 18 11:54:57 mem postfix/qmgr[2581]: EC52711401DC: from=<[email protected]>, size=10978, nrcpt=1 (queue active)
Aug 18 11:54:57 mem postfix/smtp[10013]: connect to aspmx.l.google.com[2607:f8b0:400d:c03::1b]:25: Network is unreachable
Aug 18 11:54:57 mem postfix/smtpd[9301]: disconnect from unknown[186.113.45.4]
Aug 18 11:54:58 mem postfix/smtp[10013]: 8B1E11140268: to=<[email protected]>, relay=aspmx.l.google.com[74.125.22.26]:25, delay=0.5, delays=0.06/0/0.28/0.16, dsn=5.1.1, status=bounced (host aspmx.l.google.com[74.125.22.26] said: 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://support.google.com/mail/bin/answer.py?answer=6596 l7si24621420qad.26 - gsmtp (in reply to RCPT TO command))
Aug 18 11:54:58 mem postfix/qmgr[2581]: 8B1E11140268: removed
Aug 18 11:54:58 mem postfix/smtp[9971]: EC52711401DC: to=<[email protected]>, relay=mail.mdpi.com[209.237.236.228]:25, delay=1.2, delays=0.66/0/0.44/0.12, dsn=5.1.1, status=bounced (host mail.mdpi.com[209.237.236.228] said: 550 5.1.1 <[email protected]>: Recipient address rejected: mdpi.com (in reply to RCPT TO command))
Aug 18 11:54:58 mem postfix/cleanup[9970]: 414361140254: message-id=<[email protected]>
Aug 18 11:54:58 mem postfix/bounce[10001]: EC52711401DC: sender non-delivery notification: 414361140254
Aug 18 11:54:58 mem postfix/qmgr[2581]: 414361140254: from=<>, size=13057, nrcpt=1 (queue active)
Aug 18 11:54:58 mem postfix/qmgr[2581]: EC52711401DC: removed
Aug 18 11:55:01 mem postfix/smtp[10002]: 414361140254: to=<[email protected]>, relay=manguerasespecializadas.com[99.198.96.210]:25, delay=2.9, delays=0.04/0/2.1/0.84, dsn=2.0.0, status=sent (250 OK id=1XJPGs-0007BE-OI)
Aug 18 11:55:01 mem postfix/qmgr[2581]: 414361140254: removed
Is my server being attacked, or abused to send spam? How can I check that? Thank you.
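A few hedged first checks from the shell (the log path may be /var/log/mail.log on Debian-family systems; adjust as needed):

    # Who is allowed to relay through this box? mynetworks should be loopback/local only
    postconf mynetworks smtpd_recipient_restrictions
    # What is sitting in the queue right now?
    mailq | tail -n 20
    # Which client IPs are hammering smtpd?
    grep 'postfix/smtpd.*connect from' /var/log/maillog | awk '{print $NF}' | sort | uniq -c | sort -rn | head

If mynetworks is wider than your own subnets, or the restrictions allow unauthenticated relaying, that alone would explain the traffic. The bounced deliveries in the log above also look like backscatter, i.e. mail accepted from strangers and then rejected downstream, generating non-delivery reports to forged senders.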

    Read the article

  • /phpTest/zologize/axa.php? Another botnet?

    - by M132
Staring at the log made me think: what is /phpTest/zologize/axa.php, and why are bots looking for it? Previously, I had lots of /HNAP1/ requests. Requesting /HNAP1/ from the IPs in the log revealed that all of them were sent by Linksys routers. Three months later, these requests turned out to be generated by a router worm called TheMoon. But requesting /phpTest/zologize/axa.php from these servers returns a 404 error. How did these servers get infected, and how can I protect mine from this?
124.11.224.69 - - [02/Feb/2014:00:37:16 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
140.113.238.121 - - [21/Feb/2014:01:24:32 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
77.121.132.79 - - [22/Feb/2014:00:03:56 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
142.4.201.210 - - [24/Feb/2014:21:54:33 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
212.83.168.39 - - [24/Feb/2014:23:16:00 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
87.117.229.210 - - [26/Feb/2014:06:34:58 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
78.100.82.99 - - [26/Feb/2014:08:25:48 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
198.50.205.219 - - [26/Feb/2014:09:59:11 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
210.60.142.107 - - [27/Feb/2014:00:12:12 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
101.109.4.73 - - [27/Feb/2014:08:50:46 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
61.91.128.158 - - [27/Feb/2014:08:59:15 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
201.188.41.175 - - [27/Feb/2014:11:25:42 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
220.133.137.2 - - [27/Feb/2014:12:12:46 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
203.156.104.88 - - [28/Feb/2014:18:11:49 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
61.19.52.58 - - [28/Feb/2014:22:02:56 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
84.2.92.40 - - [28/Feb/2014:23:04:17 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
58.64.205.11 - - [01/Mar/2014:06:08:33 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
113.61.200.151 - - [01/Mar/2014:18:25:25 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
178.33.219.12 - - [03/Mar/2014:14:41:48 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
74.63.220.132 - - [04/Mar/2014:01:16:44 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
187.141.230.106 - - [04/Mar/2014:15:39:26 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
103.22.181.146 - - [09/May/2014:17:16:56 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 502 166 "-" "-"
176.31.200.14 - - [10/May/2014:19:52:24 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 68 "-" "-"
124.120.92.70 - - [12/May/2014:16:19:40 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 68 "-" "-"
219.85.198.142 - - [15/May/2014:19:21:22 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
80.84.53.226 - - [23/May/2014:08:58:25 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
87.213.11.165 - - [25/May/2014:06:20:27 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
122.116.220.106 - - [25/May/2014:07:10:21 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
58.8.128.30 - - [29/May/2014:02:43:49 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
142.4.197.135 - - [29/May/2014:11:36:45 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
178.32.243.65 - - [30/May/2014:01:59:53 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
58.8.164.221 - - [30/May/2014:14:04:16 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
140.127.182.15 - - [01/Jun/2014:14:45:40 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
218.166.43.21 - - [01/Jun/2014:16:07:52 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
178.32.188.140 - - [01/Jun/2014:19:11:46 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
94.23.211.173 - - [05/Jun/2014:00:52:52 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
120.117.105.201 - - [05/Jun/2014:04:39:39 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
187.172.27.146 - - [05/Jun/2014:10:20:22 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
203.195.219.91 - - [05/Jun/2014:10:53:42 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
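The 200 responses in the later entries are the part worth worrying about: a server with nothing at that path should keep answering 404, so something is now serving that URL. A couple of hedged checks (document root and server type assumed):

    # What does your own server return for the probe?
    curl -sI http://localhost/phpTest/zologize/axa.php | head -n 1
    # Is anything by that name sitting in the web root?
    find /var/www \( -name 'axa.php' -o -path '*phpTest*' \) 2>/dev/null
    # A blunt stopgap while investigating, if you run nginx (config line, shown as a comment):
    #   location ^~ /phpTest/ { return 444; }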

    Read the article

  • IPv6 host route is deleted after PMTU expires

    - by SAPikachu
I am experimenting with my new IPv6 tunnel setup between my local Ubuntu box and a scratch Linode. I set up some docker containers, and configured a 6in4 tunnel server and IPv6 forwarding on the Linode:
# uname -a
Linux argo 3.15.4-x86_64-linode45 #1 SMP Mon Jul 7 08:42:36 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux
# ip addr
.. snipped ..
48: sit-sapikachu: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN group default
link/sit 106.185.41.115 peer 1.2.3.4
inet6 fd00::1/64 scope global valid_lft forever preferred_lft forever
inet6 fe80::6ab9:2973/64 scope link valid_lft forever preferred_lft forever
13: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0 valid_lft forever preferred_lft forever
inet6 fc00::1/64 scope global valid_lft forever preferred_lft forever
inet6 fe80::5484:7aff:fefe:9799/64 scope link valid_lft forever preferred_lft forever
// Docker containers are bridged to docker0
On my local box, I configured a 6in4 tunnel interface to connect to the Linode box, and added a host route to one of the docker containers:
# uname -a
Linux sapikachu-netbox 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
# ip addr
.. snipped ..
16: sit-argo: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default
link/sit 0.0.0.0 peer 106.185.41.115
inet6 fd00::2/64 scope global valid_lft forever preferred_lft forever
inet6 fe80::a97:302/64 scope link valid_lft forever preferred_lft forever
inet6 fe80::ac19:1/64 scope link valid_lft forever preferred_lft forever
inet6 fe80::c0a8:1f0/64 scope link valid_lft forever preferred_lft forever
inet6 fe80::c0a8:1fa/64 scope link valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether *** brd ff:ff:ff:ff:ff:ff
.. snipped ..
inet6 fd00:0:1::1/64 scope global valid_lft forever preferred_lft forever
inet6 fe80::2e0:6fff:fe0e:365e/64 scope link valid_lft forever preferred_lft forever
# ip route replace fc00::1875:8606:d8c1:8a9d via fd00::1    # Add route to docker container
# ip -6 route
.. snipped unrelated routes
fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires 590sec mtu 1472
fd00::/64 dev sit-argo proto kernel metric 256
fd00:0:1::/64 dev eth0 proto kernel metric 256
fe80::/64 dev sit-argo proto kernel metric 256
(Note that the tunnel MTU on my local box differs from the server's; this is intentional, for testing.)
After adding the host route to the docker container (fc00::1875:8606:d8c1:8a9d), I can ping the container without problems until the route expires. After that I can't get a reply anymore. If I run ip -6 route within a few seconds after the expiration, the expiration time of the host route is a negative number:
fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires -1sec
And the output of ip route get fc00::1875:8606:d8c1:8a9d shows that it is routed to my default IPv6 gateway (which of course fails to route it correctly, since the address is not globally routable). After some time, the host route disappears without a trace. This problem doesn't happen if I do either one of the following things:
- Set the MTU of the tunnel on my local box to be the same as the server's (1472). In this case the route has no expiration time in either ip -6 route or ip route get.
- Instead of adding a host route, add a route with a network mask (even /127 works). In this case ip -6 route shows the route without an expiration time; ip route get shows an expiration time, but it is correctly refreshed after each expiration.
Although this problem can easily be worked around, I am curious to know why it happens. Is there an error in my configuration, or is this a kernel bug?
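A hedged workaround sketch, using the addresses and MTU from the output above: pinning the MTU on the route itself should keep the kernel from attaching the expiring PMTU state to the host entry.

    # Re-add the host route with the MTU locked to the tunnel's value
    ip -6 route replace fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo mtu lock 1472
    # Then inspect what the kernel actually cached for the destination
    ip -6 route get fc00::1875:8606:d8c1:8a9d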

    Read the article

  • RedHat 5.5 server does not show per-process memory utilization

    - by Mike S
I have been searching all over the Internet but have not found any leads. I have a system with a memory leak that I am trying to troubleshoot; unfortunately I am not able to see per-process memory utilization. Here are the outputs of the top and ps commands:
Linux SERVER_NAME 2.6.18-194.8.1.el5 #1 SMP Wed Jun 23 10:52:51 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
top - 09:17:13 up 18:43, 3 users, load average: 0.00, 0.00, 0.00
Tasks: 375 total, 1 running, 373 sleeping, 0 stopped, 1 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 32922828k total, 32776712k used, 146116k free, 267128k buffers
Swap: 5245212k total, 0k used, 5245212k free, 32141044k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 10348 744 620 S 0.0 0.0 0:05.65 init
2 root RT -5 0 0 0 S 0.0 0.0 0:00.05 migration/0
3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
5 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/1
6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1
7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1
8 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/2
9 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/2
10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2
11 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/3
12 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/3
13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3
14 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/4
15 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/4
16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4
17 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/5
18 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/5
19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5
20 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/6
% ps -auxf | sort -nr -k 4 | head -10
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.7/FAQ
xfs 6205 0.0 0.0 23316 3892 ? Ss Aug19 0:00 xfs -droppriv -daemon
uuidd 6101 0.0 0.0 60976 224 ? Ss Aug19 0:00 /usr/sbin/uuidd
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
smmsp 6130 0.0 0.0 57900 1784 ? Ss Aug19 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
rpc 5126 0.0 0.0 8052 632 ? Ss Aug19 0:00 portmap
root 99 0.0 0.0 0 0 ? S< Aug19 0:00 [events/1]
root 98 0.0 0.0 0 0 ? S< Aug19 0:00 [events/0]
root 97 0.0 0.0 0 0 ? S< Aug19 0:00 [watchdog/31]
root 96 0.0 0.0 0 0 ? SN Aug19 0:00 [ksoftirqd/31]
root 95 0.0 0.0 0 0 ? S< Aug19 0:00 [migration/31]
Any help with this is appreciated.
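A hedged aside on reading these numbers: top reports 32 GB "used", but 32141044k of that is page cache, which the kernel reclaims on demand, so it is not by itself evidence of a leak. Per-process resident memory can be listed directly:

    # Largest resident sets first
    ps -eo pid,user,rss,vsz,comm --sort=-rss | head -15
    # Cache-aware view of free/used memory
    free -m
    # Track a suspect process over time; a real leak shows as steadily growing RSS
    while sleep 60; do ps -o rss= -p <PID>; done    # <PID> is a placeholder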

    Read the article

  • Mysqld not starting due to apparent db corruption

    - by pitosalas
I am very new to administering MySQL and, unfortunately for me, something caused the database to get clobbered. There are many error messages in the log, and I am not sure how to proceed safely. Can you give me some tips? Here's the log:
110107 15:07:15 mysqld started
110107 15:07:15 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
110107 15:07:15 InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 35 515914826.
InnoDB: Doing recovery: scanned up to log sequence number 35 515915839
InnoDB: 1 transaction(s) which must be rolled back or cleaned up
InnoDB: in total 1 row operations to undo
InnoDB: Trx id counter is 0 1697553664
110107 15:07:15 InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: Starting rollback of uncommitted transactions
InnoDB: Rolling back trx with id 0 1697553198, 1 rows to undo
InnoDB: Error: trying to access page number 3522914176 in space 0,
InnoDB: space name ./ibdata1,
InnoDB: which is outside the tablespace bounds.
InnoDB: Byte offset 0, len 16384, i/o type 10
110107 15:07:15 InnoDB: Assertion failure in thread 3086403264 in file fil0fil.c line 3922
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html
InnoDB: about forcing recovery.
mysqld got signal 11; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.
key_buffer_size=0
read_buffer_size=131072
max_used_connections=0
max_connections=100
threads_connected=0
It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 217599 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd=(nil)
Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
Cannot determine thread, fp=0xbffc55ac, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x8139eec 0x83721d5 0x833d897 0x833db71 0x832aa38 0x835f025 0x835f7a3 0x830a77e 0x8326b57 0x831c825 0x8317b8d 0x82a9e66 0x8315732 0x834fc9a 0x828d7c3 0x81c29dd 0x81b5620 0x813d9fe 0x40fdf3 0x80d5ff1
New value of fp=(nil) failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and follow instructions on how to resolve the stack trace. A resolved stack trace is much more helpful in diagnosing the problem, so please do resolve it.
The manual page at http://www.mysql.com/doc/en/Crashing.html contains information that should help you find out what is causing the crash.
110107 15:07:15 mysqld ended
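The log itself points at the standard escape hatch: InnoDB forced recovery. A minimal hedged sketch (back up the data directory first; recovery levels above 4 can permanently lose data):

    # 1. Cold-copy the data directory before touching anything
    cp -a /var/lib/mysql /var/lib/mysql.bak
    # 2. Add to /etc/my.cnf under [mysqld], starting at 1 and raising only if startup still fails:
    #      innodb_force_recovery = 1
    # 3. If mysqld comes up, dump everything while you can, then rebuild and restore
    mysqldump --all-databases > /root/all-databases.sql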

    Read the article

  • MySQL Optimization Suggestions

    - by Brian Schroeter
I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe that MySQL needs to be configured further is that New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted MySQL earlier after tweaking some settings, so the results here may not be 100% accurate):
>> MySQLTuner 1.3.0 - Major Hayden <[email protected]>
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering
[OK] Currently running supported MySQL version 5.5.37-35.0
[OK] Operating on 64-bit architecture
-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
[--] Data in MyISAM tables: 7G (Tables: 332)
[--] Data in InnoDB tables: 213G (Tables: 8714)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[--] Data in MEMORY tables: 0B (Tables: 353)
[!!] Total fragmented tables: 5492
-------- Security Recommendations -------------------------------------------
[!!] User '@host5.server1.autopartsnetwork.com' has no password set.
[!!] User '@localhost' has no password set.
[!!] User 'root@%' has no password set.
-------- Performance Metrics -------------------------------------------------
[--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
[--] Reads / Writes: 95% / 5%
[--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
[!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
[OK] Slow queries: 0% (6K/5M)
[OK] Highest usage of available connections: 5% (61/1024)
[OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
[OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
[OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
[!!] Query cache prunes per day: 3486361
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
[!!] Joins performed without indexes: 1328
[OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
[OK] Thread cache hit rate: 99% (61 created / 42K connections)
[!!] Table cache hit rate: 19% (9K open / 49K opened)
[OK] Open file limit used: 2% (712/25K)
[OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
[!!] InnoDB buffer pool / data size: 32.0G/213.4G
[OK] InnoDB log waits: 0
-------- Recommendations -----------------------------------------------------
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
MySQL started within last 24 hours - recommendations may be inaccurate
Reduce your overall MySQL memory footprint for system stability
Enable the slow query log to troubleshoot bad queries
Increasing the query_cache size over 128M may reduce performance
Adjust your join queries to always utilize indexes
Increase table_cache gradually to avoid file descriptor limits
Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
Variables to adjust:
*** MySQL's maximum memory usage is dangerously high ***
*** Add RAM before increasing MySQL buffer variables ***
query_cache_size (> 512M) [see warning above]
join_buffer_size (> 128.0M, or always use indexes with joins)
table_cache (> 12288)
innodb_buffer_pool_size (>= 213G)
My my.cnf configuration is as follows...
[client]
port = 3306

[mysqld_safe]
nice = 0

[mysqld]
tmpdir = /var/lib/mysql/tmp
user = mysql
port = 3306
skip-external-locking
character-set-server = utf8
collation-server = utf8_general_ci
event_scheduler = 0
key_buffer = 512M
max_allowed_packet = 64M
thread_stack = 512K
thread_cache_size = 512
sort_buffer_size = 24M
read_buffer_size = 8M
read_rnd_buffer_size = 24M
join_buffer_size = 128M
# for some nightly processes client sessions set the join buffer to 8 GB
auto-increment-increment = 1
auto-increment-offset = 1
myisam-recover = BACKUP
max_connections = 1024
# max connect errors artificially high to support behaviors of NetScaler monitors
max_connect_errors = 999999
concurrent_insert = 2
connect_timeout = 5
wait_timeout = 180
net_read_timeout = 120
net_write_timeout = 120
back_log = 128
# this table_open_cache might be too low because of MySQL bugs #16244691 and #65384)
table_open_cache = 12288
tmp_table_size = 512M
max_heap_table_size = 512M
bulk_insert_buffer_size = 512M
open-files-limit = 8192
open-files = 1024
query_cache_type = 1
# large query limit supports SOAP and REST API integrations
query_cache_limit = 4M
# larger than 512 MB query cache size is problematic; this is typically ~60% full
query_cache_size = 512M
# set to true on read slaves
read_only = false
slow_query_log_file = /var/log/mysql/slow.log
slow_query_log = 0
long_query_time = 0.2
expire_logs_days = 10
max_binlog_size = 1024M
binlog_cache_size = 32K
sync_binlog = 0
# SSD RAID10 technically has a write capacity of 10000 IOPS
innodb_io_capacity = 400
innodb_file_per_table
innodb_table_locks = true
innodb_lock_wait_timeout = 30
# These servers have 80 CPU threads; match 1:1
innodb_thread_concurrency = 48
innodb_commit_concurrency = 2
innodb_support_xa = true
innodb_buffer_pool_size = 32G
innodb_file_per_table
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 2G
skip-federated

[mysqldump]
quick
quote-names
single-transaction
max_allowed_packet = 64M

I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
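One red flag in the tuner output is worth unpacking, since it is pure arithmetic: maximum possible memory is roughly global buffers plus per-thread buffers times max_connections. With these settings that is 35.5G + 184.5M x 1024, about 220G, which the tuner says is 174% of installed RAM (so roughly 126 GB fitted). A hedged back-of-the-envelope, using awk as a calculator:

    # global  ~ innodb_buffer_pool 32G + query_cache 0.5G + key_buffer 0.5G + innodb_log_buffer 2G + misc
    # thread  ~ sort 24M + read 8M + read_rnd 24M + join 128M + thread_stack 0.5M ~ 184.5M
    awk 'BEGIN { printf "%.1fG\n", 35.5 + 184.5 * 1024 / 1024 }'    # -> 220.0G at max_connections = 1024

Lowering join_buffer_size (which applies per join, per connection) and/or max_connections is the usual first lever before touching innodb_buffer_pool_size.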

    Read the article

  • Windows 8 Stuck on Start Screen File Search

    - by baturayd
I have been searching for someone else having the same issue, but I couldn't find anyone. Here is my problem: I'm using Windows 8 Pro with Media Center. The file search screen does not close itself after I make a file search within the Start screen; it stays stuck on that screen. I can't go back to the desktop, so Task Manager is inaccessible. The only way out is to sign out. It also appears non-operational: it doesn't give me any results at all, just a blank screen with a "Files" title. It used to work perfectly. Things I have done before coming here: a safe-mode minimal boot to ensure no third-party software interferes; sfc hasn't found any inconsistencies; the search index has been rebuilt; a normal boot with all third-party services and startup items disabled; a system restore to a date when I know this feature was functional. And by the way, I have installed all updates released so far. I used file search via the Start menu heavily back in Windows 7, so this is an absolute game changer for me. I'm curious what causes this. I'll do a "system refresh" if I can't fix it. I'm working on it and will keep this thread updated should I find a fix.
First update: I just discovered that the file search screen gets stuck only if it's invoked by typing a query directly in the Start screen. It doesn't get stuck if it's invoked from the Win+X menu, and I was able to "escape" from the stuck screen by invoking it again via Win+X. After rebuilding the search index again, search suggestions started to appear again, but there is still no file search functionality.
Second update: A "Results for" label appears for only a second next to the "Files" title when a search is initialized.
Third update: As a last resort, I finally tried a "System Refresh", which also failed, giving an error after waiting almost 20 minutes at 99%. (Seriously?) After a cold boot it began to undo the changes. After the changes were reverted, I booted the machine without doing anything further and, bingo, search began acting normal again. This is a totally weird solution. The failed refresh obviously repaired certain system files and kept those changes in place, because they were supposed to be that way in the first place. I'll keep this thread alive should anyone come up with a more logical explanation.
Fourth update: After a Windows update, search stopped responding again. This is the first time it has happened after a system update, so I now have a strong suspicion about that update, though I have no solid proof of its involvement with this problem.

    Read the article

  • Week in Geek: LastPass Rescues Xmarks Edition

    - by Asian Angel
    This week we learned how to breathe new life into an aging Windows Mobile 6.x device, use filters in Photoshop, backup and move VirtualBox machines, use the BitDefender Rescue CD to clean an infected PC, and had fun setting up a pirates theme on our computers. Photo by _nash. Weekly Feature Do you love using the Faenza icon set on your Ubuntu system but feel that there are a few much needed icons missing (or you desire a different version of a particular icon)? Then you may want to take a look at the Faenza Variants icon pack. The icons are available in the following sizes: 16px, 22px, 32px, 48px and scalable sizes. Photo by Asian Angel. Faenza Variants Random Geek Links Another week with extra link goodness to help keep you on top of the news. Photo by Asian Angel. LastPass acquires Xmarks, premium service announced Xmarks announced that it has been acquired by LastPass, a cross-platform password management service. This also means that Xmarks is now in transition from a “free” to a “freemium” business model. WikiLeaks reappears on European Net domains WikiLeaks has re-emerged on a Swiss Internet domain followed by domains in Germany, Finland, and the Netherlands, sidestepping a move that had in effect taken the controversial site off the Internet. Iran: Yes, Stuxnet hurt our nuclear program The Stuxnet worm got some big play from Iranian President Mahmoud Ahmadinejad, who acknowledged that the malware dinged his nuclear program. More Windows Rogues than Just AV – Fake Defragmenter Check Disk Don’t think for a second that rogues are limited to scareware, because as so-called products such as “System Defragmenter”, “Scan Disk” “Check Disk” prove, they’re not. Internet Explorer’s Protected Mode can be bypassed Researchers from Verizon Business have now described a way of bypassing Protected Mode in IE 7 and 8 in order to gain access to user accounts. Can you really see who viewed your Facebook profile? Rogue application spreads virally Once again, a rogue application is spreading virally between Facebook users pretending to offer you a way of seeing who has viewed your profile. More holes in Palm’s WebOS Researchers Orlando Barrera and Daniel Herrera, who both work for security firm SecTheory, have discovered a gaping security hole in Palm’s WebOS smartphone operating system. Next-gen banking Trojans hit APAC With the proliferation of banking Trojans, Web and smartphone users of online banking services have to be on constant alert to avoid falling prey to fraud schemes, warned Etay Maor, project manager for RSA Fraud Action. AVG update cripples 64-bit computers A signature update automatically deployed by the AVG virus scanner Thursday has crippled numerous computers. Article includes link to forums to fix computers affected after a restart. Congress moves to outlaw ‘mystery charges’ for Web shoppers Legislation that makes it illegal for Web merchants and so-called post-transaction marketers to charge credit cards without the card owners’ say-so came closer to becoming law this week. Ballmer Set to “Look Into” Windows Home Server Drive Extender Fiasco Tuesday’s announcement from Microsoft regarding the removal of Drive Extender from Windows Home Server has sent shock waves across the web. Google tweaks search recipe to ding scam artists Google has changed its search algorithm to penalize sites deemed to provide an “extremely poor user experience” following a New York Times story on a merchant who justified abusive behavior towards customers as a search-engine optimization tactic. 
Geek Video of the Week Watch as our two friends debate back and forth about the early adoption of new technology through multiple time periods (Stone Age to the far future). Will our reluctant friend finally succumb to the temptation? Photo by CollegeHumor. Early Adopters Through History Random TinyHacker Links Fix Issues in Windows 7 Using Reliability Monitor Learn how to analyze Windows 7 errors and then fix them using the built-in reliability monitor. Learn About IE Tab Groups Tab groups is a useful feature in IE 8. Here’s a detailed guide to what it is all about. Google’s Book Helps You Learn About Browsers and Web A cool new online book by the Google Chrome team on browsers and the web. TrustPort Internet Security 2011 – Good Security from a Less Known Provider TrustPort is not exactly a well-known provider of security solutions. At least not in the consumer space. This review tests in detail their latest offering. How the World is Using Cell phones An infographic showing the shocking demographics of cell phone use. Super User Questions See the great answers to these questions from Super User. I am unable to access my C drive. It says it is unable to display current owner. List of Windows special directories/shortcuts like ‘%TEMP%’ Is using multiple passes for wiping a disk really necessary? How can I view two files side by side in Notepad++ Is there any tool that automatically puts screenshots to my Dropbox? How-To Geek Weekly Article Recap Look through our hottest articles from this past week at How-To Geek. How to Create a Software RAID Array in Windows 7 9 Alternatives for Windows Home Server’s Drive Extender Why Doesn’t Disk Cleanup Delete Everything from the Temp Folder? Ask the Readers: How Much Do You Customize Your Operating System? How to Upload Really Large Files to SkyDrive, Dropbox, or Email One Year Ago on How-To Geek Enjoy reading through these awesome articles from one year ago. How To Upgrade from Vista to Windows 7 Home Premium Edition How To Fix No Aero Transparency in Windows 7 Troubleshoot Startup Problems with Startup Repair Tool in Windows 7 & Vista Rename the Guest Account in Windows 7 for Enhanced Security Disable Error Reporting in XP, Vista, and Windows 7 The Geek Note That wraps things up here for this week. Regardless of the weather wherever you may be, we hope that you have an opportunity to get outside and have some fun! Remember to keep sending those great tips in to us at [email protected]. Photo by Tony the Misfit.

    Read the article

  • How To Activate Your Free Office 2007 to 2010 Tech Guarantee Upgrade

    - by Matthew Guay
Have you purchased Office 2007 since March 5th, 2010? If so, here’s how you can activate and download your free upgrade to Office 2010! Microsoft Office 2010 has just been released, and today you can purchase upgrades from most retail stores or directly from Microsoft via download. But if you’ve purchased a new copy of Office 2007 or a new computer that came with Office 2007 since March 5th, 2010, then you’re entitled to an absolutely free upgrade to Office 2010. You’ll need to enter information about your Office 2007 and then download the upgrade, so we’ll step you through the process.
Getting Started
First, if you’ve recently purchased Office 2007 but haven’t installed it, you’ll need to go ahead and install it before you can get your free Office 2010 upgrade. Install it as normal. Once Office 2007 is installed, run any of the Office programs. You’ll be prompted to activate Office. Make sure you’re connected to the internet, and then click Next to activate.
Get your Free Upgrade to Office 2010
Now you’re ready to download your upgrade to Office 2010. Head to the Office Tech Guarantee site (link below), and click Upgrade now. You’ll need to enter some information about your Office 2007. Check that you purchased your copy of Office 2007 after March 5th, select your computer manufacturer, and check that you agree to the terms. Now you’re going to need the Product ID number from Office 2007. To find this, open Word or any other Office 2007 application. Click the Office Orb, and select Options on the bottom. Select the Resources button on the left, and then click About. Near the bottom of this dialog, you’ll see your Product ID. This should be a number like: 12345-123-1234567-12345 Go back to the Office Tech Guarantee signup page in your browser, and enter this Product ID. Select the language of your edition of Office 2007, enter the verification code, and then click Submit. It may take a few moments to validate your Product ID. When it is finished, you’ll be taken to an order page that shows the edition of Office 2010 you’re eligible to receive. The upgrade download is free, but if you’d like to purchase a backup DVD of Office 2010, you can add it to your order for $13.99. Otherwise, simply click Continue to accept. Do note that the edition of Office 2010 you receive may be different from the edition of Office 2007 you purchased, as the number of editions has been streamlined in the Office 2010 release. Here’s a chart you can check to see what edition you’ll receive. Note that you’ll still be allowed to install Office on the same number of computers; for example, Office 2007 Home and Student allows you to install it on up to 3 computers in the same house, and your Office 2010 upgrade will allow the same.
Office 2007 Edition -> Office 2010 Upgrade You’ll Receive
Office 2007 Home and Student -> Office Home and Student 2010
Office Basic 2007 or Office Standard 2007 -> Office Home and Business 2010
Office Small Business 2007, Office Professional 2007, or Office Ultimate 2007 -> Office Professional 2010
Office Professional 2007 Academic or Office Ultimate 2007 Academic -> Office Professional Academic 2010
Sign in with your Windows Live ID, or create a new one if you don’t already have one. Enter your name, select your country, and click Create My Account. Note that Office will send Office 2010 tips to your email address; if you don’t wish to receive them, you can unsubscribe from the emails later. Finally, you’re ready to download Office 2010!
Click the Download Now link to start downloading Office 2010. Your Product Key will appear directly above the Download link, so you can copy it and then paste it in the installer when your download is finished. You will additionally receive an email with the download links and product key, so if your download fails you can always restart it from that link. If your edition of Office 2007 included the Office Business Contact Manager, you will be able to download it from the second Download link. And, of course, even if you didn’t order a backup DVD, you can always burn the installers to a DVD for a backup.
Install Office 2010
Once you’re finished downloading Office 2010, run the installer to get it installed on your computer. Enter your Product Key from the Tech Guarantee website as above, and click Continue. Accept the license agreement, and then click Upgrade to upgrade to the latest version of Office. The installer will remove all of your Office 2007 applications, and then install their 2010 counterparts. If you wish to keep some of your Office 2007 applications instead, click Customize and then select to either keep all previous versions or simply keep specific applications. By default, Office 2010 will try to activate online automatically. If it doesn’t activate during the install, you’ll need to activate it when you first run any of the Office 2010 apps.
Conclusion
The Tech Guarantee makes it easy to get the latest version of Office if you recently purchased Office 2007. The Tech Guarantee program is open through the end of September, so make sure to grab your upgrade during this time. Actually, if you find a great deal on Office 2007 from a major retailer between now and then, you could also take advantage of this program to get Office 2010 cheaper. And if you need help getting started with Office 2010, check out our articles that can help you get situated in your new version of Office!
Link Activate and Download Your free Office 2010 Tech Guarantee Upgrade

    Read the article

  • Create a Bootable Ubuntu 9.10 USB Flash Drive

    - by Trevor Bekolay
The Ubuntu Live CD isn’t just useful for trying out Ubuntu before you install it; you can also use it to maintain and repair your Windows PC. Even if you have no intention of installing Linux, every Windows user should have a bootable Ubuntu USB drive on hand in case something goes wrong in Windows. Creating a bootable USB flash drive is surprisingly easy with a small self-contained application called UNetbootin. It will even download Ubuntu for you!
Note: Ubuntu will take up approximately 700 MB on your flash drive, so choose a flash drive with at least 1 GB of free space, formatted as FAT32. This process should not remove any existing files on the flash drive, but to be safe you should back up the files on your flash drive.
Put Ubuntu on your flash drive
UNetbootin doesn’t require installation; just download the application and run it. Select Ubuntu from the Distribution drop-down box, then 9.10_Live from the Version drop-down box. If you have a 64-bit machine, then select 9.10_Live_x64 for the Version. At the bottom of the screen, select the drive letter that corresponds to the USB drive that you want to put Ubuntu on. If you select USB Drive in the Type drop-down box, the only drive letters available will be USB flash drives. Click OK and UNetbootin will start doing its thing. First it will download the Ubuntu Live CD. Then, it will copy the files from the Ubuntu Live CD to your flash drive. The amount of time it takes will vary depending on your Internet speed, and when it’s done, click on Exit. You’re not planning on installing Ubuntu right now, so there’s no need to reboot. If you look at the USB drive now, you should see a bunch of new files and folders. If you had files on the drive before, they should still be present. You’re now ready to boot your computer into Ubuntu 9.10!
How to boot into Ubuntu
When the time comes that you have to boot into Ubuntu, or if you just want to test and make sure that your flash drive works properly, you will have to set your computer to boot off of the flash drive. The steps to do this will vary depending on your BIOS, which varies depending on your motherboard. To get detailed instructions on changing how your computer boots, search for your motherboard’s manual (or your laptop’s manual for a laptop). For general instructions, which will suffice for 99% of you, read on.
Find the important keyboard keys
When your computer boots up, a bunch of words and numbers flash across the screen, usually to be ignored. This time, you need to scan the boot-up screen for a few key words with some associated keys: Boot menu and Setup. Typically, these will show up at the bottom of the screen. If your BIOS has a Boot Menu, then read on. Otherwise, skip to the Hard: Using Setup section.
Easy: Using the Boot Menu
If your BIOS offers a Boot Menu, then during the boot-up process, press the button associated with the Boot Menu. In our case, this is ESC. Our example Boot Menu doesn’t have the ability to boot from USB, but your Boot Menu should have some options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others. Try the options that start with USB until you find one that works. Don’t worry if it doesn’t work; you can just restart and try again. Using the Boot Menu does not change the normal boot order on your system, so the next time you start up your computer it will boot from the hard drive as normal.
Hard: Using Setup
If your BIOS doesn’t offer a Boot Menu, then you will have to change the boot order in Setup.
Note: There are some options in BIOS Setup that can affect the stability of your machine. Take care to change only the boot order options. Press the button associated with Setup. In our case, this is F2. If your BIOS Setup has a Boot tab, then switch to it and change the order such that one of the USB options occurs first. There may be several USB options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others; try them out to see which one works for you. If your BIOS does not have a Boot tab, the boot order is commonly found in Advanced CMOS Options. Note that this change to the boot order persists until you change it back. If you plan on plugging in a bootable flash drive only when you want to boot from it, then you could leave the boot order as it is, but you may find it easier to switch the order back to the previous order when you reboot from Ubuntu.
Booting into Ubuntu
If you set the right boot option, then you should be greeted with the UNetbootin screen. Press Enter to start Ubuntu with the default options, or wait 10 seconds for this to happen automatically. Ubuntu will start loading. It should go straight to the desktop with no need for a username or password. And that’s it! From this live desktop session, you can try out Ubuntu, and even install software that is not included in the live CD. Installed software will only last for the duration of your session – the next time you start up the live CD it will be back to its original state.
Download UNetbootin from sourceforge.net
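If the stick still needs formatting as FAT32 first, a hedged sketch from a Linux shell (the device name is a placeholder; verify it with lsblk, because this wipes the partition):

    # Identify the flash drive, then format its first partition as FAT32
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
    sudo mkfs.vfat -F 32 -n UBUNTU /dev/sdX1    # replace sdX1 with your flash drive's partition

On Windows, the Format dialog in Explorer (FAT32, Quick Format) accomplishes the same thing.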

    Read the article

  • TFS 2010 Basic Concepts

    - by jehan
Here, I’m going to discuss some key architectural changes and concepts that have come with TFS 2010 as compared to TFS 2008. With TFS 2010 you first run the installation and then configure the features you want from the available set. This is a bit similar to a SharePoint installation, where you first install and then configure the SharePoint farms.
1) Installation features available in TFS 2010:
a) Basic: the most compact TFS installation possible. It installs and configures Source Control, Work Item Tracking and Build Services only. (SharePoint and Reporting integration will not be possible.)
b) Standard Single Server: suitable for a single-server deployment of TFS. It installs and configures Windows SharePoint Services for you and uses the default instance of SQL Server.
c) Advanced: suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies, and SQL Server Reporting Services.
d) Application Tier Only: for configuring high availability for Team Foundation Server in a network-load-balanced (NLB) environment, for moving Team Foundation Server from one server to another, or for restoring TFS.
e) Upgrade: for upgrading from a prior version of TFS.
Note: One more important thing to know about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP2), whereas previous versions of TFS (2008 and 2005) could not be installed on a client OS.
2) Team Project Collections:
Connect to TFS dialog box in TFS 2008: In TFS 2008, the TFS server contains a set of team projects, each of which may or may not be independent of the others. Every check-in gets an ever-increasing changeset ID irrespective of the team project in which it is checked in, and the same applies to work items, which also get unique work item IDs. The main problem with this approach was that certain things required by the application development process were impossible to do:
a) If something has gone wrong in one team project and you want to restore it to an earlier state where it was working properly, you have to restore the Team Foundation Server database from a backup taken as per your maintenance plans, and because of this the other team projects may lose whatever work was not backed up.
b) Your company merged with some other company and now you have two TFS servers: the one you are working on, and the one the other company was working on. After the merge you want to integrate the team projects from the two TFS servers into one, which is almost impossible to achieve in TFS 2008. You can manually create the team projects (in Source Control) on the server into which you want to integrate them, but you will lose the history of changesets, work items and other artifacts, which is very important.
There were a few more issues of this sort that were difficult to resolve in TFS 2008. To resolve issues in these kinds of scenarios, which were mainly related to TFS maintenance, integration, migration and security, Microsoft came up with the Team Project Collections concept in TFS 2010. The concept is similar to SharePoint site collections, and if you are familiar with SharePoint architecture it will help you to understand the TFS 2010 architecture easily.
Connect to TFS dialog box in TFS 2010: In the dialog box above, as you can see, there are two team project collections. Each collection can contain any number of team projects; on the right side you can see the two team projects in the team project collection (Default Collection) that I have chosen.
Note: You can connect to only one team project collection at a time using an instance of TFS Team Explorer.
How does it work? To introduce team project collections, the TFS databases have been reorganized. TFS 2008 was composed of 5-7 databases partitioned by subsystem (one each for Version Control, Work Item Tracking, Build, Integration, Project Management...). The new TFS 2010 database architecture is:
TFS_Config: the root database. It contains centralized TFS configuration data, including the list of all team projects that exist on the TFS server.
TFS_Warehouse: the data warehouse. It contains all the reporting data served by this server (farm).
TFS_*: individual team project collection data. Each of these databases contains all the operational data of one team project collection, regardless of subsystem.
In addition to these, you will have databases for SharePoint and Report Server.
3) TFS Farms: As TFS 2010 can be configured with multiple application tiers and multiple database tiers, it is more appropriate to speak of a TFS farm when you go for a multi-server installation of TFS.
NLB support for TFS application tiers: with TFS 2010 you can configure multiple TFS application-tier machines to serve the same set of team project collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008; even if an application tier in the farm fails, the farm continues to work with hardly any indication to end users that there was a problem.
SQL data tiers: with 2010 you can configure many SQL Servers. Each database can be placed on any SQL Server, because each team project collection is an independent database. This feature can also be used to load-balance databases across SQL Servers.
These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With team project collections and TFS farms, you can create a single, arbitrarily large TFS installation, and grow it incrementally by adding ATs and SQL Servers as needed.

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >