Search Results

Search found 8856 results on 355 pages for 'sharepoint upgrade'.


  • I cut-to-move my DCIM folder to external SD when an automatic Android OS update popped up before I could choose the target - cannot recover 200+ photos

    - by ZeroG
    I was moving my Exhibit II's DCIM camera folder (with months of photos inside) to its external SD card, in order to transfer them onto my laptop. In my overconfidence, I hurriedly chose cut-to-move (rather than copy-to-move) when KABOOM! —an automatic Android OS update popped up before I could choose the target!!! I figured everything was in cache & calmly tried to go through with the update. But that was not a typically seamless event. It showed a downloading icon, but since I had rooted the phone it brought up the command line & the recovery sequence. But neither Android nor I had yet downloaded any alternate custom ROM files to internal SD to update from! So were they trying to make me unroot my phone by giving me some bogus update on the fly, or just give me a hard time by trying to hand me down an unrooted ROM that I'd have to figure out how to root again? Yes, I know there was that blurb about overwriting a file of the same name, but I was trying to shake the stubborn update being forced on my phone during this precarious moment. I thought I had frozen or turned off all those auto-updates previously. Anyway, phones are small & fingers are big (sigh)... I tried to reboot into safe mode, but the resulting photo files were partially overwritten (200 files had names but zero bytes in them). I thought maybe they were still hung in cache or deposited somewhere else, but I have searched everywhere with file managers. Since I did not have Titanium backing up the camera, photo folder or gallery, I cannot recover 200+ photos. Dumb. You can understand my dilemma, as I am involved in the arts & although this is just a camera phone, most of these photos were historic & aesthetic, or at least as to subject matter. Photo-ops don't reoccur. I have tried a couple of recovery apps from the market like Search Duplicates & Recover, to no avail. I was only able to salvage stuff I'd sent out in messages. I've got several decades in computers & this is such a miserable beginner's piece of bad luck I can't believe it happened to me. They were precious photos! Yes, I have turned on Titanium since, & yes, I even tried USB-to-laptop recoveries. Being on a MacBook Pro I'm trying androidfiletransfer.dmg, but I'd have to upgrade to Peach Sunrise to get above Android 3.0 for that app to recognize the phone via USB, & the programmer says installation zeros your data, so that pretty much toasts any secret hidden places where these photos may have been deposited. I don't want to do that & am still trying to find them. They certainly didn't make it to my external SD card. If any of you techies out there know anything, please help & thanks. Despite decades of being in computing, unfamiliar & ever-changing hardware or software can humble even the most seasoned veterans.
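
    One avenue not mentioned above is raw file carving of the SD card itself (and, with root, of the phone's internal data partition) from the MacBook, instead of on-phone recovery apps. A rough sketch only, assuming Homebrew is installed and that the card shows up as /dev/disk2 in a card reader - that device name is just an example, so check diskutil list first:

        brew install testdisk              # provides the photorec carving tool
        diskutil list                      # identify the SD card's device node
        diskutil unmountDisk /dev/disk2    # carve the raw device, not a mounted volume
        sudo photorec /dev/disk2           # recovered JPEGs land in recup_dir.* folders

    Carving only helps if the original blocks were not overwritten; the zero-byte files themselves carry no data.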

    Read the article

  • Compile latest bnx2 driver on Debian Squeeze

    - by markus
    I want to upgrade the bnx2 network card driver in a Dell Power Edge R410. I downloaded the latest driver version from the broadcom website. If I want to compile the driver it fails with the following errors: make make -C bnx2/src KVER=2.6.32-5-amd64 PREFIX= make[1]: Entering directory `/tmp/netxtreme2-6.2.23/bnx2-2.0.23b/src' make -C /lib/modules/2.6.32-5-amd64/build SUBDIRS=/tmp/netxtreme2-6.2.23/bnx2-2.0.23b/src modules make[2]: Entering directory `/usr/src/linux-headers-2.6.32-5-amd64' Building modules, stage 2. MODPOST 2 modules make[2]: Leaving directory `/usr/src/linux-headers-2.6.32-5-amd64' make[1]: Leaving directory `/tmp/netxtreme2-6.2.23/bnx2-2.0.23b/src' make -C bnx2x/src KVER=2.6.32-5-amd64 PREFIX= make[1]: Entering directory `/tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src' make -C /lib/modules/2.6.32-5-amd64/build M=`pwd` modules make[2]: Entering directory `/usr/src/linux-headers-2.6.32-5-amd64' CC [M] /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.o In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80: /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1009:1: error: "PCI_VPD_LRDT_ID_STRING" redefined In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:34: /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1327:1: error: this is the location of the previous definition In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80: /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1011:1: error: "PCI_VPD_LRDT_RO_DATA" redefined In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:34: /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1328:1: error: this is the location of the previous definition In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80: /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1013:1: error: "PCI_VPD_LRDT_RW_DATA" redefined In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:34: /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1329:1: error: this is the location of the previous definition In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80: /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1019:1: error: "PCI_VPD_SRDT_END" redefined In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:34: /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1334:1: error: this is the location of the previous definition In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80: /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1032: error: conflicting types for ‘pci_vpd_lrdt_size’ /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1355: error: previous definition of ‘pci_vpd_lrdt_size’ was here /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1037: error: conflicting types for ‘pci_vpd_srdt_size’ /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1366: error: previous definition of ‘pci_vpd_srdt_size’ was here /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1042: error: conflicting types for ‘pci_vpd_find_tag’ /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1391: error: 
previous declaration of ‘pci_vpd_find_tag’ was here /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1077: error: conflicting types for ‘pci_vpd_info_field_size’ /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1377: error: previous definition of ‘pci_vpd_info_field_size’ was here /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1082: error: conflicting types for ‘pci_vpd_find_info_keyword’ /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1403: error: previous declaration of ‘pci_vpd_find_info_keyword’ was here make[5]: *** [/tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.o] Fehler 1 make[4]: *** [_module_/tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src] Fehler 2 make[3]: *** [sub-make] Fehler 2 make[2]: *** [all] Fehler 2 make[2]: Leaving directory `/usr/src/linux-headers-2.6.32-5-amd64' make[1]: *** [bnx2x.o] Fehler 2 make[1]: Leaving directory `/tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src' make: *** [l2build] Fehler 2
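
    The redefinition errors happen because Debian backported the pci_vpd_* helpers into its 2.6.32-5 headers, so they now collide with the copies that bnx2x_compat.h supplies for older kernels. Rather than patching the vendor tarball's compat header by hand, one alternative (a sketch, assuming non-free and squeeze-backports are already enabled in sources.list, and that the backports metapackage name may differ) is to stay inside the distribution:

        apt-get update
        apt-get install firmware-bnx2 firmware-bnx2x              # firmware blobs from non-free
        apt-get -t squeeze-backports install linux-image-amd64    # kernel with newer in-tree bnx2/bnx2x

    If the out-of-tree build is really required, the other route is to wrap the duplicate pci_vpd_* block in bnx2x_compat.h in a kernel-version guard so it is skipped on 2.6.32-5.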

    Read the article

  • SQLExpress service unable to start, error code 17053

    - by Chris Sobolewski
    A user was instructed by their software support to upgrade a program and install SQLExpress as part of the installation process. Since that time, the service has been able to start, citing error 17053, which appears to be an authentication issue. Here is the error log: 2011-01-11 13:17:45.50 Server Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86) Feb 9 2007 22:47:07 Copyright (c) 1988-2005 Microsoft Corporation Express Edition on Windows NT 5.1 (Build 2600: Service Pack 2) 2011-01-11 13:17:45.50 Server (c) 2005 Microsoft Corporation. 2011-01-11 13:17:45.50 Server All rights reserved. 2011-01-11 13:17:45.50 Server Server process ID is 3332. 2011-01-11 13:17:45.50 Server Authentication mode is WINDOWS-ONLY. 2011-01-11 13:17:45.50 Server Logging SQL Server messages in file 'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG'. 2011-01-11 13:17:45.52 Server This instance of SQL Server last reported using a process ID of 2332 at 11/10/2010 2:15:24 PM (local) 11/10/2010 7:15:24 PM (UTC). This is an informational message only; no user action is required. 2011-01-11 13:17:45.52 Server Error: 17053, Severity: 16, State: 1. 2011-01-11 13:17:45.52 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered. 2011-01-11 13:17:45.52 Server Registry startup parameters: 2011-01-11 13:17:45.52 Server -d c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf 2011-01-11 13:17:45.52 Server -e c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG 2011-01-11 13:17:45.52 Server -l c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mastlog.ldf 2011-01-11 13:17:45.52 Server Error: 17113, Severity: 16, State: 1. 2011-01-11 13:17:45.52 Server Error 3(The system cannot find the path specified.) occurred while opening file 'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary. 2011-01-11 13:17:45.52 Server Error: 17053, Severity: 16, State: 1. 2011-01-11 13:17:45.52 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered. 4 Server Error: 17053, Severity: 16, State: 1. 2011-01-11 13:08:21.34 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered. 12:47:20.85 spid5s SQL Trace ID 1 was started by login "sa". 2011-01-11 12:47:20.90 spid5s Starting up database 'mssqlsystemresource'. 2011-01-11 12:47:20.93 spid5s The resource database build version is 9.00.3042. This is an informational message only. No user action is required. 2011-01-11 12:47:21.21 spid5s Error: 15466, Severity: 16, State: 1. 2011-01-11 12:47:21.21 spid5s An error occurred during decryption. 2011-01-11 12:47:21.38 spid8s Starting up database 'model'. 2011-01-11 12:47:21.38 Server Error: 17182, Severity: 16, State: 1. 2011-01-11 12:47:21.38 Server TDSSNIClient initialization failed with error 0x5, status code 0x90. 2011-01-11 12:47:21.38 Server Error: 17182, Severity: 16, State: 1. 2011-01-11 12:47:21.38 Server TDSSNIClient initialization failed with error 0x5, status code 0x1. 2011-01-11 12:47:21.38 Server Error: 17826, Severity: 18, State: 3. 2011-01-11 12:47:21.38 Server Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log. 2011-01-11 12:47:21.38 Server Error: 17120, Severity: 16, State: 1. 
2011-01-11 12:47:21.38 Server SQL Server could not spawn FRunCM thread. Check the SQL Server error log and the Windows event logs for information about possible related problems. One lead I had was to change the SQL logon account from "Network Service" to "Local System". Unfortunately, that is resulting in the error message The Security ID Structure is Invalid [0x80070539] Any help either uninstalling or getting SQLExpress running would be fantastic.
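
    The repeated "Operating system error 5 (Access is denied)" from UpdateUptimeRegKey, together with the failure to open master.mdf, points at the rights of the account the service runs under rather than at SQL Server itself. Before switching the logon to Local System, it may be worth confirming which account the service uses and re-granting it access to the data directory - a sketch only, where the instance name MSSQL$SQLEXPRESS is the usual default for an Express install and the path is taken from the log:

        sc qc MSSQL$SQLEXPRESS
        cacls "c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL" /E /T /G "NETWORK SERVICE":F

    The uptime value is written under the instance's registry branch (HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1), so the same service account likely needs Full Control there as well.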

    Read the article

  • Neighbour table overflow on Linux hosts related to bridging and ipv6

    - by tim
    Note: I already have a workaround for this problem (as described below) so this is only a "want-to-know" question. I have a productive setup with around 50 hosts including blades running xen 4 and equallogics providing iscsi. All xen dom0s are almost plain Debian 5. The setup includes several bridges on every dom0 to support xen bridged networking. In total there are between 5 and 12 bridges on each dom0 servicing one vlan each. None of the hosts has routing enabled. At one point in time we moved one of the machines to a new hardware including a raid controller and so we installed an upstream 3.0.22/x86_64 kernel with xen patches. All other machines run debian xen-dom0-kernel. Since then we noticed on all hosts in the setup the following errors every ~2 minutes: [55888.881994] __ratelimit: 908 callbacks suppressed [55888.882221] Neighbour table overflow. [55888.882476] Neighbour table overflow. [55888.882732] Neighbour table overflow. [55888.883050] Neighbour table overflow. [55888.883307] Neighbour table overflow. [55888.883562] Neighbour table overflow. [55888.883859] Neighbour table overflow. [55888.884118] Neighbour table overflow. [55888.884373] Neighbour table overflow. [55888.884666] Neighbour table overflow. The arp table (arp -n) never showed more than around 20 entries on every machine. We tried the obvious tweaks and raised the /proc/sys/net/ipv4/neigh/default/gc_thresh* values. FInally to 16384 entries but no effect. Not even the interval of ~2 minutes changed which lead me to the conclusion that this is totally unrelated. tcpdump showed no uncommon ipv4 traffic on any interface. The only interesting finding from tcpdump were ipv6 packets bursting in like: 14:33:13.137668 IP6 fe80::216:3eff:fe1d:9d01 > ff02::1:ff1d:9d01: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff1d:9d01, length 24 14:33:13.138061 IP6 fe80::216:3eff:fe1d:a8c1 > ff02::1:ff1d:a8c1: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff1d:a8c1, length 24 14:33:13.138619 IP6 fe80::216:3eff:fe1d:bf81 > ff02::1:ff1d:bf81: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff1d:bf81, length 24 14:33:13.138974 IP6 fe80::216:3eff:fe1d:eb41 > ff02::1:ff1d:eb41: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff1d:eb41, length 24 which placed the idea in my mind that the problem maybe related to ipv6, since we have no ipv6 services in this setup. The only other hint was the coincidence of the host upgrade with the beginning of the problems. I powered down the host in question and the errors were gone. Then I subsequently took down the bridges on the host and when i took down (ifconfig down) one particularly bridge: br-vlan2159 Link encap:Ethernet HWaddr 00:26:b9:fb:16:2c inet6 addr: fe80::226:b9ff:fefb:162c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:120 errors:0 dropped:0 overruns:0 frame:0 TX packets:9 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:5286 (5.1 KiB) TX bytes:726 (726.0 B) eth0.2159 Link encap:Ethernet HWaddr 00:26:b9:fb:16:2c inet6 addr: fe80::226:b9ff:fefb:162c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1801 errors:0 dropped:0 overruns:0 frame:0 TX packets:20 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:126228 (123.2 KiB) TX bytes:1464 (1.4 KiB) bridge name bridge id STP enabled interfaces ... br-vlan2158 8000.0026b9fb162c no eth0.2158 br-vlan2159 8000.0026b9fb162c no eth0.2159 The errors went away again. 
As you can see the bridge holds no ipv4 address and it's only member is eth0.2159 so no traffic should cross it. Bridge and interface .2159 / .2157 / .2158 which are in all aspects identical apart from the vlan they are connected to had no effect when taken down. Now I disabled ipv6 on the entire host via sysctl net.ipv6.conf.all.disable_ipv6 and rebooted. After this even with bridge br-vlan2159 enabled no errors occur. Any ideas are welcome.
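
    For what it's worth, the gc_thresh values raised above are the IPv4 ones; the IPv6 neighbour cache has its own set under net.ipv6.neigh, and that is the table the MLD bursts from the bridges would be filling. A small sketch of both options - raising the IPv6 thresholds (values here are only illustrative) or making the IPv6 shutdown from the workaround persistent:

        sysctl -w net.ipv6.neigh.default.gc_thresh1=1024
        sysctl -w net.ipv6.neigh.default.gc_thresh2=2048
        sysctl -w net.ipv6.neigh.default.gc_thresh3=4096
        # or, to keep the workaround across reboots:
        echo 'net.ipv6.conf.all.disable_ipv6 = 1' >> /etc/sysctl.conf
        sysctl -p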

    Read the article

  • BluRay audio/video stuttering with PowerDVD 11, WinDVD 11 Pro, etc? Xonar/Auzen HD audio option?

    - by jrista
    I recently upgraded my Windows 7 MediaCenter HTPC due to a motherboard failure (really old motherboard and cpu, it was on its last legs.) I chose to upgrade to an i5 system with everything built into the motherboard. I did my due diligence, researched, and found some hardware that was within my budget. I ended up with: Core i5 2500K (3.3Ghz) Corsair XMS3 2x2Gb DDR3 (4Gb) ASUS P8H 61-M LE/CSM MicroCenter 64Gb SSD (Previous BluRay player, forget the brand) The system is pretty awesome, and plays everything I have perfectly. I almost went with an Atom solution, however there have been numerous notes that they do not play NetFlix Instant Watch well...and I am a heavy Netflix IW user. High definition BluRay rips work well, although they usually contain lower audio quality than the BluRay's they were ripped from. The real problem I am encountering is playing back BluRay video from discs. For some reason, I am encountering rather terrible stuttering problems with both the audio and video. The stuttering is synchronous in both, and occurs at seemingly random intervals. I've used PowerDVD 9, PowerDVD 11 trial, and WinDVD 11 Pro trial. All three have stuttering problems, although PowerDVD 11 seems to have the least. Watching system resource usage, CPU load is never above 20%, and memory usage tends to be a constant 1/3rd the total available system memory. When playback is fine, its superb...the video is crystal clear. The audio quality is ok, certainly not what I would expect from a BluRay disc. I did some research, and it seems that playing BluRay from a PC causes a downsampling of the audio? I am curious if the audio is my primary problem here, the cause of the stuttering I am encountering? When stuttering occurs, the audio gets REALLY bad, while the video just pauses momentarily every second until for whatever reason everything picks up and runs fine (usually after a few seconds to a couple minutes.) The audio chipset is a Realtek HD ALC887 8-channel, supposedly designed to support BluRay playback. Has anyone encountered any issues like this playing back bluray discs on a PC (namely with PowerDVD...WinDVD was FAR worse, and seemed to have real trouble even reading the discs, and I have no interest in fiddling with it further.) Is there any reason to suspect the video decoding as the problem?(Given how bad the audio gets during a stutter, and how clean the video remains, I am inclined to think the issue boils down to audio.) Is it even remotely possible that the motherboard, cpu, or ram are causing the stuttering (all three are pretty blazing fast...faster than the hardware that I replaced, which seemed to play BluRay fine with PowerDVD 9.) I've read a bit about the Asus Xonar HDAV 1.3 and the Auzen X-Fi HomeTheater HD home theater hi-fi audio cards. Seems they are the only way to get true full-quality, uncompressed BluRay audio bitstreaming over HDMI on a PC. None of the usual suspects seem to have these cards in stock, however. Are these cards worth getting? Are they even still available, or have they been discontinued (if so, that would indeed be sad...they sound simply fantastic.)

    Read the article

  • Moving the Users folder on Windows 7 to another partition - bad idea?

    - by Donat
    Hi, I'd like to re-submit here a question posted by Benjol on Aug 17 at 5:57, "Moving users folder on Windows Vista to another partition - bad idea?" (I can't post more than one link until I earn 10 reputation, and I removed my "answer" there to post my follow-up questions here). I am anxiously getting ready, at long last, to carry out a clean install (using the custom install option) from Vista to Windows 7 Home Premium 64-bit with the free upgrade I received late October. For my Vista system I successfully set up, last summer, a multi-partition scheme with Users and Program Data on a different partition than the operating system (see the link below, and its subsequent links in my comment, for details). http://tuts4tech.net/2009/08/05/windows-7-move-the-users-and-program-files-directories-to-a-different-partition/comment-page-1/#comment-562 I was planning a similar setup for Windows 7, a little more streamlined, with the OS and Program Files on C:, Users and Program Data on D:, and TV media recording on a separate partition. Reading the question submitted by Benjol, I am second-guessing too. Is moving Users and Program Data to a different partition than the default primary partition with the OS and Program Files such a good idea? The couple of people I talked to at the official Microsoft Windows 7 booth at CES 2010 gave the same answer when I mentioned my intention of moving the Users profile folder to another partition. In a nutshell, they all told me that they used to do this in XP, less so in Vista, but not anymore with Windows 7... "It is stable, after two months still no problem." I had the feeling it was a scripted answer to emphasize how stable and efficient Windows 7 is... (Will a Windows 7 system not become bogged down over the course of several months to a year or two? Only time will tell.) Long story short, I share the same view as Benjol expressed with respect to being "able to backup and restore system and user data independently." I just received a 2TB USB 2.0/eSATA external hard drive as a backup drive, which includes NTI Shadow 4 (4.1.0.150) as a backup solution. I took note of the issue with NTUSER.DAT and I will read more about the Volume Shadow Copy Service (VSS) for Windows 7. I am willing to put in the effort if placing Users and Program Data on a different partition would allow me to restore a fresher OS + Program Files image when the system gets bogged down. Questions: Is it such a bad idea? What is the "easy route" referred to by Benjol in his post? Is it to just relocate folders to another partition using the folder Properties tool? (That is not practical for several users and might not provide a straightforward restore process of just the OS and Program Files when needed.) I am starting to learn about Windows 7 libraries. Would Windows 7 libraries be another alternative to achieve this? All this reading to decide how to organize the partition scheme for my custom system is starting to be confusing. I apologize for this lengthy question. It is my first day here on SuperUser and I am just learning how different it is from a discussion thread. Thank you in advance for all your suggestions and comments. Donat
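
    For reference, the per-folder alternative usually suggested alongside the unattend.xml relocation is a junction point, done from a second administrator account (or the repair command prompt) so that the profile being moved is not in use. This is only a sketch of that commonly described workaround, not something from the original thread, and the paths are examples:

        robocopy C:\Users\Donat D:\Users\Donat /E /COPYALL /XJ
        rmdir /S /Q C:\Users\Donat
        mklink /J C:\Users\Donat D:\Users\Donat

    After the junction is in place, programs still see C:\Users\Donat while the data physically lives on D:, which keeps the OS image and the user data independently restorable.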

    Read the article

  • Can't sync filesystem without reboot

    - by Fabio
    I'm having an issue with a linux server. Once a week the running mysql instance hangs and there is no way to fully stop it. If I kill it, it remains in zombie status and init does not reap its pid. The server is used for staging deployments and some internal tools, so it's not under heavy load. The only process constantly used id mysql and for this I think that it's the only process which suffer of this issue. I've searched system logs for errors and the only thing I found is this error (repeated a couple of times) in dmesg output: [706560.640085] INFO: task mysqld:31965 blocked for more than 120 seconds. [706560.640198] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [706560.640312] mysqld D ffff88032fd93f40 0 31965 1 0x00000000 [706560.640317] ffff880242a27d18 0000000000000086 ffff88031a50dd00 ffff880242a27fd8 [706560.640321] ffff880242a27fd8 ffff880242a27fd8 ffff88031e549740 ffff88031a50dd00 [706560.640325] ffff88031a50dd00 ffff88032fd947f8 0000000000000002 ffffffff8112f250 [706560.640328] Call Trace: [706560.640338] [<ffffffff8112f250>] ? __lock_page+0x70/0x70 [706560.640344] [<ffffffff816cb1b9>] schedule+0x29/0x70 [706560.640347] [<ffffffff816cb28f>] io_schedule+0x8f/0xd0 [706560.640350] [<ffffffff8112f25e>] sleep_on_page+0xe/0x20 [706560.640353] [<ffffffff816c9900>] __wait_on_bit+0x60/0x90 [706560.640356] [<ffffffff8112f390>] wait_on_page_bit+0x80/0x90 [706560.640360] [<ffffffff8107dce0>] ? autoremove_wake_function+0x40/0x40 [706560.640363] [<ffffffff8112f891>] filemap_fdatawait_range+0x101/0x190 [706560.640366] [<ffffffff81130975>] filemap_write_and_wait_range+0x65/0x70 [706560.640371] [<ffffffff8122e441>] ext4_sync_file+0x71/0x320 [706560.640376] [<ffffffff811c3e6d>] do_fsync+0x5d/0x90 [706560.640379] [<ffffffff811c40d0>] sys_fsync+0x10/0x20 [706560.640383] [<ffffffff816d495d>] system_call_fastpath+0x1a/0x1f When this happens the only way to make everything working again is a full reboot, but in order to do that I'm forced to use this command after I've manually stopped all running processes echo b > /proc/sysrq-trigger otherwise normal reboot process hangs forever. I've tracked reboots script and I've found out that also the reboot process hangs on a sync call, this one in /etc/init.d/sendsigs (I'm on ubuntu) # Flush the kernel I/O buffer before we start to kill # processes, to make sure the IO of already stopped services to # not slow down the remaining processes to a point where they # are accidentily killed with SIGKILL because they did not # manage to shut down in time. sync I'm almost sure that the cause of this is an hardware issue (the RAID controller???) also because I've other two machines with the same hardware and software configuration and they don't suffer of this, but I can't find any hint in syslog or dmesg. I've also installed smartmontools and mcelog packages but none of them did report any issue. What can I do to track the cause of this issue? Today is happened again, here is the status of system after triggering a reboot init---console-kit-dae---64*[{console-kit-dae}] +-dbus-daemon +-mcelog +-mysqld---{mysqld} +-newrelic-daemon---newrelic-daemon---11*[{newrelic-daemon}] +-ntpd +-polkitd---{polkitd} +-python3 +-rpc.idmapd +-rpc.statd +-rpcbind +-sh---rc---S20sendsigs---sync +-smartd +-snmpd +-sshd---sshd---zsh---sudo---zsh---pstree +-sshd---sshd---zsh---sudo---zsh And here is the status of sync process # ps aux | grep sync root 3637 0.1 0.0 4352 372 ? D 05:53 0:00 sync i.e. Uninterruptible sleep... 
Hardware specs as reported by lshw I think the raid controller is a fake raid. I usually don't deal with hardware (and for the record I don't have physical access to it) description: Computer product: X7DBP () vendor: Supermicro version: 0123456789 serial: 0123456789 width: 64 bits capabilities: smbios-2.4 dmi-2.4 vsyscall32 configuration: administrator_password=disabled boot=normal frontpanel_password=unknown keyboard_password=unknown power-on_password=disabled uuid=53D19F64-D663-A017-8922-0030487C1FEE *-core description: Motherboard product: X7DBP vendor: Supermicro physical id: 0 version: PCB Version serial: 0123456789 *-firmware description: BIOS vendor: Phoenix Technologies LTD physical id: 0 version: 6.00 date: 05/29/2007 size: 106KiB capacity: 960KiB capabilities: pci pnp upgrade shadowing escd cdboot bootselect edd int13floppy2880 acpi usb ls120boot zipboot biosbootspecification *-storage description: RAID bus controller product: 631xESB/632xESB SATA RAID Controller vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 version: 09 width: 32 bits clock: 66MHz capabilities: storage pm bus_master cap_list configuration: driver=ahci latency=0 resources: irq:19 ioport:18a0(size=8) ioport:1874(size=4) ioport:1878(size=8) ioport:1870(size=4) ioport:1880(size=32) memory:d8500400-d85007ff
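
    A few extra data points could help narrow down whether the controller or a disk is at fault the next time it wedges, gathered before rebooting - a sketch only, with example device names:

        echo w > /proc/sysrq-trigger        # dump all blocked (D-state) tasks to dmesg
        cat /proc/$(pgrep -x sync)/stack    # kernel stack of the stuck sync process
        iostat -x 1 5                       # per-device utilisation and await times (sysstat package)
        smartctl -a /dev/sda                # drive-level error counters behind the controller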

    Read the article

  • SBS2003 to SBS2011 Migration - Installation Error

    - by Shawn Gradwell
    Microsoft Small Business Server 2003 to 2011 Migration. I followed the Migration Guide from Microsoft and the source server had no errors when running the various tests prior to the migration. I have completed the destination server setup using the Answer File and the server is up and running. It all looks good, I can access Exchange and AD and the only problem is the error message when you log in stating that the setup did not complete and to check the logs. Because all looks good I am continuing the migration to the destination server. I also have to state that this client does not use Sharepoint at all. Do I have to redo everything? Herewith the logs: [4992] 121016.225454.5905: Task: Starting Add User or Group access VSS registry. [4992] 121016.225454.7645: TaskManagement: In TaskScheduler.RunTasks(): The "ConfigureSharePointVSSRegistryTask" Task threw an Exception during the Run() call:System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) at System.Security.Principal.NTAccount.Translate(Type targetType) at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified) at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl) at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink) at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName) [4992] 121016.225454.7655: Setup: An error was encountered on the TME thread: System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) at System.Security.Principal.NTAccount.Translate(Type targetType) at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified) at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl) at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink) at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName) at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter._RunTasks(Object sender, DoWorkEventArgs e) [4956] 121016.225455.0685: Setup: _UnhandledExceptionHandler: Setup encountered an error: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Reflection.TargetInvocationException: The TME thread failed (see the inner exception). ---> System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. 
at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) at System.Security.Principal.NTAccount.Translate(Type targetType) at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified) at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl) at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink) at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName) at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter._RunTasks(Object sender, DoWorkEventArgs e) at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) --- End of inner exception stack trace --- at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter.TasksCompleted(Object sender, RunWorkerCompletedEventArgs e) --- End of inner exception stack trace --- at System.RuntimeMethodHandle._InvokeMethodFast(IRuntimeMethodInfo method, Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeType typeOwner) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks) at System.Delegate.DynamicInvokeImpl(Object[] args) at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme) at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme) at System.Windows.Forms.Control.InvokeMarshaledCallbacks() at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardChainEngine.Launch() at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass._LaunchWizard() at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass.RealMain(String[] args) at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass.Main(String[] args) [4956] 121016.225455.0865: Setup: Removed the password. 
[4956] 121016.225455.0905: Setup: Deleting scheduled task at path Microsoft\Windows\Windows Small Business Server 2011 Standard with name Setup [4956] 121016.225455.8055: Setup: Removed SBSSetup from the RunOnce.
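
    The stack trace dies inside NTAccount.Translate, i.e. the ConfigureSharePointVSSRegistryTask is handed an account or group name that the destination domain cannot resolve to a SID. One way to reproduce the lookup outside of setup and see which name fails is a quick PowerShell check - 'DOMAIN\SomeAccount' below is only a placeholder for whichever account or group you want to test:

        $name = New-Object System.Security.Principal.NTAccount('DOMAIN\SomeAccount')
        $name.Translate([System.Security.Principal.SecurityIdentifier])   # throws IdentityNotMappedException if it cannot be mapped

    Since this client never used SharePoint, a missing or orphaned SharePoint service account or group carried over from the source domain is a plausible culprit.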

    Read the article

  • Severe mysqldump performance degradation using CentOS Linux, 8GB PAE and MySQL 5.0.77

    - by Duncan Harris
    We use MySQL 5.0.77 on CentOS 5.5 on VMWare: Linux dev.ic.soschildrensvillages.org.uk 2.6.18-194.11.4.el5PAE #1 SMP Tue Sep 21 05:48:23 EDT 2010 i686 i686 i386 GNU/Linux We have recently upgraded from 4GB RAM to 8GB. When we did this the time of our mysqldump overnight backup jumped from under 10 minutes to over 2 hours. It also caused unresponsiveness on our plone based web site due to database load. The dump is using the optimized mysqldump format and is spooled directly through a socket to another server. Any ideas on what we could do to fix gratefully appreciated. Would a MySQL upgrade help? Anything we can do to MySQL config? Anything we can do to Linux config? Or do we have to add another server or go to 64-bit? We ran a previous (non-virtual) server on 6GB PAE and didn't notice a similar issue. This was on same MySQL version, but Centos 4.4. Server config file: [mysqld] port=3307 socket=/tmp/mysql_live.sock wait_timeout=31536000 interactive_timeout=31536000 datadir=/var/mysql/live/data user=mysql max_connections = 200 max_allowed_packet = 64M table_cache = 2048 binlog_cache_size = 128K max_heap_table_size = 32M sort_buffer_size = 2M join_buffer_size = 2M lower_case_table_names = 1 innodb_data_file_path = ibdata1:10M:autoextend innodb_buffer_pool_size=1G innodb_log_file_size=300M innodb_log_buffer_size=8M innodb_flush_log_at_trx_commit=1 innodb_file_per_table [mysqldump] # Do not buffer the whole result set in memory before writing it to # file. Required for dumping very large tables quick max_allowed_packet = 64M [mysqld_safe] # Increase the amount of open files allowed per process. Warning: Make # sure you have set the global system limit high enough! The high value # is required for a large number of opened tables open-files-limit = 8192 Server variables: mysql> show variables; +---------------------------------+------------------------------------------------------------------+ | Variable_name | Value | +---------------------------------+------------------------------------------------------------------+ | auto_increment_increment | 1 | | auto_increment_offset | 1 | | automatic_sp_privileges | ON | | back_log | 50 | | basedir | /usr/local/mysql-5.0.77-linux-i686-glibc23/ | | binlog_cache_size | 131072 | | bulk_insert_buffer_size | 8388608 | | character_set_client | latin1 | | character_set_connection | latin1 | | character_set_database | latin1 | | character_set_filesystem | binary | | character_set_results | latin1 | | character_set_server | latin1 | | character_set_system | utf8 | | character_sets_dir | /usr/local/mysql-5.0.77-linux-i686-glibc23/share/mysql/charsets/ | | collation_connection | latin1_swedish_ci | | collation_database | latin1_swedish_ci | | collation_server | latin1_swedish_ci | | completion_type | 0 | | concurrent_insert | 1 | | connect_timeout | 10 | | datadir | /var/mysql/live/data/ | | date_format | %Y-%m-%d | | datetime_format | %Y-%m-%d %H:%i:%s | | default_week_format | 0 | | delay_key_write | ON | | delayed_insert_limit | 100 | | delayed_insert_timeout | 300 | | delayed_queue_size | 1000 | | div_precision_increment | 4 | | keep_files_on_create | OFF | | engine_condition_pushdown | OFF | | expire_logs_days | 0 | | flush | OFF | | flush_time | 0 | | ft_boolean_syntax | + -><()~*:""&| | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | | group_concat_max_len | 1024 | | have_archive | YES | | have_bdb | NO | | have_blackhole_engine | YES | | have_compress | YES | | have_crypt | YES | | 
have_csv | YES | | have_dynamic_loading | YES | | have_example_engine | NO | | have_federated_engine | YES | | have_geometry | YES | | have_innodb | YES | | have_isam | NO | | have_merge_engine | YES | | have_ndbcluster | DISABLED | | have_openssl | DISABLED | | have_ssl | DISABLED | | have_query_cache | YES | | have_raid | NO | | have_rtree_keys | YES | | have_symlink | YES | | hostname | app.ic.soschildrensvillages.org.uk | | init_connect | | | init_file | | | init_slave | | | innodb_additional_mem_pool_size | 1048576 | | innodb_autoextend_increment | 8 | | innodb_buffer_pool_awe_mem_mb | 0 | | innodb_buffer_pool_size | 1073741824 | | innodb_checksums | ON | | innodb_commit_concurrency | 0 | | innodb_concurrency_tickets | 500 | | innodb_data_file_path | ibdata1:10M:autoextend | | innodb_data_home_dir | | | innodb_adaptive_hash_index | ON | | innodb_doublewrite | ON | | innodb_fast_shutdown | 1 | | innodb_file_io_threads | 4 | | innodb_file_per_table | ON | | innodb_flush_log_at_trx_commit | 1 | | innodb_flush_method | | | innodb_force_recovery | 0 | | innodb_lock_wait_timeout | 50 | | innodb_locks_unsafe_for_binlog | OFF | | innodb_log_arch_dir | | | innodb_log_archive | OFF | | innodb_log_buffer_size | 8388608 | | innodb_log_file_size | 314572800 | | innodb_log_files_in_group | 2 | | innodb_log_group_home_dir | ./ | | innodb_max_dirty_pages_pct | 90 | | innodb_max_purge_lag | 0 | | innodb_mirrored_log_groups | 1 | | innodb_open_files | 300 | | innodb_rollback_on_timeout | OFF | | innodb_support_xa | ON | | innodb_sync_spin_loops | 20 | | innodb_table_locks | ON | | innodb_thread_concurrency | 8 | | innodb_thread_sleep_delay | 10000 | | interactive_timeout | 31536000 | | join_buffer_size | 2097152 | | key_buffer_size | 8384512 | | key_cache_age_threshold | 300 | | key_cache_block_size | 1024 | | key_cache_division_limit | 100 | | language | /usr/local/mysql-5.0.77-linux-i686-glibc23/share/mysql/english/ | | large_files_support | ON | | large_page_size | 0 | | large_pages | OFF | | lc_time_names | en_US | | license | GPL | | local_infile | ON | | locked_in_memory | OFF | | log | OFF | | log_bin | OFF | | log_bin_trust_function_creators | OFF | | log_error | | | log_queries_not_using_indexes | OFF | | log_slave_updates | OFF | | log_slow_queries | OFF | | log_warnings | 1 | | long_query_time | 10 | | low_priority_updates | OFF | | lower_case_file_system | OFF | | lower_case_table_names | 1 | | max_allowed_packet | 67108864 | | max_binlog_cache_size | 4294963200 | | max_binlog_size | 1073741824 | | max_connect_errors | 10 | | max_connections | 200 | | max_delayed_threads | 20 | | max_error_count | 64 | | max_heap_table_size | 33554432 | | max_insert_delayed_threads | 20 | | max_join_size | 18446744073709551615 | | max_length_for_sort_data | 1024 | | max_prepared_stmt_count | 16382 | | max_relay_log_size | 0 | | max_seeks_for_key | 4294967295 | | max_sort_length | 1024 | | max_sp_recursion_depth | 0 | | max_tmp_tables | 32 | | max_user_connections | 0 | | max_write_lock_count | 4294967295 | | multi_range_count | 256 | | myisam_data_pointer_size | 6 | | myisam_max_sort_file_size | 2146435072 | | myisam_recover_options | OFF | | myisam_repair_threads | 1 | | myisam_sort_buffer_size | 8388608 | | myisam_stats_method | nulls_unequal | | ndb_autoincrement_prefetch_sz | 1 | | ndb_force_send | ON | | ndb_use_exact_count | ON | | ndb_use_transactions | ON | | ndb_cache_check_time | 0 | | ndb_connectstring | | | net_buffer_length | 16384 | | net_read_timeout | 30 | | net_retry_count | 10 | | 
net_write_timeout | 60 | | new | OFF | | old_passwords | OFF | | open_files_limit | 8192 | | optimizer_prune_level | 1 | | optimizer_search_depth | 62 | | pid_file | /var/mysql/live/mysqld.pid | | plugin_dir | | | port | 3307 | | preload_buffer_size | 32768 | | profiling | OFF | | profiling_history_size | 15 | | protocol_version | 10 | | query_alloc_block_size | 8192 | | query_cache_limit | 1048576 | | query_cache_min_res_unit | 4096 | | query_cache_size | 0 | | query_cache_type | ON | | query_cache_wlock_invalidate | OFF | | query_prealloc_size | 8192 | | range_alloc_block_size | 4096 | | read_buffer_size | 131072 | | read_only | OFF | | read_rnd_buffer_size | 262144 | | relay_log | | | relay_log_index | | | relay_log_info_file | relay-log.info | | relay_log_purge | ON | | relay_log_space_limit | 0 | | rpl_recovery_rank | 0 | | secure_auth | OFF | | secure_file_priv | | | server_id | 0 | | skip_external_locking | ON | | skip_networking | OFF | | skip_show_database | OFF | | slave_compressed_protocol | OFF | | slave_load_tmpdir | /tmp/ | | slave_net_timeout | 3600 | | slave_skip_errors | OFF | | slave_transaction_retries | 10 | | slow_launch_time | 2 | | socket | /tmp/mysql_live.sock | | sort_buffer_size | 2097152 | | sql_big_selects | ON | | sql_mode | | | sql_notes | ON | | sql_warnings | OFF | | ssl_ca | | | ssl_capath | | | ssl_cert | | | ssl_cipher | | | ssl_key | | | storage_engine | MyISAM | | sync_binlog | 0 | | sync_frm | ON | | system_time_zone | GMT | | table_cache | 2048 | | table_lock_wait_timeout | 50 | | table_type | MyISAM | | thread_cache_size | 0 | | thread_stack | 196608 | | time_format | %H:%i:%s | | time_zone | SYSTEM | | timed_mutexes | OFF | | tmp_table_size | 33554432 | | tmpdir | /tmp/ | | transaction_alloc_block_size | 8192 | | transaction_prealloc_size | 4096 | | tx_isolation | REPEATABLE-READ | | updatable_views_with_limit | YES | | version | 5.0.77 | | version_comment | MySQL Community Server (GPL) | | version_compile_machine | i686 | | version_compile_os | pc-linux-gnu | | wait_timeout | 31536000 | +---------------------------------+------------------------------------------------------------------+ 237 rows in set (0.00 sec)
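
    Given that the slowdown appeared right after going from 4GB to 8GB on a 32-bit PAE kernel, it is worth confirming whether the box is swapping or starving lowmem during the dump before changing MySQL itself. A sketch - the socket path comes from the config above, the output path is illustrative, and --single-transaction only avoids locking for the InnoDB tables since the default engine here is MyISAM:

        free -m                                    # how memory is actually being used under PAE
        vmstat 5                                   # watch the si/so columns while the dump runs
        mysqldump --socket=/tmp/mysql_live.sock --single-transaction --all-databases \
            | gzip > /var/backups/nightly.sql.gz   # compress before it crosses the wire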

    Read the article

  • Launchd item no longer firing in Snow Leopard

    - by ridogi
    A launchd item that was working in 10.5 is no longer working after my upgrade to 10.6. I am running 10.6.2 and I have recreated the launchd item and given it a new name and that one doesn't run either. I have found a link of people with the same problem on google groups but none of the advice in that link helps. My launchd item is not listed in /private/var/db/launchd.db/com.apple.launchd/overrides.plist or in any of the overrides.plist files in the subdirectories of /private/var/db/launchd.db/ I have also tried to set this up as both a user agent and a user daemon. My launchd item simply runs a shell script, which I have no problem launching manually. <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>com.eric.tmnotify.launchd</string> <key>ProgramArguments</key> <array> <string>/<path_to>/tmnotify.sh</string> </array> <key>StartInterval</key> <integer>3600</integer> </dict> </plist> I have tried to load it by overriding the disabled key (even though it is not disabled in any of the overrides.plist files) with both: sudo launchctl load -F /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist sudo launchctl load -w /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist and after running either of them I can see that it is running by using sudo launchctl list but the shell script never fires. Edit: I have also put this in the formerly blank file at /private/var/db/launchd.db/com.apple.launchd.peruser.501/overrides.plist : <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>com.eric.tmnotify.launchd</key> <dict> <key>Disabled</key> <false/> </dict> </dict> </plist> I also tried inserting this alphabetically: <key>com.eric.tmnotify.launchd</key> <dict> <key>Disabled</key> <false/> </dict> into the file /private/var/db/launchd.db/com.apple.launchd/overrides.plist but still no dice.
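
    One difference that bites people moving from 10.5 to 10.6 is that a plist in ~/Library/LaunchAgents is meant to be loaded in the user's own session; loading it with sudo puts it in root's domain, where "sudo launchctl list" will show it even though the per-user job never fires. A sketch of what could be tried, using the paths from the post:

        plutil -lint ~/Library/LaunchAgents/com.eric.tmnotify.launchd.plist   # confirm the plist parses
        launchctl unload ~/Library/LaunchAgents/com.eric.tmnotify.launchd.plist 2>/dev/null
        launchctl load -w ~/Library/LaunchAgents/com.eric.tmnotify.launchd.plist   # as the user, no sudo
        launchctl list | grep tmnotify      # a non-zero status column means the script itself exited with an error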

    Read the article

  • SQL2008R2 install issues on Windows 7 - unable to install setup support files?

    - by Liam
    I am trying to install the above but am getting the following errors when its attempting to install the setup support files, This is the first error that occurs during installation of the setup support files TITLE: Microsoft SQL Server 2008 R2 Setup ------------------------------ The following error has occurred: The installer has encountered an unexpected error. The error code is 2337. Could not close file: Microsoft.SqlServer.GridControl.dll GetLastError: 0. Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.50.1600.1&EvtType=0xDF039760%25401201%25401 This is the second error that occurs after clicking continue in the installer after the first error is generated TITLE: Microsoft SQL Server 2008 R2 Setup ------------------------------ The following error has occurred: SQL Server Setup has encountered an error when running a Windows Installer file. Windows Installer error message: The Windows Installer Service could not be accessed. This can occur if the Windows Installer is not correctly installed. Contact your support personnel for assistance. Windows Installer file: C:\Users\watto_uk\Desktop\In-Digital\Software\Microsoft\SQL Server 2008 R2\1033_ENU_LP\x64\setup\sqlsupport_msi\SqlSupport.msi Windows Installer log file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110713_205508\SqlSupport_Cpu64_1_ComponentUpdate.log Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.50.1600.1&EvtType=0xDC80C325 These errors are generated from an ISO package downloaded from Microsoft. I have also tried using the web platform installer to install the express version instead but the SQL Server Installation fails with that also. The management studio installs fine but not the server. I have checked to make sure that the Windows Installer is started and it is. Cant seem to find an answer for this anywhere as all previous reported issues appear to be related to XP. I did have the express edition installed on the machine previously but uninstalled it to upgrade to the full version, I wish I hadn't now. Can anyone kindly offer any advice or point me in the right direction to stop me going insane with this? Any advice will be appreciated. Update======================= After digging a bit deeper ive located details of the error from the setup log file, i can also upload the log file if required. MSI (s) (E8:28) [23:35:18:705]: Assembly Error:The module '%1' was expected to contain an assembly manifest. MSI (s) (E8:28) [23:35:18:705]: Note: 1: 1935 2: 3: 0x80131018 4: IStream 5: Commit 6: MSI (s) (E8:28) [23:35:18:705]: Note: 1: 2337 2: 0 3: Microsoft.SqlServer.GridControl.dll MSI (s) (E8:28) [23:35:22:869]: Product: Microsoft SQL Server 2008 R2 Setup (English) -- Error 2337. The installer has encountered an unexpected error. The error code is 2337. Could not close file: Microsoft.SqlServer.GridControl.dll GetLastError: 0. MSI (s) (E8:28) [23:35:22:916]: Internal Exception during install operation: 0xc0000005 at 0x000007FEE908A23E. MSI (s) (E8:28) [23:35:22:916]: WER report disabled for silent install. MSI (s) (E8:28) [23:35:22:932]: Internal MSI error. Installer terminated prematurely. Error 2337. 
The installer has encountered an unexpected error. The error code is 2337. Could not close file: Microsoft.SqlServer.GridControl.dll GetLastError: 0. MSI (s) (E8:28) [23:35:22:932]: MainEngineThread is returning 1603 MSI (s) (E8:58) [23:35:22:932]: RESTART MANAGER: Session closed. Installer stopped prematurely. MSI (c) (0C:14) [23:35:22:947]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (0C:14) [23:35:22:947]: MainEngineThread is returning 1601 === Verbose logging stopped: 13/07/2011 23:35:22 ===
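
    Error 2337 on Microsoft.SqlServer.GridControl.dll during the Setup Support Files step very often traces back to corrupt installation media (a bad ISO download or a flaky virtual-drive/extraction tool), and the second message suggests the Windows Installer service got wedged along the way. Two cheap checks before anything drastic - the ISO path is an example, and the hash should be compared against the one published for the download:

        certutil -hashfile C:\ISOs\en_sql_server_2008_r2.iso SHA1
        msiexec /unregister
        msiexec /regserver

    If the hash does not match, re-download the ISO, extract it with a different tool (or burn it), and run setup from a local folder rather than the mounted image.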

    Read the article

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ UPDATE AT THE BOTTOM. THANKS! ;) Environment Info (all Windows): 2 sites 30 servers site #1 (3TB of backup data) 5 servers site #2 (1TB of backup data) MPLS backbone tunnel connecting site #1 and site #2 Current Backup Process: Online Backup (disk-to-disk) Site #1 has a server running Symantec Backup Exec 12.5 with four 1TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores them on said disks. Off-site Backup (tape) Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week which gets picked up by our off-site storage company. Obviously we rotate two tape libraries, one is always here and one is always there. Requirements: Eliminate the need for tape and off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa. Software based solution as hardware options have been too pricey (ie, SonicWall, Arkeia). Agents for Exchange, SharePoint, and SQL. Some Ideas So Far: Storage DroboPro at each site with an initial 8TB of storage (these are expandable up to 16TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too. Software Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication. Server Because there is no more need for a SCSI adapter (for tape drive) we are going to virtualize our backup server as it is currently the only physical machine save for SQL boxes. Problems: When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this each of those huge files will go across the wire every week because they change every day. And Finally, the Question: Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, better? Thanks. Sorry so long. UPDATE 2: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work but it needs to be native to Windows and not a port involving shenanigans to get up and running. Prefer a GUI based product and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists or if you think I'm being to restrictive keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again everyone. UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out and whoever has the most votes will get the 100 rep points. Thanks again!
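
    Whatever product ends up doing the site-to-site replication, the deciding number is how much data actually changes between runs, since dedup and compression claims only matter relative to that delta. Robocopy's list-only mode gives a rough measurement without copying or deleting anything - a sketch with example paths, run against an existing copy of the backup folder at the other site:

        robocopy D:\BackupToDisk \\site2\backupstage /MIR /L /NP /NFL /NDL /LOG:C:\delta-report.log

    The tail of the log shows the total bytes robocopy would have transferred, which is the figure to weigh against the MPLS link and any product's dedup ratio.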

    Read the article

  • Looking for advice on Hyper-V storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on a SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage a single point of failure. Ideally, that would involve real-time failover for the storage, like the Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually setup a cluster at colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.) The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Clustered Shared Volume(s) (CSV), but it seems like replication support for these is either not available or brand new in both these products. It looks like CSVs are supported in Double-Take 5.22, see this discussion, but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do. I would prefer to stick to the using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because they make the DataKeeper volume(s) available to the Failover Clustering Manager and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover. Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in their examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is eliminate the storage as as single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seems to be as popular as SteelEye and Double-Take. I have seen a number of people suggest using Starwind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) The company I work for is exclusively a Dell shop and Dell does not have any servers with that I can pack with more than six 3.5" SATA drives. 2) In the future, it could be advantegous for us to not be locked into a particular brand or type of storage and third-party replication softwares all allow replication to heterogeneous storage devices. I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or overlooking/missing something.

    Read the article

  • Determining the required depth and specifications for a server cabinet

    - by Bingu Bingme
    I'm trying to understand the considerations ("why") that go into determining the specifications ("what") for a rackmount server cabinet, in order to determine what sort of rack I should purchase for my home use. Since this is for home use, I won't be following certain best practices (eg. hot/cold aisle, not even air conditioning) and may be willing to sacrifice in various areas in order to reduce cost and footprint - but please advise if there are safety concerns or other considerations to note. The most basic specs for a server cabinet are the dimensions (external width x external depth x usable height). Width: commonly 600mm or 800mm (if the use case requires extra clearance around the sides, such as if there is lots of cabling). In my case and most common cases, I'm going to stick with 600mm. Height: Select a sufficiently tall rack to fit my equipment. But how much may I stuff into it? Eg, if there is a 15U rack, can I really populate it with 15U of servers, or should I leave 1U at top and bottom for air circulation? Depth: Racks commonly have external depth of 600mm (network equipment), 800mm, 1000mm, or even longer. I'm trying to see how to fit into the 800mm depth. With reference to http://www.server-racks.com/rack-mount-depth.html, I'm hoping to have the front and rear posts mounted ~ 28.5" (72cm) apart, which would leave only 8cm for front space and rear space. How much rear space (from rear posts to back of rack) do I really need? I won't use cable management arms, so can I mount a 72cm depth server since the power, KVM, network cables won't take up much depth? My most important equipment are all < 60cm depth (4U chassis) and should comfortably fit within the 800mm cabinet. The rest of the equipment are very old 1U servers that range from 65-72cm depth. I might still want to make further use of them, or I might discard them since they are so old. Even if the 72cm servers cannot be powered on in an 800mm rack, I should be able to use them as 1U shelves. But, what server depth can I expect to be able to operate? Or am I forced to upgrade to 1000mm depth racks in order to use any servers deeper than 60cm? With reference to best practices for HP racks, some other specs and installation considerations: There aren't any minimum recommendations for clearance on the sides of the rack. It is recommended to leave 48" front clearance. The 48" front clearance is based on 32" chassis depth, 13" to extend the rack rails and mate the inner/outer rails, and 3" for movement. If I don't use such rails (eg, use shelves instead), it should be sufficient to leave front clearance of chassis depth + 3". It is recommended to leave 30" rear clearance "to provide space for servicing the rack". I'm planning to back the rack into a corner of the room, and wheel it slightly out when I need to access the rear. If the wheeling plan is ok, I still need to know how much rear clearance is required for air circulation and ventilation purposes. Castor wheels and stabilising feet. Since I'm backing the rack into a corner of the room, I'll only be able to set the stabilising feet on the front corners. Thoughts on safety? The rack that I'm considering has front glass doors with side ventilation slits and fully perforated rear doors. I'm hoping this will be a good balance between temperature and noise (only ventilation slits facing out the front, while the rear is facing the walls). Or is the sound of high-rpm fans going to escape through the front slits anyway and destroy my sanity?
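    As a rough illustration of the depth arithmetic above, here is a quick sketch using the figures given in the question; door and frame allowances vary by vendor, so treat those as assumptions to check against the cabinet drawing.
        # Rough depth budget for the 800 mm cabinet discussed above (all figures in mm).
        external_depth = 800
        rail_spacing = 720                      # planned front-to-rear post distance
        slack = external_depth - rail_spacing   # 80 mm split between front and rear
        for name, depth in {"4U storage chassis": 600, "old 1U server": 720}.items():
            overhang = max(depth - rail_spacing, 0)   # how far it pokes past the rear posts
            print(f"{name}: {overhang} mm past the rear posts, "
                  f"{external_depth - depth} mm left for doors, PDUs and cabling")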

    Read the article

  • How to connect FreeBSD Jail to network

    - by jgtumusiime
    So recently I successfully installed and configured a FreeBSD jail, and I would like to install software within my jail, but I cannot connect to the network. I'm trying to set up an Apache+PHP+MySQL installation within the jail and have the webserver accessible by users. Here is my rc.conf for the jail:
        ...
        jail_enable="YES"                           # Set to NO to disable starting of any jails
        jail_list="mambo2"                          # Space separated list of names of jails
        jail_mambo2_rootdir="/usr/jails/j01"        # jail's root directory
        jail_mambo2_hostname="mambo2.ug"            # jail's hostname
        jail_mambo2_ip="192.168.100.174"            # jail's IP address
        jail_mambo2_devfs_enable="YES"              # mount devfs in the jail
        jail_mambo2_devfs_ruleset="mambo2_ruleset"  # devfs ruleset to apply to jail
    Here is my jail ifconfig output:
        mambo2# ifconfig
        rl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
                options=8<VLAN_MTU>
                ether 00:c1:28:00:48:db
                media: Ethernet autoselect (100baseTX <full-duplex>)
                status: active
        plip0: flags=108810<POINTOPOINT,SIMPLEX,MULTICAST,NEEDSGIANT> metric 0 mtu 1500
        lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        mambo2#
    It does not show the IP address I configured within /etc/rc.conf. But when I list the running jails, it shows the right IP address. Here is a list of running jails:
        [root@mambo /usr/home/jtumusiime]# jls
           JID  IP Address       Hostname     Path
             5  192.168.100.174  mambo2.ug    /usr/jails/j01
    I also created an /etc/resolv.conf for nameservers. This did not exist before, so I'm not quite sure whether it is necessary:
        mambo2# cat /etc/resolv.conf
        nameserver 192.168.100.251
        nameserver 8.8.8.8
        mambo2#
    My host has 4 IP addresses, 3 public and one private: 192.168.100.173. I tried creating a jail using ezjail and this did not work out:
        [root@mambo /usr/src]# ezjail-admin update -p -i
        Error: Cannot find your copy of the FreeBSD source tree in .
        Consider using 'ezjail-admin install' to create the base jail from an ftp server.
        [root@mambo /usr/src]#
    I have an updated copy of the FreeBSD 7.1 source tree from SVN in /usr/src/:
        [root@mambo /usr/src]# svn info
        Path: .
        URL: http://svn.freebsd.org/base/release/7.1.0
        Repository Root: http://svn.freebsd.org/base
        Repository UUID: ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f
        Revision: 243371
        Node Kind: directory
        Schedule: normal
        Last Changed Author: kensmith
        Last Changed Rev: 186660
        Last Changed Date: 2009-01-01 01:57:14 +0300 (Thu, 01 Jan 2009)
        [root@mambo /usr/src]#
    and I did # make buildworld while building the first jail, i.e. mambo2. Here is an excerpt of the output of ezjail-admin install:
        ...
        221 Goodbye.
        Trying 193.162.146.4...
        Connected to ftp.freebsd.org.
        220 ftp.beastie.tdk.net FTP server (Version 6.00LS) ready.
        331 Guest login ok, send your email address as password.
        230 Guest login ok, access restrictions apply.
        Remote system type is UNIX.
        Using binary mode to transfer files.
        200 Type set to I.
        550 pub/FreeBSD-Archive/old-releases/i386/7.1-RELEASE/base: No such file or directory.
        221 Goodbye.
        Could not fetch base from ftp.freebsd.org.
        Maybe your release (7.1-RELEASE) is specified incorrectly or the host ftp.freebsd.org
        does not provide that release build. Use the -r option to specify an existing release
        or the -h option to specify an alternative ftp server.
        Querying your ftp-server... The ftp server you specified (ftp.freebsd.org) seems to
        provide the following builds:
        Trying 193.162.146.4...
        total 10
        drwxrwxr-x  13 1006  1006   512 Feb 20  2011 8.2-RELEASE
        drwxrwxr-x  13 1006  1006   512 Apr 10  2012 8.3-RELEASE
        lrwxr-xr-x   1 1006  1006    16 Jan  7  2012 9.0-RELEASE -> i386/9.0-RELEASE
        drwxrwxr-x   7 1006  1006  1024 Feb 19  2012 ISO-IMAGES
        -rw-rw-r--   1 1006  1006   637 Nov 23  2005 README.TXT
        drwxrwxr-x   5 1006  1006   512 Nov  2 02:59 i386
    I do not want to upgrade my FreeBSD installation. I have Googled around, but to no avail. Thank you
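    One detail worth checking (a sketch, assuming the host NIC really is rl0 as in the output above): with the stock rc jail framework, the jail's IP has to exist as an alias on a host interface, which normally means telling the jail rc script which interface to use. Without it, the address is never configured on the wire and nothing in the jail can reach the network.
        # On the host's /etc/rc.conf (not inside the jail) -- interface name taken
        # from the ifconfig output above, netmask is an assumption:
        jail_mambo2_interface="rl0"
        # Equivalent one-off command on the host:
        # ifconfig rl0 alias 192.168.100.174 netmask 255.255.255.255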

    Read the article

  • Computer not POSTing "randomly"?

    - by smoth190
    I have a custom-built PC that is exhibiting some... odd... behavior, something I've never seen before. It was working fine one day, and the next day it wouldn't start. Seeing as I wanted an upgrade anyway, I purchased a new motherboard that was compatible with all my parts. While replacing the motherboard, I accidentally damaged the CPU. Well, I wanted a new one anyway... so I got a new one. Seeing as I was replacing a ton of parts already, I bought a new PSU because the old one was super loud. When I slapped it all together, it starts up: lights, fans, drives, they all start. But I get no display from the monitor. No beeps, which I believe means it doesn't POST. I figured it was the RAM, because after removing the sound card and graphics card, there was nothing else that I hadn't replaced. When I remove both sticks of RAM, I get a continuous beeping, which according to the mobo handbook means no RAM. So I think the mobo is functional, or at least partly. I bought new RAM, but it still didn't work. I tried 3 monitors, with both VGA and DVI, so it's probably not the monitor, either. Now, let me get to the random part. Every 20 or so boots (I should also mention, for about 3 out of 5 boots I have to unplug the PC because it won't power down via the button), it will POST and I'll get display. Then, after about 2 or 3 resets, it won't work again. This confuses me so much, because even when I change nothing, it will/will not work. My thought is that maybe it has something to do with the RAM not clearing or something. I also reset the CMOS battery, in case that had anything to do with it, but to no avail. I found some weird suggestion online about holding the power button for 30 seconds while it was unplugged. That did nothing, but I didn't expect it would... I've replaced just about the entire computer, and all the parts are compatible. I've done about everything I can think of, but nothing has worked. Hopefully someone can help me here. And as a side note: when I do get my computer to boot, it says my hardware has changed and I have to re-activate Windows. But it says I have to call Microsoft to do it. So I get this fancy automated voice that asks me to enter a code into Windows, then it asks me "How many computers have you activated with this copy of Windows?". Well, I had it on my computer before I replaced everything, so I said 1. Then he yelled at me for violating my 1-use license. I dunno what's going on there; do I have to re-purchase Windows 7? And they wonder why people pirate software... That's just a bonus question, though. Specs: 8GB of DDR2 RAM (Corsair), AMD CPU (I don't know what GHz or model because I can't find the box... I think it's 4 cores at 2.8GHz), ASRock A785GM-LE motherboard.

    Read the article

  • why is my centos, nginx, php,mysql server using so much memory

    - by kb2tfa
    I'm not sure where the problem lies. I have: CentOS 6, MySQL, PHP, php-fpm, WordPress (1 site). This is a dedicated server I'm learning on. As soon as I ran the web URL, the memory slammed to 125% of a 512 MB server, and I had to upgrade to 1 GB, so I'm not sure where the problem lies. I thought by switching to nginx from Apache I would have more memory free, but I'm still stuck with the problem. Before launching the site, with nginx and mysqld running, the site was about 5% memory. I renamed my "my-small.cnf" to my.cnf and put it in my /etc folder where the original was, but that seems not to have done it. After looking at my top results, I'm starting to think it may be php-fpm eating my memory, but I'm not sure. Is php-fpm the preferred way, or is there something better to use? I read about possible memory leaks in php-fpm. Here is what I have in php-fpm.conf:
        ;;;;;;;;;;;;;;;;;;;;;
        ; FPM Configuration ;
        ;;;;;;;;;;;;;;;;;;;;;
        ; All relative paths in this configuration file are relative to PHP's install
        ; prefix.
        ; Include one or more files. If glob(3) exists, it is used to include a bunch of
        ; files from a glob(3) pattern. This directive can be used everywhere in the
        ; file.
        include=/etc/php-fpm.d/*.conf
        ;;;;;;;;;;;;;;;;;;
        ; Global Options ;
        ;;;;;;;;;;;;;;;;;;
        [global]
        ; Pid file
        ; Default Value: none
        pid = /var/run/php-fpm/php-fpm.pid
        ; Error log file
        ; Default Value: /var/log/php-fpm.log
        error_log = /var/log/php-fpm/error.log
        ; Log level
        ; Possible Values: alert, error, warning, notice, debug
        ; Default Value: notice
        ;log_level = notice
        ; If this number of child processes exit with SIGSEGV or SIGBUS within the time
        ; interval set by emergency_restart_interval then FPM will restart. A value
        ; of '0' means 'Off'.
        ; Default Value: 0
        ;emergency_restart_threshold = 0
        ; Interval of time used by emergency_restart_interval to determine when
        ; a graceful restart will be initiated. This can be useful to work around
        ; accidental corruptions in an accelerator's shared memory.
        ; Available Units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;emergency_restart_interval = 0
        ; Time limit for child processes to wait for a reaction on signals from master.
        ; Available units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;process_control_timeout = 0
        ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
        ; Default Value: yes
        ;daemonize = yes
        ;;;;;;;;;;;;;;;;;;;;
        ; Pool Definitions ;
        ;;;;;;;;;;;;;;;;;;;;
        ; See /etc/php-fpm.d/*.conf
    (end php-fpm.conf)
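    For what it's worth, the pool file under /etc/php-fpm.d/ (not the global php-fpm.conf quoted above) is where the memory ceiling is usually set. A hedged sketch with illustrative, untuned values for a 1 GB box; size pm.max_children to roughly free RAM divided by the per-worker size shown in top:
        ; /etc/php-fpm.d/www.conf -- illustrative values, not a recommendation
        pm = dynamic
        pm.max_children = 5        ; hard cap on worker processes: the main memory lever
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3
        pm.max_requests = 200      ; recycle workers periodically to contain slow leaks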

    Read the article

  • My PC suddenly doesn't detect the primary drive (SSD)

    - by smoth190
    My computer has been working fine for months, and it worked today, but tonight I went to start it up to find that my OCZ Vertex 2 isn't being found. When I turn on my computer, the loading screen gets stuck at "Detecting IDE drives...". After a while, it keeps going and lists the drives it finds. The first one in the list should be my Vertex 2, but it just says "None". The computer proceeds to get stuck on "Loading operating system...", which is understandable because the drive with the OS is "gone". My first thought was drive failure, but every time drives have crashed on me, they're still detected--they just don't work. This drive is an SSD, it's pretty new, and I had no problems beforehand. I find it hard to believe it failed. I'm sure it's possible, but I hope this isn't the case. There has been nothing strange going on at all with my PC; it's been running perfectly until now. I was just about to do my monthly chkdsk and defrag today. I popped in my Windows 7 Home Premium disk and booted from it. When I launched the repair tool, it didn't list any operating systems (because the drive is 100% missing...). When I've had disks crash before, it still listed the OS, you just couldn't do anything with it. I tried to restore from an image, but I don't have any of those, either. I opened the command console and listed the drives with wmic logicaldisk get name. Only C: and D: came up. C: was my 1TB storage drive (luckily, all my stuff is here--only the OS is on the SSD!) and D: was the disk drive. So I still had an MIA drive... The SSD didn't come with any driver disks, so I can't install drivers. If there's a way to do this from a CD I can burn with my other PC, please let me know. What the heck do I do? Although only the OS is on my SSD, a new SSD is expensive. I'll probably also have to buy a new copy of Windows (an upgrade would be nice, though...) because I've found it eats my registration key when my PC crashes (and my thousands of dollars of Adobe programs, I'll be on the phone with tech support for a week to get those keys back). And I'll lose my registry, all my settings, all sorts of other stuff that I'll spend weeks restoring. My computer is a pain in the butt to take out and open up, so if I can't fix it, I'll try fiddling with the plug or putting it into a new computer, but not right now. Any help is greatly appreciated! The day when they make crash-less drives will be the day I live without worry.
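    A quick check worth doing from that same recovery-CD command prompt (standard diskpart usage; the interpretation is only a rule of thumb): if the disk does not appear at all, the problem is at the drive, cable, or controller level rather than anything Windows can repair.
        rem From the Windows 7 disc's command prompt:
        diskpart
          list disk       (SSD missing here too = BIOS/cable/drive problem, not the OS)
          list volume
          exit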

    Read the article

  • Windows 7 is stuck at "Starting Windows" when I attempt to boot computer

    - by Eli
    Basically, whenever I turn on my computer, it gets to the Starting Windows phase and just stays there. The startup animation still plays, yet it gets nowhere. I have tried booting into safe mode, however it gets stuck at loading CLASSPNP.SYS. It then freezes there and doesn't continue booting. I have tried booting into recovery mode from the hard drive, and it freezes after displaying the background image. I have tried booting from a recovery CD, which works, and I was able to use system restore. However, using system restore did not fix it, and it still is stuck at the Starting Windows screen. I have tried booting a Windows CD (Windows 8 Retail Installer) to see if I could upgrade it to fix this issue, however that froze at a blank screen after it got past the boot logo. I have tried changing around the BIOS settings (including resetting), to no avail. I have tried re-plugging the internal PSU cables (this is a custom-built desktop), yet this has changed nothing. I can boot into a loopback Ubuntu install on the same drive, which works fine, other than the fact that it has issues with some of the USB ports and the network card. This system has worked fine for the past few months, completely stable, and nothing in the configuration has changed before this error started happening. Startup Repair on the Windows recovery CD doesn't find any issues. Unplugging my secondary hard drive or swapping around memory doesn't change anything. The hard drive itself is fine, it hasn't shown any signs of failure and once again, boots my other OS fine. If anyone could help with this, that would be great. I can't seem to find any possible solution to this. If it makes any difference, my system specs are as follows: AMD FX-8320 Gigabyte GA-970A-D3 4GB of DDR3 Radeon HD 6870 550w PSU I'd like to not have to reinstall Windows, for I have more than a terabyte of data that I would have to back up if that becomes the only option. EDIT: I have since tried the following: Tried the solution involving restoring files from RegBackup, which changed nothing. Tried testing everything with Hiren's boot CD, everything comes back as fine. Tried disabling everything unnecessary in the BIOS and unplugging everything unneeded, it still hangs. Tried swapping out every possible combination of RAM, it still has the same result. The RAM is not at fault it seems Tried every GPU I own (which is many!) and it still hangs at the exact same place. Tried minimizing the power consumption as much as possible, even using an old PCI graphics card. It still hangs at the same place in the same way, signifying that it's not the PSU at fault. Tried resetting the BIOS again, still nothing. Tried every possible combination of BIOS options, even downclocking everything, it still hangs in the same spot. Tried upgrading the BIOS from version FB to FD, which changed nothing. Based on this, I would conclude the motherboard to be at fault. Are there any other possibilities? I don't want to spend $150 for a new motherboard. EDIT 2: This is what it gets stuck at when I try to boot into safe mode: Note the slight graphical corruption at the top of the screen. No matter how I set up the system, this seems to be there. In addition, either it has stopped booting into safe mode now, or it takes upwards of 2+ hours, and I haven't left it running for that long.
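    If it comes to ruling out disk and system-file corruption, both checks can be run offline from that same recovery command prompt (a sketch; under WinRE the Windows volume is not always lettered C:, so confirm the drive letter first):
        rem Run from the recovery CD's command prompt; verify the letter first (e.g. dir C:\).
        chkdsk C: /r
        sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows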

    Read the article

  • Exchange Server 2007 Send and Receive Connectors

    - by Mistiry
    I have gotten awesome advice from users on here for getting Exchange on Windows SBS 2008 set up. I think this is the final piece and I'm ready for roll-out! I need to set up Exchange so that it RECEIVES mail from our existing mail server as a Forward [aliases on the existing mail server forward mail from [email protected] to [email protected]] (not using the POP3 Connector), and SENDS mail through that server as well (sends from [email protected] to [email protected] and then out to the world, showing in the headers as from [email protected], or at the absolute least having the reply-to set as this). Alternatively, as long as the .net email address doesn't show in the From and replies are directed to the .com account, email can go from Exchange to the outside world without directing through the existing mail server. External Domain: domain.com. Internal Domain: domain.local. Internet Domain Name Set in SBS Console: domain.net. When I go to http://remote.domain.net I get the Remote Web Workspace, and can log in to both SharePoint and OWA. I can send an email from OWA to a GMail account. I receive it from [email protected], which is an alias of [email protected]. I cannot, however, send an email from OWA to ANY domain.com email addresses. I am also not receiving any email to this Exchange account (except for NDRs). When I try sending an email to a domain.com account, here is the error (I had to replace all < and > with { and }):
        Delivery has failed to these recipients or distribution lists:
        [email protected]
        The recipient's e-mail address was not found in the recipient's e-mail system. Microsoft Exchange
        will not try to redeliver this message for you. Please check the e-mail address and try resending
        this message, or provide the following diagnostic text to your system administrator.
        Generating server: IFEXCHANGE.domain.local
        [email protected]
        #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##
        Original message headers:
        Received: from IFEXCHANGE.domain.local ([fe80::4d34:abc5:f7fd:e51a]) by
         IFEXCHANGE.domain.local ([fe80::4d34:abc5:f7fd:e51a%10]) with mapi; Tue, 17 Aug 2010 14:14:14 -0400
        Content-Type: application/ms-tnef; name="winmail.dat"
        Content-Transfer-Encoding: binary
        From: John Doe {[email protected]}
        To: "[email protected]" {[email protected]}
        Date: Tue, 17 Aug 2010 14:14:12 -0400
        Subject: asdf
        Thread-Topic: asdf
        Thread-Index: AQHLPjf+h6hA5MJ1JUu1WS4I4CiWeA==
        Message-ID: {E4E10393768D784D8760A51938BA456A029934BA30@IFEXCHANGE.domain.local}
        Accept-Language: en-US
        Content-Language: en-US
        X-MS-Has-Attach:
        X-MS-TNEF-Correlator: {E4E10393768D784D8760A51938BA456A029934BA30@IFEXCHANGE.domain.local}
        MIME-Version: 1.0
    I hope I explained the situation well enough for someone to be able to explain to me what I'm missing. If I could, I'd be putting a 10K bounty on this, but unfortunately I've got only 74 reputation (hey, I'm a newbie here!). I'm pretty sure the obvious "RecipNotFound" error is why it's not working; my question is how to resolve this. The email account exists and it receives mail just fine, yet when I send to it from the Exchange server it fails.
    EDIT: In OC-Hub Transport, the Email Address Policies list has 2 entries. "Windows SBS Email Address Policy" is set up to: Include All Recipient Types, no conditions, and SMTP %[email protected]. "Default Policy" is set to: Include All Recipient Types, no conditions, and SMTP @domain.net. Three Authoritative accepted domains: domain.com, domain.local (Default), domain.net. The Remote Domains tab has two entries: Default (domain: *) and Windows SBS Company Web Domain (domain: companyweb).
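    A sketch of where the RecipNotFound NDR usually comes from in this setup (Exchange Management Shell; not a prescription): because domain.com is listed as an Authoritative accepted domain, Exchange expects every @domain.com recipient to exist locally and NDRs the rest. If mail for that domain should flow back out to the existing mail server, switching it to Internal Relay (plus a Send connector routing domain.com to that server) is the usual shape of the fix.
        # Exchange 2007 Management Shell
        Get-AcceptedDomain
        Set-AcceptedDomain -Identity "domain.com" -DomainType InternalRelay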

    Read the article

  • Force SSRS 2008 to use SSRS 2005 CSV rendering

    - by Kash
    We are upgrading our report server from SSRS 2005 to SSRS 2008 R2. I have an issue with CSV export rendering for SSRS 2008 where the SUM of columns are appearing on the right side of the detail values in 2008 instead of the left side like in 2005 as shown in the below blocks. 117 and 131 are the sums of Column2 and Column3 respectively. SSRS 2005 CSV Output Column2_1,Column3_1,Column2,Column3 117,131,1,2 117,131,1,2 117,131,60,23 117,131,30,15 117,131,25,89 SSRS 2008 CSV Output Column2,Column3,Column2_1,Column3_1 1,2,117,131 1,2,117,131 60,23,117,131 30,15,117,131 25,89,117,131 I understand that the CSV renderer has gone through major changes in SSRS 2008 R2 with the support for charts and gauges and more importantly it provides 2 modes: the default Excel mode and Compliant mode. But neither mode helps fix this issue. The Compliant mode was supposed to be closest to that of 2005 but apparently it is not close enough for my case. My Question: Is there a way to force SSRS 2008 fall back a report to a backward compatibility mode so that it exports into a 2005 CSV format? Solution tried: a) Using 2005-based CRIs Based on this article on ExecutionLog2, if SSRS 2008 R2 encounters a report whose auto-upgrade is not possible (e.g. reports that were built with 2005-based CustomReportItem controls), those particular reports will be processed with the old Yukon engine in a "transparent backwards-compatibility mode". It seems like it falls back to its previous version mode (2005) and attempts to render it. So I tried using a 2005-based barcode CustomReportItem and deployed to a SSRS 2008 R2 report server, but it shows the same result as before though it suppressed the barcode. This would be because SSRS 2008 R2 finds a way to suppress part of the report output and displays the rest. It would be great to find a 2005-based CRI that makes SSRS 2008 R2 process it with its old Yukon engine. Please note that quite possibly, even if it uses the "old Yukon processing engine", it might still use the new CSV renderer hence it shows the same output. If that is true, then this option is moot. b) Using XML renderer We can use a custom XML renderer and then use XSLT to convert the xml to appropriate CSV but this would mean that we need to convert all our 200 reports. Hence this is not feasible. Please note that we do not have the option of having SSRS 2005 and SSRS 2008 R2 deployed side by side.
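    For reference, the two modes are switched through the CSV renderer's DeviceInfo settings, which can be overridden per rendering extension in rsreportserver.config. A sketch is below, with element names as documented for the 2008 R2 CSV renderer (verify against your instance before relying on it); it only toggles compliant mode server-wide and does not by itself restore the 2005 column order.
        <!-- inside the existing <Extension Name="CSV" ...> entry of the <Render> section -->
        <Configuration>
          <DeviceInfo>
            <ExcelMode>False</ExcelMode>
          </DeviceInfo>
        </Configuration>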

    Read the article

  • Application Architecture using WCF and System.AddIn

    - by Silverhalide
    A little background -- we're designing an application that uses a client/server architecture consisting of: A server which loads server-side modules, potentially developed by other teams. A client which loads corresponding client-side modules (also potentially developed by those other teams; each client module corresponds with a server module). The client side communicates with the server side for general coordination as well as module-specific tasks. (At this point, I think that means client talks to server, client modules talk to server modules.) Environment is .NET 3.5, and the client side is WPF. The deployment scenario introduces the potential to upgrade the server, any server-side module, the client, and any client-side module independently. However, being able to "work" using mismatched versions is required. I'm therefore concerned about versioning issues. My thinking so far: A Windows Service for the server. Using System.AddIn for the server to load and communicate with the server modules will give us the greatest flexibility in terms of version compatibility between server and server modules. The server and each server module vend WCF services for communication to the client side; communication between the server and a server module, or between two server modules, uses the AddIn contracts. (One advantage of this is that a module can expose a different interface within the server and outside it.) Similarly, the client uses System.AddIn to find, load, and communicate with the client modules. Client communication with client modules is via the AddIn interface; communications from the client and from client modules to the server side are via WCF. For maximum resilience, each module will run in a separate app-domain. In general, the system has modest performance requirements, so marshalling and crossing process boundaries is not expected to be a performance concern. (The performance requirement is basically summed up by: don't get in the way of the other parts of the system not described here.) My questions are around the idea of having two different communication and versioning models to work with, which will be an added burden on our developers. System.AddIn seems quite powerful, but also a little unwieldy. (I'm also unsure of Microsoft's commitment to it in the future.) On the other hand, I'm not thrilled with WCF's versioning capabilities. I have a feeling that it would be possible to implement the System.AddIn view/adapter/contract system within WCF, but being fairly new to both technologies, I would have no idea where to start. So... Am I on the right track here? Am I doing this the hard way? Are there gotchas I need to be aware of on this road? Thanks.
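    On the WCF half of the versioning story, the usual shape is round-trippable, loosely versioned data contracts; here is a minimal sketch (the type, members, and namespace are illustrative, not from the original design):
        using System.Runtime.Serialization;

        [DataContract(Namespace = "http://mycompany.com/modules/2010/01")]
        public class ModuleRequest : IExtensibleDataObject
        {
            [DataMember(IsRequired = true)]
            public string Command { get; set; }

            [DataMember(IsRequired = false)]   // added in a later version; older peers just omit it
            public int? Priority { get; set; }

            // Carries members this version doesn't know about through a round trip
            // instead of silently dropping them.
            public ExtensionDataObject ExtensionData { get; set; }
        }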

    Read the article

  • MongoMapper and migrations

    - by Clint Miller
    I'm building a Rails application using MongoDB as the back-end and MongoMapper as the ORM tool. Suppose in version 1, I define the following model: class SomeModel include MongoMapper::Document key :some_key, String end Later in version 2, I realize that I need a new required key on the model. So, in version 2, SomeModel now looks like this: class SomeModel include MongoMapper::Document key :some_key, String key :some_new_key, String, :required => true end How do I migrate all my existing data to include some_new_key? Assume that I know how to set a reasonable default value for all the existing documents. Taking this a step further, suppose that in version 3, I realize that I really don't need some_key at all. So, now the model looks like this class SomeModel include MongoMapper::Document key :some_new_key, String, :required => true end But all the existing records in my database have values set for some_key, and it's just wasting space at this point. How do I reclaim that space? With ActiveRecord, I would have just created migrations to add the initial values of some_new_key (in the version1 - version2 migration) and to delete the values for some_key (in the version2 - version3 migration). What's the appropriate way to do this with MongoDB/MongoMapper? It seems to me that some method of tracking which migrations have been run is still necessary. Does such a thing exist? EDITED: I think people are missing the point of my question. There are times where you want to be able to run a script on a database to change or restructure the data in it. I gave two examples above, one where a new required key was added and one where a key can be removed and space can be reclaimed. How do you manage running these scripts? ActiveRecord migrations give you an easy way to run these scripts and to determine what scripts have already been run and what scripts have not been run. I can obviously write a Mongo script that does any update on the database, but what I'm looking for is a framework like migrations that lets me track which upgrade scripts have already been run.
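    In the absence of a built-in migration framework, such scripts are usually written against the raw collection and tracked by hand (e.g. one rake task per change). A sketch of what the two changes above might look like, using the Mongo Ruby driver API of that era exposed through MongoMapper's .collection:
        # v1 -> v2: backfill the new required key where it is missing
        SomeModel.collection.update(
          { 'some_new_key' => { '$exists' => false } },
          { '$set'   => { 'some_new_key' => 'sensible default' } },
          :multi => true
        )

        # v2 -> v3: remove the obsolete key from every document
        SomeModel.collection.update(
          {},
          { '$unset' => { 'some_key' => 1 } },
          :multi => true
        )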

    Read the article

  • Android Marketplace Error: "The server could not process your apk. Try again."

    - by jdandrea
    I have an updated apk - tested successfully on various devices and simulator instances - with the following manifest: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.myCompany.appName" android:versionCode="2" android:versionName="1.0.1"> <uses-sdk android:minSdkVersion="3" android:targetSdkVersion="5" /> <uses-permission android:name="android.permission.INTERNET" /> <supports-screens android:largeScreens="true" android:normalScreens="true" android:smallScreens="true" /> <application android:icon="@drawable/icon" android:label="@string/icon_name" android:debuggable="false"> <activity android:name=".myActivity" android:configChanges="keyboardHidden|orientation"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> When I post to Android Marketplace as an upgrade to my existing 1.0 app, I get the aforementioned ambiguous message: "The server could not process your apk. Try again." I've searched elsewhere for this message in hopes of finding out what might be happening, to no avail. (A popular suggestion is to move the uses-sdk element to the top of the manifest, but as you can see it's already at the top.) Clues welcome/appreciated. Update: I just tried to upload the same file again. Now I get a new message: The new apk's versionCode (2) in AndroidManifest.xml must be higher than the old apk's versionCode (2). The server could not process your apk. Try again. Soooo Marketplace did get my upgraded apk after all? (The very first accepted apk's versionCode was 1, so this update was of course bumped to 2.) Confused … Bumping it up to 3 and trying again. Surprise surprise, I get the original "could not process" error all over again. Going in circles. Hmm ... :( Nuther Update: If I exit and re-enter the Marketplace page, now it shows that the app has been uploaded! Except there's no app icon. Curiouser and curiouser ... and this is all happening with a cache-cleared (standards-friendly) browser to boot. So - do I trust the upload? Or start over ... with versionCode="4"? All I want is to get a solid "Upload successful, here's the icon, ready to publish" type of response.
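    One sanity check worth doing before each upload (the apk path is illustrative): aapt, which ships with the SDK, reports the versionCode actually baked into the apk, so you can confirm what the Market is really seeing before bumping the number again.
        $ aapt dump badging bin/appName-release.apk | head -1
        package: name='com.myCompany.appName' versionCode='3' versionName='1.0.1'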

    Read the article
