Search Results

Search found 19125 results on 765 pages for 'hard disk'.

Page 147/765 | < Previous Page | 143 144 145 146 147 148 149 150 151 152 153 154  | Next Page >

  • Error MSB4019: The imported project "C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk

    - by Tim Huffam
    This error occurred on our TFS 2008 build server, which we had upgraded to handle VS2010 projects by installing VS2010 on the build server (see this article). Error MSB4019: The imported project "C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk. However, although we had installed VS2010 on the build server, we had not installed the web development components (Visual Web Developer), and this is what caused the error. To fix it, simply add the web development components:
    1. Go into Control Panel - Add or Remove Programs.
    2. Select Microsoft Visual Studio 2010 and click Change/Remove.
    3. In the VS Maintenance Mode screens, select Add or Remove Features.
    4. On the Setup - Options page, make sure 'Visual Web Developer' is checked.
    5. Click Update.
    You shouldn't need to restart your build service. HTH Tim
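
    If installing Visual Web Developer on the build server is not an option, a workaround some people use is to copy the WebApplications targets folder from a machine that already has VS2010 with the web components installed. This is only a sketch; the source share and both paths are illustrative and may differ on your machines:

        rem Copy the missing targets folder from a developer machine to the build server
        xcopy /E /I "\\devmachine\c$\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications" ^
              "C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications"

    Installing the web development components as described above is the cleaner fix, since a copied targets folder will not be updated by Visual Studio servicing.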

    Read the article

  • Installing Ubuntu 12.04.1 x64 with Fake RAID 1 [SOLVED]

    - by Arkadius
    I had:
    Software: dual boot with Windows XP and Ubuntu 10.04 LTS x32.
    Hardware: Fake RAID 1 (mirroring) with 2x1 TB:
    Partition 1 - Windows
    Partition 2 - SWAP
    Partition 3 - / (root)
    Partition 4 - Extended
    Partition 5 - /home
    Partition 6 - /data

        arek@domek:/var/log/installer$ sudo fdisk -l

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de1b9

        Device Boot      Start         End      Blocks  Id  System
        /dev/sda1   *       63   524297339  262148638+   7  HPFS/NTFS/exFAT
        /dev/sda2    524297340   528506369     2104515  82  Linux swap / Solaris
        /dev/sda3    528506370   570468149    20980890  83  Linux
        /dev/sda4    570468150  1953118439   691325145   5  Extended
        /dev/sda5    570468213   675340469    52436128+  83  Linux
        /dev/sda6    675340533  1953118439   638888953+  83  Linux

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de1b9

        Device Boot      Start         End      Blocks  Id  System
        /dev/sdb1   *       63   524297339  262148638+   7  HPFS/NTFS/exFAT
        /dev/sdb2    524297340   528506369     2104515  82  Linux swap / Solaris
        /dev/sdb3    528506370   570468149    20980890  83  Linux
        /dev/sdb4    570468150  1953118439   691325145   5  Extended
        /dev/sdb5    570468213   675340469    52436128+  83  Linux
        /dev/sdb6    675340533  1953118439   638888953+  83  Linux

        arek@domek:/var/log/installer$ ls -l /dev/mapper/
        total 0
        crw------- 1 root root 10, 236 Oct 7 20:17 control
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha -> ../dm-0
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha1 -> ../dm-1
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha2 -> ../dm-2
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha3 -> ../dm-3
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha4 -> ../dm-4
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha5 -> ../dm-5
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha6 -> ../dm-6

    I wanted to upgrade from 10.04 x32 to 12.04 x64 using a FRESH installation, so I ran the installation of Ubuntu 12.04.1 x64 LTS from the alternate CD. During the installation I selected manual partitioning and chose to:
    - Use and format / (root)
    - Use and format SWAP
    - Use and keep data on /home
    - Use and keep data on /data
    After I clicked "Continue" I got an error creating and formatting the SWAP partition. I went to a terminal with Alt+F2 (?) and hit Enter. I discovered that the RAID was visible only as a disk with NO partitions, something like this:

        arek@domek:/var/log/installer$ ls -l /dev/mapper/
        lrwxrwxrwx 1 root root 7 Oct 7 20:17 /dev/mapper/pdc_jhjbcaha -> ../dm-0
        arek@domek:/var/log/installer$ ls -l /dev/dm*
        brw-rw---- 1 root disk 252, 0 Oct 7 20:17 /dev/dm-0

    So I switched to the log console with Alt+F3 (?) and saw errors like the ones below:

        Oct 7 14:02:45 check-missing-firmware: /dev/.udev/firmware-missing does not exist, skipping
        Oct 7 14:02:45 check-missing-firmware: /run/udev/firmware-missing does not exist, skipping
        Oct 7 14:02:45 check-missing-firmware: no missing firmware in /dev/.udev/firmware-missing /run/udev/firmware-missing
        Oct 7 14:02:45 anna-install: Installing dmraid-udeb
        Oct 7 14:02:45 anna[12599]: DEBUG: retrieving dmraid-udeb 1.0.0.rc16-4.1ubuntu8
        Oct 7 14:02:49 anna[12599]: DEBUG: retrieving libdmraid1.0.0.rc16-udeb 1.0.0.rc16-4.1ubuntu8
        Oct 7 14:02:49 anna[12599]: DEBUG: retrieving kpartx-udeb 0.4.9-3ubuntu5
        Oct 7 14:02:49 disk-detect: Serial ATA RAID disk(s) detected.
        Oct 7 14:02:55 disk-detect: Enabling dmraid support.
        Oct 7 14:02:55 disk-detect: RAID set "pdc_jhjbcaha" was activated
        Oct 7 14:02:55 HERE --> dmraid-activate: ERROR: Cannot retrieve RAID set information for pdc_jhjbcaha
        Oct 7 14:02:56 check-missing-firmware: /dev/.udev/firmware-missing does not exist, skipping
        Oct 7 14:02:56 check-missing-firmware: /run/udev/firmware-missing does not exist, skipping
        Oct 7 14:02:56 check-missing-firmware: no missing firmware in /dev/.udev/firmware-missing /run/udev/firmware-missing
        Oct 7 14:02:57 main-menu[428]: DEBUG: resolver (libnewt0.52): package doesn't exist (ignored)
        Oct 7 14:02:57 main-menu[428]: DEBUG: resolver (ext2-modules): package doesn't exist (ignored)
        Oct 7 14:02:57 main-menu[428]: INFO: Menu item 'partman-base' selected
        Oct 7 14:02:57 kernel: [ 316.512999] NTFS driver 2.1.30 [Flags: R/O MODULE].
        Oct 7 14:02:57 kernel: [ 316.523221] Btrfs loaded
        Oct 7 14:02:57 kernel: [ 316.534781] JFS: nTxBlock = 8192, nTxLock = 65536
        Oct 7 14:02:57 kernel: [ 316.554749] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
        Oct 7 14:02:57 kernel: [ 316.555336] SGI XFS Quota Management subsystem
        Oct 7 14:02:58 md-devices: mdadm: No arrays found in config file or automatically
        Oct 7 14:02:58 partman: No matching physical volumes found
        Oct 7 14:02:58 partman: No volume groups found
        Oct 7 14:02:58 partman: Reading all physical volumes. This may take a while...
        Oct 7 14:02:58 partman-lvm: No volume groups found
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:06:11 HERE --> partman: mkswap: can't open '/dev/mapper/pdc_jhjbcaha2': No such file or directory
        Oct 7 14:07:28 init: starting pid 401, tty '/dev/tty2': '-/bin/sh'
        Oct 7 14:15:00 net/hw-detect.hotplug: Detected hotpluggable network interface eth0
        Oct 7 14:15:00 net/hw-detect.hotplug: Detected hotpluggable network interface lo

    As you can see, there are two errors:

        Oct 7 14:02:55 dmraid-activate: ERROR: Cannot retrieve RAID set information for pdc_jhjbcaha

    and

        Oct 7 14:06:11 partman: mkswap: can't open '/dev/mapper/pdc_jhjbcaha2': No such file or directory

    I looked on the internet and tried running the command "dmraid -ay", and got something like this:

        dmraid -ay
        /dev/mapper/pdc_jhjbcaha -> Already activated
        /dev/mapper/pdc_jhjbcaha1 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha2 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha3 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha4 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha5 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha6 -> Successfully activated

    Then I returned to the installer with Alt+F1 (?) and clicked "Return" to go back to the partitioning menu. I did NOT change anything, just selected "Continue" again, and everything went smoothly. I hope this will help someone. arkadius
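
    In short: if the Ubuntu alternate installer fails to format partitions on a fake RAID (dmraid) set, the per-partition device-mapper nodes may simply not have been activated yet. A minimal recap of the recovery steps described above, as a sketch (the exact console key combinations may vary between installer versions):

        # switch to a shell console (e.g. Alt+F2), then:
        dmraid -ay            # activate the dmraid set and its partitions
        ls -l /dev/mapper/    # pdc_...1, pdc_...2, ... should now be present
        # switch back to the installer (Alt+F1) and retry the partitioning step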

    Read the article

  • How do I take an image/backup of Ubuntu partition and restore to VirtualBox VM

    - by whizkid
    I have Ubuntu 10.04 installed on an older hard disk. I recently bought a new disk and have already installed Windows 7 on it. I don't want to use the older disk anymore, and I would like to keep using Ubuntu in a virtual machine on the new disk (to avoid the possible mess-ups of dual boot; I found VirtualBox is the best free tool for this). I wish to keep the exact same data, programs, configurations and settings I had been using in Ubuntu for so long, and avoid the tedious part of having to reconfigure so many things. How do I back up and restore Ubuntu to another disk? I would prefer a free tool to do the backup/restore.
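
    One common approach (not the only one) is to take a raw image of the old disk and convert it into a VirtualBox disk image. A minimal sketch, assuming the old disk shows up as /dev/sdb and there is enough free space for the image; device names and paths are illustrative:

        # from a live CD/USB with the old disk attached
        sudo dd if=/dev/sdb of=ubuntu-old-disk.img bs=4M conv=noerror,sync

        # convert the raw image into a VDI that VirtualBox can use
        VBoxManage convertfromraw ubuntu-old-disk.img ubuntu-old-disk.vdi --format VDI

    Attach the resulting .vdi to a new VM; you may still need to reinstall GRUB or adjust /etc/fstab inside the guest if the disk layout changes.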

    Read the article

  • How to use more than 3 virtual disks in Linux using CentOS and XenServer

    - by 010110110101
    I've attached 5 virtual disks to a Virtual Machine in Citrix XenServer. The VM has the xs-tools installed. Initially it said that it couldn't add so many disks; after I installed the xs-tools, it let me add all the disks. But /dev doesn't show all the disks. It shows these:

        /dev/xvda
        /dev/xvdb
        /dev/xvdc
        /dev/cdrom

    Perhaps it is bound by the limits of an IDE bus? (3 disks + CD-ROM) If so, how does one change the VM to use SCSI?
    Edit: According to the documentation (2.6.3. VM Block Devices): In the PV Linux case, block devices are passed through as PV devices. XenServer does not attempt to emulate SCSI or IDE, but instead provides a more suitable interface in the virtual environment in the form of xvd* devices. It is also possible to get an sd* device using the same mechanism, where the PV driver inside the VM takes over the SCSI device namespace. This is not desirable so it is best to use xvd* where possible for PV guests (this is the default for Debian and RHEL). For Windows or other fully virtualized guests, XenServer emulates an IDE bus in the form of an hd* device. When using Windows, installing the Citrix Tools for Virtual Machines installs a special PV driver that works in a similar way to Linux, except in the fully virtualized environment.
    Still, with 5 virtual disks attached, I don't see the other xvd devices.
    Edit #2: (attached requested info)
    Host Machine: XenServer 6.1
    Linux version 2.6.32.43-0.4.1.xs1.6.10.777.170770xen (geeko@buildhost) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-51)) #1 SMP Wed Apr 17 05:52:03 EDT 2013
    Guest Machine: CentOS release 6.4 (Final)
    Linux version 2.6.32-358.6.2.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 20:59:36 UTC 2013
    Output of 'fdisk -l' on the Guest Machine. Note: the disks beyond the first 3 attached are not displaying; there should be 4 100GB disks. (There are a total of 5 disks displayed in XenCenter -- 16GB, 100GB, 100GB, 100GB, 100GB.)

        Disk /dev/xvdb: 107.4 GB, 107374182400 bytes
        255 heads, 63 sectors/track, 13054 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xfb6c95b9

        Device Boot  Start    End     Blocks      Id  System
        /dev/xvdb1       1  13054  104856223+     83  Linux

        Disk /dev/xvda: 17.2 GB, 17179869184 bytes
        255 heads, 63 sectors/track, 2088 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e5f41

        Device Boot  Start    End     Blocks      Id  System
        /dev/xvda1   *   1     64     512000      83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/xvda2      64   2089   16264192      8e  Linux LVM

        Disk /dev/xvdc: 107.4 GB, 107374182400 bytes
        255 heads, 63 sectors/track, 13054 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xed249ced

        Device Boot  Start    End     Blocks      Id  System
        /dev/xvdc1       1  13054  104856223+     83  Linux

        Disk /dev/mapper/vg_blue-lv_root: 14.6 GB, 14571012096 bytes
        255 heads, 63 sectors/track, 1771 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/mapper/vg_blue-lv_swap: 2080 MB, 2080374784 bytes
        255 heads, 63 sectors/track, 252 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

    I see that the Linux versions say SMP. The Guest VM doesn't say "xen" in the name. However, I have already run yum install kernel-xen. Could that be a clue?
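
    A quick way to narrow this down is to check what the paravirtualised block front-end actually sees inside the guest, and what the host thinks is attached. A small sketch; the VM name is an assumption to replace with your own:

        # inside the CentOS guest: is the PV block driver loaded, and which xvd* nodes exist?
        lsmod | grep xen_blkfront
        ls /sys/class/block/ | grep xvd

        # on the XenServer host: list the virtual block devices attached to the VM
        xe vbd-list vm-name-label=my-centos-vm params=device,currently-attached

    If the host shows all five VBDs as attached but the guest only creates three xvd* nodes, the guest kernel (rather than any IDE limit) is the place to look; per the documentation quoted above, XenServer PV disks are not on an emulated IDE bus at all.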

    Read the article

  • Removing mdadm array and converting to regular disks while preserving data

    - by Jeffrey Kevin Pry
    I have a 6 disk (2TB each) mdadm RAID 5 volume created in Ubuntu 12.04 Server. However, I'm moving to a different solution and want to "unraid" my disks but keep the data. Only 50% is in use. From what I can surmise, I basically have to do this repeatedly for each physical disk:
    1. Fail the disk.
    2. Format the failed disk.
    3. Move a portion of files to the new disk.
    4. Reshape the array.
    5. Shrink the logical volume md0.
    This seems like a very time consuming process. Is there an easier way to do this (automatically perhaps) without buying new disks to temporarily hold the data? I am also aware that during this process my RAID volume will be degraded and vulnerable the entire time. I am not too concerned about this and will be using battery backup and moving the most important files off first. Thank you for your help!
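
    For reference, one iteration of that evacuate-and-shrink cycle might look roughly like the outline below. This is only a sketch following the steps listed above, under the assumption of an ext4 filesystem directly on /dev/md0 (no LVM); device names, sizes and the --backup-file path are placeholders, and every step can destroy data if the numbers are wrong, so treat it as a starting point rather than a recipe:

        # 1. remove one member from the array (the array runs degraded from here on)
        mdadm /dev/md0 --fail /dev/sdf1
        mdadm /dev/md0 --remove /dev/sdf1

        # 2. give the freed disk its own filesystem and copy a portion of the data off
        mkfs.ext4 /dev/sdf1
        mount /dev/sdf1 /mnt/evacuated
        rsync -a /srv/raid/some-subset/ /mnt/evacuated/

        # 3. shrink the filesystem, then the array size, then the member count
        resize2fs /dev/md0 <new-fs-size>
        mdadm --grow /dev/md0 --array-size <new-array-size>   # see man mdadm for accepted units
        mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-reshape.bak

    Each reshape pass can take many hours on 2 TB disks, which is why copying everything to new disks (or restoring from backup onto the reconfigured disks) is usually the saner route.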

    Read the article

  • SEI Turns Software Architecture into a Game

    - by Bob Rhubart-Oracle
    "Architecture is the decisions that you wish you could get right early in a project." -- Ralph E. Johnson Unless you can see into the future, getting those decisions right comes down to a collection of hard choices. But the Software Engineering Institute (SEI) of Carnegie Mellon University has turned those hard choices into a game. Literally. According to the SEI website: The Hard Choices game is a simulation of the software development cycle meant to communicate the concepts of uncertainty, risk, options, and technical debt. In the quest to become market leader, players race to release a quality product to the marketplace. By the end of the game, everyone has experienced the implications of investing effort to gain an advantage or of paying a price to take shortcuts, as they employ design strategies in the face of uncertainty.   Check it out for yourself: Download the Hard Choices Board Game Download the companion white paper: The Hard Choices Game Explained

    Read the article

  • Why don't I have the option "Install Ubuntu alongside them"?

    - by almqgh
    Why don't I have this option? Here are my disks:

        sudo fdisk -l

        Disk /dev/sda: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x5b53cc54

        Device Boot       Start         End     Blocks  Id  System
        /dev/sda1   *      2048      409599     203776   7  HPFS/NTFS/exFAT
        /dev/sda2        409600  1153767021  576678711   7  HPFS/NTFS/exFAT
        /dev/sda3    1216962560  1250050047   16543744   7  HPFS/NTFS/exFAT
        /dev/sda4    1250050048  1250261679     105816   c  W95 FAT32 (LBA)

        Disk /dev/sdb: 4005 MB, 4005527552 bytes
        32 heads, 63 sectors/track, 3880 cylinders, total 7823296 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x20d8782d

        Device Boot       Start         End     Blocks  Id  System
        /dev/sdb1   *        63     7822079   3911008+   c  W95 FAT32 (LBA)

    Read the article

  • How do you deal with the details when reading code?

    - by upton
    After reading some projects, I find that it is not the architecture of the software that is really hard to understand. If the project is clearly designed and implemented, it is not hard to figure out the architecture fairly quickly, and even when it is hard and unlike anything I have seen before, some days later I can usually recognise a pattern similar to one I have read about in the same domain. The difficulty is that the concepts and mechanisms defined by the author are really hard to guess, and these concepts may be spread across the whole project, which makes them hard to grasp. The situation is normal and universal, and in a company you can ask your colleagues questions. However, it gets worse if nobody around you knows these details. How do you handle these details that block your reading?

    Read the article

  • Grub not showing installed kernels

    - by Markus
    Although I have several kernel versions in /boot and have them in my grub.cfg, they are not displayed in the GRUB boot menu. Running update-grub seems to work, as it puts the kernels in the grub.cfg in /boot/grub. Running it gives the following output:

        Generating grub.cfg ...
        cat: /boot/grub/video.lst: Datei oder Verzeichnis nicht gefunden
        Found linux image: /boot/vmlinuz-3.2.0-31-generic
        Found initrd image: /boot/initrd.img-3.2.0-31-generic
        /usr/sbin/grub-probe: Fehler: no such disk.
        /usr/sbin/grub-probe: Fehler: no such disk.
        Found linux image: /boot/vmlinuz-3.2.0-30-generic
        Found initrd image: /boot/initrd.img-3.2.0-30-generic
        /usr/sbin/grub-probe: Fehler: no such disk.
        /usr/sbin/grub-probe: Fehler: no such disk.
        /usr/sbin/grub-probe: Fehler: no such disk.
        done

    (The German messages translate to "file or directory not found" and "error".) I don't know how to fix that problem. Reinstalling GRUB via live CD did not help.
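
    The "no such disk" errors from grub-probe are often caused by a stale device map, or by GRUB scripts referring to a drive that no longer exists (for example after disks were added, removed or re-ordered). A minimal, hedged sequence to try on a GRUB 2 setup like the one above (back files up before deleting anything):

        # regenerate the device map and the menu
        sudo mv /boot/grub/device.map /boot/grub/device.map.bak   # if the file exists
        sudo grub-mkdevicemap
        sudo update-grub

        # if entries still do not appear, reinstall GRUB to the boot disk
        sudo grub-install /dev/sda

    If grub-probe still reports "no such disk", checking /etc/default/grub and any custom scripts in /etc/grub.d/ for hard-coded device names is a reasonable next step.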

    Read the article

  • What is the best freeware app for Windows to mount EXT4 partitions (from a GPT 4TB Disk) as RW, safely without corrupting the EXT4 partition?

    - by Bran
    My computer is set up to dual-boot Windows and Ubuntu. I have one OS drive and one hard drive with a /backup partition (which has all my family photos and data), and it is ext4. Also note: it is EXT4, it is GPT, and it is 4TB in size. Anyway, Windows cannot mount the /backup ext4 partition. What software/freeware do you recommend for Windows? I'm preferably looking for someone with experience using it for a long time without problems on an EXT4 partition on a 4TB, GPT disk. Thank you for your advice and guidance! I always appreciate everyone's help at askubuntu, you guys are the best. Any ideas?

    Read the article

  • Can I create two databases, each for different directories?

    - by Tim
    I have run Recoll, which created a database for the data partition on my internal hard drive. The database is stored under my home partition on the same internal hard drive. I now want to run Recoll to create a database for a directory on my external hard drive, and store this new database on that external hard drive, because my internal hard drive doesn't have enough space to hold the new database. I was wondering how to do that in Recoll? Note: my current Recoll was installed from the software center of Ubuntu 12.04. Thanks!
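
    Recoll supports multiple independent configurations, each with its own index, selected with the -c option. A minimal sketch, assuming the external drive is mounted at /media/external (all paths are illustrative):

        # create a second configuration directory and a place for its index
        mkdir -p /media/external/recoll-conf /media/external/recoll-db

        # in /media/external/recoll-conf/recoll.conf set, for example:
        #   topdirs = /media/external/photos
        #   dbdir = /media/external/recoll-db

        # build the second index
        recollindex -c /media/external/recoll-conf

        # query it with the GUI
        recoll -c /media/external/recoll-conf

    The default index in ~/.recoll is untouched, so you end up with two databases, one per directory tree.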

    Read the article

  • Why did my flash drive become "read only" and (how) can I fix it?

    - by Bob
    I have a brand new flash drive (one week old) that has become marked as read only, by Windows, Kubuntu and a bootable partitioner. Why did this happen? Is it fixable? If it is, how can I fix this?

    The problem
    Firstly, this drive is new. It's certainly not been used enough to die from normal wear and tear, though I would not discount defective components. The drive itself has somehow become locked in a read only state.
    Windows' Disk management / Diskpart:

        Generic Flash Disk USB Device
        Disk ID: 33FA33FA
        Type : USB
        Status : Online
        Path : 0
        Target : 0
        LUN ID : 0
        Location Path : UNAVAILABLE
        Current Read-only State : Yes
        Read-only : No
        Boot Disk : No
        Pagefile Disk : No
        Hibernation File Disk : No
        Crashdump Disk : No
        Clustered Disk : No

    What really confuses me is Current Read-only State : Yes and Read-only : No.

    Attempted solutions
    So far, I've tried:
    - Formatting it in Windows (in Disk management, the format options are greyed out when right clicking).
    - DiskPart Clean (CLEAN - Clear the configuration information, or all information, off the disk):

        DISKPART> clean
        DiskPart has encountered an error: The media is write protected.
        See the System Event Log for more information.

      There was nothing in the event log.
    - Windows command line format:

        >format G:
        Insert new disk for drive G:
        and press ENTER when ready...
        The type of the file system is FAT32.
        Verifying 7740M
        Cannot format. This volume is write protected.

    - Windows chkdsk: see below for details.
    - Kubuntu fsck (through VirtualBox USB passthrough): see below for details.
    - Acronis True Image to format, to convert to GPT, to destroy and rebuild the MBR, basically anything: failed (could not write to MBR).

    Details (and a nice story)

    Background
    This was a brand new, generic, 8GB flash drive I wanted to create a multiboot flash drive with. It came formatted as FAT32, though oddly a little larger than most 8 GIGAbyte flash drives I've come across. Approximately 127MB was listed as "used" by Windows; I never discovered why. The end usable space was about what I normally expect from an 8GB drive (approx 7.4 GIBIbytes). I had thrown quite a few Linux distros on, along with a copy of Hiren's. They would all boot perfectly. They were put on with YUMI. When I tried to put the Knoppix DVD on, YUMI added an odd video option to its boot command which caused Knoppix to boot with a black screen on X. ttys 1 through 6 still worked as text only interfaces. A few days later, I took some time to take that odd video option off, making the boot command match the one that comes with Knoppix. On the attempt to boot, Knoppix reported some form of LZMA corruption.

    Leading up to the current issue
    I was thinking the Knoppix files may have been corrupted somehow, so I tried reloading them. The drive was nearly full (45MB free), so I deleted a generic ISO that also was not booting. That went fine. I then went through YUMI to 'uninstall' Knoppix, i.e. delete the files and remove it from the menus. The files went first, then the menus were cleared successfully. However, the free space was stuck at about 700MB, same as it was before removing Knoppix. In the old Knoppix folder, there was a 0 byte file named KNOPPIX that could not be deleted. I tried reinserting the drive to delete this file - without safely removing, if that made a difference (hey, first time for everything). Running the standard Windows chkdsk scan without /r or /f reported errors found. Running with /r just got it stuck.
    I decided to give fsck a shot, so I loaded up my Kubuntu VM and attached the drive to it with VirtualBox's USB 2.0 passthrough. I unmounted it (/dev/sda1) and ran fsck. There were differences between the boot sector and its backup; I chose "No action". It told me the FATs differ and asked me to select either the first or second FAT. Whichever I selected, I got a notice of "Free cluster summary wrong". If I chose Correct, it gave a list of incorrect file names. To try to fix something, at least, I ran it with the -p option. Halfway through fixing the files, the VM froze - I ended its process about ten minutes later.

    Cause?
    My next attempt was to use YUMI, again, to rebuild the whole drive. I used YUMI's built-in reformat (to FAT32) option and installed a Kubuntu ISO (700MB). The format was successful; however, the extract and copy of Kubuntu (which YUMI uses a 7zip binary for) froze at about 60% done. After waiting for about fifteen minutes (longer than the 3.5GB Knoppix ISO took last time), I pulled the drive out. The drive at this point was already formatted, SYSLINUX already installed, just waiting on the unpacking of an ISO and the modifying of the boot menus. Plugging it back in, it came up as normal - however, any write action would fail. Disk management reported it as read only. On reconnect, it would come up as normal, but a write operation would cause it to go read only again. After a few attempts, it started coming up as read only on insertion.

    Attempts to fix
    This is when I ran through the attempts listed above, to try and reformat it in case of a faulty format. However, the inability to do so even from a bootable disk indicated something more serious is wrong. chkdsk now reports nothing is wrong, and fsck still reports MBR inconsistencies, but now always chooses the first FAT automatically after telling me the FATs differ. It still reports the same "Free cluster summary wrong" afterwards. I cannot run with -p anymore because it is now marked as read only. It also managed to corrupt my VM's disk somehow on the first attempt (yes, I'm sure I chose sda, which is mapped to a 7.4GB drive - I triple checked). Thank god for snapshots? I'm just about out of ideas. To my inexperienced mind it looks like something in the drive's firmware set it to read only "permanently" somehow - is there any way to reset this? I don't particularly care about keeping data, considering I've reformatted it twice. Also, fixes that keep me in Windows are better; it reduces the risk of me accidentally nuking my main hard drive.

    Update 1: I pulled apart the drive out of curiosity. As you can see, there are no obvious write protect switches. There is an IC on the other side, ALCOR branded, labelled AU6989HL, if that matters. If there appears to be no way to fix this, I'll probably pull out the (glued down) card and put it in a card reader to check if it's the card or the controller that died.

    Update 2: I've pulled the card off; Windows detects the drive as a card reader now. The contacts on the card don't appear to be used, and there are several rows of holes on the card itself. Putting it into the card reader only detects about 30MB total, RAW. It's probably either the reader incorrectly reporting the card as faulty (as if a real SD card's write protect was switched on) or a bad contact somewhere. If nothing else, I have a spare 8GB Micro SD card now... as soon as I figure out how to format it as 8GB.
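
    For the "Current Read-only State : Yes / Read-only : No" combination specifically, the one quick, reversible thing worth ruling out is a software read-only flag, which diskpart can clear; if that does not help, the behaviour described above (a drive silently switching itself to read only) typically points at the controller putting the NAND into a write-protected failsafe mode, which at best only vendor low-level tools can reset. A hedged sketch of the software-flag check (the disk number is an example; pick yours from 'list disk'):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> attributes disk
        DISKPART> attributes disk clear readonly

    In this case diskpart already reports Read-only : No, so the flag is probably not the culprit, but it is the only cause that can be checked and reversed in a minute.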

    Read the article

  • How can I export images from MS SQL Server to a file on disk?

    - by rball
    I have a User table that has all of their avatars saved in an image field. I'd like to just take that out of the database and store it as a regular file on disk. I looked around and saw some code for textcopy, but that doesn't seem to be on my machine for some reason. Here is the code I wrote up anyway. Anyone know a way to get this done?

        DECLARE @exec_str varchar (255)
        SELECT @exec_str =
            'textcopy /S (local)\SQLEXPRESS' +
            --' /U ' + @login +
            --' /P ' + @password +
            ' /D thedatabase' +
            ' /T User' +
            ' /C AvatarImage' +
            ' /F "d:\Avatars\' + User.Name + '.jpg"' +
            ' /O'
        FROM [User]
        WHERE UserID = 2
        EXEC master..xp_cmdshell @exec_str
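
    If textcopy is not available, one commonly used alternative is bcp with a queryout, since bcp ships with SQL Server. This is only a sketch; the format-file step is the fiddly part (the image column has to be described with a prefix length of 0 so no extra bytes end up in the file), and the server name, database and paths are assumptions to adjust:

        rem export one avatar to a file, using the column description in avatar.fmt
        bcp "SELECT AvatarImage FROM thedatabase.dbo.[User] WHERE UserID = 2" queryout "d:\Avatars\username.jpg" -S "(local)\SQLEXPRESS" -T -f avatar.fmt

    Another route, if this has to run for many rows, is a small external script that reads the column over ADO.NET and writes the bytes out, which avoids xp_cmdshell entirely.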

    Read the article

  • Can I get Raid disk status by using PS?

    - by David.Chu.ca
    I have an HP server with RAID 5. Ports 0 and 1 are used for data and OS mirroring. The software that comes with the RAID 5 setup is Intel Matrix Storage Manager, and it includes a Windows-based manager console to view all the ports, including their status. Right now they are all in Normal status. I am not sure whether the OS/Windows has APIs or .NET classes to access the RAID ports and get their status. If so, how can I use PS to get the information? If not, do I have to reference the DLLs provided by the Intel Matrix Storage Manager? Basically, I would like to write a PS script to get the RAID status. In case any port's disk is not Normal, a message will be sent out via the Growl protocol.

    Read the article

  • How hard is it to modify the Django Models?

    - by alex
    I am doing geolocation, and Django does not have a PointField. So, I am forced to write raw SQL. GeoDjango, the Django library, does not support the following query for MySQL databases (can someone verify that for me?):

        cursor.execute("SELECT id FROM l_tag WHERE\
            (GLength(LineStringFromWKB(LineString(asbinary(utm),asbinary(PointFromWKB(point(%s, %s)))))) < %s + accuracy + %s)\

    I don't know why the GeoDjango library cannot do this on a MySQL database. I hate writing raw SQL for calculating distances between two points. Is there a way I can create my own library for Django that can handle this? If so, how hard is it?

    Read the article

  • How do I load the Oracle schema into memory instead of the hard drive?

    - by Andrew
    I have a certain web application that makes upwards of ~100 updates to an Oracle database in succession. This can take anywhere from 3-5 minutes, which sometimes causes the webpage to time out. A re-design of the application is scheduled soon but someone told me that there is a way to configure a "loader file" which loads the schema into memory and runs the transactions there instead of on the hard drive, supposedly improving speed by several orders of magnitude. I have tried to research this "loader file" but all I can find is information about the SQL* bulk data loader. Does anyone know what he's talking about? Is this really possible and is it a feasible quick fix or should I just wait until the application is re-designed?

    Read the article

  • How do I git reset --hard HEAD on Mercurial?

    - by obvio171
    I'm a Git user trying to use Mercurial. Here's what happened: I did an hg backout on a changeset I wanted to revert. That created a new head, so hg instructed me to merge (back to "default", I assume). After the merge, it told me I still had to commit. Then I noticed something I did wrong when resolving a conflict in the merge, and decided I wanted to have everything as it was before the hg backout; that is, I want this uncommitted merge to go away. In Git this uncommitted stuff would be in the index and I'd just do a git reset --hard HEAD to wipe it out but, from what I've read, the index doesn't exist in Mercurial. So how do I back out of this?
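
    The usual Mercurial counterpart of git reset --hard HEAD is a clean update to the current revision, which throws away all uncommitted changes, including an uncommitted merge. A short sketch (run hg status first if you want to see what would be discarded):

        hg update --clean .   # discard uncommitted changes and the pending merge
        hg status             # working copy should now be clean

    hg revert --all --no-backup is similar for file contents, but it does not clear a pending merge state, which is why update --clean is the closer match here.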

    Read the article

  • Is a class that is hard to unit test badly designed?

    - by Extrakun
    I am now doing unit testing on an application which was written over the past year, before I started to do unit testing diligently. I realized that the classes I wrote are hard to unit test, for the following reasons:
    - They rely on loading data from a database, which means I have to set up a row in the table just to run the unit test (and I am not testing database capabilities).
    - They require a lot of other external classes just to get the class I am testing into its initial state.
    On the whole, there doesn't seem to be anything wrong with the design except that it is too tightly coupled (which by itself is a bad thing). I figure that if I had written automated test cases alongside each class, thereby ensuring that I don't heap extra dependencies or coupling onto the class, the class might be better designed. Does this reasoning hold water? What are your experiences?

    Read the article

  • Algorithm to classify a list of products?

    - by Martin
    I have a list representing products which are more or less the same. For instance, in the list below, they are all Seagate hard drives:

        Seagate Hard Drive 500Go
        Seagate Hard Drive 120Go for laptop
        Seagate Barracuda 7200.12 ST3500418AS 500GB 7200 RPM SATA 3.0Gb/s Hard Drive
        New and shinny 500Go hard drive from Seagate
        Seagate Barracuda 7200.12
        Seagate FreeAgent Desk 500GB External Hard Drive Silver 7200RPM USB2.0 Retail

    For a human being, hard drives 3 and 5 are the same. We could go a little bit further and suppose that products 1, 3, 4 and 5 are the same, and put products 2 and 6 in other categories. We have a huge list of products that I would like to classify. Does anybody have an idea of what would be the best algorithm for such a thing? Any suggestions? I thought of a Bayesian classifier but I am not sure if it is the best choice. Any help would be appreciated! Thanks.

    Read the article

  • How can I export images from SQL Server to a file on disk?

    - by rball
    I have a User table that has all of their avatars saved in an image field. I'd like to just take that out of the database and store it as a regular file on disk. I looked around and saw some code for textcopy, but that doesn't seem to be on my machine for some reason. Here is the code I wrote up anyway. Anyone know a way to get this done?

        DECLARE @exec_str varchar (255)
        SELECT @exec_str =
            'textcopy /S (local)\SQLEXPRESS' +
            --' /U ' + @login +
            --' /P ' + @password +
            ' /D thedatabase' +
            ' /T User' +
            ' /C AvatarImage' +
            ' /F "d:\Avatars\' + User.Name + '.jpg"' +
            ' /O'
        FROM [User]
        WHERE UserID = 2
        EXEC master..xp_cmdshell @exec_str

    Read the article

  • How to sanely read and dump structs to disk when some fields are pointers?

    - by bp
    Hello, I'm writing a FUSE plugin in C. I'm keeping track of data structures in the filesystem through structs like:

        typedef struct {
            block_number_t inode;
            filename_t filename;           // char[SOME_SIZE]
            some_other_field_t other_field;
        } fs_directory_table_item_t;

    Obviously, I have to read (write) these structs from (to) disk at some point. I could treat the struct as a sequence of bytes and do something like this:

        read(disk_fd, directory_table_item, sizeof(fs_directory_table_item_t));

    ...except that cannot possibly work, as filename is actually a pointer to the char array. I'd really like to avoid having to write code like:

        read(disk_df, &directory_table_item.inode, sizeof(block_number_t));
        read(disk_df, directory_table_item.filename, sizeof(filename_t));
        read(disk_df, &directory_table_item.other_field, sizeof(some_other_field_t));

    ...for each struct in the code, because I'd have to replicate code and changes in no less than three different places (definition, reading, writing). Any DRYer but still maintainable ideas?

    Read the article

  • Import module stored in a cStringIO data structure vs. physical disk file

    - by Malcolm
    Is there a way to import a Python module stored in a cStringIO data structure vs. physical disk file? It looks like "imp.load_compiled(name, pathname[, file])" is what I need, but the description of this method (and similar methods) has the following disclaimer: Quote: "The file argument is the byte-compiled code file, open for reading in binary mode, from the beginning. It must currently be a real file object, not a user-defined class emulating a file." [1] I tried using a cStringIO object vs. a real file object, but the help documentation is correct - only a real file object can be used. Any ideas on why these modules would impose such a restriction or is this just an historical artifact? Are there any techniques I can use to avoid this physical file requirement? Thanks, Malcolm [1] http://docs.python.org/library/imp.html#imp.load_module

    Read the article

  • I'm having a hard time wrapping my head around handling the Activity Lifecycle...

    - by kefs
    So it seems I've created a fatal flaw and coded an app before understanding and handling rotation/lifecycle events. Currently, all of my code is in onCreate of each activity. I've read a lot of lifecycle tutorials online, including the official dev guide info, but I'm still having an almost unbelievably hard time trying to wrap my head around the rotation/lifecycle events and methods and when to use them correctly. For example, my app currently has an activity that opens the db, inserts a record, then closes the db. If I rotate my screen on this activity, the data is re-entered into the db. Using the available lifecycle events (onPause(), onResume(), etc.), how would I prevent this db call from happening again? Would I have to pass a variable through the state saying that the db call has been done, and not to do it again? Thanks in advance.

    Read the article

  • All things equal what is the fastest way to output data to disk in C++?

    - by user260197
    I am running simulation code that is largely bound by CPU speed. I am not interested in pushing data in/out to a user interface, simply saving it to disk as it is computed. What would be the fastest solution that would reduce overhead? iostreams? printf? I have previously read that printf is faster. Will this depend on my code and is it impossible to get an answer without profiling? Edit: Output data needs to be in text format, whether tab or comma separated. This will require formatting, precision, etc. Running in Windows.

    Read the article

  • Can Eclipse not hard-code ECLIPSE_HOME when exporting build.xml?

    - by stevex
    I have an Eclipse project that I'm attempting to set up to build both with Eclipse and externally with Ant. It seems like a good way to do this is to have Eclipse generate a build.xml file that I can then use with Ant. I'd like to set it up so the build.xml can be regenerated from Eclipse whenever the need arises, which means no hand-editing the build.xml file. But Eclipse writes one entry in there that has a hard-coded path to a directory on my computer, which makes it unsuitable for checking in to a source repository. Specifically, it's this entry that's the trouble:

        <property name="ECLIPSE_HOME" value="D:/Eclipse/Eclipse Galileo (3.5) SR1"/>

    Is there some way to have Eclipse not output this line, or to make it a relative reference or something that makes sense to check in?
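
    One workaround that keeps the generated build.xml untouched relies on the fact that Ant properties supplied on the command line take precedence over <property> tasks in the file, so each machine can supply its own ECLIPSE_HOME. A hedged sketch (the path is an example):

        ant -DECLIPSE_HOME=/opt/eclipse -f build.xml

    The checked-in value then only matters for people who run the build without overriding it; alternatively, a small non-generated wrapper build file can set ECLIPSE_HOME before importing the generated one.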

    Read the article
