Search Results

Search found 5849 results on 234 pages for 'partition scheme'.


  • TrueCrypt: Open volume without mounting

    - by Totomobile
    I have a corrupt TrueCrypt volume. When I try to mount it, the password is accepted but I get an error: "hdiutil attach failed - no mountable file systems". I just need to open it without TrueCrypt trying to mount it, so I can point a data recovery program at that partition. It's just an image file volume. I have read the documentation here: http://www.truecrypt.org/docs/?s=command-line-usage but I can't figure out which switch only opens an image without mounting it. I am using the Mac version, and I have set up an alias for the TrueCrypt shell command, so I can just type: truecrypt -t -v - ?? [][]..
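
    A hedged sketch of the switch in question: the TrueCrypt CLI documents a --filesystem option, and passing "none" should attach the decrypted volume as a raw device without asking the OS to mount a filesystem (the image path below is a placeholder):

      # Attach the volume without mounting any filesystem (path is a placeholder)
      truecrypt -t --filesystem=none /path/to/image.tc
      # List attached volumes to find the virtual device to hand to the recovery tool
      truecrypt -t -l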

    Read the article

  • TrueCrypt: Open volume without mounting

    - by Totomobile
    I have a corrupt TrueCrypt volume. When I try to mount it, the password is accepted but I get an error: "hdiutil attach failed - no mountable file systems". I read a post that says you can try to recover the volume, but I have to open it first. I just need to open it without TrueCrypt trying to mount it, so I can use that partition in a data recovery program. It's just an image file volume. I am using the Mac version, and I have set up an alias for the truecrypt shell command, but I'm not sure of the syntax. Please help. Thank you!

    Read the article

  • Lost all data on Windows XP after blue screen

    - by Barb
    I got a blue screen and was trying to boot with my OS disk. Frankly, I was unsure exactly how to do this. I was trying everything and booted in partition mode. Finally, I booted with the disk, ran chkdsk /r, and was able to log into Windows. But all of my files and pictures are gone. I have no backup, and I'm sick to think that I lost the last seven years of pictures of my kids. What can I do?
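
    One hedged avenue: chkdsk /r sometimes moves orphaned data into hidden found.000 folders on C:\ rather than deleting it, so that is worth checking first. Failing that, a signature-based tool can scan the raw disk for photos; the sketch below uses PhotoRec from the free TestDisk package, run from a live CD, with placeholder paths (recover onto a different drive so nothing gets overwritten):

      # Scan the whole first disk and write recovered files to another drive
      photorec /d /media/usbdrive/recovered /dev/sda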

    Read the article

  • SLES 11 - ocfs2 - Locking does not appear to work

    - by Autobyte
    Hi. I have two SLES 11 servers that are SAN-attached to a Clarion CX-340. The SAN partition has been formatted with ocfs2, I have both machines set up in a cluster, and the cluster is running (all appears to be normal). As a locking test I have a small Java application: when I run it on both machines at the same time, one server should get the lock and the other should be refused it, since the first already holds a lock on that file. But in this case both servers get a lock on the same file. My cluster.conf looks like this:

      node:
          ip_port = 7777
          ip_address = 192.168.10.121
          number = 1
          name = osrsles10node1
          cluster = osrsles10
      node:
          ip_port = 7777
          ip_address = 192.168.10.122
          number = 2
          name = osrsles10node2
          cluster = osrsles10
      cluster:
          node_count = 2
          name = osrsles10

    Please ask for any other info - I really need these locks to be exclusive to each server. Thanks.
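
    A hedged way to take the Java test out of the equation: flock(1) exercises the same cluster lock path (ocfs2 gained cluster-aware flock support around kernel 2.6.26, so SLES 11 should have it). Run this simultaneously on both nodes; on a healthy cluster only one should win (the lock file path is a placeholder):

      # Try a non-blocking exclusive lock and hold it for 30 seconds
      flock -n /mnt/ocfs2/locktest.lock -c 'echo "$(hostname) holds the lock"; sleep 30' \
          || echo "$(hostname): lock is held elsewhere"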

    Read the article

  • Permission denied when running Rails app in VirtualBox Ubuntu guest with files on Windows host

    - by Ola Tuvesson
    I think I'm close to having my dev environment set up exactly the way I want, but one final snag remains. I'm running VirtualBox on a Windows 7 64-bit host, with my dev environment inside an Ubuntu 12.04 guest. I want to keep the files for my projects on the host filesystem - partly so I can access them when the Ubuntu guest is not running, but also so I can use Tortoise and other Windows-based tools (cough Photoshop), and it also eases my backup scheme somewhat. So I've got a folder "Rails" on my NTFS drive, which I've shared (Samba) from the host with a user specifically created for the Ubuntu guest. The mount point has been set up and an entry added to fstab (cifs), using a credentials file and the options iocharset=utf8,mode=0777,dir_mode=0777. This mounts fine and my Ubuntu user has both read and write permissions to the contents. But when I try to start my Rails app I get permission errors on any files the app needs to write to (e.g. the log file) - why is that? Are there any major conceptual flaws with this approach? Would I be better off using the VBox "shared folders" function?
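
    A hedged guess at the snag: the cifs man page documents file_mode rather than mode, and without uid=/gid= every file on a cifs mount is owned by root, so Rails can pass the mode check yet still fail ownership-sensitive operations. A sketch of the fstab line with those options added (server, share, paths and ids are placeholders):

      //192.168.56.1/Rails /home/dev/rails cifs credentials=/home/dev/.smbcred,iocharset=utf8,uid=1000,gid=1000,file_mode=0777,dir_mode=0777 0 0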

    Read the article

  • Apache reverse proxy: no protocol handler

    - by gonvaled
    I am trying to configure a reverse proxy with Apache, but I am getting a "No protocol handler was valid for the URL" error, which I do not understand. This is the relevant Apache configuration:

      ProxyRequests Off
      ProxyPreserveHost On
      <Proxy *>
          Order deny,allow
          Allow from all
      </Proxy>
      ProxyPass /gonvaled/examples/jsonrpc/output/services/ http://localhost:8000/services/
      ProxyPassReverse /gonvaled/examples/jsonrpc/output/services/ http://localhost:8000/services/

    The request reaches Apache as:

      POST /gonvaled/examples/jsonrpc/output/services/EchoService.py HTTP/1.1

    and should be forwarded to my internal service, located at 0.0.0.0:8000/services/EchoService.py. These are the logs:

      ==> /var/log/apache2/error.log <==
      [Wed Jun 20 02:05:20 2012] [debug] proxy_util.c(1506): [client 127.0.0.1] proxy: http: found worker http://localhost:8000/services/ for http://localhost:8000/services/EchoService.py, referer: http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html
      [Wed Jun 20 02:05:20 2012] [debug] mod_proxy.c(998): Running scheme http handler (attempt 0)
      [Wed Jun 20 02:05:20 2012] [warn] proxy: No protocol handler was valid for the URL /gonvaled/examples/jsonrpc/output/services/EchoService.py. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
      [Wed Jun 20 02:05:20 2012] [debug] mod_deflate.c(615): [client 127.0.0.1] Zlib: Compressed 614 to 373 : URL /gonvaled/examples/jsonrpc/output/services/EchoService.py, referer: http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html

      ==> /var/log/apache2/access.log <==
      127.0.0.1 - - [20/Jun/2012:02:05:20 +0200] "POST /gonvaled/examples/jsonrpc/output/services/EchoService.py HTTP/1.1" 500 598 "http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html" "Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.162 Safari/535.19"
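
    As the warning itself hints, this error usually means mod_proxy is loaded but its HTTP submodule is not. A hedged sketch of the fix on Debian/Ubuntu:

      sudo a2enmod proxy proxy_http
      sudo /etc/init.d/apache2 restart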

    Read the article

  • Grub install fails while installing Ubuntu on RAID

    - by Warren Pena
    I'm trying to install Ubuntu 9.10 using the alternate install CD, but I keep getting stuck. I get through the first few steps of the install process easily enough (telling it what partition to install to, what user ID and password to create, time zone, etc.), but then it suddenly pops up a menu asking me what the next step in the install process is. It has "Install the GRUB boot loader on a hard disk" selected by default. When I select it, it goes to another screen with a progress bar and a label "Installing the 'grub2' package." The progress bar gets to 16%, and then I get returned to the same menu. No matter how many times I try to install grub, the exact same thing happens. I'm trying to install Ubuntu on a two-disk RAID-1 array. This is the RAID card I'm using: http://www.siig.com/ViewProduct.aspx?pn=SC-SAER12-S2. Any ideas what may be causing this to happen and how I can fix it? Thanks!
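
    A hedged observation: that SIIG card is host-based ("fake") RAID, which the installer handles via dmraid, and GRUB then has to be installed onto the /dev/mapper device rather than a plain disk. The installer's fourth console (Alt+F4) shows the actual grub-install error; a sketch of a manual attempt from the installer shell (Alt+F2), with a placeholder array name:

      ls /dev/mapper                                  # find the dmraid array name
      chroot /target grub-install /dev/mapper/<array-name>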

    Read the article

  • Restore XP on Acer Aspire One netbook

    - by Imran
    I have an Acer Aspire One D250 netbook which came with Windows XP (but no CD), on which I have since installed Xubuntu 9.10. Now I am trying to sell it, but I cannot find a way to recover XP. I have read in a lot of different places that holding Alt+F10 during boot should send me to a recovery menu (which will allow me to restore XP from a "secret partition"), but I have tried many times to no avail. The best I can do is get the BIOS setup screen by holding F2, but there doesn't seem to be any recovery option there. After the initial option to go into the BIOS setup, GRUB starts loading, and there doesn't seem to be any further opportunity to enter a system setup screen. Please help!

    Read the article

  • Horizontal scrolling not working in Windows 7 running on MBP using Boot Camp

    - by Rubicon
    Is there a way to enable horizontal trackpad scrolling in Windows installed on a Boot Camp partition of a 13" Core 2 Duo MBP? I had a look at the Boot Camp Control Panel settings, but could not find anything that suggested this. I used the Boot Camp drivers that came with the MBP on the Mac OS X install disk. Vertical scrolling works fine, and horizontal scrolling works fine on the Mac side of things, so the hardware is fine. Maybe there is an additional driver I need to install, or an update? Thanks in advance for all your help. :)

    Read the article

  • Slow NFS transfer performance of small files

    - by Arie K
    I'm using Openfiler 2.3 on an HP ML370 G5 with a Smart Array P400 and SAS disks combined in RAID 1+0. I set up an NFS share from an ext3 partition using Openfiler's web-based configuration, and I succeeded in mounting the share from another host. Both hosts are connected via a dedicated gigabit link. A simple benchmark using dd:

      $ dd if=/dev/zero of=outfile bs=1000 count=2000000
      2000000+0 records in
      2000000+0 records out
      2000000000 bytes (2.0 GB) copied, 34.4737 s, 58.0 MB/s

    So it can achieve a moderate transfer speed (58.0 MB/s). But if I copy a directory containing many small files (.php and .jpg, around 1-4 kB per file) with a total size of ~300 MB, the cp process takes about 10 minutes. Is NFS unsuitable for small-file transfers like this? Or are there parameters that must be adjusted?
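
    Small-file NFS writes are usually dominated by synchronous metadata round trips rather than bandwidth, so server-side export options are the first thing to check. A hedged sketch of an /etc/exports line (Openfiler normally manages this via its web UI, and "async" trades crash safety for speed; path and subnet are placeholders):

      /mnt/vg0/share 192.168.1.0/24(rw,async,no_subtree_check)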

    Read the article

  • Path of md device wrong after reboot

    - by flammi88
    I have to set up a software RAID (level 1) on an Ubuntu 12.04 server. It should serve files on the network via Samba. The server has the following disks:

      250 GB SATA HDD (Ubuntu is installed on this drive)
      2 TB SATA HDD (first data disk in the RAID array)
      2 TB SATA HDD (second data disk)

    I created one partition on each data disk with the type "Linux raid autodetect". In the second step I created the RAID 1 with the following command:

      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

    After that, I added the array to mdadm.conf:

      mdadm --examine --scan >> /etc/mdadm/mdadm.conf

    The problem is: after a reboot the array is not available at the path /dev/md0. Instead it gets reassembled as /dev/md/0, and not very reliably. Does anybody have a solution for this issue?
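
    A hedged note: on Ubuntu the initramfs carries its own copy of mdadm.conf, so edits to the file only take effect at boot after the initramfs is rebuilt. Something like:

      sudo mdadm --detail --scan     # check the ARRAY line really names /dev/md0
      sudo update-initramfs -u       # rebuild the initramfs with the new conf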

    Read the article

  • MySQL installation question.

    - by srtriage
    I am far from a DBA and have a question. Recently I installed MySQL. On my machine, C:\ is a 50 GB partition of two mirrored 10k SAS drives; the remaining space on those drives is allocated to D:. I also have an SSD mounted as E:. When I installed MySQL, I installed it to E:\, assuming that is where the database files would be stored. I am now seeing C:\ProgramData\MySQL\MySQL Server 5.1\data\peq, peq being the name of my main database. Is my database being stored on C:\, and if so, how do I make it store the DB on the SSD?
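
    For what it's worth, the install location and the data location are separate settings: the databases live wherever the datadir option in my.ini points, which defaults to C:\ProgramData\MySQL\... regardless of where the binaries went. A hedged sketch of the change (stop the MySQL service, move the data folder to the SSD, edit my.ini, restart; the path is a placeholder):

      [mysqld]
      datadir="E:/MySQL/data"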

    Read the article

  • Installing Windows XP sp3 into USB 2.0 WD 320Gb Hard Drive

    - by NetKabuki
    I have a HP laptop with support for 3 USB 2.0 ports. I also have a clean 320Gb WD USB 2.0 drive. The HP can boot from USB (Bios options). I used the install disk (XP SP3 bootable) and after a few stutters, was actually able to load up a partition on the 320Gb drive with Windows. I cannot consistently get the system to boot up off the USB drive. I am able to drop Ubuntu on that same WD drive and boot up. I can even get Grub2 on Ubuntu to recognize the Win OS. But booting the Win OS is an impossible task. What can I do differently - if anything?

    Read the article

  • RAID FS detection at boot time

    - by alex
    An excerpt from dmesg:

      md: Autodetecting RAID arrays.
      md: Scanned 2 and added 2 devices.
      md: autorun ...
      md: considering sdb1 ...
      md: adding sdb1 ...
      md: adding sda1 ...
      md: created md1
      md: bind<sda1>
      md: bind<sdb1>
      md: running: <sdb1><sda1>
      raid1: raid set md1 active with 2 out of 2 mirrors
      md1: detected capacity change from 0 to 1500299198464
      md: ... autorun DONE.
      md1: unknown partition table
      EXT3-fs (md1): error: couldn't mount because of unsupported optional features (240)
      EXT2-fs (md1): error: couldn't mount because of unsupported optional features (240)
      EXT4-fs (md1): mounted filesystem with ordered data mode

    Is it OK that the kernel tries to mount an ext4 RAID as ext3, then ext2, first? Is there a way to tell it to skip those two steps? Just in case, the fstab entry is:

      /dev/md1 / ext4 noatime 0 1

    TIA.
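
    Those two error lines are harmless: mounting without an explicit type makes the kernel probe filesystems in order until one succeeds. A hedged way to skip the probing is to pass the type on the kernel command line (a sketch of a GRUB kernel line; the vmlinuz path is a placeholder):

      kernel /boot/vmlinuz root=/dev/md1 rootfstype=ext4 ro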

    Read the article

  • Adding Windows 7 to grub4dos menu.lst

    - by antonio
    I am trying to create a multiboot USB drive with grub4dos. I started with a working bootable WinPE-like USB drive, based on Windows 7. I modified the drive's MBR with grubinst.exe (hd1), then copied grldr and this menu.lst file into its root:

      color blue/green yellow/red white/magenta white/magenta
      timeout 30
      default 0

      title Win 7 test
      rootnoverify (hd0, 0)
      chainloader /bootmgr

    I get the error:

      Try (hd0, 0). This partition is ntfs but with unknown boot record
      Try (hd0, 1) ...
      ...
      Cannot find GRLDR.

    If I hit a key, it boots Windows 7 anyway. I would like to drop to the GRUB command shell, but when I hit "c" GRUB boots into Windows.

    Read the article

  • Recover Windows 7 from Linux Ubuntu?

    - by macha
    I have two partitions, with Linux Ubuntu running on one and Windows 7 on the other. When I try to boot into Windows 7, I get an error saying the /system32/winload.exe file is corrupted or deleted. I have the Windows 7 files on my system but do not have the DVD. I have made a bootable USB stick with Windows 7, but when I boot from it, a blue screen comes up with weird error messages. I am now trying to restore my Windows instance to a restore point where it can work normally. How can I do that in my situation?
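
    A hedged option, assuming you can reach the recovery environment's command prompt from the USB stick or a system repair disc: an offline system-file check can replace a corrupted winload.exe without needing a restore point (drive letters may differ inside the recovery environment):

      sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows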

    Read the article

  • Route propagation using OSPF in a network

    - by liv2hak
    I am using Juniper J-series routers to emulate a small telco and a VPN customer. The internal routing will be configured with OSPF; MPLS, including a default and backup path; RSVP for distributing labels within the telco; OSPF for distributing routes from the customer edge (CE) routers to the VRFs in the adjacent PEs; and finally iBGP for distributing customer routes between VRFs in different PEs. [Topology diagram omitted.] The addressing scheme for the network is as follows:

      UOW-TAU:  ge-0/0/0 192.168.3.1
      TAU-PE1:  ge-0/0/0 10.0.1.0,   ge-0/0/1 10.0.2.0,   ge-0/0/2 192.168.3.2
      TAU-P1:   ge-0/0/0 172.16.1.0, ge-0/0/1 172.16.3.1, ge-0/0/2 10.0.2.2
      HAM-P1:   ge-0/0/0 172.16.3.2, ge-0/0/1 172.16.2.1, ge-0/0/3 10.0.3.2
      ACK-P1:   ge-0/0/0 172.16.1.2, ge-0/0/2 172.16.2.2, ge-0/0/3 10.0.1.2
      HAM-PE1:  ge-0/0/0 10.0.3.1,   ge-0/0/2 192.168.4.2
      UOW-HAM:  ge-0/0/0 192.168.4.1

    I also set up a loopback address for each node. I want to set up OSPF so that the path to each internal subnet and router loopback address is propagated to all PE and P nodes. I also want to select a single area for the PE and P nodes, and on each node I should add each interface that should be propagated. How do I accomplish this? My understanding of the procedure is below - is it correct? I set up OSPF on the UOW-TAU ge-0/0/0 and ge-0/0/1 interfaces and the UOW-HAM ge-0/0/0 and ge-0/0/1 interfaces; let me call this Area 100. Once I have done this I should be able to reach each node from the others using ping and traceroute. Any help is highly appreciated.
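
    A hedged Junos sketch for one node, following the plan above (the area number and interface list come from the question; lo0 is added as a passive interface so loopbacks are advertised):

      set protocols ospf area 0.0.0.100 interface ge-0/0/0.0
      set protocols ospf area 0.0.0.100 interface ge-0/0/1.0
      set protocols ospf area 0.0.0.100 interface lo0.0 passive
      commit

    The same stanza, with each node's own OSPF-facing interfaces, would then be repeated on every P and PE router.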

    Read the article

  • IIS7 Compression CSS files only compressed when dynamic compression is enabled

    - by Paul
    If anyone can help it would be appreciated. I would like to enable compression for static files within IIS7 (for the sake of simplicity I'll just refer to static css files for the time being). The problem I'm getting is that css files are only compressed when both dynamic and static compression are enabled in IIS for the website. What I really want to achieve is css (static file) compression whilst leaving the dynamic (aspx) pages uncompressed for the time being (to avoid unnecessary CPU load). I am puzzled as to why leaving just 'static compression' enabled causes css files to be returned uncompressed. My applicationHost.config file has not been altered and looks like this:

      <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
          <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
          <staticTypes>
              <add mimeType="text/*" enabled="true" />
              <add mimeType="message/*" enabled="true" />
              <add mimeType="application/javascript" enabled="true" />
              <add mimeType="*/*" enabled="false" />
          </staticTypes>
          <dynamicTypes>
              <add mimeType="text/*" enabled="true" />
              <add mimeType="message/*" enabled="true" />
              <add mimeType="application/x-javascript" enabled="true" />
              <add mimeType="*/*" enabled="false" />
          </dynamicTypes>
      </httpCompression>

    The server-wide compression setting within IIS is set to 'Dynamic Disabled' and 'Static Enabled' on the Server > Features > Compression page. The per-website compression setting (Server > Sites > MyWebsite > Features > Compression) is where I am enabling and disabling dynamic compression as detailed above. Any help getting me unstuck on this would be appreciated. Thanks
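
    One hedged explanation for this symptom: IIS7 only serves a static file compressed once it qualifies as a "frequent hit" (by default, 2 requests within 10 seconds), so one-off tests of a css file can come back uncompressed even though static compression is on. Lowering the threshold makes the behavior visible immediately (a sketch):

      %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/serverRuntime /frequentHitThreshold:1 /commit:apphost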

    Read the article

  • Nginx all subdomain points to one subdomain (gitlab) rule

    - by Alkimake
    I have installed GitLab on my server and use nginx as the HTTP server... I simply used the nginx recipe for GitLab:

      # GITLAB
      # Maintainer: @randx
      # App Version: 3.0

      upstream gitlab {
          server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket;
      }

      server {
          listen 192.168.250.81:80;       # e.g., listen 192.168.1.1:80;
          server_name gitlab.xxx.com;     # e.g., server_name source.example.com;
          root /home/gitlab/gitlab/public;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log  /var/log/nginx/gitlab_error.log;

          location / {
              # serve static files from defined root folder;
              # @gitlab is a named location for the upstream fallback, see below
              try_files $uri $uri/index.html $uri.html @gitlab;
          }

          # if a file which is not found in the root folder is requested,
          # then the proxy passes the request to the upstream (gitlab unicorn)
          location @gitlab {
              proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
              proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
              proxy_redirect off;
              proxy_set_header X-Forwarded-Proto $scheme;
              proxy_set_header Host $http_host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_pass http://gitlab;
          }
      }

    gitlab.xxx.com works fine and I get the GitLab web pages. But if I request another subdomain I use for Jira (jira.xxx.com) on port 80 (I normally run Jira on port 8080), I get the GitLab web site as well. How can I restrict this rule to serving only GitLab, or maybe redirect jira.xxx.com to jira.xxx.com:8080?
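
    A hedged sketch of the usual fix: when no server_name matches, nginx falls back to the first server block defined for that IP:port, so every unmatched subdomain lands on the gitlab vhost. A second server block that matches jira.xxx.com and proxies to port 8080 keeps the two apart:

      server {
          listen 192.168.250.81:80;
          server_name jira.xxx.com;
          location / {
              proxy_set_header Host $http_host;
              proxy_pass http://127.0.0.1:8080;
          }
      }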

    Read the article

  • page allocation failure - am I running out of memory?

    - by mfriedman
    Lately I've been noticing entries like this one in the kern.log of one of my servers:

      Feb 16 00:24:05 aramis kernel: swapper: page allocation failure. order:0, mode:0x20

    This is what I'd like to know: What exactly does that message mean? Is my server running out of memory? Swap usage is quite low (less than 10%), and so far I haven't noticed any processes being killed for lack of memory. Additional information:

      - The server is a Xen instance (DomU) running Debian 6.0
      - It has 512 MB of RAM and a 512 MB swap partition
      - CPU load inside the virtual machine shows an average of 0.25
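
    A hedged reading of the message: mode:0x20 is GFP_ATOMIC, an allocation that cannot sleep (typically made by a network driver in interrupt context), so this points to momentary free-memory or fragmentation pressure rather than outright exhaustion - the OOM killer staying quiet is consistent with that. A commonly suggested mitigation is raising the kernel's reserved-memory watermark (the value is a sketch; persist it in /etc/sysctl.conf):

      sysctl -w vm.min_free_kbytes=16384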

    Read the article

  • How to (hardware) RAID 10 on Ubuntu 10.04 LTS with 4 drives and motherboard with RAID contoller

    - by lollercoaster
    I have four 500 GB hard drives. I set up a RAID 10 in the BIOS, much like shown here: http://www.supermicro.com/manuals/other/RAID_SATA_ESB2.pdf Then I followed these instructions: http://www.unrest.ca/Knowledge-Base/configuring-mdadm-raid10-for-ubuntu-910 Basically, I cannot get it to work. I follow the instructions up to the "partition" section of the install, creating four RAID 1s (two partitions on each drive, one primary and one for swap space), then combining them to make a RAID 10. Unfortunately it still shows two partitions, one of 500 GB and another of 36 GB, for some reason. Any ideas? It would be best if anyone had found good step-by-step instructions for this... I've been googling for hours and haven't found anything...
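
    A hedged note on the setup itself: that motherboard controller is host-based ("fake") RAID, so the BIOS array and the mdadm instructions are two competing approaches. It is often simpler to disable the BIOS RAID entirely and build a native mdadm RAID10 in one step (a sketch; device names are assumptions):

      sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]1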

    Read the article

  • Recursive reset file permissions on Windows

    - by Peter Horvath
    There is a big, complex directory structure on a relatively big NTFS partition. Somebody managed to put very bad security privileges onto it - there are directories with randomly granted/denied permissions, etc. I have already run into permission bugs multiple times, and I have found insecure permission settings multiple times (for example, write permissions for "Everyone", or wrong owners). I don't have time to check everything by hand (it is big). But luckily, my wishes are very simple - mostly read/write/execute on everything for me, and maybe read for Everyone. Is it possible to somehow remove all security data from a directory and overwrite everything there with my (simple) wishes? On Unix, I would use a chown -R ..., chmod -R ... command sequence. What is the equivalent on Windows?
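
    A hedged sketch of the usual equivalent with the built-in tools, from an elevated prompt (the path is a placeholder):

      takeown /F D:\data /R /D Y
      icacls D:\data /reset /T /C
      icacls D:\data /grant Everyone:(RX) /T

    takeown recursively takes ownership, icacls /reset strips all explicit ACLs in favor of inherited defaults, and the final grant adds read access for Everyone.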

    Read the article

  • Strategy for using snapshots to back up Ubuntu Linux server?

    - by MountainX
    I need some backup advice for my home file server. Here are the mount points, volume groups, logical volumes and used/total space of all the volumes on my Ubuntu 8.10 home file server:

      /        vgA/lvRoot [7.5G/50G]
      /tmp     vgB/lvTmp [195M/30G]
      /var     vgB/lvVar [780M/30G]
      swap     vgB/lvSwap [16.00 GB]
      /media1  vgC/lvMedia1 [400G/975G]
      /media2  vgC/lvMedia2 [75G/295G]
      /boot    partition (no volume group) [95M/200M]
      /video   partition (no volume group) [450G/950G]
      /backups vgD/lvBackupTarget [800G/925G]
      /home    vgE/lvHome [85G/200G]

    I have just added a 2.0 TB external USB drive that I would like to use to back up everything. (It will be a close fit to get it all on one 2.0 TB drive; I actually have a second external USB drive if needed.) I'd like to back up /, /var, /media1, /media2 and /home. I'll deal with /boot and /video separately since they are not logical volumes. For all the logical volumes I'm anticipating taking snapshots and then copying those snapshots to the 2.0 TB external USB drive. I have never done a task like that before. If I do that, I could use the tutorial I found here: http://www.howtoforge.com/linux_lvm_snapshots My questions are:

      1. What is the best overall strategy? Is it LVM snapshots, as I'm assuming?
      2. How should I prepare, subdivide and mount the 2.0 TB external USB drive?
         2.a. Should I create one or more regular partitions, or should I create a physical volume with one or more logical volumes?
         2.b. Would it be advisable to exactly mirror the source pv/lv layout on the external drive, and if so, is this a good strategy?
      3. What's the best way to get the snapshots onto the external drive? dd?

    Even though this is a strategy question, feedback with actual commands is appreciated. I need step-by-step, cookbook-style help because I don't do much server admin work. (Background: this is a home file server that I have rarely had to touch in about two years. It has done its job without much intervention. The really old PC that I used to back everything up recently failed, so I'm replacing it with the external USB drive(s), and I'd like to upgrade my backup strategy at the same time. Previously, I just copied stuff from /backups over to the other computer, which would not have made things very easy in a real restore situation. The /backups mount point contains backup copies of "most" of the important data on a file-by-file basis, but it does not contain copies of /boot, etc. BTW, the actual internal HDD that holds /backups is separate from the other storage devices.)

    EDIT: I'll propose a strategy... The idea came from a comment here: LVM mirroring VS RAID1 - "LVM mirrors are for replication of a logical volume to a different physical volume. It's essentially meant to 'move the data to a different disk'. The mirror is then broken..." That would fit my requirements well. Here is an ideal situation:

      1. Establish the LV mirror on the external drive
      2. Break the link with the mirror
      3. Create a (persistent) snapshot on the mirror
      4. After a week, resync the mirror with the original source and update the mirror
      5. Break the link and create another snapshot on the mirror

    Obviously, the mirror would be like a weekly full backup, and the snapshots on the mirror would represent earlier points in time. If this would work and be time-efficient, it would give a nice full & differential style backup on the external drive based on LVM. I have not heard of a strategy like this before. Will it work? Could it be scripted? Thoughts?
EDIT 2: Creating Portable DiskSafes With LoopbackFS And LVM Snapshots This article seems intriguing: http://www.howtoforge.com/creating-portable-disksafes-with-loopbackfs-and-lvm-snapshots Unfortunately, I don't understand exactly how to map those ideas to the strategy I'm proposing above. I'm going to ask this last bit as a separate question. I will leave my original question in place because I still desire feedback on the overall best strategy. At this moment I'm assuming it is LVM mirroring in the style of "Creating Portable DiskSafes with LVM Snapshots" but that might be wrong.
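
    For the snapshot-and-copy variant (question 3), a minimal hedged sketch for one LV - mount a read-only snapshot and rsync it to the USB drive; names, sizes and mount points are placeholders:

      lvcreate --size 5G --snapshot --name lvHome_snap /dev/vgE/lvHome
      mkdir -p /mnt/snap
      mount -o ro /dev/vgE/lvHome_snap /mnt/snap
      rsync -a --delete /mnt/snap/ /media/usb2tb/home/
      umount /mnt/snap
      lvremove -f /dev/vgE/lvHome_snap

    The snapshot size only has to hold writes that occur on the origin while the copy runs, not the whole volume.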

    Read the article

  • Why Do Both 8GB USB Flash Drives Have Different Integrities?

    - by Boris_yo
    My USB 3.0 SuperTalent Express DUO 8GB recently had its partition corrupted and declared itself "write-protected", and I was told in chat by @sidran32 that this usually means the flash drive has gone bad because its write-cycle limit was reached. Having had this thumb drive for over a year with infrequent use, I was doubtful and contacted SuperTalent's support. I was given a recovery tool, which failed the first time I ran it, prompting me to reinsert the drive. After that, I formatted it with the Windows 7 integrated format utility (the recovery tool offered to do this as well), which succeeded. The problem, as I have noticed, is with the integrity reported for the SuperTalent [screenshot omitted] compared to SanDisk's Micro Cruzer 8GB [screenshot omitted]. Am I missing something? Both thumb drives are 8 GB and have the same FAT32 file system.

    Read the article
