Search Results

Search found 25718 results on 1029 pages for 'external hard drive'.


  • Windows XP won't boot after drive transplant

    - by Nathan
    Hi all, I moved my hard drive from my Lenovo laptop into my Asus Eee PC netbook. When I started the netbook, after POST all I got was a black screen with a cursor in the upper left corner. I thought that the migration should work OK because this was a 32-bit version of Windows XP, and the Atom processor in the Asus should support the x86 instruction set. However, I don't know much about Windows, so maybe this was a dumb thought. I did verify that the BIOS can find the drive. It required major surgery to replace the drive, so any solution requiring me to remove the transplant drive is not going to fly. Keeping in mind that the netbook has no optical drive and that I have no other Windows computers (all my other computers run Linux), is there any way I can fix this problem? Thanks! Nathan

    Read the article

  • Copying 500GB Data to EC2 Instances Local Drive

    - by iCode
    Please do not ask me why (they made me), but I have to copy 500GB of data to the local drive of each of the 200 nodes/instances that I am launching in EC2. For reasons beyond this post, this data must be on the local drive and not an EBS drive, so I cannot benefit from snapshots. What is the fastest way I can manage this? Copying from S3 to each node takes a long time. I tried attaching an EBS volume containing the data to every node and then copying the data from EBS to the local drive, but that also takes a long time (several hours). Now I am also thinking of using BitTorrent, but I'm not sure how well it is going to work. What is the best way to copy 500GB of static data to the local drive of each of 200 EC2 instances? The 500GB of data is composed of several hundred files of varying size; the biggest file is 20GB.
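
    One way to attack the distribution problem, sketched below under some loud assumptions: passwordless SSH between the instances, a hosts.txt with one hostname per line (the first host already holding the data in /data), and rsync installed everywhere. None of these names or paths come from the question; they are placeholders. The idea is the same one that makes BitTorrent attractive: every instance that has finished copying becomes a sender, so the number of simultaneous sources doubles each round instead of S3 or a single EBS volume serving all 200 nodes.

        #!/bin/bash
        # Binary fan-out copy: the set of seeded hosts doubles every round.
        mapfile -t HOSTS < hosts.txt      # HOSTS[0] is the initial seed
        DATA_DIR=/data

        seeded=1                          # how many hosts currently hold the data
        total=${#HOSTS[@]}

        while (( seeded < total )); do
            pids=()
            # every seeded host pushes to one not-yet-seeded host in parallel
            for (( i = 0; i < seeded && seeded + i < total; i++ )); do
                src=${HOSTS[$i]}
                dst=${HOSTS[$((seeded + i))]}
                ssh "$src" "rsync -a --inplace $DATA_DIR/ $dst:$DATA_DIR/" &
                pids+=($!)
            done
            wait "${pids[@]}"
            (( seeded += ${#pids[@]} ))
        done

    With 200 instances this finishes in about eight rounds of copying instead of 200 sequential ones, and each round is limited by instance-to-instance bandwidth rather than by S3.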

    Read the article

  • Why is my drive so full on my Windows 2008 Server

    - by Zee Tee
    My server is Windows 2008 R2 Standard Server. I have a secondary SAS drive where all my website files are, with the following properties:
    File system: NTFS
    "Allow files on this drive to have contents indexed in addition to file properties" IS CHECKED
    Layout: Simple
    Type: Basic
    Status: Healthy (Page File, Primary Partition)
    I have 3 folders on this drive:
    Folder 1: 4GB
    Folder 2: 2GB
    Folder 3: 20GB
    (These are the sizes of them when I click properties.) But the drive says it only has 10GB left out of 65GB. Why? I'm trying to make more room on this drive.

    Read the article

  • mdadm raid5 recover double disk failure - with a twist (drive order)

    - by Peter Bos
    Let me acknowledge first off that I have made mistakes, and that I have a backup for most but not all of the data on this RAID. I still have hope of recovering the rest of the data. I don't have the kind of money to take the drives to a recovery company. Mistake #0: not having a 100% backup. I know. I have an mdadm RAID5 system of 4x3TB, drives /dev/sd[b-e], all with one partition /dev/sd[b-e]1. I'm aware that RAID5 on very large drives is risky, yet I did it anyway.

    Recent events: The RAID became degraded after a two-drive failure. One drive [/dev/sdc] is really gone; the other [/dev/sde] came back up after a power cycle, but was not automatically re-added to the RAID. So I was left with a 4-device RAID with only 2 active drives [/dev/sdb and /dev/sdd]. Mistake #1: not using dd copies of the drives for restoring the RAID. I did not have the drives or the time. Mistake #2: not making a backup of the superblock and mdadm -E of the remaining drives.

    Recovery attempt: I reassembled the RAID in degraded mode with
        mdadm --assemble --force /dev/md0
    using /dev/sd[bde]1. I could then access my data. I replaced /dev/sdc with a spare, empty, identical drive. I removed the old /dev/sdc1 from the RAID:
        mdadm --fail /dev/md0 /dev/sdc1
    Mistake #3: not doing this before replacing the drive. I then partitioned the new /dev/sdc and added it to the RAID:
        mdadm --add /dev/md0 /dev/sdc1
    It then began to restore the RAID, ETA 300 mins. I followed the process via /proc/mdstat to 2% and then went to do other stuff.

    Checking the result: Several hours (but less than 300 mins) later, I checked the process. It had stopped due to a read error on /dev/sde1.

    Here is where the trouble really starts: I then removed /dev/sde1 from the RAID and re-added it. I can't remember why I did this; it was late.
        mdadm --manage /dev/md0 --remove /dev/sde1
        mdadm --manage /dev/md0 --add /dev/sde1
    However, /dev/sde1 was now marked as spare. So I decided to recreate the whole array using --assume-clean, using what I thought was the right order, and with /dev/sdc1 missing:
        mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1
    That worked, but the filesystem was not recognized while trying to mount. (It should have been EXT4.)

    Device order: I then checked a recent backup I had of /proc/mdstat, and I found the drive order:
        md0 : active raid5 sdb1[0] sde1[4] sdd1[2] sdc1[1]
              8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
    I then remembered this RAID had suffered a drive loss about a year ago, and recovered from it by replacing the faulty drive with a spare one. That may have scrambled the device order a bit... so there is no drive [3], only [0], [1], [2], and [4]. I tried to find the drive order with the Permute_array script (https://raid.wiki.kernel.org/index.php/Permute_array.pl), but it did not find the right order.

    Questions: I now have two main questions:
    1. I screwed up all the superblocks on the drives, but only gave mdadm --create --assume-clean commands (so I should not have overwritten the data itself on /dev/sd[bde]1). Am I right that in theory the RAID can be restored (assuming for a moment that /dev/sde1 is OK) if I just find the right device order?
    2. Is it important that /dev/sde1 be given the device number [4] in the RAID? When I create it with
        mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1
    it is assigned the number [3]. I wonder if that is relevant to the calculation of the parity blocks. If it turns out to be important, how can I recreate the array with /dev/sdb1[0] missing[1] /dev/sdd1[2] /dev/sde1[4]? If I could get that to work, I could start it in degraded mode, add the new drive /dev/sdc1, and let it resync again.

    It's OK if you would like to point out that this may not have been the best course of action, but you'll find that I've realized this. It would be great if anyone has any suggestions.
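
    To try device orders systematically, a sketch is below; it is only an illustration, assuming the chunk size (512k) and metadata version (1.2) from the old /proc/mdstat, and it re-writes the RAID superblocks (though not the data) on every pass, so ideally run it against dd images of the drives. The candidate orders are examples; you would normally generate every permutation of the three surviving partitions plus the "missing" slot.

        #!/bin/bash
        # Try candidate device orders until a read-only fsck recognizes the filesystem.
        CANDIDATES=(
          "/dev/sdb1 missing /dev/sdd1 /dev/sde1"
          "/dev/sdb1 missing /dev/sde1 /dev/sdd1"
          "/dev/sdb1 /dev/sde1 missing /dev/sdd1"
          # ...add the remaining permutations here...
        )

        for order in "${CANDIDATES[@]}"; do
            mdadm --stop /dev/md0 2>/dev/null
            # --assume-clean skips the resync; --run suppresses the confirmation prompt;
            # $order is deliberately unquoted so it expands into four arguments
            mdadm --create /dev/md0 --assume-clean --run \
                  --metadata=1.2 --level=5 --raid-devices=4 --chunk=512 $order
            # -n keeps fsck strictly read-only; exit code 0 means it found a clean ext4
            if fsck.ext4 -n /dev/md0 >/dev/null 2>&1; then
                echo "Filesystem looks sane with order: $order"
                break
            fi
        done

    On the slot-number question: with --create, the parity layout follows the positional order of the devices on the command line. An old array showing sde1[4] alongside numbers 0, 1, 2 is what a replacement drive typically looks like when it has taken over role 3, so the [3] label the new create prints may not by itself be the problem.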

    Read the article

  • Aperture and networked drive: thinks images are offline

    - by AK
    I keep my photos on a networked drive (wireless), and they are referenced by Aperture. Aperture seems to not try very hard to look for this drive. Usually I have to open the network drive in Finder before opening Aperture -- otherwise it doesn't find the images and considers them offline. To me this seems like Aperture isn't willing to look on the network for the drive, unless it was helped along by pointing it out in Finder. It doesn't work if I start Aperture first and then navigate to the drive in Finder. What are some workarounds to making this functional? Is there a way to tell Aperture to look again without restarting the program?
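
    A hedged workaround, rather than a fix for Aperture itself: mount the share from a script before launching Aperture, so the volume is always there when the library opens. The server and share names below are placeholders, and the protocol (afp vs smb) depends on how your NAS exports the drive.

        #!/bin/bash
        # Mount the photo share (if it is not already mounted), then launch Aperture.
        SHARE_URL="afp://nas.local/Photos"
        VOLUME="/Volumes/Photos"

        if ! mount | grep -q "$VOLUME"; then
            osascript -e "mount volume \"$SHARE_URL\""
        fi

        open -a Aperture

    Saving this with Automator as a small application (or just running it from Terminal) gives a one-click "mount, then open Aperture" launcher, which at least removes the manual Finder step.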

    Read the article

  • Mirroring a Linux server to an external USB hard drive

    - by DuPie
    My google-fu must be failing; I haven't been able to find a good solution for the following:
    - numerous Linux servers on commodity hardware
    - trying to make a recovery mirror copy to external hard drives
    - the external hard drives are smaller than the source drives, but larger than the data
    - the external drives are connected via USB 2.0 (slow)
    - the servers range from 20GB of data to 400GB of data
    - the servers are remote, so hands-on access is a pain
    - I need to copy boot files
    - the external drives are currently empty
    Basically, I'm looking for a way to use a ghosting solution from INSIDE a running Linux server to an external hard drive, without booting a CD, etc. The rsync/cpio solutions I've looked at don't work great with grub, /dev, /proc, etc. I understand that since the system isn't offline it won't be a "mirror" image as files change, but that's OK. Are there any free/commercial products that would work? A do-it-yourself sketch follows below.
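
    For the do-it-yourself route, here is a minimal sketch of a live copy with rsync plus a grub reinstall, assuming the USB disk is already partitioned, formatted, and mounted at /mnt/usb, and that /dev/sdX stands in for the USB disk itself; the grub-install line is for grub2 and would need adjusting on grub-legacy systems. This is illustrative, not a turnkey ghosting product.

        #!/bin/bash
        # Live "poor man's ghost" of the running system onto a mounted USB disk.
        MOUNTPOINT=/mnt/usb

        # Copy the whole filesystem, keeping permissions, ACLs, xattrs and hardlinks,
        # while skipping pseudo-filesystems and the destination itself.
        rsync -aAXH --delete \
              --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*","/media/*","/lost+found"} \
              / "$MOUNTPOINT/"

        # Recreate the mount points that were excluded above.
        mkdir -p "$MOUNTPOINT"/{proc,sys,dev,run,tmp,mnt,media}

        # Make the copy bootable: put grub on the USB disk. Remember that the copy's
        # /etc/fstab will still reference the original disk's UUIDs.
        grub-install --boot-directory="$MOUNTPOINT/boot" /dev/sdX

    Free tools that do roughly the same thing from inside a running system exist (Mondo Rescue is one often-mentioned option), but the rsync approach has the advantage of being resumable over slow USB 2.0 links.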

    Read the article

  • Clone a 2TB WD Green internal drive with bad sectors to a 3TB partitioned external

    - by ron
    I have a 2TB WD Black drive and would like to simply do a straight clone from a failing 3TB drive to it. Both are SATA. Will I be able to just install the new drive alongside the faulty one and then do the clone/rescue attempt with ddrescue, or is there a better method? The faulty internal drive mentioned has bad sectors, although I'm usually able to boot into Windows 7 Ultimate with it and navigate and access all my programs. I have been attempting some trials with an Ubuntu Live CD using ddrescue but am not sure I'm doing it right. I have a 3TB WD My Book Essential external which is GPT and have created a separate 2TB partition on it which I am trying to clone to. I assume I need to format the new drive first to NTFS? Can I do that via the Ubuntu Rescue Remix 12.04 live DVD that I've been booting with?
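
    A minimal two-pass ddrescue sketch, assuming you image the failing disk to a file on the (already formatted and mounted) external drive rather than cloning device-to-device; the device name and paths are placeholders, so double-check them with lsblk before running, and never point the output at the failing disk. If you go device-to-device instead, the destination does not need to be formatted first, because ddrescue overwrites it sector by sector.

        #!/bin/bash
        SRC=/dev/sdX                      # the failing disk
        DST=/mnt/external/rescue.img      # image file on the mounted external drive
        MAP=/mnt/external/rescue.map      # map file lets ddrescue stop and resume

        # Pass 1: grab everything that reads cleanly, skipping trouble spots (-n).
        ddrescue -n "$SRC" "$DST" "$MAP"

        # Pass 2: go back and retry only the bad areas, three times each.
        ddrescue -r3 "$SRC" "$DST" "$MAP"

    The map file is the important part: it records which sectors were recovered, so the run can be interrupted and resumed, and later passes only touch the areas that failed.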

    Read the article

  • Map a drive to root of a server (\\server) in Vista

    - by Andy T
    Hi, in Windows XP I can very easily map a network drive to the root of my NAS server: I browse to it in Explorer (\\192.168.1.70), choose "Map Network Drive", choose the drive letter, done. In Vista, this does not seem possible. I have to go to "Map Network Drive" from 'Computer', then enter the address, but it will only let me map to specific shares (sub-folders off of the server root) and NOT to the server root share. Since my NAS has built-in shares (music, photo, video, etc.), I would have to have drive letters for all of these, which I absolutely don't want. Can anyone tell me: how come I can easily map to the server root from XP, but not in Vista? Is there something fundamentally different in the networking between the two OSes? Or do I just need to do things a different way? Hope someone can help. Thanks, AT

    Read the article

  • Is it normal for a SAS drive to have a few bad blocks, or should I replace my drive ASAP?

    - by Nate
    I have a drive, part of a RAID 1 mirror, that has two bad blocks. Adaptec Storage Manager e-mailed me when it detected the blocks. It shows 4 medium errors for that drive, but the state is still “optimal”. This is my first time using Adaptec RAID controllers. I don’t know if an occasional bad block is normal, or if I should immediately replace that drive. Update: The drive failed later the same day! The disk subsystem is: Adaptec 6405 with ZMM, (2) Seagate near-line SAS drives (ST31000424SS). The other drive hasn’t reported any bad blocks yet. I am running a consistency check.
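
    If you want to watch this from the OS as well as from Adaptec Storage Manager, a hedged sketch: on SAS (SCSI) drives smartctl reports reallocations as the "grown defect list", and this usually works if the controller exposes the physical disks as SCSI generic devices. The /dev/sg names below are placeholders (lsscsi or sg_map will show the real ones), and whether the 6405 passes this through may depend on the driver and smartmontools version.

        # Check health and the grown defect list on each physical SAS disk.
        for dev in /dev/sg1 /dev/sg2; do
            echo "== $dev =="
            smartctl -a "$dev" | grep -iE "health|grown defect"
        done

    A defect count that keeps growing over days or weeks is the usual signal to swap the drive even while the array still reports optimal.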

    Read the article

  • USB Hardware vs. Software Write Lock

    - by TreyK
    I'm in the market for a USB flash drive, and remember this cool feature a tiny 32MB flash drive of mine had: a write lock switch. This seemed like it would be an amazing feature to have as a shield against any nastiness happening to the drive on an unfamiliar computer. However, very few drives on the market offer this feature. Instead, it seems that forms of software protection are the more prominent method. This software protection causes me a bit of uneasiness, as it seems like this software wouldn't be nearly as bulletproof as a physical switch. Also, levels of protection seem to vary from product to product. Being able to protect certain folders from reading and/or writing would be nice, but is the security trade-off worth it? Just how effective can this software protection be? Wouldn't a simple format be able to clean any drive with software protection? My drive must also be compatible with Windows XP, Vista, and 7, as well as Linux and Mac. What would be the best way forward for getting a well-sized (~8GB) flash drive with a strong write protection implementation, for little or no more than a regular drive? Thanks.

    Read the article

  • Boot failure on installation from a burned iso image

    - by jdamae
    I'm encountering boot failure while trying to install a Linux distro from a CD. I'm using an older PC; here are its specs: HP Pavilion a255c, 2.66GHz CPU, 512MB RAM, with a BIOS revision of 6/30/2003. I reclaimed an older drive (Seagate ST340810A) that seems to be working, as it's recognized in the BIOS (auto-detected). So this is not the original HDD, but a replacement. I downloaded a mini.iso of Ubuntu 10.10 that I want to install, and burned the image to a CD for the install. My boot sequence is: First Boot Device [CDROM]. I disabled devices 2-4 so I can force it to read first from the CD-ROM. This old PC also has a separate CD writer, which is the Sec. Slave; the Sec. Master is the Toshiba DVD/ROM DSM-171 drive where I placed the burned Linux CD. With these settings I cannot get it to boot: I get the message "DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER" when I start the PC with the CD (burned ISO image) in the drive. Would I be able to boot off a USB flash drive instead? Would that work?
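
    On the USB question, a hedged sketch: if the ISO is a hybrid image it can be written straight to a stick with dd, which sidesteps any CD-burning issues. Whether this boots also depends on the 2003-era BIOS supporting USB boot at all, and older mini.iso images that are not hybrid need a tool like UNetbootin instead. /dev/sdX is a placeholder for the whole stick (not a partition like /dev/sdX1), and dd will erase it, so confirm the device name first.

        # Identify the stick, then write the image and flush buffers.
        lsblk
        sudo dd if=mini.iso of=/dev/sdX bs=4M
        sync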

    Read the article

  • DVD won't mount in Ubuntu 12.04

    - by CyborgGold
    I can't seem to be able to mount my optical drive. I have tried numerous solutions from this site with no results, and I am not able to see the device inside the file browser either. There is a DVD in the drive. I am running 12.04 on an HP g60-235dx portable; a link to the specs is below. I will also list what I have tried (what I can find right now). I know the drive is functioning, because just before Windows 7 crashed and my MBR went fubar, I was watching movies just fine. I am fairly new to Linux, so don't assume I know anything. Here is what I have tried:
        sudo wget --output-document=/etc/apt/sources.list.d/medibuntu.list http://www.medibuntu.org/sources.list.d/$(lsb_release -cs).list
        sudo apt-get --quiet update
        sudo apt-get --yes --quiet --allow-unauthenticated install medibuntu-keyring
        sudo apt-get --quiet update
        sudo apt-get install libdvdcss2
        dmesg | grep sr0            (no output)
        apt-get install libdvdnav4  (already installed, and up to date)
        sudo /usr/share/doc/libdvdread4/install-css.sh
        ls -l /dev/cdrom /dev/cdrw /dev/dvd /dev/dvdrw /dev/scd0 /dev/sr0
        ls: cannot access /dev/scd0: No such file or directory
        lrwxrwxrwx 1 root root 3 Sep 10 03:51 /dev/cdrom -> sr0
        lrwxrwxrwx 1 root root 3 Sep 10 03:51 /dev/cdrw -> sr0
        lrwxrwxrwx 1 root root 3 Sep 10 03:51 /dev/dvd -> sr0
        lrwxrwxrwx 1 root root 3 Sep 10 03:51 /dev/dvdrw -> sr0
        brw-rw----+ 1 root cdrom 11, 0 Sep 10 03:51 /dev/sr0
        wodim --devices
        wodim: Overview of accessible drives (1 found) :
        -------------------------------------------------------------------------
        0 dev='/dev/sg1' rwrw-- : 'TSSTcorp' 'CDDVDW TS-L633M'
        -------------------------------------------------------------------------
        sudo lshw          (optical drive section)
        *-cdrom
            description: DVD-RAM writer
            product: CDDVDW TS-L633M
            vendor: TSSTcorp
            physical id: 1
            bus info: scsi@1:0.0.0
            logical name: /dev/cdrom
            logical name: /dev/cdrw
            logical name: /dev/dvd
            logical name: /dev/dvdrw
            logical name: /dev/sr0
            version: 0200
            capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
            configuration: ansiversion=5 status=nodisc
        sudo lshw | grep cdrom
        *-cdrom
            logical name: /dev/cdrom
    Spec sheet for the portable: http://www.cnet.com/laptops/hp-g60-235dx/4507-3121_7-33496192.html
    If you need any more information beyond all of that, please let me know.
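
    Worth noting that lshw reports status=nodisc, i.e. the drive itself says no medium is present. A hedged first step is to try mounting /dev/sr0 by hand and watch the kernel's reaction; the mount point below is just a placeholder.

        sudo mkdir -p /media/dvd
        sudo mount -t iso9660 /dev/sr0 /media/dvd     # video DVDs may need: -t udf
        dmesg | tail -20                              # look for "no medium" or I/O errors

    If mount reports no medium even with a disc inserted, the problem is below the filesystem layer (disc, lens, or drive) rather than anything to do with libdvdcss.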

    Read the article

  • LTO 2 tape performance in LTO 3 drive

    - by hmallett
    I have a pile of LTO 2 tapes, and both an LTO 2 drive (HP Ultrium 460e) and an autoloader with an LTO 3 drive in it (Tandberg T24 autoloader, with an HP drive). Performance of the LTO 2 tapes in the LTO 2 drive is adequate and consistent. HP L&TT tells me that the tapes can be read and written at 64 MB/s, which seems in line with the performance specifications of the drive. When I perform a backup (over the network) using Symantec Backup Exec, I get about 1700 MB/min backup and verify speeds, which is slower, but still adequate. Performance of the LTO 2 tapes in the LTO 3 drive in the autoloader is a different story. HP L&TT tells me that the tapes can be read at 82 MB/s and written at 49 MB/s; the drop in write speed seems unusual, but not the end of the world. When I perform a backup (over the network) using Symantec Backup Exec, though, I get about 331 MB/min backup speed and 205 MB/min verify speed, which is not only much slower overall, but also much slower for reads than for writes. Notes:
    - The comparison testing was done on the same server, SCSI card and SCSI cable, with the same backup data set and the same tape each time.
    - The tape and drives are error-free (according to HP L&TT and Backup Exec).
    - The SCSI card is a U160 card, which is not normally recommended for LTO 3, but we're not writing to LTO 3 tapes at LTO 3 speeds, and a U320 SCSI card is not available to me at the moment.
    As I'm scratching my head to determine the reason for the performance drop, my first question is: while LTO drives can write to the previous generation of LTO tapes, does doing so normally incur a performance penalty?

    Read the article

  • Using smartctl to get vendor-specific attributes from an SSD behind a SmartArray P410 controller

    - by Lairsdragon
    Recently I deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers have worked well so far. Now I would like to get wear-level info, error statistics, etc. from the drives. The SA P410 supports a passthrough of the SMART command to a single drive in the array, but I was not able to get the interesting values out of the output. In this case the wear-level indicator (attribute ID 233) is what interests me, but it is only present if the drive is directly attached to a SATA controller.
    smartctl on a directly connected SSD:
        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/
        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          3 Spin_Up_Time            0x0000 100   000   000    Old_age  Offline In_the_past 0
          4 Start_Stop_Count        0x0000 100   000   000    Old_age  Offline In_the_past 0
          5 Reallocated_Sector_Ct   0x0002 100   100   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0002 100   100   000    Old_age  Always  -           8561
         12 Power_Cycle_Count       0x0002 100   100   000    Old_age  Always  -           55
        192 Power-Off_Retract_Count 0x0002 100   100   000    Old_age  Always  -           29
        232 Unknown_Attribute       0x0003 100   100   010    Pre-fail Always  -           0
        233 Unknown_Attribute       0x0002 088   088   000    Old_age  Always  -           0
        225 Load_Cycle_Count        0x0000 198   198   000    Old_age  Offline -           508509
        226 Load-in_Time            0x0002 255   000   000    Old_age  Always  In_the_past 0
        227 Torq-amp_Count          0x0002 000   000   000    Old_age  Always  FAILING_NOW 0
        228 Power-off_Retract_Count 0x0002 000   000   000    Old_age  Always  FAILING_NOW 0
    smartctl on a P410-connected SSD:
        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
    (Right, it is completely empty.)
    smartctl on a P410-connected HDD:
        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
        Current Drive Temperature:     27 C
        Drive Trip Temperature:        68 C
        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0
        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12
    Am I hunting a bug here, or is this a limitation of the P410's SMART command passthrough?

    Read the article

  • Can't mount FAT32 drive under Ubuntu Linux

    - by Josh
    I have a 320GB USB drive with a single large FAT32 partition. The volume mounts perfectly fine on my Mac OS X 10.5.8 machine, and Disk Utility on the Mac reports no issues with the volume; I can read/write all data on the drive. However, when I connect the drive to my Ubuntu 9.10 Karmic system, the partition does not mount. dmesg|tail says:
        [ 2752.334822] scsi3 : SCSI emulation for USB Mass Storage devices
        [ 2752.335040] usb-storage: device found at 3
        [ 2752.335044] usb-storage: waiting for device to settle before scanning
        [ 2757.330301] usb-storage: device scan complete
        [ 2757.331005] scsi 3:0:0:0: Direct-Access     WD       3200AAK External 1.65 PQ: 0 ANSI: 0
        [ 2757.331772] sd 3:0:0:0: Attached scsi generic sg2 type 0
        [ 2757.355647] sd 3:0:0:0: [sdb] 625142448 512-byte logical blocks: (320 GB/298 GiB)
        [ 2757.360737] sd 3:0:0:0: [sdb] Write Protect is off
        [ 2757.360749] sd 3:0:0:0: [sdb] Mode Sense: 00 00 00 00
        [ 2757.360755] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367618] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367631] sdb: sdb1
        [ 2762.797622] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2762.797636] sd 3:0:0:0: [sdb] Attached SCSI disk
        [ 2822.866228] FAT: bogus number of reserved sectors
        [ 2822.866237] VFS: Can't find a valid FAT filesystem on dev sdb1.
    When I run fsck.vfat -a /dev/sdb1 I get:
        root@cartman:~# fsck.vfat -a /dev/sdb1
        dosfsck 3.0.3, 18 May 2009, FAT32, LFN
        Logical sector size is zero.
    Googling "vfat Logical sector size is zero" produced no consensus as to the solution. I would prefer not to have to completely reformat the disk if possible, because it contains about 280GB of data I would rather not have to find a temporary home for. Any suggestions?
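
    A hedged, read-only first check: dosfsck is reading a zero bytes-per-sector value out of the partition's boot sector (offsets 11-12 of the BPB, normally 0x0200 = 512), so dumping the first sector shows whether that field really is damaged on disk.

        sudo dd if=/dev/sdb1 bs=512 count=1 2>/dev/null | hexdump -C | head -4

    If those bytes are indeed zero, FAT32 keeps a backup boot sector (usually at sector 6 of the partition), and a tool such as testdisk can restore or rebuild the primary from it; that is a write operation, though, so image the drive first if the 280GB matters.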

    Read the article

  • Cross-OS data recovery question, USB drive involved

    - by Moshe
    Here's the story: a MacBook had OS X 10.4 and Windows XP dual-booting using rEFIt. Then the Windows partition got corrupted and wouldn't boot, presumably due to a virus. There were sensitive files there, and those were successfully copied to a USB drive; then 10.5 was installed on the hard drive, formatting the drive in the process. The USB drive's contacts cracked and the data is lost from there, unless the drive can be resoldered; the issue is that there is too much solder there already. So, how can the data in question be recovered? The files were Microsoft Money files (not the latest version) for the Windows version of the program. Right now, only OS X is installed on the MacBook. Is there a Mac-based program that can recover the Windows data, or am I better off trying to resolder the drive? Does anyone know how best to resolder a USB drive more than once, where the first solder joint is there but detached from the silicon? Also, what format (extension) are Microsoft Money files? In need of help!

    Read the article

  • Access Samba Drive over SSH

    - by chrissygormley
    Hello, I am trying to access a Samba drive over SSH. I have a Windows machine with a Samba drive mapped that connects to my Linux VM's drive, and I also run Cygwin on the Windows machine. What I am trying to do is SSH from my Linux VM into the Windows/Cygwin side and cd into the Samba drive, which connects back into my Linux directory. When I am in Cygwin I can see the drive as drive Z:, but when I SSH into Cygwin the Z: drive doesn't show up. Can anyone offer suggestions on how to get this working? Thanks
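
    Mapped drive letters in Windows belong to the interactive logon session, so an incoming SSH session usually won't see Z: at all. A hedged sketch of two workarounds from inside the Cygwin SSH session; the host and share names are placeholders, and if you log in with an SSH key the session may lack network credentials, in which case net use will ask for a username/password.

        # Option 1: skip the drive letter entirely -- Cygwin understands UNC paths.
        cd //linux-vm/share

        # Option 2: re-create the mapping for this session only, then use it.
        net use Z: '\\linux-vm\share' /persistent:no
        cd /cygdrive/z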

    Read the article

  • CentOS 6.3 KVM external IP forwarding to guests

    - by user1111702
    I have a CentOS 6.3 server with KVM installed. The server has 4 external IPs and one NIC:
        176.9.xxx.xx1
        176.9.xxx.xx2
        176.9.xxx.xx3
        176.9.xxx.xx4
    I use the following configuration: ifcfg-eth0 as slave to ifcfg-br0. The configuration in ifcfg-eth0 is
        DEVICE=eth0
        ONBOOT=yes
        BRIDGE=br0
        HWADDR=14:da:e9:b3:8b:99
    and in ifcfg-br0
        DEVICE=br0
        TYPE=Bridge
        BOOTPROTO=static
        BROADCAST=176.9.xxx.xxx
        IPADDR=176.9.xxx.xx1
        NETMASK=255.255.255.0
        SCOPE="peer 176.9.xxx.xxx"
    and I have 3 more aliases for br0. br0:1, to get the traffic from the second external IP:
        DEVICE=br0:1
        IPADDR=176.9.xxx.xx2
        NETMASK=255.255.255.248
        ONBOOT=yes
    br0:2, to get the traffic from the third external IP:
        DEVICE=br0:1
        IPADDR=176.9.xxx.xx3
        NETMASK=255.255.255.248
        ONBOOT=yes
    br0:3, to get the traffic from the fourth external IP:
        DEVICE=br0:1
        IPADDR=176.9.xxx.xx4
        NETMASK=255.255.255.248
        ONBOOT=yes
    The above settings work fine and I receive the traffic from all the external IPs. My problem is that I want to pass the traffic from each external IP to a specific virtual guest on my server, i.e. traffic that comes from
        176.9.xxx.xx2 must pass to virtual machine 1
        176.9.xxx.xx3 must pass to virtual machine 2
        176.9.xxx.xx4 must pass to virtual machine 3
    Can you please help me achieve this? What are the settings on the host, and what should I do on the guests? Thank you in advance.
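
    A hedged sketch of one common way to do this: leave the public IPs on the host and DNAT each one to a guest on libvirt's default NAT network (192.168.122.0/24 is the usual default; the guest addresses below are placeholders, and libvirt's own FORWARD rules may need loosening). The other common approach is to skip NAT entirely, give each guest a bridged interface on br0, and configure the public IP inside the guest instead of as a br0 alias.

        # Enable forwarding on the host.
        sysctl -w net.ipv4.ip_forward=1

        # One DNAT rule per public IP -> guest, plus SNAT so replies leave with the right source.
        iptables -t nat -A PREROUTING  -d 176.9.xxx.xx2 -j DNAT --to-destination 192.168.122.11
        iptables -t nat -A PREROUTING  -d 176.9.xxx.xx3 -j DNAT --to-destination 192.168.122.12
        iptables -t nat -A PREROUTING  -d 176.9.xxx.xx4 -j DNAT --to-destination 192.168.122.13
        iptables -t nat -A POSTROUTING -s 192.168.122.11 -j SNAT --to-source 176.9.xxx.xx2
        iptables -t nat -A POSTROUTING -s 192.168.122.12 -j SNAT --to-source 176.9.xxx.xx3
        iptables -t nat -A POSTROUTING -s 192.168.122.13 -j SNAT --to-source 176.9.xxx.xx4

        # Let the forwarded traffic through if the FORWARD policy or libvirt rules block it.
        iptables -I FORWARD -d 192.168.122.0/24 -j ACCEPT
        iptables -I FORWARD -s 192.168.122.0/24 -j ACCEPT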

    Read the article

  • Excel macro to change external data query connections - e.g. point from one database to another

    - by Rory
    I'm looking for a macro/vbs to update all the external data query connections to point at a different server or database. This is a pain to do manually and in versions of Excel before 2007 it sometimes seems impossible to do manually. Anyone have a sample? I see there are different types of connections 'OLEDB' and 'ODBC', so I guess I need to deal with different formats of connection strings?

    Read the article

  • No-argument method on window.external is invoked when checking with typeof

    - by janko
    Hi, I am trying to display an HTML page with embedded JavaScript code inside a System.Windows.Forms.WebBrowser control. The JavaScript code is expected to interact with the embedding environment through the window.external object. Before invoking a method on window.external, the JavaScript is supposed to check for the existence of the method; if it is not there, the code should invoke a generic fallback method.
        // basic idea
        if (typeof(window.external.MyMethod) != 'undefined') {
            window.external.MyMethod(args);
        } else {
            window.external.Generic("MyMethod", args);
        }
    However, checking for a no-argument method with typeof seems to invoke the method already. That is, if MyMethod accepts any positive number of arguments, the code above works perfectly; but if MyMethod is a no-argument method, then the expression typeof(window.external.MyMethod) will not check its type but invoke it, too. Is there any workaround for this behavior? Can I somehow escape the expression window.external.MyMethod to prevent the method call from occurring?

    Read the article

  • Acer Aspire One and Kernel 2.6.35-25 Freeze

    - by Nerdfest
    I'm having a problem with an Acer Aspire One netbook after the latest kernel upgrade. Basically, doing anything relating to an external monitor locks the trackpad, and in some cases the keyboard as well. This lock persists in GNOME even after reboots, and requires battery removal to fix. It does work in the graphical login manager, up until the problem occurs for the first time. Any ideas on settings, etc., that I can change to make it work again?

    Read the article

  • My Folders Become Hidden System Files And Access Denied?

    - by echolab
    I just asked a question on the Super User site: I have an external HDD which suddenly seems infected, and one of my folders, which contains my photos, changed to something like this. One of the Super User folks suggested that I install Ubuntu and try to scan it and change permissions, but I am not an Ubuntu expert, and believe me, if my problem gets solved I will switch to it (since all I do is write and take photos, and I am tired of malware, etc.). Now I have Ubuntu and Fedora installed (after a long read through the guides), and both of them show the infected folder as empty (in Windows 7 I see these strange folders, as you can see in the picture).

    Read the article

  • Hardware problem

    - by Ajay0990
    Guys, I need help recovering my external hard disk. I'm using a Seagate FreeAgent Go 320GB HDD. Recently I tried to format it using the command line in Windows 7, but I accidentally removed the HDD before the format was complete, and now I cannot open it. I tried to recover the data using as many programs as I could, but with no luck; I have at most 25,000 bad sectors. Can I still recover my HDD? Is there any way to recover an HDD with that many bad sectors using Linux?

    Read the article
