Search Results

Search found 13047 results on 522 pages for 'np hard'.

Page 406 of 522

  • Why does the file date always change to the current date?

    - by Marshall
    We are a programming shop, but this is not a programming question. My boss has put an external HD on the network. It contains the 'home' folders for users on the network. He uses it to place VB projects that he wants me to work on. But no matter what date and time he places a project on the drive, the file dates (modified) always show the current date, though nothing in the files has changed. It makes it very hard to confirm that he has given me the latest versions. (He is not a fan of version control and nothing I do will convince him otherwise.) Any ideas why this happens and how to prevent it? P.S. As I wrote this I decided to add the last accessed date to the file display, and those dates happen to show the dates I expect to see. Why is the modified date getting changed, but not the accessed date? Does the accessed date change only when the files are opened or read, changed or not? Note: I use Directory Opus 9, a replacement for the Windows file browser. Thanks, Marshall

    Read the article

  • Files not being copied to AFP volume when copying through the Finder

    - by cefstat
    I am trying to copy files from my MacBook's hard disk to my NAS. The latter is a ReadyNAS Duo and is mounted as an AFP volume. The files are about 5MB each and I copy them by selecting all the files that I need in a Finder window and then dropping them onto the destination directory. Almost always some of the files do not get copied to the NAS. For example, if I select 200 files and then start the copying, everything looks normal at the beginning (while the copying takes place the Finder window for the destination directory is updated to show 200 files while it was empty before), but after the copying ends the destination directory shows fewer than 200 files (let's say 190). If I copy the same 200 files to the NAS again, without replacing already copied files, the remaining 10 files are usually copied correctly. In a few cases, I have to repeat the process a third time. Notice that the Finder does not give any warning at any stage that some of the files have not been copied. I am wondering if this is a known problem with AFP and the Finder and/or if there is something that I can do to solve it.

    Read the article

  • Dual boot NT4 and Windows 98

    - by ItFinallyWorks
    I am trying to dual boot NT4 and Windows 98 SE (don't laugh - old computer). I have seen Microsoft's instructions for doing this, but they limit Windows 98 to a FAT16 partition (NT4's NTLDR doesn't understand FAT32) and therefore only 2GB of disk space. I really need it to have more than that. I started with Win 98 (on the 1st partition), repartitioned the disk, then added NT4 on the 2nd partition. NT4 took over the bootloader (as expected), so NT4 boots, but Win 98 doesn't. Right now I am working in VMware so I can use nonpersistent hard drives (IDE, like the real computer) to recover from errors easily. I've tried using XP's NTLDR with the instructions here: http://www.nu2.nu/fixnt4/ , but I got weird errors from NT4 and it never really worked. If XP's NTLDR did work, it should be able to boot both OSes. I've also tried using GRUB. In theory that should work. In fact, when booting from Super Grub Disk, it does. But as soon as I install GRUB to disk, Win 98 boots, but NT4 blue screens at boot with a 0x0000007b INACCESSIBLE_BOOT_DEVICE error (that can be a lot of things; see MS KB 822051). The incantation I'm using for GRUB 1 is:

        rootnoverify (hd0,1)
        makeactive
        chainloader +1
        boot

    So, anybody have some suggestions?
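    For reference, a minimal GRUB-legacy menu.lst sketch that chainloads both systems; the partition numbers follow the layout described above (98 on the 1st partition, NT4 on the 2nd) but are an assumption, and this is not a tested config for the 0x7B problem:

        title Windows 98 SE
            rootnoverify (hd0,0)
            makeactive
            chainloader +1

        title Windows NT 4.0
            rootnoverify (hd0,1)
            makeactive
            chainloader +1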

    Read the article

  • How to deal with lots of brackets in a formula?

    - by wenlibin02
    Say I have a formula like this (in LaTeX or Maple or another text system):

        Result: ((6*(k2+k3))*A123*k2*k3*(A12*A13*k2^2-2*A12*A13*k2*k3+A12*A13*k3^2-A123*k2^2-2*A123*k2*k3-A123*k3^2)*(exp(-k3*(k3^2*t-x)))^2+6*A12*(-k3+k2)*k2*k3*(A12*A13*k2^2-2*A12*A13*k2*k3+A12*A13*k3^2-A123*k2^2-2*A123*k2*k3-A123*k3^2)*exp(-k3*(k3^2*t-x)))*(exp(-k2*(k2^2*t-x)))^2+(-(6*(-k3+k2))*A13*k2*k3*(A12*A13*k2^2-2*A12*A13*k2*k3+A12*A13*k3^2-A123*k2^2-2*A123*k2*k3-A123*k3^2)*(exp(-k3*(k3^2*t-x)))^2-(6*(k2+k3))*k2*k3*(A12*A13*k2^2-2*A12*A13*k2*k3+A12*A13*k3^2-A123*k2^2-2*A123*k2*k3-A123*k3^2)*exp(-k3*(k3^2*t-x)))*exp(-k2*(k2^2*t-x))

    Note: the above formula is only one part of the result of a Maple calculation; I just can't break it up because there are so many terms. Apparently, it's very hard to read. What I want to do is to fold the matched brackets level by level. If all the brackets are folded, I can find out clearly how many terms there are. Then I can analyze from the top level down to the details of every term. But I just don't know how to do that. Maybe there is existing software which can visualize this kind of complex formula. Any ideas? P.S. I use a Linux system. Open source alternatives are preferred.
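    One rough way to get the term count without any folding tool is to split the expression on '+' and '-' signs that occur at bracket depth zero. A minimal Python sketch (the file name result.txt is just an assumption for wherever the Maple output was saved):

        # count top-level terms of an expression with round brackets
        def top_level_terms(expr):
            """Split on '+'/'-' that occur at bracket depth 0."""
            terms, depth, start = [], 0, 0
            for i, ch in enumerate(expr):
                if ch == '(':
                    depth += 1
                elif ch == ')':
                    depth -= 1
                elif ch in '+-' and depth == 0 and i > start:
                    terms.append(expr[start:i])
                    start = i
            terms.append(expr[start:])
            return terms

        with open('result.txt') as f:
            expr = f.read().strip()
        print(len(top_level_terms(expr)), "top-level terms")

    The same idea applied recursively to each term gives the "fold level by level" view the question asks for.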

    Read the article

  • Can an image based backup potentially corrupt data?

    - by ServerAdminGuy45
    I'm considering doing image-based backups (Acronis) on production Windows systems during non-peak hours. I'm just wondering if they can potentially lead to application data corruption. Let's say that I have a database that is getting hit pretty hard. Could I potentially have the beginning blocks of the database committed to the image, data inserted into the DB (which changes the beginning blocks of the DB on the server but not the image), and then the later blocks of data committed to the image, leading to an inconsistent state? Here's an example of what I'm trying to illustrate. Imagine a simple data structure which has a number at the front representing the number of "a"s in a file. The number and the data are delimited by a "-". For example: 4-ajjjjjjjajuuuuuuuaoffffa. If an "a" is changed, the data structure resets the number at the beginning of the file, such as: 3-ajjjjjjjajuuuuuuuboffffa. I assume Acronis writes block by block, being a straight-up image, so here is what I'm envisioning happening with my database:

        t0: 4-ajjjjjjjajuuuuuuuaoffffa   ^ pointer is here
        t1: 4-ajjjjjjjajuuuuuuuaoffffa   ^ pointer is here (all data before this is committed to the image)
        t2: 4-ajjjjjjjajuuuuuuuboffffa   ^ pointer is here (all data before this is committed to the image)
            (Also notice how one of the "a"s changed to a "b". There are only 3 "a"s now.)
        t3: 4-ajjjjjjjajuuuuuuuboffffa   ^ pointer is here (all data before this is committed to the image)

    The final image now reads "4-ajjjjjjjajuuuuuuuboffffa", while the true data is "3-ajjjjjjjajuuuuuuuboffffa", leading to a corrupt "database". Basically, changes further down the sequence of blocks could be reflected in the image while important header and synchronization data have already been committed, so the out-of-date header information doesn't accurately reflect the structure of the blocks that follow.

    Read the article

  • Are Colocation Cross Connects Worthwhile?

    - by SvrGuy
    We currently operate three clusters of colocated machines in different data centers. Recently, I became aware that our newest data center will offer to cross connect us to a bandwidth provider free of charge. In the past, I never really investigated a cross connect for bandwidth because I figured that the rates would be similar to what we are paying the colo now and that it would reduce our resiliency (because we would only be using one or two carriers for IP, whereas the colo uses, say, 8 different providers). Then I saw an ad for Hurricane Electric's internet services (http://he.net/cgi-bin/ip_transit_quote) that quoted IP transit at $1/Mbps, which is much better than the $30/Mbps we pay for the blended bandwidth. What are people out there typically paying for bandwidth via cross connect, and how hard is it to set up? Is my understanding correct that what you do is open agreements with two or three ISPs, cross connect to them, and then configure your top-of-rack router on their network? Can you really get IP transit down to a couple of dollars per megabit per month just by doing the routing yourself? Or is my understanding of cross connection fundamentally wrong?

    Read the article

  • What parts should I get for an ASRock X58 Extreme motherboard?

    - by Brad Gilbert
    I just received an ASRock X58 Extreme motherboard, per my post on this question. It was a 2009 Tom's Hardware recommended buy. It is a Core i7 motherboard with an X58 Express chipset, and it uses DDR3 RAM. What I want to know is, what parts should I get to finish it off? I'm looking for some good bargains because of a lack of funds. The most taxing game I will probably play on it is OpenTTD. The only parts I currently have that are compatible: a Dynex 400W power supply (it appears to be an ATX 2.1 power supply with the addition of a -5 rail, apparently designed to be compatible with most ATX-style motherboards); several PCI add-in cards (mostly 10/100 network cards, some sound cards, and some video cards with a VGA connector); and plenty of PATA drives (8 GB to 80 GB hard drives, a dozen or so CD-ROM drives of which only a handful are CD-RW drives, and one DVD-ROM drive). I also have one LCD with a 15-pin VGA connector, which I salvaged from the dump. The only thing wrong with it was some dead capacitors. It also has a stuck pixel.

    Read the article

  • Tablet as Car Computer

    - by luxurychair
    Okay, so forward this to the right place if this isn't the right place to ask this question. I want to use a tablet computer as a car computer. Minimum features would be to run my music (through iPod, Pandora, whatever I want) and GPS navigation, watch TV or movies while I'm parked waiting for people, and the hard one: it needs to answer my phone calls with a pleasant interface much like in-dash systems do. It needs to detect that my phone is ringing in my pocket, provide an on-screen answer/ignore prompt, and then route the audio through the car's speakers. It would be nice to dial out and have address book access, but that is somewhat secondary. I have an iPhone myself and I figured that an iPad with 3G might make a good system for this - but I'm open to other options if an iPad can't do everything I need. I'm willing to write code, and I'm willing to jailbreak one or both devices. I haven't done much work in Obj-C, but I'm not opposed to learning a new language for this project. It's self-enrichment for the most part, as I'm sure I can buy an in-dash entertainment system for less. Does anyone have experience with the iPhone/iPad SDK who can tell me whether or not it would be possible to get an iPad to answer my calls in the car? What about an Android tablet? (That Adam looks promising, too.)

    Read the article

  • Setup of high-end web server and DB server cluster on Amazon EC2: Is this how it's done?

    - by user1086584
    Amazon is so technical, I want to confirm that my understanding is correct. We have a large 500 GB database (OrientDB). We will have it mirrored between the database instances in the same Availability Zone. We believe the database size will grow rapidly. The plan is: get 4 large instances of types compatible with Placement Groups (and ideally Enhanced Networking), 2 for web and 2 for DB. We use EBS-backed instances to store our operating system. Discussion here: http://alestic.com/2012/01/ec2-ebs-boot-recommended We can set up ephemeral SSD instance storage as swap space (but it is lost after even a reboot; I hear it's hard to add ephemeral storage if booting from EBS, but possible). For offsite backup, we will take periodic snapshots and store them on S3. Obviously we need to ensure the database is in a safe state when that snapshot happens, to avoid corruption. (Any hints here, aside from shutting down the DB?) If the database gets too big, we need to create a larger EBS volume. We can use RAID to break the 1 TB limit: http://alestic.com/2009/06/ec2-ebs-raid Static assets on the web servers will be stored on S3. Is that correct? Or am I missing something?
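    On the snapshot-consistency point, the usual pattern is to quiesce or freeze the database, trigger the EBS snapshot (snapshots complete lazily, so the freeze only needs to cover the API call), then release the database. A rough sketch with the AWS CLI; the volume ID, host name, and the exact OrientDB console invocation are placeholders, not a tested procedure:

        # placeholder: put OrientDB into a consistent, flushed state first
        # (FREEZE/RELEASE DATABASE exist in OrientDB, but check the docs for the exact console syntax)
        ssh db1 '/opt/orientdb/bin/console.sh "FREEZE DATABASE"'
        # start the EBS snapshot; it can finish in the background once the API call returns
        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "orientdb-backup"
        # placeholder: release the database again
        ssh db1 '/opt/orientdb/bin/console.sh "RELEASE DATABASE"'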

    Read the article

  • Creating Windows 8.1 system image error

    - by Random
    I'm experiencing a "not enough space" error when trying to create a system image on a USB hard drive: Detailed error: ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f) Insufficient storage available to create either the shadow copy storage file or other shadow copy data. Blah, blah... There is not enough disk space to create the volume shadow copy on the storage location. Make sure that, for all volumes to be backed up, the minimum required disk space for shadow copy creation is available. This applies to both the backup storage destination and volumes included in the backup. Minimum requirement: for volumes less than 500 megabytes, the minimum is 50 megabytes of free space. For volumes more than 500 megabytes, the minimum is 320 megabytes of free space. Recommended: at least 1 gigabyte of free disk space on each volume if the volume size is more than 1 gigabyte. ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f) Insufficient storage available to create either the shadow copy storage file or other shadow copy data. I tried both PowerShell:

        wbAdmin start backup -backupTarget:E: -include:C: -allCritical -quiet

    and the File History button in the Control Panel. Clearly both the EFI and Windows Recovery Environment partitions don't meet the requirements coming from the System Image tool (pic below). On top of that, all system partitions are now shown as 100% free in Disk Management; that's disturbing, but far from the actual state. My question is: how do I create a system image in Windows 8.1?
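    For what it's worth, 0x8004231f is a VSS shadow-storage problem, so one thing to check before anything else is how much shadow copy storage the included volumes are allowed. A sketch of the commands (run elevated; the 2GB figure is an arbitrary example, and whether this helps for the letterless EFI/recovery partitions is uncertain):

        vssadmin list shadowstorage
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=2GB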

    Read the article

  • Two instances of Windows Vista on boot up after failed clean install

    - by Dwayne
    I tried to install a clean version of Vista but failed. I ended up with Windows and Windows.old on my C: drive and a dual boot option on boot up. I gave up, booted up the old version, and tried to rename Windows.old to Windows; I was asked if I wanted to merge the two folders. I answered yes and all seemed OK until I booted up this morning and was given the choice of two versions of Vista. The first one is the one that failed to install correctly and the second one is the old version. How can I get rid of the failed installation? I got rid of the bad boot entry via MSCONFIG. Here is my current situation: several hard drives installed; C: is my boot drive; a much larger drive (H:) stores most of my files. I found a subfolder in my C:\windows folder named windows. Upon inspection I determined it to be older than the C:\windows folder, and therefore it must be the older, working version of the boot. I renamed the C:\windows folder to C:\windows.bad and moved the sub-windows folder to the C: root directory. I also copied it to the H: drive. Now MSCONFIG reports that the copy that is booting is the H: copy. How can I change it back to the C:\ copy, and can I delete the C:\windows.bad file set?

    Read the article

  • Windows 7 wireless not seeing any networks

    - by jkohlhepp
    I think I have managed to confuse Windows 7. When I did the install, I had the network cable plugged into my router, but the wireless card was also enabled. During the install, Windows 7 seemed to see my wireless network and even asked me for the WEP key. I know that it used the WEP key because I initially entered an invalid one and it gave me an error. Then the network said "SoAndSoWireless Connected". However, when I unplug or disable my wired network card, I have no internet, and it can't see any networks. When I plug in the wired network card, it says "SoAndSoWireless Connected". Under Network and Internet > Network Connections I have "Local Area Connection" and "Wireless Network Connection". The wired one's status is "SoAndSoWireless" and the wireless status is "Not Connected". Also, the wireless connection can't seem to see any other wireless networks in the area, and I know there are tons; my neighbors have several. I've somehow seemingly confused Windows 7 into thinking that my wired network card is my wireless card or something. Any ideas on how to un-confuse it? This is a desktop machine by the way, if it matters. EDIT: Ah, I think part of the problem is that I accidentally named my network the same as the wireless network being broadcast by the wireless router. So that might be why it says that name on the hard-wired connection. Perhaps the drivers just aren't working at all for the wireless card. Thanks, ~ Justin

    Read the article

  • Recommendations for good FTP server for Win 2008 x64

    - by sfhtimssf1970
    I spent a bunch of time learning/configuring the "all new and better" FTP feature for IIS7. In my opinion, it still fails hard: In order to have multiple FTP sites on the same machine, you have to use host|user usernames (like domain.com|jason) for every account. Using IIS Manager auth doesn't seem to work at all. I'm sure I'm doing something wrong, but I can't figure out what the hell it is. I've read all the official articles on it and configured it a hundred different ways. It doesn't play well with passive connection types; passive mode has to be disabled on the client in order for it to work. And there is no way to allow one user to see multiple sites, no matter what binding they are connected to. For instance, if "jason" connects to ftp.domain.com, he should be able to see domain2.com and domain3.com without seeing domain4.com and domain5.com. It takes an act of God to set this up with IIS7. So I'm wanting to install a third-party FTP server instead. I've looked at both FileZilla and zFTPServer. Anyone know of any pros/cons on these? Any other recommendations?

    Read the article

  • Change which server mailbox is associated with in Exchange 2007

    - by tacos_tacos_tacos
    I have restored and mounted an EDB file onto a new Exchange 2007 server. However, the old server is still online, and although all the mailboxes I need are in the newly-mounted database, Exchange 2007 System Manager still shows that the mailbox is associated with the old server. If I try to "Move" the database, it actually tries to copy the files from the old server to the new server, which is not necessary because they are already there - and produces an error about the mailbox on the destination already existing. How can I simply tell Exchange (AD?) to use the new server to find the mailbox rather than the old one? Edit: I did the restore by taking the old server offline (turning off all Exchange services), copying the EDB file to the new server, restoring it with eseutil, and mounting it on the new server. I did it this way in part because I didn't know a better way and in part because I couldn't use move-mailbox, as the source location had a horrible Internet connection (which is why Exchange is being moved to the new location). I had to copy the EDB from the old server to a hard disk, go somewhere with a better Internet connection, and upload the EDB to the new server.
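    This looks like an Exchange 2007 database portability scenario, where the usual step is to re-point the mailbox attributes in Active Directory rather than move any data. A hedged sketch of the Exchange Management Shell command; the server, storage group, and database names are placeholders:

        Get-Mailbox -Database "OLDSERVER\First Storage Group\Mailbox Database" |
            Move-Mailbox -ConfigurationOnly -TargetDatabase "NEWSERVER\First Storage Group\Mailbox Database"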

    Read the article

  • NAT causes huge external (actually internal) bandwidth usage

    - by user67953
    We have 4 servers running in a data center, with internal IPs 192.168.3.* assigned. A hardware firewall (FortiGate) is configured for NAT, and it maps the traffic as:

        external IP 111.222.333.10 -> 192.168.3.10 (www.server1.com)
        external IP 111.222.333.11 -> 192.168.3.11 (www.server2.com)
        external IP 111.222.333.12 -> 192.168.3.12 (www.server3.com)

    In DNS, we have www.server1.com A 111.222.333.10. Now if I send a lot of data to www.server1.com from www.server2.com, the data is sent through 111.222.333.10 (the external IP), and this makes our bandwidth usage huge (expensive!). The workaround I have is to add a local hosts mapping on server2: 192.168.3.10 www.server1.com. That way, when sending files from server2 to www.server1.com, the traffic stays internal. However, we are getting more and more servers, and it would be hard to manually add mappings to every server. Just wondering, do we have another solution for this? Can we do something in the FortiGate firewall? P.S. The DNS servers being used are public, such as OpenDNS, Google DNS, etc.
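    The two standard fixes are hairpin (loopback) NAT on the FortiGate, so internal-to-external-IP traffic is translated back inside, or split-horizon DNS, so internal hosts resolve the public names to the 192.168.3.x addresses. A small sketch of the DNS approach using dnsmasq on an internal resolver (assumes you are willing to run one; it is not something the FortiGate itself provides):

        # /etc/dnsmasq.conf - answer internal queries with the internal addresses
        address=/www.server1.com/192.168.3.10
        address=/www.server2.com/192.168.3.11
        address=/www.server3.com/192.168.3.12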

    Read the article

  • New Windows Server 2008 R2 WIMP running slower than Windows Server 2003

    - by starshine531
    We recently upgraded a WIMP server from Windows Server 2003 (32-bit) to Windows Server 2008 R2 (64-bit). The new server has significantly better hardware than the old server, yet many processes take much longer than on the old box. We have a rather complex web application process that normally takes about 7 seconds on the old box, but on the new one it takes 11-12 seconds. That's down from the 15.5 seconds it took before I disabled IPv6. This process involves some queries (some of them involve transactions with maybe 3 queries between the start and commit) and creating and emailing some PDFs. Windows updates are current on a more or less fresh machine. This happens consistently even when we have almost no traffic on the site and memory and CPU aren't being hard pressed at all. The only differences between the servers other than the OS and hardware: 1) When available, we used 64-bit versions of programs. 2) The new server uses MySQL 5.5 rather than MySQL 5.1 (I did run the mysql_upgrade program and we use InnoDB for the engine). 3) The new server uses PHP 5.3.18 rather than PHP 5.3.1. 4) With the new OS came IIS7 rather than IIS6, of course. What could be causing better hardware to run so much slower? Let me know if you need more details. Thank you.

    Read the article

  • How to write reusable Puppet definitions?

    - by Oliver Probst
    I'd like to write a Puppet manifest to install and configure an application on target servers. Parts of this manifest should be reusable, so I used define for my reusable functionality. Doing so, I always have the problem that there are parts of the definition which are not reusable. A simple example is a bunch of configuration files to be created. These files must be placed in the same directory, and this directory must be created only once. Example:

        nodes.pp

        node 'myNode.in.a.domain' {
          mymodule::addconfig {'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig {'configfile2.xml':
            param => 'someothervalue',
          }
        }

        mymodule.pp

        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          # ensure that the directory exists:
          file { $config_dir:
            ensure => directory,
          }

          # create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because now the resource file { $config_dir: ... } is defined twice. As far as I understood, it is required to extract these parts into a class. Then it looks like this:

        nodes.pp

        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig {'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig {'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use. Every user of my module has to know that there is a class which is additionally required. For this simple use case the additional class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know: is there a better way to handle these dependencies? Ideally, classes like createConfigurationDirectory are hidden from the users of the module's API. Or are there some other best practices/patterns for handling such dependencies?
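    One pattern that hides the directory dependency from callers, assuming the puppetlabs-stdlib module is available, is to declare the directory idempotently inside the define with ensure_resource, so the first addconfig instance creates it and later instances simply reference it. A sketch:

        define mymodule::addconfig ($param) {
          $config_dir = '/the/directory'

          # declares File[$config_dir] only if no other instance has declared it yet
          ensure_resource('file', $config_dir, { 'ensure' => 'directory' })

          file { "${config_dir}/${name}":
            content => template('mymodule/a_template.erb'),
            require => File[$config_dir],
          }
        }

    With this, the node definition only ever mentions mymodule::addconfig; no extra class has to be declared by the module's users.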

    Read the article

  • Get Python to raise MemoryError instead of eating all my disk space

    - by asmeurer
    If I run a Python program with a memory leak, I would normally expect the program to eventually die with MemoryError. But instead, what happens is that all the virtual memory is used until my disk runs out of space. I am running Mac OS X 10.8 on a Retina MacBook Pro. My computer generally has between 10 GB and 20 GB free. Mac OS X is smart enough not to die completely when the disk runs out of space (rather, it gives me a dialog letting me force quit my GUI programs). Is there a way to make Python just die when it runs out of real memory, or some reasonable amount of virtual memory? This is what happens on Linux, as far as I can tell. I guess Mac OS X is more generous than Linux with virtual memory (the fact that I have an SSD might be part of this; I don't know just how smart OS X is with this stuff). Maybe there's a way to tell the Mac OS X kernel to never use so much virtual memory that it leaves less than, say, 5 GB free on the hard drive?
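    A sketch of the Linux-style answer: cap the process's address space so allocations fail (and CPython raises MemoryError) instead of paging until the disk is full. The 8 GB figure is an arbitrary assumption, and whether the OS X kernel actually enforces RLIMIT_AS is worth verifying on 10.8:

        import resource

        limit_bytes = 8 * 1024 ** 3              # arbitrary cap, tune for the machine
        _, hard = resource.getrlimit(resource.RLIMIT_AS)
        # allocations beyond the soft limit fail, which CPython surfaces as MemoryError
        resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, hard))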

    Read the article

  • Enabling AHCI in BIOS for SSD

    - by Robert
    I am trying to help a friend with a desktop upgrade. It is an old machine with an Intel DG31 mainboard. The board has 1 IDE port, to which a DVD-ROM drive is connected, and 2 SATA ports. 1 SATA port had a hard drive with XP on it. I have made that the secondary drive now and wiped the OS as requested, so it is just for data. The new SSD has been installed, but I read that for best results one must enable AHCI in the BIOS? So I checked, and in the BIOS there is a SATA Mode setting with 2 options - Native and Legacy. I think Native means AHCI? After setting it to Native, I installed Windows 7 Home Premium and all the latest drivers from Intel's website and all Windows Updates. Now when I check Device Manager I see this: Also, Microsoft says HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Msahci\Start and HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\IastorV\Start should have the value 0 for AHCI, but I see that the value is 3 for both. So does this mean that Native mode is not AHCI? Or did Windows 7 ignore the BIOS setting and install in IDE mode, maybe because both cables are present? Please help me enable AHCI on this system. Thanks!
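    For reference, if the board's SATA controller actually supports AHCI (on some DG31 boards "Native" only means native-IDE mode rather than AHCI, which is worth confirming first), the usual sequence for switching an already-installed Windows 7 is: set those service Start values to 0, reboot once, and only then flip the BIOS to AHCI; doing it in the other order tends to produce a 0x7B blue screen. A sketch of the registry commands, run from an elevated prompt after backing up the registry:

        reg add HKLM\SYSTEM\CurrentControlSet\Services\msahci /v Start /t REG_DWORD /d 0 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\iaStorV /v Start /t REG_DWORD /d 0 /f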

    Read the article

  • Switch Windows 8 from a hybrid MBR/GPT => GPT only on MacBook Pro Retina

    - by Sid
    I used the Disk Utility + Boot Camp wizard to set up my hard drive for Windows 8 (final MSDN). Somewhere in that process, the Apple tools turned my GPT disk into a hybrid MBR/GPT. All 4 of my primary MBR partitions are used up, so when I try turning on BitLocker in Windows 8, it complains about not finding a system drive. I know that on Windows 8 the BitLocker setup tries to create the 200(?)MB system partition if it's missing. However, with all 4 partitions filled, I suspect it can't create the system drive, so it can't find it, and it throws back an error like "BitLocker Setup could not find a target system drive. You may need to manually prepare your drive for BitLocker". I've already tried disabling hibernation, the swap file, etc. Now I'm thinking that if I were to get rid of the MBR scheme altogether, perhaps I can be alright in the GPT world without MBR's 4-primary-partition limit. So, how can I get rid of the MBR tables on the hybrid scheme in a manner that still leaves Mac OS and Windows 8 in working condition? Details: the hardware is a MacBook Pro Retina. The primary MBR partitions are consumed as follows: EFI partition; HFS+ partition (encrypted, therefore "Apple_CoreStorage"); HFS+ partition (Recovery partition, contains the unencrypted Mac bootloader); NTFS partition (Windows 8 all-in-one partition).

    diskutil list output:

        sid-mbpr:~ sid$ diskutil list
        /dev/disk0
           #:                      TYPE NAME          SIZE       IDENTIFIER
           0:     GUID_partition_scheme               *251.0 GB  disk0
           1:                       EFI               209.7 MB   disk0s1
           2:         Apple_CoreStorage               160.0 GB   disk0s2
           3:                Apple_Boot Recovery HD   650.0 MB   disk0s3
           4:      Microsoft Basic Data Win8          90.1 GB    disk0s4

    GPT vs MBR addresses:

        sid-mbpr:~ sid$ sudo gptsync /dev/rdisk0
        Password:

        Current GPT partition table:
        #      Start LBA      End LBA  Type
        1             40       409639  EFI System (FAT)
        2         409640    312909639  Unknown
        3      312909640    314179175  Mac OS X Boot
        4      314179584    490233855  Basic Data

        Current MBR partition table:
        # A    Start LBA      End LBA  Type
        1              1       409639  ee  EFI Protective
        2         409640    312909639  ac  Apple RAID
        3      312909640    314179175  ab  Mac OS X Boot
        4 *    314179584    490233855  07  NTFS/HPFS

        Status: GPT partition of type 'Unknown' found, will not touch this disk.**

    **: Ignore this message; the gptsync tool is old and doesn't understand the UUID for "Apple_CoreStorage" / FileVault 2 partitions. Since the LBA addresses are alright, it is safe to ignore this message.
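    One commonly cited way to drop the hybrid MBR and go back to a plain protective MBR is the expert menu of gdisk (GPT fdisk), assuming it is installed. This rewrites the MBR, so have backups and a recovery path first, and note that whether Windows 8 still boots afterwards (it would have to boot via EFI rather than the CSM/MBR path) is exactly the thing to test:

        sudo gdisk /dev/disk0
        # then at the gdisk prompt:
        #   x   - enter the experts menu
        #   n   - create a fresh protective MBR (replacing the hybrid one)
        #   w   - write the table to disk and exit
        #   y   - confirm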

    Read the article

  • Give back full control to a user on a disk from another computer

    - by Foghorn
    I have my friend's hard drive mounted externally. After messing with the permissions with TAKEOWN so I could fix some viruses, I have full control over their drive. The problem is, now it's stuck in an "autochk not found" reboot sequence. I think the problem is that the boot sector is now effectively invisible to the system. So my question is: how can I use icacls to give back full ownership when the user I am giving it to is not on my machine? I ran the TAKEOWN command from my Windows 7 laptop; their machine is Windows XP Professional with three partitions, and I only altered the one that has the boot sector. Here are the permissions that icacls shows (where my computer shows up as %System%, my username is ME, and the drive is E:):

        C:\Users\ME> icacls E:\*
        E:\$RECYCLE.BIN %System%\ME:(OI)(CI)(F)
                        Mandatory Label\Low Mandatory Level:(OI)(CI)(IO)(NW)
        E:\ALLDATAW %System%\ME:(I)(OI)(CI)(F)
        E:\alrt_200.data %System%\ME:(OI)(CI)(F)
        E:\AUTOEXEC.BAT %System%\ME:(OI)(CI)(F)
        E:\AZ Commercial %System%\ME:(I)(OI)(CI)(F)
        E:\boot.ini %System%\ME:(OI)(CI)(F)
        E:\Config.Msi %System%\ME:(I)(OI)(CI)(F)
        E:\CONFIG.SYS %System%\ME:(OI)(CI)(F)
        E:\Documents and Settings %System%\ME:(I)(OI)(CI)(F)
        E:\IO.SYS %System%\ME:(OI)(CI)(F)
        E:\Mitchell1 %System%\ME:(I)(OI)(CI)(F)
        E:\MSDOS.SYS %System%\ME:(OI)(CI)(F)
        E:\MSOCache %System%\ME:(I)(OI)(CI)(F)
        E:\NTDClient.log %System%\ME:(OI)(CI)(F)
        E:\NTDETECT.COM %System%\ME:(OI)(CI)(F)
        E:\ntldr %System%\ME:(OI)(CI)(F)
        E:\pagefile.sys %System%\ME:(OI)(CI)(F)
        E:\Program Files %System%\ME:(I)(OI)(CI)(F)
        E:\RECYCLER %System%\ME:(I)(OI)(CI)(F)
        E:\RHDSetup.log %System%\ME:(OI)(CI)(F)
        E:\System Volume Information %System%\ME:(I)(OI)(CI)(F)
        E:\WINDOWS %System%\ME:(I)(OI)(CI)(F)

        Successfully processed 22 files; Failed processing 0 files

        C:\Users\ME>
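    Since the friend's account does not exist on the repairing machine, one option is to hand ownership and access back to well-known groups, or to the original account's raw SID, which icacls accepts with a leading asterisk. A hedged sketch, assuming the drive is still mounted as E: on the working laptop (the SID shown is a made-up example):

        icacls E:\ /setowner "Administrators" /T /C
        icacls E:\* /grant "Administrators":(OI)(CI)F /T /C
        rem or grant the original profile owner by SID, for example:
        icacls E:\ /grant *S-1-5-21-1111111111-2222222222-3333333333-1001:(OI)(CI)F /T /C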

    Read the article

  • Windows Server 2008 R2 install reboots unexpectedly during "Completing installation" phase

    - by knda
    I am attempting to install Windows Server 2008 R2 onto a Cisco UCS C201 M2 rack-mounted server but am having major difficulties, and I'm wondering if anyone has some insight or items they could recommend for me to look at to get this resolved. The installation is being attempted via the Cisco remote console (using CIMC's virtual DVD-ROM). Following the first phase of Setup, where the installation files are copied to the target hard drive, a reboot occurs to load Setup from the HDD; mid-way through the "Completing installation" phase the system then reboots unexpectedly. System configuration: Cisco UCS C201 M2 (2RU rack-mounted server); 16GB RAM; 2x 73GB 15K SAS and 4x 300GB 10K SAS drives; add-on cards - Intel quad-port GigE card (no Fibre Channel cards); storage - LSI MegaRAID SAS 9261-8i (onboard SATA is disabled, no SATA drives connected); KVM - Belkin; no physical DVD-ROM. I have: run memtest86+ (no RAM faults); disabled/enabled SATA support in the BIOS; attempted an install from a USB DVD-ROM, with no effect; attempted an unattended install scripted via the Cisco Configuration Manager DVD provided; removed the Belkin KVM in case that was causing drama; discovered that the Cisco website is "awesome" for searching for PDFs/drivers (cough) and reverted back to Google; downloaded the latest LSI drivers from LSI's site and used them during the Server 2008 install; checked the Windows ISO against checksums from the MS site; and checked the Windows ISO by using it for an install in a VM. I'm running out of ways to troubleshoot this, as I am not sure how to enable any sort of 'verbose' mode during the setup process. The next step I have planned is to remove the Intel NIC and try the installation again. Edit: the problem was the "Cisco INTEL QUAD PT GBE" (1000/PT) card. I will have to see if the card is faulty or if it's just drivers. Thanks for the help.

    Read the article

  • Exporting Environment Variables in Ubuntu Linux

    - by stanigator
    I know many people have asked about environment variables before, but I am having a hard time dealing with these paths while ensuring I don't mess around with the original settings. How would you go about executing these instructions in Ubuntu in terms of environment variables? Thanks in advance! The instructions are:

        Please put /home/stanley/Downloads/ns-allinone-2.34/bin:/home/stanley/Downloads/ns-allinone-2.34/tcl8.4.18/unix:/home/stanley/Downloads/ns-allinone-2.34/tk8.4.18/unix
        into your PATH environment; so that you'll be able to run itm/tclsh/wish/xgraph.

        IMPORTANT NOTICES:

        (1) You MUST put /home/stanley/Downloads/ns-allinone-2.34/otcl-1.13 and /home/stanley/Downloads/ns-allinone-2.34/lib
            into your LD_LIBRARY_PATH environment variable.
            If it complains about X libraries, add the path to your X libraries into LD_LIBRARY_PATH.
            If you are using csh, you can set it like:  setenv LD_LIBRARY_PATH <paths>
            If you are using sh, you can set it like:   export LD_LIBRARY_PATH=<paths>

        (2) You MUST put /home/stanley/Downloads/ns-allinone-2.34/tcl8.4.18/library into your TCL_LIBRARY environment variable.
            Otherwise ns/nam will complain during startup.
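    Concretely, for the default bash shell on Ubuntu that notice boils down to a few lines appended to ~/.bashrc, which keeps the system-wide settings untouched. A sketch using the paths quoted above:

        # ~/.bashrc additions for the ns-allinone-2.34 build
        NS_HOME=$HOME/Downloads/ns-allinone-2.34
        export PATH=$NS_HOME/bin:$NS_HOME/tcl8.4.18/unix:$NS_HOME/tk8.4.18/unix:$PATH
        export LD_LIBRARY_PATH=$NS_HOME/otcl-1.13:$NS_HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
        export TCL_LIBRARY=$NS_HOME/tcl8.4.18/library

    After saving, either open a new terminal or run "source ~/.bashrc" for the variables to take effect.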

    Read the article

  • Spurious alleged file corruption on Windows 7

    - by Johannes Rössel
    Recently my laptop has sometimes warned about corrupted files on the hard drive (a Samsung SSD PB22-JS3 TM). So far this has only happened when updating (or checking out) an SVN repository with either TortoiseSVN or the command line Subversion client. The fun thing is that the corrupted file has always been a .svn directory (although the directory entry may contain files in that directory too, if they're small enough, which should be the case with SVN). However, when looking into the directory I was warned about, I notice nothing strange or unusual, I don't get any more warnings about it, and another attempt at updating the working copy works (well, mostly; sometimes it happens again, albeit with a different directory). SVN stops updating once that error occurs; TortoiseSVN even shows an appropriate error message. Since the laptop is only a few months old, I doubt the SSD is failing already - five months of normal usage shouldn't be too surprising. Also, it has (so far) occurred only with SVN updates on a large repository. Maybe that's too many writes in a short time and some part between the software and the hardware doesn't quite keep up? I don't know enough about this to make an informed guess. Does anyone know what's up here? ETA: I've run chkdsk (it seems to schedule itself anyway when this happens) and it didn't find anything out of the ordinary.

    Read the article

  • What do different patterns mean in Windows 8 file copy dialog

    - by MainMa
    When copying or extracting files, Windows 8 shows a chart with the speed of the operation. I noticed several patterns (the screenshots are not reproduced here; these were their captions):

        1. Randomness / nice mountains.
        2. High speed at the beginning, then low speed during most of the operation.
        3. Low speed at the beginning, then high speed during most of the operation (similar to the previous image, but inverted).
        4. Mostly constant speed (same as the previous image, but without the fast start).

    I'm curious: what does each of those patterns mean? Do some indicate that there may be a problem with hard disk performance? And why is the nearly constant speed so rare, even when copying a single large file from and to a spinning drive, or when copying a single large file or a bunch of small files from and to an SSD?

    Read the article
