Search Results

Search found 11107 results on 445 pages for 'drive bay'.

Page 394/445

  • Windows system restore deletes various executables and *.js files. How does it decide which files to delete?

    - by Leftium
    I restored my system from a Windows System Restore point. It solved some issues I was having, but introduced other strange problems (my optical drive disappeared, for one). One thing that surprised me was that several files from my Web2Py installation were deleted: the executables and *.js files, and possibly some others (like favicon.ico). I did not expect this because Web2Py is basically a portable, standalone application: you just unzip it and run the executable inside, so nothing should be registered with Windows. My question is: which files does Windows System Restore delete, and how does it decide? I'm just wondering what other files I'm missing and whether there's a way to restore them (without rolling back the restore point). Perhaps it scans for certain file types (like exe, js, ico, dll) with a creation date later than the restore point's creation date? Some other people who experienced a similar problem: "Dropbox: Lost Files" and "User files missing after running system restore". Update: I found some more references on how Windows System Restore works: "Understanding how System Restore in Windows Vista treats executable files" and "Why Vista's System Restore is Dangerous and What to do About it".

    Read the article

  • Install Ubuntu 10.10 from loopback mounted ISO image

    - by Zifre
    I have a laptop with a faulty BIOS that has stopped booting from CDs even though it supports it (and it doesn't support booting from USB drives). I am trying to install Ubuntu 10.10 on it. I already had 9.10 installed. I tried using Kexec, but it refused to accept the kernel image. Eventually I found this page which shows how to make GRUB 2 boot from an ISO file. That worked fine, and I am now running the live image from the file. (If I can get this to work, it will be my new preferred way of installing Ubuntu, as it saves CDs and boots much faster.) However, I can't install it. The installer won't make changes to the hard drive, because the partition containing the ISO is mounted (and can't be unmounted because it is in use). Even if I only choose to use other partitions that are not mounted, the installer refuses to go any farther. Clearly, it should be possible using other partitions on the same disk. Is there any way to work around this issue or force the installer to go ahead?
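
    For reference, a GRUB 2 loopback entry of the kind that page describes looks roughly like this (the partition (hd0,1), the ISO path, and the casper file names are assumptions to adapt):

        menuentry "Ubuntu 10.10 ISO" {
            # assumed: the ISO sits in the root of the first partition
            loopback loop (hd0,1)/ubuntu-10.10-desktop-i386.iso
            linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/ubuntu-10.10-desktop-i386.iso noprompt
            initrd (loop)/casper/initrd.lz
        }

    As for the installer refusing to touch the disk: some casper-based live images accept a "toram" boot parameter that copies the image into RAM, after which the partition holding the ISO can be unmounted; whether 10.10 honors it is worth testing before relying on it.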

    Read the article

  • Windows Scheduled Startup Task doesn't appear to be fully working - but why?

    - by Devtron
    I originally tried to use Group Policy to enforce a startup script to run at startup. My startup script is a .CMD file, which calls 10 .exe files. Using Group Policy I could never get this to work, so I looked into using Scheduled Tasks. And here I am. I have tried two different versions of my script (for syntax purposes); I originally thought my syntax could be bad, so I tried a few approaches. Neither works. My #1 .CMD file approach uses commands similar to this:

        start "this is my title" /D "C:\Somepathhere\myExecutable.exe" "..\..\published\wc_task.wfc"

    My #2 .CMD file approach uses commands similar to this (it invokes a shortcut file):

        rundll32 shell32.dll,ShellExec_RunDLL "C:\Somepathhere\bin\Virtual Workflow.lnk"

    Both of these scripts work fine if I manually run them, either by running the .CMD file or by manually forcing the Scheduled Tasks MSC console to "Run" the script. The manual process works fine, but automated it does not. My scheduled task is set for startup and uses "highest privileges" to execute as Admin. At the end of my .CMD script, I added a line to write to a text file, just to prove that the script was being run:

        echo foo > C:\foo.txt

    When I reboot my server and Scheduled Tasks kicks in, I never get my ten .EXE files to run, but I do get C:\foo.txt on my drive. What gives?
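
    One likely culprit, sketched here with the same placeholder paths: startup tasks run before logon, as SYSTEM, with System32 as the working directory, so relative paths like "..\..\published\..." resolve somewhere unexpected - and START's /D switch expects a directory, not the executable. A version that pins everything down might look like:

        rem assumed layout under C:\Somepathhere -- adjust to the real tree
        cd /d "C:\Somepathhere\bin"
        start "wc task" /D "C:\Somepathhere\bin" "C:\Somepathhere\bin\myExecutable.exe" "C:\Somepathhere\published\wc_task.wfc"
        echo foo > C:\foo.txt

    Note also that a task run at startup executes in a non-interactive session, so even if the ten .EXEs do launch, any windows they create will be invisible; Task Manager's process list is a better check than looking for windows.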

    Read the article

  • Black screen appears when booting new install of Ubuntu 11.10 on my desktop, cannot access Grub menu to fix

    - by izn
    I installed 11.10 on my desktop PC but get a black screen after the BIOS screen when I try to boot it. I was able to run 10.04.4 on my hard drive before installing 11.10, and I am also able to run 11.10 from my USB pendrive and CD-ROM. I've tried unplugging all USB devices before booting and also reinstalling 11.10 over itself. Holding the Shift key from the BIOS screen doesn't allow me to access the GRUB menu to try the usual fix: highlight the first entry, press "e" to edit it, navigate to the words "quiet splash", delete them and type "nomodeset" in their place (without quotes), press Ctrl+X to continue booting, and once on the desktop go to System > Administration > Additional Drivers and activate the recommended drivers. So, running 11.10 from my pendrive, I tried editing /etc/default/grub, commenting out the GRUB_HIDDEN_TIMEOUT setting by putting a '#' in front of it to display the grub menu, and setting GRUB_TIMEOUT to a value greater than or equal to 1, e.g. GRUB_TIMEOUT=10. However, when I run sudo update-grub, I get:

        /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?)

    I get the same error with update-grub after:

        sudo mount /dev/sda1 /mnt

    and after:

        sudo grub-install --root-directory=/mnt /dev/sda
        reboot
        sudo update-grub

    Other suggestions to fix the update-grub problem: open Synaptic, purge all the related installed grub packages, reinstall grub-pc, and finally run sudo update-grub; or use Grub Customizer (http://ubuntuforums.org/showthread.php?t=1195275). What would be the best way to approach this? I'm concerned about purging "all the related grub installed packages", but if it's true that some files are corrupted, this would seem necessary. Also, was I executing the correct commands, i.e. with mount and grub-install, before running update-grub?
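
    The grub-probe error usually means the commands are running against the live environment instead of the installed system. A common repair sequence from the pendrive session, assuming the installed root is /dev/sda1:

        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        update-grub              # now sees the real root filesystem
        grub-install /dev/sda    # reinstall GRUB to the MBR of the boot disk
        exit

    This avoids purging packages entirely; the purge-and-reinstall route is more of a last resort.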

    Read the article

  • Getting Server 2008 R2 to ignore all traffic from Internet-facing NIC, leaving it to a VM

    - by Wolvenmoon
    I got into Server 2008 R2 via DreamSpark and would like to start learning on it. I don't have much option but to put it on a system sitting between the Internet and my home LAN, due to electricity bills and the fact that 3 computers in an 11x11 space in 102-degree weather is pretty stygian. Currently I use a ClearOS gateway to manage everything. What I'd like to do is take my Server 2008 R2 box, which has two NICs, and drop it at the head of my network. I'd want Server 2008 R2 to ignore all traffic on the external-facing NIC and pass it to a virtual ClearOS gateway, and to put all its Internet traffic through its other NIC - which will face the rest of my network and be the default gateway for it. The theory is to keep the potentially vulnerable Server 2008 R2 install tucked as far behind a Linux box as possible, without sacrificing too much performance. This is a home network that occasionally hosts dedicated game servers and voice chat servers, so most malicious activity is in the form of drive-by, non-targeted attacks; however, I don't trust Windows Server because I don't know the OS well enough yet. So, three questions: How do I do this? Am I going to be reasonably more secure doing this than if I just let the Server 2008 R2 rig handle all the network traffic and DHCP (not an option)? And should I virtualize the Server 2008 R2 rig instead - and if so, in what? (Core 2 Duo E6600 w/ 5 GB usable RAM)

    Read the article

  • How to find the cause of the main file system going into read-only mode

    - by user606521
    Ubuntu 12.04: the file system goes into read-only mode frequently. First of all, I have already read the existing question "file system is going into read only mode frequently". But I need to know whether it's caused by something other than a dying hard drive. This is a server provided by my client; I am just running some Node.js workers + one Node.js server there, and I am using MongoDB. From time to time (every 20-50 h) the system suddenly makes the filesystem read-only, the mongodb process fails (due to the read-only fs), and my node workers/server (which are started by forever) are just killed. Here is the log from dmesg - I can see some errors and messages that the FS is going read-only, and there is also a JOURNAL error, but I would like to find the cause of those errors: http://speedy.sh/Ux2VV/dmesg.log.txt

    Edit:

        smartctl -t long /dev/sda
        smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.5.0-23-generic] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
        SMART support is: Unavailable - device lacks SMART capability.
        A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    What am I doing wrong? The same happens for sda2. Moreover, now when I type any command that doesn't exist in the shell, I get this:

        Sorry, command-not-found has crashed! Please file a bug report at:
        https://bugs.launchpad.net/command-not-found/+filebug
        Please include the following information with the report:
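
    The smartctl output itself suggests the next step; a sketch (the megaraid device type is just an example for disks hidden behind a RAID controller):

        sudo smartctl -a -T permissive /dev/sda     # push past the capability check
        sudo smartctl -s on -T permissive /dev/sda  # try switching SMART on
        # if the "disk" is a virtual device on a RAID controller, address the
        # physical member instead, e.g.:
        sudo smartctl -a -d megaraid,0 /dev/sda

    If SMART is genuinely unavailable (common on virtualized or hardware-RAID volumes), the dmesg journal errors plus fsck output are the main evidence left for deciding between a dying disk, a cable/controller fault, or filesystem corruption.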

    Read the article

  • Cannot log in to Windows 7 in normal mode, but can in safe mode

    - by Guy
    I have a Windows 7 Ultimate computer (Shuttle) that I built myself, with a Solid State Drive (SSD) in it. It's been working well for a number of months, but now there are problems when I start it. I have 2 users set up on the computer, and when I try to sign in with either user it claims that the password is incorrect. I could understand the odd typo, but I've had my wife try it as well, and we've got the passwords correct. On top of that, it will remain at the login screen for 1 minute and 20 seconds and then spontaneously reboot without shutting down. So I'm trying to work out whether this is a hard disk problem or something else. Any ideas? (I have a nightly backup to a WHS, so it will be easy to recover, but I don't want to do that unless I have to, and I don't want to waste time putting in a new HD just to discover it's something else.) More info: if I start in Safe Mode I am able to log in with the password, and all appears as normal as it can in Safe Mode. A normal boot, however, continues to show the same problem.
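
    Before swapping hardware, a couple of low-cost checks from Safe Mode's command prompt may narrow it down (C: assumed to be the system drive):

        rem schedule a full surface scan for the next boot
        chkdsk C: /r
        rem verify protected system files
        sfc /scannow

    If both come back clean, the fault is more likely a service or driver that loads only in a normal boot; msconfig's Diagnostic startup can bisect that.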

    Read the article

  • How do I keep folders synced and backed up between two Macs using a Linux NAS (rsync?)

    - by Hultner
    I've got two primary computers: a Mac Pro and a MacBook Pro for when I'm on the go. I've also got a Linux server which also acts as a NAS. Currently I back up both computers in their entirety to an external drive with Time Machine, which is rather useless and doesn't sync anything. What I really want is to keep my important files synced between both computers and my NAS (which is running RAID 5). That way I'm not backing up easily replaceable system files, and I've got all my important files in 3 places, two of which are running RAID, so at least 5 drives would have to crash at the same time before actual data loss occurs. The folders I want to keep synced are basically my photo, documents, development, MAMP and work folders; I also want to keep the user Library folder backed up but not synced. I'm thinking I'd have to use rsync, but I don't know how. Before anyone suggests Dropbox and similar services: I don't want to use them for several reasons, among them security (Dropbox obviously proved this), speed (sometimes I'll sync gigabytes of data, and that will be significantly faster locally and probably even through VPN, as I have a Gigabit pipe), space (space on my NAS is cheap and practically limited only by my needs), reliability (even if my internet were to go down, I'd still need to be able to keep my files synced in case I had to go somewhere on the fly), and price (I already have all the hardware, and for the number of gigabytes and the bandwidth I'd need, I doubt there's any free or cheap service). Those are my main reasons for wanting to keep it local. I'm sorry for any spelling or grammatical mistakes I might have made - I'm writing this on my smartphone from a shaky train, and English isn't my mother tongue. I gratefully appreciate any answers, even ones that only partly solve my problem.
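
    A minimal rsync sketch (hostname, NAS path, and folder names are assumptions):

        # push the important folders from the Mac Pro to the NAS;
        # -a preserves permissions/times, -z compresses over the wire,
        # --delete makes the NAS copy mirror local deletions
        rsync -avz --delete ~/Documents ~/Pictures ~/Development user@nas:/volume1/sync/
        # on the MacBook Pro, pull the same tree down to stay in sync
        rsync -avz --delete user@nas:/volume1/sync/Documents ~/

    Bear in mind rsync is one-directional per run; with two machines editing the same folders, a bidirectional tool such as unison (or a careful push-then-pull schedule) avoids silently overwriting the newer copy. For the Library folder, the same command without --delete gives the backed-up-but-not-mirrored behavior.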

    Read the article

  • ReadyBoost in Windows 7

    - by Robert Koritnik
    I bought an SD card today for my photo frame, but when I inserted it into my notebook I saw I could use it for ReadyBoost. Some background: I'm a .NET developer, using VMs and developing web applications (and SharePoint). I use an HP notebook with a Core 2 Duo 2 GHz + 4 GB RAM + 320 GB 7200 RPM HD. I simultaneously run: Visual Studio 2010 with some plugins; SQL Server; Firefox with at least 10 tabs; Chrome with about 5 tabs; IIS; a VM with a Server 2008 machine and SharePoint; and occasionally Photoshop and some InDesign as well. So I don't give my machine a break. :D Question: if I buy myself a really fast SDHC card (like the SanDisk 16GB Extreme 30MB/s - is there anything faster?) and use it with Windows 7 ReadyBoost, will I see any performance gain? Is it going to work something like Seagate's Momentus hybrid drive with 4GB of solid state storage? What could I actually expect if I put this card into my machine? And what would be the recommended size? Observations: I guess redirecting the page file to it would speed up the system, and some VMs on it would probably run faster as well, because they could run in parallel to the HD host system, I guess. Am I right or wrong?

    Read the article

  • Facebook Chat through XMPP protocol on Pidgin Portable - Will not Authorize

    - by Sara Neff
    I heard you can use Facebook chat on desktops now. That's awesome! What I didn't hear is that it is a pain in the butt! Not awesome! I've followed six nearly identical sets of instructions from six different websites, including the one that Facebook generates for you, to get Facebook chat connected through Pidgin. It's the latest portable version, so from what I hear the plugin is out of the question. Whenever I try to connect, I get a message saying "Not Authorized" and buttons to either modify the account info or retry. NOTHING I have done has fixed this, and I can't find anything remotely useful anywhere. I am running Windows XP, and running Pidgin (portable) off of a flash drive. Someone please tell me what I have to do. I read about authorizing the chat on my actual Facebook page. I'd have tried that if I could find out how to do it, but if it's there, they hid it well. HELP?!

    Read the article

  • Expected IOPS for log writing on PS6000X SAN?

    - by dssz
    A customer is experiencing poor Sybase ASE 15 performance on a PS6000X SAN with 16 x 450GB 10K disks in RAID-50. The server is a Dell R710 running Windows Server 2003 R2 64-bit in ESX 4.0.0 (build 256968). I've used sqlio to benchmark the sequential write performance of 4KB blocks on the drive:

        sqlio -kW -t1 -s600 -dE -o1 -fsequential -b4 -BH -LS sqliotestfile.dat

    The result is 1900 IOPS. However, when Sybase is running a sustained workload of small inserts, SAN HQ shows a consistent 590 IOPS (and 100% 4K write activity). It also shows that the write latency increases to 1.2 ms from <1 ms. Monitoring and tests in Sybase demonstrate that the performance problem is IO-related, and in particular there is a lot of wait time writing to the log. The SAN indicates that write caching is enabled. What IOPS should the SAN be capable of for 4K sequential write activity? Also, with write caching enabled, shouldn't the controller be batching up the 4K writes into something more efficient? Any tips on Sybase on ESX would also be appreciated.
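
    At one outstanding I/O, throughput is bounded by latency (1 / 1.2 ms is about 830 IOPS), so 590 IOPS from the database is roughly consistent with the benchmark once real log-flush overhead is added. One way to see whether the controller batches small writes is to repeat the run at a higher queue depth and compare (same flags as above, shorter duration):

        sqlio -kW -t1 -s60 -dE -o1 -fsequential -b4 -BH -LS sqliotestfile.dat
        sqlio -kW -t1 -s60 -dE -o8 -fsequential -b4 -BH -LS sqliotestfile.dat

    If IOPS scale up sharply at -o8, the array batches fine and the ceiling is per-request latency - which for a log writer usually means looking at the path (ESX, iSCSI, cache policy) rather than the spindles.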

    Read the article

  • Variable size encrypted container

    - by Cray
    Is there an application similar to TrueCrypt, but one that can make variable-size containers, as opposed to the fixed-size or only-growing-to-a-certain-amount containers TrueCrypt can make? I want this container to be mountable to a drive/folder, with the size of the outer container not much different from the total size of all the files I put into the mounted folder, while still providing strong encryption. To put it in other words, I want a program like TrueCrypt which not only automatically grows the container if I put in new files, but also decreases its size if some files are deleted. I know there are some issues, of course, and it would not work 100% like TrueCrypt, because TrueCrypt basically works on the sector level of the disk, giving all the filesystem control to the OS; so when I remove a file, it might as well be left there, or there might be fragmentation issues that would stop simply truncating the volume from working. But perhaps a program can be built some other way? Instead of providing a sector-level interface, it would provide a filesystem-level interface - a filesystem inside a file which would support shrinking when files are deleted?
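
    Filesystem-level encryption of this sort exists: EncFS, for example, stores each file encrypted inside an ordinary directory, so the "container" grows and shrinks exactly with its contents. A sketch, assuming a Debian-style system:

        sudo apt-get install encfs
        encfs ~/.vault ~/vault      # first run creates the encrypted store in ~/.vault
        # work in ~/vault; deleting a file immediately frees space in ~/.vault
        fusermount -u ~/vault       # unmount when finished

    The trade-off versus a TrueCrypt volume is that per-file encryption leaks metadata: file count, approximate sizes, and directory shape are visible even when locked.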

    Read the article

  • What does this ssh error mean?

    - by kevin
    This is my last resort. I've been trying to figure out the problem here for hours. Here's the deal: I have copied my private key from machine #1 onto machine #2. Machine #1 is able to connect via ssh to a server with my public key just fine, but machine #2 gives the following output when trying to connect to the server:

        $ ssh -vvv -i /home/kevin/.ssh/kev_rsa [email protected] -p 22312
        OpenSSH_5.3p1 Debian-3ubuntu6, OpenSSL 0.9.8k 25 Mar 2009
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 192.168.1.244 [192.168.1.244] port 22312.
        debug1: Connection established.
        debug3: Not a RSA1 key file /home/kevin/.ssh/kev_rsa.
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug3: key_read: missing keytype
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        ...
        Permission denied (publickey).

    There is obviously more debug output that I have omitted, and I can provide it upon request. I am convinced, however, that it doesn't like my private key file. I also had a suspicion that it has to do with how I copied it from machine #1 to machine #2: I copy/pasted the text of the private key onto a flash drive. This might be the problem; however, when I duplicated this method on another working private key file and did a diff of the original against the copy/pasted one, they were identical. I've been struggling with this. If I could just get a little more information on why it doesn't like my key, I could fix it, I'm sure. Anyone have any ideas on this? Is there some metadata somewhere that tells ssh that a file is in fact an RSA key?
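
    A few quick checks on the copied key, using the path from the command above:

        chmod 600 ~/.ssh/kev_rsa         # ssh refuses keys with loose permissions
        ssh-keygen -y -f ~/.ssh/kev_rsa  # prints the public key only if the file parses cleanly
        file ~/.ssh/kev_rsa              # look for "with CRLF line terminators"
        dos2unix ~/.ssh/kev_rsa          # if installed: strip CRLFs a paste may have added

    Note that the "Not a RSA1 key file" and "key_read: missing whitespace" lines appear even with a healthy key (ssh first tries to parse the file in other formats), so "Permission denied (publickey)" is the only real failure here - which can also simply mean the server doesn't have this key's public half in authorized_keys for that account.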

    Read the article

  • How can I do an SELinux filesystem relabel without rebooting first?

    - by Skaperen
    I can touch the file /.autorelabel and reboot, and during initialization on the way back up it will do the SELinux relabel for me. But I want to do this in a different situation, where the system has just been copied to a hard drive image. I can chroot to the originating file tree, or chroot to the just-populated device image, and run it. I just can't find anything that says what should be run. This image is being made into an AMI on AWS EC2, and contains CentOS 6.3. The time the relabel takes is too long (6 minutes or more), so I want to move it into the image build, where the extra time is not an issue (because it happens once instead of every time an AMI is launched). I can make this relabel the very last thing just before the filesystem is unmounted for the last time, before it becomes an AMI and is ready to launch. I just need to know what to call to do it. I have searched man pages with no luck. I have searched system init scripts, but where /.autorelabel is detected, it is unclear what is happening. Documents like http://www.centos.org/docs/5/html/5.2/Deployment_Guide/sec-sel-fsrelabel.html only describe methods that still do the work after a reboot. I need the work done BEFORE the "reboot" (unmount, build AMI, launch ready to go). The big point is: yes, there will be a reboot, but I want the relabel done before that, so it won't be done every time an AMI is launched (because it takes so long).
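
    The boot-time relabel ultimately comes down to setfiles/fixfiles, which can be run directly. Two sketches, assuming the image is mounted at /mnt with the targeted policy inside it:

        # from inside the chroot:
        /sbin/fixfiles -f -F relabel

        # or from the host, labeling the tree at an offset root:
        setfiles -r /mnt /mnt/etc/selinux/targeted/contexts/files/file_contexts /mnt

    Run it as the last step before the final unmount, and remove /.autorelabel if present, so launched instances skip the six-minute pass.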

    Read the article

  • How do I make my Boot Camp partition bootable again?

    - by KJFMusic
    I'm having a similar problem to everyone else in this posting. I have 5 partitions, 3 of which I created: one for my Mac OS X Lion installation, one for my Windows 7 installation, and a third for storage. Everything ran fine for quite some time, until recently my Windows 7 installation suddenly stopped booting. Instead of a start-up screen I get:

        Windows failed to start. A recent hardware or software change might be the cause.
        File: \BOOT\BCD
        Status: 0xc000000d
        Info: An error occurred while attempting to read the boot configuration data

    Mac OS X Lion starts up fine. I'm unable to mount my "Bootcamp" partition or the "Storage" partition; on top of that, "Storage" has been renamed to "disk0s5". When I installed Windows 7, it didn't recognize the "Storage" partition that was created in Lion, so it merged what it thought was free disk space (I'm assuming the same space that Mac OS recognized as Storage) into the root drive of Windows 7 (Bootcamp). Are you able to assist?
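
    The stock repair for a damaged BCD store is the recovery console on a Windows 7 install disc (boot it, choose "Repair your computer", then Command Prompt) and:

        rem list the Windows installations the loader can find
        bootrec /scanos
        rem rebuild the BCD store from that scan
        bootrec /rebuildbcd
        rem rewrite the partition boot sector if it is also damaged
        bootrec /fixboot

    That addresses the boot side only; the merged/renamed partitions are a separate problem, and it's worth imaging the disk before experimenting further.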

    Read the article

  • Why is uploading to S3 so slow?

    - by Tom Marthenal
    I am using s3cmd to upload to S3:

        # s3cmd put 1gb.bin s3://my-bucket/1gb.bin
        1gb.bin -> s3://my-bucket/1gb.bin  [1 of 1]
        366706688 of 1073741824    34% in  371s   963.22 kB/s

    I am uploading from Linode, which has an outgoing bandwidth cap of 50 Mb/s according to support (roughly 6 MB/s). Why am I getting such slow upload speeds to S3, and how can I improve them?

    Update: Uploading the same file via SCP to an m1.medium EC2 instance (SCP from my Linode to the instance's EBS drive) gives about 44 Mb/s according to iftop (any compression done by the cipher is not a factor).

    Traceroute: Here's a traceroute to the server it's uploading to (according to tcpdump):

        # traceroute s3-1-w.amazonaws.com.
        traceroute to s3-1-w.amazonaws.com. (72.21.194.32), 30 hops max, 60 byte packets
         1  207.99.1.13 (207.99.1.13)  0.635 ms  0.743 ms  0.723 ms
         2  207.99.53.41 (207.99.53.41)  0.683 ms  0.865 ms  0.915 ms
         3  vlan801.tbr1.mmu.nac.net (209.123.10.9)  0.397 ms  0.541 ms  0.527 ms
         4  0.e1-1.tbr1.tl9.nac.net (209.123.10.102)  1.400 ms  1.481 ms  1.508 ms
         5  0.gi-0-0-0.pr1.tl9.nac.net (209.123.11.62)  1.602 ms  1.677 ms  1.699 ms
         6  equinix02-iad2.amazon.com (206.223.115.35)  9.393 ms  8.925 ms  8.900 ms
         7  72.21.220.41 (72.21.220.41)  32.610 ms  9.812 ms  9.789 ms
         8  72.21.222.141 (72.21.222.141)  9.519 ms  9.439 ms  9.443 ms
         9  72.21.218.3 (72.21.218.3)  10.245 ms  10.202 ms  10.154 ms
        10  * * *
        (hops 10 through 30 all time out)

    The latency looks reasonable, at least until the server stops responding to ping requests.
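
    One quick diagnostic is to see whether the limit is per-TCP-stream: run two uploads in parallel and compare the aggregate (file names assumed):

        split -b 512m 1gb.bin part_          # yields part_aa and part_ab
        s3cmd put part_aa s3://my-bucket/part_aa &
        s3cmd put part_ab s3://my-bucket/part_ab &
        wait

    If the two together roughly double the single-stream rate, the bottleneck is TCP window/latency per connection, and the cure is parallelism (e.g. a multipart-capable client) or window tuning rather than anything on Linode's side.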

    Read the article

  • Windows Media Player 12 Library import keeps dying

    - by duckworth
    I cannot get WMP 12 to import my library. I have searched various forums and tried all the common solutions, like disabling media sharing, deleting my %LOCALAPPDATA%\Microsoft\Media Player directory and reimporting, etc., but nothing works. I have even removed the Media features from Windows setup and re-added them. I have a large mp3 collection shared on the network from another Windows box. I add the folder (tried as a mapped drive and as a UNC path) and it begins importing. About 30 minutes into the import (when CurrentDatabase_372.wmdb hits just under 400MB), WMP stops importing, all of the icons in WMP turn to red x's, and my library is gone. I close and reopen WMP 12: the library is empty, CurrentDatabase_372.wmdb is small, and it starts importing again. Rinse, lather, repeat. I am going nuts, as WMP 11 on Vista handles this same setup perfectly. I am at my wits' end on what else to try. I am running a legit Windows 7 Ultimate x64 RTM install. Here is a screenshot of what WMP12 looks like when the import dies. Edit: OK, I just confirmed this is definitely a problem not specific to my computer or configuration. I did a clean installation of Windows 7 Ultimate x86 on an old test machine, opened WMP12, added the same network folder of mp3s, and it crashed about an hour into the import, with the same appearance as the screenshot above and the library disappearing. So the problem has to be one of several things: the large size of the library, the fact that the library is on the network, or a specific file or files causing the player to crash.

    Read the article

  • Extract large zip file (50 GB) on Mac OS X

    - by chingjun
    I was trying to move my files to another hard drive, so I archived all my photos in one large ZIP file using the Mac OS X built-in compress function. But the file failed to extract. I've tried many programs, but none of them were able to extract the file: Mac OS X's extract utility, StuffIt Expander, 7-Zip (command line) - all failed. Mac's Archive Utility and StuffIt don't seem to support large files, and 7-Zip's command-line version gave an error stating the archive is unsupported. I had no luck in Windows either, as many of my files have Chinese filenames which couldn't be extracted with the correct names under Windows. Are there any programs that can support large files, can handle files compressed using Mac OS X's compress function, and support UTF-8 filenames? With or without a GUI is fine. Update: Well, I made the wrong decision in compressing the files, and it's already too late. I thought I would be able to extract the file if I could compress it. The original copies are gone; only a large ZIP file is left. I have tried using 'unzip', but it says "End-of-central-directory signature not found". I guess it doesn't have large-file support either. I would try the Windows Vista method as stated by SuperMagic, but I need to borrow a computer for that. Anyway, thank you everyone, but please keep the suggestions coming for software that could possibly extract that file.
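
    Two things worth trying on the Mac side (paths assumed). The bundled ditto tool often extracts Finder-created zips, including >4 GB archives with UTF-8 names, where the GUI tools fail:

        ditto -x -k /Volumes/Backup/photos.zip ~/photos-extracted

    And if unzip's "End-of-central-directory signature not found" means the archive itself is damaged rather than merely huge, zip can attempt a salvage:

        zip -FF /Volumes/Backup/photos.zip --out ~/photos-fixed.zip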

    Read the article

  • Win Server 2k and Win 7 client

    - by Ray Kruse
    I have a Win Server 2000 system with AD configured. The network consists of an OKI printer, a network drive, a wifi router, a Win 2k client, and the server. I'm trying to connect a Win 7 client. The purpose of the network, besides sharing equipment, is to move files from client to client and scatter backups over more than one machine. The Win 7 client is configured for DHCP and does in fact receive its IP and DNS configuration from the server, and it sees the printer, wifi router, and network drive, but does not see the Win 2k client or the Win 2k server. I have tried setting the LAN Manager Authentication Level to 'Send LM & NTLM responses' with the 128-bit encryption requirement removed. I've also done the registry hack on the 'LmCompatibilityLevel' key. Neither has helped. I have two questions: Is there a fix, or is Win 2k totally incompatible? And is the best (or quickest/cheapest) fix to upgrade the server to Win 2k3 and not worry about the Win 2k client? Thanks for any help. Ray Kruse, Buffalo, KY
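
    For reference, the LmCompatibilityLevel change on the Windows 7 side can be applied in one line (value 1 = send LM & NTLM, use NTLMv2 session security if negotiated):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

    Beyond authentication, Win 2k-era browsing depends on NetBIOS: check that "NetBIOS over TCP/IP" is enabled on the Win 7 adapter and that the Computer Browser service is running, since name-resolution failures look exactly like "does not see" symptoms.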

    Read the article

  • Create Windows AMI with instance storage

    - by Jonathan Oliver
    I have a business use case and workflow where local/instance/ephemeral storage for an EC2 instance is ideal. Unfortunately I'm coupled to a Windows platform for this particular task, and the EC2 Windows offering appears to have some deficiencies related to AMI creation. In essence, I'm trying to figure out if there's a way to attach local instance storage to a Windows EC2 instance using the typical command-line interface (because the Amazon website GUI doesn't support it) and then to somehow create an AMI based upon that. I've tried creating a snapshot and then creating a Windows AMI based upon the snapshot, but of course the docs say this is unsupported and makes an unbootable AMI. In short, here's what I'm trying to do: be able to run a Windows instance (EBS/S3 instance doesn't matter); attach local instance storage as drive D:; persist that configuration as an AMI such that I can start lots of them as necessary from either the GUI, command line, or REST API; and be able to take a launched instance, update software, shut down, and create another AMI based upon that. Wash, rinse, repeat. One other potential option, which isn't horrible but isn't ideal, is to create an AMI which has 2 EBS volumes already attached (system+apps and data). Essentially, every time I start an instance based upon the AMI, it'll create 2 new EBS volumes of pre-determined size. I'm trying to avoid that scenario if possible.
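
    With the era's EC2 API tools, ephemeral disks are wired up through block-device mappings, which can also be baked into the image. A sketch (the AMI/instance IDs and device name are placeholders):

        # launch with instance storage mapped in
        ec2-run-instances ami-12345678 -t m1.large -b "xvdb=ephemeral0"
        # for an EBS-backed Windows instance, persist the mapping while imaging it
        ec2-create-image i-87654321 -n "my-windows-ami" -b "xvdb=ephemeral0"

    Inside Windows, the mapped ephemeral volume can then be brought online and assigned D: with Disk Management or diskpart; it is the mapping, not the drive letter, that the AMI preserves.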

    Read the article

  • Unable to authenticate to Windows Server 2003 for file browsing as a non-administrator user

    - by Fopedush
    I've got a Windows Server 2003 box containing a RAID 5 array I use for mass storage. I want to set up a special non-administrator account that can be used to browse files over the network, with only read access. Ideally I'll map my network drive as this user to avoid accidentally hosing my data, and mount as an administrator user on occasions where I actually need write access. I've created a non-administrator user on the Windows Server box (called "ReadOnly") and granted the user read permissions on the folders I need. However, when I try to browse to the files and authenticate as this user, I'm told "Permission denied". If I throw the ReadOnly user into the Administrators group, however, I can authenticate and browse just fine. I am, of course, only attempting to browse to folders for which I have given this user read permissions. Obviously my ReadOnly user is missing some privilege here, but I can't figure out what it is. I've been digging around in the Group Policy editor all day to no avail. What am I missing? Fake edit: I'm doing my browsing from a Windows 7 box, but I don't think that is relevant.
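
    Two gates have to open for a network read: the NTFS ACL and the share-level permission, and a share created with custom permissions may not include the new account. A sketch with the names above (the "Storage" share name and path are assumptions):

        net share Storage=D:\Storage /grant:ReadOnly,READ
        cacls D:\Storage /E /G ReadOnly:R

    Also worth checking under Local Security Policy > User Rights Assignment: "Access this computer from the network" must include the account (or a group it's in), or non-admin network logons fail in exactly this way.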

    Read the article

  • Computer loses all installed programs and appears to return to an OS-only state

    - by Jake
    This is a story regarding 3 laptops of different brands and models. On separate occasions, I configured each of these Windows 7 / Vista computers with the necessary configuration and applications (which are supposedly the same): e.g. joining the office domain, the same Windows updates, Microsoft Office, etc. These machines were configured in our office in Singapore and then taken to India for use. One day in India, when booting up the laptop, all went fine until it reached the login screen: it was no longer possible to log in with domain credentials. Logging into the laptop's local admin account led to the discovery that the machine had returned to an "OS-only state". All the configuration and applications were gone. The actual user profiles are still on the C: drive, so files can still be retrieved, but under Control Panel > Uninstall Programs it is evident that at least the registry is corrupted. The above scenario happened to the first 2 laptops. For the third, the system reports "Operating System Not Found" on boot-up. I cannot think of any reason except to suspect a power fluctuation issue. The question is: will a power issue create this behaviour? What else can cause this issue?

    Read the article

  • What would keep a Microsoft Word AutoNew() macro from running?

    - by Chris Nelson
    I'm using Microsoft Office 2003 and creating a bunch of template documents to standardize some tasks. I know it's standard practice to put the templates in a certain place where Office expects to find them, but that won't work for me. What I want is to have "My Template Foo.dot", "My Template Bar.dot", etc. in the "My Foo Bar Stuff" folder on a shared drive, and have users double-click on a template to create a new Foo or Bar. What I'd really like is for the user to double-click on the Foo template, be prompted for a couple of items related to their task (e.g., a project number), and have a script in the template change the default Save name to something like "Foo for Project 1234.doc". I asked on Google Groups and got an answer that worked... for a while. Then my AutoNew macro stopped kicking in when I created a new document by double-clicking on the template. I have no idea why, or how to debug it. I'm a software engineer with 25+ years of experience, but a complete Office automation noob. Specific solutions and pointers to "this is how to automate Word" FAQs are welcome. Thanks.

    Read the article

  • How to recover deleted NTFS partitions?

    - by Frank
    Last night I made a terrible mistake. I was reinstalling Windows and I accidentally deleted all the partitions on all my drives. I realized my mistake before I had created any new partitions, so nothing has been written to any of the disks. I'm currently at my wits' end about what I'll do if I don't manage to recover the data. I have two 1TB drives and a 2TB drive. One of the 1TB drives was the one I was supposed to be reformatting, so there's nothing to recover there. I am currently in a Linux live CD. In the article http://support.microsoft.com/kb/245725 Microsoft advises recreating the exact same partition, choosing not to format it, and then recovering the backup boot sector from the end of the NTFS volume. But none of the drives I want to recover are bootable drives. Does that mean I do not need to rewrite the boot sector? As in, if I simply recreate a partition of the same size, will it see all my data? Or would I be better off using the TestDisk utility (http://www.cgsecurity.org/wiki/TestDisk)? Please help, I'm desperate!
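
    TestDisk is built for exactly this case, and since nothing has been written the odds are good. A sketch from the live CD (the device name is an assumption; run once per disk):

        sudo testdisk /dev/sdb
        # in the menus: Create a log -> select the disk -> Intel partition table
        # -> Analyse -> Quick Search -> mark found partitions as P(rimary) -> Write

    Recreating a same-size partition by hand can also work for a non-boot NTFS volume, but TestDisk finds the old boundaries for you, which removes the main way that approach goes wrong (an off-by-a-little start sector).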

    Read the article

  • Why does the file date always change to the current date?

    - by Marshall
    We are a programming shop, but this is not a programming question. My boss has put an external HD on the network. It contains the 'home' folders for users on the network. He uses it to place VB projects that he wants me to work on. But no matter what date and time he places a project on the drive, the file dates (modified) always show the current date, though nothing in the files has changed. It makes it very hard to confirm that he has given me the latest versions. (He is not a fan of version control, and nothing I do will convince him otherwise.) Any ideas why this happens and how to prevent it? P.S. As I wrote this, I decided to add the last-accessed date to the file display, and those dates happen to show the dates I expect to see. Why is the modified date getting changed, but not the accessed date? Doesn't the accessed date change whenever the files are opened or read, changed or not? Note: I use Directory Opus 9, a replacement for the Windows file browser. Thanks, Marshall
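
    If the copies are being made by a tool that rewrites files rather than copying them (some sync/backup utilities do), timestamps get refreshed. As a control, a robocopy run that explicitly preserves times should keep the original dates (the paths are placeholders):

        robocopy "C:\Projects\FooProj" "\\server\home\marshall\FooProj" /E /COPY:DAT /DCOPY:T

    If files copied this way keep their dates while the usual method doesn't, the culprit is the transfer tool, not the external HD.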

    Read the article
