Search Results

Search found 2041 results on 82 pages for 'deleting'.

  • Why won't Google Chrome install on my Windows 7?

    - by quakkels
    I can tell something is wrong with my computer. A little history: Google Chrome was not running correctly. When I was using Facebook, any link that ran an Ajax JavaScript function would not work. No error was being thrown; it just wasn't working. I uninstalled Chrome and reinstalled it. On re-install, a toolbar automatically installed with Chrome: BitTorrentBar. I did not install BitTorrentBar; it installed on its own. Bad. When BitTorrentBar was active, the Ajax functions on Facebook worked. When I deactivated or uninstalled BitTorrentBar from Chrome extensions, the Ajax stopped working. I scanned the computer with Avast (free version) and Spybot Search and Destroy (also free). They found nothing wrong. I uninstalled Chrome using Control Panel and then went through my file system deleting anything that had "google", "BitTorrent", "chrome", or "conduit" in it. Conduit seems to distribute BitTorrentBar. After doing all that, I tried to reinstall Chrome and got this error:

        System.ComponentModel.Win32Exception: The system cannot find the file specified
            at System.Diagnostics.Process.StartWithShellExecuteEx(ProcessStartInfo startInfo)
            at System.Diagnostics.Process.Start()
            at System.Diagnostics.Process.Start(ProcessStartInfo startInfo)
            at ClickOnceBootStrap.ClickOnceEntry.Main()

    This error interrupts the Chrome install every time, and computer scans don't show anything wrong. So then I thought this might be a .NET Framework error; perhaps I deleted something I wasn't supposed to. So I reinstalled .NET Framework 4 and checked the repair option. I also ran Windows Updates. When I tried to download Chrome again, I got the same error as listed above. What should I do?

    Read the article

  • "Add correct host key in known_hosts" / multiple ssh host keys per hostname?

    - by Samuel Edwin Ward
    Trying to ssh into a computer I control, I'm getting the familiar message:

        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!        @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
        Someone could be eavesdropping on you right now (man-in-the-middle attack)!
        It is also possible that a host key has just been changed.
        The fingerprint for the RSA key sent by the remote host is [...].
        Please contact your system administrator.
        Add correct host key in /home/sward/.ssh/known_hosts to get rid of this message.
        Offending RSA key in /home/sward/.ssh/known_hosts:86
        RSA host key for [...] has changed and you have requested strict checking.
        Host key verification failed.

    I did indeed change the key. And I read a few dozen postings saying that the way to resolve this problem is by deleting the old key from the known_hosts file. But what I would like is to have ssh accept both the old key and the new key. The language in the error message ("Add correct host key") suggests that there should be some way to add the correct host key without removing the old one. I have not been able to figure out how to add the new host key without removing the old one. Is this possible, or is the error message just extremely misleading?
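
    For reference, a minimal sketch of appending an additional key for the same host rather than replacing the old entry (assuming OpenSSH; the hostname is a placeholder, and whether this silences the warning depends on how the installed OpenSSH version handles multiple keys of the same type per host):

        # Fetch the server's current RSA key and append it to known_hosts,
        # leaving the existing entry for the same host in place:
        ssh-keyscan -t rsa host.example.com >> ~/.ssh/known_hosts

        # If the host names in known_hosts are hashed, rewrite the file so the
        # newly appended entry is hashed in the same way:
        ssh-keygen -H -f ~/.ssh/known_hosts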

    Read the article

  • Makecert.exe hangs

    - by Robert
    I was following the steps in Scott Hanselman's blog post describing how to create a certificate authority and code signing certificate for PowerShell scripts. Initially, I created the certificate authority and a personal certificate and used it to sign a PowerShell script successfully. All went as described in the blog post. The problem starts (as most do) when I did something that was (probably) stupid, although it seemed reasonable at the time. I wanted to start over and repeat the process with a clean slate, so from the MMC certificates snap-in console I deleted the personal certificate and the certificate authority I had created previously. After that, any time I try to use makecert (just as I did the first time around), makecert either hangs or faults (which prompts to end or debug). Did I hose something up by deleting via the certificates snap-in? It didn't complain or warn me that it could be potentially hazardous. Is this just coincidence and something else entirely could be hosed? I have Event Log entries from the times when makecert crashed, which all look very similar; here is one:

        Log Name: Application
        Source: Application Error
        Date: 8/5/2009 3:55:04 PM
        Event ID: 1000
        Task Category: (100)
        Level: Error
        Description: Faulting application makecert.exe, version 6.0.6000.16384, time stamp 0x4545910b,
        faulting module ntdll.dll, version 6.0.6002.18005, time stamp 0x49e03821, exception code
        0xc0000005, fault offset 0x00067409, process id 0xe58, application start time 0x01ca160efdf30625.

    Anyone have any ideas as to what exactly caused this and/or what I can do to fix it? I'm on 32-bit Vista Enterprise w/SP2.

    Read the article

  • Time Machine doesn't back up some folders/files (that it should)

    - by Eric
    MacBook Pro 17" (Snow Leopard) -- WD 2TB external drive MacBook Pro 13" (Snow Leopard) -- Seagate 1TB external drive I find that Time Machine sometimes doesn't back up new folders (and the files in them). This occurs both when I choose "Back Up Now" from the Time Machine icon in the Menu Bar and in TM's scheduled backups. These are not excluded folders (nor are then in the TM do-not-back-up list); they're perfectly normal folders (at various locations) inside my home folder. The only way to force them to be backed up is to restart the computer (unmounting & mounting the TM external disk does not help). There seems to be a correlation with new folders (i.e., it's more likely to happen that an entire new folder is not backed up), but this may just be observer bias (because those are the folders that I go check to see if they've been backed up). It's not computer dependent (it happens on two different computers). It's not external disk dependent (it happens on two different external disks). It's not time dependent (not restarting for several days does not fix the problem). What does a restart change that these other events don't? I'm considering deleting the /.fseventsd folder (without restarting the computer) to see if that helps. I haven't tried logging out and logging in (without restarting the computer).

    Read the article

  • Gentoo Linux useful utilities

    - by Alakdae
    I want to make a list of utilities that come in handy in Gentoo (general Linux tools available in all distributions are also appreciated). What tools and commands do you use and consider helpful in administering a Gentoo server? I will update the list with commands from answers from time to time. A small maintenance sketch combining a few of them follows the list.

    eclean -- utility for cleaning distfiles and binary packages.
        Usage example: eclean distfiles
        Effect: cleans out the files in /usr/portage/distfiles. Pretty handy.
        Package: app-portage/gentoolkit

    eix -- very useful tool for getting information about a package. Similar to "emerge -s" but much faster and more precise.
        Usage example: eix gentoolkit
        Effect: shows information about the package, such as available versions, masked versions, installed versions and description.
        Package: app-portage/eix

    eix-test-obsolete -- checks the system for obsolete, redundant, or uninstalled entries in package.keywords, package.mask, package.unmask, package.use and package.cflags.
        Usage example: eix-test-obsolete
        Effect: shows non-matching entries, redundant entries, and uninstalled entries.
        Package: app-portage/eix

    equery -- another very useful tool for getting information about packages (listing package files, checking which files belong to which package and much more).
        Usage example: equery b emerge
        Effect: shows which package installed a file called emerge.
        Package: app-portage/gentoolkit

    genlop -- utility for extracting information about emerged ebuilds.
        Usage example: genlop -l --date yesterday
        Effect: shows a list of packages that were emerged yesterday.
        Package: app-portage/genlop

    glsa-check -- checks whether the system is affected by GLSAs (security issues).
        Usage example: glsa-check -l affected
        Effect: lists the GLSAs that the system is affected by.
        Package: app-portage/gentoolkit

    rc-update -- utility for managing (adding, deleting) runlevel scripts.
        Usage example: rc-update add syslog-ng default
        Effect: adds syslog-ng to the default runlevel.
        Package: sys-apps/baselayout

    revdep-rebuild -- scans libraries and binaries for missing shared library dependencies.
        Usage example: revdep-rebuild
        Effect: gathers binary and library information, checks for dependencies, and rebuilds packages with missing dependencies.
        Package: app-portage/gentoolkit
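
    By way of illustration, a minimal weekly maintenance sketch chaining several of the tools above (the log path is an assumption; adjust to taste):

        #!/bin/bash
        # Hedged sketch of a weekly Gentoo housekeeping run using the tools listed above.
        LOG=/var/log/gentoo-maintenance.log   # assumed log location

        {
            date
            glsa-check -l affected        # list security advisories the system is affected by
            eix-test-obsolete             # flag stale entries in /etc/portage/package.*
            revdep-rebuild --pretend      # show packages with broken shared-library deps
            eclean distfiles              # drop old source tarballs from /usr/portage/distfiles
        } >> "$LOG" 2>&1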

    Read the article

  • MMC crashes on Windows Server 2008 x64 - Exchange console, event viewer

    - by David M Williams
    Help! I don't know what happened; this server has been very reliable but suddenly began having problems with a particular .NET 2.0 web site simply hanging - it wouldn't load at all. However, another ASP.NET site was still fine. Reinstalling the site didn't fix it, nor did deleting and re-creating the application within IIS. Trying the event viewer was met with a horrifying "Microsoft Management Console has stopped working". Some Googling led me to believe the .NET framework was the problem. I found a tool called the .NET cleanup tool - http://blogs.msdn.com/astebner/pages/8904493.aspx - which cleaned out .NET entirely. I reinstalled .NET 1.1 and 3.5 (which installed 2.0 and 3.0 as well). Using the .NET verification tool - http://blogs.msdn.com/astebner/pages/8999004.aspx - I believe these have all installed ok. However, my server is in worse shape now. The Exchange 2010 Management Console crashes with an MMC error and now my other (previously reliable) .NET web app now hangs on loading too. I thought I should use Computer Management to remove and re-add the application and web server roles but sure enough, MMC crashes. If anyone can help I will be extremely grateful. Thank you !

    Read the article

  • Mac OS X L2TP VPN won't connect

    - by smokris
    I'm running Mac OS X Server 10.6, providing an L2TP VPN service. The VPN works just fine when connecting from all computers except one --- this one computer stays at the "Connecting..." stage for a while, then says "The L2TP-VPN server did not respond". In the console, I see this:

        6/7/10 10:48:07 AM pppd[341] pppd 2.4.2 (Apple version 412.0.10) started by jdoe, uid 503
        6/7/10 10:48:07 AM pppd[341] L2TP connecting to server 'foo.bar.baz.edu' (256.256.256.256)...
        6/7/10 10:48:07 AM pppd[341] IPSec connection started
        6/7/10 10:48:07 AM racoon[342] Connecting.
        6/7/10 10:48:07 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 1).
        6/7/10 10:48:08 AM racoon[342] IKE Packet: receive success. (Initiator, Main-Mode message 2).
        6/7/10 10:48:08 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 3).
        6/7/10 10:48:08 AM racoon[342] IKE Packet: receive success. (Initiator, Main-Mode message 4).
        6/7/10 10:48:08 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 5).
        6/7/10 10:48:11 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit).
        6/7/10 10:48:14 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit).
        6/7/10 10:48:17 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit).

    ...and the "retransmit" messages continue until the error message pops up. So far I've unsuccessfully tried:

        - rebooting
        - deleting the VPN profile and recreating it
        - verifying the client's internet connection (it is able to reach the VPN server)
        - connecting through several different networks (in case a router was blocking VPN packets)
        - disabling the Mac OS X Firewall on the client
        - making sure that the VPN settings exactly match those of other working computers
        - running software update (the client is on 10.6.3)

    Any ideas?
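
    Not part of the original question, but one hedged way to see where the Phase 1 exchange stalls is to capture the IKE/NAT-T traffic on the client while it tries to connect (the interface name is an assumption; use whichever interface is active):

        # Watch IKE (UDP 500), NAT-T (UDP 4500) and ESP while the client connects.
        # If message 5 leaves the client but nothing ever comes back, the problem is
        # upstream (NAT/firewall or the server); if nothing leaves at all, it is local.
        sudo tcpdump -n -i en1 'udp port 500 or udp port 4500 or ip proto 50'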

    Read the article

  • Windows 7 libraries and folder redirection nightmare

    - by Lobuno
    Hello! In our Active Directory we deploy a policy to our clients where the personal directory (My Documents) is redirected to a file server of ours: \\server\share\username\Documents. On older systems everything worked fine. In Windows 7, some users are experiencing the following symptoms:

        - The Documents library is EMPTY.
        - Where the Documents library should be shown in Explorer, an empty white icon is displayed. No caption.
        - Right-clicking the Documents library to edit the folders that are part of the library brings the dialog up. However, that dialog is unusable: no folder is present there and clicking Add folder does nothing.
        - Deleting the library and letting it auto-create doesn't solve the problem.
        - The shared directory can be accessed via UNC paths and it can be mounted as a shared drive as well. The library is still broken.
        - The shared drives are on a W2008 indexed server.
        - Using the Windows Library tool utility doesn't solve the problem.

    What can the cause of this problem be and how can it be solved?

    Read the article

  • Error 53 - The network path was not found.

    - by Jack
    I have a machine in my Active Directory domain that I can no longer "net view" from other machines in the domain. This is a Windows XP Pro machine. It is hosting a VMware virtual machine of my Domain Controller. If I attempt to net view [machine name], I get System error 53, "The network path was not found." This is not a DNS issue; the same thing happens with the machine's IP. I don't think it's a firewall issue; I turned the firewall off on this machine. As I mentioned, it has worked in the past, and then stopped for no reason that I can see. I didn't (intentionally) change the software. I CAN get to the VMs hosted on this machine, can connect to their shares, net view them, etc. All other machines can see each other. In fact, the problem machine can see other machines and access their shares just fine. I tried removing the machine from the domain and re-adding it. I tried deleting the shares and recreating them. Not sure how to troubleshoot this any further. Any ideas?

    Read the article

  • How to fix Browser Blue Screen of Death?

    - by WilliamKF
    I am running Windows XP SP3 with Firefox v3.6.2 and Internet Explorer, and both browsers are causing the Blue Screen of Death. If I run in Windows safe mode, it does not occur, but running normally it seems my Firefox profile is going bad, and certain web pages cause the BSOD. IE also gets a BSOD on some pages. For example, presently, if I visit ebay.com in Firefox, it gets a BSOD. It also fails when visiting http://www.google.com/ig?hl=en&source=iglk. IT removed my Firefox profile and that seemed to fix the issue for a while. However, now it has started occurring again. I turned off all Firefox extensions and it still occurs. I'd like to fix my system so this does not occur. The IT folks don't seem to be able to solve this, so I am trying to fix it on my own. The BSOD says something like (from memory) DRIVER_IRQL_NOT_LESS_OR_EQUAL. Why would safe mode avoid the issue, and what does that tell us about the probable cause? I don't want to have to keep deleting my profile, so I'd like to find out the cause of the corruption.

    Read the article

  • Why are snapshots considered temporary backups, not real backups?

    - by Samselvaprabu
    I am using VMware ESXi. In our team we used to rely on snapshots as long-term backups. Then we faced issues like memory spillover and the server hanging. I started reading VMware knowledge base articles and elsewhere, and everywhere it was recommended not to keep snapshots for a long time; VMware itself advises keeping snapshots for a maximum of three days. But our team kept asking us to have at least two permanent snapshots (kept until the VM is deleted; sometimes we may use the VM for a year). One snapshot is for the fresh machine state, so when we complete testing an application, we can revert to the fresh state and install another application (if I did not allow that, I might often need to re-create the VM). The other snapshot is for keeping the VM in some known state (maybe they found an issue and want to keep that state for some time, or they installed prerequisites for the application and want the machine ready for testing). Logically, their needs seem fair. But if I allow that, I have to let them hold the snapshots for a long time. We are not using the VM as a mail server or database server. Why does keeping snapshots for a long time have an adverse effect? Why are snapshots considered temporary backups, not real backups?

    Read the article

  • SQL Server Instance login issue

    - by reallyJim
    I've just brought up a new installation of SQL Server 2008. I installed the default instance as well as one named instance. I'm having a problem connecting to the named instance from anywhere besides the server itself with any user besides 'sa'. I am running in mixed mode. I have a login/user with a known username. Using that user/login, I can connect properly when directly on the server. When I attempt to log in from anywhere else, I receive a "Login failed for user ''" message, with error 18456. In the log file on the server, I see a reason that doesn't seem to help: "Reason: Could not find a login matching the name provided." However, that user/login DOES exist, as I can use it locally. There are no further details about the error. Where can I start to find something to help me with this? I've tried deleting and recreating the user, as well as just creating a new one from scratch -- same result: locally fine, remotely an error.

    EDIT: Partially resolved. I'm now past the base issue -- the clients were trying to connect via the default instance; I don't know why. Once the proper ports were opened in the firewall and a static port was assigned to the named instance, I can now connect -- BUT ONLY if I specify the connection as Server,Port. SQL Browser is apparently not helping/working in this case. I've verified it IS running, and done a stop/restart after my config changes, but no difference yet.

    Read the article

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to Vagrant. Would someone who knows more about it (and Puppet) be able to explain how Vagrant deals with the SSL certs needed when making Vagrant testing machines that process the same node definition as the real production machines?

    I run Puppet in master/client mode, and I wish to spin up a Vagrant version of my Puppet production nodes, primarily to test new Puppet code against. If my production machine is, say, sql.domain.com, I spin up a Vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner, and give a puppet.puppet_node entry of "sql.domain.com" so that it gets the same Puppet node definition. On the Puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the Vagrant machine and the real one get that node entry on the Puppet server. Finally, I enable auto-signing for *.vagrant.domain.com in Puppet's autosign.conf, so the Vagrant machine gets signed. So far, so good...

    However: if one machine on my network gets rooted, say, unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old Puppet SSL cert off of it and then re-running Puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by Puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the sql machine!

    One solution I can think of is to avoid autosigning, and put the known Puppet SSL cert for the real production machine into the Vagrant shared directory, and then have a Vagrant ssh job move it into place. The downside of this is that I end up with all my SSL certs for each production machine sitting in one git repo (my Vagrant repo) and thereby on each developer's machine -- which may or may not be an issue, but it doesn't sound like the right way of doing this.

    tl;dr: How do other people deal with Vagrant & Puppet SSL certificates for development or testing clones of production machines?
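
    Not from the original question, but a sketch of the manual-signing alternative it alludes to: leave autosign.conf out of it for *.vagrant.domain.com and sign each test VM by hand on the master (commands assume the Puppet 2.7/3.x "puppet cert" interface):

        # On the puppet master, after the vagrant VM's first agent run has
        # submitted a certificate signing request:
        puppet cert list                              # shows pending requests
        puppet cert sign sql.vagrant.domain.com       # sign only the VM you just built

        # When the VM is destroyed, clean up its cert so the name can be reused:
        puppet cert clean sql.vagrant.domain.com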

    Read the article

  • "The directory name is invalid" when trying to install drivers in Windows 7 via Device Manager

    - by Luke
    First off, this computer is not mine, it's a customer's system. Having said that... The hard drive was moved to a new motherboard, CPU, and RAM combo, and booted up fine. The customer puts in the driver CD, the drivers won't load, and he brings it in to me. Under Device Manager in Windows 7 x64, I see lots of PCI to PCI bridge entries, one SMBus Controller, and about 20 Unknown Devices. Greeeeeat... So I start with the SMBus driver directly from the Asus website for the motherboard (P8H77-M Pro). If I install from the setup program, it tells me to reboot, then it starts the install. It gets halfway through the setup, then fails (An unknown error occurred. Setup will exit). When I try to point to the folder from Device Manager, it starts copying files for the driver, even presents me with the proper name of the device, but says that an error has occurred there as well: "The directory name is invalid." Doing some Googling, I saw that many people had this issue with Vista. OK, Vista and 7 are similar, maybe the solutions are the same... but they aren't. I tried:

        - Copying the entire driver folder and setup utility to the Program Files folder and running it / selecting it in Device Manager
        - Downloading another set of drivers in case this one is corrupt
        - Disabling UAC
        - Deleting and recreating the %WINDIR%\TEMP folder
        - Removing all references to previous hardware that I could find, even in Device Manager's hidden mode

    So far, nothing has worked. A wipe and reload is out of the question.

    Read the article

  • locked files on HFS+ home partition shared between OSX/Linux

    - by HazyBlueDot
    I dual boot into Arch Linux and OS X 10.6 on my MacBook Pro. I synced my UID between both OSes and created an HFS+ partition (with no journaling) to use as a shared home/Users partition. For the most part it works just as I'd expect, but sometimes when I'm booted into OS X certain files are "locked" (when I Get Info on a particular file, the "Locked" box is checked under the "General" pane; I can resolve the issue by manually unchecking the box) and/or I get "Operation not permitted" when I try deleting or chmod'ing a file. In both cases I don't see anything out of the ordinary in the permission bits displayed with ls -l, except for a trailing '@' character in the position where the sticky bit would normally occur:

        -rw-r--r--@ 1 myuser mygroup 296 Mar 29 11:44 myfile

    This '@' character shows up on ALL normal files, so it doesn't seem to be linked to the locked/"Operation not permitted" situation. On the Linux side of things I never have permission problems. To the best of my limited knowledge and experience with ACLs, I've not found any ACLs on any of the files in question. For what it's worth, I do most of my file editing using emacs (Aquamacs in OS X); is it possible it is setting weird permission bits? My questions:

        - What is the "locked" setting that OS X uses, and does it have a permission bit equivalent (so at the very least I could recursively unlock all files in my home directory from the terminal)?
        - Why might some, but not other, files get "locked" when booting into OS X?
        - What is the meaning of the '@' character?
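
    For what it's worth (not part of the original question): the "Locked" checkbox corresponds to the BSD uchg file flag, and the trailing '@' in ls output marks extended attributes rather than a permission bit. A short sketch of how to inspect and clear both from the OS X terminal:

        # Show BSD file flags (the Locked box = the 'uchg' flag) and extended attributes:
        ls -lO myfile          # capital O lists file flags such as uchg
        ls -l@ myfile          # expands the extended-attribute names behind the trailing '@'
        xattr -l myfile        # list extended attributes and their values

        # Recursively clear the locked flag under the home directory:
        chflags -R nouchg ~/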

    Read the article

  • Bash script to run a clamscan on Ubuntu- how to use return values properly?

    - by Marius
    I'm trying to put together a simple script that will scan my home directory with clamscan and give me a warning if any viruses were found. What I have so far is:

        #! /usr/bin/env bash
        clamscan -l ~/.ClamScan/$(date +"%a%b%d") -ir /home
        RETVAL=$?
        [ $RETVAL -eq 0 ] && notify-send 'clamscan finished. No viruses found'
        [ $RETVAL -eq 1 ] && notify-send 'clamscan found a virus' && touch ~/Desktop/VirusFound
        [ $RETVAL -eq 2 ] && notify-send 'clamscan encountered errors. Check the logs' && touch ~/Desktop/ClamscanError
        find ~/.ClamScan/* -mtime +7 -exec rm {} \;

    However, I'm unsure about a couple of things. I'm always wary of using rm -- as far as I can tell, the find command I've got should be deleting any log files that are more than a week old. I'm also not entirely sure how the return value testing works: I've got a manual that briefly covers bash, which says that the meaning of $? is "match one character", and I'm not entirely sure how that grabs the return value. Should I be using -eq or = for testing the return value? From what I can tell -eq tests strings and = tests numerals, but I'm not sure what the type of the return value is.
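
    As an aside (not part of the question): $? expands to the exit status of the last command (the manual's "match one character" meaning refers to filename globbing, not $?), and inside [ ] the -eq operator is the numeric comparison while = compares strings, so testing a numeric exit status with -eq is correct. One way this is often structured, as a sketch, is with a case statement and find's -delete:

        #! /usr/bin/env bash
        # Variant sketch: same behaviour, using a case statement on the exit status
        # and find -delete instead of -exec rm.
        clamscan -l ~/.ClamScan/"$(date +"%a%b%d")" -ir /home
        case $? in
            0) notify-send 'clamscan finished. No viruses found' ;;
            1) notify-send 'clamscan found a virus'
               touch ~/Desktop/VirusFound ;;
            2) notify-send 'clamscan encountered errors. Check the logs'
               touch ~/Desktop/ClamscanError ;;
        esac
        # Remove per-day log files older than a week (-type f guards against
        # accidentally matching directories).
        find ~/.ClamScan/ -type f -mtime +7 -delete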

    Read the article

  • Batch deletion of smaller files from group of files via unix command line

    - by artlung
    I have a large number (more than 400) of directories full of photos. What I want to do is to keep the larger sizes of these photos. Each directory has 31 to 66 files in it. Each directory has thumbnails and larger versions, plus a file called example.jpg. I dispatched the example.jpg files easily with:

        rm */example.jpg

    I initially thought that it would be easy to delete the thumbnails too, but the problem is they are not consistently named. The typical pattern was photo1.jpg and photo1s.jpg. I did:

        rm */photo*s.jpg

    but it turned out some of the files named photoXs.jpg were actually larger, not smaller. Argh. So what I want to do is scan each directory for file size and delete (or move) the thumbnails. I initially thought I'd just ls -R every file, extract the size of each file, and save those under a threshold. The problem? In one directory the large version will be 1.1 MB and the thumb 200 KB. In another the large is 200 KB and the small 30 KB. Even worse, the files really are mostly named photo1.jpg, so simply putting them all in the same folder, sorting by size, and deleting in groups would not work without renaming them first, and if possible I'd prefer to keep them in their folders. I was almost resolved to just doing this all manually, but then thought I'd ask here. How would you do this task?
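
    One possible approach, as a sketch rather than a definitive answer (it assumes GNU find/mv and picks a per-directory threshold of half the largest file, which fits the sizes described): move everything below the threshold into a thumbs/ subfolder for review instead of deleting it outright.

        #!/bin/bash
        # For each album directory, treat any .jpg smaller than half the size of the
        # largest .jpg in that directory as a thumbnail and move it aside for review.
        for dir in */; do
            largest=$(find "$dir" -maxdepth 1 -name '*.jpg' -printf '%s\n' | sort -n | tail -1)
            [ -z "$largest" ] && continue
            threshold=$((largest / 2))
            mkdir -p "$dir/thumbs"
            find "$dir" -maxdepth 1 -name '*.jpg' -size -"${threshold}c" \
                 -exec mv -t "$dir/thumbs" {} +
        done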

    Read the article

  • Primary zone will not transfer to secondary zone

    - by Matt Beckman
    Using DNS on Windows Server 2008, there is a constant struggle with adding primary and secondary zones. I will add a primary zone to NS1 for a new domain, edit it as needed, and when it's ready add the secondary zone to NS2. However, MOST of the time the secondary zone remains in an error state and never acquires the primary zone data. I have gone back to domains a few weeks after adding them to find out that Windows never propagated the change. Annoying. Anyway, I recently updated SP1 to SP2 thinking this would help, but it hasn't. I added two new domains today, and spent an hour afterwards because the secondary zone would just not sync. During that time, the only error in the logs I had seen was for one of them, where DNS complained about not being authoritative. In order to eventually resolve the issue, I ended up deleting the primary zone, creating a new primary zone, and hitting "Apply" after each and every field change. For example, after modifying the serial number from "1" to a date-appropriate "2010093001", I hit Apply, and then the Primary Server (Apply), Responsible Person (Apply), and finally Name Servers (Apply). After I did this, the secondary zone didn't waste any time getting the data. Ideas?
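
    Not part of the original question, but a quick hedged check when a secondary won't sync is to request a full zone transfer from the primary yourself (hostname and zone are placeholders; dig ships with BIND and is also available for Windows, and note that the primary may refuse AXFR from hosts other than its listed name servers):

        # Request a full zone transfer (AXFR) for the zone directly from the primary.
        # If this is refused or times out from the secondary's address, the secondary
        # will never sync either.
        dig @ns1.example.com example.com AXFR +time=10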

    Read the article

  • MSSQLSERVER Will Not Start - Event ID 913 and 1814

    - by ThaKidd
    Hello ServerFault! I need some serious help. I have a major database server down and am scratching my head at how to fix it. The server was hit by rolling blackouts last week in Dallas, and since then Microsoft SQL 2005 SP2 will not start up. I am getting the following errors (both when starting the service and while trying to execute mssqlsrv.exe -c -f -m):

        Event Type: Error
        Event Source: MSSQLSERVER
        Event ID: 913
        Could not find database ID 3. Database may not be activated yet or may be in transition. Reissue the query once the database is available. If you do not think this error is due to a database that is transitioning its state and this error continues to occur, contact your primary support provider. Please have available for review the Microsoft SQL Server error log and any additional information relevant to the circumstances when the error occurred.

    and...

        Event Type: Information
        Event Source: MSSQLSERVER
        Event ID: 1814
        Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.

    I have tried to rename tempdb.mdf to tempdb.old with no success. I have checked and have 193 GB of free hard drive space. What else might cause this problem? Could the server need a chkdsk run on it, or do I need to be looking at some other area of the database server? Any help is greatly appreciated. Thank you in advance.

    Read the article

  • SSD cache to minimize HDD spin-up time?

    - by sirprize
    Short version first: I'm looking for Linux-compatible software which is able to transparently cache HDD writes using an SSD. However, I only want to spin up the HDD once or twice a day (to write the cached data to the HDD). The rest of the time, the HDD should not be spinning, due to noise concerns.

    Now the longer version: I have built a completely silent computer running Xubuntu. It has an A10-6700T APU, a huge fanless cooler, a fanless PSU, and an SSD. The problem is: it also has (and needs) a noisy HDD, and I want to forbid spinning it up during the night. All writes should be cached on the SSD; reads are not needed in the night. Throughout every day, this computer will automatically download about 5 GB of data which will be retained for about a year, giving a total needed disk capacity of slightly less than 2 TB. This data is currently stored on a 3 TB noisy hard disk drive which is spinning day and night. Sometimes I'll need to access some data from several months ago. However, most times I'll only need data from the last 14 days, which would fit on the SSD. Ideally, I'd like a transparent solution (all data on one filesystem) which caches all writes to the SSD, writing to the HDD only once a day. Reads would be served by the cache if they were still on the SSD; otherwise the HDD would have to spin up. I have tried bcache without much success (using cache_mode=writeback, writeback_running=0, writeback_delay=86400, sequential_cutoff=0, congested_write_threshold_us=0 -- anything missing?) and I read about ZFS ZIL/L2ARC, but I'm not sure I can achieve my goal with ZFS. Any pointers? If all else fails, I will simply use some scripts to automatically copy files over to the big drive while deleting the oldest files from the SSD.
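
    Not from the original question, but a sketch of how the bcache knobs it names are typically driven from a daily job (the sysfs paths are the standard bcache locations for an existing /dev/bcache0; the dirty_data check assumes its human-readable output format):

        #!/bin/bash
        # Keep all writes on the SSD, then flush them to the HDD once a day.
        BDEV=/sys/block/bcache0/bcache

        # Baseline settings: write everything to the cache, never write back on its own.
        echo writeback > "$BDEV/cache_mode"
        echo 0         > "$BDEV/sequential_cutoff"    # cache sequential writes too
        echo 0         > "$BDEV/writeback_running"    # hold dirty data on the SSD

        # Daily flush (run from cron): enable writeback, wait for the dirty data to
        # drain to the HDD, then stop writeback again so the HDD can spin down.
        echo 1 > "$BDEV/writeback_running"
        until [ "$(cat "$BDEV/dirty_data")" = "0.0k" ]; do sleep 60; done   # output format assumed
        echo 0 > "$BDEV/writeback_running"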

    Read the article

  • Ubuntu Device-mapper seems to be invincible!

    - by Andrew Bolster
    I'm working on a hopefully unrelated question and I've got into a strange situation. First: I know very little about the very low-level hardware/kernel storage driver magic, so I'm hoping a) someone can help and b) someone can explain it to me better. I've been trying a dozen different configurations of my 2x500GB SATA drives over the past few hours, involving switching between AHCI/IDE/RAID in my BIOS; after each attempt I've reset the BIOS option, booted into a live CD, and deleted partitions and rewritten the partition tables left on the drives. Now, however, I've been sitting with a /dev/mapper/nvidia_XXXXXXX1 that seems to be impossible to kill! It's the only 'partition' that I see in the Ubuntu installer (but I can see the others in parted), yet it is only the size of one of the drives, and I know I did not set any RAID levels other than RAID0. Anyone have any ideas how I can kill this and get back to just two independent IDE drives? Or can anyone convince me of a reason to go the AHCI route? Many thanks in advance.
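
    Not part of the original question, but the /dev/mapper/nvidia_* device is typically assembled by dmraid from leftover fakeRAID metadata on the drives. A hedged sketch of removing it from a live CD (device names are examples, and erasing the metadata destroys the old RAID set):

        # List drives that still carry RAID metadata and the sets built from them:
        sudo dmraid -r
        sudo dmraid -s

        # Tear down the assembled device-mapper mappings:
        sudo dmsetup remove_all

        # Erase the nvidia metadata block from each drive (irreversible):
        sudo dmraid -r -E /dev/sda
        sudo dmraid -r -E /dev/sdb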

    Read the article

  • Volume licenced copy of MS Office 2007 shows "Non Commercial Use" in title bar

    - by Linker3000
    I have just removed the demo copy of Office 2007 preinstalled on a new laptop and replaced it with an install of the full Professional edition downloaded from the MS Volume Licensing site, and installed one of our volume licence keys, yet the apps (Word etc.) show "Non Commercial Use" in the title bar, which is what usually happens in the Home and Student edition. I have tried:

        - Deleting the Office registration keys in the registry and using one of our other Office 2007 volume licence keys (we have 7) when prompted to re-register
        - Uninstalling Office completely and reinstalling it from a newly-downloaded ISO burned to CD, and also from a compressed file that installs from hard disk/USB stick (both from Microsoft - no dodgy stuff)

    Yet the non-commercial message persists. Although it's a cosmetic issue, the laptop is going to be used for customer presentations and so the salesperson is rightly concerned about the image this portrays. I presume there may be something floating around the registry or in a file somewhere, but I can't find it. Articles I have found elsewhere just refer to the message being related to the use of a Home and Student licence key, which is 100% not the case. Any thoughts? Thanks.

    Read the article

  • Outlook 2007 - Right Click Email > Move To {Folder Name}

    - by HK1
    I know it seems like an elementary question. What's the simplest and fastest way to move a read/completed email to a different folder in Outlook 2007 (connected to Exchange 2007)? I have a particular user that is challenged by technology. Using keyboard shortcuts is not an option. Dragging and dropping things - forget it. And too many clicks is frustrating to him. He keeps his email inbox completely clean (OCD=True) but he does that by deleting every single email as quickly as he's done with it. If an email can't be resolved in a day or two it almost drives him to insanity. As far as he's concerned, there's only one right thing to do with an email - reply to it and then delete it. He's being asked to save emails unless they are clearly trash. I'm trying to figure out what the simplest method is to move an email to a "Saved Emails" or "Archived" folder (don't confuse "folder" with .PST file, that's irrelevant for this discussion). I envisioned that I could possibly hi-jack every delete and put the email in his Saved folder. But I don't like this option because some emails are truly trash and I don't want him saving those. What I'd really like to do is something like this: Right Click Email in List > Move To {Folder Name} Is there a simple way to do this? Maybe someone has another suggestion on how to handle this situation.

    Read the article
