Search Results

Search found 2041 results on 82 pages for 'deleting'.

Page 72/82

  • "The breakpoint will not currently be hit" error while debugging a mixed mode application (c# and unmanaged c++)

    - by user1678403
    While debugging a mixed mode application in VS2010, the breakpoint set on a line of code contained in an unmanaged c++ dll source file (called from a managed c# wrapper class) shows the infamous "The breakpoint will not currently be hit. No symbols have been loaded for this document" info message when hovering the mouse over the breakpoint on the line in question. The breakpoint itself is a red circle with a yellow info triangle instead of the usual solid red orb. Of course, the breakpoint isn't hit when the debugger is executed.

    Most answers I've found for this warning indicate the breakpoint hasn't been set properly, or that the expected dll is not being loaded, or that the associated pdb file is not located in the correct location, etc. This is not the problem. The application does load and execute the referenced dll correctly. I've verified that the correct pdb file, with the same file date as its dll, is located in the executable's working directory along with the target dll itself.

    The debugger simply doesn't load the symbols for the dll, and the dll doesn't show in the Modules list available from the 'Debug-Windows-Modules' menu selection... even though it is, in fact, loaded. None of the solutions I've found online work for this problem. Breakpoints set in the wrapper class work correctly. Deleting the bin and obj directories, then cleaning and rebuilding the solution, also doesn't help.

    Read the article

  • Programmatically add an ISAPI extension dll in IIS 7 using ADSI?

    - by fretje
    I apologize beforehand, this is a cross post of this SO question. I thought I'd ask it there first, but apparently it isn't attracting any answers there, so I hope it will get more attention here. When I have an answer somewhere, I'll delete the other one.

    I'm trying to programmatically add an ISAPI extension dll in IIS using ADSI. This has been working for ages on previous versions of IIS, but it seems to fail on IIS 7. I am using code similar to that shown in this question:

        var web = GetObject("IIS://localhost/W3SVC/1/ROOT/specificVirtualDirectory");
        var maps = web.ScriptMaps.toArray();
        maps[maps.length] = ".aaa,c:\\path\\to\\isapi\\extension.dll,1,GET,POST";
        web.ScriptMaps = maps.asDictionary();
        web.SetInfo();

    After executing that code, I do see an "AboMapperCustom-12345678" entry for that specific dll in the "Handler mappings" of the specific virtual directory in which I added the script map. But when I try to use that extension in a browser, I always get:

        HTTP Error 404.2 Not Found
        The page you are requesting cannot be served because of the ISAPI and CGI Restriction list settings on the Web server.

    Even after adding an entry to allow that specific dll in the "ISAPI and CGI restrictions", I keep getting that error. To make it actually work, I first have to undo these steps (encountering the same issue as the OP of the question mentioned above: after deleting the script map entry from the IIS manager GUI, I also have to programmatically delete it using ADSI before it's actually gone from the metabase). And then manually add an entry like this:

    inetmgr - webserver - website - virtual directory - handler mappings - add script map... path = *.dll, executable = <path to dll>, name = <doesn't matter, but it's mandatory>, then click "yes" on the question "do you want to allow this ISAPI extension?"

    When I compare the 2 entries, they are exactly the same, except for the "Entry Type", which seems to be "Inherited" for the programmatically added one and "Local" for the one added manually. The strange thing is, even though it says "Inherited", I don't see it anywhere in IIS on a higher level. Where is it inheriting from? In my code, I do add the script map to the specific virtual directory, so it should be "Local" as well. Maybe that is where the problem lies, but I don't know how to add a "Local" script map using ADSI.

    I really would like to keep using the ADSI method, as otherwise I will have to use different methods in our setup when working with IIS 7 or previous versions, and I would like to avoid that.

    To recap: how can I programmatically add a script map entry and its companion CGI and ISAPI restrictions entry to IIS 7 using ADSI? Anybody who can shed some light on this? Any help appreciated.

    Read the article

  • Cacti: "An internal Net-Snmp error condition detected in Cacti snmp_count"

    - by Recc
    There's the odd forum topic about an error as obscure as this one, but I haven't seen any for snmp_count in particular. Also I don't see graphing problems, though I can't simply go and eyeball all graphs. However the poller does time out and has to be stopped by its internal process preventing overruns. If I filter out the flood of this error in the log I don't get anything else except the poller timeout:

        06/12/2014 12:48:00 PM - POLLER: Poller[0] Maximum runtime of 58 seconds exceeded. Exiting.
        06/12/2014 12:48:00 PM - SYSTEM STATS: Time:58.8566 Method:spine Processes:1 Threads:40 Hosts:1923 HostsPerProcess:1923 DataSources:61584 RRDsProcessed:0
        06/12/2014 12:48:00 PM - SPINE: Poller[0] ERROR: Spine Timed Out While Processing Hosts Internal

    In the list of running processes I saw /usr/local/spine/spine 0 2053, which is always left behind. When I kill it the flooding of the error stops. Of course it's the same on the next poll run as it goes through the devices. 2053 is apparently the DB ID for a device. I deleted that device completely to see if that stops it. It doesn't; instead 2052 is seen there. I suspect it'll be the same if I keep deleting devices, which I will not do.

    This started happening midday when I wasn't doing anything to the Cacti server. I have tried reducing Maximum Threads per Process to 1 and Number of PHP Script Servers to 1. I've been running it at 10 script servers / 40 threads for months with a poll cycle time of about 20 sec.

    I just found out that running snmpwalk on any host begins returning values but then times out halfway through. This doesn't happen from other servers on the network, which still suggests that the problem is local to this Cacti host. Any suggestions?

    For one polling cycle I changed to use cmd.php instead. Then I started getting errors like:

        CMDPHP: Poller[0] Host[45] DS[541] WARNING: Result from SNMP not valid. Partial Result: U

    Perhaps as expected. Looking closely I see that every snmpwalk I do is interrupted at the same place, as if some byte limit is hit and the connection torn down.

    Read the article

  • SBS 2008 Backup Drive Full - Error Code '2147942512'

    - by HK1
    We are using Windows Backup on SBS 2008 SP2 and backing up to 1TB external hard drives. Recently, after switching drives, our backup started failing because the backup drive is full and auto-delete isn't automatically deleting older backups/shadow copies. I'm trying to get more information to help me effectively prevent this problem from recurring in the future.

    How I can tell that the drive is getting full: in the event viewer under Windows Logs > Application, I'm seeing Event ID 517, but it fails to show an intelligible description. However, under Applications and Services Logs > Microsoft > Windows Backup > Operational, I'm seeing an event with the ID of 5 and a description like this:

        Backup started at '10/4/2011 12:30:12 PM' failed with following error code '2147942512'.

    One of the most informative posts I've found on this error is located on Microsoft's Technet Forums here. In that post, a Microsoft representative gives this hazy explanation:

        auto-delete feature to ensure that at least some old backup copies are maintained on the disk -- does not automatically delete backups if space utilization by older copies is less than 1/8 of the disk size or in other words, 13% of the disk size. that means if the one full backup copy does not fit in the 7/8 of the disk size, backup may fail with disk full error. auto-delete will not automatically delete older versions to reclaim more older versions of backup.

    In the above explanation, I do not understand what is meant by "older copies", except that it appears that anything older than the very last shadow copy would be considered "older copies". I'm going to make the assumption that this problem, where auto-delete will not work, will affect any hard drive that is large enough to make an effective backup drive, or in other words, any hard drive that is large enough to hold more than one backup/shadow copy at once. The same MS representative proposes the solution of using a larger backup drive. I can't understand how this will help. It appears to me it will simply delay the problem until a later date.

    In order to resolve this problem for now, I did the following:

    1. Assign the backup drive a disk letter under Disk Management.
    2. Run the command line with administrative rights.
    3. diskshadow.exe [enter]
    4. delete shadows oldest x: [enter] (where X: is the letter you assigned your backup drive)

    I manually ran the above command some 60 or 80 times to free up about 200 GB of space on my 1 terabyte external hard drive. However, I do not feel this is a satisfactory solution to prevent the problem from happening again in the future. Does anyone have a solution to prevent your Windows Server backup drive from getting full?
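
    If it helps to show what I mean by running that delete dozens of times, a loop along the following lines would do the same thing. This is only a rough sketch: the drive letter and iteration count are placeholders, it assumes diskshadow's documented /s script-file switch, and it is not something I actually have in production.

        # Rough sketch: repeatedly run "delete shadows oldest" against the backup drive.
        import os
        import subprocess
        import tempfile

        DRIVE = "X:"        # placeholder: letter assigned to the backup drive
        ITERATIONS = 60     # placeholder: how many old shadow copies to remove

        for _ in range(ITERATIONS):
            # diskshadow reads its commands from a script file passed with /s
            fd, script = tempfile.mkstemp(suffix=".txt", text=True)
            with os.fdopen(fd, "w") as f:
                f.write("delete shadows oldest %s\n" % DRIVE)
            try:
                result = subprocess.run(["diskshadow.exe", "/s", script])
                if result.returncode != 0:
                    break   # stop once nothing is left to delete (or on error)
            finally:
                os.remove(script)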

    Read the article

  • Lion built-in VPN client times out connecting to Windows 2003 PPTP server

    - by beporter
    I have a new iMac with OS X 10.7 (Lion) on it that refuses to connect to a PPTP-based VPN server (running Windows 2003 SBS). To shortcut past a lot of questions: there is a Dell workstation running Windows 7 on the same LAN as the Mac that is able to establish a PPTP connection to the same VPN server using the same credentials. That would seem to rule out any possible problems with the server, the port forwards on the server's firewall, the internet connection between the two, and the router local to the Dell and iMac.

    Here's a "verbose" dump of the PPP log from the iMac:

        Tue Sep 6 10:13:11 2011 : using link 0
        Tue Sep 6 10:13:11 2011 : Using interface ppp0
        Tue Sep 6 10:13:11 2011 : Connect: ppp0 socket[34:17]
        Tue Sep 6 10:13:11 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0, interfaceIndex: 0, Protocol: None, Private Port: 0, Public Address: 45f6f181, Public Port: 0, TTL: 0.
        Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0 inconsistent. is Connected: 1, Previous interface: 4, Current interface 0
        Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0 initialized. is Connected: 1, Previous publicAddress: (0), Current publicAddress 45f6f181
        Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0 fully initialized. Flagging up
        Tue Sep 6 10:13:14 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:17 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:20 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:23 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:26 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:29 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:32 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:35 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:38 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep 6 10:13:41 2011 : LCP: timeout sending Config-Requests
        Tue Sep 6 10:13:41 2011 : Connection terminated.
        Tue Sep 6 10:13:41 2011 : PPTP disconnecting...
        Tue Sep 6 10:13:41 2011 : PPTP clearing port-mapping for en0
        Tue Sep 6 10:13:41 2011 : PPTP disconnected

    The error seems to be focused around the line "LCP: timeout sending Config-Requests", but I haven't had any luck in finding troubleshooting information for this. I've tried completely deleting the entire VPN "connection" from the Network prefpane and recreating it from scratch. I am certain the connection details are correct because they exactly match what successfully connects from the Win7 machine sitting next to the iMac. Any suggestions?

    Read the article

  • Using Diskpart in a PowerShell script won't allow script to reuse drive letter

    - by Kyle
    I built a script that mounts (attach) a VHD using Diskpart, cleans out some system files and then unmounts (detach) it. It uses a foreach loop and is supposed to clean multiple VHDs using the same drive letter. However, after the 1st VHD it fails. I also noticed that when I try to manually attach a VHD with diskpart, diskpart succeeds and the Disk Manager shows the disk with the correct drive letter, but within the same PoSH instance I cannot connect (set-location) to that drive. If I do a manual diskpart when I 1st open PoSH, I can attach and detach all I want and I get the drive letter every time. Is there something I need to do to reset diskpart in the script? Here's a snippet of the script I'm using.

        function Mount-VHD {
            [CmdletBinding()]
            param (
                [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)]
                [string]$Path,
                [Parameter(Position=1,Mandatory=$false,ValueFromPipeline=$false)]
                [string]$DL,
                [string]$DiskpartScript = "$env:SystemDrive\DiskpartScript.txt",
                [switch]$Rescan
            )
            begin {
                function InvokeDiskpart { Diskpart.exe /s $DiskpartScript }

                ## Validate Operating System Version ##
                if (Get-WmiObject win32_OperatingSystem -Filter "Version < '6.1'") {throw "The script operation requires at least Windows 7 or Windows Server 2008 R2."}
            }
            process {
                ## Diskpart Script Content ## Here-String statement purposefully not indented ##
        @"
        $(if ($Rescan) {'Rescan'})
        Select VDisk File="$Path" `nAttach VDisk
        Exit
        "@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force
                InvokeDiskpart
                Start-Sleep -Seconds 3

        @"
        Select VDisk File="$Path"`nSelect partition 1 `nAssign Letter="$DL"
        Exit
        "@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force
                InvokeDiskpart
            }
            end {
                Remove-Item -Path $DiskpartScript -Force ; ""
                Write-Host "The VHD ""$Path"" has been successfully mounted." ; ""
            }
        }

        function Dismount-VHD {
            [CmdletBinding()]
            param (
                [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)]
                [string]$Path,
                [switch]$Remove,
                [switch]$NoConfirm,
                [string]$DiskpartScript = "$env:SystemDrive\DiskpartScript.txt",
                [switch]$Rescan
            )
            begin {
                function InvokeDiskpart { Diskpart.exe /s $DiskpartScript }

                function RemoveVHD {
                    switch ($NoConfirm) {
                        $false {
                            ## Prompt for confirmation to delete the VHD file ##
                            "" ; Write-Warning "Are you sure you want to delete the file ""$Path""?"
                            $Prompt = Read-Host "Type ""YES"" to continue or anything else to break"
                            if ($Prompt -ceq 'YES') {
                                Remove-Item -Path $Path -Force
                                "" ; Write-Host "VHD ""$Path"" deleted!" ; ""
                            } else {
                                "" ; Write-Host "Script terminated without deleting the VHD file." ; ""
                            }
                        }
                        $true {
                            ## Confirmation prompt suppressed ##
                            Remove-Item -Path $Path -Force
                            "" ; Write-Host "VHD ""$Path"" deleted!" ; ""
                        }
                    }
                }

                ## Validate Operating System Version ##
                if (Get-WmiObject win32_OperatingSystem -Filter "Version < '6.1'") {throw "The script operation requires at least Windows 7 or Windows Server 2008 R2."}
            }
            process {
                ## DiskPart Script Content ## Here-String statement purposefully not indented ##
        @"
        $(if ($Rescan) {'Rescan'})
        Select VDisk File="$Path"`nDetach VDisk
        Exit
        "@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force
                InvokeDiskpart
                Start-Sleep -Seconds 10
            }
            end {
                if ($Remove) {RemoveVHD}
                Remove-Item -Path $DiskpartScript -Force ; ""
            }
        }

    Read the article

  • iPhone 3G refuses to transfer purchased apps to iTunes

    - by andynormancx
    My iPhone 3G refuses to transfer purchased apps to iTunes. This is causing me major problems with syncing. Whenever I attempt to transfer apps from the iPhone to iTunes it goes through the motions, but never actually transfers anything. It displays the various apps in the info area at the top of the screen, but the progress bar never advances. In comparison, when I sync other iPhones using the same install of iTunes, the progress bar advances and apps are transferred. The same also happens on clean installs of iTunes on other computers; it seems to be my iPhone that is the common factor. I have tried restoring the phone from a backup, which makes no difference.

    This started happening months ago and the phone has since been upgraded to 3.0 and 3.1, but the problem still persists. Originally it was just a minor irritation, but I made an attempt to fix it which has made things worse. I deleted all the apps from within iTunes and then did "Transfer purchases" in the hope that it might fix something. It didn't fix anything. Also, I cannot now sync at all. If I do sync, iTunes does "transferring purchases", fails to transfer, and then deletes all the apps (and data) from my iPhone. It also means I can't sync music, podcasts or anything else. I can't sync anything else because I can't temporarily turn off app syncing: if I do, iTunes warns that the apps on the iPhone will be deleted. I also tried de-authorising and re-authorising.

    What can I do to get app syncing working again?

    P.S. I have considered deleting all the apps and reinstalling them one by one, in the hope that it will fix the problem. However I don't really want to embark on doing that for 55+ apps and re-entering login details etc for the apps that need them, especially as I might then find out it didn't solve the problem.

    Update: The latest update to iTunes 9 has improved things in one key aspect. If I let a sync run to completion, iTunes no longer deletes all the apps from my phone. So I can now sync all my other data, even if I still can't sync my apps.

    Resolved: See my answer to the question for how I finally resolved the problem.

    Read the article

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time and I like what I see for various reasons relating to my potential use case.

    We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. The servers are also automatically built, therefore generic with standard tools and not bespoke per location. The data is many hundreds of files per day.

    I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure due to us not knowing what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync only over ssh to known mac address directories and a known public key. There are plenty of holes to pick in this, I know.

    Let's say I write or use a script like s3cmd/s3sync to potentially push up the files (see the sketch at the end of this post for the kind of thing I mean).

    - Would I need to manage hundreds of access keys and have each server customized to include this? (Do-able, but key management becomes nightmarish?)
    - Could I restrict inbound connections somehow (eg by mac address), or just allow write-only to any client that was running the script? (I could deal with a flood of data if someone got into a system?)
    - Having a bucket per remote machine does not seem feasible due to bucket limits?
    - I don't think I want to use a single common key, as if one machine is breached then potentially a malicious hacker could get access to the filestore key and start deleting files for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my google terminology is wrong...

    I've written more than I should here, perhaps it can be summarised thus: in a perfect world I just want to have one of our techs install a new remote server into a location and it automagically starts sending files home with little or no intervention, and minimises risk. Pipedream or feasible?

    TIA, Aitch
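
    For concreteness, the kind of push script I have in mind is roughly sketched below. Everything in it is a placeholder/assumption on my part: the bucket name, the per-host prefix scheme, the local directory, and the idea that credentials come from the environment or a role rather than being baked into each box.

        # Rough sketch of an hourly "push sensor files to S3" job (all names are placeholders).
        import os
        import socket
        import boto3

        BUCKET = "sensor-data-example"           # hypothetical bucket
        PREFIX = socket.gethostname()            # one key prefix per remote box
        LOCAL_DIR = "/var/sensordata/outgoing"   # where the hourly files land

        s3 = boto3.client("s3")                  # credentials from env vars / role, not hard-coded

        for name in sorted(os.listdir(LOCAL_DIR)):
            path = os.path.join(LOCAL_DIR, name)
            if os.path.isfile(path):
                # upload under this host's prefix, e.g. "box-0042/2014-06-12-13.csv"
                s3.upload_file(path, BUCKET, PREFIX + "/" + name)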

    Read the article

  • NTPD issue - syncs then slowly loses ground

    - by ethrbunny
    RHEL 5 workstation. Has been running smoothly for years. I did a 'pup' recently and followed with a nice, cleansing reboot. Afterwards the system had some startup issues: namely MySQL refused to start. It just went "...." for 5-10 minutes before I did another boot and skipped that step (using 'interactive'). This was the only service that didn't want to start normally. So now that the system is booted I've found that it doesn't want to stay in sync with the NTP master, and after 48 hours it is refusing any SSH other than root.

    NTPD: this service starts normally and gets a lock on 4 servers. Almost immediately it starts to lose ground and now (after 3 days) is almost 40 hours behind. If I stop/start the service it gets the lock, resets the system clock and starts losing ground again. The 'hwclock' is set properly and maintains its time.

    Login: when I (re)start the ntp server I am able to login normally. I assume this problem is due to losing sync with LDAP. This appears to be verified by LDAP errors in /var/log/messages.

    Suggestions on where to look?

    ADDENDA: Tried deleting the 'drift' file. After a bit it gets recreated with 0.000.

    From /var/log/messages:

        Jan 17 06:54:01 aeolus ntpdate[5084]: step time server 129.95.96.10 offset 30.139216 sec
        Jan 17 06:54:01 aeolus ntpd[5086]: ntpd [email protected] Tue Oct 25 12:54:17 UTC 2011 (1)
        Jan 17 06:54:01 aeolus ntpd[5087]: precision = 1.000 usec
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface wildcard, 0.0.0.0#123 Disabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface wildcard, ::#123 Disabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface lo, ::1#123 Enabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface eth0, fe80::213:72ff:fe20:4080#123 Enabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface lo, 127.0.0.1#123 Enabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface eth0, 10.127.24.81#123 Enabled
        Jan 17 06:54:01 aeolus ntpd[5087]: kernel time sync status 0040
        Jan 17 06:54:02 aeolus ntpd[5087]: frequency initialized 0.000 PPM from /var/lib/ntp/drift
        Jan 17 06:54:02 aeolus ntpd[5087]: system event 'event_restart' (0x01) status 'sync_alarm, sync_unspec, 1 event, event_unspec' (0xc010)

    You can see the 30 second offset. This was after about one minute of operation.

    Read the article

  • keyboard mappings are totally screwed after updating to kde4

    - by zeonglow
    I recently upgraded from KDE 3.5 to KDE 4, and I have been having weird issues with my keyboard. In one of the virtual consoles, e.g. when I press ctrl + alt + 1, I can type perfectly, but in KDE several of the number keys don't work, and the left and right arrows don't work either.

    When I press the right arrow key, in xev I get this:

        KeyRelease event, serial 34, synthetic NO, window 0x3600001,
            root 0x6f, subw 0x0, time 903459, (111,55), root:(115,836),
            state 0x10, keycode 114 (keysym 0x1008ff11, XF86AudioLowerVolume), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False

    When I press the '3' key it toggles my Bookmarks toolbar in Firefox; in xev I get this:

        KeyPress event, serial 34, synthetic NO, window 0x3600001,
            root 0x6f, subw 0x0, time 999968, (94,115), root:(98,896),
            state 0x10, keycode 12 (keysym 0x1008ff30, XF86Favorites), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 34, synthetic NO, window 0x3600001,
            root 0x6f, subw 0x0, time 1000032, (94,115), root:(98,896),
            state 0x10, keycode 12 (keysym 0x1008ff30, XF86Favorites), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False

    As this is deeper down, changing the type of keyboard in the KDE menus has no effect. I'm slowly beginning to wade through the mountains of documentation about the X keyboard model, but there has to be a better way. Does anyone know what it is?

    Edit: 1234567890 ! The number keys work again after deleting the entire .kde folder, but only until I change the keyboard settings from the "system settings" applet; then it's hosed full time, regardless of what I set the settings to. ("Restore to default settings" doesn't help either.)

    2nd Edit: I'm using Gentoo AMD64, and I was upgrading from KDE 3.5 to KDE 4.2. I think I had manual settings before, although I didn't change anything. I was originally running KDE without HAL until that stopped working a year or so ago. The only customisation I made was to set the multimedia keys to control Amarok.

    3rd Edit:

        $ grep xkb /var/log/Xorg.0.log
        (**) Option "xkb_rules" "evdev"
        (**) Option "xkb_model" "evdev"
        (**) Option "xkb_layout" "us"
        (**) Option "xkb_rules" "evdev"
        (**) Option "xkb_model" "evdev"
        (**) Option "xkb_layout" "us"

    Xorg.0.log has this to say:

        (WW) AllowEmptyInput is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled.
        (WW) Disabling Mouse1
        (WW) Disabling Keyboard1

    My Xorg.conf has this in it:

        Identifier "Keyboard1"
        Driver "kbd"
        Option "AutoRepeat" "500 30"
        # Specify which keyboard LEDs can be user-controlled (eg, with xset(1))
        Option "XkbRules" "xorg"
        Option "XkbModel" "pc105"
        Option "XkbLayout" "gb"

    Read the article

  • How to properly shutdown a Linux VMware Server Host

    - by Mikee
    Hi Everybody: In our lab, we have a server running Ubuntu Linux 8.04.4 x64, with VMware Server 2.1 hosting 4 VMs. I have a major concern with regards to shutting down the host server. Mostly, how do I ensure that the guest VMs are being shut down safely?

    In the VMware web interface console, I have done the following:

    - Enabled "Allow virtual machines to start and stop automatically with the system".
    - Enabled the Default Startup Delay of 15 seconds, along with the "Start next VM immediately if the VMware Tools start" option.
    - Enabled the Default Shutdown Delay with a 60 second shutdown delay and a Shutdown Action of "Shut Down Guest".

    All VMs have the VMware Tools installed and properly working. All VMs are moved up into the "Specified Order" section of "Startup Order", so when powering the server back on, all those VMs should start up again in that specified order.

    When I went to shut down the server, I used the shutdown -h now command. Based on the settings I entered above, I was expecting a 4 minute shutdown, as there is an option to delay the shutdown of each VM by 60 seconds. However, that is not what happened. Instead, the server shut down in under a minute. When I powered the server back on, only 2 VMs properly loaded. The other 2 showed the following error:

        "Power on Virtual Machine" failed to complete
        If these problems persist, please contact your system administrator.
        Details: Cannot open the disk '[location to .vmdk]' or one of the snapshot disks it depends on.
        Reason: Failed to lock the file.

    Obviously, if this error occurred, then it is clear to me that the VMs were not properly shut down, or the server powered off before the VMs were completely shut down. I have fixed the above error by deleting the .lck files in the respective VM directories.

    How would I know if the VMs were properly shut down? I checked the VMware server logs, but they only seem to display the logs of when the vmware-mgmt service is running in the current session. I'm mostly running Linux VMs, so is there an easy way to know whether or not a server was properly shut down in Linux?

    Thank you all for the help!

    Read the article

  • Active Directory Partition Error

    - by BLAKE
    Right now my active directory is failing a dcdiag test. I can find no info online about this error. When I run dcdiag /test:crossrefvalidation, I get the output:

        ....
        Doing primary tests
        Testing server: Default-First-Site-Name\ad01
           Running partition tests on : ForestDnsZones
              Starting test: CrossRefValidation
                 ......................... ForestDnsZones passed test CrossRefValidation
           Running partition tests on : DomainDnsZones
              Starting test: CrossRefValidation
                 ......................... DomainDnsZones passed test CrossRefValidation
           Running partition tests on : Schema
              Starting test: CrossRefValidation
                 ......................... Schema passed test CrossRefValidation
           Running partition tests on : Configuration
              Starting test: CrossRefValidation
                 ......................... Configuration passed test CrossRefValidation
           Running partition tests on : mydomain
              Starting test: CrossRefValidation
                 ......................... mydomain passed test CrossRefValidation
           Running partition tests on : t
              Starting test: CrossRefValidation
                 This cross-ref has a non-standard dNSRoot attribute.
                 Cross-ref DN: CN=a3a24d3a-4782-460b-9148-86ac2d86b9ae,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
                 nCName attribute (Partition name): DC=t
                 Bad dNSRoot attribute: dc01.mydomain.com
                 Check with your network administrator to make sure this dNSRoot attribute is correct,
                 and if not please change the attribute to the value below.
                 dNSRoot should be: t
                 It appears this partition (DC=t) failed to get completely created. This cross-ref
                 (CN=a3a24d3a-4782-460b-9148-86ac2d86b9ae,CN=Partitions,CN=Configuration,DC=mydomain,DC=com)
                 is dead and should be removed from the Active Directory.
                 ......................... t failed test CrossRefValidation
        ....

    I used LDP from the Windows Support Tools. I searched for the dnsRoot attribute in "cn=partitions,cn=configuration,dc=mydomain,dc=com", with the filter "(&(objectcategory=crossref)(systemFlags:1.2.840.113556.1.4.803:=5))". I got the result:

        ***Searching...
        ldap_search_s(ld, "cn=partitions,CN=Configuration,DC=mydomain,DC=com", 1, "(&(objectcategory=crossref)(systemFlags:1.2.840.113556.1.4.803:=5))", attrList, 0, &msg)
        Result <0>: (null)
        Matched DNs:
        Getting 3 entries:
        >> Dn: CN=65502be3-fc90-442a-83d8-4b3b91e82439,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
            1> dnsRoot: ForestDnsZones.mydomain.com;
        >> Dn: CN=a3a24d3a-4782-460b-9148-86ac2d86b9ae,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
            1> dnsRoot: ad01.mydomain.com;
        >> Dn: CN=f0ef5771-6225-4984-acd9-c08f582eb4e2,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
            1> dnsRoot: DomainDnsZones.mydomain.com;

    It looks like the bad partition has the name of my first domain controller, 'ad01.mydomain.com'. I have googled for a while and have not been able to find any help or documentation about application partitions in Active Directory. Does anyone have any advice on how to clean up this partition (or what the partition is for)? Does anyone know the repercussions of deleting this partition?

    Read the article

  • Losing internet connectivity on server after installing LogMeIn Hamachi (with server set as gateway node)

    - by Kim Jong-Un
    Our domain controller (SBS 2003) completely lost internet and network connectivity yesterday after I remotely installed LogMeIn Hamachi on it and set it to be a gateway node- in an attempt to create a VPN link between the server and a remote site. I had to go in to the office to resolve the problem as, unsurprisingly, my own remote access to the server was also lost. I was only able to restore network connectivity by deleting a virtual network adapter Hamachi created when making the server the gateway node (called "Hamachi bridge" I believe), then rebooting the server. This is a repeatable problem. Every time I try to get this to work, it just takes the server offline. Why would this bridge affect regular TCP/IP connectivity on the NIC in this way? I have tried a "hub-and-spoke" configuration between the server and our PC at a remote site (server set as hub, remote site as spoke). This caused no such problems with general internet connectivity, and file transfer worked well between the two computers. However, there was a DNS issue with the VPN between the two sites- resulting in Active Directory not being able to communicate between them (could not log on using domain user accounts at remote site if they were not already cached on that machine). I only tried a "gateway" network as LogMeIn support told me: If you can get the Active Directory to work it would only be through a "Gateway" network type with the server acting as the Gateway Node. You would configure the gateway settings on the server in the Hamachi client on that machine to push whatever IP's/DNS settings you prefer and at that point AD would be able (all things being equal) communicate to the client node when it attaches. We do not have any ActiveDirectory configuration info as that's outside the scope of our support. I hope this helps. It would be fantastic if I could get Active Directory to work over a Hamachi VPN connection, without worrying about the server going offline in this way. Does anyone have any ideas how I should proceed, or any theories as to what is going on when I try to use the "gateway" network type? I want to try to narrow down what is going on here.

    Read the article

  • Why is my Linux box dropping network connection? [closed]

    - by Robo
    I have a Debian server in the form of a Raspberry Pi running Raspbian. It has a USB Wi-Fi connection. Sometimes it would not respond when I SSH to it, and would require a reboot. I found something in syslog that may indicate what the problem is; can someone help with what this means?

        Dec 16 15:34:17 raspberrypi wpa_supplicant[1501]: wlan0: WPA: Group rekeying completed with 00:21:29:6c:5c:3d [GTK=CCMP]
        Dec 16 16:17:01 raspberrypi /USR/SBIN/CRON[2109]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Dec 16 16:34:17 raspberrypi wpa_supplicant[1501]: wlan0: WPA: Group rekeying completed with 00:21:29:6c:5c:3d [GTK=CCMP]
        Dec 16 17:17:01 raspberrypi /USR/SBIN/CRON[2127]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Dec 16 17:34:17 raspberrypi wpa_supplicant[1501]: wlan0: WPA: Group rekeying completed with 00:21:29:6c:5c:3d [GTK=CCMP]
        Dec 16 18:17:01 raspberrypi /USR/SBIN/CRON[2142]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Dec 16 18:34:17 raspberrypi wpa_supplicant[1501]: wlan0: WPA: Group rekeying completed with 00:21:29:6c:5c:3d [GTK=CCMP]
        Dec 16 19:17:01 raspberrypi /USR/SBIN/CRON[2161]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Dec 16 19:31:29 raspberrypi kernel: [16615.391509] ieee80211 phy0: wlan0: No probe response from AP 00:21:29:6c:5c:3d after 500ms, disconnecting.
        Dec 16 19:31:29 raspberrypi wpa_supplicant[1501]: wlan0: CTRL-EVENT-DISCONNECTED bssid=00:21:29:6c:5c:3d reason=4
        Dec 16 19:31:29 raspberrypi kernel: [16615.416189] cfg80211: Calling CRDA to update world regulatory domain
        Dec 16 19:31:30 raspberrypi ifplugd(wlan0)[1444]: Link beat lost.
        Dec 16 19:31:40 raspberrypi ifplugd(wlan0)[1444]: Executing '/etc/ifplugd/ifplugd.action wlan0 down'.
        Dec 16 19:31:40 raspberrypi wpa_supplicant[1501]: wlan0: CTRL-EVENT-TERMINATING - signal 15 received
        Dec 16 19:31:40 raspberrypi ifplugd(wlan0)[1444]: Program executed successfully.
        Dec 16 19:31:42 raspberrypi ntpd[1928]: Deleting interface #2 wlan0, 192.168.1.10#123, interface stats: received=321, sent=327, dropped=0, active_time=16596 secs
        Dec 16 19:31:42 raspberrypi ntpd[1928]: 202.6.116.123 interface 192.168.1.10 -> (none)
        Dec 16 19:31:42 raspberrypi ntpd[1928]: 203.99.128.34 interface 192.168.1.10 -> (none)
        Dec 16 19:31:42 raspberrypi ntpd[1928]: 203.118.148.40 interface 192.168.1.10 -> (none)
        Dec 16 19:31:42 raspberrypi ntpd[1928]: 202.89.49.65 interface 192.168.1.10 -> (none)

    Read the article

  • File Not Found error launching Guest OS in a non-administrator user account

    - by ToreTrygg
    Hi, I am running Fusion 2.0.6 (196839) on an iMac running 10.6.2 with 3 user accounts (1 Administrator). I have Fusion set up to share the Guest OS, and it's been working splendidly for nearly a year. Within the Guest OS (Windows XP PRO), there are also 3 user accounts (1 Administrator). Last night I went to back up my VM to an external drive, and to minimize the file size and transfer time, I deleted all Snapshots but the most-recent one. I then backed up the VM externally (28.23 GB). Today, one of my users tried to launch the Guest OS from within her user account, and received the following error message: "File Not Found: Windows XP Pro-000006.vmdk "This file is required to power on this virtual machine. If this file was moved, please provide its new location." My two choices are Cancel and Browse. When I browse, I can locate the Windows XP Pro-000006.vmdk file, which appears to be contained within the VM file (Windows XP Pro.vmwarevm). However, it still won't launch from a non-Admin user account. If I view the package contents of the VM file from the user account, the above file is present and appears to be created upon each launch of the Guest OS. If I go back to my Administrator account on the Mac and then launch Fusion, the Guest OS works perfectly for all 3 user accounts within XP Pro. I have tried to delete the Guest OS from Fusion's Library within the problem user account, then re-connect it to that Library, but the result is the same. The Guest OS data integrity is 100% -- but is accessible only from the OS X Administrator account. This problem only surfaced after deleting several older Snapshots. Again, the data is there, the Guest OS powers up normally in the Mac's Administrator account, but persistently returns the above error when attempting to power on from a non-Admin account on the Mac. I'm not sure how this is affecting the error, but when I look at Hard Disk Settings, the "unable-to-locate" file is the filename of the virtual HD. I don't want to make any changes to my (working) VM without any advice from the knowledgeable people on this forum. Any help will be greatly appreciated, thanks!

    Read the article

  • How Do I Properly Run OfflineIMAP in a Crontab

    - by alharaka
    Installed Fedora.

        # cat /etc/redhat-release
        Fedora release 14 (Laughlin)

    Installed offlineimap from yum, cuz I'm lazy these days.

        # yum info offlineimap
        Loaded plugins: langpacks, presto, refresh-packagekit
        Adding en_US to language list
        Installed Packages
        Name        : offlineimap
        Arch        : noarch
        Version     : 6.2.0
        Release     : 2.fc14
        Size        : 611 k
        Repo        : installed
        From repo   : fedora
        Summary     : Powerful IMAP/Maildir synchronization and reader support
        URL         : http://software.complete.org/offlineimap/
        License     : GPLv2+
        Description : OfflineIMAP is a tool to simplify your e-mail reading. With
                    : OfflineIMAP, you can read the same mailbox from multiple
                    : computers. You get a current copy of your messages on each
                    : computer, and changes you make one place will be visible on all
                    : other systems. For instance, you can delete a message on your home
                    : computer, and it will appear deleted on your work computer as
                    : well. OfflineIMAP is also useful if you want to use a mail reader
                    : that does not have IMAP support, has poor IMAP support, or does
                    : not provide disconnected operation.

    And, lo and behold, every time I run offlineimap and try to redirect output in a crontab, it does not work. Below is my .offlineimaprc.

        [general]
        ui = TTY.TTYUI
        accounts = Personal, Work
        maxsyncaccounts = 3

        [Account Personal]
        localrepository = Local.Personal
        remoterepository = Remote.Personal

        [Account Work]
        localrepository = Local.Work
        remoterepository = Remote.Work

        [Repository Local.Personal]
        type = Maildir
        localfolders = ~/mail/gmail

        [Repository Local.Work]
        type = Maildir
        localfolders = ~/mail/companymail

        [Repository Remote.Personal]
        type = IMAP
        remotehost = imap.gmail.com
        remoteuser = [email protected]
        remotepass = password
        ssl = yes
        maxconnections = 4
        # Otherwise "deleting" a message will just remove any labels and
        # retain the message in the All Mail folder.
        realdelete = no

        [Repository Remote.Work]
        type = IMAP
        remotehost = server.company.tld
        remoteuser = username
        remotepass = password
        ssl = yes
        maxconnections = 4

    I have tried TTY.TTYUI, NonInteractive.Quiet and NonInteractive.Basic with different variations. With or without redirection, the crontab entries I try cause problems.

        $ crontab -l
        */5 * * * * offlineimap >> ~/mail/logs/offlineimap.log 2>&1
        */5 * * * * offlineimap

    I always get the same damn error:

        ERROR: No UIs were found usable!

    What am I doing wrong!?

    Read the article

  • chrooting user causes "connection closed" message when using sftp

    - by George Reith
    First off I am a Linux newbie so please don't assume much knowledge. I am using CentOS 5.8 (final) and OpenSSH version 5.8p1. I have made a user playwithbits and I am attempting to chroot them to the directory /home/nginx/domains/playwithbits/public. I am using the following Match statement in my sshd_config file:

        Match group web-root-locked
            ChrootDirectory /home/nginx/domains/%u/public
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand /usr/libexec/openssh/sftp-server

    # id playwithbits returns:

        uid=504(playwithbits) gid=504(playwithbits) groups=504(playwithbits),507(web-root-locked)

    I have changed the user's home directory to /home/nginx/domains/playwithbits/public. Now when I attempt to sftp in with this user I instantly get the message:

        connection closed

    Does anyone know what I am doing wrong?

    Edit: Following advice from @Dennis Williamson I have connected in debug mode (I think... correct me if I'm wrong). I have made a bit of progress by using chmod to set the permissions of all files in the directory recursively to 700. Now I get the following messages when I attempt to log on (still connection refused):

        Connection from [My ip address] port 38737
        debug1: Client protocol version 2.0; client software version OpenSSH_5.6
        debug1: match: OpenSSH_5.6 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8
        debug1: permanently_set_uid: 74/74
        debug1: list_hostkey_types: ssh-rsa,ssh-dss
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received
        debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT
        debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: KEX done
        debug1: userauth-request for user playwithbits service ssh-connection method none
        debug1: attempt 0 failures 0
        debug1: user playwithbits matched group list web-root-locked at line 91
        debug1: PAM: initializing for "playwithbits"
        debug1: PAM: setting PAM_RHOST to [My host info]
        debug1: PAM: setting PAM_TTY to "ssh"
        debug1: userauth-request for user playwithbits service ssh-connection method password
        debug1: attempt 1 failures 0
        debug1: PAM: password authentication accepted for playwithbits
        debug1: do_pam_account: called
        Accepted password for playwithbits from [My ip address] port 38737 ssh2
        debug1: monitor_child_preauth: playwithbits has been authenticated by privileged process
        debug1: SELinux support disabled
        debug1: PAM: establishing credentials
        User child is on pid 3942
        debug1: PAM: establishing credentials
        Changed root directory to "/home/nginx/domains/playwithbits/public"
        debug1: permanently_set_uid: 504/504
        debug1: Entering interactive session for SSH2.
        debug1: server_init_dispatch_20
        debug1: server_input_channel_open: ctype session rchan 0 win 2097152 max 32768
        debug1: input_session_request
        debug1: channel 0: new [server-session]
        debug1: session_new: session 0
        debug1: session_open: channel 0
        debug1: session_open: session 0: link with channel 0
        debug1: server_input_channel_open: confirm session
        debug1: server_input_global_request: rtype [email protected] want_reply 0
        debug1: server_input_channel_req: channel 0 request env reply 0
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req env
        debug1: server_input_channel_req: channel 0 request subsystem reply 1
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req subsystem
        subsystem request for sftp by user playwithbits
        debug1: subsystem: cannot stat /usr/libexec/openssh/sftp-server: Permission denied
        debug1: subsystem: exec() /usr/libexec/openssh/sftp-server
        debug1: Forced command (config) '/usr/libexec/openssh/sftp-server'
        debug1: session_new: session 0
        debug1: Received SIGCHLD.
        debug1: session_by_pid: pid 3943
        debug1: session_exit_message: session 0 channel 0 pid 3943
        debug1: session_exit_message: release channel 0
        debug1: session_by_channel: session 0 channel 0
        debug1: session_close_by_channel: channel 0 child 0
        debug1: session_close: session 0 pid 0
        debug1: channel 0: free: server-session, nchannels 1
        Received disconnect from [My ip address]: 11: disconnected by user
        debug1: do_cleanup
        debug1: do_cleanup
        debug1: PAM: cleanup
        debug1: PAM: closing session
        debug1: PAM: deleting credentials

    Read the article

  • Outlook refuses to connect to Exchange

    - by wfaulk
    Outlook 2007 under Windows XP connecting to Exchange 2003 SP2: when started, it flips back and forth between "Connecting to Exchange Server" and "Disconnected" three or four times, then gives up and stays disconnected. I tried deleting the ost file (which was nearly 2GB), turning Cached mode on and off, recreating the account inside the Mail control panel, changing the account to use HTTP, and probably some other things. None of it seemed to make any difference, until … After fiddling with it for a while, I got this absurd error message dialog at startup, and it exits after I click OK: Cannot start Microsoft Office Outlook. Cannot open the Outlook window. The set of folders cannot be opened. Microsoft Exchange is not available. Either there are network problems or the Exchange server is down for maintenance. (I'm not sure if I can even trust that message. It's so long, it just feels like a random offset into Outlook's stack of error messages.) Either way, the Exchange server is available to everyone else, and is available via OWA from that computer. I ran Process Explorer against Outlook and it showed 5 or so ESTABLISHED connections to our Exchange server, plus listening on two UDP ports, and two CLOSE_WAIT connections to localhost. If I managed to look at Outlook's IP connections while it was doing its Connecting/Disconnected dance, it had a huge number of connections open to the Exchange server. It more than filled ProcExp's dialog box; I'm guessing at least 20, probably more. The only other odd thing is that our network admin at some point added a wildcard DNS record to the domain name that we use for email, and now Outlook will sometimes (always?) start by complaining about autodiscover.example.com's SSL certificate. There is a web server there, but it doesn't have any sort of email autodiscover anything on it. It doesn't make any difference if I click "OK" or "Cancel" (or whatever the buttons are). I also added a bogus entry for the hostname to Windows' hosts file, pointing it at 127.0.0.2, and it stopped complaining about the certificate. (The CLOSE_WAIT sockets above were from before I made this change, and went away after.) I don't think this is related, as the same problem should exist for everyone, but it might be. This is the second time this user has had this problem. The first time, I never found a solution other than reinstalling Outlook. Now that it's a pattern, I'd like to find a permanent solution, rather than assume it's a random glitch.

    Read the article

  • "Error 1067: The process terminated unexpectedly" when trying to install MySQL on Win7 x64.

    - by Gravitas
    Hi, I've run into a brick wall trying to install MySQL v5.5 on my machine. My PC is Windows 7 x64, Enterprise edition. MySQL installs fine, but when I run the "MySQL Instance Configuration Wizard", it pauses forever on the step "Start Service" (I can let it run for 30 minutes with no response). If I go into Services, I see that the "MySQL" service hasn't started, and if I try to start it, it says:

        Windows could not start MySQL Service on Local Computer.
        Error 1067: The process terminated unexpectedly.

    I've tried the following:

    - Turning off the firewall.
    - Uninstalling all antivirus software.
    - Installing / reinstalling the 32-bit version of MySQL.
    - Installing / reinstalling the 64-bit version of MySQL.
    - Uninstalling, deleting the contents of "C:\program files\MySQL" and "C:\program files (x86)\MySQL", reinstalling.
    - Checking to see that there are no rogue services named MySQL???? (from a previous install).
    - Checking that port 3306 is not used by an alternate program.
    - Changing the default port that MySQL uses.
    - Checking for "my.ini" and "my.ini.cnf" in "C:\windows" (nothing there, but that can cause a problem).
    - Running both the MySQL installer and the configuration wizard in "Administrator mode".
    - Turning off UAC.
    - Installing with defaults, not changing anything.
    - Rebooting my machine (about 6 reboots so far).
    - Opening up port 3306 in the firewall (both TCP and UDP, inbound and outbound).
    - Swearing at the klutz of a programmer who designed MySQL so you can't even install it (as if that would help!)

    My machine is working 100% in every other way. InfiniDB (a MySQL compatible database) installs 100%, as does Visual Studio 2010, Microsoft SQL Server, etc, etc. Your advice on how to work around this?

    p.s. Here is the screen it got stuck on for 15 minutes until I killed the process: [screenshot not included]

    Update 2010-12-20: Tried MySQL v5.1; it didn't work either. It's amazing - if you type "mysqld /?", or "mysqld -help", it doesn't give you any help. And, if you try to restart the service manually, it doesn't display any error messages. Could it be any more unhelpful?

    Update 2010-12-21: Installed MySQL 6.0 alpha, and it worked. However, I'd rather not use an alpha release, given that the "stable" release is anything but :(

    Update 2010-12-21: Found http://dev.mysql.com/doc/refman/5.1/en/windows-troubleshooting.html, dealing with troubleshooting under Windows. Discovered that you can generate an error log if the service doesn't start - see here: http://dev.mysql.com/doc/refman/5.1/en/error-log.html

    Read the article

  • Partition table corrupted (USB flash drive)

    - by 13ren
    It's an 8 GB Patriot thumb drive, which I've used extensively with lots of data. Today, it is detected, but all data is gone. (EDIT: at least some data is still there, but the partition table is gone.)

    EDIT: @Sathya (thanks) here's the relevant output from sudo fdisk -l:

        Disk /dev/sdc: 8019 MB, 8019509248 bytes
        247 heads, 62 sectors/track, 1022 cylinders
        Units = cylinders of 15314 * 512 = 7840768 bytes

        Disk /dev/sdc doesn't contain a valid partition table

    It looks like it is /dev/sdc, with that 8 GB... and no partition table. I tried to mount /dev/sdc (and then dmesg | tail):

        /media> sudo mount /dev/sdc mytmp
        mount: wrong fs type, bad option, bad superblock on /dev/sdc,
               missing codepage or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
        /media> dmesg | tail
        [   24.300000] sdc: unknown partition table
        [   24.320000] sd 2:0:0:0: Attached scsi removable disk sdc
        [   24.370000] usb-storage: device scan complete
        [   26.870000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [   26.870000] EXT2-fs: group descriptors corrupted!
        [   50.420000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [ 5565.470000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [ 5565.470000] EXT2-fs: group descriptors corrupted!

    EDIT: @Col: results from testdisk:

        Disk /dev/sdc - 8013 MB / 7642 MiB - CHS 1022 247 62
        Current partition structure:
             Partition                  Start        End    Size in sectors
        Partition sector doesn't have the endmark 0xAA55

    After I hit [proceed], it says:

        Structure: Ok.
        Keys A: add partition, L: load backup, Enter: to continue

    The "Structure: Ok." seems reassuring... will "A: add partition" make my old data accessible (if it's still there), or will it make a new, fresh partition? Another option is "[ MBR Code ] Write TestDisk MBR code to first sector" - would it be better to do this?

    EDIT: I found that at least some of my data is still on the flash drive, by using the command below and searching for English text in less (like " the "):

        cat /dev/sde | tr -cd '\11\12\40\1540-\176' | less

    (The drive changed from "/dev/sdb" to "/dev/sde" because I connected some extra drives today.) I've learnt that "/dev/sde1" would be the first partition, and "/dev/sde" is the whole drive. Because unix treats these devices just like files, you can use all the ordinary unix file commands on them, like cat, and then process them like any other stream of data. The tr above removes non-printable characters ("\40" is space, which I wanted to preserve). In less, you can use "/" to search, similar to Vim.

    How can I get my data back (assuming it's still there)? If only the partition table is corrupted, is there a standard "partition recovery tool"? Is there a way to "repartition" without deleting everything?
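
    In case it helps anyone reproduce the filtering step, the same "keep only readable text" idea can be sketched in a few lines of Python. This is only an illustration of the tr pipeline above: the device path is whatever your drive currently shows up as, and reading a raw device needs root.

        # Rough Python equivalent of the cat | tr -cd pipeline above:
        # keep tab, newline and printable ASCII, drop everything else.
        import sys

        KEEP = set(b"\t\n") | set(range(0x20, 0x7f))   # \11, \12 and \40-\176 in octal

        with open("/dev/sde", "rb") as dev:            # raw device: adjust path, run as root
            while True:
                chunk = dev.read(1024 * 1024)
                if not chunk:
                    break
                sys.stdout.buffer.write(bytes(b for b in chunk if b in KEEP))

    Piping its output into less (or redirecting it to a file and grepping) gives the same kind of view as the tr command.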

    Read the article

  • How to clear a zone from a broken Bind/Named server

    - by Cerin
    I tried adding a new zone for "mydomain4.com" to my Named DNS server. However, when I went to restart it, I received the unhelpful error message:

        Error in named configuration:
        zone mydomain4.com/IN: loaded serial 3
        zone mydomain3.com/IN: loaded serial 2
        zone mydomain2.com/IN: loaded serial 2
        zone mydomain1.com/IN: loaded serial 2
        zone mydomain0.com/IN: loaded serial 6
        zone localhost.localdomain/IN: loaded serial 0
        zone localhost/IN: loaded serial 0
        zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
        zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0
        zone 0.in-addr.arpa/IN: loaded serial 0
        zone mydomain/IN: loaded serial 2010092201
        dns_rdata_fromtext: db.10.157.10:27: near '*.mydomain4.com.': bad name (check-names)
        zone 10.157.10.in-addr.arpa/IN: loading from master file db.10.157.10 failed: bad name (check-names)
        zone 10.157.10.in-addr.arpa/IN: not loaded due to errors.
        _default/10.157.10.in-addr.arpa/IN: bad name (check-names)

    I'm confused by this, since I thought I created the new zone identically to how I created the other 4 zones. However, since I need this DNS server up, I tried deleting the new zone file at /var/named/chroot/var/named/mydomain4.com.db. However, upon trying to restart again, I received a new unhelpful error:

        Error in named configuration:
        zone mydomain4.com/IN: loading from master file mydomain4.com.db failed: file not found
        zone mydomain4.com/IN: not loaded due to errors.
        _default/mydomain4.com./IN: file not found
        zone mydomain3.com/IN: loaded serial 2
        zone mydomain2.com/IN: loaded serial 2
        zone mydomain1.com/IN: loaded serial 2
        zone mydomain0.com/IN: loaded serial 6
        zone localhost.localdomain/IN: loaded serial 0
        zone localhost/IN: loaded serial 0
        zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
        zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0
        zone 0.in-addr.arpa/IN: loaded serial 0
        zone mydomain/IN: loaded serial 2010092201
        dns_rdata_fromtext: db.10.157.10:27: near '*.mydomain4.com.': bad name (check-names)
        zone 10.157.10.in-addr.arpa/IN: loading from master file db.10.157.10 failed: bad name (check-names)
        zone 10.157.10.in-addr.arpa/IN: not loaded due to errors.
        _default/10.157.10.in-addr.arpa/IN: bad name (check-names)

    Obviously, named still thinks the zone file is being used, but I can't find where. I've tried doing:

        grep -lir "mydomain4" /

    but it doesn't find any files containing that text. How do I purge this domain from named's configs? Also, how do I figure out what caused the original error?

    Read the article

  • In search of a good audio player for Ubuntu 9.10

    - by Joe Casadonte
    If this should be marked Community Wiki, please let me know. I'm switching from XP to Ubuntu, and I have been very disappointed with the selection of media players available. I'm primarily interested in an audio player, but integrated video and library management is OK, too. My criteria: Must be able to play audio CDs (I'm shocked how many apps this does away with, right away) Must be able to play MP3 & WAV; OGG, SHN, FLAC are all bonuses Repeat and Shuffle modes are a must FreeDB / GraceNote through a proxy is a must (if it can read a PAC file, that would be awesome) It needs to be really small, e.g. skinnable or an applet Ability to execute a playlist is a plus Gapless MP3 playback a plus I'm running Gnome, but I'm not totally adverse to a KDE app. Command-line only is also a viable option. Some that I've tried: RhythmBox - probably the best of the lot that I've tried; I don't like its mini mode (doesn't show the song being played) and I can't figure out how to get it to hit FreeDB/GraceNote through a proxy Songbird - can't play CDs, playlist management is atrocious Banshee Jajuk Maybe a couple of more. Thanks! UPDATE I tried out VLC, Amarok and Songbord (again). VLC I eventually got to work (I had some kind of bad configuration). It seemed way more involved than I was looking for out of a music player, and in general more geared to video than audio. I couldn't fathom its library management, which I think it has; maybe it doesn't, and that's why I couldn't figure it out. Amaork looked very promising but the library management was not to my liking, and the way it handled a playlist with both MP3 and WAV is inexplicable at best. I did like some aspects of the UI, but not enough to keep it. Songbird is very finicky, but I like the library management. Sort of. It kept telling me my Watch folder was invalid, even thought it clearly was accessible. Playlist management is bizarre, and the message that it was deleting source files whenever I deleted a playlist had me too worried to keep using it. Had it been able to play CDs, maybe I would have persevered. Audacious, while a bit odd at times, does seem to do what I want. If it had a library manager, I wouldn't have bothered trying any of the others. Thanks for the help, everyone!

    Read the article

  • Bypassing "Found New Hardware Wizard" / Setting Windows to Install Drivers Automatically

    - by Synetech inc.
    Hi, My motherboard finally died after the better part of a decade, so I bought a used system. I put my old hard-drive and sound-card in the new system, and connected my old keyboard and mouse (the rest of the components—CPU, RAM, mobo, video card—are from the new system). I knew beforehand that it would be a challenge to get Windows to boot and install drivers for the new hardware (particularly since the foundational components are new), but I am completely unable to even attempt the work of installing drivers for things like the video card, because the keyboard and mouse won't work (they do work in the BIOS screen, in DOS mode, in Windows 7, in XP's boot menu, etc., just not in Windows XP itself).

    Whenever I try to boot XP (in normal or safe mode), I get a bunch of balloons popping up for all the new hardware detected, and a Found New Hardware Wizard for the processor (obviously it has to install drivers for the lowest-level components on up). Unfortunately I cannot click Next, since the keyboard and mouse won't work yet because the motherboard drivers (for the PS/2 or USB ports) are not yet installed. I even tried a serial mouse, but to no avail—again, it does work in DOS, 7, etc., but not in XP, because the serial port driver isn't installed either.

    I tried mounting the SOFTWARE and SYSTEM hives under Windows 7 in order to manually set the "unsigned drivers warning" to ignore (using both of the driver-signing policy settings that I found references to). That didn't work; I still get the wizard. These are not even fancy, proprietary, third-party, or unsigned drivers. They are drivers that come with Windows—as the drivers for CPU, RAM, IDE controller, etc. tend to be. And the keyboard and mouse drivers are the generic ones at that (but like I said, those are irrelevant since the drivers for the ports that they are connected to are not yet installed).

    Obviously, at some point over the past several years a setting got changed to make Windows always prompt me when it detects new hardware. (It was also configured to show the Shutdown Event Tracker on abnormal shutdowns, so I had to turn that off just to see the desktop.) Oh, and I tried deleting all of the PNF files so that they get regenerated, but that too did not help.

    Does anyone know how I can reset Windows to at least try to automatically install drivers for new hardware before prompting me if it fails? Conversely, does anyone know how exactly one turns off automatic driver installation (so that it prompts with the wizard instead)? Thanks a lot.
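
    For anyone retracing the registry step described above, the two driver-signing policy values usually cited for XP are sketched below as a .reg fragment. This is a hedged illustration only: the value names and data types are recalled from XP-era documentation rather than taken from the question, the paths assume a live system (an offline hive mounted under Windows 7 appears under whatever temporary key name it was loaded as), and, as noted above, changing them did not resolve this particular case.

        Windows Registry Editor Version 5.00

        ; 0 = install silently ("Ignore"), 1 = warn, 2 = block
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Driver Signing]
        "Policy"=hex:00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Driver Signing]
        "BehaviorOnFailedVerify"=dword:00000000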

    Read the article

  • How do large blobs affect SQL delete performance, and how can I mitigate the impact?

    - by Max Pollack
    I'm currently experiencing a strange issue that my understanding of SQL Server doesn't quite mesh with. We use SQL Server as the file storage for our internal storage service, and our database has about half a million rows in it. Most of the files (86%) are 1 MB or under, but even on fresh copies of our database, where we simply populate the table with data for the purposes of a test, rows with large amounts of data stored in a BLOB frequently cause timeouts when our SQL Server is under load.

    My understanding of how SQL Server deletes rows is that it's a garbage-collection process, i.e. the row is marked as a ghost and is later removed by the ghost cleanup process after the changes are copied to the transaction log. This suggests to me that, regardless of the size of the data in the BLOB, row deletion should be close to instantaneous. However, when deleting these rows we are definitely experiencing large numbers of timeouts and astoundingly low performance. In our test data set, it's files over 30 MB that cause the issue. This is an edge case; we don't encounter these frequently, and even though we're looking into SQL Server FILESTREAM as a solution to some of our problems, we're trying to narrow down where these issues are originating from.

    We ARE performing our deletes inside of a transaction. We're also performing updates to metadata such as file-size stats, but these exist in a separate table away from the file data itself. Hierarchy data is stored in the table that contains the file information. Really, in the end it's not so much what we're doing around the deletes that matters; we just can't find any references to low delete performance on rows that contain a large amount of data in a BLOB. We are trying to determine if this is even an avenue worth exploring, or if it has to be one of our processes around the delete that's causing the issue.

    Are there any situations in which this could occur? Is it common for a database server to come to the point of complete timeouts when many of these deletes are occurring simultaneously? Is there a way to combat this issue if it exists?

    (cross-posted from StackOverflow)
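
    One quick way to separate "the delete itself is expensive" from "our surrounding process is the problem" is to run the purge in small batches, so that each transaction only has to lock and log a handful of large rows at a time. A rough T-SQL sketch follows; the table and column names (dbo.StoredFiles, FileData, IsMarkedForDeletion) are invented for illustration and are not from the original system:

        -- Hypothetical schema: dbo.StoredFiles(FileId int, FileData varbinary(max), IsMarkedForDeletion bit, ...)
        -- Deleting in small batches keeps each transaction's lock footprint and
        -- log volume small, even when individual rows hold 30 MB+ BLOBs.
        DECLARE @batchSize int = 100;

        WHILE 1 = 1
        BEGIN
            DELETE TOP (@batchSize)
            FROM dbo.StoredFiles
            WHERE IsMarkedForDeletion = 1;

            IF @@ROWCOUNT = 0
                BREAK;
        END

    If the timeouts disappear when the work is batched like this, the cost is most likely in logging and lock contention around the large rows rather than in the ghost cleanup task itself.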

    Read the article

  • Google Chrome issues

    - by Ben Hooper
    I'm an avid user of Chrome and would defend it to the death, but there is no denying that it has issues, and I've been experiencing a lot of them recently.

    Issues

    Stuck tabs
    Issue: Around 60% of the time (higher when the system has just started), when Chrome is launched, some or all of the pinned tabs and startup pages will get stuck in the counter-clockwise-rotating "contacting server" mode and never snap out of it.
    Fix: Quit and relaunch Chrome until they load properly (this really isn't good, as quitting and launching Chrome can and has cleared all pinned tabs).
    Extra information: If you stop the loading and re-enter the URL, the page will load perfectly. The number of pinned tabs or startup pages seems to be irrelevant, but I could be wrong.

    "Downloaded/out of"
    Issue: Each item in the download bar has a download status section, formatted as "downloaded/out of". Sometimes Chrome doesn't display the "out of" part.

    Collapsed settings windows
    Issue: In Chrome version 19(ish)+, settings are configured via overlaid / popup windows. Sometimes the window will open fully collapsed.
    Fix: Resize Chrome's window or open the developer tools.

    "Network error" with large downloads
    Issue: Sometimes, when downloading large files (500 MB+), Chrome will download the entire file, the download status will freeze (for example, "1 GB/1 GB") for a few minutes, report "Network error", and delete the .crdownload file.
    Extra information: The same file from the same website on the same computer downloads perfectly in other browsers. The website and file type seem to be irrelevant.

    Information: I've experienced some of these issues on my home PC and some on my work PC, both of which are Windows 7 Ultimate x64. The only thing that links them is my Google account (all settings are synced). Updating Chrome hasn't worked. Most of these issues presented themselves around version 17 and have continued right through to 21 (current). Uninstalling Chrome, deleting all data in %programFiles(x86)%\Google and %localAppData%\Google, and reinstalling hasn't worked. I have yet to see whether disabling all extensions would make a difference, but it's hard to diagnose as these issues don't occur 100% of the time.

    In my case, I don't know if there's an actual solution. I'm just curious as to whether anyone else is having similar issues to those that I'm experiencing. At least then I'll know it's an issue with Chrome itself.

    Read the article
