Search Results

Search found 15249 results on 610 pages for 'tv shows'.

Page 437/610

  • How can I fix my corrupted RAID1 ext4 partition on a Synology DS212 NAS?

    - by Neil
    I have two identical 3 TB disks that were in a RAID1 array, where one disk crashed. I replaced the failed disk, but not before the RAID partitions got messed up. I need to figure out how to restore the RAID array and get at my ext4 partition. Here are the properties of the surviving disk:

        # fdisk -l /dev/sda
        fdisk: device has more than 2^32 sectors, can't use all of them
        Disk /dev/sda: 2199.0 GB, 2199023255040 bytes
        255 heads, 63 sectors/track, 267349 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1      267350  2147483647+  ee  EFI GPT

        # parted /dev/sda print
        Model: ATA ST3000DM001-9YN1 (scsi)
        Disk /dev/sda: 3001GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Disk Flags:

        Number  Start   End     Size    File system     Name  Flags
         1      131kB   2550MB  2550MB  ext4                  raid
         2      2550MB  4698MB  2147MB  linux-swap(v1)        raid
         5      4840MB  3001GB  2996GB                        raid

    I replaced the failed drive and cloned the surviving drive to it so I have something to work with. I cloned the drives with dd if=/dev/sdb of=/dev/sda conv=noerror bs=64M, and now /dev/sda and /dev/sdb are identical. Here is the RAID information:

        # cat /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
        md1 : active raid1 sdb2[1]
              2097088 blocks [2/1] [_U]

        md0 : active raid1 sdb1[1]
              2490176 blocks [2/1] [_U]

        unused devices: <none>

    It seems that md2 is missing. Here is what testdisk 6.14-WIP finds:

        Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
        Current partition structure:
             Partition                  Start        End    Size in sectors
         1 P Linux Raid                   256    4980735    4980480 [md0]
         2 P Linux Raid               4980736    9175039    4194304 [md1]
        Invalid RAID superblock
         5 P Linux Raid               9453280 5860519007 5851065728
         5 P Linux Raid               9453280 5860519007 5851065728

        # After a quick search
        Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
             Partition                  Start        End    Size in sectors
         D MS Data                        256    4980607    4980352 [1.41.12-2197]
         D Linux Raid                     256    4980735    4980480 [md0]
         D Linux Swap                 4980736    9174895    4194160
         D Linux Raid                 4980736    9175039    4194304 [md1]
        >P MS Data                    9481056 5858437983 5848956928 [1.41.12-2228]

    Listing the files on the last partition in the list shows all of my files intact. What should I do?
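
    A hedged recovery sketch (device names come from the output above; the metadata version is an assumption, since Synology models vary): because testdisk can already list the files, the ext4 data is probably intact and only the md2 superblock is gone. One cautious path is to recreate md2 as a degraded RAID1 over the data partition and verify it read-only before touching anything:

        # Recreate md2 over the data partition; --assume-clean skips the initial resync,
        # and "missing" keeps the second slot empty so nothing else is written.
        mdadm --create /dev/md2 --level=1 --raid-devices=2 --assume-clean /dev/sda5 missing
        fsck.ext4 -n /dev/md2        # read-only filesystem check first
        mount -o ro /dev/md2 /mnt    # then mount read-only and verify the files

    One caution: if DSM created the original array with 0.90 superblocks (stored at the end of the partition), recreating with a newer mdadm default (1.2, stored near the start) can clobber the beginning of the filesystem, so check and pass --metadata=0.90 if that is what the surviving arrays use.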

  • Ubuntu 9.04 PPTP broken after a power failure

    - by kevin42
    I have a small Ubuntu 9.04 router set up as a NAT box and a PPTP server. After a power failure, everything except the PPTP server still works. A Windows client gets to "registering your computer on the network" but then says: Error 742: The remote computer does not support the required data encryption type.

    I did some research and I think the problem is with the ppp_mppe module. When I try to run modprobe ppp_mppe, it hangs indefinitely. What would cause this hang? Any ideas how I can troubleshoot this further? Thanks for the help!

    UPDATE: I am still having the problem; however, I have found some more information. When the first user tries to connect to PPTP, the process list shows modprobe sha1 running, and one instance of modprobe ppp_mppe for each connection attempt. If I killall modprobe at this point, the next connection attempt works, and everything is fine until the next reboot. I'm planning to do a clean install at some point in the future, but I'd really like to get to the real cause of this.
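
    A hedged workaround sketch (module names are the ones visible in the process list above; that sha1 is the blocker is an assumption): if modprobe ppp_mppe only hangs while modprobe sha1 is still running, preloading the crypto module at boot may sidestep the race entirely:

        # Load the crypto dependency first, then MPPE, at every boot
        echo sha1     >> /etc/modules
        echo ppp_mppe >> /etc/modules
        # Or test immediately without rebooting:
        modprobe sha1 && modprobe ppp_mppe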

  • VMware Converter - Remote Linux Install P2V

    - by Zed Said
    I am trying to get what I thought was an easy process working. Here is my situation: I have a remote Linux server (running Debian) that I would like to turn into a VM so that I can do testing on it rather than on the production server. I downloaded VMware Converter, went to the "Convert machine" wizard, chose "Powered-on machine", and entered my remote machine's details. Clicking Next shows that it correctly connects to my remote Linux box. Now, on the next screen, the Destination type is set to "VMware Infrastructure virtual machine", and I can't change it. This is where I am stuck. Why do I need another server to convert with? Is this where my VM gets sent to? And why can't I just convert the VM and save it on the computer that I am running the Converter program on? If I do have to use a VMware Infrastructure destination, I am confused about what ESX is and why and how I use it. Any help with this process would be more than appreciated!

  • Poor home office network performance and cannot figure out where the issue is

    - by Jeff Willener
    This is the most bizarre issue. I have worked with small to mid-size networks for quite a long time and can say I'm comfortable connecting hardware. Where you will start to lose me is with managed switches and firewalls. To start, let me describe my network (sigh, I shouldn't have to, but I MUST solve this).

    1) Comcast cable internet
    2) Motorola SURFboard eXtreme cable modem
       a) Model: SB6120
       b) DOCSIS 3.0 and 2.0 support
       c) IPv4 and IPv6 support
    3-A) Cisco Small Business RV220W Wireless-N firewall
       a) Latest firmware
       b) Model: RV220W-A-K9-NA
       c) WAN port to modem (2)
       d) vlan 1: work
       e) vlan 2: everything else
    3-B) D-Link DIR-615 Draft 802.11N wireless router
       a) Latest firmware
       b) WAN port to modem (2)
    4) Servers connected directly to firewall
       a) If firewall 3-A, then vlan 1
       b) CAT5e patch cables
       c) Dell PowerEdge 1400SC w/ 10/100 integrated NIC (domain controller, DNS, former DHCP)
       d) Dell PowerEdge 400SC w/ 10/100/1000 integrated NIC (VMware Server)
    4) Linksys EZXS88W unmanaged workgroup 10/100 switch
       a) If firewall 3-A, then vlan 2
       b) 25' CAT5e patch cable to firewall (3-A or 3-B)
       c) Connects Xbox 360, Blu-ray player, PC at TV
    5) Office equipment connected directly to firewall
       a) If firewall 3-A, then vlan 1
       b) ~80' CAT6 or CAT5e patch cable to firewall (3-A or 3-B)
       c) Connects:
          1) Dell Latitude laptop 10/100/1000
          2) Dell Inspiron laptop 10/100
          3) Dell workstation 10/100/1000 (pristine host; VMware Workstation 7.x with many bridged VMs)
          4) Brother laser printer 10/100
          5) Epson All-In-One WorkForce 310 10/100
    5-A) NetGear FS116 unmanaged 10/100 switch
       a) I've had this switch for a long time and never had issues.
    5-B) NetGear GS108 unmanaged 10/100/1000 switch
       a) Bought new for this issue and returned.
    5-C) Linksys SE2500 unmanaged 10/100/1000 switch
       a) Bought new for this issue and returned.
    5-D) TP-Link TL-SG10008D unmanaged 10/100/1000 switch
       a) Bought new for this issue and still have.
    6) VLAN 1 wireless connections (on same subnet if 3-B)
       a) Any of those at 5c
       b) HP laptop
    7) VLAN 2 wireless connections (on same subnet if 3-B)
       a) iPad, iPod
       b) Compaq laptop
       c) Epson wireless printer

    Shew; without hosting a diagram, I hope that paints a good picture.

    The issue: the breakdown here is at item 5. No matter what I do, I cannot have a switch at 5, and I have to run everything wireless regardless of router.

    Issues related to using a switch (point 5 above):
    - SpeedTest is good.
    - Poor throughput to other devices, if they can communicate at all.
    - Usually cannot ping other devices even on the same switch, although when it works, ping times are good.
    - Eventual loss of connectivity, which can "sometimes" be restored by unplugging everything for several days; not minutes or hours, we're talking a week, if at all.
    - Connecting directly to a computer gives a good internet connection, but throughput to other devices connected to the firewall is at best horrible. Yet printing doesn't seem to be an issue as long as the printers are connected via wireless.
    - I have to force the RV220W to 1000 Mb on the respective port if using a gigabit switch.

    Issues related to using wireless in place of a switch (point 5 above):
    - Poor throughput to other devices, if they can communicate.
    - SpeedTest is good.

    Bottom line:
    - Internet speeds are awesome. By the way, Comcast went WAY above and beyond to make sure it was not them; they rewired EVERYTHING, which did solve internet drops.
    - Computer-to-computer connections are garbage.
    - Cannot get a switch at 5 to work, yet the one at 4 has never had an issue.
    - A direct connection, bypassing the switch, is good for DHCP and internet.
    - DNS must be on the server, not the firewall.

    Cisco insists it's my switches, but as you can see I have used four of them and two different cables with the same result. My gut feeling is that something is happening with routing, but I'm not smart enough to know the answer. I run a lot of VMs at 5-c-3; could that cause it? What's different compared to my previous house is that I have introduced gigabit hardware (firewall/switches/computers). Some of my computers might have IPv6 turned on if I haven't turned it off already. I'm truly at a loss and hope anyone has some crazy idea how to solve this. Bottom line: I need a switch in my office behind the firewall. I've changed everything. The real crux is that I will find a working solution and, again, after days it will stop working. So this means I cannot isolate whether it's a computer, since I have to use them. Oh, and a solution is not throwing more money at this; I'm well into $1k already. Yah, lame.
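
    A hedged first diagnostic (the interface name is a placeholder): with many bridged VMs at 5-c-3, one misbehaving virtual NIC can flood an unmanaged switch until it melts down, which would match the "works for days, then dies" pattern. Capturing broadcast traffic from any machine on the office switch while things degrade can show whether a single MAC is the source:

        # Watch for ARP/broadcast storms; a single chatty MAC address is a strong lead
        tcpdump -ni eth0 broadcast or multicast

    If one host dominates the capture, disabling its VMware bridged adapters (or IPv6) and watching for a week would be the next isolation step.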

  • How to remotely open gedit with SFTP URL in Gnome through SSH?

    - by Álvaro Justen
    My setup is weird and I can't change it now. I have two machines:

    - local-machine: my desktop, running Ubuntu with Gnome
    - remote-machine: a virtual machine, also running Ubuntu but without X

    On both machines I have my private and public SSH keys. I need to SSH from remote-machine to local-machine and run gedit (on local-machine, under the default $DISPLAY) but opening a file on remote-machine through SFTP. Something like this:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 gedit sftp://remote-machine/some/file"

    The command above doesn't work. gedit shows this message:

        Could not open the file sftp://remote-machine/some/file.
        gedit cannot handle sftp: locations.

    Note that:

    - /some/file exists on remote-machine.
    - I can SSH normally from remote-machine to local-machine using my SSH key without any problems!
    - I can run the command DISPLAY=:0.0 gedit sftp://remote-machine/some/file in a terminal on local-machine and gedit opens the file on remote-machine without any problems; but the terminal in which I executed the command is running in DISPLAY :0 (really, it's gnome-terminal).
    - I also tried the -t option of the SSH client (to force pseudo-tty allocation) but it didn't work.
    - If I run DISPLAY=:0.0 gedit sftp://remote-machine/some/file on local-machine but under a tty (for example in tty1, by pressing <Ctrl>+<Alt>+<F1>), it does not work; I get the same error as when running from remote-machine.

    I found that if I pass the environment variable DBUS_SESSION_BUS_ADDRESS with a correct value, it works! So, if I do something like this:

        myuser@local-machine:~$ env | grep DBUS_SESSION_BUS_ADDRESS > env.txt
        myuser@local-machine:~$ scp env.txt remote-machine:

    and then:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 $(cat env.txt) gedit sftp://remote-machine/some/file"

    it works! The problem is that I'm not on local-machine, so I can't get the correct value for this env variable. Is there any other way to make this work?
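
    A hedged sketch (using gnome-session as the process to inspect is an assumption about the default Gnome setup): the bus address can be recovered on local-machine itself from a running session process's environment, which removes the need to copy env.txt around:

        # Run from remote-machine: pull DBUS_SESSION_BUS_ADDRESS out of the live session
        ssh local-machine 'pid=$(pgrep -u "$USER" gnome-session | head -1); \
          export $(tr "\0" "\n" < /proc/$pid/environ | grep DBUS_SESSION_BUS_ADDRESS); \
          DISPLAY=:0.0 gedit sftp://remote-machine/some/file'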

  • Unreadable corrupted NTFS partition - lost clusters reported

    - by Eduardo Martinez
    Partition Magic is reporting multiple 'bad file record signature' and 'lost clusters' errors on my 250 GB Samsung SATA disk (connected via USB on an XP SP3 machine). Unfortunately PM is unable to fix them. PM shows the drive as being NTFS and detects the used space and drive name correctly, but the PM browser (right-click on partition, Browse...) won't show anything, as if the disk were empty.

    Windows Explorer is not even picking up the drive name and reports 'the file or directory is corrupted and unreadable'.

    PTDD Partition Table Doctor demo tells me the boot sector is fine, and I can see all the disk content in its browser; but crucially I cannot copy that content over to a new disk (the PTDD browser is pretty arid, to say the least).

    Also tried:

    - photorec-6.11.3: it actually started to extract files but wouldn't keep file names or any folder structure (maybe I missed something in the configuration options).
    - Find and Mount: the intellectual scan went well and the only partition on the disk was detected, but when I tried to mount it as p: I got this error in Windows Explorer: 'p:\ is not accessible. The media is write protected.' Find and Mount allows you to create an image from the partition, but I don't have a disk big enough at hand. Does anyone know if this will keep the extracted files/folders structure intact?

    I'm starting to think the disk is pretty screwed and my chances of recovering this data are slim. Please someone enlighten me with that marvellous piece of software I am missing :-) Thanks in advance.
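
    To answer the imaging question with a hedged sketch (device and paths are placeholders, and this assumes a Linux box or live CD is available): a sector-level image preserves the whole filesystem, so file and folder structure survives; anything that can read NTFS can then work on the copy instead of the failing disk:

        # Image the failing disk to a file, keeping a map file so the run can resume
        ddrescue /dev/sdb disk.img rescue.log
        # Try the image read-only; the original disk is never touched again
        mount -o ro,loop -t ntfs-3g disk.img /mnt/recovered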

  • VBoxHeadless without a command prompt (VirtualBox)

    - by joe
    I'm trying to run VirtualBox VMs in the background from a service. I'm having trouble starting a process the way I desire. I'd like to start the VirtualBox guest in headless mode as a separate process and show nothing as far as GUI. Here's what I've tried.

    From the command line:

        start vboxheadless -s "Ubuntu Server"

    In C#:

        ProcessStartInfo info = new ProcessStartInfo
        {
            UseShellExecute = false,
            RedirectStandardOutput = true,
            ErrorDialog = false,
            WindowStyle = ProcessWindowStyle.Hidden,
            CreateNoWindow = true,
            FileName = "C:/program files/sun/virtualbox/vboxheadless",
            Arguments = "-s \"Ubuntu Server\""
        };

        Process p = new Process();
        p.StartInfo = info;
        p.Start();
        String output = p.StandardOutput.ReadToEnd(); // BLOCKS! (output stream isn't closed)

    I want to be able to get the output to know whether starting the server was a success. However, it seems as though the window that's spawned never closes its output stream. It's also worth mentioning that I've tried using vboxmanage startvm "Ubuntu Server" --type=vrdp. I can determine whether the server started properly using this, but it shows a new command prompt window for the newly started VirtualBox guest.
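
    A hedged sketch (paths follow the question; treating VBoxManage's exit code as the success signal is the key assumption): VBoxHeadless keeps running for the life of the guest, so its stdout never closes, while VBoxManage startvm returns promptly. Waiting on VBoxManage instead avoids the blocked ReadToEnd, and when launched from a service session the spawned window should not reach the interactive desktop:

        var info = new ProcessStartInfo
        {
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = true,
            FileName = @"C:\Program Files\Sun\VirtualBox\VBoxManage.exe",
            Arguments = "startvm \"Ubuntu Server\" --type=vrdp"
        };
        using (var p = Process.Start(info))
        {
            string output = p.StandardOutput.ReadToEnd(); // safe: VBoxManage exits
            p.WaitForExit();
            bool started = p.ExitCode == 0; // 0 means the VM launched
        }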

  • Grant access for users on a separate domain to SharePoint

    - by Geo Ego
    Hello. I just completed development of a SharePoint site on a virtual server and am currently in the process of granting users from a different domain access to the site. The SharePoint domain is SHAREPOINT, and the domain with the users I want to give access to is COMPANY. I have provided them with a link to the site and added them as users via SharePoint, which is all I thought I would need to do. However, when they go to the link, the site shows them a SharePoint error page. In the security event log, I am seeing the following:

        Event Type:        Failure Audit
        Event Source:      Security
        Event Category:    Object Access
        Event ID:          560
        Date:              3/18/2010
        Time:              11:11:49 AM
        User:              COMPANY\ThisUser
        Computer:          SHAREPOINT
        Description:
        Object Open:
          Object Server:      Security Account Manager
          Object Type:        SAM_ALIAS
          Object Name:        DOMAINS\Account\Aliases\00000404
          Handle ID:          -
          Operation ID:       {0,1719489}
          Process ID:         416
          Image File Name:    C:\WINDOWS\system32\lsass.exe
          Primary User Name:  SHAREPOINT$
          Primary Domain:     COMPANY
          Primary Logon ID:   (0x0,0x3E7)
          Client User Name:   ThisUser
          Client Domain:      PRINTRON
          Client Logon ID:    (0x0,0x1A3BC2)
          Accesses:           AddMember RemoveMember ListMembers ReadInformation
          Privileges:         -
          Restricted Sid Count: 0
          Access Mask:        0xF

    Then, four of these in a row:

        Event Type:        Failure Audit
        Event Source:      Security
        Event Category:    Object Access
        Event ID:          560
        Date:              3/18/2010
        Time:              11:12:08 AM
        User:              NT AUTHORITY\NETWORK SERVICE
        Computer:          SHAREPOINT
        Description:
        Object Open:
          Object Server:      SC Manager
          Object Type:        SERVICE OBJECT
          Object Name:        WinHttpAutoProxySvc
          Handle ID:          -
          Operation ID:       {0,1727132}
          Process ID:         404
          Image File Name:    C:\WINDOWS\system32\services.exe
          Primary User Name:  SHAREPOINT$
          Primary Domain:     COMPANY
          Primary Logon ID:   (0x0,0x3E7)
          Client User Name:   NETWORK SERVICE
          Client Domain:      NT AUTHORITY
          Client Logon ID:    (0x0,0x3E4)
          Accesses:           Query status of service
                              Start the service
                              Query information from service
          Privileges:         -
          Restricted Sid Count: 0
          Access Mask:        0x94

    Any ideas what permissions I need to grant to the user to get them access to SharePoint?
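
    A hedged check (the stsadm property name is real for WSS 3.0/MOSS 2007; the URL and credentials are placeholders): when the users live in another domain or forest, the web application's people picker must be told where, and as whom, to search; otherwise cross-domain accounts can resolve oddly even after being added:

        stsadm -o setproperty -url http://sharepoint ^
            -pn peoplepicker-searchadforests ^
            -pv "domain:company.local,COMPANY\svc_sp_search,password"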

  • Why does bash sometimes think my $HOME isn't the correct directory?

    - by Adam Yanalunas
    Like the title says, it seems that bash sometimes misidentifies my $HOME. This cropped up after a seemingly unique series of events that I will now replay in broad strokes:

    - Running OS X 10.6 with a normal, local account
    - Work binds my account to Active Directory
    - Much time passes with no issues
    - Set up rvm to manage Ruby installs (this becomes important later)
    - Upgraded to OS X 10.7 a few days ago
    - After a successful install, attempted to log in; was presented with a "Must reset password" dialog that never allowed a password to be reset. It would simply shake the box after the new password was entered.
    - Much googling was done. Much more googling was done. Swearing was had.
    - Logged in as root, created a new account, set it as admin, deleted /Users/[new account], renamed /Users/[old account] to /Users/[new account]
    - Logged out of root, logged into the new account with no issues

    After OS X asked for my account password a few times to update the Keychain and other system-level stuff, it was back to business as usual. Opened Terminal, cd'd to a project folder, tried rails server, and was presented with:

        /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find rails (>= 0) amongst [] (Gem::LoadError)
            from /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
            from /usr/local/lib/ruby/1.9.1/rubygems.rb:1210:in `gem'
            from /usr/local/bin/rails:18:in `<main>'

    I ran through a few exercises and decided to rm -rf ~/.rvm and reinstall. Running the rvm installer with --trace shows it dies on this line:

        mkdir: /Users/[old account]: Permission denied

    Scrolling back through the --trace log, I see many more mentions of /Users/[old account]. When I inspect the install script, the offending line is looking at "${HOME}/.rvm" as it tries to run the mkdir. To my confusion, I also see mentions of /Users/[new account] in the log. I've tried exporting a new HOME in my .bash_profile with no luck. Can anyone guess why /Users/[old account] would still be kicking around?
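
    A hedged place to look (the dscl attribute name is the standard one; account names are placeholders): the shell inherits HOME from the directory record rather than from .bash_profile, and an Active Directory bind plus a folder rename can leave the record pointing at the old path. Checking what the directory service actually hands out takes one command:

        # What home directory does the account record claim?
        dscl . -read /Users/newaccount NFSHomeDirectory
        # If it still says /Users/oldaccount, repoint it:
        sudo dscl . -change /Users/newaccount NFSHomeDirectory /Users/oldaccount /Users/newaccount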

  • Enable RemoteApp Full Desktop programmatically

    - by Scott Chamberlain
    I am writing a PowerShell script to set up some Hyper-V VMs; however, there is one step I am having trouble automating. How do I check the box to allow remote desktop access from the RemoteApp settings programmatically? I can set up all of my customizations by doing:

        #build the security descriptor so the desktop only shows up for people who should be allowed to see it
        $remoteDesktopUsersSid = New-Object System.Security.Principal.SecurityIdentifier($remoteDesktopUsersGroup.objectSid[0],0)
        $aceTemplet = 'O:WDG:WDD:ARP(A;CIOI;CCLCSWLORCGR;;;{0})'
        $securityDescriptor = $aceTemplet -f $remoteDesktopUsersSid

        #get a copy of the WMI instance
        $tsRemoteDesktop = Get-WmiObject -Namespace root\CIMV2\TerminalServices -Class Win32_TSRemoteDesktop

        #set settings
        $tsRemoteDesktop.Name = $ServerDisplayName
        $tsRemoteDesktop.SecurityDescriptor = $securityDescriptor
        $tsRemoteDesktop.IconPath = $IconPath
        $tsRemoteDesktop.IconIndex = $IconIndex

        #push settings back to server
        Set-WmiInstance -InputObject $tsRemoteDesktop -PutType UpdateOnly

    However, the instance of that WMI object does not exist until after you have the above box checked. I attempted to use Set-WmiInstance to instantiate and set the settings at the same time, but I keep getting errors like:

        Set-WmiInstance :
        At line:53 char:16
        + Set-WmiInstance <<<< -Namespace root\CIMV2\TerminalServices -Class Win32_TSRemoteDesktop -Arguments @{Alias='TSRemoteDesktop';Name=$ServerDisplayName;ShowInPortal=$true;SecurityDescriptor=$securityDescriptor}
            + CategoryInfo          : NotSpecified: (:) [Set-WmiInstance], ArgumentException
            + FullyQualifiedErrorId : System.ArgumentException,Microsoft.PowerShell.Commands.SetWmiInstance

    (Also, after running the command and getting the error, it will delete the instance of Win32_TSRemoteDesktop if it already existed and un-check the box in the properties settings.) Is there any way to programmatically check that box, or can anyone help with why Set-WmiInstance throws that error?
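
    A hedged guard for the script above (it only reuses the class from the question and makes no new API claims): since the instance exists only once the box has been checked at least once, testing for it first lets the script fail loudly rather than tripping over Set-WmiInstance and silently reverting the setting:

        $existing = Get-WmiObject -Namespace root\CIMV2\TerminalServices -Class Win32_TSRemoteDesktop
        if ($existing -eq $null) {
            Write-Warning "Full-desktop RemoteApp has never been enabled here; enable it once in TS RemoteApp Manager first."
        }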

  • Oracle 10g for Windows does not start up on system boot

    - by Mike Dimmick
    We have an Oracle 10g Enterprise Edition installation (10.2.0.1.0) on a Windows Server 2003 virtual machine. It was initially created with Virtual Server 2005 R2 SP1 but has now been migrated to Windows Server 2008 Hyper-V. The services start on system boot, but the instance does not start up. This problem was actually occurring on Virtual Server after a migration from one server to another, but I managed to fix it then with:

        oradim -edit -sid ORCL -startmode auto

    However, this now has no effect. oradim.log (in %OracleHome%\database\oradim.log) says:

        Thu Jun 10 14:14:48 2010
        C:\oracle\product\10.2.0\db_3\bin\oradim.exe -startup -sid orcl -usrpwd * -log oradim.log -nocheck 0
        Thu Jun 10 14:14:48 2010
        ORA-12560: TNS:protocol adapter error

    sqlnet.log in the same folder has:

        Fatal NI connect error 12560, connecting to:
        (DESCRIPTION=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle)(ARGV0=oracleorcl)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=orcl)(CID=(PROGRAM=C:\oracle\product\10.2.0\db_3\bin\oradim.exe)(HOST=ORACLE-VM)(USER=SYSTEM))))

        VERSION INFORMATION:
            TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
            Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 - Production
        Time: 10-JUN-2010 14:14:48
        Tracing not turned on.
        Tns error struct:
            ns main err code: 12560
            TNS-12560: TNS:protocol adapter error
            ns secondary err code: 0
            nt main err code: 530
            TNS-00530: Protocol adapter error
            nt secondary err code: 2
            nt OS err code: 0

    The ORA_ORCL_AUTOSTART registry value is set to TRUE, so it should be auto-starting, and you can see that it's trying to. The problem also occurs when stopping and restarting the OracleServiceORCL service. I've enabled SQL*Net tracing, which shows:

        [10-JUN-2010 15:09:33.919] snlpcss: entry
        [10-JUN-2010 15:09:34.419] snlpcss: Unable to spawn Oracle oracle (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) orcl, error 2.
        [10-JUN-2010 15:09:34.419] snlpcall: exit

    On a hunch that error 2 is Windows error 2 (file not found), I tried restarting the service with Process Monitor watching oradim.exe, but this appears to delay things just enough that it always works. Right now I have a horrible hack where I've created a Scheduled Task to run oradim -startup -sid ORCL when the Administrator account logs on, and set the VM to auto-logon. I'd still like to work out why it's not working.
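
    A hedged stopgap that is less fragile than auto-logon (the service name comes from the question; the choice of dependency is an assumption): since any small startup delay makes it work, declaring a service dependency forces the Oracle service to start later in the boot sequence:

        sc config OracleServiceORCL depend= LanmanServer

    (The space after depend= is required by sc.) This does not explain the underlying race, but it moves the workaround out of the interactive session.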

  • MySQL tmpdir on /dev/shm with SELinux

    - by smorfnip
    On RHEL 5, I have a small MySQL database that has to write temp files. To speed up this process, I would like to move the temporary directory to /dev/shm by putting the following line into my.cnf:

        tmpdir=/dev/shm/mysqltmp

    I can create /dev/shm/mysqltmp just fine and do:

        chown mysql:mysql /dev/shm/mysqltmp
        chcon --reference /tmp/ /dev/shm/mysqltmp

    I've tried to make SELinux happy by applying the same settings that are in effect for /tmp/ (and /var/tmp/), which is presumably where MySQL writes its tmp files if tmpdir is undefined. The problem is that SELinux complains about MySQL having access to that directory. I get the following in /var/log/messages:

        SELinux is preventing mysqld (mysqld_t) "getattr" to /dev/shm (tmpfs_t).

    SELinux is a hard mistress. Details:

        Source Context        root:system_r:mysqld_t
        Target Context        system_u:object_r:tmpfs_t
        Target Objects        /dev/shm [ dir ]
        Source                mysqld
        Source Path           /usr/libexec/mysqld
        Port                  <Unknown>
        Host                  db.example.com
        Source RPM Packages   mysql-server-5.0.77-3.el5
        Target RPM Packages
        Policy RPM            selinux-policy-2.4.6-255.el5_4.1
        Selinux Enabled       True
        Policy Type           targeted
        MLS Enabled           True
        Enforcing Mode        Enforcing
        Plugin Name           catchall_file
        Host Name             db.example.com
        Platform              Linux db.example.com 2.6.18-164.2.1.el5 #1 SMP Mon Sep 21 04:37:42 EDT 2009 x86_64 x86_64
        Alert Count           46
        First Seen            Wed Nov  4 14:23:48 2009
        Last Seen             Thu Nov  5 09:46:00 2009
        Local ID              e746d880-18f6-43c1-b522-a8c0508a1775

    ls -lZ /dev/shm shows:

        drwxrwxr-x  mysql  mysql  system_u:object_r:tmp_t    mysqltmp

    and the permissions for /dev/shm itself are:

        drwxrwxrwt  root   root   system_u:object_r:tmpfs_t  shm

    I've also tried chcon -R -t mysqld_t /dev/shm/mysqltmp and setting the group on /dev/shm to mysql, with no better results. Shouldn't it be enough to tell SELinux, hey, this is a temp directory just like MySQL was using before? Short of turning off SELinux, how do I make this work? Do I need to edit SELinux policy files?
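
    A hedged sketch with the stock RHEL 5 tooling (mysqld_tmp_t exists in the targeted policy; whether it fully covers the tmpfs traversal denial is the assumption): generating a small local module from the actual denials, plus a persistent label, usually beats hand-run chcon:

        # Build and load a local policy module from the logged denials
        grep mysqld /var/log/audit/audit.log | audit2allow -M mysqltmp
        semodule -i mysqltmp.pp
        # Record the label persistently; note tmpfs is recreated at boot,
        # so restorecon must re-run (e.g. from rc.local) after each mount
        semanage fcontext -a -t mysqld_tmp_t "/dev/shm/mysqltmp(/.*)?"
        restorecon -Rv /dev/shm/mysqltmp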

  • CD and DVD burning very slow with LG DVD Burner

    - by Om Nom Nom
    I have an HP m7560in desktop which came with an LG DVD burner. As far back as I can remember, it has had problems burning discs. It used to take about an hour and a half to burn 4 GB of data to a blank DVD-R. I use Nero and ImgBurn, and I usually select 8x speed for burning. I used to run XP and now run Windows 7 on it, but I face the problem on both. I have already checked that DMA mode is selected in the IDE controllers' properties.

    On XP, uninstalling the IDE controllers and DVD drive from Device Manager, rebooting, and letting it reinstall the drivers fixed the problem for a while, but after a couple of reboots the problem used to come back. On Windows 7, even doing that does not fix the problem. In fact, on Windows 7 the burning process doesn't even begin: ImgBurn gets stuck on the first stage (writing Lead-In) and never progresses until I turn off or reset the computer. I checked the disc info after this happened, and the disc still shows as empty and ready to be burned on, so it obviously never even touched the disc.

    I have already had the drive replaced once by HP, but it didn't fix the problem. Do you guys have any suggestions? (Note that Windows Update doesn't offer any newer driver for the controllers, or any other components for that matter.)
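
    A hedged classic to try (the GUID is the standard CD/DVD device class key; back up the registry first): stale filter drivers left behind by burning software are a common cause of burns that hang at Lead-In, and removing the filter values for the optical-drive class, then rebooting, lets Windows rebuild them:

        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v UpperFilters /f
        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v LowerFilters /f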

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync + cp -al method to create incremental/snapshot backups of our server tree. The backups go onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this.

    The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance bottleneck is the disk (top shows cp using a lot of RAM but not a lot of CPU, and mostly in uninterruptible-sleep state).

    We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance, because it's unusable as it stands.

    The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.
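
    A hedged third option (paths and dates are placeholders): rsync can build the hard-link snapshot itself with --link-dest, merging the cp -al pass and the transfer into one tree walk; unchanged files become hard links against yesterday's snapshot and only changed files are copied:

        # One pass per volume: link against the previous snapshot, copy the deltas
        rsync -a --delete --link-dest=/backup/vol1/2011-06-01 \
            server:/vol1/ /backup/vol1/2011-06-02/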

  • Java plugin in the browser doesn't work even though it is enabled

    - by Pratyush Nalam
    I installed the Java Development Kit (64-bit) recently and saw that it includes the JRE plugin for 64-bit as well. But since Firefox is 32-bit, I also installed the 32-bit version of the JRE. Both show up in Programs and Features.

    Now, the problem: the other day I opened a site which required the Java plugin. The frame showed the regular Java loading animation and hung; nothing happened after that. I checked Firefox's plugins section, and it shows Java as enabled, so no issue there. I tried other browsers, IE10 and Chrome, but to no avail; it doesn't work anywhere.

    I saw another question which said that you have to install 64-bit then 32-bit. That's what I actually did: first installed JDK 7 64-bit (which includes JRE 7 64-bit) and then installed JRE 7 32-bit. I even tried the Java website's "Do I have Java?" section, and there too it just keeps spinning for ages (I have waited more than 10-20 seconds). How do I go about this now? This never happened to me on Windows 7. I am on Windows 8 Pro.
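
    A hedged first check (the paths are the default install locations): confirming that the 32-bit JRE answers from a console separates a broken install from a broken browser hookup, since the browsers in question all load the 32-bit plugin:

        "%ProgramFiles(x86)%\Java\jre7\bin\java.exe" -version

    If that prints a version banner, the runtime itself is fine and attention shifts to the deployment layer (the Java Control Panel, javacpl.exe, under the 32-bit install).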

  • mod_rewrite: redirect from subdomain to main domain

    - by Bald
    I have two domains, domain.com and forum.domain.com, that point to the same directory. I'd like to redirect all requests from forum.domain.com to domain.com (for example: forum.domain.com/foo to domain.com/forum/foo) without changing the address in the address bar (a hidden redirect). I wrote something like this and put it into the .htaccess file:

        Options +FollowSymlinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^forum\.example\.net$
        RewriteRule (.*) http://example.com/forum/$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-s [NC]
        RewriteCond %{REQUEST_FILENAME} !-d [NC]
        RewriteRule ^(.+) index.php/$1 [L]

    That works only if I add the R (redirect) flag:

        RewriteRule (.*) http://example.com/forum/$1 [R,L]

    But that changes the address in the address bar.

    EDIT: OK, let's make it simple. I added these two lines at the end of c:\windows\system32\drivers\etc\hosts on my local computer:

        127.0.0.3 foo.net
        127.0.0.3 forum.foo.net

    Then I created two virtual hosts:

        <VirtualHost foo.net:80>
            ServerAdmin [email protected]
            ServerName foo.net
            DocumentRoot "C:/usr/src/foo"
        </VirtualHost>

        <VirtualHost forum.foo.net:80>
            ServerAdmin [email protected]
            ServerName forum.foo.net
            DocumentRoot "C:/usr/src/foo"
        </VirtualHost>

    ...and a directory called "foo", where I put two files. index.php:

        <?php
        echo $_SERVER['PATH_INFO'];
        ?>

    .htaccess:

        Options +FollowSymlinks
        RewriteEngine on
        RewriteBase /
        RewriteCond %{HTTP_HOST} ^forum\.foo\.net$
        RewriteCond %{REQUEST_URI} !^/forum/
        RewriteCond %{REQUEST_FILENAME} !-s [NC]
        RewriteCond %{REQUEST_FILENAME} !-d [NC]
        RewriteRule ^(.+)$ /index.php/forum/$1 [L]
        RewriteCond %{HTTP_HOST} !^forum\.foo\.net$
        RewriteCond %{REQUEST_FILENAME} !-s [NC]
        RewriteCond %{REQUEST_FILENAME} !-d [NC]
        RewriteRule ^(.+) index.php/$1 [L]

    When I type http://forum.foo.net/test in the address bar, it displays /forum/test, which is good. http://foo.net/a/b/c shows /a/b/c, which is good. But! http://forum.foo.net/ displays an empty value (it should display /forum).
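
    A hedged tweak building on the .htaccess above: the pattern ^(.+)$ requires at least one character after the leading slash, so a bare http://forum.foo.net/ matches neither rule. A dedicated empty-path rule for that host covers the remaining case:

        RewriteCond %{HTTP_HOST} ^forum\.foo\.net$
        RewriteRule ^$ /index.php/forum [L]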

  • Thunderbird doesn't show folders on a new Dovecot install

    - by Zoran Zaric
    Hey, I set up a new mail server with Postfix and Dovecot some days ago; everything is working except that Thunderbird shows no folders. Evolution shows all folders. I migrated from a Courier install using imapsync. On the filesystem, the folders don't have INBOX in their names, so the folders are called .Folder 1, not .INBOX.Folder 1. This is the output of dovecot -n:

        # 1.0.10: /etc/dovecot/dovecot.conf
        Warning: mail_extra_groups setting was often used insecurely so it is now deprecated, use mail_access_groups or mail_privileged_group instead
        base_dir: /var/run/dovecot/
        log_timestamp: "%Y-%m-%d %H:%M:%S "
        protocols: imap pop3
        listen(default): *:143
        listen(imap): *:143
        listen(pop3): *:110
        disable_plaintext_auth: no
        login_dir: /var/run/dovecot//login
        login_executable(default): /usr/lib/dovecot/imap-login
        login_executable(imap): /usr/lib/dovecot/imap-login
        login_executable(pop3): /usr/lib/dovecot/pop3-login
        first_valid_uid: 1001
        last_valid_uid: 1001
        mail_extra_groups: vmail
        mail_access_groups: vmail
        mail_location: maildir:/var/vmail/%d/%u
        maildir_copy_with_hardlinks: yes
        mail_executable(default): /usr/lib/dovecot/imap
        mail_executable(imap): /usr/lib/dovecot/imap
        mail_executable(pop3): /usr/lib/dovecot/pop3
        mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
        mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
        mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
        pop3_uidl_format(default):
        pop3_uidl_format(imap):
        pop3_uidl_format(pop3): %08Xu%08Xv
        auth default:
          user: nobody
          passdb:
            driver: sql
            args: /etc/dovecot/dovecot-sql.conf
          userdb:
            driver: sql
            args: /etc/dovecot/dovecot-sql.conf
          socket:
            type: listen
            client:
              path: /var/spool/postfix/private/auth
              mode: 432
              user: postfix
              group: postfix
            master:
              path: /var/run/dovecot/auth-master
              mode: 432
              user: vmail
              group: vmail

    Thanks!
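
    A hedged guess with a sketch (Dovecot 1.0 syntax, matching the version in the output above): clients that were configured against Courier often have "IMAP server directory" set to INBOX., or expect the INBOX. prefix that Courier always uses. Either clear that setting in Thunderbird's account (Server Settings, Advanced) or re-expose the old namespace in dovecot.conf:

        namespace private {
          separator = .
          prefix = INBOX.
          inbox = yes
        }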

  • Bad Performance when SQL Server hits 99% Memory Usage

    - by user15863
    I've got a server that reports its 8 GB of RAM 99% used. When I restart SQL Server, usage drops to about 5%, but it gradually builds back up to 99% over about 2 hours. The SQL Server process itself is reported as using only about 100k of RAM, and that number rarely moves up or down by much. In fact, if I add up all the processes in Task Manager, they barely scratch the surface of the total available (yet Task Manager still shows 99% memory usage with all processes shown). It appears that SQL Server has a huge memory leak going on that it isn't reporting. The server ran fine for nearly two years; this only started to manifest in the last 3-4 weeks. Has anyone seen this or have any insight into the problem?

    EDIT: When the server hits 99%, performance goes downhill. All queries to the server, apps, etc. slow to a crawl. Restarting the service makes things zippy again, until 2 hours have passed and the server hits 99% once more.
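
    A hedged sketch (the cap value is a placeholder to size for your workload): SQL Server allocates memory that Task Manager does not attribute to its process (especially with locked/AWE pages), and by default it grows until the OS starves; capping max server memory leaves headroom:

        -- Leave roughly 2 GB for the OS on an 8 GB box (adjust to taste)
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 6144; RECONFIGURE;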

  • How to remove strict RSA key checking in SSH and what's the problem here?

    - by setatakahashi
    I have a Linux server that, whenever I connect, shows me a message saying that the SSH host key changed:

        $ ssh root@host1
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
        Someone could be eavesdropping on you right now (man-in-the-middle attack)!
        It is also possible that the RSA host key has just been changed.
        The fingerprint for the RSA key sent by the remote host is
        93:a2:1b:1c:5f:3e:68:47:bf:79:56:52:f0:ec:03:6b.
        Please contact your system administrator.
        Add correct host key in /home/emerson/.ssh/known_hosts to get rid of this message.
        Offending key in /home/emerson/.ssh/known_hosts:377
        RSA host key for host1 has changed and you have requested strict checking.
        Host key verification failed.

    It keeps me logged in for a very few seconds and then closes the connection:

        host1:~/.ssh # Read from remote host host1: Connection reset by peer
        Connection to host1 closed.

    Does anyone know what's happening and what I could do to solve this problem?
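
    A hedged cleanup sketch (the hostname and line number come from the warning itself): if the key change is expected (reinstall, new hardware), removing the stale entry clears the warning; the disconnect after login is likely a separate server-side issue:

        # Drop the cached key for host1 and accept the new one on the next connect
        ssh-keygen -R host1
        # Equivalent manual fix, per the warning's line number:
        sed -i '377d' ~/.ssh/known_hosts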

  • Subversion 1.6 + SASL : Only works with plaintext 'userPassword'?

    - by SiegeX
    I'm attempting to set up svnserve with SASL support on my Slackware 13.1 server, and after some trial and error I'm able to get it to work with the configuration listed below.

    svnserve.conf:

        [general]
        anon-access = read
        auth-access = write
        realm = myrepo

        [sasl]
        use-sasl = true
        min-encryption = 128
        max-encryption = 256

    /etc/sasl2/svn.conf:

        pwcheck_method: auxprop
        auxprop_plugin: sasldb
        sasldb_path: /etc/sasl2/my_sasldb
        mech_list: DIGEST-MD5

    sasldb users:

        $ sasldblistusers2 -f /etc/sasl2/my_sasldb
        test@myrepo: cmusaslsecretOTP
        test@myrepo: userPassword

    You'll notice that the output of sasldblistusers2 shows my test user as having both an encrypted cmusaslsecretOTP password and a plaintext userPassword; i.e., if I run strings /etc/sasl2/my_sasldb, I see the test user's password in plaintext. These two password entries were created with the following subversion-book-recommended command:

        saslpasswd2 -c -f /etc/sasl2/my_sasldb -u myrepo test

    After reading man saslpasswd2, I see the following option:

        -n    Don't set the plaintext userPassword property for the user.
              Only mechanism-specific secrets will be set (e.g. OTP, SRP)

    This is exactly what I want to do: suppress the plaintext password and keep only the mechanism-specific secret (OTP in my case). So I cleared out /etc/sasl2/my_sasldb and reran saslpasswd2 as:

        saslpasswd2 -n -c -f /etc/sasl2/my_sasldb -u myrepo test

    I then followed up with sasldblistusers2 and saw:

        $ sasldblistusers2 -f /etc/sasl2/my_sasldb
        test@myrepo: cmusaslsecretOTP

    Perfect, I thought, now I have only encrypted passwords... only neither the Linux svn client nor the Windows TortoiseSVN client can connect to my repo anymore. They both present me with the user/pass challenge, but that's as far as I get.

    TL;DR: So, what is the point of SVN supporting SASL if my sasldb must store its passwords in plaintext to work?
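
    A hedged explanation plus mitigation (paths are from the question; the group name is a placeholder): DIGEST-MD5 verifies a hash that the client computes from the plaintext password, so the server needs a plaintext-equivalent secret on its side; an OTP secret alone cannot satisfy it, which is why the -n database fails the challenge. The usual compromise is to lock the file down rather than suppress the secret:

        # sasldb still holds plaintext-equivalent secrets; restrict who can read it
        chown root:svn /etc/sasl2/my_sasldb
        chmod 640 /etc/sasl2/my_sasldb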

  • Sendmail delivering locally instead of to MTA in MX record

    - by CreativeNotice
    OK, so I've got a box named websrv1.mydomain.com. It's a web server running Ubuntu, Apache 2, sendmail, etc. My email is outsourced to a third party, so in my DNS I've got MX set to mx.thirdparty.net. I have no reason to accept incoming mail on my web server; every email should be sent to the third party. This works correctly except when sending mail from the web server itself (i.e., via cron or the console).

    From my web server, if I send an email to [email protected], it just disappears: no errors, nothing in dead.letter, nothing. I can send to any other address with no issues. If I send to [email protected], it's delivered locally, which is fine.

    1) Doing an nslookup shows the MX record is correct.
    2) Running /mx mydomain.com from sendmail -bt returns the correct result.
    3) Running sendmail -bv [email protected] returns:

        $ sudo sendmail -bv [email protected]
        [email protected]... deliverable: mailer esmtp, host mydomain.com., user [email protected]

    4) Running 3,0 [email protected] in address test mode returns:

        > 3,0 [email protected]
        canonify           input: me @ mydomain . com
        Canonify2          input: me
        Canonify2        returns: me
        canonify         returns: me
        parse              input: me
        Parse0             input: me
        Parse0           returns: me
        Parse1             input: me
        MailerToTriple     input: me
        MailerToTriple   returns: me
        Parse1           returns: $# esmtp $@ mydomain . com . $: me
        parse            returns: $# esmtp $@ mydomain . com . $: me

    So I'm at a loss. Sendmail seems to see the MX record, but it's not using it.
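
    A hedged check (stock sendmail paths on Ubuntu): mail to the box's own domain silently going nowhere often means sendmail believes it IS mydomain.com and delivers locally to nonexistent users; membership in class $=w decides local-versus-MX delivery, so it is worth inspecting:

        # If mydomain.com shows up here, sendmail claims it as local
        grep -i mydomain.com /etc/mail/local-host-names
        # Dump what class w actually contains, via test mode
        echo '$=w' | sendmail -bt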

  • Making the iPhone work with stripped down Windows XP

    - by Gabriel
    Hi, this is my first time posting here, and I have a really specific question. I have an ASUS Eee 901 running Windows XP Home. I had everything working well, but then I decided to improve performance by moving Windows to the smaller but faster internal SSD. I used nLite to strip down Windows, following the instructions here: http://wiki.eeeuser.com/howto:nlitexp

    I now have a very lightweight installation of XP Home with SP3 and all the current updates. Almost everything is working really well. I CAN sync the iPhone with no problems. However, each time I plug in my iPhone 3GS (latest firmware), Windows tries and fails to install drivers. The Found New Hardware wizard launches, but nothing I do will make it complete successfully, with the result that the iPhone does not show up in Windows as removable storage or as a camera. When I launch the Camera and Scanner Wizard, it shows only my webcam, not the iPhone. I have verified that I have the following files in place:

        Windows\System32\ptpusb.dll      (regsvr32 successful)
        Windows\System32\ptpusd.dll      (entry point not found, cannot be registered)
        Windows\System32\usbaaplrc.dll   (entry point not found, cannot be registered)
        Windows\System32\drivers\usbaapl.sys
        Windows\System32\drivers\usbscan.sys
        Windows\System32\drivers\usbstor.sys

    Does anyone know if some other file is required, or if there's some other element preventing this from working?

    Edit (from posted answer): I did select Cameras & Camcorders in nLite, and my webcam is working fine for video and still capture.
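
    A hedged check (stisvc is the standard XP service name for Windows Image Acquisition): the camera/PTP side of the iPhone depends on WIA, which stripped-down nLite builds sometimes lose; confirming the service exists and runs is quick:

        sc query stisvc
        sc config stisvc start= auto
        net start stisvc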

  • Leopard Network Shares and browsing are unreliable

    - by EvilChookie
    I have two Macs running Leopard 10.5.8: a 13" MBP connected via WiFi, and a 24" 2008 iMac connected via Ethernet. There are at least another 6-10 machines (Windows and Mac) awake on the network (with shares) at any given time, yet there are plenty of times when I cannot see any devices/shares in the "Shared" section in Finder, nor any computers under "Network" in Finder. Restarting doesn't help. I've restarted all the networking gear in the house to no avail. Our network is a series of gigabit switches connected to a D-Link gaming router. I believe we use OpenDNS, and our provider is Cox. I hate having to use "Go > Connect to Server" to browse to commonly used file shares (by IP). I'd like to know why my shares do not always and consistently appear in Leopard.

    Edit: I ran OnyX this morning and performed the cleaning and maintenance operations (including disk permissions) on both my Macs, and at least one of them has started showing network devices again (the other is still going). No idea how long this will last. Any ideas as to what is causing this issue, and how to prevent it?

    Edit 2: Aaaand there the shares go again. So running OnyX is not a permanent or reliable fix for this issue.

    Edit 3: After a clean reinstall and update, network shares are still unreliable. The smbclient command mentioned in the comments shows me the information it's supposed to show, but the shares do not appear in the Shared section. They'll also vanish at random and reappear at random throughout the day.
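
    A hedged observation sketch (the service type string is the standard one): Finder populates the Shared sidebar from Bonjour/mDNS browsing, so watching the raw announcements while shares vanish can show whether discovery itself flaps or Finder just stops rendering it:

        # Leave running in Terminal; hosts should appear (Add) and disappear (Rmv) live
        dns-sd -B _smb._tcp local.

    If the dns-sd output stays correct while Finder's sidebar empties, the problem is in Finder; if entries genuinely drop, suspect the network (multicast across the switches/router).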

  • Drop database on DB2 9.5 - SQL1035N The database is currently in use

    - by Tommy
    I've never gotten this working on the first try, but now I can't seem to do it at all. There is a connection pool somewhere using the database, and trying to drop the database while an application is using it should give this error. The problem is that there are no connections to the database when I issue these commands:

        db2 connect to mydatabase
        db2 quiesce database immediate force connections
        db2 connect reset
        db2 drop database mydatabase

    This always gives:

        SQL1035N  The database is currently in use.  SQLSTATE=57019

    Running this command shows no connections/applications:

        db2 list applications

    I can even deactivate the database, but still can't drop it:

        db2 => deactivate database mydatabase
        DB20000I  The DEACTIVATE DATABASE command completed successfully.
        db2 => drop database mydatabase
        SQL1035N  The database is currently in use.  SQLSTATE=57019

    Anyone got any clues? I'm running the command windows as the local administrator (Windows 2008), who is also the DB2 admin. The connection-pool user cannot connect during the quiesced state.
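
    A hedged sledgehammer sketch (standard DB2 commands; it assumes a maintenance window, since it recycles the instance): forcing all applications off and restarting the instance clears any attachment that LIST APPLICATIONS does not show, after which the drop usually succeeds:

        db2 force application all
        db2stop force
        db2start
        db2 drop database mydatabase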

  • Outlook users connected to exchange can email from other email accounts

    - by Sherriffwoody
    We have found an issue on our systems whereby an Outlook user (2007 or 2010) connected to our Exchange server (2007) can send emails as other users, using the following steps:

    - In Outlook, click New Email.
    - Click the From button to show a list of accounts Outlook contains; it also shows the option "Other E-mail Address...".
    - This brings up a small dialog box with another button which, when selected, allows the user to pick an email address from their contacts or from the Active Directory.

    In most cases, the user can select any address in the Active Directory and send an email as if it were coming from that address. It seems not everyone has this ability, and I'm guessing it is something to do with settings in Exchange or AD (version 6), or perhaps there is a group policy that can be implemented to stop users from doing this. We have no idea what allows this, and I have failed to find anything using Dr. Google. No one has set up delegates in Outlook, but it does seem to be something similar. Does anyone know how to lock this down? Thanks in advance.
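
    A hedged audit sketch (Exchange 2007 Management Shell cmdlets; the filter is the usual pattern for this check): if those messages actually deliver instead of bouncing with an NDR, someone has granted Send-As more broadly than intended, and listing the non-default Send-As grants will show where:

        Get-Mailbox -ResultSize Unlimited | Get-ADPermission |
            Where-Object { $_.ExtendedRights -like "*Send-As*" -and
                           $_.User -notlike "NT AUTHORITY\SELF" } |
            Format-Table Identity, User, Deny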
