Search Results

Search found 13697 results on 548 pages for 'linking errors'.

Page 417 of 548

  • Does Xenapp require Windows Terminal Services (Remote Desktop) licenses?

    - by John Virgolino
    We have had a XenApp 5.x server running for over a year now. It does not have any purchased Terminal Services (Remote Desktop) licenses installed. It is running on a Windows Server 2008 box. I am aware that Terminal Services runs fine for about 3 months and then supposedly stops issuing licenses. On occasion, XenApp stops working and we see lots of license errors in the event log, although not necessarily every time. In most cases, a reboot or two resolves the problem. We figured it was because of the lack of TS licenses. I spoke with Citrix and they said we had to have the licenses, which raises the question: if we have to have the licenses, how does it work the majority of the time without them? I have not received a straight answer yet, and before I tell my client to shell out more money, I need to understand the technical reasoning for how this is actually working if we are breaking the rules here. We will buy the licenses if necessary, but there has to be an explanation for this. I am hoping the community can help where Citrix apparently cannot. Thanks much!

    Read the article

  • Install Oracle’s VirtualBox

    - by Shamith c
    I am trying to install Oracle’s VirtualBox. I used

        sudo dpkg -i virtualbox-4.2_4.2.4-81684~Ubuntu~quantal_i386.deb

    and I am getting the following errors:

        (Reading database ... 226237 files and directories currently installed.)
        Preparing to replace virtualbox-4.2 4.2.4-81684~Ubuntu~quantal (using virtualbox-4.2_4.2.4-81684~Ubuntu~quantal_i386.deb) ...
        Unpacking replacement virtualbox-4.2 ...
        dpkg: dependency problems prevent configuration of virtualbox-4.2:
         virtualbox-4.2 depends on libc6 (>= 2.15); however:
          Version of libc6 on system is 2.13-20ubuntu5.
         virtualbox-4.2 depends on libqtcore4 (>= 4:4.8.0); however:
          Version of libqtcore4 on system is 4:4.7.4-0ubuntu8.1.
         virtualbox-4.2 depends on libqtgui4 (>= 4:4.8.0); however:
          Version of libqtgui4 on system is 4:4.7.4-0ubuntu8.1.
        dpkg: error processing virtualbox-4.2 (--install):
         dependency problems - leaving unconfigured
        Processing triggers for ureadahead ...
        Processing triggers for shared-mime-info ...

    How do I solve it?
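
    One hedged way to approach this, sketched below: the .deb is built for Ubuntu 12.10 (quantal), while the installed libc6 (2.13) and Qt (4.7.4) belong to an older release, so either let apt try to resolve the missing dependencies or install the VirtualBox build that matches the actual release. The fallback to the distro-packaged virtualbox at the end is an assumption about what is acceptable here.

        # Check which Ubuntu release this system actually is
        lsb_release -a

        # Ask apt to resolve the broken dependencies left behind by dpkg -i
        sudo apt-get -f install

        # If libc6/libqtcore4 in this release are still too old for this build,
        # remove it and install a VirtualBox package built for this release instead
        sudo dpkg -r virtualbox-4.2
        sudo apt-get update
        sudo apt-get install virtualbox   # distro-packaged version as a fallback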

    Read the article

  • Is it possible to command a common router without using the web interface?

    - by MDeSchaepmeester
    Some background: The internet arrangement in my student home is really weird. There is one ethernet outlet and several wifi hotspots. Either way requires a login through a web site to get internet access. This is annoying, as each device needs to log in separately, and with a PS3, for example, it is impossible to get connected at all since the web login procedure doesn't work. Therefore I have installed a D-Link DIR-635 router which is connected to the ethernet outlet. It has DHCP enabled so it uses NAT, but whatever it is connected to also uses NAT, and I've read this should not work. A fellow student tried it with an Apple AirPort, but that keeps giving errors related to NAT behind NAT. Anyway, my setup does work, so bonus points if you can clarify this. I need to log in to the web site I mentioned earlier with any device, after which all devices in my LAN have connectivity. This is great. Except... In short: From time to time I lose internet connectivity and my D-Link DIR-635 router needs to do a DHCP renew. I can do this via the web interface, but my life would be easier if I could just run a cmd file which tells my router to do this without all the hassle. This would set up a connection to my router and execute the proper command. I have tried googling but couldn't find much helpful stuff.
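
    As a sketch of the idea rather than a tested recipe for the DIR-635: most D-Link web interfaces are plain HTTP forms, so one option is to watch the browser's requests (developer tools or a proxy) while clicking the DHCP renew button, then replay the same login and renew requests with curl from a script. The URLs, form-field names and credentials below are hypothetical placeholders; the real ones have to be read from the router's own pages.

        # Log in (hypothetical endpoint and form fields; capture the real ones
        # from the browser's network tab), keeping the session cookie
        curl -s -c cookies.txt -d "username=admin&password=YOURPASS" "http://192.168.0.1/login.cgi"

        # Trigger the same DHCP renew action the web UI performs (hypothetical path)
        curl -s -b cookies.txt "http://192.168.0.1/dhcp_renew.cgi"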

    Read the article

  • Is this DVD drive broken? Brand new, I need help convincing

    - by acidzombie24
    I am asking because I know Dell is going to give me a problem. How do I know if the DVD drive in my laptop is broken? I burnt 4 DL discs and they ALL failed; I called, and Dell suggested Roxio. I used it and burnt 1 disc without error and a 2nd disc with an error. With both apps there were no 'problems' during the burning process; they only failed during the verification step. Some of these bad discs don't work on other PCs, and one locks up Windows when I click a specific file. Does that sound like a broken burner to you guys? When I called Dell they told me that since it can read discs properly 100% of the time and the software doesn't fail during the burning process, it's not a broken drive. They forwarded me to software support, who demand a fee (I think $100) to help me fix my software. I am annoyed because I don't want to be on the phone just so they can watch me burn a DVD, and since I burned one disc correctly already, I don't want it to happen to burn correctly again and have them say they solved my problem (while doing nothing) and charge me, refusing to refund. Edit: The errors I got were: 1) "The request could not be performed because of an I/O device error"; 2) Windows locking up when opening one specific file; 3) "Cannot copy: Data error (CRC)". Note: the file that causes the problems is random on every disc.

    Read the article

  • Power surge PC damage: How can I test all components of my PC without access to a second computer?

    - by Doug T.
    Ever since we had some crazy power surges last week, my 64-bit Windows 7 PC has been acting strange. My USB network adapter disconnects from the wireless and can't detect the signal; I have to disable/re-enable the adapter to detect it again. My wife has also reported that the PC has rebooted a few times while I'm not sitting at it. Today I finally caught the reboot while I was using the PC. I got this blue screen of death. Stop Code 0x00000109: "Modification of system code or a critical data structure was detected." I followed the advice at the linked article and ran a memory test. I used memtest86 and it's already found around 300,000 errors out of 8 gigs of RAM. Now I'm worried: what are the odds this is isolated to just my memory and not a system-wide problem? Isn't there a good chance that many other components are fried? More importantly, how can I test those other components? Are there tools similar to memtest I can use to test my motherboard/video card/power supply? If these are vendor-specific, is it typical for vendors to provide testing tools?
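
    One hedged way to test components beyond RAM without a second PC is to boot a Linux live USB (so a possibly damaged Windows install is out of the picture) and run a few generic checks. The device name /dev/sda below is an assumption, and none of this exercises the PSU or GPU very hard; it mainly covers the disk and the system under sustained load.

        # Drive health via SMART (smartmontools package on most live images)
        sudo smartctl -a /dev/sda

        # Read-only surface scan of the disk (non-destructive by default)
        sudo badblocks -sv /dev/sda

        # Stress the CPU and allocate a chunk of RAM for a while, to shake out
        # instability that memtest misses under real load (stress package)
        stress --cpu 4 --vm 2 --vm-bytes 2G --timeout 600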

    Read the article

  • Can't install anything anymore with apt-get

    - by Aymane Shuichi
    Welcome. This is the log I get when trying to install anything (php5-fpm, after removing it):

        apt-get install php5-fpm
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5-fpm is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up php5-fpm (5.4.4-14+deb7u10) ...
        insserv: warning: script 'S55IptabLes' missing LSB tags and overrides
        insserv: warning: script 'S55IptabLex' missing LSB tags and overrides
        insserv: There is a loop between service IptabLes and mountnfs if started
        insserv: loop involving service mountnfs at depth 8
        insserv: loop involving service networking at depth 7
        insserv: loop involving service mountnfs-bootclean at depth 10
        insserv: There is a loop between service rc.local and mountall if started
        insserv: loop involving service mountall at depth 6
        insserv: loop involving service checkfs at depth 5
        insserv: loop involving service kbd at depth 11
        insserv: There is a loop between service rc.local and mountall-bootclean if started
        insserv: loop involving service mountall-bootclean at depth 7
        insserv: loop involving service urandom at depth 9
        insserv: There is a loop between service IptabLes and mountdevsubfs if started
        insserv: loop involving service mountdevsubfs at depth 2
        insserv: loop involving service udev at depth 1
        insserv: There is a loop at service rc.local if started
        insserv: There is a loop at service IptabLes if started
        insserv: Starting IptabLes depends on rc.local and therefore on system facility `$all' which can not be true!
        (the line above is repeated 99 times)
        insserv: Max recursions depth 99 reached
        insserv: loop involving service postfix at depth 2
        insserv: There is a loop between service IptabLes and udev if started
        insserv: loop involving service mountkernfs at depth 1
        insserv: loop involving service IptabLes at depth 1

    And here is the error I get:

        insserv: exiting now without changing boot order!
        update-rc.d: error: insserv rejected the script header
        dpkg: error processing php5-fpm (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         php5-fpm
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The biggest change I made before this was upgrading nginx from 1.2 to 1.6, following this guide: How to upgrade nginx from 1.2 to 1.6 on debian 7. Please help!
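
    A hedged sketch of one way out: the failure comes from insserv refusing the 'IptabLes'/'IptabLex' init scripts, which have no LSB headers and create dependency loops (scripts with these names are also commonly associated with a known Linux DDoS malware family, so it is worth checking where they came from). Removing them from the boot order and re-running the pending configuration often lets dpkg finish; the paths below assume the scripts live in /etc/init.d.

        # See where the offending scripts come from (no package should own them)
        ls -l /etc/init.d/IptabLes /etc/init.d/IptabLex
        dpkg -S /etc/init.d/IptabLes || echo "not owned by any package"

        # Take them out of the boot sequence, then move them aside
        sudo update-rc.d -f IptabLes remove
        sudo update-rc.d -f IptabLex remove
        sudo mv /etc/init.d/IptabLes /etc/init.d/IptabLex /root/

        # Let dpkg finish the interrupted php5-fpm configuration
        sudo dpkg --configure -a
        sudo apt-get install -f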

    Read the article

  • IIS Web Farm Framework servers are automatically set to "unavailable" even when they are healthy... And they never return to the available state!

    - by JohannesH
    I have 2 web farm configurations, one with 2 member servers and one with 3 member servers. I have health monitoring set up on both farms and the monitoring tool reports all servers as being healthy. However, after a while all the servers are marked as "Unavailable" and "Healthy" in the "Monitoring and Management" screen (in the "Servers" screen they are all listed with "Yes" in the "Ready for Load Balancing" column). Viewing the event log on the web farm controller or on any of the farm servers doesn't reveal anything interesting: there are no warnings or errors in the period when the servers became unavailable. There are a couple of informational events about the worker process getting shut down due to inactivity, but I hope this isn't the cause, since that would mean the farms would die during the night when the load is low. Am I missing something? EDIT: By the way, I think it's very odd that the application pool shuts down on the servers, since the health monitoring system is polling an aspx page on each server. Shouldn't that keep them going? EDIT 2: Now I've also experienced this problem with the RTW version of Web Farm Framework 2.

    Read the article

  • Virtual Host Configuration and mod_rewrite - Removing PHP Extension and Adding Forward Slash

    - by nicorellius
    On my production server, things are fine: PHP extension removal and trailing slash rules are in place in my .htaccess file. But locally, this isn't working (well, partially, anyway). I'm running Apache2 with a virtual host for the site in question. I decided to not use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, if possible on your server, is a better way to go. The virtual host is working and the URL I use for my site is like this: devserver:9090. Here is my httpd-vhosts.conf file:

        NameVirtualHost *:9090

        # for stuff other than this site
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs"
            ServerName localhost
        </VirtualHost>

        # for site in question
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
            ServerName devserver
            <Directory "/opt/lampstack/apache2/htdocs/devserver">
                Options Indexes FollowSymLinks Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <IfModule rewrite_module>
                RewriteEngine ON
                # remove PHP extension and add trailing slash
                # note - this doesn't work for directories, and throws 404
                # TODO - fix so directories use index.php
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
                RewriteRule (.*)\.php$ /$1/ [R=302,L]
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule (.*)/$ /$1.php [L]
                RewriteCond %{REQUEST_FILENAME}.php -f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule .*[^/]$ /$0/ [R=302,L]
            </IfModule>
            # error docs
            ErrorDocument 404 /errors/404.php
        </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So, for example, devserver:9090/page.php goes to devserver:9090/page/, but going to a directory (that has an index.php) such as devserver:9090/dir/ throws the 404 error page. If I type in devserver:9090/dir/index.php I get devserver:9090/dir/index/ and the contents I want appear... Can anyone help me with my rewrite rules?

    Read the article

  • Postfix not delivering from external senders and not logging anything

    - by simendsjo
    Some semi-recent upgrades must have broken my Postfix+Dovecot configuration, but I'm having problems finding out what the cause is. My domain is simendsjo.me with the MX record mail.simendsjo.me. I can send mail to both local and external recipients, and mail from internal mailboxes is delivered. The problem is that mail from external senders isn't delivered, and nothing is logged at all. The external sender also doesn't receive any errors. I have no idea where to even start looking, as nothing is logged at all when external mail is sent to my server. So the first issue would be: how can I turn on some debug messages for Postfix? I've tried debug_peer_level = 2 with debug_peer_list = simendsjo.me, and also debug_peer_level = 999 with debug_peer_list = gmail.com (which is where I'm sending the test mails from), but nothing is logged. When sending mail from a local mailbox (but from an outside computer, not localhost), a lot is logged. I don't have any rules in iptables either. Any ideas how I can get some debug messages out of Postfix?
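
    A hedged first-pass sketch: when nothing at all is logged for outside mail, the usual suspects are that the mail never reaches the box (DNS/MX, a firewall at the provider, something else on port 25) or that syslog isn't writing the mail facility anywhere. The checks below only use standard Postfix and DNS tools; /var/log/mail.log is an assumption about where syslog puts the mail facility on this system.

        # Does the world's view of the MX actually point at this machine?
        dig +short MX simendsjo.me
        dig +short A mail.simendsjo.me

        # Is Postfix the process listening on port 25, and is it reachable?
        sudo ss -ltnp | grep ':25'
        # (from an external host:) telnet mail.simendsjo.me 25

        # Watch the mail log live while an external test message is sent
        sudo tail -f /var/log/mail.log

        # Per-peer debugging for the sending domain, then reload
        sudo postconf -e 'debug_peer_list = gmail.com'
        sudo postconf -e 'debug_peer_level = 3'
        sudo postfix reload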

    Read the article

  • Oracle Linux screen freezes during installation

    - by Fearless
    I was installing Oracle Linux 6.4 on a server and the screen suddenly froze. Here are the steps that led up to it: I put in the disk, clicked install, checked the disk (no errors), did the pre-install setup (clock, root password, host+domain name, etc.), configured two 40GB hard drives in a RAID1 array (no swap; 3100MB encrypted RAID partitions; a ~100MB ext4 partition mounted at /boot; an encrypted ext4 RAID device mounted at /), selected packages, and hit continue. The system did its short pre-install processing, then went to the main installation screen with the long status bar. The installer proceeded like always, but around package 250 out of ~1000 the screen suddenly went black, with a text cursor in the upper-left corner of the screen and the mouse cursor in its previous place. Neither cursor moved, and the only thing that triggered a response was a Ctrl-Alt-Delete, which rebooted it. I have run this in VMs before without this issue. Memtest hasn't reported anything, and the media check went smoothly. The machine has run Ubuntu Server without issues before. Any ideas? I have tried booting after that, but the GRUB bootloader tries to find fd0 for some reason (I have no idea why it would search for the floppy disk). UPDATE: My server successfully installed, but won't boot up. I think that, for some reason, it is still using the old bootloader from the previous installation. Any ideas on how to fix that?
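
    A hedged sketch for the boot half of the problem, assuming the old bootloader is still sitting in the MBR: Oracle Linux 6 uses GRUB legacy, and the installer's rescue mode can chroot into the new system and rewrite the MBR. /dev/sda is an assumption, and with an encrypted RAID1 root the rescue environment has to assemble and unlock it first, which "linux rescue" normally offers to do when it mounts the installation.

        # Boot the Oracle Linux 6 install disc, choose "Rescue installed system",
        # let it mount the installation under /mnt/sysimage, then:
        chroot /mnt/sysimage

        # Rewrite GRUB legacy to the MBR of the boot disk (assumed /dev/sda)
        grub-install /dev/sda

        # Check that /boot/grub/grub.conf points at the new /boot, not fd0
        cat /boot/grub/grub.conf
        exit
        reboot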

    Read the article

  • Create new VMs from a template with a CSV. Possible?

    - by EdConde
    I am new to PowerShell and PowerCLI, but I manage a few ESX environments and would really like to do as much as possible via PowerShell. On with the help I need: I used this one-liner to create VMs from templates, but the problem is that there has to be some user input after each new VM is created:

        New-VM name -Template template -VMHost VMHost -Datastore Datastore

    What I would like to do is import the name of the new VM, the template to use, the host to put the new VM on and the datastore, all from a CSV. I don't know if it is as easy as below, but I kept getting errors:

        Import-Csv "C:\powershell\Data\VM2Create.csv" | Foreach-Object { New-VM $_.name -Template $_.template -VMHost $_.VMHost -Datastore $_.Datastore }

    I know there are some () or {} or possibly | that are needed... I just don't know where to put them. The CSV, I think, would look like this:

        name, template, vmhost, datastore

    Any help or thoughts would be much appreciated...

    Read the article

  • Using multiple USB webcams in Linux

    - by rachelderp
    Running more than one USB webcam in Debian/Linux results in the following error:

        libv4l2: error turning on stream: No space left on device
        VIDIOC_STREAMON: No space left on device

    What initially seemed to be a programming issue in OpenCV turned into a quest for a mysterious hardware/software problem after the same errors were produced by running cheese and xawtv. Apparently it's caused by webcams requesting all the available bandwidth on the USB host controller. With that in mind I decided to run wireshark and capinfos to see just how much bandwidth a single camera used:

        4 megabits per second at 320x240
        14 megabits per second at 640x480
        32 megabits per second at 1920x1080

    Interesting! That might explain why two cameras at 320x240 work but any higher resolution fails. It's as if my USB controller is only operating at USB 1 speeds, yet lsusb shows both webcams belonging to a device which supposedly supports 480 megabits per second. One proposed solution was forcing the webcams to calculate their bandwidth usage instead of requesting their maximum, by running the following commands:

        sudo rmmod uvcvideo
        sudo modprobe uvcvideo quirks=128

    Unfortunately that made no difference, so I decided to try another solution. A post on StackOverflow suggested telling my webcams to use a lower FPS or a compressed video format like MJPEG, but after running v4lctl list it doesn't appear that either of my webcams supports changing its video mode. And that's where I'm stuck. Why would two webcams operating well below the maximum speed of USB 2 produce this error?

    PS: It's not a disk space issue; df shows no change when the webcams are started.
    PPS: If it makes a difference, here's the output of lsusb
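
    A hedged set of checks that may narrow it down: isochronous USB devices reserve bandwidth per host controller, so it matters which physical controller each camera ends up on, and uvcvideo can log what it actually negotiates when its trace option is enabled. The device nodes /dev/video0 and /dev/video1 are assumptions.

        # Which bus/controller each camera sits on, and at what negotiated speed
        lsusb -t

        # What pixel formats and resolutions each camera really offers
        # (v4l-utils package; MJPG support here would let you avoid raw YUYV)
        v4l2-ctl -d /dev/video0 --list-formats-ext
        v4l2-ctl -d /dev/video1 --list-formats-ext

        # Reload uvcvideo with verbose tracing (trace is a bitmask; 0xffff = everything)
        sudo rmmod uvcvideo
        sudo modprobe uvcvideo trace=0xffff
        dmesg | tail -n 50

        # If both cameras share one controller, try a port served by a different
        # controller (often the ports on the other side of the machine).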

    Read the article

  • Print job leaves queue but document isn't printed

    - by midnightstar
    I'm dealing with an HP Deskjet F380 All-in-One printer. It's connected via USB to a desktop running Windows 7 Enterprise x64. If I attempt to print something like a web page or a Word document, the print job shows up in the print queue and the printer stirs; by stir, I mean it seems to prepare itself to print. The print job then leaves the queue (I'm thinking the computer sees it as completed), but the printer never actually prints anything. However, if I go into Devices and Printers under the Windows Start menu, open the printer properties and print a test page, the test page prints out successfully. I attempted to uninstall and reinstall the printer drivers, but the printer continued the same behavior afterwards. I also connected the printer to another computer, and there it will print just about anything. I also checked that the computer the printer is connected to is up to date as far as the OS goes; the machine is fully up to date. I played with the way the computer handles printer spooling: under the printer properties, on the "Advanced" tab, I had the print job print directly to the printer. In all these instances, the same behavior continues. I've restarted the Print Spooler service. I've also gone into C:\Windows\System32\spool\PRINTERS and deleted the files that were sitting in the folder. I have run SFC /scannow and the system found no errors in the system's integrity. I cold-rebooted both the computer and the printer. The only lead I really have going for me is that since the printer prints on other PCs, I can only assume that there is something wrong with the way this PC is configured.

    Read the article

  • Corsair SSD appears completely blank and does not retain written data

    - by ebanders
    I have a 180GB Corsair SSD (model# CSSD-F180GB2-BRKT) as the primary drive in a Windows laptop. Recently the machine became unbootable after installing Windows updates. Windows installed updates before the machine shut down, and the next time the machine booted up it complained about not being able to find a bootable device. After fixmbr proved unsuccessful at making the machine bootable, I investigated a little from Knoppix. fdisk revealed an empty partition table. A scan by TestDisk came up empty. And finally, 'head -c 1024 | hd' reveals all zeros. Creating a primary partition spanning the whole disk completes successfully, but after a reboot the disk appears empty again. dmesg reveals no read or write errors. smartctl indicates that the drive is healthy, although the SMART attribute values do not appear to be read properly: "Data Page | WARNING: PREVIOUS ATTRIBUTE HAS TWO" and "Threshold Page | INCONSISTENT IDENTITIES IN THE DATA" messages appear within the table of values. I don't have much experience with SSDs. Is this drive dead or something? Can anyone recommend any diagnostic tools that may be suited for diagnosing SSDs?
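
    A hedged sketch of checks that can be run from the same Knoppix session; /dev/sda is an assumption, and the write test at the end deliberately touches only the first megabyte, since the disk already appears blank. A drive that reads back something different from what was just written is a classic symptom of a failed SSD controller, which usually means RMA rather than repair.

        # Full SMART dump, including vendor attributes and the error log
        sudo smartctl -a /dev/sda

        # Does the drive report a sane size and identify itself correctly?
        sudo hdparm -I /dev/sda

        # Write a known pattern to the first 1 MB, then read it back and compare
        dd if=/dev/urandom of=/tmp/pattern bs=1M count=1
        sudo dd if=/tmp/pattern of=/dev/sda bs=1M count=1 oflag=direct
        sudo dd if=/dev/sda of=/tmp/readback bs=1M count=1 iflag=direct
        cmp -s /tmp/pattern /tmp/readback && echo "write persisted" || echo "readback differs: drive is not retaining writes"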

    Read the article

  • Windows XP Freezes

    - by Jim Fell
    Hello. I'm running a machine with Windows XP Professional 64-bit. Every so often it will freeze for no apparent reason. That is, everything stops responding except the mouse: I can move the mouse around, but I can't click on anything. Keyboard input is also not accepted/received when this problem occurs. The three-finger salute fails to bring up the Task Manager. Even pressing the power button on my computer fails to shut it down. The only way out of this that I have found is to hard-reboot the machine (i.e. pull power or hold the power button in for 10 seconds). This problem was occurring on the system when it had all its updates, and also after a fresh install when not everything was quite yet updated. I've run the Scandisk utility and the latest version of Memtest86 that supports 64-bit architecture; neither found any errors. The last time this happened was on a fresh install of Windows. Only Nero Essentials, Avast antivirus (disabled), Firefox, and Spybot were installed. I was not running Nero, Firefox, or Spybot at the time, and Avast was disabled, so I'm pretty certain this is a Windows issue. Is anybody familiar with this problem, or does anyone have any pointers? Thanks.

    Read the article

  • Long running php script hangs/terminates on IIS 7.5

    - by Rich
    I'm a bit of a newbie when it comes to configuring IIS 7.5 and PHP, so apologies if this is a silly question, but I've been wrestling with this for over half the day and need some fresh input. I have a PHP application running on IIS 7.5, with PHP 5.4 running as FastCGI. The application works absolutely fine, with the exception that long-running PHP scripts seem to hang: no 500 error, they simply never complete and never return the results to the browser. I've written a simple test script below to eliminate the possibility of a programming error in the main app:

        <?php
        /* test timeout */
        /*set_time_limit(110);*/
        echo "Testing time out in seconds\n";
        for ($i = 0; $i < 175; $i++) {
            echo $i." -- ";
            if(sleep(1)!=0) {
                echo "sleep failed script terminating";
                break;
            }
        }
        ?>

    If I run the script beyond 175 seconds it hangs. Below that it returns the results to the browser. Here are the timeout parameters that I've set for PHP and FastCGI. I've also played around with setting these really low in order to trigger various timeout errors, and succeeded, which brings me to the conclusion that there's another setting I'm missing... perhaps.

        FastCGI: Activity Timeout = 800, Idle Timeout = 900, Request Timeout = 800
        PHP: max_execution_time = 700

    Any solutions or pointers in the right direction would be very... very welcome. Thanks

    Read the article

  • Deployment and monitoring tools for java/tomcat/linux environment

    - by Ran
    I've been a developer for many years, but I don't have tons of experience in ops, so apologies if this is a newbie question. In my company we run a web service written in Java, based mainly on a Tomcat web server. We have two datacenters with about 10 hosts each. The hosts are of several types: database, Tomcat, some offline Java processes, and memcached servers. All hosts run CentOS Linux. Up until now, when releasing a new version to production, we've been using a set of in-house shell scripts that copy jars/wars and restart the Tomcats. The company has gotten bigger, so it has become more and more difficult to operate all this and to take code from development through QA and staging to production. A typical release often involves human errors that cost us precious uptime. Sometimes we need to revert to the last known good version, and this isn't easy, to say the least... We're looking for a tool, a framework, a solution that would provide the following:
        - Supports the given list of technologies (Java, Tomcat, Linux, etc.)
        - Provides easy deployment through the different stages, including QA and production
        - Provides configuration management, e.g. setting server properties (what the connection URL of each host is, etc.), server.xml or context configuration, etc.
        - Monitoring. If we can get monitoring in the same package, that'll be nice. If not, then yet another tool we can use to monitor our servers.
        - Preferably, open source with tons of documentation ;)
    Can anyone share their experience? Suggest a few tools? Thanks!

    Read the article

  • Setting up DNS using BIND

    - by dupdupdup
    I'm having trouble setting up my db files. Please kindly point me in the right direction! I need to define a nameserver that manages the domain example.org.au, with two records: one called server, which is the IP address of the current machine, and the other called www, where www.example.org.au is pointed at another IP address. I can't seem to get my system to work. This is my db.example.org.au file:

        example.org.au. IN SOA server.example.org.au. (
                1;
                3;
                1h;
                1w;
                1h )
        ;
        ;
        ;Host addresses
        localhost.example.org.au IN A 127.0.0.1
        www.example.org.au. IN A 192.168.1.200 ; another virtual machine
        server.example.org.au IN A 192.168.1.199 ; current virtual machine

    Please correct my errors if possible! Are there any good guides out there? Thanks in advance! :)
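
    A hedged sketch of how to check the zone rather than a ready-made fix (the zone as posted is still missing an administrator mailbox in the SOA and any NS records, and unqualified names need either a trailing dot or an $ORIGIN): the bind9utils validators print exactly which line BIND rejects, and dig against the local server shows whether the records actually load. The file path below is an assumption about where the zone file lives.

        # Syntax-check the zone file; named-checkzone reports the offending line
        named-checkzone example.org.au /etc/bind/db.example.org.au

        # Check the overall BIND configuration (named.conf and included files)
        named-checkconf

        # After reloading, ask the local server directly for the records
        sudo rndc reload
        dig @127.0.0.1 www.example.org.au A +short
        dig @127.0.0.1 example.org.au NS +short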

    Read the article

  • Error while resolving DNS requests

    - by user2803887
    I followed this document to configure master-slave PowerDNS servers: http://linuxmanage.com/master-slave-powerdns-managed-by-poweradmin.html. The installation completed perfectly with no errors, and DNS even seems to resolve some queries, but when I run the domain names I created in this PowerDNS nameserver through intodns.com, I get the errors below:

        Error  Mismatched NS records
               WARNING: One or more of your nameservers did not return any of your NS records.
        Error  Multiple Nameservers
               ERROR: Looks like you have less than 2 nameservers. According to RFC2182 section 5 you must have at least 3 nameservers, and no more than 7. Having 2 nameservers is also ok by me.
        Error  Missing nameservers reported by your nameservers
               You should already know that your NS records at your nameservers are missing, so here it is again: ns1.makeittiny.com. ns2.makeittiny.com.

    I am quite new to PowerDNS, so I am not able to figure out where the problem is. I have checked everything I can think of but can't work out what remains wrong.
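
    A hedged way to see what intodns sees, using only dig: ask each advertised nameserver directly for the domain's NS and SOA records. If either ns1 or ns2 returns an empty answer, the zone in the PowerDNS backend is missing its NS records (they have to exist as ordinary records inside the zone, not only at the registrar). makeittiny.com is used here only because it appears in the error output above.

        # What the parent/registrar side advertises
        dig +short NS makeittiny.com

        # What each nameserver itself answers; both should list ns1 and ns2
        dig @ns1.makeittiny.com makeittiny.com NS +norecurse
        dig @ns2.makeittiny.com makeittiny.com NS +norecurse

        # Serial numbers should match between master and slave
        dig @ns1.makeittiny.com makeittiny.com SOA +short
        dig @ns2.makeittiny.com makeittiny.com SOA +short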

    Read the article

  • Why can I not edit or delete directories inside this directory?

    - by user43053
    Hello there, First, I thought this was PHP related, but maybe it isn't. My original post, which may be irrelevant now is located at the bottom. The problem is I have a directory : /articles/. In it are 10 sub directories. I have been changing the permissions lately, but now it seems all the permissions of the parent folder, sub-folders and files are either chmod 755 or 777. I cannot move, delete or edit files inside of this parent directory or sub-directories with my FTP-client. I can however edit, delete, create new files and directories and change them with PHP-functions without problems. What may the problem be? OLD POST. Ignore everything below this line: If I create a directory with mkdir(), or create a file with fopen(), file_put_contents() or SimpleXMLElement::asXML(), I am unable to access the file with my FTP-client or c-Panel File Manager. If I try to delete or edit them, I get errors. Dreamweaver suggests it is a permission problem or a network or filesystem fault (but I've set the permissions with chmod() to 0777, and when I check the cPanel, it confirms chmod 777. I also tried to use fileowner() and the function returns int(99), the same owner as those files that I could access with my FTP-client. It seems files and directories created with PHP can only be modified or be deleted with PHP. I thought this must be a server setup related issue, so I write it here. I am on a shared server, and I have no idea about setting up servers. EDIT: It seems the problem is different. I cannot move files with FTP-client to the parent, or sub-directories either. This problem may not be PHP related, then. It seems the problem applies to any directory, regardless of whether it was created by PHP. EDIT 2: The parent directory has chmod 755. Thank you for your time. Kind regards Marius

    Read the article

  • How to fix Windows 7 when System Recovery Options hangs?

    - by seansand
    The battery power ran out on my HP G60 laptop and it shut down. Even after recharging, Windows 7 will now not start up. After any attempted startup, it bluescreens and takes me to the "Startup Repair (recommended)" / "Start Windows Normally" console screen. "Startup Repair (recommended)" appears to be the right choice, but when I choose it, I get taken to a screen which appears to be System Recovery Options (it's the same wallpaper as the screenshots here: http://www.sevenforums.com/tutorials/668-system-recovery-options.html). However, I just get a cursor with nothing else; no "System Recovery Options" window ever pops up. (A black console screen does pop up for a split-second but too fast to be able to read the text.) The empty screen with cursor hangs indefinitely. System Recovery Options normally runs off of a partition on the laptop hard drive. When I got the laptop, I also created a System Repair Disc (in fact I have more than one) and when I try use any of them; they all result in the same wallpaper and empty screen with lone cursor. Ctrl-Alt-Del does nothing. The computer did not come with a Windows 7 installation disc, so there's no obvious way to reinstall Windows 7. Safe mode does not work; startup fails and I just get sent back to the "Startup Repair (recommended)"/"Start Windows Normally" console screen. "Start in last good state" does not work either, same result as above. Running a memory & hard disk check found no errors. Do I have any options at all? "System Recovery Options" seems to be what I want, but the screen that is supposed to take me to them just hangs.

    Read the article

  • Booting Windows from different partition than system

    - by szamil
    I have bought an SSD disk, but my laptop (Dell Precision M6300) refuses to use it as a target disk for Windows (AHCI on/off, BIOS up to date). Unfortunately I can't exchange the disk... But fortunately, I've managed to install Windows using a USB disk enclosure. The problem is that when I put that disk in as my internal drive, it can't boot ("Disk read error", three-finger salute...). So I tried with Linux (openSUSE): I managed to install it as well, but when I tried to boot GRUB from the internal drive I got errors again. (Should I try GRUB2?) I figured out that I can boot into that internal hard drive's openSUSE system using a small USB drive with GRUB, a kernel and an image on it. So I just run GRUB from the USB drive, it loads the necessary stuff from the USB drive, and then it continues from the internal drive. I want to do the same with Windows, but GRUB (rootnoverify and chainloader +1) does not boot my Windows on the internal drive. The question: is there any chance of copying the critical Windows boot files onto the USB drive, to make it possible to boot from that USB drive but continue booting from the internal (or, in general, a different) drive? The USB drive would become a system hardware key! ;-) Disk: Plextor M5S 128GB SATA III; the laptop has SATA II, but that's compatible anyway, right?

    Read the article

  • Confused with creating an ODBC connection, apparently I have two separate odbcad32.exe files?

    - by Hoser
    Alright, this is my first time working with this so forgive me if I'm a little confusing or vague. I have a server with Windows Server 2008 Standard without Hyper-v (6.0, Build 6002). I'm running a small website off this server and using a Microsoft Access database to store some information coming in through the website. I'm sure the PHP I have written to open the ODBC connection is correct as it has worked for me when I created this website in a testing environment on a laptop. My current issue now is that it seems like I have two different odbcad32.exe's, and one doesn't appear to have a driver for a .accdb file, and only a .mdb file. The other has a driver for both. The first one I speak of has a driver titled 'Driver do Microsoft Access (.mdb)', the second one has a driver titled 'Microsoft Access Driver (.mdb, .accdb)'. I access the first odbcad32.exe by going to C:\Windows\SysWOW64\odbcad32.exe, and then the one that seems to have the driver I need I go to Control Panel-Administrative Tools-Data Sources(ODBC) and simply create a new connection in the System DNS tab. Whenever I make changes to the one that I access through the Control Panel, I see no changes, however if I use the odbcad32.exe file in SysWOW64 I do get some changes in the errors that come back to me. The main difference I noticed is that when I set up an ODBC connection with the Control Panel method it said it simply couldn't find the ODBC connection, but when I made a .mdb connection in the SysWOW64 one (and pointed it to a .accdb file) it says Cannot open database '(unknown)'. It may not be a database that your application recognizes, or the file may be corrupt. Which makes it seem like it is this odbcad32.exe version in SySWOW64 that is being recognized as the 'correct' one. Is there any way to fix this? I've tried to be as thorough as possible but if I've been confusing or left anything out let me know.

    Read the article

  • Accessing or Resetting Permissions of a Mounted Registry Hive of a Different User / From a Different System

    - by Synetech
    I’m currently stuck using my backup system until I can replace my dead motherboard. In the meantime, I have put my hard-drive in this system so that I can access my files and keep working on the backup system. Fortunately, I don’t have any permission issues with the files (the partitions are FAT32). The issue I’m having is with the registry. I need to import some of my settings from the hives of my (old? normal?) installation of Windows into the one I’m currently using. Settings from the system hives (SYSTEM, SOFTWARE, etc.) are fine, but the user hive is giving me trouble. I’ve copied the NTUSER.DAT file from my other drive and mounted it with the reg command. Most of the keys (eg Software) are fine and I can access them without problem, but some of them (particularly the Identities key where Outlook Express settings are stored) complain that they cannot be opened. If I open the permissions dialog, I get an error about being unable to view the current permissions. If I then ignore it and try to take ownership of the key and its subkeys, I get an access-denied error. If I then add permissions for my user account on this system, I get an error; however, I am then able to see the subkeys and values of the key. If I then try to access the subkeys, I get the same original errors. If I repeat the process for each subkey, I can see their values and subkeys, and so on, but of course this gets to be incredibly annoying and time-consuming (especially since the Identities key has a lot of subkeys). Is there an easier/temporary/more correct way to dump a key so that I can import it into my backup system?

    Read the article

  • Non-restored Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. In the data directories, I'm seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Some programs, on the other hand, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, which do not change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, but renaming them -- which works -- does not change anything). What else can I do to take ownership of these files? Edit: This question comes from StackOverflow because I originally thought it was a Git problem. It's definitely not (just) Git. Anyway, this is to help put some of the comments in context.
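
    A hedged sketch of things to inspect from Terminal, since chown/chmod/chflags were already tried: on HFS+ the usual remaining blockers are ACLs and extended attributes, which is what the sudo cp error is complaining about. The path ~/repo/.git is a placeholder for one of the affected directories, and the ACL-stripping step assumes those files genuinely should have no ACLs.

        # Show ACLs, file flags and extended attributes for the affected files
        ls -leO@ ~/repo/.git/objects | head -n 40

        # Strip ACLs recursively (-N removes the ACL on macOS chmod)
        sudo chmod -R -N ~/repo/.git

        # List and, if present, remove extended attributes recursively
        # (-r may not exist on older xattr builds; loop with find if needed)
        xattr -lr ~/repo/.git | head -n 40
        sudo xattr -cr ~/repo/.git

        # Copy without trying to preserve permissions/attributes (unlike cp -p or Finder)
        rsync -rlt ~/repo/ /eraseme/blah/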

    Read the article
