Search Results

Search found 63307 results on 2533 pages for 'windows live mesh'.


  • Break Folder Protection, Folder Guard, or a folder lock in Windows XP?

    - by SonyAdi
    I was creating a new partition with Partition Magic when the power suddenly failed. Because my computer is not equipped with a UPS (uninterruptible power supply), it shut down immediately. When power was restored I tried to turn the computer on, but it could no longer boot into Windows. I tried Safe Mode and every other boot option, and all of them failed. I know the cause: Partition Magic stopped in the middle of its work, the data transfer was interrupted, and the default Windows files were damaged as well. Unfortunately, my important data is stored in My Documents. I took the hard drive to a friend, hoping that by opening it on his computer I could at least save my important data and then reinstall Windows after reformatting the drive. I can read the hard drive in his Explorer, complete with its data, but I cannot open my My Documents folder because it requires administrator privileges or the original user account from my Windows installation. This behaves very much like Folder Protection or Folder Guard. I was disappointed and almost desperate to get my important data back. How do I break Folder Protection, Folder Guard, or a folder lock like this in Windows XP?

    Read the article

  • How do I install gfortran (via cygwin and etexteditor) and enable ifort under Windows XP?

    - by bez
    I'm a newbie in the Unix world so all this is a little confusing to me. I'm having trouble compiling some Fortran files under Cygwin on Windows XP. Here's what I've done so far: Installed the e text editor. Installed Cygwin via the "automatic" option inside e text editor. I need to compile some Fortran files so via the "manage bundles" option I installed the Fortran bundle as well. However, when I select "compile single file" I get an error saying gfortran was missing, and then that I need to set the TM_FORTRAN variable to the full path of my compiler. I tried opening a Cygwin bash shell at the path mentioned (.../bin/gfortran), but the compiler was nowhere to be found. Can someone tell me how to install this from the Cygwin command line? Where do I need to update the TM_FORTRAN variable for the bundle to work? Also, how do I change the bundle "compile" option to work with ifort (my native compiler) on Windows? I've read the bundle file, but it is totally incomprehensible to me. Ifort is a Windows compiler, invoked simply by ifort filename.f90, since it is on the Windows path. I know this is a lot to ask of a first time user here, but I really would appreciate any time you can spare to help.

    Read the article

  • Taking screenshots in Windows Vista, Windows 7, with transparent areas outside the app region

    - by Steve Sheldon
    Hey folks, I am trying to take a screenshot of an application, and I would like to make the parts of the rectangle that are not part of the application's region transparent. So, for instance, on a standard Windows application I would like to make the rounded corners transparent. I wrote a quick test application which works on XP (or Vista/Windows 7 with Aero turned off):

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            Graphics g = e.Graphics;

            // Just find a window to test with
            IntPtr hwnd = FindWindowByCaption(IntPtr.Zero, "Calculator");
            WINDOWINFO info = new WINDOWINFO();
            info.cbSize = (uint)Marshal.SizeOf(info);
            GetWindowInfo(hwnd, ref info);

            Rectangle r = Rectangle.FromLTRB(info.rcWindow.Left, info.rcWindow.Top, info.rcWindow.Right, info.rcWindow.Bottom);
            IntPtr hrgn = CreateRectRgn(info.rcWindow.Left, info.rcWindow.Top, info.rcWindow.Right, info.rcWindow.Bottom);
            GetWindowRgn(hwnd, hrgn);

            // fill a rectangle which would be where I would probably write some mask color
            g.FillRectangle(Brushes.Red, r);

            // fill the region over the top; all I am trying to do here is show the contrast between
            // the application's region and the rectangle that the region would be placed in
            Region region = Region.FromHrgn(hrgn);
            region.Translate(info.rcWindow.Left, info.rcWindow.Top);
            g.FillRegion(Brushes.Blue, region);
        }

    Quick side note: before commenting that this code is doing stuff it shouldn't (or should do, like dispose) in a paint function: I know. It's not going anywhere but this post and is designed purely as a way to quickly show the problem, and that it does. OK, back to the problem. When I run this test app on XP (or Vista/Windows 7 with Aero off), I get something like this, which is great because I can eke an XOR mask out of it that can be used later with BitBlt. Here is the problem: on Vista or Windows 7 with Aero enabled, there isn't necessarily a region on the window; in fact, in most cases there isn't. Can anybody help me figure out how to get the region of the application like this on these platforms? Here are some of the approaches I have already tried:

    1. Using the PrintWindow function: this doesn't work because it gives back a screenshot of the window taken with Aero off, and that window is a different shape from the window returned with Aero on.
    2. Using the Desktop Window Manager API to get a full-size thumbnail: this didn't work because it draws directly to the screen, and from what I can tell you can't get a screenshot directly out of this API. Yes, I could open a window with a pink background, show the thumbnail, take a screenshot, then hide this temporary window, but that's a horrible user experience and a complete hack I would rather not have my name on.
    3. Using Graphics.CopyFromScreen or some other P/Invoke variant of it: this doesn't work because I can't assume that the window I need information from is at the top of the z-order on the screen.

    Right now, the best solution I can think of is to special-case Aero on Windows 7 and Vista and manually rub out the corners by hard-coding some graphics paths I paint out, but that solution would suck, since any application that performs custom skinning would break it. Can you think of another or better solution? If you are here, thanks for taking the time to read this post; I appreciate any help or direction that you can offer!
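    One possible starting point for the Aero case, offered as a hedged sketch rather than a known fix: the Desktop Window Manager exposes DwmGetWindowAttribute, and its DWMWA_EXTENDED_FRAME_BOUNDS attribute returns the window's true on-screen bounds while composition is on. Note that this yields only a rectangle, not the rounded-corner region, so it answers at most part of the question. The constant, struct layout, and helper name below are assumptions based on the commonly documented API, not code from the post above, and it assumes the same usings as the snippet above plus System.Runtime.InteropServices and System.Drawing.

        static class DwmBounds
        {
            [StructLayout(LayoutKind.Sequential)]
            struct RECT { public int Left, Top, Right, Bottom; }

            [DllImport("dwmapi.dll")]
            static extern int DwmGetWindowAttribute(IntPtr hwnd, int attribute, out RECT value, int size);

            const int DWMWA_EXTENDED_FRAME_BOUNDS = 9;

            // Returns the window's visible bounds as DWM composites them (Aero on).
            // This is only a rectangle; it does not recover the rounded-corner region.
            public static Rectangle GetExtendedFrameBounds(IntPtr hwnd)
            {
                RECT rect;
                int hr = DwmGetWindowAttribute(hwnd, DWMWA_EXTENDED_FRAME_BOUNDS, out rect, Marshal.SizeOf(typeof(RECT)));
                Marshal.ThrowExceptionForHR(hr); // fails (or is meaningless) when composition is off
                return Rectangle.FromLTRB(rect.Left, rect.Top, rect.Right, rect.Bottom);
            }
        }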

    Read the article

  • Installed Ubuntu 12.04.01 with Windows XP but lost access to Windows XP

    - by Bob D
    The first time I tried to install Ubuntu, the installer put it on my D drive. This resulted in the machine only booting into Windows XP, with no access to Ubuntu, and I had to download a disk partitioning program to undo all of it. A tip from the Internet said to create a partition on the C drive for Ubuntu, so I did, along with a swap partition. I did this manually because the installer on the CD would not do it and would not let me do it from within the installer program. With the fresh partitions created for Ubuntu, I let the installer do its thing. The computer rebooted and came up in Ubuntu. I then installed WINE and all was well. Then I shut the computer down for the night. The next day I turned on the computer and it booted directly into Ubuntu. I can see the Windows partition and all the files, but it will not allow me to switch to the Windows XP OS; it does not even give me a choice to do so. I have reinstalled Ubuntu several times and each time it is the same: I cannot access Windows XP anymore. Right now I am on a fresh install with only whatever the installer installed. How do I fix this?! I have tried holding the Shift key to see if something called GRUB shows up, but no. I tried changing the boot order in GRUB, but that did not work either. I tried using EasyBCD, but that will not run. One symptom I do not understand: when the computer reboots, my monitor shows a graphic saying the cable is disconnected, which is normal; then, when the computer gets to the actual boot process, it displays the splash screens and so on, and it did this for Windows XP as well. But now something new has popped up: while booting Ubuntu, after the point where it should probably be showing me a menu to pick which OS to boot, the monitor shows "Input Unsupported" until Ubuntu loads. I have never seen it show this before; maybe it is a clue for someone.

    Read the article

  • Windows Azure Use Case: Web Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Many applications have a requirement to be located outside of the organization’s internal infrastructure control. For instance, the company website for a brick-and-mortar retail company may want to post not only static but interactive content to be available to its external customers, without giving those customers access inside the organization’s firewall. There are also cases of pure web applications used for a great many of the internal functions of the business. This allows for remote workers, shared customer/employee workloads and data, and other advantages. Some firms choose to host these web servers internally; others choose to contract out the infrastructure to an “ASP” (Application Service Provider) or an Infrastructure as a Service (IaaS) company. In any case, the design of these applications often resembles the following: a server (or perhaps more than one) hosts the presentation (HTTP or HTTPS) access to the application, and this same system may hold the computational aspects of the program. Authorization and access are controlled programmatically, or are more open if this is a customer-facing application. Storage is either placed on the same or other servers, hosted within an RDBMS or NoSQL database, or a combination of the options, all coded into the application. High availability within this scenario is often the responsibility of the application's architects, and is handled by purchasing more hosting resources, which must be built, licensed, configured and manually added as demand requires, although some IaaS providers have a partially automatic method to add nodes for scale-out, if the architecture of the application supports it. Disaster recovery is the responsibility of the system architect as well.

    Implementation: In a Windows Azure Platform as a Service (PaaS) environment, many of these architectural considerations are designed into the system. The Azure “Fabric” (not to be confused with the Azure implementation of Application Fabric - more on that in a moment) is designed to provide scalability. Compute resources can be added and removed programmatically based on any number of factors. Balancers at the request level of the Fabric automatically route HTTP and HTTPS requests. The Fabric also provides high availability for storage and other components. Disaster recovery is a shared responsibility between the facilities (which have the ability to restore in case of catastrophic failure) and your code, which should build in recovery. In a Windows Azure-based web application, you have the ability to separate out the various functions and components. Presentation can be coded for multiple platforms like smart phones, tablets and PCs, while the computation can be a single entity shared between them. This makes the applications more resilient and more object-oriented, and lends itself to a SOA or Distributed Computing architecture. It is true that you could code up a similar set of functionality in a traditional web farm, but the difference here is that the components are built into the very design of the architecture. The APIs and DLLs you call in a Windows Azure code base contain these components as first-class citizens.
For instance, if you need storage, it is simply called within the application as an object.  Computation has multiple options and the ability to scale linearly. You also gain another component that you would either have to write or bolt-in to a typical web-farm: the Application Fabric. This Windows Azure component provides communication between applications or even to on-premise systems. It provides authorization in either person-based or claims-based perspectives. SQL Azure provides relational storage as another option, and can also be used or accessed from on-premise systems. It should be noted that you can use all or some of these components individually. Resources: Design Strategies for Scalable Active Server Applications - http://msdn.microsoft.com/en-us/library/ms972349.aspx  Physical Tiers and Deployment  - http://msdn.microsoft.com/en-us/library/ee658120.aspx
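    To make the "storage is simply called as an object" point above concrete, here is a minimal sketch using the classic Microsoft.WindowsAzure.StorageClient library that shipped with the Windows Azure SDK of that era. The container and blob names are invented for illustration, and the sketch reflects typical usage rather than code from the post.

        using System;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class BlobExample
        {
            static void Main()
            {
                // "UseDevelopmentStorage=true" targets the local storage emulator; a deployed role
                // would read its real connection string from service configuration instead.
                var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
                var blobClient = account.CreateCloudBlobClient();

                var container = blobClient.GetContainerReference("reports"); // hypothetical container name
                container.CreateIfNotExist();                                // idempotent: creates it on first use

                var blob = container.GetBlobReference("summary.txt");
                blob.UploadText("Generated " + DateTime.UtcNow.ToString("o"));
            }
        }

    The point is not the specific API but that durable storage is addressed as an object from application code, with replication and availability handled by the platform rather than by the application's own server farm.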

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that is, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low-level machine language; then, when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different than developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow. But why make a change? As I’ve described, we need to make the change to our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try and retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target.

    Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code:

    Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location (like storage) common to all machines involved in the process. An interesting learning process for Stateless Programming (although not unique to this language type) is to learn Functional Programming.

    Server-Side Processing - Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don’t always own the network, so its performance is unpredictable.
    Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it’s imperative to deliver only results and graphical elements where possible.

    Token-Based Authentication - Also called “Claims-Based Authorization”, this code practice means that instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A “token” or “claim”, often represented as a certificate, is sent along with a series of requests, or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections.

    Resources: See the references of “Nondistributed Deployment” and “Distributed Deployment” at the top of this article for more information with graphics: http://msdn.microsoft.com/en-us/library/ee658120.aspx  Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming  Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing  Claims-Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
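    The stateless and token-based ideas above can be illustrated with a small sketch. This is an assumption-laden illustration, not Azure-specific code: IStateStore stands in for whatever durable store (table, blob, cache) the application uses, ITokenValidator stands in for whatever claims validation is in place, and the type and claim names are invented.

        using System;
        using System.Collections.Generic;

        public class OrderState
        {
            public OrderState(string id) { Id = id; Items = new List<string>(); }
            public string Id { get; private set; }
            public List<string> Items { get; private set; }
        }

        public interface IStateStore
        {
            OrderState Load(string orderId);
            void Save(string orderId, OrderState state);
        }

        public interface ITokenValidator
        {
            bool IsValid(string token, string requiredClaim);
        }

        public class OrderService
        {
            private readonly IStateStore _store;      // e.g. backed by table or blob storage shared by all nodes
            private readonly ITokenValidator _tokens; // validates the claim sent with every call

            public OrderService(IStateStore store, ITokenValidator tokens)
            {
                _store = store;
                _tokens = tokens;
            }

            // Every call re-authenticates the caller's token and keeps no per-user state on this node,
            // so any instance behind the load balancer can serve the next request.
            public void AddItem(string token, string orderId, string itemId)
            {
                if (!_tokens.IsValid(token, "Orders.Write"))
                    throw new UnauthorizedAccessException("Token does not carry the required claim.");

                OrderState state = _store.Load(orderId) ?? new OrderState(orderId);
                state.Items.Add(itemId);
                _store.Save(orderId, state); // state lives in shared storage, not in this process's memory
            }
        }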

    Read the article

  • Dual boot Ubuntu and Windows 7: I can only boot Ubuntu through recovery mode

    - by Alec
    I want to become a new user of Ubuntu, but this problem is preventing me. I have/had Windows 7 Professional on my computer. I recently looked into getting Linux, discovered dual-booting, and decided to give it a try. First I created a bootable flash drive with Ubuntu 12.10 64-bit, then followed the instructions at https://help.ubuntu.com/community/WindowsDualBoot. After I finished going through the setup, my computer rebooted. After the reboot I was able to select Ubuntu, advanced options for Ubuntu, two memory tests, and Windows 7 (loader). So I chose Windows (honestly, I was more concerned that I still had everything on Windows at this point). I then rebooted again and selected Ubuntu. When I selected Ubuntu, the background screen of GRUB (the crimson/burgundy color) stayed for a few seconds, then the screen went black: video here http://www.youtube.com/watch?v=6kKcG4sT7Lg&feature=plcp I tried again with the same results, so I redid the Ubuntu install differently using http://www.liberiangeek.net/2012/10/dual-booting-windows-7-and-ubuntu-12-10-quantal-quetzal/. After rebooting, the same thing happened. After that I was stumped, so I figured it couldn't hurt to experiment; after all, I had backed up my Windows 7 stuff, and I have the software disk. I tried booting in recovery mode under "advanced options for Ubuntu" and, sure enough, after selecting "continue to normal reboot" it worked. So I updated everything, but when I rebooted it still wouldn't boot into Ubuntu; it would only boot after recovery mode. So I tried installing 32-bit Ubuntu 12.10, and the same problem keeps happening. I can still get to Ubuntu through recovery mode. So I went online and tried using the terminal (in the Ubuntu I booted through recovery mode), and while using it I discovered that "Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected" kept showing up. I also noticed a notification in the top right corner that looked like a do-not-enter sign. It said: "an error occured, please run package manager from the right-click menu or apt-get in a terminal to see what is wrong. the error message was: 'ror in sitecustomize;set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected traceback (most recent called last): File "/usr/bin/lsb_release EOFError: EOF read where not expected 39;0' this usually means that your installed packages have unmet dependencies". Naturally I assumed this was what was causing my boot problems. I downloaded Synaptic and updated everything, and the error went away, but my boot problem remained. So I went online and found some things that have worked for others, like this: "Try to do this (in your terminal): sudo nano /etc/default/grub ; look for GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" and change it to GRUB_CMDLINE_LINUX_DEFAULT="quiet" ; then update GRUB: sudo update-grub. This should fix stuff." I did this and I still have the problem. Sorry for the excessive explanation; please help.

    Read the article

  • Set up basic Windows Authentication to connect to SQL Server 2008 from a small, trusted network

    - by Margaret
    I'm guessing that this is documented somewhere on Microsoft's site, but thus far I haven't found it. I'm trying to set up a Windows Server 2008 box to have SQL Server 2008 with Windows Authentication (Mixed Mode, actually, but anyway) for work. We have a number of client machines that will need access to the databases, and I would like to keep configuration as simple as feasible. Here's what I've done so far:

    On the server:
    - Install SQL Server 2008, selecting Mixed Mode.
    - Create a new 'Standard' (rather than Administrator) Windows login entitled "UserLogin" (with intent to use it as the access account).
    - Create an SQL Server login for Server\UserLogin and assign it 'Windows Authentication'.
    - Log in as UserLogin, check that I'm able to connect to SQL Server using Windows Authentication, then log out again.

    On the first client (Windows XP SP2, SQL Server 2005):
    - Run C:\WINDOWS\system32\rundll32.exe keymgr.dll, KRShowKeyMgr
    - Click "Add", enter the server name in the box, Server\UserLogin in the Username field, and UserLogin's password in the Password field. Click "Ok" then "Close".
    - Attempt to access SQL Server 2005 using Windows authentication. Succeed. Confetti!

    On the second client (Windows 7, SQL Server 2008):
    - Run C:\WINDOWS\system32\rundll32.exe keymgr.dll, KRShowKeyMgr
    - Click "Add", enter the server name in the box, Server\UserLogin in the Username field, and UserLogin's password in the Password field. Click "Ok" then "Close".
    - Attempt to access SQL Server 2008 using Windows authentication. Receive an error: "Login failed. The login is from an untrusted domain and cannot be used with Windows authentication".
    - Assume that this translates to "You can't have two connections from the same account" (yes, I know that doesn't make sense, but I'm a bit like that).
    - Go back to the server, create a second Windows account, give it SQL Server rights.
    - Go back to the second client, create a new passkey for the second login, try logging in again. Continue to receive the same error.

    Is this all overly complex and there's an easy way to do what I'm trying to accomplish? Or am I missing some ultra-obvious step that would make everything behave as desired? Most of the stuff that's coming up when I try to Google seems to be along the lines of "My ASP.NET application isn't working!", which obviously isn't all that much use.
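    For reference, this is what the client side of "access SQL Server using Windows authentication" looks like from code; a minimal sketch using System.Data.SqlClient, with a made-up server and database name. It does not address the stored-credentials (keymgr) part of the question, only the connection itself.

        using System;
        using System.Data.SqlClient;

        // Windows authentication: no user id/password in the connection string;
        // the connection runs as whatever Windows identity the process presents.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = @"SERVER\SQL2008",   // hypothetical instance name
            InitialCatalog = "Inventory",     // hypothetical database
            IntegratedSecurity = true         // i.e. "Integrated Security=SSPI"
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT SUSER_SNAME();", connection))
            {
                Console.WriteLine("Connected as: " + command.ExecuteScalar());
            }
        }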

    Read the article

  • Windows 7: "The boot selection failed because a required device is inaccessible" (0xc000000f)

    - by piratejackus
    I have a problem with my Windows 7. Hardware: Acer 3820TG. Operating systems: Windows 7 and Ubuntu 10.04, dual boot. The case: when I try to boot my Windows 7 I see an error:

    "Windows failed to start. A recent hardware or software change might be the cause. To fix the problem: 1. Insert.... 2. .... ... status: 0xc000000f info: The boot selection failed because a required device is inaccessible ...."

    I can't exactly remember what my last actions on Windows were. I have already searched for this error and applied the proposed solutions. I created a repair USB drive (because I have neither a CD-ROM drive nor a Windows 7 CD) and tried the following:
    - Repair the operating system: it says it cannot repair it.
    - Check the disk (chkdsk D: /f /r): it checks the disk without any problem or error, and it takes pretty long (more than an hour). But when I restart, still the same error.
    - I didn't create a restore point, so I pass on this option.
    - I don't have a system image.
    - I tried to run Windows recovery (I have a recovery partition), but there are just two options: 1) Format the operating system but retain user data (this copies the files under Users to a C:\backup folder, but when I searched deeper I found that some people who already tried this option couldn't find their user files under the backup directory; plus, unfortunately I have just one partition, D (it is a fault, I know), because I always use Ubuntu, so this is not applicable in my situation); 2) Format the entire system (Windows). I keep my valuable data on Windows, but not in the user folder; I was accessing it from Windows.
    - I tried to repair the Windows boot with: bootrec /fixMBR, bootrec /fixBoot, bootrec /rebuildBCD. I lost the whole GRUB menu and reinstalled it.
    - ubuntuforums.org/showthread.php?t=1014708&page=29 : nothing changed, same error.
    - I created a thread on the Microsoft forums (http://social.answers.microsoft.com/Forums/en-US/w7install/thread/69517faf-850a-45fd-8195-6d4ed831f805) but couldn't find a solution.

    Before I ran chkdsk from the USB repair disk, I was not able to mount the Windows (NTFS) partition from Ubuntu; I was getting "couldn't mount file system, error code 2". I tried to fix the NTFS partition from Ubuntu and got a "segmentation fault". I also created a thread on ubuntuforums for this mount problem: http://ubuntuforums.org/showthread.php?t=1606427 . So, after chkdsk, I am able to mount the Windows partition, but all I see on it is chkdsk logs, no other data. Now, I don't think I lost my data, because I don't get any filesystem errors, just the boot section, but those log files on the Windows partition make me afraid. I see that Microsoft developers don't have a solution yet for this error. If you need any more information to get a better idea, I can give it; maybe I have missed some points, or it could be complicated. Thanks in advance.

    Read the article

  • Old CRT screen's high resolution doesn't work anymore on Windows 7

    - by Mixxiphoid
    One year ago I decided to switch from Windows XP to Windows 7. I had a 17" CRT monitor with a screen resolution of 1600x1200, which worked fine on Windows XP. While installing Windows 7 everything went well until Windows 7 went to install the video card driver. Windows 7 set the screen to its recommended resolution and my screen went black. I waited a few minutes to be sure the installation was finished, then turned off the computer by hand and restarted it at a resolution of 800x640. When Windows 7 was done installing, I went to screen resolutions, and the resolution of 1600x1200 was at the top of the list with '(recommended)' next to it. I tried putting it on 1600x1200, but again my screen went black. I installed all Windows 7 updates, including the video card driver from the NVIDIA site (NOT from Windows 7). I tried about everything to make it work at 1600x1200, but with no success. The highest resolution I got with the CRT monitor was 1280x1024. I had a TFT screen which had 1280x1024 as its maximum resolution and had better colors, so I have used that one until today. My video card is a 9600GT and my power supply is beyond sufficient. I even tried to install the driver I had on XP to see if it worked, but no results. I tried classic mode on Windows 7, changed the DPI, the refresh frequency and the monitor settings, but nothing worked. I really like a vertical resolution of 1200, but it seems today I'm bound to all those standard monitors with a resolution of 1980x1024... Can anybody explain to me why it worked on Windows XP but not on Windows 7? And maybe offer a solution to the problem (I actually gave up on getting it fixed...)? Thanks a lot in advance.
    SOLUTION: I downloaded the corresponding monitor driver and installed it. Next I rebooted my computer at a low resolution (800x640) and connected the CRT monitor. When Windows 7 booted successfully, I went to Computer Management and updated the driver of my monitor, manually selecting 'Generic PnP Monitor' and making that one active. I went to the advanced settings under 'Screen resolution' and selected the mode 1600x1200 (32-bit) at 80 Hz (95 Hz did not work). Now I had my resolution at 1600x1200. I then repeated the earlier step to select the original monitor again instead of the generic one. Quite a way to solve this problem, but it worked! Thanks a lot, you all.

    Read the article

  • Diagnose PC Hardware Problems with an Ubuntu Live CD

    - by Trevor Bekolay
    So your PC randomly shuts down or gives you the blue screen of death, but you can’t figure out what’s wrong. The problem could be bad memory or hardware related, and thankfully the Ubuntu Live CD has some tools to help you figure it out.

    Test your RAM with memtest86+
    RAM problems are difficult to diagnose: they can range from annoying program crashes to crippling reboot loops. Even if you’re not having problems, when you install new RAM it’s a good idea to thoroughly test it. The Ubuntu Live CD includes a tool called Memtest86+ that will do just that: test your computer’s RAM! Unlike many of the Live CD tools that we’ve looked at so far, Memtest86+ has to be run outside of a graphical Ubuntu session. Fortunately, it only takes a few keystrokes. Note: If you used UNetbootin to create an Ubuntu flash drive, then memtest86+ will not be available. We recommend using the Universal USB Installer from Pendrivelinux instead (persistence is possible with Universal USB Installer, but not mandatory). Boot up your computer with a Ubuntu Live CD or USB drive. You will be greeted with this screen: Use the down arrow key to select the Test memory option and hit Enter. Memtest86+ will immediately start testing your RAM. If you suspect that a certain part of memory is the problem, you can select certain portions of memory by pressing “c” and changing that option. You can also select specific tests to run. However, the default settings of Memtest86+ will exhaustively test your memory, so we recommend leaving the settings alone. Memtest86+ will run a variety of tests that can take some time to complete, so start it running before you go to bed to give it adequate time.

    Test your CPU with cpuburn
    Random shutdowns – especially when doing computationally intensive tasks – can be a sign of a faulty CPU, power supply, or cooling system. A utility called cpuburn can help you determine if one of these pieces of hardware is the problem. Note: cpuburn is designed to stress test your computer – it will run it fast and cause the CPU to heat up, which may exacerbate small problems that otherwise would be minor. It is a powerful diagnostic tool, but should be used with caution. Boot up your computer with a Ubuntu Live CD or USB drive, and choose to run Ubuntu from the CD or USB drive. When the desktop environment loads up, open the Synaptic Package Manager by clicking on the System menu in the top-left of the screen, then selecting Administration, and then Synaptic Package Manager. Cpuburn is in the universe repository. To enable the universe repository, click on Settings in the menu at the top, and then Repositories. Add a checkmark in the box labeled “Community-maintained Open Source software (universe)”. Click Close. In the main Synaptic window, click the Reload button. After the package list has reloaded and the search index has been rebuilt, enter “cpuburn” in the Quick search text box. Click the checkbox in the left column, and select Mark for Installation. Click the Apply button near the top of the window. As cpuburn installs, it will caution you about the possible dangers of its use. Assuming you wish to take the risk (and if your computer is randomly restarting constantly, it’s probably worth it), open a terminal window by clicking on the Applications menu in the top-left of the screen and then selecting Applications > Terminal. Cpuburn includes a number of tools to test different types of CPUs.
    If your CPU is more than six years old, see the full list; for modern AMD CPUs, use the terminal command burnK7, and for modern Intel processors, use the terminal command burnP6. Our processor is an Intel, so we ran burnP6. Once it started up, it immediately pushed the CPU up to 99.7% total usage, according to the Linux utility “top”. If your computer is having a CPU, power supply, or cooling problem, then your computer is likely to shut down within ten or fifteen minutes. Because of the strain this program puts on your computer, we don’t recommend leaving it running overnight – if there’s a problem, it should crop up relatively quickly. Cpuburn’s tools, including burnP6, have no interface; once they start running, they will start driving your CPU until you stop them. To stop a program like burnP6, press Ctrl+C in the terminal window that is running the program. Conclusion: The Ubuntu Live CD provides two great testing tools to diagnose a tricky computer problem, or to stress test a new computer. While they are advanced tools that should be used with caution, they’re extremely useful and easy enough that anyone can use them.

    Read the article

  • DHCP and DNS services configuration for VOIP system, Windows domain, etc.

    - by Stemen
    My company has numerous physical offices (for purposes of this discussion, 15 buildings). Some of them are well-connected to our primary data center via fiber. Others will be connected to the data center by P2P T1. We are in the beginning stages of implementing an Avaya VOIP telephone system, and we will be replacing a significant portion of our network infrastructure in the process. In tandem with the phone system implementation, we are going to be re-addressing some of our networks, and consolidating most of our Windows domains into one (not all domains, just most). We currently have quite a few Windows domains, and they of course each have their own DNS zones. A few of those networks currently use DHCP, but the majority use static IP assignments for every device. I'm tired of managing static assignments -- I want to use DHCP configuration on everything except servers. Printers and etc will have DHCP reservations. The new IP phones will need to get IP addresses from DHCP, though they need to be in a separate VLAN from the computers/printers/etc. The computers and printers need to be registered in DNS. That's currently handled by the Windows DHCP servers on each of the respective domains. We need to place a priority on DHCP and DNS being available on a per-site basis (in case something were to interrupt the WAN connection) for computers and (primarily) phones. Smaller locations (which will have IP phones but not be a member of any Windows domain) will not have any Windows DNS/DHCP server(s) available. We also are looking for the easiest way to replace a part if it were to fail. That is to say, if a server/appliance/router hosting DHCP were to crash hard, and we couldn't extremely quickly recover the DHCP reservations and leases (and subsequently restore them onto a cold spare), we anticipate that bad things could happen. What is the best idea for how to re-implement DNS and DHCP keeping all of the above in mind? Some thoughts that have been raised (by myself or my coworkers): Use Windows DNS and DHCP servers, where they exist, and use IP helpers to route DHCP requests to some other Windows server if necessary. May not be acceptable if the WAN goes down and clients don't get a DHCP response. Use Windows DNS (everywhere, over WAN in some cases) and a mix of Windows DHCP and DHCP provided by Cisco routers. Every site would be covered for DHCP, but from what I've read, Cisco routers can't handle dynamic registration of DHCP clients to Windows DNS servers, which might create a problem where Cisco routers are used for DHCP. Use Windows DNS (everywhere, over WAN in some cases) and a mix of Windows DHCP and DHCP provided by some service running on an extremely low-price linux server. Is there any such software that would allow DHCP leases granted by these linux boxes to be dynamically registered on the Windows DNS servers? Come up with a Linux solution for both DNS and DHCP, and deploy low-price linux servers to every site. Requirements would be that the DNS zone be multi-master (like Windows DNS integrated with Active Directory), that DHCP be able to make dynamic DNS registrations in that zone, for every lease (where a hostname is provided and is thus possible), and that multiple servers be either authoritative for the same DHCP scope or at least receiving a real-time copy / replication / sync of the leases table so that if one server dies, we still know which MAC has what address. Purchase dedicated DNS/DHCP appliances, deploying to all sites. 
    From what I read/see, this solves all of our technical problems. Then come the financial problems... I don't have a ton of money to spend on this. Or, some other solution that we've thus far overlooked and will consider upon recommendation. Can Cisco routers or Windows servers sync DHCP lease tables so that multiple servers can be authoritative (or active/passive, for all I care) for the same scope, in case one of the partners were to fail? I've read online (repeatedly) that ISC's DHCP is able to maintain the same lease table across multiple servers in order to solve this problem. Does anyone have any experience or advice regarding that?

    Read the article

  • Why does Messenger keep coming up when I adjust screen brightness?

    - by Asaf R
    My Dell Studio XPS 13 has brightness up and down buttons (Fn + up arrow / down arrow), as most laptops do. It's running Windows 7 64-bit and I've installed Windows Live Messenger (build 14-something) on it. When I hit Fn + down arrow to reduce screen brightness, the Messenger window pops up. That's true whether it has been running in the background (system tray / minimized) before or not. Note: to make Messenger hide itself in the system tray, I've set it to Vista compatibility mode. I don't think that has anything to do with it, though, since the problem persists if I turn compatibility mode off. There's a thread on the matter here, but with no solution. Thanks, Asaf
    SOLUTION: Stop IntelliType when using the internal keyboard. See the detailed how-to here.
    EDIT: Some new findings on the matter. If I exit Messenger entirely and restart (or simply stop) the service "Windows Live ID Sign-in Assistant", the problem is solved until I run Messenger again. It persists even after entirely closing Messenger, until I restart the said service.
    EDIT 2: The actual name of the above service, "Windows Live ID Sign-in Assistant", is wlidsvc. There's no shortcut defined for Messenger, nor is there any other hotkey in its preferences.
    EDIT 3: I'm not sure if it's related, but some of the Fn keys are not working: Fn + F1 doesn't put the machine to sleep, and Fn + F3 doesn't show battery status.
    EDIT 4: The problem probably relates to a conflict with IntelliType Pro; see my answer below. Said conflict also triggers the Undo command when disabling wireless (the wireless touch button), and has other side effects for various keys.

    Read the article

  • Cannot enable Windows SmartScreen; it says "this setting is managed by your system administrator"

    - by Afshin Gh
    I cannot enable Windows SmartScreen on Windows 8.1. My PC is not joined to any domain. I'm not talking about the SmartScreen feature available in IE, but the one available in File Explorer: Control Panel > Action Center > Change Windows SmartScreen settings. I searched in Group Policy but couldn't find anything that is preventing me from enabling it. Update 1: My user is a member of the Administrators group. Other things work fine; when I try to change something that needs administrative permission, the UAC window appears, but not here.
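    The "managed by your system administrator" wording usually means a policy value is set somewhere. As a hedged sketch (the key and value names below are the commonly documented locations for Windows 8.x, stated as an assumption rather than taken from the question), a quick C# check of both the policy value and the per-machine setting:

        using System;
        using Microsoft.Win32;

        class SmartScreenCheck
        {
            static void Main()
            {
                // Group Policy location: if EnableSmartScreen exists here, the Control Panel UI is typically locked.
                object policy = Registry.GetValue(
                    @"HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\System",
                    "EnableSmartScreen", null);

                // Per-machine setting the Control Panel normally writes ("Off", "Prompt", "RequireAdmin").
                object setting = Registry.GetValue(
                    @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer",
                    "SmartScreenEnabled", null);

                Console.WriteLine("Policy (EnableSmartScreen): " + (policy ?? "<not set>"));
                Console.WriteLine("Setting (SmartScreenEnabled): " + (setting ?? "<not set>"));
            }
        }

    If the policy value is present, removing it (or changing it through Group Policy on an edition that has the editor) is usually what unlocks the Control Panel option; treat that as an assumption to verify rather than a confirmed fix.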

    Read the article

  • Fixing the Windows MBR without a Vista Recovery CD

    - by Aditya Sehgal
    I had Windows Vista + Ubuntu running on my system and deleted the Ubuntu partitions from Windows. However, when I start the system, GRUB throws up an Error 22 (missing partition) and does not let me boot into Windows. The CD-ROM drive on my laptop is fried, so I tried installing Ubuntu again using a USB install, but Ubuntu 9.10 just hangs on the load screen and does nothing. I do not have a Windows Vista Recovery CD (it was a recovery partition on my laptop). What options do I have? How do I fix this?

    Read the article

  • Metro apps crash on startup, driver or permissions issue?

    - by Vee
    After installing Win8 x64 RC, Metro apps worked correctly, but desktop OpenGL apps were slow and unresponsive. I installed the latest Win8 nVidia drivers, and the OpenGL apps started working correctly. At the same time, because of annoying permission messages, I changed the C:\ drive and all its files ownerships to my user, and gave it full permission. I restarted my pc after installing the drivers, and now Metro apps only show the splash screen, then crash. I tried installing other versions of the nVidia drivers, with the same result. My GPU is a GeForce GTX275. Is this a known problem with nVidia drivers? Or maybe changing the ownership of C:\ is the real problem? Thank you. More information (after looking in the event viewer) I've managed to find the problem and the error in the Event Viewer. I still cannot solve it. Here's the information I found by opening the Mail app and letting it crash: Log Name: Microsoft-Windows-TWinUI/Operational Source: Microsoft-Windows-Immersive-Shell Date: 07/06/2012 15.54.17 Event ID: 5961 Task Category: (5961) Level: Error Keywords: User: VEE-PC\Vittorio Computer: vee-pc Description: Activation of the app microsoft.windowscommunicationsapps_8wekyb3d8bbwe!Microsoft.WindowsLive.Mail for the Windows.Launch contract failed with error: The app didn't start.. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-Immersive-Shell" Guid="{315A8872-923E-4EA2-9889-33CD4754BF64}" /> <EventID>5961</EventID> <Version>0</Version> <Level>2</Level> <Task>5961</Task> <Opcode>0</Opcode> <Keywords>0x4000000000000000</Keywords> <TimeCreated SystemTime="2012-06-07T13:54:17.472416600Z" /> <EventRecordID>6524</EventRecordID> <Correlation /> <Execution ProcessID="3008" ThreadID="6756" /> <Channel>Microsoft-Windows-TWinUI/Operational</Channel> <Computer>vee-pc</Computer> <Security UserID="S-1-5-21-2753614643-3522538917-4071044258-1001" /> </System> <EventData> <Data Name="AppId">microsoft.windowscommunicationsapps_8wekyb3d8bbwe!Microsoft.WindowsLive.Mail</Data> <Data Name="ContractId">Windows.Launch</Data> <Data Name="ErrorCode">-2144927141</Data> </EventData> </Event> Found other stuff, this is another error that appears when opening a Metro app: Log Name: Application Source: ESENT Date: 07/06/2012 16.01.00 Event ID: 490 Task Category: General Level: Error Keywords: Classic User: N/A Computer: vee-pc Description: svchost (1376) SRUJet: An attempt to open the file "C:\Windows\system32\SRU\SRU.log" for read / write access failed with system error 5 (0x00000005): "Access is denied. ". The open file operation will fail with error -1032 (0xfffffbf8). Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="ESENT" /> <EventID Qualifiers="0">490</EventID> <Level>2</Level> <Task>1</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2012-06-07T14:01:00.000000000Z" /> <EventRecordID>11854</EventRecordID> <Channel>Application</Channel> <Computer>vee-pc</Computer> <Security /> </System> <EventData> <Data>svchost</Data> <Data>1376</Data> <Data>SRUJet: </Data> <Data>C:\Windows\system32\SRU\SRU.log</Data> <Data>-1032 (0xfffffbf8)</Data> <Data>5 (0x00000005)</Data> <Data>Access is denied. 
</Data> </EventData> </Event> After changing permissions again (adding Everyone and Creator Owner to System32), the "access denied to sru.log" error disappears, but this one appears in its place: Log Name: Application Source: Microsoft-Windows-Immersive-Shell Date: 07/06/2012 16.16.34 Event ID: 2486 Task Category: (2414) Level: Error Keywords: (64),Process Lifetime Manager User: VEE-PC\Vittorio Computer: vee-pc Description: App microsoft.windowscommunicationsapps_8wekyb3d8bbwe!Microsoft.WindowsLive.Mail did not launch within its allotted time. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-Immersive-Shell" Guid="{315A8872-923E-4EA2-9889-33CD4754BF64}" /> <EventID>2486</EventID> <Version>0</Version> <Level>2</Level> <Task>2414</Task> <Opcode>0</Opcode> <Keywords>0x2000000000000042</Keywords> <TimeCreated SystemTime="2012-06-07T14:16:34.616499600Z" /> <EventRecordID>11916</EventRecordID> <Correlation /> <Execution ProcessID="3008" ThreadID="6996" /> <Channel>Application</Channel> <Computer>vee-pc</Computer> <Security UserID="S-1-5-21-2753614643-3522538917-4071044258-1001" /> </System> <EventData> <Data Name="ApplicationId">microsoft.windowscommunicationsapps_8wekyb3d8bbwe!Microsoft.WindowsLive.Mail</Data> </EventData> </Event> Now I'm stuck. It tells me "Activation of app microsoft.windowscommunicationsapps_8wekyb3d8bbwe!Microsoft.WindowsLive.Mail failed with error: The app didn't start. See the Microsoft-Windows-TWinUI/Operational log for additional information." but I can't find the Microsoft-Windows-TWinUI/Operational log. I'm starting a bounty. I found the TWinUI/Operational log. It only tells me: Log Name: Microsoft-Windows-TWinUI/Operational Source: Microsoft-Windows-Immersive-Shell Date: 07/06/2012 16.28.57 Event ID: 5961 Task Category: (5961) Level: Error Keywords: User: VEE-PC\Vittorio Computer: vee-pc Description: Activation of the app microsoft.windowscommunicationsapps_8wekyb3d8bbwe!Microsoft.WindowsLive.Mail for the Windows.BackgroundTasks contract failed with error: The app didn't start.. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-Immersive-Shell" Guid="{315A8872-923E-4EA2-9889-33CD4754BF64}" /> <EventID>5961</EventID> <Version>0</Version> <Level>2</Level> <Task>5961</Task> <Opcode>0</Opcode> <Keywords>0x4000000000000000</Keywords> <TimeCreated SystemTime="2012-06-07T14:28:57.238140800Z" /> <EventRecordID>6536</EventRecordID> <Correlation /> <Execution ProcessID="3008" ThreadID="2624" /> <Channel>Microsoft-Windows-TWinUI/Operational</Channel> <Computer>vee-pc</Computer> <Security UserID="S-1-5-21-2753614643-3522538917-4071044258-1001" /> </System> <EventData> <Data Name="AppId">microsoft.windowscommunicationsapps_8wekyb3d8bbwe!Microsoft.WindowsLive.Mail</Data> <Data Name="ContractId">Windows.BackgroundTasks</Data> <Data Name="ErrorCode">-2144927141</Data> </EventData> </Event> I need to go deeper. I found a forum thread that told me to look for "DCOM" errors. I found this one related to the app crash "The server Microsoft.WindowsLive.Mail.wwa did not register with DCOM within the required timeout." Log Name: System Source: Microsoft-Windows-DistributedCOM Date: 07/06/2012 16.46.45 Event ID: 10010 Task Category: None Level: Error Keywords: Classic User: VEE-PC\Vittorio Computer: vee-pc Description: The server Microsoft.WindowsLive.Mail.wwa did not register with DCOM within the required timeout. 
Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-DistributedCOM" Guid="{1B562E86-B7AA-4131-BADC-B6F3A001407E}" EventSourceName="DCOM" /> <EventID Qualifiers="0">10010</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8080000000000000</Keywords> <TimeCreated SystemTime="2012-06-07T14:46:45.586943800Z" /> <EventRecordID>2763</EventRecordID> <Correlation /> <Execution ProcessID="804" ThreadID="2364" /> <Channel>System</Channel> <Computer>vee-pc</Computer> <Security UserID="S-1-5-21-2753614643-3522538917-4071044258-1001" /> </System> <EventData> <Data Name="param1">Microsoft.WindowsLive.Mail.wwa</Data> </EventData> </Event>

    Read the article

  • How can I fix a computer that blue screens before Windows starts?

    - by Julio
    I have a Dell Inspiron running Windows 7 that keeps getting blue screens of death. At first, the computer slowed down so much it was unusable. After some time, it wouldn't even load Windows anymore; the machine just started going straight to a BSOD after the starting Windows logo. I have tried loading the reset disc, but it just takes me to a loading lobby and never advances. I don't need the files on the computer, but I do need the machine for school. What can I do?

    Read the article

  • In Windows 7 Home Premium, is it possible to grant a user account the "log on as a service" right and if so, how?

    - by Ryan Johnson
    The title says it all. I need the ability for a local user account to log on as a service on a computer running Windows 7 Home Premium. In Windows 7 Ultimate, this is accomplished by going to Control Panel > Administrative Tools > Local Security Policy and adding the user to the "Log on as a service" policy. In Home Premium, there is no Local Security Policy in the Control Panel. Is there another way to add the user to that policy (i.e. a registry setting), or is my only recourse to upgrade the computer to Windows 7 Professional? Thanks in advance, Ryan
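    There is a programmatic route that does not depend on the Local Security Policy snap-in: the LSA functions in advapi32.dll, which are what that snap-in ultimately drives. The sketch below is an assumption-laden illustration rather than a verified Home Premium recipe; it assumes the LSA APIs behave the same on that edition, that the code runs elevated, and the account name passed in is hypothetical.

        using System;
        using System.ComponentModel;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        static class ServiceLogonRight
        {
            [StructLayout(LayoutKind.Sequential)]
            struct LSA_UNICODE_STRING { public ushort Length; public ushort MaximumLength; public IntPtr Buffer; }

            [StructLayout(LayoutKind.Sequential)]
            struct LSA_OBJECT_ATTRIBUTES
            {
                public int Length; public IntPtr RootDirectory; public IntPtr ObjectName;
                public uint Attributes; public IntPtr SecurityDescriptor; public IntPtr SecurityQualityOfService;
            }

            [DllImport("advapi32.dll")]
            static extern uint LsaOpenPolicy(IntPtr SystemName, ref LSA_OBJECT_ATTRIBUTES ObjectAttributes, uint DesiredAccess, out IntPtr PolicyHandle);

            [DllImport("advapi32.dll")]
            static extern uint LsaAddAccountRights(IntPtr PolicyHandle, byte[] AccountSid, LSA_UNICODE_STRING[] UserRights, uint CountOfRights);

            [DllImport("advapi32.dll")]
            static extern uint LsaNtStatusToWinError(uint Status);

            [DllImport("advapi32.dll")]
            static extern uint LsaClose(IntPtr PolicyHandle);

            const uint POLICY_ALL_ACCESS = 0x000F0FFF;

            // Grants the "Log on as a service" right (SeServiceLogonRight) to a local account.
            public static void Grant(string accountName)
            {
                var sid = (SecurityIdentifier)new NTAccount(accountName).Translate(typeof(SecurityIdentifier));
                var sidBytes = new byte[sid.BinaryLength];
                sid.GetBinaryForm(sidBytes, 0);

                var attrs = new LSA_OBJECT_ATTRIBUTES();
                IntPtr policy;
                Check(LsaOpenPolicy(IntPtr.Zero, ref attrs, POLICY_ALL_ACCESS, out policy));
                try
                {
                    string right = "SeServiceLogonRight";
                    var lsaRight = new LSA_UNICODE_STRING
                    {
                        Buffer = Marshal.StringToHGlobalUni(right),
                        Length = (ushort)(right.Length * sizeof(char)),
                        MaximumLength = (ushort)((right.Length + 1) * sizeof(char))
                    };
                    try { Check(LsaAddAccountRights(policy, sidBytes, new[] { lsaRight }, 1)); }
                    finally { Marshal.FreeHGlobal(lsaRight.Buffer); }
                }
                finally { LsaClose(policy); }
            }

            static void Check(uint ntstatus)
            {
                if (ntstatus != 0) throw new Win32Exception((int)LsaNtStatusToWinError(ntstatus));
            }
        }

    Usage would be something like ServiceLogonRight.Grant("SomeLocalAccount") from an elevated process; whether the Home Premium service control manager then honors the right is exactly the open question in the post.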

    Read the article

  • Windows 7 / 8 tool for amplifying sound / equivalent to pavucontrol?

    - by Quandary
    I'm watching live streamed videos, but the volume sometimes is barely loud enough. With Windows, I can only turn the system volume and the flash volume up to maximum, and then bad luck if that's still not loud enough. On Linux, on the same notebook, I have the same problem, but I can use pavucontrol to amplify the sound volume beyond 100%. Unfortunately, I can't always work on Linux, since Visual Studio and SQL-Server only run on Windows. Is there any tool or way to amplify volume on Windows 7 / 8 , without having to resort to run Linux in a virtual machine?

    Read the article

  • I want to change my hard drive. How do I move the system partition with Windows 7?

    - by Semyon Perepelitsa
    I've bought a new hard drive and want to move all my data to it. I had no problem moving all the files on the non-system partition, but I don't know how to move the system partition. Now I have three partitions on the new disk: the first two were created by the Windows installation CD (I tried to move the system using the built-in tools, but that didn't work for me), and the third is filled with my successfully transferred data from the old disk. There are two partitions on the old disk: the first one is the system (Windows 7) and the second one is my old main storage, which I have already moved to the new hard drive, so it is now empty. How can I move Windows 7 with minimal difficulties and losses, so that I can work on the new hard drive just as I did on the old one?

    Read the article

  • Is the Realtek network driver on Windows Update fixed yet?

    - by Ian Boyd
    Months ago I was experiencing problems with my networking, and was hoping the updated Realtek RTL8168C(P)/8111C(P) Family PCI-E Gigabit Ethernet NIC (NDIS 6.20) driver available from Microsoft on Windows Update would fix them. Instead, after the reboot, the network device failed to start, and the driver had to be rolled back. I'm not the only person to get this problem; it was even dealt with by a superuser. A co-worker experienced the exact same problem, and we independently came up with the same solution: hide the updated driver in Windows Update. So I've continued to have my network troubles, and I still need an updated driver. Is the version of the Realtek driver on Windows Update fixed yet? I know Microsoft never pulled it down, but maybe it has been updated since. I really don't want to find out by downloading it. Can someone else confirm that it's no longer broken, or can someone else be my guinea pig?

    Read the article

  • User Friendly port knocker (port knocking client) for Windows?

    - by Ekevoo
    It seems "It's me" is the most popular port knocking client for Windows… except it sucks. It works for console-savvy users such as me but, unsurprisingly, all my users hate console windows, and I know better than to force them upon them. I would love to have a nice port knocker for Windows that is windowed, has launchers, and is easily provisionable (i.e. I tell my user to paste some settings or import some file by double-clicking it). To be honest, just not being console-based would be enough.
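    For context, the whole job of a port knocking client is small: send connection attempts at a fixed sequence of ports so the server-side firewall opens up. A minimal, windowless sketch in C# (the host name and port sequence are hypothetical; a real user-friendly tool would wrap this in a small GUI and read the sequence from an importable settings file):

        using System;
        using System.Net.Sockets;
        using System.Threading;

        class Knock
        {
            // Sends a knock sequence: brief TCP connection attempts to each port in order.
            // The attempts are expected to fail (the ports are closed); the server-side
            // firewall only needs to see the SYN packets arrive in the right order.
            static void Main(string[] args)
            {
                string host = args.Length > 0 ? args[0] : "server.example.com"; // hypothetical host
                int[] knockSequence = { 7000, 8000, 9000 };                     // hypothetical sequence

                foreach (int port in knockSequence)
                {
                    using (var client = new TcpClient())
                    {
                        IAsyncResult attempt = client.BeginConnect(host, port, null, null);
                        attempt.AsyncWaitHandle.WaitOne(300); // don't wait for a full connect timeout
                    }
                    Thread.Sleep(200); // small gap so the knocks arrive as distinct attempts
                }
                Console.WriteLine("Knock sequence sent to " + host);
            }
        }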

    Read the article

  • How to install, on Windows Server 2008, an application that is not natively supported, without using compatibility mode?

    - by t3
    I have an application to install on Windows Server 2008 which is not natively supported by its installer; the installer only permits installation on Windows Vista (or XP, or 7). I do not want to use compatibility mode; even if I did, it doesn't work. So, how can I spoof the installer into believing it is running on Windows Vista (or XP, or 7) so that it can proceed with the installation? Notes: This is an experimental machine, so I am not concerned about the stability of the host or the application.

    Read the article

  • Did a recent WinXP update break CD/DVD read speeds? SP2/SP3

    - by quack quixote
    I have two systems with fresh installations of Windows XP Pro SP3 (SP3 slipstreamed into the installer; fully updated after install). One's a refurbished 2.4GHz Pentium4 system; the other is a new 1.6GHz Atom330 build. Both have brand-new dual-layer CD/DVD burners (one's a LiteOn IDE, the other an LG SATA). Both take a really looooong time to read a single-layer DVD in Windows with Cygwin tools. Specifically, 40 minutes or more. I burn backup data to single-layer DVD+/-R and use MD5 hashes for data verification (made with the standard md5sum tool in Unix or Cygwin). The hashes are burned to disc with the data files, and I use this command to verify: $ cd /path/to/disc/mountpoint ; time md5sum -c < md5.txt Here's how long that takes to run on a full single-layer DVD+/-R disc: Old system (WinXP SP2, 1.8GHz Athlon 2500+, last summer): ~10 minutes Old system (Ubuntu 9.04, 1.8GHz Athlon 2500+): ~10 minutes Old system (Debian 5, dual 550MHz P3): ~10 minutes New Pentium4 system (running Ubuntu 9.04): ~5 minutes New Pentium4 system (running WinXP SP3, file copy from Win Explorer): ~6 minutes New Atom330 system (running WinXP SP3, file copy from Win Explorer): ~6 minutes Now the weird stuff: Old system (WinXP SP2, 1.8GHz Athlon 2500+, today): ~25 minutes New Pentium4 system (running WinXP SP3, read from Cygwin): ~40-50 minutes (?!!) New Atom330 system (running WinXP SP3, read from Cygwin): ~40 minutes (can do it in ~30 minutes ...if i have another program spin up the drive first) Since both systems will copy files in 6 minutes using Windows Explorer, I know it's not a hardware problem. Windows just never spins up the drive during the Cygwin read, so it stays super-slow the whole time. Other programs like EAC and DVD Decrypter seem to spin up the disc just fine during their processing. DMA is enabled on both systems. (Can confirm in Windows' Device Manager on the Atom330, not on the P4.) Nero's DriveSpeed tool doesn't seem to have any effect. Copy times are comparable from commandline with Windows' xcopy. Copying with Cygwin's cp looks more like the problem state -- it will spin up the drive for a short time, never reaches full speed, and lets it spin back down again for most of the copy. What I need is to get full read speeds from Cygwin. Is this a known issue with SP3 or some other recent Windows update? Any other ideas? Update: More testing; Windows will spin up the drive when data is copied with Windows tools, but not when read in place or copied with Cygwin tools. It doesn't make sense to me that Windows spins up the drive for copying, but not for other reads. Might be more of a Cygwin problem? Update 2: GUI activity is sluggish during the problem state -- during the Cygwin verifies, there's a slight but noticable delay when dragging windows or icons around on the desktop, switching windows, Alt-Tabbing through open applications, opening new windows, etc. It reminds me of the delay when opening a Windows Explorer window on My Computer just after inserting a DVD. I've tried updating Cygwin (from 1.5.x to 1.7.x), but no change in the problem behavior. I've also noticed this issue occurs on WinXP SP2, but it's not exactly the same -- some spin-up occurs, so the read happens in ~25-30 minutes instead of 40+. The SP2 system used to run the verifies in ~10 minutes, and when it first changed (not sure exactly when, maybe in late November or early December 2009) I thought it was dying hardware. 
    This is why I suspect an official update broke this functionality; it had worked for years on that SP2 box.

    Read the article
