Search Results

Search found 7722 results on 309 pages for 'pitfalls to avoid'.

Page 59 of 309

  • Is there a pre-made Continuous Integration solution for .NET applications?

    - by Brett Rigby
    From my perspective, we're constructing our own 'flavour' of NAnt/Ivy/CruiseControl.NET in-house, and I can't help but get the feeling that other dev shops are doing exactly the same work, with everybody then finding the same problems and pitfalls. I'm not complaining about NAnt, Ivy or CruiseControl at all, as they've been brilliant in helping our team of developers become more sure of the quality of their code; it just seems strange that these tools are very popular, yet we're all reinventing the CI wheel. Is there a pre-made solution for building .NET applications using the tools mentioned above?
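
    For reference, here is a minimal sketch of the kind of ccnet.config project block every shop seems to hand-roll (the project name, repository URL and build file are placeholders, not a real setup):

        <cruisecontrol>
          <project name="MyApp">
            <sourcecontrol type="svn">
              <trunkUrl>http://svn.example.com/myapp/trunk</trunkUrl>
              <workingDirectory>checkout</workingDirectory>
            </sourcecontrol>
            <triggers>
              <intervalTrigger seconds="60" />
            </triggers>
            <tasks>
              <nant>
                <baseDirectory>checkout</baseDirectory>
                <buildFile>myapp.build</buildFile>
                <targetList>
                  <target>build</target>
                </targetList>
              </nant>
            </tasks>
          </project>
        </cruisecontrol>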

    Read the article

  • Common causes of slow-performing jQuery and how to optimize the code?

    - by Polaris878
    Hello, This might be a bit of a vague or general question, but I figure it might be able to serve as a good resource for other jQuery-ers. I'm interested in common causes of slow-running jQuery and how to optimize these cases. We have a good amount of jQuery/JavaScript performing actions on our page... and performance can really suffer with a large number of elements. What are some obvious performance pitfalls you know of with jQuery? What are some general optimizations a jQuery-er can do to squeeze every last bit of performance out of his/her scripts? One example: a developer may access an element through a selector that is slower than some other way of reaching the same element. Thanks
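
    To illustrate with a sketch (#list and .item are made-up names): re-running a selector inside a loop versus caching the result once can make a visible difference on pages with many elements.

        // slow: #list is looked up on every iteration
        $('.item').each(function () {
            $('#list').append('<li>item</li>');
        });

        // faster: look it up once and reuse the wrapped object
        var $list = $('#list');
        $('.item').each(function () {
            $list.append('<li>item</li>');
        });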

    Read the article

  • Produce a script to hit Google once a day and log our position in the results?

    - by hawbsl
    The need has arisen within our organisation to monitor (on a daily basis) where our site appears (both organic and PPC) on page 1 of Google, and also where a key competitor appears, for certain keywords. In the immediate short term a colleague is doing this by hitting Google manually and jotting down the results. Yep. It occurs to us we can write a script (e.g. using C#) to do this. I know Analytics will tell us an awful lot, but it doesn't note the competitor's position, plus I don't think it has other data we want. Question is: is there an existing basic tool which does this (for free, I guess)? And if we write it ourselves, where do we start, and are there obvious pitfalls to avoid (for example, can Google detect and block automated requests)?
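
    To make the question concrete, this is the kind of rough C# sketch we had in mind - nothing production-grade: the query and domain are placeholders, the "parsing" is naive, and automated querying is exactly the sort of thing Google's terms frown upon:

        using System;
        using System.Net;

        class RankChecker
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    client.Headers.Add("user-agent", "Mozilla/5.0"); // look like a browser
                    string html = client.DownloadString(
                        "http://www.google.com/search?q=our+keywords&num=100");
                    // crude check: does our domain appear anywhere in the first 100 results?
                    bool found = html.IndexOf("ourdomain.com",
                        StringComparison.OrdinalIgnoreCase) >= 0;
                    Console.WriteLine(found ? "present in top 100" : "not in top 100");
                }
            }
        }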

    Read the article

  • Using Lambda Statements for Event Handlers

    - by lush
    I currently have a page which is declared as follows:

        public partial class MyPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // snip
                MyButton.Click += (o, i) =>
                {
                    // snip
                };
            }
        }

    I've only recently moved to .NET 3.5 from 1.1, so I'm used to writing event handlers outside of Page_Load. My question is: are there any performance drawbacks or pitfalls I should watch out for when using the lambda method for this? I prefer it, as it's certainly more concise, but I do not want to sacrifice performance to use it. Thanks.
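
    One concrete pitfall worth knowing about (a sketch; DoWork is a made-up method): a lambda handler cannot be detached later, because -= with a second lambda creates a different delegate instance.

        MyButton.Click += (o, i) => DoWork();
        MyButton.Click -= (o, i) => DoWork();   // does NOT remove the handler above

        // if you might ever need to unsubscribe, keep a reference instead:
        EventHandler handler = (o, i) => DoWork();
        MyButton.Click += handler;
        MyButton.Click -= handler;              // this one works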

    Read the article

  • How to find the entity with the greatest primary key?

    - by simpatico
    I have an entity LearningUnit that has an int primary key; actually, it has nothing more. Entity Concept has the following relationship with it:

        @ManyToOne
        @Size(min=1, max=7)
        private LearningUnit learningUnit;

    In a constructor of Concept I need to retrieve the LearningUnit with the greatest primary key. If no LearningUnit exists yet, I instantiate one. I then set this.learningUnit to the retrieved/instantiated one. Finally, I call the empty constructor of Concept in a try-catch block, to have the entity manager do the cardinality check. If an exception is thrown (I expect one in the case that seven other Concepts already refer to the same LearningUnit), I instantiate a new LearningUnit with a new, greater primary key. Please also point out any clear pitfalls in the algorithm outlined above.
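
    For reference, a sketch of one way to fetch the LearningUnit with the greatest key (assuming an EntityManager named em and an id field named id; both names are assumptions here):

        java.util.List<LearningUnit> top = em
            .createQuery("SELECT l FROM LearningUnit l ORDER BY l.id DESC",
                         LearningUnit.class)
            .setMaxResults(1)
            .getResultList();
        LearningUnit latest = top.isEmpty() ? new LearningUnit() : top.get(0);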

    Read the article

  • Is it safe to unset PHP superglobals if this behavior is documented?

    - by Stephen
    I'm building a PHP framework, and in it I have a request object that parses the URL as well as the $_GET, $_POST and $_FILES superglobals. I want to encourage safe web habits, so I'm protecting the data against SQL injection, etc. In order to ensure users of this framework are accessing the safe, clean data through the request object, I plan to use unset($_GET, $_POST, $_REQUEST); after parsing those variables. I will document this in the method comments, and explain in the framework documentation that this is happening. My question is: would this be desirable behavior? What are the potential pitfalls that I have not foreseen?
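
    A minimal sketch of the idea at framework bootstrap (the Request class and its API are this framework's own invention, shown only for illustration):

        // capture the raw input first, then drop the superglobals,
        // so all later code is forced through the request object
        $request = new Request($_GET, $_POST, $_FILES);
        unset($_GET, $_POST, $_REQUEST);

        // from here on:
        $id = $request->get('id');   // filtered/escaped access only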

    Read the article

  • How many variables is too many when storing in $_SESSION?

    - by steve
    Hi - I'm looking for an idea of best practices here. I have a web-based application that has a number of hooks into other systems - let's say 5 - and each of these 5 systems has a number of flags to determine different settings the user has selected in said systems, let's say 5 settings per system (so 5×5). I am storing the status of these settings in the user's session variables and was wondering: is that a sufficient way of doing it? I'm learning PHP as I go along, so I'm not sure about any pitfalls that this could run me into!
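
    As a sketch of what this looks like in practice (system and flag names invented), a single nested array keeps the 5×5 flags under one session key:

        session_start();
        $_SESSION['integrations'] = array(
            'crm'     => array('sync' => true,  'notify' => false),
            'billing' => array('sync' => false, 'notify' => true),
            // ... three more systems, five flags each
        );

        // reading one flag back later:
        $notify = $_SESSION['integrations']['crm']['notify'];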

    Read the article

  • Partially parse C++ for a domain-specific language

    - by PierreBdR
    I would like to create a domain-specific language as an augmented C++ language. I will need mostly two types of constructs:

    - Top-level constructs for specialized types or declarations
    - In-code constructs, i.e. added primitives to make function calls or idioms easier

    The language will be used for scientific computing purposes, and will ultimately be translated into plain C++. C++ has been chosen as it seems to offer a good compromise between ease of use, efficiency and availability of a wide range of libraries. A previous attempt using flex and bison failed due to the complexity of the C++ syntax; the existing parser can still fail on some constructs. So we want to start over, but on better bases. Do you know about similar projects? And if you attempted to do so, what tools would you use? What would be the main pitfalls? Would you have recommendations in terms of syntax?

    Read the article

  • Pass structured data from a C++ app to an ASP.NET web service

    - by Odrade
    I have a Visual C++ application that needs to communicate with an ASP.NET web service. Specifically, the app needs to pass structured data (e.g. objects that contain lists of structures, etc.) as a parameter to one of the service methods. The C++ application is already generating an XML document that contains this data. The document is generated using an XML library, so it should always be well-formed. What is a good method for passing this data to the web service? I'm thinking about passing the document to the web service as a string parameter, then deserializing it to a .NET object based on an xsd. But I hear that passing an XML doc as a string parameter is not recommended. So, my questions: What are the pitfalls associated with sending the document as a string parameter, assuming that the document itself is always well-formed? Assuming the above is a bad idea, what are some good alternate approaches?
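
    For concreteness, a sketch of the receiving side as currently imagined (OrderBatch stands in for a class generated from the xsd, e.g. with xsd.exe; the method itself is hypothetical):

        [WebMethod]
        public void SubmitData(string xmlPayload)
        {
            var serializer = new System.Xml.Serialization.XmlSerializer(typeof(OrderBatch));
            using (var reader = new System.IO.StringReader(xmlPayload))
            {
                var batch = (OrderBatch)serializer.Deserialize(reader);
                // work with the strongly typed object graph from here
            }
        }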

    Read the article

  • How to make a mutable C array for this data type?

    - by mystify
    There's this instance variable in my Objective-C class:

        ALuint source;

    I need a mutable array of OpenAL sources, so in this case I probably need a mutable C array. But how would I create one? There are several questions wrapped up in that:

    1) How do I create a mutable C array?
    2) How do I add something to that array?
    3) How do I remove something from it?
    4) What memory management pitfalls must I be aware of? Must I free() it in my -dealloc method?

    And yes, I think this is something for the nice community wiki...
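
    Here is a sketch of the plain-C approach (the OpenAL header path varies by platform, and the struct/function names are mine):

        #include <stdlib.h>
        #include <OpenAL/al.h>   /* <AL/al.h> on non-Apple platforms */

        typedef struct {
            ALuint *items;
            size_t  count;
            size_t  capacity;
        } SourceArray;

        /* 2) append, growing the buffer as needed; returns 0 on success */
        int source_array_add(SourceArray *a, ALuint s) {
            if (a->count == a->capacity) {
                size_t newcap = a->capacity ? a->capacity * 2 : 8;
                ALuint *p = realloc(a->items, newcap * sizeof *p);
                if (!p) return -1;
                a->items = p;
                a->capacity = newcap;
            }
            a->items[a->count++] = s;
            return 0;
        }

        /* 3) remove by index; order is not preserved */
        void source_array_remove(SourceArray *a, size_t i) {
            a->items[i] = a->items[--a->count];
        }

        /* 4) yes: free() the buffer, e.g. from -dealloc */
        void source_array_free(SourceArray *a) {
            free(a->items);
            a->items = NULL;
            a->count = a->capacity = 0;
        }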

    Read the article

  • Question about a particular pattern of JavaScript class definition

    - by fenderplayer
    Recently I saw the following code that creates a class in JavaScript:

        var Model.Foo = function(){
            // private stuff
            var a, b;

            // public properties
            this.attr1 = '';
            this.attr2 = '';

            if(Model.Foo._init === 'undefined'){
                Model.Foo.prototype = {
                    func1 : function(){ /* ... */ },
                    func2 : function(){ /* ... */ }
                    // other prototype functions
                };
            }
            Model.Foo._init = true;
        };

        // Instantiate and use the class as follows:
        var foo = new Model.Foo();
        foo.func1();

    I guess the _init variable is used to make sure we don't define the prototype again. Also, I feel the code is more readable since I am placing everything in a function block (so in OOP-speak, all attributes and methods are in one place). Do you see any issues with the code above? Any pitfalls of using this pattern if I need to create lots of classes in a big project?
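
    For comparison, a sketch of the more conventional arrangement I'm weighing it against: the prototype defined once, outside the constructor, with no _init flag. (Note that the snippet above also needs an existing Model object, since "var Model.Foo = ..." won't parse as written - var cannot declare a dotted name.)

        var Model = Model || {};

        Model.Foo = function () {
            var a, b;          // private stuff
            this.attr1 = '';   // public properties
            this.attr2 = '';
        };

        Model.Foo.prototype = {
            func1: function () { /* ... */ },
            func2: function () { /* ... */ }
        };

        var foo = new Model.Foo();
        foo.func1();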

    Read the article

  • Migrating an Access Database into SharePoint 2007.

    - by Mike T
    To my surprise and delight I read that an administrator can import (nearly directly) an Access 2007 database into a SharePoint site. Automagically, the database is transformed into lists and views, with some table lookups thrown in for good measure. With Access 2007 installed on the client machine, even the forms and whatnot can still be reused. To me... this sounds too good to be true. Has anyone actually done this? With all this good news, where are the pitfalls and the bad stuff? Depending on the size of the database, wouldn't this somehow "gum up the works" in the SharePoint database? Sources: http://madhurahuja.blogspot.com/2007/01/adding-data-to-sharepoint-l-ists-in.html http://social.technet.microsoft.com/Forums/en-US/sharepointadmin/thread/17745835-a861-4984-9f44-7291fdae7d07

    Read the article

  • Windows web server and SQL Server on same dedicated server

    - by asinc
    I'm currently trying to decide on the best approach to hosting a few moderate-traffic websites for production e-commerce and online applications. We'd like to move to a dedicated server and are looking at this as the most likely machine: quad-core Intel Core 2 Quad Q9550 processor, 2.83 GHz × 4, 4 GB Kingston RAM. This would run Windows Web Server 2008 R2 x64, and potentially also SQL Server 2008 Web Edition and SmarterMail server. Given that we already pay for a high-end VPS for development, testing and shared version control, we'd like to avoid going with two servers for production. We'd like to avoid shared SQL Server hosting, and have thought of using the development server as the database server too - but that is potentially a security risk due to its use for development by internal and contract users. The questions are:

    - Do you feel there would be performance degradation from running this all on the same machine?
    - Are there significant issues to be concerned about if we do this?

    We understand that best practice would be to run separate DB and app servers, but the volume of traffic is currently not that high, and adding another server just for the database is currently too costly. What are others doing out there? Alternatively, would you recommend instead going with two separate VPS servers with 2 GB RAM each on Hyper-V, which would be about the same cost as the single dedicated server above? Thanks!

    Read the article

  • How to mount a LOFS in Solaris that doesn’t cross mountpoints

    - by jcea
    I need to access my "root" ZFS dataset to delete a file under "/var", but "/var" is overlaid by another ZFS dataset. Since these are system datasets, I can't "umount" them while the machine is running, and I want to avoid rebooting the system in "failsafe" mode, since this is a production machine. Theoretically, ZFS would refuse to mount the "/var" dataset over the underlying "/var" because it is not empty; but it works, possibly because they are system datasets mounted early in the boot process. Having the underlying "/var" not empty is preventing me from creating an ABE (Alternate Boot Environment), so patching is risky and I can't upgrade my system using Live Upgrade. The machine is remote. I have an IP KVM, but I would rather avoid booting this machine in "failsafe" mode if I can. I know there is a file in "/var" because I can snapshot the "root" dataset and check it; but snapshots are read-only, so I can't get rid of the file. I tried "mkdir /tmp/zzz; mount -F lofs / /tmp/zzz", but when I go to "/tmp/zzz/var", I see the "/var" dataset, not the underlying "root" dataset. That is, the LOFS is crossing mountpoints. I would usually like that, but not this time! Any suggestion, besides rebooting the machine in "failsafe" mode and messing with it through the IP KVM?
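
    One avenue I still have to verify against the lofs(7FS)/mount_lofs(1M) man pages for my release (treat this as an unverified sketch): some Solaris versions are said to accept a nosub option that stops the loopback mount from mirroring nested mounts, which would be exactly what I need here:

        mkdir /tmp/zzz
        mount -F lofs -o nosub / /tmp/zzz
        ls /tmp/zzz/var    # should now show the underlying root dataset's /var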

    Read the article

  • How do I disable MEDIUM and WEAK/LOW strength ciphers in Apache + mod_ssl?

    - by superwormy
    A PCI compliance scan has suggested that we disable Apache's MEDIUM and LOW/WEAK strength ciphers for security. Can someone tell me how to disable these ciphers? Apache v2.2.14, mod_ssl v2.2.14. This is what they've told us:

        Synopsis: The remote service supports the use of medium strength SSL ciphers.
        Description: The remote host supports the use of SSL ciphers that offer medium strength encryption, which we currently regard as those with key lengths at least 56 bits and less than 112 bits.
        Solution: Reconfigure the affected application if possible to avoid use of medium strength ciphers.
        Risk Factor: Medium / CVSS Base Score: 5.0 (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N)

        Synopsis: The remote service supports the use of weak SSL ciphers.
        Description: The remote host supports the use of SSL ciphers that offer either weak encryption or no encryption at all.
        See also: http://www.openssl.org/docs/apps/ciphers.html
        Solution: Reconfigure the affected application if possible to avoid use of weak ciphers.
        Risk Factor: Medium / CVSS Base Score: 5.0 (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N)
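
    For anyone else hitting this: the directives involved are SSLProtocol and SSLCipherSuite in the SSL virtual host. A sketch of the shape of the fix (verify the resulting list with "openssl ciphers -v" and re-run the scan before trusting it):

        SSLProtocol all -SSLv2
        SSLCipherSuite HIGH:!aNULL:!eNULL:!MD5:!LOW:!MEDIUM:!EXP
        SSLHonorCipherOrder on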

    Read the article

  • PBS batch jobs - the qalter command

    - by Ryan Budney
    I've got a giant computation running on a Scientific Linux cluster. At present I have over 600 jobs parked in the queue, waiting for processor time, while a few are running. I'm trying to use the qalter command on some of the idle but scheduled jobs: I'd like to schedule them for a later time, so that other users can jump part of the queue, sort of as an act of politeness. Is this doable? For example, job 292399 is currently idle, scheduled to be run whenever a spot in the queue opens up. But if I run "qalter -a 10051000 292398" followed by "qrerun 292398", I get "qrerun: Request invalid for state of job 292398.euler". From the qalter documentation, I thought 10051000 refers to tomorrow (Oct 5th, 10 a.m.), but perhaps I'm misunderstanding something? If I'm going about this the wrong way, please let me know. The main thing I'm looking for is a command that's easily scriptable, so that I can modify when my queued tasks get run; qalter seems good for that purpose if I can get it working. I'd rather avoid running qdel and re-qsubbing the computations, as there's a bookkeeping issue as to which tasks to restart (vs. which ones not to), and I want to avoid that kind of bookkeeping. From googling around I notice some qalter commands have rather different date formats, but the above appears to be correct, as far as I can tell from the man docs. Any help would be appreciated.
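
    For what it's worth, the scripted form I'm aiming at looks like this (a sketch assuming Torque/PBS-style qselect; the -a datetime format is [[[[CC]YY]MM]DD]hhmm[.SS], so 10051000 should indeed mean Oct 5th 10:00, or 1010051000 with the year included). If I read the docs right, qrerun only applies to running jobs, which may explain the "Request invalid for state of job" error - a job that is still queued should only need the qalter:

        # push all of my queued (idle) jobs back to Oct 5th, 10:00
        for job in $(qselect -u "$USER" -s Q); do
            qalter -a 1010051000 "$job"
        done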

    Read the article

  • haproxy: Is there a way to group acls for greater efficiency?

    - by user41356
    I have some logic in a frontend that routes to different backends based on both the host and the URL. Logically it looks like this:

        if hdr(host) ends with 'a.domain.com':
            if url starts with '/dir1/':
                use backend domain.com/dir1/
            elif url starts with '/dir2/':
                use backend domain.com/dir2/
            # ... else-if ladder repeats on different dirs
        elif hdr(host) ends with 'b.domain.com':
            # another else-if ladder exactly the same as above
        # ... else-if ladder repeats like this on different domains

    Is there a way to group ACLs to avoid having to repeatedly check the domain ACL? Obviously there needs to be a use_backend statement for each possibility, but I don't want to have to check the domain over and over because it's very inefficient. In other words, I want to avoid this:

        use_backend domain.com/url1/ if acl-domain.com and acl-url1
        use_backend domain.com/url2/ if acl-domain.com and acl-url2
        use_backend domain.com/url3/ if acl-domain.com and acl-url3
        # tons more possibilities below

    because it has to keep checking acl-domain.com. This is particularly an issue because I have specific rules for subdomains such as a.domain.com and b.domain.com, but I want to fall back on the most common case of *.domain.com. That means every single rule that uses a specific subdomain must be checked prior to *.domain.com, which makes it even more inefficient for the common case.
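
    One possibility I'm considering, if we can run HAProxy 1.5 or newer (a sketch; the map file path and backend names are invented): replace the ACL ladder with a single map lookup keyed on host + path via the base fetch.

        # /etc/haproxy/routes.map holds "prefix backend" pairs, e.g.:
        #   a.domain.com/dir1/   bk_a_dir1
        #   a.domain.com/dir2/   bk_a_dir2
        #   b.domain.com/dir1/   bk_b_dir1

        frontend fe_main
            bind *:80
            # 'base' is the Host header concatenated with the path;
            # map_beg matches it against the key prefixes, one lookup total
            use_backend %[base,map_beg(/etc/haproxy/routes.map,bk_default)]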

    Read the article

  • PDU management interface has low availability - product flaw or isolated issue

    - by DeanB
    Our colocation provider has supplied us with APC AP7932 switched 0U PDUs as part of several cabinets they provide us. We have had a lot of trouble with the network management aspect of these PDUs, which I'll describe below. We are moving to cage space in the same datacenter, and plan to provide our own PDUs, so I'd like to determine which enterprise-grade PDUs have been reliable performers from a remote management perspective. Our colo-provided PDUs are configured to support management via an SSL web UI and via telnet. We updated the firmware on all of them to the current version as of NOV2011. They respond to pings reliably, and we have no reason to suspect a network layer issue. However, we experience frequent hangs, timeouts, disconnects, and general unavailability from the embedded management host in all of the PDUs. We occasionally have to restart the microcontroller on the PDU to recover from what appears to be an occasional hard fault. The outlets stay powered (thankfully), but the management aspect is so unreliable that it has become an ops liability - we can't be confident that we could get into the PDU to power cycle a host if we needed to. We have 3 PDUs that all exhibit identical behavior. There are many manufacturers of enterprise-grade 0U switched PDUs, all with comparable features. If I looked at the datasheet for our current PDUs, they would appear to be a good fit -- only with the benefit of suffering through using them do we know to avoid them. I'd like to avoid picking a PDU that looks fine on paper, but has similar reliability issues. What has been others' experience with switched PDUs? Is this level of flakiness normal?

    Read the article

  • Redirect landing pages without changing the URL

    - by Gerard
    I'm working with Apache and Joomla. I would like to have some URIs that land on the same landing page, but without the URL changing. So, for example:

        example.com/luggage/delay/iberia
        example.com/luggage/delay/aireuropa
        example.com/luggage/delay/ryanair

    need to point to example.com/luggage/delay, but the address bar must keep showing the different airline companies. I've been messing around with mod_rewrite with the following attempts:

    1:

        RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-iberia [NC,OR]
        RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-vueling [NC,OR]
        RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-ryanair [NC,OR]
        RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-aireuropa [NC,OR]
        RewriteRule ^(.*)$ /equipaje/problemas-de-equipaje/perdida-equipaje [P]

    2:

        RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-iberia equipaje/problemas-de-equipaje/perdida-equipaje [NC,L]
        RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-vueling equipaje/problemas-de-equipaje/perdida-equipaje [NC,L]
        RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-ryanair equipaje/problemas-de-equipaje/perdida-equipaje [NC,L]
        RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-aireuropa equipaje/problemas-de-equipaje/perdida-equipaje [NC,L]

    The idea is to have different URLs for AdWords, but then block indexing of the airline landing pages to avoid duplicated content. I've been searching through several examples and trying to understand how the module works, but so far I haven't accomplished it... Is it possible to do this with mod_rewrite? Many thanks!
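
    From what I understand so far, a single internal rewrite (no [P] proxy flag, no redirect flag) should keep the address bar untouched. A sketch of what I mean, collapsing the four rules into one (untested):

        RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-(iberia|vueling|ryanair|aireuropa)$ equipaje/problemas-de-equipaje/perdida-equipaje [NC,L]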

    Read the article

  • SQL Server replication and load balance

    - by Ahmed Galal
    I'm running a web service that serves a mobile app on IIS 8 and SQL Server 2014. My service has a massive load and I'm trying to improve performance; most of the load is happening on SQL. I don't think I have a single fixable bottleneck - my processor and RAM are at max, and I think my code is not that bad; I'm already using memcached and other stuff to avoid hitting SQL too much. I know I can always upgrade the server hardware, but I already have a spare server that I would like to use, so I was thinking of splitting the SQL load across the 2 servers. What I was thinking of is to set up replication on the other server and do some load balancing, but I'm not sure how to do the load balancing. I know I can adjust my code to hit the other server for some queries, but I was hoping to find a solution that avoids changing my code. So my question is: what are the ways of doing load balancing between 2 SQL Servers? I would appreciate suggestions, best practices or some directions. Thanks.

    Read the article

  • Linux - preventing an application from failing due to lack of disk space [migrated]

    - by Jernej
    Due to an unpredicted scenario, I currently need to find a solution to the fact that an application (which I do not wish to kill) is slowly hogging the entire disk space. To give more context: I have a Python application that uses multiprocessing.Pool to start 5 worker processes. Each worker writes some data to its own file. The program is running on Linux and I do not have root access to the machine. The program is CPU-intensive and has been running for months; it still has a few days to go to write all the data. 40% of the data in the files is redundant and can be removed after a quick test. The system on which the program is running only has 30 GB of remaining disk space, and at the current rate of work it will surely be exhausted before the program finishes. Given the above points, I see the following solutions, each with its own problems:

    1. Given that process number i is writing to file_i, is it safe to move file_i to an external location? Will the OS simply create a new instance of file_i and write to it? I assume moving the file would remove it and the process would end up writing to a "dead" file?
    2. Is there a "command line" way to stop 4 of the 5 spawned workers, wait until one of them finishes, and then resume their work? (I am sure one single worker would avoid hogging the disk.)
    3. Suppose I use CTRL+Z to freeze the main process. Will this stop all the other processes spawned by multiprocessing.Pool? If yes, can I then safely edit the files so as to remove the redundant lines?

    Given the three options that I see, would any of them work in this context? If not, is there a better way to handle this problem? I would really like to avoid the scenario in which the program crashes just a few days before its finish.
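
    Regarding option 2, this is the kind of command-line approach I was imagining (a sketch; <parent_pid> stands for the PID of the main Python process): SIGSTOP and SIGCONT freeze and resume processes without killing them.

        pkill -STOP -P <parent_pid>   # freeze all direct children (the pool workers)
        # ... test and trim the output files while nothing is writing ...
        pkill -CONT -P <parent_pid>   # let them carry on where they left off

        # caveat: rewriting a file that a frozen process still has open does not
        # reset that process's write offset, so in-place trimming can corrupt
        # the output once the writer resumes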

    Read the article

  • What's wrong with my .htaccess? Trying to simplify my current code

    - by AlexV
    This is my current .htaccess:

        # If the requested URI does not end with an extension
        RewriteCond %{REQUEST_URI} !\.(.*)
        # If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        # Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

        # If the requested URI ends with .php*
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        # If the requested file is not seo-urls-mapper.php (avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$
        # Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Since all the conditions are compatible except the first ones (the no-extension and *.php* matches), all I should have to do is add the [OR] flag to those two lines. But when I add it, it doesn't work: my no-extension rule stops working. This is my new (not working) code:

        # If the requested URI does not end with an extension OR the URI ends with .php*
        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        # If the requested file is not seo-urls-mapper.php (avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$
        # If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        # Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Hopefully someone will be able to clarify this issue... I guess I don't fully understand the use of [OR]. Thanks!
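
    After staring at it some more, my current suspicion (sketched below, untested) is that the third condition is the problem: once the first two are OR'd together, the REQUEST_FILENAME check is still ANDed in, and it can never match a URI with no extension, so those requests fall through. Folding the alternation into one regex and excluding the mapper with a separate negated condition would look like:

        # extension-less OR .php URIs, in a single condition
        RewriteCond %{REQUEST_URI} (^[^.]+$|\.php) [NC]
        # never map the mapper itself (avoid .htaccess loop)
        RewriteCond %{REQUEST_URI} !/seo-urls-mapper\.php [NC]
        # skip excluded locations
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]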

    Read the article

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process, to avoid USN rollback and other replication issues. The following are the steps I was planning to perform:

    1. Shut down both DCs.
    2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenters through IP address instead of hostname).
    3. Power them up at the new datacentre.
    4. Configure network interface/DNS/DHCP for both DCs in the new datacentre.

    I chose Veeam FastSCP rather than VMware Standalone Converter because it copies rather than converts. Someone also suggested that I use a backup-and-restore app like Veeam Backup & Replication. Sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was so slow that it registered only 1 KB/s, as opposed to the normal 1 MB/s (or more). When that transfer attempt failed, I tried to cold clone both DCs, which resulted in both ESX hosts getting disconnected. I tried troubleshooting by referring to this: VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that DNS being down was the cause of all the unusual occurrences: the moment I powered the DCs back up via the VMware console command, the ESX hosts were able to connect to the vCenter again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.

    Read the article

  • Ubuntu 12.10 64bit host reboots when trying to install any guest system using VirtualBox

    - by gts123
    I am having a really nasty problem with VirtualBox: every time I try to install any guest OS (using an ISO file as the CD for installation media), the installation starts normally, but as soon as it is about to start installing to the virtual hard drive or loading (e.g. as a live FS), it causes the host system to reboot abruptly. Config is as below:

        Host system: Ubuntu 12.10 64bit - Intel® Core i7-2640M CPU @ 2.80GHz × 4
        VirtualBox version: 4.1.18_Ubuntu r78361
        Guest OS systems tried: 32bit versions of FreeBSD 9, Debian 6, Tails 0.14

    VM setup: I tried to keep the setup as minimal as possible for each system in the hope of avoiding conflicts, and I've tried different values and combinations of the settings below, but the problem still persists:

        Shared clipboard: Disabled
        Show in fullscreen/seamless: Disabled
        Remember runtime changes: Disabled
        Base memory: 2048 MB
        Chipset: PIIX3
        IO APIC: Disabled
        EFI: Disabled
        Absolute pointing device: Disabled
        Processor(s): 1 CPU
        PAE/NX: Disabled
        VT-x/AMD-V: Disabled
        Video memory: 12 MB
        3D/2D acceleration: Disabled
        Storage IDE controller: PIIX3 (same as chipset, instead of PIIX4)
        Use host I/O cache: No
        Audio: Disabled
        Network adapter: NAT
        USB controller: Disabled
        No shared folders

    Another side effect of the reboot is that it appears to leave no information in the error log files, which doesn't make things any easier. Please help.

    Read the article

  • Power Outage Interrupted Upgrade from Windows Vista Ultimate to 7 Ultimate, Reverted to Vista, Now Vista is failing... What Next?

    - by tednewk
    I was in the midst of what seemed to be a successful upgrade from Vista Ultimate to 7 Ultimate when there was a brief blackout. The upgrade failed and Windows reverted back to Vista. Now Vista is very slow to boot, has problems waking back up from inactivity, and quickly loses its wireless connection. The wake-up problem manifests itself as the mouse clearly shown on a black screen, but with no access to the desktop, taskbar or Explorer. Even Ctrl-Alt-Delete doesn't seem to work: no task menu, no reboot. Hitting the reset button reboots the machine with the usual black-screen warnings offering Safe Mode. I tried to do a system restore to a point before the upgrade; that didn't seem to work. My guess is that my system is a mutant, with parts of Vista and parts of 7 crashing each other. I would like to avoid a clean install if at all possible, to avoid reinstalling other software. What should I try now? My thoughts are:

    1. Make a system back-up to lock the computer in place.
    2. Try a second 7 upgrade.
    3. If that appears to be working, make another back-up.
    4. If not, reload the back-up and try repairing Vista from the DVD.
    5. If that appears to work, make another back-up, let the system stabilize for about a week, then try the 7 install again.
    6. If that doesn't work, are there any other options to try before settling for a clean install?

    Another complication: I am doing this by "remote control". I'm traveling for my job, and I'll be talking my son through it over the phone. (Kind of like the "talk them through landing the 747" cliché from all the '70s adventure shows!) So is there a way of simplifying the steps? Thanks, Ted

    Read the article
