Search Results

Search found 22968 results on 919 pages for 'stuck again'.

  • Hard-drive will randomly fail to load GRUB. Booting a live USB/CD fixes the issue temporarily

    - by Usagi
    I am running 12.04 64-bit and am dual-booting with Win7 (for full disclosure, although I suspect that has nothing to do with my problem). Occasionally the boot loader (GRUB) will fail to load and I will be presented with a black screen with a single blinking cursor. There is no apparent pattern, although I suspect there is one and that it is related to a program I am running. This has happened on eight of my last ten power cycles now, and I can fix it consistently; however, I have no idea why it happens. My current fix is to boot a live CD (I've tried both KNOPPIX and Ubuntu with the same result) and that's it. Somehow booting with the live CD is enough to "wake up" my hard drive. I then reboot and GRUB magically appears again. So what is going on? Is it possible that a program is corrupting my MBR and the live CD is restoring it? How can I narrow down the possibilities? Thanks.

    Additional: This is still a problem. I'm convinced now that it is not hardware-related, as I've spent the last month and several boot cycles on Windows without a hiccup. Recently, when I started using Ubuntu again, the problem started again. I am more interested in figuring out what is going on than in actually fixing the problem. Are there any tools, logs, etc. I can use to unravel this mystery?
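
    One low-tech way to test the MBR-corruption theory, as a sketch (it assumes the boot drive is /dev/sda; check with sudo fdisk -l first): snapshot the boot sector after a good boot, then dump it again from the live CD after a failed boot and compare.

        # After a good boot: save the first sector (GRUB boot code + partition table).
        sudo dd if=/dev/sda of=~/mbr-good.bin bs=512 count=1

        # After a failed boot, from the live CD (write the dump somewhere persistent):
        sudo dd if=/dev/sda of=/tmp/mbr-bad.bin bs=512 count=1
        cmp ~/mbr-good.bin /tmp/mbr-bad.bin   # no output = MBR identical, so look elsewhere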

  • Windows 8 Metro Applications not working

    - by cgoddard
    I have been using Windows 8 for the past two weeks and have had some interesting experiences with it. At first nothing on it worked, but after uninstalling Lenovo One Key Theater it worked again for a bit. Then the Metro apps stopped working, but after uninstalling Kaspersky they started working again. Now it is doing it again. Whenever I try to open a Metro app, the app screen comes up (let's use the Store as an example), but the icon and the spinner move from the centre of the screen to the top left, and it just spins forever and never loads. Does anyone know what could be causing this, or at least how to diagnose it? Out of interest, the camera app still works fine.

  • How can I be alerted if an application is not running in Windows 2008 R2?

    - by Magnetic_dud
    I have a critical application that I need to keep running on my server. Unfortunately it's poorly coded and it keeps crashing. It's a big problem when it's not running, but I can't use a simple application monitor like this, because if the app crashes I need to enter its configuration again, so it can't just be relaunched automatically; I have to RDP into the server and start it by hand. So what I need is a monitor that sends me an email when the process has stopped. Does anyone know a program that can do that job? I couldn't find one.
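
    For what it's worth, a minimal PowerShell sketch of such a monitor; the process name, SMTP server and addresses are placeholders you would have to fill in. Scheduled every few minutes via Task Scheduler, it emails whenever the process is gone:

        # check-process.ps1 -- alert when the watched process is missing.
        # ASSUMPTIONS: "MyCriticalApp" and all mail settings are placeholders.
        $proc = Get-Process -Name "MyCriticalApp" -ErrorAction SilentlyContinue
        if (-not $proc) {
            Send-MailMessage -SmtpServer "mail.example.com" `
                -From "monitor@example.com" -To "admin@example.com" `
                -Subject "MyCriticalApp is down on $env:COMPUTERNAME" `
                -Body "Process not found at $(Get-Date). RDP in and restart it."
        }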

  • How to stop Vista changing folder views?

    - by DisgruntledGoat
    In Windows Vista, I've set the list view to apply to all folders under Folder Options. This works fine until I change the view of any one folder to something else (say, extra large thumbnails). Then suddenly every folder uses that extra large view. But if I switch that folder back to list view, it never gets applied to all folders: they're still using the extra large view. Obviously I can go to Folder Options AGAIN and apply list view to all folders AGAIN, but it makes no sense why this happens in the first place. Changing the view once applies it to all folders, but doing exactly the same thing again doesn't... is there a way around this?

  • Nagios state transition and event handler issue

    - by Dattatray
    We are using Nagios to check for duplicate processes:

        define service {
            use                  local-service
            host_name            xxx
            service_description  xxx Duplicate Processes
            check_interval       1
            max_check_attempts   1
            contact_groups       admins
            event_handler        restart-dependent-processes
            check_command        check_procs_duplicate!2!3!2!2!2
        }

    check_procs_duplicate checks whether there are any duplicate processes and returns the state, e.g. CRITICAL. The event handler kills the duplicate processes and their dependent processes, then starts one instance of the process and of each dependent process. At the end of this, Nagios again checks for duplicate processes and sets the state accordingly (OK/WARNING/CRITICAL).

    The event handler takes some time to start the processes, and if someone manually starts the process during this window, the state will remain CRITICAL. On the next interval Nagios will again check for duplicate processes and will again find CRITICAL. The event handler will not be executed now, because the previous and current states are both CRITICAL. Any pointers on how to fix this issue?
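
    One workaround worth sketching (hedged; the command-file path is the default Nagios layout and the host/service names follow the definition above, so adjust to yours): end the event handler by forcing an immediate recheck through the external command file, so a process someone fixed by hand flips the service back to OK without waiting out the CRITICAL-to-CRITICAL dead zone.

        #!/bin/sh
        # Tail end of restart-dependent-processes (sketch).
        # ASSUMPTION: default command-file location.
        CMDFILE=/usr/local/nagios/var/rw/nagios.cmd
        NOW=$(date +%s)
        printf "[%s] SCHEDULE_FORCED_SVC_CHECK;xxx;xxx Duplicate Processes;%s\n" \
            "$NOW" "$NOW" > "$CMDFILE"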

  • Using resize to getScript if above x pixels (jQuery)

    - by user1065573
    So I have an issue with this script: it basically loads the script on every resize event. I have it working in this case, where, let's say:

        - The user has a window size above 768px (getScript() fires, and content is pulled in by the .load() inside that script).
        - The user resizes below 768px for some reason (CSS hides the #div).
        - The user resizes back above 768px, and the script is NOT loaded again (CSS shows the div). Great, works right.

    Now, the broken case (for some crazy reason):

        - The user starts with a window size below 768px (nothing is fetched, correctly).
        - The user resizes above 768px and stops at any width (getScript fires and content loads).
        - The user resizes again, still above 768px, and it fires getScript AGAIN.

    I need it to fetch the script only once. I've used some code from "jQuery: How to detect window width on the fly?" (edit: I also found "Problem with function within $(window).resize - using jQuery", which describes the same load-a-script-once issue) and others I lost the links to. When I first hit this problem I was using something like the following, then added allcheckWidth() to try to solve it, but it doesn't:

        var $window = $(window);

        function checkWidth() {
            var windowsize = $window.width();
            ////// LETS GET SOME THINGS IF IS DESKTOP
            var $desktop_load = 0;
            if (windowsize >= 768) {
                if (!$desktop_load) {
                    // GET DESKTOP SCRIPT TO KEEP THINGS QUICK
                    $.getScript("js/desktop.js", function() {
                        $desktop_load = 1;
                    });
                } else { }
            }
            ////// LETS GET SOME THINGS IF IS MOBILE
            if (windowsize < 768) { }
        }
        checkWidth();

        function allcheckWidth() {
            var windowsize = $window.width();
            //// IF WINDOW SIZE LOADS < 768 THEN CHANGES >= 768 LOAD ONCE
            var $desktop_load = 0;
            if (!$desktop_load) {
                if (windowsize < 768) {
                    if ($(window).width() >= 768) {
                        $.getScript("js/desktop.js", function() {
                            $desktop_load = 1;
                        });
                    }
                }
            }
        }
        $(window).resize(allcheckWidth);

    Now I'm using something like this, which makes more sense(?):

        $(function() {
            $(window).resize(function() {
                var $desktop_load = 0;
                // Desktop
                if (window.innerWidth >= 768) {
                    if (!$desktop_load) {
                        // GET DESKTOP SCRIPT TO KEEP THINGS QUICK
                        $.getScript("js/desktop.js", function() {
                            $desktop_load + 1;
                        });
                    } else { }
                }
                if (window.innerWidth < 768) {
                    if (window.innerWidth >= 768) {
                        if (!$desktop_load) {
                            // GET DESKTOP SCRIPT TO KEEP THINGS QUICK
                            $.getScript("js/desktop.js", function() {
                                $desktop_load + 1;
                            });
                        } else { }
                    }
                }
            }).resize(); // trigger resize event
        })

    Thanks for any future response. Edit: to give an example, desktop.js has a GIF that loads before the "abc" content, which gets inserted by .load(). On resize this GIF shows up and the .load() in desktop.js fires again. So if getScript were only being called once, nothing in desktop.js should run again. Confusing?
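
    For reference, a minimal load-once sketch (same js/desktop.js path as above). Two things undo the guard in the snippets above: $desktop_load is declared inside the handler, so every resize event starts from 0 again, and $desktop_load + 1 computes a value without ever assigning it.

        // Flag lives OUTSIDE the handler so it survives between resize events.
        var desktopLoaded = false;

        function loadDesktopOnce() {
            if (!desktopLoaded && $(window).width() >= 768) {
                desktopLoaded = true;   // set before the async fetch so rapid resizes can't double-fire
                $.getScript("js/desktop.js");
            }
        }

        $(window).resize(loadDesktopOnce);
        loadDesktopOnce();              // covers a window that starts out wide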

  • MINA: Performing synchronous write requests / read responses

    - by Matt Huggins
    I'm attempting to perform a synchronous write/read in a demux-based client application with MINA 2.0 RC1, but it seems to get stuck. Here is my code:

        public boolean login(final String username, final String password) {
            // block inbound messages
            session.getConfig().setUseReadOperation(true);

            // send the login request
            final LoginRequest loginRequest = new LoginRequest(username, password);
            final WriteFuture writeFuture = session.write(loginRequest);
            writeFuture.awaitUninterruptibly();
            if (writeFuture.getException() != null) {
                session.getConfig().setUseReadOperation(true);
                return false;
            }

            // retrieve the login response
            final ReadFuture readFuture = session.read();
            readFuture.awaitUninterruptibly();
            if (readFuture.getException() != null) {
                session.getConfig().setUseReadOperation(true);
                return false;
            }

            // stop blocking inbound messages
            session.getConfig().setUseReadOperation(false);

            // determine if the login info provided was valid
            final LoginResponse loginResponse = (LoginResponse) readFuture.getMessage();
            return loginResponse.getSuccess();
        }

    I can see on the server side that the LoginRequest object is received and a LoginResponse message is sent. On the client side, the DemuxingProtocolCodecFactory receives the response, but after adding some logging I can see that the client gets stuck on the call to readFuture.awaitUninterruptibly(). I can't for the life of me figure out why it is stuck here based upon my own code. I set the read operation to true on the session config, meaning that inbound messages should be queued for synchronous reads. However, it seems as if the message no longer exists by the time I try to read response messages synchronously. Any clues as to why this won't work for me?
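
    If the timing theory holds (the handler consumes the response before the read operation is armed), one commonly used alternative is to skip read futures entirely and hand responses over from the IoHandler with a blocking queue. A sketch with illustrative names, not MINA's prescribed pattern:

        // In the client's IoHandler (MINA calls this for every decoded message).
        // Uses java.util.concurrent.{BlockingQueue, LinkedBlockingQueue, TimeUnit}.
        private final BlockingQueue<Object> responses = new LinkedBlockingQueue<Object>();

        @Override
        public void messageReceived(IoSession session, Object message) {
            responses.offer(message);
        }

        // Caller side: write the request, then block (with a timeout) for the reply.
        // InterruptedException handling omitted for brevity.
        session.write(new LoginRequest(username, password)).awaitUninterruptibly();
        LoginResponse response = (LoginResponse) responses.poll(10, TimeUnit.SECONDS);
        return response != null && response.getSuccess();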

  • How do I simulate "Is In" using Linq2Sql

    - by flipdoubt
    I often find myself with a list of disconnected Linq2Sql objects or keys that I need to re-select from a Linq2Sql data context to update or delete in the database. If this were SQL, I would use IN in the WHERE clause, but I am stuck on what to do in Linq2Sql. Here is a sample of what I would like to write:

        public void MarkValidated(IList<int> idsToValidate)
        {
            using (_Db.NewSession()) // Instantiates new DataContext
            {
                // ThatAreIn <- this is where I am stuck
                var items = _Db.Items.ThatAreIn(idsToValidate).ToList();
                foreach (var item in items)
                    item.Validated = DateTime.Now;
                _Db.SubmitChanges();
            } // Disposes of DataContext
        }

    Or:

        public void DeleteItems(IList<int> idsToDelete)
        {
            using (_Db.NewSession()) // Instantiates new DataContext
            {
                // ThatAreIn <- this is where I am stuck
                var items = _Db.Items.ThatAreIn(idsToDelete);
                _Db.Items.DeleteAllOnSubmit(items);
                _Db.SubmitChanges();
            } // Disposes of DataContext
        }

    Can I get this done in one trip to the database? If so, how? Is it possible to send all those ints to the database as a list of parameters, and is that more efficient than doing a foreach over the list to select each item one at a time?
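
    For what it's worth, LINQ to SQL translates Contains() over a local collection into a SQL IN clause, so the imagined ThatAreIn helper selects everything in one round trip. A sketch, assuming the entity's key property is called Id (adjust to the real name):

        // Sketch: ids.Contains(...) becomes WHERE Id IN (@p0, @p1, ...) in the generated SQL.
        public static class ItemQueryExtensions
        {
            public static IQueryable<Item> ThatAreIn(this IQueryable<Item> items, IList<int> ids)
            {
                return items.Where(item => ids.Contains(item.Id));
            }
        }

    One caveat: SQL Server caps the number of parameters per query (around 2,100), so very large ID lists need batching.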

  • How do I simulate "In" using Linq2Sql

    - by flipdoubt
    I often find myself with a list of disconnected Linq2Sql objects or keys that I need to re-select from a Linq2Sql data-context to update or delete in the database. If this were SQL, I would use IN in the SQL WHERE clause, but I am stuck with what to do in Linq2Sql. Here is a sample of what I would like to write: public void MarkValidated(IList<int> idsToValidate) { using(_Db.NewSession()) // Instatiates new DataContext { // ThatAreIn <- this is where I am stuck var items = _Db.Items.ThatAreIn(idsToValidate).ToList(); foreach(var item in items) item.Validated = DateTime.Now; _Db.SubmitChanges(); } // Disposes of DataContext } Or: public void DeleteItems(IList<int> idsToDelete) { using(_Db.NewSession()) // Instatiates new DataContext { // ThatAreIn <- this is where I am stuck var items = _Db.Items.ThatAreIn(idsToValidate); _Db.Items.DeleteAllOnSubmit(items); _Db.SubmitChanges(); } // Disposes of DataContext } Can I get this done in one trip to the database? If so, how? Is it possible to send all those ints to the database as a list of parameters and is that more efficient than doing a foreach over the list to select each item one at a time?

    Read the article

  • Validation Logic

    - by user2961971
    I am trying to create some validation for a form I have. There are two text boxes and two radio buttons on the form. I know my logic for this validation is a little rusty at the moment, so any suggestions would be great. Here is the code for what I have so far (keep in mind that the int errors is a field on the class).

    Start button code:

        private void btnStart_Click(object sender, EventArgs e)
        {
            errors = validateForm();

            // Here I want the user to be able to fix any errors; I am a little
            // stuck on that logic at the moment. Validate the form:
            while (errors > 0)
            {
                validateForm();
                errors = validateForm();
            }
        }

    validateForm method:

        private int validateForm()
        {
            errors = 0;

            // check the form for any unentered values
            if (txtDest.Text == "")
            {
                errors++;
            }
            if (txtExt.Text == "")
            {
                errors++;
            }
            if (validateRadioBtns() == true)
            {
                errors++;
            }
            return errors;
        }

    validateRadioBtns method:

        private Boolean validateRadioBtns()
        {
            // flag - false: selected, true: none selected
            Boolean blnFlag = false;

            // both of the radio buttons are unchecked
            if (radAll.Checked == false && radOther.Checked == false)
            {
                blnFlag = true;
            }
            // check if there is a value entered in the text box if Other is checked
            else if (radOther.Checked == true && txtExt.Text == "")
            {
                blnFlag = true;
            }
            return blnFlag;
        }

    Overall I feel like this can somehow be more streamlined, which I am fairly stuck on. Also, I am stuck on how to let the user return to the form, fix the errors, and then validate again to ensure the errors are fixed. Any suggestions would be greatly appreciated, since I know this is such a nooby question.
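
    A common restructuring, as a sketch (control names are taken from the question; MessageBox is just one way to surface the messages): validate once per click, tell the user everything that is wrong, and return. The next click re-validates, so no while loop is needed.

        // Assumes: using System.Collections.Generic; using System.Windows.Forms;
        private void btnStart_Click(object sender, EventArgs e)
        {
            var problems = new List<string>();

            if (txtDest.Text == "")
                problems.Add("Destination is required.");
            if (!radAll.Checked && !radOther.Checked)
                problems.Add("Select one of the radio buttons.");
            if (radOther.Checked && txtExt.Text == "")
                problems.Add("Extension is required when Other is selected.");

            if (problems.Count > 0)
            {
                // Show everything at once; the user fixes the form and clicks Start again.
                MessageBox.Show(string.Join(Environment.NewLine, problems), "Please fix:");
                return;
            }

            // ... all inputs are valid; do the real work here ...
        }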

  • How can I limit the cache used by copying so there is still memory available for other cache?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSUSE. Each one is 2TB. When I do this, the system runs slow.

    My guess: I believe it is likely due to caching. Linux decides to discard useful cache (e.g. KDE4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fills all available memory (24 GB total) with stuff from the disks being copied, which will be read only once, then written and never used again. So then any time I use these apps (or KDE4), the disk needs to be read again, and reading the bloat off the disk again makes things freeze and hiccup. With the cache gone and these bloated applications needing lots of cache, the system becomes horribly slow.

    Since the copy is over USB, the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache, rather than just the motherboard going too slow, because if I stop everything copying, it still runs choppily for a while until it has re-cached everything, and if I restart the copying, it takes a minute before it is choppy again. Also, I can limit the copy to around 40 MB/s and it runs faster again (not because it has the right things cached, but because the motherboard buses have lots of spare bandwidth for the system disks). I can fully accept a performance loss when my motherboard's I/O capability is completely consumed (100% used means 0% wasted capacity, which makes me happy), but I can't accept that this caching mechanism performs so terribly in this specific use case.

        # free
                     total       used       free     shared    buffers     cached
        Mem:      24731556   24531876     199680          0    8834056   12998916
        -/+ buffers/cache:    2698904   22032652
        Swap:      4194300      24764    4169536

    I also tried the same thing on Ubuntu, which causes a total system hang instead. ;) And to clarify, I am not asking how to leave memory free for the "system", but for "cache": I know cache memory is automatically given back to the system when needed, but my problem is that it is not reserved for caching specific things.

    Question: Is there some way to tell these copy operations to limit memory usage so some important things remain cached, and therefore any slowdowns are a result of normal disk usage and not of rereading the same commonly used files? For example, is there a setting for the maximum memory per process/user/file system allowed to be used as cache/buffers?
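
    Two knobs that match what is being asked, as a sketch (the numbers are illustrative starting points, not recommendations, and the paths are placeholders): cap how much dirty write-back data the kernel may buffer, and do the copy with O_DIRECT so it bypasses the page cache instead of evicting the warm application files.

        # Cap dirty (not-yet-written) data so a big copy can't occupy gigabytes of cache:
        sudo sysctl -w vm.dirty_background_bytes=$((64*1024*1024))   # start flushing at 64 MB
        sudo sysctl -w vm.dirty_bytes=$((256*1024*1024))             # hard limit at 256 MB

        # Copy around the page cache entirely (O_DIRECT needs an aligned block size):
        dd if=/mnt/src/disk.img of=/mnt/dst/disk.img bs=4M iflag=direct oflag=direct

    There is no simple per-process cache quota in a stock kernel; the memory cgroup controller is the closest thing, since it accounts page cache against a group's limit.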

  • How would you implement this "WorkerChain" functionality in .NET?

    - by Dan Tao
    Sorry for the vague question title; I'm not sure how to encapsulate what I'm asking below succinctly. (If someone with editing privileges can think of a more descriptive title, feel free to change it.)

    The behavior I need is this. I am envisioning a worker class that accepts a single delegate task in its constructor (for simplicity, I would make it immutable: no more tasks can be added after instantiation). I'll call this task T. The class should have a simple method, something like GetToWork, that exhibits this behavior:

        1. If the worker is not currently running T, it starts doing so right now.
        2. If the worker is currently running T, then once it finishes, it starts T again immediately.

    GetToWork can be called any number of times while the worker is running T; the simple rule is that, during any execution of T, if GetToWork was called at least once, T will run again upon completion (and then, if GetToWork is called while T is running that time, it will repeat itself again, and so on). Now, this is pretty straightforward with a boolean switch, but this class needs to be thread-safe, by which I mean that steps 1 and 2 above need to comprise atomic operations (at least I think they do).

    There is an added layer of complexity. I need a "worker chain" class consisting of many of these workers linked together. As soon as the first worker completes, it essentially calls GetToWork on the worker after it; meanwhile, if its own GetToWork has been called, it restarts itself as well. Logically, calling GetToWork on the chain is essentially the same as calling GetToWork on the first worker in the chain (I fully intend that the chain's workers not be publicly accessible).

    One way to imagine how this hypothetical worker chain behaves is to compare it to a team in a relay race. Suppose there are four runners, W1 through W4, and let the chain be called C. If I call C.StartWork(), this should happen:

        - If W1 is at his starting point (i.e., doing nothing), he starts running towards W2.
        - If W1 is already running towards W2 (i.e., executing his task), then once he reaches W2, he signals W2 to get started, immediately returns to his starting point and, since StartWork has been called, starts running towards W2 again.
        - When W1 reaches W2's starting point, he immediately returns to his own starting point. If W2 is just sitting around, he starts running immediately towards W3. If W2 is already off running towards W3, W2 simply goes again once he has reached W3 and returned to his starting point.

    The above is probably a little convoluted and written out poorly, but hopefully you get the basic idea. Obviously, these workers will be running on their own threads. Also, I guess it's possible this functionality already exists somewhere? If that's the case, definitely let me know!
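
    I'm not aware of a ready-made type for exactly this, so here is a minimal sketch of the single-worker core under my reading of the rules (states: 0 = idle, 1 = running, 2 = running with one rerun queued). Chaining then just means making the task's last step call GetToWork() on the next worker.

        using System;
        using System.Threading;
        using System.Threading.Tasks;   // Task.Run is .NET 4.5; use ThreadPool.QueueUserWorkItem on older frameworks

        public sealed class Worker
        {
            private readonly Action _task;
            private int _state;   // 0 = idle, 1 = running, 2 = running + rerun queued

            public Worker(Action task) { _task = task; }

            public void GetToWork()
            {
                while (true)
                {
                    int seen = _state;
                    int want = seen == 0 ? 1 : 2;   // idle -> start; running -> queue one rerun
                    if (Interlocked.CompareExchange(ref _state, want, seen) == seen)
                    {
                        if (seen == 0)
                            Task.Run(RunLoop);      // we won the idle -> running transition
                        return;
                    }
                }
            }

            private void RunLoop()
            {
                // Decrement moves 2 -> 1 (requests arrived mid-run: go again) or 1 -> 0 (done).
                do { _task(); } while (Interlocked.Decrement(ref _state) != 0);
            }
        }

    The CAS loop makes the "poke while running" case collapse any number of calls into exactly one rerun, which is the atomicity the question is after.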

  • Getting caught in loops - R

    - by user334898
    I am looking at whether or not certain "systems" for betting really do work as claimed, namely that they have a positive expectation. One such system is based on a rebate on losses. You have a large master pot, say $1 million, and your bankroll for each game is $50k. The way it works is as follows:

        1. Start with $50k; always bet on banker.
        2. If you win, add the money to the master pot, then play again with $50k.
        3. If you lose (say you're now at $30k), play until you either:
           (a) hit 0, in which case you get a rebate of 10% and begin playing again with $50k + $5k = $55k; or
           (b) win more than the initial bankroll, in which case you add the money to the master pot and play again with $50k.
        4. Continue until you double the master pot.

    I just can't find an easy way of programming out the possible cases in R, since you can eventually go down an improbable path. For example, you start at 50k, lose 20, win 19, so now you're at 49; now you lose 20, lose 20, and you're at 9; you either lose the 9 and get back 5k, or you win, and this cycle continues until you either end up with more than 50k or hit 0 and get the rebate and start again with $50k + $5k.

    Here's some code I started, but I haven't figured out a good way of handling the cases where you get stuck, or of keeping track of the number of games played. Thanks again for your help; obviously, I understand you may be busy and not have time.

        p.loss <- .4462466
        p.win  <- .4585974
        p.tie  <- 1 - (p.win + p.loss)
        prob   <- c(p.win, p.tie, p.loss)
        bet  <- 20
        x    <- c(19, 0, -20)
        r    <- 10   # rebate = 10%
        br.i <- 50
        br   <- 200

        # for(i in 1:100) {
        #   cbr.i <- 0
        y <- sample(x, 1, replace = T, prob)
        cbr.i <- y + br.i
        if (cbr.i > br.i) {
            br <- br + (cbr.i - br.i)
            cbr.i <- br.i
        } else {
            y <- sample(x, 2, replace = T, prob)
            if (sum(y) < cbr.i) {
                cbr.i <- br.i + (1/r) * br.i
                br <- br - br.i
            }
            cbr.i <- y +      # (left incomplete in the original)
        } else {
            cbr.i <- sum(y) + cbr.i
        }
        if (cbr.i <= bet) {
            y <- sample(x, 1, replace = T, prob)
            if (abs(y) > cbr.i) {
                cbr.i <- br.i + (1/r) * br.i
            }
        }
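
    A self-contained restructuring sketch under one reading of the rules above (units follow the question's code, i.e. thousands of dollars; the 10% rebate is credited to the master pot; a bankroll that cannot cover a bet counts as busting out; a master pot smaller than one session is not handled), with an explicit game counter:

        p.win  <- 0.4585974
        p.loss <- 0.4462466
        p.tie  <- 1 - (p.win + p.loss)
        probs    <- c(p.win, p.tie, p.loss)
        outcomes <- c(19, 0, -20)                  # payoff of one banker bet

        play.system <- function(master = 1000, session = 50, rebate = 0.10) {
          games  <- 0
          target <- 2 * master
          while (master > 0 && master < target) {
            master <- master - session             # stake one session bankroll
            bank   <- session
            repeat {
              if (bank < 20) { bank <- 0; break }  # cannot cover a bet: bust
              bank  <- bank + sample(outcomes, 1, prob = probs)
              games <- games + 1
              if (bank > session) break            # ahead of the buy-in: bank it
            }
            master <- master + if (bank == 0) rebate * session else bank
          }
          list(master = master, games = games, doubled = master >= target)
        }

        set.seed(42)
        play.system()

    Replicating play.system() many times would then give an empirical answer to the expectation question.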

  • Installing .NET Framework 4 Client Profile breaks Windows Update

    - by Richard
    I have a Samsung NC-10 netbook with a fresh install of Windows 7 Home Premium 32-bit (it only has 2GB of RAM). If Microsoft .NET Framework 4 Client Profile is installed on it, Windows Update will always return error code 8024402F ("Windows Update encountered an unknown error"). As soon as I uninstall it, Windows Update works just fine again. Out of the four computers in this house, only this netbook has the problem. My question is: how can I get the .NET Framework 4 Client Profile installed on my netbook and continue to have a functioning Windows Update?

    ----- More information -----

    The hard drive recently died on my netbook, so I replaced it with a nice new SSD and did a fresh installation of Windows 7 Home Premium (SP1), along with all the updates. At some point I found that, when I ran Windows Update, I was greeted with error code 8024402F ("Windows Update encountered an unknown error"). Looking in C:\Windows\WindowsUpdate.log, I saw the following issue:

        WARNING: ECP: Failed to validate cab file digest downloaded from http://download.windowsupdate.com/msdownload/update/software/dflt/2012/02/4913552_4a5c9563d1f58c77f30d0d5c9999e4b8bff3ab21.cab with error 0x80091007
        WARNING: ECP: This roundtrip contained some optimized updates which failed. New Update count = 0, Old Count = 3
        FATAL: ProcessCoreMetadata did not return any update to be committed
        WARNING: Sync of Updates: 0x8024402f
        WARNING: SyncServerUpdatesInternal failed: 0x8024402f

    When I downloaded the CAB from the URL listed and opened it, it contained a file called 4913552.txt. A search on Google suggested that it's related to Microsoft .NET Framework 4 Client Profile. Other people had reported problems with it breaking Windows Update, but they were running Windows XP. I tried the steps outlined on the Microsoft site for this error code, but it reported that there was nothing wrong. I also found a superuser question about this error and tried all the answers listed, but none of them made any difference: my router doesn't block ActiveX, changing my internet settings in IE made no difference, assuming it was a corrupted update repository didn't do anything (except wipe my update history), my date and time were correct, switching to Google's DNS didn't work, and neither did disabling IPv6.

    Figuring that this update was corrupted, I repaired it and nothing changed. In desperation I uninstalled it, and Windows Update started working again! Brilliant! I then downloaded the full version from the Microsoft website, installed it and, thankfully, Windows Update continued to work just fine. A week later I turned on my netbook and Windows Update was broken again, with exactly the same error message and log entries as before. Repairing .NET Framework 4 Client Profile did nothing; removing it entirely solved the problem again. Thinking this might be some odd Windows installation issue, I formatted the hard drive and reinstalled Windows. Same problem as before: as soon as .NET Framework 4 Client Profile ended up on the netbook, Windows Update stopped working and reported error 8024402F, and as soon as it was uninstalled, everything worked just fine again.

    There are three other machines in this house, all of which have working Windows Update and this Client Profile. Does anyone know why it breaks this netbook and, more importantly, how I can fix it?

  • Strange behaviour from a hard drive after a PCB swap

    - by ramyy
    I've had a PCB swap done on my HDD. The HDD model is WD6400AAKS-00A7B2. The original PCB part number matches the new one (the first three letter groups), though the cache size differs (16MB original, 8MB new). The hardware store that did the swap told me it was difficult and that they had to do a firmware adaptation; I can see that the firmware version no longer matches the original (01.03B01 original, 05.04E05 new). Still, the serial number and model of the drive are reported correctly, the drive appears normal in the BIOS, all the partitions show up and everything appears normal. I have encountered three things, though.

    First: I left the drive unpowered for 2-3 weeks after the swap, to avoid corrupting the data or anything else the new PCB might cause, until I could buy a new drive and back up the data. When I got the new drive and powered the old one manually (I have a laptop, so I use a normal desktop power supply and a USB/SATA connector), I heard the motor start and then a ticking, as if the motor was somehow struggling to start; then the motor sound would start again, then the ticking, and so on. I tried powering it again and it happened again. The third time it started normally and I could see everything as usual, so I took the chance and copied all the data over to the new drive. When I was done (after more than 25 hours of continuous operation), I powered the drive off, tried powering it up again, and it started normally, as it has every time I have powered it up since; but I have become very suspicious now. What could the problem be here? And what changed, given that it used to power up normally right after the swap?

    Second: I found size differences with some files, including movies, songs, .iso files for games, and programs. The size is the same, but the size on disk is slightly larger for these files on the new drive. I've tried some of the files with size differences and they work fine. There are not too many of them, but it still makes you suspicious about the integrity of the copied data; one cannot test whether every file works across about 580 GB of data. If I copy these files onto the same partition they already live on on the old drive, they end up the same size as when copied to the new drive (so allocation unit size is not the issue). I also took an image of a partition (sector by sector, including empty sectors); when I explore the image, the file sizes equal the originals on the old drive, but if I copy them anywhere else, their size on disk increases, i.e. it becomes equal to what I get when copying from the old drive itself to anywhere. Why the size difference, and can I trust the integrity of the data?

    Third: when I connect my new external USB HDD, the partitions of the old HDD unmount and then mount again. Connected are: a USB mouse plus the old HDD, then the external HDD. Why does that happen?

    Consider the following: I compared the SMART reports from right after the swap and from after the copying; no error readings or reallocated sectors were reported. Here they are: http://www.image-share.com/ijpg-1939-219.html. I later ran both WD Data Lifeguard tests and the drive passed. I'm worried about this drive, since I must be sure the data is fine and safe on the new one, and I will consider the old drive a backup for the new one, since you can't trust anything anymore. I hope you can forgive the length of this post, but I couldn't leave out any of the details; this hard drive contains very important data and I have to handle the situation with great care.

  • Test and Report Add-on Compatibility in Firefox

    - by Asian Angel
    Now that the new version of Firefox is out, you probably have a favorite extension or two that has not been updated yet. You can get that extension working again, test it, and report back to Mozilla on how well it does, with the Add-on Compatibility Reporter extension.

    Before: for our example we chose a great extension that unfortunately has not been updated yet. As you can see here, Firefox refuses to let the extension install.

    After: as soon as you install Add-on Compatibility Reporter you will be presented with an information page on how the extension works and what you can do with it. You should definitely take a moment to read it, as it is very helpful. After trying our non-compatible extension again, we were able to proceed with the install process. Notice at the bottom that compatibility checking has been overridden. Success! As soon as we restarted our browser it was easy to see the "non-compatible" icon in the Add-ons Manager window... but the extension did install (terrific!). Clicking on the extension's entry reveals a new button in the lower right corner. Using the compatibility drop-down menu, you can report whether the extension is working as well as before or is actually having problems. The extension we used for our example had no problems whatsoever, so good news there. Whichever option you choose, you will be presented with a small report window showing information about the extension, your browser's version number, and your operating system. Click "Submit Report" to send it on its way, and you will see a confirmation message letting you know that your report was successfully submitted. While the extension itself has not been altered in any form, at least you have it working again and have helped verify whether it still works well or not. Notice the notation now present in place of the compatibility button, which lets you know that you have already taken care of that particular extension. Looking great...

    Conclusion: if you have a favorite extension that you miss using in the newest release of Firefox, then this is definitely an extension to add to your browser. Not only will your extension start working again, but you can let Mozilla know how well it is working and (hopefully) help get the extension updated.

    Links: Download the Add-on Compatibility Reporter extension (Mozilla Add-ons)

  • Is SugarCRM really adequate for custom development (or adequate at all)? [closed]

    - by dukeofgaming
    Have you used SugarCRM for custom development successfully? If so, did you work programmatically or through the Module Builder? Were you successful? If not, why not?

    I used SugarCRM for a project about two years ago. I ran into errors from the very installation, having to hack the actual installation file to deploy the software on the server, plus other errors that I can't recall now. Two years later, I'm picking it up for a project once again, and I'm feeling like I should have developed the whole thing from scratch myself. Some examples:

        - I couldn't install it on the server (again). I had to install it locally, then copy the files and database over to the server and manually edit the config file.
        - I constantly get deployment errors from the Module Builder. One reason is that SugarCRM keeps creating a record in the upgrade_history table for a file that does not exist; I keep deleting that record and it keeps coming back corrupt. I get other deployment errors that I have not figured out, and then I have to roll back all files and the database to try again.
        - I deleted a custom module with relationships; the relationships stayed in the other modules and cannot be deleted anymore, with PHP warnings all over the place.
        - Quick Create for custom modules does not appear; a hack is needed.
        - Its whole cache directory is a joke: permanent data/files are stored there.
        - The Module Builder interface loses required fields. Edit the wrong thing and the Module Builder won't deploy again; then pray that Quick Repair and/or Rebuild Relationships do the trick.

    My impression of SugarCRM now is that, regardless of its pretty exterior and apparent functionality, it is a very low-quality piece of software. This scared me even more: http://amplicate.com/hate/sugarcrm; a quote:

        "I wish this info had been available when I tried to implement it 2 years ago... I searched high and low and the only info I found was positive. Yes, it's a piece of crap. The community edition was full of bugs... nothing worked. Essentially I got fired for implementing it. I'm glad though, because now I work for myself, am much happier and make more money... so, I should really thank SugarCRM for sucking so much I guess!"

    I figured that perhaps some of you have had similar experiences and have either stuck with SugarCRM or moved on to another solution. I'm very interested in knowing what your resolutions were, or what your current situations are, to make up my own mind, since the project I'm working on is long-term and I'm feeling SugarCRM will be more an obstacle than an aid.

    Update: after further failed attempts to use this software, I continued to stumble upon dead ends in the module editor; I could only recover from these errors by using version control. We are now moving on to a custom implementation using Symfony; perhaps if we had been using SugarCRM only with its out-of-the-box modules, we would have stuck with it.

  • How to move complete SharePoint Server 2007 from one box to another

    - by DipeshBhanani
    It was time for my first onsite client assignment on SharePoint. The client had a one-server production environment and wanted to upgrade the topology to a completely new SharePoint farm of three servers. So the task was to move the whole MOSS 2007 installation to the new server environment without impacting data. Those last three scary words, "without impacting data", were really putting pressure on my head. Moreover, the SSP had to move too, because additional information had been added for users beyond the AD import.

    I thought I would only have to do a backup and restore; it seemed pretty easy at first. Just because of those scary words, though, I checked the internet for guidance on this scenario, and couldn't find anything except the general guidance for moving servers on the Microsoft TechNet site. I promised myself I would start blogging with this post if I succeeded in the task. Well, it took me a long time to write this, but I finally made it; I hope it will be useful to everyone looking to move a SharePoint server.

    Before beginning restoration, make sure that there is no difference between the SharePoint versions on the source and destination servers. Also check that the state of the SharePoint installation at the time of backup and restore is the same (e.g. SharePoint-related service packs and patches, if any).

    The main tasks of the server move are as follows:

        1. Back up all the databases.
        2. Install and configure SharePoint in the new environment.
        3. Deploy all solutions (WSP files) globally to the destination server, to install the features attached to the solutions.
        4. Install all the custom features.
        5. Deploy/copy custom pages/files which were added to the "12 hive" folder later.
        6. Restore the SSP.
        7. Restore My Site.
        8. Restore the other web applications.

    Tasks 3 to 5 make sure that the environment is configured well enough for the web applications to be restored successfully. The main and most complex task was restoring the SSP. I started restoring the SSP through Central Admin. After a while, the restoration status was updated to "unsuccessful". "Damn it, what went wrong?" I thought, looking at the error detail down the page. I can't remember the error message now, but I corrected it and restored again.

    Actually, once an SSP restore fails, your restoration will fail again and again until you clean up everything related to it properly. I wanted to find the actual reason, so I cleaned, restored, cleaned, restored... I tried almost 5-6 times and finally succeeded. I had not realized how pleasant it is to see the word "Successful" on the screen. Without wasting more of your reading time, here are the detailed steps for restoring the SSP:

        1. Delete the SSP with the following STSADM command:

               stsadm -o deletessp -title <SSP name> -deletedatabases -force
               e.g.: stsadm -o deletessp -title SharedServices1 -deletedatabases -force

        2. Check for and delete the web application associated with the SSP, if it exists.
        3. Check for and remove any alternate access mapping associated with the SSP, if it exists.
        4. Check for and delete the IIS site as well as the application pool associated with the SSP, if they exist.
        5. Stop the following services:
           - Office SharePoint Server Search
           - Windows SharePoint Services Search
           - Windows SharePoint Services Help Search
        6. Delete all the databases associated with or related to the SSP from SQL Server.
        7. Reset IIS.
        8. Start the three services from step 5 again.
        9. Restore the new SSP.

    After the SSP restoration, everything else completed very smoothly without any more issues. I made a few modifications to the sites for the change of server name and, finally, the new environment was ready.
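
    For the plain content (tasks 1 and 8), the site-collection-level backup/restore pair looks like this; a sketch with placeholder URLs and paths, and for very large content databases a SQL-level backup plus stsadm -o addcontentdb is usually preferred:

        REM Back up a site collection on the old farm:
        stsadm -o backup -url http://oldserver/sites/team -filename \\share\team.bak

        REM Restore it into the new farm:
        stsadm -o restore -url http://newserver/sites/team -filename \\share\team.bak -overwrite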

  • Video works with 'Try me' but not after install. What is the difference? (Ubuntu 12.04 LTS)

    - by HarveyP
    My hard drive got corrupted, so I did a reinstall. I tested YouTube in Firefox during the "try me" session and it worked: jerky, but it worked. I installed without the updates (576 outstanding now) in order to get Firefox installed as per the demo, to no avail. In "try me" mode Firefox NEVER crashed; after install, Firefox crashed while I was typing "youtube" into the address field. When I finally got to YouTube: no video. What is the difference between Firefox in "try me" and Firefox after install? Off to try some selected updates now to see if I can work it out for myself.

    In my previous installation I had several profiles, and aliased firefox with the -safe-mode switch to simplify starting the most stable Firefox. I also found that Firefox startup in graphics mode worked better (but still without video) with all of the extensions disabled and all of the plugins set to "ask" and always denied. I have a SiS graphics card on a SiS motherboard (XP era) and an ancient Hyundai ImageQuest QV770 monitor. I am running Ubuntu 12.04.01 LTS, one day after install, with only the immediate upgrades requested for the language pack (English UK), using the FR alternative keyboard, connected via domestic WiFi from Orange (FT). I really want to use Skype, but won't bother installing it (again) without video, as I can do my messaging on Facebook, at least whilst Firefox hasn't crashed...

    Update: is something overflowing? I have just had to reboot in order to get Firefox to restart in any way, shape or form; the restart-on-crash form generates a new crash form, and so on. It was a good half hour before it crashed, though, so some improvement over conditions before the disk corruption. I have now installed all of the critical updates (332 recommended updates still outstanding), which included some relating to Firefox. Still no video. Still crashing, especially on the Grepolis website. Since the reinstall I have had a lovely 1024x768 screen, but after the last Firefox crash and reboot I got a message about "low graphics mode" and "setting things myself". I was not sufficiently tuned in at the time to take proper note; I have no doubt I shall see it again and shall report accordingly. I still have only laptop options for my screen and do not know how to rectify this.

    I spent a few days with Ubuntu on a different, newer machine, which has now suffered a graphics breakdown, so I have returned to this old one, with a new flat-screen monitor. I found SiS drivers for my graphics card, BUT they are intended for Red Hat 7.2. I chose that version over the one for 7.0 because I thought, what the hell, I might not be able to do anything with either of them, but this is the later one... The file will not open with the software manager. I found a similar problem described on Overclock, but it has not helped me install this driver. The file name is sis_drv.o-410 and it is currently idling away in my Downloads folder. I have tried the solution offered for another SiS problem, but it shows that my xserver-xorg-video-sis driver is up to date. I am now at a loss as to how to proceed if I can't install the latest SiS driver from SiS... Does nobody know how Firefox changes from "try me" to "installed"? Any time I MUST have video, I boot from the disc again, but this is tedious! Also, one of the things I mock most about MS is the constant rebooting...

    Update 10/6/2014: I have installed chromium-browser: worse, it crashes even more often than Firefox. I have installed Epiphany: better, video works, but not the associated soundtrack. Firefox is version 14.01 in "try me" and version 29.0 in my install. Would it be useful to downgrade Firefox in order to get video?

  • Cannot find protocol declaration in Xcode

    - by edie
    Hi. I experienced something today while building my app. I added a protocol to my object and assigned a delegate object to it, adding the protocol to the object that will implement the protocol's methods. I added it in the usual way:

        @interface MyObject : UIViewController <NameOfDelegate>

    But Xcode says that the protocol declaration cannot be found. I've checked my code, and I have declared this protocol. I then tried to assign MyObject as the delegate of another object, editing my code like this:

        @interface MyObject : UIViewController <UITableViewDelegate, NameOfDelegate>

    but Xcode again says it cannot find the protocol declaration for NameOfDelegate. I tried deleting NameOfDelegate from my code and just assigning MyObject as the delegate of the other objects, so it went like this:

        @interface MyObject : UIViewController <UITableViewDelegate, UITabBarDelegate>

    No errors were found. Then I tried again to add my NameOfDelegate to the code:

        @interface MyObject : UIViewController <UITableViewDelegate, UITabBarDelegate, NameOfDelegate>

    This time Xcode did not find any error in my code. So I tried removing UITableViewDelegate and UITabBarDelegate again:

        @interface MyObject : UIViewController <NameOfDelegate>

    This time no error was found, even though it is the same code I had written before. What could be the cause of this behaviour in my code? Thanks...
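
    For what it's worth, an on-again, off-again "cannot find protocol declaration" is the classic sign of a circular #import between two headers. A sketch of the usual cure, with file and method names assumed from the question: let the header that defines NameOfDelegate forward-declare the class instead of importing it back.

        // SomeOtherObject.h -- defines the protocol (assumed file name)
        @class MyObject;                     // forward declaration, NOT #import "MyObject.h"

        @protocol NameOfDelegate <NSObject>
        - (void)someOtherObjectDidFinish;    // illustrative method
        @end

        // MyObject.h -- can now safely import the protocol's header
        #import <UIKit/UIKit.h>
        #import "SomeOtherObject.h"

        @interface MyObject : UIViewController <UITableViewDelegate, NameOfDelegate>
        @end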

  • About the lifecycle of activities

    - by janfsd
    In my application I have several activities. The main screen has four buttons that each start a different activity. One of them is a search activity; once it searches, it shows you a result activity. This result activity can be reached from other activities as well, so in general something like this:

        Main activity -> Search activity -> Result activity
        Main activity -> some other activity -> Result activity

    Now, if I have reached this result activity, press back once or twice, and after that press the Home key, it shows the home screen. But if I then return to my application by holding the Home button and clicking on my app, it always goes back to the Result activity, no matter which activity was the last one I was using. And if I press back again, it takes me back to the home screen. If I try it again, it again takes me to the Result activity. The only way to avoid this is to start the application by clicking on the app's icon: that takes me to the last activity I was using, and it remembers the state, so pressing back doesn't take me to the home screen but to the activity before it. To illustrate:

        Main activity -> Search activity -> Result activity
        --back--> Search activity
        --Home button--> home screen
        --hold Home and select the app--> Result activity
        --back--> home screen
        --click application icon--> Search activity
        --back--> Main activity

    Another thing that happens: if I press the Home button while on the Result activity and then start the app by clicking its icon, it takes me to the activity prior to the Result one. Why is this happening? Any workarounds?
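
    This sounds like the old Android quirk where launching from certain entry points (the installer's Open button, some launcher shortcuts) stacks a fresh copy of the entry activity on top of the existing task instead of simply resuming it. A commonly cited guard, sketched below for the main activity (whether it fits depends on the navigation you want; the layout name is illustrative):

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            // If a task already exists, this launch is a duplicate entry point:
            // finish immediately and let the existing task come forward as-is.
            if (!isTaskRoot()) {
                finish();
                return;
            }

            setContentView(R.layout.main);
        }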

  • How to delete a tape from the ACSLS library

    - by Senthil Kumar
    Hi. Can anyone help me out with the following issue? There was a stuck tape in a drive, and the stuck tape was removed physically. Now I need to logically delete the tape's entry from NetBackup so that the same media can be inserted back for operations. NetBackup thinks the tape is still in that location, so the entry should be removed; once NetBackup no longer recognises a tape in that slot, the same tape can be taken back in through an inventory. The version is NetBackup 5.1, in a clustered (active/passive) environment, and we use an ACSLS library (physical) as well as a Switch SN6000 (logical). Is there any command to delete this entry? When we tried to delete the media from the GUI, it said: Could not delete - Cannot delete assigned volume (92).
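
    From memory of the Media Manager CLI, a sketch (double-check against the NetBackup 5.1 documentation before running, and the media ID A00001 is a placeholder): the "assigned volume (92)" error means the images on the tape have to be expired before the volume record can be deleted.

        # Expire the backup images on the media (this deletes NetBackup's
        # record of those backups, so be certain first):
        bpexpdate -m A00001 -d 0

        # Then delete the volume record so a robot inventory can re-add it:
        vmdelete -m A00001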

  • How to permanently remove xcuserdata under the project.xcworkspace and resolve uncommitted changes

    - by JeffB6688
    I am struggling with a merge conflict (see "Cannot merge due to conflict with UserInterfaceState.xcuserstate"). Based on feedback, I needed to remove UserInterfaceState.xcuserstate using git rm. After considerable experimentation, I was able to remove the file with:

        git rm -rf project.xcworkspace/xcuserdata

    While I was on the branch I was working on, it almost immediately came back as a file that needed to be committed. So I ran git rm on the file again and just switched back to master. Then I performed a git rm on the file again, and the operation again removed the file. But I am still stuck: if I try to merge the branch into the master branch, it again says that I have uncommitted changes. So I go to commit the change, but this time it shows UserInterfaceState.xcuserstate as the file to commit, with its checkbox unchecked and impossible to check. So I can't move forward. Is there a way to use git rm to permanently remove xcuserdata under project.xcworkspace? Help! Any ideas?
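
    A sketch of the usual permanent fix, run from the repository root (paths follow the question): ignore the per-user state directory going forward, then drop the already-tracked copies from the index only, so the working files stay on disk.

        # 1. Ignore Xcode's per-user state from now on:
        echo "xcuserdata/" >> .gitignore

        # 2. Untrack the copies git already knows about (files remain on disk):
        git rm -r --cached project.xcworkspace/xcuserdata

        # 3. Commit both changes together:
        git add .gitignore
        git commit -m "Stop tracking xcuserdata"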

  • VMWare steals IP addresses [closed]

    - by Ishan Amin
    I'm having a peculiar problem that I think I have narrowed down to VMware. For the past year, every once in a while we lose our internet connection, and not all users (about 10) go down at the same time; it's usually one by one. First someone calls me and says "the internet is down", then we reset the router, modem and switch and everything works again for a while, then goes down again without any pattern or reproducible sequence. We then repeat the steps to get everyone in the office running again. We called our internet service provider and they constantly say: we see your modem and we see your router, and from their end everything is OK. We have replaced our router, switch and modem, twice!

    Last Friday it dawned on me that every time we power on a VMware machine, this sequence of taking everyone down starts, which also explains the "IP conflict found" message my users get. We do a lot of VMware testing, and lo and behold, starting the VMs takes the internet down. My Yahoo and Gtalk sessions continue working, but the web is down when the VMware machines are started. I use bridged networking for all the VMware machines, but I don't know what else to set it to. Sorry for the long rambling, but does anyone have any clue on how to stop this? Thanks, IA
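
    If the virtual machines don't need to appear as first-class devices on the office LAN, one quick experiment (a sketch; ethernet0 is the VM's first adapter, and the VM must be powered off while editing) is switching them from bridged to NAT so they stop competing for addresses on the physical network:

        # In each VM's .vmx configuration file:
        ethernet0.connectionType = "nat"    # previously "bridged"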

  • .htaccess www redirect in a parent folder that applies to its child folders

    - by ServerChecker
    We were having a problem with the Norton seal not showing up on our affiliate-marketing landing pages. It turns out the Norton seal is super picky about the "www." prefix. I had folder paths like /lp/cmpX, where X is a number 1-100 indicating the advertising campaign. So, to remedy this, I stuck this in my .htaccess file right after the RewriteEngine On line:

        RewriteCond %{HTTP_HOST} ^example\.com
        RewriteRule ^(.*)$ http://www.example.com/lp/cmp1/$1 [R=302,QSA,NC,L]

    The trouble is, I had to do that under every campaign folder, changing cmp1 to the folder's own name each time. My question, therefore, is: can I do this with one .htaccess file under the parent folder (/lp in this case) that will work for each of the children?

    EDIT: note that I just put an .htaccess file in /lp to test:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example\.com
        RewriteRule ^(.*)$ http://www.example.com/lp/$1 [R=302,QSA,NC,L]

    This had no effect on the /lp/cmpX folders underneath, to my dismay.
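
    A sketch of a single parent-level file (example.com standing in for the real domain, as above): matching the host only when the www. prefix is missing and rebuilding the target from %{REQUEST_URI}, which already carries the full /lp/cmpX/... path, removes the need for any per-campaign rules.

        # /lp/.htaccess -- one redirect covering every campaign folder beneath /lp
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^ http://www.example.com%{REQUEST_URI} [R=302,L]

    Query strings are passed through automatically on a redirect whose target has none, so QSA isn't needed here.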
