Search Results

Search found 8381 results on 336 pages for 'bad neighbor'.

  • On installing nvidia drivers on 12.10 I get "Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)"

    - by james
    New Ubuntu user here - I recently made the mistake of trying a different nvidia driver. I'd managed to get the previous one (nvidia-current) working through Software Sources a few weeks ago. The other day I tried to switch over to nvidia-experimental-310, and this produced a system error. Swapping back and forth between the proprietary drivers now always causes an error, and I can't get any of them to work. Installing through the terminal, I get this error message every time:

        Building initial module for 3.5.0-19-generic
        Error! Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)
        Consult /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log for more information

    On rebooting, I end up with a crappy screen resolution and a thick black border around the screen. I use gksudo software-properties-gtk to bring up Software Sources, where I can change back to the nouveau driver, which restores my screen. After that I can't find /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log, so I can't tell you what's inside. Any ideas what might be preventing the nvidia driver from installing?

    SOLUTION FOUND: Okay - so I have a workaround. This is what worked:

    1. Upgrade to kernel 3.7.0, as detailed here.
    2. Upgrade to the latest version of the nvidia drivers, as detailed here.

    No idea what was happening with kernel 3.5.0-19, but this seems better. Boot is a little slower, maybe, but after days of messing around it's nice to have something that works.
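    One note on the missing log: switching back to nouveau removes the failed package's DKMS build tree along with it, which is probably why make.log vanished. A minimal sketch (assuming only the standard /var/lib/dkms layout implied by the error message above) of how to hunt for any surviving build logs and show where they fail:

        import glob

        # DKMS keeps one build tree per module/version; make.log inside it
        # records why a module build failed. The path pattern is an assumption
        # based on the path quoted in the error message.
        logs = glob.glob("/var/lib/dkms/*/*/build/make.log")
        if not logs:
            print("No DKMS build logs found; the failing package was likely removed.")
        for path in logs:
            print("==", path, "==")
            with open(path, errors="replace") as f:
                lines = f.readlines()
            # Compiler errors usually appear near the end of the log.
            print("".join(lines[-20:]))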

  • Video quality too bad while playing (any) videos on the Intel GM965/GL960 Integrated Graphics Controller, Ubuntu 12.04

    - by Sukhdev
    I have searched blogs and forums and installed several drivers, but can't find a solution that provides video quality equivalent to that of Windows 7. Kindly help. Video quality, especially color, is too bad when playing with any media player. Configuration details:

    - Ubuntu 12.04
    - Intel Corporation Mobile GM965/GL960 Integrated

    The results of the following commands are:

    a) sudo lspci | grep VGA

        00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (primary) (rev 0c)

    b) find /dev -group video

        /dev/fb0
        /dev/dri/card0
        /dev/dri/controlD64
        /dev/agpgart

    c) glxinfo | grep -i vendor

        server glx vendor string: SGI
        client glx vendor string: ATI
        OpenGL vendor string: Tungsten Graphics, Inc

    d) sudo lshw -C video

        *-display:0
             description: VGA compatible controller
             product: Mobile GM965/GL960 Integrated Graphics Controller (primary)
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 0c
             width: 64 bits
             clock: 33MHz
             capabilities: msi pm vga_controller bus_master cap_list rom
             configuration: driver=i915 latency=0
             resources: irq:44 memory:fea00000-feafffff memory:e0000000-efffffff ioport:efe8(size=8)
        *-display:1 UNCLAIMED
             description: Display controller
             product: Mobile GM965/GL960 Integrated Graphics Controller (secondary)
             vendor: Intel Corporation
             physical id: 2.1
             bus info: pci@0000:00:02.1
             version: 0c
             width: 64 bits
             clock: 33MHz
             capabilities: pm bus_master cap_list
             configuration: latency=0
             resources: memory:feb00000-febfffff

    I have spent days installing and then uninstalling various drivers but can't come up with a solution. Please help.

  • ASP.NET Web Forms is bad, or what am I missing?

    - by iveqy
    Being a PHP guy myself, I recently had to write a spider for an asp.net site. I was really surprised by the different approach to ajax and form handling. For example, in the PHP sites I've worked with, deleting a database entry would be something like:

        GET delete.php?id=&confirm=yes

    and you'd get a "success" back in some form (in the ajax case, probably a JSON reply). In this asp.net application, you would instead post a form, including all inputs on the page, with a huge __VIEWSTATE and __EVENTVALIDATION. This would be more than 10 times as big as the above. The reply would be the complete page again, with a footer containing some structured data for javascript to parse and display the result. Again, the whole page is sent, and then thrown away(?) since it's already displayed. Why not just send the footer with the data to parse (it's not JSON nor XML but a | separated list)?

    I really can't see why you would design a system that way. Usually you have a fast client and a somewhat fast server, but a really slow connection. Why not keep the data transfer to a minimum? Why those huge __VIEWSTATE and __EVENTVALIDATION? Everything seems to be done way too chatty and way too complicated. I really can't see the point, and that usually means that I'm missing something. So please tell me: what are the reasons for this design, and what benefits (and weaknesses) does it have? (Yes, I know that __VIEWSTATE is used to tell what kind of form configuration should be sent back to the server. But WHY is this needed?) Please keep this discussion strictly technical and avoid flamewars.

    Update: Please excuse the somewhat rantish question. I tried to explain my view to be able to get a better answer. I am not saying that asp.net is bad; I am saying that I don't understand the meaning of those concepts. Usually that means that I have things to learn, rather than the concepts being wrong. I appreciate the explanations that "you don't have to do it this way in asp.net"; I'll read up on MVC and other .NET technologies. However, there must be a reason for this site (the one I referred to) to be written the way it is. It's written by professionals, for a big organisation, with far more experience than I have. Any explanation of their (possible) design choices would be welcome.

  • How to explain bad software to non-technical people?

    - by mtutty
    In discussing software development with non-technical people (customers, business owners, project sponsors, etc.), I often resort to analogies and metaphors. It's relatively easy and effective to use a "house" or other metaphor for describing the size and complexity of new development. However, we often inherit someone else's code or data, and this approach doesn't seem to hold up as well when trying to explain why we're gutting something that already seems to work. Of course we can point to cycle time and cost to be saved in the future, but this generally means nothing to business folks. I know doctors can say "just take this pill," but I'm not sure that software devs have the same authority. Ideas?

    EDIT: Let me add a bit to the discussion. The specific project I'm talking about has customers that don't realize (or care) about specific aspects of the system we're retiring (i.e., they think it was just fine):

    - The system would save a NEW RECORD every time someone updated a field.
    - The system contained tables for reference data. These tables had new records added every day, even though they were duplicates of previous records. And there was no way to tie the reference data used for a particular case at the time it was closed. This is like 99% of the data in the old system.
    - The field NAMES also have spaces, apostrophes and other inappropriate characters in them, making everything harder to work with.
    - In addition to the incredible amount of duplicate data, they have around 1000 XLS files with data they want added to the system. Previously, they would do a spreadsheet for each case in the database, IN ADDITION TO what they typed into the database.

    Getting rid of this old, unneeded information and piping in the XLS data comprises about 80% of the total project effort, and was not something we could accurately predict. I'm trying to find a concrete way to describe how bad this thing was, mostly so that the customer will understand why the migration process has been so time-consuming. The actual coding was done pretty quickly and the new system works fine, but without the old data they won't be happy. Sorry to get into the weeds, but most of the answers I've seen so far are pretty basic scope/schedule/cost things. I've been doing this for 15 years, so this really is more of a reflective, philosophical question - but without some of the details it can be difficult to really appreciate the awful beauty of this problem.

  • Google TV Gets Bad Reception. Can Media Center Pull in the Signal?

    - by andrewbrust
    The news hit Monday morning that Google has decided to delay the release of its Google TV platform, and has asked its OEMs to delay any products that embed the software. Coming just about two weeks prior to the 2011 Consumer Electronics Show (CES), Google’s timing is about the worst imaginable. CES is where the platform should have had its coming-out party, especially given all the anticipation that has built up since its initial announcement came 7 months ago.

    At last year’s CES, it seemed every consumer electronics company had fashioned its own software stack for Internet-based video programming and applications/widgets on its TVs, optical disc players and set top boxes. In one case, I even saw two platforms on a single TV set (one provided by Yahoo and the other one native to the TV set). The whole point of Google TV was to solve this problem and offer a standard, embeddable platform. But that won’t be happening, at least not for a while. Google seems unable to get it together, and more proprietary approaches, like Apple TV, don’t seem to be setting the world of TV-Internet convergence on fire, either.

    It seems to me that when it comes to building a “TV operating system,” Windows Media Center is still the best of a bad bunch. But it won’t stay so for much longer without some changes. Will Redmond pick up the ball that Google has fumbled? I’m skeptical, but hopeful. Regardless, here are some steps that could help Microsoft make the most of Google’s faux pas:

    1. Introduce a new Media Center version that uses Xbox 360, rather than Windows 7 (or 8), as the platform. TV platforms should be appliance-like, not PC-like. Combine that notion with the runaway sales numbers for Xbox 360 Kinect, and the mass appeal it has delivered for Xbox, and the switch from Windows makes even more sense. As I have pointed out before, Microsoft’s Xbox implementation of its Mediaroom platform (announced and demoed at last year’s CES) gets Redmond 80% of the way toward this goal. Nothing stops Microsoft from going the other 20%, other than its own apathy, which I hope has dissipated.

    2. Reverse the decision to remove Drive Extender technology from Windows Home Server (WHS), and create deep integration between WHS and Media Center. I have suggested this previously as well, but the recent announcement that Drive Extender would be dropped from WHS 2.0 creates the need for me to a) join the chorus of people urging Microsoft to reconsider and b) reiterate the importance of Media Center-WHS integration in the context of a Google compete scenario.

    3. Enable Windows Phone 7 (WP7) as a Media Center client. This would tighten the integration loop already established between WP7, Xbox and Zune. But it would also counter EchoStar/DISH Network/Sling Media, strike a blow against Google/Android (and even Apple/iOS) and could be the final strike against TiVo.

    4. Bring the WP7 user interface to Media Center and Kinect-enable it. This would further the integration discussed above and would be appropriate recognition of WP7’s Metro UI having been built on the heritage of the original Media Center itself. And being able to run your DVR even if you can’t find the remote (or can’t see its buttons in the dark) could be a nifty gimmick.

    Microsoft can do this, but its consumer-oriented organization, responsible for Xbox, Zune and WP7, has to take the reins here, or none of this will likely work. There’s a significant chance that won’t happen, but I won’t let that stop me from hoping that it does and insisting that it must. Honestly, this fight is Microsoft’s to lose.

  • If the model is validating the data, shouldn't it throw exceptions on bad input?

    - by Carlos Campderrós
    Reading this SO question, it seems that throwing exceptions for validating user input is frowned upon. But who should validate this data? In my applications, all validation is done in the business layer, because only the class itself really knows which values are valid for each of its properties. If I were to copy the rules for validating a property into the controller, the validation rules might change and there would then be two places where the modification had to be made. Is my premise that validation should be done in the business layer wrong?

    What I do

    My code usually ends up like this:

        <?php
        class Person
        {
            private $name;
            private $age;

            public function setName($n) {
                $n = trim($n);
                if (mb_strlen($n) == 0) {
                    throw new ValidationException("Name cannot be empty");
                }
                $this->name = $n;
            }

            public function setAge($a) {
                if (!is_int($a)) {
                    if (!ctype_digit(trim($a))) {
                        throw new ValidationException("Age $a is not valid");
                    }
                    $a = (int)$a;
                }
                if ($a < 0 || $a > 150) {
                    throw new ValidationException("Age $a is out of bounds");
                }
                $this->age = $a;
            }

            // other getters, setters and methods
        }

    In the controller, I just pass the input data to the model and catch thrown exceptions to show the error(s) to the user:

        <?php
        $person = new Person();
        $errors = array();
        // global try for all exceptions other than ValidationException
        try {
            // validation and process (if everything ok)
            try {
                $person->setAge($_POST['age']);
            } catch (ValidationException $e) {
                $errors['age'] = $e->getMessage();
            }
            try {
                $person->setName($_POST['name']);
            } catch (ValidationException $e) {
                $errors['name'] = $e->getMessage();
            }
            ...
        } catch (Exception $e) {
            // log the error, send 500 internal server error to the client
            // and finish the request
        }
        if (count($errors) == 0) {
            // process
        } else {
            showErrorsToUser($errors);
        }

    Is this a bad methodology?

    Alternate method

    Should I maybe create methods like isValidAge($a) that return true/false, and then call them from the controller?

        <?php
        class Person
        {
            private $name;
            private $age;

            public function setName($n) {
                $n = trim($n);
                if ($this->isValidName($n)) {
                    $this->name = $n;
                } else {
                    throw new Exception("Invalid name");
                }
            }

            public function setAge($a) {
                if ($this->isValidAge($a)) {
                    $this->age = $a;
                } else {
                    throw new Exception("Invalid age");
                }
            }

            public function isValidName($n) {
                $n = trim($n);
                if (mb_strlen($n) == 0) {
                    return false;
                }
                return true;
            }

            public function isValidAge($a) {
                if (!is_int($a)) {
                    if (!ctype_digit(trim($a))) {
                        return false;
                    }
                    $a = (int)$a;
                }
                if ($a < 0 || $a > 150) {
                    return false;
                }
                return true;
            }

            // other getters, setters and methods
        }

    And the controller will be basically the same, just with if/else instead of try/catch:

        <?php
        $person = new Person();
        $errors = array();
        if ($person->isValidAge($age)) {
            $person->setAge($age);
        } else {
            $errors['age'] = "Invalid age";
        }
        if ($person->isValidName($name)) {
            $person->setName($name);
        } else {
            $errors['name'] = "Invalid name";
        }
        ...
        if (count($errors) == 0) {
            // process
        } else {
            showErrorsToUser($errors);
        }

    So, what should I do? I'm pretty happy with my original method, and the colleagues to whom I have shown it have generally liked it. Despite this, should I change to the alternate method? Or am I doing this terribly wrong and should look for another way?

  • What's the fastest way to check if a word from one string is in another string?

    - by Mike Trpcic
    I have a string of words; let's call them bad:

        bad = "foo bar baz"

    I can keep this as a whitespace-separated string, or as a list:

        bad = bad.split(" ")

    If I have another string, like so:

        str = "This is my first foo string"

    What's the fastest way to check if any word from the bad string is within my comparison string, and what's the fastest way to remove said word if it's found?

        # Find if a word is there
        bad.split(" ").each do |word|
          found = str.include?(word)
        end

        # Remove the word
        bad.split(" ").each do |word|
          str.gsub!(/#{word}/, "")
        end
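    For comparison, here is a minimal sketch of the usual fast approach, translated into Python for illustration (the variable names mirror the question): put the bad words in a set so each membership test is O(1) on average, and build one alternation regex with word boundaries so the string is scanned once rather than once per bad word (which also avoids the substring-matching pitfall of include?).

        import re

        bad = "foo bar baz"
        text = "This is my first foo string"

        # Set lookup: each word of the text is tested in O(1) on average.
        bad_words = set(bad.split())
        has_bad = any(word in bad_words for word in text.split())

        # Single-pass removal: one compiled alternation with word boundaries,
        # instead of one full scan per bad word.
        pattern = re.compile(r"\b(?:" + "|".join(map(re.escape, bad_words)) + r")\b")
        cleaned = pattern.sub("", text)

        print(has_bad)   # True
        print(cleaned)   # "This is my first  string" (note the leftover double space)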

  • Why is it a bad practice to call System.gc?

    - by zneak
    After answering a question about how to force-free objects in Java (the guy was clearing a 1.5 GB HashMap) with System.gc(), I was told it's bad practice to call System.gc() manually, but the comments seemed mitigated about it - so much so that no one dared to upvote or downvote the answer. I've been told there that it's bad practice, but then I've also been told that garbage collector runs don't systematically stop the world anymore, and that the call could also be seen as only a hint, so I'm kind of at a loss.

    I do understand that usually the JVM knows better than you when it needs to reclaim memory. I also understand that worrying about a few kilobytes of data is silly. And I also understand that even megabytes of data isn't what it was a few years back. But still: 1.5 gigabytes? And you know there's like 1.5 GB of data hanging around in memory; it's not like it's a shot in the dark. Is System.gc() systematically bad, or is there some point at which it becomes okay?

    So the question is actually double:

    - Why is it, or isn't it, bad practice to call System.gc()? Is it really just a hint under certain implementations, or is it always a full collection cycle? Are there really garbage collector implementations that can do their work without stopping the world? Please shed some light on the various assertions people have made.
    - Where's the threshold? Is it never a good idea to call System.gc(), or are there times when it's acceptable? If so, what are those times?

  • Are projects with a high developer turnover rate really a bad thing?

    - by John
    I've inherited a lot of web projects that experienced high developer turnover rates. Sometimes these web projects are a horrible patchwork of band-aid solutions. Other times they can be somewhat maintainable mosaics of half-done features, each built in a different architectural style. Every time I inherit these projects, I wish the previous developers could explain to me why things got so bad.

    What puzzles me is the reaction of the owners (either a manager, a middle-man company, or a client). They seem to think, "Well, if you leave, I'll just find another developer." Or they think, "Oh, it costs that much money to refactor the system? I know another developer who can do it at half the price. I'll hire him if I can't afford you." I'm guessing that the high developer turnover rate is related to the owner's mentality of "If you think it's a bad idea to build this, I'll just find another (possibly cheaper) developer to do what I want." For the owners, the approach seems to work, because their business is thriving. Unfortunately, it's no fun for the developers who go AWOL 3-4 months after working with poor code, strict timelines, and little feedback.

    So my question is the following: are these symptoms of a project really such a bad thing for business?

    - A high developer turnover rate
    - Poorly built technology - often a patchwork of different and inappropriately used architectural styles
    - Owners without a clear roadmap for their web project, who request features on a whim

    I've seen numerous businesses prosper while experiencing the symptoms above. So as a programmer, even though my instincts tell me the above points are terrible, I'm forced to take a step back and ask, "are things really that bad in the grand scheme of things?" If not, I will re-evaluate my approach to these projects.

  • DBAN not working because the disk has bad sectors?

    - by canadiancreed
    I'm attempting to wipe the drive of a laptop that I have before it's sold, and I normally use DBAN to do so. However, this time it starts and then finishes instantly with the following message:

        DBAN finished with non-fatal errors
        This is usually caused by disks with bad sectors

    I have tried multiple flags, such as noverify, to force it to skip this check (the drive doesn't show bad sectors in the OS scan in Windows), but the error always comes back. This is the only time I've seen this message, as every other drive I've used this software on usually takes 3-5 hours to do its job.
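    If DBAN keeps refusing to run, one crude fallback is a single zero-fill pass that simply skips unwritable regions. Below is a destructive, minimal sketch of that idea (assuming a Linux live environment; the device name is an illustrative placeholder - triple-check it against lsblk before running anything like this):

        import os

        DEV = "/dev/sdX"        # illustrative device name; verify before running
        CHUNK = 1 << 20         # write 1 MiB of zeros at a time

        probe = os.open(DEV, os.O_RDONLY)
        size = os.lseek(probe, 0, os.SEEK_END)   # total device length in bytes
        os.close(probe)

        fd = os.open(DEV, os.O_WRONLY)
        zeros = bytes(CHUNK)
        pos = 0
        while pos < size:
            n = min(CHUNK, size - pos)
            try:
                os.lseek(fd, pos, os.SEEK_SET)
                os.write(fd, zeros[:n])          # overwrite this region with zeros
            except OSError as e:
                # A truly unwritable region is skipped rather than aborting the pass.
                print("skipped unwritable region at byte", pos, e)
            pos += n
        os.close(fd)

    A single zero pass won't satisfy every sanitization standard, but it does make casual recovery of the old data impractical.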

  • Creating a software RAID mirror with a bad-block HDD: how to check data integrity?

    - by rumburak
    There is an error in the System event log like this one: "The device, \Device\Harddisk1\DR1, has a bad block." Because of the above, I created a RAID 1 across this disk and another one, using Windows Server 2008 R2 software RAID volumes. The volume in Disk Manager is marked as "Failed Redundancy" and "At Risk". I can run "Reactivate Disk" and it starts to re-sync, but after a while it stops and returns to the previous state: it halts the re-sync on a bad block on the old disk and logs the same error in the System event log. The old disk's status is Errors; the new disk's status is Online.

    How can I check that there is an exact copy of the old disk on the new one? It is a server machine, so I would prefer to keep it running during this check.
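    There's no built-in verify step for this, so one low-tech option - a minimal sketch, assuming both copies can be exposed under their own drive letters (for example by breaking the mirror or attaching the old disk separately; the paths below are illustrative) - is to hash every file on both sides and compare:

        import hashlib
        import os

        def file_hash(path, chunk_size=1 << 20):
            """SHA-256 of one file, read in chunks so large files fit in memory."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    h.update(chunk)
            return h.hexdigest()

        OLD_ROOT = "E:/"   # illustrative: where the old copy is visible
        NEW_ROOT = "F:/"   # illustrative: where the new copy is visible

        for dirpath, _, filenames in os.walk(OLD_ROOT):
            for name in filenames:
                old_path = os.path.join(dirpath, name)
                new_path = os.path.join(NEW_ROOT, os.path.relpath(old_path, OLD_ROOT))
                try:
                    if file_hash(old_path) != file_hash(new_path):
                        print("MISMATCH:", old_path)
                except OSError as e:
                    # A bad block (or a file missing on the new side) surfaces here.
                    print("READ ERROR:", old_path, e)

    Any file the old disk can no longer read will show up as a read error rather than silently passing, which is the point of doing the comparison at the file level.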

  • Find out what resource is triggering bad password attempts?

    - by Craig Tataryn
    Background: I have a problem at work where I am constantly being locked out of my computer. We are in an environment that has a domain controller, and we use Active Directory for authentication. By going through my normal workflow while on the phone with desktop support, we were able to track the bad password attempts that were causing the lockouts to an application: Eclipse. This is the application I use to do software development. I immediately thought the culprit was a cached password for our SVN server; however, the desktop support person couldn't tell me which resource the password attempt was being made against (i.e., which URL, for instance).

    Question: Is there a way I can monitor bad authentication requests made by an application on my desktop and find out what resource they are being attempted against?

  • DH61AG's mythical 2-pin 19V power socket, and is too low a voltage bad?

    - by Nick Orton
    I have an Intel DH61AG motherboard. It has an external 19V power adapter, and it also has a 1x2 pin 19VDC internal power connector. I cannot find a PSU or adapter or anything that will plug into this. On an Intel forum, one person said that he plugged half of a 2x2 PSU connector in and it worked. Since this would deliver 12V into a socket that asks for 19V, I suspect that this is a bad idea. I don't know much about hardware. Can anyone explain to me why this would be a bad idea?

  • Is it bad to put your computer in sleep mode every time?

    - by Ivo Flipse
    Often I have a lot of stuff open and don't feel like shutting down my laptop, so I just use sleep mode when I'm transferring it. But I have no idea if this might have any disadvantages. So my question is: is it bad to put your computer in sleep mode every time? Things I'm wondering:

    - Should I turn off my computer every once in a while?
    - Will continuous use of sleep mode slow down my system in any way?
    - Are there any bad side effects (in the long term)?

    Any thoughts? FYI, I'm using Windows 7 on a laptop.

  • recordMyDesktop stopped working after upgrade

    - by anfeo
    Hi, I've upgraded to Ubuntu 10.10 from Ubuntu 10.04, and recordMyDesktop doesn't work now. If I start it from the command line it seems to work, but the interface doesn't start and I get this error:

        Initial recording window is set to:
        X:0 Y:0 Width:1680 Height:945
        Adjusted recording window is set to:
        X:0 Y:0 Width:1680 Height:944
        Your window manager appears to be Metacity
        Initializing...
        Buffer size adjusted to 4096 from 4096 frames.
        Opened PCM device default
        Recording on device default is set to: 1 channels at 22050Hz
        X Error: BadAccess (attempt to access private resource denied)
        Bad Access on XGrabKey. Shortcut already assigned.
        X Error: BadAccess (attempt to access private resource denied)
        Bad Access on XGrabKey. Shortcut already assigned.
        X Error: BadAccess (attempt to access private resource denied)
        Bad Access on XGrabKey. Shortcut already assigned.
        X Error: BadAccess (attempt to access private resource denied)
        Bad Access on XGrabKey. Shortcut already assigned.
        Capturing!
        X Error: BadAccess (attempt to access private resource denied)
        Bad Access on XGrabKey. Shortcut already assigned.
        X Error: BadAccess (attempt to access private resource denied)
        Bad Access on XGrabKey. Shortcut already assigned.
        X Error: BadAccess (attempt to access private resource denied)
        Bad Access on XGrabKey. Shortcut already assigned.
        X Error: BadAccess (attempt to access private resource denied)
        Bad Access on XGrabKey. Shortcut already assigned.

  • How can I get Perl to detect the bad UTF-8 sequences?

    - by gorilla
    I'm running Perl 5.10.0 and Postgres 8.4.3, and inserting strings into a database which sits behind DBIx::Class. These strings should be in UTF-8, and therefore my database is running in UTF-8. Unfortunately, some of these strings are bad, containing malformed UTF-8, so when I run it I'm getting an exception:

        DBI Exception: DBD::Pg::st execute failed: ERROR: invalid byte sequence for encoding "UTF8": 0xb5

    I thought that I could simply ignore the invalid ones and worry about the malformed UTF-8 later, so this code should flag and skip the bad titles:

        if (not utf8::valid($title)) {
            $title = "Invalid UTF-8";
        }
        $data->title($title);
        $data->update();

    However, Perl seems to think that the strings are valid, yet it still throws the exceptions. How can I get Perl to detect the bad UTF-8?
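    A note on why that check passes, offered as the usual explanation: utf8::valid() reports on the consistency of the scalar's internal UTF8 flag, not on whether a raw byte string is well-formed UTF-8, so the common Perl idiom is to round-trip the bytes through Encode::decode("UTF-8", $bytes, Encode::FB_CROAK) inside an eval and treat a thrown error as "malformed". The same principle, sketched in Python purely for illustration: actually decoding the bytes is what validates them.

        def ensure_utf8_text(raw: bytes) -> str:
            """Decode raw bytes as UTF-8; the decode step itself is the validation."""
            try:
                return raw.decode("utf-8")
            except UnicodeDecodeError:
                return "Invalid UTF-8"

        print(ensure_utf8_text(b"caf\xc3\xa9"))  # 'café' - a well-formed two-byte sequence
        print(ensure_utf8_text(b"caf\xb5"))      # 'Invalid UTF-8' - 0xb5 is a stray continuation byte,
                                                 # the same byte Postgres rejects above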

  • How do I troubleshoot a "Bad Request" in Apache2?

    - by Nick
    I have a PHP application that loads for all URLs except the home page. Visiting "https://my.site.com/" produces a "Bad Request" error message. Any other URL, for example "https://my.site.com/SomePage/", works just fine. It's only the home page that does not work. All pages use mod_rewrite and get routed through a single dispatch script, Director.php. Accessing Director.php directly also produces the "Bad Request" error. BUT - ALL of the other requests go through Director, and they all work just fine (excluding the home page), so it can't be an issue with the Director.php script? OR can it? I'm not seeing anything in the Apache2 error log, and I'm not seeing any PHP errors in the PHP error log. I've tried changing the first line of Director.php to read:

        echo 'test'; exit();

    But I still get a "Bad Request". This is the rewrite log for a request to the home page:

        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (2) init rewrite engine with requested uri /
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/$' to uri '/'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$' to uri '/'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (1) pass through /
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) init rewrite engine with requested uri /Director.php
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) rewrite '/Director.php' -> '-[L,NC]'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/$' to uri '-[L,NC]'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$' to uri '-[L,NC]'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) local path result: -[L,NC]

    Apache2 access log:

        my.site.com:443 123.123.123.123 - - [18/Feb/2011:05:44:19 +0000] "GET / HTTP/1.1" 400 3223 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100723 Ubuntu/10.04 (lucid) Firefox/3.6.8"

    Any ideas? I don't know what else to try?

    UPDATE: Here's my vhost conf:

        RewriteEngine On
        RewriteLog "/LiveWebs/mysite.com/rewrite.log"
        RewriteLogLevel 5

        # Dont rewite Crons folder
        ReWriteRule ^/Crons/ - [L,NC]
        ReWriteRule ^/phpmyadmin - [L,NC]
        ReWriteRule .php$ -[L,NC] # this is the problem!!

        RewriteCond %{REQUEST_URI} !^/images/ [NC]
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1 [L,QSA]
        RewriteCond %{REQUEST_URI} !^/images/ [NC]
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1&action=$2 [L,QSA]

    The problem is the line "ReWriteRule .php$ -[L,NC]". When I comment it out, the home page loads. The question is: how do I make URLs that actually end in .php go straight through (without breaking the home page)?
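    One observation, offered as a best guess from the quoted log: in "ReWriteRule .php$ -[L,NC]" there is no space between the "-" substitution and the "[L,NC]" flags, so mod_rewrite treats the literal string "-[L,NC]" as the replacement path - exactly what the rewrite log records (rewrite '/Director.php' -> '-[L,NC]') when the DirectoryIndex subrequest for the home page hits Director.php, producing the Bad Request. A corrected pass-through rule would look like this:

        # Pass any URL ending in .php straight through, untouched.
        # Note the space before the flags, and the escaped literal dot.
        RewriteRule \.php$ - [L,NC]

    With the space in place, "-" means "keep the URL as-is", which is presumably what was intended all along.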

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time—from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress... Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet. Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories. Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does. So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later. So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now: rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/

  • Hard Disk: S.M.A.R.T. Status BAD, Back up and replace

    - by Nick
    I have a laptop hard drive I was trying to use in my new media computer. The case is small and can accommodate two 2.5" drives, no 3.5" drives. I had been using this drive as a storage drive until now. When I go to install Windows on it, I'm first prompted at the BIOS with:

        Hard Disk: S.M.A.R.T. Status BAD, Back up and replace

    and then again in Windows Setup, informing me that the hard drive is bad. So I did a full format of the drive and tried again. Same error. So I took it out and hooked it back up to my other computer via a SATA-USB adapter kit (maybe the cause?). The hard drive is recognized fine, and when I scan it for errors by going right click -> Properties -> Tools -> Error checking, it reports that the hard drive is fine. I have tried 3 different SATA cables and multiple jumpers. When I plug my 1.5 TB 3.5" drive into the computer that gives the S.M.A.R.T. error on the 2.5" drive, it is recognized with no problems. Any ideas why this is happening and how I can fix it?

  • Mac Mini drive problems but SMART verified: bad hard drive or controller?

    - by Zac Thompson
    I have a 3-year-old Intel Mac Mini at home. About a month ago, it stopped booting from the hard drive (internal, SATA, 80GB). I tried booting from the Install Disc to repair the filesystem, but Disk Utility was unable to do so ("invalid node structure"). I was also unable to use the hard drive in the Terminal from the Install Disc, nor from an Ubuntu boot CD ("DRDY err"). I could see the contents of some directories, but others would give an error, and I would get failures when trying to copy files. At this point I was sure the filesystem was hosed and I'd want to reformat at least.

    DiskWarrior was able to let me retrieve the data files I was interested in, which are now copied to an external hard drive, but it reported a high number of problems ("speed reduced by disk malfunction" count was over 2000) when in the process of trying to rebuild the directory for the drive. It also would not let me use the rebuilt directory to replace the one on the drive; it claimed the disk errors prevented recovery in this way.

    Under normal circumstances I would now assume that the drive itself was going bad: DiskWarrior's "disk malfunction" error above is supposed to imply hardware problems. My initial plan was to buy a replacement for the internal 2.5" drive. However: Disk Utility, the command-line tools and DiskWarrior had reported all along that the SMART status of the drive was okay/Verified. So I'm now worried that the drive hardware is actually fine, and that the problems were due to a disk controller that has gone "bad" somehow. If this is the case, I'll probably just replace the whole computer.

    Any advice on how I can tell what is to blame? I don't have a lot of extra hardware sitting around, so I don't have the option of simply dropping the drive in another machine or popping another hard drive inside the Mini.

  • How can I wipe my iPod classic and fix any bad sectors on the hard drive without killing it?

    - by Sam Meldrum
    My iPod never finishes syncing and only syncs audio, not pictures or video - any ideas how I can fix it? My iPod classic 160GB worked well for a couple of years. I used to sync a lot of photos at full resolution to it, but this recently stopped working after I moved to Windows 7.

    - iTunes is on the latest version - 9.1.1.12
    - iPod software is up to date - 1.1.2
    - Windows 7 is fully up to date and patched

    The symptoms are that the iPod will start to sync, all audio (music and podcasts) will sync successfully, but the sync will then just appear to continue - the iTunes message is "Syncing iPod. Do not disconnect." This sync never completes - I have left it trying for days. I have tried resetting the iPod using the Restore button, whereupon it restarts the sync from default options and again syncs audio, but nothing else.

    I suspect that something has gone wrong on the hard drive - either a bad sector or some corrupt data. Is there a process I can go through to fix this? E.g. SpinRite or a format? If so, how do I go about formatting an iPod, and will it be recognised as an iPod after the format and work as normal? Any advice on what to try next is much appreciated.

    Update: I have eliminated problems with the files, PC or iTunes, as they sync fine to other iPods. I have also eliminated the cable by trying different cables which work with other iPods. What I'd really like to know is if there is any way to more fundamentally wipe the iPod safely, attempt to repair any bad sectors on the hard drive, and then start from scratch. Anyone ever managed this?

  • Bad method names and what they say about code structure

    - by maxfridbe
    (Apologies in advance if this is a re-post, but I didn't find similar posts.) What bad method-name patterns have you seen in code, and what did they tell you about the code? For instance, I keep seeing:

        public void preform___X___IfNecessary(...);

    I believe that this is bad because the operation X has an inversion of conditions. Note that this is a public method - a class's methods might legitimately require private helpers like this.

  • When is using Sessions a bad thing, and what's wrong with it?

    - by Amr ElGarhy
    I know that in Community Server you can't use Sessions, and a few years ago I remember working on a website where we were not allowed to use sessions either. In my view, sessions are a very helpful tool if we use them the right way, but is using session variables in a website a bad thing? When is it bad, and when is it not?
