Search Results

Search found 10078 results on 404 pages for 'bad man'.

Page 37/404 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • DBAN not working because disk has bad sectors? [migrated]

    - by canadiancreed
    Attempting to wipe the drive of a laptop that I have before it's sold; normally I use DBAN to do so. However, this time it starts and then finishes instantly with the following message: "DBAN finished with non-fatal errors. This is usually caused by disks with bad sectors." I have tried multiple flags, such as noverify, to force it to skip this check (the drive doesn't show bad sectors in an OS scan in Windows), but the error always comes back. This is the only time I've seen this message; every other drive I've used this software on usually takes 3-5 hours to do its job.

    Read the article

  • Creating a mirrored software RAID from an HDD with bad blocks: how to check data integrity?

    - by rumburak
    There is an error in the System event log like this one: "The device, \Device\Harddisk1\DR1, has a bad block." Because of the above, I created a RAID 1 across this disk and another one, using Windows Server 2008 R2 software RAID volumes. The volume in Disk Manager is marked as "Failed Redundancy" and "At Risk". I can use "Reactivate Disk" and it starts to re-sync, but after a while it stops and returns to the previous state. It stops the re-sync at a bad block on the old disk and writes the same error to the System event log. The old disk's status is Errors; the new disk's status is Online. How can I check that there is an exact copy of the old disk on the new one? It is a server machine, so I would prefer to keep it running during this check.

    Read the article

  • Find out what resource is triggering a bad password attempt?

    - by Craig Tataryn
    Background: I have a problem at work where I am constantly being locked out of my computer. We are in an environment that has a domain controller, and we use Active Directory for authentication. By going through my normal workflow while on the phone with desktop support, we were able to track the bad password attempts that were causing the lockouts to an application: Eclipse. This is the application I use to do software development. I immediately thought a cached password for our SVN server was the culprit; however, the desktop support person couldn't tell me which resource the password attempt was being made against (e.g. which URL). Question: Is there a way that I can monitor bad authentication requests made by an application on my desktop and find out what resource they are being attempted against?

    Read the article

  • DH61AG's mythical 2-pin 19V power socket, and is too low a voltage bad?

    - by Nick Orton
    I have an Intel DH61AG motherboard. It has an external 19V power adapter. It also has a 1x2-pin 19VDC internal power connector. Now, I cannot find a PSU or adapter or anything that will plug into this. In an Intel forum, one person said that he plugged half of a 2x2 PSU connector in and it worked. Since this would deliver 12V into a socket that asks for 19V, I suspect that this is a bad idea. I don't know much about hardware. Can anyone explain to me why this would be a bad idea?

    Read the article

  • Is it bad to put your computer in sleep mode every time?

    - by Ivo Flipse
    Often I have a lot of stuff open and don't feel like shutting down my laptop, so I just use sleep mode when I'm moving it around. But I have no idea whether this might have any disadvantages. So my question is: is it bad to put your computer in sleep mode every time? Things I'm wondering: Should I turn off my computer every once in a while? Will continuous use of sleep mode slow down my system in any way? Are there any bad side effects in the long term? Any thoughts? FYI, I'm using Windows 7 on a laptop.

    Read the article

  • recordMyDesktop stopped working after upgrade

    - by anfeo
    Hi, I upgraded from Ubuntu 10.04 to Ubuntu 10.10, and recordMyDesktop doesn't work anymore. If I start it from the command line it seems to work, but the interface doesn't start and I get this error:

        Initial recording window is set to: X:0 Y:0 Width:1680 Height:945
        Adjusted recording window is set to: X:0 Y:0 Width:1680 Height:944
        Your window manager appears to be Metacity
        Initializing...
        Buffer size adjusted to 4096 from 4096 frames.
        Opened PCM device default
        Recording on device default is set to: 1 channels at 22050Hz
        X Error: BadAccess (attempt to access private resource denied)
        Bad Access on XGrabKey. Shortcut already assigned.
        [the two lines above repeat four times in total]
        Capturing!
        X Error: BadAccess (attempt to access private resource denied)
        Bad Access on XGrabKey. Shortcut already assigned.
        [the two lines above repeat four times in total]

    Read the article

  • Cannot delete apt-fast for a clean install

    - by colby
    This is my problem:

        $ destroy apt-fast
        [sudo] password for colbyryptos:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package apt-fast is not installed, so not removed
        0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        Setting up man-db (2.6.1-2) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing man-db (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         man-db
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have also tried sudo rm /var/lib/dpkg/lock, followed by sudo dpkg --configure -a. It then gives me this:

        $ sudo dpkg --configure -a
        [sudo] password for colbyryptos:
        Setting up man-db (2.6.1-2) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing man-db (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         man-db

    Read the article

  • How can I get Perl to detect the bad UTF-8 sequences?

    - by gorilla
    I'm running Perl 5.10.0 and Postgres 8.4.3, and I am inserting strings into a database that sits behind DBIx::Class. These strings should be in UTF-8, and therefore my database is running in UTF-8. Unfortunately some of these strings are bad, containing malformed UTF-8, so when I run it I'm getting an exception:

        DBI Exception: DBD::Pg::st execute failed: ERROR: invalid byte sequence for encoding "UTF8": 0xb5

    I thought that I could simply ignore the invalid ones and worry about the malformed UTF-8 later, so using this code it should flag and skip the bad titles:

        if (not utf8::valid($title)) {
            $title = "Invalid UTF-8";
        }
        $data->title($title);
        $data->update();

    However, Perl seems to think that the strings are valid, and it still throws the exceptions. How can I get Perl to detect the bad UTF-8?

    Read the article

  • How do I troubleshoot a "Bad Request" in Apache2?

    - by Nick
    I have a PHP application that loads for all URLs except the home page. Visiting "https://my.site.com/" produces a "Bad Request" error message. Any other URL, for example "https://my.site.com/SomePage/", works just fine. It's only the home page that does not work. All pages use mod_rewrite and get routed through a single dispatch script, Director.php. Accessing Director.php directly also produces the "Bad Request" error. But all of the other requests go through Director.php and work just fine (excluding the home page), so can it really be an issue with the Director.php script? I'm not seeing anything in the Apache2 error log, and I'm not seeing any PHP errors in the PHP error log. I've tried changing the first line of Director.php to read:

        echo 'test'; exit();

    But I still get a "Bad Request". This is the rewrite log for a request to the home page:

        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (2) init rewrite engine with requested uri /
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/$' to uri '/'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$' to uri '/'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (1) pass through /
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) init rewrite engine with requested uri /Director.php
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) rewrite '/Director.php' -> '-[L,NC]'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/$' to uri '-[L,NC]'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$' to uri '-[L,NC]'
        123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) local path result: -[L,NC]

    Apache2 access log:

        my.site.com:443 123.123.123.123 - - [18/Feb/2011:05:44:19 +0000] "GET / HTTP/1.1" 400 3223 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100723 Ubuntu/10.04 (lucid) Firefox/3.6.8"

    Any ideas? I don't know what else to try. UPDATE: Here's my vhost conf:

        RewriteEngine On
        RewriteLog "/LiveWebs/mysite.com/rewrite.log"
        RewriteLogLevel 5

        # Don't rewrite Crons folder
        ReWriteRule ^/Crons/ - [L,NC]
        ReWriteRule ^/phpmyadmin - [L,NC]
        ReWriteRule .php$ -[L,NC] # this is the problem!!

        RewriteCond %{REQUEST_URI} !^/images/ [NC]
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1 [L,QSA]

        RewriteCond %{REQUEST_URI} !^/images/ [NC]
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1&action=$2 [L,QSA]

    The problem is the line "ReWriteRule .php$ -[L,NC]". When I comment it out, the home page loads. The question is, how do I make URLs that actually end in .php go straight through (without breaking the home page)?

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time - from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again.

    The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress...

    Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet.

    Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories.

    Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does.

    So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later.

    So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now:

        rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/

    Read the article

  • Hard Disk: S.M.A.R.T. Status BAD, Back up and replace

    - by Nick
    I have a laptop hard drive I was trying to use in my new media computer. The case is small and can accommodate two 2.5" drives, no 3.5" drives. I had been using the hard drive as a storage drive until now. When I go to install Windows on the hard drive, I'm first prompted at the BIOS with: "Hard Disk: S.M.A.R.T. Status BAD, Back up and replace." And then again in the Windows setup, informing me that the hard drive is bad. So I did a full format of the drive and tried again. Same error. So I took it out and hooked it back up to my other computer via a SATA USB adapter kit (maybe the cause?). The hard drive is recognized fine, and when I scanned it for errors by going right click -> Properties -> Tools -> Error checking, it returns that the hard drive is fine. I have tried 3 different SATA cables and multiple jumpers. When I plugged my 1.5 TB 3.5" drive into the computer that gives me the S.M.A.R.T. error on the 2.5" drive, it is recognized with no problems. Any ideas on why this is happening and how I can fix it?

    Read the article

  • Mac Mini drive problems but SMART verified: bad hard drive or controller?

    - by Zac Thompson
    I have a 3-year-old Intel Mac Mini at home. About a month ago, it stopped booting from the hard drive (internal, SATA, 80GB). I tried booting from the Install Disc to repair the filesystem, but Disk Utility was unable to do so ("invalid node structure"). I was also unable to use the hard drive in the Terminal from either the Install Disc or an Ubuntu boot CD ("DRDY err"). I could see the contents of some directories, but others would give an error, and I would get failures when trying to copy files. At this point I was sure the filesystem was hosed and I'd want to reformat at least.

    DiskWarrior was able to let me retrieve the data files I was interested in, which are now copied to an external hard drive, but it reported a high number of problems (the "speed reduced by disk malfunction" count was over 2000) in the process of trying to rebuild the directory for the drive. It also would not let me use the rebuilt directory to replace the one on the drive; it claimed the disk errors prevented recovery in this way.

    Under normal circumstances I would now assume that the drive itself was going bad: DiskWarrior's "disk malfunction" error above is supposed to imply hardware problems. My initial plan was to buy a replacement for the internal 2.5" drive. However: Disk Utility, command-line tools, and DiskWarrior had reported all along that the SMART status of the drive was okay/Verified. So I'm now worried that the drive hardware is actually fine, and that the problems were due to a disk controller that has gone "bad" somehow. If this is the case, I'll probably just replace the whole computer.

    Any advice on how I can tell what is to blame? I don't have a lot of extra hardware sitting around, so I don't have the option of simply dropping the drive in another machine or popping another hard drive inside the Mini.

    Read the article

  • How should a one-man development shop document their code?

    - by CKoenig
    Hi, please let me first describe my situation. I work in an IT department of a small-to-medium-sized industrial company, and basically I'm the only real developer (sometimes a second guy joins in for his own projects). I program mostly in C#/.NET. Of course I only program for internal needs (intranet, reporting, data-driven apps, some mobile apps, ...). My question is: how should I document my work? It's a highly dynamic environment (the features and bug fixes I implement are tested by me in production and go live, often within a day). If I wrote technical documentation like MSDN's, or even overview diagrams, those would take me more time to keep in sync than the whole programming process. I also feel it's a waste of time because I would be the only one who ever read it. I do understand that if I get sick, leave, or forget things, this documentation would be valuable. PS: Well, of course you are right - the question is how much, and how/where. I try using XML doc comments for the publicly exposed parts, but as I'm a believer in self-documenting code, the comments mostly restate in plain text what you can already read from the method head itself :( Maybe using the remarks section is the key, but if you have 30 lines of code with a 15-line XML comment in front, it just looks dirty. (Sorry for posting it here, but our firewall rejects JSON :( )
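
    For illustration, a hedged sketch of the <summary>/<remarks> split mentioned above (the method, its parameter, and the scenario are all invented here):

        // Hypothetical example: a method from an internal reporting app.
        /// <summary>
        /// Recalculates the monthly stock report and stores it in the reporting database.
        /// </summary>
        /// <remarks>
        /// Written for the intranet reporting app. Safe to re-run: existing rows
        /// for the same month are overwritten rather than duplicated.
        /// </remarks>
        /// <param name="month">The first day of the month to recalculate.</param>
        public void RebuildStockReport(DateTime month)
        {
            // ... query, aggregate, and persist ...
        }

    The <summary> stays as terse as the method head itself; the <remarks> section carries only what self-documenting code cannot express (idempotency, intended consumers), which keeps the comment from merely restating the signature.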

    Read the article

  • How can I wipe my iPod classic and fix any bad sectors on the hard drive without killing it?

    - by Sam Meldrum
    My iPod never finishes syncing and only syncs audio, not pictures or video - any ideas as to how I can fix it? My iPod classic 160GB worked well for a couple of years. I used to sync a lot of photos at full resolution to it, but this recently stopped working after I moved to Windows 7.

        iTunes is on the latest version - 9.1.1.12
        iPod software is up to date - 1.1.2
        Windows 7 is fully up to date and patched

    The symptoms are that the iPod will start to sync, all audio (music and podcasts) will sync successfully, but the sync will then just appear to continue, with the iTunes message "Syncing iPod. Do not disconnect." This sync never completes - I have left it trying for days. I have tried resetting the iPod using the Restore button, whereupon it restarts the sync with default options and again syncs audio, but nothing else. I suspect that something has gone wrong on the hard drive - either a bad sector or some corrupt data. Is there a process I can go through to fix this, e.g. SpinRite or a format? If so, how do I go about formatting an iPod, and will it be recognised as an iPod and work as normal after the format? Any advice on what to try next much appreciated.

    Update: I have eliminated problems with the files, PC, or iTunes, as they sync fine to other iPods. I have also eliminated the cable by trying different cables which work with other iPods. What I'd really like to know is if there is any way to more fundamentally wipe the iPod safely, attempt to repair any bad sectors on the hard drive, and then start from scratch. Anyone ever managed this?

    Read the article

  • Bad method names and what they say about code structure

    - by maxfridbe
    (Apologies in advance if this is a re-post, but I didn't find similar posts.) What bad method-name patterns have you seen in code, and what did they tell you about the code? For instance, I keep seeing:

        public void perform___X___IfNecessary(...);

    I believe that this is bad because the operation X has an inversion of conditions. Note that this is a public method; a class's methods might legitimately require private helpers like this.
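
    As a hedged illustration of that smell (the class, the condition, and all names below are invented), compare fusing the condition into the method with exposing the action and letting the caller own the decision:

        using System.Collections.Generic;

        public class CacheStore
        {
            private readonly List<string> _entries = new List<string>();

            public bool IsStale { get; set; }

            // Smell: decision and action are fused, so the name needs the
            // "IfNecessary" qualifier and callers can't see the condition.
            public void PerformCleanupIfNecessary()
            {
                if (!IsStale) return;
                _entries.Clear();
                IsStale = false;
            }

            // Alternative: the action alone; the caller tests IsStale and decides.
            public void PerformCleanup()
            {
                _entries.Clear();
                IsStale = false;
            }
        }

    With the second form the call site reads if (store.IsStale) store.PerformCleanup(); - which surfaces exactly the inverted condition the "IfNecessary" name was hiding.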

    Read the article

  • RPC fails, but pasting into SQL Management Studio works

    - by Justin
    I am calling a stored procedure from a web service in an ASP.NET application, and until a few days ago all was well. However, now when I call it I get an error saying "The timeout period elapsed prior to the completion of the operation or the server could not be reached." When I run SQL Server Profiler, I can see that the call is getting to the database, but it is timing out. I then copied the statement being executed, found at the bottom of the Profiler, pasted it into Management Studio and executed it, and it finishes in about 7 seconds. This runs just fine on our production server. It seems to be similar to this question: SELECT DISTINCT not working in .NET application, but works in SQL Mgmt Studio - but I see no answer.
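
    Two things are worth checking here, offered as a sketch rather than a diagnosis: SSMS and ADO.NET can end up with different query plans (parameter sniffing, different SET options such as ARITHABORT), and ADO.NET additionally enforces SqlCommand.CommandTimeout, which defaults to 30 seconds - so a query that runs in 7 seconds in SSMS can still time out through the application if its plan there is slower. A minimal, hypothetical call site that raises the timeout while investigating (the procedure name and connection string are invented):

        using System.Data;
        using System.Data.SqlClient;

        public static class ReportRunner
        {
            public static void Run(string connectionString)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("dbo.uspGetReport", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;

                    // Separate from the connection timeout: this is how long
                    // ADO.NET waits for the query itself. Default is 30 seconds.
                    command.CommandTimeout = 120;

                    connection.Open();
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // ... consume rows ...
                        }
                    }
                }
            }
        }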

    Read the article

  • When is using Sessions a bad thing, and what's wrong with it?

    - by Amr ElGarhy
    I know that in Community Server you can't use Sessions, and a few years ago I remember I was working on a website where we were not allowed to use sessions. From my point of view, sessions are a very helpful tool if we manage to use them the right way; but is using session variables in a website something bad? When is it bad, and when is it not?
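
    For what it's worth, a minimal sketch (the key name and the stored type are invented) of the kind of typed wrapper that tends to keep ASP.NET session use disciplined - one place for the magic string, the cast, and the null check:

        using System.Collections.Generic;
        using System.Web;

        // Hypothetical helper: callers never touch Session[...] directly.
        public static class ShoppingSession
        {
            private const string CartKey = "ShoppingCart"; // invented key name

            public static List<int> CartItemIds
            {
                get
                {
                    var items = HttpContext.Current.Session[CartKey] as List<int>;
                    if (items == null)
                    {
                        items = new List<int>();
                        HttpContext.Current.Session[CartKey] = items;
                    }
                    return items;
                }
            }
        }

    Sessions usually turn bad when they become an untyped grab bag of strings scattered across pages, or when the app later needs to scale out across servers; a wrapper like this at least keeps the usage auditable.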

    Read the article

  • Server-environment and configuration: How bad is fread() etc?

    - by zero
    Hello dear community, good day! I run a little site (for several months now) that has users accessing big files, for download as well as for streaming to the browser. It's fairly active, so assuming the worst: how bad is it to get PHP to read files that are stored outside the webroot and then echo them dynamically to a page for the browser to read? My question is: how bad is fread() etc. in this context? zero

    Read the article

  • Multiple classes in a single .cs file - good or bad?

    - by Sergio
    Is it advisable to create multiple classes within a .cs file, or should each .cs file have an individual class? For example:

        public class Items
        {
            public class Animal { }
            public class Person { }
            public class Object { }
        }

    Dodging the fact for a minute that this is a poor example of good architecture, is having more than a single class in a .cs file a code smell?
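
    For comparison, a hedged sketch of the common one-top-level-type-per-file convention (file names shown in comments; the types are invented), including the usual exception for a tiny helper type used only by its neighbour:

        // File: Animal.cs
        public class Animal
        {
            public string Name { get; set; }
            public AnimalKind Kind { get; set; }
        }

        // Often kept in Animal.cs as well: a small enum used only by Animal
        // is a common exception to one-type-per-file.
        public enum AnimalKind
        {
            Mammal,
            Bird,
            Reptile
        }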

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation

    LINQ [1] (Language Integrated Query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against a wide variety of data sources such as SQL, XML, and many more. Like SQL, LINQ offers a declarative notation for solving a problem - that is, you do not describe in detail how a task is to be solved, but only what is to be solved. On the query side, this frees the developer from error-prone iterator constructs.

    Ideally, of course, these capabilities would also be available in ArcObjects programming when working with features. The following construct would then be conceivable:

        var largeFeatures = from feature in features
                            where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
                            select feature;

    or its equivalent as a lambda expression:

        var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

    For this, a suitable provider must be available that manages the corresponding iterator logic. That is easier than it looks at first glance - you only have to deliver the desired entities as IEnumerable<IFeature>. (Note: don't be surprised - I declared the methods GetValue() and ToDouble() as extension methods along the way.)

    Behind the scenes, LINQ automatically builds a state machine [2] whose execution is deferred (deferred execution [3]) - meaning that instantiation and processing only take place when entities are actually requested (foreach, Count(), ToList(), ...), even though the assignment happened at a completely different place. Especially when iterating over the entities several times, you will rub your eyes in astonishment during the first debugging sessions when the instruction pointer jumps back into the iterator logic as if by magic.

    Implementation

    A very concise piece of logic for constructing IEnumerable<IFeature> can be realized by walking an IFeatureCursor and emitting the individual features with yield. For ease of use, I put this logic into an extension method GetFeatures() for IFeatureClass:

        public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy)
        {
            IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
            IFeature feature;
            while (null != (feature = featureCursor.NextFeature()))
            {
                yield return feature;
            }
            // this is skipped in unit tests with a cursor mock
            if (Marshal.IsComObject(featureCursor))
            {
                Marshal.ReleaseComObject(featureCursor);
            }
        }

    With this, you can now obtain the IEnumerable<IFeature> very easily:

        IEnumerable<IFeature> features = _featureClass.GetFeatures(null, RecyclingPolicy.DoNotRecycle);

    Some care is needed when using a "recycling cursor". After a deferred execution, you must not iterate over the features again in the same context; in that case only the content of the last (recycled) feature is delivered, and all features in the set are identical. The construct

        largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

    would therefore be critical, because ToList() already iterates through the list once, and the cursor has thereby already been moved through the features. The ForEach extension method then always delivers the same feature. In such situations, no recycling cursor may be used.

    Running foreach several times, on the other hand, is no problem, because the state machine is instantiated anew each time, and the cursor is therefore traversed anew - that is the magic mentioned above.

    Outlook

    You can now go a step further and tackle entirely custom implementations of the IEnumerable<IFeature> interface. Only the method and the property for accessing the enumerator need to be written out. In the enumerator itself, the Reset() method triggers re-execution of the search - for example, by passing a suitable delegate into the constructor:

        new FeatureEnumerator(
            _featureClass,
            featureClass => featureClass.Search(_filter, isRecyclingCursor));

    and calling it on reset:

        public void Reset()
        {
            _featureCursor = _resetCursor(_t);
        }

    In this way, enumerators can be implemented for completely different scenarios yet used on the client side in exactly the same way following the scheme above. Cursors, SelectionSets, and so on thereby merge into a single concern, and the reusability of code increases immensely. On top of that, an IEnumerable is very easy to mock in automated unit tests - a big step toward higher software quality. [4]

    Conclusion

    Nevertheless, caution is advised when using these constructs in performance-critical queries. Because a state machine is managed in the background, a certain overhead arises whose processing costs additional time - roughly 20 to 100 percent. Beyond that, working without recycling quickly becomes a performance gap as well. However, declarative LINQ code is far more elegant, less error-prone, and easier to maintain than manually iterating, comparing, and building up a result list. In my experience, the amount of code shrinks by 75 to 90 percent on average! For that, I am happy to wait a few milliseconds longer. As so often, maintainability has to be weighed against performance - and for me, maintainability is increasingly gaining priority. Usually it is the user, not the code, that is the bottleneck in the process anyway.

    Demo source code: support.esri.de

    [1] Wikipedia: LINQ - http://de.wikipedia.org/wiki/LINQ
    [2] Wikipedia: finite state machine - http://de.wikipedia.org/wiki/Endlicher_Automat
    [3] Charlie Calvert's blog: LINQ and Deferred Execution - http://blogs.msdn.com/b/charlie/archive/2007/12/09/deferred-execution.aspx
    [4] Clean Code Developer: yellow grade / automated unit tests - http://www.clean-code-developer.de/Gelber-Grad.ashx#Automatisierte_Unit_Tests_8
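
    The GetValue() and ToDouble() helpers the article mentions only in passing are not shown there; a minimal sketch of how they might look as extension methods is given below. This is an assumption, not the author's code: the field is looked up by name and the boxed value converted.

        using System;
        using System.Globalization;
        using ESRI.ArcGIS.Geodatabase;

        public static class FeatureQueryExtensions
        {
            // Reads a field value from a feature by field name.
            // (Error handling for a missing field, FindField() == -1, is omitted.)
            public static object GetValue(this IFeature feature, string fieldName)
            {
                int fieldIndex = feature.Fields.FindField(fieldName);
                return feature.get_Value(fieldIndex);
            }

            // Converts the boxed field value to a double.
            public static double ToDouble(this object value)
            {
                return Convert.ToDouble(value, CultureInfo.InvariantCulture);
            }
        }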

    Read the article

  • Is it bad to join open-source projects as an amateur?

    - by esqew
    I've thought for about six months now that I should join an open-source iPhone or iPad project to hone my skills in Objective-C, but every time I go to do it, I see thousands of lines of code in huge projects and end up convincing myself I would never understand them. I always think that my commits would just end up being a hassle for project admins and more senior contributors, so I always back out at the last second. My question essentially is: is it a hassle when a programmer of intermediate experience joins an open-source project?

    Read the article

  • Is it bad practice to call a View from another View in MVC?

    - by marcos.borunda
    I have some plain Views; they don't have any logic behind them (there is no action or controller behind them), and their only purpose is to alert the user about something like "We have sent you an email to confirm your account", "You have no access to this resource", etc. These views are really simple, and calling them through a Controller/Action seems like too much overhead, but somehow I feel that skipping it is not quite correct. What do you think? How do you handle this kind of situation? I guess this question applies to any MVC framework, but in my case I'm using the ASP.NET MVC 3 framework.
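
    One common way to keep the overhead low without rendering views directly (the controller, action, and view names here are invented) is a single lightweight controller whose one action serves every such notice view:

        using System;
        using System.Web.Mvc;

        // Hypothetical controller: one action renders any of the simple notice views.
        public class NoticeController : Controller
        {
            private static readonly string[] Allowed =
                { "EmailConfirmationSent", "AccessDenied" };

            // GET: /Notice/Show/EmailConfirmationSent
            public ActionResult Show(string id)
            {
                // Whitelist the view names so arbitrary view paths can't be requested.
                if (Array.IndexOf(Allowed, id) < 0)
                {
                    return HttpNotFound();
                }
                return View(id); // renders Views/Notice/EmailConfirmationSent.cshtml, etc.
            }
        }

    Each notice still gets its own URL (useful for redirects after sign-up, access denials, and the like) while the per-message controller boilerplate disappears.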

    Read the article
