Search Results

Search found 22914 results on 917 pages for 'solid state drive'.

Page 55/917 | < Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >

  • Ubuntu Server 13.10 can't mount hard drive that is on my router

    - by Keytachi626
    So I am currently working with Ubuntu Server, which I have running on my laptop for the moment so I can learn how to work with the server OS. I have it up and running with Samba, OpenSSH, Webmin, and Plex Media Server. My problem is that I can't get the server to reach the hard drive attached to my router, a TP-Link WDR3500. The drive is formatted as FAT32. What I've tried: install cifs, then sudo vi /etc/fstab and type out
      \\ \tplinklogin.net\volume1 \mnt\media cifs guest 0 0
    I have also tried
      \\\192.168.0.1\volume1 \mnt\media cifs guest 0 0
    But when I go to the terminal and run sudo mount -a, I usually get an error saying "wrong fs type, bad option, bad superblock on //ipaddress/dns/volume1, missing codepage or helper program, or other error", while dmesg says "unable to determine destination address". So am I doing something wrong here? I can't attach the hard drive directly to my laptop, since my family constantly uses it to transfer data back and forth and they get mad at me if I just take it away.
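    Not part of the original question, but a minimal sketch of what a working CIFS mount for this share might look like, assuming the router exposes the drive as //192.168.0.1/volume1 (the IP and share name are taken from the question; the guest/uid/charset options are assumptions that may need adjusting for the WDR3500):
      sudo apt-get install cifs-utils      # provides mount.cifs, the "helper program" the error hints at
      sudo mkdir -p /mnt/media
      # /etc/fstab entry: a // prefix and forward slashes, not backslashes
      //192.168.0.1/volume1  /mnt/media  cifs  guest,uid=1000,iocharset=utf8  0  0
      sudo mount -a
    The "unable to determine destination address" message in dmesg usually points at the share path itself (backslashes, or a hostname the server cannot resolve), so using the router's IP with forward slashes is the first thing worth trying.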

    Read the article

  • Install Lubuntu w/o Disc or Flash Drive?

    - by Seib
    My WinXP desktop was recently blasted with a few bits of malware. I removed the malware but the damage is done, and my OS is finally at its end; it's barely chugging along as it is. So, I'm figuring what I'd like to do is replace the OS with Lubuntu, which I know is a lightweight, heavily WinXP-esque distro of Linux. However, every article, every explanation, I've read online has stated I need either a 4+ GiB flash drive or a burnable CD. Well, I have a flash drive, but it's only 1 GiB; and I have a disc burner, but no disc. So, frankly, I doubt I'll be able to do it. But hey, I figured I might as well ask away here on AU; what have I got to lose? So my question is this: would it be possible to replace my WinXP OS w/ Lubuntu without using either a 4+ GiB flash drive or a burnable CD? Thanks very much. I appreciate any help you could give.

    Read the article

  • Securely automount encrypted drive at user login

    - by Tom Brossman
    An encrypted /home directory gets mounted automatically for me when I log in. I have a second internal hard drive that I've formatted and encrypted with Disk Utility. I want it to be automatically mounted when I log in, just like my encrypted /home directory is. How do I do this? There are several very similar questions here, but the answers don't apply to my situation. It might be best to close/merge my question here and edit the second one below, but I think it may have been abandoned (and therefore never to be marked as accepted). This solution isn't a secure method; it circumvents the encryption. This one requires editing fstab, which necessitates entering an additional password at boot. It's not automatic like mounting /home. This question is very similar, but does not apply to an encrypted drive. The solution won't work for my needs. Here is one, but it's for NTFS drives; mine is ext4. I can re-format and re-encrypt the second drive if a solution requires this. I've got all the data backed up elsewhere.
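    One possible approach (not from the question, and only a sketch): if the second drive is LUKS-encrypted, a key file kept inside the already-encrypted /home can be used to unlock and mount it from a small script run at login (for example via an autostart entry), so no extra passphrase is needed and the key never sits on an unencrypted disk. The device name /dev/sdb1, the mapper name and all paths below are assumptions:
      # one-time setup: add a key file stored inside the encrypted home
      sudo dd if=/dev/urandom of=/home/user/.seconddrive.key bs=512 count=4
      sudo chmod 0400 /home/user/.seconddrive.key
      sudo cryptsetup luksAddKey /dev/sdb1 /home/user/.seconddrive.key
      # run at login (needs sudo rights for these two commands):
      sudo cryptsetup luksOpen --key-file /home/user/.seconddrive.key /dev/sdb1 seconddrive
      sudo mount /dev/mapper/seconddrive /media/seconddrive
    Whether this counts as "automatic" enough depends on how the sudo prompt is handled (for example a NOPASSWD rule restricted to exactly these two commands).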

    Read the article

  • Creating floppy drive special devices under Quantal

    - by JCCyC
    First, I'd like the various special devices for the different floppy capacities (like /dev/fd0u720, etc.) to be available. I tried to adapt some udev rules I found online, which I saved as /etc/udev/rules.d/70-persistent-floppy.rules:
      # change floppy device ownership and permissions
      # default permissions are 640, which prevents group users from having write access
      # first fix primary devices (/dev/fd0, /dev/fd1, etc.)
      # also change group ownership from disk to floppy
      SUBSYSTEM=="block", KERNEL=="fd[0-9]*", GROUP="floppy", MODE="0660"
      # next recreate secondary devices (/dev/fd0u720, /dev/fd0u1440, etc.)
      SUBSYSTEM=="block", KERNEL=="fd[0-9]*", ACTION=="add", RUN+="create_floppy_devices -c -t $attr{cmos} -m %M -M 0660 -G floppy $root/%k"
    But to no avail. It seems the create_floppy_devices script isn't provided with 12.10. How do I obtain it?
    Second: I'm using MATE, and whenever I log in I get a message box saying it tried to mount the drive but failed. How do I disable this?
    Third (which is probably related to the second): whenever there's a disk in the drive, the motor won't stop spinning. If I do an mdir of it, the motor stops after the command returns, and then starts again. I suspect some process in MATE is doing this.
    UPDATE: For CentOS 6 (which does have a create_floppy_devices program) the following rules file worked, saved as /etc/udev/rules.d/98-floppy.rules:
      # change floppy device ownership and permissions
      # default permissions are 640, which prevents group users from having write access
      # first fix primary devices (/dev/fd0, /dev/fd1, etc.)
      # also change group ownership from disk to floppy
      KERNEL=="fd[0-9]*", GROUP="floppy", MODE="0660"
      # next recreate secondary devices (/dev/fd0u720, /dev/fd0u1440, etc.)
      # drive A: is type 4 (1.44MB) - add other lines for other drives
      KERNEL=="fd0*", ACTION=="add", RUN+="/lib/udev/create_floppy_devices -c -t 4 -m %M -M 0660 -G floppy $root/%k"

    Read the article

  • Still can't mount windows 8 drive after restart

    - by Ishai Bloch
    After following the instructions in other posts, I am still getting the same error when I try to mount my Windows 8 drive in Ubuntu 14.04 on a dual-boot system. I have disabled fast start after shutdown, hybrid hibernation, and the preinstalled Asus Instant On service. I have tried restarting Windows rather than shutting down. In all cases I get the same error message, namely:
      Error mounting /dev/sda4 at /media/jesse/OS: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sda4" "/media/jesse/OS"' exited with non-zero exit status 14: Windows is hibernated, refused to mount.
      Failed to mount '/dev/sda4': Operation not permitted
      The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option.
    I did not have the same issue before upgrading from Ubuntu 12 to 14. For what it's worth, my computer is supposedly a "hybrid" with an SSD drive installed, although I can't see that the SSD drive is being used at all with my present settings. Any thoughts?
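    Not from the original question, but the usual ways around the "Windows is hibernated" refusal are either to make sure Windows really releases the volume or to mount it read-only for now; a sketch (the device and mount point are taken from the error message above):
      # in Windows, from an administrator command prompt: fully disable hibernation and fast startup
      powercfg /h off
      # or, from Ubuntu, mount the volume read-only in the meantime
      sudo mount -t ntfs-3g -o ro /dev/sda4 /media/jesse/OS
    ntfs-3g also has a remove_hiberfile mount option that discards the hibernation data, but that throws away whatever state Windows saved, so it is a last resort.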

    Read the article

  • Automatic syncing USB HD drive to local HD (different computers)

    - by luri
    I have a desktop PC at work, and a laptop at home... or elsewhere. What I want to do is use a USB HD to store my documents (about 130 GB, maybe more). That would serve as a backup and also carry my files to my laptop. I'd like both computers to automatically sync all files there with local copies, so that I can work at either of them and keep updated copies of everything in both (plus the USB drive, which would allow me to work on other computers too, apart from being another backup). Dropbox isn't a solution for me, due to pricing and the 100 GB limit. The workflow would be as follows, to clear things up:
      - I work on PC1. Changes to files are automatically synced to the USB drive whenever a file is modified.
      - I go home and boot PC2. I plug in the USB drive and local files are synced (if changed) with the most recent USB copies.
      - While I work at PC2, again, changes to files propagate to my USB drive.
      - Whenever I go back to PC1, I plug in my USB drive and again everything syncs.
    So the questions would be: a) Am I crazy? b) Can it be done? c) Will I have any file conflicts (provided I'm the only one who will modify the files)?
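    One way to get the two-way behaviour described above is a bidirectional sync tool such as unison rather than plain one-way rsync; a minimal sketch (the paths are assumptions, and the USB drive is assumed to be mounted at /media/usbdrive):
      sudo apt-get install unison
      # sync local documents with the copy on the USB drive, in both directions
      unison ~/Documents /media/usbdrive/Documents
    Run the same command on both PCs whenever the drive is plugged in (or hook it to an automount event). Unison keeps per-replica state, so it can tell a deletion from a new file and will flag genuine conflicts instead of silently overwriting them, which speaks to question (c) above.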

    Read the article

  • Drive reporting incorrect free space

    - by Oli
    So I swapped my shiny SATA SSD for an even shinier PCI-E SSD. I run my core OS on the SSD because it's silly-fast. I did this on my old SSD, so I created a new ext4 partition on the new drive and then just dd'ed the data across (sorry, I don't know the exact command I ran anymore) and, after reinstalling GRUB, I booted onto the PCI-E SSD. At first glance everything had worked perfectly and things were running faster than ever. But then I noticed the free disk space on the new, larger drive: it was almost exactly the same as it was on the other disk... a disk that was half its size. So it looks as if I've copied the files across incorrectly and copied some of the filesystem metadata along with them. Tools like du and Disk Usage Analyzer come back with the correct figures; things that look at the partition (and not the files) seem to think the drive is 120GB. I've been using this drive for a week now, so it's way out of sync with the old SSD, and dumping the data and starting again isn't a job that fills me with joy. So, two questions:
      1. Is there a way to fix my filesystem so it knows what it's really on about? fsck, e2fsck and badblocks all seem to be able to scan it without finding a problem.
      2. If I do plug my old SSD back in, copy the data off my PCI-E drive onto it and then copy it back onto a fresh filesystem (i.e. juggle the data around), what's the best way of doing that? I obviously want to keep all the permissions and softlinks where they are.
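    For question 1, a common explanation is that dd copied the old filesystem byte-for-byte, so the ext4 filesystem still believes it is the size of the old 120GB partition even though the new partition is larger; resize2fs can grow it in place. A sketch, assuming the copied filesystem lives on /dev/sdb1 here (the real device name will differ, and this is best done from a live USB while the filesystem is unmounted):
      sudo e2fsck -f /dev/sdb1      # resize2fs insists on a clean check first
      sudo resize2fs /dev/sdb1      # with no size argument it grows to fill the partition
    That would also make question 2 unnecessary, since the data can stay where it is.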

    Read the article

  • My new hard drive won't automount on boot

    - by user518
    I installed a new hard drive right before installing the new Ubuntu 11.10 (by reformatting, not upgrading). I was able to mount the drive and partition it. It's a 1TB drive, and I was able to transfer all of my music and videos to it. For some reason it won't mount on boot, and I can't figure out how to manually mount it afterwards either. Here's my current /etc/fstab:
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point>   <type>  <options>       <dump>  <pass>
      proc /proc proc nodev,noexec,nosuid 0 0
      # / was on /dev/sda1 during installation
      UUID=e0fbdf09-f9a0-4336-bac3-ba4dc6cfbcc0 / ext4 errors=remount-ro,user_xattr 0 1
      # swap was on /dev/sda5 during installation
      UUID=adf15180-c84c-4309-bc9f-085fd7464f89 none swap sw 0 0
      /dev/sdc1 /media/sdc1 ext4 defaults 0 0
    The last line is what I added for my hard drive. Here's the output from sudo lshw -C disk:
      % sudo lshw -C disk
        *-disk:0
             description: ATA Disk
             product: ST3250310AS
             vendor: Seagate
             physical id: 0
             bus info: scsi@2:0.0.0
             logical name: /dev/sda
             version: 3.AD
             serial: 6RYBF2QE
             size: 232GiB (250GB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5 signature=000da204
        *-cdrom
             description: DVD-RAM writer
             product: DVD+-RW DH-16A6S
             vendor: PLDS
             physical id: 0.0.0
             bus info: scsi@4:0.0.0
             logical name: /dev/cdrom
             logical name: /dev/cdrw
             logical name: /dev/dvd
             logical name: /dev/dvdrw
             logical name: /dev/scd0
             logical name: /dev/sr0
             version: YD11
             capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
             configuration: ansiversion=5 status=nodisc
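    Assuming the drive does appear as /dev/sdc1 once the system is up, a more robust fstab entry uses the filesystem UUID (device names like sdc can change between boots) and a mount point that actually exists; a sketch, with a made-up UUID as a placeholder:
      sudo mkdir -p /media/sdc1
      sudo blkid /dev/sdc1          # prints something like UUID="1234abcd-..." TYPE="ext4"
      # then in /etc/fstab, instead of the /dev/sdc1 line:
      UUID=1234abcd-...  /media/sdc1  ext4  defaults  0  2
      sudo mount -a                 # should mount it immediately; any error here explains the boot failure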

    Read the article

  • Compress session state in ASP.Net 4.0

    - by nikolaosk
    Hello folks, in this post I would like to talk about a new feature of ASP.NET 4.0 - easy state compression. When we create ASP.NET web applications, the user must feel that whenever he interacts with the website, he actually interacts with something that can safely be described as an application. What I mean by this is that during a postback the whole page is re-created and sent back to the client in a fraction of a second. The server has no idea what the user does with the page. If we...(read more)

    Read the article

  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all, I have two older PCs on my LAN posing as 'servers': one running FreeNAS off a USB stick, using three 500GB HDDs in a ZFS RAID-Z pool serving as storage for the LAN, and one running Debian Lenny with an 80GB drive, used as a general-purpose 'tinker' box that I can ssh into, etc. The problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyway. Rather than simply replace one 500GB drive with another 500GB drive, and have no backup of the file server, I'd like to upgrade all the drives to 2TB ones - but I have nowhere to store that much data in the meantime. As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hope of creating a RAID5 + LVM setup using those drives and backing the data up to that external drive enclosure. After the backup is done, replace the drives in the FreeNAS box, rebuild the array there and mirror the data back. Then I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently) using the external drive enclosure on the Debian box. My big question is: most of these external drive boxes seem to claim support for JBOD, RAID 0, 1, 10, 5, etc. - should I presume that is simply fake RAID like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired? Thanks, Monte
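    Linux software RAID would ignore whatever RAID modes the enclosure firmware claims and build the array itself with mdadm, provided the kernel can see all four disks individually over eSATA (which usually requires port-multiplier support on the eSATA card and the enclosure running in JBOD mode). A sketch, with assumed device names:
      sudo apt-get install mdadm lvm2
      sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
      # then LVM on top, as described above
      sudo pvcreate /dev/md0
      sudo vgcreate backupvg /dev/md0
      sudo lvcreate -l 100%FREE -n backup backupvg
      sudo mkfs.ext4 /dev/backupvg/backup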

    Read the article

  • CIO Magazine's State of the CIO and its Impact on Your EA

    - by david.olivencia(at)oracle.com
    CIO Magazine today released its State of the CIO report. As most Enterprise Architects report to (or report very close to) the CIO, the report provides interesting insights into where most CIOs' minds and priorities are. The information will allow Enterprise Architects to better align plans, approaches, models, and strategies. The report's summary can be found here: http://assets.cio.com/documents/cache/pdfs/2011/dec15_gatefold.pdf
    Specifically, the article highlights:
      - How IT Makes A Difference
      - Critical Leadership Skills
      - Business Focused CIOs
      - Areas of Increasing Responsibility
      - Plans for 2015
    Enterprise Architects: what insights from this report will alter the way you successfully lead in 2011?
    David Olivencia | Solution Director, Enterprise Architecture & Exa Services, Oracle Consulting Latin America and Caribbean

    Read the article

  • How to structure game states in an entity/component-based system

    - by Eva
    I'm making a game designed with the entity-component paradigm that uses systems to communicate between components, as explained here. I've reached the point in my development where I need to add game states (such as paused, playing, level start, round start, game over, etc.), but I'm not sure how to do it with my framework. I've looked at this code example on game states which everyone seems to reference, but I don't think it fits with my framework: it seems to have each state handling its own drawing and updating. My framework has a SystemManager that handles all the updating using systems. For example, here's my RenderingSystem class:
      public class RenderingSystem extends GameSystem {
          private GameView gameView_;

          /**
           * Constructor
           * Creates a new RenderingSystem.
           * @param gameManager The game manager. Used to get the game components.
           */
          public RenderingSystem(GameManager gameManager) {
              super(gameManager);
          }

          /**
           * Method: registerGameView
           * Registers gameView into the RenderingSystem.
           * @param gameView The game view registered.
           */
          public void registerGameView(GameView gameView) {
              gameView_ = gameView;
          }

          /**
           * Method: triggerRender
           * Adds a repaint call to the event queue for the dirty rectangle.
           */
          public void triggerRender() {
              Rectangle dirtyRect = new Rectangle();
              for (GameObject object : getRenderableObjects()) {
                  GraphicsComponent graphicsComponent =
                      object.getComponent(GraphicsComponent.class);
                  dirtyRect.add(graphicsComponent.getDirtyRect());
              }
              gameView_.repaint(dirtyRect);
          }

          /**
           * Method: renderGameView
           * Renders the game objects onto the game view.
           * @param g The graphics object that draws the game objects.
           */
          public void renderGameView(Graphics g) {
              for (GameObject object : getRenderableObjects()) {
                  GraphicsComponent graphicsComponent =
                      object.getComponent(GraphicsComponent.class);
                  if (!graphicsComponent.isVisible()) continue;
                  GraphicsComponent.Shape shape = graphicsComponent.getShape();
                  BoundsComponent boundsComponent =
                      object.getComponent(BoundsComponent.class);
                  Rectangle bounds = boundsComponent.getBounds();
                  g.setColor(graphicsComponent.getColor());
                  if (shape == GraphicsComponent.Shape.RECTANGULAR) {
                      g.fill3DRect(bounds.x, bounds.y, bounds.width, bounds.height, true);
                  } else if (shape == GraphicsComponent.Shape.CIRCULAR) {
                      g.fillOval(bounds.x, bounds.y, bounds.width, bounds.height);
                  }
              }
          }

          /**
           * Method: getRenderableObjects
           * @return The renderable game objects.
           */
          private HashSet<GameObject> getRenderableObjects() {
              return gameManager.getGameObjectManager().getRelevantObjects(getClass());
          }
      }
    Also, all the updating in my game is event-driven. I don't have a loop like theirs that simply updates everything at the same time. I like my framework because it makes it easy to add new GameObjects, but doesn't have the problems some component-based designs encounter when communicating between components. I would hate to chuck it just to get pause to work. Is there a way I can add game states to my game without removing the entity-component design? Does the game state example actually fit my framework, and I'm just missing something?
    EDIT: I might not have explained my framework well enough. My components are just data. If I was coding in C++, they'd probably be structs. Here's an example of one:
      public class BoundsComponent implements GameComponent {
          /**
           * The position of the game object.
           */
          private Point pos_;

          /**
           * The size of the game object.
           */
          private Dimension size_;

          /**
           * Constructor
           * Creates a new BoundsComponent for a game object with initial position
           * initialPos and initial size initialSize. The position and size combine
           * to make up the bounds.
           * @param initialPos The initial position of the game object.
           * @param initialSize The initial size of the game object.
           */
          public BoundsComponent(Point initialPos, Dimension initialSize) {
              pos_ = initialPos;
              size_ = initialSize;
          }

          /**
           * Method: getBounds
           * @return The bounds of the game object.
           */
          public Rectangle getBounds() {
              return new Rectangle(pos_, size_);
          }

          /**
           * Method: setPos
           * Sets the position of the game object to newPos.
           * @param newPos The value to which the position of the game object is set.
           */
          public void setPos(Point newPos) {
              pos_ = newPos;
          }
      }
    My components do not communicate with each other; systems handle inter-component communication. My systems also do not communicate with each other. They have separate functionality and can easily be kept separate. The MovementSystem doesn't need to know what the RenderingSystem is rendering to move the game objects correctly; it just needs to set the right values on the components, so that when the RenderingSystem renders the game objects it has accurate data. The game state could not be a system, because it needs to interact with the systems rather than the components. It's not setting data; it's determining which functions need to be called. A GameStateComponent wouldn't make sense either, because all the game objects share one game state. Components are what make up objects, and each one is different for each different object. For example, the game objects cannot have the same bounds. They can have overlapping bounds, but if they share a BoundsComponent, they're really the same object. Hopefully this explanation makes my framework less confusing.

    Read the article

  • Windows 7 64-bit installation from alternative media (no DVD/USB Flash drive)

    - by Niels Willems
    Greetings! I currently have Windows 7 x86 installed on my computer, and I want to install Windows 7 x64 on a different partition. However, there is a little issue: I cannot run the x64 installer from the Windows 7 x86 I currently have. I was planning to install Windows 7 x64 on another partition, then boot from that partition to install it on the partition I actually want my OS on. Once that is complete I could just format the partition with the Windows 7 x64 that I no longer need. But the installer will not run from the x86 version of Windows 7, even though I do not want to upgrade that Windows directly. The reason I'm doing this in such a weird way is that my optical drive is broken and I'm really not into buying a new one, since I would use it maybe once a year. I also don't have a USB flash drive big enough to hold the installation files. As far as I'm aware I cannot use an external hard drive such as this one, which I do have. Are there any alternatives for installing Windows 7 x64, or am I forced into buying a USB flash drive or a new optical drive? Thank you in advance for your replies.
    Edit: This picture shows my current partitions on my laptop. I want to get Windows 7 x64 on the C partition, but I have to install it first on the F partition, then boot the Windows on F to format C and install x64 on that one. My external drive is J.
    Edit 2: No alternative computer with a DVD drive; the install files are located in an ISO from MA3D. To install my 32-bit version I mounted the ISO in Daemon Tools to replace my Windows Vista, but since I cannot run the 64-bit installer from my 32-bit OS, this doesn't work.

    Read the article

  • Can't mount cds after failed Brasero burn

    - by Allan
    Brasero failed to burn a disc recently, and now I can't access CD-ROMs. Even CD-Rs are not showing. Using Ubuntu 11.10 on a Dell Latitude D510 laptop.
      allan@allan-Latitude-D510:~$ cdrecord -checkdrive
      Device was not specified. Trying to find an appropriate drive...
      Detected CD-R drive: /dev/cdrw
      Using /dev/cdrom of unknown capabilities
      Device type    : Removable CD-ROM
      Version        : 5
      Response Format: 2
      Capabilities   :
      Vendor_info    : 'TSSTcorp'
      Identification : 'DVD+-RW TS-L532B'
      Revision       : 'DE04'
      Device seems to be: Generic mmc2 DVD-R/DVD-RW.
      Using generic SCSI-3/mmc CD-R/CD-RW driver (mmc_cdr).
      Driver flags   : MMC-3 SWABAUDIO BURNFREE
      Supported modes: TAO PACKET SAO SAO/R96P SAO/R96R RAW/R96R
    My CD drive is now useless, and any help on getting it to read/burn again would be appreciated.
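    Not part of the original question; a few generic checks that may help narrow down whether the drive itself or the desktop automounter is at fault (the device name /dev/sr0 is an assumption based on the cdrecord output above):
      dmesg | tail -n 20                                   # kernel errors right after inserting a disc
      sudo blkid /dev/sr0                                  # does the kernel see a filesystem on the disc?
      sudo dd if=/dev/sr0 of=/dev/null bs=2048 count=1024  # can the drive read raw sectors at all?
    If raw reads fail on known-good pressed discs too, the drive hardware is the likely culprit rather than anything Brasero left behind.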

    Read the article

  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store, so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We can also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation:
      class DbBackedList implements List<String> {
          private DbBackedList() {}

          /** Returns a list, possibly non-empty */
          public static List<String> getList() {
              return new DbBackedList();
          }

          public String get(int index) {
              return Db.getTable().getRow(index).asString(); // may throw DbExceptions!
          }

          // add(String), add(int, String), etc. ...
      }
    My problem lies with the fact that the underlying DB API may encounter connection errors that are not specified in the List interface that it should throw. My problem is whether this violates Liskov's Substitution Principle (LSP). Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that the newly-specified exception there is determined by the inserted value, and so it strengthens the precondition. In my case the connection/read error is unpredictable and due to external factors, and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError, which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.) Is this therefore a valid argument for throwing an extra error, and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not in violation of LSP? If this does indeed violate LSP and is thus not practically usable, I have provided two less-palatable alternative solutions as answers that you can comment on, see below.
    Footnote: Use Cases
    In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and to allow regular List operations such as search, subList and iteration. Another, more adventurous, use-case is as a slot-in replacement for libraries that work with basic Lists, e.g. if we have a third-party task queue that usually works with a plain List:
      new TaskWorkQueue(new ArrayList<String>()).start()
    which is susceptible to losing all its queue in the event of a crash; if we just replace this with:
      new TaskWorkQueue(new DbBackedList()).start()
    we get instant persistence and the ability to share the tasks among more than one machine. In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to propagate and crash the program (e.g. if we can't change the TaskWorkQueue code).

    Read the article

  • Liskov substitution and abstract classes / strategy pattern

    - by Kolyunya
    I'm trying to follow LSP in practical programming. And I wonder if different constructors of subclasses violate it. It would be great to hear an explanation instead of just yes/no. Thanks much!
    P.S. If the answer is no, how do I make different strategies with different input without violating LSP?
      class IStrategy {
      public:
          virtual void use() = 0;
      };

      class FooStrategy : public IStrategy {
      public:
          FooStrategy(A a, B b) { c = /* some operations with a, b */ }
          virtual void use() { std::cout << c; }
      private:
          C c;
      };

      class BarStrategy : public IStrategy {
      public:
          BarStrategy(D d, E e) { f = /* some operations with d, e */ }
          virtual void use() { std::cout << f; }
      private:
          F f;
      };

    Read the article

  • Drivers and firmware for the LiteOn ihap-122-9 DVD drive

    - by Sandy
    I'm trying to replace the DVD drive in my old PC. LiteOn.com is a mess and I can't find a single working driver or firmware update there, or anywhere. Windows XP tries to use a default, generic driver dated 2001 (about 9 years before this drive even existed).
    http://www.firmwarehq.com/download_1..._6L0H.EXE.html
    This correctly finds my LiteOn ihap-122-9 DVD drive. It correctly finds that I'm currently using firmware 6L0F. It correctly tries to install 6L0H. It completes 100% but then just fails and says "contact your vendor". Does anyone know why? Where can I actually get drivers... and firmware updates... that actually work for the ihap-122-9? Apparently, the newest driver IS the one made 9 years before the drive existed (unbelievable), and the latest firmware is the one that is already in the drive (common). No other drive I've had in this computer ever had a problem. This brand new LiteOn is doing this:
      - Opening MY COMPUTER now takes 60 seconds.
      - MY COMPUTER marking the drive as "DVD F:" takes another 30 seconds.
      - MY COMPUTER showing the "Batman II" title takes another 15 seconds.
      - Clicking and running the movie takes another 30 seconds for the main menu to appear.
      - The movie starts about 20 seconds later.
      - The movie runs fine for 1-2 seconds... then stops for 5 seconds... then starts again and plays for 1-2 seconds. This repeats for 2 hours.
    (It happens with all store-bought DVDs and all home-made DVDs.)

    Read the article

  • Printer State: Idle - /usr/lib/cups/backend/dnssd failed

    - by bilbo88
    A printer was installed and used to work, but it no longer does. A job is submitted successfully but then just sits there waiting to print (as seen with the lpq command). Pinging the printer works fine, and the printer itself is on and working, since it prints from Windows. However, the printer properties show: Printer State: Idle - /usr/lib/cups/backend/dnssd failed. The following command does not help either:
      sudo service avahi-daemon restart
    Does anyone have a solution? Thank you for the help.
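    Not from the original question; when only the dnssd backend is failing, one workaround that often helps is to point the queue at the printer's address directly instead of relying on discovery. A sketch (the queue name, IP and port are assumptions):
      sudo service cups restart
      lpstat -t                                    # shows the current device URI and queue state
      # re-point the queue at the printer via a static socket/JetDirect URI
      sudo lpadmin -p myprinter -E -v socket://192.168.1.50:9100
    The right URI scheme (socket://, ipp://, lpd://) depends on the printer, but any of them avoids the dnssd lookup that is failing here.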

    Read the article

  • Examples of Liskov Substitution

    - by james lewis
    I'm facilitating a session next week on the Liskov Substitution Principle and I was wondering if anyone had any examples of violations 'from the trenches'? I'm looking for something other than Uncle Bob's rectangle/square problem and the persistent set problem he talks about in A-PPP (although that is a great example). So far I'm using the example of a (very simple) List and an IndexedList as the 'correct' use of inheritance, and the addition of a Set to this hierarchy as a violation (as a Set is distinct; it strengthens the precondition of the Add method). I've also taken this great example and its solution from this question. Both those examples are great, but I'm looking for something more subtle and harder to spot. So far I've come up with nothing, so if you've got a great, subtle example, post it up. Also, any metaphors you've come across that helped you understand LSP would be really useful too.

    Read the article

  • Liskov principle: violation by type-hinting

    - by Elias Van Ootegem
    According to the Liskov principle, a construction like the one below is invalid, as it strengthens a precondition. I know the example is pointless/nonsense, but when I last asked a question like this and used a more elaborate code sample, it seemed to distract people too much from the actual question.
      //Data models
      abstract class Argument
      {
          protected $value = null;

          public function getValue()
          {
              return $this->value;
          }

          abstract public function setValue($val);
      }

      class Numeric extends Argument
      {
          public function setValue($val)
          {
              $this->value = $val + 0; //coerce to number
              return $this;
          }
      }

      //used here:
      abstract class Output
      {
          public function printValue(Argument $arg)
          {
              echo $this->format($arg);
              return $this;
          }

          abstract public function format(Argument $arg);
      }

      class OutputNumeric extends Output
      {
          public function format(Numeric $arg) //<-- VIOLATION!
          {
              $format = is_float($arg->getValue()) ? '%.3f' : '%d';
              return sprintf($format, $arg->getValue());
          }
      }
    My question is this: why would this kind of "violation" be considered harmful? So much so that some languages, like the one I used in this example (PHP), don't even allow it? I'm not allowed to strengthen the type-hint of an abstract method but, by overriding the printValue method, I am allowed to write:
      class OutputNumeric extends Output
      {
          final public function printValue(Numeric $arg)
          {
              echo $this->format($arg);
          }

          public function format(Argument $arg)
          {
              $format = is_float($arg->getValue()) ? '%.3f' : '%d';
              return sprintf($format, $arg->getValue());
          }
      }
    But this would imply repeating myself for each and every child of Output, and makes my objects harder to reuse. I understand why the Liskov principle exists, don't get me wrong, but I find it somewhat difficult to fathom why the signature of an abstract method in an abstract class has to be adhered to so much more strictly than that of a non-abstract method. Could someone explain to me why I'm not allowed to hint at a child class, in a child class? The way I see it, the child class OutputNumeric is a specific use-case of Output, and thus might need a specific instance of Argument, namely Numeric. Is it really so wrong of me to write code like this?

    Read the article

  • Why does my application hang in interruptible state?

    - by Greg
    Hello, I am running a GTK application on Ubuntu 10.04 LTS Server that used to work fine. It suddenly started to hang in interruptible sleep state (Sl+) without any apparent reason. Here is a snippet of the strace:
      poll([{fd=3, events=POLLIN}, {fd=5, events=POLLIN}], 2, 0) = 0 (Timeout)
      poll([{fd=5, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=5, revents=POLLOUT}])
      writev(5, [{"\2\30\4\0\224\4\240\0\0@\0\0\37\0\240\0\2\4\4\0\224\4\240\0\0@\0\0\37\0\240\0"..., 192}, {NULL, 0}, {"", 0}], 3) = 192
      read(5, 0x2ba9ac4, 4096) = -1 EAGAIN (Resource temporarily unavailable)
    I googled the line read(5, 0x2ba9ac4, 4096), which seems to be the most significant one, and it seems that many other applications tend to have the same problem. I tried restarting my X server, but it didn't help. Do you have an idea how to solve this problem?
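    Not from the original question; since the trace shows the process waiting on file descriptor 5, it may help to find out what that descriptor actually is (for a GTK application it is very often the X server socket). A sketch, with PID as a placeholder:
      ls -l /proc/PID/fd/3 /proc/PID/fd/5             # what fds 3 and 5 point to (socket, pipe, file...)
      sudo cat /proc/PID/stack 2>/dev/null            # where in the kernel the task is sleeping, if available
      gdb -p PID -batch -ex 'thread apply all bt'     # userspace backtrace at the moment of the hang
    If fd 5 turns out to be the X connection, the hang is the application waiting for a reply from the X server, which shifts the investigation towards the server or display driver rather than the application itself.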

    Read the article

  • Free software for backing up an attached network drive

    - by Richard
    My wireless router comes with a USB connector which allows me to plug in an external hard drive, and it'll act as Network Attached Storage. The problem is that I want to back up this hard drive to the external drive of another computer, so that if the NAS drive fails I don't lose everything. However, Windows 7 Backup refuses to include the NAS as a location to back up, and I can't fool it by mapping it to a drive letter either. Google presents lots of pages on how to back up files to a NAS, but not the other way around. Can anyone advise me on free software which can do incremental backups of a NAS drive to an external drive attached to the computer it is running on? I'm aware of this question, but the top answers have one or more of the following issues:
      - They aren't free.
      - The free version cannot back up a NAS.
      - They cannot do incremental backups.
      - They're just a script and therefore have limited other functionality (e.g. disk space management, scheduling, compression, etc.).
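    Not from the question; one zero-cost option on Windows 7 is robocopy plus Task Scheduler, since robocopy is happy to read from a UNC path even when Backup is not. A sketch (the share name, drive letter and log path are assumptions):
      robocopy \\router\usbdrive D:\NASBackup /MIR /FFT /R:2 /W:5 /LOG:D:\NASBackup\backup.log
    /MIR mirrors the share (copying only changed files on each run), /FFT tolerates the coarse FAT timestamps many NAS boxes report, and the command can be dropped into a scheduled task for unattended runs. It does not keep historical versions, so it covers the "NAS died" case rather than the "I deleted a file last month" case.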

    Read the article

  • Where should you put constants and why?

    - by Tim Meyer
    In our mostly large applications, we usually have only a few locations for constants:
      - One class for GUI and internal constants (tab page titles, group box titles, calculation factors, enumerations)
      - One class for database tables and columns (this part is generated code) plus readable names for them (manually assigned)
      - One class for application messages (logging, message boxes, etc.)
    The constants are usually separated into different structs in those classes. In our C++ applications, the constants are only declared in the .h file and the values are assigned in the .cpp file.
    One of the advantages is that all strings etc. are in one central place and everybody knows where to find them when something must be changed. This is especially something project managers seem to like, as people come and go and this way everybody can change such trivial things without having to dig into the application's structure. Also, you can easily change the titles of similar group boxes / tab pages etc. at once. Another aspect is that you can just print that class and give it to a non-programmer who can check if the captions are intuitive, and if messages to the user are too detailed or too confusing, etc.
    However, I see certain disadvantages:
      - Every single class is tightly coupled to the constants classes.
      - Adding/removing/renaming/moving a constant requires recompilation of at least 90% of the application (note: changing the value doesn't, at least for C++). In one of our C++ projects with 1500 classes, this means around 7 minutes of compilation time (using precompiled headers; without them it's around 50 minutes) plus around 10 minutes of linking against certain static libraries. Building a speed-optimized release through the Visual Studio compiler takes up to 3 hours. I don't know if the huge amount of class relations is the source, but it might as well be.
      - You get driven into temporarily hard-coding strings straight into code because you want to test something very quickly and don't want to wait 15 minutes just for that test (and probably every subsequent one). Everybody knows what happens to the "I will fix that later" thoughts.
      - Reusing a class in another project isn't always that easy (mainly due to other tight couplings, but the constants handling doesn't make it easier).
    Where would you store constants like that? Also, what arguments would you bring in order to convince your project manager that there are better concepts which also comply with the advantages listed above? Feel free to give a C++-specific or independent answer.
    PS: I know this question is kind of subjective, but I honestly don't know of any better place than this site for this kind of question.
    Update on this project: I have news on the compile-time thing. Following Caleb's and gbjbaanb's posts, I split my constants file into several other files when I had time. I also eventually split my project into several libraries, which was now possible much more easily. Compiling this in release mode showed that the auto-generated file which contains the database definitions (table and column names and more; more than 8000 symbols) and builds up certain hashes caused the huge compile times in release mode. Deactivating MSVC's optimizer for the library which contains the DB constants allowed us to reduce the total compile time of the project (several applications) in release mode from up to 8 hours to less than one hour! We have yet to find out why MSVC has such a hard time optimizing these files, but for now this change relieves a lot of pressure, as we no longer have to rely on nightly builds only. That fact, and other benefits such as less tight coupling and better reusability, also showed that spending time splitting up the "constants" wasn't such a bad idea after all ;-)

    Read the article

  • External SATA drive does not work without the optional USB cable *also* connected

    - by Software Monkey
    I have a Vantec NST-260SU external eSATA/USB drive enclosure (which came with an optional separate power supply) connected to a relatively new Windows 7 computer. The drive should work as a SATA drive with either the separate power supply or a USB cable used solely for power. I would prefer to use the external power supply because I have used up all my rear USB ports. Now, if I connect both the eSATA and USB cables, then:
      - The drive shows in the BIOS list of AHCI drives (and not in the list of attached USB devices).
      - Everything I can see about it in Computer Management seems to show it as a SATA drive (for example, it shows as "Location 0 (Channel 5, Target 0, Lun 0)" like my other SATA drives, and not "on USB Mass Storage Device" like my USB flash drives).
      - It seems very fast, very much faster than my USB flash drives.
    However, if I disconnect the USB cable and attach the power adapter instead, the drive does not show in the BIOS list and cannot be seen by Windows. The power LED on the enclosure is lit, and the enclosure becomes warm after running for a bit, so I am sure it is receiving power. Does anyone know if this device requires both the USB and eSATA cables, and if so, why? Or is there possibly something I need to do to reset the enclosure so it doesn't need the USB cable? The install instructions are pretty clear that you must connect the SATA cable before connecting the USB cable in order for the drive to function as SATA, which I am sure I did.
    PS: I have reviewed the small manual which came with it, which has not been of help.

    Read the article

  • Accidentally dd'ed an image to wrong drive / overwrote partition table + NTFS partition start

    - by Kento Locatelli
    I screwed up and set the wrong output for dd when trying to copy a FreeNAS ISO, overwriting the wrong external hard drive. Ironically, I was trying to set up a FreeNAS server for data backup...
      - The external drive is only used for data storage; the system is entirely intact.
      - The drive had a single NTFS partition filling the entire device (2TB WD Elements).
      - The drive originally had an MBR partition table. It now shows as having a GPT, presumably from the FreeNAS image.
      - The drive was mounted at the time, with maybe a couple of kB of data written/read after running dd.
      - The drive is just a few months old and healthy (regular SMART / fs checks).
      - I have not rebooted the OS (CrunchBang).
      - /proc/partitions still holds the correct information (and has been stored).
      - I have dd's output (records in / out / bytes).
      - testdisk did not find any partitions on a quick or deep search.
      - I'm running photorec to recover the more important data (a couple of recent plaintext files that hadn't been backed up yet). The vast majority of the disk content (80%) is unnecessary media files.
    My current plan is to let photorec do its thing, then recreate the MBR with gparted and use cfdisk to create another NTFS partition using the sector information from /sys/block/.../. Is that a good course of action (that is, does it have a chance of success)? Or is there anything else I should try first?
    Possibly relevant information:
      dd if=FreeNAS-8.0.4-RELEASE-p3-x86.iso of=/dev/sdc:
      194568+0 records in
      194568+0 records out
      99618816 bytes (100 MB) copied

      grep . /sys/block/sdc/sdc*/{start,size}:
      /sys/block/sdc/sdc1/start:2048
      /sys/block/sdc/sdc1/size:3907022848

      cat /proc/partitions:
      major minor  #blocks  name
      ** Snipped **
         8       32  1953512448  sdc
         8       33  1953511424  sdc1

      current fdisk -l output:
      WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

      Disk /dev/sdc: 2000.4 GB, 2000396746752 bytes
      255 heads, 63 sectors/track, 243201 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdc doesn't contain a valid partition table
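    Not part of the original question; given that /proc/partitions and /sys still record the old start (2048) and size, one recovery route after photorec finishes (ideally practised on a full dd image of the disk, not the disk itself) is roughly:
      sudo sgdisk --zap-all /dev/sdc      # clear the leftover GPT structures the ISO wrote
      # recreate the old MBR entry from the saved geometry (recent util-linux sfdisk syntax)
      printf 'label: dos\nstart=2048, size=3907022848, type=7\n' | sudo sfdisk /dev/sdc
      sudo testdisk /dev/sdc              # Advanced -> Boot: try "Backup BS" / "Rebuild BS" for the NTFS boot sector
    The 100MB written by dd overwrote the NTFS boot sector and the first part of the filesystem, so even with the partition entry back, expect to need the backup boot sector (kept at the end of the partition) plus a chkdsk/ntfsfix pass, and expect some files near the front of the volume to be gone for good.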

    Read the article
