Search Results

Search found 60744 results on 2430 pages for 'why we write'.

  • Why is ActiveRecord firing an extra query when I use the includes method to fetch data?

    - by riddhi_agrawal
    I have the following model structure:

        class Group < ActiveRecord::Base
          has_many :group_products, :dependent => :destroy
          has_many :products, :through => :group_products
        end

        class Product < ActiveRecord::Base
          has_many :group_products, :dependent => :destroy
          has_many :groups, :through => :group_products
        end

        class GroupProduct < ActiveRecord::Base
          belongs_to :group
          belongs_to :product
        end

    I wanted to minimize my database queries, so I decided to use includes. In the console I tried something like:

        groups = Group.includes(:products)

    My development log shows the following calls:

        Group Load (403.0ms)  SELECT `groups`.* FROM `groups`
        GroupProduct Load (60.0ms)  SELECT `group_products`.* FROM `group_products` WHERE (`group_products`.group_id IN (1,3,14,15,16,18,19,20,21,22,23,24,25,26,27,28,29,30,33,42,49,51))
        Product Load (22.0ms)  SELECT `products`.* FROM `products` WHERE (`products`.`id` IN (382,304,353,12,63,103,104,105,262,377,263,264,265,283,284,285,286,287,302,306,307,308,328,335,336,337,340,355,59,60,61,247,309,311,66,30,274,294,324,350,140,176,177,178,64,240,327,332,338,380,383,252,254,255,256,257,325,326))
        Product Load (10.0ms)  SELECT `products`.* FROM `products` WHERE (`products`.`id` = 377) LIMIT 1

    I can see why the first three calls are necessary, but I don't understand why the last one is made:

        Product Load (10.0ms)  SELECT `products`.* FROM `products` WHERE (`products`.`id` = 377) LIMIT 1

    Any idea why this is happening? Thanks in advance. :)

    Read the article

  • Ubuntu: user can't write to a directory, and I don't see why not

    - by Peter
    I've got a directory, /var/www/someProject/backup/mysql, and I want the user mysql to be able to write to it. Each time I try to write to it as the mysql user, I get a "can't read/write" error. Yet the directory is 777, as you can see here:

        drwxrwxrwx 2 aUser users 4096 2010-03-17 17:14 mysql

    I also tried to chown the directory to mysql:mysql, just like the home dir of the mysql user, but no luck; that changed nothing. What am I doing wrong here? Or is the mysql user limited to its home dir in some other way on Ubuntu? This problem has been bugging me for days now, so any help is greatly appreciated.
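
    A quick way to narrow this down (a minimal diagnostic sketch; it assumes you have sudo access and reuses the path from the question) is to reproduce the failure as the mysql user and then check the search (execute) bit on every parent directory, since a 777 leaf directory is useless if a parent blocks traversal. On Ubuntu, AppArmor confinement of mysqld is another common culprit:

        # try the write exactly as the mysql user would
        sudo -u mysql touch /var/www/someProject/backup/mysql/testfile

        # list the permissions of every component of the path; each parent
        # directory needs the x (search) bit for mysql to reach the leaf
        namei -l /var/www/someProject/backup/mysql

        # check whether an AppArmor profile is confining mysqld
        sudo aa-status | grep mysqld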

    Read the article

  • Read/Write/Verify disk diagnostic tool for Mac OS X?

    - by Spiff
    It seems that there are many tools out there for Mac OS X that test a hard drive for bad blocks by doing a Read/Verify pass. That is, they read a block, then read it a second time, and verify that both reads yielded the same results. I need a tool that does a non-destructive Read/Write/Verify pass. It should read each block, write those same contents back out, and then read it again to verify. That way every block gets written, giving the hard drive a chance to spare out bad blocks. But since the same contents that were just read get written back out, it doesn't destroy data that wasn't already lost. I'm aware of several tools that can do Read/Verify, but I'm not aware of any that do Read/Write/Verify. Are there any tools that do what I want? Unix / open source tools that compile and run on Mac OS X are fair game too.
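
    One open-source tool that performs exactly this cycle is badblocks (from Linux's e2fsprogs) in its non-destructive read-write mode: it reads each block, writes the same contents back, and reads again to verify. This is a sketch only; the assumptions are that e2fsprogs builds on Mac OS X (e.g. via MacPorts) and that /dev/disk2 stands in for your actual device:

        # unmount the disk first, then run the non-destructive read-write
        # test; -n selects that mode, -s shows progress, -v is verbose
        badblocks -nsv /dev/disk2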

    Read the article

  • File property information (last write time and file size) in Explorer out of date by hours over a network share

    - by David L Morris
    An application is running on a Windows XP Professional machine, picking up a file from a network share on another Windows machine. It detects that the file has been updated (by date and time, or optionally by file size) and reads it for any new data. Most of the time the last write time and file size seem to be up to date. Occasionally, though, this information stops being updated even while the file keeps growing (intermittently during the day) with appended content, so the last write time and file size remain fixed at some arbitrary moment. This is visible in Explorer, which shows a stale last write time on the reading machine. Just opening the file in Notepad immediately refreshes the file properties, and the other application picks up where it left off. Neither the file location nor the location of the relevant applications can be changed. Any suggestions for resolving this problem?

    Read the article

  • Is there an encrypted write-only file system for Linux?

    - by Grumbel
    I am searching for an encrypted filesystem for Linux that can be mounted in a write-only mode. By that I mean you should be able to mount it without supplying a password and still be able to write/append files, but you should be able to read neither the files you have written nor the files already on the filesystem. Access to the files should only be granted when the filesystem is mounted with the password. The purpose of this is to write log files or similar data that is only written, never modified, without exposing the files themselves. File permissions don't help here, as I want the data to be inaccessible even when the system is fully compromised. Does such a thing exist on Linux? If not, what would be the best alternative for creating encrypted log files? My current workaround consists of simply piping the data through gpg --encrypt, which works but is very cumbersome: you can't easily get access to the filesystem as a whole, and you have to pipe each file through gpg --decrypt manually.
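
    Concretely, the gpg workaround relies on public-key encryption, which already has the write-only property: anyone holding the public key can encrypt (write), but only the private-key holder can read. A minimal sketch of that pipeline (the key ID logkey@example.org, the file names, and the generate_log_data command are hypothetical stand-ins):

        # writing side: only the recipient's public key lives on this box,
        # so a full compromise still reveals no plaintext
        generate_log_data | gpg --encrypt --recipient logkey@example.org > app.log.gpg

        # reading side: requires the private key (and its passphrase)
        gpg --decrypt app.log.gpg | less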

    Read the article

  • Do I need to run a verification on LTO tape backups even though the drives themselves perform verification as they write?

    - by ObligatoryMoniker
    We have an LTO-3 tape drive in a Dell media library that we use for our tape backups. The Wikipedia article on LTO states that:

    "LTO uses an automatic verify-after-write technology to immediately check the data as it is being written, but some backup systems explicitly perform a completely separate tape reading operation to verify the tape was written correctly. This separate verify operation doubles the number of end-to-end passes for each scheduled backup, and reduces the tape life by half."

    What I would like to know is: do I need my backup software (Backup Exec, in this case) to perform a verify on these tapes, or is the verify-after-write technology inherent in LTO drives sufficient? I would also be curious whether Backup Exec understands verify-after-write well enough to alert me if that technology couldn't verify the data, or whether it just ignores it, which would make it useless anyway, since even if the drive detects a problem I would never know about it.

    Read the article

  • Are there ways to write PHP/Python code to run as hooks in the Apache request-processing pipeline?

    - by SB
    Does anybody know of any modules that make it possible to write Python or PHP code that runs as hooks in the Apache request-processing pipeline? For instance, mod_perl lets me write Perl modules containing handlers for the header-parsing phase, content delivery, and even filters. I would like to do something similar in other scripting languages. I could write it in C, but the goal is to deploy a module that would work across a number of systems; if I deliver it as a C binary, it would require 64-bit and 32-bit versions and raise some other issues. With Perl, I can simply require that certain modules and mod_perl2 be installed.

    Read the article

  • How can I trace NTFS and Share Permissions to see why I can (or can't) write a file

    - by hometoast
    I'm trying to track down WHY I can write in a folder that, by my best estimation, I should not be able to write to. The folder is shared with "Everyone" having "Full Control", with the files being more restrictive. My best guess is that some sub-group membership is allowing me to write, but the nesting of groups in our Active Directory is pretty extensive. Is there a tool that will tell me which of the ACL entries allowed or disallowed my writing a file in a folder? The Effective Permissions dialog is marginally helpful, but what I need is something like an "NTFS ACL trace tool", if such a thing exists.

    Read the article

  • Ubuntu USB flash boot drive gets a spontaneous "Unhandled sense code" error that switches the drive to write-protected

    - by Steve
    What happens is that the system runs fine for several days, or even a week, and then suddenly the root filesystem / goes read-only. The syslog shows that there was an "Unhandled sense code". This is under Ubuntu 10.04, but I saw the same thing with Ubuntu 9 on different flash media.

        /dev/sdg1 on / type ext4 (rw,errors=remount-ro)

        Jun 26 08:50:04 host1 kernel: [926247.565090] sd 5:0:0:0: [sda] Unhandled sense code
        Jun 26 08:50:04 host1 kernel: [926247.565094] sd 5:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Jun 26 08:50:04 host1 kernel: [926247.565098] sd 5:0:0:0: [sda] Sense Key : Data Protect [current]
        Jun 26 08:50:04 host1 kernel: [926247.565103] sd 5:0:0:0: [sda] Add. Sense: Write protected
        Jun 26 08:50:04 host1 kernel: [926247.565108] sd 5:0:0:0: [sda] CDB: Write(10): 2a 00 00 46 29 18 00 00 08 00
        Jun 26 08:50:04 host1 kernel: [926247.565117] end_request: I/O error, dev sda, sector 4598040
        Jun 26 08:50:04 host1 kernel: [926247.569788] Buffer I/O error on device sda1, logical block 574499
        Jun 26 08:50:04 host1 kernel: [926247.574677] lost page write due to I/O error on sda1
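
    For what it's worth, the errors=remount-ro mount option shown above is what flips / to read-only after the first failed write. A minimal diagnosis-and-recovery sketch (run the fsck from a separate boot medium; the device names are the ones from the question):

        # confirm the kernel's view of the failure
        dmesg | grep -i 'sense\|i/o error'

        # from a rescue system, check the filesystem before trusting it again
        fsck.ext4 -f /dev/sdg1

        # if the stick accepts writes again, the root fs can be remounted
        mount -o remount,rw /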

    Read the article

  • Why "no projects found to import"?

    - by Roman
    I am trying to "import existing project into workspace". As the "root directory" I select the directory where all my .java (and .class) files are located. Eclipse writes me that "no projects are found to import". Why?

    Read the article

  • Why do many software projects fail today?

    - by TomTom
    For as long as there have been software projects, the world has wondered why they fail so often. I would like to know if there is a list or something equivalent that shows how many software projects fail today. A comparison over the last 20-30 years would be nice. You can also add your top reason why a software project fails. Mine is "requirements are poor or not even existent", which also includes "no (real) customer / user involved".

    EDIT: It is nearly impossible to clearly define the term "fail". Let's say that fail means the project was more than 10% over budget and time. In my opinion, 10% plus or minus is a good range for an offer / tender.

    EDIT: Until now (Feb 11) it seems that most posters agree that a failure of the project is basically a failure of the project management (whatever "fail" means). But IMHO it also comes out that most developers are not happy with this situation. Perhaps because it is not the managers who get penalized when a project is unsuccessful, but the supposedly lazy, incompetent developer teams? When I read the posts, I can also hear that there is a big gap between the developer side and the management side. The expectations (and perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time / budget; users are not happy; not all first-priority features implemented; too many bugs because developers were forced to implement in too-short timeframes ...). I'm asking myself: how can we improve things? Do we even have the possibility to improve them? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high-quality requirements" and "realistic, iteration-based time schedules"?

    EDIT: Ralph Westphal and Stefan Lieser have founded a new community called Clean Code Developer. The aim of the group is to bring more professionalism into software engineering. Regardless of whether a developer has a degree or tons of years of experience, anyone can be part of this movement. Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work, and he has an internal value system which helps him to improve and become better. Check it out at: Clean Code Developer

    EDIT: Our company is currently doing something called "Application Development and Maintenance Benchmarking". This is a service offered by IBM to get feedback from someone external on the quality of your software engineering process, etc. When we get the results, I will tell you more about it.

    Read the article

  • Two's complement: why the name "two"?

    - by lenatis
    I know unsigned, two's complement, ones' complement, and sign-magnitude representations, and the differences between them. What I'm curious about is why it's called two's (or ones') complement. Is there a more general N's complement? And how did these geniuses deduce such a natural way to represent negative numbers?
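
    A worked example of where the names come from (standard textbook definitions, added here for context): for base b and a word length of n digits, the b's complement (radix complement) of x is b^n - x, and the (b-1)'s complement (diminished-radix complement) is (b^n - 1) - x. With b = 2 and n = 4:

        two's complement of 0101 (5):   2^4 - 5       = 11 = 1011
        ones' complement of 0101 (5):   (2^4 - 1) - 5 = 10 = 1010

    So "two" and "one" name the radix, and a general N's complement does exist; decimal machines used the ten's complement, 10^n - x.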

    Read the article

  • Why is a DataGridView so row-centric?

    - by Spike
    Why is there a DataGridViewRow.Cells property, but not a DataGridViewColumn.Cells property? What's so important about rows that I'll never want to iterate down a column? I'm not saying that it makes it particularly difficult to do or anything, it just strikes me as oddly asymmetrical. I'm implementing a "fill down" type behaviour, and it'd be handy is all.

    Read the article

  • Why a meta refresh followed by two redirects?

    - by twneale
    I have encountered several websites where the initial visit by a user triggers an http-equiv refresh to another (usually gibberish) URL, which promptly redirects (302) to a second gibberish URL, which in turn immediately redirects to yet a fourth URL that finally displays the site's landing page. My question is: what the heck? Why would a server be set up to behave this way? Here is a list of a few sites that do this:

        New York State Library - http://nysl.nysed.gov
        New York State Regulations provided by Westlaw - http://government.westlaw.com/linkedslice/default.asp?SP=nycrr-1000

    Read the article

  • Why is Windows sometimes unable to kill a process?

    - by Néstor Sánchez A.
    Right now I'm trying to Run/Debug my app in Visual Studio, but it cannot create it because the last instance of app.vshost.exe is still running. I'm trying to kill that process with the Task Manager, but it just remains there with no sign of activity. Beyond that particular case (maybe a VS bug), I'm very curious about the technical reasons why Windows sometimes cannot kill a process. Could an enlightened OS developer please try to explain? (And please don't start a Unix/Linux/Mac battle against Windows.)

    Read the article

  • Why is PreAuthenticate not enabled by default?

    - by dr. evil
    As far as I understand, WebRequest.PreAuthenticate is almost always good: if I enable it when there are no credentials, it won't try to authenticate, and if there are credentials, it will. So is there any legitimate reason to set it to false? Or is it OK to set it to true even when there are no credentials? And since it's quite useful, why isn't it enabled by default, like many other HTTP features?

    Read the article

  • Why use the Monte Carlo method?

    - by Gili
    When should the Monte Carlo method be used? For example, why did Joel decide to use the Monte Carlo method for Evidence Based Scheduling instead of methodically processing all user data for the past year?
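
    As a toy illustration of the trade-off (a sketch only, unrelated to Joel's actual implementation): Monte Carlo replaces exhaustive enumeration with random sampling, which pays off when the space of cases is too large to process or the inputs are distributions rather than fixed values. Estimating pi by sampling random points in the unit square:

        # Monte Carlo estimate of pi: sample n random points in the unit
        # square and count how many fall inside the quarter circle
        n=100000
        awk -v n="$n" 'BEGIN {
            srand()
            for (i = 0; i < n; i++) {
                x = rand(); y = rand()
                if (x*x + y*y <= 1) hits++
            }
            printf "pi is approximately %f\n", 4 * hits / n
        }'

    More samples shrink the error (roughly as 1/sqrt(n)), so you trade accuracy for running time instead of having to enumerate every case.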

    Read the article

  • Why did Dylan lose to Objective-C?

    - by Adam Gent
    I have played and worked with many different programming languages, and Dylan is still one of my favorites. My question is: why did Dylan fail when Objective-C, Ruby, and even Scheme have had more success? Was Dylan's performance so much worse than Objective-C's that Apple went with the latter, or was it purely for social/political reasons? Hopefully someone from Apple will see this question. :) BTW, if you have no idea what Dylan is, please google "Dylan programming language".

    Read the article

  • Why can't my Apache see my media folder?

    - by alex
    Alias /media/ /home/matt/repos/hello/media

        <Directory /home/matt/repos/hello/media>
            Options -Indexes
            Order deny,allow
            Allow from all
        </Directory>

        WSGIScriptAlias / /home/matt/repos/hello/wsgi/django.wsgi

    /media is my directory. When I go to mydomain.com/media/, it says 403 Forbidden. And the rest of my site doesn't work because all static files are 404s. Why? Edit: hello is my project folder.
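
    One detail worth double-checking in configs of this shape (a general note about how the directive works, not necessarily this poster's bug): Alias performs literal prefix substitution, so Alias /media/ with a target that has no trailing slash maps /media/foo.css to /home/matt/repos/hello/mediafoo.css. Making the trailing slashes match on both arguments, i.e. Alias /media/ /home/matt/repos/hello/media/, avoids that mismatch.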

    Read the article
