Search Results

Search found 21227 results on 850 pages for 'zombie process'.

  • Rails: The Law of Demeter [duplicate]

    - by user2158382
    This question already has an answer here: Rails: Law of Demeter Confusion (4 answers)

    I am reading a book called Rails AntiPatterns, and it talks about using delegation to avoid breaking the Law of Demeter. Here is its prime example. The authors believe that calling something like this in the controller is bad (and I agree):

        @street = @invoice.customer.address.street

    Their proposed solution is to do the following:

        class Customer
          has_one :address
          belongs_to :invoice

          def street
            address.street
          end
        end

        class Invoice
          has_one :customer

          def customer_street
            customer.street
          end
        end

        @street = @invoice.customer_street

    They state that since you only use one dot, you are not breaking the Law of Demeter here. I think this is incorrect, because you are still going through customer to go through address to get the invoice's street. I primarily got this idea from a blog post I read: http://www.dan-manges.com/blog/37

    In the blog post the prime example is:

        class Wallet
          attr_accessor :cash
        end

        class Customer
          has_one :wallet

          # attribute delegation
          def cash
            @wallet.cash
          end
        end

        class Paperboy
          def collect_money(customer, due_amount)
            if customer.cash < due_amount
              raise InsufficientFundsError
            else
              customer.cash -= due_amount
              @collected_amount += due_amount
            end
          end
        end

    The blog post states that although there is only one dot (customer.cash instead of customer.wallet.cash), this code still violates the Law of Demeter: "Now in the Paperboy collect_money method, we don't have two dots, we just have one in 'customer.cash'. Has this delegation solved our problem? Not at all. If we look at the behavior, a paperboy is still reaching directly into a customer's wallet to get cash out."

    EDIT: I completely understand and agree that this is still a violation, that I need to create a method in Wallet called withdraw that handles the payment for me, and that I should call that method inside the Customer class. What I don't get is that, by the same reasoning, my first example still violates the Law of Demeter, because Invoice is still reaching directly into Customer to get the street. Can somebody help me clear up the confusion? I have been searching for the past two days trying to let this topic sink in, but it is still confusing.
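
    For reference, here is a minimal sketch of the withdraw-based design the EDIT describes. This is my own illustration in plain Ruby, not code from the book or the blog post:

        class InsufficientFundsError < StandardError; end

        class Wallet
          attr_reader :cash

          def initialize(cash = 0)
            @cash = cash
          end

          # The wallet guards its own state instead of exposing raw cash.
          def withdraw(amount)
            raise InsufficientFundsError if amount > @cash
            @cash -= amount
            amount
          end
        end

        class Customer
          def initialize(wallet)
            @wallet = wallet
          end

          # Delegates behavior, not data: callers ask the customer to pay.
          def pay(amount)
            @wallet.withdraw(amount)
          end
        end

        class Paperboy
          def initialize
            @collected_amount = 0
          end

          def collect_money(customer, due_amount)
            @collected_amount += customer.pay(due_amount)
          end
        end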

  • Why is access to my database very slow?

    - by Fabien
    I have a MySQL database that used to work perfectly fine, but now it is dead slow on startup. When I type

        $> mysql -u foo bar

    I get the following usual message for about 30 seconds before I get a prompt:

        Reading table information for completion of table and column names
        You can turn off this feature to get a quicker startup with -A

    Of course, I tried it, and it goes a lot faster:

        $> mysql -u foo bar -A

    But why do I have to wait so long on a regular startup? This is not a very big database, and the data does not seem to be corrupted (everything looks fine after startup). I have no other client connecting to the MySQL server at the same time (only one process is shown by show full processlist), and I have already restarted the mysqld service. What's going on?
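
    If the -A behavior turns out to be what you want permanently, the same switch can live in the client section of the MySQL configuration. A sketch, assuming a per-user config file (the location varies; /etc/my.cnf or ~/.my.cnf are common):

        [mysql]
        # Same effect as the -A switch: skip reading table and column
        # metadata for tab completion when the client starts.
        no-auto-rehash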

  • Stuck GhostScript processes, how to debug?

    - by Jonathan
    Having a problem with Ghostscript processes that don't end. This does not happen often; roughly once every 3 weeks we see this issue with 1-3 processes. Running CentOS 6.4 on a VPS from Rackspace. We use PrinceXML to generate PDFs, which uses Ghostscript to handle fonts. Here's an image of top: http://i.stack.imgur.com/J9D7D.jpg

    As you can see, those two processes are using a lot of resources. I haven't killed them yet in hopes someone can help me diagnose. I'm a developer, not a server admin, so I have a basic knowledge of *nix but no clue how to fix this. I installed strace and ran it on each process with the following command:

        strace -p 20619 -s 80 -o gs.txt

    I left it for 5 minutes, and gs.txt is empty. Thanks in advance!
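
    A few more hedged things worth trying on a wedged process, using standard Linux tooling (the PID is carried over from the strace example above):

        # Follow child processes too; an empty trace usually means the process
        # is making no system calls at all (spinning in user space, or stuck).
        strace -f -p 20619 -s 80 -o gs.txt

        # Process state: "R" suggests a busy loop, "D" uninterruptible I/O wait.
        ps -o pid,stat,wchan:30,cmd -p 20619

        # What files, pipes, and sockets it has open.
        ls -l /proc/20619/fd

        # A user-space stack trace, if gdb is installed.
        gdb -p 20619 -batch -ex bt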

  • Hosted EBS 11i Integration Repository Temporarily Offline

    - by Steven Chan (Oracle Development)
    Most developers know that they can integrate their external applications with the E-Business Suite via the business service interfaces and SOA service endpoints documented in the E-Business Suite's Integration Repository. This is shipped as part of EBS 12. Until recently, it was provided as a hosted environment on the Oracle.com domain for EBS 11i.

    Unfortunately, we identified some standards-related issues in the process of switching from the existing server that hosts the EBS 11i environment to a new one, notably in the area of accessibility. Some of those issues will require coding changes to resolve. Given our focus on EBS 12.2 right now, it may take some time to prioritize this relative to our other existing commitments.

    In the meantime, we are required to suspend access to the EBS 11i Integration Repository. I don't have a firm schedule for getting this back online yet, but you're welcome to monitor or subscribe to this blog. I'll post updates here as soon as they're available.

    Related Articles:
    - Integration Repository for the E-Business Suite
    - New Whitepaper: Primer on Integrating EBS 12 with Other Applications

  • Oracle Logical Standby redo generation

    - by DCookie
    Oracle 10.2.0.4 database with a logical standby, on Win2K3. Recently a rather large delete operation was carried out on the production instance. I'm experiencing difficulty with the logical standby, in that it gets a couple of hundred archive logs (58M each) into the operation and then the apply process fails with an out-of-memory error. Unfortunately, every time it fails, it has to restart the apply from the beginning of the transaction. This is taking a couple of days each time.

    Anyway, in trying to resolve this problem, I've noticed that each archive log from the production system generates 5 or 6 log switches on the standby. I don't understand why this should be. Anyone have any ideas?

    A related question that I've not found the answer for: does anyone know if the logical standby must be running in archivelog mode? I really don't have a need to keep the logs.
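
    On the out-of-memory side, one hedged avenue: the SQL apply engine's memory cap can be raised through DBMS_LOGSTDBY before restarting the apply. A sketch; the value is illustrative, and apply must be stopped while the parameter changes:

        -- Stop SQL apply before changing parameters.
        ALTER DATABASE STOP LOGICAL STANDBY APPLY;

        -- Raise the memory available to the apply engine (the value here is
        -- illustrative; size it to what the box can actually spare).
        EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 1024);

        -- Restart apply.
        ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;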

  • Not able to install g++ and gcc on Debian

    - by austin powers
    Hi, I want to use DirectAdmin as my web control panel, and it needs several packages like g++, gcc, etc. As usual, I started by typing apt-get install g++, and there the problems started: a dependency error. Then I tried apt-get -f install and got this error:

        (Reading database ... 15140 files and directories currently installed.)
        Removing libc6-xen ...
        ldconfig: /etc/ld.so.conf.d/libc6-xen.conf:6: hwcap index 0 already defined as nosegneg
        dpkg: error processing libc6-xen (--remove):
         subprocess post-removal script returned error exit status 1
        Errors were encountered while processing:
         libc6-xen
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    What should I do? I want to install g++ and all of its dependencies; I need them for DirectAdmin. Regards.
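
    For the dpkg error itself: the post-removal script is failing because ldconfig chokes on a duplicate hwcap line in libc6-xen's ld.so.conf.d snippet (the line the message points at). A hedged workaround that has been reported to clear this particular error; back up the file first:

        # Comment out the offending hwcap line so ldconfig stops failing...
        sudo sed -i 's/^hwcap/# hwcap/' /etc/ld.so.conf.d/libc6-xen.conf

        # ...then let dpkg finish the removal and retry the dependency fix.
        sudo dpkg --remove libc6-xen
        sudo apt-get -f install
        sudo apt-get install gcc g++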

  • What software works well for viewing massive TIFF images on Windows 7?

    - by nhinkle
    Today I saw an article about a half-gigabyte, 24,000-pixel-square high-res composite image of the moon. (This is a much smaller version of the image.) I find astronomy interesting, so I thought I'd download it and take a look. With 4GB of RAM and an i5 processor, I figured my computer could handle it.

    Unfortunately, the built-in Windows Picture Viewer didn't do such a great job. While it opened the file without a problem, zooming in was ineffective: the zoomed-out image loaded, but zooming in just showed a scaled-up version of the zoomed-out view, not any detail. Closing the picture viewer also took a very long time, and the whole process used up much more RAM than the 500MB of the picture (usage went from 1.3GB to 3.8GB).

    What other software would work better for this? I would prefer something that is free and fairly simple. I don't really want to use an editor (like Photoshop or GIMP), just a nice lightweight viewer. Any suggestions?

  • Add entire 300 GB filesystem to Git Annex repository?

    - by Ryan Lester
    By default, I get an error that the process has too many open files. If I lift the limit manually, I get an error that I'm out of memory. For whatever reason, it seems that Git Annex in its current state is not optimised for this sort of task (adding thousands of files to a repository at once). As a possible solution, my next thought was to do something like:

        cd /
        find . -type d | git annex add --$NONRECURSIVELY
        # Need to add parent directories of each file first or adding files fails
        find . -type f | git annex add

    The problem with this solution is that, from the documentation, there doesn't seem to be a way to non-recursively add a directory in Git Annex. Is there something I'm missing, or a workaround for this? If my proposed solution is a dead end, are there other ways that people have solved this problem?
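
    Independently of the non-recursive question, a hedged workaround for the "too many open files" part is to feed git annex the files in bounded batches, so each invocation stays well under the descriptor limit (the batch size of 200 is arbitrary):

        cd /
        # NUL-delimited to survive odd filenames; 200 files per invocation.
        find . -type f -print0 | xargs -0 -n 200 git annex add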

  • Rsync over ssh with root access on both sides

    - by Tim Abell
    Hi, I have one older Ubuntu server and one newer Debian server, and I am migrating data from the old one to the new one. I want to use rsync to transfer data across, to make the final migration easier and quicker than the equivalent tar/scp/untar process.

    As an example, I want to sync the home folders one at a time to the new server. This requires root access at both ends, as not all files on the source side are world-readable and the destination has to be written with correct permissions into /home. I can't figure out how to give rsync root access on both sides. I've seen a few related questions, but none quite match what I'm trying to do. I have sudo set up and working on both servers.
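
    For what it's worth, the usual recipe here is to run rsync as root locally and have the remote end elevate through sudo via --rsync-path. A sketch, assuming your user on the destination may run rsync as root without a password:

        # Illustrative sudoers entry on the destination (edit with visudo):
        #   tim ALL=(ALL) NOPASSWD: /usr/bin/rsync

        # Then, as root on the old server (-H/-A/-X preserve hard links,
        # ACLs, and extended attributes):
        rsync -aHAX -e ssh --rsync-path="sudo rsync" \
          /home/someuser/ tim@newserver:/home/someuser/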

  • Is there an OpenID demo server out there?

    - by billpg
    Hi everyone. I'm doing some experiments with adding OpenID to something I'm working on, and I'd like to test out a few providers. Is there a server out there that will go through the OpenID login process (the same way the Stack Overflow sites do) and tell me all the information the provider shows? I imagine it would work like this:

    1. I go to example.com and type in https://www.google.com/accounts/o8/id
    2. example.com bounces me to Google. I log in.
    3. Google asks me to confirm that I allow example.com access to everything.
    4. Google bounces me back to example.com.
    5. example.com tells me my OpenID, email address, and anything else it's got.

    Does such a thing already exist?

  • Dedicated server: managed hosting or manage it myself?

    - by ddawber
    We're currently hosting a number of sites on a self-managed dedicated server. Some companies, however, offer a managed dedicated server hosting service. They offer:

    - Roughly the same server spec
    - Ticketing-system support
    - Managed daily backups
    - A virtual firewall (but with a limit of 10 IP addresses allowed through at any one time)

    Now, this managed hosting comes at extra expense - somewhere in the region of $500 per month - and the limit on the number of IP addresses they'll manage on the firewall is also a real pain. My thinking is it would be better and cheaper to:

    - Stay with the same host, since the dedicated box is fine
    - Get an Amazon AWS account and use their servers to manage backups; there are a number of good tools that can be used to automate the process
    - Configure iptables so that I have complete control of the firewall (see the sketch after this question)

    I want to know:

    - Is a managed virtual firewall likely to be more secure than me configuring iptables?
    - Whether, in your opinion, it's best to let someone else take care of backups?
    - If, from your experience, there's anything else I'm missing that warrants using managed hosting over a DIY service?

    I think there is some reluctance to give up managed hosting, since a managed host in effect takes responsibility for your server, whereas any hardware or security issues with a server that we manage would mean we are forced to hold our hands up when a client site goes down. That said, I personally don't think a managed host does that much in the day-to-day running of your server (backups are automatic, OS updates are carried out with ease, etc.).
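
    On the iptables point, a minimal default-deny ruleset of the kind being considered might look like the sketch below. The ports and the trusted management IP are assumptions, and a mistake here can lock you out, so test with console access available:

        # Drop everything inbound by default; allow loopback and outbound.
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT
        iptables -A INPUT -i lo -j ACCEPT

        # Keep established connections alive.
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # SSH from a trusted address only (illustrative IP); web from anywhere.
        iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT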

  • Life Is Full Of Changes (Part 1)

    - by Brian Jackett
    Today will be my last day with Sogeti. I've been with Sogeti USA for just over 4 years. In that time I've gotten to work on some great projects, develop relationships with some brilliant and passionate people, participate in the .NET developer and SharePoint communities, and grow my skills in a number of areas I'm passionate about.

    As with all good things, though, this one must come to an end. I've accepted a position with another company and will provide more details once the transition has completed. This decision was a difficult one to make, but it provides a great career opportunity on many levels. As much as my new schedule allows, I plan to continue participating in local user groups, speaking at conferences, and blogging.

    Speaking of which, you may have noticed my reduced blogging activity in the past few months. In addition to a career change, I'm also in the process of moving to a new residence (only a few miles from my current residence, so I'll still be in Columbus). Searching for a new place, filling out paperwork, and all of the other work associated with this move has taken away a good chunk of the time I used to devote to blogging. Once everything gets settled with the move and job change, I'll re-evaluate how much time I can devote to blogging.

    A big thanks to Sogeti and everyone who has been so supportive over my time with them. It's hard to move on, but I am excited for the prospects that the future will bring.

    -Frog Out

  • Error 404 when accesing newly created ASP.NET website on IIS 7.0

    - by Wodzu
    I've created an ASP.NET website and published it to a folder from Visual Studio. Then I copied the folder to the inetpub\wwwroot directory. Next, under IIS, I converted this folder to an application. Unfortunately, when I try to access it as http://localhost/myappname, I get a 404 error. I was thinking that maybe IIS is not configured to process .aspx files, but at http://localhost/default.aspx there is a working SharePoint website. Have you got any ideas where the problem might be?
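
    One hedged thing to check: whether ASP.NET is registered with IIS for the framework version the site targets. On IIS 7 that is done with the aspnet_regiis tool; the path below assumes .NET 4.0 on a 32-bit install (use the Framework64 folder on 64-bit):

        %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i

    It is also worth confirming that the application's app pool targets the same .NET version the site was published against.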

  • hMailServer Email + MX Records Configuration

    - by asn187
    Trying to make DNS changes to enable email to be sent using hMailServer. My mail server is on a separate machine with a separate IP address. I have already added mydomain.com and an email account, and I have created an MX record with the mail server being mail.mydomain.com and a priority of 20.

    1) But the question is: how do I now link this MX record for the domain to my mail server / mail server IP address?
    2) What changes are needed in hMailServer to complete the process and be able to send emails for the domain?
    3) In Settings > SMTP > Delivery of email: what should my configuration here look like?
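
    For question 1, the linking happens entirely in DNS: the MX record names a host, and an A record maps that host to the mail server's IP. An illustrative zone snippet (203.0.113.25 stands in for the real address):

        ; Give the mail host a name...
        mail.mydomain.com.    IN  A   203.0.113.25
        ; ...then point the domain's mail at that name. An MX target must be
        ; a hostname with an A record, never a raw IP address or a CNAME.
        mydomain.com.         IN  MX  20  mail.mydomain.com.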

  • Restore iPod Touch

    - by Jason
    OK... so I'm part of the iOS developer program and downloaded the iOS 6.0 beta to my iPod Touch. But I was having trouble getting the Xcode beta to run, so I tried to downgrade back to the current 5.1.1. Halfway through the downgrade, the process encountered an error, and now my iPod won't boot. I tried restoring it in iTunes, but I think it's trying to restore to version 6.0, not finding it on the Apple website since it's not out yet, and failing. Any ideas on how to restore my iPod?

  • Debugger for file I/O development?

    - by datenwolf
    Okay, the question title may be a bit cryptic, but it aptly describes what I'm looking for. I think every experienced coder has been through this numerous times: you get a binary file format specification, you implement the reader for it, and... nothing works as expected. So you run your code in the debugger and step through it line by line. Every header field is read in seemingly correctly, but when it comes to the bulk data, offsets and indices no longer match up.

    What would really help in this situation is a binary file viewer that shows you the position of your file pointer as you step through the code, and ideally would also highlight all memory maps. Then you could see the context of the current I/O operations, most notably those darn "off-by-one" mistakes, which are even more annoying when reading a file.

    Implementing such a debugger should not be too hard: traces on the process's file descriptors/handles and triggers on the I/O functions to update the display. Only: I don't know of any such debugger. Do I just lack knowledge of the existence of such a tool, or is there really no such thing?
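
    Lacking such a tool, a poor man's version of the idea can be built into the reader itself. A Python sketch of the concept (an illustration, not a real debugger): wrap the file object, log every read and seek with its offset, and compare the log against the format spec:

        import io

        class TracedFile:
            """Wraps a binary file and logs each read with its start offset."""

            def __init__(self, path):
                self._f = open(path, "rb")

            def read(self, n=-1):
                offset = self._f.tell()
                data = self._f.read(n)
                print(f"read {len(data):6d} bytes at offset 0x{offset:08x}")
                return data

            def seek(self, pos, whence=io.SEEK_SET):
                print(f"seek to {pos} (whence={whence})")
                return self._f.seek(pos, whence)

            def tell(self):
                return self._f.tell()

            def close(self):
                self._f.close()

        # Hand TracedFile to the parser instead of a plain file object;
        # off-by-one mistakes show up as reads starting at odd offsets.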

  • How much effort is involved in moving a WordPress site to a private server? [on hold]

    - by Alan
    I work in tech, but am on the business side. I have a WordPress site that I would like to move to a personal server and associate with a new domain name. I already have a server (actually, a friend is letting me use his) and the domain name. A friend-of-a-friend, who claims to be an IT pro, has agreed to help, but is now asking for what feels like a lot of money for what he says is a pretty time-intensive job. This doesn't sound right to me, so I thought I would ask here: would it really take months, or even days, to move the content, and why would it have to be moved in stages? The blog currently uses a basic template and has about 1000 posts. How much effort is really involved in moving a WordPress site from one server to another? Can anyone explain the process? Would it just make more sense to point the domain name at the existing WordPress blog and pay the nominal yearly fee? I appreciate any answers you can provide.
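
    For scale: the mechanical part of a typical WordPress move is measured in hours, not months. A hedged outline of the usual steps (names and paths are placeholders, and plugins or heavy customization can add work):

        # 1. On the old host: dump the database and archive the files.
        mysqldump -u wpuser -p wpdb > wpdb.sql
        tar czf site.tar.gz /path/to/wordpress

        # 2. Copy both to the new server, restore, and point wp-config.php
        #    at the new database credentials.
        mysql -u wpuser -p wpdb < wpdb.sql

        # 3. Update the site URL for the new domain (the wp-cli tool, if
        #    installed, handles URLs inside serialized data safely).
        wp search-replace 'http://old-domain.com' 'http://new-domain.com'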

  • Employers and intellectual property 2

    - by Rick
    I have a question about intellectual property. I am currently a manager in a small manufacturing firm. The owners are driven by greed, don't appreciate the development process of complex machinery, and are happy just to send things out half done. I, on the other hand, think it should be done properly, as a breakdown in the field can be costly and embarrassing. They seem to have all of us running around doing most of the work out of hours, with an attitude of "be grateful to have a job", yet no one has a contract or any security or any agreement in place.

    For a couple of the projects I am using PLCs, writing the code in my own time and doing the testing during company time, and I am aware that they could not support their own machines if I left. But as I created the code in my own time, who owns it? They have asked me to put in a shutdown code that triggers a maintenance request after a given length of time. Could this be classed as criminal damage, or anything illegal, apart from immoral? (We sell the machines with a 12-month warranty; the shutdown triggers after it expires.)

    As time goes on, I'm getting rather fed up with the company's attitude toward the client. I am considering keeping the clients as my own and getting them to contact me directly via the shutdown code - something like "this is a trial version, contact me for a full license". I wouldn't feel bad for my current employer, as he is not afraid to s***t on people: he has been involved in numerous lawsuits and has over 30 failed companies, leaving people and customers high and dry. We have taken the company this far on the reputation of the workers, and I can see things heading the way of all the other companies he has owned, taking our reputations with him.

    So, I suppose, now I have set the scene: if I code it to contact me directly in the shutdown, could there be any legal impact on me, as I (rightly or wrongly) think I own the code and designs? Cheers, R

  • Search Domain Not Working With Squid

    - by Kyle Brandt
    I just set up a Squid proxy as a parent proxy to HAVP. When I or other users try to access a domain with an address like "http://foo", I get the following Squid error in the browser:

        The dnsserver returned: Server Failure: The name server was unable to process this query.

    However, "http://foo.companyname.com" works fine. The search domain in resolv.conf on both the client and the proxy host is companyname.com. (Is there a better term for "search domain"?) Is there a way to correct this, maybe something in the squid.conf file?
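
    If the goal is for Squid to expand bare hostnames the way the clients' resolver does, there is a directive for exactly that. A sketch (assuming the stock config location; the leading dot is required):

        # /etc/squid/squid.conf
        # Append this domain to request hostnames that contain no dot, so
        # "http://foo" is resolved as foo.companyname.com.
        append_domain .companyname.com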

  • How can I use encryption with Gmail?

    - by Torben Gundtofte-Bruun
    I'm currently reading Cory Doctorow's novel Little Brother, which includes a part about encrypted messaging, and even wrapping messages first in my private key and then in your public key. I'd like to play around with that, but from what I've googled so far it seems to be a rather convoluted process, requiring the installation of several program components, and creating an encrypted message requires some manual file manipulation.

    I'm surprised that I can't find something like a Firefox plugin that integrates encryption into Gmail. I've seen that there is a Thunderbird PGP plugin, but I don't use T-bird. I also saw a blog post saying Google apparently toyed with PGP support in 2009, but nothing has appeared in the meantime.

    Question: to use encryption with Gmail, is there a simpler method than creating a file locally, encrypting it, and finally attaching it to a regular Gmail message?
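
    As a command-line baseline, GnuPG (which most of those plugin stacks wrap) does the sign-then-encrypt wrapping in one invocation rather than several manual steps. A sketch, assuming the recipient's public key is already imported:

        # Sign with your private key, encrypt to the recipient's public key,
        # and ASCII-armor the result so it can be pasted into a Gmail message.
        gpg --armor --sign --encrypt --recipient friend@example.com message.txt
        # Produces message.txt.asc, ready to paste or attach.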

  • Automating the insertion of credits in video files on Mac OS X

    - by Roberto Aloi
    I have a bunch of video files (mp4). I need to insert some title information at the beginning and some credits at the end. I'm currently using iMovie. Since the title can be extracted from the filename and the credits are always the same, I'm wondering how I could make this whole process as automatic as possible. I was thinking about Automator, but I'm open to any other solution. iMovie is the preferred tool so far, but I could use anything else (as long as it doesn't require additional licensing costs). Any ideas?
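
    Outside iMovie, one hedged way to script this is with ffmpeg (free, but an extra install): render a title card from the filename, then concatenate title, video, and a fixed credits clip. A sketch only; the size and filter options are illustrative, the title card must match the source resolution, and drawtext needs an ffmpeg build with font support:

        #!/bin/sh
        f="$1"                                  # e.g. "My Lecture.mp4"
        title=$(basename "$f" .mp4)

        # 3-second black title card, with silent audio so streams line up.
        ffmpeg -f lavfi -i "color=black:s=1280x720:d=3" -f lavfi -i anullsrc \
          -vf "drawtext=text='$title':fontcolor=white:fontsize=48:x=(w-text_w)/2:y=(h-text_h)/2" \
          -shortest title.mp4

        # Stitch title + original + shared credits clip, re-encoding rather
        # than stream-copying so differing codec parameters don't break concat.
        printf "file '%s'\nfile '%s'\nfile '%s'\n" title.mp4 "$f" credits.mp4 > list.txt
        ffmpeg -f concat -safe 0 -i list.txt "titled_$f"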

  • How to pipe differently the body of the curl answer and the printed output?

    - by Antoine Lizée
    I would like to print on the command line some output of curl, like the HTTP headers, followed by the body of the answer processed by a stdin/stdout program. For instance, print the status code:

        curl -s -w "%{http_code} \n" -o "/dev/null" http://myURL.com

    and then process the output with a JSON parsing tool:

        curl -s http://myURL.com | python -mjson.tool

    I would like to do both with one command, and I have the feeling that it may be possible thanks to the -o option, which separates the output of curl from the actual answer to the query. The problem is that -o writes directly to a file. Does somebody have a hack?
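
    One hedged, bash-specific hack: process substitution lets -o "write" to another program instead of a file, so the body flows through the JSON tool while -w prints the status code:

        # bash/zsh only: >(...) is a process substitution that -o treats as a file.
        curl -s -w "%{http_code}\n" -o >(python -mjson.tool) http://myURL.com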

  • How can I instruct nautilus to pre-generate PDF thumbnails?

    - by Glutanimate
    I have a large library of PDF documents (papers, lectures, handouts) that I want to be able to navigate through quickly, and for that I need thumbnails. At the same time, however, I see that the ~/.thumbnails folder is piling up with thumbs I don't really need, and deleting the thumbnail junk without removing the important thumbs is impossible. If I were to delete them, I'd have to go to each and every folder with important PDF documents and let the thumbnail cache regenerate.

    I would love to be able to automate this process. Is there any way I can tell Nautilus to pre-cache the thumbs for a set of given directories? Note: I did find a set of bash scripts that appear to do this for pictures and videos, but not for any other documents. Maybe someone more experienced with scripting could adjust these for PDF documents, or at least point me in the right direction on what I'd have to modify for this to work with PDF documents as well.

    Edit: Unfortunately, neither of the answers provided below works. See my comments below for more information. Is there anyone who can solve this?
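
    A hedged sketch of doing it by hand for PDFs, following the freedesktop thumbnail layout those picture/video scripts rely on: the cached thumbnail's filename is the MD5 of the file's URI, stored in ~/.thumbnails/normal at 128 pixels, and evince-thumbnailer (part of GNOME's PDF stack) can render it. URI escaping of special characters is glossed over here:

        #!/bin/bash
        # Usage: ./prethumb.sh /path/to/pdf/folder
        mkdir -p ~/.thumbnails/normal
        for f in "$1"/*.pdf; do
            abs=$(readlink -f "$f")
            uri="file://$abs"
            hash=$(printf '%s' "$uri" | md5sum | cut -d' ' -f1)
            # -s 128 renders at the "normal" thumbnail size.
            evince-thumbnailer -s 128 "$abs" ~/.thumbnails/normal/"$hash.png"
        done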

  • How to upgrade to Windows 8.1 on a machine with a Users folder on a separate drive?

    - by ahsteele
    I tried to upgrade from Windows 8 to Windows 8.1. Unfortunately, during the upgrade process I received the following error:

        Sorry, it looks like this PC can't run Windows 8.1. This might be because the Users or Program Files folder is being redirected to another partition.

    This is accurate, in that I have my Users directory on my D: drive and Windows installed on my C: drive. I do this because my C: drive is an SSD and my D: drive is a spinning-rust drive where I keep my data. Is it possible to upgrade to Windows 8.1 from a Windows 8 install with a redirected Users folder? I do not consider a full reinstall of Windows 8 with a non-redirected Users folder, and then upgrading that installation, to be "upgrading."

  • Circumventing a manual HTML login page for "unclassified" websites

    - by auramo
    The IT department just made my life a little bit harder again: they introduced a manual HTML login page for all websites they have not "classified". This means that all the applications which try to access unclassified websites, e.g. for downloading plugins, no longer work. Examples: Eclipse plugin installation, Maven builds, etc.

    What would be the easiest workaround for this? The best I've come up with is to extend/customize the httpproxy.rb that ships with Ruby's WEBrick, and automate the manual login process whenever that login response page is detected. This sounds quite painful, and I think there might (and should) be simpler options.
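
    For what it's worth, the WEBrick route may be less painful than it sounds: the proxy class takes a hook that sees every proxied response, which is enough to spot the login page. A sketch; the marker string and the actual re-login POST are the site-specific parts left unwritten:

        require 'webrick'
        require 'webrick/httpproxy'

        # Called for every proxied response; detect the IT login page here.
        handler = proc do |req, res|
          if res.body.to_s.include?('<title>Network Login</title>')  # assumed marker
            # Site-specific part: POST the login form, then re-fetch req.
            warn "login page intercepted for #{req.unparsed_uri}"
          end
        end

        proxy = WEBrick::HTTPProxyServer.new(
          Port: 8080,
          ProxyContentHandler: handler
        )
        trap('INT') { proxy.shutdown }
        proxy.start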
