Search Results

Search found 23323 results on 933 pages for 'worst is better'.

Page 35/933 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • Which style of return is "better" for a method that might return None?

    - by Daenyth
    I have a method that will either return an object or None if the lookup fails. Which of the following styles is better?

        def get_foo(needle):
            haystack = object_dict()
            if needle not in haystack:
                return None
            return haystack[needle]

    or,

        def get_foo(needle):
            haystack = object_dict()
            try:
                return haystack[needle]
            except KeyError:
                # Needle not found
                return None

    I'm undecided as to which is more desirable myself. Another choice would be return haystack[needle] if needle in haystack else None, but I'm not sure that's any better.

    Read the article
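
    A minimal sketch of one more idiom for the question above, dict.get(), assuming object_dict() simply returns a plain dict (the object_dict() stub below is hypothetical):

        def object_dict():
            # Hypothetical stand-in for the asker's lookup table.
            return {"spam": 1, "eggs": 2}

        def get_foo(needle):
            """Return the object for needle, or None if the lookup fails."""
            haystack = object_dict()
            # dict.get() already returns None for a missing key, so neither an
            # explicit membership test nor a try/except on KeyError is needed.
            return haystack.get(needle)

        print(get_foo("spam"))   # 1
        print(get_foo("bacon"))  # None

    With get() there is a single lookup and no exception raised on a miss, which sidesteps the trade-off between the two versions shown in the question.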

  • Is it better to create methods with a long list of parameters or wrap the parameters into an object?

    - by GigaPr
    Hi, is it better (what is the best practice) to create methods with a long list of parameters, or to wrap the parameters into an object? I mean, let's say I have a Client data type with a long list of properties and I want to update all the properties at once. Is it better to do something like

        public int Update(int id, string name, string surname, string streetAddress,
                          string streetAddress2, string postcode, string town, string city,
                          string nationality, string age, string gender, string job) { }

    or wrap all the properties in an object and do something like

        public int Update(Client client) { }

    thanks

    Read the article
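
    A rough Python sketch of the parameter-object style the question above is weighing (the original signatures are C#, but the pattern is language-agnostic); the Client fields below mirror the ones named in the question, and the update body is only a placeholder:

        from dataclasses import dataclass

        # Hypothetical parameter object grouping the fields listed in the question.
        @dataclass
        class Client:
            id: int
            name: str
            surname: str
            street_address: str
            street_address2: str
            postcode: str
            town: str
            city: str
            nationality: str
            age: str
            gender: str
            job: str

        def update(client: Client) -> int:
            # One argument instead of twelve; adding a field later only touches Client.
            print(f"updating client {client.id}: {client.name} {client.surname}")
            return client.id

        # Usage: callers fill in the fields once, at construction time.
        update(Client(1, "Ada", "Lovelace", "12 High St", "", "AB1 2CD",
                      "Camden", "London", "British", "36", "F", "Analyst"))

    Whether this is "better" is exactly what the question asks, but the wrapped form at least keeps the Update signature stable as Client grows.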

  • Is the Intel Core i5 mobile processor better than the Intel Core i7 mobile processor?

    - by Tim Barnaski
    I'm shopping for a new laptop from Dell (going to install Ubuntu on it) and I'm currently trying to sort out which processor I want. Based purely on the clock speed, it would seem that the core i5 is better than the core i7. The Intel® Core™ i5-430M has a 2.26GHz clock speed, while the Intel® Core™ i7-720QM Quad Core Processor only has a 1.6GHz clock speed. Would this not indicate that the i5 is faster than the i7?

    Read the article

  • Which web server architecture do you think is better?

    - by ngache
    1. Use Apache to serve dynamic requests that need to be processed by PHP, and use nginx to serve static files.
    2. Use nginx to serve all requests.

    So the key point is: which of them is more efficient at serving dynamic requests (we have no doubt that nginx is much better than Apache at serving static files)?

    Read the article

  • Seeking (somewhat) better explanations about supporting > 2.1 TB hard drives.

    - by irrational John
    Today while Googling about I stumbled across posts claiming that Seagate plans to ship a 3TB drive sometime later in 2010. Unfortunately, the stuff I looked at all seemed to contain tidbits of info which I didn't think fit together properly. (I would link to some examples, but I'm only allowed 1 link per post at the moment.) Now I really don't have any "need" to better understand the underlying tedious details of this. I am just curious. And confused. So ... some questions I'm hoping someone better informed than I might answer.

    The talk about a potential addressing problem in both the hardware and the software confused me. The assertion is that something called Long LBA addressing (LLBA) is needed in the Command Descriptor Block as a way to get around the current limits to access a hard drive bigger than ~2.1 (or ~2.2?) TB. OK, fine. But I thought the last time this problem came up it was solved by extending the length of the LBA field from 28 to 48 bits. (Remember this website? www.48bitlba.com) A 6 byte LBA is clearly large enough, so what's up with this LLBA talk? I thought this was all fixed back by Win XP SP2, if not sooner? And certainly all the hardware should be up to the task, shouldn't it?

    The real problem as I understand it with drives much bigger than 2 TB is the 4 byte LBA fields in the Master Boot Record (MBR) used to partition just about all hard drives at the moment. The most likely solution is to migrate to Intel's GUID Partition Table (GPT). A GPT uses 8 byte fields for the LBA. What I don't understand in this context is what the problem is with booting, say, Windows from a 3TB drive that uses a GPT. Granted, the current PC BIOS wouldn't know how to recognize or work with a GPT. But every GPT comes with a so-called "Safety" or "Guarding" MBR in sector 0. Apple already uses a hybrid version of the MBR to allow them to boot Windows on their Intel Macs (aka Boot Camp). Couldn't something similar be done to allow the PC BIOS to recognize and boot from a partition in, say, the first 1 GB of a 3TB or larger drive?

    I've got more questions, such as where 4K sectors fit into all of this. But it's probably time I just shut up and posted this. ;-) -irrational John

    Read the article

  • Do certain USB ports work better on my IBM T60?

    - by Xavierjazz
    Hi. I am using a Microsoft wireless phone earpiece, and I have had the receiver plugged into the USB port on the left-hand side. I have been getting intermittent success with the signal. I have recently tried plugging it into one of the ports on the top right-hand side, and it seems that I am getting a better signal. Is there a difference between the ports? Thanks.

    Read the article

  • Install Visual Studio 2010 on SSD drive, or HDD for better performance?

    - by Steve
    I'm going to be installing Visual Studio 2010. I already have my source code on the SSD. For best performance, especially time to open the solution and compile time, would it be better to install VS 2010 on the SSD or on the HDD? If both were on the SSD, loading the VS 2010 files would be quicker, but there would be contention between loading the source and the program files. Thanks!

    Read the article

  • 20GB+ worth of emails in my /home: what is a better solution for that?

    - by Skinkie
    My email storage requirements are outgrowing anything reasonable with respect to local mail storage. As we speak, 99% of my home partition is filled with personal mail in Thunderbird's mail dirs. Needless to say, this is just painful and badly searchable, and history has proven to me that backups work, but Thunderbird is capable of losing a lot of mail very easily. Currently I have a remote IMAPS server (Dovecot) running for my daily mail, accessible from anywhere, which from my own practice works efficiently up to about 1000 emails. Then some archive directories should be used to move mail around. I have been looking into DBMail, but I wonder if I make my case worse or better with such a solution. None of the supported databases employs string deduplication or string compression out of the box, so is this going to help me with 20GB+ of mail? What about falling back to a plain old IMAP server? A filesystem like ZFS would support stuff like GZIP transparently, which could help. Could someone share their thoughts? The 20GB mostly consists of mailing lists and normal mail, not things like attachments.

    To add some clarifications: as we speak, my mail is not server-side indexed at all; only my new mail arrives at a remote IMAP server. It is all local storage from former POP3 accounts and locally mirrored Gmail and IMAP accounts. In my perspective it is not Thunderbird that sucks, it's its file format that sucks. Regarding the 1000 mails: on the road I am using Alpine and MobileMail, quite happy with both of them, but some management is required to actually manage the mail. Sieve helps a lot with that, but browsing through 10,000 e-mails is not fun, especially not on a mobile client. I am quite happy with Dovecot, never had any issues with it. I just wonder if this is the way to go, or if there are any other, better solutions.

    What my question is: what is the best-practice solution that allows 20GB+ of mail and is remotely accessible on demand, easy to back up, and archive-worthy? It doesn't need to be available 24x7.

    The final approach I took was installing a local IMAP server (Dovecot), configured as my archive, using the following guide: http://en.gentoo-wiki.com/wiki/Dovecot/InstallThunderbird

    Read the article

  • How can I better calibrate the colors between my iMac and MacBook Air?

    - by kylehotchkiss
    I just got a MacBook Air, and the differences in color between it and my 2009 iMac are driving me crazy. I know there were certain iMac models from that period with yellow tinting issues, but I did the gray bar tests and that doesn't appear to be the issue. (No hardware issue suspected.) However, my iMac's tones are more prone to yellow than the MacBook's, and I was wondering how to calibrate these two devices better for design work.

    Read the article

  • Is it better to leave your computer on all the time?

    - by Joe Schmoe
    Most of the hardware failures I've had (especially hard drive crashes) have happened when turning the machine on, so is it better to leave your computer on all the time or not? For years, I've heard arguments for...

      - no power surges on start-up
      - steady operating temperature for components

    and against...

      - unnecessary wear on hard drives
      - power wastage

    and I'm still not sure.

    Read the article

  • Windows 7 Home Premium or better on a Netbook?

    - by Michael Stum
    I have a Netbook with Windows XP Home. Processor is an Atom N280 (1.66 GHz) with 2 GB of RAM. I noticed that newer Netbooks come with Windows 7, but as Starter Edition (which kinda sucks). I wonder if there is a technical reason for using Windows 7 Starter? Or would a better edition (x86) perform equally well? I'm currently considering Home Premium, but BitLocker and Offline Files might have me get Ultimate.

    Read the article

  • Better way to do "echo $x | sed ..." and "echo $x | grep ..."

    - by DevSolar
    I often find this in scripts (and, I have to admit, write it myself):

        a=`echo $x | sed "s/foo/bar/"`

    or

        if echo $x | grep foo
        then
            ...
        fi

    Consider "foo" to include some regex stuff. I feel that there should be - and most likely is - a better way to phrase this, one that does not involve two commands and a pipe but wraps the thing into some more compact expression. I just can't find it. Anybody?

    Read the article

  • This pagination script is doodoo - I need a better one!

    - by ClarkSKent
    Hello, I was looking at the pagination script (posted below) and found it to be gross, and not very good at all, especially when trying to customize it. This is what the main page looks like:

        <?php
        include('config.php');
        $per_page = 9;

        // Calculating no of pages
        $sql = "select * from messages";
        $result = mysql_query($sql);
        $count = mysql_num_rows($result);
        $pages = ceil($count/$per_page);
        ?>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.0/jquery.min.js"></script>
        <script type="text/javascript" src="jquery_pagination.js"></script>
        <div id="loading" ></div>
        <div id="content" ></div>
        <ul id="pagination">
        <?php
        // Pagination Numbers
        for($i=1; $i<=$pages; $i++)
        {
            echo '<li id="'.$i.'">'.$i.'</li>';
        }
        ?>
        </ul>

    The top part of the code gets the results from the MySQL db and then uses this information to display the numbers in the body of this page. I am trying to put something like this on a separate page like count_page.php and then just include it. I guess my question is whether there is a better way of doing the above with better structure: a better way to go through the db, count the results, and display the appropriate numbers. The above seems messy. Thanks for any help or suggestions on this.

    Read the article

< Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >