Search Results

Search found 10115 results on 405 pages for 'coding practices'.


  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that my problems came from not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder, yes, silly me. After a while, I managed to get all the .jar files where they belong, and they come to a whopping 12.5MB, and there are more than 30 of them! That concerns me, though hopefully it shouldn't. How does Java operate in terms of these JAR files? They take up quite a bit of hard drive space, and considering that this is compressed, compiled code, it seems it could fill a lot of RAM very quickly. My questions are: Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about stuff in the .jar that never gets used? Do .jars get cached somehow, for better application performance? When a single .jar is loaded, I understand that it sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance), unlike PHP where objects are created on the fly with each request; is this assumption correct? When using Spring, I'm thinking, since I had to include all those fiddly .jars, wouldn't I be better off just using plain Java with at least an ORM solution like Hibernate? So far, Spring has just taken extra time to configure, extra hard drive space, extra memory and extra CPU, so I'm concerned that the framework will cost too much application performance just to get, for example, IoC working with my BlazeDS server. There is still an ORM, a unit-testing framework and bits and pieces here and there to come. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?
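
    As an editorial illustration of the loading question (class names below are hypothetical, not from the question): the JVM loads classes from a JAR lazily, one class at a time, when they are first referenced, and classes that are never referenced are never loaded, so a large WEB-INF/lib does not translate directly into RAM usage. A minimal Java sketch:

        // LazyLoadingDemo.java -- run with: java -verbose:class LazyLoadingDemo
        public class LazyLoadingDemo {

            static class Heavy {
                // This static initializer runs only when Heavy is first referenced,
                // not when the enclosing JAR is opened by the classloader.
                static { System.out.println("Heavy loaded by " + Heavy.class.getClassLoader()); }
                static void work() { System.out.println("working"); }
            }

            static class NeverUsed {
                // Never printed: classes that are never referenced are never loaded.
                static { System.out.println("NeverUsed loaded"); }
            }

            public static void main(String[] args) {
                System.out.println("main started, Heavy not loaded yet");
                Heavy.work(); // triggers loading of Heavy only
            }
        }

    Running with -verbose:class shows individual classes being read on demand; the classloader keeps the JAR open and caches its index, but unused classes stay on disk.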

    Read the article

  • Pitfalls when switching to .NET for Windows CE?

    - by Presidenten
    Hi! I have been developing in .NET for quite some time now, but now I have a customer who wants me to develop an application for them in .NET for Windows CE. I have done some embedded-systems programming in C before, but never in .NET. Please share any tips or tricks that would make my life easier on this assignment, or any pitfalls to watch out for.

    Read the article

  • One class per file rule in .NET?

    - by Joan Venge
    I follow this rule but some of my colleagues disagree with it and argue that if a class is smaller it can be left in the same file with other class(es). Another argument I hear all the time is "Even Microsoft don't do this, so why should we?" What's the general consensus on this? Are there cases where this should be avoided?

    Read the article

  • Technical non-terminating condition in a loop

    - by Snarfblam
    Most of us know that a loop should not have a non-terminating condition. For example, this C# loop has a non-terminating condition: any even value of i. This is an obvious logic error.

        void CountByTwosStartingAt(byte i) {
            // If i is even, it never exceeds 254
            for (; i < 255; i += 2) {
                Console.WriteLine(i);
            }
        }

    Sometimes there are edge cases that are extremely unlikely, but technically constitute non-exiting conditions (stack overflows and out-of-memory errors aside). Suppose you have a function that counts the number of sequential zeros in a stream:

        int CountZeros(Stream s) {
            int total = 0;
            while (s.ReadByte() == 0)
                total++;
            return total;
        }

    Now, suppose you feed it this thing:

        class InfiniteEmptyStream : Stream {
            // ... Other members ...
            public override int Read(byte[] buffer, int offset, int count) {
                Array.Clear(buffer, offset, count); // Output zeros
                return count; // Never returns -1 (end of stream)
            }
        }

    Or more realistically, maybe a stream that returns data from external hardware, which in certain cases might return lots of zeros (such as a game controller sitting on your desk). Either way we have an infinite loop. This particular non-terminating condition stands out, but sometimes they don't. A completely real-world example occurs in an app I'm writing: an endless stream of zeros will be deserialized into infinite "empty" objects (until the collection class or the GC throws an exception because I've exceeded two billion items). But that would be a completely unexpected circumstance (considering my data source). How important is it to have absolutely no non-terminating conditions? How much does this affect "robustness"? Does it matter if they are only "theoretically" non-terminating (is it okay if an exception represents an implicit terminating condition)? Does it matter whether the app is commercial? If it is publicly distributed? Does it matter if the problematic code is in no way accessible through a public interface/API? Edit: One of the primary concerns I have is unforeseen logic errors that can create the non-terminating condition. If, as a rule, you ensure there are no non-terminating conditions, you can identify or handle these logic errors more gracefully, but is it worth it? And when? This is a concern orthogonal to trust.
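
    One common mitigation, sketched here in Java purely for illustration (the cap value and names are assumptions, not from the question), is to give the "theoretically infinite" loop an explicit upper bound so an unforeseen logic error surfaces as an exception rather than a hang:

        import java.io.IOException;
        import java.io.InputStream;

        public final class BoundedCounting {

            // Counts leading zero bytes but refuses to spin forever: once maxZeros
            // is exceeded, the loop bails out with an exception instead of hanging.
            static int countZeros(InputStream in, int maxZeros) throws IOException {
                int total = 0;
                while (in.read() == 0) {
                    total++;
                    if (total > maxZeros) {
                        throw new IllegalStateException(
                            "more than " + maxZeros + " consecutive zeros; input looks degenerate");
                    }
                }
                return total;
            }
        }

    Whether that guard is worth the extra code is exactly the robustness trade-off the question is asking about.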

    Read the article

  • Should I start with Trac 0.12 ?

    - by mree
    I'm going to start using Trac for the first time. From what I've gathered, the latest 0.12 is capable of supporting multiple projects easily (which is something I will need, since I have about 5 projects). However, it seems 0.12 is still in development (0.12-dev). So, my question is: is it good enough for a Trac newbie like me to use? Does anyone have any experience with it? It will be installed on a Linux server. BTW, I'll only be using the basic functions such as the SVN browser, wiki, tickets and a few others.

    Read the article

  • Are upper bounds of indexed ranges always assumed to be exclusive?

    - by polygenelubricants
    So in Java, whenever an indexed range is given, the upper bound is almost always exclusive. From java.lang.String: substring(int beginIndex, int endIndex) Returns a new string that is a substring of this string. The substring begins at the specified beginIndex and extends to the character at index endIndex - 1 From java.util.Arrays: copyOfRange(T[] original, int from, int to) from - the initial index of the range to be copied, inclusive to - the final index of the range to be copied, exclusive. From java.util.BitSet: set(int fromIndex, int toIndex) fromIndex - index of the first bit to be set. toIndex - index after the last bit to be set. As you can see, it does look like Java tries to make it a consistent convention that upper bounds are exclusive. My questions are: Is this the official authoritative recommendation? Are there notable violations that we should be wary of? Is there a name for this system? (ala "0-based" vs "1-based")
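
    As a point of clarification (the snippet below is an editorial illustration, not from the question): this convention is usually called a half-open or exclusive-end range, and its practical benefit is that the length of a range is simply to - from and adjacent ranges line up without overlap:

        import java.util.Arrays;

        public class HalfOpenRanges {
            public static void main(String[] args) {
                String s = "abcdefgh";

                // [2, 5) -> "cde": the length is simply 5 - 2 = 3
                System.out.println(s.substring(2, 5));

                // Adjacent half-open ranges [0, 3) and [3, 6) split the data
                // with no overlap and no gap.
                int[] data = {10, 20, 30, 40, 50, 60};
                System.out.println(Arrays.toString(Arrays.copyOfRange(data, 0, 3))); // [10, 20, 30]
                System.out.println(Arrays.toString(Arrays.copyOfRange(data, 3, 6))); // [40, 50, 60]

                // An empty range falls out naturally as from == to.
                System.out.println("'" + s.substring(4, 4) + "'"); // ''
            }
        }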

    Read the article

  • Could this be considered a well-written PHP5 class?

    - by Ben Dauphinee
    I have been learning OOP principles on my own for a while, and have taken a few cracks at writing classes. What I really need to know now is whether I am actually using what I have learned correctly, or whether I could improve as far as OOP is concerned. I have chopped a massive portion of code out of a class that I have been working on for a while now, and pasted it here. To all you skilled and knowledgeable programmers here I ask: am I doing it wrong?

        class acl extends genericAPI{
            // -- Copied from genericAPI class
            protected final function sanityCheck($what, $check, $vars){
                switch($check){
                    case 'set':
                        if(isset($vars[$what])){return(1);}else{return(0);}
                        break;
                }
            }
            // ---------------------------------

            protected $db = null;
            protected $dataQuery = null;

            public function __construct(Zend_Db_Adapter_Abstract $db, $config = array()){
                $this->db = $db;
                if(!empty($config)){$this->config = $config;}
            }

            protected function _buildQuery($selectType = null, $vars = array()){
                // Removed switches for simplicity sake
                $this->dataQuery = $this->db->select(
                )->from(
                    $this->config['table_users'],
                    array('tf' => '(CASE WHEN count(*) > 0 THEN 1 ELSE 0 END)')
                )->where(
                    $this->config['uidcol'] . ' = ?', $vars['uid']
                );
            }

            protected function _sanityRun_acl($sanitycheck, &$vars){
                switch($sanitycheck){
                    case 'uid_set':
                        if(!$this->sanityCheck('uid', 'set', $vars)){
                            throw new Exception(ERR_ACL_NOUID);
                        }
                        $vars['uid'] = settype($vars['uid'], 'integer');
                        break;
                }
            }

            private function user($action = null, $vars = array()){
                switch($action){
                    case 'exists':
                        $this->_sanityRun_acl('uid_set', $vars);
                        $this->_buildQuery('user_exists_idcheck', $vars);
                        return($this->db->fetchOne($this->dataQuery->__toString()));
                        break;
                }
            }

            public function user_exists($uid){
                return($this->user('exists', array('uid' => $uid)));
            }
        }

        $return = $acl_test->user_exists(1);

    Read the article

  • How to track IIS server performance

    - by Chris Brandsma
    I have a recurring issue where a customer calls up and complains that the web site is too slow. Specifically, if they are inactive for a short period of time and then go back to the site, there will be a one- to two-minute delay before the user sees a response (the standard browser is Firefox in this case). I have Perfmon up and running, CPU utilization is usually below 20% (single proc... don't ask), the database is humming along, and I'm pulling my hair out. So, what metrics/tools do you find useful when evaluating IIS performance?

    Read the article

  • Books on Debugging Techniques?

    - by zooropa
    Are there any books on debugging techniques? A friend of mine is learning to code and he asked me this question. I told him I don't know of any. Is it that you just have to go through the School of Hard Knocks to learn?

    Read the article

  • What's quicker and better to determine if an array key exists in PHP?

    - by alex
    Consider these two examples:

        $key = 'jim';

        // example 1
        if (isset($array[$key])) {
            doWhatIWant();
        }

        // example 2
        if (array_key_exists($key, $array)) {
            doWhatIWant();
        }

    I'm interested in knowing whether either of these is better. I've always used the first, but have seen a lot of people use the second example on this site. So, which is better? Faster? Clearer intent? Update: Thanks for the quality answers; I now understand the difference between the two. A benchmark states that isset() alone is quicker than array_key_exists(). However, if you want isset() to behave like array_key_exists(), it is slower.
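
    The distinction isn't PHP-specific: isset() treats a key whose value is null as absent, while array_key_exists() only asks whether the key is present. By way of analogy only (this Java snippet is an editorial illustration, not from the question), Java's Map API has the same split between a null-check on get() and containsKey():

        import java.util.HashMap;
        import java.util.Map;

        public class KeyPresenceDemo {
            public static void main(String[] args) {
                Map<String, String> map = new HashMap<>();
                map.put("jim", null); // key present, value null

                // "isset"-style check: a null value looks the same as a missing key
                System.out.println(map.get("jim") != null);     // false

                // "array_key_exists"-style check: only asks whether the key exists
                System.out.println(map.containsKey("jim"));     // true
                System.out.println(map.containsKey("missing")); // false
            }
        }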

    Read the article

  • Truth tables in code? How to structure state machine?

    - by HanClinto
    I have a (somewhat) large truth table / state machine that I need to implement in my code (embedded C). I anticipate the behavior specification of this state machine to change in the future, so I'd like to keep it easily modifiable. My truth table has 4 inputs and 4 outputs. I have it all in an Excel spreadsheet, and if I could just paste that into my code with a little formatting, that would be ideal. I was thinking I would like to access my truth table like so:

        u8 newState[] = decisionTable[input1][input2][input3][input4];

    And then I could access the output values with:

        setOutputPin( LINE_0, newState[0] );
        setOutputPin( LINE_1, newState[1] );
        setOutputPin( LINE_2, newState[2] );
        setOutputPin( LINE_3, newState[3] );

    But in order to get that, it looks like I would have to do a fairly confusing table like so:

        static u8 decisionTable[][][][][] =
            {{{{ 0, 0, 0, 0 }, { 0, 0, 0, 0 }},
              {{ 0, 0, 0, 0 }, { 0, 0, 0, 0 }}},
             {{{ 0, 0, 1, 1 }, { 0, 1, 1, 1 }},
              {{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}}},
            {{{{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }},
              {{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}},
             {{{ 0, 1, 1, 1 }, { 0, 1, 1, 1 }},
              {{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}}};

    Those nested brackets can be somewhat confusing -- does anyone have a better idea for how I can keep a pretty looking table in my code? Thanks!

    Edit based on HUAGHAGUAH's answer: Using an amalgamation of everyone's input (thanks -- I wish I could "accept" 3 or 4 of these answers), I think I'm going to try it as a two-dimensional array. I'll index into my array using a small bit-shifting macro:

        #define SM_INPUTS( in0, in1, in2, in3 ) ((in0 << 0) | (in1 << 1) | (in2 << 2) | (in3 << 3))

    And that will let my truth table array look like this:

        static u8 decisionTable[][] = {
            { 0, 0, 0, 0 }, { 0, 0, 0, 0 }, { 0, 0, 0, 0 }, { 0, 0, 0, 0 },
            { 0, 0, 1, 1 }, { 0, 1, 1, 1 }, { 0, 1, 0, 1 }, { 1, 1, 1, 1 },
            { 0, 1, 0, 1 }, { 1, 1, 1, 1 }, { 0, 1, 0, 1 }, { 1, 1, 1, 1 },
            { 0, 1, 1, 1 }, { 0, 1, 1, 1 }, { 0, 1, 0, 1 }, { 1, 1, 1, 1 }};

    And I can then access my truth table like so:

        decisionTable[ SM_INPUTS( line1, line2, line3, line4 ) ]

    I'll give that a shot and see how it works out. I'll also be replacing the 0's and 1's with more helpful #defines that express what each state means, along with /**/ comments that explain the inputs for each line of outputs. Thanks for the help, everyone!

    Read the article

  • Securing input of private / protected methods?

    - by ts
    Hello. Normally, all sane developers try to secure the input of all public methods (casting to proper types, validating, sanitizing etc.). My question is: do you also validate the parameters passed to protected/private methods in your code? In my opinion it is not necessary if you properly secure the parameters of public methods and any values coming from outside (other classes, the db, user input etc.). But I keep running into frameworks and apps (e.g. PrestaShop, to name one) where validation is often repeated at the method call, in the method body, and once again to secure the returned value -- which, I think, creates performance overhead and is also a sign of bad design.
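
    A common compromise, shown here as an illustrative Java sketch (the class and the validation rule are examples, not from the question), is to validate aggressively at the public boundary and use cheap assertions for internal invariants, so private methods document their expectations without re-running full validation on every call:

        import java.util.Objects;

        // Hypothetical example class used only to illustrate the boundary-validation idea.
        public class AccountService {

            // Public boundary: validate everything that arrives from the outside world.
            public void rename(String accountId, String newName) {
                Objects.requireNonNull(accountId, "accountId");
                Objects.requireNonNull(newName, "newName");
                if (newName.isBlank()) {
                    throw new IllegalArgumentException("newName must not be blank");
                }
                applyRename(accountId.trim(), newName.trim());
            }

            // Internal method: callers inside this class are trusted; assertions
            // (enabled with java -ea) catch programming errors without paying for
            // full validation in production.
            private void applyRename(String accountId, String newName) {
                assert !accountId.isBlank() : "validated at the public boundary";
                // ... persist the change ...
            }
        }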

    Read the article

  • What is the standard way to bundle OSGi dependent libraries?

    - by Chris
    Hi, I have a project that references a number of open-source libraries, some new, some not so new. That said, they are all stable and I wish to stick with my chosen versions until I have time to migrate to the newer ones (I tested hsqldb 2.0 yesterday and it contains many API changes). One of the libraries I wish to embed is Jasper Reports, but as you all surely know, it comes with a mountain of supporting jar files, and I only need a known subset of that mountain, therefore I am planning to custom-bundle all of my dependent libraries. So: Does everyone custom-make their own OSGi bundles for the open-source libraries they use, or is there a master source of OSGi versions of common libraries? Also, I was thinking that it would be far simpler for each of my bundles simply to embed their dependent jars within the bundle itself. Is this possible? If I choose to embed the 3rd-party FOC libraries within a bundle, I assume I will need to produce 2 jar files, one without the embedded libraries (to be loaded via the classpath by the standard classloader) and one OSGi version that includes the embedded library; therefore should I choose a bundle name like this: <<myprojectname>>-<<subproject>>-osgi-.1.0.0.jar ? If I cannot embed the open-source libraries and choose to custom-bundle them (via bnd), should I choose a unique bundle name to avoid conflict with a possible official bundle? e.g. <<myprojectname>>-<<3rdpartylibname>>-<<3rdpartylibversion>>.jar ? My non-OSGi-enabled project currently scans for custom plugins via the META-INF folders in my various plugin jars via Service.providers(...). If I go OSGi, will this mechanism still work?

    Read the article

  • Why do people keep parsing HTML using regex? [closed]

    - by polygenelubricants
    As much as I love regular expressions, it's obvious to me that they're not the best tool for parsing HTML, especially given the numerous good HTML parsers out there. And yet there are numerous questions on Stack Overflow that attempt to parse HTML using regex. People always point out what a bad idea that is in the comments, and the accepted answer often carries a disclaimer that this isn't really the ideal way of doing things. But based on the constant flow of questions, it still seems that people keep parsing HTML using regex, despite the perceived difficulty in reading and maintaining it (and that's putting correctness aside for now). So my question is: why? Is it because it's easy to learn? Is it because it's faster? Is it because it's the industry standard? Is it because there are already so many reusable regexes to build from? Is it because 100% correctness is never really the objective (is 90% good enough)? etc... I'd also like to hear from the downvoters why they did so. Is it because there's absolutely nothing wrong with using regex to parse HTML and asking "why?" is just dumb? Or because the premise of the question is flawed, since the people who use regex to parse HTML are such a small minority?

    Read the article

  • Finding databases for use in applications

    - by JonF
    Does anyone have recommendations on how I can find databases for random things that I might want to use in my application? For example, a database of zip code locations, area code cities, car engines, IP address locations, or whatever. I'm just asking generally: when you decide you need a bunch of data, where are some good places to start looking other than Google?

    Read the article

  • Finding relative libraries when using symlinks to ruby executables

    - by dgtized
    Imagine you have an executable foo.rb, with a library bar.rb, laid out in the following manner:

        <root>/bin/foo.rb
        <root>/lib/bar.rb

    In the header of foo.rb you place the following require to bring in the functionality in bar.rb:

        require File.dirname(__FILE__)+"../lib/bar.rb"

    This works fine as long as all calls to foo.rb are direct. If you put <root> at, say, $HOME/project, and symlink foo.rb into $HOME/usr/bin, then __FILE__ resolves to $HOME/usr/bin/foo.rb, and it is thus unable to locate bar.rb relative to foo.rb's dirname. I realize that packaging systems such as RubyGems fix this by creating a namespace to search for the library, and that it is also possible to adjust the load path using $: to include $HOME/project/lib, but it seems as if a simpler solution should exist. Has anyone had experience with this problem and found a useful solution or recipe?

    Read the article

  • What is a maintainable way of saving a "star rating" in a database?

    - by Montecristo
    I'll use a jQuery plugin to present the user with a nice interface. The request is to display 5 stars, up to a total score of 10 (2 points per star). For now I thought about using 7/10 as the format for that value, but what if at some point in the future I receive a request like: "We would like to give users more choice, let's increase the total score to 20 (so that each star contributes a maximum of 4 points)"? I'd end up with a table with mixed values in the "star rating" column: some like 7/10 and others like 14/20. Is it OK to have this difference in the database and deal with it in the logic layer to keep it consistent? Or is another way preferred, so that querying the table will not produce inconsistent results outside the application? Maybe floating-point values could help me: is it better to store the value as a number less than or equal to one? Then in each of the two examples the value stored in the database would be 0.7, as a number, not a varchar, which can also be queried outside the application. What do you think?
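
    One way to make the floating-point idea concrete (an illustrative Java sketch only; the scale values and names are assumptions): store a normalized score in [0, 1] and convert to whatever star scale the UI currently uses, so old 7/10 ratings and new 14/20 ratings become the same stored number:

        public final class StarRating {

            // Store this value in the database as a plain numeric column in [0, 1].
            static double normalize(int points, int maxPoints) {
                if (points < 0 || points > maxPoints) {
                    throw new IllegalArgumentException("points out of range");
                }
                return (double) points / maxPoints;
            }

            // Convert back to whatever scale the UI is using today.
            static int toScale(double normalized, int maxPoints) {
                return (int) Math.round(normalized * maxPoints);
            }

            public static void main(String[] args) {
                double stored = normalize(7, 10);         // 0.7
                System.out.println(stored);               // 0.7
                System.out.println(toScale(stored, 20));  // 14 on the new 20-point scale
            }
        }

    The normalized column stays queryable (averaging, sorting) no matter how the star scale changes later.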

    Read the article

  • Implementing a Tagging System with PHP and MySQL. Caching help!

    - by Hamid Sarfraz
    With reference to this post: http://stackoverflow.com/questions/2122546/how-to-implement-tag-counting -- I have implemented the suggested 3-table tagging system completely. To count the number of articles per tag, I am using another column named tagArticleCount in the tag definition table (the other columns are tagId, tagText, tagUrl, tagArticleCount). If I implement realtime editing of this table, then whenever a user adds another tag to an article or deletes an existing tag, the tag_definition_table is updated to adjust the counter of the added/removed tag. This costs an extra query for every modification (at the same time, the related link entry for the tag and article is deleted from the tagLinkTable). An alternative is to not allow any realtime editing of the counter, and instead use a cron job to update each tag's counter after a specified time period. Here comes the problem I want to discuss: this can be seen as caching the article count in the database. Can you please help me find a way to present the articles in a list when a tag is explored and the article counter for that tag is not up to date? For example: 1. The counter shows 50 articles, but there are in fact 55 entries in the tag link table (that links tags and articles). 2. The counter shows 50 articles, but there are in fact 45 entries in the tag link table. How would you handle these two scenarios? I am going to use APC to cache these counters -- consider that too in your solution. Also, please discuss the performance of realtime vs. cronned counter updates.

    Read the article

  • The Wheel Invention - Beneficial For Learning?

    - by Sarfraz
    Hello. Chris Coyier of css-tricks.com has written a good article titled Regarding Wheel Invention. In one paragraph he says: On the “reinventing” side, you benefit from complete control and learning from the process. And on the very next line he says: On the other side, you benefit from speed, reliability, and familiarity. Also often at odds are time spent and cost. I think he is right in both statements, and I really like the first one: I do sometimes reinvent the wheel to learn more and gain complete control over what I am building. I wonder why people are so biased against that. Isn't there the benefit of learning and gaining complete control, and probably some other benefits too? I would love to see what you have to say about this.

    Read the article

  • Using ref to return errors

    - by Avram
    Hello. I'm now working at a firm that uses ref in every function. The reason is to catch errors. Their example:

        // Returns true if the read succeeds,
        // otherwise writes the problem to error
        bool ReadFile(ref string error)

    Question: How do you catch errors? Using ref, exceptions, or some other way?
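
    For contrast, here is a hedged Java illustration of the two conventions the question compares (Java has no ref, so the error out-parameter is emulated with a holder array; all names are invented for the example): a status return plus an error slot versus letting an exception carry the error.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        public class ErrorStyles {

            // Style 1: boolean status plus an "error out-parameter" emulated with a holder.
            static boolean tryReadFile(Path path, String[] error) {
                try {
                    Files.readString(path);
                    return true;
                } catch (IOException e) {
                    error[0] = e.getMessage(); // the caller must remember to check this
                    return false;
                }
            }

            // Style 2: let the exception propagate; the failure cause cannot be silently lost.
            static String readFile(Path path) throws IOException {
                return Files.readString(path);
            }
        }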

    Read the article

  • package private static member class vs. package private class

    - by Helper Method
    I was writing two implementations of a linked list for an assignment: a doubly linked list and a circular doubly linked list. Since the class representing a link within the linked list is the same in both implementations, I want to use it in both. Now I wonder which approach would be better: implement the Link class as a package-private static member class in the first implementation and then use that class in the second implementation, or make Link a package-private top-level class.
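
    A minimal sketch of the second option, purely for illustration (type and field names are assumptions): a package-private top-level Link keeps both list implementations independent of each other, whereas a static member class nested in one list would force the other list to reach into a sibling implementation.

        // All in one package; in a real project each class would sit in its own file.
        // Link is package-private, so it is visible only to classes in this package.
        class Link<E> {
            E value;
            Link<E> prev;
            Link<E> next;

            Link(E value) { this.value = value; }
        }

        class DoublyLinkedList<E> {
            Link<E> head; // uses the shared Link
        }

        class CircularDoublyLinkedList<E> {
            Link<E> head; // same shared Link, no dependency on DoublyLinkedList
        }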

    Read the article

  • Where do I put the Current user query so as to not repeat per controller?

    - by Kevin
    I have a standard query that gets the current user object: @user = User.find_by_email(session[:email]) -- but I'm putting it as the first line of every single controller action, which is obviously not the best way to do this. What is the best way to refactor this? Do I put it as a method in the ApplicationController (and if so, can you show me a quick example)? Do I put the entire @user object into the session (it has about 50 columns, and some sensitive ones like is_admin)? Or is there another way to remove this kind of redundancy?

    Read the article
