Search Results

Search found 8692 results on 348 pages for 'patterns and practices'.

Page 183/348

  • Fastest way to check for value existence.

    - by Itay Moav
    I have a list of values, and I have to check my input against it for existence. Which is the faster way? This is really out of curiosity about how the internals work, not about premature optimization.

    1. $x = array('v' => '', 'c' => '', 'w' => '');
       ..
       array_key_exists($input, $x);

    2. $x = array('v', 'c', 'w');
       ..
       in_array($input, $x);
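
    For illustration, here is a rough Python analogue of the two options (the question itself is about PHP; the names below are made up): looking a value up in a keyed, hashed structure is an average-case O(1) operation, while scanning a plain list is O(n).

        # Python sketch of the same trade-off, not the original PHP.
        values_keyed = {'v', 'c', 'w'}      # hashed, like keys checked with array_key_exists()
        values_list  = ['v', 'c', 'w']      # scanned, like values checked with in_array()

        def exists_hashed(user_input):
            return user_input in values_keyed    # average O(1) hash lookup

        def exists_scanned(user_input):
            return user_input in values_list     # O(n) linear scan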

    Read the article

  • Is it always a bad idea to use inline CSS for a used-once property?

    - by user93422
    I have a table with 10 columns, and I want to control the width of each column. Each column is unique; right now I create an external CSS rule for each one:

        div#my-page table#members th.name-col { width: 40px; }

    I know the accepted best practice is to avoid inline styles, and I do approve of using external CSS for anything look-and-feel related: fonts, colors, images. But is it really better to use external CSS in this case? Inline styles would not incur extra maintenance cost here, and they are easier to produce. The cons I can think of: if you have separate design and development teams, inline styles force the designers to modify the content file (an .aspx in my case), and it might use a bit more bandwidth. Anything else I've missed?

    Read the article

  • One database or many?

    - by dsims
    I am developing a website that will manage data for multiple entities. No data is shared between entities, but several entities may be owned by the same customer, and a customer may want to manage all of their entities from a single "dashboard". So should I have one database for everything, or keep the data separated into individual databases? Is there a best practice? What are the positives and negatives of having:

    - one database for the entire site (each entity row has a "customerID", each data row has an "entityID")
    - one database per customer (each data row has an "entityID")
    - one database per entity (the relation of database to customer is kept outside the database)

    Multiple databases seem like they would perform better (fewer rows and joins) but may eventually become a maintenance nightmare.

    Read the article

  • Imposing email limits on web page

    - by Martin
    To discourage spammers, what is a good strategy for imposing limits on users when they send email from our site? A count limit per day on individual IPs? On sender addresses? On domains? General guidance is welcome, but recommended figures would also be helpful. Our users can send emails through our web page. They can register and log in, but they are also allowed to send without logging in, in which case they must pass a captcha and fill in a sender email field. Admittedly, each message carries a header line, "The user has sent you the following message.", which limits its usefulness to spammers, so perhaps it's not a big problem. Any comments on what I'm doing will be greatly appreciated.
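
    For illustration, a minimal sketch of a per-IP daily cap (the limit value and the in-memory store are assumptions; a real site would persist the counters in a database or cache shared by all web workers):

        from collections import defaultdict
        from datetime import date

        DAILY_LIMIT_PER_IP = 10                  # illustrative figure, tune to your traffic
        _sent_today = defaultdict(int)           # (ip, day) -> messages sent so far

        def allow_send(ip):
            key = (ip, date.today())
            if _sent_today[key] >= DAILY_LIMIT_PER_IP:
                return False                     # over the cap: reject or queue for review
            _sent_today[key] += 1
            return True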

    Read the article

  • Avoiding repetition with libraries that use a setup + execute model

    - by lijie
    Some libraries offer the ability to separate setup from execution, especially if the setup portion has undesirable characteristics such as unbounded latency. If the program needs to have this reflected in its structure, then it is natural to have:

        void setupXXX(...); // which calls the setup stuff
        void doXXX(...);    // which calls the execute stuff

    The problem with this is that the structure of setupXXX and doXXX is going to be quite similar (at least textually -- control flow will probably be more complex in doXXX). I am wondering if there are any ways to avoid this. Example: let's say we're doing signal processing, filtering with a known kernel in the frequency domain. setupXXX and doXXX would probably look something like this:

        void doFilter(FilterStuff *c) {
            for (int i = 0; i < c->N; ++i) {
                doFFT(c->x[i], c->fft_forward_setup, c->tmp);
                doMultiplyVector(c->tmp, c->filter);
                doFFT(c->tmp, c->fft_inverse_setup, c->x[i]);
            }
        }

        void setupFilter(FilterStuff *c) {
            setupFFT(..., &(c->fft_forward_setup));
            // assign the kernel to c->filter ...
            setupFFT(..., &(c->fft_inverse_setup));
        }
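
    One way to reduce the textual duplication, sketched below in Python under the assumption that each stage can be described once and reused by both phases (the step contents and lambdas are placeholders, not a real FFT API): describe the pipeline as data and derive both setupXXX and doXXX from the same list.

        class Step:
            def __init__(self, setup, run):
                self.setup = setup          # callable producing per-step state (the slow part)
                self.run = run              # callable taking (state, data) (the fast part)
                self.state = None

        def setup_pipeline(steps):
            for s in steps:
                s.state = s.setup()

        def run_pipeline(steps, data):
            for s in steps:
                data = s.run(s.state, data)
            return data

        # The run lambdas below are no-op stand-ins for the real per-step work.
        steps = [
            Step(setup=lambda: {"plan": "fft-forward"}, run=lambda st, d: d),
            Step(setup=lambda: {"kernel": [1, 0, -1]},  run=lambda st, d: d),
            Step(setup=lambda: {"plan": "fft-inverse"}, run=lambda st, d: d),
        ]
        setup_pipeline(steps)                          # unbounded-latency phase, done once
        result = run_pipeline(steps, data=[0.0] * 8)   # cheap phase, done per input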

    Read the article

  • Any reason to clean up unused imports in Java, other than reducing clutter?

    - by Kip
    Is there any good reason to avoid unused import statements in Java? As I understand it, they are there for the compiler, so lots of unused imports won't have any impact on the compiled code. Is it just to reduce clutter and to avoid naming conflicts down the line? (I ask because Eclipse warns about unused imports, which is kind of annoying while I'm developing code, because I don't want to remove the imports until I'm pretty sure I'm done designing the class.)

    Read the article

  • Should I go back and fix work when I learn something new/better?

    - by SnOrfus
    Considering that we're all constantly learning, we all eventually come across a point where we learn something that improves our code, or parts of it, significantly. The question is: when you've learned some new technique, strategy, or whatever, do you (or should you) go back to code that you know works, but that could be so much more maintainable, faster, or generally improved, and apply the new knowledge? I understand the idea of "if it ain't broke, don't fix it", but when does that become losing pride in code you've already written, and what does it say about refactoring?

    Read the article

  • How do you like to define your module-wide variables in Drupal 6?

    - by sprugman
    I'm in my module file. I want to define some complex variables for use throughout the module. For simple things, I'm doing this:

        function mymodule_init() {
          define('SOME_CONSTANT', 'foo bar');
        }

    But that won't work for more complex structures. Here are some ideas that I've thought of.

    Global:

        function mymodule_init() {
          $GLOBALS['mymodule_var'] = array('foo' => 'bar');
        }

    variable_set():

        function mymodule_init() {
          variable_set('mymodule_var', array('foo' => 'bar'));
        }

    Property of a module class:

        class MyModule {
          static $var = array('foo' => 'bar');
        }

    variable_set()/variable_get() seems like the most "Drupal" way, but I'm drawn toward the class setup. Are there any drawbacks to that? Any other approaches out there?

    Read the article

  • Do preconditions ALWAYS have to be checked?

    - by Pin
    These days I'm in the habit of checking every single precondition for every function, a habit I picked up from an OS programming course back at uni. On the other hand, in the software engineering course we were taught that a common precondition should only be checked once: if one function delegates to another, the first function should check the preconditions, and checking them again in the second is redundant. I do see the redundancy point, but I certainly feel it's safer to always check them, and you don't have to keep track of where they were checked previously. What's the best practice here?
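
    For illustration, a minimal sketch of the "check once at the boundary" convention (the function names and the validation rule are made up): the public entry point validates its arguments, and internal helpers merely document the contract with assertions instead of re-validating.

        def public_resize(image, width, height):
            if width <= 0 or height <= 0:
                raise ValueError("width and height must be positive")
            return _resize_unchecked(image, width, height)

        def _resize_unchecked(image, width, height):
            # The assertion documents the precondition without duplicating the
            # user-facing validation; it disappears when Python runs with -O.
            assert width > 0 and height > 0
            return image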

    Read the article

  • Should I put a try-finally block after every Object.Create?

    - by max
    I have a general question about best practice in OO Delphi. Currently, I put a try..finally block anywhere I create an object, to free that object after use (and avoid memory leaks). E.g.:

        aObject := TObject.Create;
        try
          aObject.AProcedure();
          ...
        finally
          aObject.Free;
        end;

    instead of:

        aObject := TObject.Create;
        aObject.AProcedure();
        ...
        aObject.Free;

    Do you think this is good practice, or too much overhead? And what about the performance?
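
    The same "guaranteed cleanup" idea, shown for comparison in Python rather than Delphi (a sketch only, with a made-up Resource class): the try..finally form mirrors the Delphi code, and a context manager lets you write the pattern once instead of repeating it at every call site.

        from contextlib import contextmanager

        class Resource:
            def close(self):
                print("freed")

        resource = Resource()
        try:
            pass                    # ... use resource ...
        finally:
            resource.close()        # runs even if the work above raises

        @contextmanager
        def managed(res):
            try:
                yield res
            finally:
                res.close()

        with managed(Resource()) as r:
            pass                    # ... use r ...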

    Read the article

  • Abstract away a compound identity value for use in business logic?

    - by John K
    While separating business logic and data-access logic into two different assemblies, I want to abstract away the concept of identity so that the business logic deals with one consistent identity without having to understand its actual representation in the data source. I've been calling this a compound identity abstraction. Data sources in this project are swappable and varied, and the business logic shouldn't care which data source is currently in use. Identity is the toughest part because its implementation can change with the kind of data source, whereas other fields like name, address, etc. are consistently scalar values. What I'm looking for is a good way to abstract the concept of identity, whether that be an existing library, a software pattern, or just a solid idea of some kind. The proposed compound identity value would have to be comparable and usable in the business logic, and it must be possible to pass it back to the data source to specify which records, entities, and/or documents to affect, so the data source must be able to parse the details back out of its own compound ids. Data source examples (to give an idea of what I mean by different identity implementations): a relational data source might express a piece of content with an integer identifier plus a language-specific code, for example:

        content_id   language   (other columns with the content details)
        1            en_us      ...
        1            fr_ca      ...

    The identity of the first record in the above example is 1 + en_us. When a NoSQL data source is substituted, it might instead represent each piece of content with a GUID string (936DA01F-9ABD-4d9d-80C7-02AF85C822A8) plus a language code from a different standard, and a third kind of data source might use just a simple scalar value. So on and so forth; you get the idea.
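
    One possible shape for this, sketched in Python (the class and method names are illustrative, not from any particular library): the business logic treats the identity as an opaque, comparable value, and only the data source that minted it knows how to unpack its parts.

        from dataclasses import dataclass

        @dataclass(frozen=True)               # frozen -> comparable and hashable by value
        class ContentId:
            parts: tuple                      # e.g. (1, "en_us") or ("936DA01F-...", ...)

        class RelationalSource:
            def make_id(self, content_id, language):
                return ContentId((content_id, language))

            def load(self, cid):
                content_id, language = cid.parts      # only the source unpacks its own id
                # ... e.g. SELECT ... WHERE content_id = ? AND language = ?
                return {"content_id": content_id, "language": language}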

    Read the article

  • Implementing search functionality with multiple optional parameters against a database table.

    - by quarkX
    Hello, I would like to ask whether there is a preferred design pattern for implementing search functionality with multiple optional parameters against a database table, where access to the database should only be via stored procedures. The target platform is .NET with a SQL Server 2005/2008 backend, but I think this is a pretty generic problem. For example, we have a customer table and we want to give the UI search functionality over several parameters, like customer type, customer state, customer zip, etc. All of them are optional and can be combined freely: the user can search by customer type only, by customer type and customer zip, or by any other combination. There are several possible design approaches, but all of them have some disadvantages, and I would like to ask if there is a preferred design among them, or another approach entirely.

    1. Generate the WHERE clause dynamically in the business tier, based on the search request from the UI, and pass it to a stored procedure as a parameter, something like @Where = 'where CustomerZip = 111111'. Inside the stored procedure, build a dynamic SQL statement and execute it with sp_executesql. Disadvantage: dynamic SQL and the risk of SQL injection.

    2. Implement a stored procedure with multiple input parameters representing the search fields from the UI, and use the following construction in the WHERE clause so that only the supplied fields actually filter:

        WHERE (CustomerType = @CustomerType OR @CustomerType IS NULL)
          AND (CustomerZip  = @CustomerZip  OR @CustomerZip  IS NULL)
          AND ...

       Disadvantage: possible performance issues with the resulting query plan.

    3. Implement a separate stored procedure for each combination of search parameters. Disadvantage: the number of stored procedures grows rapidly with the number of search parameters, and code is repeated.
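
    For what it's worth, here is a small sketch (in Python with a DB-API style cursor, not a stored procedure, using the column names mentioned in the question) of the dynamic-filter idea from option 1 done with bind parameters, so the dynamic part is limited to which placeholders appear rather than to user-supplied text:

        def build_customer_search(filters):
            clauses, params = [], []
            for column in ("CustomerType", "CustomerState", "CustomerZip"):
                value = filters.get(column)
                if value is not None:
                    clauses.append(column + " = ?")     # only bind placeholders, never raw input
                    params.append(value)
            where = " AND ".join(clauses) or "1 = 1"    # no filters -> match everything
            return "SELECT * FROM Customer WHERE " + where, params

        sql, params = build_customer_search({"CustomerZip": "11111"})
        # cursor.execute(sql, params)   # with any DB-API compatible cursor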

    Read the article

  • Using try vs if in python

    - by artdanil
    Is there a rationale for deciding which of the try or if constructs to use when testing whether a variable has a value? For example, there is a function that either returns a list or doesn't return a value. I want to check the result before processing it. Which of the following would be preferable, and why?

        result = function()
        if result:
            for r in result:
                # process items

    or

        result = function()
        try:
            for r in result:
                # process items
        except TypeError:
            pass

    Related discussion: Checking for member existence in Python
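
    One detail worth noting (a small sketch under the assumption that the function signals "no value" by returning None): an explicit None test still processes a legitimately empty list, whereas a plain truthiness check skips it.

        def function():
            return []                 # stand-in: may also return None when there is no value

        result = function()
        if result is not None:        # unlike "if result:", an empty list is still processed
            for r in result:
                pass                  # process items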

    Read the article

  • Encapsulation of code in Matlab

    - by user531225
    My code is:

        pathname = uigetdir;
        filename = uigetfile('*.txt', 'choose a file name.');
        data = importdata(filename);
        element = data.data(:,10);
        in_array = element;
        pattern = [1 3];
        locations = cell(1, numel(pattern));
        for p = 1:numel(pattern)
            locations{p} = find(in_array == pattern(p));
        end
        idx2 = [];
        for p = 1:numel(locations{1})
            start_value = locations{1}(p);
            for q = 2:numel(locations)
                found = true;
                if (~any((start_value + q - 1) == locations{q}))
                    found = false;
                    break;
                end
            end
            if (found)
                idx2(end + 1) = locations{1}(p);
            end
        end
        [m2, n2] = size(idx2)
        res_name = {'one' 'two'};
        res = [n n2];

    In this code I am finding a pattern in one of the columns of my data file and counting how many times it is repeated. I have about 200 files that I want to do the same with, but unfortunately I'm stuck. This is what I have added so far:

        pathname = uigetdir;
        files = dir('*.txt');
        for k = 1:length(files)
            filename = files(k).name;
            data(k) = importdata(files(k).name);
            element{k} = data(1,k).data(:,20);
            in_array = element;
            pattern = [1 3];
            locations = cell(1, numel(pattern));
            for p = 1:numel(pattern)
                locations{p} = find(in_array{k} == pattern(p));
            end
            idx2{k} = [];

    How can I continue this code?

    Read the article

  • AS3: Performance question calling an event function with null param

    - by adehaas
    Lately I needed to call a listener function without an actual listener, like so:

        foo(null);

        private function foo(event:Event):void {
            // do something
        }

    So I was wondering whether there is a significant performance difference between that and the following, in which I can avoid passing null when calling the function without a listener, but can still call it with a listener as well:

        foo();

        private function foo(event:Event = null):void {
        }

    I am not sure whether it is just a question of style, or actually bad practice, and whether I should write two similar functions, one with and one without the event parameter (which seems cumbersome to me). Looking forward to your opinions, thanks.

    Read the article

  • Tests that are 2-3 times bigger than the testable code

    - by HeavyWave
    Is it normal to have tests that are far bigger than the actual code being tested? For every line of code I test, I usually have two or three lines in the unit test, which ultimately means a lot of time spent just typing the tests in (mock, mock, and mock some more). Where are the time savings? Do you ever skip tests for code that is more or less trivial? Most of my methods are fewer than 10 lines long, and testing each one takes a lot of time -- to the point where, as you can see, I'm starting to question writing most of the tests in the first place. I am not advocating skipping unit testing; I like it. I just want to see what factors people consider before writing tests. They come at a cost (in time, hence money), so this cost must be weighed somehow. How do you estimate the savings created by your unit tests, if you do at all?

    Read the article

  • Is it acceptable to design my GLSurfaceView as a main control class?

    - by Omega
    I'm trying to structure a game I'm making in Android so that I have a sound, flexible design. Right now I'm looking at where I can tie my game's rules engine and graphics engine together, and at what should sit between them. At a glance, I've been eyeing my implementation of GLSurfaceView, where various screen events are captured. My rationale would be to create an instance of my game engine and graphics engine there, and to receive the events and state changes that trigger updates of either where applicable. Further to this, in the future, the GLSurfaceView implementation could also store stubs for players during a network game, as well as implementations of computer opponents, and dispatch to them appropriately. Does this seem like a sensible design? Are there any improvements I could make? Thanks for any input!

    Read the article

  • What is the right way to implement communication between Java objects?

    - by imoschak
    I'm working on an academic project that simulates a rather large queuing procedure in Java. The core of the simulator lives in one package containing 8 classes, each implementing a single concept; every class in the project follows the single responsibility principle. These classes encapsulate the behavior of the simulator and interconnect with every other class in the project. The problem that has arisen is that most of these 8 classes are, logically enough I think, tightly coupled, and each one has to have working knowledge of every other class in the package in order to call its methods when needed. The application needs only one instance of each class, so it might be better to create static fields for each class in a new class and use that to make the calls, instead of keeping a reference in each class to every other class in the package (which I'm fairly certain is incorrect). But is that considered a correct design solution, or is there perhaps a design pattern that better suits my needs?
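
    One commonly suggested shape for this, sketched here in Python rather than Java (the class names are placeholders for the simulator's real concepts): a single context object owns the one instance of each collaborator and wires them together at construction time, so each class depends only on the peers it actually needs instead of on static fields or on every other class in the package.

        class EventQueue:
            pass

        class StatisticsCollector:
            pass

        class Scheduler:
            def __init__(self, events, stats):
                self.events = events        # receives only the collaborators it needs
                self.stats = stats

        class SimulationContext:
            # Owns the single instance of each collaborator and performs the wiring.
            def __init__(self):
                self.events = EventQueue()
                self.stats = StatisticsCollector()
                self.scheduler = Scheduler(self.events, self.stats)

        context = SimulationContext()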

    Read the article
