Search Results

Search found 5930 results on 238 pages for 'coding standards'.


  • Where should JavaScript be put?

    - by NessDan
    I've been doing a little JavaScript (well, more like jQuery) for a while now and one thing I've always been confused about is where I should put my scripts, in the <head> tag or in the <body> tag. If anyone could clarify this issue, that'd be great. An example of what should go where would be perfect.

    Read the article

  • Creative ways to punish (or just curb) laziness in coworkers

    - by FerretallicA
    Like the subject suggests, what are some creative ways to curb laziness in co-workers? By laziness I'm talking about things like using variable names like "inttheemplrcd" instead of "intEmployerCode" or not keeping their projects synced with SVN, not just people who use the last of the sugar in the coffee room and don't refill the jar.

    So far the two most effective things I've done both involve the core library my company uses. Since most of our programs are in VB.net, the lack of case sensitivity is abused a lot. I've got certain features of the library using Reflection to access data in the client apps, which has a negligible performance hit and introduces case sensitivity in a lot of places where it is used. In instances where we have an agreed standard which is compromised by blatant laziness I take it a step further, like the DatabaseController class which will blatantly reject any DataTable passed to it which isn't named dtSomething (i.e. must begin with dt and the third letter must be capitalised). It's frustrating to have to resort to things like this, but it has also gradually helped drill more attention to detail into their heads.

    Another is adding some code to the library's initialisation function to display a big and potentially embarrassing (only if seen by a client) message advising that the program is running in debug mode. We have had many instances where projects are sent to clients built in debug mode, which has a lot of implications for us (especially with regard to error recovery), and doing that has made sure they always build to release before distributing.

    Any other creative (i.e. not StyleCop etc.) approaches like this?

    Read the article

  • Are colons allowed in URIs?

    - by Emanuil
    I thought using colons in URIs was "illegal". Then I saw that vimeo.com is using URIs like http://www.vimeo.com/tag:sample. What do you feel about the usage of colons in URIs? And how do I make my Apache server work with the "colon" syntax? It currently throws an "Access forbidden!" error when there is a colon in the first segment of the URI.

    Read the article

  • When does invoking a member function on a null instance result in undefined behavior?

    - by GMan
    This question arose in the comments of a now-deleted answer to this other question. Our question was asked in the comments by STingRaySC as: Where exactly do we invoke UB? Is it calling a member function through an invalid pointer? Or is it calling a member function that accesses member data through an invalid pointer? With the answer deleted I figured we might as well make it its own question. Consider the following code:

        #include <iostream>

        struct foo {
            void bar(void) { std::cout << "gman was here" << std::endl; }
            void baz(void) { x = 5; }
            int x;
        };

        int main(void) {
            foo* f = 0;
            f->bar(); // (a)
            f->baz(); // (b)
        }

    We expect (b) to crash, because there is no corresponding member x for the null pointer. In practice, (a) doesn't crash because the this pointer is never used. Because (b) dereferences the this pointer (this->x = 5;), and this is null, the program enters undefined behavior. Does (a) result in undefined behavior? What about if both functions are static?
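    For contrast, a minimal sketch (names are illustrative, not from the question) of the static variant asked about at the end; a static member function never forms a this pointer, so the null value is only used to name the function:

        #include <iostream>

        struct sfoo {
            static void bar(void) { std::cout << "static bar" << std::endl; }
        };

        int main(void) {
            sfoo* f = 0;
            f->bar(); // the object expression is evaluated, but no this pointer is involved
            return 0;
        }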

    Read the article

  • Why does std::cout convert volatile pointers to bool?

    - by Joseph Garvin
    If you try to cout a volatile pointer, even a volatile char pointer where you would normally expect cout to print the string, you will instead simply get '1' (assuming the pointer is not null, I think). I assume output stream operator<< is template specialized for volatile pointers, but my question is, why? What use case motivates this behavior? Example code:

        #include <iostream>
        #include <cstring>

        int main() {
            char x[500];
            std::strcpy(x, "Hello world");
            int y;
            int *z = &y;
            std::cout << x << std::endl;
            std::cout << (char volatile*)x << std::endl;
            std::cout << z << std::endl;
            std::cout << (int volatile*)z << std::endl;
            return 0;
        }

    Output:

        Hello world
        1
        0x8046b6c
        1
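    As an aside, a hedged sketch of one way to get the string printed anyway, assuming the underlying array is not genuinely volatile: cast the volatility away so the char* overload of operator<< is selected again.

        #include <iostream>

        int main() {
            char x[] = "Hello world";
            char volatile* vp = x;
            std::cout << const_cast<char*>(vp) << std::endl; // prints "Hello world"
            return 0;
        }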

    Read the article

  • Should I use dialog boxes in my web application?

    - by Tom
    I'm developing a typical web application with functions like add, remove, view, search and other yada yada. However, I'm uncertain how much I should rely on dialog boxes. Should I have a dialog box for adding information to the system or perhaps only as a confirmation when deleting something? I could also, for example, use a login dialog box instead of a login page. Should modern web sites be designed so that they use dialog boxes? Are there any general guidelines for when to use a dialog box in a web application or is it more "when I feel like it"?

    Read the article

  • Why would you use "AS" when aliasing a SQL table?

    - by froadie
    I just came across a SQL statement that uses AS to alias tables, like this:

        SELECT all, my, stuff
        FROM someTableName AS a
        INNER JOIN someOtherTableName AS b ON a.id = b.id

    What I'm used to seeing is:

        SELECT all, my, stuff
        FROM someTableName a
        INNER JOIN someOtherTableName b ON a.id = b.id

    I'm assuming there's no difference and it's just syntactic sugar, but which of these is more prevalent/widespread? Is there any reason to prefer one over the other?

    Read the article

  • Why is it assumed that send may return with less than requested data transmitted on a blocking socket?

    - by Ernelli
    The standard method to send data on a stream socket has always been to call send with a chunk of data to write, check the return value to see if all data was sent and then keep calling send again until the whole message has been accepted. For example, this is a simple example of a common scheme:

        int send_all(int sock, unsigned char *buffer, int len)
        {
            int nsent;
            while (len > 0) {
                nsent = send(sock, buffer, len, 0);
                if (nsent == -1) // error
                    return -1;
                buffer += nsent;
                len -= nsent;
            }
            return 0; // ok, all data sent
        }

    Even the BSD manpage mentions that "...If no messages space is available at the socket to hold the message to be transmitted, then send() normally blocks...", which indicates that we should assume that send may return without sending all data.

    Now I find this rather broken, but even W. Richard Stevens assumes this in his standard reference book about network programming; not in the beginning chapters, but the more advanced examples use his own writen ("write all data") function instead of calling write.

    Now I consider this still to be more or less broken, since if send is not able to transmit all data or accept the data in the underlying buffer and the socket is blocking, then send should block and return when the whole send request has been accepted. I mean, in the code example above, what will happen if send returns with less data sent is that it will be called right again with a new request. What has changed since the last call? At most a few hundred CPU cycles have passed, so the buffer is still full. If send now accepts the data, why couldn't it accept it before? Otherwise we will end up with an inefficient loop where we are trying to send data on a socket that cannot accept data and keep trying, or else?

    So it seems like the workaround, if needed, results in heavily inefficient code, and in those circumstances blocking sockets should be avoided at all and non-blocking sockets together with select should be used instead.
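    For reference, a rough sketch (untested, POSIX-style) of the non-blocking-plus-select approach mentioned at the end: instead of spinning on short send() returns, wait until the socket reports it is writable again.

        #include <sys/select.h>
        #include <sys/socket.h>
        #include <fcntl.h>
        #include <errno.h>

        int send_all_nonblocking(int sock, const unsigned char *buffer, int len)
        {
            // put the socket into non-blocking mode
            fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);
            while (len > 0) {
                int nsent = send(sock, buffer, len, 0);
                if (nsent >= 0) {
                    buffer += nsent;
                    len -= nsent;
                } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
                    // kernel buffer full: sleep until there is room instead of spinning
                    fd_set wfds;
                    FD_ZERO(&wfds);
                    FD_SET(sock, &wfds);
                    if (select(sock + 1, 0, &wfds, 0, 0) == -1)
                        return -1;
                } else {
                    return -1; // real error
                }
            }
            return 0; // ok, all data sent
        }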

    Read the article

  • Global qualification in a class declaration's class-head

    - by gf
    We found something similar to the following (don't ask ...):

        namespace N { struct A { struct B; }; }
        struct A { struct B; };
        using namespace N;
        struct ::A::B {}; // <- point of interest

    Interestingly, this compiles fine with VS2005, icc 11.1 and Comeau (online), but fails with GCC:

        global qualification of class name is invalid before '{' token

    From C++03, Annex A, it seems to me like GCC is right: the class-head can consist of nested-name-specifier and identifier; nested-name-specifier can't begin with a global qualification (::); obviously, neither can identifier ... or am I overlooking something?

    Read the article

  • What's the best way to transition to MVC coding?

    - by ggfan
    It's been around 5 months since I picked up a PHP book and started coding in PHP. At first, I created all my sites without any organizational plan or MVC. I soon found out that was a pain. Then I started to read on Stack Overflow about how to separate PHP and HTML, and that's what I have been doing ever since. Ex:

        profile.php <-- this file is HTML/CSS; I just echo the functions here.
        profile_functions.php <-- this file is mostly PHP; it has the functions.

    This is how I have been separating all my coding so far and now I feel I should move on and start MVC. But the problem is, I never used classes before and suck with them. And since MVC (such as CakePHP and CodeIgniter) is all classes, that can't be good. My question: are there any good books/sites/articles that teach you how to code in MVC? I am looking for beginner beginner books :) I just started reading the CodeIgniter manual and I think I am going to use that. When I looked at the example MVC, they use different PHP coding. When I start coding in MVC, would I have to learn a "new" way to code? Because right now I am coding in pure basic PHP.

    Read the article

  • Can a destructor be recursive?

    - by Cubbi
    Is this program well-defined, and if not, why exactly?

        #include <iostream>
        #include <new>

        struct X {
            int cnt;
            X (int i) : cnt(i) {}
            ~X() {
                std::cout << "destructor called, cnt=" << cnt << std::endl;
                if ( cnt-- > 0 )
                    this->X::~X(); // explicit recursive call to dtor
            }
        };

        int main() {
            char* buf = new char[sizeof(X)];
            X* p = new(buf) X(7);
            p->X::~X(); // explicit call to dtor
            delete[] buf;
        }

    My reasoning: although invoking a destructor twice is undefined behavior, per 12.4/14, what it says exactly is this: "the behavior is undefined if the destructor is invoked for an object whose lifetime has ended". Which does not seem to prohibit recursive calls. While the destructor for an object is executing, the object's lifetime has not yet ended, thus it's not UB to invoke the destructor again. On the other hand, 12.4/6 says: "After executing the body [...] a destructor for class X calls the destructors for X's direct members, the destructors for X's direct base classes [...]", which means that after the return from a recursive invocation of a destructor, all member and base class destructors will have been called, and calling them again when returning to the previous level of recursion would be UB. Therefore, a class with no base and only POD members can have a recursive destructor without UB. Am I right?

    Read the article

  • What is the proper way to URL encode Unicode characters?

    - by Josh Gibson
    I know of the non-standard %uxxxx scheme but that doesn't seem like a wise choice since the scheme has been rejected by the W3C. Some interesting examples:

    The heart character. If I type this into my browser:

        http://www.google.com/search?q=♥

    then copy and paste it, I see this URL:

        http://www.google.com/search?q=%E2%99%A5

    which makes it seem like Firefox (or Safari) is doing this:

        urllib.quote_plus(x.encode("latin-1"))
        '%E2%99%A5'

    which makes sense, except for things that can't be encoded in Latin-1, like the triple dot character (…). If I type the URL

        http://www.google.com/search?q=…

    into my browser then copy and paste, I get

        http://www.google.com/search?q=%E2%80%A6

    back, which seems to be the result of doing

        urllib.quote_plus(x.encode("utf-8"))

    which makes sense since … can't be encoded with Latin-1. But then it's not clear to me how the browser knows whether to decode with UTF-8 or Latin-1. Since this seems to be ambiguous:

        In [67]: u"…".encode('utf-8').decode('latin-1')
        Out[67]: u'\xc3\xa2\xc2\x80\xc2\xa6'

    works, so I don't know how the browser figures out whether to decode that with UTF-8 or Latin-1. What's the right thing to be doing with the special characters I need to deal with?
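    For what it's worth, a small sketch (illustrative only) of the convention that the %E2%99%A5 / %E2%80%A6 forms above correspond to: take the UTF-8 bytes of the text and percent-encode every byte outside the unreserved set.

        #include <cstdio>
        #include <string>

        std::string percent_encode_utf8(const std::string& utf8) {
            const std::string unreserved =
                "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~";
            std::string out;
            for (std::string::size_type i = 0; i < utf8.size(); ++i) {
                unsigned char c = static_cast<unsigned char>(utf8[i]);
                if (unreserved.find(static_cast<char>(c)) != std::string::npos) {
                    out += static_cast<char>(c);
                } else {
                    char buf[4];
                    std::snprintf(buf, sizeof(buf), "%%%02X", c); // one escape per byte
                    out += buf;
                }
            }
            return out;
        }

        int main() {
            std::printf("%s\n", percent_encode_utf8("\xE2\x99\xA5").c_str()); // %E2%99%A5
            return 0;
        }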

    Read the article

  • Strict doctype - form and input element

    - by David
    Does anyone know the reasoning behind the strict doctype not allowing input elements to be direct descendants of a form element? I find it annoying that I have to wrap a submit button inside a block-level element, say a fieldset or a div. However, I cannot find an answer anywhere as to why this actually is.

    Read the article

  • Can a conforming C implementation #define NULL to be something wacky?

    - by janks
    I'm asking because of the discussion that's been provoked in this thread: http://stackoverflow.com/questions/2597142/when-was-the-null-macro-not-0/2597232 Trying to have a serious back-and-forth discussion using comments under other people's replies is not easy or fun. So I'd like to hear what our C experts think without being restricted to 500 characters at a time.

    The C standard has precious few words to say about NULL and null pointer constants. There are only two relevant sections that I can find. First:

        3.2.2.3 Pointers
        An integral constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is assigned to or compared for equality to a pointer, the constant is converted to a pointer of that type. Such a pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.

    and second:

        4.1.5 Common definitions <stddef.h>
        The macros are NULL which expands to an implementation-defined null pointer constant;

    The question is, can NULL expand to an implementation-defined null pointer constant that is different from the ones enumerated in 3.2.2.3? In particular, could it be defined as:

        #define NULL __builtin_magic_null_pointer

    Or even:

        #define NULL ((void*)-1)

    My reading of 3.2.2.3 is that it specifies that an integral constant expression of 0, and an integral constant expression of 0 cast to type void*, must be among the forms of null pointer constant that the implementation recognizes, but that it isn't meant to be an exhaustive list. I believe that the implementation is free to recognize other source constructs as null pointer constants, so long as no other rules are broken. So for example, it is provable that

        #define NULL (-1)

    is not a legal definition, because in

        if (NULL) do_stuff();

    do_stuff() must not be called, whereas with

        if (-1) do_stuff();

    do_stuff() must be called; since they are equivalent, this cannot be a legal definition of NULL. But the standard says that integer-to-pointer conversions (and vice-versa) are implementation-defined, therefore it could define the conversion of -1 to a pointer as a conversion that produces a null pointer. In which case if ((void*)-1) would evaluate to false, and all would be well.

    So what do other people think? I'd ask for everybody to especially keep in mind the "as-if" rule described in 2.1.2.3 Program execution. It's huge and somewhat roundabout, so I won't paste it here, but it essentially says that an implementation merely has to produce the same observable side-effects as are required of the abstract machine described by the standard. It says that any optimizations, transformations, or whatever else the compiler wants to do to your program are perfectly legal so long as the observable side-effects of the program aren't changed by them. So if you are looking to prove that a particular definition of NULL cannot be legal, you'll need to come up with a program that can prove it. Either one like mine that blatantly breaks other clauses in the standard, or one that can legally detect whatever magic the compiler has to do to make the strange NULL definition work.

    Steve Jessop found an example of a way for a program to detect that NULL isn't defined to be one of the two forms of null pointer constants in 3.2.2.3, which is to stringize the constant:

        #define stringize_helper(x) #x
        #define stringize(x) stringize_helper(x)

    Using this macro, one could puts(stringize(NULL)); and "detect" that NULL does not expand to one of the forms in 3.2.2.3. Is that enough to render other definitions illegal? I just don't know. Thanks!
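    A self-contained version of that probe, as a sketch: it prints the literal spelling the implementation's <stddef.h> uses for NULL (for example "0", "((void *)0)", or g++'s "__null").

        #include <stdio.h>
        #include <stddef.h>

        #define stringize_helper(x) #x
        #define stringize(x) stringize_helper(x)

        int main(void) {
            puts(stringize(NULL)); /* shows what NULL actually expands to */
            return 0;
        }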

    Read the article

  • When is a C++ terminate handler the Right Thing(TM)?

    - by Joseph Garvin
    The C++ standard provides the std::set_terminate function which lets you specify what function std::terminate should actually call. std::terminate should only get called in dire circumstances, and sure enough the situations the standard describes for when it's called are dire (e.g. an uncaught exception). When std::terminate does get called the situation seems analogous to being out of memory -- there's not really much you can sensibly do. I've read that it can be used to make sure resources are freed -- but for the majority of resources this should be handled automatically by the OS when the process exits (e.g. file handles). Theoretically I can see a case for it if, say, you needed to send a server a specific message when exiting due to a crash. But the majority of the time the OS handling should be sufficient. When is using a terminate handler the Right Thing(TM)? Update: People interested in what can be done with custom terminate handlers might find this non-portable trick useful.
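    For the record, a minimal sketch of wiring one up (handler name and message are illustrative): the handler runs when std::terminate is reached, must not return, and is where last-ditch logging or that "tell the server we crashed" message would go.

        #include <cstdlib>
        #include <exception>
        #include <iostream>

        void on_terminate() {
            std::cerr << "terminating: flush logs / notify server here" << std::endl;
            std::abort(); // a terminate handler must not return to its caller
        }

        int main() {
            std::set_terminate(on_terminate);
            throw 42; // no matching handler anywhere, so std::terminate() runs on_terminate
        }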

    Read the article

  • How does the last integer promotion rule ever get applied in C?

    - by SiegeX
    6.3.1.8p1: Otherwise, the integer promotions are performed on both operands. Then the following rules are applied to the promoted operands:

    - If both operands have the same type, then no further conversion is needed.
    - Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.
    - Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type.
    - Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type.
    - Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.

    For the last (bolded) rule to be applied, it would seem to imply you need to have an unsigned integer type whose rank is less than the signed integer type's, and the signed integer type cannot hold all the values of the unsigned integer type. Is there a real-world example of such a case or is this statement serving as a catch-all to cover all possible permutations?
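    One commonly cited case, sketched below under the assumption of an ILP32-style target where int and long are both 32 bits: long has greater rank than unsigned int but cannot represent all of its values, so the last rule converts both operands to unsigned long.

        #include <stdio.h>

        int main(void) {
            unsigned int u = 4000000000u;
            long s = -1; /* higher rank than unsigned int, but can't hold 4000000000 */
            /* both operands convert to unsigned long; prints 3999999999 on such a target */
            printf("%lu\n", u + s);
            return 0;
        }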

    Read the article

  • Effects of the `extern` keyword on C functions

    - by Elazar Leibovich
    In C I did not notice any effect of the extern keyword used before function declaration. At first I thought that defining extern int f(); in a single file forces you to implement it outside of the file's scope. However, I found out that both

        extern int f();
        int f() {return 0;}

    and

        extern int f() {return 0;}

    compile just fine, with no warnings from gcc. I used gcc -Wall -ansi; it wouldn't even accept // comments. Are there any effects of using extern before function definitions? Or is it just an optional keyword with no side effects for functions? In the latter case I don't understand why the standard designers chose to litter the grammar with superfluous keywords. EDIT: to clarify, I know there's usage for extern with variables, but I'm only asking about extern with functions.
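    A small sketch of the contrast hinted at in the EDIT (symbols are illustrative): on function declarations extern is redundant because they have external linkage by default, whereas on a file-scope object it turns what would otherwise be a definition into a mere declaration.

        extern int f(void);   /* declaration with external linkage            */
        int f(void);          /* exactly the same meaning without the keyword */

        extern int g;         /* declaration only: no storage reserved here   */
        int g = 0;            /* the actual definition                        */

        int f(void) { return g; }

        int main(void) { return f(); }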

    Read the article

  • Return enum instead of bool from function for clarity?

    - by Moe Sisko
    This is similar to http://stackoverflow.com/questions/2908876/net-bool-vs-enum-as-a-method-parameter but concerns returning a bool from a function in some situations. E.g. a function which returns bool:

        public bool Poll()
        {
            bool isFinished = false;
            // do something, then determine if finished or not.
            return isFinished;
        }

    Used like this:

        while (!Poll())
        {
            // do stuff during wait.
        }

    It's not obvious from the calling context what the bool returned from Poll() means. It might be clearer in some ways if the "Poll" function was renamed "IsFinished()", but the method does a bit of work, and (IMO) the name would not really reflect what the function actually does. Names like "IsFinished" also seem more appropriate for properties. Another option might be to rename it to something like "PollAndReturnIsFinished", but this doesn't feel right either. So an option might be to return an enum. E.g.:

        public enum Status { Running, Finished }

        public Status Poll()
        {
            Status status = Status.Running;
            // do something, then determine if finished or not.
            return status;
        }

    Called like this:

        while (Poll() == Status.Running)
        {
            // do stuff during wait.
        }

    But this feels like overkill. Any ideas?

    Read the article

  • Is auto_ptr deprecated?

    - by idimba
    Is auto_ptr deprecated in the incoming C++ standard? Should unique_ptr be used for ownership transfer instead of shared_ptr? If unique_ptr is not in the standard, then do I need to use shared_ptr instead?
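    A tiny C++11-style sketch of the distinction the question is after: unique_ptr is the ownership-transfer replacement for auto_ptr (the transfer is spelled out with std::move), while shared_ptr is for genuinely shared ownership.

        #include <memory>
        #include <utility>

        std::unique_ptr<int> make_value() {
            return std::unique_ptr<int>(new int(42)); // sole owner
        }

        int main() {
            std::unique_ptr<int> a = make_value();
            std::unique_ptr<int> b = std::move(a);    // explicit ownership transfer; a is now empty
            std::shared_ptr<int> c(new int(7));
            std::shared_ptr<int> d = c;               // shared ownership via reference counting
            return 0;
        }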

    Read the article

  • C: 8x8 -> 16 bit multiply precision guaranteed by integer promotions?

    - by craig-blome
    I'm trying to figure out if the C Standard (C90, though I'm working off Derek Jones' annotated C99 book) guarantees that I will not lose precision multiplying two unsigned 8-bit values and storing to a 16-bit result. An example statement is as follows:

        unsigned char foo;
        unsigned int foo_u16 = foo * 10;

    Our Keil 8051 compiler (v7.50 at present) will generate a MUL AB instruction which stores the MSB in the B register and the LSB in the accumulator. If I cast foo to an unsigned int first:

        unsigned int foo_u16 = (unsigned int)foo * 10;

    then the compiler correctly decides I want an unsigned int there and generates an expensive call to a 16x16 bit integer multiply routine. I would like to argue beyond reasonable doubt that this defensive measure is not necessary. As I read the integer promotions described in 6.3.1.1, the effect of the first line shall be as if foo and 10 were promoted to unsigned int, the multiplication performed, and the result stored as unsigned int in foo_u16. If the compiler knows an instruction that does 8x8 -> 16 bit multiplications without loss of precision, so much the better; but the precision is guaranteed. Am I reading this correctly? Best regards, Craig Blome

    Read the article
