Search Results

Search found 2923 results on 117 pages for 'naming standards'.


  • Can a destructor be recursive?

    - by Cubbi
    Is this program well-defined, and if not, why exactly?

        #include <iostream>
        #include <new>

        struct X {
            int cnt;
            X(int i) : cnt(i) {}
            ~X() {
                std::cout << "destructor called, cnt=" << cnt << std::endl;
                if (cnt-- > 0)
                    this->X::~X(); // explicit recursive call to dtor
            }
        };

        int main() {
            char* buf = new char[sizeof(X)];
            X* p = new (buf) X(7);
            p->X::~X(); // explicit call to dtor
            delete[] buf;
        }

    My reasoning: although invoking a destructor twice is undefined behavior per 12.4/14, what it says exactly is this:

        The behavior is undefined if the destructor is invoked for an object whose lifetime has ended.

    That does not seem to prohibit recursive calls: while the destructor for an object is executing, the object's lifetime has not yet ended, so it is not UB to invoke the destructor again. On the other hand, 12.4/6 says:

        After executing the body [...] a destructor for class X calls the destructors for X's direct members, the destructors for X's direct base classes [...]

    which means that after the return from a recursive invocation of a destructor, all member and base class destructors will have been called, and calling them again when returning to the previous level of recursion would be UB. Therefore, a class with no bases and only POD members can have a recursive destructor without UB. Am I right?

  • What is the proper way to URL encode Unicode characters?

    - by Josh Gibson
    I know of the non-standard %uxxxx scheme, but that doesn't seem like a wise choice since it has been rejected by the W3C.

    Some interesting examples: the heart character. If I type this into my browser:

        http://www.google.com/search?q=♥

    then copy and paste it, I see this URL:

        http://www.google.com/search?q=%E2%99%A5

    which makes it seem like Firefox (or Safari) is doing this:

        urllib.quote_plus(x.encode("latin-1"))
        '%E2%99%A5'

    which makes sense, except for things that can't be encoded in Latin-1, like the triple-dot character:

        …

    If I type the URL

        http://www.google.com/search?q=…

    into my browser, then copy and paste, I get

        http://www.google.com/search?q=%E2%80%A6

    back, which seems to be the result of doing

        urllib.quote_plus(x.encode("utf-8"))

    which makes sense, since … can't be encoded with Latin-1. But then it's not clear to me how the browser knows whether to decode with UTF-8 or Latin-1, since this seems to be ambiguous:

        In [67]: u"…".encode('utf-8').decode('latin-1')
        Out[67]: u'\xc3\xa2\xc2\x80\xc2\xa6'

    works, so I don't know how the browser figures out whether to decode with UTF-8 or Latin-1. What's the right thing to be doing with the special characters I need to deal with?
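
    For what it's worth, a sketch of the byte-level transformation the browsers above appear to apply: take the UTF-8 bytes and percent-encode everything outside the RFC 3986 unreserved set. It is written as C++ for concreteness, and percent_encode_utf8 is a hypothetical helper name, not any standard API:

        #include <cstdio>
        #include <string>

        std::string percent_encode_utf8(const std::string& utf8) {
            static const char hex[] = "0123456789ABCDEF";
            std::string out;
            for (unsigned char c : utf8) {
                // RFC 3986 "unreserved" characters pass through untouched.
                bool unreserved = (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
                                  (c >= '0' && c <= '9') || c == '-' || c == '.' ||
                                  c == '_' || c == '~';
                if (unreserved) {
                    out += static_cast<char>(c);
                } else {
                    out += '%';
                    out += hex[c >> 4];  // high nibble
                    out += hex[c & 0xF]; // low nibble
                }
            }
            return out;
        }

        int main() {
            // "\xE2\x99\xA5" is the UTF-8 encoding of U+2665, the heart character.
            std::printf("%s\n", percent_encode_utf8("\xE2\x99\xA5").c_str()); // %E2%99%A5
        }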

  • Strict doctype - form and input element

    - by David
    Does anyone know the reasoning behind the strict doctype not allowing input elements to be direct descendants of a form element? I find it annoying that I have to wrap a submit button, an inline element, inside a block-level element such as a fieldset or a div. However, I cannot find an answer anywhere as to why this actually is.

  • How does the last integer promotion rule ever get applied in C?

    - by SiegeX
    6.3.1.8p1 says:

        Otherwise, the integer promotions are performed on both operands. Then the following rules are applied to the promoted operands:

        - If both operands have the same type, then no further conversion is needed.
        - Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.
        - Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type.
        - Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type.
        - Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.

    For the last rule to be applied, it would seem you need an unsigned integer type whose rank is less than that of the signed integer type, and a signed integer type that cannot hold all the values of the unsigned integer type. Is there a real-world example of such a case, or is this statement a catch-all to cover all possible permutations?
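
    There is in fact a real-world case on common platforms. A minimal sketch follows; it is written as C++ (whose usual arithmetic conversions mirror the C rules quoted above) and assumes a target where int and long are both 32 bits wide, as on typical IA-32 ABIs:

        // Assumes int and long are both 32 bits. long outranks unsigned int,
        // yet a 32-bit long cannot represent every unsigned int value, so the
        // final rule fires and both operands convert to unsigned long.
        #include <cstdio>

        int main() {
            unsigned int u = 0xFFFFFFFFu; // UINT_MAX on this target
            long s = -1L;
            // Different signedness; long has the greater rank but cannot hold
            // UINT_MAX, so both sides become unsigned long: s converts to
            // 0xFFFFFFFF and the comparison is true.
            if (u == s)
                std::puts("final rule applied: both became unsigned long");
        }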

  • When is a C++ terminate handler the Right Thing(TM)?

    - by Joseph Garvin
    The C++ standard provides the std::set_terminate function, which lets you specify what function std::terminate should actually call. std::terminate should only get called in dire circumstances, and sure enough the situations the standard describes for when it's called are dire (e.g. an uncaught exception). When std::terminate does get called, the situation seems analogous to being out of memory -- there's not really much you can sensibly do. I've read that it can be used to make sure resources are freed -- but for the majority of resources this should be handled automatically by the OS when the process exits (e.g. file handles). Theoretically I can see a case if, say, you needed to send a server a specific message when exiting due to a crash. But the majority of the time the OS handling should be sufficient. When is using a terminate handler the Right Thing(TM)? Update: people interested in what can be done with custom terminate handlers might find this non-portable trick useful.
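
    For illustration, a minimal sketch of installing a handler that gets a last word in before the process dies; the "send a server a specific message" idea would do its I/O where the message is written. This assumes nothing beyond the standard library:

        #include <cstdlib>
        #include <exception>
        #include <iostream>

        void last_gasp() {
            // The process is in a bad state, so keep this simple and avoid
            // anything that can itself throw.
            std::cerr << "fatal: std::terminate called, aborting\n";
            std::abort(); // a terminate handler must not return
        }

        int main() {
            std::set_terminate(last_gasp);
            throw 42; // uncaught exception -> std::terminate -> last_gasp
        }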

  • Can a conforming C implementation #define NULL to be something wacky?

    - by janks
    I'm asking because of the discussion that's been provoked in this thread: http://stackoverflow.com/questions/2597142/when-was-the-null-macro-not-0/2597232. Trying to have a serious back-and-forth discussion using comments under other people's replies is not easy or fun, so I'd like to hear what our C experts think without being restricted to 500 characters at a time.

    The C standard has precious few words to say about NULL and null pointer constants. There are only two relevant sections that I can find. First, 3.2.2.3 Pointers:

        An integral constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is assigned to or compared for equality to a pointer, the constant is converted to a pointer of that type. Such a pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.

    and second, 4.1.5 Common definitions <stddef.h>:

        The macros are NULL, which expands to an implementation-defined null pointer constant; [...]

    The question is: can NULL expand to an implementation-defined null pointer constant that is different from the ones enumerated in 3.2.2.3? In particular, could it be defined as:

        #define NULL __builtin_magic_null_pointer

    or even:

        #define NULL ((void*)-1)

    My reading of 3.2.2.3 is that it specifies that an integral constant expression of 0, and an integral constant expression of 0 cast to type void*, must be among the forms of null pointer constant that the implementation recognizes, but that it isn't meant to be an exhaustive list. I believe that the implementation is free to recognize other source constructs as null pointer constants, so long as no other rules are broken.

    So, for example, it is provable that

        #define NULL (-1)

    is not a legal definition, because in

        if (NULL) do_stuff();

    do_stuff() must not be called, whereas with

        if (-1) do_stuff();

    do_stuff() must be called; since they are equivalent, this cannot be a legal definition of NULL. But the standard says that integer-to-pointer conversions (and vice versa) are implementation-defined, so it could define the conversion of -1 to a pointer as a conversion that produces a null pointer. In that case

        if ((void*)-1)

    would evaluate to false, and all would be well.

    So what do other people think? I'd ask everybody to especially keep in mind the "as-if" rule described in 2.1.2.3 Program execution. It's huge and somewhat roundabout, so I won't paste it here, but it essentially says that an implementation merely has to produce the same observable side effects as are required of the abstract machine described by the standard. Any optimizations, transformations, or whatever else the compiler wants to do to your program are perfectly legal so long as the observable side effects of the program aren't changed by them. So if you are looking to prove that a particular definition of NULL cannot be legal, you'll need to come up with a program that can prove it -- either one like mine that blatantly breaks other clauses in the standard, or one that can legally detect whatever magic the compiler has to do to make the strange NULL definition work.

    Steve Jessop found an example of a way for a program to detect that NULL isn't defined to be one of the two forms of null pointer constant in 3.2.2.3, which is to stringize the constant:

        #define stringize_helper(x) #x
        #define stringize(x) stringize_helper(x)

    Using this macro, one could

        puts(stringize(NULL));

    and "detect" that NULL does not expand to one of the forms in 3.2.2.3. Is that enough to render other definitions illegal? I just don't know. Thanks!
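
    For reference, the stringizing trick as a complete program (a sketch only; the preprocessor behaves the same way in C and C++, and what it prints is whatever token sequence the implementation happens to use for NULL, e.g. 0 or ((void*)0)):

        #include <stddef.h>
        #include <stdio.h>

        #define stringize_helper(x) #x
        #define stringize(x) stringize_helper(x)

        int main(void) {
            // Prints the literal expansion of NULL; nothing in the passages
            // quoted above mandates which spelling appears here.
            puts(stringize(NULL));
            return 0;
        }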

  • Automatic .NET code, nhibernate session, and LINQ datacontext clean-up?

    - by AverageJoe719
    Hi all, in my goal to adopt better coding practices I have a few questions in general about automatic handling of code. I have heard different answers, both online and from talking with other developers/programmers at my work. I am not sure if I should have split these into three questions, but they all seem related:

    1) How does .NET handle instances of classes and other things that take up memory? I recently found out about using the factory pattern for certain things like service classes so that they are only instantiated once in the entire application, but then I was told that ".NET handles a lot of that stuff automatically."

    2) How does NHibernate's session handle automatic clean-up of unused things? I've seen some say that it handles things automatically and that you should just use a session factory and that's it, no need to close it. But I have also seen many examples where people close the Hibernate session.

    3) How does LINQ's DataContext handle this? Most of the time I never disposed my DataContexts and the app didn't seem to take a performance hit (though I am not running anything super intensive), but it seems like most people recommend disposing of your DataContext after you are done with it. However, I have seen many code examples where the Dispose method is never called. Also, in general, I found it kind of annoying that you couldn't access even one-deep child related objects after disposing of the DataContext unless you explicitly also grabbed them in the query.

    Thanks all. I am loving this site so far; I kind of get lost and spend hours just reading things on here. =)

  • Effects of the `extern` keyword on C functions

    - by Elazar Leibovich
    In C I have not noticed any effect of the extern keyword used before a function declaration. At first I thought that defining extern int f(); in a single file forces you to implement it outside of the file's scope. However, I found out that both

        extern int f();
        int f() { return 0; }

    and

        extern int f() { return 0; }

    compile just fine, with no warnings from gcc. I used gcc -Wall -ansi; it wouldn't even accept // comments. Are there any effects of using extern before function definitions? Or is it just an optional keyword with no side effects for functions? In the latter case I don't understand why the standard designers chose to litter the grammar with superfluous keywords. EDIT: to clarify, I know there's a usage for extern with variables, but I'm only asking about extern with functions.
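
    As an aside, a minimal sketch of the one place a linkage specifier does matter for functions (the rules are the same in C and C++): extern is simply the default, and static is the keyword that actually changes anything:

        extern int f();              // declaration, external linkage
        int g();                     // identical meaning: extern is implied
        static int h() { return 2; } // internal linkage: visible in this file only

        int f() { return 0; }        // still external linkage, extern or not
        int g() { return 1; }

        int main() { return f() + g() + h(); }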

  • Return enum instead of bool from function for clarity?

    - by Moe Sisko
    This is similar to http://stackoverflow.com/questions/2908876/net-bool-vs-enum-as-a-method-parameter but concerns returning a bool from a function in some situations. For example, a function which returns bool:

        public bool Poll()
        {
            bool isFinished = false;
            // do something, then determine if finished or not.
            return isFinished;
        }

    used like this:

        while (!Poll())
        {
            // do stuff during wait.
        }

    It's not obvious from the calling context what the bool returned from Poll() means. It might be clearer in some ways if the Poll function was renamed IsFinished(), but the method does a bit of work, and (IMO) that name would not really reflect what the function actually does. Names like IsFinished also seem more appropriate for properties. Another option might be to rename it to something like PollAndReturnIsFinished, but this doesn't feel right either. So an option might be to return an enum, e.g.:

        public enum Status
        {
            Running,
            Finished
        }

        public Status Poll()
        {
            Status status = Status.Running;
            // do something, then determine if finished or not.
            return status;
        }

    called like this:

        while (Poll() == Status.Running)
        {
            // do stuff during wait.
        }

    But this feels like overkill. Any ideas?

  • Is auto_ptr deprecated?

    - by idimba
    Is auto_ptr deprecated in the upcoming C++ standard? Should unique_ptr be used for ownership transfer instead of shared_ptr? If unique_ptr is not in the standard, do I need to use shared_ptr instead?
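
    For reference, a minimal sketch of ownership transfer with unique_ptr, assuming a compiler whose library already ships it (e.g. a recent GCC in -std=c++0x mode):

        #include <iostream>
        #include <memory>
        #include <utility>

        int main() {
            std::unique_ptr<int> a(new int(42));
            // Unlike auto_ptr, the handover must be spelled out:
            std::unique_ptr<int> b = std::move(a); // a is now empty
            // std::unique_ptr<int> c = b; // will not compile: no silent copy
            std::cout << (a ? "a still owns it" : "a is empty")
                      << ", *b = " << *b << std::endl;
        }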

  • C: 8x8 -> 16 bit multiply precision guaranteed by integer promotions?

    - by craig-blome
    I'm trying to figure out if the C standard (C90, though I'm working off Derek Jones' annotated C99 book) guarantees that I will not lose precision multiplying two unsigned 8-bit values and storing the result in 16 bits. An example statement is as follows:

        unsigned char foo;
        unsigned int foo_u16 = foo * 10;

    Our Keil 8051 compiler (v7.50 at present) will generate a MUL AB instruction which stores the MSB in the B register and the LSB in the accumulator. If I cast foo to an unsigned int first:

        unsigned int foo_u16 = (unsigned int)foo * 10;

    then the compiler correctly decides I want an unsigned int there and generates an expensive call to a 16x16-bit integer multiply routine. I would like to argue beyond reasonable doubt that this defensive measure is not necessary. As I read the integer promotions described in 6.3.1.1, the effect of the first line shall be as if foo and 10 were promoted to unsigned int, the multiplication performed, and the result stored as unsigned int in foo_u16. If the compiler knows an instruction that does 8x8->16-bit multiplication without loss of precision, so much the better; but the precision is guaranteed. Am I reading this correctly? Best regards, Craig Blome
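
    A sketch of the promotion being described (the rules are the same in C and C++; note that on a 16-bit-int target like the 8051, unsigned char actually promotes to plain int, since int can represent every unsigned char value, and an int comfortably holds 255 * 10):

        #include <stdio.h>

        int main(void) {
            unsigned char foo = 255;     // worst case for the 8-bit operand
            // foo promotes to int (at least 16 bits) before the multiply, so
            // 255 * 10 = 2550 is computed at full width and stored intact.
            unsigned int foo_u16 = foo * 10;
            printf("%u\n", foo_u16);     // prints 2550
            return 0;
        }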

  • Checking for empty arrays: count vs empty

    - by Dan McG
    This question on 'How to tell if a PHP array is empty' had me thinking: is there a reason that count should be used instead of empty when determining whether an array is empty or not? My personal thought would be that if the two are equivalent for the case of empty arrays, you should use empty, because it gives a boolean answer to a boolean question. From the question linked above, it seems that count($var) == 0 is the popular method. To me, while technically correct, it makes no sense. E.g. Q: $var, are you empty? A: 7. Hmmm... Is there a reason I should use count == 0 instead, or is it just a matter of personal taste? As pointed out by others in comments on a now-deleted answer, count will have performance impacts for large arrays because it has to count all elements, whereas empty can stop as soon as it knows the array isn't empty. So, if they give the same results in this case, but count is potentially inefficient, why would we ever use count($var) == 0?

  • Should I use fork or threads?

    - by shadyabhi
    In my script, I have a function foo which basically uses pynotify to notify the user about something repeatedly, at a time interval of, say, 15 minutes:

        def foo():
            while True:
                """Does something"""
                time.sleep(900)

    My main script has to interact with the user and do all the other things, so I just can't call the foo() function directly. What's the better way of doing it, and why: using fork or threads?

  • Multiple CSS Classes: Properties Overlapping based on the order defined.

    - by Jian Lin
    Is there a rule in CSS that determines the cascading order when multiple classes are defined on an element? (class="one two" vs class="two one") Right now, there seems to be no such effect. Example: both divs get an orange border in Firefox.

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <html>
        <head>
            <style>
                .one { border: 6px dashed green }
                .two { border: 6px dashed orange }
            </style>
        </head>
        <body>
            <div class="one two"> hello world </div>
            <div class="two one"> hello world </div>
        </body>
        </html>

  • What should the standard be for RESTful URLs?

    - by gargantaun
    Since I can't find a chuffing job, I've been reading up on REST and creating web services. The way I've interpreted it, the future is all about creating a web service for all your data before you build the web app, which seems like a good idea. However, there seems to be a lot of contradictory thought on what the best scheme is for RESTful URLs. Some people advocate simple pretty URLs:

        http://api.myapp.com/resource/1

    In addition, some people like to add the API version to the URL, like so:

        http://api.myapp.com/v1/resource/1

    And to make things even more confusing, some people advocate adding the content type to GET requests:

        http://api.myapp.com/v1/resource/1.xml
        http://api.myapp.com/v1/resource/1.json
        http://api.myapp.com/v1/resource/1.txt

    whereas others think the content type should be sent in the HTTP header. Soooooooo... that's a lot of variation, which has left me unsure of what the best URL scheme is. I personally see the merits of the most comprehensive URL that includes a version number, resource locator, and content type, but I'm new to this so I could be wrong. On the other hand, you could argue that you should do "whatever works best for you", but that doesn't really fit with the REST mentality as far as I can tell, since the aim is to have a standard. And since a lot of you will have more experience than me with REST, I thought I'd ask for some guidance. So, with all that in mind: what should the standard be for RESTful URLs?

  • Does anyone change the Visual Studio default bracing style in C# - Is there a standard?

    - by El Ronnoco
    I find the default bracing style a bit wasteful on line count, e.g.:

        function foo()
        {
            if (...)
            {
                ...
            }
            else
            {
                ...
            }
        }

    would, if I was writing in JavaScript for example, be written like:

        function foo() {
            if (...) {
                ...
            } else {
                ...
            }
        }

    ...which I understand may also not be to people's tastes. But the question(s) is/are: do you turn off the VS formatting style and use your own rules? What is the opinion of this in the industry when many people are working on the same code-base? Is it better just to stick to the default for simplicity/uniformity?

  • CSS3 box-shadow + inset + RGBA

    - by mkotechno
    I'm doing some tests with new features of CSS3, but this combination only works in the latest versions of Chrome and Firefox, not in Safari or Opera:

        box-shadow:         inset 0px -10px 20px 0px rgba(0, 0, 0, 0.5);
        -webkit-box-shadow: inset 0px -10px 20px 0px rgba(0, 0, 0, 0.5);
        -moz-box-shadow:    inset 0px -10px 20px 0px rgba(0, 0, 0, 0.5);

    I really don't know if they fail on box-shadow itself, on the inset parameter, or on the RGBA color. Is it a syntax error, or do Safari and Opera simply lack support for this?

  • What are the characteristics of spaghetti code?

    - by justjoe
    Somebody said that when your PHP code and application use global variables, it must be spaghetti code (I assume this). I use WordPress a lot. As far as I know, it's the closest thing to a great piece of PHP software, and it uses many global variables to interact between its components. But forget about that, because frankly that's the only thing I know, so my view is completely biased ;D So, just curious: what are the characteristics of spaghetti code? PS: the only thing I know is WordPress, so hopefully this will help somebody give a great answer for somebody who has little experience developing a full web application in PHP (for example, the Stack Overflow website).

  • SOAP - Why do I need to query for the values for an update?

    - by Phill Pafford
    I'm taking over a project and wanted to understand if this is common practice with SOAP. In the process that is currently in place, I have to query all the values before I do an update, because I need to pass back all the values that are not being updated. Does this sound right? Example values:

        fname=phill
        lname=pafford
        address=123 main
        phone=222-555-1212

    So if I just wanted to update the phone number, I need to query for the record, get all the values, and submit these values for an update. Example update values:

        fname=phill
        lname=pafford
        address=123 main
        phone=111-555-1212

    I just want to know if this is common practice, or should I change this functionality?

  • Why do these Chinese programmers indent code so differently?

    - by winston
    Currently I am working with some programmers from Shanghai, and I notice they have a coding indentation style like this:

        if(1==1 && 2==2)
        {
            a = 3;
        }
        else
        {
            b = 4;
        }

    However, I am accustomed to:

        if (1==1 && 2==2) {
            a = 3;
        } else {
            b = 4;
        }

    What do you think? How can I get rid of different coding styles within a single program file?

  • How are you using C++0x today? [closed]

    - by Roger Pate
    This is a question in two parts; the first is the most important and concerns now:

    Are you following the design and evolution of C++0x? What blogs, newsgroups, committee papers, and other resources do you follow? Even where you're not using any new features, how have they affected your current choices? What new features are you using now, either in production or otherwise?

    The second part is a follow-up, concerning the new standard once it is final:

    Do you expect to use it immediately? What are you doing to prepare for C++0x, other than as listed for the previous questions? Obviously, compiler support must be there, but there are still co-workers, ancillary tools, and other factors to consider. What will most affect your adoption?

    Edit: The original really was too argumentative; however, I'm still interested in the underlying question, so I've tried to clean it up and hopefully make it acceptable. This seems a much better avenue than duplicating -- even though some answers responded to the argumentative tone, they still apply to the extent that they addressed the questions, and all answers are community property to be cleaned up as appropriate, too.
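
    For context, a minimal sketch of the kind of C++0x code early adopters could already write at the time, assuming a compiler with -std=c++0x support for initializer lists and lambdas (e.g. GCC 4.5):

        #include <algorithm>
        #include <iostream>
        #include <vector>

        int main() {
            std::vector<int> v = {3, 1, 4, 1, 5};                // initializer list
            auto ascending = [](int a, int b) { return a < b; }; // auto + lambda
            std::sort(v.begin(), v.end(), ascending);
            std::for_each(v.begin(), v.end(),
                          [](int x) { std::cout << x << ' '; }); // prints 1 1 3 4 5
            std::cout << '\n';
        }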

  • Which browsers support font embedding?

    - by jonhobbs
    I've been reading about the @font-face rule and trying to work out if it's worth using it in a project to render "Franklin Gothic Medium" for titles instead of something like sIFR. I figured that for browsers that don't support it I could make it fall back to Arial. The thing is, I'm having trouble getting a definitive answer about which browsers support embedding fonts in this way. So far I've worked out that IE does, but doesn't support .ttf files. For other browsers I'm not sure. If anyone could point me towards some kind of compatibility chart, that would be great. Jon
