Search Results

Search found 1604 results on 65 pages for 'standards'.

Page 14 of 65

  • How can HTML5 "replace" Flash?

    - by Kassini
    A topic of debate that's seen a resurgence since the unveiling of the iPad is the issue of Flash versus HTML5. There are those who suggest that HTML5 will one day supplant/replace Adobe Flash. I do not develop software that runs in a browser, so my (limited) understanding is: HTML is a pure-text markup language that is delivered over HTTP to a client browser. The client browser interprets the markup and renders (with varying degrees of success) the page according to a standard specification. Adobe Flash is a proprietary framework for working with audio, video, and raster/vector graphics. It requires special authoring tools (a compiler perhaps?) and a custom player that's available as a plug-in for most common browsers. Could someone please explain (to this C/C++ developer) how it is possible, from a technical/coding point of view, that a text-based markup language (HTML5) could be considered a replacement for a multimedia framework (Flash)? Please no opinionated arguments - just technical facts.

    Read the article

  • Why does std::cout convert volatile pointers to bool?

    - by Joseph Garvin
    If you try to cout a volatile pointer, even a volatile char pointer where you would normally expect cout to print the string, you will instead simply get '1' (assuming the pointer is not null, I think). I assume the output stream operator<< is template-specialized for volatile pointers, but my question is: why? What use case motivates this behavior? Example code:

        #include <iostream>
        #include <cstring>

        int main()
        {
            char x[500];
            std::strcpy(x, "Hello world");

            int y;
            int *z = &y;

            std::cout << x << std::endl;
            std::cout << (char volatile*)x << std::endl;

            std::cout << z << std::endl;
            std::cout << (int volatile*)z << std::endl;
            return 0;
        }

    Output:

        Hello world
        1
        0x8046b6c
        1
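    A minimal sketch of what is happening and one common workaround (not from the question itself): there is no operator<< overload taking a pointer to volatile, so the pointer falls back to the implicit conversion to bool; casting the volatile qualifier away restores the char* overload. This is only safe here because the underlying array is not actually declared volatile.

        #include <iostream>

        int main()
        {
            char text[] = "Hello world";
            volatile char* vp = text;   // pointer-to-volatile view of a non-volatile array

            std::cout << vp << '\n';                     // no volatile overload: converts to bool, prints 1
            std::cout << const_cast<char*>(vp) << '\n';  // qualifier removed: prints Hello world
            return 0;
        }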

    Read the article

  • Should I use dialog boxes in my web application?

    - by Tom
    I'm developing a typical web application with functions like add, remove, view, search and other yada yada. However, I'm uncertain how much I should rely on dialog boxes. Should I have a dialog box for adding information to the system or perhaps only as a confirmation when deleting something? I could also, for example, use a login dialog box instead of a login page. Should modern web sites be designed so that they use dialog boxes? Are there any general guidelines for when to use a dialog box in a web application or is it more "when I feel like it"?

    Read the article

  • Why would you use "AS" when aliasing a SQL table?

    - by froadie
    I just came across a SQL statement that uses AS to alias tables, like this:

        SELECT all, my, stuff
        FROM someTableName AS a
        INNER JOIN someOtherTableName AS b ON a.id = b.id

    What I'm used to seeing is:

        SELECT all, my, stuff
        FROM someTableName a
        INNER JOIN someOtherTableName b ON a.id = b.id

    I'm assuming there's no difference and it's just syntactic sugar, but which of these is more prevalent/widespread? Is there any reason to prefer one over the other?

    Read the article

  • Why is it assumed that send may return with less than the requested data transmitted on a blocking socket?

    - by Ernelli
    The standard method to send data on a stream socket has always been to call send with a chunk of data to write, check the return value to see if all data was sent, and then keep calling send again until the whole message has been accepted. For example, this is a simple example of a common scheme:

        int send_all(int sock, unsigned char *buffer, int len)
        {
            int nsent;

            while (len > 0) {
                nsent = send(sock, buffer, len, 0);
                if (nsent == -1)  // error
                    return -1;
                buffer += nsent;
                len -= nsent;
            }
            return 0;  // ok, all data sent
        }

    Even the BSD manpage mentions that "...If no messages space is available at the socket to hold the message to be transmitted, then send() normally blocks...", which indicates that we should assume that send may return without sending all data. Now I find this rather broken, but even W. Richard Stevens assumes this in his standard reference book about network programming: not in the beginning chapters, but the more advanced examples use his own writen (write all data) function instead of calling write. Now I consider this still to be more or less broken, since if send is not able to transmit all data or accept the data in the underlying buffer and the socket is blocking, then send should block and return when the whole send request has been accepted. I mean, in the code example above, what will happen if send returns with less data sent is that it will be called right again with a new request. What has changed since the last call? At most a few hundred CPU cycles have passed, so the buffer is still full. If send now accepts the data, why couldn't it accept it before? Otherwise we will end up with an inefficient loop where we are trying to send data on a socket that cannot accept data and keep trying, or else? So it seems like the workaround, if needed, results in heavily inefficient code, and in those circumstances blocking sockets should be avoided altogether and non-blocking sockets together with select should be used instead.

    Read the article

  • Global qualification in a class declaration's class-head

    - by gf
    We found something similar to the following (don't ask ...):

        namespace N {
            struct A { struct B; };
        }

        struct A { struct B; };

        using namespace N;

        struct ::A::B {};  // <- point of interest

    Interestingly, this compiles fine with VS2005, icc 11.1 and Comeau (online), but fails with GCC:

        global qualification of class name is invalid before '{' token

    From C++03, Annex A, it seems to me like GCC is right:

    - the class-head can consist of nested-name-specifier and identifier
    - nested-name-specifier can't begin with a global qualification (::)
    - obviously, neither can identifier

    ... or am I overlooking something?
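    For what it's worth, a hedged sketch of a workaround that sidesteps the disagreement (not from the original post): define the global A::B before the using-directive makes the plain name ambiguous, and name the namespaced one through N explicitly, so neither class-head needs global qualification.

        namespace N {
            struct A { struct B; };
        }

        struct A { struct B; };

        struct A::B {};      // unambiguous here: only the global A is visible so far

        using namespace N;

        struct N::A::B {};   // the namespaced nested class, named explicitly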

    Read the article

  • Can a destructor be recursive?

    - by Cubbi
    Is this program well-defined, and if not, why exactly?

        #include <iostream>
        #include <new>

        struct X {
            int cnt;
            X (int i) : cnt(i) {}
            ~X() {
                std::cout << "destructor called, cnt=" << cnt << std::endl;
                if ( cnt-- > 0 )
                    this->X::~X();  // explicit recursive call to dtor
            }
        };

        int main()
        {
            char* buf = new char[sizeof(X)];
            X* p = new(buf) X(7);
            p->X::~X();  // explicit call to dtor
            delete[] buf;
        }

    My reasoning: although invoking a destructor twice is undefined behavior, per 12.4/14, what it says exactly is this:

        the behavior is undefined if the destructor is invoked for an object whose lifetime has ended

    which does not seem to prohibit recursive calls. While the destructor for an object is executing, the object's lifetime has not yet ended, thus it's not UB to invoke the destructor again. On the other hand, 12.4/6 says:

        After executing the body [...] a destructor for class X calls the destructors for X's direct members, the destructors for X's direct base classes [...]

    which means that after the return from a recursive invocation of a destructor, all member and base class destructors will have been called, and calling them again when returning to the previous level of recursion would be UB. Therefore, a class with no base and only POD members can have a recursive destructor without UB. Am I right?
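    If the goal is simply recursive cleanup rather than probing the standard, a minimal sketch (not from the question) of the conventional way to avoid the issue entirely: keep the recursion in an ordinary member function and let the destructor run exactly once.

        #include <iostream>

        struct X {
            int cnt;
            explicit X(int i) : cnt(i) {}
            ~X() { drain(); }            // the destructor itself runs exactly once
        private:
            void drain() {               // ordinary member functions may recurse freely
                std::cout << "drain, cnt=" << cnt << std::endl;
                if (cnt-- > 0)
                    drain();
            }
        };

        int main()
        {
            X x(7);  // automatic storage: destructor invoked once at scope exit
            return 0;
        }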

    Read the article

  • What is the proper way to URL encode Unicode characters?

    - by Josh Gibson
    I know of the non-standard %uxxxx scheme, but that doesn't seem like a wise choice since the scheme has been rejected by the W3C. Some interesting examples: the heart character. If I type this into my browser:

        http://www.google.com/search?q=♥

    then copy and paste it, I see this URL:

        http://www.google.com/search?q=%E2%99%A5

    which makes it seem like Firefox (or Safari) is doing this:

        urllib.quote_plus(x.encode("latin-1"))
        '%E2%99%A5'

    which makes sense, except for things that can't be encoded in Latin-1, like the triple dot character (…). If I type the URL

        http://www.google.com/search?q=…

    into my browser then copy and paste, I get

        http://www.google.com/search?q=%E2%80%A6

    back, which seems to be the result of doing

        urllib.quote_plus(x.encode("utf-8"))

    which makes sense since … can't be encoded with Latin-1. But then it's not clear to me how the browser knows whether to decode with UTF-8 or Latin-1. Since this seems to be ambiguous:

        In [67]: u"…".encode('utf-8').decode('latin-1')
        Out[67]: u'\xc3\xa2\xc2\x80\xc2\xa6'

    works, so I don't know how the browser figures out whether to decode that with UTF-8 or Latin-1. What's the right thing to be doing with the special characters I need to deal with?

    Read the article

  • Strict doctype - form and input element

    - by David
    Does anyone know the reasoning behind the strict doctype not allowing input elements to be direct descendants of a form element? I find it annoying that I have to wrap a submit button (an inline element) inside a block-level element such as a fieldset or a div. However, I cannot find an answer anywhere as to why this actually is.

    Read the article

  • How does the last integer promotion rule ever get applied in C?

    - by SiegeX
    6.3.1.8p1:

        Otherwise, the integer promotions are performed on both operands. Then the following rules are applied to the promoted operands:

        - If both operands have the same type, then no further conversion is needed.
        - Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.
        - Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type.
        - Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type.
        - Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.

    For the last rule to be applied, it would seem to imply that you need an unsigned integer type whose rank is less than that of the signed integer type, and that the signed integer type cannot hold all the values of the unsigned integer type. Is there a real-world example of such a case, or is this statement serving as a catch-all to cover all possible permutations?
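    A commonly cited real-world case, shown here as a hedged sketch (not from the question): a 32-bit target where int and long are both 32 bits wide. There, long has greater rank than unsigned int but cannot represent all of its values, so the final rule converts both operands to unsigned long.

        #include <iostream>

        int main()
        {
            // Assume an ILP32-style target where int and long are both 32 bits.
            unsigned int u = 0xFFFFFFFFu;  // UINT_MAX
            long         s = -1L;

            // long outranks unsigned int but cannot hold all unsigned int values,
            // so both operands are converted to unsigned long before comparison.
            std::cout << (u == s) << '\n';  // prints 1 on such a target
            return 0;
        }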

    Read the article

  • When is a C++ terminate handler the Right Thing(TM)?

    - by Joseph Garvin
    The C++ standard provides the std::set_terminate function, which lets you specify what function std::terminate should actually call. std::terminate should only get called in dire circumstances, and sure enough the situations the standard describes for when it's called are dire (e.g. an uncaught exception). When std::terminate does get called, the situation seems analogous to being out of memory -- there's not really much you can sensibly do. I've read that it can be used to make sure resources are freed -- but for the majority of resources this should be handled automatically by the OS when the process exits (e.g. file handles). Theoretically I can see a case where, say, you needed to send a server a specific message when exiting due to a crash. But the majority of the time the OS handling should be sufficient. When is using a terminate handler the Right Thing(TM)? Update: People interested in what can be done with custom terminate handlers might find this non-portable trick useful.
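    For reference, a minimal sketch of installing a handler that does last-chance logging before the process dies (an illustration of typical usage, not the questioner's code):

        #include <cstdlib>
        #include <exception>
        #include <iostream>

        void on_terminate()
        {
            // Keep it simple: log, then abort. The process is going down regardless.
            std::cerr << "std::terminate called -- aborting\n";
            std::abort();
        }

        int main()
        {
            std::set_terminate(on_terminate);
            throw 42;  // uncaught exception: std::terminate invokes on_terminate
        }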

    Read the article

  • Can a conforming C implementation #define NULL to be something wacky?

    - by janks
    I'm asking because of the discussion that's been provoked in this thread: http://stackoverflow.com/questions/2597142/when-was-the-null-macro-not-0/2597232 Trying to have a serious back-and-forth discussion using comments under other people's replies is not easy or fun, so I'd like to hear what our C experts think without being restricted to 500 characters at a time. The C standard has precious few words to say about NULL and null pointer constants. There are only two relevant sections that I can find. First, 3.2.2.3 Pointers:

        An integral constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is assigned to or compared for equality to a pointer, the constant is converted to a pointer of that type. Such a pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.

    and second, 4.1.5 Common definitions <stddef.h>:

        The macros are NULL, which expands to an implementation-defined null pointer constant; ...

    The question is: can NULL expand to an implementation-defined null pointer constant that is different from the ones enumerated in 3.2.2.3? In particular, could it be defined as

        #define NULL __builtin_magic_null_pointer

    or even

        #define NULL ((void*)-1)

    My reading of 3.2.2.3 is that it specifies that an integral constant expression of 0, and an integral constant expression of 0 cast to type void*, must be among the forms of null pointer constant that the implementation recognizes, but that it isn't meant to be an exhaustive list. I believe that the implementation is free to recognize other source constructs as null pointer constants, so long as no other rules are broken. So for example, it is provable that

        #define NULL (-1)

    is not a legal definition, because in

        if (NULL) do_stuff();

    do_stuff() must not be called, whereas with

        if (-1) do_stuff();

    do_stuff() must be called; since they are equivalent, this cannot be a legal definition of NULL. But the standard says that integer-to-pointer conversions (and vice versa) are implementation-defined, therefore it could define the conversion of -1 to a pointer as a conversion that produces a null pointer. In which case if ((void*)-1) would evaluate to false, and all would be well. So what do other people think? I'd ask everybody to especially keep in mind the "as-if" rule described in 2.1.2.3 Program execution. It's huge and somewhat roundabout, so I won't paste it here, but it essentially says that an implementation merely has to produce the same observable side effects as are required of the abstract machine described by the standard. Any optimizations, transformations, or whatever else the compiler wants to do to your program are perfectly legal so long as the observable side effects of the program aren't changed by them. So if you are looking to prove that a particular definition of NULL cannot be legal, you'll need to come up with a program that can prove it: either one like mine that blatantly breaks other clauses in the standard, or one that can legally detect whatever magic the compiler has to do to make the strange NULL definition work.
    Steve Jessop found an example of a way for a program to detect that NULL isn't defined to be one of the two forms of null pointer constants in 3.2.2.3, which is to stringize the constant:

        #define stringize_helper(x) #x
        #define stringize(x) stringize_helper(x)

    Using this macro, one could puts(stringize(NULL)); and "detect" that NULL does not expand to one of the forms in 3.2.2.3. Is that enough to render other definitions illegal? I just don't know. Thanks!
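    A self-contained version of that detection idea, as a hedged sketch (the output depends entirely on how the implementation's <stddef.h> spells the macro):

        #include <stdio.h>
        #include <stddef.h>

        #define stringize_helper(x) #x
        #define stringize(x) stringize_helper(x)

        int main(void)
        {
            /* Prints the exact token sequence NULL expands to, e.g. "((void *)0)". */
            puts(stringize(NULL));
            return 0;
        }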

    Read the article

  • Automatic .NET code, nhibernate session, and LINQ datacontext clean-up?

    - by AverageJoe719
    Hi all, in my goal to adopt better coding practices I have a few questions in general about automatic handling of code. I have heard different answers both online and from talking with other developers/programmers at my work. I am not sure if I should have split them into 3 questions, but they all seem sort of related:

    1) How does .NET handle instances of classes and other things that take up memory? I recently found out about using the factory pattern for certain things like service classes so that they are only instantiated once in the entire application, but then I was told that ".NET handles a lot of that stuff automatically" when I mentioned it.

    2) How does NHibernate's session handle automatic clean-up of unused things? I've seen some say that it is great at handling things automatically and that you should just use a session factory and that's it, no need to close it. But I have also read and seen many examples where people close the Hibernate session.

    3) How does LINQ's DataContext handle this? Most of the time I never disposed my DataContexts and the app didn't seem to take a performance hit (though I am not running anything super intensive), but it seems like most people recommend disposing of your DataContext after you are done with it. However, I have seen many code examples where the Dispose method is never called. Also, in general I found it kind of annoying that you couldn't access even one-deep child related objects after disposing of the DataContext unless you explicitly also grabbed them in the query.

    Thanks all. I am loving this site so far, I kind of get lost and spend hours just reading things on here. =)

    Read the article

  • Effects of the `extern` keyword on C functions

    - by Elazar Leibovich
    In C I did not notice any effect of the extern keyword used before a function declaration. At first I thought that defining extern int f(); in a single file forces you to implement it outside of the file's scope. However, I found out that both

        extern int f();
        int f() { return 0; }

    and

        extern int f() { return 0; }

    compile just fine, with no warnings from gcc. I used gcc -Wall -ansi; it wouldn't even accept // comments. Are there any effects of using extern before function definitions? Or is it just an optional keyword with no side effects for functions? In the latter case I don't understand why the standard designers chose to litter the grammar with superfluous keywords. EDIT: to clarify, I know there's a usage for extern with variables, but I'm only asking about extern with functions.
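    For comparison, a minimal two-file sketch (the file names are made up) of where linkage actually differs: functions have external linkage by default, so extern is redundant on them, while static is the keyword that changes anything.

        /* util.c -- hypothetical translation unit */
        extern int f(void) { return 0; }  /* 'extern' is redundant: external linkage is the default */
        int        g(void) { return 1; }  /* exactly the same linkage as f */
        static int h(void) { return 2; }  /* internal linkage: invisible to other translation units */

        /* main.c -- hypothetical translation unit */
        extern int f(void);  /* declaration; 'extern' is optional here too */
        int g(void);         /* equally valid declaration */

        int main(void)
        {
            return f() + g();  /* h() cannot be named from this file at all */
        }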

    Read the article

  • Return enum instead of bool from function for clarity?

    - by Moe Sisko
    This is similar to http://stackoverflow.com/questions/2908876/net-bool-vs-enum-as-a-method-parameter but concerns returning a bool from a function in some situations, e.g. a function which returns bool:

        public bool Poll()
        {
            bool isFinished = false;
            // do something, then determine if finished or not.
            return isFinished;
        }

    Used like this:

        while (!Poll())
        {
            // do stuff during wait.
        }

    It's not obvious from the calling context what the bool returned from Poll() means. It might be clearer in some ways if the "Poll" function was renamed "IsFinished()", but the method does a bit of work, and (IMO) that would not really reflect what the function actually does. Names like "IsFinished" also seem more appropriate for properties. Another option might be to rename it to something like "PollAndReturnIsFinished", but this doesn't feel right either. So an option might be to return an enum, e.g.:

        public enum Status
        {
            Running,
            Finished
        }

        public Status Poll()
        {
            Status status = Status.Running;
            // do something, then determine if finished or not.
            return status;
        }

    Called like this:

        while (Poll() == Status.Running)
        {
            // do stuff during wait.
        }

    But this feels like overkill. Any ideas?

    Read the article

  • Tips for fixing bad coding/dev habits?

    - by dfafa
    I want to become a better coder, so I have decided to sign up for a computing science program; maybe a formal education can assist me. I started working on smaller projects to learn, but currently I have really bad coding/dev habits which are hindering my productivity as the codebase increases. I have highlighted them below, and perhaps someone could make suggestions (or redirect me to resources) or offer a more efficient method. Most stuff that I made in the past were web apps.

    - I usually develop with PuTTY + nano... I just love the minimalist feel.
    - I use WinSCP and develop directly on my private web server... too lazy to do it on localhost and upload it later.
    - I don't use version control (Subversion etc.)... which one do I need? Sometimes Ctrl+Z doesn't work well.
    - When I run out of ideas for naming variables, I use swear words instead.
    - I swear a lot when I get stuck... how do I deal with the anger issue?
    - My code looks ugly, with comments everywhere.
    - I would rather use procedural coding; I find "thinking" in OO difficult and time consuming; I "write first, think later".
    - I refactor code only if I am getting paid for it.
    - I dislike configuring Linux distros, Apache, MySQL, scaling, and designing graphics and layouts.
    - I do not like writing tests.
    - I like working alone and do not like sharing code.
    - I have an econ degree.
    - I dislike reading other people's code; I would rather write it on my own.

    It seems my only true desire is to translate my ideas into a working prototype as fast as possible, and I am very uninterested in the other details. Could it be that I am not cut out to be a coder after all? Is going back to study comp sci a bad idea?

    Read the article

  • Is auto_ptr deprecated?

    - by idimba
    Is auto_ptr deprecated in the incoming C++ standard? Should unique_ptr be used for ownership transfer instead of shared_ptr? If unique_ptr is not in the standard, do I need to use shared_ptr instead?
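    For context, a minimal sketch of the unique_ptr ownership-transfer style the question is asking about (assuming a C++11-capable compiler; none of this is from the original post):

        #include <memory>
        #include <utility>

        std::unique_ptr<int> make_value()
        {
            return std::unique_ptr<int>(new int(42));  // ownership moves out to the caller
        }

        int main()
        {
            std::unique_ptr<int> p = make_value();
            std::unique_ptr<int> q = std::move(p);     // explicit, visible transfer of ownership
            // std::shared_ptr expresses shared ownership; it is not a drop-in substitute
            return (*q == 42) ? 0 : 1;
        }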

    Read the article

  • C: 8x8 -> 16 bit multiply precision guaranteed by integer promotions?

    - by craig-blome
    I'm trying to figure out if the C Standard (C90, though I'm working off Derek Jones' annotated C99 book) guarantees that I will not lose precision multiplying two unsigned 8-bit values and storing the result to 16 bits. An example statement is as follows:

        unsigned char foo;
        unsigned int foo_u16 = foo * 10;

    Our Keil 8051 compiler (v7.50 at present) will generate a MUL AB instruction which stores the MSB in the B register and the LSB in the accumulator. If I cast foo to an unsigned int first:

        unsigned int foo_u16 = (unsigned int)foo * 10;

    then the compiler correctly decides I want an unsigned int there and generates an expensive call to a 16x16 bit integer multiply routine. I would like to argue beyond reasonable doubt that this defensive measure is not necessary. As I read the integer promotions described in 6.3.1.1, the effect of the first line shall be as if foo and 10 were promoted to unsigned int, the multiplication performed, and the result stored as unsigned int in foo_u16. If the compiler knows an instruction that does 8x8->16 bit multiplications without loss of precision, so much the better; but the precision is guaranteed. Am I reading this correctly? Best regards, Craig Blome

    Read the article

  • Checking for empty arrays: count vs empty

    - by Dan McG
    This question on 'How to tell if a PHP array is empty' had me thinking of this question: is there a reason that count should be used instead of empty when determining if an array is empty or not? My personal thought would be that if the two are equivalent for the case of empty arrays, you should use empty, because it gives a boolean answer to a boolean question. From the question linked above, it seems that count($var) == 0 is the popular method. To me, while technically correct, it makes no sense. E.g. Q: $var, are you empty? A: 7. Hmmm... Is there a reason I should use count == 0 instead, or is it just a matter of personal taste? As pointed out by others in comments on a now-deleted answer, count will have performance impacts for large arrays because it will have to count all elements, whereas empty can stop as soon as it knows the array isn't empty. So, if they give the same results in this case, but count is potentially inefficient, why would we ever use count($var) == 0?

    Read the article
