Search Results

Search found 5422 results on 217 pages for 'coding convention'.

  • Good source for interview-style coding problems for entry/intermediate developers?

    - by soster
    I have taught myself to code over the past few years and do not have a computer science degree. As a result, I lack experience with many things, such as the basic homework/test questions many CS graduates take for granted. I recently had a tech screen interview where I fumbled and struggled to finish a (relatively) common question, I believe due to this inexperience. My question to all of you is this: do you know a good source of such problems, with answers, for an entry/intermediate developer who is trying to gain coding problem-solving experience? The ones I've been able to find on the internet are for coding teams, so they're a bit too complicated for me. Thanks so much in advance.

    Read the article

  • Pass a Delphi class to a C++ function/method that expects a class with __thiscall methods.

    - by Alan G.
    I have some MSVC++ compiled DLLs for which I have created COM-like (lite) interfaces (abstract Delphi classes). Some of those classes have methods that need pointers to objects. These C++ methods are declared with the __thiscall calling convention (which I cannot change), which is just like __stdcall, except a this pointer is passed on the ECX register.

    I create the class instance in Delphi, then pass it on to the C++ method. I can set breakpoints in Delphi and see it hitting the exposed __stdcall methods in my Delphi class, but soon I get a STATUS_STACK_BUFFER_OVERRUN and the app has to exit.

    Is it possible to emulate/deal with __thiscall on the Delphi side of things? If I pass an object instantiated by the C++ system then all is good, and that object's methods are called (as would be expected), but this is useless - I need to pass Delphi objects.

    Edit 2010-04-19 18:12: This is what happens in more detail: The first method called (setLabel) exits with no error (though it's a stub method). The second method called (init) enters, then dies when it attempts to read the vol parameter.

    C++ Side:

        #define SHAPES_EXPORT __declspec(dllexport) // just to show the value

        class SHAPES_EXPORT CBox {
        public:
            virtual ~CBox() {}
            virtual void init(double volume) = 0;
            virtual void grow(double amount) = 0;
            virtual void shrink(double amount) = 0;
            virtual void setID(int ID = 0) = 0;
            virtual void setLabel(const char* text) = 0;
        };

    Delphi Side:

        IBox = class
        public
          procedure destroyBox; virtual; stdcall; abstract;
          procedure init(vol: Double); virtual; stdcall; abstract;
          procedure grow(amount: Double); virtual; stdcall; abstract;
          procedure shrink(amount: Double); virtual; stdcall; abstract;
          procedure setID(val: Integer); virtual; stdcall; abstract;
          procedure setLabel(text: PChar); virtual; stdcall; abstract;
        end;

        TMyBox = class(IBox)
        protected
          FVolume: Double;
          FID: Integer;
          FLabel: String;
          //
        public
          constructor Create;
          destructor Destroy; override;
          // BEGIN Virtual Method implementation
          procedure destroyBox; override; stdcall;              // empty - don't need/want C++ to manage my Delphi objects, just call their methods
          procedure init(vol: Double); override; stdcall;       // FVolume := vol;
          procedure grow(amount: Double); override; stdcall;    // Inc(FVolume, amount);
          procedure shrink(amount: Double); override; stdcall;  // Dec(FVolume, amount);
          procedure setID(val: Integer); override; stdcall;     // FID := val;
          procedure setLabel(text: PChar); override; stdcall;   // Stub method; empty.
          // END Virtual Method implementation
          property Volume: Double read FVolume;
          property ID: Integer read FID;
          property Label: String read FLabel;
        end;

    I would have half expected using stdcall alone to work, but something is messing up, not sure what; perhaps something to do with the ECX register being used? Help would be greatly appreciated.

    Edit 2010-04-19 17:42: Could it be that the ECX register needs to be preserved on entry and restored once the function exits? Is the this pointer required by C++? I'm probably just reaching at the moment based on some intense Google searches. I found something related, but it seems to be dealing with the reverse of this issue.

    Read the article

  • Is there an established convention for separating Windows file names in a string?

    - by Heinzi
    I have a function which needs to output a string containing a list of file paths. I can choose the separation character but I cannot change the data type (e.g. I cannot return a List<string> or something like that). Wanting to use some well-established convention, my first intuition was to use the semicolon, similar to what Windows's PATH and Java's CLASSPATH (on Windows) environment variables do:

        C:\somedir\somefile.txt;C:\someotherdir\someotherfile.txt

    However, I was surprised to notice that ; is a valid character in an NTFS file name. So, is the established best practice to just ignore this fact (i.e. "no sane person should use ; in a file name and if they do, it's their own fault") or is there some other established character for separating Windows paths or files? (The pipe (|) might be a good choice, but I have not seen it used anywhere yet for this purpose.)
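    For what it's worth, the pipe is one of the characters the Win32 naming rules reject in file names (along with < > : " / \ ? *), which is what would make it a safer separator than ; here. A minimal sketch of the round trip in Java (the class and method names are mine, purely illustrative):

        import java.util.Arrays;
        import java.util.List;

        public class PathListDemo {

            // '|' cannot appear in a Windows file name, so it is safe as a separator.
            private static final String SEPARATOR = "|";

            // Join a list of paths into a single string.
            static String joinPaths(List<String> paths) {
                return String.join(SEPARATOR, paths);
            }

            // Split the string back into individual paths.
            static String[] splitPaths(String joined) {
                return joined.split("\\|"); // escaped because '|' is a regex metacharacter
            }

            public static void main(String[] args) {
                String joined = joinPaths(Arrays.asList(
                        "C:\\somedir\\somefile.txt",
                        "C:\\someotherdir\\someotherfile.txt"));
                System.out.println(joined);
                System.out.println(Arrays.toString(splitPaths(joined)));
            }
        }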

    Read the article

  • What is the convention for the star location in reference variables?

    - by Brett Ryan
    I have been learning Objective-C, and different books and examples use differing conventions for the location of the star (*) when naming reference variables.

        MyType* x;
        MyType *y;
        MyType*z; // this also works

    Personally I prefer the first option, as it illustrates that x is a "pointer type of MyType". I see the first two used interchangeably, sometimes in the same code. I want to know what the most common convention is. It's been a very long time since I've programmed in C (15 years), so I can't remember whether all variants are legal in C as well or whether this is Objective-C specific. I'd prefer answers which state why one is better than the other, in the same way I explained how I read the first form above.

    Read the article

  • What's so bad about in-line CSS?

    - by ChessWhiz
    When I see website starter code and examples, the CSS is always in a separate file, named something like "main.css", "default.css", or "Site.css". However, when I'm coding up a page, I'm often tempted to throw the CSS in-line with a DOM element, such as by setting "float: right" on an image. I get the feeling that this is "bad coding", since it's so rarely done in examples. I understand that if the style will be applied to multiple objects, it's wise to follow "Don't Repeat Yourself" (DRY) and assign it to a CSS class to be referenced by each element. However, if I won't be repeating the CSS on another element, why not in-line the CSS as I write the HTML?

    The question: is using in-line CSS considered bad, even if it will only be used on that element? If so, why?

    Example (is this bad?):

        <img src="myimage.gif" style="float:right" />

    Read the article

  • Checkstyle for C#?

    - by PSU_Kardi
    I'm looking to find something along the lines of Checkstyle for Visual Studio. I've recently started a new gig doing .NET work and realized that coding standards here are a bit lacking. While I'm still a young guy and far from the most experienced developer I'm trying to lead by example and get things going in the right direction. I loved the ability to use Checkstyle with Eclipse and examine code before reviews so I'd like to do the same thing with Visual Studio. Anyone have any good suggestions? Another thing I'd be somewhat interested in is a plug-in for SVN that disallows check-in until the main coding standards are met. I do not want people checking in busted code that's going to wind up in a code review. Any suggestions at this point would be great.

    Read the article

  • Is there anyone out there that codes like I do?

    - by Jacob Relkin
    Hi, some people have told me that my coding style is a lot different than theirs. I think I am somewhat neurotic when it comes to spacing and indenting, though. Here's a snippet to show you what I mean:

        - ( void ) applicationDidFinishLaunching: ( UIApplication *) application {
            SomeObject *object = [ [ SomeObject alloc ] init ];
            int x = 100 / 5;
            object.someInstanceVariable = ( ( 4 * x ) + rand() );
            [ object someMethod ];
        }

    Notice how I space out all of my brackets/parentheses and start curly braces on the same line; "my code has room to breathe", so to speak. So my questions are: a) is this normal, and b) what's your coding style?

    Read the article

  • Do you put a super() call at the beginning of your constructors?

    - by sleske
    This is a question about coding style and recommended practices: As explained in the answers to the question "unnecessary to put super() in constructor?", if you write a constructor for a class that is supposed to use the default (no-arg) constructor from the superclass, you may call super() at the beginning of your constructor:

        public MyClass(int parm) {
            super(); // leaving this out makes no difference
            // do stuff...
        }

    but you can also omit the call; the compiler will in both cases act as if the super() call were there. So then, do you put the call into your constructors or not? On the one hand, one might argue that including the super() makes things more explicit. OTOH, I always dislike writing redundant code, so personally I tend to leave it out; I do however regularly see it in code from others. What are your experiences? Did you have problems with one or the other approach? Do you have coding guidelines which prescribe one approach?
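    As an aside, the choice only stops being purely stylistic when the superclass has no accessible no-arg constructor; then an explicit super(args) call is required. A small illustrative sketch (the class names here are made up):

        class Base {
            private final int id;

            Base(int id) {              // no no-arg constructor exists
                this.id = id;
            }
        }

        class Derived extends Base {
            Derived() {
                super(42);              // required: the implicit super() would not compile here
            }
        }

        class Plain {
            Plain() {
                // super();             // optional: the compiler inserts the call to Object() either way
            }
        }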

    Read the article

  • Unexpected key-value behavior in a Core Data Context

    - by ????
    If, the first time, I create an array of strings (via key-value coding) containing the names of a Managed Object entity's attributes (these are stored in the App Delegate), I get an array of NSStrings without any problems. If I subsequently make the same call later from the same entry point in code, that same collection becomes an array of NULL objects, even though nothing in the Core Data Context has changed. One unappealing work-around involves re-creating the string array every time, but I'm wondering if anyone has a guess as to what's happening behind the scenes.

        // Return an array of strings with the names of the attributes of the Activity entity
        - (NSArray *)activityAttributeNames {
        #pragma mark ALWAYS REFRESH THE ENTITY NAMES?
            //if (activityAttributeNames == nil) {
                // Create an entity pointer for Activity
                NSEntityDescription *entity = [NSEntityDescription entityForName:@"Activity"
                                                           inManagedObjectContext:managedObjectContext];
                NSArray *entityAttributeArray = [[NSArray alloc] initWithArray:[[entity attributesByName] allValues]];

                // Extract the names of the attributes with Key-Value Coding
                activityAttributeNames = [entityAttributeArray valueForKeyPath:@"name"];
                [entityAttributeArray release];
            //}
            return activityAttributeNames;
        }

    Read the article

  • The 80 column limit, still useful?

    - by Tim Post
    Related: While coding, how many columns do you format for?

    Is there a valid reason for enforcing a maximum width of 80 characters in a code file, this day and age? I mostly use C, however this question is language agnostic. It's also subjective, so I'll tag it as such. Many individual projects set their own various coding standards, a guide to adjust your coding style. Many enforce an 80 column limit on code, i.e. don't force a dumb 80 x 25 terminal to wrap your lines in someone else's editor of choice if they are stuck with such a display, don't force them to turn off wrapping. Both private and open source projects usually have some style guidelines.

    My question is, in this day and age, is that requirement more of a pest than a helper? Does anyone still log in via the local console with no framebuffer and actually edit code? If so, how often, and why can't you use SSH? I help to manage a few open source projects; I was considering extending this limit to 110 columns, but I wanted to get feedback first. So, any feedback is appreciated. I can see the need to make certain OUTPUT of programs (i.e. a --help /h display) 80 columns or less, but I really don't see the need to force people to break up code under 110 columns long into 2 lines, when it's easier to read on one line. I can also see the case for adhering to an 80 column limit if you're writing code that will be used on micro controllers that have to be serviced in the field with a god-knows-what terminal emulator. Beyond that, what are your thoughts?

    Edit: This is not an exact duplicate. I am asking very specific questions, such as how many people are actually still using such a display. I am also not asking "what is a good column limit"; I'm proposing one and hoping to gather feedback. Beyond that, I'm also citing cases where the 80 column limit is still a good idea. I don't want a guide to my own "c-style"; I'm hoping to adjust standards for several projects. If the duplicate in question had answered all of my questions, I would not have posted this one :) That will teach me to mention it next time.

    Edit 2: question |= COMMUNITY_WIKI

    Read the article

  • How should I compress a file with multiple bytes that are the same with Huffman coding?

    - by Omega
    On my great quest for compressing/decompressing files with a Java implementation of Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding) for a school assignment, I am now at the point of building a list of prefix codes. Such codes are used when decompressing a file. Basically, the code is made of zeroes and ones that are used to follow a path in a Huffman tree (left or right) for, ultimately, finding a byte. In this Wikipedia image, to reach the character m the prefix code would be 0111. The idea is that when you compress the file, you will basically convert all the bytes of the file into prefix codes instead (they tend to be smaller than 8 bits, so there's some gain). So every time the character m appears in a file (which in binary is actually 1101101), it will be replaced by 0111 (if we used the tree above). Therefore, 1101101110110111011011101101 becomes 0111011101110111 in the compressed file.

    I'm okay with that. But what if the following happens: in the file to be compressed there exists only one unique byte, say 1101101, and there are 1000 of that byte. Technically, the prefix code of such a byte would be... none, because there is no path to follow, right? I mean, there is only one unique byte anyway, so the tree has just one node. Therefore, if the prefix code is none, I would not be able to write the prefix code in the compressed file, because, well, there is nothing to write.

    Which brings this problem: how would I compress/decompress such a file if it is impossible to write a prefix code when compressing? (Using Huffman coding, due to the school assignment's rules.) This tutorial seems to explain a bit more about prefix codes: http://www.cprogramming.com/tutorial/computersciencetheory/huffman.html but doesn't seem to address this issue either.
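    One way to handle the degenerate case, and this is an assumption on my part rather than anything the assignment prescribes, is to special-case a tree that consists of a single leaf and give that one byte a one-bit code such as "0"; the decoder then only needs that byte and the original length. A rough Java sketch (Node here is a hypothetical minimal tree class, defined inline):

        import java.util.HashMap;
        import java.util.Map;

        public class HuffmanCodes {

            // Walk the Huffman tree and collect a prefix code for every leaf.
            // If the tree is a single leaf (only one distinct byte in the file),
            // assign it the one-bit code "0" so there is still something to write.
            static Map<Byte, String> buildCodes(Node root) {
                Map<Byte, String> codes = new HashMap<>();
                if (root.isLeaf()) {
                    codes.put(root.value, "0");   // degenerate case: one unique byte
                } else {
                    collect(root, "", codes);
                }
                return codes;
            }

            private static void collect(Node node, String prefix, Map<Byte, String> codes) {
                if (node.isLeaf()) {
                    codes.put(node.value, prefix);
                } else {
                    collect(node.left, prefix + "0", codes);
                    collect(node.right, prefix + "1", codes);
                }
            }

            // Minimal tree node, just enough for the sketch.
            static class Node {
                final byte value;
                final Node left, right;

                Node(byte value)           { this.value = value; this.left = null; this.right = null; }
                Node(Node left, Node right) { this.value = (byte) 0; this.left = left; this.right = right; }

                boolean isLeaf() { return left == null && right == null; }
            }
        }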

    Read the article

  • How is the return address specified on the stack?

    - by Mask
    This is what I see when disassembling the statement function(1,2,3);:

        movl   $0x3,0x8(%esp)
        movl   $0x2,0x4(%esp)
        movl   $0x1,(%esp)
        call   0x4012d0 <_Z8functioniii>

    It seems the return address is not pushed onto the stack at all, so how does ret work?

    Read the article

  • nasm/yasm arguments, linkage to C++

    - by arionik
    Hello everybody, I've got a question concerning nasm and its linkage to C++. I declare a little test function as

        extern "C" void __cdecl myTest( byte i1, byte i2, int stride, int *width );

    and I call it like this:

        byte i1 = 1, i2 = 2;
        int stride = 3, width = 4;
        myTest( i1, i2, stride, &width );

    The method only serves to debug assembly and have a look at how the stack pointer is used to get the arguments. Beyond that, the pointer argument's value shall be set to 7, to figure out how that works. This is implemented like this:

        global _myTest
        _myTest:
            mov eax, [esp+4]         ; 1
            mov ebx, [esp+8]         ; 2
            mov ecx, dword [esp+16]  ; width
            mov edx, dword [esp+12]  ; stride
            mov eax, dword [esp+16]
            mov dword [eax], 7
            ret

    and compiled via

        yasm -f win32 -g cv8 -m x86 -o "$(IntDir)\$(InputName).obj" "$(InputPath)"

    then linked to the C++ app. In debug mode everything works fine: the function is called a couple of times and works as expected, whereas in release mode the function works once, but subsequent program operations fail. It seems to me that something's wrong with stack/frame pointers, near/far, but I'm quite new to this subject and need a little help. Thanks in advance! a.

    Read the article

  • Windows32 API: "mov edi,edi" on function entry?

    - by Ira Baxter
    I'm stepping through Structured Exception Handling recovery code in Windows 7 (e.g., what happens after the SEH handler is done and passes back a "CONTINUE" code). Here's a function which is called:

        7783BD9F  mov  edi,edi
        7783BDA1  push ebp
        7783BDA2  mov  ebp,esp
        7783BDA4  push 1
        7783BDA6  push dword ptr [ebp+0Ch]
        7783BDA9  push dword ptr [ebp+8]
        7783BDAC  call 778692DF
        7783BDB1  pop  ebp
        7783BDB2  ret  8

    I'm used to the function prolog of "push ebp / mov ebp,esp". What's the purpose of the "mov edi,edi"?

    Read the article

  • More about the Standard Entry Sequence

    - by Mask
    Quoted from here:

        _function:
            push ebp      ; store the old base pointer
            mov  ebp, esp ; make the base pointer point to the current
                          ; stack location – at the top of the stack is the
                          ; old ebp, followed by the return address and then
                          ; the parameters.
            sub  esp, x   ; x is the size, in bytes, of all
                          ; "automatic variables" in the function

    What's stored in esp in the above code snippet?

    Read the article

  • Java function call from JSP page is not returning a value

    - by Satyendra
    I am calling a Java function from a JSP page; it returns a file name after creating an XML file. In some cases where the file is large (the Java function takes a long time to execute because of the amount of data), it returns blank, whereas the XML file is generated some time later. Can anyone help me get the file name in this case, so that the user can know the name of the generated file?

    Read the article

  • C callback functions defined in an unnamed namespace?

    - by Johannes Schaub - litb
    Hi all. I have a C++ project that uses a C bison parser. The C parser uses a struct of function pointers to call functions that create proper AST nodes when productions are reduced by bison:

        typedef void Node;

        struct Actions {
            Node *(*newIntLit)(int val);
            Node *(*newAsgnExpr)(Node *left, Node *right);
            /* ... */
        };

    Now, in the C++ part of the project, I fill in those pointers:

        class AstNode { /* ... */ };
        class IntLit : public AstNode { /* ... */ };

        extern "C" {
            Node *newIntLit(int val) { return (Node*)new IntLit(val); }
            /* ... */
        }

        Actions createActions() {
            Actions a;
            a.newIntLit = &newIntLit;
            /* ... */
            return a;
        }

    Now, the only reason I put them within extern "C" is because I want them to have C calling conventions. But optimally, I would like their names to still be mangled. They are never called by name from C code, so name mangling isn't an issue. Having them mangled will avoid name conflicts, since some actions are called things like error, and the C++ callback function has ugly names like the following just to avoid name clashes with other modules:

        extern "C" {
            void uglyNameError(char const *str) { /* ... */ }
            /* ... */
        }

        a.error = &uglyNameError;

    I wondered whether it could be possible by merely giving the function type C linkage:

        extern "C" void fty(char const *str);

        namespace {
            fty error; /* Declared! But can I define it with that type!? */
        }

    Any ideas? I'm looking for Standard-C++ solutions.

    Read the article

  • Python: When passing variables between methods, is it necessary to assign them a new name?

    - by Anthony
    I'm thinking that the answer is probably 'no' if the program is small and there are a lot of methods, but what about in a larger program? If I am going to be using one variable in multiple methods throughout the program, is it smarter to:

    1. Come up with a different phrasing for each method (to eliminate naming conflicts).
    2. Use the same name for each method (to eliminate confusion).
    3. Just use a global variable (to eliminate both).

    This is more of a stylistic question than anything else. What naming convention do YOU use when passing variables?

    Read the article

  • What is on the 68000 stack when classic MacOS enters a program?

    - by John Källén
    I'm trying to understand an old classic Mac application's entry point. I've disassembled the first CODE resource (not CODE#0, which is the jump table). The code refers to some variables off the stack: a word at 0004(A7), an array of long words starting at 000C(A7) whose length is the value at 0004(A7), and a final long word beyond that array that seems to be a pointer to a character string. The array of long words looks like strings at first glance, so it looks superficially like we're dealing with an (int argc, char ** argv) situation, except the "argv" array is inline in the stack frame. What should a program be expecting on its stack / registers when it first gets called by the Mac OS?

    Read the article
