Search Results

Search found 15051 results on 603 pages for 'meta programming'.


  • How can I optimize my development machine's files/dirs?

    - by LuxuryMode
    Like any programmer, I've got a lot of stuff on my machine. Some of that stuff is projects of my own, some are projects I'm working on for my employer, others are open-source tools and projects, etc. Currently, I have my files organized as follows:

        /Code
            /development        (things I'm sort of hacking on, plus maybe libraries used in other projects)
                /scala          (organized by language... why? I don't know!)
                /android
                /ruby
            /employer_name
                /mobile
                    /android
                    /ios
            /open-source        (basically my forks that I'm pushing commits back upstream from)
                /some-awesome-oss-project
                /another-awesome-one
            /tools
        (random IDE settings sprinkled in here, plus some other apps)

    As you can see, things are kind of a mess here. How can I keep things organized in some sort of coherent fashion?

    Read the article

  • What is the meaning of 'high cohesion'?

    - by Max
    I am a student who recently joined a software development company as an intern. Back at the university, one of my professors used to say that we have to strive to achieve "low coupling and high cohesion". I understand the meaning of low coupling: it means keeping the code of separate components separate, so that a change in one place does not break the code in another. But what is meant by high cohesion? If it means integrating the various pieces of the same component well with each other, I don't understand how that becomes advantageous. What is meant by high cohesion? Can someone explain it with an example so I can understand its benefits?
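    As a rough illustration of the term (my own sketch, not from the question; the class names are invented), a cohesive class keeps only closely related responsibilities together, while a low-cohesion class mixes unrelated concerns:

        using System;
        using System.Collections.Generic;

        // Low cohesion: one class doing several unrelated jobs.
        class MiscHelper
        {
            public decimal SumInvoice(IEnumerable<decimal> lineItems)
            {
                decimal total = 0;
                foreach (var item in lineItems) total += item;
                return total;
            }

            public string FormatDate(DateTime d) => d.ToString("yyyy-MM-dd");

            public void LogToConsole(string message) => Console.WriteLine(message);
        }

        // High cohesion: every member exists to serve the single "invoice" concept.
        class Invoice
        {
            private readonly List<decimal> _lineItems = new List<decimal>();

            public void AddLineItem(decimal amount) => _lineItems.Add(amount);

            public decimal Total
            {
                get
                {
                    decimal total = 0;
                    foreach (var item in _lineItems) total += item;
                    return total;
                }
            }
        }

    The benefit is that a change to how invoices are totalled touches only Invoice, and the class can be understood and tested on its own.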

    Read the article

  • undefined control sequence in a NOWEB document

    - by Jean Baldraque
    I'm writing a TeX-noweb document. I compile it with

        noweave -tex -filter "elide comment:*" texcode.nw > documentation.tex

    but when I try to compile the resulting file with

        xetex -halt-on-error documentation.tex

    I get the following error message:

        ! Undefined control sequence.
        <argument> ...on}\endmoddef \nwstartdeflinemarkup \nwenddeflinemarkup

    It seems that \nwenddeflinemarkup is not recognized. If I delete all occurrences of \nwstartdeflinemarkup and \nwenddeflinemarkup from the document, it compiles without errors. What could the problem be?

    Read the article

  • Where can I find game postmortems with a programmer's perspective? [on hold]

    - by Ken
    There are a number of interesting game post-mortems in places like the GDC Vault or gamasutra.com. The post-mortems are generally given from a CEO/manager perspective or a designer perspective, or, more often, a combination of both (e.g. the DOOM postmortem). But I have not been able to find many post-mortems that are primarily from the programmer's perspective. I'm looking for discussions and rationale for technical choices and trade-offs, and how technical problems were overcome. The motivation here is to learn what kinds of problems real game programmers encounter and how they go about solving them. A perfect example of what I'm looking for is Renaud Bédard's excellent GDC talk on the development of Fez, "Cubes all the way down". Where can I find more like that?

    Read the article

  • Is there a point to writing in C or C++ instead of C# without knowing specifically what would make a program faster?

    - by user828584
    I wrote a small library in Python for handling the Xbox 360's STFS files, to be used in my web applications. I would like to rewrite it for use in the many desktop programs people are writing for 360 game modding, but I'm not quite sure whether I should continue using C# or delve into C++ or even C. STFS is an in-file file system used by the Xbox 360, and the job of the library would be extracting/injecting files, which could take a noticeable amount of time. What I know of C# comes from internet tutorials and resources, as would anything I learn about C++. So what I'm asking is: is it better to move to a slightly lower-level language without knowing beforehand which features of that language increase performance, or should I keep assuming that compiler optimizations, plus my lack of experience, mean the language I choose won't matter?

    Read the article

  • What is So Unique About Node.js?

    - by Adrian Shum
    Recently there has been a lot of praise for Node.js. I am not a developer who has had much exposure to network applications. From my bare understanding of Node.js, its strength is that we have only one thread handling multiple connections, providing an event-based architecture. However, in Java, for example, I can create only one thread using NIO/AIO (which are non-blocking APIs, from my bare understanding), handle multiple connections using that thread, and provide an event-based architecture to implement the data-handling logic (which shouldn't be that difficult, by providing some callbacks, etc.). Given that the JVM is an even more mature VM than V8 (I would expect it to run faster too), and that an event-based handling architecture doesn't seem difficult to create, I am not sure why Node.js is attracting so much attention. Did I miss some important points?
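    For what it's worth, here is a minimal sketch in C# (my own, not from the question; the port and class names are invented) of the style being discussed: one accept loop, no thread per connection, and non-blocking I/O resumed via the callbacks that async/await generates.

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading.Tasks;

        // A tiny echo server: many connections are multiplexed without dedicating
        // a blocked thread to each one, roughly the property the question attributes
        // to Node.js and to Java NIO/AIO.
        class EchoServer
        {
            static async Task Main()
            {
                var listener = new TcpListener(IPAddress.Loopback, 5000);
                listener.Start();
                while (true)
                {
                    TcpClient client = await listener.AcceptTcpClientAsync();
                    _ = HandleAsync(client); // don't block the accept loop
                }
            }

            static async Task HandleAsync(TcpClient client)
            {
                using (client)
                {
                    NetworkStream stream = client.GetStream();
                    var buffer = new byte[1024];
                    int read;
                    while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                    {
                        await stream.WriteAsync(buffer, 0, read); // echo back
                    }
                }
            }
        }

    Whether this is "the same thing" as Node.js is exactly what the question is probing; the sketch only shows that the event-driven, non-blocking pattern itself is expressible elsewhere.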

    Read the article

  • Should I modify an entity with many parameters or with the entity itself?

    - by Saeed Neamati
    We have a SOA-based system. The service methods are like:

        UpdateEntity(Entity entity)

    For small entities, it's all fine. However, when entities get bigger and bigger, to update one property we have to follow this pattern in the UI:

      1. Get parameters from the UI (user)
      2. Create an instance of the entity, using those parameters
      3. Get the entity from the service
      4. Write code to fill in the unchanged properties
      5. Give the resulting entity to the service

    Another option that I've seen in previous projects is to create semantic update methods for each update scenario. In other words, instead of having one global all-encompassing update method, we had many ad-hoc parametric methods. For example, for the User entity, instead of having an UpdateUser(User user) method, we had these methods:

        ChangeUserPassword(int userId, string newPassword)
        AddEmailToUserAccount(int userId, string email)
        ChangeProfilePicture(int userId, Image image)
        ...

    Now, I don't know which approach is truly better, and we have encountered problems with each. I'm going to design the infrastructure for a new system, and I don't have enough reasons to pick either of these approaches. I couldn't find good resources on the Internet because I lacked the right keywords. Which approach is better? What pitfalls does each have? What benefits can we get from each one?
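    A hypothetical C# sketch of the two service shapes being compared (the type and method names are invented, not taken from the system in the question):

        using System;

        public class User
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string Email { get; set; }
            public string PasswordHash { get; set; }
        }

        // Option 1: one coarse-grained, all-encompassing update.
        // The caller must fetch, repopulate and send back the whole entity.
        public interface IUserServiceCoarse
        {
            User GetUser(int userId);
            void UpdateUser(User user);
        }

        // Option 2: many fine-grained, semantic updates.
        // Each method carries only the data relevant to one scenario.
        public interface IUserServiceSemantic
        {
            void ChangeUserPassword(int userId, string newPassword);
            void AddEmailToUserAccount(int userId, string email);
            void RenameUser(int userId, string newName);
        }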

    Read the article

  • What is the difference between industrial development and open source development?

    - by Ida
    Intuitively, I think the open source development process should be much more "casual" than an industrial development process (like at Microsoft), because in OSS development:

      - Separation of duties is not as strict as in big companies (maybe developers == testers in open source development?)
      - People come in and out of an open source community much more frequently than at big companies

    However, the above are just my guesses. I really want to know more about the major differences between open source and industrial development. Is their division of duties totally different (e.g., is there a leader/manager-like role in open source development)? Maybe it is their communication style that differs a lot? Or their workflow? Please share your opinions. Thanks a lot!

    Read the article

  • graphical interface when using assembly language

    - by Hellbent
    I'm looking to use assembly language to make a great game, not just an average game but a really great one. I want to learn a framework to use from assembly. I know that's not possible without learning the framework in C first, so I'm thinking of learning SDL in C and then learning (teaching myself) how to interpret the program and drive it from assembly language code, which shouldn't be that hard. Then I will have a window and some graphics routines to display the game, while using assembly to code everything else. I need to spend some time learning SDL, and then some more time learning how to code all those statements in assembly while calling C functions, knowing which registers the calls use for return values and what they leave behind, etc. My question is: is this a good way to go, or is there something better for getting a graphical window and display using assembly language? Regards, HellBent

    Read the article

  • Attributes of an Ethical Programmer?

    - by ahmed
    Software that we write has ramifications in the real world. If it didn't, it wouldn't be very useful. Thus, it has the potential to sweep across the world faster than a deadly man-made virus, or to affect society every bit as much as genetic manipulation. Maybe we can't see how right now, but in the future our code will have ever-greater potential for harm or good. Of course, there's the issue of hacking. That's clearly a crime. Or is it that clear? Isn't hacking acceptable for our government when national security is at stake? What about for other governments? Cases of life-and-death emergency? Tracking down deadbeat parents? Screening the genetic profile of job candidates? Where is the line drawn? Who decides? Do programmers have responsibility for how their code is used? What if a programmer writes code to pry into confidential information or copy-protected material? Does he bear responsibility along with the person who used the program? What about a programmer who knowingly or unknowingly writes code to "fix the books"? Should he be liable?

    Read the article

  • Are 2 lines of push/pop code for each pre-draw state too many?

    - by Griffin
    I'm trying to simplify vector graphics management in XNA, currently by incorporating state preservation. 2X lines of push/pop code for X states feels like too many, and it just feels wrong to have two lines of code that look identical except for one being push() and the other being pop(). The goal is to eradicate this repetitiveness, and I hoped to do so by creating an interface through which a client can pass refs to the class/struct values it wants restored after the rendering calls. Also note that many beginner programmers will be using this, so forcing lambda expressions or other advanced C# features on client code is not a good idea. I attempted to accomplish my goal by using Daniel Earwicker's Ptr class:

        public class Ptr<T>
        {
            Func<T> getter;
            Action<T> setter;

            public Ptr(Func<T> g, Action<T> s) { getter = g; setter = s; }

            public T Deref
            {
                get { return getter(); }
                set { setter(value); }
            }
        }

    an extension method:

        // doesn't work for structs since this is just syntactic sugar
        public static Ptr<T> GetPtr<T>(this T obj)
        {
            return new Ptr<T>(() => obj, v => obj = v);
        }

    and a Push function:

        // returns a Pop Action for later calling
        public static Action Push<T>(ref T structure) where T : struct
        {
            T pushedValue = structure; // copies the struct data
            Ptr<T> p = structure.GetPtr();
            return new Action(() => { p.Deref = pushedValue; });
        }

    However, this doesn't work, as noted in the code comments. How might I accomplish my goal? Example of code to be refactored:

        protected override void RenderLocally(GraphicsDevice device)
        {
            if (!(bool)isCompiled) { Compile(); }

            // TODO: make sure state settings don't implicitly delete any buffers/resources
            RasterizerState oldRasterState = device.RasterizerState;
            DepthFormat oldFormat = device.PresentationParameters.DepthStencilFormat;
            DepthStencilState oldBufferState = device.DepthStencilState;
            {
                // Rendering code
            }
            device.RasterizerState = oldRasterState;
            device.DepthStencilState = oldBufferState;
            device.PresentationParameters.DepthStencilFormat = oldFormat;
        }
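    One common alternative, offered as a rough sketch rather than a fix taken from the question: wrap the save/restore pair in an IDisposable scope so that a using block performs the "pop" automatically. The XNA state types are the ones the question already touches; everything else here is invented.

        using System;
        using Microsoft.Xna.Framework.Graphics;

        // Saves device state on construction and restores it on Dispose,
        // so the caller writes the "push" once and the "pop" happens for free.
        public sealed class DeviceStateScope : IDisposable
        {
            private readonly GraphicsDevice _device;
            private readonly RasterizerState _rasterizer;
            private readonly DepthStencilState _depthStencil;

            public DeviceStateScope(GraphicsDevice device)
            {
                _device = device;
                _rasterizer = device.RasterizerState;
                _depthStencil = device.DepthStencilState;
            }

            public void Dispose()
            {
                _device.RasterizerState = _rasterizer;
                _device.DepthStencilState = _depthStencil;
            }
        }

        // Usage inside a draw method:
        //     using (new DeviceStateScope(device))
        //     {
        //         // rendering code that changes device state
        //     }

    This keeps the client code to one line per scope, at the cost of asking beginners to understand the using statement.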

    Read the article

  • What is the philosophy/reasoning behind C#'s Pascal-casing method names?

    - by Nocturne
    I'm just starting to learn C#. Coming from a background in Java, C++ and Objective-C, I find C#'s Pascal-casing of its method names rather unique, and a tad difficult to get used to at first. What is the reasoning and philosophy behind it? I'm guessing it is because of C# properties. Unlike in Objective-C, where method names can be exactly the same as an instance variable, this is not the case with C#. I would guess one of the goals with properties (as it is in most of the languages that support them) is to make properties truly indistinguishable from variables and methods. So, one can have an "int x" in C#, and the corresponding property becomes X. To ensure that properties and methods are indistinguishable, I'm guessing all method names are therefore also expected to start with an uppercase letter. (This is just my hypothesis based on what I know of C# so far; I'm still learning.) I'm very curious to know how this curious guideline came into being, given that it's not something one sees in most other languages, where method names are expected to start with a lowercase letter. (EDIT: By Pascal-casing, I mean PascalCase, which is basically camelCase but starting with a capital letter. Method names typically start with a lowercase letter in most languages.)
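    A small illustrative snippet (mine, not from the question) of the conventional C# casing being described:

        // Fields are typically camelCase (often with a leading underscore),
        // while properties and methods are PascalCase, so "Count" reads the same
        // whether it happens to be implemented as a property or as a method.
        public class Counter
        {
            private int _count;          // field

            public int Count             // property
            {
                get { return _count; }
            }

            public void Increment()      // method
            {
                _count++;
            }
        }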

    Read the article

  • "Imprinting" as a language feature?

    - by MKO
    Idea

    I had this idea for a language feature that I think would be useful; does anyone know of a language that implements something like this? The idea is that besides inheritance, a class can also use something called "imprinting" (for lack of a better term). A class can imprint one or several (non-abstract) classes. When a class imprints another class, it gets all its properties and all its methods. It's like the class storing an instance of the imprinted class and redirecting its methods/properties to it. A class that imprints another class therefore by definition also implements all its interfaces and its abstract class.

    So what's the point? Well, inheritance and polymorphism are hard to get right. Often composition gives far more flexibility. Multiple inheritance offers a slew of different problems without much benefit (IMO). I often write adapter classes (in C#) by implementing some interface and passing along the actual methods/properties to an encapsulated object. The downside to that approach is that if the interface changes, the class breaks. You also have to put in a lot of code that does nothing but pass things along to the encapsulated object. A classic example is a class that implements IEnumerable or IList and contains an internal collection it uses. With this technique things would be much easier.

    Example (C#):

        [imprint List<Person> as peopleList]
        public class People : PersonBase
        {
            public void SomeMethod()
            {
                DoSomething(this.Count); // Count is from List
            }
        }

        // Now People can be treated as a List<Person>
        People people = new People();
        foreach (Person person in people)
        {
            ...
        }

    peopleList is an alias/variable name (of your choice) used internally to refer to the imprinted instance, but it can be skipped if not needed. One thing that's useful is to override an imprinted method; that could be achieved with the ordinary override syntax:

        public override void Add(Person person)
        {
            DoSomething();
            peopleList.Add(person);
        }

    Note that the above is functionally equivalent to (and could be rewritten by the compiler as):

        public class People : PersonBase, IList<Person>
        {
            private List<Person> peopleList = new List<Person>();

            public override void Add(object obj)
            {
                this.peopleList.Add(obj);
            }

            public override int IndexOf(object obj)
            {
                return peopleList.IndexOf(obj);
            }

            // etc. etc. for each signature in the interface
        }

    Only if IList changes will your class break. IList won't change, but an interface that you, someone on your team, or a third party has designed might just change. Also, this saves you writing a whole lot of code for some interfaces/abstract classes.

    Caveats

    There are a couple of gotchas. First, syntax must be added to call the imprinted classes' constructors from the imprinting class's constructor. Also, what happens if a class imprints two classes which have the same method? In that case the compiler would detect it and force the class to define an override of that method (where you could choose whether to call either imprinted class, or both). So what do you think: would it be useful, and are there any caveats? It seems it would be pretty straightforward to implement something like that in the C# language, but I might be missing something :)

    Sidenote - Why is this different from multiple inheritance

    OK, so some people have asked about this: why is this different from multiple inheritance, and why not just use multiple inheritance? In C#, methods are either virtual or not. Say that we have ClassB, which inherits from ClassA. ClassA has the methods MethodA and MethodB. ClassB overrides MethodA but not MethodB. Now say that MethodB has a call to MethodA. If MethodA is virtual, it will call the implementation that ClassB has; if not, it will use the base class ClassA's MethodA, and you'll end up wondering why your class doesn't work as it should. By this terminology you might already be confused. So what happens if ClassB inherits both from ClassA and another ClassC? I bet both programmers and compilers will be scratching their heads. The benefit of this approach, IMO, is that the imprinting classes are totally encapsulated and need not be designed with multiple inheritance in mind. You can basically imprint anything.

    Read the article

  • Is the copy/paste approach professionally viable when working with the Google Maps API?

    - by Ian Campbell
    I find that I understand many of the JavaScript concepts used in the Google Maps API code, but then again there is quite a bit of the syntax that is way over my head. For example, the geocoder syntax seems to be of an Ajax form, though I don't understand what is happening under the hood (especially with lines like results[0].geometry.location). I am able to modify the body of if (status == google.maps.GeocoderStatus.OK) for different purposes, though. So, given that I am able to take various code from the Developer's Guide and rework it to an extent for my own purposes, all the while not fully understanding what Google Maps is actually doing, does this make me a copy-paste programmer? Is this a bad practice, or is it professionally viable? I am, of course, interested in learning as much as I can, but what if time constraints outweigh the learning process?

    Read the article

  • Defining formulas through a user interface in a user form

    - by BriskLabs Pakistan
    I am a student developing a simple assignment: a Windows Forms application in Visual Studio 2010. The application is supposed to construct formulas as per user requirements. The process: it has to pick data from columns of a Microsoft Access database, and the user should be able to pick the data by column name, like we do in a drop-down menu, and create reusable formulas from it (configure a formula once and be able to change it again). The following are column titles from the database that can be picked, for example:

        Col 1: Marks in Maths
        Col 2: Total Marks in Maths
        Col 3: Marks in Science
        Col 4: Total Marks in Science

    Finally, we should be able to construct any formula in the UI, like:

        (Col 1 + Col 3) / (Col 2 + Col 4) = Formula 1

    Once this formula is set, it is saved and a name is assigned to it by the user. He/she can then use the formula, and the results shall appear in a window below. I.e., he would be able to calculate his desired figures (formula) by only manipulating the underlying data in the UI layer: choose the data for a period, apply the formula, and get the answer.

    Problem: It looks like I have to create an app where rules are set through the UI. This means no stored procedures are required in SQL. Please suggest the right approach.
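    For the evaluation step alone, here is one minimal C# sketch (my own illustration; the column names, values and formula string are invented) that leans on ADO.NET's DataColumn expressions, which can evaluate simple arithmetic over named columns:

        using System;
        using System.Data;

        class FormulaDemo
        {
            static void Main()
            {
                var table = new DataTable();
                table.Columns.Add("MarksMaths", typeof(double));
                table.Columns.Add("TotalMaths", typeof(double));
                table.Columns.Add("MarksScience", typeof(double));
                table.Columns.Add("TotalScience", typeof(double));
                table.Rows.Add(40.0, 50.0, 35.0, 50.0);

                // A user-built formula kept as a string; the tokens are column names.
                string formula = "(MarksMaths + MarksScience) / (TotalMaths + TotalScience)";

                // The expression column is recomputed per row, so the saved formula is reusable.
                table.Columns.Add("Formula1", typeof(double), formula);

                Console.WriteLine(table.Rows[0]["Formula1"]); // prints 0.75
            }
        }

    The real application would load the rows from the Access database and let the user assemble the formula string from drop-downs, but the store-and-reuse idea is the same.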

    Read the article

  • Should I forward a call to .Equals onto .Equals<T>?

    - by Jaimal Chohan
    So, I've got your bog-standard C# object, overriding Equals and implementing IEquatable<T>:

        public override int GetHashCode()
        {
            return _name.GetHashCode();
        }

        public override bool Equals(object obj)
        {
            return Equals(obj as Tag);
        }

        #region IEquatable<Tag> Members

        public bool Equals(Tag other)
        {
            if (other == null)
                return false;
            else
                return _name == other._name;
        }

        #endregion

    Now, for some reason, I used to think that forwarding the call from Equals(object) into Equals(Tag) was bad; no idea why, perhaps I read it a long time ago. Anyway, I'd write separate (but logically the same) code for each method. Now I think forwarding Equals(object) to Equals(Tag) is okay, for obvious reasons, but for the life of me I can't remember why I thought it wasn't before. Any thoughts?

    Read the article

  • OpenGL setup on Windows

    - by kevin james
    I have been trying to use OpenGL for two days now: first on Mac, then on Windows. The problem with the Mac is that it doesn't support the newer versions of OpenGL. I ran a tutorial that actually did get some things working, but it only works in Xcode (i.e., I can't create a new file, paste in the same code, and get it to work). Because of these issues, I moved to Windows. My Windows 7 machine has OpenGL 4.3, which is the same version used in a lot of other tutorials. However, not one of these tutorials gives any instruction on how to set it up for the first time. I have come across some vague posts saying that some libraries need to be linked. But WHAT libraries, and HOW do I link them? Please help. I am pretty desperate to set this up, as this project is due for work soon. I have actually used OpenGL before at my university, but the computers there already had everything set up. The project itself is very easy, but setting up OpenGL is not something I know how to do.

    Read the article

  • Objects won't render when Texture Compression + Mipmapping is Enabled

    - by felipedrl
    I'm optimizing my game and I've just implemented compressed (DXTn) texture loading in OpenGL. I've worked my way through removing bugs, but I can't figure out this one: objects with DXTn + mipmapped textures are not being rendered. It's not like they appear with a flat color; they just don't appear at all. DXTn-textured objects render, and mipmapped non-compressed textures render just fine. The texture in question is 256x256 and I generate the mips all the way down to 4x4, i.e. 1 block. I've checked it in gDEBugger and it displays all the levels (7) just fine. I'm using GL_LINEAR_MIPMAP_NEAREST for the min filter and GL_LINEAR for the mag one. The texture is compressed and the mipmaps are created offline with the Paint.NET tool, using the super-sampling method. (I also tried bilinear, just in case.) Source follows:

    [SNIPPET 1: Loading DDS into sys memory + Initializing Object]

        // Read header
        DDSHeader header;
        file.read(reinterpret_cast<char*>(&header), sizeof(DDSHeader));

        uint pos = static_cast<uint>(file.tellg());
        file.seekg(0, std::ios_base::end);
        uint dataSizeInBytes = static_cast<uint>(file.tellg()) - pos;
        file.seekg(pos, std::ios_base::beg);

        // Read file data
        mData = new unsigned char[dataSizeInBytes];
        file.read(reinterpret_cast<char*>(mData), dataSizeInBytes);
        file.close();

        mMipmapCount = header.mipmapcount;
        mHeight = header.height;
        mWidth = header.width;
        mCompressionType = header.pf.fourCC;

        // Only support files divisible by 4 (for compression blocks algorithms)
        massert(mWidth % 4 == 0 && mHeight % 4 == 0);
        massert(mCompressionType == NO_COMPRESSION ||
                mCompressionType == COMPRESSION_DXT1 ||
                mCompressionType == COMPRESSION_DXT3 ||
                mCompressionType == COMPRESSION_DXT5);

        // Allow textures up to 65536x65536
        massert(header.mipmapcount <= MAX_MIPMAP_LEVELS);

        mTextureFilter = TextureFilter::LINEAR;
        if (mMipmapCount > 0)
        {
            mMipmapFilter = MipmapFilter::NEAREST;
        }
        else
        {
            mMipmapFilter = MipmapFilter::NO_MIPMAP;
        }

        mBitsPerPixel = header.pf.bitcount;

        if (mCompressionType == NO_COMPRESSION)
        {
            if (header.pf.flags & DDPF_ALPHAPIXELS)
            {
                // The only format supported w/ alpha is A8R8G8B8
                massert(header.pf.amask == 0xFF000000 && header.pf.rmask == 0xFF0000 &&
                        header.pf.gmask == 0xFF00 && header.pf.bmask == 0xFF);
                mInternalFormat = GL_RGBA8;
                mFormat = GL_BGRA;
                mDataType = GL_UNSIGNED_BYTE;
            }
            else
            {
                massert(header.pf.rmask == 0xFF0000 && header.pf.gmask == 0xFF00 &&
                        header.pf.bmask == 0xFF);
                mInternalFormat = GL_RGB8;
                mFormat = GL_BGR;
                mDataType = GL_UNSIGNED_BYTE;
            }
        }
        else
        {
            uint blockSizeInBytes = 16;
            switch (mCompressionType)
            {
            case COMPRESSION_DXT1:
                blockSizeInBytes = 8;
                if (header.pf.flags & DDPF_ALPHAPIXELS)
                {
                    mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT;
                }
                else
                {
                    mInternalFormat = GL_COMPRESSED_RGB_S3TC_DXT1_EXT;
                }
                break;
            case COMPRESSION_DXT3:
                mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
                break;
            case COMPRESSION_DXT5:
                mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
                break;
            default:
                // Not supported (DXT2, DXT4 or any other compression format)
                massert(false);
            }
        }

    [SNIPPET 2: Uploading into video memory]

        massert(mData != NULL);

        glGenTextures(1, &mHandle);
        massert(mHandle != 0);
        glBindTexture(GL_TEXTURE_2D, mHandle);

        commitFiltering();

        uint offset = 0;
        Renderer* renderer = Renderer::getInstance();

        switch (mInternalFormat)
        {
        case GL_RGB:
        case GL_RGBA:
        case GL_RGB8:
        case GL_RGBA8:
            for (uint i = 0; i < mMipmapCount + 1; ++i)
            {
                uint width = std::max(1U, mWidth >> i);
                uint height = std::max(1U, mHeight >> i);
                glTexImage2D(GL_TEXTURE_2D, i, mInternalFormat, width, height,
                             mHasBorder, mFormat, mDataType, &mData[offset]);
                offset += width * height * (mBitsPerPixel / 8);
            }
            break;
        case GL_COMPRESSED_RGB_S3TC_DXT1_EXT:
        case GL_COMPRESSED_RGBA_S3TC_DXT1_EXT:
        case GL_COMPRESSED_RGBA_S3TC_DXT3_EXT:
        case GL_COMPRESSED_RGBA_S3TC_DXT5_EXT:
        {
            uint blockSize = 16;
            if (mInternalFormat == GL_COMPRESSED_RGB_S3TC_DXT1_EXT ||
                mInternalFormat == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT)
            {
                blockSize = 8;
            }

            uint width = mWidth;
            uint height = mHeight;

            for (uint i = 0; i < mMipmapCount + 1; ++i)
            {
                uint nBlocks = ((width + 3) / 4) * ((height + 3) / 4);

                // Only POT textures allowed for mipmapping
                massert(width % 4 == 0 && height % 4 == 0);

                glCompressedTexImage2D(GL_TEXTURE_2D, i, mInternalFormat, width, height,
                                       mHasBorder, nBlocks * blockSize, &mData[offset]);
                offset += nBlocks * blockSize;

                if (width <= 4 && height <= 4)
                {
                    break;
                }

                width = std::max(4U, width / 2);
                height = std::max(4U, height / 2);
            }
            break;
        }
        default:
            // Not supported
            massert(false);
        }

    Also, I don't understand the "+3" in the block count computation, but while looking for a solution to my problem I've seen people defining it that way. I guess it won't make a difference for POT textures, but I put it in just in case. Thanks.

    Read the article

  • Why say my function is of IFly type rather than saying it's of Airplane type?

    - by Vishwas Gagrani
    Say I have two classes, Airplane and Bird, and both of them fly. Both implement the interface IFly. IFly declares a function StartFlying(). Thus both Airplane and Bird have to define the function and use it as per their requirements. Now, when I write a manual for the class reference, what should I write for the function StartFlying?

      1. StartFlying is a function of type IFly.
      2. StartFlying is a function of type Airplane.
      3. StartFlying is a function of type Bird.

    My opinion is that 2 and 3 are more informative, but what I see is that class references use the first one: they say which interface the function is declared in. The problem is, I really don't get any usable information from knowing that StartFlying is of IFly type. However, knowing that StartFlying is a function inside Airplane and Bird is more informative, as I can decide which instance (Airplane or Bird) to use. Any light on this: how does saying that StartFlying is a function of type IFly help a programmer understand how to use the function?
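    For reference, a minimal C# rendering of the setup described (my own sketch; the method bodies are invented):

        using System;

        public interface IFly
        {
            void StartFlying();
        }

        public class Airplane : IFly
        {
            public void StartFlying() => Console.WriteLine("Spooling up engines and taking off.");
        }

        public class Bird : IFly
        {
            public void StartFlying() => Console.WriteLine("Flapping wings.");
        }

        public static class Demo
        {
            // Documenting StartFlying against IFly describes the contract:
            // this method can launch anything that flies, without caring which kind it got.
            public static void Launch(IFly flyer) => flyer.StartFlying();
        }

    The usual argument for option 1 is exactly the Launch method above: a caller may only ever hold an IFly reference, so the interface is the most general place to document the behaviour.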

    Read the article

  • Is it bad practice to output from within a function?

    - by Nick
    For example, should I be doing something like:

        <?php
        function output_message($message, $type = 'success') {
        ?>
            <p class="<?php echo $type; ?>"><?php echo $message; ?></p>
        <?php
        }

        output_message('There were some errors processing your request', 'error');
        ?>

    or:

        <?php
        function output_message($message, $type = 'success') {
            ob_start();
        ?>
            <p class="<?php echo $type; ?>"><?php echo $message; ?></p>
        <?php
            return ob_get_clean();
        }

        echo output_message('There were some errors processing your request', 'error');
        ?>

    I understand they both achieve the same end result, but are there benefits to doing it one way over the other? Or does it not even matter?

    Read the article

  • Techniques for separating game model from presentation

    - by liortal
    I am creating a simple 2D game using XNA. The elements that make up the game world are what I refer to as the "model". For instance, in a board game, I would have a GameBoard class that stores information about the board. This information could be things such as:

      - Location
      - Size
      - Details about cells on the board (occupied/not occupied)
      - etc.

    This object should either know how to draw itself, or describe how to draw itself to some other entity (a renderer) in order to be displayed. I believe that since the board only contains the data and logic for things regarding it or cells on it, it should not provide the logic for how to draw things (separation of concerns). How can I achieve a good partitioning, and easily allow some other entity to draw it properly? My motivations for doing so are:

      - Allowing multiple "implementations" of presentation for a single game entity
      - Easier porting to other environments where the presentation code is not available (for example, porting my code to Unity or other game technology that does not rely on XNA)
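    One common shape for that partitioning, as a hypothetical C# sketch (class and member names are invented; a console renderer stands in for an XNA SpriteBatch one so the sketch stays self-contained): the model exposes state and rules, and a separate renderer type owns every drawing concern.

        // Model: pure state and rules, no drawing code.
        public class GameBoard
        {
            public int Columns { get; }
            public int Rows { get; }
            private readonly bool[,] _occupied;

            public GameBoard(int columns, int rows)
            {
                Columns = columns;
                Rows = rows;
                _occupied = new bool[columns, rows];
            }

            public bool IsOccupied(int column, int row) => _occupied[column, row];
            public void Occupy(int column, int row) => _occupied[column, row] = true;
        }

        // Presentation: one of possibly many renderers for the same model.
        public interface IGameBoardRenderer
        {
            void Draw(GameBoard board);
        }

        public class ConsoleBoardRenderer : IGameBoardRenderer
        {
            public void Draw(GameBoard board)
            {
                for (int row = 0; row < board.Rows; row++)
                {
                    for (int col = 0; col < board.Columns; col++)
                        System.Console.Write(board.IsOccupied(col, row) ? 'X' : '.');
                    System.Console.WriteLine();
                }
            }
        }

    Swapping ConsoleBoardRenderer for an XNA-, Unity- or test-specific implementation then leaves GameBoard untouched.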

    Read the article

  • Should we consider the code language during design?

    - by Codex73
    Summary: This question aims to determine whether an application's intended usage should be a consideration when deciding on a development language, and what factors, if any, should be taken into account when choosing the language to write in.

    Application type: Web

    Question: Of the following popular languages, when should we use one rather than another?

    Languages:

      - PHP
      - Ruby
      - Python

    My initial thought is that the language shouldn't be considered as much as the framework. Things to consider about a framework are scalability, usage, load, portability, modularity and many more. Things to consider about writing the code may be cost, framework stability, community, etc.

    Read the article

  • Thoughts on type aliases/synonyms?

    - by Rei Miyasaka
    I'm going to try my best to frame this question in a way that doesn't result in a language war or a list, because I think there could be a good, technical answer to this question. Different languages support type aliases to varying degrees. C# allows type aliases to be declared at the beginning of each code file, and they're valid only throughout that file. Languages like ML/Haskell use type aliases probably as much as they use type definitions. C/C++ are sort of a Wild West, with typedef and #define often being used seemingly interchangeably to alias types. The upsides of type aliasing don't invite too much dispute:

      - It makes it convenient to define composite types that are described naturally by the language, e.g. type Coordinate = float * float or type String = [Char].
      - Long names can be shortened: using DSBA = System.Diagnostics.DebuggerStepBoundaryAttribute.
      - In languages like ML or Haskell, where function parameters often don't have names, type aliases provide a semblance of self-documentation.

    The downside is a bit more iffy: aliases can proliferate, making it difficult to read and understand code or to learn a platform. The Win32 API is a good example, with its DWORD = int and its HINSTANCE = HANDLE = void* and its LPHANDLE = HANDLE FAR* and such. In all of these cases it hardly makes any sense to distinguish between a HANDLE and a void pointer, or a DWORD and an integer, etc. Setting aside the philosophical debate of whether a king should give complete freedom to his subjects and let them be responsible for themselves, or whether all of their questionable actions should be intervened upon, could there be a happy medium that would allow the benefits of type aliasing while mitigating the risk of its abuse? As an example, the issue of long names can be solved by good autocomplete features. Visual Studio 2010, for instance, will allow you to type DSBA in order to refer IntelliSense to System.Diagnostics.DebuggerStepBoundaryAttribute. Could there be other features that would provide the other benefits of type aliasing more safely?

    Read the article

  • Should I concentrate on writing code for money or my studies while in college?

    - by A-Cube
    I am a college student of Software Engineering. My worry is that while I am concentrating on my studies, my peers are getting down to code (e.g. HTML, ASP, PHP, etc.) to earn money. Should I be worried that I am not coding like them? I was asked to be a Microsoft Student Partner, but I refused because the person who was doing it before me told me it was just arranging events, nothing like actually working with Microsoft and coding. Should I be writing code and earning money while I am still in my 4th semester? C++ is the only language taught in my college so far. Will the projects I do count toward a job later, or should I concentrate on my studies for now to get the maximum benefit?

    Read the article

  • Motivation and use of move constructors in C++

    - by Giorgio
    I recently have been reading about move constructors in C++ (see e.g. here) and I am trying to understand how they work and when I should use them. As far as I understand, a move constructor is used to alleviate the performance problems caused by copying large objects. The wikipedia page says: "A chronic performance problem with C++03 is the costly and unnecessary deep copies that can happen implicitly when objects are passed by value." I normally address such situations by passing the objects by reference, or by using smart pointers (e.g. boost::shared_ptr) to pass around the object (the smart pointers get copied instead of the object). What are the situations in which the above two techniques are not sufficient and using a move constructor is more convenient?

    Read the article
