Search Results

Search found 15377 results on 616 pages for 'socket programming'.

  • Why is Adobe AIR so underrated for building mobile apps?

    - by Marcelo de Assis
    I have worked with Adobe Flash-related technologies for the last 5 years, though I'm not a big fan of Adobe. I see small bugs in some apps, but I cannot understand why so many big companies won't even consider Adobe AIR as a technology for their mobile apps. I see a lot of mobile developer positions asking for experts in Android or iOS, but far fewer asking for Adobe AIR, even though AIR apps have the advantage of being multi-platform, with the same app working on BlackBerry, iOS, and Android. It is so much easier to develop a game using Flash than using the Android SDK, for example. Does it really have flaws (that I have never seen), or is it just some kind of mass prejudice? I would also like to hear what a project manager or an indie developer weighs when choosing a platform for building apps.

    Read the article

  • What does "context-free" mean in the term "context-free grammar"?

    - by rick
    Given the amount of material that tries to explain what a context-free grammar (CFG) is, I found it surprising that very few (in my sample, fewer than 1 in 20) explain why such grammars are called "context-free". And, to my mind, none succeeds in doing so. My question is: why are context-free grammars called context-free? What is "the context"? I had an intuition that the context could be the other language constructs surrounding the currently analyzed construct, but that seems not to be the case. Could anyone provide a precise explanation?
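
    To make "the context" concrete (an illustration of my own, not from the original question): in a context-free grammar, every production rewrites a single nonterminal wherever it occurs, while a context-sensitive grammar may allow a rewrite only when particular symbols, the context, surround the nonterminal. In LaTeX notation:

        % Context-free: A may be rewritten no matter what surrounds it.
        \[ A \to a\,A\,b \]
        % Context-sensitive: A may be rewritten only when flanked by x and y,
        % and the flanking context must be preserved by the rewrite.
        \[ x\,A\,y \to x\,a\,b\,y \]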

    Read the article

  • Is Linq having a mind-numbing effect on .NET programmers?

    - by Aaronaught
    A lot of us started seeing this phenomenon with jQuery about a year ago when people started asking how to do absolutely insane things like retrieve the query string with jQuery. The difference between the library (jQuery) and the language (JavaScript) is apparently lost on many programmers, and results in a lot of inappropriate, convoluted code being written where it is not necessary. Maybe it's just my imagination, but I swear I'm starting to see an uptick in the number of questions where people are asking to do similarly insane things with Linq, like find ranges in a sorted array. I can't get over how thoroughly inappropriate the Linq extensions are for solving that problem, but more importantly the fact that the author just assumed that the ideal solution would involve Linq without actually thinking about it (as far as I can tell). It seems that we are repeating history, breeding a new generation of .NET programmers who can't tell the difference between the language (C#/VB.NET) and the library (Linq). What is responsible for this phenomenon? Is it just hype? Magpie tendencies? Has Linq picked up a reputation as a form of magic, where instead of actually writing code you just have to utter the right incantation? I'm hardly satisfied with those explanations but I can't really think of anything else. More importantly, is it really a problem, and if so, what's the best way to help enlighten these people?
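
    To make the "ranges in a sorted array" example concrete (a sketch of my own, assuming the usual formulation of that problem): the tempting Linq one-liner scans all n elements, while the sortedness the problem hands you invites an O(log n) binary search for each boundary.

        using System;

        class RangeDemo
        {
            static void Main()
            {
                int[] sorted = { 1, 3, 3, 5, 8, 8, 8, 13 };

                // The "obvious" Linq version ignores the sort entirely:
                // var range = sorted.Where(x => x >= 3 && x <= 8).ToArray();

                // Exploiting the sort instead: find each boundary by binary search.
                int lo = LowerBound(sorted, 3); // first index with value >= 3
                int hi = LowerBound(sorted, 9); // first index with value > 8
                Console.WriteLine($"values in [3,8] occupy indices {lo}..{hi - 1}");
            }

            // Classic lower-bound binary search over a sorted array.
            static int LowerBound(int[] a, int target)
            {
                int lo = 0, hi = a.Length;
                while (lo < hi)
                {
                    int mid = (lo + hi) / 2;
                    if (a[mid] < target) lo = mid + 1; else hi = mid;
                }
                return lo;
            }
        }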

    Read the article

  • Exploring your database schema with SQL

    In the second part of Phil's series of articles on finding stuff (such as objects, scripts, entities, metadata) in SQL Server, he offers some scripts that should be handy for the developer faced with tracking down problem areas and potential weaknesses in a database.

    Read the article

  • What's a nice explanation for pointers?

    - by Macneil
    In your own studies (on your own, or for a class) did you have an "aha" moment when you finally, really understood pointers? Do you have an explanation you use for beginner programmers that seems particularly effective? For example, when beginners first encounter pointers in C, they might just add &s and *s until it compiles (as I myself once did). Maybe it was a picture, or a really well-motivated example, that made pointers "click" for you or your student. What was it, and what did you try before that didn't seem to work? Were any topics prerequisites (e.g., structs or arrays)? In other words, what was necessary to understand the meaning of & and *, to the point where you could use them with confidence? Learning the syntax and terminology or the use cases isn't enough; at some point the idea needs to be internalized. Update: I really like the answers so far; please keep them coming. There are a lot of great perspectives here, but I think many are good explanations/slogans for ourselves after we've internalized the concept. I'm looking for the detailed contexts and circumstances in which it dawned on you. For example: I only somewhat understood pointers syntactically in C. I heard two of my friends explaining pointers to another friend, who asked why a struct was passed with a pointer. The first friend talked about how it needed to be referenced and modified, but it was just a short comment from the other friend where it hit me: "It's also more efficient." Passing 4 bytes instead of 16 bytes was the final conceptual shift I needed.
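
    A minimal demonstration of the mechanics being asked about (my own sketch, written in C#'s unsafe subset, where & and * behave as they do in C; compiling it requires AllowUnsafeBlocks):

        using System;

        class PointerSketch
        {
            static unsafe void Main()
            {
                int x = 5;
                int* p = &x;            // & takes an address: p now points at x
                Console.WriteLine(*p);  // * follows the pointer: prints 5
                *p = 10;                // writing through p changes x itself
                Console.WriteLine(x);   // prints 10
            }
        }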

    Read the article

  • Can someone tell me why people writing GNU code use { like that?

    - by Deus Deceit
    I saw the rules on how to format code for the Unity project, and for GNU software in general. Why do they write code in such an ugly form? Is there a particular reason why they don't put brackets the way (from what I know) most people do? Why like this:

        for (i = 0; i < 5; i++)
          {
            //do something
          }

    and not like this:

        for (i = 0; i < 5; i++) {
            //Do something
        }

    or this:

        for (i = 0; i < 5; i++)
        {
            //Do Something
        }

    Read the article

  • Triangle Strips and Tangent Space Normal Mapping

    - by Koarl
    Short: Do triangle strips and tangent-space normal mapping go together? According to quite a lot of tutorials on bump mapping, it is common practice to derive tangent-space matrices in a vertex program, transform the light direction vector(s) to tangent space, and then pass them on to a fragment program. However, if one is using triangle strips or index buffers, the vertex buffer will contain vertices that sit on border edges and would thus require more than one normal to derive the tangent-space matrices that fragment programs interpolate between. Is there any reasonable way to avoid duplicate vertices in your buffer and still use tangent-space normal mapping? Which do you think is better: having the normal and tangent encoded in the assets and optimizing the geometry handling to alleviate the cost of duplicate vertices, or using triangle strips and computing normals/tangents entirely at run time? Thinking about it, the more reasonable answer seems to be the first one, but then why might my professor still be fussing about triangle strips when it seems so obvious?
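
    For reference, a sketch of the per-vertex work described above (my own illustration, with made-up names): build the TBN basis from the vertex normal and tangent, then express a world-space light direction in that basis. A vertex shared by faces with different normals needs a different basis per face, which is exactly the duplicate-vertex problem.

        using System.Numerics;

        static class TangentSpace
        {
            public static Vector3 ToTangentSpace(
                Vector3 lightWorld, Vector3 normal, Vector3 tangent)
            {
                Vector3 n = Vector3.Normalize(normal);
                Vector3 t = Vector3.Normalize(tangent);
                Vector3 b = Vector3.Cross(n, t); // bitangent completes the basis
                // Dotting against each basis vector projects the light
                // direction onto the axes of tangent space.
                return new Vector3(
                    Vector3.Dot(lightWorld, t),
                    Vector3.Dot(lightWorld, b),
                    Vector3.Dot(lightWorld, n));
            }
        }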

    Read the article

  • developers-designers-testers interaction [closed]

    - by user29124
    Sorry for my bad English; you may not want to read this and waste your time, because it is just the lament of a layman developer... It seems no one wants to learn anything at my workplace. We have the Mantis bug tracker, but our testers use Google Docs for reports, and only developers and the team lead report bugs in Mantis. We have SVN for version control and use Smarty as our template system, but our designers hand us pure HTML in archives (sometimes it's ugly for programmers, but mostly it's OK), and the changes programmers make to the design go nowhere (I mean the designers keep using their own obsolete HTML and CSS most of the time). We have a testing environment, but designers don't have restricted accounts for it, so we can only ask them where to look for a problem and then investigate it ourselves (and make changes to the CSS ourselves, which, again, go nowhere most of the time). I will not even mention legacy code without documentation, tests, or any requirements; my point is the absence of real interaction in the programmers-designers-testers triangle. I'm not talking about using HAML, SASS, continuous integration, or anything else like that, just about all participants in the development process using the basic tools. Maybe the absence of communication is not a problem on short projects that finish in 2 months, but it is on projects that last for years. Any comments, please...

    Read the article

  • Appropriate level of granularity for component-based architecture

    - by Jon Purdy
    I'm working on a game with a component-based architecture. An Entity owns a set of Component instances, each of which has a set of Slot instances with which to store, send, and receive values. Factory functions such as Player produce entities with the required components and slot connections. I'm trying to determine the best level of granularity for components. For example, right now Position, Velocity, and Acceleration are all separate components, connected in series. Velocity and Acceleration could easily be rewritten into a uniform Delta component, or Position, Velocity, and Acceleration could be combined alongside such components as Friction and Gravity into a monolithic Physics component. Should a component have the smallest responsibility possible (at the cost of lots of interconnectivity) or should related components be combined into monolithic ones (at the cost of flexibility)? I'm leaning toward the former, but I could use a second opinion.
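
    A sketch of the two granularities (hypothetical names of my own, using 2D vectors for brevity): fine-grained components chained in series versus one monolithic component that owns the whole integration step.

        using System.Numerics;

        // Fine-grained: each concern is its own component, and slot
        // connections wire the chain Acceleration -> Velocity -> Position.
        class Position     { public Vector2 Value; }
        class Velocity     { public Vector2 Value; }
        class Acceleration { public Vector2 Value; }

        // Monolithic: one Physics component owns the whole step, which is
        // simpler to wire up but fixes the behavior in one place.
        class Physics
        {
            public Vector2 Position, Velocity, Acceleration;

            public void Tick(float dt)
            {
                Velocity += Acceleration * dt;
                Position += Velocity * dt;
            }
        }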

    Read the article

  • Would you use UML if it kept stakeholders from requesting changes frequently?

    - by Huperniketes
    As much as programmers hate to document their code/system and draw UML (especially Sequence, Activity, and State machine diagrams) or other diagramming notation, would you agree to do it if it kept managers from requesting a "minor change" every couple of weeks? IOW, would you put together visual models to document the system if it helped you demonstrate to managers what the effects of changes are and why they take so long to implement? (Edited to help programmers understand what type of answer I'm looking for.) 2nd edit: Restating my question: "Would you be willing to use some diagramming notation, against your better nature as a programmer, if it helped you manage change requests?" This question isn't asking if there might be something wrong with the process. It's a given that there's something wrong with the process. Would you be willing to do more work to improve it?

    Read the article

  • Get Hands On with Raspberry Pi via Free OS-Building Course

    - by Jason Fitzpatrick
    Cambridge University is now offering a free 12-segment course that will guide you through building an OS from scratch for the tiny Raspberry Pi development board; learn the ins and outs of basic OS design on the cheap. You'll need a Raspberry Pi board, a computer running Windows, OS X, or Linux, and an SD card, as well as a small amount of free software. The 12-part tutorial starts you off with basic OS theory and then walks you through basic control of the board, graphics manipulation, and, finally, creating a command line interface for your new operating system. Hit up the link below to read more and check out the lessons. Baking Pi – Operating Systems Development

    Read the article

  • How to have a maintainable and manageable JavaScript code base

    - by dade
    I am starting a new job soon as a frontend developer. The app I will be working on is 100% JavaScript on the client side; all the server returns is an index page that loads all the JavaScript files needed by the app. Now here is the problem: the whole application is built around functions wrapped in different namespaces, and from what I see, a simple task like rendering a page's HTML can involve calls to 2 or more functions across different namespaces... My initial thought was "this does not feel like the perfect solution", and I can already envisage a lot of issues with maintaining the code and extending it down the line. I will soon start taking the project forward, and I would like suggestions on good practices for writing and managing a relatively large amount of JavaScript code.

    Read the article

  • Creating an expandable, cross-platform compatible program "core".

    - by Thomas Clayson
    Hi there. Basically the brief is relatively simple. We need to create a program core: an engine that will power all sorts of programs, with a large number of distinct potential applications and deployments. The core will be an analytics and algorithmic processor which will essentially take user-specific input and output scenarios based on the information it gets, whilst recording this information for reporting. It needs to be cross-platform compatible: something that can have platform-specific layers put on top which interface with the core. It also needs to be expandable, for instance modular, with developers being able to write add-ons or extensions which can alter the function of the end program and can use the core to its full extent. (A good example of what I'm looking to create is a browser: it has its main core, the WebKit engine for instance, and on top of this it has a platform-specific GUI, and it can also have add-ons and extensions which change the behavior of the program.) Our problem is that the extensions need to interface directly with the main core and expand/alter that functionality, rather than the platform-specific "layer". So, given that I have no experience in this whatsoever (I have a PHP background and recently Objective-C), where should I start, and is there any knowledge/wisdom you can impart to me, please? Thanks for all the help and advice you can give me. :) If you need any more explanation just ask. At the moment it's in the very early stages of development, so we're just researching all possible routes of development. Thanks a lot
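
    One common shape for this kind of design (a hedged sketch with made-up names, not the project's actual API): the core publishes a narrow extension interface, and add-ons attach to the core itself, so platform layers and extensions stay independent of each other.

        using System;
        using System.Collections.Generic;

        public interface ICore
        {
            // Extensions register behavior directly with the core,
            // bypassing any platform-specific layer.
            void RegisterProcessor(Func<string, string> processor);
        }

        public interface IExtension
        {
            void Attach(ICore core);
        }

        public sealed class Core : ICore
        {
            private readonly List<Func<string, string>> processors = new();

            public void RegisterProcessor(Func<string, string> processor) =>
                processors.Add(processor);

            // The processing pipeline runs user input through every add-on.
            public string Process(string input)
            {
                foreach (var p in processors) input = p(input);
                return input;
            }
        }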

    Read the article

  • Why do some people hate Dart? [closed]

    - by Hassan
    First, I'd like to note that this question is not intended to compare two languages or technologies, but is only asking about criticisms aimed at a language. I've always thought it a good idea to somehow get rid of JavaScript. It works, but it's just so messy. I think many will agree with me there. And that's how I interpreted Google's release of Dart: it seems to me like a very good alternative to JavaScript. Now, it looks like some are not very happy that Google has released this new language. Take a look at this Wikipedia page to see what I'm talking about. If you don't feel like reading it, I'll tell you now that some seem to think Dart is similar to Microsoft's VBScript, in that it only works in Microsoft's browsers. This goes against the web's openness. But it's my understanding that Dart can be compiled to JavaScript, which allows it to run in any modern browser (as the Wikipedia article also states). So my question is: are these criticisms valid? Is there a real fear that Google is trying to control the web's front end to make it more compatible with its own browser?

    Read the article

  • if/else statements or exceptions

    - by Thaven
    I don't know whether this question fits better on this board or on Stack Overflow, but I ask it here because it concerns practices rather than a specific problem. Consider an object that does something, and this something can (but should not!) go wrong. The situation can be handled in two ways. First, with exceptions:

        DoSomethingClass exampleObject = new DoSomethingClass();
        try
        {
            exampleObject.DoSomething();
        }
        catch (ThisCanGoWrongException ex)
        {
            [...]
        }

    Second, with an if statement:

        DoSomethingClass exampleObject = new DoSomethingClass();
        if (!exampleObject.DoSomething())
        {
            [...]
        }

    Or the second case in a more sophisticated form:

        DoSomethingClass exampleObject = new DoSomethingClass();
        ErrorHandler error = exampleObject.DoSomething();
        if (error.HasError)
        {
            if (error.ErrorType == ErrorType.DivideByPotato)
            {
                [...]
            }
        }

    Which way is better? On the one hand, I have heard that exceptions should be used only for truly unexpected situations, and that if the programmer knows something may happen he should use if/else. On the other hand, Robert C. Martin wrote in his book Clean Code that exceptions are far more object-oriented, and simpler to keep clean.

    Read the article

  • Basis of definitions

    - by Yttrill
    Let us suppose we have a set of functions which characterise something: in the OO world, methods characterising a type. In mathematics these are propositions, and we have two kinds: axioms and lemmas. Axioms are assumptions; lemmas are easily derived from them. In C++, axioms are pure virtual functions. Here's the problem: there's more than one way to axiomatise a system. Given a set of propositions or methods, a subset of the propositions which is necessary and sufficient to derive all the others is called a basis. So too for methods or functions: we have a desired set which must be defined, typically each one has one or more definitions in terms of the others, and we require the programmer to provide instance definitions which are sufficient to allow all the others to be defined and, if there is an overspecification, consistent. Let me give an example (in Felix; Haskell code would be similar):

        class Eq[t] {
          virtual fun == (x:t, y:t): bool => eq(x, y);
          virtual fun eq (x:t, y:t) => x == y;
          virtual fun != (x:t, y:t): bool => not (x == y);

          axiom reflex(x:t): x == x;
          axiom sym(x:t, y:t): (x == y) == (y == x);
          axiom trans(x:t, y:t, z:t): implies(x == y and y == z, x == z);
        }

    Here it is clear: the programmer must define either == or eq, or both. If both are defined, the definitions must be equivalent. Failing to define one doesn't cause a compiler error; it causes an infinite loop at run time. Defining both inequivalently doesn't cause an error either; it is just inconsistent. Note that the axioms constrain the semantics of any definition. Given a definition of ==, either directly or via a definition of eq, != is defined automatically, although the programmer might replace the default with something more efficient; clearly such an overspecification has to be consistent. Please note, == could also have been defined in terms of !=, but we didn't do that. A characterisation of a partial or total order is more complex, and much more demanding, since there is a combinatorial explosion of possible bases. There is a reason to desire overspecification: performance. There is also another reason: choice and convenience. So here there are several questions. One is how to check that the semantics are obeyed, and I am not looking for an answer to that here (way too hard!). The other question is: how can we specify, and check, that an instance provides at least a basis? And a much harder question: how can we provide several default definitions which depend on the basis chosen?
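
    The run-time loop described above is easy to reproduce elsewhere; a hedged C# analogue of my own (default interface methods, C# 8+, illustrative names):

        // Each default is defined in terms of the other, mirroring == and
        // eq above. A class implementing neither compiles fine but recurses
        // forever at run time; overriding either one establishes a basis.
        interface IEq<T>
        {
            bool Eq(T a, T b) => Equal(a, b);
            bool Equal(T a, T b) => Eq(a, b);
        }

        class IntEq : IEq<int>
        {
            public bool Eq(int a, int b) => a == b; // one definition suffices
        }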

    Read the article

  • How to handle a player's level and its consequent privileges?

    - by Songo
    I'm building a game similar to Mafia Wars, where a player can do tasks for his gang and gain experience, thus advancing his level. The game is built using PHP and a MySQL database. In the game I want to limit the resources allowed to a player based on his level. For example:

                  | Max gold | Max army size | Max moves | ...
        Level 1   |     1000 |           100 |        10 | ...
        Level 2   |     1500 |           200 |        20 | ...
        Level 3   |     3000 |           300 |        25 | ...
        ...

    In addition, certain features of the game are locked until a certain level is reached: players under level 10 can't trade in the game market, players under level 20 can't create alliances, etc. The way I have modeled this is by implementing a very long ACL (Access Control List) with about 100 entries (an entry for each level). However, I think there may be a simpler approach, seeing that this feature has been implemented in many games before.
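
    For illustration, one data-driven alternative (a sketch of my own, names made up): keep the per-level caps in a single table keyed by level, and derive the feature gates from threshold checks, instead of writing one ACL entry per level.

        using System.Collections.Generic;

        record LevelCaps(int MaxGold, int MaxArmySize, int MaxMoves);

        static class Levels
        {
            // One row per level -- in the game itself this would live in a
            // MySQL table rather than in code.
            static readonly Dictionary<int, LevelCaps> Caps = new()
            {
                [1] = new LevelCaps(1000, 100, 10),
                [2] = new LevelCaps(1500, 200, 20),
                [3] = new LevelCaps(3000, 300, 25),
            };

            public static LevelCaps CapsFor(int level) => Caps[level];

            // Feature gates reduce to simple threshold checks.
            public static bool CanTradeInMarket(int level)  => level >= 10;
            public static bool CanCreateAlliance(int level) => level >= 20;
        }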

    Read the article

  • How do I implement a score database in Android?

    - by Michael Seun Araromi
    I am making a 2D game for Android using OpenGL ES. It is a space shooting game where the player shoots enemy ships. I want to keep track of the score for the number of enemy ships destroyed, and a record of the local high score. The score should be incremented whenever an enemy is destroyed. I also want a way of displaying both the current score and the high score on the game screen. I am not familiar with databases at all, and I would appreciate a clear answer or a link to a good tutorial for my case. Thanks.

    Read the article

  • Writing Large Portions Of Code Then Debugging?

    - by The Floating Brain
    Lately I have been writing a game engine, and I have been writing a lot of "foundation stuff" (standard interfaces, modules, a message system, etc.), but I have noticed a pattern: a lot of the stuff is interdependent, and I cannot debug until everything is done, hence I go about 3 to 5 hours at a time without debugging. I am wondering if this is an acceptable practice for this part of the project, and if not, can anyone give me some advice? -----Update-----: I downloaded some code metrics tools, and my program's cyclomatic complexity is 1.52, which as I understand it is good and should correlate with high cohesion. If I am wrong, please correct me.

    Read the article

  • Overloading interface buttons, what are the best practices?

    - by XMLforDummies
    Imagine you will always have a button labeled "Continue" in the same position in your app's GUI. Would you rather make a single button instance that takes different actions depending on the current state?

        private State currentState = State.Step1;

        private void ContinueButton_Click()
        {
            switch (currentState)
            {
                case State.Step1:
                    DoThis();
                    currentState = State.Step2;
                    break;
                case State.Step2:
                    DoThat();
                    break;
            }
        }

    Or would you rather have something like this?

        public Form()
        {
            this.ContinueStep2Button.Visible = false;
        }

        private void ContinueStep1Button_Click()
        {
            DoThis();
            this.ContinueStep1Button.Visible = false;
            this.ContinueStep2Button.Visible = true;
        }

        private void ContinueStep2Button_Click()
        {
            DoThat();
        }

    Read the article

  • Why should I use List<T> over IEnumerable<T>?

    - by Rowan Freeman
    In my ASP.NET MVC4 web application I use IEnumerables, trying to follow the mantra to program to the interface, not the implementation:

        Return IEnumerable(Of Student)

    vs

        Return New List(Of Student)

    People are telling me to use List and not IEnumerable, because lists force the query to be executed and IEnumerable does not. Is this really best practice? Is there any alternative? I feel strange using concrete objects where an interface could be used. Is my strange feeling justified?
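
    A self-contained demonstration of the "lists force the query to be executed" point (my own sketch, in C#): a LINQ IEnumerable is re-evaluated every time it is enumerated, while ToList() runs the query once and snapshots the result.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class DeferredDemo
        {
            static void Main()
            {
                var source = new List<int> { 1, 2, 3 };

                IEnumerable<int> lazy = source.Where(n => n > 1); // deferred: no work yet
                List<int> eager = lazy.ToList();                  // executed: snapshot taken

                source.Add(4);

                Console.WriteLine(lazy.Count());  // 3 -- the query re-runs and sees 4
                Console.WriteLine(eager.Count);   // 2 -- the list was fixed at ToList()
            }
        }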

    Read the article

  • What attracts software developers to program for the Android mobile platform?

    - by Hasnan Karim
    Dear programmers, as part of my final-year university project, I am conducting research into what makes programmers prefer to program for Android as opposed to other mobile operating systems. The description does not need to be detailed; rather, I am trying to find patterns among programmers in order to determine what properties (other than money) a mobile platform such as Android must have in order to attract programmers and therefore grow.

    Read the article

  • How to begin in Game Development? [closed]

    - by Bladimir Ruiz
    It's been a while since I decided to get into game development, but there are so many ways to make a game that I don't know where to begin. I got a Unity 3D license for PC/Android/iOS for free, but I also got the XNA dev tools, and I ALSO have the Corona SDK, and I don't know which one to use. For now, all I want is to make a side-scroller like Super Mario Bros., just for a start; later on I would like to make different games. In the future I would like to work in the game industry. Which tools would be the best for starting on that "dream"?

    Read the article

  • Why not have a High Level Language based OS? Are Low Level Languages more efficient?

    - by rtindru
    Without being presumptuous, I would like you to consider the possibility of this. Most OSes today are based on pretty low-level languages (mainly C/C++); even new ones such as Android use JNI, and the underlying implementation is in C. In fact (this is a personal observation), many programs written in C run a lot faster than their high-level counterparts (e.g. Transmission, a BitTorrent client on Ubuntu, is a whole lot faster than Vuze (Java) or Deluge (Python)). Even Python implementations are written in C, although PyPy is an exception. So is there a particular reason for this? Why is it that all our so-called "high-level languages" with their great "OOP" concepts can't be used in making a solid OS? I have 2 questions, basically. First, why are applications written in low-level languages more efficient than their HLL counterparts? Do low-level languages perform better simply because they are low-level and translate to machine code more easily? Second, why do we not have a full-fledged OS based entirely on a high-level language?

    Read the article

  • Is reference to bug/issue in commit message considered good practice?

    - by Christian P
    I'm working on a project where source control is set up to automatically write notes in the bug tracker: we simply put the bug's issue ID in the commit message, and the commit message is added as a note to that issue in the tracker. I can see only a few downsides: if at some point in the future the source code gets separated from the bug-tracking software (or the reported bugs/issues are somehow lost), or if someone is looking through the commit history without access to our bug tracker. My question is whether having a bug/issue reference in the commit message is considered good practice. Are there other downsides?

    Read the article
