Search Results

Search found 16554 results on 663 pages for 'programmers identity'.

Page 143/663 | < Previous Page | 139 140 141 142 143 144 145 146 147 148 149 150  | Next Page >

  • REST API rule about tunneling

    - by miku
    Just read this in the REST API Rulebook: GET and POST must not be used to tunnel other request methods. Tunneling refers to any abuse of HTTP that masks or misrepresents a message's intent and undermines the protocol's transparency. A REST API must not compromise its design by misusing HTTP's request methods in an effort to accommodate clients with limited HTTP vocabulary. Always make proper use of the HTTP methods as specified by the rules in this section. [highlights by me] But then a lot of frameworks use tunneling to expose REST interfaces via HTML forms, since <form> knows only about GET and POST. My most recent example is a MethodRewriteMiddleware for Flask (submitted by the author of the framework): http://flask.pocoo.org/snippets/38/. Are there any ways to comply with the "Rule" without hacks or add-ons in web frameworks?
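
    For illustration, a minimal sketch of the kind of method-rewriting WSGI middleware the linked snippet refers to (my own simplified version, not the actual code from flask.pocoo.org; the "_method" parameter name is an assumption): an HTML form submits via POST, and the middleware rewrites REQUEST_METHOD before the framework dispatches the request.

      # Hypothetical method-override middleware; the "_method" parameter name is made up.
      from urllib.parse import parse_qs

      class MethodRewriteMiddleware:
          ALLOWED = {"GET", "POST", "PUT", "DELETE", "PATCH"}

          def __init__(self, app):
              self.app = app

          def __call__(self, environ, start_response):
              # e.g. POST /items/42?_method=DELETE submitted from an HTML form
              query = parse_qs(environ.get("QUERY_STRING", ""))
              method = query.get("_method", [""])[0].upper()
              if method in self.ALLOWED:
                  environ["REQUEST_METHOD"] = method
              return self.app(environ, start_response)

      # Usage with a Flask app (hypothetical): app.wsgi_app = MethodRewriteMiddleware(app.wsgi_app)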

    Read the article

  • Shuffling algorithm with no "self-mapping"?

    - by OregonTrail
    To randomly shuffle an array, with no bias towards any particular permutation, there is the Fisher-Yates (Knuth) shuffle. In Python:

      #!/usr/bin/env python
      import sys
      from random import randrange

      def KFYShuffle(items):
          i = len(items) - 1
          while i > 0:
              j = randrange(i + 1)  # 0 <= j <= i
              items[j], items[i] = items[i], items[j]
              i = i - 1
          return items

      print KFYShuffle(range(int(sys.argv[1])))

    There is also Sattolo's algorithm, which produces random cycles. In Python:

      #!/usr/bin/env python
      import sys
      from random import randrange

      def SattoloShuffle(items):
          i = len(items)
          while i > 1:
              i = i - 1
              j = randrange(i)  # 0 <= j <= i-1
              items[j], items[i] = items[i], items[j]
          return items

      print SattoloShuffle(range(int(sys.argv[1])))

    I'm currently writing a simulation with the following specifications for the shuffling algorithm:

    1. The algorithm is unbiased: if a true random number generator were used, no permutation would be more likely than any other.
    2. No number ends up at its original index. (The input to the shuffle will always be A[i] = i for i from 0 to N-1.)
    3. Permutations are produced that are not cycles, but still meet specification 2.

    The cycles produced by Sattolo's algorithm meet specification 2, but not specifications 1 or 3. I've been working at creating an algorithm that meets these specifications; what I came up with was equivalent to Sattolo's algorithm. Does anyone have an algorithm for this problem?
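
    As an aside, a minimal sketch (my own illustration in Python 3, not from the question) of the simplest unbiased approach that satisfies the three specifications as written: rejection sampling. Run a Fisher-Yates shuffle and retry whenever some element ends up at its original index. Every derangement is then equally likely, and the expected number of attempts tends to about e ≈ 2.72, independent of N.

      import random

      def fisher_yates(items):
          """Standard unbiased in-place shuffle."""
          for i in range(len(items) - 1, 0, -1):
              j = random.randrange(i + 1)  # 0 <= j <= i
              items[i], items[j] = items[j], items[i]
          return items

      def random_derangement(n):
          """Uniformly random permutation of range(n) with no fixed points."""
          while True:
              items = fisher_yates(list(range(n)))
              if all(items[i] != i for i in range(n)):
                  return items

      print(random_derangement(10))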

    Read the article

  • Is it a good idea to put all assembly: WebResource declarations in the same .cs file?

    - by Guilherme J Santos
    I have a .NET library with some WebControls. These WebControls have embedded resources, which we currently declare in each control's .cs file, something like this:

      [assembly: WebResource("IO.Css.MyCSS.css", "text/css")]

      namespace MyNamespace.MyClass
      {
          [ParseChildren(true)]
          [PersistChildren(false)]
          [Designer(typeof(MyNamespace.MyClassDesigner))]
          public class QuickTip : Control, INamingContainer
          {
              // My code...
          }
      }

    Would it be a good idea to create a single .cs file and put all the WebResource declarations there instead? For example, a file containing just:

      [assembly: WebResource("IO.Css.MyCSS.css", "text/css")]
      [assembly: WebResource("IO.Image.MyImage.png", "image/png")]
      // ...and many other WebResources of all the WebControls in the assembly

    Read the article

  • Resources for Test Driven Development in Web Applications?

    - by HorusKol
    I would like to try to implement some TDD in our web applications to reduce regressions and improve release quality, but I'm not convinced of how well automated testing can perform with something as fluffy as web applications. I've read about and tried TDD and unit testing, but the examples are 'solid' and rather simple pieces of functionality like currency converters, and so on. Are there any resources that can help with unit testing content management and publication systems? How about unit testing a shopping cart/store (physical and online products)? AJAX? Googling for "Web Test Driven Development" just gets me old articles from several years ago, either covering the same calculator-like examples or discussions about why TDD is better than anything else (without any examples).
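
    For what a unit-level test of this kind of "fluffy" functionality can still look like, here is a minimal sketch of my own (the Cart class and its API are invented for illustration, not taken from any framework; runnable with pytest):

      class Cart:
          def __init__(self):
              self.items = {}  # sku -> (unit_price_cents, quantity)

          def add(self, sku, unit_price_cents, quantity=1):
              _, qty = self.items.get(sku, (unit_price_cents, 0))
              self.items[sku] = (unit_price_cents, qty + quantity)

          def total_cents(self):
              return sum(price * qty for price, qty in self.items.values())

      def test_adding_the_same_sku_twice_accumulates_quantity():
          cart = Cart()
          cart.add("BOOK-1", 1999)
          cart.add("BOOK-1", 1999, quantity=2)
          assert cart.total_cents() == 3 * 1999

      def test_empty_cart_totals_zero():
          assert Cart().total_cents() == 0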

    Read the article

  • Integrating application ad-support - best practice

    - by Jarede
    Considering the study that came out in March: researchers from Purdue University, in collaboration with Microsoft, claim that third-party advertising in free smartphone apps can be responsible for as much as 65 to 75 percent of an app's energy consumption. Is there a best practice for integrating advert support into mobile applications so as not to drain the user's battery too much? When you fire up Angry Birds on your Android phone, the researchers found that the core gaming component only consumes about 18 percent of total app energy. The biggest battery suck comes from the software powering third-party ads and analytics, accounting for 45 percent of total app energy, according to the study. Has anyone found better ways of keeping away from the "3G Tail", as the report puts it? Is it better/possible to download a large set of adverts that are cached for a few hours and use them to populate your ad space, to avoid constant use of the wifi/3G radios? Are there any best practices for the inclusion of adverts in mobile apps?
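
    To make the batch-and-cache idea concrete, here is a minimal sketch (all names are my own; fetch_ad_batch stands in for whatever call the ad network actually provides): fetch a batch of adverts once, keep them for a few hours, and only wake the radio again when the batch expires.

      import random
      import time

      class CachedAdProvider:
          def __init__(self, fetch_ad_batch, ttl_seconds=3 * 3600, batch_size=20):
              self._fetch = fetch_ad_batch      # hypothetical network call
              self._ttl = ttl_seconds
              self._batch_size = batch_size
              self._ads = []
              self._fetched_at = 0.0

          def next_ad(self):
              # Hit the network only when the cached batch has expired, instead
              # of waking the 3G/wifi radio for every single ad impression.
              if not self._ads or time.time() - self._fetched_at > self._ttl:
                  self._ads = self._fetch(self._batch_size)
                  self._fetched_at = time.time()
              return random.choice(self._ads)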

    Read the article

  • What is the best way to learn how to develop secure applications

    - by Kenneth
    I would like to get into computer security in my career. What are the best ways to learn how to program securely? It seems to me that besides textbooks and taking classes in the subject that perhaps learning how to "hack" would be one of the best ways to learn. My reason for thinking this is the thought that the best way to learn how to prevent someone from doing what you don't want them to is to learn what they're capable of doing. If this is the case, then this poses another question: How would you go about learning to hack in an ethical manner? I definitely don't want to break laws or cause harm in my quest. Thanks for the input!

    Read the article

  • How to render a terrain using height maps and get basic collision detection between the camera and the terrain (so the camera can move on top of it)

    - by M1kstur
    I have loaded a .RAW height map file into a 2D array in my class. The way I am rendering it works fine, but I am struggling to get the camera to move on top of the terrain. The terrain renders from 0,0,0 (x,y,z), as that is where I put my camera. My camera class allows the "camera" to move through the scene. I want to be able to "walk" on top of the terrain with some basic collision detection (if possible). Any tips on where to start with this?
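
    A minimal, framework-agnostic sketch of the usual approach (my own illustration; the names and grid layout are assumptions): sample the height map at the camera's (x, z) position with bilinear interpolation between the four surrounding grid points, then keep the camera's y at or above that height plus an eye offset.

      # heights is a 2D grid of terrain heights, cell_size the world-space spacing
      # between grid points; assumes (x, z) lies inside the terrain.
      def terrain_height(heights, cell_size, x, z):
          gx, gz = x / cell_size, z / cell_size
          x0 = min(int(gx), len(heights[0]) - 1)
          z0 = min(int(gz), len(heights) - 1)
          x1 = min(x0 + 1, len(heights[0]) - 1)
          z1 = min(z0 + 1, len(heights) - 1)
          fx, fz = gx - x0, gz - z0
          # Bilinear interpolation of the four surrounding samples.
          top = heights[z0][x0] * (1 - fx) + heights[z0][x1] * fx
          bottom = heights[z1][x0] * (1 - fx) + heights[z1][x1] * fx
          return top * (1 - fz) + bottom * fz

      def clamp_camera(camera_pos, heights, cell_size, eye_height=1.8):
          x, y, z = camera_pos
          floor = terrain_height(heights, cell_size, x, z) + eye_height
          return (x, max(y, floor), z)  # simple "walk on top" collision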

    Read the article

  • Examples of permission-based authorization systems in .Net?

    - by Rachel
    I'm trying to figure out how to do roles/permissions in our application, and I am wondering if anyone knows of a good place to get a list of different permission-based authorization systems (preferably with code samples), and perhaps a list of pros/cons for each method. I've seen examples using simple dictionaries, custom attributes, claims-based authorization, and custom frameworks, but I can't find a simple explanation of when to use one over another and what the pros/cons are of using each method. (I'm sure there are other ways than the ones I've listed.) I have never done anything complex with permissions/authorization before, so all of this seems a little overwhelming to me and I'm having trouble figuring out what is useful information that I can use and what isn't.

    What I DO know is that this is for a Windows environment using C#/WPF and WCF services. Some permission checks are done on the WCF service and some on the client. Some are business rules, some are authorization checks, and others are UI-related (such as what forms a user can see). They can be very generic, like boolean or numeric values, or they can be more complex, such as a range of values or a list of database items to be checked/unchecked. Permissions can be set at the group level, user level, branch level, or a custom level, so I do not want to use role-based authorization. Users can be in multiple groups, and users with the appropriate authorization are in charge of creating/maintaining these groups. It is not uncommon for new groups to be created, so they can't be hard-coded.
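
    A minimal sketch of the simplest shape such a system can take, independent of WCF/WPF (everything here is my own illustration: the keys, the levels, and the "user beats group" merge rule are all assumptions): permissions are looked up by key and resolved across whatever principals apply to the user, and the value can be a boolean, a number, or something richer.

      class PermissionStore:
          def __init__(self):
              # (principal_id, permission_key) -> value; a principal can be a
              # user, a group, a branch, or any custom level.
              self._grants = {}

          def grant(self, principal_id, key, value):
              self._grants[(principal_id, key)] = value

          def resolve(self, user_id, group_ids, key, default=None):
              # A user-level grant wins; otherwise the first matching group wins.
              if (user_id, key) in self._grants:
                  return self._grants[(user_id, key)]
              for gid in group_ids:
                  if (gid, key) in self._grants:
                      return self._grants[(gid, key)]
              return default

      store = PermissionStore()
      store.grant("group:tellers", "orders.max_amount", 500)
      store.grant("user:rachel", "orders.max_amount", 2000)
      print(store.resolve("user:rachel", ["group:tellers"], "orders.max_amount"))  # 2000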

    Read the article

  • Are there examples of non CRUD approaches?

    - by Pieter B
    I'm a programmer but have also worked as an archivist. As an archivist it's a lot about keeping data. I often get into arguments with colleagues when it comes to operations on data. I don't like the U and the D in CRUD too much. Rather than update a record, I prefer to add a new one and keep a reference to the old record. That way you build a history of changes. I also don't like deleting records, but rather mark them as inactive. Is there a term for this? Basically only creating and reading data? Are there examples of this approach?
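
    A minimal sketch of the approach described above (illustrative names only, not a recommendation of any particular library): an update appends a new version that references the record it supersedes, and a delete just marks the latest version inactive.

      import itertools
      import time

      class VersionedStore:
          def __init__(self):
              self._ids = itertools.count(1)
              self.records = {}   # record_id -> record dict
              self.latest = {}    # logical key -> most recent record_id

          def save(self, key, data):
              rec_id = next(self._ids)
              self.records[rec_id] = {
                  "key": key,
                  "data": data,
                  "supersedes": self.latest.get(key),  # reference to the old record
                  "created_at": time.time(),
                  "active": True,
              }
              self.latest[key] = rec_id
              return rec_id

          def deactivate(self, key):
              # "Delete" without losing any history.
              self.records[self.latest[key]]["active"] = False

          def history(self, key):
              rec_id = self.latest.get(key)
              while rec_id is not None:
                  yield self.records[rec_id]
                  rec_id = self.records[rec_id]["supersedes"]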

    Read the article

  • Architecture for interfacing multiple applications

    - by Erwin
    Let's say you have a Master Database and a few External/Internal applications that use WebServices to interface data. What would be your preferred architecture to interface data from and to those applications? Would you put some sort of Enterprise Service Bus in between? Like BizTalk? Or something cheaper? We don't want to block applications while they are interfacing, but we do want to use return codes from the interfaces to determine if we need to take some actions in the originating application or not.

    Read the article

  • Teaching myself, as a physicist, to become a better programmer

    - by user787267
    I've always liked physics, and I've always liked coding, so when I got the offer for a PhD position doing numerical physics (details are not relevant, it's mostly parallel programming for a cluster) at a university, it was a no-brainer for me. However, as most physicists, I'm self taught. I don't have broad background knowledge about how to code in an object oriented way, or the name of that specific algorithm that optimizes the search in some kD tree. Since all my work so far has been more concerned about the physics and the scientific results, I undoubtedly have some bad habits - more so because my coding is my own, and not really teamwork. I have mostly used C since it is very straightforward and "what you write is what you get" - no need for fancy abstractions. However, I have recently switched to C++ since I'd like to learn more about the power that comes with abstraction, and it's pretty C-like (syntax-wise at least). How do I teach myself to code in a good, abstract way like a graduate in computer science? I know my code is efficient, but I want it to be elegant as well, and readable. Keep in mind that I don't have time to read several 1000-page tomes about abstract programming. I need to spend time on actual, physics related research (my supervisor would laugh at me if he knew I spent time thinking about how to program elegantly). How do I assess if my work is also good from a programmer's perspective?

    Read the article

  • Web Frameworks caring about persistence?

    - by Mik378
    I have noticed that the Play! Framework includes a persistence strategy (JPA, etc.). Why would a web framework care about persistence? Wouldn't that be the job of server-side components (EJBs, etc.)? Otherwise the client would be too coupled to the server's business logic. UPDATE: One answer would be that it is mostly aimed at simple applications that contain the whole business logic themselves. However, for large applications with well-designed layers (services, domain, DAOs, etc.), persistence is not recommended within the web client layer, since there may be several different web (or non-web) clients.

    Read the article

  • SQL Query Builder/Designer and code Formating

    - by DavRob60
    I write SQL queries every now and then. I could easily write them freehand, but sometimes I create SQL queries using SQL query designers, for various reasons. (I won't start to enumerate them here and/or argue about their usefulness, so let's just say they are sometimes useful.) Anyway, I currently use two query designers: SQL Server Management Studio's Query Designer, and Visual Studio 2010's Query Builder (most often within the TableAdapter Query Configuration Wizard). There's something I hate about those two (I don't know about the others): the way they throw away my formatting of SQL queries after an edit. Is there any way to configure something to automatically reformat the SQL output, or is there any external tool/plug-in that I could use to do that job?

    Read the article

  • Unit testing statically typed functional code

    - by back2dos
    I wanted to ask you people in which cases it makes sense to unit test statically typed functional code, as written in Haskell, Scala, OCaml, Nemerle, F#, or Haxe (the last is what I am really interested in, but I wanted to tap into the knowledge of the bigger communities). I ask this because, from my understanding:

    One aspect of unit tests is to have the specs in runnable form. However, when employing a declarative style that directly maps the formalized specs to language semantics, is it even actually possible to express the specs in runnable form in a separate way that adds value?

    The more obvious aspect of unit tests is to track down errors that cannot be revealed through static analysis, and type-safe functional code is a good tool for coding extremely close to what your static analyzer understands. However, a simple mistake like using x instead of y (both being coordinates) in your code cannot be caught this way. Then again, such a mistake could also arise while writing the test code, so I am not sure whether it's worth the effort.

    Unit tests do introduce redundancy, which means that when requirements change, the code implementing them and the tests covering this code must both be changed. This overhead is of course roughly constant, so one could argue that it doesn't really matter. In fact, in languages like Ruby it really doesn't, compared to the benefits; but given how much of the ground unit tests are intended for is already covered by statically typed functional programming, it feels like a constant overhead one can simply reduce without penalty.

    From this I'd deduce that unit tests are somewhat obsolete in this programming style. Of course such a claim can only lead to religious wars, so let me boil this down to a simple question: when you use such a programming style, to what extent do you use unit tests, and why (what quality do you hope to gain for your code)? Or the other way round: do you have criteria by which you can qualify a unit of statically typed functional code as covered by the static analyzer and hence in no need of unit test coverage?
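
    To illustrate the x-versus-y point, here is a small sketch of my own in Python with type hints rather than in one of the languages listed, purely because it is compact: the slip shown in the comment would type-check, because both coordinates share the same type, but a one-line behavioural test catches it.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Point:
          x: float
          y: float

      def mirror_vertically(p: Point) -> Point:
          """Reflect a point across the x-axis: keep x, negate y."""
          return Point(p.x, -p.y)
          # A slip such as `return Point(-p.x, p.y)` (x used where y was meant)
          # still type-checks, because both fields are floats.

      def test_mirror_vertically():
          # This test fails for the slipped version above, even though the
          # type checker accepts it.
          assert mirror_vertically(Point(2.0, 3.0)) == Point(2.0, -3.0)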

    Read the article

  • Microsoft Terminology: .NET C++ vs. traditional C++

    - by Mike Clark
    I've recently been working with a team that's using both .NET C++ and pre-.NET C++. I fully understand the technical differences between the two technologies. However, I sometimes feel like I'm floundering when it comes to the terminology used to differentiate the two. Example: Say we have two projects: ProjectA contains "C++" code that builds a .NET assembly DLL. ProjectB contains Visual C++ code that builds a traditional native Windows DLL. What is the best way to succinctly and terminologically draw a distinction between the two projects? Again, I'm not asking for an in-depth technical description of the differences between the two technologies. I'm just looking for names and labels. This is how, today, I might try to make the distinction when talking to someone: "ProjectA is a managed .NET C++ project" and "ProjectB is an unmanaged native C++ DLL project." However I am not at all certain that this terminology is ideal, or even correct. Please describe what you feel the ideal language to use in this situation (or similar situations) might be. Feel free to motivate your answer.

    Read the article

  • Where does the "mm" come from in GTKmm, glibmm, etc

    - by Cole Johnson
    I understand that the "mm" suffix [in various GTK-associated C++ binding libraries] means "minus minus," but where exactly does it come from? I understand that there is a programming language called "C--," but if there were bindings (and I'm pretty sure I've seen some), they would be suffixed "--". TL;DR: Is there some page on gnu.org that explains the "mm" suffix in various C++ bindings or is it just a de facto standard adopted by the open source community with no reasoning behind it?

    Read the article

  • Methods to Manage/Document "one-off" Reports

    - by Jason Holland
    I'm a programmer that also does database stuff and I get a lot of so-called one-time report requests and recurring report requests. I work at a company that has a SQL Server database that we integrate third-party data with and we also have some third-party vendors that we have to use their proprietary reporting system to extract data in flat file format from that we don't integrate into SQL Server for security reasons. To generate many of these reports I have to query data from various systems, write small scripts to combine data from the separate systems, cry, pull my hair, curse the last guy's name that made the report before me, etc. My question is, what are some good methods for documenting the steps taken to generate these reports so the next poor soul that has to do them won't curse my name? As of now I just have a folder with subfolders per project with the selects and scripts that generated the last report but that seems like a "poor man's" solution. :)

    Read the article

  • Coded UI to measure performance

    - by Mike Weber
    I have been tasked with using Coded UI to measure performance on a proprietary Windows desktop application. The need is to measure how long it takes for the next page/screen to display after a user clicks on a control. For example, a user enters their ID and PW and clicks sign-in. The need is to measure how long it takes for the next screen to display when the user clicks the sign-in button. I understand the need to define what indicates the screen is loaded and ready for use. One approach is to use control.WaitForControlReady and use BeginTimer/EndTimer. Is Coded UI a dependable and accurate way of measuring time? Is WaitForControlReady the best method to determine when a control is ready for use?

    Read the article

  • Inspiring Computer Science College Student Stories?

    - by funk-shun
    After watching "The Social Network", a movie about Mark Zuckerberg and Facebook, I had a productivity spike as it inspired me to learn many different languages and software. That spike has been deteriorating somewhat and I don't feel like watching the movie all over again so I was wondering if anybody had any other similar success stories/links/articles/whatever. Mainly about successes that started for people in college.

    Read the article

  • How to ask the boss to pay for training courses

    - by jiceo
    Recently I came upon a well-known local consulting company that has some interesting courses I'd like to take. The course is not cheap enough for me to pay for it out of my own pocket and not feel bad afterwards. The thing is that my startup company uses one set of frameworks (Python + Django) for most of the stuff I have to deal with, but the course covers Ruby on Rails 3. Since I've not had exposure to Ruby on Rails, and after seeing so many people speak highly of the course, I really thought it would be a good opportunity. I know that I'd have to approach my boss from the angle of "how this might benefit the company", but other than this, any suggestions?

    Read the article

  • Code Style - Do you prefer to return from a function early or just use an IF statement?

    - by Rachel
    I've often written this sort of function in both formats, and I was wondering if one format is preferred over another, and why.

      public void SomeFunction(bool someCondition)
      {
          if (someCondition)
          {
              // Do Something
          }
      }

    or

      public void SomeFunction(bool someCondition)
      {
          if (!someCondition)
              return;

          // Do Something
      }

    I usually code with the first one, since that is the way my brain works while coding, although I think I prefer the second one since it takes care of any error handling right away and I find it easier to read.

    Read the article

  • What design patterns are the worst or most narrowly defined?

    - by Akku
    For every programming project, managers with past programming experience try to shine by recommending some design patterns for your project. I like design patterns when they make sense, or if you need a scalable solution. I've used Proxies, Observers, and Command patterns in a positive way, for example, and do so every day. But I'm really hesitant to use, say, a Factory pattern if there's only one way to create an object, as a factory might make it all easier in the future but complicates the code and is pure overhead. So, my question is with respect to my future career and my answer to manager types throwing random pattern names around: Which design patterns did you use that set you back overall? Which are the worst design patterns, the ones you shouldn't consider unless you're in the one single situation where they make sense (read: which design patterns are very narrowly defined)? (It's as if I were looking for the negative reviews of an overall good product on Amazon to see what bugged people most about using design patterns.) And I'm not talking about anti-patterns here, but about patterns that are usually thought of as "good" patterns. Edit: As some have answered, the problem is most often that patterns are not "bad" but "used wrong". If you know patterns that are often misused or even difficult to use, they would also fit as an answer.

    Read the article

  • Music Notation Editor - Refactoring view creation logic elsewhere

    - by Cyril Silverman
    Let me preface by saying that knowing some elementary music theory and music notation may be helpful in grasping the problem at hand. I'm currently building a Music Notation and Tablature Editor (in Javascript), and I've come to a point where the core parts of the program are more or less there. All functionality I plan to add at this point will really build off the foundation that I've created. As a result, I want to refactor to really solidify my code. I'm using an API called VexFlow to render notation. Basically, I pass the parts of the editor's state to VexFlow to build the graphical representation of the score.

    Here is a rough, stripped-down UML outline of my program: in essence, a Part has many Measures, which have many Notes, which have many NoteItems (yes, this is semantically weird, as a chord is represented as a Note with multiple NoteItems, individual pitches or fret positions). All of the relationships are bi-directional.

    There are a few problems with my design, because my Measure class contains the majority of the entire application's view logic. The class holds the data about all VexFlow objects (the graphical representation of the score). It contains the graphical Staff object and the graphical notes. (Shouldn't these be placed somewhere else in the program?) While VexFlowFactory deals with the actual creation (and some processing) of most of the VexFlow objects, Measure still "directs" the creation of all the objects and what order they are supposed to be created in, for both the VexFlowStaff and VexFlowNotes.

    I'm not looking for a specific answer, as you'd need a much deeper understanding of my code; just a general direction to go in. Here's a thought I had: create MeasureView/NoteView/PartView classes that contain the basic VexFlow objects for each class, in addition to any extraneous logic for their creation? But where would these views be contained? Do I create a ScoreView that is a parallel graphical representation of everything, so that ScoreView.render() would cascade down and call render for each PartView and cascade down into each MeasureView, etc.? (A rough sketch of this idea follows.) Again, I just have no idea what direction to go in; the more I think about it, the more ways to go seem to pop into my head. I tried to be as concise and simplistic as possible while still getting my problem across. Please feel free to ask me any questions if anything is unclear. It's quite a struggle trying to dumb down a complicated problem to its core parts.
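
    A rough sketch of that parallel view hierarchy, written in Python rather than Javascript purely for brevity (the factory methods and constructor shapes are placeholders, not real VexFlow API):

      class MeasureView:
          def __init__(self, measure, factory):
              self.measure = measure
              self.factory = factory            # a VexFlowFactory-like object

          def render(self, context):
              staff = self.factory.create_staff(self.measure)      # placeholder call
              for note in self.measure.notes:
                  self.factory.create_note(staff, note)            # placeholder call
              staff.draw(context)

      class PartView:
          def __init__(self, part, factory):
              self.measure_views = [MeasureView(m, factory) for m in part.measures]

          def render(self, context):
              for mv in self.measure_views:
                  mv.render(context)

      class ScoreView:
          # Parallel graphical representation of the whole score: render() cascades
          # down through the PartViews and MeasureViews.
          def __init__(self, parts, factory):
              self.part_views = [PartView(p, factory) for p in parts]

          def render(self, context):
              for pv in self.part_views:
                  pv.render(context)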

    Read the article

  • How to deal with a boss who is extremely competitive [closed]

    - by user72101
    Okay, so I recently joined this group. My boss, who I reported to, left a while ago, and my colleague was made the team lead. We both now report to the same head, but she heads the team I work in. I am not sure how to deal with various situations. I would imagine a team lead to be someone who motivates the rest of the team, not someone who takes credit for what others do. I am the only one who works at the same geographic location.

    She is very smart, no doubt about it, but things get to me. For example, I work on an issue and email a dev team; she responds on top of my emails, and as she is the lead, people respond to her rather than to me. If there is an email addressed with the head on it, she gets to it faster than I do, even when she knows I am looking into it. The rest of the team keeps asking me questions or for help. I find it hard to say no, and I try to help them as I feel I might need their help someday as well, but then I slack in my own work, or I could have used the time to learn things. I feel that she should be the go-to person and not me. If one gets the benefits of being a lead, why not the work that goes with it?

    I feel I am stuck in a very unfortunate situation and am totally helpless about it. Every day it is something or the other. I worked on a project for months, and finally, when it was about to go live and there was a minor issue and the head started asking questions, she completely stole my thunder on a call. She would not let me speak, and I had to actually cut her off to speak up. The ideal would be not to give it a thought and just do the best I can, but I feel I am capable of much more and not able to give my best because of this constant war at work. Neetu

    Read the article

  • null pointers vs. Null Object Pattern

    - by GlenH7
    Attribution: This grew out of a related P.SE question. My background is in C/C++, but I have worked a fair amount in Java and am currently coding C#. Because of my C background, checking passed and returned pointers is second nature to me, but I acknowledge it biases my point of view.

    I recently saw mention of the Null Object Pattern, where the idea is that an object is always returned: the normal case returns the expected, populated object, and the error case returns an empty object instead of a null pointer. The premise is that the calling function will always have some sort of object to access and can therefore avoid null-access memory violations.

    So what are the pros/cons of a null check versus using the Null Object Pattern? I can see cleaner calling code with the NOP, but I can also see where it would create hidden failures that don't otherwise get raised. I would rather have my application fail hard (aka an exception) while I'm developing it than have a silent mistake escape into the wild. Can't the Null Object Pattern have similar problems as not performing a null check? Many of the objects I have worked with hold objects or containers of their own. It seems like I would have to have a special case to guarantee all of the main object's containers had empty objects of their own. Seems like this could get ugly with multiple layers of nesting.
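
    For concreteness, a minimal side-by-side sketch of the two styles (Python used purely for brevity; the Customer example is my own invention):

      class Customer:
          def __init__(self, name, address):
              self.name, self.address = name, address

      # Null-check style: the caller must remember to test for None.
      def shipping_label_with_check(customers, customer_id):
          customer = customers.get(customer_id)   # may be None
          if customer is None:
              return "UNKNOWN CUSTOMER"
          return f"{customer.name}, {customer.address}"

      # Null Object style: a NullCustomer stands in for "not found", so the caller
      # never branches, but a failed lookup can also pass silently.
      class NullCustomer(Customer):
          def __init__(self):
              super().__init__("UNKNOWN CUSTOMER", "")

      def shipping_label_without_check(customers, customer_id):
          customer = customers.get(customer_id, NullCustomer())
          return f"{customer.name}, {customer.address}"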

    Read the article
