Search Results

Search found 9254 results on 371 pages for 'approach'.

Page 24 of 371

  • Entity/Component based engine rendering separation from logic

    - by Denis Narushevich
    I noticed in Unity3D that each gameObject (entity) has its own renderer component; as far as I understand, such a component handles rendering logic. I wonder if it is common practice in entity/component based engines for a single entity to have renderer components and logic components, such as position and behavior, all together in one box. Such an approach sounds odd to me; in my understanding the entity itself belongs to the logic part and shouldn't contain anything render-specific. With such an approach it is impossible to swap renderers without rewriting all of the customized renderer components. The way I would do it is that an entity would contain only logic-specific components, like AI, transform, and scripts, plus a reference to a mesh or sprite. Then some entity with a Camera component would store references to every object visible to the camera. And in order to render all of that, I would pass the Camera reference to a Renderer class and render the sprites and meshes of the visible entities. Is such an approach somehow wrong?

    Read the article
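
    A minimal, framework-agnostic sketch of the separation the question proposes: entities hold only data (a transform plus a mesh reference), a Camera knows what is currently visible, and a Renderer walks that list through a swappable backend. Every name here (Transform, MeshRef, RenderBackend, Renderer) is an illustrative assumption, not Unity3D or any engine's actual API.

        import java.util.List;

        final class Transform { float x, y, z; }

        final class MeshRef {
            final String meshId;
            MeshRef(String meshId) { this.meshId = meshId; }
        }

        final class Entity {
            final Transform transform = new Transform(); // logic-side data only
            MeshRef mesh;                                // reference to a renderable asset
        }

        final class Camera {
            // In a real engine this list would come from frustum culling / a spatial index.
            List<Entity> visibleEntities;
        }

        interface RenderBackend {                        // swapping renderers = swapping this
            void drawMesh(String meshId, Transform at);
        }

        final class Renderer {
            private final RenderBackend backend;
            Renderer(RenderBackend backend) { this.backend = backend; }

            void render(Camera camera) {
                for (Entity e : camera.visibleEntities) {
                    if (e.mesh != null) {
                        backend.drawMesh(e.mesh.meshId, e.transform);
                    }
                }
            }
        }

        final class RenderDemo {
            public static void main(String[] args) {
                Entity player = new Entity();
                player.mesh = new MeshRef("player.obj");

                Camera camera = new Camera();
                camera.visibleEntities = List.of(player);

                // A console "renderer"; a GL- or sprite-based backend could replace it
                // without touching the entities or the camera.
                new Renderer((meshId, at) ->
                        System.out.println("draw " + meshId + " at " + at.x + "," + at.y)).render(camera);
            }
        }

    Swapping renderers then means providing a different RenderBackend implementation; the entities and the camera never change.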

  • Gradual approaches to dependency injection

    - by JW01
    I'm working on making my classes unit-testable using dependency injection. But some of these classes have a lot of clients, and I'm not ready to refactor all of them to start passing in the dependencies yet. So I'm trying to do it gradually: keeping the default dependencies for now, but allowing them to be overridden for testing. One approach I'm considering is just moving all the "new" calls into their own methods, e.g.:

        public MyObject createMyObject(args) {
            return new MyObject(args);
        }

    Then in my unit tests, I can just subclass this class and override the create methods so that they create fake objects instead. Is this a good approach? Are there any disadvantages? More generally, is it okay to have hard-coded dependencies, as long as you can replace them for testing? I know the preferred approach is to explicitly require them in the constructor, and I'd like to get there eventually. But I'm wondering if this is a good first step.

    Read the article
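
    A compilable sketch of the seam the question describes, using hypothetical MyObject/Service names: the default dependency is created in an overridable factory method, and a test-only subclass swaps in a fake. This is essentially the "subclass and override" technique described in Michael Feathers' Working Effectively with Legacy Code.

        class MyObject {
            void doWork() { System.out.println("real work"); }
        }

        class Service {
            void run() {
                MyObject obj = createMyObject();   // every "new" funneled through a factory method
                obj.doWork();
            }

            protected MyObject createMyObject() {  // default, hard-coded dependency
                return new MyObject();
            }
        }

        class FakeMyObject extends MyObject {
            @Override void doWork() { System.out.println("fake work, recorded for assertions"); }
        }

        // Lives only in the test sources: overrides the factory, not the clients.
        class TestableService extends Service {
            @Override protected MyObject createMyObject() {
                return new FakeMyObject();
            }
        }

        class Demo {
            public static void main(String[] args) {
                new Service().run();           // prints "real work"
                new TestableService().run();   // prints "fake work, recorded for assertions"
            }
        }

    None of the existing clients change; they keep constructing Service exactly as before, which is the point of doing the refactoring gradually.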

  • Software Design for Product Verticals and Service Verticals

    - by Rachel
    In every industry there are two verticals, a product vertical and a service vertical, so my question is: how does the design approach change when designing software for a product vertical as compared to software for a service vertical? What are the pros and cons in each case? Also, in the case of a product vertical, how do you go about designing the product or its features, and what steps are involved? Lastly, I was reading the How Facebook Ships Code article, and it appears that product managers have very little influence on how the product is developed, with responsibility lying mainly with the developer of the feature. Is this good practice, and why would one go for this approach? What would be your comment on this kind of approach?

    Read the article

  • Single IBAction for multiple UIButtons versus single IBAction for single UIButton

    - by Miraaj
    While using storyboards, there are two different approaches my teammates follow. Approach 1: bind a unique action to each button, i.e.:

        Done button   - bound to - doneButtonAction
        Cancel button - bound to - cancelButtonAction

    Approach 2: bind a single action to multiple buttons, i.e.:

        Done button   - bound to - commonButtonAction
        Cancel button - bound to - commonButtonAction

    Then in commonButtonAction they prefer to use a switch statement like this:

        - (IBAction)commonButtonAction:(id)sender {
            UIButton *button = (UIButton *)sender;
            switch (button.tag) {
                case 201: // done button
                    [self doneButtonAction:sender];
                    break;
                case 202: // cancel button
                    [self cancelButtonAction:sender];
                    break;
                default:
                    break;
            }
        }

        - (void)cancelButtonAction:(id)sender {
            // no interesting stuff, simple dismiss of view :-(
        }

        - (void)doneButtonAction:(id)sender {
            // some interesting stuff ;-)
        }

    The reasoning they give for approach 2 is that during a code walkthrough of any view controller, anyone can easily identify where to find the code related to button actions. Others reject the idea, saying the extra switch statement is unnecessary and not common practice. What are your views?

    Read the article

  • How can I best study a problem to determine whether recursion can/should be used?

    - by user10326
    In some cases, I fail to see that a problem could be solved by the divide-and-conquer method. To give a specific example: when studying the maximum-subarray problem, my first approach was to brute-force it with a double loop. When I saw the solution using the divide-and-conquer approach, which is recursion-based, I understood it, but I would not have come up with it myself; when I first read the problem statement, I did not think recursion was applicable. When studying a problem, is there any technique or trick for seeing whether a recursion-based (i.e. divide-and-conquer) approach can be used or not?

    Read the article
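
    As one concrete illustration, here is a minimal divide-and-conquer version of the maximum-subarray problem mentioned in the question: split the array in half, recurse on each half, and handle the one case recursion cannot see on its own - the best sum that crosses the midpoint. Class and method names are illustrative only.

        public final class MaxSubarray {

            public static int maxSum(int[] a) {
                return maxSum(a, 0, a.length - 1);
            }

            private static int maxSum(int[] a, int lo, int hi) {
                if (lo == hi) {
                    return a[lo];                              // base case: single element
                }
                int mid = (lo + hi) / 2;
                int left = maxSum(a, lo, mid);                 // best sum entirely in the left half
                int right = maxSum(a, mid + 1, hi);            // best sum entirely in the right half
                int crossing = maxCrossingSum(a, lo, mid, hi); // best sum spanning the midpoint
                return Math.max(crossing, Math.max(left, right));
            }

            private static int maxCrossingSum(int[] a, int lo, int mid, int hi) {
                int sum = 0, bestLeft = Integer.MIN_VALUE;
                for (int i = mid; i >= lo; i--) {              // grow leftwards from the midpoint
                    sum += a[i];
                    bestLeft = Math.max(bestLeft, sum);
                }
                sum = 0;
                int bestRight = Integer.MIN_VALUE;
                for (int i = mid + 1; i <= hi; i++) {          // grow rightwards from the midpoint
                    sum += a[i];
                    bestRight = Math.max(bestRight, sum);
                }
                return bestLeft + bestRight;
            }

            public static void main(String[] args) {
                int[] a = { -2, 1, -3, 4, -1, 2, 1, -5, 4 };
                System.out.println(maxSum(a));                 // prints 6 ({4, -1, 2, 1})
            }
        }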

  • How should a "Collision System" be implemented?

    - by nathan
    My game is written using an entity system approach with the Artemis framework. Right now my collision detection is called from the movement system, but I'm wondering whether that is the proper place for it with such an approach. I'm now thinking of a new system dedicated to collision detection that would process all the solid entities and check whether they are colliding with one another. Is this a correct way to handle collision detection with an entity system approach? Also, how should I implement this collision system? I thought of an IntervalEntitySystem that would check every 200 ms (a value chosen based on the Artemis documentation) whether some entities are colliding:

        protected void processEntities(ImmutableBag<Entity> ib) {
            for (int i = 0; i < ib.size(); i++) {
                Entity e = ib.get(i);
                // check for collisions with other entities here
            }
        }

    Read the article
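
    A framework-agnostic sketch of such a dedicated collision pass, deliberately not tied to Artemis so as not to guess at its API: each solid entity exposes an axis-aligned bounding box, and the system does a naive pairwise test. The Collidable interface, the Box class, and the onCollision callback are assumptions made for this example.

        import java.util.List;

        // Hypothetical view of a "solid" entity: an axis-aligned bounding box plus a callback.
        interface Collidable {
            float x(); float y(); float width(); float height();
            void onCollision(Collidable other);
        }

        final class CollisionSystem {
            // Naive O(n^2) pairwise pass; a broad phase (uniform grid, quadtree) can replace it later.
            void process(List<? extends Collidable> solids) {
                for (int i = 0; i < solids.size(); i++) {
                    for (int j = i + 1; j < solids.size(); j++) {
                        Collidable a = solids.get(i);
                        Collidable b = solids.get(j);
                        if (overlaps(a, b)) {
                            a.onCollision(b);
                            b.onCollision(a);
                        }
                    }
                }
            }

            private boolean overlaps(Collidable a, Collidable b) {
                return a.x() < b.x() + b.width() && b.x() < a.x() + a.width()
                    && a.y() < b.y() + b.height() && b.y() < a.y() + a.height();
            }
        }

        final class Box implements Collidable {
            private final float x, y, w, h;
            Box(float x, float y, float w, float h) { this.x = x; this.y = y; this.w = w; this.h = h; }
            public float x() { return x; }
            public float y() { return y; }
            public float width() { return w; }
            public float height() { return h; }
            public void onCollision(Collidable other) { System.out.println("collision!"); }
        }

        final class CollisionDemo {
            public static void main(String[] args) {
                List<Box> solids = List.of(new Box(0, 0, 2, 2), new Box(1, 1, 2, 2), new Box(10, 10, 1, 1));
                new CollisionSystem().process(solids);  // prints "collision!" twice, once per entity in the pair
            }
        }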

  • Is it worth switching from a home-grown remote command interface to JMX?

    - by Sam Goldberg
    Without knowing too much about JMX, I've always assumed that it would be the best approach for building remote management into our standalone Java server application. Our server application has some minimal remote control capability, using text commands sent to it over a TCP/IP socket. With the home-grown approach, it is fairly easy to add a new command (just create the new command text and the code to handle it in the message receiver). On the other hand, we have hardly implemented any commands, even though there are many things we would like to be able to execute remotely. I am trying to weigh the value of moving to JMX (learning it and building the interfaces) against just sticking with the home-grown approach. Does anyone have any experience or advice regarding changing an existing application to use JMX?

    Read the article
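
    For comparison, a minimal sketch of what one remote command looks like as a Standard MBean. The ServerControl name, its two operations, and the ObjectName are illustrative assumptions, not anything from the questioner's server.

        // --- ServerControlMBean.java -----------------------------------------
        public interface ServerControlMBean {
            int getActiveSessions();   // shows up as a read-only attribute
            void shutdown();           // shows up as a remotely invokable operation
        }

        // --- ServerControl.java -----------------------------------------------
        // Standard MBean convention: implementation name = interface name minus "MBean".
        public class ServerControl implements ServerControlMBean {
            @Override public int getActiveSessions() { return 42; }  // placeholder value
            @Override public void shutdown() { System.out.println("shutting down..."); }
        }

        // --- JmxBootstrap.java --------------------------------------------------
        import java.lang.management.ManagementFactory;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;

        public class JmxBootstrap {
            public static void main(String[] args) throws Exception {
                MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
                mbs.registerMBean(new ServerControl(),
                        new ObjectName("com.example.server:type=ServerControl"));
                // Reachable from jconsole/VisualVM, or remotely once the usual
                // com.sun.management.jmxremote.* system properties are set.
                Thread.sleep(Long.MAX_VALUE);
            }
        }

    Adding a new remote operation then means adding a method to the interface and its implementation; the transport, wire protocol, and tooling (jconsole, VisualVM, custom JMX clients) come for free.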

  • What is the most effective way to add functionality to unfamiliar, structurally unsound code?

    - by Coder
    This is probably something everyone has to face sooner or later during development. You have existing code written by someone else, and you have to extend it to work under new requirements. Sometimes it's simple, but sometimes the modules have medium-to-high coupling and medium-to-low cohesion, so the moment you start touching anything, everything breaks. And even when you get the new and old scenarios working again, you don't feel that it has been fixed correctly. One approach would be to write tests, but in reality, in all the cases I've seen, that was pretty much impossible (reliance on the GUI, missing specifications, threading, complex dependencies and hierarchies, deadlines, etc.). So everything sort of falls back to the good ol' cowboy-coding approach. But I refuse to believe there is no other, more systematic way that would make everything easier. Does anyone know a better approach, or the name of a methodology that should be used in such cases?

    Read the article

  • iOS: Versioned static frameworks vs Git Submodules and included code

    - by drekka
    For the last couple of years I've been building static frameworks of common APIs for my iOS projects. I can build a universal binary containing all the architectures (i386, armv6, armv7) and wrap it up in a .framework directory structure. I then store it in a directory based on the version of the framework, for example ..../myAPI/v0.1.0/myAPI.framework. Once I have this framework I can easily add it to a project, and if I want to advance the version, I merely change the framework search paths to the later version. This works, but the approach is very similar to what I would use in the Java world. Recently I've been reading about using Git submodules and static framework subprojects in Xcode 4. I'm wondering whether my current approach is something I should consider retiring, and what the pros/cons of the new approach are. I'm wary of just including code, because I've already had issues in a work project which had (effectively) multiple versions of a third-party API. Any opinions?

    Read the article

  • Building a Roadmap for an IAM Platform

    - by B Shashikumar
    Identity Management is no longer a departmental solution; it has become a strategic part of every organization's security posture. Enterprises require a forward-thinking Identity Management strategy. In our previous blog post on "The Oracle Platform Approach", we discussed a recent study by Aberdeen which showed that organizations taking a platform approach can reduce cost by as much as 48% and have 35% fewer audit deficiencies. So how does an organization get started with an Identity and Access Management (IAM) platform? What are the components of such a platform, and how can an organization continuously evolve it for better ROI and IT agility? What are some of the best practices for beginning an IAM deployment? To find out the answers, and to learn how to build a comprehensive IAM roadmap, check out the presentation "Platform Approach Series: Building a Roadmap" from OracleIDM.

    Read the article

  • Images from remote source - is it possible or is it bad practice?

    - by user1620696
    I'm building a management system for websites, and I had an idea related to image galleries that I'm not sure is a good approach. Since the image space needed depends on how many images a user might upload, I thought about using cloud services like Dropbox, Mega, and Google Drive to store the images and load them when needed. The obvious problem with this approach is that downloading the images from the third-party service would hamper the user experience due to the increased download times. Is there any way to keep an image gallery's images on a remote source without hampering the user experience with slow load times? Or is this approach really not good practice?

    Read the article
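
    A rough sketch of the "load when needed" flow, under the assumption that each gallery image is reachable through a plain HTTPS URL (the cloud provider's own API and authentication are left out). The simple local cache is an assumption added by this sketch, not something from the question; it just makes each image download happen only once.

        import java.io.IOException;
        import java.io.InputStream;
        import java.net.URL;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardCopyOption;

        final class RemoteImageCache {
            private final Path cacheDir;

            RemoteImageCache(Path cacheDir) throws IOException {
                this.cacheDir = Files.createDirectories(cacheDir);
            }

            // Returns a local file for the image, downloading it only on first access.
            Path fetch(String imageName, URL remoteUrl) throws IOException {
                Path local = cacheDir.resolve(imageName);
                if (Files.notExists(local)) {
                    try (InputStream in = remoteUrl.openStream()) {
                        Files.copy(in, local, StandardCopyOption.REPLACE_EXISTING);
                    }
                }
                return local;
            }

            public static void main(String[] args) throws IOException {
                RemoteImageCache cache = new RemoteImageCache(Paths.get("image-cache"));
                Path img = cache.fetch("photo1.jpg",
                        new URL("https://example.com/gallery/photo1.jpg"));
                System.out.println("Serving " + img);
            }
        }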

  • Is acoustic fingerprinting too broad for one audio file only?

    - by IBG
    We were looking into topics related to audio analysis and found acoustic fingerprinting. As it stands, its most famous application seems to be the identification of music. Enter our manager, who asked us to research and possibly find an algorithm or existing code that we could use for this very simple flow (as if it were easy; source code doesn't spring up like mushrooms):

        1. An always-on app that listens.
        2. Compare the audio patterns to a single audio file (assume the sound is a simple beep).
        3. If the beep is detected, send a notification to the server.

    With a flow this simple, do you think acoustic fingerprinting is too broad an approach to use? Should we stop and take another approach? Where is the best place to start? We haven't started anything yet on the development side, so I want to get other opinions on whether this pursuit is worthwhile or moot.

    Read the article

  • Mobile Compatibility: traditional website look vs native application look?

    - by Siddiqui
    I have a question about mobile-compatible websites. I have seen two types. In the first, the site adopts the traditional website look and adjusts it to the mobile screen; if there is more information than fits the screen, the page height is extended so the user can scroll to see more. In the second approach, the site uses a native-application look, with a navigation bar, tab bar, toolbar, and scroll views, just like native applications. The height and width of the page adjust to the screen size, and if there is more information, a scroll view is used. My question is: which approach is better than the other, and with which do you feel more comfortable using a website?

    Read the article

  • Two approaches to adding freelance/contract work to resume [on hold]

    - by melhosseiny
    Approach A:

        Title, Company A
        Freelance + Title, Company B
        Title, Company C
        Freelance + Title, Company D
        Title, Intern, Company E

    Approach B:

        Title, Company A
        Title, Company B
        Title, Self
        Title, Intern, Company D

    In approach B, you would list all the freelance/contract work you did under the "Title, Self" experience. For example:

        Company A
            Project 1
            Project 2
        Company B
            Project 1

    Question: which of these two approaches is better, and why? Update: I think there's value in this question to the community, as it relates specifically to programmers; handling this issue on a resume is career-specific. Also, I've found similar questions on the site: "Referring to freelance marketplaces as evidence of the experience for a potential full-time employer" and "How to write freelancing in resume for programmers job". In any case, I don't think it should be closed; it should be migrated to The Workplace or Freelancing.

    Read the article

  • Exception Handling Frequency/Log Detail

    - by Cyborgx37
    I am working on a fairly complex .NET application that interacts with another application. Many single-line statements are possible culprits for throwing an exception, and there is often nothing I can do to check the state before executing them to prevent these exceptions. The question is: based on best practices and seasoned experience, how frequently should I lace my code with try/catch blocks? I've listed three examples below, but I'm open to any advice. I'm really hoping to get some pros/cons of the various approaches. I can certainly come up with some of my own (greater log granularity for the obsessive-compulsive approach, better performance for the monolithic approach), so I'm looking for experience over opinion. EDIT: I should add that this application is a batch program. The only "recovery" necessary in most cases is to log the error, clean up gracefully, and quit. So this could be seen as much as a question of log granularity as of exception handling. In my mind's eye I can imagine good reasons for both, so I'm looking for some general advice to help me find an appropriate balance.

    Monolithic approach:

        class Program {
            public static void Main() {
                try {
                    Step1();
                    Step2();
                    Step3();
                } catch (Exception e) {
                    Log(e);
                } finally {
                    CleanUp();
                }
            }

            public static void Step1() {
                ExternalApp.Dangerous1();
                ExternalApp.Dangerous2();
            }

            public static void Step2() {
                ExternalApp.Dangerous3();
                ExternalApp.Dangerous4();
            }

            public static void Step3() {
                ExternalApp.Dangerous5();
                ExternalApp.Dangerous6();
            }
        }

    Delegated approach:

        class Program {
            public static void Main() {
                try {
                    Step1();
                    Step2();
                    Step3();
                } finally {
                    CleanUp();
                }
            }

            public static void Step1() {
                try {
                    ExternalApp.Dangerous1();
                    ExternalApp.Dangerous2();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }

            public static void Step2() {
                try {
                    ExternalApp.Dangerous3();
                    ExternalApp.Dangerous4();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }

            public static void Step3() {
                try {
                    ExternalApp.Dangerous5();
                    ExternalApp.Dangerous6();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }
        }

    Obsessive-compulsive approach:

        class Program {
            public static void Main() {
                try {
                    Step1();
                    Step2();
                    Step3();
                } finally {
                    CleanUp();
                }
            }

            public static void Step1() {
                try {
                    ExternalApp.Dangerous1();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
                try {
                    ExternalApp.Dangerous2();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }

            public static void Step2() {
                try {
                    ExternalApp.Dangerous3();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
                try {
                    ExternalApp.Dangerous4();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }

            public static void Step3() {
                try {
                    ExternalApp.Dangerous5();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
                try {
                    ExternalApp.Dangerous6();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }
        }

    Other approaches are welcomed and encouraged; the above are examples only.

    Read the article

  • Patching and PCI Compliance

    - by Joel Weise
    One of my friends and master of the security universe, Darren Moffat, pointed me to Dan Anderson's blog the other day. Dan went to Toorcon, a security conference, where he attended a talk on security patching titled "Stop Patching, for Stronger PCI Compliance". I realize that speakers will often use a headline-grabbing title to create interest in their talk, and this one certainly got my attention. I did not go to the conference and did not see the presentation, so I can only go by what is in the Toorcon agenda summary and on Dan's blog, but the general statement to stop patching for stronger PCI compliance seems a bit misleading to me. Clearly patching is important to all systems management and should be a part of any organization's security hygiene. Further, PCI does require the patching of systems to maintain compliance. So it's important to mention that organizations should not simply stop patching their systems; and I want to believe that was not the speaker's intent. So let's look at PCI requirement 6: "Unscrupulous individuals use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities are fixed by vendor-provided security patches, which must be installed by the entities that manage the systems. All critical systems must have the most recently released, appropriate software patches to protect against exploitation and compromise of cardholder data by malicious individuals and malicious software." Notice the word "appropriate" in the requirement. This is stated to give organizations some latitude to apply patches that make sense in their environment and that target the vulnerabilities in question. Haven't we all seen a vulnerability scanner throw a false positive, flag some module, and point to a recommended patch, only to realize that the module doesn't exist on our system? Applying such a patch would obviously not be appropriate. This does not mean an organization can ignore the fact that they need to apply security patches; it's pretty clear they must. Of course, organizations have other options in terms of compliance when it comes to patching. For example, they could remove a system from scope and make sure that system does not process or contain cardholder data. [This may or may not be a significant undertaking; I just wanted to point out that there are always options available.] PCI DSS requirement 6.1 also includes the following note: "Note: An organization may consider applying a risk-based approach to prioritize their patch installations. For example, by prioritizing critical infrastructure (for example, public-facing devices and systems, databases) higher than less-critical internal devices, to ensure high-priority systems and devices are addressed within one month, and addressing less critical devices and systems within three months." Notice there is no mention of stopping the patching of one's systems. The note also states that an organization may apply a risk-based approach [a smart approach, but also not mandated]. Such a risk-based approach is not intended to remove the requirement to patch one's systems; it is meant, as stated, to allow one to prioritize patch installations. So what does this mean for an organization that must comply with PCI DSS and maintain some sanity around its patch management and overall operational readiness? I for one like to think that most organizations take a common-sense and balanced approach to their business and security posture.
    If patching is becoming an unbearable task, review why that is the case and possibly look for ways to improve operational efficiency; but also recognize that security is important to maintaining the availability and integrity of one's systems. Likewise, whether we like it or not, the cyber world we live in is getting more complex and threatening - and I don't think it's going to get better any time soon.

    Read the article

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future, and we don't want to be rewriting business logic when this happens. One data source will be our membership database, which was originally implemented using the default ASP.NET Membership Provider. The membership provider is tied to the System.Web.Security namespace, but we have a design guideline requiring that our domain model layer not depend on System.Web (or any other implementation/environment dependency), as it will be consumed in different environments - nor do we want our websites communicating directly with databases. I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model, and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as yet undecided) ORM/data access tool. Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class, so I may well be missing something. Alternatively, is there an approach that you think would better serve the requirements I described above? Thanks in advance for your thoughts and advice. Regards, Zac

    Read the article

  • Sorting, Filtering and Paging in ASP.NET MVC

    - by ali62b
    What is the best approach to implement these features, and which parts of the project would be involved? I have seen some examples of JavaScript grids, but I'm talking about a general approach which best fits the MVC architecture. I've considered configuring routes and models to implement these features, but I don't have a clear idea of whether this is the right approach to implementing them. On the one hand, I think that if we put the logic in routes (item/page/sort/), we get benefits like bookmarking and avoiding JavaScript. On the other hand, if we use JavaScript grids, we can have behavior like the old-school grid views in ASP.NET Web Forms. I find that using HTML helpers may be useful for paging, but I have no idea whether they are good for sorting or not. I've looked at jQuery, tableSorter and quick-search plug-ins, but they work only on the currently-fetched data and won't help with real sorting and filtering that may need to touch the database. I have some thoughts on using these tools side by side with AJAX to get something that works, but I have no idea whether similar efforts have been done anywhere yet. Another approach I looked at was using Dynamic Data on Web Forms, but I didn't find any suggestions out there as to whether or not it is a good idea to integrate MVC and DD. I know that implementing filtering and sorting for an individual case is simple (although it has some issues, like using Dynamic LINQ, which is not yet a standard approach), but creating a sorting or filtering tool which works in all cases is what I'm looking for. (Maybe this is because I want to have something in hand when Web Forms developers wonder why I'm writing the same code each time I want to implement a sort scenario for different entities.)

    Read the article

  • Setting multiple SMTP settings in web.config?

    - by alphadogg
    I am building an app that needs to dynamically/programmatically know of and use different SMTP settings when sending email. I'm used to using the system.net/mailSettings approach, but as I understand it, that only allows one SMTP connection definition at a time, used by SmtpClient(). However, I need more of a connectionStrings-like approach, where I can pull a set of settings by key/name. Any recommendations? I'm open to skipping the traditional SmtpClient/mailSettings approach, and I think I will have to...

    Read the article

  • Could the cause of the recent Toyota computing problems be an interface mismatch?

    - by Spux
    Any ideas whether the recent Toyota computing errors had something to do with the fact that they were using an object-oriented approach and then took a data-oriented approach, thus causing user-interface errors? I'm studying programming languages in the context of interface and robotic design, and I wondered whether the car-computing glitch Toyota has been having could have something to do with taking a different programming approach without reprogramming the whole system from scratch.

    Read the article

  • Creating an SQL Compact file: Template or script?

    - by David Veeneman
    I am writing an application that writes to SQL Compact files that have a specific schema, and I am now implementing the New File use case. The simplest approach seems to be to use a Template pattern: first, create a template file that lives in the application directory. Then, when the user selects New File, the template is copied to the name and destination specified by the user in a New File dialog. The alternative is a scripted approach: Use the same New File dialog, but dispense with the template file. Instead, create an empty SQL Compact file using the name/destination specified by the user, and then execute a T-SQL script on it from managed code. At this point, I am leaning toward the Template approach, because it is simpler. Is there any reason I should not use that approach? Thanks for your help.

    Read the article

  • Change classloader

    - by Chris
    I'm trying to switch the class loader at runtime:

        public class Test {
            public static void main(String[] args) throws Exception {
                final InjectingClassLoader classLoader = new InjectingClassLoader();
                Thread.currentThread().setContextClassLoader(classLoader);
                Thread thread = new Thread("test") {
                    public void run() {
                        System.out.println("running...");
                        // approach 1
                        ClassLoader cl = TestProxy.class.getClassLoader();
                        try {
                            Class c = classLoader.loadClass("classloader.TestProxy");
                            Object o = c.newInstance();
                            c.getMethod("test", new Class[] {}).invoke(o);
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                        // approach 2
                        new TestProxy().test();
                    };
                };
                thread.setContextClassLoader(classLoader);
                thread.start();
            }
        }

    and:

        public class TestProxy {
            public void test() {
                ClassLoader tcl = Thread.currentThread().getContextClassLoader();
                ClassLoader ccl = ClassToLoad.class.getClassLoader();
                ClassToLoad classToLoad = new ClassToLoad();
            }
        }

    (The details of InjectingClassLoader are not relevant here.) I'd like to make the results of "approach 1" and "approach 2" exactly the same, but it looks like thread.setContextClassLoader(classLoader) does nothing, and "approach 2" always uses the system classloader (as can be determined by comparing the tcl and ccl variables while debugging). Is it possible to make all classes loaded by the new thread use the given classloader?

    Read the article

  • How to receive Email in JEE application

    - by Hank
    Obviously it's not too difficult to send emails from a JEE application via JavaMail. What I am interested in is the best pattern for receiving emails (notification bounces, mostly). I am not interested in IMAP/POP3-based approaches (polling the inbox) - my application should react to inbound emails. One approach I could think of would be:

        1. Keep the existing MTA (Postfix on Linux in my case) - the ops team already knows how to configure and operate it.
        2. For every mail that arrives, spawn a Java app that receives the data and sends it off via JMS. I could do this via an entry in /etc/aliases like myuser: "|/path/to/javahelper", with javahelper calling the Java app and passing STDIN along.
        3. An MDB (part of the JEE application) receives the JMS message, parses it, detects the bounce message, and acts accordingly.

    Another approach could be:

        1. Open a listening network socket on port 25 on the JEE application container.
        2. Associate a SessionBean with the socket. The bean is part of the JEE application and can parse/detect bounces and handle the messages directly.
        3. Keep the existing MTA as an inbound relay and let it do all its security/spam filtering, but forward the emails to myuser that pass the filter on to the JEE application container, port 25.

    The first approach I have done before (albeit in a different language/setup). From a performance and (perceived) cleanliness point of view, I think the second approach is better, but it would require me to provide a proper SMTP transport implementation. Also, I don't know whether it's at all possible to connect a network socket with a bean... What is your recommendation? Do you have details about the second approach?

    Read the article
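
    A minimal sketch of the MDB from step 3 of the first approach, assuming a JMS 2.0-capable container. The queue name jms/InboundMail, the class name, and the bounce heuristic are illustrative assumptions, not part of any specific product.

        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.JMSException;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jms.TextMessage;

        @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Queue"),
            @ActivationConfigProperty(propertyName = "destinationLookup",
                                      propertyValue = "jms/InboundMail")
        })
        public class InboundMailMDB implements MessageListener {

            @Override
            public void onMessage(Message message) {
                try {
                    if (message instanceof TextMessage) {
                        String rawMail = ((TextMessage) message).getText();
                        // Crude bounce heuristic; a real implementation would parse the MIME
                        // structure (e.g. with JavaMail) and inspect the delivery-status part.
                        if (rawMail.contains("report-type=delivery-status")) {
                            handleBounce(rawMail);
                        }
                    }
                } catch (JMSException e) {
                    throw new RuntimeException("Could not read inbound mail message", e);
                }
            }

            private void handleBounce(String rawMail) {
                // Flag the recipient as bouncing, notify support, etc.
            }
        }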
