Search Results

Search found 5166 results on 207 pages for 'cost benefit'.

  • Do you like languages that let you put the "then" before the "if"?

    - by Matt Hamilton
    I was reading through some C# code of mine today and found this line:

        if (ProgenyList.ItemContainerGenerator.Status != System.Windows.Controls.Primitives.GeneratorStatus.ContainersGenerated) return;

    Notice that you can tell without scrolling that it's an "if" statement that works with ItemContainerGenerator.Status, but you can't easily tell that the method will return at that point when the condition is met. Realistically I should have moved the "return" statement to a line by itself, but it got me thinking about languages that allow the "then" part of the statement first. If C# permitted it, the line could look like this:

        return if (ProgenyList.ItemContainerGenerator.Status != System.Windows.Controls.Primitives.GeneratorStatus.ContainersGenerated);

    This might be a bit "argumentative", but I'm wondering what people think about this kind of construct. It might serve to make lines like the one above more readable, but it also might be disastrous. Imagine this code:

        return 3 if (x > y);

    Logically we can only return if x > y, because there's no "else", but part of me looks at that and thinks, "are we still returning if x <= y? If so, what are we returning?" What do you think of the "then before the if" construct? Does it exist in your language of choice? Do you use it often? Would C# benefit from it?
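
    For reference, the construct does exist as a "statement modifier" in Perl and Ruby (return 3 if x > y is legal Ruby). In today's C#, the closest readable equivalent is a guard clause with the return on its own line; a minimal sketch, with a hypothetical Process method wrapped around the original line:

        // Guard clause: the exit sits on its own line, so both the test and
        // the early return are visible without horizontal scrolling.
        void Process()
        {
            if (ProgenyList.ItemContainerGenerator.Status !=
                System.Windows.Controls.Primitives.GeneratorStatus.ContainersGenerated)
            {
                return;
            }

            // ... work with the generated containers ...
        }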

    Read the article

  • Will my LinqToSql execution be deferred if I filter with IEnumerable<T> instead of IQueryable<T>?

    - by cottsak
    I have been using these common EntityObjectFilters as a "pipes and filters" way to query a particular item by ID from a collection:

        public static class EntityObjectFilters
        {
            public static T WithID<T>(this IQueryable<T> qry, int ID) where T : IEntityObject
            {
                return qry.SingleOrDefault<T>(item => item.ID == ID);
            }

            public static T WithID<T>(this IList<T> list, int ID) where T : IEntityObject
            {
                return list.SingleOrDefault<T>(item => item.ID == ID);
            }
        }

    ...but I wondered to myself: "can I make this simpler by just creating an extension for all IEnumerable<T> types?" So I came up with this:

        public static class EntityObjectFilters
        {
            public static T WithID<T>(this IEnumerable<T> qry, int ID) where T : IEntityObject
            {
                return qry.SingleOrDefault<T>(item => item.ID == ID);
            }
        }

    Now, while this appears to yield the same result, I want to know: when applied to an IQueryable<T>, will the expression tree be passed to LinqToSql for evaluation as SQL code, or will my query be evaluated in its entirety first and then iterated with Funcs? I suspect (as per Richard's answer) that the latter is true, which is obviously what I don't want. I want the same result, but with the added benefit of deferred SQL execution for IQueryable<T>s. Can someone confirm what will actually happen, and provide a simple explanation of how it works?
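
    For what it's worth, the deciding factor is the parameter's static type: with an IEnumerable<T> parameter, the lambda compiles to a Func<T, bool> and binds to Enumerable.SingleOrDefault, so rows are filtered in memory; only an IQueryable<T> parameter keeps the lambda as an Expression<Func<T, bool>> for Queryable.SingleOrDefault to translate into SQL. A minimal sketch of how the call site resolves, assuming a hypothetical LinqToSql ProductDataContext with a Products table and the original pair of overloads:

        using (var db = new ProductDataContext())
        {
            // IQueryable<Product> source: binds to the IQueryable<T> overload,
            // so the ID test is translated into a SQL WHERE clause.
            Product fromSql = db.Products.WithID(2);

            // In-memory source: binds to the IList<T> overload and scans the
            // fetched rows with a compiled Func<Product, bool>.
            Product fromMemory = db.Products.ToList().WithID(2);
        }

    With only the single IEnumerable<T> version, the first call still compiles but silently degrades to the second behaviour: the whole table is enumerated and filtered client-side. Keeping the IQueryable<T> overload alongside the IEnumerable<T> one preserves deferred SQL execution for queryable sources.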

    Read the article

  • Are certain open-source licenses more suitable than others for career growth?

    - by Francisco Garcia
    As a software engineer/programmer myself, I love the possibility of downloading the code and learning from it. However, building software is what brings food to my table. I have doubts regarding the type of license I should use for my own personal projects, or when picking a project to learn from. There are already many questions about licenses on Stackoverflow, but I would like to make this one much more specific. If your main profession and way of living is building software: which type of license do you find more useful for you? And I mean the license that can benefit you most as a professional, because it gives you more freedom to reuse the experience you gain. GPL is a great license to build communities because it forces you to give back your work. However, I like BSD licenses because of their extra freedom. I know that if the code I am exploring is BSD licensed, I might be able to expand not only my skills, but also my programmer toolbox. Whenever I am working for a company, I might recall that something similar was done in another project, and I will be able to copy or imitate a certain part of the code. I know that there are religious wars regarding GPL vs BSD, and it is not my intention to start one. Probably many companies take snippets from GPL projects anyway. I just want to insist on the factor of professional enrichment. I do not intend to discriminate against any license. I said I prefer BSD licenses, but I also use Linux because the user base is bigger, and also because of the market demand.

    Read the article

  • Socket isn't listed by netstat unless using certain ports

    - by illuzive
    I'm a computer science student with a few years of programming experience. Yesterday, while working on a project (Mac OS X, BSD sockets) at school, I encountered a strange problem. I was adding several modules to a very basic "server" (mostly a bunch of functions to set up and manage a UDP socket on a certain port). While doing this, I started the server from time to time in order to see that everything worked as it should. I had been using port 32000 during the development of the server. When I start the server and run netstat, the socket is listed as expected:

        > netstat -p UDP | grep 32000
        udp46      0      0   *.32000     *.*

    However, when I run the server on other ports (random, 10000 - 50000), it's not listed by netstat. My thought was that I had somehow hard-coded the port somewhere in the code, but that's not the case. The thing is, I can connect to the socket on any of the tested ports, and it reads data sent to it without any problem at all. It just doesn't get listed by netstat. What I wonder is whether any of you have an idea why this happens. Note: although this is a project at school, it's not homework. This is just something I want to understand for my own benefit.

    Read the article

  • What can a company possibly gain by making Android phones hard to root?

    - by Chinmay Kanchi
    As someone who recently got an HTC Hero, I had to jump through several hoops to get root access on the phone to install custom firmware. Now, Android is open-source and fairly easy to build and hack on in an emulator. It seems to be against the spirit of open source to lock down a phone so you can't hack the phone itself. Now, often there are understandable (though not always justifiable) reasons for locking a device down. For example, it might have proprietary software on it, or you might want to retain control of the platform. However, Android by its open-source nature makes such concerns moot. Everyone and their dog has access to the userland code, and HTC is forced by the GPL to release kernel sources for each of their devices. So I fail to see any motivation for alienating the hackers, when there is no possible benefit (in my mind) to be had from doing this. Any idea why a company would want to do this? Is it just short-sightedness, or am I missing possible commercial implications of this?

    Read the article

  • One bidirectional TCP socket or two unidirectional? (Linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically, with the lowest possible latency, between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Is it faster to use 1 TCP socket (for send and recv) or to use 2 unidirectional TCP sockets? The test for this task is very like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received 3 times at least (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong maybe). The benefit of 2 unidirectional connections comes from Linux: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847

        /*
         * TCP receive function for the ESTABLISHED state.
         *
         * It is split into a fast path and a slow path. The fast path is
         * disabled when:
         * ...
         * - Data is sent in both directions. Fast path only supports pure senders
         *   or pure receivers (this means either the sequence number or the ack
         *   value must stay constant)
         * ...
         *
         * When these conditions are not satisfied it drops into a standard
         * receive procedure patterned after RFC793 to handle all cases.
         * The first three cases are guaranteed by proper pred_flags setting,
         * the rest is checked inline. Fast processing is turned on in
         * tcp_data_queue when everything is OK.
         */

    All the other conditions for disabling the fast path are false in my case; only a socket that is not unidirectional stops the kernel from taking the fast path on receive.

    Read the article

  • Presenting a collection of structures of strings in a grid or similar in WPF - Example? Ideas?

    - by Andrew
    Hi, I have a collection of structures. The structure is just some strings. For example:

        public struct ReportLine
        {
            public string Name;
            public string Address;
            public string Phone;
            // ... about 10 other strings
        }

    I can't change this part. What I want to do is display it in a simple grid or similar in WPF. My only requirements are: a) I need column headers; b) rows must alternate in color; c) columns must be big enough to hold the largest datum (which is not known until run time). Can someone point me to an example to get me started? Is GridView the way to go? Or DataGrid? Or perhaps just the Grid? I have the book Pro WPF in C# 2008 and it covers binding ListBoxes to collections, but the collections always seem to be collections of one field (e.g. a collection of 40 names). Here I have a collection (an array in fact) of a structure. How do I set up the databinding? As you can see, I'm new to this and would probably benefit most from a reference to an article. I've found articles covering collections of one field, but no examples covering binding to an array of structures. Also, my initial research indicates c) can't be easily done, if you demand that the column is big enough for all the data in that column, not just the visible data. Thanks, dave
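
    One wrinkle worth knowing before choosing a control: WPF data binding resolves public properties, not public fields, so the struct above will produce empty cells until its values are exposed as properties. A minimal sketch (the wrapper name is hypothetical) that a ListView/GridView or a DataGrid could bind against:

        // Read-only wrapper exposing the struct's fields as properties,
        // so bindings like {Binding Name} can resolve them.
        public class ReportLineView
        {
            private readonly ReportLine _line;

            public ReportLineView(ReportLine line) { _line = line; }

            public string Name    { get { return _line.Name; } }
            public string Address { get { return _line.Address; } }
            public string Phone   { get { return _line.Phone; } }
            // ... one property per remaining string field
        }

    The ItemsSource would then be something like lines.Select(l => new ReportLineView(l)).ToList(). For b), DataGrid's AlternatingRowBackground (or ItemsControl's AlternationCount plus a style trigger on ItemsControl.AlternationIndex) handles the alternating colors; for a), GridView and DataGrid both generate column headers. Auto-sized columns only measure rows as they are realized, which is why c) is hard once row virtualization is involved.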

    Read the article

  • Technique for selectively formatting data in a PowerShell pipeline and output as HTML

    - by halr9000
    Say that you want to do some fancy formatting of some tabular output from PowerShell, and the destination is to be HTML (either for a webserver, or to be sent in an email). Let's say, for example, that you want certain numeric values to have a different background color. Whatever. I can think of two solid programmatic ways to accomplish this: output XML and transform it with XSLT, or output HTML and decorate it with CSS. XSLT is probably the harder of the two (I say that because I don't know it), but from what little I recall, it has the benefit of being able to embed the selection criteria (XPath?) for the aforementioned fancy formatting. CSS, on the other hand, needs a helping hand. If you wanted a certain cell to be treated specially, then you would need to distinguish it from its siblings with a class, an id, or something along those lines. PowerShell doesn't really have a way to do that natively, so that would mean parsing the HTML as it leaves ConvertTo-Html and adding, for example, an "emphasis" class:

        <td class="emphasis">32MB</td>

    I don't like the idea of the required text parsing, especially given that I would rather be able to somehow emphasize what needs emphasizing in PowerShell before it hits HTML. Is XSLT the best way? Do you have suggestions for how to mark up the HTML after it leaves ConvertTo-Html, or ideas for a different way?

    Read the article

  • Common "truisms" needing correction the most

    - by Charles Bretana
    In addition to "I never met a man I didn't like", Will Rogers had another great little ditty I've always remembered. It went: "It's not what you don't know that'll hurt you, it's what you do know that ain't so." We all know or subscribe to many IT "truisms" that mostly have a strong basis in fact, in something in our professional careers, something we learned from others, lessons learned the hard way by ourselves, or by others who came before us. Unfortunately, as these truisms spread throughout the community, the details (why they came about, and the caveats that affect when they apply) tend not to spread along with them. We all have a tendency to look for, and latch on to, small "rules" or principles that we can use to avoid doing a complete, exhaustive analysis for every decision. But even though they are correct much of the time, when we sometimes misapply them, we pay a penalty that could be avoided by understanding the details behind them. For example, when user-defined functions were first introduced in SQL Server, it became "common knowledge" within a year or so that they had extremely bad performance (because they required a re-compilation for each use) and should be avoided. This "truism" still increases many database developers' aversion to using UDFs, even though Microsoft's introduction of inline UDFs, which do not suffer from this issue at all, mitigates it substantially. In recent years I have run into numerous DBAs who still believe you should "never" use UDFs because of this. What other common not-so-"truisms" do you know, which many developers believe, that are not quite as universally true as is commonly understood, and which the developer community would benefit from being better educated about? Please include why it was "true" to start with, and under what circumstances it's not true. Limit responses to issues that are technical, where the "common" application of a "rule or principle" is in fact correct most of the time, or was correct back when it was first elucidated, but (in the edge cases, because the principle is not thoroughly understood, because technology has changed since it first spread, or because the rule is applied today without understanding the details behind it) can easily backfire or cause the opposite of the intended effect.

    Read the article

  • Is it poor design to create objects that only execute code during the constructor?

    - by Curtix
    In my design I am using objects that evaluate a data record. The constructor is called with the data record and the type of evaluation as parameters, and then the constructor calls all of the object's code necessary to evaluate the record. This includes using the type of evaluation to find additional parameter-like data in a text file. There are in the neighborhood of 250 unique evaluation types that use the same or similar code and unique parameters coming from the text file. Some of these evaluations use different code, so I benefit a lot from this model because I can use inheritance and polymorphism. Once the object is created there isn't any need to execute additional code on it (at least for now) and it is used more like a struct: it's kept on a list, and 3 of its properties are used later. I think this design is the easiest to understand, code, and read. A logical alternative, I guess, would be using functions that return score structs, but you can't inherit from methods, so it would be kind of sloppy, in my opinion. I am using VB.NET, and these classes will be used in an ASP.NET web app as well as in a distributed app. Thanks for your input.
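
    One middle ground that keeps the inheritance and polymorphism but moves the heavy work out of the constructor is an explicit evaluation step; a minimal sketch in C# for brevity, with hypothetical Record and Score classes standing in for the real types:

        // The constructor only captures inputs; the scoring work runs
        // lazily the first time the result is asked for.
        public abstract class Evaluation
        {
            private readonly Record _record;
            private Score _score;

            protected Evaluation(Record record)
            {
                _record = record;
            }

            // Each of the evaluation types overrides this with its own code.
            protected abstract Score Evaluate(Record record);

            // Computed once on first access, then used like the 3 properties.
            public Score Score
            {
                get { return _score ?? (_score = Evaluate(_record)); }
            }
        }

    Whether that is worth it mostly comes down to whether evaluation can fail or needs to be tested separately from construction; for a fire-and-forget scorer kept on a list, the constructor-does-the-work shape is common enough.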

    Read the article

  • Are C functions declared in <c____> headers guaranteed to be in the global namespace as well as std?

    - by Evan Teran
    So this is something that I've always wondered about but was never quite sure of. It is strictly a matter of curiosity, not a real problem. As far as I understand it, when you do something like

        #include <cstdlib>

    everything (except macros, of course) is declared in the std:: namespace. Every implementation that I've ever seen does this by doing something like the following:

        #include <stdlib.h>
        namespace std
        {
            using ::abort;
            // etc....
        }

    which of course has the effect of things being in both the global namespace and std. Is this behavior guaranteed? Or is it possible that an implementation could put these things in std but not in the global namespace? The only way I can think of to do that would be to have your libstdc++ implement every C function itself, placing them in std directly instead of just including the existing libc headers (because there is no mechanism to remove something from a namespace) - which is, of course, a lot of effort for little to no benefit. The essence of my question is: is the following program strictly conforming and guaranteed to work?

        #include <cstdio>

        int main()
        {
            ::printf("hello world\n");
        }

    Read the article

  • python os.mkfifo() for Windows

    - by user302099
    Hello. Short version (if you can answer the short version, it does the job for me; the rest is mainly for the benefit of other people with a similar task): In Python on Windows, I want to create 2 file objects attached to the same file (it doesn't have to be an actual file on the hard drive), one for reading and one for writing, such that if the reading end tries to read, it will never get EOF (it will just block until something is written). I think that in Linux os.mkfifo() would do the job, but in Windows it doesn't exist. What can be done? (I must use file objects.) Some extra details: I have a Python module (not written by me) that plays a certain game through stdin and stdout (using raw_input() and print). I also have a Windows executable playing the same game, through stdin and stdout as well. I want to make them play one against the other and log all their communication. Here's the code I can write (the get_fifo() function is not implemented, because that's what I don't know how to do on Windows; note the loop condition checks both ends, where the original accidentally tested source twice):

        class Pusher(Thread):
            def __init__(self, source, dest, p1, name):
                Thread.__init__(self)
                self.source = source
                self.dest = dest
                self.name = name
                self.p1 = p1

            def run(self):
                while (self.p1.poll() is None) and \
                      (not self.source.closed) and (not self.dest.closed):
                    line = self.source.readline()
                    logging.info('%s: %s' % (self.name, line[:-1]))
                    self.dest.write(line)
                    self.dest.flush()

        exe_to_pythonmodule_reader, exe_to_pythonmodule_writer = get_fifo()
        pythonmodule_to_exe_reader, pythonmodule_to_exe_writer = get_fifo()

        p1 = subprocess.Popen(exe, shell=False,
                              stdin=subprocess.PIPE, stdout=subprocess.PIPE)

        old_stdin = sys.stdin
        old_stdout = sys.stdout
        sys.stdin = exe_to_pythonmodule_reader
        sys.stdout = pythonmodule_to_exe_writer

        push1 = Pusher(p1.stdout, exe_to_pythonmodule_writer, p1, '1')
        push2 = Pusher(pythonmodule_to_exe_reader, p1.stdin, p1, '2')
        push1.start()
        push2.start()

        ret = pythonmodule.play()

        sys.stdin = old_stdin
        sys.stdout = old_stdout

    Read the article

  • Field Members vs Method Variables?

    - by Braveyard
    Recently I've been thinking about the performance difference between class field members and method variables. What exactly I mean is shown in the example below. Let's say we have a DataContext object for Linq2SQL:

        class DataLayer
        {
            ProductDataContext context = new ProductDataContext();

            public IQueryable<Product> GetData()
            {
                return context.Products.Where(t => t.ProductId == 2);
            }
        }

    In the example above, context will be stored on the heap, and the GetData method's variables will be removed from the stack after the method is executed. So let's examine the following example to make a distinction:

        class DataLayer
        {
            public IQueryable<Product> GetData()
            {
                ProductDataContext context = new ProductDataContext();
                return context.Products.Where(t => t.ProductId == 2);
            }
        }

    (*1) So, okay, the first thing we know is that if we define the ProductDataContext instance as a field, we can reach it everywhere in the class, which means we don't have to create the same object instance all the time. But let's say we are talking about ASP.NET: once the user presses the submit button, the posted data is sent to the server, the events are executed, and the posted data is stored in a database via the method above, so it is probable that the same user can send different data one request after another. If I understand correctly, after the page is executed, the finalizers come into play and clear things from memory (from the heap), which means we lose our instance variables from memory as well, and after another post the DataContext has to be created once again for the new page cycle. So it seems the only benefit of declaring it as a field visible to the whole class is just the point marked (*1) above - or is there something else? Thanks in advance... (If I said anything incorrect, please correct me.)
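
    For context on which shape is idiomatic: DataContext is designed as a cheap, short-lived unit of work, so the per-method version is generally preferred, usually wrapped in a using block so the underlying connection goes back to the pool deterministically. A minimal sketch (the method name is hypothetical; note the result is materialized before the context is disposed, since a still-deferred IQueryable cannot execute against a disposed context):

        class DataLayer
        {
            public Product GetProduct(int productId)
            {
                // New context per unit of work: construction is cheap, and
                // Dispose releases the connection once the query has run.
                using (var context = new ProductDataContext())
                {
                    return context.Products
                                  .SingleOrDefault(p => p.ProductId == productId);
                }
            }
        }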

    Read the article

  • Should developers *really* have private offices?

    - by Aron Rotteveel
    We will probably be moving within a year, so we have to make some decisions regarding office layout. At the moment, our company is basically one big office. When our developers don't want to be disturbed at all, we all have our own headphones to mute the outside world. Still, it seems a lot of people feel that private offices are no doubt the way to go. From Joel's article Private Offices Redux: "Not every programmer in the world wants to work in a private office. In fact quite a few would tell you unequivocally that they prefer the camaraderie and easy information sharing of an open space. Don't fall for it. They also want M&Ms for breakfast and a pony. Open space is fun but not productive." Even though I can understand the benefit to productivity, does having a private office really result in more net productivity? There seem to be plenty of companies that create wide open spaces and still maintain good productivity. Or so it seems. (I should mention many of them use cubicles, though.) What is your opinion on this? What does your company do? Is there some middle ground here? Some more related information on this matter:

    - Private Offices Redux
    - The new Fog Creek office
    - A Field Guide to Developers
    - the Gmail recruitment page

    I found this last one somewhat remarkable, since the Gmail recruitment page promotes the "wide open space" idea.

    Read the article

  • Does new JUnit 4.8 @Category render test suites almost obsolete?

    - by grigory
    Given the question "How to run all tests belonging to a certain Category?" and its answer, would the following approach be better for test organization?

    - define a master test suite that contains all tests (e.g. using ClasspathSuite)
    - design a sufficient set of JUnit categories ("sufficient" meaning that every desirable collection of tests is identifiable using one or more categories)
    - define targeted test suites based on the master test suite and the set of categories

    For example:

    - identify categories for speed (slow, fast), dependencies (mock, database, integration), function (...), domain (...)
    - demand that each test is properly qualified (tagged) with the relevant set of categories
    - create the master test suite using ClasspathSuite (all tests found on the classpath)
    - create targeted suites by qualifying the master test suite with categories, e.g. a mock test suite, a fast database test suite, a slow integration test suite for domain X, etc.

    My question is more like soliciting the approval rate for such an approach vs. the classic test suite approach. One unbeatable benefit is that every new test is immediately contained in the relevant suites with no suite maintenance. One concern is the proper categorization of each test.

    Read the article

  • Should I learn to code?

    - by saltcod
    Hi All, this is more of a philosophical question than a technical one, but I'd like some opinions on it, and I think that there are many others in my position who would benefit. My issue is that I don't really have time to learn how to code. I know, I know... no one has time anymore, but please hear me out. Since learning to use Drupal about 2 years ago I've been involved with several projects wherein I've become the default quasi-developer, front-end designer, site manager, and system administrator. What I've found is that I can produce fairly nice, feature-rich Drupal sites with the wealth of contrib modules out there (Views, CCK, image handling, etc.). BUT! I can't code. I know enough PHP to insert something into a block, or re-word a string, but that's about it. I still don't really even know how arrays work.

    My question, succinctly: given the time that I have available for all of this stuff (in addition to a full-time job and regular life) am I better off trying to become more expert at the front-end stuff, or should I just learn PHP already?

    Pros:
    1. If a project doesn't use Drupal, I'll know enough PHP to be able to participate.
    2. Learning PHP would help my Drupal development too.
    3. Learning PHP would make front-end theming easier.
    4. Learning PHP should give me that missing background in programming - and should allow me to learn other languages in the future.

    Cons:
    1. At 28, I know I'm not too old to learn anything. But am I too old to become 'good'?
    2. Am I better off getting better and better at front-end UX work?
    3. Am I better off farming out the PHP work?

    Suggestions from coders welcome! Thanks, Terry

    Read the article

  • Does anyone still believe in the Capability Maturity Model for Software?

    - by Ed Guiness
    Ten years ago, when I first encountered the CMM for software, I was, I suppose like many, struck by how accurately it seemed to describe the chaotic "level one" state of software development in many businesses, particularly with its reference to reliance on heroes. It also seemed to provide realistic guidance for an organisation to progress up the levels, improving their processes. But while it seemed to provide a good model and realistic guidance for improvement, I never really witnessed adherence to CMM having a significant positive impact on any organisation I have worked for, or with. I know of one large software consultancy that claims CMM level 5 - the highest level - when I can see first-hand that their processes are as chaotic, and the quality of their software products as varied, as at other, non-CMM businesses. So I'm wondering: has anyone seen a real, tangible benefit from adherence to process improvement according to CMM? And if you have seen improvement, do you think that the improvement was specifically attributable to CMM, or would an alternative approach (such as Six Sigma) have been equally or more beneficial? Does anyone still believe? As an aside, for those who haven't yet seen it, check out this funny-because-it's-true parody.

    Read the article

  • Can I get an XPathNodeIterator directly from an XPath?

    - by Val
    I hope I'm just missing something obvious. I have a number of repeating nodes in an XML document:

        <root>
          <parent>
            <child/>
            <child/>
          </parent>
        </root>

    I need to examine the contents of each of the <child> elements in turn, so I need an XPathNodeIterator containing the node set of all the <child> nodes. If I have an XPath that would select the child nodes, e.g. /root/parent/child, is there any way to feed that directly to a new XPathNodeIterator? Everything I see in the docs and examples indicates that I have to first get an XPathNavigator to the <parent>, then Select the child nodes, like:

        XPathNavigator nav = datasource.CreateNavigator().SelectSingleNode("/root/parent");
        XPathNodeIterator it = nav.Select("./child");
        foreach (XPathNavigator child in it) { /* do something */ }

    I was hoping to skip the XPathNavigator and initialize the XPathNodeIterator with the XPath to the child nodes directly, something like:

        XPathNodeIterator it = new XPathNodeIterator("/root/parent/child");
        foreach (XPathNavigator child in it) { /* do something */ }

    Possible? The benefit is not only saving a line of code, but that I can use a single XPath expression, rather than splitting the path to the <child> nodes in two - first to get the parent element, then to select its children.
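
    For what it's worth, XPathNodeIterator is abstract and has no public constructor, so it can't be newed up directly - but XPathNavigator.Select accepts the full path, which gives the single-expression form in one call. A minimal sketch, assuming datasource is an XPathDocument or any other IXPathNavigable:

        // One navigator, one XPath expression, one iterator: Select()
        // compiles "/root/parent/child" and returns the whole node set.
        XPathNodeIterator it = datasource.CreateNavigator().Select("/root/parent/child");
        while (it.MoveNext())
        {
            XPathNavigator child = it.Current;
            // do something with child, e.g. read child.Value
        }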

    Read the article

  • Can I have the gcc linker create a static library?

    - by Lucas Meijer
    I have a library consisting of some 300 C++ files. The program that consumes the library does not want to dynamically link to it. (For various reasons, but the best one is that some of the supported platforms do not support dynamic linking.) So I use g++ and ar to create a static library (.a). This file contains all the symbols of all those files, including ones that the library doesn't want to export. I suspect linking the consuming program with this library takes unnecessarily long, as all the .o files inside the .a still need to have their references resolved, and the linker has more symbols to process. When creating a dynamic library (.dylib / .so) you can actually use a linker, which can resolve all intra-library symbols and export only those that the library wants to export. The result, however, can only be "linked" into the consuming program at runtime. I would like to somehow get the benefits of dynamic linking but use a static library. If my Google searches are correct in suggesting this is indeed not possible, I would love to understand why, as it seems like something that many C and C++ programs could benefit from.

    Read the article

  • How can I abstract out the core functionality of several Rails applications?

    - by hornairs
    I'd like to develop a number of non-trivial Rails applications which all implement a core set of functionality but each have certain particular customizations, extensions, and aesthetic differences. How can I pull out the core functionality (models, controllers, helpers, support classes, tests) common to all these systems in such a way that updating the core will benefit every application based upon it? I've seen Rails Engines, but they seem to be too detached, almost too abstracted to be built upon. I can see them being useful for adding one component to an existing app, for example bolting a blog engine onto your existing e-commerce site. Since engines seem to be mostly self-contained, it seems difficult and inconvenient to override their functionality and views while keeping DRY. I've also considered abstracting the code into a gem, but this seems a little odd. Do I make the gem depend on the Rails gems, define models and controllers inside it, and then subclass them in my various applications? Or do I define many modules inside the gem that I include in the different spots inside my various applications? How do I test the gem, and then test the set of customizations and overridden functionality on top of it? I'm also concerned with how I'll develop the gem and the Rails apps in tandem: can I vendor a git repository of the gem into the app and push from that, so I don't have to build a new gem every iteration? Also, are there private gem hosts, and can I set up my own gem source? Finally, any general suggestions for this kind of undertaking? Abstraction paradigms to adhere to? Required reading? Comments from the wise who have done this before? Thanks!

    Read the article

  • How would you organize this in asp.net mvc?

    - by chobo
    I have an asp.net mvc 2.0 application that contains Areas/Modules like calendar, admin, etc. There may be cases where more than one area needs to access the same repository, so I am not sure where to put the Data Access Layers and Repositories.

    First option: should I create Data Access Layer files (Linq to SQL in my case) with their accompanying repositories for each area, so that each area contains only the tables and repositories needed by that area? The benefit is that everything needed to run that module is in one place, so it is more encapsulated (in my mind anyway). The downside is that I may have duplicate queries, because other modules may use the same query.

    Second option: or would it be better to place the DAL and repositories outside the areas and treat them as global? The advantage is that I won't have any duplicate queries, but I may be loading a lot of unnecessary queries and DAL tables for certain modules. It is also more work to reuse or modify these modules for future projects (though the chance of reusing them is slim to none :))

    Which option makes more sense? If someone has a better way I'd love to hear it. Thanks!
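
    For the global option, a common shape is a single shared class library that every area references, so each query is defined exactly once; a minimal sketch with hypothetical names (note that a Linq to SQL DataContext only issues SQL when a query actually executes, so a "global" DAL does not load tables that a module never touches):

        // Shared Core project, referenced by the MVC app and all areas.
        public interface ICustomerRepository
        {
            Customer GetById(int id);
            IEnumerable<Customer> GetAll();
        }

        public class CustomerRepository : ICustomerRepository
        {
            // Linq to SQL context for the shared tables (hypothetical name).
            private readonly MainDataContext _db = new MainDataContext();

            public Customer GetById(int id)
            {
                // Single definition of this query, used by every area.
                return _db.Customers.SingleOrDefault(c => c.CustomerId == id);
            }

            public IEnumerable<Customer> GetAll()
            {
                return _db.Customers.ToList();
            }
        }

    Each area's controllers would then take an ICustomerRepository, which also keeps the modules testable and easier to carry into future projects.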

    Read the article

  • What's the best approach for modifying PDF interactive form fields on iOS?

    - by gbreen
    I've been doing some head-banging on this one and solicit your advice. I am building an app that, as part of its features, presents PDF forms; meaning it displays them, allows fields to be changed, and saves the modified PDF file back out. UIWebViews do not support PDF interactive forms. Using the CGPDF APIs (and benefiting from other questions posted here and elsewhere), I can certainly present the PDF (without the form fields/widgets), scan and find the fields in the document, figure out where on the screen to draw something, and make them interactive. What I can't seem to figure out is how to change the CGPDFDictionary objects and write them back out to a file. One could use the CGPDF APIs to create a new PDF document from whole cloth, but how do you use them to modify an existing file? Should I be looking elsewhere, such as at third-party PDF libs like PoDoFo or libHaru? I'd love to hear from anyone who has successfully modified a PDF and written it back out as to your approach. Thanks!

    Read the article

  • What is the best practice approach for n-tier application development with Entity Framework?

    - by samsur
    I am building an application using Entity Framework. I am using the T4 template to generate self-tracking entities. Currently, I am thinking of putting the Entity Framework code in a separate project. In this same project, I would have partial classes with additional methods for the entities. I am thinking of creating a separate project for a service layer (WCF) with methods for the upper/presentation tier. The WCF layer will reference the Entity Framework project. The methods in the WCF layer will return the entities or accept the entities as parameters. I am thinking of creating a third project for the presentation layer (ASP.NET); this will make calls to the WCF service, but will also need to reference the entities, as the WCF methods take these types as parameters/return types. In short, I want to use the STE entities generated by the T4 template as DTOs to be used in all layers. I was originally thinking of creating a business logic layer that maps to each entity. Example: if I have a Customer class, the business layer would have a CustomerBLL class, and the methods in CustomerBLL would be used by the service layer. I was also trying to create a DTO in this business layer. I, however, found that this approach is very time-consuming, and I do not see a major benefit, as it would create more maintenance work. What is the best practice for n-tier application development using Entity Framework 4?
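
    To make the described shape concrete, here is a minimal sketch of the WCF layer (hypothetical names; Customer stands for a T4-generated self-tracking entity from the shared entities project, whose generated context exposes an ApplyChanges method for replaying the entity's tracked changes):

        using System.ServiceModel;

        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            Customer GetCustomer(int customerId);

            [OperationContract]
            void UpdateCustomer(Customer customer);  // the STE carries its own change tracking
        }

        public class CustomerService : ICustomerService
        {
            public Customer GetCustomer(int customerId)
            {
                using (var context = new MyEntities())  // hypothetical ObjectContext
                {
                    return context.Customers.SingleOrDefault(c => c.CustomerId == customerId);
                }
            }

            public void UpdateCustomer(Customer customer)
            {
                using (var context = new MyEntities())
                {
                    // ApplyChanges replays the changes the STE tracked on the client.
                    context.Customers.ApplyChanges(customer);
                    context.SaveChanges();
                }
            }
        }

    The presentation project then references only the entities project and the service contract, which matches the layering described above.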

    Read the article

  • Develop multiple very similar projects at once

    - by Raveren
    I am developing a semi-complicated site that is available in several countries at once. Much effort has been put in to make the code bases as similar as possible to one another, and ultimately only the config file and some representational data will differ between them. Each project has its own SVN repository, which maps directly to a live test site. That part is handled by the IDE we use to work. Now I need to create some sort of system to keep all these projects in sync. The best theoretical solution so far is to create a local hook script that fires on commit and would:

    1. merge the committed files from the project being committed into all the other projects
    2. optionally upload them to the live site, replacing the previous files

    The first problem is that I don't know how I would do the merging - I guess it would be like applying an SVN patch or something. The second is, if I do not want to upload the changes to the live server, how would I go about synching the live and local code bases (replace older files?). The reason I am posting this question, rather than going through the potentially huge trouble of solving the aforementioned problems myself, is that I believe this is a pretty common situation; someone may already have a solution, and others may benefit from the answers in the future. Lastly, I'm on Windows 7, I develop in PHP, and I use TortoiseSVN.

    Read the article
