Search Results

Search found 17191 results on 688 pages for 'programming logic'.

  • Technical choices in unmarshaling hash-consed data

    - by Pascal Cuoq
    There seems to be quite a bit of folklore knowledge floating around in restricted circles about the pitfalls of hash-consing combined with marshaling-unmarshaling of data. I am looking for citable references to these tidbits. For instance, someone once pointed me to the aterm library and mentioned that the authors had clearly thought about this and that the representation on disk was bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possible identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshaling itself might as well be, too, so that it's possible to do everything in a single pass. I am in the process of describing difficulties encountered in our own context, and the solutions we found. I would appreciate any citable reference to the kind of aforementioned folklore knowledge. Some people have obviously encountered these problems before (the aterm library is only one example), but I didn't find anything in writing. Even the little piece of information I have about aterm is hearsay. I am not worried that it's unreliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations. I have enough references on hash-consing alone. I am only interested in references where it interferes with other aspects of programming, such as marshaling or distribution.
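
    A minimal illustration of the bottom-up re-sharing idea (an editor's sketch, not from the post; the Entry and Term types and the unmarshal function are hypothetical, and a Map stands in for a real hash-consing table):

      import qualified Data.Map.Strict as M

      -- Stream entries appear in bottom-up order: children are referenced by
      -- the index of an earlier entry, so they are already rebuilt (and
      -- re-shared) by the time their parent is read.
      data Entry = ELeaf Int | EBranch Int Int deriving (Show)
      data Term  = Leaf Int | Branch Term Term deriving (Eq, Ord, Show)

      -- One pass over the stream: rebuild each node from already-interned
      -- children, then intern it so that structurally equal nodes are
      -- represented by a single canonical value.
      unmarshal :: [Entry] -> M.Map Int Term
      unmarshal entries = byIndex
        where
          (byIndex, _interned) = foldl step (M.empty, M.empty) (zip [0 ..] entries)
          step (ix, tbl) (i, e) =
            let t         = case e of
                              ELeaf n     -> Leaf n
                              EBranch a b -> Branch (ix M.! a) (ix M.! b)
                canonical = M.findWithDefault t t tbl
            in  (M.insert i canonical ix, M.insert canonical canonical tbl)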

    Read the article

  • Would anybody recommend learning J/K/APL?

    - by ozan
    I came across J/K/APL a few months ago while working my way through some Project Euler problems, and was intrigued, to say the least. For every elegant-looking 20-line Python solution I produced, there'd be a gobsmacking 20-character J solution that ran in a tenth of the time. I've been keen to learn some basic J, and have made a few attempts at picking up the vocabulary, but have found the learning curve to be quite steep. To those who are familiar with these languages, would you recommend investing some time to learn one (I'm thinking J in particular)? I would do so more for the purpose of satisfying my curiosity than for career advancement or some such thing. Some personal circumstances to consider, if you care to: I love mathematics, and use it daily in my work (as a mathematician for a startup), but to be honest I don't really feel limited by the tools that I use (like Python + NumPy), so I can't use that excuse. I have no particular desire to work in the finance industry, which seems to be the main port of call for K users at least. Plus I should really learn C# as my next language, as it's the primary language where I work. So practically speaking, J almost definitely shouldn't be the next language I learn. I'm reasonably familiar with MATLAB, so using an array-based programming language wouldn't constitute a tremendous paradigm shift. Any advice from those familiar with these languages would be much appreciated.

    Read the article

  • Learning Haskell maps, folds, loops and recursion

    - by Darknight
    I've only just dipped my toe in the world of Haskell as part of my journey of programming enlightenment (moving on from procedural to OOP to concurrent to now functional). I've been trying an online Haskell evaluator. However, I'm now stuck on a problem: create a simple function that gives the total sum of an array of numbers. In a procedural language this is easy enough for me, using recursion (C#): private int sum(ArrayList x, int i) { if (i < x.Count) { return (int)x[i] + sum(x, i + 1); } return 0; } All very fine; however, my failed attempt at Haskell was this: let sum x = x+sum in map sum [1..10] This resulted in the following error (from the above-mentioned website): Occurs check: cannot construct the infinite type: a = a -> t Please bear in mind I've only used Haskell for the last 30 minutes! I'm not looking simply for an answer but rather an explanation of it. Thanks in advance.
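
    For reference, a working version of what the post is after (an editor's sketch, not part of the original question; sumList is a made-up name):

      -- Two standard ways to sum a list in Haskell: explicit recursion
      -- over the (:) constructor, and a fold.
      sumList :: Num a => [a] -> a
      sumList []     = 0
      sumList (x:xs) = x + sumList xs

      sumList' :: Num a => [a] -> a
      sumList' = foldr (+) 0

      -- ghci> sumList [1..10]
      -- 55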

    Read the article

  • Technology Plan - Which tools should I use?

    - by Armadillo
    Hi, Soon I'll start my own software company. My primary product/solution will be billing/invoice software. In the near future, I intend to expand this first module into an ERP. My app should be able to run as a stand-alone application and as a web-based application (so there will probably be two GUIs for the same database). My problem now is to choose the right tools; I'm talking about which programming language(s) I should use, what kind of database I should choose, and so on. I'm primarily a VB6 programmer, so probably I'll choose the .NET Framework (VB/C#). But I'm seriously thinking about Java. Java has 2 "pros" that I really like: write once, run anywhere, and it is free (I think...). I've been thinking about RIAs too, but I just don't have any substantial feedback about them... Then I'll need a report tool. Crystal Reports? HTML-based reports? Other? Databases: I'm not sure if I should use SQL Server Express or PostgreSQL (or another). I'd be happy to hear any comments and advice. Thanks

    Read the article

  • Natural problems to solve using closures

    - by m.u.sheikh
    I have read quite a few articles on closures, and, embarrassingly enough, I still don't understand the concept! Articles explain how to create a closure with a few examples, but I don't see any point in paying much attention to them, as they largely look like contrived examples. I am not saying all of them are contrived, just that the ones I found looked contrived, and I didn't see how, even after understanding them, I would be able to use them. So in order to understand closures, I am looking for a few real problems that can be solved very naturally using closures. For instance, a natural way to explain recursion to a person could be to explain the computation of n!. It is very natural to understand a problem like computing the factorial of a number using recursion. Similarly, it is almost a no-brainer to find an element in an unsorted array by reading each element and comparing it with the number in question. Also, at a different level, doing object-oriented programming also makes sense. So I am trying to find a number of problems that could be solved with or without closures, but where using closures makes thinking about them and solving them easier. Also, there are two kinds of closure implementations: each call to a closure can either create a copy of the captured environment variables or reference the same variables. So what sort of problems can be solved more naturally in which of the closure implementations?
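
    As one small, concrete illustration of what a closure buys you (an editor's sketch, not from the post; the names are made up), a function can capture values from its defining environment and hand back a specialized function:

      -- 'within' returns a new predicate that closes over lo and hi from
      -- its defining environment; callers never pass the bounds again.
      within :: Ord a => a -> a -> (a -> Bool)
      within lo hi = \x -> lo <= x && x <= hi

      inUnitInterval :: Double -> Bool
      inUnitInterval = within 0 1

      -- ghci> filter (within 10 20) [5, 12, 18, 25]
      -- [12,18]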

    Read the article

  • Printing a field with additional dots in Haskell

    - by Frank Kluyt
    I'm writing a function called printField. This function takes an int and a string as arguments and then prints a field like "Derp..." when called as printField 7 "Derp". When the field consists of digits the output should be "...3456". The function I wrote looks like this: printField :: Int -> String -> String printField x y = if isDigit y then concat(replicate n ".") ++ y else y ++ concat(replicate n ".") where n = x - length y This obviously isn't working. The error I get from GHC is: Couldn't match type `[Char]' with `Char' Expected type: Char Actual type: String In the first argument of `isDigit', namely `y' In the expression: isDigit y In the expression: if isDigit y then concat (replicate n ".") ++ y else y ++ concat (replicate n ".") I can't get it to work :(. Can anyone help me out? Please keep in mind that I'm new to Haskell and functional programming in general.
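
    The error comes from applying isDigit (which takes a Char) to a whole String; testing every character with 'all' fixes it. A possible corrected version (an editor's sketch, not the poster's final code):

      import Data.Char (isDigit)

      printField :: Int -> String -> String
      printField x y
        | all isDigit y = padding ++ y   -- right-align numeric fields
        | otherwise     = y ++ padding   -- left-align everything else
        where
          padding = replicate (x - length y) '.'

      -- ghci> printField 7 "Derp"
      -- "Derp..."
      -- ghci> printField 7 "3456"
      -- "...3456"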

    Read the article

  • Which frameworks (and associated languages) support class replacement?

    - by Alix
    Hi, I'm writing my master thesis, which deals with AOP in .NET, among other things, and I mention the lack of support for replacing classes at load time as an important factor in the fact that there are currently no .NET AOP frameworks that perform true dynamic weaving -- not without imposing the requirement that woven classes must extend ContextBoundObject or MarshalByRefObject or expose all their semantics on an interface. You can however do this with the JVM thanks to ClassFileTransformer: You extend ClassFileTransformer. You subscribe to the class load event. On class load, you rewrite the class and replace it. All this is very well, but my project director has asked me, quite in the last minute, to give him a list of frameworks (and associated languages) that do / do not support class replacement. I really have no time to look for this now: I wouldn't feel comfortable just doing a superficial research and potentially putting erroneous information in my thesis. So I ask you, oh almighty programming community, can you help out? Of course, I'm not asking you to research this yourselves. Simply, if you know for sure that a particular framework supports / doesn't support this, leave it as an answer. If you're not sure please don't forget to point it out. Thanks so much!

    Read the article

  • Which languages support class replacement?

    - by Alix
    Hi, I'm writing my master thesis, which deals with AOP in .NET, among other things, and I mention the lack of support for replacing classes at load time as an important factor in the fact that there are currently no .NET AOP frameworks that perform true dynamic weaving -- not without imposing the requirement that woven classes must extend ContextBoundObject or MarshalByRefObject or expose all their semantics on an interface. You can however do this in Java thanks to ClassFileTransformer: You extend ClassFileTransformer. You subscribe to the class load event. On class load, you rewrite the class and replace it. All this is very well, but my project director has asked me, quite in the last minute, to give him a list of languages that do / do not support class replacement. I really have no time to look for this now: I wouldn't feel comfortable just doing a superficial research and potentially putting erroneous information in my thesis. So I ask you, oh almighty programming community, can you help out? Of course, I'm not asking you to research this yourselves. Simply, if you know for sure that a particular language supports / doesn't support this, leave it as an answer. If you're not sure please don't forget to point it out. Thanks so much!

    Read the article

  • Haskell Cons Operator (:)

    - by Carson Myers
    I am really new to Haskell (actually I saw "Real World Haskell" from O'Reilly and thought "hmm, I think I'll learn functional programming" yesterday) and I am wondering: I can use the cons operator to add an item to the beginning of a list: 1 : [2,3] gives [1,2,3]. I tried making an example data type I found in the book and then playing with it: --in a file data BillingInfo = CreditCard Int String String | CashOnDelivery | Invoice Int deriving (Show) --in ghci $ let order_list = [Invoice 2345] $ order_list [Invoice 2345] $ let order_list = CashOnDelivery : order_list $ order_list [CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, ...- etc... it just repeats forever; is this because it uses lazy evaluation? -- EDIT -- okay, so it is being pounded into my head that let order_list = CashOnDelivery:order_list doesn't add CashOnDelivery to the original order_list and then set the result to order_list, but instead is recursive and creates an infinite list, forever adding CashOnDelivery to the beginning of itself. Of course now I remember that Haskell is a functional language and I can't change the value of the original order_list, so what should I do for a simple "tack this on to the end (or beginning, whatever) of this list?" Make a function which takes a list and a BillingInfo as arguments, and then return a list? -- EDIT 2 -- well, based on all the answers I'm getting and the lack of being able to pass an object by reference and mutate variables (such as I'm used to)... I think that I have just asked this question prematurely and that I really need to delve further into the functional paradigm before I can expect to really understand the answers to my questions... I guess what I was looking for was how to write a function or something, taking a list and an item, and returning a list under the same name so the function could be called more than once, without changing the name every time (as if it was actually a program which would add actual orders to an order list, and the user wouldn't have to think of a new name for the list each time, but rather append an item to the same list).
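
    A sketch of the purely functional version of "add an order to the list" (an editor's illustration, not from the post; addOrder is a made-up name):

      data BillingInfo = CreditCard Int String String
                       | CashOnDelivery
                       | Invoice Int
                       deriving (Show)

      -- Instead of mutating order_list, each call builds a new list and the
      -- caller keeps whichever version it needs; the old list is untouched.
      addOrder :: BillingInfo -> [BillingInfo] -> [BillingInfo]
      addOrder order orders = order : orders

      -- ghci> let orders0 = [Invoice 2345]
      -- ghci> let orders1 = addOrder CashOnDelivery orders0
      -- ghci> orders1
      -- [CashOnDelivery,Invoice 2345]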

    Read the article

  • Computing the number of possible orderings of n objects under the relations < and =

    - by hilal
    Here is the problem: Give an algorithm that takes a positive integer n as input and computes the number of possible orderings of n objects under the relations < and =. For example, if n = 3 the 13 possible orderings are as follows: a = b = c, a = b < c, a < b = c, a < b < c, a < c < b, a = c < b, b < a = c, b < a < c, b < c < a, b = c < a, c < a = b, c < a < b, c < b < a. Your algorithm should run in time polynomial in n. I'm completely stuck on this problem. Can anyone suggest a dynamic-programming solution?
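
    One possible recurrence (an editor's sketch, not from the post): choose which k of the n objects are tied for first place, then order the remaining n - k, giving f(n) = sum over k of C(n, k) * f(n - k), with f(0) = 1. A memoized Haskell version:

      orderings :: Int -> Integer
      orderings n = table !! n
        where
          -- The lazy list caches f at every index computed so far.
          table = map f [0 ..]
          f 0 = 1
          f m = sum [ choose m k * table !! (m - k) | k <- [1 .. m] ]
          choose m k = product [fromIntegral (m - k + 1) .. fromIntegral m]
                       `div` product [1 .. fromIntegral k]

      -- ghci> orderings 3
      -- 13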

    Read the article

  • Are there any downsides in using C++ for network daemons?

    - by badcat
    Hey guys! I've been writing a number of network daemons in different languages over the past years, and now I'm about to start a new project which requires a new custom implementation of a proprietary network protocol. The protocol is pretty simple - some basic JSON-formatted messages which are transmitted in some basic frame wrapping so that clients know when a message has arrived completely and is ready to be parsed. The daemon will need to handle a number of connections (about 200 at the same time), do some management of them and pass messages along, like in a chat room. In the past I've been using mostly C++ to write my daemons, often with the Qt4 framework (the network parts, not the GUI parts!), because that's what I also used for the rest of the projects and it was simple to do and very portable. This usually worked just fine, and I didn't have much trouble. Being a Linux administrator for a good while now, I've noticed that most of the network daemons in the wild are written in plain C (of course some are written in other languages, too, but I get the feeling that 80% of the daemons are written in plain C). Now I wonder why that is. Is this due to a pure historic UNIX background (like KISS), or for plain portability, or reduction of bloat? What are the reasons not to use C++ or any "higher level" languages for things like daemons? Thanks in advance! Update 1: For me, using C++ is usually more convenient because I have objects with getter and setter methods and such. Plain C's "context" objects can be a real pain at some point - especially when you are used to object-oriented programming. Yes, I'm aware that C++ is a superset of C, and that C code is basically C++. But that's not the point. ;)

    Read the article

  • Ruby: counters, counting and incrementing

    - by Shyam
    Hi, If you have seen my previous questions, you'd already know I am a big nuby when it comes to Ruby. So, I discovered this website which is intended for C programming, but I thought whatever one can do in C must be possible in Ruby (and more readable too). The challenge is to print out a bunch of numbers. I discovered this nifty method .upto() and I used a block (and actually understand its purpose). However, in IRB, I got some unexpected behavior. class MyCounter def run 1.upto(10) { |x| print x.to_s + " " } end end irb(main):033:0> q = MyCounter.new => #<MyCounter:0x5dca0> irb(main):034:0> q.run 1 2 3 4 5 6 7 8 9 10 => 1 I have no idea where the => 1 comes from :S Should I do this otherwise? I am expecting to have this result: 1 2 3 4 5 6 7 8 9 10 Thank you for your answers, comments and feedback!

    Read the article

  • Is it OK to write code after [super dealloc]? (Objective-C)

    - by Richard J. Ross III
    I have a situation in my code where I cannot clean up my class's objects without first calling [super dealloc]. It is something like this: // Baseclass.m @implementation Baseclass ... -(void) dealloc { [self _removeAllData]; [aVariableThatBelongsToMe release]; [anotherVariableThatBelongsToMe release]; [super dealloc]; } ... @end This works great. My problem is, when I went to subclass this huge and nasty class (over 2000 lines of gross code), I ran into a problem: when I released my objects before calling [super dealloc], I had zombies running through the code that were activated when I called the [self _removeAllData] method. // Subclass.m @implementation Subclass ... -(void) dealloc { [super dealloc]; [someObjectUsedInTheRemoveAllDataMethod release]; } ... @end This works great, and it didn't require me to refactor any code. My question is this: Is it safe for me to do this, or should I refactor my code? Or maybe autorelease the objects? I am programming for iPhone, if that matters.

    Read the article

  • Freelance web hosting - what are good LAMP choices?

    - by tkotitan
    I think it's best if I ask this question with an example scenario. Let's say your mom-and-pop local hardware store has never had a website, and they want you, the freelance developer, to build them a website. You have all the skills to run a LAMP setup and admin a system, so the difficult question you ask yourself is: where will I host it? You certainly aren't going to host it on the machine in your apartment. Let's say you want to be able to customize your own system, install the version of PHP you want, and manage your own database. Perhaps the best kind of hosting is to get a virtual machine so you can customize the system as you see fit. But this is essentially a "set it and forget it" site you make, bill by the hour for, and then are done with. In other words, the hosting should not be an issue. Given the requirements of hosting a website: Unlimited growth potential needing good amounts of bandwidth to handle visitors Wide range of system and programming options allowing it to be portable Relatively cheap (not necessarily the cheapest) or reasonable scaling cost Reliable hosting with good support Hosted entirely on the host company's hardware Who would you pick to host this website? Yes, I am asking for a business/company recommendation. Is there a clear answer for this scenario, or a good source that can reliably give the current answer? I know there are all kinds of schemes out there. I'm just wondering if any one company fills the bill for freelancers and stands out in such a crowded market.

    Read the article

  • Alter a function as a parameter before evaluating it in R?

    - by Shane
    Is there any way, given a function passed as a parameter, to alter its input parameter string before evaluating it? Here's pseudo-code for what I'm hoping to achieve: test.func <- function(a, b) { # here I want to alter the b expression before evaluating it: b(..., val1=a) } Given the function call passed to b, I want to add in a as another parameter without needing to always specify ... in the b call. So the output from this test.func call should be: test.func(a="a", b=paste(1, 2)) "1" "2" "a" Edit: Another way I could see doing something like this would be if I could assign the additional parameter within the scope of the parent function (again, as pseudo-code); in this case a would be within the scope of t1 and hence t2, but not globally assigned: t2 <- function(...) { paste(a=a, ...) } t1 <- function(a, b) { local( { a <<- a; b } ) } t1(a="a", b=t2(1, 2)) This is somewhat akin to currying in that I'm nesting the parameter within the function itself. Edit 2: Just to add one more comment to this: I realize that one related approach could be to use "prototype-based programming" such that things would be inherited (which could be achieved with the proto package). But I was hoping for an easier way to simply alter the input parameters before evaluating in R.

    Read the article

  • Parallelize or vectorize all-against-all operation on a large number of matrices?

    - by reve_etrange
    I have approximately 5,000 matrices with the same number of rows and varying numbers of columns (20 x ~200). Each of these matrices must be compared against every other in a dynamic programming algorithm. In this question, I asked how to perform the comparison quickly and was given an excellent answer involving a 2D convolution. Serially, iteratively applying that method, like so list = who('data_matrix_prefix*') H = cell(numel(list),numel(list)); for i=1:numel(list) for j=1:numel(list) if i ~= j eval([ 'H{i,j} = compare(' char(list(i)) ',' char(list(j)) ');']); end end end is fast for small subsets of the data (e.g. for 9 matrices, 9*9 - 9 = 72 calls are made in ~1 s). However, operating on all the data requires almost 25 million calls. I have also tried using deal() to make a cell array composed entirely of the next element in data, so I could use cellfun() in a single loop: % who(), load() and struct2cell() calls place k data matrices in a 1D cell array called data. nextData = cell(k,1); for i=1:k [nextData{:}] = deal(data{i}); H{:,i} = cellfun(@compare,data,nextData,'UniformOutput',false); end Unfortunately, this is not really any faster, because all the time is in compare(). Both of these code examples seem ill-suited for parallelization. I'm having trouble figuring out how to make my variables sliced. compare() is totally vectorized; it uses matrix multiplication and conv2() exclusively (I am under the impression that all of these operations, including the cellfun(), should be multithreaded in MATLAB?). Does anyone see an (explicitly) parallelized solution or better vectorization of the problem?

    Read the article

  • SYN receives RST,ACK very frequently

    - by user1289508
    Hi socket programming experts, I am writing a proxy server on Linux for a SQL database server running on Windows. The proxy is written in C using BSD sockets, and it is working just fine. When I use a database client (written in Java, and running on a Linux box) to fire queries (with a concurrency of 100 or more) directly at the database server, I am not experiencing connection resets. But through my proxy I am experiencing many connection resets. Digging deeper, I came to know that the connection from 'DB client' to 'Proxy' always succeeds, but when the 'Proxy' tries to connect to the DB server the connection fails, due to the SYN packet getting an RST,ACK. That was to give some background. The question is: why does a SYN sometimes receive an RST,ACK? 'DB client(linux)' to 'Server(windows)' ---- works fine 'DB client(linux)' to 'Proxy(Linux)' to 'Server(windows)' ----- problematic I am aware that this can happen in the "connection refused" case, but this is definitely not that one. SYN flooding might be another scenario, but that does not explain the fine behavior when firing at the server directly. I suspect some socket option setting may be required that the client sets before connecting and my proxy does not. Please shed some light on this. Any help (links or pointers) is most appreciated. Additional info: I wrote a C client that makes concurrent connections, taking the concurrency as an argument. Here are my observations: - At a concurrency of 5000 and above, some connects failed with 'connection refused'. - Below 2000, it works fine. But the actual problem is observed even at a concurrency of 100 or more. Note: The problem is time-dependent; sometimes it never appears at all and sometimes it is very frequent, while the DB client (directly to the server) works fine at all times.

    Read the article

  • What is the best Networking implementation for my application?

    - by CaptainPhil
    I am in the planning phases of a project of my own: a single- and multi-player card game. I would like to track statistics for each player, such as world rankings, etc. My problem is I do not know the best approach for the client-server architecture and programming. My original goal was to program everything in C#, as I want to get proficient in that language. My original idea was to have a back-end database and a back-end server running on some sort of hosting on the internet; however, that seems costly for such a small project that may or may not make any money. I have tried looking into cloud services; however, I am unfamiliar with the technology, and I am not sure I can make them suit my needs, especially since most, like Google's cloud, want you to use their coding architecture, from what I understand. Finally, my last problem is that I would like an architecture that can be used with different languages so that I can port it from PC to iPhone, Xbox, etc. So does anyone have any advice on the best architecture and language to do this in? Am I worrying about architecture and back-end costs too much, and should I just concentrate on getting the game running any which way?

    Read the article

  • Finding k elements of a length-n list that sum to less than t in O(n log k) time

    - by tresbot
    This is from Programming Pearls, 2nd ed., Column 2, Problem 8: Given a set of n real numbers, a real number t, and an integer k, how quickly can you determine whether there exists a k-element subset of the set that sums to at most t? One easy solution is to sort and sum the first k elements, which is our best hope of finding such a sum. However, in the solutions section Bentley alludes to a solution that takes n log(k) time, though he gives no hints on how to find it. I've been struggling with this; one thought I had was to go through the list and add all the elements less than t/k (in O(n) time); say there are m1 < k such elements, and they sum to s1 < t. Then we are left needing k - m1 elements, so we can scan through the list again in O(n) time looking for all elements less than (t - s1)/(k - m1). Add those in again, to get s2 and m2, then again if m2 < k, look for all elements less than (t - s2)/(k - m2). So: def kSubsetSumUnderT(inList, k, t): outList = [] s = 0 m = 0 while len(outList) < k: toJoin = [i for i in inList if i < (t - s)/(k - m)] if len(toJoin): if len(toJoin) >= k - m: toJoin.sort() if s + sum(toJoin[0:(k - m)]) < t: return True return False outList = outList + toJoin s += sum(toJoin) m += len(toJoin) else: return False My intuition is that this might be the O(n log(k)) algorithm, but I am having a hard time proving it to myself. Thoughts?
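
    For comparison, one way to reach the stated bound (an editor's sketch, not Bentley's solution and not from the post; the names are made up, and it assumes k >= 1): scan once while keeping only the k smallest elements seen so far in a size-bounded structure, since a qualifying k-subset exists exactly when the k smallest elements sum to at most t.

      import qualified Data.Map.Strict as M

      -- A Map used as a bounded multiset (a "max-heap" of the k smallest
      -- values seen so far); every push/pop costs O(log k), so the whole
      -- scan is O(n log k).
      kSumAtMost :: Int -> Double -> [Double] -> Bool
      kSumAtMost k t = go M.empty 0 0
        where
          go _heap count total []  = count == k && total <= t
          go heap count total (x:rest)
            | count < k   = go (push x heap) (count + 1) (total + x) rest
            | x < biggest = go (push x (popMax heap)) count (total + x - biggest) rest
            | otherwise   = go heap count total rest
            where
              biggest = fst (M.findMax heap)
          push x = M.insertWith (+) x (1 :: Int)
          popMax heap = case M.findMax heap of
            (x, 1) -> M.delete x heap
            (x, c) -> M.insert x (c - 1) heap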

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures to implement a simple open-source object temporal database, and currently I'm very fond of using Persistent Red-Black trees to do it. My main reasons for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. Also it will be easier to implement ACID transactions and even being able to abstract the database to work in parallel on a cluster of some kind. The great thing of this approach is that it makes possible implementing temporal databases almost for free. And this is something quite nice to have, specially for web and for data analysis (e.g. trends). All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously, so a response is always immediate, I don't want to build all application under a false premise, only to realize it isn't really a good way to do it. Here's my line of thought: - Since all writes are done asynchronously, and using a persistent data structure will enable not to invalidate the previous - and currently valid - structure, the write time isn't really a bottleneck. - There are some literature on structures like this that are exactly for disk usage. But it seems to me that these techniques will add more read overhead to achieve faster writes. But I think that exactly the opposite is preferable. Also many of these techniques really do end up with a multi-versioned trees, but they aren't strictly immutable, which is something very crucial to justify the persistent overhead. - I know there still will have to be some kind of locking when appending values to the database, and I also know there should be a good garbage collecting logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). Also a delta compression system could be thought about. - Of all search trees structures, I really think Red-Blacks are the most close to what I need, since they offer the least number of rotations. But there are some possible pitfalls along the way: - Asynchronous writes -could- affect applications that need the data in real time. But I don't think that is the case with web applications, most of the time. Also when real-time data is needed, another solutions could be devised, like a check-in/check-out system of specific data that will need to be worked on a more real-time manner. - Also they could lead to some commit conflicts, though I fail to think of a good example of when it could happen. Also commit conflicts can occur in normal RDBMS, if two threads are working with the same data, right? - The overhead of having an immutable interface like this will grow exponentially and everything is doomed to fail soon, so this all is a bad idea. Any thoughts? Thanks! edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn; manual file transfers (like FTP) are wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of, and I am hoping that someone can suggest a more elegant solution. 1) File Based The system loads a folder structure based on the URL requested. /site.com /site.fakeTLD /lib index.php For example, if the URL is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To set this up I edit my hosts file and add site.fakeTLD to point to my own computer (127.0.0.1/localhost) and then create a vhost in Apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack. So someone loading site.com could set the Host header to site.fakeTLD and then the system would load my development config files instead of production. 2) Config Based The config files contain one section for development and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used. $use = 'production'; //'development'; This leaves the repo open to human error should one of the developers forget to enable the right setting. 3) File System Check Based All the development machines have an extra empty file called "development.txt" or something. Each time the system loads it checks for this file - if found, it knows it is in development mode; if missing, it knows it is in production mode. Since the file is NEVER ADDED to the repo, it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right, and it causes a slight slowdown since all filesystem checks are slow. Is there any way the server can auto-detect whether to use the development or production configs?
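
    A tiny sketch of option 3 in code, just to make the check concrete (an editor's illustration in Haskell rather than the poster's PHP; the marker file name comes from the post, the rest is made up):

      import System.Directory (doesFileExist)

      data Environment = Development | Production deriving (Show, Eq)

      -- Option 3: a marker file that exists only on development machines
      -- and is never committed to the repository.
      detectEnvironment :: IO Environment
      detectEnvironment = do
        marker <- doesFileExist "development.txt"
        return (if marker then Development else Production)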

    Read the article
