Search Results

Search found 1375 results on 55 pages for 'asymptotic complexity'.

Page 10/55 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • ColdFusion MVC frameworks & RESTful Service mismatch?

    - by Henry
    Most CF MVC frameworks use the front controller pattern. Usually a Search Engine Safe (SES) plugin together with URL rewriting is used to construct friendly URLs. However, when it comes to implementing RESTful services, using an MVC framework seems like a layer of complexity added on top of another layer of complexity. How should one tame this beast? Is there any nice and clean approach to supporting RESTful services with ColdFusion? Is there any MVC framework out there that can expose RESTful services easily? Thanks

    Read the article

  • Proactively using 'lines of code' (LOC) metric in your software-development process?

    - by manuel aldana
    Hi there, I find the LOC (lines of code) metric a simple but nice metric for getting an overview of a codebase's complexity (see also the blog entry 'implications of lines-of-code'). I wonder how many of you out there are using this metric as a central part of retrospectives (for removing unused functionality or dead code). I think creating awareness that more lines of code mean more complexity in maintenance and extension is valuable.
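
    As a rough illustration of how cheap the metric is to collect, here is a minimal C++ sketch that counts non-blank lines per file (the helper name countLoc and the whitespace-only heuristic are illustrative, not from the post; tools such as SLOCCount or cloc additionally strip comments and classify files by language):

        #include <fstream>
        #include <iostream>
        #include <string>

        // Count non-blank lines in one source file -- the crudest form of the
        // LOC metric discussed above. Comment stripping and per-author
        // attribution are deliberately left out of this sketch.
        int countLoc(const std::string& path) {
            std::ifstream in(path);
            std::string line;
            int loc = 0;
            while (std::getline(in, line)) {
                // Count only lines containing something besides whitespace.
                if (line.find_first_not_of(" \t\r") != std::string::npos)
                    ++loc;
            }
            return loc;
        }

        int main(int argc, char** argv) {
            for (int i = 1; i < argc; ++i)
                std::cout << argv[i] << ": " << countLoc(argv[i]) << " LOC\n";
            return 0;
        }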

    Read the article

  • Find the element with the most "neighbors" in a sequence

    - by Bao An
    Assume that we have a sequence S with n elements <x1,x2,...,xn>. A pair of elements xi, xj is considered a pair of neighbors if |xi-xj| < d, with d a given distance between two neighbors. How can we find the element that has the most neighbors in the sequence? (A simple way is to sort the sequence and then count the neighbors of each element, but its time complexity, O(n log n), is quite large.) Could you please help me find a better way to reduce the time complexity?
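
    For reference, a minimal C++ sketch of the sorting approach the question describes (the helper name mostNeighbours and the sample data are illustrative, and d > 0 is assumed): sort a copy once, then count each element's neighbours with two binary searches, which keeps the whole thing at O(n log n).

        #include <algorithm>
        #include <iostream>
        #include <vector>

        // Returns the index (in the original sequence) of the element with the
        // most neighbours, where x and y are neighbours if |x - y| < d.
        int mostNeighbours(const std::vector<int>& s, int d) {
            std::vector<int> sorted(s);
            std::sort(sorted.begin(), sorted.end());       // O(n log n)

            int bestIndex = -1;
            long long bestCount = -1;
            for (int i = 0; i < (int)s.size(); ++i) {
                // Count elements strictly inside (s[i]-d, s[i]+d), minus the element itself.
                auto lo = std::upper_bound(sorted.begin(), sorted.end(), s[i] - d);
                auto hi = std::lower_bound(sorted.begin(), sorted.end(), s[i] + d);
                long long count = (hi - lo) - 1;           // two O(log n) searches per element
                if (count > bestCount) {
                    bestCount = count;
                    bestIndex = i;
                }
            }
            return bestIndex;
        }

        int main() {
            std::vector<int> s = {1, 5, 6, 7, 20};
            std::cout << mostNeighbours(s, 3) << "\n";     // prints 1 (element 5 has neighbours 6 and 7)
            return 0;
        }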

    Read the article

  • STL priority queue based on lowest value first

    - by russell
    I have a problem with the STL priority queue. I want the priority queue in increasing order, which is decreasing by default. Is there any way to do this with priority_queue? Also, what is the complexity of building an STL priority queue? If I use quicksort on an array, which takes O(n log n), is its complexity similar to using a priority queue? Please, someone answer. Thanks in advance.
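
    For what it's worth, a minimal C++ sketch of the usual answer: pass std::greater as the comparator so the smallest element comes out first. Building the queue by pushing n elements costs O(n log n) (constructing it from an existing range uses make_heap, which is O(n)), and popping everything costs O(n log n), so extracting all elements in order is in the same asymptotic class as quicksort's average case.

        #include <functional>
        #include <iostream>
        #include <queue>
        #include <vector>

        int main() {
            // std::priority_queue is a max-heap by default; std::greater turns it
            // into a min-heap so the lowest value has the highest priority.
            std::priority_queue<int, std::vector<int>, std::greater<int>> minHeap;

            for (int x : {5, 1, 4, 2, 3})
                minHeap.push(x);                    // each push is O(log n)

            while (!minHeap.empty()) {
                std::cout << minHeap.top() << ' ';  // prints 1 2 3 4 5
                minHeap.pop();                      // each pop is O(log n)
            }
            std::cout << '\n';
            return 0;
        }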

    Read the article

  • sort array of size n

    - by Grv
    If an array of size n has only 3 values, 0, 1 and 2 (each repeated any number of times), what is the best way to sort it? "Best" indicates complexity; consider both space and time complexity.
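
    A sketch of the answer usually given for this: a single-pass, three-way partition (the "Dutch national flag" scheme), which runs in O(n) time with O(1) extra space; a two-pass counting sort achieves the same bounds. A minimal C++ version, for illustration:

        #include <iostream>
        #include <utility>
        #include <vector>

        // Dutch national flag partitioning: one pass, O(n) time, O(1) extra space.
        void sort012(std::vector<int>& a) {
            int low = 0, mid = 0, high = (int)a.size() - 1;
            while (mid <= high) {
                if (a[mid] == 0)      std::swap(a[low++], a[mid++]);  // 0s go to the front
                else if (a[mid] == 1) ++mid;                          // 1s stay in the middle
                else                  std::swap(a[mid], a[high--]);   // 2s go to the back
            }
        }

        int main() {
            std::vector<int> a = {2, 0, 1, 2, 1, 0, 0, 2};
            sort012(a);
            for (int x : a) std::cout << x << ' ';   // prints 0 0 0 1 1 2 2 2
            std::cout << '\n';
            return 0;
        }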

    Read the article

  • In an AVL tree, under what condition is balancing to be done? Proper code in C language

    - by bachchan
    Binary search follows the divide-and-conquer method, whereas linear search does not. The time complexity of binary search is O(log n), while for linear search the time complexity is O(n). That is why binary search is preferred over linear search. But this holds when the list of items is large; in the case of a smaller list, linear search is best (i.e. only the best-case behaviour matters there).
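
    To make the comparison concrete, here is a minimal C++ sketch of the O(log n) binary search described above (a linear search would simply scan up to n elements one by one); the sample data is illustrative:

        #include <iostream>
        #include <vector>

        // Classic iterative binary search over a sorted vector: O(log n) comparisons.
        // Returns the index of 'key', or -1 if it is not present.
        int binarySearch(const std::vector<int>& a, int key) {
            int lo = 0, hi = (int)a.size() - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;         // avoids overflow of lo + hi
                if (a[mid] == key)      return mid;
                else if (a[mid] < key)  lo = mid + 1; // discard the lower half
                else                    hi = mid - 1; // discard the upper half
            }
            return -1;                                // range empty after ~log2(n) halvings
        }

        int main() {
            std::vector<int> a = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
            std::cout << binarySearch(a, 23) << '\n'; // prints 5
            std::cout << binarySearch(a, 7)  << '\n'; // prints -1
            return 0;
        }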

    Read the article

  • Does programming knowledge have a half-life?

    - by Gary Rowe
    In answering this question, I asserted that programming knowledge has a half-life of about 18 months. In physics, we have radioactive decay, which is the process by which a radioactive element transforms into something less energetic. The half-life is a measure of how long it takes for this process to leave only half of the original material. A parallel concept might be that over time our programming knowledge ceases to be the current idiom and eventually becomes irrelevant. Noting that half-life decay is asymptotic (so some knowledge will always be relevant), what are your thoughts on this? Is 18 months a good estimate? Is it even the case? Does it apply to design patterns, but over a longer period? What are the inherent advantages/disadvantages of this half-life? Update: Just found this question, which covers the material fairly well: "Half of everything you know will be obsolete in 18-24 months" = ( True, or False? )

    Read the article

  • Managing Operational Risk of Financial Services Processes – part 2/2

    - by Sanjeev Sharma
    In my earlier blog post, I described the factors that lead to compliance complexity in financial services processes. In this post, I will outline the business implications of the increasing process compliance complexity and the specific role of BPM in addressing the operational risk reduction objectives of regulatory compliance. First, let's look at the business implications of the increasing complexity of process compliance for financial institutions:
    · Increased time and cost of compliance due to duplication of effort in conforming to regulatory requirements, with process changes driven by evolving regulatory mandates, shifting business priorities or internal/external audit requirements
    · Delays in audit reporting due to quality issues in reconciling non-standard process KPIs and integrity concerns arising from the need to rely on multiple data sources for a given process
    Next, let's consider some approaches to managing the operational risk of business processes. Financial institutions seeking to reduce the operational risk of their processes generally have two choices:
    · Rip and replace existing applications with new off-the-shelf applications.
    · Extend the capabilities of existing applications by modeling their data and process interactions, with other applications or user channels, outside of the application boundary using BPM.
    The benefit of the first approach is that compliance with new regulatory requirements is embedded within the boundaries of these applications. However, the pre-built compliance of any packaged or custom-built application should not be mistaken for a one-shot fix for future compliance needs. The reason is that business needs and regulatory requirements inevitably outgrow the end-to-end capabilities of even the most comprehensive packaged or custom-built business application. Thus, processes that originally resided within the application will eventually spill outside the application boundary. It is precisely at such hand-offs between applications, or between overlaying processes, that vulnerabilities to unknown and accidental faults arise, potentially resulting in errors and leading to partial or total failure. The gist of the above argument is that processes which reside outside application boundaries, in other words span multiple applications, constitute a latent operational risk that spans the end-to-end value chain. For instance, distortion of data flowing from an account-opening application to a credit-rating system, if left unchecked, renders compliance with "KYC" policies void even when the "KYC" checklist was enforced at the time of data capture by the account-opening application. Oracle Business Process Management is enabling financial institutions to lower the operational risk of such process "gaps" for Financial Services processes including "Customer On-boarding", "Quote-to-Contract", "Deposit/Loan Origination", "Trade Exceptions", "Interest Claim Tracking", etc.
If you are faced with a similar challenge and need any guidance on the same feel free to drop me a note.

    Read the article

  • What is wrong with this solution? (Perm-Missing-Elem codility test)

    - by user2956907
    I have started playing with Codility and came across this problem:

    A zero-indexed array A consisting of N different integers is given. The array contains integers in the range [1..(N + 1)], which means that exactly one element is missing. Your goal is to find that missing element. Write a function:

        int solution(int A[], int N);

    that, given a zero-indexed array A, returns the value of the missing element. For example, given array A such that:

        A[0] = 2  A[1] = 3  A[2] = 1  A[3] = 5

    the function should return 4, as it is the missing element. Assume that: N is an integer within the range [0..100,000]; the elements of A are all distinct; each element of array A is an integer within the range [1..(N + 1)]. Complexity: expected worst-case time complexity is O(N); expected worst-case space complexity is O(1), beyond input storage (not counting the storage required for input arguments).

    I have submitted the following solution (in PHP):

        function solution($A) {
            $nr = count($A);
            $totalSum = (($nr+1)*($nr+2))/2;
            $arrSum = array_sum($A);
            return ($totalSum-$arrSum);
        }

    which gave me a score of 66 out of 100, because it was failing the test involving large arrays: "large_range range sequence, length = ~100,000" with the result:

        RUNTIME ERROR
        tested program terminated unexpectedly
        stdout: Invalid result type, int expected.

    I tested locally with an array of 100,000 elements, and it worked without any problems. So, what seems to be the problem with my code, and what kind of test cases did Codility use to return "Invalid result type, int expected"?
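
    A likely cause (hedged, since the grader's environment isn't shown): on a 32-bit PHP build the product ($nr+1)*($nr+2) exceeds the integer range for N near 100,000 and silently becomes a float, so the function returns a float and the checker rejects it with "Invalid result type, int expected". Casting the result back to int, or avoiding the large intermediate product altogether, should sidestep this. Purely for illustration (in C++ rather than PHP), a sketch of an overflow-free variant that XORs the range 1..N+1 against the array:

        #include <iostream>
        #include <vector>

        // XOR the full range 1..N+1, then XOR out every value that is present;
        // everything cancels except the missing element. O(N) time, O(1) extra
        // space, and no intermediate value can overflow, unlike the sum formula.
        int solution(const std::vector<int>& A) {
            int result = 0;
            int n = (int)A.size();
            for (int i = 1; i <= n + 1; ++i) result ^= i;
            for (int v : A)                  result ^= v;
            return result;
        }

        int main() {
            std::cout << solution({2, 3, 1, 5}) << '\n';   // prints 4
            return 0;
        }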

    Read the article

  • How to estimate the contribution of an individual to a software project?

    - by Amit Kumar
    I work on a software project and would like to estimate the percentage of the total contribution that I have put into the development of the software. Is there some tool that does this? Such a tool could be useful for appraisals or negotiations, for example. After all, we work for money (yes, not only money, but the point remains). I think there is enough hand-waving around the most important things. The estimation is very subjective (at least to me now), but I do not know of any tool that provides even a subjective estimate. I know of SLOCCount, which spells out the total effort using lines of code, but not on a per-developer basis. My idea of an ideal tool for this purpose would be one that can:
    · measure the complexity of the code (more complex is more effort, but more effort is not necessarily more contribution);
    · measure the decomposability/flexibility of the software (more decomposable is better);
    · account for how much library code is used -- using library code speeds up the development process, increases the associated risk and requires the developer to already know or learn about the library;
    · be intelligent enough to differentiate between "who wrote the code", "who copied the code" and "who indented the code".
    It is difficult to differentiate between the complexity of the implementation and the intrinsic complexity of the problem. Perhaps a comparison can be made with an equivalent open-source counterpart if there is one, or for each submodule separately. If there is no such tool, is there no merit in having one? Or do you believe in "I do work, I do not measure"? It takes time, after all. Perhaps the project manager should do this estimation continuously, say, weekly. Are there any standards? Yes, standardization is difficult because every project has a different goal, but difficult does not mean it is not useful.

    Read the article

  • Practical rules for premature optimization

    - by DougW
    It seems that the phrase "premature optimization" is the buzzword of the day. For some reason, iPhone programmers in particular seem to think of avoiding premature optimization as a proactive goal, rather than the natural result of simply avoiding distraction. The problem is, the term is beginning to be applied more and more to cases that are completely inappropriate. For example, I've seen a growing number of people say not to worry about the complexity of an algorithm, because that's premature optimization (e.g. http://stackoverflow.com/questions/2190275/help-sorting-an-nsarray-across-two-properties-with-nssortdescriptor/2191720#2191720). Frankly, I think this is just laziness, and appalling from the standpoint of disciplined computer science. But it has occurred to me that maybe considering the complexity and performance of algorithms is going the way of assembly loop unrolling and other optimization techniques that are now considered unnecessary. What do you think? Are we at the point now where deciding between an O(n^n) and an O(n!) complexity algorithm is irrelevant? What about O(n) vs O(n*n)? What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it? This is a bit vague, but I'm curious to hear other people's opinions on the topic.

    Read the article

  • Error in finding out the lexemes and number of lines of a text file in C

    - by mekasperasky
    #include<stdio.h> #include<ctype.h> #include<string.h> int main() { int i=0,j,k,lines_count[2]={1,1},operand_count[2]={0},operator_count[2]={0},uoperator_count[2]={0},control_count[2]={0,0},cl[13]={0},variable_dec[2]={0,0},l,p[2]={0},ct,variable_used[2]={0,0},constant_count[2],s[2]={0},t[2]={0}; char a,b[100],c[100]; char d[100]={0}; j=30; FILE *fp1[2],*fp2; fp1[0]=fopen("program1.txt","r"); fp1[1]=fopen("program2.txt","r"); //the source file is opened in read only mode which will passed through the lexer fp2=fopen("ccv1ouput.txt","wb"); //now lets remove all the white spaces and store the rest of the words in a file if(fp1[0]==NULL) { perror("failed to open program1.txt"); //return EXIT_FAILURE; } if(fp1[1]==NULL) { perror("failed to open program2.txt"); //return EXIT_FAILURE; } i=0; k=0; ct=0; while(ct!=2) { while(!feof(fp1[ct])) { a=fgetc(fp1[ct]); if(a!=' '&&a!='\n') { if (!isalpha(a) && !isdigit(a)) { switch(a) { case '+':{ i=0; cl[0]=1; operator_count[ct]=operator_count[ct]+1;break;} case '-':{ cl[1]=1; operator_count[ct]=operator_count[ct]+1;i=0;break;} case '*':{ cl[2]=1; operator_count[ct]=operator_count[ct]+1;i=0;break;} case '/':{ cl[3]=1; operator_count[ct]=operator_count[ct]+1;i=0;break;} case '=':{a=fgetc(fp1[ct]); if (a=='='){cl[4]=1; operator_count[ct]=operator_count[ct]+1; operand_count[ct]=operand_count[ct]+1;} else { cl[5]=1; operator_count[ct]=operator_count[ct]+1; operand_count[ct]=operand_count[ct]+1; ungetc(1,fp1[ct]); } break;} case '%':{ cl[6]=1; operator_count[ct]=operator_count[ct]+1;i=0;break;} case '<':{ a=fgetc(fp1[ct]); if (a=='=') {cl[7]=1; operator_count[ct]=operator_count[ct]+1;} else { cl[8]=1; operator_count[ct]=operator_count[ct]+1; ungetc(1,fp1[ct]); } break; } case '>':{ ; a=fgetc(fp1[ct]); if (a=='='){cl[9]=1; operator_count[ct]=operator_count[ct]+1;} else { cl[10]=1; operator_count[ct]=operator_count[ct]+1; ungetc(1,fp1[ct]); } break;} case '&':{ cl[11]=1; a=fgetc(fp1[ct]); operator_count[ct]=operator_count[ct]+1; operand_count[ct]=operand_count[ct]+1; variable_used[ct]=variable_used[ct]-1; break; } case '|':{ cl[12]=1; a=fgetc(fp1[ct]); operator_count[ct]=operator_count[ct]+1; operand_count[ct]=operand_count[ct]+1; variable_used[ct]=variable_used[ct]-1; break; } case '#':{ while(a!='\n') { a=fgetc(fp1[ct]); } } } } else { d[i]=a; i=i+1; k=k+1; } } else { //printf("%s \n",d); if((strcmp(d,"if")==0)){ memset ( d, 0, 100 ); i=0; control_count[ct]=control_count[ct]+1; } else if(strcmp(d,"then")==0){ i=0;memset ( d, 0, 100 );control_count[ct]=control_count[ct]+1;} else if(strcmp(d,"else")==0){ i=0;memset ( d, 0, 100 );control_count[ct]=control_count[ct]+1;} else if(strcmp(d,"while")==0){ i=0;memset ( d, 0, 100 );control_count[ct]=control_count[ct]+1;} else if(strcmp(d,"int")==0){ while(a != '\n') { a=fgetc(fp1[ct]); if (isalpha(a) ) variable_dec[ct]=variable_dec[ct]+1; } memset ( d, 0, 100 ); lines_count[ct]=lines_count[ct]+1; } else if(strcmp(d,"char")==0){while(a != '\n') { a=fgetc(fp1[ct]); if (isalpha(a) ) variable_dec[ct]=variable_dec[ct]+1; } memset ( d, 0, 100 ); lines_count[ct]=lines_count[ct]+1; } else if(strcmp(d,"float")==0){while(a != '\n') { a=fgetc(fp1[ct]); if (isalpha(a) ) variable_dec[ct]=variable_dec[ct]+1; } memset ( d, 0, 100 ); lines_count[ct]=lines_count[ct]+1; } else if(strcmp(d,"printf")==0){while(a!='\n') a=fgetc(fp1[ct]); memset(d,0,100); } else if(strcmp(d,"scanf")==0){while(a!='\n') a=fgetc(fp1[ct]); memset(d,0,100);} else if (isdigit(d[i-1])) { memset ( d, 0, 100 ); i=0; constant_count[ct]=constant_count[ct]+1; 
operand_count[ct]=operand_count[ct]+1; } else if (isalpha(d[i-1]) && strcmp(d,"int")!=0 && strcmp(d,"char")!=0 && strcmp(d,"float")!=0 && (strcmp(d,"if")!=0) && strcmp(d,"then")!=0 && strcmp(d,"else")!=0 && strcmp(d,"while")!=0 && strcmp(d,"printf")!=0 && strcmp(d,"scanf")!=0) { memset ( d, 0, 100 ); i=0; operand_count[ct]=operand_count[ct]+1; } else if(a=='\n') { lines_count[ct]=lines_count[ct]+1; memset ( d, 0, 100 ); } } } fclose(fp1[ct]); operand_count[ct]=operand_count[ct]-5; variable_used[0]=operand_count[0]-constant_count[0]; variable_used[1]=operand_count[1]-constant_count[1]; for(j=0;j<12;j++) uoperator_count[ct]=uoperator_count[ct]+cl[j]; fprintf(fp2,"\n statistics of program %d",ct+1); fprintf(fp2,"\n the no of lines ---> %d",lines_count[ct]); fprintf(fp2,"\n the no of operands --->%d",operand_count[ct]); fprintf(fp2,"\n the no of operator --->%d",operator_count[ct]); fprintf(fp2,"\n the no of control statments --->%d",control_count[ct]); fprintf(fp2,"\n the no of unique operators --->%d",uoperator_count[ct]); fprintf(fp2,"\n the no of variables declared--->%d",variable_dec[ct]); fprintf(fp2,"\n the no of variables used--->%d",variable_used[ct]); fprintf(fp2,"\n ---------------------------------"); fprintf(fp2,"\n \t \t \t"); ct=ct+1; } t[0]=lines_count[0]+control_count[0]+uoperator_count[0]; t[1]=lines_count[1]+control_count[1]+uoperator_count[1]; s[0]=operator_count[0]+operand_count[0]+variable_dec[0]+variable_used[0]; s[1]=operator_count[1]+operand_count[1]+variable_dec[1]+variable_used[1]; fprintf(fp2,"\n the time complexity of program 1 is %d",t[0]); fprintf(fp2,"\n the time complexity of program 2 is %d",t[1]); fprintf(fp2,"\n the space complexity of program 1 is %d",s[0]); fprintf(fp2,"\n the space complexity of program 2 is %d",s[1]); if((t[0]>t[1]) && (s[0] >s[1])) fprintf(fp2,"\n the efficiency of program 2 is greater than program 1"); else if(t[0]<t[1] && s[0] < s[1]) fprintf(fp2,"\n the efficiency of program 1 is greater than program 2 " ); else if (t[0]+s[0] > t[1]+s[1]) fprintf(fp2,"\n the efficiency of program 1 is greater than program 2"); else if (t[0]+s[0] < t[1]+s[1]) fprintf(fp2,"\n the efficiency of program 2 is greater than program 1"); else if (t[0]+s[0] == t[1]+s[1]) fprintf(fp2,"\n the efficiency of program 1 is equal to that of program 2"); fclose(fp2); return 0; } this code basically compares two c codes and finds out the no. of variables declared , used , no. of control statements , no. of lines and no. of unique operators , and operands , so as to find out the time complexity and space complexity of of the two programs given in the text file program1.txt and program2.txt ... Lets say program1.txt is this #include<stdio.h> #include<math.h> int main () { FILE *fp; fp=fopen("output.txt","w"); long double t,y=0,x=0,e=5,f=1,w=1; for (t=0;t<10;t=t+0.01) { //if (isnan(y) || isinf(y)) //break; fprintf(fp,"%ld\t%ld\n",y,x); y = y + ((e*(1 - (x*x))*y) - x + f*cos(w*0.1))*0.1; x = x + y*0.1; } fclose(fp); return (0); } i havent indented it as its just a text file . But my output is totally faulty . Its not able to find the any of the ouput that i need . Where is the bug in this ? I am not able to figure out as the algorithm looks fine .

    Read the article

  • In Scrum, should a team remove points from (defect) stories that don't result in a code change?

    - by CanIgtAW00tW00t
    My work uses a Scrum-like process to manage projects. I say Scrum-like because we call it Scrum, but our project managers exclude aspects of Scrum that are inconvenient (most notably customer interaction). One of the stories in our current sprint was to correct a defect. After spending almost an entire day working on it, I determined the defect was the result of a permissions issue, so I didn't end up modifying any code. Our Scrum master / project manager decided that no code change equals zero points. I know that Scrum points are supposed to measure size / complexity and not time, but our Scrum master invests a lot of time in preparing graphs and statistical information from past sprints (average velocity, average points completed, etc.). I've always been of the opinion that for statistics to be meaningful in any way, the data must be as accurate as possible. All of our data is fuzzy to begin with because, from time to time, we're encouraged by the Scrum master to "adjust" our size / complexity estimates, both increasing and decreasing them. I'd like to hear other developers' / Scrum team members' thoughts on the merits of statistics based on past sprints, and also whether they think it's appropriate to "adjust" size / complexity estimates in the middle of a sprint, or to remove all points from a story altogether in situations similar to what I've just described.

    Read the article

  • Efficiency of purely functional programming

    - by Sid
    Does anyone know the worst possible asymptotic slowdown that can happen when programming purely functionally as opposed to imperatively (i.e. allowing side effects)? Clarification, from a comment by itowlson: is there any problem for which the best known non-destructive algorithm is asymptotically worse than the best known destructive algorithm, and if so, by how much?

    Read the article

  • Will First() perform the OrderBy()?

    - by Martin
    Is there any difference in (asymptotic) performance between Orders.OrderBy(order => order.Date).First() and Orders.Where(order => order.Date == Orders.Max(x => x.Date)); i.e. will First() perform the OrderBy()? I'm guessing no. MSDN says enumerating the collection via foreach or GetEnumerator does, but the phrasing does not exclude other extension methods.

    Read the article

  • Users can't change password through OWA for Exchange 2010

    - by Rémy Roux
    Here's our problem: users who want to change their password through OWA get the error "The password you entered doesn't meet the minimum security requirements.", even when they do meet the minimum security requirements. With these settings, we get the error:
    Enforced password history: 1 password remembered
    Maximum password age: 185 days
    Minimum password age: 1 day
    Minimum password length: 7 characters
    Password must meet complexity requirements: enabled
    With these test settings, we don't get the error:
    Enforced password history: not defined
    Maximum password age: not defined
    Minimum password age: not defined
    Minimum password length: not defined
    Password must meet complexity requirements: not defined
    People can change their password, but there is no more security! Just changing one parameter of the GPO, for example "Enforced password history", brings back the error. Here's our server configuration:
    Windows Server 2008 R2
    Exchange Server 2010, Version 14.00.0722.000
    If anybody has a clue, it would be very helpful!

    Read the article

  • News about Oracle Documaker Enterprise Edition

    - by Susanne Hale
    Updates come from the Documaker front on two counts: Oracle Documaker Awarded XCelent Award for Best Functionality Celent has published a NEW report entitled Document Automation Solution Vendors for Insurers 2011. In the evaluation, Oracle received the XCelent award for Functionality, which recognizes solutions as the leader in this category of the evaluation. According to Celent, “Insurers need to address issues related to the creation and handling of all sorts of documents. Key issues in document creation are complexity and volume. Today, most document automation vendors provide an array of features to cope with the complexity and volume of documents insurers need to generate.” The report ranks ten solution providers on Technology, Functionality, Market Penetration, and Services. Each profile provides detailed information about the vendor and its document automation system, the professional services and support staff it offers, product features, insurance customers and reference feedback, its technology, implementation process, and pricing.  A summary of the report is available at Celent’s web site. Documaker User Group in Wisconsin Holds First Meeting Oracle Documaker users in Wisconsin made the first Documaker User Group meeting a great success, with representation from eight companies. On April 19, over 25 attendees got together to share information, best practices, experiences and concepts related to Documaker and enterprise document automation; they were also able to share feedback with Documaker product management. One insurer shared how they publish and deliver documents to both internal and external customers as quickly and cost effectively as possible, since providing point of sale documents to the sales force in real time is crucial to obtaining and maintaining the book of business. They outlined best practices that ensure consistent development and testing strategies processes are in place to maximize performance and reliability. And, they gave an overview of the supporting applications they developed to monitor and improve performance as well as monitor and track each transaction. Wisconsin User Group meeting photos are posted on the Oracle Insurance Facebook page http://www.facebook.com/OracleInsurance. The Wisconsin User Group will meet again on October 26. If you and other Documaker customers in your area are interested in setting up a user group in your area, please contact Susanne Hale ([email protected]), (703) 927-0863.

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum, indeed a large number of projects should probably fall somewhere within the middle; and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining it’s place within the SEMS we can see why: Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss). Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that access (and/or Excel) can do with minimal custom development required. Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below). Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the access database and re-writing it as an actual web or windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. 
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also as we hold focus groups, and gather feedback from customers and potential customers there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project changes and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay). Developer Skills Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developers skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. 
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right-end of the SEMS (the exact practices and techniques that translates to is constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right-end of the SEMS, but more importantly be able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be if you could make the conscious decision to move a projects code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Gain Total Control of Systems running Oracle Linux

    - by Anand Akela
    Oracle Linux is the best Linux for enterprise computing needs and Oracle Enterprise Manager enables enterprises to gain total control over systems running Oracle Linux. Linux Management functionality is available as part of Oracle Enterprise Manager 12c and is available to Oracle Linux Basic and Premier Support customers at no cost. The solution provides an integrated and cost-effective solution for complete Linux systems lifecycle management and delivers comprehensive provisioning, patching, monitoring, and administration capabilities via a single, web-based user interface thus significantly reducing the complexity and cost associated with managing Linux operating system environments. Many enterprises are transforming their IT infrastructure from multiple independent datacenters to an Infrastructure-as-a-Service (IaaS) model, in which shared pools of compute and storage are made available to end-users on a self-service basis. While providing significant improvements when implemented properly, this strategy introduces change and complexity at a time when datacenters are already understaffed and overburdened. To aid in this transformation, IT managers need the proper tools to help them provide the array of IT capabilities required throughout the organization without stretching their staff and budget to the limit. Oracle Enterprise Manager 12c offers  the advanced capabilities to enable IT departments and end-users to take advantage of many benefits and cost savings of IaaS. Oracle Enterprise Manager Ops Center 12c addresses this challenge with a converged approach that integrates systems management across the infrastructure stack, helping organizations to streamline operations, increase productivity, and reduce system downtime.  You can see the Linux management functionality in action by watching the latest integrated Linux management demo . Stay Connected with Oracle Enterprise Manager: Twitter |  Face book |  You Tube |  Linked in |  Newsletter

    Read the article

  • Is it possible for a beginner to learn Rails and develop an application in 4 months?

    - by Parth
    I want to develop a web application or a website using Rails. My current knowledge includes: 1. HTML, 2. CSS, 3. C, 4. Java. I am currently going through the 5th chapter of The Well-Grounded Rubyist by David A. Black. I came to know that learning Ruby is beneficial for a good understanding of Rails, so currently I am going through the basics of Ruby and learning Rails in parallel. I want to know whether, in this scenario, it is practically feasible to understand Rails and develop an application/website in it within a time frame of 4 months. I need to develop an application which has at least 3 complex features (functionality of some complexity). Any ideas for good applications for Rails beginners are welcome, but the application should be large, or if it is small it should have some complexity. Time is a constraint for me. I have to develop the application for college work, but Rails is my choice of technology because I want to learn it.

    Read the article

  • Manager keeps changing requirement specification after every demo.

    - by Jungle Hunter
    Background of my working environment: My manager has no background in or understanding of computers or software whatsoever. It is highly likely he hasn't seen code in any form (not even from a physical distance of 10 feet or less) in his life. There is no one who understands the complexity of what I am asked to implement, to the point that if I semi-hardcoded something no one would know. On the Joel Test we score an unbelievable 0.
    The problems: The manager, and at times other "seniors", keep changing the requirement specification. These are changes which, if done with good engineering rather than patchy "fixes", require changes to the underlying design. There is absolutely no one who looks at code (probably because no one knows how to, or even whether it should be done), which means no one will ever be able to: appreciate the complexity of the problem or the elegance of the solution; suggest improvements to the approach; appreciate the quality of the code; point out where the code can be improved. A lot of jargon is used that makes sense grammatically but fails to make any sense any other way. It doesn't feel, behave or work like a software company.
    The question: What should be done? Especially regarding there being no one who would point out improvements in my code.
    Update: To answer HLGEM's (and possibly others') question about what I've done to try and fix it: I offered to set up Redmine and introduce source control to everyone. I said I would recommend a distributed one (git or mercurial) but would also talk about centralized ones and let the team decide. The response was that things are being done and will be done within weeks. I haven't seen that, nor am I aware whether other parts of the company use it.

    Read the article

  • Partner Webcast - Oracle SOA Suite 12c: Connect 4 Cloud, Mobile, IoT with On-premise

    - by Thanos Terentes Printzios
    The pace of new business projects continues to grow from increasing customer self-service to seamlessly connecting all your back office and in-the-field applications. At the same time increased integration complexity may seem inevitable as organizations are suddenly faced with the requirement to support three new integration challenges:  » Cloud Integration - integrate with the cloud, rapidly integrate a growing list of cloud applications with existing applications » Mobile Integration - the urgency to mobile-enable existing applications » IoT Integration - begin development on the latest trend of connecting Internet of Things (IoT) devices to your existing infrastructure. Oracle SOA Suite 12c, the latest version of the industry’s most complete and unified application integration and SOA solution, aims to simplify, accelerate and optimize integrations. Oracle SOA Suite 12c and its associated products, Oracle Managed File Transfer, Oracle Cloud and Application Adapters, B2B and healthcare integration, offer the industry’s most highly integrated platform for solving the increased integration challenges. Oracle SOA Suite 12c is a complete, integrated and best-of-breed platform. It enables next generation integration capabilities through: · A unified toolset for the development of services and composite applications.· A standards-based platform that is service enabled and easily consumable by modern web applications, allowing enterprises to quickly and easily adapt to changes in their business and IT environments.· Greater visibility, controls and analytics to govern how services and processes are deployed, reused and changed across their entire lifecycle. Join us to find out more about the new features of Oracle SOA Suite 12c and how it enables you to reduce time to market for new project integration and to reduce integration cost and complexity. Oracle SOA Suite is the ability to simplify by integrating the disparate requirements of cloud, mobile, and IoT devices with existing on-premise applications. Agenda: Oracle SOA Suite 12c new Features Cloud Integration Mobile Enablement Internet of Things (IoT) Summary - Q&A Delivery Format This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24hours prior to start time may not receive confirmation to attend. Presenter: Heba Fouad – FMW Specialist, Technology Adoption, ECEMEA Partner Business Development Date: Thursday, August 28th, 10pm CEST (8am UTC/11am EEST)Duration: 1 hour Register Here For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Read the article
