Search Results

Search found 5166 results on 207 pages for 'cost benefit'.

Page 24/207

  • Linux servers vs Windows IIS: the sense of using "free" solutions

    - by Rob
    I wonder what the sense is of using "free" open source solutions for serious website applications. I have crawled through and read many server performance tests, and there is one conclusion: IIS seems to be the best choice for high-load applications, I mean cost effective. This especially concerns Nginx Plus and LiteSpeed users, where subscriptions paid for e.g. the load balancer and extra support in fact cost a lot. So where is it "free" or "cheap" in this case? Even assuming a somewhat higher cost for dedicated servers with Windows, Windows still looks cheaper. In its basic setup, Windows 2012 with IIS offers much more than a standard LAMP stack or other Nginx configuration. Am I missing something? I mean only the general case, for someone who has not already started their app. I know perfectly well that the cheapest solution is the one you are already skilled in. Has anyone already done such a real cost calculation for example scenarios?

    Read the article

  • How to speed up this code?

    - by kaushik
    i have very huge code about 600 lines plus. cant post the whole thing here. but a particular code snippet is taking so much time,leading to problems. here i post that part of code please tell me what to do speed up the processing.. please suggest the part which may be the reason and measure to improve them if this small part of code is understandable. using_data={} def join_cost(a , b): global using_data #print a #print b save_a=[] save_b=[] print 1 #for i in range(len(m)): #if str(m[i][0])==str(a): save_a=database_index[a] #for i in range(len(m)): # if str(m[i][0])==str(b): #print 'save_a',save_a #print 'save_b',save_b print 2 save_b=database_index[b] using_data[save_a[0]]=save_a s=str(save_a[1]).replace('phone','text') s=str(s)+'.pm' p=os.path.join("c:/begpython/wavnk/",s) x=open(p , 'r') print 3 for i in range(6): x.readline() k2='a' j=0 o=[] while k2 is not '': k2=x.readline() k2=k2.rstrip('\n') oj=k2.split(' ') o=o+[oj] #print o[j] j=j+1 #print j #print o[2][0] temp=long(1232332) end_time=save_a[4] #print end_time k=(j-1) for i in range(k): diff=float(o[i][0])-float(end_time) if diff<0: diff=diff*(-1) if temp>diff: temp=diff pm_row=i #print pm_row #print temp #print o[pm_row] #pm_row=3 q=[] print 4 l=str(p).replace('.pm','.mcep') z=open(l ,'r') for i in range(pm_row): z.readline() k3=z.readline() k3=k3.rstrip('\n') q=k3.split(' ') #print q print 5 s=str(save_b[1]).replace('phone','text') s=str(s)+'.pm' p=os.path.join("c:/begpython/wavnk/",s) x=open(p , 'r') for i in range(6): x.readline() k2='a' j=0 o=[] while k2 is not '': k2=x.readline() k2=k2.rstrip('\n') oj=k2.split(' ') o=o+[oj] #print o[j] j=j+1 #print j #print o[2][0] temp=long(1232332) strt_time=save_b[3] #print strt_time k=(j-1) for i in range(k): diff=float(o[i][0])-float(strt_time) if diff<0: diff=diff*(-1) if temp>diff: temp=diff pm_row=i #print pm_row #print temp #print o[pm_row] #pm_row=3 w=[] l=str(p).replace('.pm','.mcep') z=open(l ,'r') for i in range(pm_row): z.readline() k3=z.readline() k3=k3.rstrip('\n') w=k3.split(' ') #print w cost=0 for i in range(12): #print q[i] #print w[i] h=float(q[i])-float(w[i]) cost=cost+math.pow(h,2) j_cost=math.sqrt(cost) #print cost return j_cost def target_cost(a , b): a=(b+1)*3 b=(a+1)*2 t_cost=(a+b)*5/2 return t_cost r1='shht:ra_77' r2='grx_18' g=[] nodes=[] nodes=nodes+[[r1]] for i in range(len(y_in_db_format)): g=y_in_db_format[i] #print g #print g[0] g.remove(str(g[0])) nodes=nodes+[g] nodes=nodes+[[r2]] print nodes print "lenght of nodes",len(nodes) lists=[] #lists=lists+[r1] for i in range(len(nodes)): for j in range(len(nodes[i])): lists=lists+[nodes[i][j]] #lists=lists+[r2] print lists distance={} for i in range(len(lists)): if i==0: distance[str(lists[i])]=0 else: distance[str(lists[i])]=long(123231223) #print distance group_dist=[] infinity=long(123232323) for i in range(len(nodes)): distances=[] for j in range(len(nodes[i])): #distances=[] if i==0: distances=distances+[[nodes[i][j], 0]] else: distances=distances+[[nodes[i][j],infinity]] group_dist=group_dist+[distances] #print distances print "group_distances",group_dist #print "check",group_dist[0][0][1] #costs={} #for i in range(len(lists)): #if i==0: # costs[str(lists[i])]=1 #else: # costs[str(lists[i])]=get_selfcost(lists[i]) path=[] for i in range(len(nodes)): mini=[] if i!=(len(nodes)-1): #temp=long(123234324) #Now calculate the cost between the current node and each of its neighbour for k in range(len(nodes[(i+1)])): for j in range(len(nodes[i])): current=nodes[i][j] #print "current_node",current 
j_distance=join_cost( current , nodes[i+1][k]) #t_distance=target_cost( current , nodes[i+1][k]) t_distance=34 #print distance #print "distance between current and neighbours",distance total_distance=(.5*(float(group_dist[i][j][1])+float(j_distance))+.5*(float(t_distance))) #print "total distance between the intial_nodes and current neighbour",total_distance if int(group_dist[i+1][k][1]) > int(total_distance): group_dist[i+1][k][1]=total_distance #print "updated distance",group_dist[i+1][k][1] a=current #print "the neighbour",nodes[i+1][k],"updated the value",a mini=mini+[[str(nodes[i+1][k]),a]] print mini

    Read the article

  • The Wheel Invention - Beneficial For Learning?

    - by Sarfraz
    Hello, Chris Coyier of css-tricks.com has written a good article titled Regarding Wheel Invention. In one paragraph he says: On the “reinventing” side, you benefit from complete control and learning from the process. And on the very next line he says: On the other side, you benefit from speed, reliability, and familiarity. Also often at odds are time spent and cost. I think he is right in both statements. I really like his first one. I do sometimes reinvent the wheel to learn more and gain complete control over what I am building. I wonder why people are so biased against that. Isn't there the benefit of learning and gaining complete control, and probably some other benefits too? I would love to see what you have to say about this.

    Read the article

  • Why repeat database constraints in models?

    - by BWelfel
    In a CakePHP application, for unique constraints that are already enforced in the database, what is the benefit of having the same validation checks in the model? I understand the benefit of having JS validation, but I believe this model validation makes an extra trip to the DB. I am 100% sure that certain validations are made in the DB, so the model validation would simply be redundant. The only benefit I see is the app recognizing the mistake and adjusting the view for the user accordingly (repopulating the fields and showing an error message on the appropriate field, bettering the UX), but this could also be achieved with a constraint naming convention, so that the app could work out what the problem was with the save (is there an existing method to do this now?).

    Read the article

  • Not Playing Nice Together

    - by David Douglass
    One of the things I’ve noticed is that two industry trends are not playing nice together, those trends being multi-core CPUs and massive hard drives.  It’s not a problem if you keep your cores busy with compute intensive work, but for software developers the beauty of multi-core CPUs (along with gobs of RAM and a 64 bit OS) is virtualization.  But when you have only one hard drive (who needs another when it holds 2 TB of data?) you wind up with a serious hard drive bottleneck.  A solid state drive would definitely help, and might even be a complete solution, but the cost is ridiculous.  Two TB of solid state storage will set you back around $7,000!  A spinning 2 TB drive is only $150. I see a couple of solutions for this.  One is the mainframe concept of near and far storage: put the stuff that will be heavily accessed on a solid state drive and the rest on a spinning drive.  Another solution is multiple spinning drives.  Instead of a single 2 TB drive, get four 500 GB drives.  In total, the four 500 GB drives will cost about $100 more than the single 2 TB drive.  You’ll need to be smart about what drive you place things on so that the load is spread evenly.  Another option, for better performance, would be four 10,000 RPM 300 GB drives, but that would cost about $800 more than the single 2 TB drive and would deliver only 1.2 TB of space. All pricing based on Microcenter as of March 14, 2010.
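
    To make the price comparison concrete, here is a rough cost-per-gigabyte calculation using only the March 2010 figures quoted above; the multi-drive totals are derived from the stated price differences, so treat them as approximations (the class and method names are just for illustration):

```java
public class StorageCostComparison {
    // Prices from the post (Microcenter, March 14, 2010); multi-drive totals are approximate.
    public static void main(String[] args) {
        printOption("Single 2 TB spinning drive", 150.0, 2000.0);
        printOption("Four 500 GB drives (~$100 more than the 2 TB drive)", 250.0, 2000.0);
        printOption("Four 10,000 RPM 300 GB drives (~$800 more)", 950.0, 1200.0);
        printOption("2 TB of solid state storage", 7000.0, 2000.0);
    }

    private static void printOption(String name, double totalCost, double capacityGb) {
        System.out.printf("%-52s $%7.2f total, $%.3f per GB%n",
                name, totalCost, totalCost / capacityGb);
    }
}
```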

    Read the article

  • Distributed Development Tools -- (Version control and Project Management)

    - by Macy Abbey
    Hello, I've recently become responsible for choosing which source control and project management software to use for a company that employs me. Currently it uses Jira (project management) and Subversion (version control). I know there are many other options out there -- the ones I know about are all in this article http://mashable.com/2010/07/14/distributed-developer-teams/ . I'm leaning towards recommending they just stay with what they have, as it seems workable and any change would have to be worth the cost of switching to, say, GitHub/Basecamp or some other solution. Some details on the team: It's a distributed development shop. Meetings of the whole team in one room are rare. It's currently a very small development team (three developers). The project management software is used by developers and a product manager or two. What are your experiences with version control and project management web applications? Are there any you would recommend that you think are worth the switching cost of learning new services and implementing the change? Edit: After educating myself further on the options, it appears DVCSs offer powerful benefits that may be worth investing in now, as opposed to later in the company's lifetime when the switching cost is higher: I'm a Subversion geek, why I should consider or not consider Mercurial or Git or any other DVCS?

    Read the article

  • Oracle VM Blade Cluster Reference Configuration

    - by Ferhat Hatay
    Today we are happy to announce the availability of the Oracle VM blade cluster reference configuration for Sun Blade 6000 modular systems.  The new Oracle VM blade cluster reference configuration can help reduce the time to deploy virtual infrastructure by up to 98 percent when compared to multi-vendor configurations. Oracle's virtualization strategy is to simplify the deployment, management, and support of the enterprise stack from application to disk. The Oracle VM blade cluster reference configuration is a single-vendor solution that addresses every layer of the virtualization stack with Oracle hardware and software components. It enables quick and easy deployment of the virtualized infrastructure using components that have been tested together and are all supported together by one vendor — Oracle. All components listed in the reference configuration have been tested together by Oracle, reducing the need for customer testing and the time-consuming and complex effort of designing and deploying a stable configuration. Benefitting from pre-installed Oracle VM Server for x86 software on Oracle’s highly scalable and reliable Sun Blade servers with built-in networking and Oracle’s Sun ZFS Storage Appliance product line, the configuration provides high availability via the blade cluster as well as a documented best practice guide that helps reduce deployment time and cost for customers implementing highly virtualized applications or private cloud Infrastructure as a Service (IaaS) architectures. To further support easier, faster and lower-cost deployments, Oracle Linux, Oracle Solaris and Oracle VM are available for pre-install on select Sun x86 systems, and Oracle VM Templates are available for download for Oracle Applications, Oracle Fusion Middleware, Oracle Database, Oracle Real Application Clusters, and many other Oracle products. Key benefits of the Oracle VM blade cluster reference configuration include: Faster time to value – Begin deploying applications immediately because the optimized software stack is pre-configured for best practices and is ready-to-run on the recommended hardware platforms. Reduced deployment cost and risk – The entire hardware and software stack has been tested and is supported together by Oracle. Elastic scalability – As capacity needs grow, the system can be easily scaled in multiple dimensions with the ability to add compute, storage, and networking resources independently. For more information, see: Oracle white paper: Accelerating deployment of virtualized infrastructures with the Oracle VM blade cluster reference configuration Oracle technical white paper: Best Practices and Guidelines for Deploying the Oracle VM Blade Cluster Reference Configuration

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_Idx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 22, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 21. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365 row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • What are the options for hosting a small Plone site?

    - by Tina Russell
    I’ve developed a portfolio website for myself using Plone 4, and I’m looking for someplace to host it. Most Plone hosting services seem to focus on large, corporate deployments, but I need something that I can afford on a very limited budget and fits a small, single-admin website. My understanding is that my basic options are thus: I can go with a hosting service that specifically provides Plone. I know of WebFaction, but what others exist? Also, I’d have two stipulations for a Plone hosting service: (a) It needs to use Plone 4, for which I’ve developed my site, and (b) it needs to allow me SSH access to a home directory (including the Plone configuration), so that I may use my custom development eggs and such. I could use a VPS hosting service. What are my options here? Again, I need something cheap and scaled to my level. I could use Amazon EC2 or a similar service (please tell me of any) and pay by the tiniest unit of data. I’m a little scared of this because I have no idea how to do a cost-benefit analysis between this and a regular VPS host. The advantage of this approach would be that I only pay for what I use, making it very scalable, but I don’t know how the overall cost would compare to any VPS host under similar circumstances. What factors enter into the cost of Amazon EC2? What can I expect to pay under either option for regular traffic for a new website? Which one is more desirable for when a rush of visitors drive up my bandwidth bill? One last note: I know Plone isn’t common for websites for individuals, but please don’t try to talk me out of it here; that’s a completely different subject. For now, assume I’m sticking with Plone for good. Also, I have seen the Plone hosting services list on Plone.org—it’s twenty pages long, and the first page was nothing but professional Plone consulting services that sometimes offer hosting for business clients. So, that wasn’t much help. Thank you!

    Read the article

  • SQL Azure Pricing

    - by kaleidoscope
    Microsoft’s pricing for SQL Server in the cloud, SQLAzure has been announced: $9.99   per month for 0 – 1GB $99.99 per month up to 10GB. There’s currently a 10GB maximum size cap for SQLAzure. For larger data storage needs, you’ll need to break the databases into smaller sizes. Scaling SQL Azure Applications If you think you’re going to need 100GB in the near term, it probably makes sense to break your application up into multiple separate databases from the get-go (10 x $9.99 = $99.99 anyway) and just make really sure none of the individual databases exceed 10GB. Beep Beep, Back That Database Up The bandwidth costs for SQL Azure are $.15 per GB of outbound bandwidth.  Assuming that you don’t compress the data before you pull it out of the cloud, that means daily backups of a 1GB database will add another $4.50 per month, and a 10GB database will add another $45/month.  Daily backups will cost about half of what your monthly service charges cost. It’s not completely clear from the press release, but if Microsoft follows Amazon’s pricing model, bandwidth between the Microsoft cloud services will not incur a cost.  That would mean it might make sense to spin up an Windows Azure computing application for $.12 per hour, use that application to compress your SQL Azure database, and then send the compressed data off to Azure storage for backup.  That would eliminate the data in/out costs, and minimize the Azure storage costs ($.15/GB).  Database administrators would back up their SQL Azure data to Azure Storage, keep a history of backups there, and restore them to SQL Azure faster when needed. Of course, there’s no native backup support in SQL Azure, and it’s not clear whether Windows Azure will include tools like SQL Server Integration Services. More details can be found at http://www.brentozar.com/archive/2009/07/sql-azure-pricing-10-for-1gb-100-for-10gb/   Anish, S
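
    The backup figures above fall straight out of the quoted $0.15/GB outbound bandwidth rate; a minimal sketch of the arithmetic, assuming a 30-day month and uncompressed daily backups as described in the post:

```java
public class AzureBackupBandwidthCost {
    private static final double OUTBOUND_DOLLARS_PER_GB = 0.15; // rate quoted in the post
    private static final int DAYS_PER_MONTH = 30;               // simplifying assumption

    static double monthlyBackupBandwidthCost(double dbSizeGb) {
        // One full, uncompressed copy of the database leaves the cloud every day.
        return dbSizeGb * OUTBOUND_DOLLARS_PER_GB * DAYS_PER_MONTH;
    }

    public static void main(String[] args) {
        System.out.printf("1 GB database:  $%.2f/month in backup bandwidth%n", monthlyBackupBandwidthCost(1));  // $4.50
        System.out.printf("10 GB database: $%.2f/month in backup bandwidth%n", monthlyBackupBandwidthCost(10)); // $45.00
    }
}
```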

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job.  Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: “Name some benefits of object-oriented development.”  Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: “it helps ease code reuse.”  Of course, this is a gross oversimplification.  Tools only ease reuse, its developers that ultimately can cause code to be reusable or not, regardless of the language or methodology. But it did get me thinking…  we always used to say that as part of our mantra as to why Object-Oriented Programming was so great.  With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible.  And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years.  In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now.  It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it.  Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand all time.  Nearly without fail, all of these pieces of code become obsolete in a matter of months or years. It’s not that I don’t like reuse – it’s just that reuse is hard.  In fact, reuse is DAMN hard.  Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions.  Now don’t get me wrong, I love the idea of reusable code when it makes sense.  These are in the few cases where you are designing something that is inherently reusable.  The problem is, most business-class code is inherently unfit for reuse! Furthermore, the code that is reusable will often fail to be reused if you don’t have the proper framework in place for effective reuse that includes standardized versioning, building, releasing, and documenting the components.  That should always be standard across the board when promoting reusable code.  All of this is hard, and it should only be done when you have code that is truly reusable or you will be exerting a large amount of development effort for very little bang for your buck. But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused.  First, let’s look at an extension method.  There’s many times where I want to kick off a thread to handle a task, then when I want to reign that thread in of course I want to do a Join on it.  But what if I only want to wait a limited amount of time and then Abort?  Well, I could of course write that logic out by hand each time, but it seemed like a great extension method: 1: public static class ThreadExtensions 2: { 3: public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait) 4: { 5: bool isJoined = false; 6:  7: if (thread != null) 8: { 9: isJoined = thread.Join(timeToWait); 10:  11: if (!isJoined) 12: { 13: thread.Abort(); 14: } 15: } 16: return isJoined; 17: } 18: } 19:  When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable.  
Some of them are standard OO principles, and some are kind-of home grown litmus tests: Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility). Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and in itself is very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes. Open-Closed Principle (OCP) – This class is inherently closed to change. Small and Stable Problem Domain – This method only cares about System.Threading.Thread. All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality.  That’s not to say they most use every method, but they shouldn’t be using a method just to get half of its result. Cost of Reuse vs. Cost to Recreate – since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as “ready-to-go” and already unit tested (important!) and available through a standard release cycle (very important!). Okay, all seems good there, now lets look at an entity and DAO.  I don’t know about you all, but there have been times I’ve been in organizations that get the grand idea that all DAOs and entities should be standardized and shared.  While this may work for small or static organizations, it’s near ludicrous for anything large or volatile. 1: namespace Shared.Entities 2: { 3: public class Account 4: { 5: public int Id { get; set; } 6:  7: public string Name { get; set; } 8:  9: public Address HomeAddress { get; set; } 10:  11: public int Age { get; set;} 12:  13: public DateTime LastUsed { get; set; } 14:  15: // etc, etc, etc... 16: } 17: } 18:  19: ... 20:  21: namespace Shared.DataAccess 22: { 23: public class AccountDao 24: { 25: public Account FindAccount(int id) 26: { 27: // dao logic to query and return account 28: } 29:  30: ... 31:  32: } 33: } Now to be fair, I’m not saying there doesn’t exist an organization where some entites may be extremely static and unchanging.  But at best such entities and DAOs will be problematic cases of reuse.  Let’s examine those same tests: Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of the account is which can change over time and may have multiple influences depending on the number of systems an account can cover. Stable Dependencies Principle (SDP) – This method depends on the data model beneath itself which also is largely dependent on the business definition of an account which can be very inherently unstable. Open-Closed Principle (OCP) – This class is not really closed for modification.  Every time the account definition may change, you’d need to modify this class. Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large.  What if you are designing a system that aggregates account information from several sources? All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data?  Should you have to take the hit of looking up all the other data?  On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for?  Neither is really a great solution. 
Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO are usually far higher than the cost to recreate as needed. It’s no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO.  In general, the smaller and more stable an abstraction is, the higher its level of reuse.  When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects.  Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc). As we got deeper and deeper in the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn’t “smell” right.  We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple, “Reuse is hard...”  And that’s when I realized, that while reuse is an awesome goal and we should strive to make code maintainable, often times you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn’t. Think about classes the times you’ve worked in a company where in the design session people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable.  Then think about how quickly that design became obsolete.  Many times I set out to do a project and think, “yes, this is the best design, I can extend it easily!” only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid.  Code, in general, tends to rust and age over time.  As such, writing reusable code can often be difficult and many times ends up being a futile exercise and worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability. So what do I think are reusable components? Generic Utility classes – these tend to be small classes that assist in a task and have no business context whatsoever. Implementation Abstraction Frameworks – home-grown frameworks that try to isolate changes to third party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc). Simplification and Uniformity Frameworks – To some extent this is similar to an abstraction framework, but there may be one chosen provider but a development shop mandate to perform certain complex items in a certain way.  Or, perhaps to simplify and dumb-down a complex task for the average developer (such as implementing a particular development-shop’s method of encryption). And what are less reusable? Application and Business Layers – tend to fluctuate a lot as requirements change and new features are added, so tend to be an unstable dependency.  May be reused across applications but also very volatile. 
Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable. So what’s the big lesson?  Reuse is hard.  In fact it’s damn hard.  And much of the time I’m not convinced we should focus too hard on it. If you’re designing a utility or framework, then by all means design it for reuse.  But you must also really set down a good versioning, release, and documentation process to maximize your chances.  For anything else, design it to be maintainable and extendable, but don’t waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.

    Read the article

  • Accenture Foundation Platform for Oracle (AFPO) – Your pre-build & tested middleware platform

    - by JuergenKress
    The Accenture Foundation Platform for Oracle (AFPO) is a pre-built, tested reference application, common services framework and development accelerator for Oracle’s Fusion Middleware 11g product suite that can help to reduce development time and cost by up to 30 percent. AFPO is a unique accelerator that includes documentation, day one deliverables and quick start virtual machine images, along with access to a skilled team of resources, to reduce risk and cost while improving project quality. It can be delivered all at once or in stages, on-site, hosted, or as a cloud solution. Accenture recently released AFPO v5 for use with their clients. Accenture added significant updates in v5 including Day 1 images & documentation for Webcenter & ADF Mobile that are integrated with 30 other Oracle Middleware products that signifigantly reduced the services aspect to standing these products up. AFPO v5 also features rapid configuration and implementation capabilities for SOA/BPM integrated with Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Business Intelligence, Oracle Identity Management and Oracle ADF Mobile.  AFPO v5 also delivers a starter kit for Oracle SOA Suite which builds upon the integration methodology, leading practices and extended tooling contained within the Oracle Foundation Pack. The combination of the AFPO starter kit and Foundation Pack jump-start and streamline Oracle SOA Suite implementation initiatives, helping to reduce the risk of deploying new technologies and making architectural decisions, so clients can ultimately reduce cost, risk and the time needed for an implementation.  You'll find more information at: Accenture's website:  www.accenture.com/afpo YouTube AFPO Telestration:  http://www.youtube.com/watch?v=_x429DcHEJs Press Release Brochure Contacts: [email protected] Patrick J Sullivan (Accenture – Global Oracle Technology Lead), [email protected] SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: AFPO,Accenture,middleware platform,oracle middleware,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Partner Webcast – Oracle Exadata X3 Database In-Memory Machine - Next-Generation Technologies Update - 20 Dec 2012

    - by Thanos
    Oracle’s next-generation database machine, Oracle Exadata X3, combines massive memory and low-cost disks to deliver even faster performance and greater storage capabilities at the lowest cost, making it the ideal database platform for the varied and unpredictable workloads of cloud computing. Oracle Exadata is available in multiple configurations including a low-cost eighth-rack configuration, so you can start small and grow at your own pace. We have also introduced new migration services designed to streamline implementation thereby saving you time and money. If your IT department is expected to deliver business value—or even drive business growth—then you’ll want to join us for a live Webcast discussing how the new Oracle Exadata X3 can help you transform data management.  Agenda: Oracle Exadata Evolution Oracle Exadata X3 Database In-Memory Machine Hardware Update Software Update Exadata Unique Next Generation Technologies Getting on board Oracle Exadata Q&A Delivery Format This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24hours prior to start time may not receive confirmation to attend. Thursday, December 20th, 10am CET (9am GMT) Duration: 1 hour Register Now! For any questions please contact us at [email protected] Visit our ISV Migration Center blog Or Follow us @oracleimc to learn more on Oracle Technologies, upcoming partner webcasts and events. Existing content available YouTube - SlideShare - Oracle Mix.

    Read the article

  • How to go from Mainframe to the Cloud?

    - by Ruma Sanyal
    Running applications on IBM mainframes is expensive, complex, and hinders IT responsiveness. The high costs from frequent forced upgrades, long integration cycles, and complex operations infrastructures can only be alleviated by migrating away from a mainframe environment.  Further, data centers are planning for cloud enablement pinned on principles of operating at significantly lower cost, very low upfront investment, operating on commodity hardware and open, standards based systems, and decoupling of hardware, infrastructure software, and business applications. These operating principles are in direct contrast with the principles of operating businesses on mainframes. By utilizing technologies such as Oracle Tuxedo, Oracle Coherence, and Oracle GoldenGate, businesses are able to quickly and safely migrate away from their IBM mainframe environments. Further, running Oracle Tuxedo and Oracle Coherence on Oracle Exalogic, the first and only integrated cloud machine on the market, Oracle customers can not only run their applications on standards-based open systems, significantly cutting their time to market and costs, they can start their journey of cloud enabling their mainframe applications. Oracle Tuxedo re-hosting tools and techniques can provide automated migration coverage for more than 95% of mainframe application assets, at a fraction of the cost; Oracle GoldenGate can migrate data from mainframe systems to open systems, eliminating risks associated with the data migration; Oracle Coherence hosts transactional data in memory, providing mainframe-like data performance and linear scalability; and running Oracle software on top of Oracle Exalogic empowers customers to start their journey of cloud enabling their mainframe applications. Join us in a series of events across the globe where you'll learn how you can build your enterprise cloud and add tremendous value to your business. In addition, meet with Oracle experts and your peers to discuss best practices and see how successful organizations are lowering total cost of ownership and achieving rapid returns by moving to the cloud. Register for the Oracle Fusion Middleware Forum event in a city near you!

    Read the article

  • Advice on developing a social network [on hold]

    - by Siraj Mansour
    I am doing research on assembling a team, using the right tools, and the cost to develop a highly responsive social network that is capable of dealing with a lot of users. It is similar to the Facebook concept, but using only the basics package for now: profile, friends, posts, updates, media upload/download, streaming, chat and inbox messaging are all in the package. We certainly do not expect it to be as popular as Facebook or handle the same number of users and requests, but in its own game it has to be a monster, and expandable later on. Neglecting the hosting and servers part, I am looking for technical advice and opinions on what kind of team I need: how many developers? What expertise? What are the right tools, languages, frameworks and environments? Any random ideas about the infrastructure? Quick thoughts on the development process? Please use references, if you have any, to support your ideas. And a rough estimate of the development cost, neglecting the cost of servers? I know my question is too broad, but my knowledge is very limited and I need detailed help. For any help you can offer I thank you in advance.

    Read the article

  • Accenture Foundation Platform for Oracle (AFPO)

    - by Lionel Dubreuil
    The Accenture Foundation Platform for Oracle (AFPO) is a pre-built, tested reference application, common services framework and development accelerator for Oracle’s Fusion Middleware 11g product suite that can help to reduce development time and cost by up to 30 percent. AFPO is a unique accelerator that includes documentation, day one deliverables and quick start virtual machine images, along with access to a skilled team of resources, to reduce risk and cost while improving project quality. It can be delivered all at once or in stages, on-site, hosted, or as a cloud solution. Accenture recently released AFPO v5 for use with their clients. Accenture added significant updates in v5 including Day 1 images & documentation for Webcenter & ADF Mobile that are integrated with 30 other Oracle Middleware products that signifigantly reduced the services aspect to standing these products up. AFPO v5 also features rapid configuration and implementation capabilities for SOA/BPM integrated with Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Business Intelligence, Oracle Identity Management and Oracle ADF Mobile.  AFPO v5 also delivers a starter kit for Oracle SOA Suite which builds upon the integration methodology, leading practices and extended tooling contained within the Oracle Foundation Pack. The combination of the AFPO starter kit and Foundation Pack jump-start and streamline Oracle SOA Suite implementation initiatives, helping to reduce the risk of deploying new technologies and making architectural decisions, so clients can ultimately reduce cost, risk and the time needed for an implementation.  You'll find more information at: Accenture's website:  www.accenture.com/afpo YouTube AFPO Telestration:  http://www.youtube.com/watch?v=_x429DcHEJs Press Release Brochure  Contacts: [email protected] Patrick J Sullivan (Accenture – Global Oracle Technology Lead), [email protected]

    Read the article

  • Platinum Services – The Highest Level of Service in the Industry

    - by cwarticki
    Oracle Platinum Services provides remote fault monitoring with faster response times and patch deployment services to qualified Oracle Premier Support customers – at no additional cost. We know that disruptions in IT systems availability can seriously impact business performance. That’s why we engineer our hardware and software to work together. Oracle engineered systems are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. And now, customers who choose the extreme performance of Oracle engineered systems have the power to access the added support they need – Oracle Platinum Services – to further optimize for high availability at no additional cost.  In addition to receiving the complete support essentials with Oracle Premier Support, qualifying Oracle Platinum Services customers also receive: •     24/7 Oracle remote fault monitoring •    Industry-leading response and restore times o   5-Minute Fault Notification o   15-Minute Restoration or Escalation to Development o   30-Minute Joint Debugging with Development •    Update and patch deployment Visit us online to learn more about how to get Oracle Platinum Services

    Read the article

  • Hosting and scaling of a facebook application on cloud?

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but are still not sure where to host it economically, and with a good provision to scale in case the app goes viral. Some details about the app: i) It would be HTML based like a website, using Django as a framework. ii) 100K is the number of expected pageviews in a day, if the app goes viral. iii) The users will not generate any media content, only some database data will be generated by them. It would be great if someone with more experience can guide on the following points: A) Hosting on Google App Engine or Amazon EC2 or some other cloud like RackSpace: Preferable points found in App Engine were ease of deployment, cost effectiveness and easy scaling. For EC2: Full hold of the virtual machine, Amazon NoSQL and RDBMS database services in case we decide to use them. B) Does backend technology affect monthly cost? e.g. would the CPU and memory usage difference of Django over, for example, a PHP framework like CodeIgnitor really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku, which provides additional services over Amazon EC2, prove to be better than raw cloud management? It is not that we are trying for premature scaling, we just want to have a good start so that we are ready to handle unpredicted growth and scale.
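
    To put the 100K pageviews/day figure into capacity terms, here is a back-of-envelope request-rate estimate; the peak-to-average factor is purely an assumption for illustration, not something stated in the question:

```java
public class TrafficEstimate {
    public static void main(String[] args) {
        double pageviewsPerDay = 100_000;               // figure from the question
        double secondsPerDay = 24 * 60 * 60;
        double averageRps = pageviewsPerDay / secondsPerDay;

        double assumedPeakFactor = 10.0;                // assumption: traffic clusters around peak hours
        double peakRps = averageRps * assumedPeakFactor;

        System.out.printf("Average: %.2f requests/second%n", averageRps);   // ~1.16 req/s
        System.out.printf("Assumed peak: %.1f requests/second%n", peakRps); // ~11.6 req/s
    }
}
```
    Even at an assumed tenfold peak the load is modest; the harder question raised above is what happens if the app actually goes viral.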

    Read the article

  • International Pricing of Software [closed]

    - by arachnode.net
    I operate a small company that charges $99 for a piece of software. I'd like to know what would be a fair price for non-US customers. Today I sold a license to a party in South Africa. He told me he had been watching the project for two years before a business justification could be made for the purchase, as SA's currency is nine times weaker than the US dollar. I found this resource detailing how much a Big Mac costs in various countries: http://howmuchatyourplace.com/how_much_does/Big%20Mac_cost.php I realize that the cost of producing a Big Mac varies from locale to locale, as does the demand for one. I am aware that many software companies charge prices in local currencies that equate to the price in US dollars. I am aware that my costs remain fixed, and obviously I cannot discount the rate at which my time costs me. I'm OK with earning less per sale, as I would rather get my software onto the desktops of those that need it than have them try to write it themselves. Support is light and I can usually point a user to an existing blog or forum post. Being a resident of Hawaii, I am aware that certain goods and services cost more here. Power is up to six times as much per kWh as it is in, say, Seattle, and wages are approximately 60% of what they are for my profession (programmer). I'd like to offer my software at a price that would be fair for everyone around the globe. If a currency is 2 foreign units to 1 US dollar, and goods and services cost 50% more and pay for an equivalent job is 50% of what it is here, should I charge, say, $50 instead of $99? Is there a resource which would allow me to input a price in US dollars and adjust it for a list of international locations?
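
    Taking the question's own example numbers (a 2:1 exchange rate, goods costing 50% more locally, and local pay at roughly half of US levels), one very rough sketch of a wage-scaled price looks like this; the formula is an assumption for illustration, not an established pricing rule:

```java
public class RegionalPriceSketch {
    // Scale the US price by local pay levels, mirroring the $99 -> ~$50 example in the question.
    static double adjustedPrice(double usPrice, double localWageRatio) {
        return usPrice * localWageRatio;
    }

    public static void main(String[] args) {
        double usPrice = 99.0;
        System.out.printf("US price:                 $%.2f%n", usPrice);
        System.out.printf("Pay at 50%% of US levels:  $%.2f%n", adjustedPrice(usPrice, 0.50)); // ~$50
        System.out.printf("Pay at 60%% of US levels:  $%.2f%n", adjustedPrice(usPrice, 0.60)); // e.g. the Hawaii wage figure above
    }
}
```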

    Read the article

  • PPT Leveraging Azure for Performance Testing

    - by Tarun Arora
    I have recently presented a session on “How you can leverage Azure for Performance Testing” your application.  It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry I have realized that the biggest barrier in Load Testing & Performance Testing adoption is the high infrastructure and administration cost that comes with this phase of testing. In the session I tried to demonstrate how you can use the power of Windows Azure to effectively abstract the administration cost of infrastructure management and lower the total cost of Load & Performance Testing. You can view the session presentation here, http://www.slideshare.net/aroratarun/leveraging-azure-for-performance-testing  I’ll be adding a video on this subject shortly… If you have any feedback or further suggestions to add to the goodness of this solution please get in touch.

    Read the article

  • Hosting and scaling a Facebook application in the cloud? [migrated]

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but still not sure of where to host it economically, and with a good provision to scale in case the app gets viral. Some details about the app: Would be HTML based like a website,using django as a framework. 100K is the number of expected pageviews in a day, if the app is viral. The users will not generate any media content, only some database data will be generated by them. It would be great if someone with more experience can guide on following points: A) Hosting on Google app engine or Amazon EC2 or some other cloud like RackSpace : Preferable points found in AppEngine were ease of deployment, cost effectiveness and easy scaling. For EC2: Full hold of the virtual machine,Amazon NoSQL and RDMBS database services in case we decide to use them. B) Does backend technology affect monthly cost? eg. would CPU and memory usage difference of Django over , for example , PHP framework like CodeIgnitor really make remarkable difference in running costs. (Here is the article that triggered this thought process : http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku , which provides additional services over Amazon EC2, prove to be better than raw cloud management? It is not that we are trying for premature scaling, we just want to have a good start so that we are ready to handle unpredicted growth and scale.

    Read the article

  • Add & Show data in Java ArrayList [closed]

    - by Kaidul Islam Sazal
    I have a class inside a main class: static class Graph{ static int u, v, cost; } I have instantiated an ArrayList of the class: static List<Graph> g = new ArrayList<Graph>(); And I insert several values into the ArrayList like this: Scanner input = new Scanner(System.in); for (int i = 0; i < edge_no; i++) { Graph e = new Graph(); e.u = input.nextInt(); e.v = input.nextInt(); e.cost = input.nextInt(); g.add(e); } And I print it like this: for (int i = 0; i < edge_no; i++) { System.out.println(g.get(i).u + " " + g.get(i).v + " " + g.get(i).cost); } But the problem is that when I print it, only the last value is shown all the time. It seems that all the previous values are overwritten with it. Input: 1 2 5 1 3 8 2 3 9 Output: 2 3 9 2 3 9 2 3 9 The expected output is just like the input, but I can't fix the problem as I am a novice in Java.
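
    The symptom described, every element printing the last values read, is what static fields produce in Java: u, v and cost belong to the Graph class itself, so all elements of the list share a single set of values. A minimal sketch with instance (non-static) fields, assuming three edges as in the sample input, behaves as expected:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class Edges {
    // Instance fields: each Graph object keeps its own u, v and cost.
    static class Graph {
        int u, v, cost;
    }

    public static void main(String[] args) {
        List<Graph> g = new ArrayList<>();
        Scanner input = new Scanner(System.in);
        int edgeCount = 3; // e.g. the three edges from the sample input

        for (int i = 0; i < edgeCount; i++) {
            Graph e = new Graph();
            e.u = input.nextInt();
            e.v = input.nextInt();
            e.cost = input.nextInt();
            g.add(e);
        }
        for (Graph e : g) {
            System.out.println(e.u + " " + e.v + " " + e.cost);
        }
    }
}
```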

    Read the article

  • How to educate business managers on the complexity of adding new features? [duplicate]

    - by Derrick Miller
    We maintain a web application for a client who demands that new features be added at a breakneck pace. We've done our best to keep up with their demands, and as a result the code base has grown exponentially. There are now so many modules, subsystems, controllers, class libraries, unit tests, APIs, etc. that it's starting to take more time to work through all of the complexity each time we add a new feature. We've also had to pull additional people in on the project to take over things like QA and staging, so the lead developers can focus on developing. Unfortunately, the client is becoming angry that the cost for each new feature is going up. They seem to expect that we can add new features ad infinitum and the cost of each feature will remain linear. I have repeatedly tried to explain to them that it doesn't work that way - that the code base expands in a fractal manner as all these features are added. I've explained that the best way to keep the cost down is to be judicious about which new features are really needed. But they either don't understand, or they think I'm bullshitting them. They just sort of roll their eyes and get angry. They're all completely non-technical, and have no idea what goes into writing software. Is there a way that I can explain this using business language that might help them understand better? Are there any visualizations out there that illustrate the growth of a code base over time? Any other suggestions on dealing with this client?
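
    One simple, hypothetical way to illustrate the point for non-technical stakeholders (my illustration, not the author's model): if each new feature can potentially interact with every existing feature, the number of interactions to reason about grows roughly quadratically, so the cost of the n-th feature cannot stay flat.

```java
public class FeatureInteractionGrowth {
    public static void main(String[] args) {
        // Pairwise interactions among n features grow as n * (n - 1) / 2.
        for (int features = 5; features <= 40; features += 5) {
            long interactions = (long) features * (features - 1) / 2;
            System.out.printf("%2d features -> %4d potential pairwise interactions%n",
                    features, interactions);
        }
    }
}
```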

    Read the article

  • Hosting and scaling a Facebook application in the cloud? [closed]

    - by DhruvPathak
    Possible Duplicate: How to find web hosting that meets my requirements? We would be building a Facebook application in Django (Python), but still not sure of where to host it economically, and with a good provision to scale in case the app gets viral. Some details about the app: Would be HTML based like a website,using django as a framework. 100K is the number of expected pageviews in a day, if the app is viral. The users will not generate any media content, only some database data will be generated by them. It would be great if someone with more experience can guide on following points: A) Hosting on Google app engine or Amazon EC2 or some other cloud like RackSpace : Preferable points found in AppEngine were ease of deployment, cost effectiveness and easy scaling. For EC2: Full hold of the virtual machine,Amazon NoSQL and RDMBS database services in case we decide to use them. B) Does backend technology affect monthly cost? eg. would CPU and memory usage difference of Django over , for example , PHP framework like CodeIgnitor really make remarkable difference in running costs. (Here is the article that triggered this thought process : http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku , which provides additional services over Amazon EC2, prove to be better than raw cloud management? It is not that we are trying for premature scaling, we just want to have a good start so that we are ready to handle unpredicted growth and scale.

    Read the article
