Search Results

Search found 1337 results on 54 pages for 'capacity estimation'.

Page 7/54

  • Sizing Switches for Storage and Production

    - by Untalented
    A couple of questions:

    1. Should you always completely separate the storage network switches from the production switches, or are VLANs fine for segmenting this traffic? Is there a golden rule here?
    2. How do you properly size a switch for your environment based on the specifications the manufacturer provides (throughput, forwarding throughput, stacking throughput, max MAC addresses)? If you have two switch options and one supports a maximum of 8,000 MAC addresses vs. another with 16,000, what does this really mean to me? How do I make sure one vs. the other is sized properly for my environment?
    3. Besides VLAN and jumbo frame support, are there any other "must haves" for a virtual environment's production or storage networks?

    There is a wealth of knowledge on sizing SANs and such, but this seems equally important and it's quite challenging to find as much information. -- Just to add some tidbits of information about the environment: this setup refers to the data centers supporting two different locations with about 100 users in total between them. The storage traffic will be iSCSI, with 3 ESXi hosts and one SAN housing about 2.7 TB of data. Since there is currently no storage network in place (no SAN), I'm having a hard time with #2: determining what backplane throughput and switch specifications will be sufficient.
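
    A back-of-the-envelope sketch for #2, assuming 1 GbE ports throughout and purely illustrative counts (2 iSCSI NICs per host, 2 SAN controller ports): a non-blocking switch needs backplane capacity of at least the sum of all port speeds, doubled for full duplex.

        // Rough non-blocking backplane sizing. All counts and speeds
        // below are assumptions, not facts from the question.
        const portGroups = [
          { label: "3 ESXi hosts x 2 iSCSI NICs", count: 3 * 2, gbps: 1 },
          { label: "SAN controller ports", count: 2, gbps: 1 },
        ];

        const totalPortGbps = portGroups.reduce((sum, g) => sum + g.count * g.gbps, 0);
        const requiredBackplaneGbps = 2 * totalPortGbps; // double for full duplex

        console.log(`Need >= ${requiredBackplaneGbps} Gbps switching capacity`); // 16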

    Read the article

  • How to shrink-to-fit an std::vector in a memory-efficient way?

    - by dehmann
    I would like to 'shrink-to-fit' an std::vector, to reduce its capacity to its exact size, so that the additional memory is freed. The standard trick seems to be the one described here:

        // Construct a temporary holding just v's elements, then swap, so
        // v adopts the tight allocation and the old buffer is freed.
        template <typename T, class Allocator>
        void shrink_capacity(std::vector<T, Allocator>& v) {
            std::vector<T, Allocator>(v.begin(), v.end()).swap(v);
        }

    The whole point of shrink-to-fit is to save memory, but doesn't this method first create a deep copy and then swap the instances? So at some point -- when the copy is constructed -- the memory usage is doubled? If that is the case, is there a more memory-friendly method of shrink-to-fit? (In my case the vector is really big and I cannot afford to have both the original plus a copy of it in memory at any time.)

    Read the article

  • CPU running at full capacity when booted to DOS?

    - by Kevin H
    Does the CPU run at 100% or near full capacity when the computer is booted into MS-DOS? Will the CPU temperature get higher even though we are not running any program in DOS mode? In Windows we can see the CPU usage as a percentage of utilization in Task Manager. From what I have heard, the CPU runs at near 100% capacity under DOS or on the BIOS main screen. Is this caused by a lack of CPU optimization in DOS?

    Read the article

  • Android sells twice as much as iOS, but its developers reportedly earn four times less, according to a Flurry estimate

    Android sells twice as much as iOS, but its developers reportedly earn four times less, according to Flurry. A few days before Google I/O and in the middle of Apple's WWDC, the study is controversial and should be taken with a grain of salt. According to Flurry, an analytics firm and publisher of mobile marketing solutions, iOS enjoys a developer community that is far more loyal and more motivated than Android's. Comparing the two OSes, the study claims that out of every 10 applications developed, 7 target Apple devices versus only 3 for Android.

    Read the article

  • Evidence-Based Scheduling - are estimations only as accurate as the work plan they're based on?

    - by Assaf Lavie
    I've been using FogBugz's Evidence Based Scheduling (for the uninitiated, Joel explains) for a while now and there's an inherent problem I can't seem to work around. The system is good at telling me the probability that a given project will be delivered at some date, given the detailed list of tasks that comprise the project. However, it does not take into account the fact that during development additional tasks always pop up.

    Now, there's the garbage-can approach of creating a generic task/scheduled-item for "last minute hacks" or "integration tasks", or what have you, but that clearly goes against the idea of aggregating the estimates of many small cases. It's often the case that during the development stage of a project you realize there's a whole area your planning didn't cover, because, well, that's the nature of developing stuff that hasn't been developed before. So now your ~3 month project may very well turn into a 6 month project, but not because your estimations were off (you could be the best estimator in the world, for those tasks that comprised your initial work plan); rather because you ended up adding a whole bunch of new tasks that weren't there to begin with.

    EBS doesn't help you with that. It could, theoretically (I guess). It could, perhaps, measure the amount of work you add to a project over time and take that into consideration when estimating the time remaining on a given project. Just a thought.

    In other words, EBS works on a task basis, but not on a project/release basis - and the latter is what's important. It's what your boss typically cares about - the delivery date, not the time it takes to finish each task along the way, and not the time it would have taken if your planning had been perfect.

    So the question is (yes, there's a question here, don't close it): what's your methodology when it comes to using EBS in FogBugz, and how do you solve the problem above, which seems to be a main cause of schedule delays and mispredictions?

    Edit: Some more thoughts after reading a few answers. If it comes down to having to choose which delivery date you're comfortable presenting to your higher-ups by squinting at the delivery-probability graph and choosing 80%, or 95%, or 60% (based on what, exactly?), then we've resorted to plain old buffering/factoring of our estimates. In which case, couldn't we have skipped the meticulous case-by-case hour-sized estimation effort? By forcing ourselves to break down tasks that take more than a day into smaller chunks of work, haven't we just deluded ourselves into thinking our planning is as tight and thorough as it could be?

    People may be consistently bad estimators who do not even learn from their past mistakes. In that respect, having an EBS system is certainly better than not having one. But what can we do about the fact that we're not that good at planning either? I'm not sure it's a problem that can be solved by a similar system. Our estimates are wrong because of tendencies to be overly optimistic/pessimistic about certain tasks, and because of neglecting to account for systematic delays (e.g. sick days, a major bug crisis) - and usually not because we lack knowledge about the work that needs to be done. Our planning, on the other hand, is often incomplete because we simply don't have enough knowledge at this early stage, and I don't see how an EBS-like system could fill that gap. So we're back to methodology: we need to find a way to accommodate bad or incomplete work plans that's better than voodoo multiplication.
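
    For readers unfamiliar with the mechanics under discussion, a minimal sketch of the Monte Carlo idea behind EBS, with illustrative names throughout: each task estimate is divided by a velocity (estimate/actual) sampled from the developer's history, and summing over tasks many times yields the delivery-probability distribution mentioned above. Note how scope growth is invisible here - new tasks simply aren't in the input list.

        // Sketch of EBS-style Monte Carlo: sample a historical velocity
        // (estimate / actual) per task, divide the estimate by it, sum,
        // and repeat to build a distribution of plausible totals.
        function simulateTotals(
          estimatesHours: number[],
          historicalVelocities: number[],
          runs: number = 1000,
        ): number[] {
          const totals: number[] = [];
          for (let r = 0; r < runs; r++) {
            let total = 0;
            for (const est of estimatesHours) {
              const i = Math.floor(Math.random() * historicalVelocities.length);
              total += est / historicalVelocities[i];
            }
            totals.push(total);
          }
          return totals.sort((a, b) => a - b);
        }

        // An "80% confident" total is then totals[Math.floor(0.8 * totals.length)].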

    Read the article

  • Do eSATA HDD docking stations have a capacity limit?

    - by Michael Kjörling
    I'm looking at perhaps buying an eSATA docking station to be able to easily plug in and unplug hard disk drives, particularly but not necessarily only for backup purposes. Note: this is not a hardware shopping recommendation question. Please don't vote to close it as such.

    Looking at different models, I find for example this page detailing the Deltaco SI-7908SUS, which specifically states "storage capacity: 1.5 TB" as well as "pictured hard disk not included, only for illustration". A customer review specifically mentions that it does not work with 3 TB drives, although it does not go into any detail such as OS, drive model, etc. From a brief glance, the vendor's web site does not appear to say either way. Then there is the quite similar Deltaco SI-7908B3, which boasts on the box "all 2.5" and 3.5" HDD/SSD compatible".

    My question is: why would what basically amounts to a SATA/eSATA adapter have any say in what storage capacities are supported? Does it? Assuming the OS supports the full capacity of the drive, why should introducing another (not even a different, really) connector change anything?

    Bonus question: might it make a difference if the docking station exposes multiple interfaces (for example, the SI-7908SUS exposes both USB 2.0 and eSATA)? (I still think it shouldn't, but it'd be nice to have it confirmed.)

    Read the article

  • Caveats of select/poll vs. epoll reactors in Twisted

    - by David
    Everything I've read and experienced (Tornado-based apps) leads me to believe that epoll is a natural replacement for select- and poll-based networking, especially with Twisted. Which makes me paranoid; it's pretty rare for a better technique or methodology not to come with a price. Reading a couple dozen comparisons between epoll and the alternatives shows that epoll is clearly the champion for speed and scalability, specifically that it scales in a linear fashion, which is fantastic. That said, what about processor and memory utilization - is epoll still the champ?

    Read the article

  • Evaluating software estimates: sure signs of unrealistic figures?

    - by Totophil
    Whilst answering “Dealing with awful estimates” posted by Ash, I shared a few tips that I learned and personally use to spot weak estimates. But I am certain there must be many more! What heuristics should one use in the scenario where one needs to make a quick evaluation of a software project estimate that has been compiled by a third party (a colleague, a business partner or an external company)? What are the obvious and not so obvious signs of weak software estimates that can be spotted without much detailed knowledge of the task at hand?

    Read the article

  • There are lots of useful answers about estimating the cost of a project. Are there any recommendations for tools to keep track of estimations?

    - by Chrys
    Let me clarify this a bit more. I started giving estimations for projects/tasks. I write everything down in a spreadsheet. I know that soon this spreadsheet won't help much (searching, recommending similar project estimations, etc.). Do you have any recommendations for tools I can use to keep track of all these estimations? Is there a tool out there that, for example, will give me related project estimations, the way Stack Overflow gives me related questions when I type one?

    Read the article

  • How to make freelance clients understand the costs of developing and maintaining mature products?

    - by John
    I have a freelance web application project where the client requests new features every two weeks or so. I am unable to anticipate the requirements of upcoming features, so when the client requests a new feature, one of several things may happen:

    1. I implement the feature with ease because it is compatible with the existing platform.
    2. I implement the feature with difficulty because I have to rewrite a significant portion of the platform's foundation.
    3. The client withdraws the request because it costs too much to implement against the existing platform.

    At the beginning of the project, for about six months, all feature requests fell under category 1 because the system was small and agile. But for the past six months, most feature implementation has fallen under category 2. The system is mature, forcing me to refactor and test every time I want to add new modules. Additionally, I find myself breaking things that used to work and fixing them (I don't get paid for this).

    The client is starting to express frustration at the time and cost for me to implement new features. To them, many of the feature requests are of the same scale as the features they requested six months ago. For example, the client would ask, "If it took you 1 week to build a ticketing system last year, why does it take you 1 month to build an event registration system today? An event registration system is much simpler than a ticketing system. It should only take you 1 week!" Because of this scenario, I fear feature requests will soon land in category 3. In fact, I'm already eating a lot of the cost myself because I volunteer many hours to support the project.

    The client is often shocked when I tell him honestly the time it takes to do something. The client always compares my estimates against the early months of the project. I don't think they're prepared for what it really costs to develop, maintain and support a mature web application. When working on salary for a full-time company, managers were more receptive to my estimates and even encouraged me to pad my numbers to prepare for the unexpected. Is there a way to condition my clients to think the same way? Can anyone offer advice on how I can continue to work on this web project without eating too much of the cost myself?

    Additional info: I've only been freelancing full time for 1 year. I don't yet have the high-end clients, but I'm slowly getting there. I'm getting better quality clients as time goes by.

    Read the article

  • Calculating usage of localStorage space

    - by WmasterJ
    I am creating an app using the Bespin editor and HTML5's localStorage. It stores all files locally and helps with grammar, using JSLint and some other parsers for CSS and HTML to aid the user. I want to calculate how much of the localStorage limit has been used and how much there actually is. Is this possible today? I was thinking of simply calculating the number of bytes stored, but then again I'm not sure what else there is that I can't measure myself.
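
    A minimal sketch of the measure-it-yourself approach, assuming strings are stored as UTF-16 (two bytes per character); the function name is illustrative. The limit itself generally isn't directly readable - a common workaround is writing test data until the browser throws a quota-exceeded exception.

        // Rough localStorage usage estimate: sum key and value lengths,
        // then assume two bytes per character (UTF-16 storage).
        function estimateLocalStorageBytes(): number {
          let chars = 0;
          for (let i = 0; i < localStorage.length; i++) {
            const key = localStorage.key(i);
            if (key !== null) {
              chars += key.length + (localStorage.getItem(key)?.length ?? 0);
            }
          }
          return chars * 2; // bytes
        }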

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application write to sub-directories such as ./a/b/c/abc.ext rather than just ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
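
    A sketch of one common fan-out scheme, with assumed parameters: take a couple of hex characters of the file name's hash per directory level, so each level fans out into 256 sub-directories. Two levels give 65,536 leaf directories, i.e. roughly 46 files per directory for three million files.

        import { createHash } from "crypto";
        import * as path from "path";

        // Map "abc.ext" to e.g. "a9/3f/abc.ext": two hex characters of
        // the name's hash per level, 256-way fan-out at each level.
        function nestedPath(fileName: string, levels: number = 2): string {
          const digest = createHash("md5").update(fileName).digest("hex");
          const parts: string[] = [];
          for (let i = 0; i < levels; i++) {
            parts.push(digest.slice(i * 2, i * 2 + 2));
          }
          return path.join(...parts, fileName);
        }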

    Read the article

  • Project Management and Scheduling Techniques

    - by Alec Smart
    Hello. I know this is probably the nth project management question, but I am trying to move my team onto a more robust project management technique, and I am wondering what the best technique to use is. I know that probably no technique is best, but which are the most popular ones? Planning poker? Evidence Based Scheduling? COCOMO? Agile? Scrum? XP? Which one should I use?

    Also, suppose I use EBS: wouldn't it be too time consuming to break down every single activity into fine-grained tasks? E.g. "Design" is a goal; what kind of fine-grained tasks will I have under it? Is this a waste of time, i.e. dividing work into so many micro parts? Usually when I give my programmers a task, I follow up every week, and they complete quite a lot of the task assigned to them (the tasks are very broad, e.g. module X). Is EBS worth it? Are there any white papers on it so that I can implement it on my own (instead of using FogBugz)? Most of my projects are web-based projects. Thank you for your time.

    Read the article

  • Orbital equations, and power required to run them

    - by Adam Davis
    Due to a discussion on the SO IRC today, I'm curious about orbital mechanics, specifically:

    1. The equations needed to solve orbital problems
    2. The computing power required to solve complex problems

    The question in particular is calculating when the Earth will plow into the Sun (or vice versa, depending on the frame of reference). I suspect that all the gravitational pulls within our solar system may need to be calculated, which makes me wonder what type of computer cluster is required, or whether this can be done on a single box. I don't have the experience to do a back-of-the-napkin test here, but perhaps you do? Also, much thanks to Gortok for the original inspiration (see comments).
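
    For scale, the dominant equation is Newton's law of gravitation, and a naive O(n^2) integration over the handful of major solar-system bodies is easily done on a single box; clusters only become relevant for very large n or very long, high-accuracy integrations. A minimal sketch (semi-implicit Euler with illustrative types; a serious ephemeris would use a higher-order symplectic integrator):

        const G = 6.674e-11; // gravitational constant, m^3 kg^-1 s^-2

        interface Body {
          mass: number;                   // kg
          pos: [number, number, number];  // m
          vel: [number, number, number];  // m/s
        }

        // One naive O(n^2) step: a_i = sum_j G * m_j * (r_j - r_i) / |r_j - r_i|^3
        function step(bodies: Body[], dt: number): void {
          const acc = bodies.map(() => [0, 0, 0]);
          for (let i = 0; i < bodies.length; i++) {
            for (let j = 0; j < bodies.length; j++) {
              if (i === j) continue;
              const d = [0, 1, 2].map((k) => bodies[j].pos[k] - bodies[i].pos[k]);
              const r = Math.hypot(d[0], d[1], d[2]);
              const s = (G * bodies[j].mass) / (r * r * r);
              for (let k = 0; k < 3; k++) acc[i][k] += s * d[k];
            }
          }
          // Semi-implicit Euler: update velocities first, then positions.
          for (let i = 0; i < bodies.length; i++) {
            for (let k = 0; k < 3; k++) {
              bodies[i].vel[k] += acc[i][k] * dt;
              bodies[i].pos[k] += bodies[i].vel[k] * dt;
            }
          }
        }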

    Read the article

  • Is there any project estimator tool to give estimates for web design/development work?

    - by jitendra
    Is there any project estimator tool to give estimates for web design/development work? I don't have to calculate price; I just want to calculate estimated time, for things like (just for example):

    - Page creation (layout in XHTML)
    - CSS creation
    - Content creation (Word to HTML, including images on some pages)
    - Bulk PDF upload
    - PHP script for a form
    - Testing all pages

    I need something like:

        Item                 Quantity   Time per task (min.)   Estimated total
        PDF upload           30         2                      60 min
        Pages with images    30         15                     450 min (7.5 h)

    Is there any simple calculator powered by jQuery, where we can add and remove custom items to calculate time? Or any other free online/offline tool?
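
    Not a tool recommendation, but the underlying arithmetic is small enough to sketch; all names and rows here are illustrative:

        // Each row is a task type with a quantity and per-unit minutes;
        // the total comes out in hours.
        interface EstimateRow {
          item: string;
          quantity: number;
          minutesEach: number;
        }

        function totalHours(rows: EstimateRow[]): number {
          const minutes = rows.reduce((sum, r) => sum + r.quantity * r.minutesEach, 0);
          return minutes / 60;
        }

        const rows: EstimateRow[] = [
          { item: "PDF upload", quantity: 30, minutesEach: 2 },
          { item: "Pages with images", quantity: 30, minutesEach: 15 },
        ];
        console.log(totalHours(rows).toFixed(1)); // "8.5"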

    Read the article

  • Project Implementation estimates with TDD

    - by panzerschreck
    Are there any guidelines for quoting estimates on projects/tasks involving TDD? For example, compared to normal development of a task taking 1 day to complete, how much longer should a TDD-driven task take: 50% more time, or 70% more time? Are there any statistics available, assuming the developer is well versed in the language and the test framework?

    Read the article

  • Correct way to textually report the remaining time on a long running process?

    - by Ryan
    So you have a long-running process, perhaps with a progress bar, and you want a text estimate of the remaining time, e.g. "5 minutes remaining" or "30 seconds remaining". If you don't actually want to report clock time (due to accuracy, resolution or update-rate issues) but want to stick to the text summary, what is the correct paradigm? Is "one minute remaining" displayed from 0 to 60 seconds, or from 1:00 to 1:59? Say there's 1:35 left - is that "2 minutes remaining" or "1 minute remaining"? Do you just pare it down to "A few minutes left" when you're under 3 minutes? What is the preferred (least user-frustrating) method?
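
    One defensible convention, sketched with illustrative thresholds: round up so the estimate never under-promises (1:35 reports as "2 minutes remaining"), and switch to vaguer wording at the extremes.

        // Render remaining seconds as a friendly string, rounding up.
        function remainingText(seconds: number): string {
          if (seconds <= 0) return "Almost done...";
          if (seconds < 60) return `${Math.ceil(seconds)} seconds remaining`;
          const minutes = Math.ceil(seconds / 60); // 1:35 -> 2 minutes
          if (minutes < 3) return `${minutes} minute${minutes > 1 ? "s" : ""} remaining`;
          if (minutes < 60) return `About ${minutes} minutes remaining`;
          const hours = Math.ceil(minutes / 60);
          return `About ${hours} hour${hours > 1 ? "s" : ""} remaining`;
        }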

    Read the article

  • Can I use HP Recovery Discs for a different hard drive capacity and make?

    - by Fasih Khatib
    About two years ago I created HP Recovery Discs (3 of them). Now my hard drive has crashed and the new one is still a week from delivery. I was reading up on how to reinstall the genuine OS using the Recovery Discs, as I was not given any Windows 7 installation discs. I did my bit of research after getting answers from the community on what these discs do, and found out on other sites that people experience issues when recovering their OS from the discs, especially when they change the make or capacity of the hard drive. Unfortunately I had to change the make, as the hard drive that came built in has gone out of production. This question is just part of my checklist to avoid problems when recovering the OS.

    I have: HP DV4-2126TX (available only in India, I guess)
    I had: Seagate Momentus 320 GB
    I ordered: Western Digital Scorpio Black 500 GB
    OS: Windows 7 Home Premium 32-bit

    Is there any possibility of encountering problems due to the changed capacity and make? I only want my genuine OS and drivers – not my data. I was told that Disc 1 contained the OS and drivers, and the rest of the discs contained data. I couldn't verify that.

    Read the article
