Search Results

Search found 480 results on 20 pages for 'estimate'.


  • Why does my screen dim on a desktop installation of Windows 7?

    - by Robert Cartaino
    Periodically, while using my Windows 7 Pro desktop installation, the screen suddenly dims. The brightness drops to roughly 75% of normal (an estimate). It's as if I were in power-saving mode on a laptop running on batteries, but this is a full desktop installation. I know it is not a hardware glitch or a monitor adjustment issue because the Windows cursor stays bright white while everything else goes dim. The Control Panel Power Options have not been changed: they are set to "Balanced [active]" and I have tried restoring the default settings. Flipping through the power and display settings, everything looks "normal," and no screen-saver or power-off-after setting appears to be responsible. Rebooting the system resets everything to full brightness, but I can't find a way to restore it from within Windows or to keep it from happening in the first place. Suggestions?

    Read the article

  • Business Intelligence (BI) Defined

    CIO.com defines Business Intelligence (BI) as a generic reference to a collection of applications that are used to analyze raw organizational data. Typical BI activities include data mining, online analytical processing, querying and reporting. They further explain that the primary reason a company would use BI is to make its data more accessible. The more accessible data is to the users, the faster they can identify ways to reduce business costs, discover new business opportunities, and react quickly to adjust prices based on current supply and demand. One area in which a hospital system could use BI derived from a data warehouse is the Emergency Room (ER), specifically in deciding how many doctors and nurses to have working during a full moon at each ER location. To determine this, BI needs to identify a trend in the number of patients seen on a full moon; furthermore, it needs to determine the optimal number of staff working during a full moon by finding the employee-to-patient ratio needed to meet standard patient wait times while remaining the most cost effective for the hospital. This allows the hospital system to estimate the number of potential patients it will have on the next full moon and adjust staff schedules accordingly, ensuring that patient care is not affected in any way by an influx (or lack of influx) of patients during this time, while also ensuring that only the minimum number of employees is working so the hospital still makes a profit. Another area where a hospital system could use BI concerns the orders placed with drug and medical supply companies. BI could identify trends in the prescriptions given to patients; this information could be used for ordering new supplies and forecasting the amount of medicine each hospital needs to keep on site at a given time. For example, a hospital might want to stock up on the materials needed to set bones in a cast before the summer, because its BI indicates that a majority of broken bones occur during the summer, when children are out of school and have more free time.

    Read the article

  • First project is a big one: how much should we charge?

    - by confuzzled
    Two of my cousins and I started a freelance computer repair/web design business just to make some money on the side during college, and we received our first major web design project about three weeks ago. Now, we've created websites before, but mostly for family businesses, and we have never really charged money; most of those websites were static and didn't really require a CMS. This project, however, was a big one (for us anyway). We created a news site with several categories, we created the banners, and we created a classifieds page (not a web app, just something static that they control). Several links, a few graphical assets, a CSS drop-down menu, an RSS feed from a different news site, weather, and all the normal stuff you would find on a regular news site. On top of that we put in all the usual Joomla stuff (search, Jcomments, Jslide pictures, JCE, etc.). Then we uploaded the first 10 articles they gave us, and we are going to train them how to use Joomla. Now, at first we decided on 700 dollars. I assumed they just wanted a simple blog-like website where they could upload articles, but then we had a meeting, and they asked for a lot more. Note: we did not code the template from scratch, but customized the Gantry framework to fit their needs. We did code quite a bit, however. I estimate that we put in about 50-60 hours in total. I'm wondering if 700 dollars is a bit low; this price is definitely not set in stone. Please keep in mind that this is our first project and we are newbies, so please be kind. Thank you!
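
    A quick sanity check, purely illustrative: multiply the hours actually spent by a plausible hourly rate and compare the result with the flat fee. The rate below is a made-up placeholder, not a figure from the question.

        # Rough back-of-the-envelope check (illustrative only; the rate is a placeholder).
        hours_spent = (50, 60)        # the poster's own estimate of hours worked
        assumed_rate = 20             # hypothetical rate in dollars per hour

        low, high = (h * assumed_rate for h in hours_spent)
        print(f"At ${assumed_rate}/h, 50-60 hours is worth ${low}-${high}, "
              f"versus the quoted flat fee of $700.")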

    Read the article

  • Dealing with bad/incomplete/unclear specifications?

    - by eagerMoose
    I'm working on a project where our dev team gets the specifications from the business side of the company. Both business management and IT management require estimates and deadline projections, as they should. The good thing is that the estimates are mostly made by the actual developers who will build the required features. The bad thing is that the specifications are usually either too simple (you're left with a lot of question marks over your head because a lot of information seems to be missing) or too complex (to the point that you can't even visualize where everything would "fit" in the app). More often than not, the business part of the spec is either incomplete or unaware of what can and can't be done (given the previously implemented business logic). The dev team is given about a day per new spec to produce an estimate, and we do try to clear up uncertainties, usually by meeting with whoever wrote the spec. Most of the time it turns out that the spec writers haven't really thought everything through, and it's usually only when we start designing and developing that we end up in trouble, as a lot of the spec seems to have holes. How do you deal with this? Are you generous with estimates in advance?

    Read the article

  • How long do servers usually last?

    - by AX1
    I'm trying to estimate ROI for a new server and how long it will most likely be in service. Are there any average figures for how long you can expect a server in 24/7 use to last? In other words, is it likely to find a 10-year-old server from 2010 in today's racks? How about servers from 2005? I'd love to get an idea of how long a server will most likely be in use. By the way, this is meant irrespective of growing or changing application needs; I'm talking only about the hardware.

    Read the article

  • Planning milestones and time

    - by Ignas
    I was hired by a marketing company a year ago, initially for link building / SEO stuff, but I'm actually a web developer and took the job out of sheer desperation to have one (I'm still quite young and just finished my 2nd year of university). From the third day my boss realised that I'm not into that stuff at all, and since he had an idea for a web-based app we started to plan it. I estimated that it shouldn't take me longer than two months to build, but as I was making it we soon realised that we wanted to add more and more stuff to make it even better. So the development on my own lasted about 4 months, but then it became an enterprise-size app and we hired another programmer to work alongside me. The guy was awesome at what he did, but because I was assigned to be programmer/project manager I had to set up milestones with deadlines, and we missed most of them, because most of the time it was too much work and my lack of experience kept me setting really optimistic deadlines. We still kept adding features and changed the architecture of the application twice. My boss is a great guy and he gets that when we add features it expands the time frame in which things should be done, so he wasn't angry at me or the other guy. But I was feeling bad (I still am) that I suck at planning. I gained loads of experience on the programming side, but I still lack the management/planning skills, which makes me go nuts. So over the last year I have dedicated probably about 8 months of work to this app (obviously my studies affected it) and we're launching as a closed beta this month. So my question is: how do I get better at planning/managing a project, and how do you estimate the time needed? What do you take into consideration when setting goals? I'm working alone again because the other guy moved away from the city, but I'm sure we'll be hiring to help me maintain it, so I need to get better at this. Any hints, pointers or anything on the topic are appreciated.

    Read the article

  • Performance improvements in vBulletin by integrating the plugins

    - by reggie
    I have amassed quite a lot of plugins and code that hooks into vBulletin's plugin system. There are good uses for this system, but since I am now locked in to the VB 3 branch and it is no longer updated, I wonder what kind of performance improvement I would see if I integrated all the plugins directly into the vBulletin files and turned the plugin system off completely. My site has about 1.5 million posts, about 100,000 threads, and 100,000 members (of which 10,000 are "active"). I estimate I have about 200 plugins from different products in the plugin manager. Has anybody ever tried this move and could share their experience?

    Read the article

  • Interpolation between two 3D points?

    - by meds
    I'm working with some splines which define the path a character follows (you can see a gameplay video here to get a better understanding of what's going on: http://www.youtube.com/watch?v=BndobjOiZ6g). Basically the character's 'forward' look direction is set to the 'forward' direction of the spline, and when players tilt their phone left or right the character is strafed along its 'right' coordinate. The issue with this is (rather obviously) performance: interpolating over a spline to find the nearest position and tangent relative to the player is an incredibly costly operation. To get by this I cache a finite number of positions in what I call 'SplineDetails'; the class is as follows:

        public class SplineDetails
        {
            public SplineDetails()
            {
                Forward = Vector3.forward;
                Position = Vector3.one * float.MaxValue;
                Alpha = -1;
            }

            public float Alpha;      // [0,1] measured along the length of the spline, where 0 is the initial point and 1 is the end point
            public Vector3 Position; // the point of the spline at this alpha
            public Vector3 Forward;  // the forward tangent of the spline at this alpha
        }

    I populate this with, say, 30 coordinates, and I can give a rough estimate of a coordinate and 'forward' based on a position passed in. It's not as accurate, but it's much faster. Now I'd like to make the system work better by estimating positions and 'forward' directions by interpolating between two of the cached points, but I'm stuck trying to figure out the logic. My first problem is: how can I determine which two points the object is between? Given that each point can be placed at a different interval along the spline, two points in front of or behind the object may actually be closer to it. The other problem is figuring out the proportion between the two points it lies between, i.e. if there is a point a at coordinate (0,0,0) and a point b at coordinate (1,0,0), and the object is at position (0.5,0,0), then the result should be '0.5' (as it is an equal distance from point a and point b). That's a simple example, but what if the object is at coordinate (0.5,3,0), for example?
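
    One way to answer both sub-questions, sketched below in Python with plain tuples rather than the SplineDetails class above (an assumption about how the cache is laid out): walk consecutive cached points, project the object's position onto each segment, and keep the segment whose clamped projection is closest. The clamped projection parameter t is the proportion between the two points.

        # Minimal sketch: find the two cached spline points an object lies between
        # and the proportion t along that segment, by projecting onto each segment.
        def closest_segment(points, pos):
            """points: ordered list of (x, y, z) cached spline positions.
            Returns (index, t, squared_distance) for the best segment [index, index+1]."""
            def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
            def dot(a, b): return sum(ai * bi for ai, bi in zip(a, b))

            best = None
            for i in range(len(points) - 1):
                a, b = points[i], points[i + 1]
                ab = sub(b, a)
                ab_len2 = dot(ab, ab)
                # Project pos onto the segment and clamp t to [0, 1].
                t = 0.0 if ab_len2 == 0 else max(0.0, min(1.0, dot(sub(pos, a), ab) / ab_len2))
                closest = tuple(ai + t * abi for ai, abi in zip(a, ab))
                d2 = dot(sub(pos, closest), sub(pos, closest))
                if best is None or d2 < best[2]:
                    best = (i, t, d2)
            return best

        # The example from the question: a=(0,0,0), b=(1,0,0), object at (0.5, 3, 0)
        # still projects to t = 0.5 on that segment; the y offset only adds distance.
        print(closest_segment([(0, 0, 0), (1, 0, 0)], (0.5, 3, 0)))

    The same projection math translates directly to Vector3.Dot in C#, and the per-frame cost stays low if you only test the few segments near the last known index instead of the whole cache.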

    Read the article

  • Cinema 4D for game development tutorials, anyone?

    - by George Profenza
    Hi, I've started to learn Cinema 4D. I've noticed it's really easy to use for motion graphics, but I want to use it for modeling for games/realtime 3D engines. Before, I used 3ds Max, and it was easy to estimate how a model would look and behave in a 3D engine. The two main things I did were displaying polygon triangles and displaying the polygon count. I've found the Total Polygons tick in the HUD settings in Cinema 4D, but I can't find any display mode that will show triangles. Is there a way to display triangle faces rather than quads in Cinema 4D? If so, how? There is a Triangulate function, but I'd rather not Triangulate/Untriangulate all the time, especially since converting back and forth between the two doesn't always produce the same result.

    Read the article

  • Algorithms or patterns for a linked question and answer cost calculator

    - by kmc
    I've been asked to build an online calculator in PHP (and the Laravel framework). It will take the answers to a series of questions to estimate the cost of a home extension. For example, a couple of questions might be:

    - What is the lie of your property? Flat, slightly inclined, heavily inclined (these descriptive options could map to specific values in the underlying calculator, like 0 degrees, 5 degrees, 10 degrees).
    - What is your current flooring system? Wooden, or concrete?

    These answers would then affect the results of other questions. Once the size of the extension has been input, the lie of the land will affect how much the site works will cost and how much rubbish collection will cost. The second question will impact the cost of the extension's flooring, as stumping and laying floorboards is a different cost from laying foundations and a concrete slab. It will also influence which heating and cooling systems are available in the calculator. So it's VERY interlinked: the answer to any question can influence the options of other questions, and the end result. I'm having trouble figuring out an approach that will allow new options and questions to be plugged in at a later stage without things being too coupled. The Observer pattern, or Laravel's events, may be handy, but currently the sheer breadth of the calculator has me struggling to gather my thoughts and see a sensible implementation. Are there any patterns or OO approaches that may help? Thanks!
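
    One loosely coupled shape for this, sketched in Python rather than PHP/Laravel for brevity (class names, question keys and prices are all hypothetical): model each answered question as a small rule object that contributes to a shared estimate, so adding a question means adding a rule rather than editing the calculator. This is close to the Strategy idea, with Observer/events as an optional way to wire the rules up.

        # Sketch of a rule-based cost calculator: each rule contributes independent
        # line items; the calculator only knows the rule interface, not the rules.
        class Estimate:
            def __init__(self):
                self.line_items = {}               # label -> cost
            def add(self, label, cost):
                self.line_items[label] = self.line_items.get(label, 0) + cost
            @property
            def total(self):
                return sum(self.line_items.values())

        class SiteWorksRule:
            def __init__(self, incline_degrees):
                self.incline = incline_degrees
            def apply(self, estimate, answers):
                base = answers["extension_m2"] * 40          # assumed $/m2 of site works
                estimate.add("site works", base * (1 + self.incline / 20))

        class FlooringRule:
            COSTS = {"wooden": 95, "concrete": 120}          # assumed $/m2
            def apply(self, estimate, answers):
                estimate.add("flooring", answers["extension_m2"] * self.COSTS[answers["floor"]])

        def calculate(answers, rules):
            estimate = Estimate()
            for rule in rules:
                rule.apply(estimate, answers)
            return estimate

        answers = {"extension_m2": 30, "floor": "concrete"}
        result = calculate(answers, [SiteWorksRule(incline_degrees=5), FlooringRule()])
        print(result.line_items, result.total)

    In PHP/Laravel terms, each rule could be a class resolved from a config array or fired via an event listener, which keeps new questions pluggable.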

    Read the article

  • Restrictive routing best practices for Google App Engine with python?

    - by Aleksandr Makov
    Say I have a simple structure:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/profile', 'pages.profile'),
            (r'/dashboard', 'pages.dash'),
        ], debug=True)

    Basically all pages require authentication except for the login. If a visitor tries to reach a restricted page and isn't authorized (or lacks privileges), he gets redirected to the login view. The question is about the routing design. Should I check the auth and ACL privileges in each of the modules (pages.profile and pages.dash from the example above), or just pass all requests through a single routing mechanism:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/.+', 'router')
        ], debug=True)

    I'm still quite new to GAE, but my app requires authentication as well as ACL. I'm aware that there's a login directive at the server config level, but I don't know how it works or how I can tie it in with my ACL logic, and what's worse, I cannot estimate the time needed to get it running. Besides, it seems to provide only two user groups: admin and user. In any case, this is the configuration I use:

        handlers:
        - url: /favicon.ico
          static_files: static/favicon.ico
          upload: static/favicon.ico

        - url: /static/*
          static_dir: static

        - url: .*
          script: main.app
          secure: always

    Or am I missing something here, and ACL can be set in the config file? Thanks.
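
    Checking auth in each handler doesn't have to mean a catch-all router or duplicated code: a small decorator (or a shared base RequestHandler) keeps the routing table flat. A sketch using the standard App Engine users API with webapp2; the handler class is a stand-in for pages.dash and is an assumption, not the poster's code.

        # Sketch: per-handler auth via a decorator instead of a catch-all 'router'.
        import webapp2
        from google.appengine.api import users

        def login_required(handler_method):
            def wrapper(self, *args, **kwargs):
                user = users.get_current_user()
                if not user:
                    # Not signed in: bounce to the login page, then back here.
                    return self.redirect(users.create_login_url(self.request.uri))
                return handler_method(self, *args, **kwargs)
            return wrapper

        class DashboardPage(webapp2.RequestHandler):   # stands in for pages.dash
            @login_required
            def get(self):
                self.response.write('dashboard')

        app = webapp2.WSGIApplication([
            (r'/dashboard', DashboardPage),
        ], debug=True)

    For finer-grained ACL, the same decorator could take a required role and check it against your own user records; the app.yaml login: required / login: admin options only distinguish the two built-in groups mentioned in the question.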

    Read the article

  • Charging by the hour/project

    - by thesam18888
    This is related to a question I asked earlier: How to end a relationship with a client without pissing them off? What are your obligations when charging by the hour vs. charging by the project? If you agree to take on a project, give a rough estimate that it might take 10 days of work, and charge £X per hour, are you obligated to work for free after those 10 days are up and you still have not managed to complete the project due to unanticipated issues? What if you have delivered the project but bugs are found: should you fix those bugs for free if the 10 days are up, or should you charge your client? Also, for the above project, what happens if you start on the project but after the 10 days, for whatever reason, you have to give up and tell your client that you cannot do it anymore? I realise that this does nothing to build your reputation and relationship with the client, but are you obligated to pay back the money paid to you, or do you just deliver the half/nearly completed source code and help them find someone else to complete it? The reason I am asking these questions is that I am very new to freelancing and would like to know how to deal with these situations if they ever crop up. Thanks!

    Read the article

  • Estimating the size of HTTP reply headers

    - by Guanidene
    I am trying to download a file via a basic socket connection, using an HTTP GET request, so I have to specify how many bytes of incoming data to read from the socket. However, I am having trouble deciding how much data (say, in bytes) the server will reply with. I know that there is a "Content-Length" field in the reply sent by the server, but that gives me the size of the actual data (without the HTTP headers). Is there a way to get the exact size of the HTTP headers sent by the server, or is an estimation required? (I am doing this for downloading over a mobile network, where every bit of data matters in terms of time and money, so I don't wish to make an unnecessarily large estimate of the header size.)
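
    In practice you don't need to know the header size up front: read from the socket until the blank line (\r\n\r\n) that terminates the headers, measure what you received, then read exactly Content-Length more bytes for the body. A minimal sketch (plain HTTP only, no chunked encoding or redirects; the host is a placeholder):

        # Sketch: measure the reply headers exactly instead of estimating them.
        import socket

        host, path = "example.com", "/"
        s = socket.create_connection((host, 80))
        s.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())

        buf = b""
        while b"\r\n\r\n" not in buf:          # headers end at the first blank line
            chunk = s.recv(1024)
            if not chunk:
                break
            buf += chunk

        headers, _, body_start = buf.partition(b"\r\n\r\n")
        header_size = len(headers) + 4         # include the terminating \r\n\r\n
        content_length = 0
        for line in headers.split(b"\r\n"):
            if line.lower().startswith(b"content-length:"):
                content_length = int(line.split(b":", 1)[1])

        print("header bytes:", header_size, "body bytes expected:", content_length)
        # The body is body_start plus whatever remains to recv, up to content_length.
        s.close()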

    Read the article

  • Is Page-Loading Time Relevant?

    - by doug
    Take this (ServerFault) page, for instance. It has about 20 elements. When the last of these has loaded, the page is deemed "loaded", but not before. This is certainly the protocol used by our testing service (which is among the small group of well-known vendors that offer that sort of service). Obviously this method is based on a clear, definite endpoint, so it's easy to apply with concomitant reliability. I think it's also the metric used by the popular Firefox plugin 'YSlow'. For my employer's website, the last items to load are nearly always tracking code, tracking pixels, etc., so from the user's point of view (their perception) the page was "loaded" well before it had actually loaded according to the criterion used by our testing service (15-20% sooner, as a rough estimate). I'm sure I'm not the first person to consider this, nor the first to wonder whether it is causing micro-optimization while ignoring overall system-level, or user-perceived, performance. So my question is: are there other, more practical (yet still reasonably precise) measures of page loading time?

    Read the article

  • How can I compare effective power usage of two CPUs / CPU+Mobo+Mem combinations?

    - by einpoklum
    I have this server which does mostly file sharing (with the associated storage). No serious number crunching, and it isn't the firewall. My current box has a Celeron D processor (Prescott 336, 2.8 GHz) and I'm considering replacing it with a Pentium D (Smithfield 805, 2.66 GHz), for reasons which do not involve performance. How can I know whether to expect higher or lower power consumption after the change? And how can I estimate the power consumption of each option?
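
    For a rough comparison you can use the CPUs' published TDP figures as an upper bound (a file server will idle far below that), or measure the whole box with a plug-in power meter such as a Kill A Watt. A back-of-the-envelope sketch; the wattages and electricity price below are placeholder assumptions to replace with the real spec-sheet and utility numbers:

        # Rough yearly cost comparison from assumed average wattages (fill in real ones).
        def yearly_cost(avg_watts, price_per_kwh=0.15):      # price is an assumption
            return avg_watts / 1000 * 24 * 365 * price_per_kwh

        for name, watts in [("Celeron D 336 box", 90), ("Pentium D 805 box", 110)]:
            print(f"{name}: ~{yearly_cost(watts):.0f} per year at {watts} W average")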

    Read the article

  • Assuming "clean code/architecture", is there a difference in "effort" between PHP and Java/J2EE web application development?

    - by PhD
    A client asked us to estimate the effort involved in selecting PHP as the implementation language for his next web-based application. We spent about a week exploring PHP, prototyping, testing, etc. We are quite new to this language; we may have hacked around in it in the past, but let's go with PHP noobs who are application development experts (for lack of a better, less flattering term :). It seems that if we write clean, maintainable code and follow separation of concerns and enterprise architecture patterns (DAOs etc.), the 'effort' in creating an object-oriented PHP-based web application is about the same as for a Java-based one. Here's our equation for estimating the effort (development/delivery time):

    ConstructionEffort = f(analysis, design, coding, testing, review, deployment)

    We were specifically comparing effort estimates for creating an enterprise application with the following:

    - PHP + CakePHP/CodeIgniter (should we have considered others?)
    - Java + Spring + Restlet

    It's an end-to-end application:

    - Client: JavaScript/jQuery + HTML/CSS
    - Middle tier/business logic: still evaluating (PHP/Java)
    - Database: MySQL

    The effort estimates for the 1st and 3rd tiers are constant and relatively independent of the middle tier's technology. At a high level, with an initial breakdown of the requested features into user stories as well as a high-level SWAG at the sheer number of classes/SLOC that would be required, PHP doesn't seem to differ much from what would be required in Java. Is this correct? We are basing our initial estimates on the initial prototyping/coding we've done with PHP; we are currently disregarding fluency with the language as a factor, since that will be an initial hurdle and not a long-term impediment IMHO (we also have sufficient time to become quite fluent with PHP). I'm interested in the programmers' perspective on effort when creating similar applications with either of the languages, to justify choosing one over the other. Are we missing something here? It seems we are going against the popular belief that PHP is quicker to market (or, being very fluent with Java, we have our vision clouded). From what we've played around with, PHP doesn't seem to offer any coding/programming effort savings.

    Read the article

  • How is it possible for SSD drives to have such good latency?

    - by tigrou
    The first time I read some information about SSDs, I was surprised to learn they internally use NAND flash chips. This kind of memory is generally slow (low bandwidth) and has high latency, while SSDs are just the opposite. But here is how it works: SSD drives increase their bandwidth by using several NAND flash chips in parallel. In other words, they do some data striping (aka RAID 0) across several chips (handled by the controller). What I don't understand is how SSD drives have such low latency while they are using NAND chips (or at least much better latency than a typical single NAND chip would achieve). EDIT: I think I underestimate NAND chip capabilities. USB drives, while powered by NAND, are mostly limited by the USB protocol (which has pretty high latency) and the USB controller. That explains their poor performance in some cases.
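
    The parallelism described above is essentially address interleaving: consecutive logical pages are mapped to different NAND dies/channels so several slow chips can be busy at once, which hides most of an individual chip's latency. A toy sketch of such a mapping; the page size and channel count are arbitrary assumptions, not figures for any real drive:

        # Toy model of striping logical pages across NAND channels (RAID0-style).
        PAGE_SIZE = 4096      # bytes, assumption
        CHANNELS  = 8         # parallel NAND channels, assumption

        def locate(logical_byte_offset):
            page = logical_byte_offset // PAGE_SIZE
            channel = page % CHANNELS            # round-robin interleaving
            page_on_chip = page // CHANNELS
            return channel, page_on_chip

        # A 32 KiB sequential read touches all 8 channels once, so the chips work
        # in parallel instead of one chip serving 8 pages back to back.
        print([locate(i * PAGE_SIZE) for i in range(8)])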

    Read the article

  • Information about rendering, batches, the graphics card, performance, etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague, but it's hard to describe what I'm really looking for; here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU, due to my lack of technical background information, I'm clueless. I'm using XNA, so it'd be nice if the theory could be related to that. So what I actually want to know is: what happens when and where (CPU/GPU) when you perform specific draw actions? What is a batch? What influence do effects, projections, etc. have? Is data persisted on the graphics card, or is it transferred over at every step? When there's talk about bandwidth, is that the graphics card's internal bandwidth, or the pipeline from CPU to GPU? Note: I'm not actually looking for information on how the drawing process happens (that's the GPU's business); I'm interested in all the overhead that precedes it. I'd like to understand what's going on when I do action X, so I can adapt my architectures and practices to that. Any articles (possibly with code examples), information, links, or tutorials that give more insight into how to write better games are very much appreciated. Thanks :)

    Read the article

  • Alternatives to time tracking methodologies [closed]

    - by Brandon Wamboldt
    Question first: what are some feasible alternatives to time tracking for employees in a web/software development company, and why are they better options? Explanation: I work at a company that operates like this. Everybody is paid a salary. We have three types of work: Contract, Adhoc and Internal (non-billable). Adhoc is just small changes that take a few hours, and we just bill the client at the end of the month. Contracts are signed and we have this big long process, the usual. We figure out how much to charge by getting an estimate of the time involved (from the design team and the developers) and multiplying it by our hourly rate, and that's it. So say we estimate 50 hours for a website. We have time-tracking software and have to record the time we spend on it in 15-minute blocks (7:00 to 7:15, for example), along with the project name and some comments. Now, if we go over the 50 hours, we are both losing money and being inefficient. Now that I've explained how the system works, my question is how else it can be done, if a better method exists (which I'm sure one must). Nobody here likes the current system; we just can't find an alternative. I'd be more than willing to work longer hours, or after hours, on a project to get it done in time, but I'm far less inclined to do so under the current system. I'd love to be able to sum up (or link to) this post for my manager, to show them why we should use system abc instead of this one.

    Read the article

  • Efficiently installing fully-patched Windows XP, IE, and Office 2007 on an isolated PC

    - by JPaget
    I have been tasked with installing Windows XP, IE, and Office 2007 on a computer that will become part of a standalone network not connected to the Internet. What is a good way to install all of the security updates? I'm installing from CDs of Windows XP SP2 and MS Office 2007. Next I plan to download Windows XP SP3 and Office 2007 SP2, burn them to CDs, and install both service packs. Finally I plan to go to the Microsoft Download Center, download all applicable security updates, burn them to CD, and install them. I estimate that there are over 100 of these updates. Is there a more efficient way to do this?

    Read the article

  • Many user stories share the same technical tasks: what to do?

    - by d3prok
    A little introduction to my case: as part of a bigger product, my team has been asked to build a small IDE for a DSL. The user of this product will be able to make function calls in the code, and we are also asked to provide some useful function libraries. The team, together with the PO, put a certain number of user stories on the wall regarding the various libraries for the IDE user. When estimating the first of those stories, the team decided that the function call mechanism would be an engaging but not completely obvious task, so the estimate for that user story rose from a simple 3 to a more dangerous 5. Coming to the problem: the team then moved on to the user stories regarding the other libraries, 10 stories in all, and added those 2 points for the "function call mechanism" to each of those user stories. This immediately raised the total points for the product by 20 points! Everyone on the team knows that any user story could be picked up by the PO for the next iteration at any time, so we shouldn't isolate that part in one user story, but those 20 points feel so awfully unrealistic! I've proposed a solution, but I'm absolutely not satisfied with it: we created a "Design story" and put those annoying 2 points on it. However, when we came to implement and demonstrate it to our customers, we were unable to show them anything really valuable about that story! The problem here is whether we should ignore the principle of having isolated user stories (without any dependency between them). What would you do, or even better, what have you done in situations like this? (A small footnote: following a suggestion, I've moved this question from Stack Overflow.)

    Read the article

  • What's the best way to work out if a virtual server is overloaded?

    - by zemaj
    I have a series of virtual servers. I'm running a command to log in to each one and take a look at the load averages using uptime. What's the best way to work out whether those load values represent overloading? I'm running on the Rackspace cloud, so the servers have burst capability and can all be different sizes. I'm a little stumped on how to come up with a consistent way of figuring out when I need to spin up new servers. I can do things like estimate the jobs running on each one, but I'd like a system that runs a little closer to the real resource use available on each instance, as that obviously varies quite a bit! Help greatly appreciated!
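
    One common, consistent rule of thumb is to normalize the 1/5/15-minute load averages by the number of cores on each instance; a per-core value that stays above about 1.0 means the run queue is backing up. A small sketch that could run on each virtual server; the threshold is an assumption to tune, not a hard rule:

        # Sketch: flag a box as overloaded when load average per core stays high.
        import os

        def overloaded(threshold=1.0):            # threshold is a tunable assumption
            one, five, fifteen = os.getloadavg()
            cores = os.cpu_count() or 1
            # Use the 5-minute average so short bursts don't trigger scaling.
            return (five / cores) > threshold, five / cores

        flag, per_core = overloaded()
        print(f"load/core over 5 min: {per_core:.2f} -> overloaded: {flag}")

    On burstable cloud instances it can also be worth watching CPU steal time (the st column in top or vmstat), since a persistently high value means the host, not your workload, is the limit.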

    Read the article

  • Avoiding bloated Domain Objects

    - by djcredo
    We're trying to move data from our bloated service layer into our domain layer using a DDD approach. We currently have a lot of business logic in our services, which is spread out all over the place and doesn't benefit from inheritance. We have a central domain class which is the focus of most of our work: a Trade. The Trade object will know how to price itself, how to estimate risk, how to validate itself, etc. We can then replace conditionals with polymorphism, e.g. SimpleTrade will price itself one way, but ComplexTrade will price itself another. However, we are worried that this will bloat the Trade class(es). It really should be in charge of its own processing, but the class size is going to increase exponentially as more features are added. So we have two choices:

    1. Put the processing logic in the Trade class. The processing logic is then polymorphic based on the type of the trade, but the Trade class now has multiple responsibilities (pricing, risk, etc.) and is large.
    2. Put the processing logic into another class such as TradePricingService. This is no longer polymorphic with the Trade inheritance tree, but the classes are smaller and easier to test.

    What would be the suggested approach?
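
    A common middle ground between the two options is to keep the public method on Trade (so callers still just ask a trade for its price) but delegate the algorithm to small, separately testable policy objects, essentially option 2 wired in behind option 1's interface. A rough Python sketch with hypothetical class names and placeholder pricing rules:

        # Sketch: Trade keeps a thin price() method and delegates to a pricer policy.
        class SimplePricer:
            def price(self, trade):
                return trade.notional * 1.01          # placeholder pricing rule

        class ComplexPricer:
            def price(self, trade):
                return trade.notional * 1.05 + 250    # placeholder pricing rule

        class Trade:
            def __init__(self, notional, pricer, risk_model=None):
                self.notional = notional
                self._pricer = pricer                 # injected policy, easy to test alone
                self._risk_model = risk_model         # same idea for risk, validation, ...
            def price(self):
                return self._pricer.price(self)       # callers never see the policy

        simple = Trade(1_000_000, SimplePricer())
        complex_ = Trade(1_000_000, ComplexPricer())
        print(simple.price(), complex_.price())

    The domain object stays the single entry point, while pricing, risk and validation each live in a class with one responsibility.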

    Read the article

  • Gathering application architecture

    - by userbb
    Suppose there is a system for gathering info about system activities. There is a client part with an interface, and there are agent parts that are installed on each machine. I estimate that there could be a maximum of 20 computers now; later it could be more, like 50. My solutions:

    1. The agent stores data in a local database, e.g. SQLite. There is also a service which the client can use to query data, so if a client wants to display data for 50 computers, it sends a query to 50 computers. I'm on that solution now, but maybe it's totally wrong.
    2. The agent stores data in a local database (I don't know a good one for that). There is also a server (main database), and the local databases are synchronized with it. In this case, a client connects to the main database to display data.
    3. The agent sends data in real time to the main database. Same as point 2, but there is no sync.
    4. Like point 3, but the agent buffers data in a local database and sends it to the main database in small chunks.

    What is the best approach?
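
    For comparison, option 4 is sketched below: the agent buffers events in a local SQLite table and periodically pushes unsent rows to the main database in small batches, marking them as sent only after the push succeeds. The push_to_main_db callable is a placeholder for whatever transport (HTTP, message queue, direct DB insert) is chosen; the schema is an assumption.

        # Sketch of option 4: agent-side SQLite buffer flushed to the main DB in chunks.
        import sqlite3

        def init(path="agent_buffer.db"):
            db = sqlite3.connect(path)
            db.execute("""CREATE TABLE IF NOT EXISTS events (
                            id INTEGER PRIMARY KEY,
                            payload TEXT NOT NULL,
                            sent INTEGER NOT NULL DEFAULT 0)""")
            return db

        def record(db, payload):
            db.execute("INSERT INTO events (payload) VALUES (?)", (payload,))
            db.commit()

        def flush(db, push_to_main_db, batch_size=100):
            rows = db.execute(
                "SELECT id, payload FROM events WHERE sent = 0 LIMIT ?",
                (batch_size,)).fetchall()
            if rows and push_to_main_db([p for _, p in rows]):   # placeholder transport
                db.executemany("UPDATE events SET sent = 1 WHERE id = ?",
                               [(i,) for i, _ in rows])
                db.commit()

        db = init()
        record(db, '{"cpu": 0.42}')
        flush(db, push_to_main_db=lambda batch: True)   # stub transport always "succeeds"

    The buffer means a flaky network or a busy central server never loses agent data, and the central database sees batched writes instead of one insert per event.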

    Read the article

  • Blurred refurbished TFT

    - by PeterMmm
    I got 3 refurbished PCs with 19" TFTs. All the TFTs show a very blurry image. I estimate the TFTs are 2-3 years old. The PCs are running Windows 7 Pro. The resolution is set to the TFTs' native values. It is possible that all the TFTs are broken, but I have similar models that are up to 5 years old without any issue. I still think it could be a configuration issue, but could a hardware fault in a TFT show up as a very blurry image like this? Update: TFT HP 2035, graphics Intel Q35/GMA 3100, analog D-SUB connector, manufacture date Sep 2005. Configured resolution 1600x1200, PPP 150%. Without ClearType it is worse. Desktop icon titles seem to be good and clear, but in Notepad, for example, the effect is a pinkish shadow to the right of the characters.

    Read the article
