Search Results

Search found 279 results on 12 pages for 'predict'.

  • Keeping files or database records? Java and Python

    - by danpalmer
    My website will use a neural network to predict things based on user data. The user can select the data to be used in training the network and then use their trained network to predict things. I am using a framework (Encog) to create, train and query the networks; this uses Java. The framework has persistence for saving a network to an XML file. What is the best way to store these files? I can see several potential ideas, but I need help choosing which is best:
    1. Save each network to a separate XML file, with a name that is stored in the database, and load this each time.
    2. Save all the networks to the same XML file, with each network having a different name that is stored in the database.
    3. Somehow pass what would normally be written to an XML file to the Django site for writing to the database. This would need to be returned to the Java code when a prediction needs to be made.
    I am able to do 1 or 2, but I think their performance will be quite limited, and I am on shared hosting at the moment, so I don't know how pleased the host would be with thousands of files. Also, after adding a few thousand records to one XML file, I noticed a massive performance hit when saving to it. If I were able to implement option 3 somehow, I think it would be best: no issues with separate processes accessing the database, probably better performance, and no files lying around. However, the part of the neural network framework I am using (Encog) that saves to a file needs access to a Java File object, not a string that could be saved to a database. Unless there is some Java magic I can do here (I know very little Java), the only way I can see of doing this would be with temporary files, but I don't know if this is the correct way to do it. I would appreciate any ideas on the best way to implement any of the above three ideas, or any alternatives. Thanks!

    Read the article

  • Generating video or images of geometrical objects from data

    - by Jonathan Barbero
    Hello, I'm working on a course project to predict the velocity and position of the solar system planets (and other objects). It would be really cool if I could visualize the predicted data, if possible as generated 3D images, and a video would be amazing. Do you know of any library that lets me use this data to generate an image or video? (I don't care which language.) Data:
    - simulation step (the timeline step for a video)
    - positions of the objects
    - radius and/or colours of the objects
    Thanks in advance, any suggestion is welcome.
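
    One low-tech route (a sketch added here, not part of the original question, written in R only because that is the one language that already appears on this page - any plotting library in any language would do): render one image per simulation step and stitch the frames into a video with an external tool such as ffmpeg. The positions object below is a hypothetical list of per-step data frames with x, y, radius and colour columns.

    # one PNG frame per simulation step; combine afterwards with e.g. ffmpeg
    for (step in seq_along(positions)) {
      p <- positions[[step]]   # hypothetical data frame with columns x, y, radius, colour
      png(sprintf("frame_%04d.png", step), width = 800, height = 800)
      plot(NA, xlim = c(-35, 35), ylim = c(-35, 35), asp = 1,
           xlab = "x (AU)", ylab = "y (AU)", main = sprintf("step %d", step))
      symbols(p$x, p$y, circles = p$radius, inches = FALSE,
              bg = p$colour, fg = NA, add = TRUE)
      dev.off()
    }
    # then, outside R:  ffmpeg -i frame_%04d.png orbits.mp4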

    Read the article

  • Model Fit of Binary GLM with more than 1 or 2 predictors

    - by Salmo salar
    I am trying to predict from a binary GLM with multiple predictors. I can do it fine with one predictor variable, but I struggle when I use multiple predictors. Sample data:

    DF3 <- structure(list(
      attempt = structure(c(1L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 2L), .Label = c("1", "2"), class = "factor"),
      searchtime = c(137, 90, 164, 32, 39, 30, 197, 308, 172, 48, 867, 117, 63, 1345, 38, 122, 226, 397, 0, 106, 259, 220, 170, 102, 46, 327, 8, 10, 23, 108, 315, 318, 70, 646, 69, 97, 117, 45, 31, 64, 125, 17, 240, 63, 549, 1651, 233, 406, 334, 168, 127, 47, 881),
      mean.search.flow = c(15.97766667, 14.226, 17.15724762, 14.7465, 39.579, 23.355, 110.2926923, 71.95709524, 72.73666667, 32.37466667, 50.34905172, 27.98471429, 49.244, 109.1759778, 77.71733333, 37.446875, 101.23875, 67.78534615, 21.359, 36.54257143, 34.13961111, 64.35253333, 80.98554545, 61.50857143, 48.983, 63.81072727, 26.105, 46.783, 23.0605, 33.61557143, 46.31042857, 62.37061905, 12.565, 42.31983721, 15.3982, 14.49625, 23.77425, 25.626, 74.62485714, 170.1547778, 50.67125, 48.098, 66.83644444, 76.564875, 80.63189189, 136.0573243, 136.3484, 86.68688889, 34.82169565, 70.00415385, 64.67233333, 81.72766667, 57.74522034),
      Pass = structure(c(1L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 1L, 2L, 1L, 2L, 2L, 1L, 2L, 1L, 2L), .Label = c("0", "1"), class = "factor")),
      .Names = c("attempt", "searchtime", "mean.search.flow", "Pass"), class = "data.frame",
      row.names = c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L, 20L, 21L, 22L, 23L, 24L, 25L, 26L, 28L, 29L, 30L, 31L, 32L, 33L, 34L, 35L, 36L, 37L, 38L, 39L, 40L, 50L, 51L, 53L, 54L, 60L, 61L, 62L, 63L, 64L, 65L, 66L, 67L, 68L, 69L, 70L, 71L, 72L))

    The first model, with a single predictor:

    M2 <- glm(Pass ~ searchtime, data = DF3, family = binomial)
    summary(M2)
    drop1(M2, test = "Chi")

    The plot works fine:

    P1 <- predict(M2, newdata = MyData, type = "link", se = TRUE)
    plot(x = MyData$searchtime, exp(P1$fit) / (1 + exp(P1$fit)), type = "l", ylim = c(0, 1),
         xlab = "search time", ylab = "probability of passage")
    lines(MyData$searchtime, exp(P1$fit + 1.96 * P1$se.fit) / (1 + exp(P1$fit + 1.96 * P1$se.fit)), lty = 2)
    lines(MyData$searchtime, exp(P1$fit - 1.96 * P1$se.fit) / (1 + exp(P1$fit - 1.96 * P1$se.fit)), lty = 2)
    points(DF3$searchtime, DF3$Search.and.pass)

    The second model:

    M2a <- glm(Pass ~ searchtime + mean.search.flow + attempt, data = DF3, family = binomial)
    summary(M2a)
    drop1(M2a, test = "Chi")

    How do I plot this with "dummy" data? I have tried along the lines of model.matrix and expand.grid, as you would do with glmer, but fail straight away due to the two categorical variables along with factor(attempt).
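
    A possible way forward (a sketch added here, not part of the original question): keep searchtime as the axis variable, hold mean.search.flow at a typical value, and let expand.grid cross it with both levels of attempt; predict() then back-transforms exactly as in the single-predictor plot. Object names such as MyData2 are made up for the example.

    MyData2 <- expand.grid(
      searchtime       = seq(min(DF3$searchtime), max(DF3$searchtime), length.out = 100),
      mean.search.flow = mean(DF3$mean.search.flow),
      attempt          = factor(levels(DF3$attempt)))
    P2 <- predict(M2a, newdata = MyData2, type = "link", se = TRUE)
    MyData2$fit   <- plogis(P2$fit)                      # plogis() is exp(x) / (1 + exp(x))
    MyData2$upper <- plogis(P2$fit + 1.96 * P2$se.fit)
    MyData2$lower <- plogis(P2$fit - 1.96 * P2$se.fit)
    # one fitted curve per attempt level, at the mean flow
    plot(fit ~ searchtime, data = subset(MyData2, attempt == "1"), type = "l",
         ylim = c(0, 1), xlab = "search time", ylab = "probability of passage")
    lines(fit ~ searchtime, data = subset(MyData2, attempt == "2"), lty = 2)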

    Read the article

  • Why "Finalize method should not reference any other objects"?

    - by mishal153
    I have been pondering why it is recommended that we should not release managed resources inside Finalize. If you look at the code example at http://msdn.microsoft.com/en-us/library/system.gc.suppressfinalize.aspx, search for the string "Dispose(bool disposing) executes in two distinct scenarios" and read that comment, you will understand what I mean. The only possibility I can think of is that it probably has something to do with the fact that it is not possible to predict when the finalizer will be called. Does anyone know the right answer? Thanks, mishal

    Read the article

  • SSI or PHP Include()?

    - by Ozzy
    Hi all, basically I am launching a site soon and I predict a LOT of traffic. For scenario's sake, let's say I will have 1M uniques a day. The data will be static, but I need to have includes as well; I will only include an HTML page inside another HTML page, nothing dynamic (I have my reasons, which I won't disclose to keep this simple). My question is: performance-wise, which is faster, SSI or PHP include()?

    Read the article

  • Combining ASP.NET controls with CSS

    - by Sniffer
    I am relatively new to website design, and specifically to working in ASP.NET. I am using CSS to style my site, but when I use ASP.NET controls like GridView, navigation controls, etc., they are messed up by the style sheets, and you can't see that until you run the website, because the controls are translated to HTML and so are affected by the CSS in a way that you can't predict. How do I solve this, and is there a better way to lay out and design sites in ASP.NET?

    Read the article

  • how to save a fitted R model for later use

    - by ahala
    Sorry for this novice question: if I fit an lm() or loess() model and save the model somewhere, in a file or in a database, for later use by a third party with the predict() method, do I have to save the entire model object? Since the returned model object contains the original raw data, it can be huge.
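
    A minimal sketch of one common route (added here, not part of the original question): serialise the fitted object with saveRDS() and read it back with readRDS(). For lm() you can also ask the fit not to keep the model frame and design matrix, which keeps the saved object small, and predict() on new data still works. For loess() the stored data is what predict() interpolates from, so that object generally cannot be slimmed down the same way. The object and column names (mydata, x, y) are made up.

    fit <- lm(y ~ x, data = mydata, model = FALSE, x = FALSE, y = FALSE)  # don't keep the raw data
    saveRDS(fit, file = "fit.rds")          # single-file serialisation; could also go into a DB blob

    # later, possibly in another session or on another machine
    fit <- readRDS("fit.rds")
    predict(fit, newdata = data.frame(x = c(1.5, 2.5)))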

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/

    When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) and cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can think of the former as a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter as a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold.

    Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need a reliable way to estimate that. Businesses and organizations are used to paying for servers, software licenses and other infrastructure as an up-front cost, and for power, the people who run the systems and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on the features of the application versus the true costs.

    The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs), and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly what the air conditioning costs, or the cost of having a team of system administrators?
    This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system, is in fact a cost to the business.

    There are three primary methods that I’ve been successful with in estimating the cost. None are perfect; all are demand-driven. The general process is to lay out a matrix of components, units and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.

    Simple Estimation

    The simplest way to calculate costs is to architect the application (even in UML or on paper, no coding involved) and then estimate which of the components you’ll use and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create a “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with columns such as: Program Element, Azure Component, Unit of Measure, Cost Per Unit, Estimated Use of Component, Total Cost Per Component, and Cumulative Cost. Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed, which brings us to the next model type.

    Measure and Project

    A more accurate model is to actually write the code for the application using the Software Development Kit (SDK), which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or, in some cases, a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.
    Sample and Estimate

    Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it for analysis and, using standard statistical and other predictive methods such as regression, project what the costs will be in the future. I normally advise that the architect also extrapolate a unit cost from those metrics. This is the information that should be reported back to the executives who pay the bills: the past cost, future projected costs, and the unit cost “per click” or “per transaction”, as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the size of the sample is key (in this case I recommend using the entire population, not a smaller sample).

    References and Tools

    Articles:
    http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
    http://technet.microsoft.com/en-us/magazine/gg213848.aspx
    http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
    http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
    http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx

    Other Tools:
    http://cloud-assessment.com/
    http://communities.quest.com/community/cloud_tools

    Read the article

  • Why JSF Matters (to You)

    - by reza_rahman
          "Those who have knowledge, don’t predict. Those who predict, don’t have knowledge."                                                                                                    – Lao Tzu You may have noticed Thoughtworks recently crowned the likes AngularJS, etc imminent successors to server-side web frameworks. They apparently also deemed it necessary to single out JSF for righteous scorn. I have to say as I was reading the analysis I couldn't help but remember they also promptly jumped on the Ruby, Rails, Clojure, etc bandwagon a good few years ago seemingly similarly crowing these dynamic languages imminent successors to Java. I remember thinking then as I do now whether the folks at Thoughtworks are really that much smarter than me or if they are simply more prone to the Hipster buzz of the day. I'll let you make the final call on that one. I also noticed mention of "J2EE" in the context of JSF and had to wonder how up-to-date or knowledgeable the person writing the analysis actually was given that the term was basically retired almost a decade ago. There's one thing that I am absolutely sure about though - as a long time pretty happy user of JSF, I had no choice but to speak up on what I believe JSF offers. If you feel the same way, I would encourage you to support the team behind JSF whose hard work you may have benefited from over the years. True to his outspoken character PrimeFaces lead Cagatay Civici certainly did not mince words making the case for the JSF ecosystem - his excellent write-up is well worth a read. He specifically pointed out the practical problems in going whole hog with bare metal JavaScript, CSS, HTML for many development teams. I'll admit I had to smile when I read his closing sentence as well as the rather cheerful comments to the post from actual current JSF/PrimeFaces users that are apparently supposed to be on a gloomy death march. In a similar vein, OmniFaces developer Arjan Tijms did a great job pointing out the fact that despite the extremely competitive server-side Java Web UI space, JSF seems to manage to always consistently come out in either the number one or number two spot over many years and many data sources - do give his well-written message in the JAX-RS user forum a careful read. I don't think it's really reasonable to expect this to be the case for so many years if JSF was not at least a capable if not outstanding technology. If fact if you've ever wondered, Oracle itself is one of the largest JSF users on the planet. As Oracle's Shay Shmeltzer explains in a recent JSF Central interview, many of Oracle's strategic products such as ADF, ADF Mobile and Fusion Applications itself is built on JSF. There are well over 3,000 active developers working on these codebases. I don't think anyone can think of a more compelling reason to make sure that a technology is as effective as possible for practical development under real world conditions. Standing on the shoulders of the above giants, I feel like I can be pretty brief in making my own case for JSF: JSF is a powerful abstraction that brings the original Smalltalk MVC pattern to web development. This means cutting down boilerplate code to the bare minimum such that you really can think of just writing your view markup and then simply wire up some properties and event handlers on a POJO. The best way to see what this really means is to compare JSF code for a pretty small case to other approaches. 
    You should then multiply the additional work for the typical enterprise project to try to understand what the productivity trade-offs are. This is reason alone for me to personally never take any other approach seriously as my primary web UI solution unless it can match the sheer productivity of JSF.

    Thanks to JSF's focus on components from the ground up, JSF has an extremely strong ecosystem that includes projects like PrimeFaces, RichFaces, OmniFaces, ICEFaces and of course ADF Faces/Mobile. These component libraries taken together constitute perhaps the largest widget set ever developed and optimized for a single web UI technology. To begin to grasp what this really means, just briefly browse the excellent PrimeFaces showcase and consider that you can readily use the widgets on that showcase with just some simple markup, knowing next to nothing about AJAX, JavaScript or CSS.

    JSF has the fair and legitimate advantage of being an open, vendor-neutral standard. This means that no single company, individual or insular clique controls JSF - openness, transparency, accountability, plurality, collaboration and inclusiveness are virtually guaranteed by the standards process itself. You have the option to choose between compatible implementations, escape any form of lock-in, or even create your own compatible implementation!

    As you might gather from the quote at the top of the post, I am not a fan of crystal-ball gazing and certainly don't want to engage in it myself. Who knows? However far-fetched it may seem, maybe AngularJS is the only future we all have after all. If that is the case, so be it. Unlike what you might have been told, Java EE is about choice at heart, and it can certainly work extremely well as a back-end for AngularJS. Likewise, you are also most certainly not limited to just JSF for working with Java EE - you have a rich set of choices like Struts 2, Vaadin, Errai, VRaptor 4, Wicket, or perhaps even the new action-oriented web framework being considered for Java EE 8 based on the work in Jersey MVC...

    Please note that any views expressed here are my own only and certainly do not reflect the position of Oracle as a company.

    Read the article

  • How do I make Chrome's Omnibar behave more like the Firefox Awesome bar?

    - by Agnel Kurian
    One of my favorite features of the Firefox Awesome Bar is that I can simply type a substring of any URL or page title in my history and it finds all matches, sorted by how frequently they were accessed. Example: I simply type "ask" when I want to ask something on stackoverflow.com, "inbox" goes to my GMail inbox, and so on, because the substring matches any part of the URL or the page title. Chrome's Omnibar is quite frustrating in this area. I am not able to predict what it's going to fetch, and I seem to have no way to train the thing to do my bidding. I have unchecked the option that says "Use a suggestion service to help complete searches and URLs typed...", but there has been no noticeable improvement. Any clues how I can make the Omnibar behave?

    Read the article

  • Windows Server 2003 seems to pick the 'outgoing' IP address at random from all the ones configured in IIS, how can I make it just use one?

    - by Ryan
    We have multiple sites in IIS with different IP addresses. This is cool - we want different IPs to all go to this server and get served the proper site. However, I discovered an issue: when the server makes an outgoing connection, I cannot predict which IP it will use. I had to have one client add ALL of the IPs to their firewall so that a certain service could communicate with their server. Well, now the time has come to add another IP/site to IIS, but I had told them they would not need to add any more IPs. So the question is, how can I make Windows Server 2003 use only ONE specific IP for outgoing calls instead of it being unpredictable? If this is not a good enough description: when I was RDPed into the server and opened IE and went to 'what is my IP', it was sometimes different, which is how I discovered why the one client's firewall was suddenly refusing the connections. How can I just make outgoing calls originate from a static IP yet still allow multiple IPs pointing to different sites in IIS?

    Read the article

  • What kind of storage do people actually use for VMware ESX servers?

    - by Dirk Paessler
    VMware and many network evangelists try to tell you that sophisticated (= expensive) fiber SANs are the "only" storage option for VMware ESX and ESXi servers. Well, yes, of course. Using a SAN is fast, reliable and makes vMotion possible. Great. But: can all ESX/ESXi users really afford SANs? My theory is that less than 20% of all VMware ESX installations on this planet actually use fiber or iSCSI SANs. Most of these installations will be in larger companies who can afford them. I would predict that most VMware installations use "attached storage" (vmdks are stored on disks inside the server). Most of them run in SMEs, and there are so many of them! We run two ESX 3.5 servers with attached storage and two ESX 4 servers with an iSCSI SAN, and the "real-life difference" between the two is barely noticeable :-) Do you know of any official statistics for this question? What do you use as your storage medium?

    Read the article

  • Task bar remains visible with "Auto-hide the task bar" checked in Windows 7.

    - by Corey
    It's about time that I figure this out. I can say with pretty high confidence that I have experienced this issue in all consumer versions of Windows since XP. I keep "Auto-hide the task bar" checked to maximize screen real estate. Every once in a while, the task bar will refuse to hide, while individual windows continue to act as if that option were checked (by falling under the task bar). For years I have fixed this by rebooting. Of course, I cannot predict the timing or frequency of the problem, so the process becomes burdensome. I want to know how this can be fixed without rebooting. It has affected me on multiple machines using multiple versions of Windows, so I cannot be the only one who is bothered by it. Can anyone help me solve this?

    Read the article

  • Make Chrome's Omnibar behave more like the Firefox AwesomeBar

    - by Agnel Kurian
    One of my favorite features of the Firefox AwesomeBar is that I can simply type a substring of any URL or page title in my history and it finds all matches, sorted by how frequently they were accessed. Example: I simply type "ask" when I want to ask something on stackoverflow.com, "inbox" goes to my GMail inbox, and so on, because the substring matches any part of the URL or the page title. Chrome's Omnibar is quite frustrating in this area. I am not able to predict what it's going to fetch, and I seem to have no way to train the thing to do my bidding. I have unchecked the option that says "Use a suggestion service to help complete searches and URLs typed...", but there has been no noticeable improvement. Any clues how I can make the Omnibar behave?

    Read the article

  • Ubuntu - Automatically mount external drives to /media/LABEL on boot without a user logged in?

    - by endolith
    This question is similar, but kind of the opposite of what I want. I want external USB drives to be mounted automatically at boot, without anyone logged in, to locations like /media/<label>. I don't want to have to enter all the data into fstab, partially because it's tedious and annoying, but mostly because I can't predict what I'll be plugging into it or how the partitions will change in the future. I want the drives to be accessible to things like MPD, and available when I log in with SSH. gnome-mount seems to only mount things when you are locally logged into a Gnome graphical session.

    Read the article

  • Distribute terabyte files to the public from a web server

    - by MarkJ
    Hi, we need to set up a website which makes two or three large files publicly available - the files will be 1 or 2 terabytes each. Although they will be public, in practice I expect only a relatively small number of scientists will want to download them. What is the best way to allow this? I've had a quick talk with a web-hosting provider (Rackspace) and they suggested a hybrid solution: an entry-level managed server (we predict fairly low traffic for the website, but we do need to install some custom CGI software), plus some cloud storage which hooks into Limelight Networks and would host the large files for download by FTP. It sounded OK to me, but I know relatively little about server administration. Does it make sense? Thanks in advance, Mark

    Read the article

  • LVM, Soft RAID1, and Replication?

    - by mtkoan
    Hi all, I am practicing putting together an HA file server. It is a Linux server with two 1.5 TB hard drives. My plan is to use LVM to manage the physical volumes into logical volumes for /, /home, and /var, then use md (soft RAID 1) to mirror the image onto the second HDD, and THEN use DRBD to mirror the entire setup to another server. Is this overkill? Would I be okay with just md and DRBD? The system will serve users' home directories (~100) and probably some groupware or other local intranet. On my own machines I've always separated the root and /home partitions so that if I break something, I can easily reinstall the OS. Should I follow that same theory here? If so I need LVM, because I really can't predict where we'll need more space, /var or /home.

    Read the article

  • Is there a way to tell if a file is done copying?

    - by Mike Cooper
    The scenario is this: Machine A has files I want to copy to Machine C. Machine A can't access C directly, but can access Machine B, which can access Machine C. I am using scp to copy from Machine A to B, and then from B to C. Machine B has limited storage space, so as files come in, I need to copy them to C and delete them from B. The second copy is much faster, so bandwidth is not a problem. I could do this by hand, but I am lazy. What I would like is to run a script on B or C that will copy each file to C as it finishes. The scp job is running from A. So what I need is a way to ask (preferably from a bash script) whether file X.avi is "done" copying. Each of these files is a different size, and I can't really predict the size or time of completion. Edit: by the way, the file transfer times are roughly 1 hour from A to B and about 10 minutes from B to C, if the time scale matters at all.

    Read the article

  • How do I know if my disks are being hit with too many I/O reads or writes or both?

    - by Mark F
    I know a bit about disk I/O and the bottlenecks related to it, especially as they relate to databases. How do I really know what the maximum I/O numbers will be for my disks? What metric might be available for working out, roughly (but it needs to be a good approximation), how much I/O capacity I have left? I've seen it before where things are bubbling along nicely and then all of a sudden everything screeches to a halt, and it ends up being an I/O-bound problem. Is there a better way to predict when I/O is reaching its limits? This article was interesting but did not give the answer I'm after. So, is my best bet just looking at 'CPU I/O wait'? There must be a more reactive method than this.

    Read the article

  • Can I group rows to get a sum using Excel?

    - by Matt
    I have a spreadsheet with two columns of importance: date and number. I can't always predict the number of rows or the dates, but what I would like to do is print out the sum of the numbers for each date. For example, there might be 5 rows - for Dec-7: 200 and 111, and for Dec-6: 222, 533 and 100. I am trying to create a list which would show Dec-6: 855 and Dec-7: 311. I believe a pivot table is what I want, but I can't quite figure out how I need to configure it to show what I want. If anyone knows of a guide I could look at, that would be fantastic!

    Read the article

  • Server OS: put it on a separate drive? Yes, no, or depends on the situation?

    - by captainentropy
    Hi, I would like opinions, or facts - preferably both - on whether it's OK to install a server's OS on the RAID array or not. I would predict that installation on separate drives is best, but I'm interested in the performance. The server in question will have 8 cores (2.4 GHz each), 24 GB RAM, and ~16 TB of usable space on server-class drives in RAID 10. There is also a subsystem of roughly equivalent size for backup. I will be running CPU/memory-intensive applications on this server in addition to it being file storage for my work (a research lab). If I install the OS (I haven't decided which one, probably Ubuntu or Fedora or some other good Linux distro) on separate drives, will there be any performance problems if they aren't configured in RAID 10? If it is better to have the OS on separate drives, should I go for 150 GB VelociRaptors in RAID 1 or smallish SSD drives in RAID 1? Money is unfortunately a factor, as I think I'm close to maxing my budget as it is. Thanks!

    Read the article

  • How do I know if my disks are being hit with too many IO reads or writes, or both?

    - by Mark F
    Hi all, so I know a bit about disk I/O and the bottlenecks related to it, especially as they relate to databases. But how do I really know what the maximum IO numbers will be for my disks? What metric might be available for working out, roughly (but it needs to be a good approximation), how much I/O capacity I have left? I've seen it before where things are bubbling along nicely and then all of a sudden everything screeches to a halt, and it ends up being an IO-bound problem. Is there a better way to predict when IO is reaching its limits? This article was interesting but did not give the answer I'm after: http://serverfault.com/questions/61510/linux-how-can-i-see-whats-waiting-for-disk-io. So is my best bet just looking at 'CPU IO wait'? There must be a more reactive method than this. Best, M

    Read the article

  • Excel - Reuse a trend line to apply to other data

    - by milko
    I've obtained a trend line from a particular set of data. What I'd like to do now is to reuse this trend line to predict values from a given pair (x,y) of coordinates. To put it another way, I have one pair (x,y) that I know is correct for sure. I don't know any other point. Let's assume the behavior of this new set is similar to the one I've got the trend line from. Is there any way Excel could compute other points following this trend line?

    Read the article

  • Windows Server 2003 seems to pick the 'outgoing' IP address at random from all the ones configured in IIS, how can I make it just use one?

    - by ioSamurai
    We have multiple sites in IIS with different IP addresses. This is cool - we want different IPs to all go to this server and get served the proper site. However, I discovered an issue: when the server makes an outgoing connection, I cannot predict which IP it will use. I had to have one client add ALL of the IPs to their firewall so that a certain service could communicate with their server. Well, now the time has come to add another IP/site to IIS, but I had told them they would not need to add any more IPs. So the question is, how can I make Windows Server 2003 use only ONE specific IP for outgoing calls instead of it being unpredictable? If this is not a good enough description: when I was RDPed into the server and opened IE and went to 'what is my IP', it was sometimes different, which is how I discovered why the one client's firewall was suddenly refusing the connections. How can I just make outgoing calls originate from a static IP yet still allow multiple IPs pointing to different sites in IIS?

    Read the article
