Search Results

Search found 24403 results on 977 pages for 'matt case'.


  • Error 502 in OpenOffice Spreadsheet formula

    - by cody
    The failing formula is the following:

        =IF(TIMEVALUE(C2 & ":00") > TIMEVALUE(B2 & ":00"); 0; C2-B2)

    I previously tried

        =IF(C2 > B2; 0; C2-B2)

    but this also gives me "Error 502". The cells it refers to contain data in the format "12:30" (I formatted the columns with the format "HH:MM"). I just want to calculate how much time lies between two times, respecting the special case where endtime < starttime.
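
    In case it helps to see the intended calculation outside of spreadsheet syntax, here is a small Python sketch (the cell roles are an assumption: B2 as start, C2 as end; the clamp-to-zero branch mirrors the IF() above, with a comment noting the midnight-wrap alternative):

        # Sketch of the time-difference logic the formula is trying to express,
        # assuming B2 holds the start time and C2 the end time as "HH:MM" text.
        from datetime import datetime

        def duration_minutes(start_hhmm, end_hhmm):
            fmt = "%H:%M"
            start = datetime.strptime(start_hhmm, fmt)
            end = datetime.strptime(end_hhmm, fmt)
            if end < start:       # the special case the IF() guards against: return 0,
                return 0          # or add 24 hours instead if the interval crosses midnight
            return int((end - start).total_seconds() // 60)

        print(duration_minutes("12:30", "14:45"))  # 135
        print(duration_minutes("23:30", "01:15"))  # 0 (endtime < starttime)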

    Read the article

  • Integration error at high velocity

    - by Elektito
    I've implemented a simple simulation of two planets (simple 2D disks, really) in which the only force is gravity, and there is also collision detection/response (collisions are completely elastic). I can launch one planet into orbit around the other just fine. The collision detection code, though, does not work so well. I noticed that when one planet hits the other after a free fall, it bounces back and ends up much higher than its original position. Some poking around convinced me that the simplistic Euler integration is causing the error. Consider this case: one object has a mass of 1kg and the other has a mass equal to Earth's. Say the object is 10 meters above ground. Assume that our dt (delta t) is 1 second. The object goes to a height of 9 meters at the end of the first iteration, 7 at the end of the second, 4 at the end of the third and 0 at the end of the fourth iteration. At this point it hits the ground and bounces back at a speed of 10 meters per second. The problem is that with dt=1, on the first iteration it bounces back to a height of 10. It takes several more steps to make the object change its course. So my question is: what integration method can I use to fix this problem? Should I split dt into smaller pieces when the velocity is high? Or should I use another method altogether? What method do you suggest? EDIT: You can see the source code on GitHub: https://github.com/elektito/diskworld/
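
    For comparison, here is a minimal Python sketch (not taken from the linked project; the gravity value and step size are illustrative) contrasting explicit Euler with semi-implicit (symplectic) Euler for a single body falling under constant gravity. Updating the velocity before the position, together with a smaller fixed dt or sub-stepping near collisions, is a commonly suggested remedy for the kind of energy gain described above.

        # Illustrative integrators only; the real simulation has two bodies and
        # mutual gravity, which this sketch deliberately ignores.
        G = -10.0  # m/s^2, roughly Earth-like, chosen for readability

        def explicit_euler(y, v, dt):
            y_new = y + v * dt      # position advanced with the *old* velocity
            v_new = v + G * dt
            return y_new, v_new

        def semi_implicit_euler(y, v, dt):
            v_new = v + G * dt      # velocity first...
            y_new = y + v_new * dt  # ...then position with the *new* velocity
            return y_new, v_new

        for integrate in (explicit_euler, semi_implicit_euler):
            y, v = 10.0, 0.0
            steps = 0
            while y > 0.0 and steps < 100:
                y, v = integrate(y, v, dt=1.0)  # dt = 1 s, as in the example above
                steps += 1
            print(integrate.__name__, "reaches the ground after", steps,
                  "steps with v =", v)

    With the coarse dt = 1 s, the explicit version arrives with roughly twice the speed of the semi-implicit one, which is the same kind of artificial energy gain that makes the bounced planet climb above its starting height.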

    Read the article

  • if/else statements or exceptions

    - by Thaven
    I don't know whether this question fits better on this board or on Stack Overflow, but I'm asking it here because it is about practices rather than a specific problem. So, consider an object that does something, and this something can (but should not!) go wrong. The situation can be resolved in two ways. First, with exceptions:

        DoSomethingClass exampleObject = new DoSomethingClass();
        try
        {
            exampleObject.DoSomething();
        }
        catch (ThisCanGoWrongException ex)
        {
            [...]
        }

    And second, with an if statement:

        DoSomethingClass exampleObject = new DoSomethingClass();
        if (!exampleObject.DoSomething())
        {
            [...]
        }

    The second case in a more sophisticated form:

        DoSomethingClass exampleObject = new DoSomethingClass();
        ErrorHandler error = exampleObject.DoSomething();
        if (error.HasError)
        {
            if (error.ErrorType == ErrorType.DivideByPotato)
            {
                [...]
            }
        }

    Which way is better? On the one hand, I have heard that exceptions should be used only for truly unexpected situations, and that if the programmer knows something may happen, he should use if/else. On the other hand, Robert C. Martin wrote in his book Clean Code that exceptions are far more object-oriented and simpler to keep clean.

    Read the article

  • iTunes command+tab does not bring the main window to the foreground after command+h

    - by ecoffey
    Related to: Cmd-Tab does not bring iTunes to foreground. People there claim that command+h will continue to work, but the behavior I'm seeing is:

    1. Launch a full-screen Terminal instance (or anything else, really)
    2. Launch a fresh iTunes
    3. command+h to hide it
    4. command+tab back to iTunes; this works and the main app window is given focus
    5. command+tab to Terminal
    6. command+tab to iTunes

    What I expect to happen: the main iTunes window is brought to the foreground and given focus. What does happen: the application menu bar shows iTunes, but an additional command+` is needed to bring the main app to the foreground. I thought it might have been related to a Chrome interaction, since I'm usually command+tabbing between my browser and everything else, but that does not seem to be the case. I do have two "plugins" running: the last.fm scrobbler and the Alfred app mini iTunes player interface. Not sure if either of those creates some phantom window or something that is mixing things up. Versions of stuff: OS X 10.6.5; iTunes 10.1.1 (4); last.fm: can't find it, but it's recent; Alfred 0.8 (89). So after that big long rant, is anyone else seeing this behavior?

    Read the article

  • Interfaces and Virtuals Everywhere????

    - by David V. Corbin
    First a disclaimer; this post is about micro-optimization of C# programs and does not apply to most common scenarios - but when it does, it is important to know. Many developers are in the habit of declaring members virtual to allow for future expansion, or of using interface-based designs (1). Few of these developers think about the runtime performance impact of this decision. A simple test will show that this decision can have a serious impact. For our purposes, we used a simple loop to time the execution of 1 billion calls to both non-virtual and virtual implementations of a method that took no parameters and had a void return type:

        Direct Call:   1.5uS
        Virtual Call: 13.0uS

    The overhead of the call increased by nearly an order of magnitude! Once again, it is important to realize that if the method does anything of significance then this ratio drops quite quickly. If the method does just 1mS of work, then the differential only accounts for a 1% decrease in performance. Additionally, the method in question must be called thousands of times in order to produce a measurable impact at the application level. Yet let us consider a situation such as the per-pixel processing of a graphics processing application. Here we may have a method which is called millions of times, and even the slightest increase in overhead can have significant ramifications. In this case using either explicit virtuals or interface-based constructs is likely to be a mistake. In conclusion, good design principles should always be the driving force behind decisions such as these; but remember that these decisions do not come for free.

    (1) When a concrete class member implements an interface it does not need to be explicitly marked as virtual (unless, of course, it is to be overridden in a derived concrete class). Nevertheless, when accessed via the interface it behaves exactly as if it had been marked as virtual.

    Read the article

  • Brother MFC-J470DW scan function "Check Connection"

    - by user292599
    I have a Brother MFC-J470DW printer that I have connected to a Linux desktop (running Ubuntu 14.04) through a wireless router. The printer works fine for printing and copying, but now I want to add the scan function. To set up the scan function, I went to the Brother web page for this printer: http://support.brother.com/g/b/downloadlist.aspx?c=eu_ot&lang=en&prod=mfcj470dw_us_eu_as&os=128 and under Scanner Drivers selected "Scanner driver 64bit (deb package)", "Scan-key-tool 64bit (deb package)", and "Scanner Setting file (deb package)". For each package, I accepted the EULA and selected "open with Ubuntu Software Center". Then, after the USC window popped up, I clicked Install and the red progress line ran from left to right. In each case, the USC window then showed a green checkmark and the Install button changed to Reinstall (that's how you know it worked). So now I try it out. Hitting the Scan button on the printer, selecting "Scan to file", and hitting OK produces the message "Check Connection". I checked the Brother Linux Information FAQ (scanner) page, and the 14th question seems the same as mine: "When I try to use the scan key on my network connected machine, I receive the error 'Check connection' or I cannot select anything except 'scan to FTP'." I explored the solution given for this FAQ, but found from ifconfig that I am already using eth0, the default setting, so presumably that is not the problem. I also found brscan-skey installed in /usr/bin and ran

        drrm@drrmlinux2:~$ brscan-skey -t
        drrm@drrmlinux2:~$ brscan-skey

    but that didn't help - I still get the "Check connection" message. What can you suggest to fix this problem?

    Read the article

  • How to set up certificate authentication for MS SQL Server 2008 R2?

    - by Stephane
    Hello, I have to connect an (ADO) application running on a standalone Windows 2003 R2 server to a SQL 2008 R2 database that is a member of the domain. I have set up an SQL authentication account for this and hard-coded the password into the connection string, but I wonder whether it would be possible to use certificate-based authentication instead. I haven't been able to find any documentation regarding this apparently new functionality of SQL 2008 R2 anywhere. Could someone kindly point me at some good documentation? Or at least a description of the functionality and whether it could be used in my case or not? Thank you in advance.

    Read the article

  • Port Forwarding: Why do my local sites on 80 work but not those on 8080?

    - by Chadworthington
    I set up my router to forward port 80 to the PC hosting my web site. As a result, I am able to access this URL (don't bother clicking on it, it's just an example): http://my.url.com/ When I click on this link, it works: http://localhost:8080/tfs/web/ I also forwarded port 8080 to the same web server box, but when I try to access this URL I get the error "Page cannot be displayed": http://my.url.com:8080/tfs/web/ I forwarded port 8080 the same way I forwarded port 80. I also turned off Windows Firewall, in case it was blocking port 8080. Any theories why port 80 works but 8080 does not?

    Read the article

  • Graduate expectations versus reality

    - by Bobby Tables
    When choosing what we want to study, and what to do with our careers and lives, we all have some expectations of what it is going to be like. Now that I've been in the industry for almost a decade, I've been reflecting a bit on what I thought (back when I was studying Computer Science) working life as a programmer was going to be like, and how it's actually turning out to be. My two biggest shocks (or should I say, broken expectations) by far are the sheer amount of maintenance work involved in software, and the overall lack of professionalism:

    Maintenance: At uni, we were all told that the majority of software work is maintenance of existing systems. So I knew to expect this in the abstract. But I never imagined exactly how overwhelming it would turn out to be. Perhaps it's something I mentally glazed over, hoping I'd be building cool new stuff from scratch a lot more. But it really is the case that most jobs are overwhelmingly maintenance, bug fixing, and support oriented.

    Lack of professionalism: At uni, I always had the impression that commercial software work is very process-oriented and stringently engineered. I had images of ISO processes, reams of technical documentation, every feature and bug being strictly documented, and a generally professional environment. It came as a huge shock to realise that most software companies operate no differently from a team of students working on a large semester-long project. And I've worked in both the small agile hack shop and the medium-sized corporate enterprise. While I wouldn't say that it's always been outright "unprofessional", it definitely feels like the software industry (on the whole) is far from the strong engineering discipline that I expected it to be.

    Has anyone else had similar experiences? In what ways were your expectations of our profession different from the reality?

    Read the article

  • List of MD/RAID/LVM devices: how to mount them without any further information available?

    - by Jens
    Hello experts, I do not have much skill in Linux. I installed a system two years ago that I now had to reboot, but it seems I did not automate everything with start scripts... My problem: I am missing some mountpoints. I have a list of my RAIDs (excerpt):

        md3 : active (auto-read-only) raid1 sda6[0] sdb6[1]
              97659008 blocks [2/2] [UU]
        md4 : active (auto-read-only) raid1 sda7[0] sdb7[1]
              250099776 blocks [2/2] [UU]

    and it seems md3 and md4 are NOT mounted. However, I do NOT have any entries for them in the fstab file. What should I do next? I do NOT know which filesystem they have (most likely ext3). Can I safely try to mount them with (mount -t ext3 /dev/md3 /mnt/mymntpoint), or will that lead to corrupted data in case they are not ext3? What should I do next (based on the information given above)? The goal is to remount these devices again, but I do not know anything about them anymore... Thank you very much, Jens

    Read the article

  • Cron: job starts but doesn't complete

    - by Guandalino
    I have a problem with a cron job which starts but doesn't complete. Running the command manually works fine. I already read the page about cron issues and solutions here on Ask Ubuntu and tried the proposed solutions, but didn't find an answer that works in my case. I'm using Ubuntu 12.04.

        $ crontab -e
        SHELL=/bin/bash # otherwise it would be /bin/sh
        59 16 * * * /bin/duply calendar backup > /tmp/duply.log

    By the way, the cron file ends with an empty line, as someone pointed out. Once the job has "finished"...:

        $ cat /tmp/duply.log
        Start duply v1.5.7, time is 2012-06-22 16:59:01.

    Instead, running the script manually works correctly and gives this output:

        Start duply v1.5.7, time is 2012-06-22 17:06:39.
        [cut]
        ... here is a long output generated by duply.
        ... and yes, files have been backed up.
        [cut]
        --- Finished state OK at 17:06:42.581 - Runtime 00:00:03.170 ---

    I also tried to restart the cron daemon (sudo service cron restart), but nothing changed. Do you have any suggestions to fix the issue?
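
    One possible tweak to the crontab line above, assuming duply reports its errors on stderr (an assumption, since the truncated log only shows the first stdout line): capture stderr in the same log so the reason the job stops becomes visible. Note also that cron runs jobs with a minimal environment, which is a frequent cause of commands behaving differently than in an interactive shell.

        59 16 * * * /bin/duply calendar backup > /tmp/duply.log 2>&1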

    Read the article

  • Should I be running my scheduled backups as SYSTEM or as our domain admin?

    - by MetalSearGolid
    I have a daily backup which is scheduled through the Task Scheduler. It failed with a strange error code last night, but I was able to search and find a blog post explaining how to avoid the error in the future. However, one of its recommendations was to run the backups as the Administrator user of the domain. Since all of the files being backed up are local to this system, should I continue to have the backups run as SYSTEM? Or is it actually better to run them as a different user? I have been running these backups for well over a year now and have only had a handful of failures, but ironically, when it does fail, the error code means it was a permissions issue (or so I read; this code seems to be undocumented by Microsoft). Thanks in advance for any insight into this. Might as well post the error code here too, in case anyone would like to share their thoughts on it as well, but I rarely ever get this error, so I don't care too much about it: 4294967294

    Read the article

  • What tools should every programmer know?

    - by acidzombie24
    What are some tools every programmer should know about? Some examples I thought of were:

    Source control (no explanation needed).
    A profiler. Many could go without one, but it's good to know how to use one when the occasion arises.

    What else? I was thinking of bug-tracking software, but I haven't used one so I wasn't sure. Should programmers know how to use Trac? I remember in the past a person telling me that if I was making a shared library I should know (Some Name), which generates docs from the source (in that case C++). What was that called, and what else should I know about? -edit- What about team management software? Any software you could not live without in a specific project would be a good mention. I'll also mention that I use VMs frequently during the prototype or end phase, to see if there are any issues on a clean XP or Linux distro and whether I forgot anything in my redistribution. I can't imagine the end/release and testing phase of a project without a VM.

    Read the article

  • Music While Coding [closed]

    - by inspectorG4dget
    Hi SO, generally, while I'm coding, I prefer to listen to some background music. Nothing that'll get me distracted, but something that'll help keep the rhythm and isn't counterproductive when I need to stop coding to debug or to think of a way to solve a small problem that stands in the way of progress. Now, I have read some similar questions on reddit and on SO - specifically "which songs do you find most productive to listen to while coding", "Music while programming" and more. Sadly a lot of these questions were closed as off-topic, etc. But (1) I don't think this question is off-topic, and I think that a lot of programmers can benefit from it; (2) it's a real question. I really want to know what music you guys would recommend, because music helps when I'm coding. It's sad that "SO: Music to listen to while coding" cannot be found, and this isn't of much help. I hope this doesn't get closed. PS: I want to turn this into a community wiki, but I don't seem to know how. I'd appreciate any help. Thank you, all. In response to kirk.burleson's comment: in case the question isn't already clear, I'm asking for recommendations/opinions on music to listen to while coding. I would like to know what you listen to when you code, so that I can try it too. I am running out of good "coding music", and this is a problem for me because good "coding music" helps me code better.

    Read the article

  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Windows 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html, UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used. The problem I have is that once the redirect is set, it also affects /specialdir - even if I right-click on that directory and select "content should come from ... local directory", the change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting - IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?

    Read the article

  • Does it make sense to write build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE files/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in the two very big projects I'm working on, so I want to minimize the dependencies everywhere. Someone suggested that I use C++ to write my build scripts instead of adding a language dependency just for that. The projects themselves already use C++, so there are several advantages that I can see:

    - to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++);
    - C++ type safety (when using modern C++) makes everything easier to get "correct";
    - it's also the language I know best, so I'm more at ease with it, even if I'm able to write some good Python code;
    - potential gain in execution speed (but I don't think it will really be perceptible).

    However, I think there might be some drawbacks, and I'm not sure of the real impact as I haven't tried it yet:

    - it might take longer to write the code (that said, I'm not sure, because I'm efficient enough in C++ to write something that works quickly, so maybe for this system it wouldn't take that long to write) (compilation time shouldn't be a problem in this case);
    - I must assume that all the text files I'll read as input are in UTF-8; I'm not sure this can be easily checked at runtime in C++, and the language will not check it for you;
    - libraries in C++ are harder to manage than in scripting languages;
    - I lack experience and foresight, so maybe I'm missing advantages and drawbacks.

    So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages and disadvantages that might be important?

    Read the article

  • 2D Collision masks for handling slopes

    - by JiminyCricket
    I've been looking at the example at http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and am trying to figure out how to adjust the sprite once a collision has been detected. As David suggested at "XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision", I made a few sensor points (feet, sides, bottom center, etc.) and can easily detect when these points actually collide with non-transparent portions of a second texture (a simple slope). I'm having trouble with the algorithm for how I would actually adjust the sprite position based on a collision. Say I detect a collision with the slope at the sprite's right foot. How can I scan the slope texture data to find the Y position at which to place the sprite's foot so it is no longer inside the slope? The way the data is stored as a 1D array in the example is a bit confusing; should I try to store it as a 2D array instead? For test purposes, I'm thinking of just using the slope texture's alpha itself as a primitive and easy collision mask (no grass bits or anything besides a simple non-linear slope). Then, as in the example, I find the coordinates of any collisions between the slope texture and the sprite's sensors and mark these special sensor collisions as having occurred. Finally, in the case of moving up a slope, I would scan for the first transparent pixel above (in the texture's Ys at that X) the right foot collision point and set that as the new height of the sprite. I'm also a little unclear on when I should make these adjustments. Collisions are checked on every game.update(), so would I quickly change the position of the sprite before the next update is called? I also noticed several people mention that it's best to separate collision checks horizontally and vertically; why is that exactly? I'm open to any suggestions if this is an inefficient or inaccurate way of handling this. I wish MSDN had provided an example of something like this; I didn't know it would be so much more complex than NES-Mario-style pure box platforming!
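
    The tutorial under discussion is XNA/C#, but the "scan upward for the first transparent pixel" step is language-neutral. Below is a small Python sketch of that idea (the function name, the tiny mask and the 0 = transparent convention are illustrative assumptions), using the same row-major 1D layout the XNA sample uses for its texture data, i.e. index = y * width + x:

        # Find where to rest the sprite's foot: scan the collision mask upward
        # from the sensor and return the first transparent row in that column.
        def surface_y_above(alpha, width, sensor_x, sensor_y):
            y = sensor_y
            while y >= 0:
                if alpha[y * width + sensor_x] == 0:  # transparent: nothing solid here
                    return y                          # foot can be placed on this row
                y -= 1
            return None  # the whole column above the sensor is solid

        # Tiny 5x4 mask (0 = air, 1 = slope rising to the right), row-major.
        width = 5
        alpha = [0, 0, 0, 0, 0,
                 0, 0, 0, 0, 1,
                 0, 0, 0, 1, 1,
                 0, 0, 1, 1, 1]
        print(surface_y_above(alpha, width, sensor_x=3, sensor_y=3))  # -> 1

    The same per-column search, ported to C# over the color data the tutorial extracts from the texture, is one way to answer the "where do I put the foot" question without converting the data to a 2D array.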

    Read the article

  • Thick models vs. business logic: where do you draw the distinction?

    - by TokenMacGuy
    Today I got into a heated debate with another developer at my organization about where and how to add methods to database-mapped classes. We use SQLAlchemy, and a major part of the existing code base in our database models is little more than a bag of mapped properties with a class name - a nearly mechanical translation from database tables to Python objects. In the argument, my position was that the primary value of using an ORM is that you can attach low-level behaviors and algorithms to the mapped classes. Models are classes first, and only secondarily persistent (they could be persisted using XML in a filesystem; you don't need to care). His view was that any behavior at all is "business logic", and necessarily belongs anywhere but in the persistent model, which is to be used for database persistence only. I certainly do think that there is a distinction between business logic, which should be separated since it has some isolation from the lower level of how it gets implemented, and domain logic, which I believe is the abstraction provided by the model classes argued about above - but I'm having a hard time putting my finger on what that distinction is. I have a better sense of what might be the API (which, in our case, is HTTP "RESTful"), in that users invoke the API with what they want to do, distinct from what they are allowed to do and how it gets done. tl;dr: What kinds of things can or should go in a method on a mapped class when using an ORM, and what should be left out, to live in another layer of abstraction?
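
    For concreteness, here is a minimal SQLAlchemy sketch of the split being debated. None of these names (Invoice, apply_discount, InvoiceService, the mailer) come from the original post; they are illustrative. The mapped class keeps "domain logic" that touches only its own columns, while orchestration involving permissions, the session and notifications - the "business logic" - lives in a separate layer.

        from decimal import Decimal

        from sqlalchemy import Column, Integer, Numeric, String
        from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+ import path

        Base = declarative_base()

        class Invoice(Base):
            __tablename__ = "invoice"

            id = Column(Integer, primary_key=True)
            customer_email = Column(String(200), nullable=False)
            total = Column(Numeric(10, 2), nullable=False)

            # Domain logic: a rule expressed purely in terms of the model's own data.
            def apply_discount(self, percent):
                self.total = self.total * (Decimal(100) - Decimal(percent)) / Decimal(100)

        class InvoiceService:
            # Business logic: knows about users, permissions and side effects,
            # and stays out of the mapped class.
            def __init__(self, session, mailer):
                self.session = session
                self.mailer = mailer

            def grant_loyalty_discount(self, invoice_id, user):
                if not user.may_edit_invoices:  # policy decision, not model data
                    raise PermissionError("not allowed")
                invoice = self.session.get(Invoice, invoice_id)
                invoice.apply_discount(5)
                self.session.commit()
                self.mailer.send(invoice.customer_email, "A 5% discount was applied.")

    The session, the permission check and the mailer are stand-ins for whatever the surrounding application provides; the point is only where each kind of method ends up living.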

    Read the article

  • CPU load, USB connection vs. NIC

    - by T.J. Crowder
    In general - and understanding that the answer may vary by manufacturer and model (and driver, and...) - in consumer-grade workstations with integrated NICs, does the NIC rely on the CPU for a lot of help (as is typically the case with a USB controller, for instance), or is it fairly intelligent and capable on its own (like, say, the typical FireWire controller)? Or is the question too general to answer? (If it matters, you can assume Linux.) Background: I'm looking at connecting a device (digital television capture) that will be delivering ~20-50 Mbit/sec of data to a somewhat underpowered workstation. I can get a USB 2 High-Speed device or a network-attached device, and I am interested in avoiding impacting the CPU where possible. Obviously, if it's a 100 Mbit NIC, that's roughly half its theoretical inbound bandwidth, whereas it's only roughly a tenth of the 480 Mbit/sec of the USB 2 "High Speed" interface. But if the latter requires a lot of CPU support and the former doesn't...

    Read the article

  • Unity throws SynchronizationLockException while debugging

    - by pjohnson
    I've found Unity to be a great resource for writing unit-testable code, and tests targeting it. Sadly, not all those unit tests work perfectly the first time (TDD notwithstanding), and sometimes it's not even immediately apparent why they're failing. So I use Visual Studio's debugger. I then see SynchronizationLockExceptions thrown by Unity calls, which I never saw while running the code without debugging. I hit F5 to continue past these distractions, the line that had the exception appears to have completed normally, and I continue on to what I was trying to debug in the first place.

    In settings where Unity isn't used extensively, this is just one amongst a handful of annoyances in a tool (Visual Studio) that overall makes my work life much, much easier and more enjoyable. But in larger projects, it can be maddening. Finally it bugged me enough that it was worth researching.

    Amongst the first and most helpful Google results was, of course, one at Stack Overflow. The first couple of answers were extensive but seemed a bit more involved than I could pull off at this stage in the product's lifecycle. A bit more digging showed that the Microsoft team knows about this bug but hasn't prioritized it into any released build yet. SO users jaster and alex-g proposed workarounds that relieved my pain - just go to Debug|Exceptions..., find the SynchronizationLockException, and uncheck it. As others warned, this will skip over SynchronizationLockExceptions in your code that you want to catch, but that wasn't a concern for me in this case. Thanks, guys; I've used that dialog before, but it's been so long I'd forgotten about it.

    Now if I could just do the same for Microsoft.CSharp.RuntimeBinder.RuntimeBinderException... Until then, F5 it is.

    Read the article

  • How to change XFCE to LXDE?

    - by kurp
    I've just installed Linux Mint XFCE, which still seems to be a bit too heavy for my hardware. Thus, I'm thinking about installing LXDE and wonder whether a fresh Linux installation is necessary/advised in such a case. The question is: how do I switch easily? The additional questions are: a) is it necessary to remove the XFCE packages, or may both DEs be installed at the same time? b) is there any way to simply switch between DEs (i.e. have both DEs available and boot with the DE selected each time)? c) are there any performance consequences of having more than one DE installed? That is: is a clean LXDE installation faster than a Linux system with two separate desktops?

    Read the article

  • Why Are Minimized Programs Often Slow to Open Again?

    - by Jason Fitzpatrick
    It seems particularly counterintuitive: you minimize an application because you plan on returning to it later and wish to skip shutting the application down and restarting it later, but sometimes maximizing it takes even longer than launching it fresh. What gives? Today's Question & Answer session comes to us courtesy of SuperUser - a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Bart wants to know why he's not saving any time with application minimization:

        I'm working in Photoshop CS6 and multiple browsers a lot. I'm not using them all at once, so sometimes some applications are minimized to the taskbar for hours or days. The problem is, when I try to maximize them from the taskbar - it sometimes takes longer than starting them! Especially Photoshop feels really weird for many seconds after finally showing up; it's slow, unresponsive and sometimes even totally freezes for a minute or two. It's not a hardware problem, as it's been like that since always on all my PCs. Would I also notice it after upgrading my HDD to an SSD and adding RAM (my main PC holds 4 GB currently)? Could guys with powerful PCs / Macs tell me - does it also happen to you? I guess OSes somehow "focus" on active software and move all the resources away from the ones that run but are not used. Is it possible to somehow set RAM / CPU / HDD priorities or something for, let's say, Photoshop, so it won't slow down after a long period of inactivity?

    So what is the deal? Why does he find himself waiting to maximize a minimized app?

    The Answer

    SuperUser contributor Allquixotic explains why:

        Summary: The immediate problem is that the programs that you have minimized are being paged out to the "page file" on your hard disk. This symptom can be improved by installing a Solid State Disk (SSD), adding more RAM to your system, reducing the number of programs you have open, or upgrading to a newer system architecture (for instance, Ivy Bridge or Haswell). Out of these options, adding more RAM is generally the most effective solution.

        Explanation: The default behavior of Windows is to give active applications priority over inactive applications for having a spot in RAM. When there's significant memory pressure (meaning the system doesn't have a lot of free RAM if it were to let every program have all the RAM it wants), it starts putting minimized programs into the page file, which means it writes out their contents from RAM to disk, and then makes that area of RAM free. That free RAM helps programs you're actively using - say, your web browser - run faster, because if they need to claim a new segment of RAM (like when you open a new tab), they can do so. This "free" RAM is also used as page cache, which means that when active programs attempt to read data on your hard disk, that data might be cached in RAM, which prevents your hard disk from being accessed to get that data. By using the majority of your RAM for page cache, and swapping out unused programs to disk, Windows is trying to improve responsiveness of the program(s) you are actively using, by making RAM available to them, and caching the files they access in RAM instead of the hard disk. The downside of this behavior is that minimized programs can take a while to have their contents copied from the page file, on disk, back into RAM. The time increases the larger the program's footprint in memory. This is why you experience that delay when maximizing Photoshop.

        RAM is many times faster than a hard disk (depending on the specific hardware, it can be up to several orders of magnitude). An SSD is considerably faster than a hard disk, but it is still slower than RAM by orders of magnitude. Having your page file on an SSD will help, but it will also wear out the SSD more quickly than usual if your page file is heavily utilized due to RAM pressure.

        Remedies: Here is an explanation of the available remedies, and their general effectiveness:

        Installing more RAM: This is the recommended path. If your system does not support more RAM than you already have installed, you will need to upgrade more of your system: possibly your motherboard, CPU, chassis, power supply, etc., depending on how old it is. If it's a laptop, chances are you'll have to buy an entire new laptop that supports more installed RAM. When you install more RAM, you reduce memory pressure, which reduces use of the page file, which is a good thing all around. You also make available more RAM for page cache, which will make all programs that access the hard disk run faster. As of Q4 2013, my personal recommendation is that you have at least 8 GB of RAM for a desktop or laptop whose purpose is anything more complex than web browsing and email. That means photo editing, video editing/viewing, playing computer games, audio editing or recording, programming / development, etc. all should have at least 8 GB of RAM, if not more.

        Run fewer programs at a time: This will only work if the programs you are running do not use a lot of memory on their own. Unfortunately, Adobe Creative Suite products such as Photoshop CS6 are known for using an enormous amount of memory. This also limits your multitasking ability. It's a temporary, free remedy, but it can be an inconvenience to close down your web browser or Word every time you start Photoshop, for instance. This also wouldn't stop Photoshop from being swapped when minimizing it, so it really isn't a very effective solution. It only helps in some specific situations.

        Install an SSD: If your page file is on an SSD, the SSD's improved speed compared to a hard disk will result in generally improved performance when the page file has to be read from or written to. Be aware that SSDs are not designed to withstand a very frequent and constant random stream of writes; they can only be written over a limited number of times before they start to break down. Heavy use of a page file is not a particularly good workload for an SSD. You should install an SSD in combination with a large amount of RAM if you want maximum performance while preserving the longevity of the SSD.

        Use a newer system architecture: Depending on the age of your system, you may be using an out-of-date system architecture. The "system architecture" is generally defined as the "generation" (think generations like children, parents, grandparents, etc.) of the motherboard and CPU. Newer generations generally support faster I/O (input/output), better memory bandwidth, lower latency, and less contention over shared resources, instead providing dedicated links between components. For example, starting with the "Nehalem" generation (around 2009), the Front-Side Bus (FSB) was eliminated, which removed a common bottleneck, because almost all system components had to share the same FSB for transmitting data. This was replaced with a "point to point" architecture, meaning that each component gets its own dedicated "lane" to the CPU, which continues to be improved every few years with new generations. You will generally see a more significant improvement in overall system performance depending on the "gap" between your computer's architecture and the latest one available. For example, a Pentium 4 architecture from 2004 is going to see a much more significant improvement upgrading to "Haswell" (the latest as of Q4 2013) than a "Sandy Bridge" architecture from ~2010.

        Links: Related questions: How to reduce disk thrashing (paging)? Windows Swap (Page File): Enable or Disable? Also, just in case you're considering it, you really shouldn't disable the page file, as this will only make matters worse; see here. And, in case you needed extra convincing to leave the Windows Page File alone, see here and here.

    Read the article

  • Strange robots.txt - how and why did it get there?

    - by Mick
    I recently created a very simple, pure-HTML website which I have hosted with HostMonster. HostMonster had very good reviews on some comparison website and in general, and so far they appear to be perfectly good in every way... at least I thought so until just now. I have been making lots of edits to my site on an almost daily basis. My site now appears on the first page (7th on the list) of a Google search for my most important keyphrase. But I did notice a problem with the snippet chosen by Google. I asked a question on this site about snippets and got some great answers. I then made some modifications to my metadata, and within 48 hours the Google snippet for my search was perfect. The odd thing, though, was that looking at the "cached" version Google had, it appeared that the cache was still very old - from three weeks previous. This seemed very odd - how could the Google robots have read my new metadata without updating the cache? This puzzled me greatly. Just now it occurred to me that maybe I had some goofy setting in my robots.txt file. I didn't actually remember even making one - but I thought I'd have a look just in case. Much to my horror, I saw that there was a robots.txt and it contained the disturbing text below:

        sitemap: http://cdn.attracta.com/sitemap/728687.xml.gz

    Intuitively this looks like some kind of junk or spam trick, and I had indeed been getting some spam from "attracta". So my questions are: 1. Should I simply delete this robots.txt? 2. Was the file there all along - placed there because of some commercial tie-in between Attracta and HostMonster? 3. Does the Attracta robots file explain the lack of re-caching?

    Read the article

  • Proxmox - Uploading disk image

    - by davids
    I've got a KVM virtual machine on my local PC, and I'd like to copy it to a Proxmox server. According to the docs, I just have to create a new VM on Proxmox and add the existing disk image to it, but how do I upload the image to the server? In the admin panel, if I click on MyStorage - Content - Upload, it just gives me options to upload ISOs, VZDump backup files or OpenVZ templates. Would it be enough to copy the image over with scp? In that case, into which folder?

    Read the article

  • Setting up DKIM for multiple domains on same host

    - by modulaaron
    I have DKIM set up for one domain and it works properly. I am trying, though, to set it up for another domain name on the same machine. In short, I am sending registration and password-recovery emails from one domain and everything else from the other domain. Both domains map to the same host. Setting up DomainKeys in this manner was no problem - adding another "DAEMON_OPTS=" line in /etc/default/dk-filter was the solution. This is not the case for DKIM, though, since it stores this information in a configuration file (/etc/dkim-filter.conf) that is formatted in a completely different manner. Any help would be most appreciated. Thanks.

    Read the article
