Search Results

Search found 21853 results on 875 pages for 'point'.


  • Speed up loading of test results from builds in Visual Studio

    - by Jakob Ehn
    I still see people complaining about the long time it takes to load test results from a TFS build in Visual Studio. And they make a valid point: it does take a very long time to load the test results, even for a small number of tests. The reason is that the test results are not just the outcome of the test run but also all the binaries that were part of the test run. This often also means that the debug symbols (*.pdb) will be downloaded to your local machine. The reason for this behaviour is that it lets you re-run the tests locally. However, most of the time this is not what the developer will do; they just want to know which tests failed and why. They can then fix the tests and rerun them locally. It turns out there is a way to load only the test results, which is much faster. The only tricky bit is to find the location of the .trx file that is generated during the build, particularly in TFS 2010 where you often have multiple build agents, which of course results in different paths to the .trx file. Note: to use this you must have read permission to the build folder on the build agent where the build was executed. Open the build result for the build, click View Log, and locate the part where MSTest is invoked (when using test containers, it looks like this). Note: you can actually search in the log window; press Ctrl+F and you will get a little search box at the bottom. Nice! On the MSTest command line call, locate the /resultsfileroot parameter, which points to the folder where the test results are stored. Note that this path is local to the build server, so you need to replace the drive letter with the server name: D:\Builds\Project\TestResults becomes \\<BuildServer>\Project\TestResults. Double-click on the .trx file and you will notice that it loads much faster compared to opening it from the build log window.

    Read the article

  • Wget - if / else download condition?

    - by Kai
    I want wget to prefer a certain filetype over another if the files have the same basename. For example: if foo.ogg is available, don't download foo.mp3. The way I use wget so far to crawl/automatically download (if anyone is interested): wget -Dfoo.com -I /folder/ -r -l 1 -nc -A.ogg,.mp3 -i http://www.foo.com/folder/ but this, of course, gets me .mp3 AND .ogg files. It often also gets me image files like .png which I didn't want in the first place, and discards them afterwards. Any ideas? (Syntax explanation: -D: download only from this domain; -I: download only from this subfolder of the domain; -r: recursive (follow links and directory structure); -l 1: follow only 1 link deep; -nc: no clobber, i.e. download only if the file doesn't exist; -A: accept/download only *.ogg and *.mp3 (discard the necessary html files); -i: download URL / starting point)
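
    Wget has no built-in if/else logic for this, so one workaround (not from the question; a sketch assuming the crawl command above and GNU bash) is to run the crawl as-is and then delete each .mp3 whose .ogg counterpart was also downloaded:

        # after the wget run, drop any foo.mp3 for which foo.ogg exists in the same directory
        find . -type f -name '*.ogg' | while read -r ogg; do
            rm -f -- "${ogg%.ogg}.mp3"
        done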

    Read the article

  • CompTIA A+ exam

    - by SysPrep2010
    Hello everyone, I have been in the IT field for only two years. I have been dealing with servers, firewalls, routers, switches, backup servers, and desktops. For the desktops, I have been dealing with WDS (Windows Deployment Services), not a lot of hardware. My question is this: is it really important to have an A+ cert under your belt? I don't see the point anymore. When a desktop goes down, from what I have been seeing, they just buy a new one. I mean, I can rebuild systems (they are fun), but I haven't done it in a while.

    Read the article

  • Group policy issues

    - by Alex Berry
    We are having an issue on one of our clients' relatively new SBS installs. The domain consists of a single SBS 2011 server with 4 Windows 7 clients and 3 XP clients. Most of the time everything is fine; however, roughly every 3 days the Windows 7 clients start timing out when trying to receive computer group policy. This results in hour-long delays before getting to the login screen in the morning. It is accompanied by event ID 6006, Winlogon errors stating it took 3599 seconds to process policy. Once they've booted they can log in without issue; however, gpupdate fails again on computer policy and gpresult comes back with access denied, even when run as domain admin... At this point, if we restart the server the network is fine for another 3 days. I thought perhaps it might be IPv6 or SMB2, but disabling IPv6 on the clients doesn't help, and the clients can browse the sysvol folder freely over SMB2 anyway. Does anyone have any ideas or routes I can take to further diagnose the issue? Thanks in advance :)

    Read the article

  • What is the simplest, open-source webmail frontend available?

    - by josePhoenix
    I am working on a project to create a few extremely stripped down interfaces for common Web/Internet tasks in order to make computers accessible to my visually impaired grandmother. Currently she uses Mac OS X Mail.app, but I had the idea that I could re-skin a webmail interface running on my own server to make it easier for her to use. The ideal webmail interface to use as a starting point would be without frames or AJAX and written in Python, Perl, or PHP5+, though any setup could work as long as the template and stylesheet files were separate from the application itself. This frontend must also connect to a remote IMAP server, since her email account is with her ISP and not on my server. Can anyone recommend a bare-bones, no-nonsense webmail interface that would work for this?

    Read the article

  • Mountain Lion fails to connect to Windows share after the connection is interrupted

    - by T Reddy
    I have a Windows 7 share that my Mountain Lion MacBook Pro connects to. The Windows share is simply a user account. For whatever reason, when my connection gets interrupted, the Mac will show a dialog saying as much and will ask me to ignore or disconnect. From this point forward, I cannot re-establish the connection from the Mac to the Windows share (even if I reboot the Mac). I always have to reboot the Windows machine in order for my Mac to see the share again. My Windows share is my media center, so I'm not always able to reboot the machine because it is recording TV. Has anybody else encountered this problem, and if so, how is it resolved?
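
    One thing worth trying before rebooting the Windows box (not from the question; a sketch that uses a hypothetical share name, hostname and mount point) is to force OS X to drop the stale SMB session from the Terminal and reconnect:

        # force-unmount the stale mount so OS X discards the dead SMB session
        sudo umount -f /Volumes/MediaShare        # "MediaShare" is a placeholder name
        # recreate the mount point and reconnect to the Windows 7 share
        mkdir -p /Volumes/MediaShare
        mount_smbfs //username@mediacenter/MediaShare /Volumes/MediaShare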

    Read the article

  • Changing Servers - Redirect to new IP = No Downtime?

    - by Denis Pshenov
    I am changing servers for my website. The IP of the old server cannot be moved to the new one. To have no downtime I am planning to do the following; could someone please confirm it will work: set up the new server and listen on the new IP; have the old server redirect all traffic to the new IP; change the DNS records to point to the new IP. My logic tells me that when I redirect to the new IP from my old box, the user will not see the domain name in the browser but will see the new IP. Is there a way to redirect to the new IP and send along the HOSTNAME with it so that the user will see the domain name in the browser? I'm doing this because the site is in constant use, and simply changing DNS settings won't do, as the database won't be synced between the new and old servers during propagation.
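
    One way to do the middle step without an HTTP redirect at all (not from the question; a sketch assuming the old server runs Linux and using the placeholder address 203.0.113.10 for the new server) is to forward the packets at the network level on the old box, so the browser's address bar and the Host header never change:

        # forward incoming web traffic from the old server to the new one
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A PREROUTING  -p tcp --dport 80 -j DNAT --to-destination 203.0.113.10
        # rewrite the source so replies from the new server route back through the old box
        iptables -t nat -A POSTROUTING -p tcp -d 203.0.113.10 --dport 80 -j MASQUERADE
        # note: the new server will then log the old server's IP as the client address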

    Read the article

  • Positioning a sprite in XNA: Use ClientBounds or BackBuffer?

    - by Martin Andersson
    I'm reading a book called "Learning XNA 4.0" written by Aaron Reed. Throughout most of the chapters, whenever he calculates the position of a sprite to use in his call to SpriteBatch.Draw, he uses Window.ClientBounds.Width and Window.ClientBounds.Height. But then all of a sudden, on page 108, he uses PresentationParameters.BackBufferWidth and PresentationParameters.BackBufferHeight instead. I think I understand what the back buffer and the client bounds are and the difference between the two (or perhaps not?). But I'm mighty confused about when I should use one or the other when it comes to positioning sprites. The author for the most part uses the client bounds both for checking whether a moving sprite is off the screen and for finding a spawn point for new sprites. However, he seems to make two exceptions to this pattern in his book. The first is when he wants some animated sprites to "move in" and cross the screen from one side to another (page 108, as mentioned). The second and last is when he positions a texture to work as a button in the lower right corner of a Windows Phone 7 screen (page 379). Anyone got an idea? I shall provide some context if it is of any help. Here's how he usually calls SpriteBatch.Draw (code example from where he positions a sprite in the middle of the screen [page 35]):

        spriteBatch.Draw(texture,
            new Vector2(
                (Window.ClientBounds.Width / 2) - (texture.Width / 2),
                (Window.ClientBounds.Height / 2) - (texture.Height / 2)),
            null, Color.White, 0, Vector2.Zero, 1, SpriteEffects.None, 0);

    And here is the first case of four possible in a switch statement that sets the position of soon-to-be-spawned moving sprites; this position will later be used in the SpriteBatch.Draw call (page 108):

        // Randomly choose which side of the screen to place enemy,
        // then randomly create a position along that side of the screen
        // and randomly choose a speed for the enemy
        switch (((Game1)Game).rnd.Next(4))
        {
            case 0: // LEFT to RIGHT
                position = new Vector2(
                    -frameSize.X,
                    ((Game1)Game).rnd.Next(0,
                        Game.GraphicsDevice.PresentationParameters.BackBufferHeight - frameSize.Y));
                speed = new Vector2(((Game1)Game).rnd.Next(
                    enemyMinSpeed, enemyMaxSpeed), 0);
                break;

    Read the article

  • Kernel panic error

    - by cioby23
    We have a dedicated server with software RAID1 and one of the disks failed recently. The disk was replaced, but after rebuilding the array and rebooting, the server freezes with a kernel panic message:

        No filesystem could mount root, tried: reiserfs ext3 ext2 cramfs msdos vfat iso9660 romfs fuseblk xfs
        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(9,1)

    The filesystem on both disks is ext4. It seems the kernel can't load ext4 support. Is there any way to add ext4 support, or do I need to recompile a new kernel? Interestingly, before the disk replacement everything was fine. The kernel is a stock bzImage-2.6.34.6-xxxx-grs-ipv6-64 kernel from our provider, OVH. Kind regards,

    Read the article

  • Virtualbox shared folder mount from fstab fails; works once bootup is complete

    - by Ben
    I've got Ubuntu 13.10 installed in VirtualBox 4.3. The host machine is Windows. I have a couple of VirtualBox shared folders being mounted by /etc/fstab. Until recently this setup worked just fine, but after upgrading from Ubuntu 13.04 and VirtualBox 4.2 (at essentially the same time) the fstab mounting stopped working. I get the following error during boot:

        An error occurred while mounting /home/benme/Documents.
        Press S to skip mounting or M for manual recovery

    Pressing M for manual recovery and then trying to mount manually also fails:

        root@benme-vb:~# cd /home/benme
        root@benme-vb:/home/benme# mount Documents
        /sbin/mount.vboxsf: mounting failed with the error: No such device

    But if I instead skip mounting during boot, wait for Unity to start and then mount manually in a shell, everything works fine:

        benme-vb ~ % ls Documents
        benme-vb ~ % sudo mount Documents
        [sudo] password for benme:
        benme-vb ~ % ls Documents   # actual file list omitted

    Note that when I mount manually I'm letting mount take all the options from /etc/fstab, and it works. This suggests to me that it's some sort of timing issue, where VirtualBox isn't "ready" to provide the shared-folder mounts at the point the /etc/fstab mounts are run during bootup. Here's the fstab line, just for completeness:

        Documents /home/benme/Documents vboxsf uid=benme,gid=benme,dmode=774,fmode=664 0 0

    Is there something I can do about this from the Ubuntu side? Or does anyone happen to know more about this from the VirtualBox angle? I've found an old report on the VirtualBox bug tracker with identical symptoms, but in that case the user had updated VirtualBox without updating their guest additions, and resolving that fixed the problem; this isn't happening here, I've definitely got the 4.3 guest additions installed.
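
    A workaround that is often suggested for exactly this symptom (not from the thread; a sketch that reuses the fstab options above and assumes the standard Ubuntu /etc/rc.local mechanism) is to stop mounting the share at fstab time and mount it late in the boot sequence instead, once the vboxsf module is available:

        # mark the share "noauto" in /etc/fstab so boot no longer blocks on it, then
        # add this mount to /etc/rc.local (runs near the end of boot), just above "exit 0":
        mount -t vboxsf -o uid=benme,gid=benme,dmode=774,fmode=664 Documents /home/benme/Documents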

    Read the article

  • Router's ssid changes from infrastructure to ad-hoc

    - by waldo
    For a period of time the router's SSID is shown (on various computers) as a normal infrastructure network: computers connect fine and everything works. However, after a few minutes / hours all computers see the same SSID as an ad-hoc network (not infrastructure). At this point a computer that was already connected continues to work; a computer that isn't cannot connect. Rebooting the router temporarily restores the visibility of the correct infrastructure SSID. Is something interfering? Connecting computers: MacBook (2009), iPhone 3G, Windows Vista desktop, Windows XP desktop. Details:
    - D-Link DSL-2740B router set to WPA2-PSK (Personal)
    - Enable Wireless: Yes
    - Wireless Network Name (SSID): ######
    - Country: Australia
    - Wireless Channel: 1
    - 802.11 Mode: Mixed 802.11n, 802.11g and 802.11b
    - Channel Width: Auto 20/40 MHz
    - Transmission Rate: Best (automatic)
    - Hide Wireless Network: No
    - Group Key Update Interval: 0 (seconds)

    Read the article

  • Unavailable packages repository

    - by bitmask
    I'm running Ubuntu 11.10 (Oneiric) on this machine, and suddenly apt is unable to update properly. If I ask it to update its package information by running apt-get update (or alternatively telling the update manager to "check"), it succeeds for about 120 packages (more precisely, I get about 120 Ign/Hit notes) and then says it cannot find the universe sources and restricted amd64 packages:

        Hit http://de.archive.ubuntu.com oneiric-backports/multiverse Translation-en
        Hit http://de.archive.ubuntu.com oneiric-backports/restricted Translation-en
        Hit http://de.archive.ubuntu.com oneiric-backports/universe Translation-en
        Err http://de.archive.ubuntu.com oneiric/universe Sources  404 Not Found [IP: 141.30.13.20 80]
        Err http://de.archive.ubuntu.com oneiric/restricted amd64 Packages  404 Not Found [IP: 141.30.13.20 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/universe/source/Sources  404 Not Found [IP: 141.30.13.20 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/restricted/binary-amd64/Packages  404 Not Found [IP: 141.30.13.20 80]
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    I manually checked the de server and cannot find anything wrong with the stuff it's complaining about. It also looks pretty much like, say, the us mirror. But oddly enough, the IP it lists seems to point to a Debian package server, which obviously does not contain Ubuntu packages. So, is this a local problem that I can fix somehow (and if so, how?), or is there actually some server down right now?
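
    One quick way to tell whether the German mirror itself is at fault (a sketch, not from the question; it assumes the mirror entries live in /etc/apt/sources.list) is to point the affected lines at the main archive temporarily and re-run the update:

        # back up the current list, switch de.archive.ubuntu.com to the main archive, and retry
        sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
        sudo sed -i 's|http://de\.archive\.ubuntu\.com|http://archive.ubuntu.com|g' /etc/apt/sources.list
        sudo apt-get update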

    Read the article

  • Oracle RAC interconnect in a Dell M1000e Blade Enclosure

    - by Antitribu
    We are looking at a Dell M1000e enclosure and appropriate blades with 4 NICs each. We are planning on running Linux/Oracle 11g RAC on two blades; storage will be handled on an iSCSI SAN, for which two NICs (via passthrough) will be connected, leaving us with two NICs (via blade centre switches). We would like to have an interconnect (obviously), an external IP and an internal IP. Would best practice be to:
    - bond the remaining two interfaces and VLAN as appropriate to provide three virtual interfaces?
    - run the interconnect on one interface and VLAN the external/internal interfaces?
    - purchase a blade with more NICs, because the above is a terrible idea?
    - another option?
    Please feel free to point out the blindingly obvious or to point me at relevant documentation on support.oracle. I am specifically interested in supported configurations and best practices. Thanks!

    Read the article

  • How do I access the "Deny" message from a Lidgren client?

    - by TJ Mott
    I'm using the Lidgren v3 network library for a UDP client/server networking model. On the server end, I'm initializing a NetServer object with the NetIncomingMessage.ConnectionApproval message type enabled. So the client is able to successfully connect, and the first packet it sends is a login packet containing a username and password supplied by the user. The server receives that and does some black magic to authenticate, and everything works up to that point. If the login fails, the server calls NetIncomingMessage.SenderConnection.Deny("Invalid Login Credentials"). I want to know how to properly receive this deny message on the client. I'm getting the message; it shows up with a message type of NetIncomingMessage.StatusChanged. If I call ReadString on that message, I get a corrupted version of the string I passed to the Deny method on the server. The type of corruption varies; I've seen odd characters in there, but in every case it's truncated and is way shorter than the string I entered. Any ideas? The official documentation is sparse on this topic. I could use pointers from anyone who has successfully used the Lidgren library and uses the Accept or Deny methods. Also, if I don't do any authentication and just Approve() the connection every time, stuff actually works just fine and I'm getting reliable two-way UDP traffic. (And lastly, Stack Exchange said I don't have enough reputation to use the "Lidgren" tag....???)

    Read the article

  • Belkin wireless router unable to connect to the internet although the wireless connection is working

    - by ptamzz
    I have a Belkin Basic N150 wireless router. I'm trying to set up a wireless connection using the wired ports my university has provided in my hostel room. Usually, when I connect my laptop using a LAN wire through the port, my settings are: IP: 10.5.130.X, Subnet Mask: 255.255.254.0, Default Gateway: 10.5.130.250, DNS Server: 10.200.1.11, and I'm able to connect to the internet. Now, instead of connecting my laptop directly, I've connected the LAN wire to the Belkin wireless router, set the router to "Use as an Access Point" and, in the IP field, put 10.5.130.1. I've then set the IP of my system manually to 10.5.130.3. I'm able to connect to the Wi-Fi but I'm still not able to connect to the internet. What am I missing?

    Read the article

  • Forum engine with full LDAP integration [closed]

    - by Andrian Nord
    We are looking for a forum engine which can actually maintain its user data in LDAP, maybe via mods. The core point is the ability to maintain the data, i.e. all user profile settings, like nickname, password, email, avatar, birthday and others (preferably configurable). One example of good LDAP integration, at the level I'm expecting, is Drupal's LDAP integration, which allows mapping any user attribute into LDAP and keeps it in sync with the database. A year ago I did some research over existing free/FOSS engines and found a few forum engines with LDAP integration, namely SFM, phpBB and something else. The most maintained solution was provided by phpBB3, which supports LDAP integration out of the box, but it is unable to sync data with changes made on the LDAP server by other software. Actually it wasn't even propagating changes back, never mind the ability to map additional attributes (other than name/password/email). Also, I haven't found any forum whose architecture has a proper abstraction over user settings, so I doubt that these engines (including phpBB) could be modded to add such functionality without introducing dramatic changes to the core codebase. More recent research showed that even some commercial software, like IPB, is unable to keep its database synced with an LDAP directory and map additional attributes. In other words, all the support I've seen so far is simple user creation upon a user's first login, which is not good for us, as the forum is not the primary site and should not maintain its own user base (to reduce the risk of possible collisions). LDAP integration is required because many other services (FTP, email, Jabber, the Drupal site) use the same user base. Currently we have a forum embedded into the Drupal site, but we are unsatisfied with its features. BTW, we are using Linux, and this is not a duplicate of this question, as its author seems to be satisfied with the behaviour described above. So, my question is: are there any (preferably FOSS and free) forum engines that can import, export, keep in sync, or otherwise integrate with an LDAP user database (preferably with the ability to map additional fields to LDAP attributes)?

    Read the article

  • Keyboard doesn't let me press certain keys at the same time

    - by kitchen
    I'm not sure how to word the problem other than that I can't use certain keys at the same time. For example, when playing games that require you to use the arrow keys to move and jump/duck, I am unable to move to the left and jump (left arrow + up arrow) at the same time. As a result, I don't play many games once I get to a point where the jumps and whatnot are too far. This happens with other keys as well: in an FPS I am unable to hold W to move forward and hit 2 to select my secondary weapon. Some information that might help you: I am using Windows 7 64-bit, and I am using a Micro Innovations KB565BL keyboard. How can I fix this?

    Read the article

  • Is it possible to render and style a <title> element from within the <head> of an html document?

    - by Brian Z
    Is it possible to render and style a <title> element from within the <head> of an HTML document? I thought it was impossible to render information from the <head>, but the system status page for 37signals.com seems to be doing just that: http://status.37signals.com/. If you inspect the element at the very top of the page, the text that reads "37signals System Status", you'll see that the part of the DOM that is generating the text is the <head>'s <title>, and the CSS is as follows:

        title {
            display: block;
            margin: 10px auto;
            max-width: 840px;
            width: 100%;
            padding: 0 20px;
            float: left;
            color: black;
            text-rendering: optimizelegibility;
            -moz-box-sizing: border-box;
            box-sizing: border-box;
        }

    Can someone confirm that the <title> info from the <head> is indeed what is being rendered? If so, can someone point to documentation that defines this capability, as I have not found any? I have applied the above CSS to an HTML document on my local web server using the same browser (Chromium, OS X 10.8.5) as the 37signals site was viewed on, yet my file did not display the <head>'s <title>.

    Read the article

  • Ubuntu Froze Keyboard and mouse (laptop)

    - by fernando
    Something similar to what happened to me is described in the post "Updates kill keyboard and mouse"; unfortunately, I'm stuck there. I also read on a couple of other threads that I should use recovery mode, but when I select the option from GRUB it stops at a certain point, and the screen that would allow me to fix packages never appears. I decided to diagnose the computer and test the RAM; so far everything seems to be going well. But this whole thing happened when I was doing an update of around 230 MB... I still haven't found a solution to the frozen keyboard and mouse (trackpad). If all else fails, can I just reinstall Ubuntu? Would that fix the issue? What else can I try? BTW, I'm not great with coding, so if there is anything I need to type with the correct syntax or anything, please guide me through it. I've had Ubuntu for literally one day, and this happens. Any suggestions would be appreciated.

    Read the article

  • Best way to bring a system down with a "maintenance" message?

    - by iftrue
    What's the best way to bring down an apache2/tomcat6 setup for maintenance? Specifically, apache2 can stay running, but tomcat needs to restart to accomplish a number of tasks. My initial thought is to change the root directory in the httpd.conf VirtualHost entry to point to a new location, then issue a force-reload command to direct traffic away from the actual tomcat application. After some period of time, I perform tomcat maintenance, switch the VirtualHost entry, and force-reload to begin directing traffic back. Is there a better way to do this? I'm looking to start work on a rather extensive web application, and my deployment procedure right now involves shutting everything down and bringing everything back up. Is there a better way to do this than what I've proposed?
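
    For what it's worth, a common pattern for this kind of switch (not from the question; it assumes mod_rewrite is enabled and that the VirtualHost already contains a RewriteCond/RewriteRule pair that answers with a 503 maintenance page whenever a flag file exists) is to toggle a flag file rather than editing httpd.conf at all; a rough sketch of the operational side:

        # enable maintenance mode: the assumed rewrite rule starts serving the static page with a 503
        touch /var/www/maintenance.flag          # placeholder path; must match the RewriteCond
        # restart Tomcat and run the deployment tasks while Apache answers with the maintenance page
        service tomcat6 restart
        # ... deployment / maintenance work ...
        # disable maintenance mode again
        rm -f /var/www/maintenance.flag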

    Read the article

  • Real time mirroring between two sql server databases

    - by Matt Thrower
    Hi, I'm a C# programmer, not a DBA, and I've had the (mis)fortune to be handed a database admin task, so please bear this in mind when answering this question. What I've been asked to do is to create a real-time two-way mirror between two databases with a 10 Megabit connection between them, so that when either changes it updates the other. This is not a standard data mirroring/failover task where one DB is the master and the other is a backup: both are live and each needs to instantly reflect changes made to the other. In my head this sounds like a tall order, one which may even be impossible; after all, in a rapidly changing environment with lots of users this is going to be massively resource intensive and create locks and queues of jobs all over the place. Is it possible? If so, can anyone either give me some basic instructions and/or point me at some places to start my reading and research? Cheers, Matt

    Read the article

  • Cannot install SQL Server (2012) PowerPivot for SharePoint, always fails SharePoint version check

    - by ProfessionalAmateur
    We are trying to do a fresh install of SharePoint 2010 (w/ SP1) and SQL Server 2012 PowerPivot for SharePoint. The prerequisites clearly show that SharePoint 2010 SP1 is needed, which we have installed. However, when trying to install the SQL Server portion we consistently fail the 'SharePoint version requirement for PowerPivot for SharePoint' validation rule in the SQL Server install process. Here is the process we are following: 1. install SharePoint 2010; 2. install SharePoint 2010 SP1; 3. install SQL Server 2012 PowerPivot for SharePoint. Here is a screenshot of the error and the log file error. We are completely stuck at this point; has anyone run into this before?

    Read the article

  • New partnership allows auto-transposition of client/server application to Windows Azure

    - by Webgui
    The economics of IT is changing rapidly, and organizations are looking to widen and secure the availability of their systems while at the same time lowering costs, which is exactly what the cloud is meant to do. Running your systems on Microsoft's Windows Azure cloud, for example, would improve and secure the availability, accessibility and scalability (both up and down) of your systems and support the new IT economics. However, in order to take advantage of the cloud's promise of lower cost of ownership, the applications must be built or adjusted to work on that platform, and in most cases this is not a simple task. Even existing web applications cannot always be transferred to Azure without some changes, and for client/server applications the task is far more challenging, even to the point where it seems impossible. The reason is the gap between client/server desktop technology and the cloud's. For that reason, most of the known methodologies for migrating existing client/server applications actually involve rewriting the desktop systems for the cloud. A unique approach is introduced by Visual WebGui, which creates a virtualization layer atop the ASP.NET web server, moves the transformed or generated .NET code to that layer, and then, using a patent-pending protocol, renders a user interface within a plain browser. The end result is pure .NET code that serves as the base code for a rich web application, and now, due to a collaboration with Microsoft Windows Azure, Visual WebGui provides the shortest path from client/server to the Azure cloud by being able to handle close to 95% of the transformation to the cloud platform automatically. Application migration to Azure without migraines: more information about the Instant CloudMove Azure solution here.

    Read the article

  • Why Are Minimized Programs Often Slow to Open Again?

    - by Jason Fitzpatrick
    It seems particularly counterintuitive: you minimize an application because you plan on returning to it later and wish to skip shutting the application down and restarting it later, but sometimes maximizing it takes even longer than launching it fresh. What gives? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Bart wants to know why he’s not saving any time with application minimization: I’m working in Photoshop CS6 and multiple browsers a lot. I’m not using them all at once, so sometimes some applications are minimized to taskbar for hours or days. The problem is, when I try to maximize them from the taskbar – it sometimes takes longer than starting them! Especially Photoshop feels really weird for many seconds after finally showing up, it’s slow, unresponsive and even sometimes totally freezes for minute or two. It’s not a hardware problem as it’s been like that since always on all on my PCs. Would I also notice it after upgrading my HDD to SDD and adding RAM (my main PC holds 4 GB currently)? Could guys with powerful pcs / macs tell me – does it also happen to you? I guess OSes somehow “focus” on active software and move all the resources away from the ones that run, but are not used. Is it possible to somehow set RAM / CPU / HDD priorities or something, for let’s say, Photoshop, so it won’t slow down after long period of inactivity? So what is the deal? Why does he find himself waiting to maximize a minimized app? The Answer SuperUser contributor Allquixotic explains why: Summary The immediate problem is that the programs that you have minimized are being paged out to the “page file” on your hard disk. This symptom can be improved by installing a Solid State Disk (SSD), adding more RAM to your system, reducing the number of programs you have open, or upgrading to a newer system architecture (for instance, Ivy Bridge or Haswell). Out of these options, adding more RAM is generally the most effective solution. Explanation The default behavior of Windows is to give active applications priority over inactive applications for having a spot in RAM. When there’s significant memory pressure (meaning the system doesn’t have a lot of free RAM if it were to let every program have all the RAM it wants), it starts putting minimized programs into the page file, which means it writes out their contents from RAM to disk, and then makes that area of RAM free. That free RAM helps programs you’re actively using — say, your web browser — run faster, because if they need to claim a new segment of RAM (like when you open a new tab), they can do so. This “free” RAM is also used as page cache, which means that when active programs attempt to read data on your hard disk, that data might be cached in RAM, which prevents your hard disk from being accessed to get that data. By using the majority of your RAM for page cache, and swapping out unused programs to disk, Windows is trying to improve responsiveness of the program(s) you are actively using, by making RAM available to them, and caching the files they access in RAM instead of the hard disk. The downside of this behavior is that minimized programs can take a while to have their contents copied from the page file, on disk, back into RAM. The time increases the larger the program’s footprint in memory. This is why you experience that delay when maximizing Photoshop. 
RAM is many times faster than a hard disk (depending on the specific hardware, it can be up to several orders of magnitude). An SSD is considerably faster than a hard disk, but it is still slower than RAM by orders of magnitude. Having your page file on an SSD will help, but it will also wear out the SSD more quickly than usual if your page file is heavily utilized due to RAM pressure. Remedies Here is an explanation of the available remedies, and their general effectiveness: Installing more RAM: This is the recommended path. If your system does not support more RAM than you already have installed, you will need to upgrade more of your system: possibly your motherboard, CPU, chassis, power supply, etc. depending on how old it is. If it’s a laptop, chances are you’ll have to buy an entire new laptop that supports more installed RAM. When you install more RAM, you reduce memory pressure, which reduces use of the page file, which is a good thing all around. You also make available more RAM for page cache, which will make all programs that access the hard disk run faster. As of Q4 2013, my personal recommendation is that you have at least 8 GB of RAM for a desktop or laptop whose purpose is anything more complex than web browsing and email. That means photo editing, video editing/viewing, playing computer games, audio editing or recording, programming / development, etc. all should have at least 8 GB of RAM, if not more. Run fewer programs at a time: This will only work if the programs you are running do not use a lot of memory on their own. Unfortunately, Adobe Creative Suite products such as Photoshop CS6 are known for using an enormous amount of memory. This also limits your multitasking ability. It’s a temporary, free remedy, but it can be an inconvenience to close down your web browser or Word every time you start Photoshop, for instance. This also wouldn’t stop Photoshop from being swapped when minimizing it, so it really isn’t a very effective solution. It only helps in some specific situations. Install an SSD: If your page file is on an SSD, the SSD’s improved speed compared to a hard disk will result in generally improved performance when the page file has to be read from or written to. Be aware that SSDs are not designed to withstand a very frequent and constant random stream of writes; they can only be written over a limited number of times before they start to break down. Heavy use of a page file is not a particularly good workload for an SSD. You should install an SSD in combination with a large amount of RAM if you want maximum performance while preserving the longevity of the SSD. Use a newer system architecture: Depending on the age of your system, you may be using an out of date system architecture. The “system architecture” is generally defined as the “generation” (think generations like children, parents, grandparents, etc.) of the motherboard and CPU. Newer generations generally support faster I/O (input/output), better memory bandwidth, lower latency, and less contention over shared resources, instead providing dedicated links between components. For example, starting with the “Nehalem” generation (around 2009), the Front-Side Bus (FSB) was eliminated, which removed a common bottleneck, because almost all system components had to share the same FSB for transmitting data. This was replaced with a “point to point” architecture, meaning that each component gets its own dedicated “lane” to the CPU, which continues to be improved every few years with new generations. 
You will generally see a more significant improvement in overall system performance depending on the “gap” between your computer’s architecture and the latest one available. For example, a Pentium 4 architecture from 2004 is going to see a much more significant improvement upgrading to “Haswell” (the latest as of Q4 2013) than a “Sandy Bridge” architecture from ~2010. Links / related questions: How to reduce disk thrashing (paging)? Windows Swap (Page File): Enable or Disable? Also, just in case you’re considering it, you really shouldn’t disable the page file, as this will only make matters worse; see here. And, in case you needed extra convincing to leave the Windows Page File alone, see here and here.

    Read the article

  • How to restart RoR services after server has been rebooted

    - by Alan DeLonga
    Update: I have been searching around to see what services would possibly need to be restarted in my project after a reboot. One of them was Thinking Sphinx, which I finally got to the point where it logs:

        [Fri Nov 16 19:34:29.820 2012] [29623] accepting connections

    But I still can't run searchd or searchd --stop, because there was no generated sphinx.conf file in /etc/sphinxsearch (for more info refer to this open thread on thinking_sphinx after reboot). I then turned to looking into restarting unicorn or thin, based on some insight I got. The issue is that when I check my gems I see one for thin AND unicorn. But when I try to start either one of them, they have no file residing in /etc/init.d/, where the nginx and sphinxsearch files reside... Would rebooting totally erase the files for an app server like thin or unicorn? We are hosted on Rackspace running ruby 1.9.2p290, rails (3.2.8, 3.2.7, 3.2.0), nginx/1.1.19. Notice that there are gems for unicorn and thin, but there is no unicorn.rb or thin.rb in the config folder for my app... I am still super lost; if anyone can give me some insight on some steps to take to figure this out I would really appreciate it. Anything would help, thanks for reading.

        thin 1.4.1
        unicorn 4.3.1

    When I run unicorn I get the same issue as referenced here:

        > /usr/local/bin/unicorn start
        /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:610:in `parse_rackup_file': rackup file (start) not readable (ArgumentError)
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:76:in `reload'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:67:in `initialize'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `new'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `initialize'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `new'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `<top (required)>'
            from /usr/local/bin/unicorn:19:in `load'
            from /usr/local/bin/unicorn:19:in `<main>'

    When I run thin it just opens a command line prompt...
/usr/local/bin/thin start >> Using rack adapter Other gems: * LOCAL GEMS * actionmailer (3.2.8, 3.2.7, 3.2.0) actionpack (3.2.8, 3.2.7, 3.2.0) activemodel (3.2.8, 3.2.7, 3.2.0) activerecord (3.2.8, 3.2.7, 3.2.0) activeresource (3.2.8, 3.2.7, 3.2.0) activesupport (3.2.8, 3.2.7, 3.2.0) arel (3.0.2) builder (3.0.0) bundler (1.1.5) carmen (1.0.0.beta2) carmen-rails (1.0.0.beta3) cocaine (0.2.1) coffee-rails (3.2.2) coffee-script (2.2.0) coffee-script-source (1.3.3) daemons (1.1.9) erubis (2.7.0) eventmachine (0.12.10) execjs (1.4.0) faraday (0.8.4) faraday_middleware (0.8.8) foursquare2 (1.8.2) geokit (1.6.5) hashie (1.2.0) hike (1.2.1) httparty (0.8.3) httpauth (0.1) i18n (0.6.0) journey (1.0.4) jquery-rails (2.0.2) json (1.7.4, 1.7.3) jwt (0.1.5) kgio (2.7.4) lastfm (1.8.0) libv8 (3.3.10.4 x86_64-linux) mail (2.4.4) mime-types (1.19, 1.18) minitest (1.6.0) multi_json (1.3.6) multi_xml (0.5.1) multipart-post (1.1.5) mysql2 (0.3.11) oauth2 (0.8.0) paperclip (3.1.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack-ssl (1.3.2) rack-test (0.6.1) rails (3.2.8, 3.2.7, 3.2.0) railties (3.2.8, 3.2.7, 3.2.0) raindrops (0.10.0, 0.9.0) rake (0.9.2.2, 0.8.7) rdoc (3.12, 2.5.8) riddle (1.5.3) sass (3.2.0, 3.1.19) sass-rails (3.2.5) sprockets (2.1.3) sqlite3 (1.3.6) sqlite3-ruby (1.3.3) therubyracer (0.10.2, 0.10.1) thin (1.4.1) thinking-sphinx (2.0.10) thor (0.16.0, 0.15.4, 0.14.6) tilt (1.3.3) treetop (1.4.10) tzinfo (0.3.33) uglifier (1.2.7, 1.2.4) unicorn (4.3.1) xml-simple (1.1.1) I am working on a project that was built by another group. I made some modifications to a constants file in the config folder (changing some values for arrays that populated some drop down fields), but the app had to be rebooted before those changes would be recognized. The hosting is through Rackspace, we rebooted through the option on their site. I contacted them and checked the status of our server, the port is open and operational. The problem is the app is not running when you go to the address for the site. Then when I put in the ip address of the server it just says "Welcome to Nginx". But in a log files I see: [Thu Nov 15 02:34:37.945 2012] [15916] caught SIGTERM, shutting down [Thu Nov 15 02:34:37.996 2012] [15916] shutdown complete I am not very versed in server side set up. I have also never worked on a Rails project that had to have specific services started before the application will start. Any insight as to how to figure out what services need to be restarted and how to go about restarting them would be greatly appreciated. I feel kind of dead in the water at this point... Thanks, Alan
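
    For reference, on a stack like this the services are usually brought back by hand from the application directory with commands along these lines (a sketch, not from the question; the rake task is the Thinking Sphinx 2.x name, and the app path and unicorn/thin config paths are hypothetical):

        cd /path/to/app                                   # placeholder for the app's root on the server
        sudo service nginx restart                        # the front-end web server
        # regenerate config/<environment>.sphinx.conf, rebuild the index and start searchd
        bundle exec rake ts:rebuild RAILS_ENV=production
        # start whichever app server the project was actually deployed with
        bundle exec unicorn -c config/unicorn.rb -E production -D
        # or: bundle exec thin start -C config/thin.yml -d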

    Read the article
