Search Results

Search found 16223 results on 649 pages for 'top gun'.

  • Windows 32-bit and 64-bit and GPT

    - by MrLane
    I know similar questions have been asked before across several sites, but the answers, at least to me, have been confusing and conflicting. My understanding has always been that 64-bit Windows will create and use GPT disks just fine, but will not boot from them without a UEFI BIOS. Also my understanding WAS that 32-bit Windows could not use GPT at all and so is always restricted to 2.2TB disks, which was another reason to move to 64-bit on top of the 4GB memory limit. But I have now read that this isn't correct: 32-bit Windows will create and use GPT disks just as 64-bit does. The only restriction is that you can't boot 32-bit Windows from GPT even if you DO have a UEFI BIOS? I don't think much of the literature has explained this well. There are several tools floating around for creating virtual disks or 2.2+.8GB partition schemes and such for 32-bit systems. Why, when it seems you can use GPT in 32-bit Windows anyway? It also seems that people blame MS for lagging behind with respect to all of this: but it seems the issue is with BIOS manufacturers not supporting UEFI rather than MS not supporting GPT... Is my new understanding now correct?
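
    For reference, a minimal sketch of initializing a data disk as GPT with the built-in diskpart tool, which works the same way on 32-bit and 64-bit Windows (the disk number is an assumption; clean erases everything on the selected disk):

      diskpart
      DISKPART> list disk                  (identify the large disk by its size)
      DISKPART> select disk 1              (assumption: disk 1 is the empty data disk)
      DISKPART> clean                      (wipes the disk, including any existing MBR table)
      DISKPART> convert gpt
      DISKPART> create partition primary

    Booting from a GPT disk is the separate case that needs UEFI firmware plus a 64-bit edition of Windows.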

    Read the article

  • How do I make my Boot Camp partition bootable again?

    - by KJFMusic
    I'm having a similar problem to everyone else in this posting. I have 5 partitions, 3 of which I created for my Mac OS Lion installation, my Windows 7 installation, and storage. Everything was running fine for quite some time until recently, when my Windows 7 installation suddenly stopped booting. Instead of a startup screen I get:

      Windows failed to start. A recent hardware or software change might be the cause.
      File: \BOOT\BCD
      Status: 0xc000000d
      Info: An error occurred while attempting to read the boot configuration data

    Mac OS Lion starts up fine. I'm unable to mount my "Bootcamp" partition or the "Storage" partition. On top of that, "Storage" has been renamed to "disk0s5". When I installed Windows 7 it didn't recognize the "Storage" partition that was created in Lion, so it merged what it thought was free disk space (I'm assuming the same space that Mac OS recognized as Storage) into the root drive of Windows 7 (Bootcamp). Are you able to assist?
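
    For reference, the usual BCD repair sketch from the Windows 7 install DVD's recovery command prompt looks like this (assuming you can boot the install media; be cautious with bootrec /fixmbr on a Boot Camp disk, since it touches the hybrid MBR/GPT layout):

      bootrec /scanos          (should list the Windows 7 installation)
      bootrec /rebuildbcd      (rebuilds \BOOT\BCD from what it finds)
      bootrec /fixboot         (rewrites the partition boot sector)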

    Read the article

  • Sort order in Windows Explorer

    - by Haim H.
    The behaviour described below occurs on Windows 7 systems and on Windows XP. We operate in a dual-language environment - English and Hebrew. When we sort files by name in Windows Explorer, the order in which they are listed is not what we would expect. Here is a list of file names as sorted by Windows Explorer (all of the files have a .pdf suffix):

      1G110033H-PP
      19C050G-PP-ORB
      19C050H-PPRM
      19C100H-PPRM
      19C-MBPS-PP
      19C-MBPS-PP-1
      29AAC050-PP
      29AAC100-PP
      29AAC100-PPUL
      29B004064-PP
      101AC050-PP
      101AC100-PP
      101B100-PPE
      1091003G-PPFSUL
      10108033G-PPSA
      10125033H-PPM

    It looks to me as if the items are first sorted according to the position of the first alphabetic character in the name, and then, within those groups, sorted in "normal" alphanumeric order. That is, all the files with an alpha character in the first position are at the top of the list, followed by those with the first alpha character in the second position, followed by those with the first alpha character in the third position, and so on. An alternate way of looking at this is that, in a file name composed of numbers and letters, the sort treats the first group of numbers in the name as the major sort node, with the rest of the name being the secondary sort node. Now that I understand the sequencing logic it's not a big problem, but I was wondering: why does this happen?
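
    This matches Explorer's "numerical sorting", which compares embedded digit runs as numbers rather than character by character, so the leading number 1 sorts before 19, 29, 101, 1091, 10108 and 10125 - which ends up looking like grouping by the position of the first letter. If the classic literal string comparison is preferred, a registry policy can switch it off (a sketch; needs admin rights and a logoff, and the same value is often applied under HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer for a per-user setting):

      reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoStrCmpLogical /t REG_DWORD /d 1 /f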

    Read the article

  • Hard Disk: S.M.A.R.T. Status BAD, Back up and replace

    - by Nick
    I have a laptop hard drive I was trying to use in my new media computer. The case is small and can accommodate two 2.5" drives, no 3.5" drives. I had been using the hard drive as a storage drive until now. When I go to install Windows on it, I'm first prompted at the BIOS with "Hard Disk: S.M.A.R.T. Status BAD, Back up and replace", and then again in Windows Setup, which informs me that the hard drive is bad. So I did a full format of the drive and tried again. Same error. So I took it out and hooked it back up to my other computer via a SATA-USB adapter kit (maybe the cause?). The hard drive is recognized fine, and when I scan it for errors via right click -> Properties -> Tools -> Error checking, it reports that the hard drive is fine. I have tried 3 different SATA cables and multiple jumpers. When I plug my 1.5 TB 3.5" drive into the computer that gives me the S.M.A.R.T. error on the 2.5" drive, it is recognized with no problems. Any ideas on why this is happening and how I can fix it?
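
    For reference, Explorer's error checking only tests the filesystem, not the drive's S.M.A.R.T. attributes, and many USB-SATA bridges don't pass S.M.A.R.T. commands through at all, which would explain why the drive looks fine over the adapter while the BIOS (which reads S.M.A.R.T. directly over SATA) still flags it. A quick sketch of checking the verdict Windows itself sees, from an elevated command prompt:

      wmic diskdrive get model,status
      rem  "OK" means S.M.A.R.T. reports healthy; "Pred Fail" matches the BIOS warning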

    Read the article

  • CopSSH SFTP -- limit users access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed the instructions in the FAQ many times: http://www.itefix.no/i2/node/37 It does not do what the title claims... It allows every user access to every other user's home directory, as well as access to all subfolders below the CopSSH installation path. I'm only using this for SFTP access and I need my users to be sandboxed into only their home directory. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution. The details: here is exactly what I tried as I followed the FAQ. My CopSSH installation directory is C:\Program Files\CopSSH

      net localgroup sftp_users /ADD                          **Create a user group to hold all my SFTP users
      cacls c:\ /c /e /t /d sftp_users                        **For that group, deny access at the top level and all levels below
      cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users  **Allow my user group access to the CopSSH installation directory and its subdirectories

    For each SFTP user, I create a new Windows user account, then I:

      net localgroup sftp_users sftp_user_1 /add              **Add my user to the group I've created

    Then I open the activate user wizard for CopSSH, choosing the user and "/bin/sftponly", with "Remove copssh home directory if it exists", "Create keys for public key authentication" and "Create link to user's real home directory" all remaining checked. This works, however, every user has access to every other user's home directory as well as the CopSSH root directory... So I tried denying access for all users to the user home directory:

      cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users  **Deny access for users to the user home directory

    Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, these permissions were not allowed by Windows, because the deny rule I created at the home directory was being inherited and overriding my allow rule. The next step for me would be to remove the deny rule at the home directory and, for each user folder, add a deny rule for every user it doesn't belong to and an allow rule for the one user it does belong to. However, as my user list gets long, this will become very cumbersome. Thanks for the help!
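
    One alternative sketch, using icacls rather than blanket deny rules (assuming the cacls deny on ...\CopSSH\home from above has been removed first; sftp_user_1 is the example account): give the group read on the home root only, then put an explicit, non-inherited ACL on each user's own folder.

      icacls "C:\Program Files\CopSSH\home" /inheritance:r /grant:r Administrators:(OI)(CI)F /grant:r SYSTEM:(OI)(CI)F /grant:r sftp_users:RX
      icacls "C:\Program Files\CopSSH\home\sftp_user_1" /inheritance:r /grant:r Administrators:(OI)(CI)F /grant:r SYSTEM:(OI)(CI)F /grant:r sftp_user_1:(OI)(CI)M

    With no deny ACEs involved, each user can only enter their own folder, because nothing grants them rights on anyone else's, and there is no inherited deny to fight with per-user allow rules.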

    Read the article

  • Tooltip shadow stuck on desktop

    - by faulty
    I tend to get this problem from time to time: a tooltip with a shadow appearing on top of everything. It's the shadow of the tooltip not disappearing after the tooltip itself disappears. The last one I had was the tooltip from the Wi-Fi connection list in the systray. This problem also happens to me on another computer; both are running Win7 with an ATI GPU. I found this similar post, Menu command stuck on screen, but none of the solutions helped. In fact, "Fade or slide tooltips into view" has been unchecked from the beginning. Ending the "dwm.exe" task also doesn't help. So far the only way to resolve this is by restarting Windows. I can't post pictures yet, so I can't show any screenshots. Edit: Just tested a few more tricks which don't work: 1. Turning off Aero 2. Hibernating 3. Switching the main display to the external display and back 4. Changing the resolution

    Read the article

  • Ruby on Rails (Redmine) on Apache - 503 Error

    - by andrewtweber
    I am running a Ruby on Rails application called Redmine. It's been working fine, but today it's giving a 503 Service Temporarily Unavailable error. (It was initially set up by an employee who is now gone.) I checked the error log and it says:

      [Mon Nov 21 11:03:30 2011] [error] (111)Connection refused: proxy: HTTP: attempt to connect to 127.0.0.1:3000 (127.0.0.1) failed
      [Mon Nov 21 11:03:30 2011] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)

    Here's a chunk of my Apache config:

      <VirtualHost *:80>
        ServerName redmine.{domain}.com
        RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
        RewriteRule ^/(.*)$ balancer://redminecluster%{REQUEST_URI} [P,QSA,L]
      </VirtualHost>

      <Proxy balancer://redminecluster>
        BalancerMember http://127.0.0.1:3000
      </Proxy>

    I found this link: http://www.redmine.org/boards/2/topics/20561 which suggests I simply need to "start the redmine server." I've tried /etc/init.d/redmine start, which gives me this output:

      => Booting Mongrel
      => Rails 2.3.11 application starting on http://0.0.0.0:3000

    The contents of /etc/init.d/redmine:

      cd /var/redmine
      sudo ruby script/server -d -e production

    One thing I immediately notice is that it says 0.0.0.0 instead of 127.0.0.1. In addition, running top or ps -ef shows no record of a "mongrel" or "redmine" process. I've also tried restarting Apache before and after starting Redmine. Not sure where to go from here.
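
    For reference, the 503 plus "Connection refused" means Apache's proxy found nothing listening on 127.0.0.1:3000, and the empty ps output suggests Mongrel either never starts or dies right after booting. A sketch of things to try from /var/redmine (the -b binding flag is an assumption about how you want it bound; 0.0.0.0 includes loopback, so the address alone is not the problem):

      cd /var/redmine
      ruby script/server -d -e production -b 127.0.0.1 -p 3000   # start Mongrel bound explicitly to loopback
      ps -ef | grep -i mongrel                                    # confirm the process actually stays up
      tail -n 50 log/production.log                               # if it dies, the reason is usually logged here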

    Read the article

  • Apache Consuming Resources

    - by Chris Edwards
    Our web server has suddenly been giving us load issues. After I restart Apache the load stays low for a few hours up to a day or so, then it's back up to around 3.0 until I restart Apache again. Any suggestions on tracking down what is causing this? Thanks! Chris Edwards

      top - 20:15:05 up 19 days, 10:59,  1 user,  load average: 2.11, 2.17, 2.47
      Tasks: 532 total,   6 running, 525 sleeping,   0 stopped,   1 zombie
      Cpu(s): 11.5%us,  0.4%sy,  0.0%ni, 88.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
      Mem:  32842656k total, 13185872k used, 19656784k free,  6143740k buffers
      Swap:  1048568k total,        0k used,  1048568k free,  3515252k cached

        PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+   COMMAND
      19089 apache 20  0 1912m 1.5g 6584 R 99.6  4.9  71:01.53 /usr/sbin/httpd
      21136 apache 20  0  392m  55m 5736 R 95.0  0.2   0:03.45 /usr/sbin/httpd
      21139 apache 20  0  374m  38m 5808 S 40.5  0.1   0:04.91 /usr/sbin/httpd
      21124 apache 20  0  389m  51m 5948 R 38.9  0.2   0:03.15 /usr/sbin/httpd
      21111 apache 20  0  371m  35m 5964 S 18.8  0.1   0:01.22 /usr/sbin/httpd
      21127 apache 20  0  375m  39m 5832 S 17.8  0.1   0:01.66 /usr/sbin/httpd
      21128 apache 20  0  374m  38m 5792 S 16.2  0.1   0:01.56 /usr/sbin/httpd
      21110 apache 20  0  374m  38m 5848 S 15.9  0.1   0:01.02 /usr/sbin/httpd
      21113 apache 20  0  374m  38m 5836 S 15.9  0.1   0:02.16 /usr/sbin/httpd
      21077 apache 20  0  379m  43m 6408 S 11.0  0.1   0:07.22 /usr/sbin/httpd
      21101 apache 20  0  384m  49m 6988 R  5.8  0.2   0:04.47 /usr/sbin/httpd
      21112 apache 20  0  374m  38m 5956 R  2.6  0.1   0:01.61 /usr/sbin/httpd
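
    To see what a busy child is actually doing before it gets restarted, mod_status with ExtendedStatus maps each PID from top to the request it is serving. A sketch for an Apache 2.2-style config (assuming mod_status is loaded; keep access restricted):

      ExtendedStatus On
      <Location /server-status>
          SetHandler server-status
          Order deny,allow
          Deny from all
          Allow from 127.0.0.1
      </Location>

    After a graceful restart, browsing /server-status (or running apachectl fullstatus, which needs a text-mode browser installed) shows the client, URL and CPU time per worker, which usually points at the runaway request or vhost.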

    Read the article

  • How to deal with lots of brackets in a formula?

    - by wenlibin02
    Say I have a formula like this (in LaTeX or Maple or another text system):

      Result: ((6*(k2+k3))*A123*k2*k3*(A12*A13*k2^2-2*A12*A13*k2*k3+A12*A13*k3^2-A123*k2^2-2*A123*k2*k3-A123*k3^2)*(exp(-k3*(k3^2*t-x)))^2+6*A12*(-k3+k2)*k2*k3*(A12*A13*k2^2-2*A12*A13*k2*k3+A12*A13*k3^2-A123*k2^2-2*A123*k2*k3-A123*k3^2)*exp(-k3*(k3^2*t-x)))*(exp(-k2*(k2^2*t-x)))^2+(-(6*(-k3+k2))*A13*k2*k3*(A12*A13*k2^2-2*A12*A13*k2*k3+A12*A13*k3^2-A123*k2^2-2*A123*k2*k3-A123*k3^2)*(exp(-k3*(k3^2*t-x)))^2-(6*(k2+k3))*k2*k3*(A12*A13*k2^2-2*A12*A13*k2*k3+A12*A13*k3^2-A123*k2^2-2*A123*k2*k3-A123*k3^2)*exp(-k3*(k3^2*t-x)))*exp(-k2*(k2^2*t-x))

    Note: the above formula is only one part of the result of a Maple calculation; I just can't break it up because there are so many terms. It's obviously very hard to read. What I want to do is fold the matched brackets level by level. If all the brackets are folded, I can see clearly how many terms there are. Then I can analyze from the top level down to the details of every term. But I just don't know how to accomplish that. Maybe there is existing software which can visualize this kind of complex formula. Any ideas? P.S. I use a Linux system; open source alternatives are preferred.

    Read the article

  • Are Colocation Cross Connects Worthwhile?

    - by SvrGuy
    We currently operate three clusters of colocated machines in different data centers. Recently, I became aware that our newest data center will offer to cross connect us to a bandwidth provider free of charge. In the past, I never really investigated a cross connect for bandwidth because I figured that the rates would be similar to what we are paying the colo now and that it would reduce our resiliency (because we would only be using one or two carriers for IP, whereas the colo uses, say, 8 different providers). Then I saw an ad for Hurricane Electric internet services (http://he.net/cgi-bin/ip_transit_quote) that gave a price for IP transit at $1/Mbps, which is much better than the $30/Mb we pay for the blended bandwidth. What are people out there typically paying for bandwidth via cross connect, and how hard is it to set up? Is my understanding correct that what you do is open agreements with two or three ISPs, cross connect to them, and then configure your top-of-rack router on their network? Can you really get IP transit down to a couple of dollars per megabit per month just by doing the routing yourself? Or is my understanding of cross connection fundamentally wrong?

    Read the article

  • Laptop GPU apparently blew up, motherboard doesn't even turn on its power LED. [But..]

    - by leladax
    EDIT: (Was: Laptop automatic shutdown after 2 seconds.) If I take out the GPU, the motherboard LED turns on, but then [if it attempts to power up and boot] it turns off after 2 seconds [the fans turn on normally in that short period]. [With the GPUs in, there isn't even an attempt to boot.] It's an SLI motherboard for a Toshiba (model X200-219). If I take out one of the GPUs (they are on top of each other) it surprisingly lets the motherboard turn on too (as it does if both are out), but it still turns off after 2-3 seconds, same behavior. I wonder if it's the GPU that produces the 'turn off after being on' behavior and not something else. [Has anyone seen this behavior with blown-up GPUs, or could it be something else?] Previous question (before the EDIT; sorry, but someone thought it productive to lock the other one as a duplicate): I'm trying to investigate which component produces this behavior. Other indications show it may be the GPU, but I wonder if anyone knows more. It's a Toshiba Satellite X200. Description: AC power shows the power being fed normally; when turned on the fan works and it appears to be starting up, but after 2 seconds it shuts down with only the 'AC power connected' LED on. (The shutdowns take up to about 4 seconds, maybe not exactly 2.)

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components:

      Apache 2.2.3-31 on CentOS 5.4
      PHP 5.2.10
      Xdebug 2.0.5 with Remote Debugging enabled
      APC 3.0.19
      Doctrine ORM for PHP 1.2.1, using Query Caching and Results Caching via APC
      MySQL 5.0.77, using Query Caching

    I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output:

        PID USER   PR NI  VIRT  RES SHR S %CPU %MEM  TIME+   COMMAND
       1471 apache 16  0  626m 201m 18m S  0.0 10.2  1:11.02 httpd
       1470 apache 16  0  622m 198m 18m S  0.0 10.1  1:14.49 httpd
       1469 apache 16  0  619m 197m 18m S  0.0 10.0  1:11.98 httpd
       1462 apache 18  0  622m 197m 18m S  0.0 10.0  1:11.27 httpd
       1460 apache 15  0  622m 195m 18m S  0.0 10.0  1:12.73 httpd
       1459 apache 16  0  618m 191m 18m S  0.0  9.7  1:13.00 httpd
       1461 apache 18  0  616m 190m 18m S  0.0  9.7  1:14.09 httpd
       1468 apache 18  0  613m 190m 18m S  0.0  9.7  1:12.67 httpd
       7919 apache 18  0  116m  75m 15m S  0.0  3.8  0:19.86 httpd
       9486 apache 16  0 97.7m  56m 14m S  0.0  2.9  0:13.51 httpd

    I have no long-running scripts (they all terminate eventually, the longest being maybe 2 minutes long), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated. (Maybe someone can correct me on that.) My hunch is that it could be APC, since it stores data between requests, but at the same time it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?
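
    Whatever the leak turns out to be (Xdebug and the Doctrine/APC caches are both worth ruling out, and logging memory_get_usage(true) at the end of each request can point at the pages that grow the process), prefork Apache can at least be told to recycle children before they balloon. A sketch for the prefork section of httpd.conf; the numbers are illustrative, not tuned:

      <IfModule prefork.c>
          StartServers          8
          MinSpareServers       5
          MaxSpareServers      20
          MaxClients           40
          MaxRequestsPerChild 500   # recycle each child after 500 requests so leaked memory is handed back
      </IfModule>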

    Read the article

  • Advice needed for a home network setup (hardware & software) to handle many clients and potentially heavy traffic

    - by posdef
    I have recently decided to restructure the home network of our flatshare here. Here's a quick outline of the situation. I envision having the following 4 devices connected to the router via cable:

      Xbox 360
      IP phone
      Printer
      QNAP server (Web, File and Multimedia)

    We are three people living here, so on top of that there will be up to 5-6 computers/mobile devices connecting as wireless clients. My goal is to be able to transfer files (when needed) between the computers and the multimedia server, which I can reach via the 360 and play on the TV. I also would like to keep a high level of security; right now I have WPA2 encryption and MAC filtering. I don't believe the web server will get heavy traffic, though I would like to have it responsive. Likewise, I don't have a habit of downloading via torrent etc., but I greatly appreciate my network being responsive and fast, especially when I am browsing or streaming high-quality media. Now my questions are: is this setup feasible? Smart? Efficient? Can it be improved somehow? My current router (D-Link DI-624) and the previous one (DI-524) used to have spontaneous network drops, which I find highly irritating. I don't have much faith in my router, especially now that it completely crashed when I was test-running the setup by transferring a large media file to the server while the Xbox was playing music from the server and two computers were browsing the net. Do I need to get new hardware? If so, any recommendations for a reliable and fast router?

    Read the article

  • Access Denied / Server 2008 / Home Directories

    - by Shaun Murphy
    Domain Controller: BDC01 (192.168.9.2)
      Storage Server: BrightonSAN1 (192.168.9.3)
      Domain: brighton.local

    Last night I moved our users' home directories off of our domain controller onto a storage server using the MS FSMT. I'm getting a mixed bag of errors. The first is that some users cannot log on properly: they can't access logon.vbs in the sysvol folder on the DC and consequently cannot map their drives. I've narrowed that down to a DNS issue, as there was a remnant of our previous DNS server in the DHCP server options and scope options. I'm able to get their drives remapped by browsing to the sysvol folder by IP address as opposed to computer name and manually running the logon.vbs script. The other error I'm getting is Access Denied on a few of the users' home directories. The top-level folder (Home) is shared as normal, and I've removed and re-added the NTFS security a number of times now, including making the user the owner with full control. I've checked each and every individual file and folder in said user's home directory, and they are indeed the owner, but I'm unable to write even though I can read the contents. I'm stumped. This isn't happening to all clients. I'm considering removing their AD accounts, backing up their folders and re-adding them as a last resort, but obviously I'd like to know why the above errors are happening.
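
    For the Access Denied folders, one blunt way to re-baseline NTFS permissions on an affected user's directory is sketched below (the path and username are placeholders). Also worth checking are the share-level permissions on the new server, since share ACLs are evaluated separately from NTFS and a read-only share would produce exactly this "can read, can't write" symptom:

      takeown /F D:\Home\jsmith /R /D Y
      icacls D:\Home\jsmith /reset /T /C
      icacls D:\Home\jsmith /grant:r brighton\jsmith:(OI)(CI)M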

    Read the article

  • Creating Windows 8.1 system image error

    - by Random
    I'm experiencing a "not enough space" error when trying to create a system image on a USB hard drive. Detailed error:

      ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f)
      Insufficient storage available to create either the shadow copy storage file or other shadow copy data.
      Blah, blah...
      There is not enough disk space to create the volume shadow copy on the storage location. Make sure that, for all volumes to be backed up, the minimum required disk space for shadow copy creation is available. This applies to both the backup storage destination and volumes included in the backup.
      Minimum requirement: For volumes less than 500 megabytes, the minimum is 50 megabytes of free space. For volumes more than 500 megabytes, the minimum is 320 megabytes of free space.
      Recommended: At least 1 gigabyte of free disk space on each volume if volume size is more than 1 gigabyte.
      ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f)
      Insufficient storage available to create either the shadow copy storage file or other shadow copy data.

    I've tried both ways - from PowerShell:

      wbAdmin start backup -backupTarget:E: -include:C: -allCritical -quiet

    and via the Control Panel's File History button. Clearly both the EFI and Windows Recovery Environment partitions don't meet the requirements coming from the System Image tool (pic below). On top of that, all system partitions are now shown as 100% free in Disk Management, which is disturbing but far from the actual state. My question is: how do I create a system image in Windows 8.1?

    Read the article

  • Nginx's speed, and how to replicate it

    - by Mediocre Gopher
    I'm interested in this from more of an academic standpoint than a practical one; I don't plan on creating a production webserver to compete with nginx. What I'm wondering is how exactly nginx is so fast. The top Google response for this is this thread, but it merely links to a cryptic slideshow and a general covering of different IO strategies. All other results seem to simply describe how fast nginx is, rather than the reason. I tried building a simple Erlang server to try to compete with nginx, but to no avail; nginx won out. All my server does is spawn a new process for each request, use that process to read the file to a socket, then close the file and kill the process. It's not complicated, but given Erlang's lightweight processes and underlying AIO structure I thought it would compete, yet nginx still wins out by a consistent 300 ms average under a heavy stress test. What is nginx doing that my simple server isn't? My first thought would be keeping files in main memory instead of tossing them between requests, but the filesystem cache does this already, so I didn't think it would make that great of a difference. Am I wrong? Or is there something else that I'm missing?
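
    Part of the answer is visible in an ordinary nginx configuration: a small, fixed pool of event-driven workers, each multiplexing thousands of connections, plus kernel-side file serving via sendfile so static bytes never pass through userspace. A sketch of the relevant directives (values are illustrative, server/location blocks omitted):

      worker_processes  4;                 # one worker per core, not one process per request
      events {
          worker_connections  1024;        # each worker multiplexes many sockets via epoll/kqueue
      }
      http {
          sendfile       on;               # kernel copies file -> socket directly
          tcp_nopush     on;               # coalesce headers and file data into fewer packets
          keepalive_timeout  30;
          open_file_cache max=1000 inactive=20s;   # cache open fds and stat() results between requests
      }

    The per-request process spawn (and the file open/close each time) is exactly the overhead the worker model avoids, which is likely where most of the consistent gap comes from.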

    Read the article

  • How do I give a user permission to view scheduled task history on Server 2008?

    - by pplrppl
    I've set up a scheduled task on Server 2008 and want to run it as a user other than the local administrator. So I chose a domain account created specifically for this task, and once I'd closed the scheduled task and entered a valid password I wanted to run it and look at the History tab for this task. On the History tab I see: "The user account does not have permission to view task history on this computer." What permission must I grant to allow this user to view history, and/or how can I view the history as a local admin/domain admin instead of the user the job will run under? Steps to hopefully reproduce: I'm starting from the "Server Manager" - Configuration - Task Scheduler - Task Scheduler Library. In the top middle pane I have tasks that have been running for several months as the local administrator. In the process of troubleshooting another issue I changed the task to run as Domain\ABCuser. Later in the process of troubleshooting I tried unchecking "run with highest privileges". I have since changed the job back to SERVERNAME\Administrator, but the History tab still showed the permissions message. I may have had multiple Server Manager windows open. After closing the Server Manager and making sure no other management consoles were open, I was able to reopen the Server Manager and see the History tab without error. At this point the task works properly, but should I ever need to run a task as a task-specific account I'd like to know how to make the history viewable. It may be something as simple as closing all Server Manager windows to allow cached permissions to be refreshed the next time you open the Manager, but at this point I don't know exactly what the solution is.

    Read the article

  • Safari's location bar (auto-suggest and web search)

    - by Lri
    Auto-suggest doesn't seem to work for queries with spaces. Am I missing something? If you select an item from the suggestion list that was matched by its title, the title is filled in before the address. Can you change it to work like in other browsers? SMRT disables searching by title completely. Can you combine Top Hit, History and Bookmarks into a single section? The preferences starting with DebugSafari4 don't work anymore (like DebugSafari4IncludeFancyURLCompletionList). Can you direct unresolved addresses to something like google.com/search?q=?&btnI instead of ?.com? Like by changing keyword.URL in Firefox. Can you remove or hide the web search field? In Camino, Cruz and Fluid it can be resized to zero width. You can't circumvent the normal maximum ratio with InputFieldWidthRatio. AddressBarIncludesGoogle doesn't appear to do anything in the current version. Are there fixes or workarounds for any of these? I'm lumping these issues together because they are closely related: a lot of them were introduced when the location bar was redesigned in Safari 5. I'm also hoping to find something like an extension or a plugin that would replace the standard location bar.

    Read the article

  • Will disabling hyperthreading improve performance on our SQL Server install

    - by Sam Saffron
    Related to: Current wisdom on SQL Server and Hyperthreading. Recently we upgraded our Windows 2008 R2 database server from an X5470 to an X5560. The theory is that both CPUs have very similar performance; if anything the X5560 is slightly faster. However, SQL Server 2008 R2 performance has been pretty bad over the last day or so, and CPU usage has been pretty high. Page life expectancy is massive and we are getting almost 100% cache hits for pages, so memory is not a problem. When I ran:

      SELECT * FROM sys.dm_os_wait_stats ORDER BY signal_wait_time_ms DESC

    I got:

      wait_type                    waiting_tasks_count  wait_time_ms  max_wait_time_ms  signal_wait_time_ms
      XE_TIMER_EVENT               115166               2799125790    30165             2799125065
      REQUEST_FOR_DEADLOCK_SEARCH  559393               2799053973    5180              2799053973
      SOS_SCHEDULER_YIELD          152289883            189948844     960               189756877
      CXPACKET                     234638389            2383701040    141334            118796827
      SLEEP_TASK                   170743505            1525669557    1406              76485386
      LATCH_EX                     97301008             810738519     1107              55093884
      LOGMGR_QUEUE                 16525384             2798527632    20751319          4083713
      WRITELOG                     16850119             18328365      1193              2367880
      PAGELATCH_EX                 13254618             8524515       11263             1670113
      ASYNC_NETWORK_IO             23954146             6981220       7110              1475699

      (10 row(s) affected)

    I also ran:

      -- Isolate top waits for server instance since last restart or statistics clear
      WITH Waits AS
      (
          SELECT wait_type,
                 wait_time_ms / 1000. AS [wait_time_s],
                 100. * wait_time_ms / SUM(wait_time_ms) OVER() AS [pct],
                 ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS [rn]
          FROM sys.dm_os_wait_stats
          WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE',
              'SLEEP_TASK','SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR','LOGMGR_QUEUE',
              'CHECKPOINT_QUEUE','REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH',
              'BROKER_TASK_STOP','CLR_MANUAL_EVENT','CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE',
              'FT_IFTS_SCHEDULER_IDLE_WAIT','XE_DISPATCHER_WAIT','XE_DISPATCHER_JOIN')
      )
      SELECT W1.wait_type,
             CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
             CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
             CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
      FROM Waits AS W1
      INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
      GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
      HAVING SUM(W2.pct) - W1.pct < 95; -- percentage threshold

    And got:

      wait_type            wait_time_s  pct    running_pct
      CXPACKET             554821.66    65.82  65.82
      LATCH_EX             184123.16    21.84  87.66
      SOS_SCHEDULER_YIELD  37541.17     4.45   92.11
      PAGEIOLATCH_SH       19018.53     2.26   94.37
      FT_IFTSHC_MUTEX      14306.05     1.70   96.07

    That shows huge amounts of time spent synchronizing queries involving parallelism (high CXPACKET). Additionally, and anecdotally, many of these problem queries are being executed on multiple cores (we have no MAXDOP hints anywhere in our code). The server has not been under load for more than a day or so. We are experiencing a large amount of variance in query executions; typically many queries appear to be slower than they were on our previous DB server, and CPU is really high. Will disabling Hyperthreading help reduce our CPU usage and increase throughput?
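
    For what it's worth, the CXPACKET-heavy profile above can also be attacked from inside SQL Server, before (or alongside) toggling hyperthreading in the BIOS, by capping how parallel plans are allowed to get. A sketch; the values are illustrative and need testing against this workload:

      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      -- limit how many schedulers a single parallel query can use
      EXEC sp_configure 'max degree of parallelism', 4;
      RECONFIGURE;
      -- only let genuinely expensive plans go parallel at all
      EXEC sp_configure 'cost threshold for parallelism', 25;
      RECONFIGURE;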

    Read the article

  • SQL Server Management Studio not scripting all objects

    - by Ian Boyd
    I've been attempting to script a database using SQL Server 2005 Management Studio. I cannot get it to script some objects. It scripts others, but skips some. I can provide detailed screenshots of:

      - the options being selected, including all tables
      - the folder where the script files will go
      - the folder being empty before scripting
      - the scripting process saying Success when scripting a table
      - the destination folder no longer empty, with a hundred or so script files
      - the scripts of some tables not being in the folder

    And earlier SSMS would not script some views. Is this a known thing, that the Generate Scripts task does not generate scripts? Update: Known issue on Microsoft Connect, but Microsoft couldn't repro the steps, so they closed the ticket. Fails on SQL Server 2005, also fails on SQL Server 2008. Update Two: Some basic questions:

      1. What version of SQL Server?
         Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
         Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86)
         Microsoft SQL Server 2008 - 10.0.2531.0 (Intel X86)
         Microsoft SQL Server 2005 Management Studio: 9.00.4035.00
         Microsoft SQL Server 2008 Management Studio: 10.0.1600.22
      2. What O/S are you running on?
         Windows Server 2000, Windows Server 2003, Windows Server 2008
      3. How are you logging in to SQL Server?
         sa/password, Trusted authentication
      4. Have you verified your account has full access to all objects?
         Yes, I have access to all objects.
      5. Can you use the objects that fail to script? (e.g. select top(10) * from nonScriptingTable)
         Yes, all objects work fine. SQL Server Enterprise Manager can script the objects fine.

    Update Three: They fail no matter what version of SQL Server you script against. It wasn't a problem in Enterprise Manager:

      Client Tools   SQL Server 2000   SQL Server 2005   SQL Server 2008
      ============   ===============   ===============   ===============
      2000           Yes               n/a               n/a
      2005           No                No                No
      2008           No                No                No

    Update Four: No errors found in the database using:

      DBCC CHECKDB
      go
      DBCC CHECKCONSTRAINTS
      go
      DBCC CHECKFILEGROUP
      go
      DBCC CHECKIDENT
      go
      DBCC CHECKCATALOG
      go
      EXECUTE sp_msforeachtable 'DBCC CHECKTABLE (''?'')'

    Honk if you hate SSMS.

    Read the article

  • Can I disable Chrome's auto-translate function for visitors to a given page on my site?

    - by Drew Noakes
    I know how to disable this feature for pages that I visit, but what I'm looking for is a way to tell other users' Chrome browsers not to offer to translate a particular page on my site. Is there some kind of meta tag I can use? Alternatively, can I indicate that a particular element on the page should not be translated? Reason: the controls which slide down from the top of the page cause my page to resize, which changes the content, which makes the controls slide up, which resizes the page, which changes the content, which causes the controls to slide back in. Rinse and repeat. The page dances. The page itself is a map, and the content it wants to translate is all proper names that shouldn't be translated anyway. If alternate names exist in other languages, I provide them myself. Generally I'm against taking away features from the browser that users might like, but in this case it really makes sense. So please don't answer saying that I shouldn't be doing this; I've weighed up the options.
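
    For reference, Google's translate feature honors a page-level opt-out meta tag, and individual elements can be excluded with a class; a sketch (the div is a hypothetical stand-in for the map's label container):

      <meta name="google" content="notranslate">           <!-- whole page: don't offer or perform translation -->
      <div class="notranslate">Proper names go here</div>  <!-- per-element opt-out -->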

    Read the article

  • GlusterFS on VMWare ESXi 5

    - by Dharmavir
    I want to build a network file system on top of my VMware ESXi-based virtual nodes, which are running Ubuntu 12.04 LTS. I am evaluating options and found that GlusterFS (http://www.gluster.org/) could turn out to be a good choice. Purpose: I have about 2 dozen VM nodes with different configurations, on 2 physical nodes which each have the following configuration: 16-core Intel Xeon, 1 TB HDD, 48 GB RAM. Now, as I said, each physical server has about 1TB of disk and I can add more if I want, so for now I have 2TB of disk space available. This space is distributed among the VM nodes I have created, on which about 2 dozen VM nodes live. Since some of them are application servers and management servers, they have plenty of free disk space which I want to utilize for some heavy storage, which I cannot design if I do it individually on a single VM node. This way, if my storage is distributed between dozens of VM nodes and 2 or more physical nodes, I have some sort of backup as well. I do not mind if data gets stored redundantly, but to my knowledge it might happen that individual VM nodes will not be able to store all of the data, because the complete data size (say 100GB) would exceed a VM disk size of 70GB, and the VM will also have system and program files on it. I need some suggestions: is GlusterFS the solution I am looking for, or should I go with something like Hadoop? I am not too sure. But yes, I would like to utilize the free space on each VM node, and if in doing that the data gets stored redundantly, I am okay with it because it will give me data security.
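
    For a sense of what the GlusterFS side would look like, a minimal two-node replicated volume can be sketched as below (node names, the brick path and the volume name are placeholders; a replica-2 volume keeps a copy of every file on both bricks, so usable capacity is half the raw space, while a plain distributed volume would instead pool the space with no redundancy):

      # on node1, after installing glusterfs-server on both storage VMs
      gluster peer probe node2
      gluster volume create shared replica 2 node1:/export/brick1 node2:/export/brick1
      gluster volume start shared

      # on any client VM
      mount -t glusterfs node1:/shared /mnt/shared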

    Read the article

  • What determines what resolutions a laptop is willing to output over VGA?

    - by Joshua McKinnon
    I'm responsible for several conference rooms and have setup 1080p projectors and I provide both HDMI and VGA connectivity. HDMI for DisplayPort and Mini-DisplayPort, and VGA as a fallback, universal option. Contrary to what I expected, people seem to have much more trouble with the HDMI than VGA, so VGA gets used a lot more than you'd think (even as most workstation laptops made in the last 3-4 years have DisplayPort or Mini-DisplayPort...). Also to my surprise, VGA outputs over 1080p on a 50ft cable run with very minimal degradation on certain laptops - other laptops just don't offer 1080p as a resolution choice and top out at 1600x1200 or something else. Specific example: a ThinkPad W530 will do 1080p, a W520 won't, over VGA. (both do 1080p over displayport/mini-DP) What determines what resolutions a laptop is willing to output over VGA? I'm thinking this will come down to either a video driver that says it supports only certain resolutions for output, or limitations of the RAMDAC (which wouldn't be in play, at least DAC wise, on a digital output, but WOULD on VGA, an analog output). The basic reason for the question is that I noticed, say, a ThinkPad W520 with 1080p built in display, will output 1080p fine over DisplayPort to a 1080p projector, but will cap out at 1600x1200 (practically the same pixel count, just a little shy) on VGA. Now, this wouldn't be surprising at all except SOME laptops have no issue outputting 1080p over VGA, even with lower native resolutions. Why do I care? Well if there's some way I could enable it... for situations where my users end up using VGA anyway, it's preferable for display mirroring if they can output their laptop's native resolution, which, you guessed it, is very often 1080p on 15" models. DISCLAIMER: This is primarily a curiosity, I'm not claiming 1080p over VGA is ideal by any means, but hey, if it works. I've seen HDMI start artifacting more over same-length, same gauge cabling (up to 50' run in certain rooms). If you think this is better suited to SuperUser, please move it, but this is framed from an IT standpoint of something that affects a real pool of users in a multiple conference room, 50+ deployed laptop scenario.

    Read the article

  • How can I tell which page is creating a high-CPU-load httpd process?

    - by Greg
    I have a LAMP server (CentOS-based MediaTemple (DV) Extreme with 2GB RAM) running a customized WordPress+bbPress combination. At about 30k pageviews per day the server is starting to groan. It stumbled earlier today for about 5 minutes when there was an influx of traffic. Even under normal conditions I can see that the virtual server is sometimes at 90%+ CPU load. Using top I can often see 5-7 httpd processes that are each using 15-30% (and sometimes even 50%) CPU. Before we do a big optimization pass (our use of MySQL is probably the culprit) I would love to find the pages that are the main offenders and deal with them first. Is there a way that I can find out which specific requests were responsible for the most CPU-hungry httpd processes? I have found a lot of info on optimization in general, but nothing on this specific question. Secondly, I know there are a million variables, but if you have any insight on whether we should be at the boundaries of performance with a single dedicated virtual server and a site of this size, then I would love to hear your opinion. Should we be thinking about moving to a more powerful server, or should we be focused on optimization on the current server?
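
    One low-effort way to tie the hungry processes back to URLs is to log how long each request takes, so the slow, heavy pages show up directly in the access log; Apache's %D field (time taken, in microseconds) does this. A sketch (the format and log names are arbitrary):

      LogFormat "%h %l %u %t \"%r\" %>s %b %D" timing
      CustomLog logs/timing_access_log timing

    mod_status with ExtendedStatus On gives the same picture live, showing the request each child PID (the ones visible in top) is currently serving.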

    Read the article

  • kickstart ks.cfg: Where should 'url' point?

    - by Stefan Lasiewski
    I have a kickstart file (ks.cfg) on a floppy (old style). I am trying to install CentOS 5.4. The top of my ks.cfg says this:

      install
      # Install from local cdrom or over the network.
      #cdrom
      url --url http://kickstart.example.org/pub/centos/5.4/

    On the Apache server side, this is failing with these 404s:

      kickstart.example.org 192.168.16.180 - - [01/Jun/2010:17:24:30 -0700] "GET /pub/centos/5.4///disc1/.discinfo HTTP/1.1" 404 314 "-" "urlgrabber/3.1.0"
      kickstart.example.org 192.168.16.180 - - [01/Jun/2010:17:24:43 -0700] "GET /pub/centos/5.4/repodata/repomd.xml HTTP/1.1" 404 316 "-" "urlgrabber/3.1.0 yum/3.2.22"

    It seems that the value of my url doesn't match the directory structure on the server. I swear this worked a few months ago. Someone else maintains the Yum repository, and they say nothing has changed. What should the value of the url line be? Should it only include the OS (/pub/centos/5.4/), or should it include the architecture (/pub/centos/5.4/os/x86_64)? I see that Kickstart is trying to grab a file called 'repomd.xml', but why is it looking in '/pub/centos/5.4/repodata/repomd.xml' when these files actually exist at '/pub/centos/5.4/os/x86_64/repodata/repomd.xml' and other locations at '/pub/centos/5.4/*/$ARCH/repodata/repomd.xml'? I don't see this documented or explained well in the RedHat 5 Installation Guide.
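
    Given where anaconda is finding repodata on that server, the url line most likely needs to point at the os/$arch part of the tree rather than the bare version directory. A sketch, assuming an x86_64 install:

      install
      # Install over the network from the arch-specific os tree
      url --url http://kickstart.example.org/pub/centos/5.4/os/x86_64/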

    Read the article
