Search Results

Search found 18648 results on 746 pages for 'left join'.

Page 593/746

  • Slow performance of MySQL database on one server and fast on another one, with similar configurations

    - by Alon_A
    We have a web application that runs on two GoDaddy servers. We experience slow performance on our production server, although it has stronger hardware than the testing one, and it is dedicated. I'll start with the configurations. Testing: CentOS Linux 5.8, Linux 2.6.18-028stab101.1 on i686 Intel(R) Xeon(R) CPU L5609 @ 1.87GHz, 8 cores 60 GB total, 6.03 GB used Apache/2.2.3 (CentOS) MySQL 5.5.21-log PHP Version 5.3.15 Production: CentOS Linux 6.2, Linux 2.6.18-028stab101.1 on x86_64 Intel(R) Xeon(R) CPU L5410 @ 2.33GHz, 8 cores 120 GB total, 2.12 GB used Apache/2.2.15 (CentOS) MySQL 5.5.27-log - MySQL Community Server (GPL) by Remi PHP Version 5.3.15 We are running the same code on both servers. The Problem We have a function that executes ~30000 PDO-exec commands. On our testing server it takes about 1.5-2 minutes to complete, and on our production server it can take more than 15 minutes to complete. As you can see here, from qcachegrind: Researching the problem, we've checked the live graphs in phpMyAdmin and discovered that the MySQL server on our testing server was performing at a steady level of 1000 execution statements per 2 seconds, while the slow production MySQL server was only doing 250 execution statements per 2 seconds, and not steadily at all, jumping from 0 to 250 every second. You can clearly see it in the graphs: Testing server: Production server: You can see here the comparison between the configurations of both MySQL servers. Left is the fast testing and right is the slow production. The differences are highlighted, but I can't find anything that could cause such a behavior difference, as the configs are mostly the same. Maybe you can see something that I can't see. Note that our tables are all InnoDB, so the MyISAM difference is (probably) not relevant. Maybe it is the MySQL Community Server (GPL) that is installed on the production server that causes the slow performance? Or maybe it needs to be configured differently for 64-bit? I'm currently out of ideas...
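
    One thing worth ruling out, given the steady 1000 vs. spiky 250 statements per 2 seconds: if each of the ~30000 PDO execs is committed on its own and the production box flushes the InnoDB log to disk on every commit (innodb_flush_log_at_trx_commit = 1), throughput is capped by how fast that disk can fsync, not by CPU. A minimal benchmark sketch along those lines, assuming a throwaway table and placeholder credentials rather than the poster's actual schema:

        # Hypothetical benchmark: per-statement commits vs. one big transaction.
        # Assumes a scratch table `t (id INT, v VARCHAR(64))` and local credentials.
        import time
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="bench", passwd="secret", db="benchdb")
        cur = conn.cursor()

        def run(batched):
            start = time.time()
            for i in range(5000):
                cur.execute("INSERT INTO t (id, v) VALUES (%s, %s)", (i, "x" * 32))
                if not batched:
                    conn.commit()   # one log flush/fsync per statement when
                                    # innodb_flush_log_at_trx_commit = 1
            if batched:
                conn.commit()       # a single flush for the whole batch
            return time.time() - start

        print("per-statement commits: %.1fs" % run(batched=False))
        cur.execute("TRUNCATE TABLE t")
        print("one transaction:       %.1fs" % run(batched=True))
        conn.close()

    If only the per-statement run is dramatically slower on production, the gap is in the commit/fsync path (disk, write barriers, that variable) rather than in MySQL Community Server itself.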

    Read the article

  • SBD killing both cluster nodes when there are even small SAN network problems

    - by Wieslaw Herr
    I am having problems with stonith SBD in an openais-based cluster. Some background: The active/passive cluster has two nodes, node1 and node2. They are configured to provide an NFS service to users. To avoid problems with split-brain, they are both configured to use SBD. SBD is using two 1MB disks available to the hosts via a multipath fibre-channel network. The problems start if something happens with the SAN network. For example, today one of the Brocade switches got rebooted and both nodes lost 2 out of 4 paths to each disk, which resulted in both nodes committing suicide and rebooting. This, of course, was highly undesirable because a) there were paths left, b) even if the switch were only out for 10-20 seconds, a reboot cycle of both nodes would take 5-10 minutes and all NFS locks would be lost. I tried increasing the SBD timeout values (to 10sec+ values, dump attached at the end), however a "WARN: Latency: No liveness for 4 s exceeds threshold of 3 s" hints that something isn't working as I would expect it to. Here is what I would like to know: a) Is SBD working as it should, killing nodes when 2 paths are still available? b) If not, is the attached multipath.conf file correct? The storage controller we use is an IBM SVC (IBM 2145); should there be any specific configuration for it (as in multipath.conf.defaults)? c) How should I go about increasing the timeouts in SBD? Attachments: multipath.conf and sbd dump (http://hpaste.org/69537)

    Read the article

  • Laptop battery holds charge, but won't charge any more.

    - by Jeff
    Ok, I'm sure I will need to replace either my battery or my AC adapter, but would rather not buy one if the other is the problem. My problem is: I have a Sager laptop that gets quite a bit of use. The charging has always been a little bit odd. If I was in the process of using it, it would charge just fine and stay on AC power. If I left it alone, however (power settings set to ONLY turn off the monitor), in either Ubuntu or Windows 7 it would decide that it didn't want to use AC power anymore and would just start draining the battery until it died. Now, suddenly, it won't charge at all. The capacity was great up to this point, which happened in an instant. It will recognize the battery but won't see the AC power if plugged in while the battery is in. I can power up the laptop without the battery and it works fine. If I plug in the battery while powered up it will claim it's charging it, but it stays at the same percentage. If I unplug the power, it will switch over to battery fine, but I have to power down and unplug the battery to get it back on AC power. I've had dying/dead batteries before, but they typically won't hold a full charge anymore: they still wind up to 100%, then drop quickly when unplugged. This seems more like a chip problem in the battery to me, but I'm not sure. Any ideas?

    Read the article

  • MySQL Master - Master Broken

    - by Recc
    I've inherited a MySQL master-master system. I've noticed the second master (let's call it slave from now on, as it's running on a 'slave' machine) stopped getting its DBs updated. I saw that Master: Slave_IO_Running: Yes Slave_SQL_Running: Yes Slave: (with an error I truncated) Slave_IO_Running: Yes Slave_SQL_Running: No Last_Errno: 1062 Last_Error: Error 'Duplicate entry '3' for key 'PRIMARY'' on [...] I don't know what caused it, considering we can't get a duplicate there. What's important is to resume normal operations; right now I've done stop slave; on the Master and stop slave; on the Slave, because I saw that if I change records on the Slave the changes Do Get Propagated to Master, which is in active use. How do I: Force sync EVERYTHING from master to slave without affecting data on master? Then hopefully have slave pick up replication as usual? UPDATE OK, I tried deleting all tables on slave; then it complained in that error section that the 'table' doesn't exist. So I made a no-data dump of Master, and made sure I have only empty tables in Secondary (slave). I start slave; on slave BUT now it's complaining about bloody alter table statements, for instance: Last_Errno: 1060 Last_Error: Error 'Duplicate column name [...] Query: 'ALTER TABLE [...] How do I skip the fracking alter statements? I just want to replicate the bloody data and be done with it; my tables have the latest changes already FFS, and now it's complaining about changes made after the replication ceased weeks ago. How do I reset the log or something? OUTSTANDING Why would this start happening? The "Secondary" is propagating to "Primary". "Primary" is not propagating to "Secondary". But any fixes I tried left it in the same Yes-Yes / Yes-No state with the same Last_Error. I think around that time the server was taken off the network; could that confuse MySQL in some way?
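
    For the "force sync everything" part, the usual route is a full dump taken on the master with the binlog coordinates recorded (mysqldump --master-data), loaded into the slave, after which the slave is pointed at those coordinates so it never replays the statements from before the dump - which is where the Duplicate entry / Duplicate column errors come from. A rough slave-side sketch; the hostnames, credentials and coordinates below are placeholders, not real values:

        # Placeholder sketch of the usual full resync; nothing below is the real
        # server name, password, or binlog position.
        # Step 1, on the master (shell): mysqldump --all-databases --single-transaction
        #     --master-data=2 > full.sql     # the dump header records file/position
        import MySQLdb

        slave = MySQLdb.connect(host="slave.example.com", user="repl_admin", passwd="secret")
        cur = slave.cursor()

        cur.execute("STOP SLAVE")
        # Step 2: load full.sql into the slave (e.g. `mysql < full.sql`), then point
        # replication at the coordinates recorded in the dump header:
        cur.execute(
            "CHANGE MASTER TO MASTER_LOG_FILE = %s, MASTER_LOG_POS = %s",
            ("mysql-bin.000042", 107),      # placeholders; read them from full.sql
        )
        cur.execute("START SLAVE")
        cur.execute("SHOW SLAVE STATUS")
        print(cur.fetchone())               # check Slave_IO_Running / Slave_SQL_Running
        slave.close()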

    Read the article

  • Is there a way in Windows 7 to disable "journaling"?

    - by Psycogeek
    C:\$extend\$Usn.Jrnl:$J:$data Here is a picture finally. The large strip in the center of the top band is the largest chunk; the other grey areas are the various clusters that go with it. On the right, the big long grey line is $logfile (not paging), and it is 63 MB. Paging, 500 MB, is the dark cyan chunk, next to the yellow MFTres in the inner rings. The disk was defragged so they could be seen more easily. Not all clusters of this type of file are tagged, but the idea is there. The disk is 4k clusters, now about 12 GB in size. Each cute little block in the picture is .81 MB and represents 207 clusters. The dark green section is mostly the whole Winsxs pile, also interesting when they keep telling us it doesn't take much disk space. Wikipedia suggests that in previous NT systems "USN journaling" would be turned on when enabled (which assumes it could also be turned off?). What aspect, service, or program is putting that stuff all over the disk, which shows up as $jrnl$ type clusters, even if it is not actual USN journaling? Is it possible in a Windows 7 system to completely disable the journaling, and what would be the ramifications of that? On a Windows XP NTFS system, I do not recall seeing this quantity of disk clusters used with these $jrnl$ names, so I do not recall this being necessary in this quantity for an NTFS file system itself? I understand that it would not be there if it did not have a useful function :-) Information about how wonderful it is, is fine if that information will help track down what parts of the system create and use it. Change Journals states: Change journals are also needed to recover file system indexing Hmm, that might explain some of them, or why it was left on the disk. A crash while background indexing?

    Read the article

  • Boot sequence unlike reboot

    - by samgoody
    When I turn on the computer it acts very differently than when I reboot it. [WinXP Pro, Intel Core2 6600, 2.4GHZ, 2GB RAM, NVIDIA GeForce] Boot: Monitor must be plugged into the motherboard or no image. Screen resolution 800x600. Changes to the resolution cause only the top half of the screen to be usable, and are lost when I shut down the computer. Desktop icons arranged in neat rows on the left of the desktop. Nothing of note in the system tray. In Device Manager - Display adapter: Intel(R) Q965/Q963 Express Chipset Family. In Device Manager - Monitors, two monitors are listed. Hibernate and standby work. Reboot: Monitor must be plugged into the graphics card or no image. Screen resolution - 1280x1024. Desktop icons arranged in the cute circle that I put them in. NVIDIA icon shows in the system tray. In Device Manager - Display adapter: NVIDIA GeForce 6200LE. In Device Manager - Monitors, one monitor is listed. Hibernate and standby do not work. When awakened after a hibernation it says: The system could not be restarted from its previous location because the restoration image is corrupt. Delete restoration data & proceed to system boot? Double reboot (inconsistent): Monitor must be plugged into the graphics card. Screen resolution - 1024x768. Odd icon shows in the system tray whose tooltip says "Intel Graphics". For a while my morning ritual was to boot, wait, reboot using (alt+ctrl+del - ctrl+u - R), wait, keeping the monitor plugged into the graphics card. But aside from the inefficiency of this method, I sometimes want to standby and can't. On the other hand, the computer is unusable when set to 800x600. Please help, anyone?

    Read the article

  • Dual pane file manager for Mac OS X

    - by Alex Kaushovik
    Is there a good customizable dual-pane file manager for Mac like Total Commander / Far Manager on Windows, or like Krusader / Midnight Commander on Linux? I used to work on Windows for quite a while and mostly used Far Manager and sometimes Total Commander, then I switched to Ubuntu Linux and used Krusader; now I've switched to Mac OS (Snow Leopard) and I'm having a hard time trying to find a good file manager... Many of the existing applications are trying to replace the Finder with "multimedia capabilities nobody cares about in a file manager - IMHO" (Path Finder, ForkLift), some of them are almost good dual-pane file managers (couldn't remember examples), but none of them worked for me, mostly for one reason: I couldn't integrate my file/folder comparison utility (Araxis Merge for Mac) with them... The way it worked for me on Windows and Linux is that I would set the cursor on one file in the left pane, then set the right-pane cursor on another file in the right pane, then press a hotkey that launched Araxis Merge with the comparison results for those two files/folders. It was very easy to set up in Far Manager (Windows) and Krusader (Linux; actually on Linux I used "Meld" instead of Araxis Merge...) The tool I'm looking for doesn't necessarily have to be free... Thank you!

    Read the article

  • Application (was Firefox) crash on first load on Ubuntu Linux on older Dell Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. No point in reinstalling Windows 2000, so I installed Ubuntu Linux on the laptop. Everything seems normal (installed, rebooted, I can log in, run GnuChess, poke about). ... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit and then (WAS: everything stops: icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. EDIT: on further investigation, the spinning icon and the mouse operated by the touchpad freeze. There's apparently a little disk activity occurring about every 5 seconds. I wait 5-10 minutes, behavior doesn't change.) A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system when Firefox is booting is that while it has an Ethernet port (that worked fine under Windows), it isn't actually plugged into an Ethernet. As this is the first Firefox boot since the Ubuntu install, maybe Firefox mishandles Internet access? Why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in). EDIT: I tried to run the Disk Manager tool, not that I cared what it was, just a menu-available application. It started up like Firefox, I get a little tag in the lower left saying Disk P*** something had started, and then the same behavior as Firefox. At this point, I don't think it's the Ethernet. Is it possible that the Ubuntu disk driver can't handle the disk controller in this older laptop? The install seemed to go fine.

    Read the article

  • Configuring ZenOSS to monitor CPU load

    - by Tom
    I'm trying to test out monitoring tools for a network at work with a coworker, but neither of us has ever used any sort of monitoring tool before. Currently we are experimenting with ZenOSS and having some difficulties. We want to populate our CPU load graphs because that is one of the primary features we are looking for in our monitoring tools, but we have been unable to populate the graphs with data. So far we have installed the wmiperformance, sqldatasource, wmidatasource, and snmpperformance (simple) ZenPacks, and the machine we are trying to monitor is running Windows XP. We have tried to model the device and everything seems to run, and we've tried to add data points to graphs, but the only options we receive for graphs are CPU and Memory. We are able to monitor services; ZenOSS recognizes the make and model of the processor, RAM, and hard drive, and is even giving us metrics on available storage, but again, we are looking for performance metrics such as CPU load and memory utilization. I realize I probably didn't provide a lot of information, but that is because we don't have a very good idea of what we are doing and can't find instructions either on the ZenOSS homepage or forums to monitor CPU load. If someone could give us step-by-step instructions on how to set up CPU load monitoring, that would probably be more beneficial to us than a diagnostic of our current setup, but regardless, if I left any important information out and you need it to answer the question, please let me know. Thank you.

    Read the article

  • Reconnecting OST with Exchange

    - by syrenity
    Hi. I have quite a big problem with a customer's MS Exchange. The server got its disk filled about 2 weeks ago, so it's currently offline. They plan to upgrade it, but are not in a hurry, as they use it mainly for OWA and back-up - the mail exchange is done via SMTP and POP3. Trying to diagnose some problem today, one of the users (following the ISP's instructions) removed the Exchange account from Outlook, which essentially left the OST orphaned. The user naturally didn't move the emails or any other data to the Archive / PST before, so these emails are located in the OST only. So currently I'm trying to figure out how to restore them. There are 2 options: 1) Make the user buy some tool to convert them to PST, and import as the archive / main Outlook file? 2) Reconnect Outlook to Exchange (once it's up), let it sync the old server content, then shut down Outlook and replace the new OST with the old one, start Outlook again in offline mode and move these files to the archive. 3) Any other method? Can someone advise what would be the best approach here? The versions used are Outlook 2007 and Exchange 2003. Thanks!

    Read the article

  • How does one skip "Windows did not shut down successfully" in Win7-64?

    - by XenonofArcticus
    Migrating an app from an expensive and unreliable dedicated embedded x86 box running WinXP-embedded to COTS hardware (Dell E6410 laptop) running normal Win7-64. At this time, it's not feasible to deploy using Windows 7 Embedded. The problem is that the system is still sort of "embedded". The power could shut off at virtually any time without prior warning. We've stripped the OS down and removed the battery capability so that it will power down as desired. The app never writes to the disk, so it's not like we're going to corrupt anything terribly. The system is essentially idle after our app is up and running (with the exception of some computation, graphics, and TCP/IP and serial communications), so the OS enters a pretty stable state rather quickly. After a power loss, however, it rightly complains that Windows did not shut down successfully and presents the user with the Windows Error Recovery text screen. If left alone, it does eventually move on and boot just fine, but we'd like to skip that step if possible. WinXP-embedded is designed to do this automatically, so I know it's possible. I've looked at the kernel switches but I didn't see anything documented for "Skip Windows Error Recovery". I've also read extensively on the startup process: http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/windows-nt-6-boot-process.html I know I can disable the auto chkdsk in the registry, but that's not the same thing either. So, how do I streamline the boot process to not hassle the user about a situation that will be the regular, normal situation?

    Read the article

  • Amazon EC2 instance missing Network Interface

    - by Sergiks
    I am running Linux on a t1.micro instance at Amazon EC2. Once I noticed brute-force ssh login attempts from a certain IP, after a little Googling I issued the following two commands (other ip): iptables -A INPUT -s 202.54.20.22 -j DROP iptables -A OUTPUT -d 202.54.20.22 -j DROP Either this, or maybe some other action like yum upgrade perhaps, caused the following fiasco: after rebooting the server, it came up without the Network Interface! I can only connect to it through the AWS Management Console Java ssh client - via the local 10.x.x.x address. The Console's Attach Network Interface as well as Detach... are greyed out for this instance. The Network Interfaces item on the left does not offer any Subnets to choose from, to create a new N.I. Please advise: how can I recreate a Network Interface for the instance? Upd. The instance is not accessible from outside: it cannot be pinged, SSH'ed or connected to by HTTP on port 80. Here's the ifconfig output: eth0 Link encap:Ethernet HWaddr 12:31:39:0A:5E:06 inet addr:10.211.93.240 Bcast:10.211.93.255 Mask:255.255.255.0 inet6 addr: fe80::1031:39ff:fe0a:5e06/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1426 errors:0 dropped:0 overruns:0 frame:0 TX packets:1371 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:152085 (148.5 KiB) TX bytes:208852 (203.9 KiB) Interrupt:25 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) What is also unusual: a new micro instance I created from scratch, with no relation to the troubled one, was not pingable either.

    Read the article

  • Avoid "privacy pitfalls" in Windows and Linux?

    - by Somebody still uses you MS-DOS
    I have a Windows and a Linux machine. On Windows, every time I visit a site, a lot of cache/history files are created on my machine. I set up my Firefox not to save anything. ...but Windows saves a lot of "temp" files, and some strange entries in the registry for files I opened (like video names). Each video I open in VLC is shown in "Last shown videos". In Windows, all opened files can be found in "Recent opened files" as well. A lot of these privacy configurations can be tweaked (VLC and "Recent opened files" in Windows) - it's a PITA doing it individually, but it's possible - but there isn't a guide to these "internal" privacy traces that are left on a Windows installation. On Linux, I just know there are these problems at the app level (like VLC). My question is: is there a complete guide to avoiding undesirable traces of what I did/watched/used on my Windows machine? (Deleting them every time the PC is restarted, or even avoiding recording this info at all) Is there a website with configuration guides for different types of software? I would like to know about Linux privacy pitfalls as well.

    Read the article

  • Repairing hard disk when Windows installation disk won't boot

    - by Echows
    I'm trying to recover some data from a faulty hard disk with Windows installed on it (on which Windows won't even boot). So far I have tried: Booting to an Ubuntu live USB stick and running ntfsfix (didn't work); Trying to mount the broken partition when running Ubuntu from the USB stick (doesn't mount); Running the photorec image recovery tool from live Ubuntu (it found some stuff but not the images I was looking for). Now as a last resort I got myself a Windows installation on a USB stick so that I can try fdisk, but the installer doesn't work. The loading screen shows up and then the installer crashes. The installer works fine on other computers. I suspect that the installer is trying to read the hard drive to see if there's something there, but when it can't read one partition, it crashes. On Ubuntu, I can mount other partitions except the one I'm interested in, so at least the hard drive is not completely dead. So the question is, what options do I have left? To be more specific, my goal is to recover some images from the faulty NTFS partition on the hard drive. Other than that, I don't care about the contents of the hard disk.

    Read the article

  • How to use AND/OR Building Block content in a Word 2007 template

    - by JimmyJames
    I am creating a Schedule of Work template and am successfully using the Developer tab and Quick Parts to allow the user to choose content on an "either/or" basis: either A; OR B; OR C; etc., essentially choosing one option from many. One Building Block control, one paragraph, nice and clean. Now what I need to do, but cannot seem to figure out, is how to allow the user to choose content on an "and/or" basis: either A AND B; A OR B AND C; B AND D AND E OR F; etc., essentially choosing several options from many on a variable basis. One Building Block control, maybe one paragraph, maybe three or more paragraphs. Not so clean. I thought of building choice options for all possible paragraph combinations, but I can have as many as 7 or 8 different paragraphs, and that solution quickly becomes unworkable. Multiple controls - some of which will be left unused - don't work either, since I cannot find an easy way to have a "Choose or Delete" control that actually deletes if "Delete" is chosen. Recommendations are most welcome.

    Read the article

  • Is it possible to have a conditional formatting cell "visually cycle" through all the formats that evaluated true?

    - by Ben
    Like the title says: "In Excel, when a cell has multiple conditional formatting rules that evaluate true, is it possible to have the cell "visually cycle" through all the formats that evaluated true? If not, suggestions on what to do would be appreciated!" I'm creating an employee schedule for a business that has multiple job areas that each need an employee assigned to cover them. The schedule is currently set up with the date on the top row, the employee list down the left column, and each employee's assigned "job area" cross-referencing with the date on the top row. Originally it was set up so that if any required "job area" didn't have an employee assigned to it, the date would (via conditional formatting) change to red. I've set it up now so that if a condition isn't met, the date will change to the color of the "job area" that doesn't have an employee assigned to it. However, there are cases where multiple job areas don't have an employee assigned, but the date will only change color based on the first condition that isn't met. It'd be nice if there was some way for the date cell to cycle through the different colors that correspond to the job areas where no one is assigned. I have a hunch that's not possible though. If it is possible, I'd love to know how to do it. And if it isn't, if anyone has any suggestions on how I can modify the Excel sheet to make it easier to identify the job areas that don't have anyone assigned to them, I would appreciate it. FYI, this schedule goes out months in advance.

    Read the article

  • 10GE network: Is it still deadly expensive? Any options?

    - by BarsMonster
    Hi! I am building a home cluster where I am going to have about 16 nodes which can live with 1G ports, but I really want to have 10GE on the file server & central node. It's all local, so no need for cables longer than 3-5m. And of course I want to spend as little money as possible (not going to spend more than the whole cluster costs) :-) What are my options? 1) The legacy solution is to take some 24-48 port 1GE switch, and connect to the file/central nodes via 4-8 aggregated links. This will work I guess, the cost is very acceptable, but I am not sure if it's ok to use that many aggregated links. And of course it would be hard to double the bandwidth when needed... :-D 2) A switch with several 10GE uplink 'ports'. As far as I can see, they all require modules which cost about $1000, so I will need 4 10G modules, and 2 10GE cards... Smells like way more than $5000+... 3) Connect the file & central node via 2 10G cards directly, and put 4 quad-port 1GE NICs on the file server. I am saving on 2 10G modules and a switch; the file server will have to do packet routing, but it's still going to have a lot of CPU left :-) 4) Any other options? Infiniband? 5) Do MyriNet adaptors work fine? I guess there are no cheaper options? 6) Hmm... Scrap the file server, put it all on the central node and provide a dedicated 1GE port for each of the nodes... This is sad...

    Read the article

  • Switching between taskbar tasks sequentially

    - by Doug Kavendek
    In Firefox and some other tabbed interfaces, I can use Ctrl-PgUp/Down to cycle through tabs sequentially, based on their order in the tab list. I associate what is what based almost entirely on its position along the tab bar and what is next to what, so this is extremely useful for the way I keep track of things in my head. However, I haven't been able to find an equivalent for actual taskbar items in Windows, such that a single shortcut will switch focus to the task item either to the left or the right of the current task in focus. There are a lot of existing shortcuts that I use (Alt-Tab, Ctrl-Esc, and their counterparts), but these use the window manager's stacking order, so it always changes based on what you've switched between in the past, and so their usefulness generally only lies in switching between two apps -- above that, I just can't keep that kind of stack in my head. The closest shortcut I've found is Winlogo-Tab, but it only moves a selector on the taskbar, so you have to hit space after moving it, and it also seems to originate the selector from the leftmost item every time (rather than relative to the current item). Am I just a weirdo and will have to write my own app to perform these actions?

    Read the article

  • Disadvantages of enabling 'Low Fragmentation Heap' LFH on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation. One of the suggestions on how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute?. It suggested flipping a setting in the site's global.asa file to 'turn on' the Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick. Set LFHObj=CreateObject("TURNONLFH.ObjTurnOnLFH") LFHObj.TurnOnLFH() application("TurnOnLFHResult")=CStr(LFHObj.TurnOnLFHResult) (Really the code isn't that important to the question). An author of a linked post reported a seemingly magic resolution to this issue, and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned: Why is this setting not enabled by default on 2003, or, if it works in 2008, why has Microsoft not issued a patch to enable it by default on 2003? I suspect the answer to the above is the same for both (if there is one). Obviously, we're testing it in a non-production environment, and doing an array of metrics and comparisons to determine if it does help us. But aside from this I'm really just trying to understand if there's any technical reason why we should do this, or if there are any gotchas that we need to be aware of.

    Read the article

  • RTorrent stops my torrents, crashes, and I have to manually re-add torrents and start them. How can I stop this cycle of doom?

    - by meder
    I cannot use Transmission, which is the best torrent client, because it's banned from one of the trackers I use, so I am forced to use rtorrent. Normally I am all for command-line programs, however rtorrent (0.8.6/0.12.6) is simply frustrating. It is not intuitive, imo. I have 400 MB left on the HD and that's more than enough to download this 200 MB avi. Rtorrent stops the download, though. It says [CLOSED] near the torrent. I do ctrl-r and that invokes the local hash check, and after that's done rtorrent simply dies (wtf?). Afterwards, it gives me rtorrent: TrackerManager::send_later() m_control->set() == DownloadInfo::STOPPED. So that leads me to open rtorrent again, then hit ENTER, type /home/meder/file.avi.torrent, down arrow, and ctrl-S. I am looking for multiple things... How can I tell rtorrent to not worry about disk space? Again, it stops the torrent if my HD only has 400 MB when the torrent I'm downloading is 200 MB (there are no other torrents). Why does ctrl-R fail hard? Why does it cause rtorrent to crash? If #2 is not solvable, can someone provide an easy way to add a torrent and start it, a more efficient method than typing the torrent name, hitting the down arrow, and ctrl-S?
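
    On the disk-space question: many sample .rtorrent.rc files ship a low_diskspace schedule (close_low_diskspace=...) that closes torrents when free space drops below its threshold, so if yours has one, raising or removing it is the usual fix - check the docs for your 0.8.6 build. On point 3: if the XML-RPC interface is enabled (scgi_port in .rtorrent.rc behind an nginx/apache bridge), adding and starting a torrent can be scripted instead of driven through the curses UI. A sketch, with a placeholder RPC URL:

        # Assumes rtorrent's XML-RPC interface is already exposed over HTTP;
        # the URL below is a placeholder for your own bridge.
        import xmlrpclib              # Python 2; use xmlrpc.client on Python 3

        rt = xmlrpclib.ServerProxy("http://localhost/RPC2")
        # load_start takes a path (or URL) to a .torrent and starts it immediately
        rt.load_start("/home/meder/file.avi.torrent")
        print(rt.download_list(""))   # sanity check: hashes rtorrent now tracks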

    Read the article

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon Load Balancer. However, when I check the SSL with a site test tool, I get the following error: Error while checking the SSL Certificate!! Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server. I converted the crt file to pem using these commands from this tutorial: openssl x509 -in input.crt -out input.der -outform DER openssl x509 -in input.der -inform DER -out output.pem -outform PEM During setup of the Amazon Load Balancer, the only option I left out was the certificate chain (PEM encoded). However, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain? UPDATE If you make a request to VeriSign they will give you a certificate chain. This chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (which is the topmost certificate) before adding it to the certificate chain box of your Amazon Load Balancer. If you are making HTTPS requests from an Android app, then the above instructions may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS: go here, click on the "retail ssl" tab and then click on "secure site" "CA Bundle for Apache Server", and copy and paste these intermediate certs into the certificate chain box. Just in case you have not found it, here is the direct link. If you are using GeoTrust certificates then the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.
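
    A quick way to see what the load balancer is actually serving (which is what the test tool is complaining about) is to pull the presented chain and print each certificate's subject and issuer; if only the site certificate comes back, the intermediate never made it into the chain box. A sketch using pyOpenSSL, with a placeholder hostname:

        # Prints the certificate chain the server presents during the handshake.
        # HOST is a placeholder for the name that resolves to the load balancer.
        import socket
        from OpenSSL import SSL

        HOST = "www.example.com"

        ctx = SSL.Context(SSL.SSLv23_METHOD)
        sock = socket.create_connection((HOST, 443))
        conn = SSL.Connection(ctx, sock)
        conn.set_tlsext_host_name(HOST.encode("ascii"))
        conn.set_connect_state()
        conn.do_handshake()

        for cert in conn.get_peer_cert_chain():
            print("%s  (issued by %s)" % (cert.get_subject().commonName,
                                          cert.get_issuer().commonName))

        sock.close()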

    Read the article

  • Use of backreferences in fail2ban filters possible?

    - by Izzy
    From time to time, I see collections of suspect "File not found" errors in my Apache logs, basically using the pattern File does not exist: /var/www/file, referer: http://my.server.com/file In human terms: the file was not found, though the referer is the file itself. A clear hacking attempt, as that's hardly possible (and the REQUEST_URIs often enough suggest the same). In my eyes a clear case for fail2ban – if I could get backreferences to work here: failregex = ^%(_apache_error_client)s File does not exist: /var/www(.+), referer: http://.+\1$ (Justin Case: the above examples assume the DIRECTORY_ROOT of that webserver being /var/www) I googled for hours, searched the fail2ban wiki up and down – but nowhere could I find a statement concerning backreferences in its filters. Are they not supported, or did I do it the wrong way? Any hints on how to make it work (apart from "dirty hacks" like first sending the request to another fake URL using mod-rewrite, and then catching on that (if anyone is interested, I can elaborate on that approach in an answer), or doing something similar using mod-security)? As an entire log line was requested: [Fri Nov 08 14:57:28 2013] [error] [client 50.67.234.213] File does not exist: /var/www/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found;, referer: http://www.myserver.com/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found; (sorry, logs were just switched, so this long candidate was the only one currently left; minor adjustments were made for privacy reasons)
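
    For what it's worth, fail2ban compiles failregex with Python's re module, so a \1 backreference is at least syntactically available; whether the surrounding filter machinery (the <HOST> substitution and per-line matching) cooperates is the open question. A standalone check of the pattern against a shortened version of the log line above, using a stand-in for %(_apache_error_client)s rather than fail2ban's real definition:

        # Standalone regex check; the client-prefix pattern is an assumption,
        # not fail2ban's actual _apache_error_client definition.
        import re

        line = ("[Fri Nov 08 14:57:28 2013] [error] [client 50.67.234.213] "
                "File does not exist: /var/www/text/files.htm, "
                "referer: http://www.myserver.com/text/files.htm")

        prefix = r"^\[[^]]+\] \[error\] \[client (?:\d{1,3}\.){3}\d{1,3}\] "
        failregex = prefix + r"File does not exist: /var/www(.+), referer: https?://[^/]+\1$"

        print(bool(re.search(failregex, line)))   # True -> the \1 backreference matches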

    Read the article

  • iSCSI performance questions

    - by RyanLambert
    Hi everyone, apologies for the long-winded post in advance... Attempting to troubleshoot some iSCSI sluggishness on a brand new vSphere deployment (still in test). The layout is as such: 3 vSphere hosts, each with 2x 10GB NICs plugged into a pair of Nexus 5020s with a 10gig back-to-back between them. NICs are port-channeled in an active/active redundant fashion (using vPC-mac pinning for those of you familiar with N1KV). Both NICs carry service console, vmotion, iSCSI, and guest traffic. iSCSI is on a single subnet/single VLAN that is not routed through our IP network (strictly layer 2). Had this been a 1gig deployment, we probably would have split the iSCSI traffic off onto separate NICs, but the price/port gets rather ridiculous when you start throwing 4+ NICs at a server in a 10gigabit infrastructure, and I'm not really convinced it's necessary. Open to dialogue/tech facts re: this, though. At this point even a single VM guest will boot slowly from iSCSI storage (EMC CX4 on the same Nexus 5020 10gig switches), and restores of VMs from iSCSI take about twice as long as we'd expect them to. Our server folks mentioned that if we split the iSCSI off onto its own NIC, performance seems significantly better. From a network perspective, I've run through the variables I can think of (port configuration errors, MTU problems, congestion etc.) and I'm coming up dry. There really is no other traffic on these hosts other than the very specific test being performed at the time. An important thing to note is that guest traffic works just fine... it seems storage is the only thing affected by whatever gremlin exists. Concluding that we're not 'overutilizing' the network infrastructure since we're doing hardly anything, I'm just looking for some helpful tips/ideas we can use to resolve this... preferably without hurling extra 10gig NICs at it that are going to sit around 10% utilization while we've got 70+% left on our others.

    Read the article

  • Why does my computer crash?

    - by chobo2
    Hi, my computer keeps crashing and I don't know why. At first I thought it was suddenly crashing because I had my CPU overclocked, so I set my CPU back to its regular speed. This did not help. I then thought it was because 2 sticks of my memory were from a computer that suffered a power surge. However, I just ran the Windows memory diagnostic tool (extended) and after about 6 hours of testing my memory it found no errors. So now the only thing that is left is Windows 7 64-bit. I first overclocked my CPU for a couple of months while running XP and never had a problem. I installed the memory and Windows 7 at the same time. But I'm not sure it is my memory now, since it passed the diagnostic tests. However, I am not sure it is Windows 7 either, as I have installed it twice in the last year. I really don't want to go back to XP to find out if this is the case. So here are my blue screens of death (from bluescreen): https://sites.google.com/site/myerrorswin7/errors (I hope you enjoy my great site lol) As you can see, most of them are different: NTFS_FILE_SYSTEM, KMODE_EXCEPTION_NOT_HANDLED, BAD_POOL_HEADER, IRQL_NOT_LESS_OR_EQUAL, SYSTEM_THREAD_EXCEPTION_NOT_HANDLED, SYSTEM_SERVICE_EXCEPTION

    Read the article

  • python mysqldb - mysql server gone away - can't reconnect

    - by david.barkhuizen
    When attempting to import a bunch of data into MySQL tables using Python and MySQLdb, I run into the following error, '2006 - MySQL server has gone away', and then I am unable to reconnect again within the script. I am initially re-using a connection object across transactions (delineated by conn.commit()); when I first encounter this exception, if I create a new connection by calling MySQLdb.connect(), the new connection also fails with the same exception. This error does not occur immediately - I can pump a fair amount of data into the db - but it then faithfully occurs after I have inserted a couple of thousand records, so roughly once the db has committed a certain transaction volume, it always falls over like this. If I rerun the script, WITHOUT restarting the db server, it resumes where it left off, pumps in some data, then falls over again. Before recommendations to change time-out settings: does anyone know why I am not able to establish a new connection after the initial failure? Even if I try a couple of times, waiting a couple of seconds between each. (btw, I'm running Windows 7, MySQL server 5.1.48, mysqldb 1.2.3.gamma.1, python 2.6)
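
    Without knowing why the server drops the connection (2006 during bulk loads is often max_allowed_packet being exceeded, or the server itself restarting, rather than a timeout), one pragmatic pattern is to treat the connection as disposable and rebuild it with a short retry loop whenever OperationalError 2006/2013 shows up. A hedged sketch with placeholder credentials, table, and batch source:

        # Sketch only: PARAMS, the `items` table, and load_batches() are placeholders.
        import time
        import MySQLdb
        from MySQLdb import OperationalError

        PARAMS = dict(host="localhost", user="loader", passwd="secret", db="warehouse")

        def fresh_connection(retries=3, delay=5):
            last = None
            for _ in range(retries):
                try:
                    conn = MySQLdb.connect(**PARAMS)
                    conn.ping()                     # raises if the server is unreachable
                    return conn
                except OperationalError as exc:
                    last = exc
                    time.sleep(delay)
            raise last

        def load_batches():
            # placeholder for the real import source
            yield [(1, "x"), (2, "y")]

        conn = fresh_connection()
        cur = conn.cursor()
        for batch in load_batches():
            try:
                cur.executemany("INSERT INTO items (a, b) VALUES (%s, %s)", batch)
                conn.commit()
            except OperationalError as exc:
                if exc.args[0] not in (2006, 2013):  # server gone away / lost connection
                    raise
                conn = fresh_connection()            # old handle is dead; rebuild it
                cur = conn.cursor()
                cur.executemany("INSERT INTO items (a, b) VALUES (%s, %s)", batch)
                conn.commit()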

    Read the article
