Search Results

Search found 7872 results on 315 pages for 'unsupported hardware'.


  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic etc.) without transferring the data into a new boot system - partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list:
    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run DiskWarrior on the copy, repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each one to watch for issues (interestingly, no issues occurred except with the Documents folder - yet when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble. It appears it may not be the content but the quantity or specific combination of data that results in problems)
    - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files
    Between each of the above steps I stopped Spotlight (searched for anything beginning with md in Activity Monitor - All Processes - and quit it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it again. In each case the same issue occurs - Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck: it keeps predicting 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject it without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight estimates the remaining time forever and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (I used both USB and FireWire drives). I have tried this on several machines (3 to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option, because the owner has no clue where things are - the data on the volume dates back to music projects and compositions from 2003 and before - and he needs to be able to query for results. Anyone got any ideas? Thanks, M
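
    For reference, the stop/wipe/rebuild cycle described above can be scripted with mdutil; this is only a sketch, and the volume name /Volumes/Archive is a placeholder for the real mount point:

        # Hypothetical mount point - substitute the actual volume name.
        VOL="/Volumes/Archive"

        sudo mdutil -i off "$VOL"              # stop indexing on this volume
        sudo rm -rf "$VOL/.Spotlight-V100"     # remove the existing index store
        sudo mdutil -i on "$VOL"               # re-enable indexing
        sudo mdutil -E "$VOL"                  # erase and rebuild the index from scratch

        mdutil -s "$VOL"                       # report the current indexing state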


  • Alienware m15x (older model) BSOD investigation

    - by Crishu
    A friend of mine asked me to help him with an Alienware m15x laptop that has a bit of a service history. It was bought in June 2008 and serviced in January 2009 for a random FPS-drop problem; Alienware returned it saying nothing was wrong. The laptop still had hiccups, but after juggling a few drivers and settings the FPS drops weren't as noticeable. Eventually it died in Sept. 2009: it would not boot up, locking itself on a white/gray screen (I think it was overheating, clocking in at 100 degrees Celsius). So back to Alienware it went. They replaced the GPU and all was fine - up until these blue screens started showing up. One other thing that was updated was the HDD, together with a Windows 7 reinstall, in August. From then on the BSODs seem to have started. Could this be the culprit? Why? The original Windows was Vista, but it was upgraded with a digital download/purchase of Windows 7 Home Premium and activated after installing Windows. There were no errors on the old HDD, just on the latest installation. LE: Do note that the old HDD is now back in use to see if the issues re-occur. So please, I am in need of someone who can interpret these Windows dump files: Minidump. I may have come to some conflicting conclusions, so if someone can clarify each dump/date and the probable cause/error it had, plus a final conclusion or solution, we would be very grateful. Also please consult the report for other system info I omitted (same link, code: XRWIVLWG). If I missed something or if you have any other questions I'll be happy to answer them. Thank you. Good day.
    System summary:
        Processor: Intel(R) Core(TM)2 Duo CPU T9300 @ 2.50GHz
        Network adapters: Broadcom NetLink (TM) Gigabit Ethernet; Intel(R) Wireless WiFi Link 4965AGN
        Video adapter: NVIDIA GeForce 8800M GTX - driver date 19.08.2009, driver version 8.16.11.8681, provider NVIDIA, INF file oem19.inf, hardware ID PCI\VEN_10DE&DEV_060C&SUBSYS_0770152D&REV_A2, location PCI bus 1, device 0, function 0, BIOS string version 62.92.34.0.8, installed drivers nvd3dum (8.16.11.8681), nvwgf2um, nvwgf2um
        Hard disk drives: ST9120823ASG (the older 120 GB drive); WD32000BEKT (new 320 GB drive with the fresh OS)


  • What are the most likely bottlenecks determining the performance of CamStudio screen recording?

    - by Steve314
    When doing screen recording, I can get a frame rate of maybe 15 frames per second for the full screen on my 1080p monitor using the Xvid codec. I can increase the speed a bit by recording a region, changing screen modes, and tweaking other settings, but I'm curious which hardware upgrades might give me the biggest bang for my buck. My PC is budget, but modern:
    - Athlon II X4 645 (3.1GHz, quad core, limited cache) processor
    - 4GB single-channel DDR3 1066 RAM
    - ASRock motherboard with NVIDIA GeForce 7025/nForce 630a chipset
    - ATI Radeon HD 5450 graphics card - 512MB on board, not configured to steal system RAM
    I dual-boot Windows XP and Windows 7. For the moment, XP is my bigger performance concern, as it's still my getting-things-done OS as opposed to my browser-host OS. My goal is to make a few programming-related tutorials. For a lot of that I don't need screen recording - I can make up some slides, record audio with the PC switched off, yada yada. When I do need screen recording, I'll mostly be recording Notepad++, Visual Studio or a command prompt. Occasionally I may be recording some kind of graphics or diagram program and using my pre-Bamboo cheap Wacom tablet - I have the CS2 versions of Photoshop and Illustrator, but I'd be much more likely to use Microsoft Paint. Basically, what I'll be recording won't make huge demands on the machine - but recording a fair number of pixels (720p preferred) would be useful. What's particularly weird: not so long ago I still had a five-year-old Pentium 4 based PC, and (with the same 1080p monitor) it could record at not far from the same frame rate. So clearly the performance issues are more subtle than just throw-money-at-it. My first guess would be that the main bottleneck is the bandwidth for transferring data to/from the graphics card. Is that likely to be correct? In support of that, a Radeon HD 5450 review puts the memory bandwidth at only 12.8 GB/s. If you can't get data out of graphics memory quickly, you can't transfer it back to system memory quickly. Apparently, that's slower than some top-end cards from 2002.


  • Model M Keyboard inputs incorrect characters after logging in to Fedora

    - by mickburkejnr
    I recently bought a 24-year-old IBM Model M keyboard. From what I gather, it had been left on a shelf for the last 5 years, so you can imagine the amount of dust, dirt and crap that was on it. Before cleaning it, I plugged it into my laptop (running Fedora 17) using a PS/2-to-USB adapter. What I found was that, while it still works, the keys I press don't correspond to what is displayed on the screen. So for example, when I type S on the keyboard, I get ß displayed on the screen instead. At the time I put this down to the adapter not working properly. Since then I stripped the keys off the keyboard and cleaned the whole thing - it looks like it's just come out of the box! I then plugged it into my computer (also running Fedora 17) via a standard PS/2 plug. The computer loaded up to the login screen, I typed in my password, pressed Enter, and logged straight in to my machine. At this point I opened up a text editor and started typing some stuff. To my horror, the keystrokes I was entering weren't coming up as intended. What came up instead were characters that would map to the pressed key, but only under a different keyboard language setting. I opened up a program to see which keyboard language had been selected, and the correct one for the keyboard was selected (which is UK in my case). I opened up a window that shows which characters map to which keys, pressed every single key on the keyboard, and every corresponding block representing each key lit up. I went back to the text editor to try again, but I was still getting these random characters. What's more, the backspace key would not work, although in the other utility it would flash when pressed. What I know is that at the login screen the keyboard must have entered the correct characters, otherwise I wouldn't have been able to log in. Furthermore, keys that don't respond while using a text editor are sending signals to the computer, as illustrated in that keyboard utility. The question is: why are random characters displayed when they really shouldn't be? Would this be a hardware fault or a software issue?
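
    One quick software-side check worth doing is to ask X what layout it is actually applying in the desktop session and to force the UK layout by hand; a sketch (layout name and tool availability on Fedora 17 assumed):

        # Show the layout the X session thinks is active.
        setxkbmap -query

        # Force the UK layout for this session, then retest in the editor.
        setxkbmap gb

        # Watch which keysym X reports when the S key is pressed.
        xev | grep --line-buffered keysym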


  • SMBfs mounting OK, listing OK, Read KO, smbclient OK

    - by Kwaio
    I've tried to make the title as meaningful as I could, but it still looks ugly. The premises: we are using RHEL3-U8 as the OS on most servers here - don't ask me why or suggest an upgrade, it's not on today's schedule. That means the kernel used is 2.4.21. I have no access to the remote server, but I know it is a NetApp NAS rack.
        $> smbclient --version
        Version 3.0.9-1.3E.9
    Here is the /etc/fstab line:
        //NASHOSTNAME/share /mnt/mydir smbfs ro,uid=123,gid=123,workgroup=XXXX,credentials=/somefile 0 0
    Here is the corresponding mount output line:
        //NASHOSTNAME/share on /mnt/mydir type smbfs (0)
    The symptoms: I can list the share without problems, even cd in there. The issue appears if I try to read any file:
        $> cat /mnt/mydir/fileX.txt
        cat: /mnt/mydir/fileX.txt: Input/output error
    In the system logs (/var/log/kernel for example) the following errors appear:
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5
        Jul 30 15:40:02 hostname kernel: smb_readpage_sync: fileX.txt open failed, error=-5
    The ERRHRD code 0x001F error is "General hardware failure", although it seems Samba sometimes uses it for a different purpose; see http://www.ubiqx.org/cifs/SMB.html [Strange behaviour alert]. Additional information: there is another SMB mount point on the system, pointing to a (Linux) host running Samba, and that one works. What I have tried: I added debug=4 to the mount options and remounted the share, and the logs still look the same. I tried to mount the share with smbclient and I am able to fetch files with the get command. Both targets are in the same subnet, so a network problem should be out, even though the LAN goes through a VPN with optimizers; the MTU has already been decreased to 1450. I can also mount the share through NFS, but then the files are all root.root 700 and I need to read them with another user...
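
    Since smbclient can read the files while the in-kernel smbfs client cannot, one experiment is a manual mount with different smbfs options; this is only a sketch - the rsize and lfs options are guesses to play with (check man smbmount for what your build actually accepts), not a known fix:

        umount /mnt/mydir

        # Remount by hand with a smaller read size and large-file support,
        # then try a read while watching the kernel log.
        mount -t smbfs -o ro,uid=123,gid=123,workgroup=XXXX,credentials=/somefile,rsize=8192,lfs \
            //NASHOSTNAME/share /mnt/mydir
        cat /mnt/mydir/fileX.txt > /dev/null
        tail /var/log/kernel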


  • Cannot get sound over HDMI in Kubuntu 9.10

    - by user32509
    I have used an HDMI cable to connect my LCD (which is connected to my speakers) to my NVIDIA GTX 275 graphics card. I cannot get the sound output to work. The hardware itself is working properly - I tested it under Windows. Currently I am running Kubuntu 9.10 64-bit with NVIDIA driver 190.53. The sound output worked fine before I installed the HDMI connection. (The output below is in German - I can change that if you tell me how :))
        aplay -l
        **** Liste von PLAYBACK Geräten ****
        Karte 0: Intel [HDA Intel], Gerät 0: ALC889A Analog [ALC889A Analog]
          Untergeordnete Geräte: 1/1
          Untergeordnetes Gerät '0: subdevice #0
        Karte 0: Intel [HDA Intel], Gerät 1: ALC889A Digital [ALC889A Digital]
          Untergeordnete Geräte: 1/1
          Untergeordnetes Gerät '0: subdevice #0

        aplay -L
        front:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog / Front speakers
        surround40:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog / 4.0 Surround output to Front and Rear speakers
        surround41:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog / 4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog / 5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog / 5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog / 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        iec958:CARD=Intel,DEV=0
            HDA Intel, ALC889A Digital / IEC958 (S/PDIF) Digital Audio Output
        null
            Discard all samples (playback) or generate zero samples (capture)
        pulse
            Playback/recording through the PulseAudio sound server
    And I disabled mute in KMix on all channels :) Edit: lspci -v
        ...
        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02)
            Subsystem: Giga-byte Technology Device a022
            Flags: bus master, fast devsel, latency 0, IRQ 22
            Memory at ea400000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Capabilities: [60] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
            Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
            Capabilities: [100] Virtual Channel <?>
            Capabilities: [130] Root Complex Link <?>
            Kernel driver in use: HDA Intel
            Kernel modules: snd-hda-intel
        ...
        cat /proc/asound/version
        Advanced Linux Sound Architecture Driver Version 1.0.20.

        lsmod | grep snd_hda_intel
        snd_hda_intel          31880  2
        snd_hda_codec          87584  2 snd_hda_codec_realtek,snd_hda_intel
        snd_pcm                93160  3 snd_hda_intel,snd_hda_codec,snd_pcm_oss
        snd                    77096  16 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq_oss,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        snd_page_alloc         10928  2 snd_hda_intel,snd_pcm
    I think I am missing some kind of HDMI module? Is there such a thing?
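
    Since aplay only lists the onboard ALC889A codec, a first check is whether the system sees any HDMI audio device at all; a sketch (this is diagnostic only - on a GTX 275, HDMI audio is typically fed from the motherboard's S/PDIF output via a pass-through cable rather than generated on the card, so an empty result can be normal):

        # Any audio functions on the PCI bus besides the Intel HDA controller?
        lspci | grep -i audio

        # Every ALSA card the kernel registered.
        cat /proc/asound/cards

        # If only the Intel device exists, test its digital (S/PDIF) output,
        # which is what would feed the card's HDMI pass-through.
        speaker-test -D iec958:CARD=Intel,DEV=0 -c 2 -t wav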


  • ZFS Recover from Faulted Pool State

    - by nickv2002
    I have a six-disk ZFS raidz1 pool and had a recent failure requiring a disk replacement. No problem normally, but this time my server hardware died before I could do the replacement (but after, and unrelated to, the drive failure as far as I can tell). I was able to get another machine from a friend to rebuild the system, but in the process of moving my drives over I had to swap their cables around a bunch until I got the right configuration where the remaining 5 good disks were seen as online. This process seems to have generated some checksum errors for the pool/raidz. I have the 5 remaining drives set up now and a good drive installed and ready to take the place of the drive that died. However, since my pool state is FAULTED, I'm unable to do the replacement:
        root@zfs:~# zpool replace tank 1298243857915644462 /dev/sdb
        cannot open 'tank': pool is unavailable
    Is there any way to recover from this error? I would think that having 5 of the 6 drives online would be enough to rebuild the right data, but that doesn't seem to be enough now. Here's the status log of my pool:
        root@zfs:~# zpool status tank
          pool: tank
         state: FAULTED
        status: One or more devices could not be used because the label is missing or invalid.
                There are insufficient replicas for the pool to continue functioning.
        action: Destroy and re-create the pool from a backup source.
           see: http://zfsonlinux.org/msg/ZFS-8000-5E
          scan: none requested
        config:
                NAME                     STATE     READ WRITE CKSUM
                tank                     FAULTED      0     0     1  corrupted data
                  raidz1-0               ONLINE       0     0     8
                    sdd                  ONLINE       0     0     0
                    sdf                  ONLINE       0     0     0
                    sdh                  ONLINE       0     0     0
                    1298243857915644462  UNAVAIL      0     0     0  was /dev/sdb1
                    sde                  ONLINE       0     0     0
                    sdg                  ONLINE       0     0     0
    Update (10/31): I tried to export and re-import the array a few times over the past week and wasn't successful. First I tried:
        zpool import -f -R /tank -N -o readonly=on -F tank
    That produced this error immediately:
        cannot import 'tank': I/O error
        Destroy and re-create the pool from a backup source.
    I added the -X option to the above command to try to make it check the transaction log. I let that run for about 48 hours before giving up, because it had completely locked up my machine (I was unable to log in locally or via the network). Now I'm trying a simple "zpool import tank" command and that seems to run for a while with no output. I'll leave it running overnight to see if it outputs anything.
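
    Before destroying anything, it may be worth confirming what ZFS has recorded on each surviving disk and what it can see for import; a sketch (device names are taken from the status output above and assumed unchanged on the new hardware):

        # Dump the ZFS label from each member disk to confirm pool GUIDs and device paths.
        for d in /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh; do
            echo "== $d =="
            zdb -l "$d" | head -n 40
        done

        # Ask ZFS to scan /dev for pool members and report what it would import,
        # without actually importing anything.
        zpool import -d /dev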


  • Holding off Windows 2000/3 Server in Shutdown

    - by user1668993
    We have a C# VS2010 application running on a Windows 2000 Server box (there is also a Windows 2003 Server box) as pretty much the only application running. We remove power from the box. There is a short duration battery (maybe 3 minutes of power) which then waits 10 seconds and then decides things are coming down and notifies Windows that it needs to shut down. Windows sends a CTRL_SHUTDOWN_EVENT event to the application which fields it and tries to keep Windows from going down for a while to let another computer which communicates with this one time to do some file work on the first computer. It does this by a timing loop and after the loop is over, it exits gracefully and the computer shuts down. Nice plan but it doesn't work. The application gets to maybe 20 seconds and the application is forcibly killed by Windows and Windows shuts down. At 90 seconds, the hardware firmware running the battery turns off power to the computer. I have tried searching to find out how to hold off Windows for a bit of time. I tried creating (it wasn't there) the HKEY_LOCAL_MACHINE subtree: \SYSTEM\CurrentControlSet\Control\WaitToKillAppTimeout registry key to 60000 but though it seemed to keep the popup from happening, Windows itself died at about the same amount of time -- we think without having the opportunity to shut itself down gracefully. Maybe the registry key worked but wasn't enough. Basically I have an "ill-mannered" application which is refusing to shut down (for the best of reasons) and without the registry key thing, Windows eventually shuts it down anyway and then shuts itself down. With the registry change, we think what is happening is that Windows doesn't shut down the application but Windows itself is killed suddenly without shutting down but power is still not pulled for about another minute, and then power is pulled. So maybe we have layers here. First there is how long the application tries to stay open. Then there is how long Windows is prepared to allow it to stay open. Then there is ... something... which kills windows. Then there is the power loss. Anyone have any ideas how we can get windows to stay open and in operation say to 70 seconds instead of about 20? Is our registry key right, but not enough? Is there some additional key we need to set to determine how long after windows is notified of a shutdown before it just kills itself? Thanks in advance.


  • Have a server, need to figure out a method of backup

    - by PolishHurricane
    My company has an older Dell 2650 server running Arch Linux x64: http://www.dell.com/downloads/global/products/pedge/en/2650_specs.pdf (2 x 2.4GHz Intel Xeon with around 3287 MB of RAM according to "free -m"). We use it to host our internal company site and to post some information from our orders to, and we'd like the ability to keep it up as much as possible. What we require:
    - It needs to always be functional from 8am to 4pm, for our data-entry person to use it and for others to do the other things required on it.
    - If it goes down, we need a quick way to get the machine running again.
    - If it goes down, we would like to have the data backed up.
    Some of the major problems include:
    - The server's old and it may have memory issues.
    - We don't know when one of the hard drives could fail.
    - Our power goes out here once in a while. We have a battery backup, but that's pretty much it, and it's not for the long term.
    If the server does go down, we have another system in place to store order information that comes in while it's down and repost it when it's back, but we need it up during the day. So we're wondering, what options should we consider? These are the things we thought of, sort of:
    - Set up RAID 1, but that would involve wiping everything, right? If we do that, how would we transfer the data over without messing up the server?
    - We could buy an extra server or two off eBay for $100, the same model - is that practical, or should we get something else?
    - Should we buy a PC or another, better server and host off that, because it would be, if anything, easier to exchange parts?
    - Should we keep extra parts handy in case it implodes?
    - Should we buy/use backup software? We hear Drobos are cool, but suck. Perhaps there is a software solution to this problem that backs up to another machine or gets us up and running again quickly.
    Also, if we are to purchase hardware, what is decent? Does anybody know of a good option for Arch Linux/Linux? We both know a ton about computers but we're kind of unsure what step to take with this, especially with this type of server. Thanks
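
    For the backup requirement, a minimal nightly copy of the important directories to a second machine could look like this; a sketch only - the paths and the backup host are placeholders, and it assumes key-based SSH access to that host:

        #!/bin/sh
        # e.g. /etc/cron.daily/backup-site on the Dell 2650
        SRC="/srv/http /home/orders /etc"
        DEST="backup@backuphost:/backups/dell2650"

        # -a preserves permissions and times; --delete keeps the mirror in sync.
        rsync -a --delete $SRC "$DEST"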


  • Laptops on Windows Domain sometimes have problems accessing internet when off-site

    - by FSUScoot
    Hi all-- We've had this problem for a long time. When users travel, sometimes they can't get internet access from a wired or wireless connection. Here are a couple examples: 1) A user goes to a hotel and tries to access the wireless in their room. They can connect to the access point. They open a web browser and they can't get re-directed to the hotel's login page. Because they can't log in, there's no internet access. 2) A user goes to another laboratory/university and tries to access the wired network. They connect, link is fine, PC gets IP from DHCP but no internet access. There's no login page to be re-directed to. It should just "work". What I've found is that it's a DNS issue. Because the computer is on a Windows Domain, it seems it MUST use our DNS servers. Even if you connect to an outside network and do an ipconfig /all, it looks like everything is ok. It'll even show their DNS servers listed in the config. The computer just won't use the other network's DNS server. I found a reg key that keeps our DNS servers listed and it seems that they take priority every time: HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient All the values under that key are for our AD domain. NameServer and Searchlist never change. What I've found is if the user edits the NameServer string and puts the DNS server of the network they're on, everything works just fine. They get re-directed to the hotel's correct login page or their internet access starts working. It's only a problem if the network they're on blocks outside DNS or a hotel that uses an internal name in their front page redirection that only their DNS server knows about, i.e., not public. If the re-direct page starts with an IP, like 10.10.10.10, it'll work just fine. Obviously this isn't a fix for everyone. Most of my users are pretty knowledgeable so it’s easy for me to walk them through or send them a .reg file that they can edit and run. This problem isn't limited to Windows 7. It was like this with XP as well. It's not hardware related. The problem exists on both wired and wireless, Intel or Broadcom, laptops or desktops. Anyone else have this problem? Is there a GPO I can change that I missed? Got a good work-around for this? Thanks for any help!


  • If Nvidia Shield can stream a game via wifi, why can I not do the same via ethernet to any other PC?

    - by Enigma
    I think it absurd that a wireless game streaming solution is the *first to hit the market when a 1000mbps+ Ethernet connection would accomplish the same feat with roughly 6x the available bandwidth. I can only assume that there must be some reason behind this or a limitation preventing this, but what? 150mbps wifi is in no way superior to a 1000mbps LAN connection aside from well wireless mobility. Not only that but I have a secondary laptop and desktop which should by hardware comparison completely outperform anything the Tegra in the Nvidia Shield can do. Is this all just a marketing scheme to force people to buy the shield for the streaming benefit? Chief among these is that NVIDIA’s Shield handheld game console will be getting a microconsole-like mode, dubbed “Shield Console Mode”, that will allow the handheld to be converted into a more traditional TV-connected console. In console mode Shield can be controlled with a Bluetooth controller, and in accordance with the higher resolution of TVs will accept 1080p game streaming from a suitably equipped PC, versus 720p in handheld mode. With that said 1080p streaming will require additional bandwidth, and while 720p can be done over WiFi NVIDIA will be requiring a hardline GigE connection for 1080p streaming (note that Shield doesn’t have Ethernet, so this is presumably being done over USB). Streaming aside, in console mode Shield will also support its traditional local gaming/application functionality. - http://www.anandtech.com/show/7435/nvidia-consolidates-game-streaming-tech-under-gamestream-brand-announces-shield-console-mode ^ This is not acceptable for me for a number of reasons not to mention the ridiculousness of having a little screen+controller unit sitting there while using a secondary controller and screen instead. That kind of redundant absurdity exemplifies how wrong of a solution that is. They need a second product for this solution without the screen or controller for it to make sense... at which point your just buying a little computer that does what most other larger computers do better. All that is required, by my understanding, is the ability to decode H.264 video compression and transmit control/feedback so by any logical comparison, one (Nvidia especially) should have no difficulty in creating an application for PC's (win32/64 environment) that does the exact same thing their android app does. I have 2 video cards capable of streaming (encoding) H.264 so by right they must be capable of decoding it I would think. I haven't found anything stating plans to allow non-shield owners to do this. Can a third party create this software or does it hinge on some limitation that only Nvidia can overcome? (*) - perhaps this isn't the first but afaik it is the first complete package.


  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing table index corruption almost every day on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but 6 months after we installed Advantage Database Server (which allows our website to access some of the tables) we started to get index corruption problems, and we don't know whom to blame. We've tried to exclude all possible causes of this corruption. All users now work in terminal mode, so no network problems can cause it, and OpLocks also can't be a reason. We changed hardware, network cards and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS - because it should be working. Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the other user's change. Is that possible theoretically? Is it possible that this problem is caused by improper file server or caching settings? Is it possible that normal users use non-cached data and ADS is using cached data, or vice versa? Is it possible that each terminal user has its own cache? Or maybe the problem is RAID caching somehow interfering with Windows Server caching? Or maybe there are some special Windows Server settings for working with DBF tables that are being written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files to check it? Sometimes we get an index crash twice a day; sometimes everything is fine for 5 days in a row. Today only one user was working with the database in the evening (usually 30-50 users are working simultaneously during working hours), so the load on the server was almost zero. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA even though it performs only SELECT queries. ADS does do UPDATE/INSERT queries, but less frequently - not during regular synchronizations, but only when an order is placed by a website visitor. Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption


  • Mouse Error Code 24. Windows 7

    - by Cj.
    I've had the same mouse for a while, and it was working fine until one day it started giving me a message about a device not working properly. I tried updating the drivers and re-installing; I even deleted the old drivers in case my computer was a little confused. It never made a difference, and my mouse seemed to be working just fine despite the permanent error in Device Manager. I looked it up several times online, but I never found anything I could actually use: when I go to official websites, I always get the same responses - plug it into a different port, reinstall the drivers, "install Silverlight before you can watch this tutorial", try it on a different machine - so I gave up on that. But now I have a real problem: lately my little strange error has evolved into a full-blown error code 24, and my mouse is starting to turn on and off randomly, especially when it is being used, though I do hear it go "badum..dadum" when I'm off doing something else. When I looked up error code 24, I really didn't find much other than its meaning:
        Code 24: This device is not present, is not working properly, or does not have all its drivers installed. (Code 24)
        Cause: The device is installed incorrectly. The problem could be a hardware failure, or a new driver might be needed. Devices stay in this state if they have been prepared for removal. After you remove the device, this error disappears.
    But I have tried uninstalling the device entirely several times, and it goes right back to its previous state with error 24, turning on and off randomly. What do I do? I cannot afford to take it to a repair place, and I can't really afford a new mouse either - I refuse to buy cheap ones, as I am a gamer in need of more than 3 buttons, and a good grip is important. Could there possibly be some confusion in the registry? I do remember getting some early problems after I upgraded from Vista to Windows 7. But I hardly dare go in there unless I'm 100% certain of what I'm going for, and I can honestly say I am at a loss here. Edit: it is a USB mouse we're talking about here - an MX™ 518 Optical Gaming Mouse (Logitech). Edit 2: I am seeing no rupture, so it must be on the inside of my mouse, or inside the rubber protecting the cable - that would be really inconvenient to search for.


  • Should I choose KVM/XEN over OpenVZ or use them together?

    - by Krystian
    I've got a dual Xeon E5504 server with (for now) only 8GB of RAM. Storage isn't impressive either: 3x 146GB SAS in RAID 5 + 500GB SATA drives. Currently it works as a development server, but it's over-specced for our needs, and since our development methods have changed over the last 2 years we decided it will work as a production system for some of our applications; in addition, we would like to have a separate system for testing/research. Our apps are mainly web apps deployed on Tomcats (plural, as some of the apps require older versions) and connected to Postgres. I would like to have a production system where only httpd + Tomcat + the DB are set up and nothing else runs there - a sterile system. Apart from that, I would like a test system where I can play with different JVM settings, deploy my test apps, play with Tomcat/httpd settings and restart them without interfering with the production system. Beyond that, I would like to be able to play with different Linux flavours, with newer kernels, to test how they work etc. I know this is not possible with OpenVZ and I would have to choose KVM for that. I am thinking about merging the two: setting up KVM to be able to work with different systems (Linux only, to be frank) and using OpenVZ to set up separate machines for my development needs. I would simply go with that, but reading here and there about the performance impact full virtualization has compared to containers, and looking at the specs of my server, makes me think twice about it. I don't want to lose too much performance, especially because of the nature of my apps (a few JVMs running at the same time). This will be my first time with virtualization, apart from using desktop VirtualBox/VMware Server. Although I am a fast learner, I don't want to mess with the main system so much that it breaks the production apps or makes them crawl. Although they are more or less internal apps and they don't produce much load, they need to be stable. I've read that a KVM host is a normal Linux installation and it allows you to run normal processes on it. If that is so, does it allow running OpenVZ as well? I mean... can I have KVM and OpenVZ running on the same system/kernel? Or do I have to set up another system to run OpenVZ containers? How much performance impact can this have for me? Will my hardware suffice? Oh, and one more thing... unfortunately I'm quite limited with funds, so I'm looking for a free solution only :/


  • How should we serve files in a small bioinformatics cluster?

    - by cespinoza
    We have a small cluster of six Ubuntu servers. We run bioinformatics analyses on this cluster. Each analysis takes about 24 hours to complete, each Core i7 server can handle 2 at a time, takes as input about 5GB of data and outputs about 10-25GB of data. We run dozens of these a week. The software is a hodgepodge of custom Perl scripts and 3rd-party sequence alignment software written in C/C++. Currently, files are served from two of the compute nodes (yes, we're using compute nodes as file servers). Each node has 5 1TB SATA drives mounted separately (no RAID) and pooled via GlusterFS 2.0.1. Each has 3 bonded Intel PCI gigabit Ethernet cards attached to a D-Link DGS-1224T switch ($300, 24-port, consumer-level). We are not currently using jumbo frames (not sure why, actually). The two file-serving compute nodes are then mirrored via GlusterFS. Each of the four other nodes mounts the files via GlusterFS. The files are all large (4GB+) and are stored as bare files (no database etc.), if that matters. As you can imagine, this is a bit of a mess that grew organically without forethought, and we want to improve it now that we're running out of space. Our analyses are I/O-intensive and this is a bottleneck - we're only getting 140MB/sec between the two file servers, maybe 50MB/sec from the clients (which only have single NICs). We have a flexible budget which I can probably get up to $5k or so. How should we spend our budget? We need at least 10TB of storage fast enough to serve all nodes. How fast/big does the CPU/memory of such a file server have to be? Should we use NFS, ATA over Ethernet, iSCSI, GlusterFS, or something else? Should we buy two or more servers and create some sort of storage cluster, or is 1 server enough for such a small number of nodes? Should we invest in faster NICs (say, PCI Express cards with multiple connectors)? The switch? Should we use RAID? If so, hardware or software? And which RAID (5, 6, 10, etc.)? Any ideas appreciated. We're biologists, not IT gurus.
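
    Before spending the budget, it can help to pin down whether the current bottleneck is the network or the disks; a quick measurement pass might look like this (a sketch - the hostname and file path are placeholders, and iperf needs to be installed on both ends):

        # On one of the file-serving nodes, start a throughput listener:
        iperf -s

        # On a compute node, measure raw TCP throughput to that server over 30 seconds:
        iperf -c fileserver01 -t 30

        # On the file server itself, measure raw sequential read speed of one brick,
        # bypassing GlusterFS:
        dd if=/srv/brick1/some-large-input-file of=/dev/null bs=1M count=4096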


  • Detection of battery status totally messed up

    - by Faabiioo
    I already posted this question on the Ubuntu forum and Stack Overflow; I am reposting it here in the hope of finding some different opinions about the problem. I have an Acer TravelMate 5730, which is 3 years old, running Ubuntu 10.04 LTS. One year ago I changed the battery because the old one died. Since then everything worked like a charm. A week ago I was using my laptop running on battery; it was charged up to 60%. Suddenly it shut down, and for about 24 hours it was as if the battery was totally broken: it didn't charge anymore and 'upower --dump' said state: critical. I was resigning myself to buying a new battery when suddenly the orange light became green: the battery was charged and actually working. Strangely, the battery indicator was stuck at 100%, even after 2 hours of running. I tried the 'upower --dump' and 'acpi -b' commands again and they kept saying the battery was discharging, while keeping the percentage at 100%. So the battery worked for up to 3 hours, but without any warning when it was almost empty, which is likely to end in a brute shutdown. Today something different - the 'upower --dump' command says:
        present:             yes
        rechargeable:        yes
        state:               fully-charged
        energy:              0 Wh
        energy-empty:        0 Wh
        energy-full:         65.12 Wh
        energy-full-design:  65.12 Wh
        energy-rate:         0 W
        voltage:             14.481 V
        percentage:          0%
        capacity:            100%
        technology:          lithium-ion
    I tried booting WinXP and the problem is pretty much the same, with the battery fully charged, the percentage equal to 0%, and no way to fix it. While writing, the situation has changed again:
        present:             yes
        rechargeable:        yes
        state:               charging
        energy:              0 Wh
        energy-empty:        0 Wh
        energy-full:         65.12 Wh
        energy-full-design:  65.12 Wh
        energy-rate:         0 W
        voltage:             14.474 V
        percentage:          0%
        capacity:            100%
        technology:          lithium-ion
    ...charging, but it does not charge up. (Remember, the battery lasted 3 hours until yesterday!) So the big question is: is it a hardware issue, like a dedicated internal circuit being broken? Or maybe it is just the battery that must be changed? Or rather some BIOS problem that could be fixed in some way? I'd appreciate any help that can shed some light on this annoying problem. Thanks


  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it". Right now, I have the hit-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn as the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1,200 static files per second (benchmarked using Apache ab against a single static asset). For comparison, if I swap out the static file link with my tracking link, I still get about 600 requests per second - I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets. However, when I benchmark with millions of hits, I notice a few things:
    - No disk usage - this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis.
    - Non-constant memory usage - presumably due to Redis' memory management, my memory usage gradually climbs and then drops back down, but it has never once been my bottleneck.
    - System load hovers around 2-4; the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
    - If I run a short test, say 50,000 hits (takes about 2 minutes), I get a steady, reliable 600 requests per second. If I run a longer test (I've tried up to 3.5m so far), my r/s degrades to about 250.
    My questions:
    a. Does it look like I'm maxing out this server yet? Is 1,200/s static-file Nginx performance comparable to what others have experienced?
    b. Are there common Nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much.
    c. Are there any Linux-level settings that could be limiting my incoming connections?
    d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, memory is not maxing out during these tests, and HDD use is nil.
    Thanks in advance, all :)
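
    For question (c), a few kernel-level limits commonly cap sustained connection rates and are easy to inspect and raise experimentally; this is a sketch with illustrative values, not tuned recommendations:

        # How many sockets are stuck in TIME_WAIT during a long benchmark run?
        ss -s

        # Current values of the usual suspects.
        sysctl net.core.somaxconn net.ipv4.ip_local_port_range net.ipv4.tcp_fin_timeout
        ulimit -n

        # Raise them for an experiment (values are examples only).
        sudo sysctl -w net.core.somaxconn=4096
        sudo sysctl -w net.ipv4.ip_local_port_range="10240 65535"
        sudo sysctl -w net.ipv4.tcp_fin_timeout=15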


  • netbook intel GMA 3150 external monitor 1920x1080 flicker problem

    - by seyenne
    Dear all, I recently purchased an Acer netbook (Aspire One D260). It runs flawlessly. Yesterday I bought a Samsung 23" TFT with a native resolution of 1920x1080. According to the information found on the internet and from my local computer dealer, the Intel chipset can handle the native resolution of the monitor. However, this is only partly the case. I use the VGA cable to connect; the monitor instantly switches to the native resolution, and now the problem: occasionally, especially in the first 2 hours after booting up, I get flickering all over the screen, and sometimes the entire screen shakes and spins around like crazy. I figured out that lowering the resolution avoids the flicker, but this only helps for some time. I can rule out the monitor as the problem, since I found no issues with another notebook. Right now I have no problems with the netbook - for about 30 minutes I haven't experienced any issues... but I don't know for how long; it occurs without warning :-) I'm worried that if I bring the netbook back to the dealer and explain my problem, after testing it on an external screen in the local shop everything will work just fine... and I won't get helped with the problem because I can't prove it. (I'm currently in Thailand and over here customer service is nothing like back home in Germany.) What can I do? Is this a driver-related issue? (I installed the latest GPU driver.) Is it because of the VGA cable? (But why does it work sometimes without any problems, and with no issues on the other notebook?) I monitored the GPU/CPU temperature and nothing really changes over time. Can it simply be a faulty GPU, and is a replacement justifiable? I'm really stressed now because for the time I'm writing the flickering didn't occur... but sooner or later it will happen again. I forgot to mention: the problem also happens if the netbook runs on battery, unplugged, so the only hardware that is plugged in is the TFT screen. ...and here it comes again, the flickering has just begun. NEED HELP! Thank you all for reading through this and giving any suggestions if possible. Cheers


  • Possible HDD malfunction. Need help in diagnosing

    - by Protheus
    Today, while using my PC as I have for almost 4 years, I experienced the following: while opening a new tab in the Opera browser, the screen froze. Music (AIMP 3) continued to play for about 5 minutes and then stopped too. I tried Ctrl+Alt+Del, but the Win7 lock screen didn't appear; Caps/Scroll/Num Lock didn't toggle the LEDs on the keyboard. I rebooted my PC and saw that the BIOS was suggesting I enter its settings or load defaults. I chose defaults. It then didn't see a proper boot device (the old faithful "insert proper boot device" message, or something like it). After a second reboot it said that there is no ExpressGate installed (which I turned off in the BIOS years ago). I went into the BIOS settings to turn off ExpressGate and check the configuration: the time was not reset, all hard drives are present, temperatures and O.C. settings are nominal (no O.C.). I inserted my Win7 install disk to try recovery. It loaded awfully slowly (several minutes) and didn't see the current installation. The PC has been used 24/7 for almost all these years. Hardware configuration:
    - ASUS P5Q WS
    - Core 2 Quad Q9300 (2.5GHz, no O.C.)
    - MSI GeForce GTX 460
    - 4x2 GB GeIL EVO 2 (AFAIR)
    - Seagate something 750GB (4 years as the system HDD, 24/7)
    - WD 1TB (for random stuff, 5 years old)
    - Hitachi 500GB (for even more random stuff, 6 years old)
    - NEC DVDRW (all disks are SATA)
    - Cooler Master Silent Pro 700W
    Software: Windows 7 AND Kubuntu on the same drive with the GRUB loader. Sorry, I can't remember the exact HDD models and can't see them right now, but I think the models aren't relevant anyway. My idea is that due to some system error or hard drive glitch I've wrecked my primary HDD's MBR. Nevertheless, I don't exclude the possibility of another failure. Might it be the motherboard or its SATA controller? I doubt it, because all drives are seen in the BIOS and I could boot from DVD. Maybe GRUB got corrupted somehow, although I don't see how that's possible from Windows. But I did install Kubuntu from Windows (I wasn't myself then); maybe GRUB wrote itself into some Windows partition and got rewritten in the process? Right now I am at work with my flash drive with me, and I need some advice on how to fix the MBR, or to hear that it's not the MBR. I'm going to buy a new HDD (Hitachi 7K2000) because I think my current HDD is compromised and it's unsafe to use it as a system drive, especially 24/7.
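
    If the working theory is a wrecked MBR/boot loader rather than a dying disk, both can be checked from a Linux live USB before buying new hardware; a sketch (it assumes the old system disk shows up as /dev/sda and that smartmontools is available on the live system):

        # Check the drive's own health report first.
        smartctl -a /dev/sda | less

        # Is the boot signature still present? The last two bytes of sector 0 should be 55 aa.
        dd if=/dev/sda bs=512 count=1 2>/dev/null | xxd | tail -n 2

        # Confirm the partition table itself survived.
        fdisk -l /dev/sda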


  • How to reliably map vSphere disks <-> Linux devices

    - by brianmcgee
    Task at hand: after a virtual disk has been added to a Linux VM on vSphere 5, we need to identify the disks in order to automate the LVM storage provisioning. The virtual disks may reside on different datastores (e.g. SAS or flash) and, although they may be of the same size, their speed may vary. So I need a method to map the vSphere disks to Linux devices.
    Ideas: through the vSphere API, I am able to get the device info:
        Data Object Type: VirtualDiskFlatVer2BackingInfo
        Parent Managed Object ID: vm-230
        Property Path: config.hardware.device[2000].backing
        Properties (Name / Type / Value):
            ChangeId          string                              Unset
            contentId         string                              "d58ec8c12486ea55c6f6d913642e1801"
            datastore         ManagedObjectReference:Datastore    datastore-216 (W5-CFAS012-Hybrid-CL20-004)
            deltaDiskFormat   string                              "redoLogFormat"
            deltaGrainSize    int                                 Unset
            digestEnabled     boolean                             false
            diskMode          string                              "persistent"
            dynamicProperty   DynamicProperty[]                   Unset
            dynamicType       string                              Unset
            eagerlyScrub      boolean                             Unset
            fileName          string                              "[W5-CFAS012-Hybrid-CL20-004] l****9-000001.vmdk"
            parent            VirtualDiskFlatVer2BackingInfo      parent
            split             boolean                             false
            thinProvisioned   boolean                             false
            uuid              string                              "6000C295-ab45-704e-9497-b25d2ba8dc00"
            writeThrough      boolean                             false
    And on Linux I can read the uuid strings:
        [root@lx***** ~]# lsscsi -t
        [1:0:0:0]    cd/dvd  ata:                       /dev/sr0
        [2:0:0:0]    disk    sas:0x5000c295ab45704e     /dev/sda
        [3:0:0:0]    disk    sas:0x5000c2932dfa693f     /dev/sdb
        [3:0:1:0]    disk    sas:0x5000c29dcd64314a     /dev/sdc
    As you can see, the uuid string of disk /dev/sda looks somehow familiar from the string that is visible in the VMware API. Only the first hex digit is different (5 vs. 6), and the match only extends up to the third hyphen. So this looks promising...
    Alternative idea: select disks by controller. But is it reliable that the ascending SCSI ID also matches the next vSphere virtual disk? What happens if I add another DVD-ROM drive or a USB thumb drive? That will probably introduce new SCSI devices in between, which is why I think I will discard this idea.
    Questions:
    - Does someone know an easier method to map vSphere disks to Linux devices?
    - Can someone explain the differences in the uuid strings? (I think this has something to do with SAS addressing of initiator and target... WWN-like...)
    - May I reliably map devices by using those uuid strings?
    - What about SCSI virtual disks? There is no uuid visible then...
    This task seems so obvious. Why doesn't VMware think about this and simply add a way to query the disk mapping via VMware Tools?
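
    Building on the observation about the uuid, a small script can normalize both identifiers and match them; this is only a sketch, built on the assumption that the SAS address reported by lsscsi is the vSphere backing uuid with the hyphens removed and a different leading digit - verify it against a few known disks before trusting it for provisioning:

        #!/bin/bash
        # Map a vSphere virtual disk backing uuid to a local block device.
        VSPHERE_UUID="6000C295-ab45-704e-9497-b25d2ba8dc00"   # from config.hardware.device[...].backing.uuid

        # Strip hyphens, lowercase, and drop the first hex digit, keeping the next
        # 15 characters so it lines up with the 16-digit SAS address from lsscsi.
        key=$(echo "$VSPHERE_UUID" | tr -d '-' | tr 'A-Z' 'a-z' | cut -c2-16)

        lsscsi -t | while read -r line; do
            addr=$(echo "$line" | grep -o 'sas:0x[0-9a-f]*' | sed 's/sas:0x//')
            dev=$(echo "$line" | awk '{print $NF}')
            [ -z "$addr" ] && continue
            # Ignore the first nibble of the SAS address as well before comparing.
            if [ "${addr:1:15}" = "$key" ]; then
                echo "$VSPHERE_UUID -> $dev"
            fi
        done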


  • SQL Express 2008 R2 on Amazon EC2 instance: tons of free memory, poor performance

    - by gravyface
    The old SQL Express 2005 was running on a low-end single Xeon CPU Dell server, RAID 5 7200 disks, 2 GB RAM (SBS 2003). I have not done any baseline measurements on the old physical server, but the Web app is used by half a dozen people (maybe 2 concurrently), so I figured "how bad can an Amazon EC2 instance be?". It's pretty horrible: a difference of 8 seconds of load time on one screen. First of all, I'm not a SQL guru, but here's what I've tried: Had a Small Instance, now running a c1.medium (High Cpu Medium) Windows 2008 32-bit R2 EBS-backed instance running IIS 7.5 and SQL Express 2008 R2. No noticeable improvement. Changed Page File from fixed 256 to Automatic. Setup a Striped Mirror from within Disk Management with two attached 1 GB EBS volumes. Moved database and transaction log, left everything else on the boot EBS volume. No noticeable change. Looked at memory, ~1000 MB of physical memory free (1.7 GB total). Changed SQL instance to use a minimum of 1024 RAM; restarted server, no change in memory usage. SQL still only using ~28MB of RAM(!). So I'm thinking: this database is tiny (28MB), why isn't the whole thing cached in RAM? Surely that would speed up performance. The transaction log is 241 MB. Seems kind of large in comparison -- has this not been committed? Is it a cause of performance degradation? I recall something about Recovery Models and log sizes somewhere in my travels, but not positive. Another thing: the old server was running SQL Express 2005. Not sure if that has any impact, but I tried changing the compatibility level from SQL 2000 to 2008, but that had no effect. Anyways, what else can I try here? Seems ridiculous to throw more virtual hardware at this thing. I know I/O is going to be rough on EBS volumes, but surely others are successfully running small .NET/SQL apps on reasonably priced instances?


  • Remote Debian System Preventing Logon

    - by choobablue
    I have a dozen or so single board computers on a network running Debian (squeeze) and access them via ssh (ssh server is dropbear). To give an idea of the hardware of these computers they're 1.2 GHz x86 processors, 1GB of RAM and 4GB flash drives formatted as ext2 (I avoided ext3 to prevent the added flash write stress from journaling), there is also a swap partition on the drive. Normally the setup I'm using works great and I can access all the computers. Every once in a while one will prevent access. What happens is I try to connect via ssh (putty) and it gives me the login prompt, I enter the username and password and it responds 'Access Denied' and it will also refuse any public key in ~/.ssh/authorized_keys. The credentials are correct as they worked previously. The computer responds to pings and putty recognizes the server public key, which implies to me the system is still running. Restarting the server fixes the problem and I can log in again. (I tried a temporary fix of putting shutdown -r now in the root crontab but this doesn't seem to reliably be run once the hang happens) Once I restart however there doesn't seem to be any information in any of the system logs to indicate what happened, the logs are simply empty for that time period, as if the system had crashed. There is some custom software running on the system which appears to stop working (which is why I wanted to ssh to begin with). I'm assuming that this program is the source of the problems but I'm unsure of how it would cause it and how to debug what is happening. The most likely explanation I can think of is that there is a memory leak in the other program that then prevents dropbear from spawning a new login shell (and crontab from executing shutdown) as there is not enough free memory. But looking at memory usage of the other (working) computers there doesn't seem to be any meaningful increase in memory to indicate a leak (unless it's a very big, fast acting and rare leak). I would think that when the OS ran out of memory it would restart the system or kill processes (the Linux kernel restarts right?). The other thing I wonder about is if the fact that they are running off a flash drive could have some effect, especially the swap partition (which I think I should remove to prevent wear of the flash), but the flash drives are young (~1 month) and I don't think that wear would be a factor yet. Does anybody have an idea of what could cause these symptoms, if it could be done by a memory leak, or something else I haven't thought of. And does anybody know of a method to try to debug the problem and find out more information about what's going wrong?
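
    Since nothing useful survives in the logs after the restart, one low-tech option is to have cron append a memory and process snapshot to a file every few minutes, so there is something to read after the next hang; a sketch (the log path is arbitrary, and on flash storage you may want to keep the interval coarse to limit wear):

        # /etc/cron.d/memlog - append a snapshot every 5 minutes.
        */5 * * * * root { date; free -m; ps -eo pid,rss,vsz,comm --sort=-rss | head -n 10; echo; } >> /var/log/memlog 2>&1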


  • Windows 7 immediately disconnects a USB drive

    - by Daniel Saner
    I am having a problem with Windows 7 x64 consistently disconnecting one specific USB mass-storage drive immediately after it is connected. The drive in question is a Cowon C2 digital music player which works in standard mass-storage controller mode (i.e. no device-specific drivers are needed or available). When I connect the player, Windows plays the "USB connect" sound and the device appears (under its correct name) in Device Manager, but it never appears as a drive. The player itself displays "USB Connected" for a split second before reporting that it has been disconnected again. Since the player, by design, reboots after it has been disconnected, Windows plays the "USB disconnect" sound before restarting the whole cycle once the player has powered back on. I am connecting the player through an Intel X79 chipset motherboard (Gigabyte GA-X79-UD3) to Windows 7 Pro 64-bit. The player used to work fine the first few times I connected it, showing up as an external drive; it only recently stopped working. It is not a problem with the player, since it works fine when connected to another computer, even one running the exact same operating system. It is also not a problem with the USB controller, since the issue is the same on both the Intel USB 2.0 and the Fresco Logic FL1009 USB 3.0 controller ports. I have also not had the problem with any other drive so far. Among the things I have tried so far:
    - Disabling USB legacy mode in the BIOS
    - Disabling energy-saving power-down for all USB controllers in Windows' Device Manager
    - Removing and reinstalling Windows' USB mass-storage driver
    - Removing and reinstalling the Intel and Fresco Logic USB controller drivers
    - Restoring the player to factory defaults
    None of these made a difference. Again, the player used to work fine on the exact same system just days ago; I didn't install any new hardware or drivers on it since then. I would be very grateful for any hints on what else to try. Edit: here is another new hint; I found out that when I connect the drive before booting Windows, it is available in Windows Explorer as it should be, and does not automatically disconnect. If I remove and reconnect it, though, the infinite connect/disconnect loop starts anew.


  • HP ProLiant DL380 G3 Running Windows Server 2000 has crashed between 6-7:30am for the past 5 days

    - by user109717
    I have an HP ProLiant DL380 G3 running Windows Server 2000 that has been crashing every day between 6:00 and 7:30am. This started when I changed out a failing hard drive 6 days ago. I have looked at the scheduled tasks, which do not have anything pertaining to this issue. Below are the only things I see in the system log, plus some of the dump files. Can this be a hardware issue if it happens in a certain time frame every day? Any help is greatly appreciated. Thanks.
        The previous system shutdown at 6:07:55 AM on 2/7/2012 was unexpected.
        System Information Agent: Health: The server is operational again. The server has previously been shutdown by the Automatic Server Recovery (ASR) feature and has just become operational again. [SNMP TRAP: 6025 in CPQHLTH.MIB]
    First dump:
        BugCheck 7A, {3, c0000005, 3400028, 0}
        Probably caused by : memory_corruption ( nt!MiMakeSystemAddressValidPfn+42 )
        Followup: MachineOwner
        0: kd> !analyze -v
        KERNEL_DATA_INPAGE_ERROR (7a)
        The requested page of kernel data could not be read in. Typically caused by a bad block in the paging file or disk controller error. Also see KERNEL_STACK_INPAGE_ERROR. If the error status is 0xC000000E, 0xC000009C, 0xC000009D or 0xC0000185, it means the disk subsystem has experienced a failure. If the error status is 0xC000009A, then it means the request failed because a filesystem failed to make forward progress.
        Arguments:
        Arg1: 00000003, lock type that was held (value 1,2,3, or PTE address)
        Arg2: c0000005, error status (normally i/o status code)
        Arg3: 03400028, current process (virtual address for lock type 3, or PTE)
        Arg4: 00000000, virtual address that could not be in-paged (or PTE contents if arg1 is a PTE address)
        MODULE_NAME: nt
        IMAGE_NAME: memory_corruption
    Second dump:
        BugCheck A, {0, 2, 1, 804137d6}
        Probably caused by : ntkrnlmp.exe ( nt!CcGetVirtualAddress+ba )
        IRQL_NOT_LESS_OR_EQUAL (a)
        An attempt was made to access a pageable (or completely invalid) address at an interrupt request level (IRQL) that is too high. This is usually caused by drivers using improper addresses. If a kernel debugger is available get the stack backtrace.
        Arguments:
        Arg1: 00000000, memory referenced
        Arg2: 00000002, IRQL
        Arg3: 00000001, bitfield: bit 0 : value 0 = read operation, 1 = write operation; bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)
        Arg4: 804137d6, address which referenced memory
        MODULE_NAME: nt
        IMAGE_NAME: ntkrnlmp.exe


  • 2 servers, high availability and faster response

    - by user17886
    I recently bought a second web server because I worry about hardware failure of my old server. Now that I have that second server, I wish to do a little more than just have one server stand by and replicate all day. As long as it's there, I might as well get some advantage out of it! I have a website powered by Ubuntu 12.04, nginx, php-fpm, APC, MySQL (5.5) and CouchDB. I'm currently testing configurations where I can achieve failover AND make good use of the extra hardware for faster responses / distributed load. The setup I am testing now involves heartbeat for IP failover and two identical servers. Of the two servers, only one has a public IP address; if one server crashes, the other server takes over the public IP address. On an incoming request, nginx forwards the request to php-fpm on either server A or server B (50/50 if both servers are alive). Once the request has been handed to php-fpm, both servers look at localhost for the MySQL server; I use master-master MySQL replication for this. The filesystem is synced with lsyncd. This works pretty well, but I'm reading that it's discouraged by the (MySQL) community. Another option I could think of is to use one server as a MySQL master and one server as a web/PHP server. The servers would still sync their filesystems and still run the same duplicate software (nginx, MySQL), but master-slave MySQL replication could be used. As long as both servers are alive I could simply prefer nginx to listen on IP A and MySQL on IP B. If one server is down, the other server could take over the tasks of the failed server, simply by IP switching. But I'm completely new at this, so I would greatly value your expert advice. Is either of the two setups any good? If you have any thoughts on this, please let me know! PS: virtualisation, hosting in different locations and active/passive setups are not solutions I'm looking for. I find virtual servers either too slow or too expensive, and I already have a passive failover in another location - but in case of a crash I found the site was still unreachable for too long due to DNS caching.

