Search Results

Search found 724 results on 29 pages for 'responsive webdesign'.

Page 18/29 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • What Is .recently-used.xbel and How Do I Delete It for Good?

    - by The Geek
    If you're reading this article, you've probably noticed the .recently-used.xbel file in the root of your User folder, and you're wondering why it keeps coming back no matter how many times you delete it. So What Is It? The quick answer is that it's part of the GTK+ library used by a number of cross-platform applications, perhaps the best known of which is the Pidgin instant messenger client. As the name implies, the file is used to store a list of the most recently used files. In the case of Pidgin, this comes into play when you are transferring files over IM, and that's when the file will appear again. Note: this is actually a known and reported bug in Pidgin, but sadly the developers aren't terribly responsive when it comes to annoyances. Pidgin seems to go for long periods of time without any updates, but we still use it because it's open source, cross-platform, and works well. How Do I Get Rid of It? Unfortunately, there's no easy way to get rid of it, apart from using a different application. If you need to transfer files over Pidgin, the file is going to reappear… but there's a quick workaround! The general idea is to set the file's attributes to Hidden and Read-only. You'd think you could just set it to Hidden and be done with it, but Pidgin will re-create the file every time, so instead we're leaving the file there and preventing it from being written to. You could also remove access entirely through the Security tab if you wanted to, but this worked fine for me… as you can see, no more file in the folder. Of course, you can't have the show hidden files and folders option turned on, or the file will continue to show up. Want to get really geeky? You can toggle hidden files with a shortcut key.
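
    The workaround boils down to those two file attributes; from a command prompt it looks like this (a sketch, assuming the file sits in the default profile location):

        attrib +h +r "%USERPROFILE%\.recently-used.xbel"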

    Read the article

  • Why do C++ people love multithreading when it comes to performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering two main approaches here: an async approach, basically based on signals, or just an async approach as described by many papers and languages like the new C# 5.0, where a "companion thread" manages the policy of your pipeline; and a concurrent, multi-threading approach. I will just say that I'm thinking about the hardware and the worst-case scenario here, and I have tested these two paradigms myself. The async paradigm is a winner, to the point that I don't get why people talk about concurrency 90% of the time when they want to speed things up or make good use of their resources. I have tested multi-threaded programs and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case, performance is horrible with a multi-threaded application: even a relatively low number of threads, like 3-5, can be a problem. The application is unresponsive, and it is just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either: my application just waits for the result without hanging, it's responsive, and it scales much better. I have also discovered that a context switch in the threading world is not that cheap in a real-world scenario; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed. On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine, and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that newer CPUs have more than 2 cores, is no bargain for me. From what I have experienced, the concurrent approach is good in theory but not that good in practice: with the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it introduces a lot of issues, ranging from the use of my data structures to the joining of multiple threads. Also, neither paradigm offers any guarantee about when the task or job will be done at a certain point in time, making them really similar from a functional point of view. Given the x86 memory model, why do the majority of people suggest using concurrency with C++, and not just an async approach? Also, why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?

    Read the article

  • Introduction to Agile Development

    - by Grant Fritchey
    Even though my current job is a little weird, I still consider myself to be a DBA. I didn't start that way in IT. I came through support and into development. I loved development. There was a constant struggle to attempt to improve your code, your understanding, and, most importantly, the process of development itself. Development can be slow and tedious. Left alone, developers can simply disappear to build a project and not come back for two years, at which time they deliver it. But maybe that software isn't what you wanted, or it's no longer needed, or who knows what. So developers are constantly attempting to improve their processes in order to deliver more relevant software more quickly (something DBAs could learn about). I really admire it. One of the many processes that has come out of that constant striving is known as Agile. As the name implies, Agile development attempts to come up with a quick, fast-turning, business-aware, well, for want of a word, agile, process that is more responsive to the needs of the business. There are tons and tons of books and blogs and videos on the subject that can get you going. But Agile isn't easy (note, Easy is not part of the name). Agile processes can be hard. I've worked on multiple agile teams, some successful, some not. The two principal differences between the teams were their discipline and their knowledge of the process. Discipline, that comes from within. But knowledge, ah, well there I can help. Red Gate is bringing a series of free instructional events to the United States in a few weeks' time, focused primarily on SQL Server (click here right now to register while there's still space). We're also offering some .NET instruction too. That's a full day, free, with top experts in the business. But the next day, there's a full-day session introducing Agile. You can go to this and learn how to do Agile. Develop the knowledge that will enable you to successfully use the Agile process. Go to this web site to check it out. No, this event is not free, but not everything can be. And it's not just for developers. DBAs, you need to learn this stuff too. Management could also benefit from understanding these processes (because you guys can help to enforce discipline). It's really for everyone involved in the development process.

    Read the article

  • iptables -P FORWARD DROP makes port forwarding slow

    - by Isaac
    I have three computers, linked like this:

        box1 (ubuntu)      box2, router & gateway (debian)      box3 (opensuse)
        [10.0.1.1] ---- [10.0.1.18,10.0.2.18,10.0.3.18] ---- [10.0.3.15]
                                   |
                              box4, www
                              [10.0.2.1]

    Among other things, I want box2 to do NAT and port forwarding, so that I can do ssh -p 2223 box2 to reach box3. For this I have the following iptables script:

        #!/bin/bash
        # flush
        iptables -F INPUT
        iptables -F FORWARD
        iptables -F OUTPUT
        iptables -t nat -F PREROUTING
        iptables -t nat -F POSTROUTING
        iptables -t nat -F OUTPUT

        # default
        default_action=DROP
        for chain in INPUT OUTPUT;do
            iptables -P $chain $default_action
        done
        iptables -P FORWARD DROP

        # allow ssh to local computer
        allowed_ssh_clients="10.0.1.1 10.0.3.15"
        for ip in $allowed_ssh_clients;do
            iptables -A OUTPUT -p tcp --sport 22 -d $ip -j ACCEPT
            iptables -A INPUT -p tcp --dport 22 -s $ip -j ACCEPT
        done

        # allow DNS
        iptables -A OUTPUT -p udp --dport 53 -m state \
            --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p udp --sport 53 -m state \
            --state ESTABLISHED,RELATED -j ACCEPT

        # allow HTTP & HTTPS
        iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
        iptables -A INPUT -p tcp -m multiport --sports 80,443 -j ACCEPT

        #
        # ROUTING
        #
        # allow routing
        echo 1 >/proc/sys/net/ipv4/ip_forward

        # nat
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # http
        iptables -A FORWARD -p tcp --dport 80 -j ACCEPT
        iptables -A FORWARD -p tcp --sport 80 -j ACCEPT

        # ssh redirect
        iptables -t nat -A PREROUTING -p tcp -i eth1 --dport 2223 -j DNAT \
            --to-destination 10.0.3.15:22
        iptables -A FORWARD -p tcp --sport 22 -j ACCEPT
        iptables -A FORWARD -p tcp --dport 22 -j ACCEPT
        iptables -A FORWARD -p tcp --sport 1024:65535 -j ACCEPT
        iptables -A FORWARD -p tcp --dport 1024:65535 -j ACCEPT

        iptables -I FORWARD -j LOG --log-prefix "iptables denied: "

    While this works, it takes about 10 seconds to get a password prompt from my ssh command. After that, the connection is as responsive as could be. If I change the default policy for my FORWARD chain to ACCEPT, then the password prompt appears immediately. I have tried analysing the logs, but I cannot spot a difference between the ACCEPT and DROP cases in my FORWARD chain. I have also tried allowing all the unprivileged ports, as box1 uses those for doing ssh to box2. Any hints? (If the whole setup seems strange to you - the point of the exercise is to understand iptables ;))
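
    Note: a fixed wait of roughly 10 seconds before an ssh password prompt usually means some auxiliary lookup is timing out rather than being refused; with this ruleset, one plausible suspect is sshd on box3 doing a reverse DNS lookup whose forwarded UDP 53 packets match no ACCEPT rule in FORWARD. A hedged sketch of two things to try:

        # let DNS through the gateway for the forwarded hosts:
        iptables -A FORWARD -p udp --dport 53 -j ACCEPT
        # or make FORWARD stateful and reject (rather than drop) everything else,
        # so unexpected traffic fails fast instead of timing out:
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -j REJECT

    Setting UseDNS no in box3's /etc/ssh/sshd_config would also confirm or rule out the reverse-lookup theory.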

    Read the article

  • Experiences with BIRD for BGP?

    - by Shtééf
    We're currently using Quagga with Debian Linux to run a full-table BGP router. The set-up has been dead simple up to now, but we've come to a point where I have to reconfigure the router quite a bit, and I want to tighten things up. I've never really understood Quagga, and I've always found its documentation lacking. It appears to mimic Cisco, of which I only have a basic understanding. BIRD has caught my eye recently. The couple of articles and presentations I found promote it as lightweight and more responsive under stress compared to Quagga. And it actually seems to have very decent documentation. So I'd like to know: Who's running BIRD right now, and in what kind of set-up? How is it stability-wise? I've read about it running in production at a couple of sites. Let's say I don't care at all for a Cisco feel to the configuration. How are configuration, maintenance, monitoring, etc. of BIRD in general? And any other notable experiences you may have with it.
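
    For a sense of the configuration style, a minimal BGP session in BIRD (1.x syntax; the router ID, addresses and ASNs below are placeholders from the documentation ranges) looks roughly like this:

        # /etc/bird/bird.conf
        router id 192.0.2.1;

        protocol bgp upstream {
            local as 64500;
            neighbor 192.0.2.254 as 64501;
            import all;
            export all;
        }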

    Read the article

  • How to repair Mac OS X without reinstalling?

    - by RahulVyas
    I have an Intel PC with iDeneb Mac OS X installed, and it's running fine. After that, I thought about installing Snow Leopard for running the iPad SDK. So I bought a retail Snow Leopard and booted it with the Rebel EFI boot loader. When I was installing Snow Leopard, at the end of the installation, setup gave an error. So I restarted my PC, booted with Rebel, saw that the Mac install was still there, booted it into safe mode, and Mac OS X ran. After that I installed the iPad SDK. But when I try to create an application, Xcode is not responsive: it hangs as soon as I choose a new project, give it a name, and save it to disk. Is there any way I can repair my Mac OS X installation without reinstalling? I am also unable to boot into normal mode, or to boot without the Rebel CD. So I want to boot without the Rebel CD and also want to run the iPad SDK, so please help me solve this problem.
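
    If the installation itself is intact, the usual first repair pass without reinstalling is the built-in filesystem check from single-user mode (a sketch; single-user here means passing -s at boot, which may or may not be reachable through Rebel on a hackintosh):

        /sbin/fsck -fy    # repeat until it reports the volume "appears to be OK"
        reboot

    From a working boot, running diskutil repairPermissions / can also clear up odd application hangs on Snow Leopard.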

    Read the article

  • I love Google Chrome, but some non-static pages like Piwik render it unresponsive

    - by gogowitsch
    The web-stats software Piwik stops reacting to mouse clicks after 1-2 seconds. The same is true for Google Maps and Producteev (but GMail and most other pages work like a charm). These rely heavily on JS and work without Flash. I can click for a very short time period, and then the mouse cursor doesn't feel the UI anymore: it doesn't turn into an I over input fields, though it still moves (if the freeze occurred while the pointer was over an input field, the cursor keeps being an I), and all clicks on the DOM are ignored by Chrome. No message appears, neither an obvious one nor anything in the Console (F12). There is no obstructing div or the like in the DOM (F12). Since I couldn't find any hints on the source of my problems, I suspected my plugins and extensions. Unfortunately, neither deactivating all plugins nor deactivating all extensions solved the problem. Some observations:

    - for the problematic pages, it always happens
    - no Dropbox running
    - several GB of free RAM
    - the task manager doesn't show any high CPU or memory utilization (the offending tab uses 30 MB and 0-1% CPU)
    - all problematic pages work in other browsers (Chrome, Firefox, IE)
    - the rest of the computer is very responsive
    - the computers use different security suites (Kaspersky and Avira)

    The effect exists between several (synchronized) Chrome instances on different machines, all running Windows 7. Both the OS and Chrome are updated automatically. Other tabs and the Chrome chrome (tabs, menus, toolbar buttons of the browser itself) still work. I really don't like switching between browsers. Any ideas?
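
    Two isolation tests that may be worth a try (the switches are standard Chrome command-line flags; the Piwik URL is a placeholder):

        chrome.exe --user-data-dir="%TEMP%\chrome-clean" --disable-extensions http://your-piwik-host/
        chrome.exe --disable-gpu http://your-piwik-host/

    The first runs against a throwaway profile, which rules out profile state that syncs between your machines (a plausible suspect, since the problem follows the synchronized instances); the second rules out GPU-accelerated compositing.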

    Read the article

  • Task Manager Does Not Start Every Time

    - by diek
    I have had a problem that started some time ago, six months maybe. I should have noted the first instance, but I didn't. I am using Windows 7 Pro, 32-bit. Under normal circumstances I can open the Task Manager via the taskbar or Ctrl+Alt+Del. But when a program gets stuck, causing a freeze or a non-responsive system, and I try to open the Task Manager, it will not work. I have had plenty of similar problems in the past and had no trouble getting it open then. I have searched the internet, but the only results I can find are for cases where the Task Manager will not start under any circumstances. I am running ESET NOD32 as the anti-virus. The latest example happened when I opened a new tab in Google and tried to copy an image; Google accounts for at least 50% of the examples. I ran the System File Checker tool (sfc /scannow) as recommended in another post; no errors were returned. Any guidance would be appreciated.
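
    One workaround to test, not a root-cause fix: pre-launch Task Manager at high priority (and minimize it), so that when the machine bogs down it already has a window and favorable scheduling:

        start /high taskmgr.exe

    If it stays usable that way during a freeze, the issue is CPU or I/O starvation at launch time rather than Task Manager itself being blocked.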

    Read the article

  • How do I run a stable Windows XP kvm guest on Ubuntu 10.04?

    - by Jean-Paul Calderone
    I have three Windows XP guests running on a recently upgraded 64-bit Ubuntu 10.04 system. Occasionally (on the order of once every few days), one of the guests will become non-responsive and the kvm process on the host which is running that guest will start consuming 100% CPU. It will continue to do so until it is killed. When restarted, it will be fine for a while, and then the issue repeats. The kvm command line used to run all three guests is this:

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 1024 -smp 1 -name bigdog21vmxp1 \
            -uuid ea47ff84-125b-16f7-9a4d-a6d0d8bab46a \
            -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/bigdog21vmxp1.monitor,server,nowait \
            -monitor chardev:monitor \
            -localtime \
            -boot c \
            -drive file=/var/lib/libvirt/images/windowsxp-1.qcow2,if=ide,index=0,boot=on,format=qcow2 \
            -net nic,macaddr=54:52:00:02:06:0e,vlan=0,name=nic.0 \
            -net tap,fd=58,vlan=0,name=tap.0 \
            -chardev pty,id=serial0 \
            -serial chardev:serial0 \
            -parallel none \
            -usb \
            -usbdevice tablet \
            -vnc 127.0.0.1:1 \
            -k en-us \
            -vga cirrus \
            -soundhw es1370

    Why do the systems misbehave this way sometimes? And what configuration can I change in order to fix it? Or, if the problem is due to a bug in kvm, what is the process for isolating a kvm failure so that the developers have a chance of fixing it?
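
    When a guest next wedges, the QEMU monitor socket from the command line above can show what state the vCPU is in (a sketch; assumes socat is installed and that libvirt isn't holding the monitor itself):

        socat - UNIX-CONNECT:/var/lib/libvirt/qemu/bigdog21vmxp1.monitor
        # then, at the (qemu) prompt:
        info status
        info registers

    On the host side, perf top -p <pid-of-the-kvm-process> (from the linux-tools package) can help distinguish a guest spinning in its own code from a loop inside qemu/kvm itself, which is useful data for a bug report.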

    Read the article

  • Need Suggestions on Backup Strategies and Alternatives?

    - by Leejo
    I'm not sure where else to post this question, since it is not exactly code- or development-related... but I know Stack Overflow is very responsive to questions... Currently, I use Mozy Home to perform an online backup of my laptop. So far this works well, since I only have one laptop that needs to be backed up. But this may change soon, and I want to explore alternatives to performing an online backup on every machine. Ideally, I want to set up a network computer (laptop/desktop) with enough storage to hold the backups for all the other machines I would have. Each machine should be responsible for performing its own backup (to the network computer). This would require something like Mozy's incremental backup capability, but instead of backing up online, I would prefer it be done locally to the network computer. Can you recommend local backup software (backup to a network PC, incremental backup, good restore options)? I'm also looking for any ideas on a local backup strategy, even if it's different from what I've stated. What works and what doesn't? Thanks in advance for your help!
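
    If the machines can reach the backup box over SSH or a mounted share, one sketch of the incremental-with-full-restore-points idea is rsync with hardlinked snapshots (the hostnames and paths below are made up):

        rsync -a --delete --link-dest=/backups/laptop/latest \
            user@laptop:/home/me/ /backups/laptop/2010-05-01/
        ln -sfn /backups/laptop/2010-05-01 /backups/laptop/latest

    Each dated directory browses and restores like a full backup, but files unchanged since the previous run are hardlinks, so only the deltas consume space.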

    Read the article

  • Recommendations for colocation in the US

    - by Emil
    Hello Server Fault. I work for a European media company, and we are currently looking for colocation in the US. I know the European market quite well; unfortunately, that is not the case for the US. I'm hoping you guys can help me out a bit with a few questions; it would be much appreciated! I am looking for a data center that can deliver a high level of availability (Tier 3 or better). The installation will be fairly large, so capacity is important, as is good internet connectivity/carrier presence. Most important, however, is good customer support: skilled, dedicated and responsive technical staff, since we won't have tech staff close by. I'm looking for a small, fast-moving company that targets internet businesses rather than big old enterprise hosting. What locations should we go for, given that we want to reach all of the US from a single site and still maintain decent latency? (Do we need east and west coast?) Where are the main internet hubs, and should you try to get as close to them as possible? Are there any good online resources I should look at? Where do the large-scale internet/media services colocate? Lastly, I would be very happy to get some actual recommendations for companies to talk to. P.S. I'm happy to return the favor if anyone has questions regarding data centers and colocation in Europe.

    Read the article

  • Ubuntu 12.04 - HP ProLiant DL380 G4 - Load Maxes Out / Unresponsive

    - by Brian S
    Trying to post this question on here; I've posted it on the Ubuntu forums as well with no replies. Recently I upgraded an HP ProLiant DL380 G4 server from Ubuntu 10.04 Server to Ubuntu 12.04 Server. Since then, the server will - at random times - climb to a load of 400+ and then become completely non-responsive. I use an SNMP graphing program (Cacti), and the load steadily increases by about 10 every five minutes until it gets over 400 and the graphing stops. The graphs may not be accurate, but the CPU load averages about 3% before this happens - and right when the load starts increasing, it jumps to about 25% for 15 minutes and then dips dramatically to less than 1% (about 0.3%) until the graphing stops. I'm not able to open an SSH session to the server to do anything. I've checked /var/log/syslog, and all logging stops at that time as well, with nothing else in there. The odd thing is, the server still responds to DNS queries for the zones it is authoritative on during this time - and at normal speed. I'm just not sure what the next step would be to find out what is going on and how this issue can be corrected. The server cannot stay on Ubuntu 10.04 Server and needs to stay upgraded.
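
    Since SSH is already dead by the time the load spikes, locally written accounting that survives the hang can narrow down the trigger; a sketch using sysstat (package name and paths as in Ubuntu 12.04):

        sudo apt-get install sysstat
        sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
        sudo service sysstat restart
        # after the next incident and reboot, replay the minutes before the hang:
        sar -q    # run queue, load averages, blocked processes
        sar -r    # memory and swap
        sar -u    # CPU, including %iowait

    A load of 400+ alongside near-zero CPU usually points at processes stuck in uninterruptible I/O wait rather than a compute runaway, so %iowait and the blocked-process column of sar -q are the first things to check.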

    Read the article

  • Java web app deployment and ControlTier adoption

    - by Ran
    I've been searching for a configuration and deployment manager tool for my Java/Linux-based web service and have been looking mainly at ControlTier (http://controltier.org). We operate at a medium scale (100s of hosts, multi-DC, dozens of services). There seem to be plenty of lower-level sysadmin tools, such as Chef, Puppet, CFEngine, Bcfg2 and more. My understanding, and the reason I'm calling them "low level", is that they are great for system-level administration tasks - setting up a mount, file permissions, users, etc. - but they aren't designed, for example, for Java deployments, which usually come with a build process and special Java semantics. In many cases any tool can be used to do anything, but if it was not designed for the task, it can get uncomfortable. ControlTier, on the other hand, seems to have been designed for exactly that - Java application deployments - at least that's what all the tutorials on their site demonstrate. But here's the problem: the wiki at http://controltier.org/wiki/ is pretty good and stuffed with examples, and the company behind the open-source CT product is very responsive (pushy...); however, I have yet to see any material from third-party users on the net. No success stories, no detailed blog posts, no best practices, no cheat sheets, not even hate letters - nothing. This plays badly for DTO Solutions, CT's sponsor, for two reasons: one, it makes me suspicious - what's the reason for the poor adoption? - and two, what do I do if I get stuck, there's no help page on CT's wiki, and the mailing list is too slow to answer? I'd be stuck with a "free" product that a consultancy company is pushing. So my question: I'd be interested in hearing whether anyone has real-world experience with CT for Java-based web app deployments, and whether you'd give the product a thumbs up. Any other comments that may enlighten me are welcome, of course...

    Read the article

  • How to bypass the VPN when talking to a VMware guest?

    - by marc esher
    Greetings. Network/VPN n00b question here. I'm running VMware Workstation with a guest Windows 2003 Server that has SQL Server 2000 installed. The sole purpose of this guest is to house SQL Server... it needn't have internet access or access to any resources on the network other than the host. When I launch the Check Point VPN software, the host routes through the company network before it connects to the guest... i.e. it's no longer a direct connection. I assume this is just how things are supposed to work. However, what's happening is that the connection between my host and the SQL Server instance on the guest intermittently drops. It's not consistent, and some databases on the server will be responsive while others aren't. It appears that the databases with the most traffic on the guest (the ones I'm hitting with load tests) are the ones that become intermittently unresponsive. This problem only manifests when the VPN is on; when it's off, I can pound away on this database with no trouble. Thanks for any advice!
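
    One hedged workaround, assuming the guest sits on a VMware host-only network (the 192.168.80.0/24 below is a placeholder; check the VMnet1 adapter's actual subnet with ipconfig): pin a more specific persistent route to the virtual adapter so that traffic never enters the tunnel:

        rem find the VMnet1/host-only adapter's address first with: ipconfig
        route -p add 192.168.80.0 MASK 255.255.255.0 192.168.80.1

    Whether the route survives depends on the VPN client; some Check Point policies enforce full tunneling and strip conflicting routes, in which case an exclusion would have to come from the VPN administrators.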

    Read the article

  • What would cause my Windows machine to seize?

    - by Coltin
    I run Windows XP SP2 on a desktop I built (I know that doesn't provide much, but I can't say "on a Dell X103938R" or something). It has 'seized' about 7 times in its year-and-a-half life. Everything freezes. I can't move the mouse cursor, and the keyboard seems unresponsive. I can turn the monitor on and off, and it will hold the last image. The light on the mouse responds if I move it. The keyboard lights change when pressed (Caps Lock, etc.). I've waited up to ten minutes for a change - nothing. I haven't connected the seizing to any particular activity. It's happened when all I was "running" were fullscreen programs (games), when just checking email, and once when I was simply sitting at my desktop (I was reading a book, and when I tried to use the mouse, nothing). I've never been able to figure it out. I have to hard-reset, and then it's fine. It doesn't run a file-system check or anything afterwards (not sure if Windows does that). No error when I load up the computer, nothing. If I had uTorrent open, though, it has to recheck the torrent files to make sure they weren't corrupted. (It's not always open when it seizes, either.) I'm using an AMD Athlon 5400+ with an NVIDIA GeForce 8600 GT, if that helps, and two hard drives: a 500 GB Western Digital and a 1 TB Hitachi.

    Read the article

  • BTrFS crashhhh?

    - by bumbling fool
    I created a new BTrFS RAID 10 file system using two 250 GB drives and the second partition of a third 80 GB drive. I created a subvolume and a snapshot, mounted the snapshot, and started copying 8 GB of data to it. It gets to around 1 GB, and then the desktop disappears and what looks like a non-interactive terminal comes up with dump/crash information. I don't have a camera handy or I'd take a picture and post it; it basically looks like stack-trace info. Ctrl-Alt-F7 will eventually bring back the desktop, but the entire BTrFS portion of the OS is hung and non-responsive until I reboot. I've reformatted and reproduced this problem 3 times now, and I'm about to give up :( I realize it is possible this problem is not entirely BTrFS' fault, because I'm on Natty, which is still alpha. More granular details, in case I'm an idiot:

    1) Create the FS: sudo mkfs.btrfs -m raid10 -d raid10 /dev/sda2 /dev/sdb /dev/sdc
    2) Initial temporary mount: mkdir /btrfs && sudo mount -t btrfs /dev/sda2 /btrfs
    3) Create the subvolume: btrfs s c /btrfs/vm
    4) Create an initial snapshot (optional): btrfs s sn /btrfs/cantremember.snap.something
    5) Unmount /btrfs and mount the subvolume: sudo mount -t btrfs -o subvol=vm /dev/sda2 /btrfs/vm
    6) Copy data to the subvolume.
    7) Balance data across drives (optional): btrfs f bal <path> (never get to this step 7...)

    Am I doing something wrong?
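
    To get the whole trace off the box when the console is wedged (useful for a bug report either way), the kernel can stream its messages to another machine over UDP with netconsole; a sketch where the addresses, interface and MAC are placeholders:

        # on a second machine (10.0.0.1 here), listen for the log:
        nc -lu 6666
        # on the crashing box, before reproducing the copy:
        sudo modprobe netconsole netconsole=@10.0.0.2/eth0,6666@10.0.0.1/00:11:22:33:44:55

    The captured oops would show whether it's the btrfs code itself or something underneath it (block layer, drive controller) falling over.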

    Read the article

  • How to manage process-to-CPU-core affinities?

    - by Philippe
    I use a distributed user-space filesystem (GlusterFS), and I would like to be sure the GlusterFS processes will always have the computing power they need. Each execution node of my grid has 2 CPUs, with 4 cores per CPU and 2 threads per core (Linux sees 16 "processors"). My goal is to guarantee that the GlusterFS processes have enough processing power to be reliable, responsive and fast. (There is no marketing here, just the dreams of a sysadmin ;-) I'm considering two main points: the GlusterFS processes themselves, and the I/O for data access (on local disks, or remote disks). I have thought about binding the Linux kernel and the GlusterFS instances to a specific "processor". I would like to be sure that no grid job will impact the kernel and the GlusterFS instances, and that researchers' jobs won't be affected by system processes (I'd like to reserve a pool of cores for job execution and be sure that no system process will use those CPUs). But what about I/O? As we handle a huge amount of data (several terabytes), we'll have a lot of interrupts. How can I distribute these operations across my processors? What are the "best practices"? Thanks for your comments!
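
    A sketch of the usual building blocks (the core and IRQ numbers below are arbitrary examples): pin the Gluster daemons with taskset, and steer device interrupts away from the reserved cores through /proc/irq:

        # pin a running glusterfsd to cores 0 and 8 (the two hardware threads of one core):
        sudo taskset -pc 0,8 <pid-of-glusterfsd>
        # check /proc/interrupts for your NIC/disk IRQ numbers, then route IRQ 24 to CPU0 only
        # (the value is a hex CPU mask, so 1 = CPU0):
        echo 1 | sudo tee /proc/irq/24/smp_affinity

    For keeping grid jobs off the reserved cores wholesale, cpusets (the cgroup cpuset controller, or the cset utility) are a more systematic tool than per-process taskset.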

    Read the article

  • Monitor goes black for a few seconds

    - by privatehuff
    I have a Hanns G 28" monitor, model # HG281D. It has its issues (the viewing angle sucks), but it has been functional and solid, great for desktop stuff, and it worked without any sign of problems for 6-12 months. Now, however, the monitor "goes black" for about 2-3 seconds, almost like when you click "detect display". It does not turn off (the power light does not go amber), the computer is completely unaffected, and the video mode never changes when the picture returns. The computer is fully responsive and will keep playing music or taking my keypresses during the time I can't see anything (it just happened, and I kept typing, etc.). It happens on multiple computers across several operating systems. (I have an 8-port IOGEAR KVM switch with several computers connected.) But it seems to happen only on certain computers: I have a hackintosh that does it, a Windows 7 PC that does not, and a Lenovo laptop that does not; my old Ubuntu 8.10 box did not do it, but my new Mint 8 box does. I've checked the connections and tried swapping out the power cable and the VGA cable. Sometimes it won't happen for hours (or days), and sometimes it happens several times per hour. It was happening many months ago, did not happen for months, and has now started happening again. Does this make any sense? What could it be?

    Read the article

  • My Computer hangs for a few minutes just after startup, and then is fine.

    - by EvilChookie
    So I just built myself a reasonably beefy computer and installed Windows 7 on it. However, each morning when I start the machine up, within a few minutes the computer will semi-hang. That is, the mouse is responsive, and most of the time I can open Task Manager or a new tab in Chrome. Occasionally windows will be labelled as 'Not responding'. Then the machine gets over its problem and is nice and quick until I turn it off. Here are my specs:

    CPU: AMD Phenom II X4 955 Black (quad core, 3.2 GHz)
    RAM: 4 GB of DDR3-1300
    Motherboard: ASUS M4A785T-M (latest BIOS)
    Hard drives: 2x 1 TB Western Digital Caviar Blacks in RAID 0
    OS: Windows 7 Ultimate x64
    GPU: ASUS GT240 1 GB

    I believe this issue relates to the RAID array, as I didn't have the lockup problem before I created it. I purchased a second drive and reformatted after creating the RAID array, since the single drive was a little on the pokey side (compared to the rest of the computer). What I have tried:

    - Updated RAID drivers
    - Malware checks
    - Windows updates
    - Disabled unnecessary services
    - CPU and disk activity appear to be low (via Resource Monitor)
    - No strange errors in the error log

    Any thoughts?
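
    Windows 7 keeps its own boot/degradation diagnostics that often name the component responsible for a post-boot stall; a query sketch (run from an elevated command prompt):

        wevtutil qe Microsoft-Windows-Diagnostics-Performance/Operational /c:10 /rd:true /f:text

    The boot-performance events in this log describe slow starts and which service or driver consumed the time; if the RAID driver shows up there, that would corroborate the array theory.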

    Read the article

  • On what should I not be penny-wise when buying a machine for SQL Server 2008?

    - by Michel
    Hi, I'm going to do a project for a client, and I'll be hosting the database server myself. Normally it would be on my dev machine, but there will also be data pushed into it during development and testing, so I would like to set up a dedicated test SQL Server machine. But, as you might guess, I can't afford to go to Dell and buy one mega 16-core, 16 GB, 10 TB RAID 5 machine (wow, that sounds cool). So I have to save the money somewhere... the hardware only has to live for a year (longer is nice, of course), and the SQL Server won't be hit too hard: I guess the average server would only see it as a cough once in a while. But I do want the machine to be a bit performant: if it does get some data, it must be somewhat responsive. So my question is: where can I leave out the expensive parts? Is 2 GB of RAM enough, or must I take 4 GB? Is an average processor enough, or should it be top of the line? Is SQL Server a heavy resource user, or is a simple desktop PC good enough? It will run on Windows Server 2008, by the way.

    Read the article

  • I am looking for a tool to measure or detect "unresponsiveness" of a desktop PC

    - by Tom H
    I have a client that provides some server systems to a hospital, and a support ticket was raised that the desktop application was hanging while waiting for the server. We did some extensive testing, and it's pretty clear that the server is responsive and the network is fine, and that the problem is on the client end (no requests are received during the hang, etc.). We took a look at the desktop machines and they should be fine, so we raised tickets with the software vendor, who says it must be the hardware; the hardware company says it is the software; and so on. Anyway, talking to the nurses, they say that these machines often "hang" for 30 seconds at a time, sometimes during important moments when they need to get data for a patient who is unwell, such as charts and status. So I want to put a client on these machines that can detect arbitrary "unresponsiveness" of the keyboard/mouse and log it for analysis later. Obviously I am wary of suggesting an application that takes resources and makes the problem even worse, so I would be interested to see any tools that would detect these scenarios (is it correct to say that the keyboard interrupts are being discarded?) by looking for the OS discarding the interrupts, or whatever is appropriate here. So go on then, Server Fault, here is your chance to save a life.... ;-) Edit: I am starting to think that some of the tools associated with real-time systems might be appropriate, at least as a diagnostic.

    Read the article

  • GNU Screen and Finch Not Playing Nicely

    - by Sean M
    I use Finch for instant messaging, and for persistence, Finch is one of the things that runs in my screen session. There are three main computers that I access my screen session from, and each works at a different screen resolution. Because of the different resolutions, when I switch computers, I use screen -rd to attach to my screen session (using screen -x results in problems). When I attach to the session, though, Finch experiences display problems. I have to wait up to several minutes for Finch to become responsive; it doesn't redraw properly at all. Trying to switch between chats just writes ^n and ^p, or ^(1-9) for numbers. It fixes itself after some time. Using Ctrl-L does not help. Switching back and forth between screen windows does not help. This is an annoying behavior that I don't experience with any other applications running in screen. Is this a bug in screen or in Finch, and if not, what can I change about my configuration to correct it? (I would appreciate it if "finch" could be used as a tag for this, instead of or in addition to "pidgin".)
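
    Two screen-side knobs that sometimes help curses applications after a cross-resolution reattach (suggestions, not a confirmed fix for Finch): press Ctrl-a F ("fit") right after attaching to force the window to the new terminal size, and try this line in ~/.screenrc:

        altscreen on    # use the terminal's alternate screen buffer for full-screen apps

    If a slowly digested resize event is what stalls Finch, Ctrl-a F at least makes the resize happen at a predictable moment.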

    Read the article

  • Possible to host CentOS netinstall files on a local HTTP/FTP?

    - by garlicman
    I'm running XenServer on a Dell R610 and have run into a catch-22. During an install from DVD, CentOS can't find the DVD package catalogue. It's a reported error for some: XenServer + CentOS 6 + DVD install in some hardware configurations = failed install. Yes, I checked the MD5 and let the disc test pass. In every reported case, the netinstall was the solution. The issue is that my net access has to go through a web proxy that prompts before you can download a file, which naturally breaks any download automation. I've been waiting for our IT to put in an exception rule to allow my lab to bypass the prompt, but it's been over 3 weeks now and they don't seem responsive. (I've been working on this a day or two a week.) So I want to try hosting the netinstall files locally on my Xen network. Right now I only have a bunch of Windows-based VMs; since CentOS won't install, I don't have any Linux tools. I had tried simply hosting all the DVD contents off one of the Windows servers using Mongoose (I didn't want to set up IIS). I copied them to a hosted sub-directory similar to all the mirrors out there (e.g. http:///centos/6.2/os/i386/), with no auth or anything, and in the netinstall I pointed to it. I now realize just copying the DVD files over won't work: the repodata will point to a local device, not the site I'm hosting (e.g. the DVD repodata includes XML that points to where the packages are), and clearly I'm hosting them over HTTP, not from a DVD. Is there an easy way to sort this out? I'm just trying to install CentOS 6 on Xen. If there's a turnkey downloadable Xen image with CentOS 6.2 on it, or a downloadable repo image, I'll take that too! Thank you in advance!
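
    For what it's worth, the package paths in the CentOS 6 repodata are relative, so a straight copy of the DVD tree (the directory holding Packages/, repodata/ and images/) normally does work as an HTTP install source. A sketch of a quick test server, assuming Python is available on some box that can reach the VM network:

        cd /path/to/dvd-copy      # must contain Packages/, repodata/ and images/
        python -m SimpleHTTPServer 8000

    The URL given to the netinstall would then be http://<server-ip>:8000/ plus whatever sub-directory the tree lives in; if the installer finds images/install.img there, the package stage should find the repodata too.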

    Read the article
