Search Results

Search found 7179 results on 288 pages for 'slow logon'.


  • Windows 7 freezing after returning from idle.

    - by myname
    If I return to my Windows 7 x64 computer after it has been idle for a couple of minutes, it responds for about 3 seconds and then everything freezes for around 5-30 seconds before the machine starts working normally again. The freeze time varies, and sometimes it even gets stuck forever, requiring a hard reboot. This is very annoying, as you can imagine; it's almost as slow as resuming from hibernation every time I leave it idle long enough for the monitor to turn off.

    Read the article

  • My home box as my own host?

    - by Majid
    Hi all, I have a 512 kb/s DSL service at home. I do not have a static IP, but I can get one if I pay my ISP a little extra. Now, if I get the static IP, can I make my home box act as my internet host? What else do I need? Thanks. P.S. I know that even if it is possible, a site made available this way might be slow; that is alright. My question is whether it is possible at all.

    Read the article

  • Install MinGW and MSYS to run C programs on Windows

    - by Hesham Abouelsoaod
    I installed MinGW using mingw-get-inst-20111118.exe and it works, but it is very slow! I don't want to install it online. I remember that I previously installed MinGW and MSYS from two files, mingw.exe and msys.exe, without using the internet, and it worked great, but now I can't repeat what I did and I can't find a link to mingw.exe. Could someone give me simple steps for a better, offline installation? Thanks.

    Read the article

  • Does anybody know a replacement for TorrentSpy?

    - by Ither
    Hi, I've used TorrentSpy since 2004 and it does a pretty good job, but it is very slow with torrents that contain hundreds or thousands of files. Does anybody know a better tool for the same job on XP? If you don't know it, TorrentSpy shows the data contained in a torrent file in a readable way: tracker URLs, number of files, their sizes, number of complete and incomplete downloaders, etc. Edit: What I want is something that can be used from Explorer by right-clicking on the torrent file, like MediaInfo does with media files.
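
    As an aside on the format itself: everything TorrentSpy displays is just bencoded metadata at the front of the .torrent file, so a small script can dump it. Below is a rough sketch (Python 3, standard library only, assuming a well-formed file; it prints the tracker URL plus each file's size and path), not a polished tool:

        import sys

        def bdecode(data, i=0):
            """Decode one bencoded value starting at offset i; return (value, next offset)."""
            c = data[i:i+1]
            if c == b'i':                          # integer: i<digits>e
                end = data.index(b'e', i)
                return int(data[i+1:end]), end + 1
            if c == b'l':                          # list: l<items>e
                i, items = i + 1, []
                while data[i:i+1] != b'e':
                    v, i = bdecode(data, i)
                    items.append(v)
                return items, i + 1
            if c == b'd':                          # dictionary: d<key><value>...e
                i, d = i + 1, {}
                while data[i:i+1] != b'e':
                    k, i = bdecode(data, i)
                    d[k], i = bdecode(data, i)
                return d, i + 1
            colon = data.index(b':', i)            # byte string: <length>:<bytes>
            length, start = int(data[i:colon]), colon + 1
            return data[start:start + length], start + length

        with open(sys.argv[1], 'rb') as f:
            meta, _ = bdecode(f.read())
        info = meta[b'info']
        print('tracker:', meta.get(b'announce', b'?').decode('utf-8', 'replace'))
        if b'files' in info:                       # multi-file torrent
            for entry in info[b'files']:
                name = b'/'.join(entry[b'path']).decode('utf-8', 'replace')
                print(entry[b'length'], name)
        else:                                      # single-file torrent
            print(info[b'length'], info[b'name'].decode('utf-8', 'replace'))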

    Read the article

  • Bit-shifting a file

    - by mykhal
    I wonder if there is a utility to read and print a (binary) file shifted by some number of bits (I mean, it should accept amounts that are not divisible by 8), something like dd (and its skip option), but bit-wise instead of byte-wise. (If you think there is no such thing and are going to implement it here, please use C. I have my own bit-shifting routine for strings, written in Python, but it is relatively slow.)
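
    For illustration only, here is a sketch of the carry logic such a tool needs (in Python rather than the requested C, but a C port of the same loop is mechanical; whole bytes of the shift are handled with a plain seek, and only the remaining 0-7 bits are carried between neighbouring bytes):

        import sys

        def shift_bits(src, dst, nbits):
            """Stream src to dst shifted left by nbits (0 <= nbits < 8)."""
            prev = None
            while True:
                chunk = src.read(65536)
                if not chunk:
                    break
                for b in chunk:
                    if prev is not None:
                        # low (8 - nbits) bits of the previous byte + high nbits of this one
                        dst.write(bytes([((prev << nbits) | (b >> (8 - nbits))) & 0xFF]))
                    prev = b
            if prev is not None:
                dst.write(bytes([(prev << nbits) & 0xFF]))   # last byte, zero-padded

        n = int(sys.argv[1])                                 # total shift in bits
        with open(sys.argv[2], 'rb') as src:
            src.seek(n // 8)                                 # whole-byte part, like dd's skip
            shift_bits(src, sys.stdout.buffer, n % 8)

    (Writing one byte at a time keeps the carry logic obvious but is slow in Python, which is exactly the complaint above; the same loop in C with buffered output would be fast.)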

    Read the article

  • Linux: find out what process is using all the RAM?

    - by Timur
    Before actually asking, just to be clear: yes, I know about disk cache, and no, it is not my case :) Sorry for this preamble :) I'm using CentOS 5. Every application on the system is swapping heavily, and the system is very slow. When I do free -m, here is what I get:

                     total       used       free     shared    buffers     cached
        Mem:          3952       3929         22          0          1         18
        -/+ buffers/cache:       3909         42
        Swap:        16383         46      16337

    So I actually have only 42 MB to use! As far as I understand, the -/+ buffers/cache line already excludes the disk cache, so I really do have only 42 MB, right? I thought I might be wrong, so I tried switching off disk caching and it had no effect: the picture stayed the same. So I decided to find out who is using all my RAM and used top for that. But apparently it reports that no process is using my RAM. The only notable process in my top is MySQL, but it is using 0.1% of RAM and 400 MB of swap. The picture is the same when I run other services or applications: everything goes to swap, and top shows that memory is barely used (0.1% maximum for any process).

        top - 15:09:00 up 2:09, 2 users, load average: 0.02, 0.16, 0.11
        Tasks: 112 total, 1 running, 111 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   4046868k total,  4001368k used,    45500k free,      748k buffers
        Swap: 16777208k total,    68840k used, 16708368k free,    16632k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  SWAP COMMAND
         3214 ntp    15   0 23412 5044 3916 S  0.0  0.1 0:00.00   17m ntpd
         2319 root    5 -10 12648 4460 3184 S  0.0  0.1 0:00.00  8188 iscsid
         2168 root   RT   0 22120 3692 2848 S  0.0  0.1 0:00.00   17m multipathd
         5113 mysql  18   0  474m 2356  856 S  0.0  0.1 0:00.11  472m mysqld
         4106 root   34  19  251m 1944 1360 S  0.0  0.0 0:00.11  249m yum-updatesd
         4109 root   15   0 90152 1904 1772 S  0.0  0.0 0:00.18   86m sshd
         5175 root   15   0 90156 1896 1772 S  0.0  0.0 0:00.02   86m sshd

    Restarting doesn't help and, by the way, is very slow, which I wouldn't normally expect on this machine (4 cores, 4 GB RAM, RAID1). So with that, I'm pretty sure it is not the disk cache using the RAM, because normally the cache would shrink and let other processes use RAM rather than pushing them to swap. So, finally, the question: does anyone have any idea how to find out which process is actually using the memory so heavily?
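
    One way to see this at a glance is to read the per-process numbers straight out of /proc rather than relying on top's %MEM column. A rough sketch (Python; it assumes a kernel new enough to report VmSwap in /proc/<pid>/status, which older kernels such as CentOS 5's may not expose):

        import glob, re

        def proc_mem():
            """Return (rss_kb, swap_kb, name) for every process, largest RSS first."""
            rows = []
            for path in glob.glob('/proc/[0-9]*/status'):
                try:
                    with open(path) as f:
                        text = f.read()
                except OSError:
                    continue                      # process exited while we were scanning
                name = re.search(r'^Name:\s+(\S+)', text, re.M).group(1)
                kb = dict(re.findall(r'^(Vm\w+):\s+(\d+) kB', text, re.M))
                rows.append((int(kb.get('VmRSS', 0)), int(kb.get('VmSwap', 0)), name))
            return sorted(rows, reverse=True)

        print('%10s %10s  %s' % ('RSS kB', 'Swap kB', 'COMMAND'))
        for rss, swap, name in proc_mem()[:15]:
            print('%10d %10d  %s' % (rss, swap, name))

    If the totals from a listing like this come nowhere near the "used" figure reported by free, the memory is being held by the kernel itself (slab caches, for example) rather than by any process; /proc/meminfo and slabtop are the next places to look.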

    Read the article

  • Server-to-server replication, CPU, and 32K/corrupt docs

    - by nick wall
    Summary: if a database contains a doc with the 32K issue, or a corrupt doc, server-to-server replication causes a marked increase in CPU in the nserver.exe task, which effectively causes our server(s) to slow right down.

    We have a 5-server cluster (1 "hub" and 4 HTTP servers accessed via reverse proxy and SSO for load balancing and redundancy). All are physically located next to each other on the network; they don't have dedicated network ports for cluster or replication traffic. I realise the IBM recommendation is a dedicated port for clustering. Cluster queues are in tolerance, and under heavy application user load, i.e. when the maximum number of documents are being created, edited and deleted, the replication times between servers are negligible. Normally, all is well. Of the servers in the cluster, one is considered the "hub" and initiates a PUSH-PULL replication with its cluster mates every 60 minutes, so that the replication load is taken by the hub and not the cluster mates.

    The problem we have: every now and then we get a slow replication from the hub to a cluster mate, sometimes taking up to 30 minutes. This maxes out the nserver.exe task on the cluster mate, which causes it to respond to HTTP requests very slowly. In the past, we have found that a corrupt document in the DB can have this effect, but on those occasions the server log shows the corrupt doc's noteId; we run fixup, and all is well. But we are not now seeing any record of corrupt docs. What we have noticed is that if a doc with the 32K issue is present, the same thing can happen. Our only solution in that case is to run: fixup mydb.nsf -V, which shows it is purging a 32K doc. Luckily we run a reverse proxy, so we can shut HTTP servers down without users noticing, but users do notice when a server has the problem!

    Has anyone else seen this occur? I have set up DDM event handlers for many of the replication events. I have set the replication time-out limit to 5 minutes (the maximum we usually see under full user load is 0.1 min) to prevent it rep'ing for 30 minutes as before; this is a temporary workaround. Does anyone know of a DDM event to trap the 32K issue? We could at least then send an alert.

    Regarding the 32K issue: this probably needs another thread, but we are finding it relatively hard to track down the source, as the 32K event is fairly rare. Our app is fairly complex, interacting with various other external web services with two-way data transfer. But if we do encounter a 32K doc, we can't look at its field properties, so we can't work out which field has the issue, which would give us a clue as to which process is the culprit. As above, we run a fixup -V. Any help/comments on this would be gratefully received.

    Read the article

  • Zeacom UC Compared To Microsoft UC

    - by Kia
    Which is the better solution: Zeacom's Unified Communications or Microsoft's Unified Communications (UC)? Which one has your company implemented? I heard Microsoft coined the term "Unified Communications", but they were slow to jumpstart it, while other companies such as Zeacom have been working on and improving their UC products for years. But Microsoft is such a standard. Which one would you go with?

    Read the article

  • Is my external hard drive dying?

    - by thatotherguy
    I keep getting this error lately when I try to copy large (200+ MB) files over to my external drive. After that the disk becomes unresponsive and I have to unplug it and plug it back in to get it working. The copy process is also unreasonably slow. It is worth noting that this happens on Windows too, so it's not the notorious "Error -36" bug OS X had prior to 10.6.3. The disk is a Western Digital 3200BMV. Any ideas?

    Read the article

  • Cannot download Microsoft Project 2010 demo videos

    - by Nam Gi VU
    Hi everyone, I need to download the demo videos at my office so that I can view them later at home, since I have a slow internet connection there, but there seems to be no way to download the videos from [the site][1] or the other resources on that site. Using some of the Firefox add-ons to download them does not work either. Do you have any tips for me? Please share. [1]: http://www.microsoft.com/showcase/en/US/search?phrase="project conference 2009"

    Read the article

  • Xubuntu running slower than Windows, with constant freezing

    - by Joseph
    So I switched my computer to Ubuntu after some issues with Windows, then switched the desktop to Xubuntu after I noticed Ubuntu and Unity were painfully slow. I'm running an i5 with 8 GB of RAM and I still experience at least one full freeze per week (where I can't do anything and have to pull the plug). REISUB does nothing and never has. I am constantly running VirtualBox because I still need Windows. Any thoughts on how to prevent this annoying freezing?

    Read the article

  • How to make sure all mail goes through Postfix?

    - by Ngompol2
    I'm using Ubuntu 10.10 32-bit. This is a new server with nginx, php-fpm and PHP 5.3, and I am going to install Postfix. Currently the server can send mail (maybe through sendmail), but it is very slow, slow enough that PHP times out. To install, I will run:

        sudo apt-get install php-pear
        sudo pear install mail
        sudo pear install Net_SMTP
        sudo pear install Auth_SASL
        sudo pear install mail_mime
        sudo apt-get install postfix

    But after Postfix is installed, how do I make sure all mail goes through Postfix?
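
    As a quick sanity check once Postfix is up, you can hand a message straight to whatever MTA is listening on localhost port 25 and watch whether Postfix accepts and logs it. A sketch using only the Python standard library (the addresses here are placeholders):

        import smtplib
        from email.message import EmailMessage

        # Hand a test message to the MTA listening on localhost:25.
        # If Postfix is installed, /var/log/mail.log should show the delivery.
        msg = EmailMessage()
        msg['From'] = 'test@localhost'            # placeholder addresses
        msg['To'] = 'you@example.com'
        msg['Subject'] = 'Postfix smoke test'
        msg.set_content('If this arrives, mail is going through the local MTA.')

        with smtplib.SMTP('localhost', 25, timeout=10) as smtp:
            smtp.send_message(msg)

    PHP's mail() normally calls whatever sendmail binary php.ini points at; since Postfix ships a sendmail-compatible wrapper, mail() should end up in Postfix once it is installed, while the PEAR Mail/Net_SMTP route lets you talk to Postfix over SMTP explicitly.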

    Read the article

  • Does urandom share the same entropy as random?

    - by ???
    Is the entropy pool used by /dev/random the same as the one used by /dev/urandom? I want to mknod /dev/random c 1 9 to replace the slow random device, because I think the current entropy is random enough. If urandom is based on the same entropy pool, and all subsequent random numbers are generated from that entropy, I don't think there will be any vulnerability.

    Read the article

  • Performing tasks on a remote Windows Server 2003 machine

    - by Vaibhav Jain
    I have to perform various tasks such as restarting processes, monitoring process state, checking disk space and monitoring logs on a Windows Server 2003 machine. For this I am using Remote Desktop access, which is very slow. Is there any alternative (tool or framework) for Windows Server where I can run my script on my own machine and have the required task performed on the remote machine in a somewhat interactive manner (like PuTTY on Linux)?

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:

    1. User opens a page to view the web camera image.
    2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    3. An FTP connection is enabled for the camera's FTP user.
    4. The web camera opens an FTP connection to the server.
    5. The web camera begins taking photos.
    6. The web camera sends each photo to the FTP server.
    7. On an image URL request, the server reads the latest image uploaded via FTP for that camera from the hard drive and deletes any older images.

    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that, instead of having the files read from the local disk, the web server would open an FTP connection and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive.

    The firmware provider for the cameras states they can build an HTTP client which, instead of uploading the image over FTP, could POST it to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: the web camera POSTs the latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be in memory. The data flow would then become:

    1. User opens a page to view the web camera image.
    2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    3. The camera is sent an HTTP request to start posting images to a provided URL.
    4. The web camera begins taking photos.
    5. The web camera sends POST requests to the server as fast as it can.
    6. On an image URL request, the server reads the latest image from Redis and tells Redis to delete the older image.

    My questions are:

    - Are there any greater overheads in transferring images via HTTP instead of FTP?
    - Is there a simple way to calculate how many cameras we could potentially have streaming at once?
    - Is there any way to prevent potentially DoS'ing our own servers with web camera requests?
    - Is Redis a good solution to this problem?
    - Should I abandon the PHP/Nginx combination and go for something else?
    - Is this proposed solution actually any good?
    - Will adding HTTPS to the mix make posting the images too slow?

    Thanks in advance, Alan
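
    For what it's worth, the Redis side of the proposal is only two commands per camera, so the moving parts stay small. A minimal sketch of the hot path (shown in Python with redis-py purely for illustration, since the stack above is PHP, where the equivalent client calls look much the same; the key naming and 60-second expiry are assumptions):

        import redis

        r = redis.Redis(host='localhost', port=6379)

        def store_frame(camera_id, jpeg_bytes):
            """Called by the endpoint each camera POSTs to.
            SET simply overwrites the previous frame, so no explicit delete is needed;
            the EX expiry ages out frames from a camera that has stopped posting."""
            r.set('camera:%s:latest' % camera_id, jpeg_bytes, ex=60)

        def fetch_frame(camera_id):
            """Called by the URL the browser polls every second; returns bytes or None."""
            return r.get('camera:%s:latest' % camera_id)

    Because the key is overwritten in place, the "delete the older image" step in the flow above disappears, and memory use stays bounded at roughly one frame per camera.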

    Read the article

  • Is it possible to download a torrent completely and quickly with no uploading, but let the file be uploaded by other users after it has finished?

    - by B-Ballerl
    Is it possible to download a torrent completely and quickly with no uploading, and then afterwards seed the file so it can be uploaded by other users? I'm finding that the download is incredibly slow for some torrents even while my upload rate is incredibly high. Is it possible to speed up the download rate by cutting off uploading completely until the file has been fully downloaded?

    Read the article

  • Checking if Intel VT-x acceleration is enabled from inside a VMware virtual machine?

    - by user269950
    My (Fortune 500) company just rolled out new VMs and everyone is complaining they are dog slow. Is there any way I could verify, from inside a VM, whether Intel virtualization (VT-x) acceleration has been properly enabled? The processor claims to be a Xeon E7-2830 but the experience has been more like a first-gen Atom. I'd ask IT directly but I get the impression they're unlikely to respond to any suggestion that they are, in fact, drooling imbeciles.

    Read the article

  • Open Office: How to disable image link updates

    - by Max Kielland
    I'm writing a user manual for a card game and there are a lot of linked images. OpenOffice is working very slowly because every time I flip to a page with linked images it starts to update them. Is it possible to tell OpenOffice NOT to update the links until I tell it to? I would like it to display the same snapshot it showed the last time I initiated a link update. I'm using OpenOffice v3.3.0. Thank you.

    Read the article

  • Radeon X1300 acceleration

    - by user30966
    Hello, I've just bought a Samsung XL2370 with a native resolution of 1920x1080. Should a Radeon X1300 be capable of shifting windows around on a screen that size? Maximising and minimising windows and scrolling in Firefox and VS2008 seem very slow and jerky. Does the Radeon X1300 have any hardware acceleration? My old display was only 1024x768 and I never noticed any problems. Maybe it is time to buy a new graphics card? Thanks, AJ

    Read the article

  • On Ubuntu 10.04, what is the recommended RoR stack?

    - by Kurucu
    I can't find clear answers or methods for this. As seen elsewhere, Passenger and RoR under Apache gobble up RAM on my VPS. I've tried a multitude of stacks and implementations, currently resting on a sub-optimal Apache/CGI/Rails configuration, which has traded the RAM usage for CPU time and slow responses to requests. Can anyone recommend an efficient, and preferably simple to administer, method of setting up Rails apps on Ubuntu 10.04 Server?

    Read the article
