Search Results

Search found 16794 results on 672 pages for 'memory usage'.


  • Track kids browsing history even when they know how to clear it manually

    - by Darren Newton
    I have a colleague with two teenage boys (yes, cue the clichés about 'I have this friend, see...'). He's currently having issues with them browsing pr0n and wants to do a little spying on their browsing (I'm staying clear of the philosophies/ethics of this). The kids are savvy enough to clear their browsing history when they're done. As I'm his go-to for IT, he has asked me if there is a way to keep hold of the browsing history. The family uses Macs, and the kids surf with Safari. I know that browsing history is kept in ~/Library/Safari/History.plist. I figure there should be a way to write an AppleScript or other script (Python/Ruby/Bash) that backs this file up to a different location (/opt/local/history, etc.). Since the kids know to clear their history when they're done, should the file be periodically backed up by something like a cron job or Hazel? That could work, but it seems like it would create a ton of little incremental backups. Or is it possible to 'watch' ~/Library/Safari/History.plist and incrementally add changes to a backup file (saving a diff, so to speak) without losing any data? Any ideas/solutions appreciated. UPDATE/EDIT: Got word from the concerned dad that the oldest uses Firefox on a different PC, so the OpenDNS solution (preferably at the router level) is the best answer so far, as it would capture usage for the whole house.
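    A minimal sketch of the polling approach, assuming a root cron job and a root-owned destination; the account name and destination path are hypothetical. Deduplicating on the file's checksum avoids the ton of little incremental backups:

        #!/bin/bash
        # Snapshot Safari history only when its contents have changed.
        SRC="/Users/kid/Library/Safari/History.plist"   # hypothetical account
        DST="/var/local/history"                        # root-owned, out of the kids' reach
        mkdir -p "$DST"
        sum=$(md5 -q "$SRC")    # OS X md5; -q prints just the checksum
        # one copy per distinct history state; the copy's own timestamps date it
        [ -e "$DST/$sum.plist" ] || cp -p "$SRC" "$DST/$sum.plist"

    Run it every few minutes from root's crontab; a launchd job with a WatchPaths key on that plist would fire on change instead of polling, but frequent polling is simpler and still catches the pre-clear state.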


  • UW-IMAP server, high load for one user

    - by Bruce Garlock
    We have been experiencing a very strange anomaly with one specific user on our UW-IMAP server. About 75 users use the server, and this one user, who sits around the middle of the pack in storage used, keeps having issues with slow speed. Most of our users are on Thunderbird 2 or 3 (mostly 2, because of the performance issues we have had with 3). This user was on 3, and I downgraded him to 2. Performance has improved, but according to the imapd processes on the server, his username is using the most CPU % and CPU time. I've already done all the usual troubleshooting: started his profile from scratch, compacted folders, re-indexed, moved him to a newer, faster computer, etc. Still, this user's imapd process is always the top CPU consumer on the server. For comparison, we set up another user with more usage, folders, etc. than he has, but we don't see that user's imapd process dominating the CPU. So it almost sounds like a particular email may be the culprit, but how can we find it, if that's the problem? This has been going on for a while, and he is a management person, so his patience is about to end. Does anyone have any ideas?
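    One way to test the bad-message theory is to look for disproportionately large folders or messages on the server side, since UW-IMAP re-parses whole mbox files and a single huge folder can keep one imapd pinned. A hedged sketch (the mail location varies by build; mbox files under the home directory are the common default, and the username is a placeholder):

        # Largest mailbox files for that user (paths assumed)
        du -sk /home/username/* /home/username/mail/* 2>/dev/null | sort -rn | head
        # Watch what his imapd is actually doing while he is connected (Linux)
        strace -p <imapd-pid> -e trace=read,open

    If one mbox dominates, splitting it into smaller folders (or finding the one pathological message inside it) is the next step.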


  • Nginx proxy to IIS Connection Timeout

    - by MitMaro
    I am having an issue with random timeouts on an Nginx proxy connecting to an IIS machine. Watching a packet capture between the two servers, the IIS machine receives a SYN packet but never responds with what I believe should be a SYN-ACK, and before the timeout occurs there also seems to be a slower response from the IIS server. There is no unusual memory or processor usage on either machine. Some information on the servers and setup:

    Nginx machine: Ubuntu 10.04 64-bit, Nginx 0.7.65, Amazon EC2
    Windows machine: Windows Server 2008, IIS 7, ASP.NET application in Integrated mode

    Nginx error:

        2011/01/10 17:57:40 [error] 8297#0: *30 connect() failed (110: Connection timed out) while connecting to upstream, client: 209.***.***.***, server: secure.example.com, request: "GET /a/path/deliver.aspx HTTP/1.1", upstream: "http://***.***.***.****:****//another/path/deliver.aspx", host: "secure.example.com"

    Wireshark packets (the same SYN retransmitted, never answered):

        6521.449528 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477422103 TSER=0 WS=7
        6524.443239 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477422403 TSER=0 WS=7
        6530.443241 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477423003 TSER=0 WS=7
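    If the drops prove intermittent on the IIS side, nginx can at least be told to give up faster than the kernel's SYN retransmit cycle and retry. A hedged sketch for the proxy block (directive names are standard nginx 0.7.x; the values are illustrative, and with a single upstream the retry just re-attempts the same box):

        proxy_connect_timeout 5s;       # don't wait the default 60s on a lost SYN
        proxy_read_timeout    60s;
        proxy_next_upstream   error timeout;

    The trace itself (one SYN retransmitted after roughly 3 s and 9 s with no answer) points at the Windows end dropping connections, so the TCP SYN backlog and any connection limits on the 2008 box are worth checking in parallel.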


  • Win 7 crashes, PC reboots and says "Hard drive 0 not found" until I turn it off and on again

    - by Danny T.
    I recently made the move from Windows XP to Windows 7. Since then, when my computer has been on for a few hours it always ends up rebooting without warning, and then the BIOS won't recognize my hard drive ("hard drive 0 not found"). If I turn my computer off and then on again, it boots normally. Some details:

    - Dell Dimension 9150, Windows 7
    - I updated the BIOS
    - I updated all system drivers with the latest versions from Dell (SATA, chipset, etc.)
    - Other drivers updated too (graphics card, sound, etc.)

    There is one thing I tried after some Googling: I turned off DMA access to the drives, but it's still rebooting after a few hours. Any clue?

    UPDATE 2010/12/13: Here are the events from the Event Log for today, from when I turned the computer on until it crashed:

        19:17 - Error    - ID 10016 - DistributedCOM
        20:06 - Error    - ID 1008  - Customer Improvement Program (could not send data to Microsoft)
        21:48 - Critical - ID 41    - Kernel-Power (system was restarted without proper shutdown)
        21:48 - Error    - ID 6008  - EventLog (previous system shutdown was not planned)
        21:48 - Error    - ID 1101  - EventLog (audit event ignored)
        21:49 - Error    - ID 10016 - DistributedCOM

    Both DistributedCOM events have a description along these lines (translated from French): the application-specific permission settings do not grant Local Activation permission for the COM server application with CLSID {C97FCC79-E628-407D-AE68-A06AD6D8B4D1} and APPID {344ED43D-D086-4961-86A6-1106F4ACAD9B} to the SID NT AUTHORITY\SYSTEM (S-1-5-18) from address LocalHost (using LRPC). This security permission can be modified using the Component Services administrative tool.

    UPDATE 2010/12/31: Here are the error messages I get on blue screens:

        STOP 0x0000007A - KERNEL_DATA_INPAGE_ERROR ("unknown hard error")
        STOP 0xC0000135 - Can't start because %hs is missing


  • Suggestions for a backup solution

    - by jiewmeng
    I am deciding between:

    - a Windows Home Server
    - a simple NAS
    - extra HDDs in my desktop

    By the way, I will be the main user. I am looking to fulfil the following needs:

    - Reliability: I am thinking RAID 1 or 5.
    - Not so prone to virus/malware infections: will using a separate NAS or home server help? A Windows Home Server is still a Windows PC, just separated from me by the network, isn't it?
    - Power efficiency: e.g. spin down when not in use.
    - Downloads: I may want to download big files/torrents overnight, and I may not want to keep a full-powered PC on for it. Is the power usage of a full PC vs. a NAS different enough to justify the cost of a new system, especially since I am the only user?
    - Performance: I would like to write to and read my files fast. On second thought, maybe for backup I can forgo this? Maybe a WD Green HDD? But how much slower would it be? Plus, since I am the only user, the whole HDD would be mine anyway.


  • php5-fpm crashes when there are many visitors

    - by chillah
    I decided to switch from LiteSpeed to Nginx because I had read a lot about how few resources Nginx needs. I'm running a WordPress site with 500 users online. My www.conf in /etc/php5/fpm/pool.d:

        pm.max_children = 200
        pm.start_servers = 20
        pm.min_spare_servers = 20
        pm.max_spare_servers = 60
        ;pm.process_idle_timeout = 10s;
        ;pm.max_requests = 1000

    Since I changed that config to allow more requests and more children, the site takes longer and longer until it shows me a white blank page. If I then restart the php5-fpm process, the site runs fine again. CPU usage is at 5% once php5-fpm has crashed; while it's running it's about 20 to 90%! Here's my nginx.conf:

        user www-data;
        worker_processes 2;
        pid /var/run/nginx.pid;

        events {
            worker_connections 3048;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;

    I already tried changing the settings, tried all the variants, but nothing worked. The machine: dual-core, 4 GB RAM.
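    For what it's worth, raising pm.max_children is the usual cause of exactly this slow-death pattern: with WordPress, 200 PHP children can swap a 4 GB machine to a standstill while the CPU stays low. A hedged pool sketch (directive names are standard php5-fpm; the numbers are illustrative and should be sized as free RAM divided by one child's resident size):

        pm = dynamic
        pm.max_children = 40            ; e.g. ~3 GB available / ~70 MB per child
        pm.start_servers = 10
        pm.min_spare_servers = 5
        pm.max_spare_servers = 15
        pm.max_requests = 500           ; recycle children to contain memory leaks
        slowlog = /var/log/php5-fpm.slow.log
        request_slowlog_timeout = 5s    ; log traces of requests stuck over 5s

    The slowlog will show what the children were doing the next time the white page appears.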


  • DHCP relay setup on an Ubuntu server

    - by jerichorivera
    I have a network appliance (QNO) that works as a traffic load balancer and DHCP server. I would like to add a Linux server between the network appliance and the client computers, to be used for monitoring bandwidth usage. My problem is that I still want DHCP to be served by the network appliance so that load balancing keeps working efficiently: we are afraid that if we make the Linux server the DHCP server, the appliance will only see the Linux server as a single client and won't be able to balance the traffic. I've been searching all over for a tutorial on how to set up DHCP relay but have not found any. How do I set up DHCP relay on my Linux server, given there are two NICs attached to it: one connecting the Linux server to the network appliance, the other connecting it to the client computers?

    EDIT: Router (DHCP) ---- [eth0] Linux Server (relay agent) [eth1] ----- PC (network)

    Router IP is 192.168.0.100; eth0 is on DHCP; eth1 is static 192.168.2.11 (if I need to change this, I can). I tried dhcrelay -i eth1 192.168.0.100, but the PC was not getting any DHCP lease from the router. I might be missing something here.
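    A minimal sketch of the relay setup, using the interface names from the diagram (ISC dhcrelay has to listen on both the client-facing and the server-facing interface, not just eth1):

        # forward packets between the two NICs
        sysctl -w net.ipv4.ip_forward=1
        # relay DHCP on both interfaces toward the appliance
        dhcrelay -i eth0 -i eth1 192.168.0.100

    Two things on the appliance side also have to be true: it needs a scope/pool defined for the client subnet (the relay stamps requests with eth1's 192.168.2.11, which is how the appliance picks the pool), and it needs a route back to 192.168.2.0/24 via the relay's eth0 address, or its offers will never reach the PCs.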


  • Toshiba A200 laptop hangs/freezes when plugged in

    - by tapan
    The subject says it all. It is the exact same problem as described here, and I had asked a similar question before here; my system specs etc. are all there. My problem is just that I cannot afford to get it fixed right now: I need the laptop to be kind of usable for 2-3 more months. It worked perfectly for 3 months or so in Ubuntu recovery mode; now it hangs/freezes every hour. I spend 8 hours a day on average on my laptop, so I just want it to last at least that long. If the laptop is not plugged in, then in recovery mode or regular Ubuntu mode it freezes in 5 minutes and the screen becomes all weird. With my Windows 7 OS, it works perfectly on battery but hangs instantly as soon as it is plugged in. Also, keeping CPU usage high no longer helps. Any strategy or suggestions I could employ to extend the duration for which it doesn't freeze?


  • Disk Activity Alert Windows SBS 2003 on Dell PowerEdge 830 with Raid

    - by Ron Whites
    Background: I have a Dell PowerEdge 830 server running Windows Small Business Server 2003. It has 4 GB of RAM and an ATA CERC SATA 6CH controller with three 160 GB drives in a RAID 5 configuration.

    The problem: I am seeing admin "Disk Activity Alert on Server" emails. These often occur when disk backups, defragmentation or other high disk usage is going on; generally the server isn't overstressed. The Disk Alert emails say, in part:

        The following disk has low idle time, which may cause slow response time when reading or writing files to the disk.
        Disk: 0 C: F: D:
        Review the Disk Transfers/sec and % Idle Time counters for the PhysicalDisk performance object. If the Disk Transfers/sec counter is consistently below 150 while the % Idle Time counter remains very low (close to 0), there may be a problem with the disk driver or hardware.

    The questions I have: With what utility can I review the Disk Transfers/sec and % Idle Time counters? It appears there is no utility for that on the server! And do I really need to download the very large (two-DVD) Dell OpenManage utility to monitor the RAID system and see whether there is a problem?
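    Both counters are viewable without any Dell download: perfmon (Start -> Run -> perfmon) has the PhysicalDisk object built in, and typeperf, which ships with Server 2003, can sample the same counters from a command prompt. A sketch (the instance name here is taken from the alert text; adjust it to whatever typeperf -q PhysicalDisk lists):

        typeperf "\PhysicalDisk(0 C: F: D:)\Disk Transfers/sec" "\PhysicalDisk(0 C: F: D:)\% Idle Time" -si 5

    OpenManage is only needed for the RAID controller's own view (rebuilds, patrol reads, failed-drive state), and the Server Administrator component alone should suffice rather than the full kit, though which installer bundles it for the CERC controller is worth verifying.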


  • Merely installing PHP5 causes my AWS Ubuntu server to die minutes later from a massive CPU spike

    - by Mark Amery
    I have an AWS server with Ubuntu 11.04 as the OS, running an Apache2 webserver (incidentally Python-based, using Django). We recently needed to add PHP5 support to use a third-party PHP library (incidentally for serving minified versions of JS and CSS files). However, for no reason any of us can discern, if we simply run sudo apt-get install php5 on the server, the install appears to finish successfully, but a few minutes later the server becomes impossible to connect to. This happens without us taking any further action: we have not yet run sudo apt-get install libapache2-mod-php5 (which I think would be the next step), nor actually run any PHP scripts on the server. The 'Monitoring' tab for the server in the EC2 Management Console shows that a while after the installation, CPU usage spikes to 100% and stays there permanently (until we reboot the server from the AWS Console). After rebooting, the server also reliably dies within a few (between 0 and 10) minutes. We restored the server to a pre-PHP state from an AMI image, observed that it was stable, then tried installing PHP5 again and watched the server die in exactly the same way, so we're pretty much certain that installing PHP5 is what causes the symptoms. What on earth could be causing this behaviour, and how can we get PHP installed on the server without it dying?
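    It may be worth capturing, before the box wedges, what the package's hooks actually started and which process owns the spike. A hedged sketch run right after the install, assuming the few minutes of grace the symptoms allow:

        # what got installed/triggered along with php5
        tail -n 50 /var/log/dpkg.log
        # record the CPU hogs every 5s to a file that survives the reboot
        top -b -d 5 -n 120 > /var/tmp/top.log &

    If the log shows a runaway process, its name narrows things down to a package hook or some job reacting to the install rather than PHP itself.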


  • Does using a hexacore CPU make sense?

    - by Exa
    I'm currently planning to upgrade my computer and want to exchange the CPU, board and RAM. I have already looked at some hexa-core CPUs from AMD and would like to know whether it makes any sense to use a CPU with six cores. Is there any software that really uses six cores, especially in gaming? I use this PC mostly for gaming and from time to time for development. I know that on my current dual-core system (2 x 3 GHz), Visual Studio creates two instances of the compiler, one for each core. Would there be six instances of the compiler on a hexa-core system, for super-fast compiling? Would running two applications spread the work across more cores (for example, two cores for a game you're playing while two other cores are used for compiling at the same time)? I hope someone can point out the benefits of a hexa-core system. The OS would be Windows 7 64-bit, and I use the PC for gaming most of the time (Crysis 2, CoD, stuff like that).
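    On the compiling half of the question, the answer is yes with the right switches: Visual Studio parallelizes at two levels, and both scale to six cores (standard MSBuild and cl options; solution and file names illustrative):

        rem build up to 6 projects of the solution at once
        msbuild MySolution.sln /maxcpucount:6
        rem fan one C++ project's source files across 6 cores
        cl /MP6 a.cpp b.cpp c.cpp

    Games of that era are a different story: most titles load two to four cores at best, so for gaming, per-core clock speed tends to matter more than core count.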


  • SSD suddenly full

    - by Daniel
    Today the hard drive of our server was suddenly full, although disk usage had always stayed around 50% in the weeks and months before (old data is regularly expunged from the server). I deleted 10 GB of files in /tmp, which strangely freed 51 GB. Here is what I did:

        root@***:~# df -h
        Dateisystem    Size  Used Avail Use% Eingehängt auf
        /dev/sda3      139G  137G     0 100% /
        tmpfs          3,9G     0  3,9G   0% /lib/init/rw
        udev           3,9G  116K  3,9G   1% /dev
        tmpfs          3,9G     0  3,9G   0% /dev/shm
        /dev/sda1      985M   25M  910M   3% /boot

        root@***:/var# du -hs *
        3,3M  backups
        438M  cache
        9,4G  lib
        4,0K  local
        12K   lock
        76M   log
        24K   mail
        4,0K  opt
        88K   run
        184K  spool
        10G   tmp
        12K   www

        root@***:/var/tmp# find -type f -print0 | xargs -0 rm

        root@***:/var/tmp# df -h
        Dateisystem    Size  Used Avail Use% Eingehängt auf
        /dev/sda3      139G   81G   51G  62% /
        tmpfs          3,9G     0  3,9G   0% /lib/init/rw
        udev           3,9G  116K  3,9G   1% /dev
        tmpfs          3,9G     0  3,9G   0% /dev/shm
        /dev/sda1      985M   25M  910M   3% /boot

    Any explanation as to why deleting 10 GB in /tmp gave me back 51 GB on the disk? Could this point to an SSD failure? Are there any tools for Debian to test SSD health? I have already checked syslog; the first entry relating to this incident is a MySQL message:

        1:22:02 [ERROR] /usr/sbin/mysqld: Disk is full writing...

    So I have absolutely no idea what caused this.
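    The classic (non-failure) explanation for df and du disagreeing is deleted-but-still-open files: du stops counting a file the moment it is unlinked, while df keeps counting it until the last process holding it open lets go. On that reading, the extra 41 GB came back because some long-running writer closed or died around the time of the cleanup (the full-disk MySQL error is a candidate), which lsof can confirm next time before anything is deleted. A sketch, plus the stock Debian answer to the SSD-health question:

        # deleted files still held open, and the processes pinning them
        lsof +L1
        # SMART attributes and self-tests (package: smartmontools)
        smartctl -a /dev/sda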


  • Torrent downloads not showing in the Squid log

    - by noobroot
    Hello, I have just a few months working as a sysadmin, hence I still have lots to learn. We have an OpenBSD 4.5 box acting as firewall, DNS, cache, etc. The box has two network cards: one connected directly to the internet and the other to our switch. I used to use sarg for the log analysis but then changed to the much faster free-sa. I use a daily free-sa report to check bandwidth usage and report our top 5 bandwidth consumers (3 days a week at #1 and you will be buying the pizzas :D, we are a small company of ~20, so we are very familiar). This was working really well until recently. One of us needed to download some stuff via torrent (~3 GB), and since the pizza rule only applies to non-work-related downloads, he told me (verified) that his download was indeed work related, so I would dismiss that 3 GB from his quota. But to my surprise the log didn't show that 3 GB: his IP's consumption was only around 290 MB. More recently, since the FIFA World Cup started, we know some of the employees are watching the matches' streams. We know it and we don't care, since, as stated, we are a small company and don't have restrictive policies. We can all chat, watch YouTube, download anything we want, BUT we are only allowed 300 MB a day, otherwise you'll land on the top-5 pizza board. Anyway, that streaming consumption is also not showing in the free-sa reports. So my question is: why is this data being excluded from the reports? I'm thinking the free-sa reports list only certain types of traffic, but I'm also wondering whether it's the Squid logs that are not, erm... logging these connections. Any help, guide, advice or clarification is appreciated.
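    That gap is expected: free-sa reads Squid's access.log, and Squid only logs traffic that actually traverses the proxy. BitTorrent and Flash streaming (RTMP, typically port 1935) speak their own protocols straight out through the firewall, so they never appear in the report. Accounting for them has to happen in pf instead; a hedged pf.conf sketch (one rule per host, since labels are expanded when the ruleset loads; addresses illustrative):

        # count what each LAN host pushes outside the proxy
        pass out on $ext_if from 192.168.1.10 to any label "out-192.168.1.10"

        # read the per-label byte counters
        pfctl -s labels

    Generating the per-host rules from a list of LAN addresses keeps the ruleset maintainable.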


  • How do I change file protections under XP on a disk taken from a Windows Server machine?

    - by cdkMoose
    I had a Windows Server 2003 machine running at home, along with the desktop I use for development. The server went belly up, but since my desktop is reasonably powerful, I figured I would move the (still OK) disk from the file server into my XP machine to keep all of the files. The disk comes up fine and shows all of the files, but I get Access Denied errors when trying to work with some of them. When I display attributes in Explorer, none of the files are marked read-only. When I view properties on the directories, the Read-Only checkbox is not checked but has a green background (which I thought meant mixed usage for files in the directory). When I click the checkbox to clear it and click Apply, the disk does some work and all looks well. However, I continue to get the Access Denied errors, the files still don't show any read-only attribute, and the directory properties show the green background on the Read-Only checkbox again. I did check the box that applies the change to the folder and all files/subfolders under it. I am assuming the issue relates to user IDs/permissions carried over from the server install. So why does it let me think I can change the attribute when I can't, and how can I correct this so that the disk correctly recognizes the IDs from XP?
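    For context: the green/indeterminate Read-Only box is normal for folders (it doesn't reflect the files at all), so the real blocker is almost certainly NTFS ACLs referencing SIDs from the dead server's local accounts. The usual cure is to take ownership (Properties -> Security -> Advanced -> Owner, after disabling Simple File Sharing so the Security tab shows) and then re-grant access. A hedged command-line sketch for the grant step (drive and path assumed):

        cacls D:\data /T /E /G Administrators:F

    cacls ships with XP; /T recurses and /E edits the existing ACLs instead of replacing them.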


  • Why does Windows/Microsoft Update always take such a long time to detect available updates?

    - by RLH
    It's a common task for many of us who work in any form of IT position using Windows: eventually you have to install or re-install a version of Windows, and what follows is a very long OS updating process. For a long time I have accepted that this is a slow process and that's all there is to it. There is a lot to download, and some updates require restarts followed by further updates... ugh! This morning I had to go through installing Windows XP with SP3. I'm installing the OS in a VM on an SSD and I've been working on this thing for over 6 hours. Although I think there are many ways to nit-pick this process for improvements, there is one step that is always particularly slow, and I cannot figure out a good reason why: the detection step of a manual update. Specifically, when you navigate to the Windows (or Microsoft) Update page and click the 'Custom' button to detect your updates, the PC just sits there for a painful amount of time. Check Task Manager and it looks like the PC is locked, but your CPU isn't cooking, so that's certainly not the case. Something is happening, but I have no clue what. What is the updating software doing? If the registry were being searched, shouldn't my CPU usage peak? Does anybody know what's happening? I can loosely justify why some of the steps in the update process take so long, but this one doesn't seem to have any reasoning behind it.


  • Faster secure protocol/client needed for long-distance transfer

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1 Gb/s fibre link. Let me tell you the story... I have a pair of redundant, diversely-routed 1 Gb/s links over a distance of around 250 miles or so (not dark fibre, but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66 GHz Xeons, 4 GB, Windows 2003 EE 32-bit); at the 'server' end, an HP BL460c G6 (2 x quad-core 2.53 GHz Xeons, 48 GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2 GB files per week from the client to the server machine, but the transfer NEEDS to be secure. Using either iperf or regular FTP I can get ~80 MB/s pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4 MB/s; at this point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux, but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So, a few questions/thoughts:

    - Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic as long as it'll work between Windows and Linux.
    - Should I be using hardware to do the encryption, either in the client or the network? If so, what would you recommend?
    - I'm not convinced that just swapping the server would be much faster; the client CPU was only at 30%, but then again that's higher than I'd have expected given the load. Moving to Linux at the client end may be a better idea but would be quite disruptive.
    - Am I missing a trick?

    Thanks in advance.
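    A cheap experiment before any hardware: the bottleneck profile (one client core loaded, server idle) is the single-threaded cipher, so swapping AES for a lighter stream cipher often multiplies SFTP/SCP throughput several-fold. A hedged example from a command-line OpenSSH client (cipher names are standard OpenSSH; arcfour trades cryptographic strength for speed, so it has to pass the security requirement first):

        scp -c arcfour -o Compression=no bigfile.bin user@server:/ingest/

    WinSCP exposes the same cipher choice in its session settings. The other known fix for exactly this pattern is the HPN-SSH patch set, which lifts OpenSSH's internal flow-control window limits and adds multi-threaded ciphers, though that means running a patched sshd on the Linux end.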


  • Processes slow down after some time of running actively

    - by Yervand Aghababyan
    I have several cron jobs running on an Ubuntu machine, each doing some pretty heavy-load stuff. The cron jobs parse files, and the bigger the file, the longer it takes to parse it. The strange thing is that if I make the files too big (like 30 MB), the script kind of hangs. It starts processing them really enthusiastically, but after some time (something like 5-10 minutes) the CPU usage of the process drops a lot and it goes into some "zombie" state: where the process in htop was using 70-80% of the CPU before, after this drop it slows to something like 5-10%, and the load average drops as well. The status of the processes sometimes changes to D in htop, which AFAIR stands for zombie. Today I noticed the same behaviour from MySQL processes when executing heavy queries (one query took something like 4 hours to execute). The cron jobs are mostly PHP, and during their processing most of the CPU goes to the php process, not mysql, so I think the issue is not with a specific language/program but with the way the processes are "managed". The only other place I've seen similar behaviour was on my Amazon EC2 micro instance, where after some aggressive CPU use the CPU quota would take effect and everything would slow down dramatically. But this is a dedicated machine running Ubuntu. What may be the cause?
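    A note on the D: in ps/htop, D is uninterruptible sleep, a process stuck waiting on I/O inside the kernel, while zombie is Z. That reading fits the CPU drop and points at the disk or filesystem (or memory pressure forcing writeback) rather than any CPU management. Two quick checks while a job is in that state (standard procps/sysstat tools):

        # which kernel wait channel is the process blocked in?
        ps -o pid,stat,wchan:32,cmd -p <PID>
        # is the box waiting on disk? watch %iowait and the device's await
        iostat -x 5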


  • Slow upload, fast download on a Windows 7 64-bit system

    - by Malik
    I've got a weird problem: the download speeds on my desktop PC (Windows 7 Home Premium 64-bit) are consistently fast (approx. 400 kB/s), but uploads are very slow (around 6-10 kB/s). This has been going on for the last 3 weeks or so. I am a very competent user and troubleshooter, and have searched online for 2 weeks for a solution, to no avail. Part of the problem is that internet is provided over WiFi by my landlord and I have no access to the router (a BT Home Hub), although I know for sure he wouldn't have the first idea how to restrict my usage :) (which rules that out). Anyway, I've tried:

    - various drivers (my WiFi card is a TP-Link TL-WN851N, and I've tried the TP-Link, Atheros and Qualcomm Atheros drivers suggested by Microsoft)
    - various tweaks to network parameters (e.g. as suggested by SpeedOptimser)
    - various tweaks to Windows 7 services (e.g. disabling unnecessary services or setting them to manual)
    - raising and lowering my head onto a reasonably firm surface at moderate frequency (jk :D)

    None of the above have helped, and I'm officially asking for help now!! Thanks for your time and effort in advance!


  • Choice of filesystem for GNU/Linux on an SD card

    - by gspr
    Hi. I have an embedded ARM-based system running off an SD card, currently Debian GNU/Linux using ext3 as the filesystem. As I'm about to reinstall the system, I started wondering about changing to a more flash-friendly filesystem. I've heard about JFFS2, YAFFS2 and LogFS, and they all seem suited to the job; which one would you recommend? Also, I've heard there have been a lot of ext4 improvements to better suit SSDs. Should I interpret that as meaning running ext4 would be just fine, and what do I especially need to think about in that case? I guess the usage of the system is important; for the sake of generality, imagine it'll do standard desktop stuff (even though it is in fact a small ARM-based system). Thanks for any replies.

    Edit: Wikipedia tells me (in a "citation needed" statement) that removable flash memory cards and USB flash drives have built-in controllers to perform wear leveling and error correction, so the use of a specific flash file system does not add any benefit. Thus, I'm leaning towards sticking with an ext filesystem.
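    One point that settles much of the choice: JFFS2, YAFFS2 and LogFS are designed for raw NAND/MTD devices, and an SD card doesn't expose raw flash, only an ordinary block device behind its controller, which is also why the quoted claim about built-in wear leveling is broadly right. Staying with ext3/ext4 and simply reducing write traffic is the usual answer; a hedged /etc/fstab line (device name and option values illustrative):

        /dev/mmcblk0p2  /  ext4  noatime,commit=60,errors=remount-ro  0  1

    noatime drops the per-read metadata writes, and a longer commit interval batches journal flushes at the cost of losing up to that many seconds of data on power loss.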


  • What is the best hosting option for a Flash web widget?

    - by par
    Our Flash web widget has become highly popular: it is downloaded around 100,000 times per day, and that is the problem. Our server bandwidth is too narrow to deliver the widget to clients quickly, so the widget loads very slowly, probably 20 times slower than before (at peak times). Probably I have chosen the wrong hoster for my task of delivering a 1 MB Flash widget to 100,000 users per day. What is the best hosting solution in my case? I'm not good at server administration, so forgive me if I sound naive. The details are the following. Our hoster options: a dedicated Ubuntu server, a 10 Mbit connection, and a monthly bandwidth limit of 2,000 GB. The widget is 1 MB and consists of the main SWF plus a number of loaded SWF and data files. (Note that 100,000 x 1 MB is roughly 100 GB per day, about 3,000 GB per month, which is already over the cap; and a 10 Mbit line tops out near 1.25 MB/s, around 105 GB per day flat out, so the pipe itself saturates at peak times.) This is a part of the Apache status report taken right now:

        Server uptime: 1 hour 2 minutes 38 seconds
        Total accesses: 74865 - Total Traffic: 5.8 GB
        CPU Usage: u28 s7.78 cu0 cs0 - .952% CPU load
        19.9 requests/sec - 1.6 MB/second - 81.1 kB/request
        200 requests currently being processed, 0 idle workers

        WWWWWWWWWWWWWWWWWWWWWWWWCWWWWWWWWWWWWWWWWWWWWWWWWWWCWWWWWWWCWWWW
        WWWWWCWWWWWWWWWWWWWWCWWWWWWWWWWWWCWWWWWWWWCWWCWWWWWWWWWWWWWWWWWW
        WWWWWWWWWWWWWWWWCWWWWWWWWWWWWWWWWWWWWWWWWWWWCWCWWWWWWWWWWWWWWWCW
        WWWWWWWW........................................................


  • External component has thrown an exception. ASP.NET ASPX PAGE POST

    - by Brandon
    I have an ASPX page that communicates with a web service I have. It connects to a SQL Server database on my virtual dedicated server. With just a little usage, I get this error:

        External component has thrown an exception.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Runtime.InteropServices.SEHException: External component has thrown an exception.

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [SEHException (0x80004005): External component has thrown an exception.]
        Luxand.FSDK.Initialize(String DataFilesPath) +0
        WebService.onLoad() +70
        WebService..ctor() +91
        facematch.btn_submit_Click(Object sender, EventArgs e) +218
        System.Web.UI.WebControls.Button.OnClick(EventArgs e) +105
        System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +107
        System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +7
        System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +11
        System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +33
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1746


  • What may be the reason for the slowness (see details in message body)?

    - by Ivan
    I've got a really weird situation that I'm struggling to solve: a performance problem that looks very much like an idle wait loop somewhere in code (while it probably isn't). I've got a pretty powerful dedicated server (10 GB RAM, eight Xeon cores, etc.) running Ubuntu 10.04, with all the services (except the OpenVPN server used to give clients secure access) deployed in separate VirtualBox (vboxheadless) machines: one for the company e-mail server, one for the web server, and one for the accounting/CRM server (Firebird plus a proprietary app server working with Delphi-made clients). CPU load (as top reports) is almost always near zero. Host system RAM is close to 100% used but not overloaded (very little swap is used, and memory freed by stopping one of the VMs doesn't get reused quickly). Approximately 50% of the guests' RAM is used. iostat usually shows near-zero %util. Network bandwidth seems underused. But the accounting/CRM client (a Win32 Delphi application run on Windows XP machines) works hellishly slowly against this server (and works much better against an inside-LAN Windows server). I just can't imagine what can make it slow when there are so many CPU, RAM, HDD and bandwidth resources available on the clients and the server, even at their busiest moments. In saying bandwidth is underused, I not only know that the clients and the server are connected to the Internet with bigger channels than are really used (which leaves a chance of a bottleneck of some sort on the route between them), I have also tested the bandwidth between clients and the server by copying files among them.
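    One measurement that often explains exactly this picture (every gauge idle, the app still crawling) is round-trip latency. Delphi/Firebird clients tend to be chatty, with each query, fetch and metadata lookup costing a synchronous round trip, so at, say, 40 ms RTT a screen that issues 500 round trips takes 20 seconds no matter how idle the server is, which would also explain why the same app flies against an in-LAN server. A quick check from a client machine (standard Windows tools):

        ping -n 20 server.example.com
        tracert server.example.com

    If the average RTT is tens of milliseconds, the fix is protocol-side (fewer round trips, or running the clients near the server over RDP or similar), not more server hardware.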


  • Computer freezing while watching Flash videos from the net

    - by t3st
    I have Windows 7 Home Basic. While watching videos from the net, the computer starts to freeze (within 5 minutes) and shows 100 percent CPU usage. I first thought it was a browser issue, but watching videos from different browsers has the same problem. My system runs the latest Firefox and all my plugins (including Flash) are up to date. After this, when I shut down or restart the computer, it gets to the login window without any problem, but when I try to log in to any account the system starts to freeze again, and I have to start and run Windows in safe mode (which doesn't show any problem). I read in an article to do these steps:

        sfc /scannow
        chkdsk

    Only after that does my system work normally. Even now I can't watch any videos on the net, otherwise it starts freezing and I have to do the whole process once more (which takes a lot of time); I can watch downloaded videos on the computer without a problem. While running sfc /scannow, it reported that some files are corrupted and could not be repaired. Can this be the cause of the freezing while running Flash videos, or is it a hardware-related problem? What steps do I have to take to correct those corrupt files? System Restore works only sometimes.


  • How do I find the cause for a huge difference in performance between two identical Ubuntu servers?

    - by the.duckman
    I am running two Dell R410 servers in the same rack of a data center. Both have the same hardware configuration, run Ubuntu 10.04, have the same packages installed and run the same Java web servers, with no other load. Yet one of them is 20-30% faster than the other, very consistently. I used dstat to check whether there were more context switches, IO, swapping or anything else, but I see no reason for the difference: with the same workload (no swapping, virtually no IO), CPU usage and load are higher on one server. So the difference appears to be mainly CPU-bound, but while a simple CPU benchmark using sysbench (with all other load turned off) did yield a difference, it was only 6%. So maybe it is not only CPU but also memory performance. I tried to figure out whether the BIOS settings differ in some parameter and did a dump using dmidecode, but that yielded no difference. I compared /proc/cpuinfo: no difference. I compared the output of cpufreq-info: no difference. I am lost. What can I do to figure out what is going on?
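    One gap in that comparison: dmidecode reports hardware inventory, not BIOS setup options, so identical dumps don't rule out a power-management mismatch (C-state, Turbo or power-cap settings can differ invisibly). Two cheap checks under load, plus a memory-side benchmark since the CPU test only explained 6% (standard tools; sysbench is already installed per the question):

        # effective clock while the benchmark runs; a capped CPU shows lower MHz here
        grep MHz /proc/cpuinfo
        # memory bandwidth, which a plain CPU benchmark misses
        sysbench --test=memory --memory-block-size=1M --memory-total-size=10G run

    Comparing the actual BIOS settings needs the vendor tool (Dell's OpenManage/DTK syscfg on an R410, names worth verifying) rather than dmidecode.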


  • ASP.Net Session Timing Out Rapidly

    - by Zac
    We have an ASP.NET 3.5 website running on Windows Server 2008 with IIS7. The session timeout period for this site is configured to be 20 minutes; however, it is currently lasting between 40 and 50 seconds. After researching the problem we investigated several configuration values that could be involved, but none of them are set to less than 20 minutes. The places we looked:

    - web.config system.web/sessionState element (20 minutes)
    - web.config system.web/authentication/forms element (not present, defaults to 30 minutes)
    - Sites/{website}/ASP/Session Properties/Time-out (20 minutes)
    - Application Pools/{appPool}/Advanced Settings/Process Model/Idle Time-out (20 minutes)

    We've also noted that the CPU is staying around 0% and that RAM usage is flat-lining around 1.07 GB (of 8 GB available), so there is no performance-based reason for IIS to be recycling the application pool as far as we can tell. Are there any settings we've overlooked that could cause the sessions to expire so quickly?

    EDIT: A couple of additional points: this does not occur in development, only on the server; and the session is not sliding (i.e. if we refresh the page a few times, it still times out approximately 40-50 seconds after the session was created).
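    Forty to fifty seconds is the signature of the worker process (or the app domain) being torn down, which takes InProc session state with it, so recycle events are the next thing to rule out. A hedged sketch with appcmd, which ships with IIS7 (pool name illustrative; the flag values are the standard logEventOnRecycle set):

        %windir%\system32\inetsrv\appcmd set apppool "MyPool" /recycling.logEventOnRecycle:Time,Requests,Schedule,Memory,IsapiUnhealthy,OnDemand,ConfigChange,PrivateMemory

    Then watch the System event log (source WAS) around a timeout. Also worth knowing: writes under the application folder, such as log files, anti-virus touching web.config, or deployments into bin, restart the app domain without any pool recycle, and that alone empties InProc sessions.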

