Search Results

Search found 9661 results on 387 pages for 'ki lya online fr'.


  • Webserver horribly slow, sometimes incredibly fast

    - by dhanke
    I am running a small community (6,000+ members) on a non-virtual 64-bit Ubuntu 11.04 system. I am not a Linux pro, not even advanced; I just tried to set up a webserver, which does nothing special, actually: delivering some dynamic PHP and RoR websites is its task. So my configuration files may well look horribly bad. I might also use the wrong vocabulary, so when in doubt, please ask.

    At our current all-time record of 520 registered users (board accounts, not system users) online at the same time, the average server load is about 2.0-5.0. At typical traffic (~250 users) the average load is about 0.4-0.8, sometimes a bit higher on expensive searches. Everything fine. From time to time, however, the load increases up to 120 (120.0, not 12.0 ;) ). During these periods it is hard even to connect via SSH, but when I do reach the server and use top/htop/iotop to see what is happening, I cannot identify any process causing high CPU load. iotop reports a current read/write rate of approx. 70 kB/s, which is about the same as powered off, I think. Memory usage peaks at ~12 GB of 16 GB, so swap remains empty.

    Now the odd part (at least for me): after waiting some minutes (since I always get a bit panicked when this happens, it feels like 5 minutes, but I suppose it is more like 20-30 minutes), the server is back to normal and everything continues as usual.

    Another odd fact: when I run hdparm -tT /dev/sda, I get an answer like:

        /dev/sda:
        Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec
        Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec

    When I run the same command while the server is "frozen", the answer looks like:

        /dev/sda:                                                          <- takes about 5 minutes until this line appears
        Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec      <- 5 more minutes
        Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec <- another 5 minutes

    So the values are the same, but the reported durations are completely wrong. Prefixing the command with time confirms that ~15 minutes were actually used. I searched dmesg and /var/log/[messages|syslog] and found nothing. /var/log/errors, however, tells me, multiple times:

        Jul 4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds.
        Jul 4 20:28:30 localhost kernel: [19080.671419] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    Now, that message tells me that a php5-fpm task was blocked (or was blocking?), but not whether that is the cause or just one of the results of the "freeze". Anyone?

    To cut a long story short, I don't even know where to start analyzing. So if you can give me any advice by looking at the following specs and configs, or ask me to provide more information, I'd be glad.

    Specs:

        6-core AMD Phenom(tm) II X6 1055T processor
        16 GB RAM
        2x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3, in software RAID (I suppose)

    Services (per service --status-all, those with [ + ]):

        nginx webserver 1.0.14
        MySQL server 5.1.63
        Ruby on Rails 2.3.11 (passenger-nginx-module)
        php5-fpm 5.3.6-13ubuntu3.7
        SSH
        ido2db

    Further services: default crontab + nightly backup; syslog-ng.

    The website consists of two subdomains, forum. and www., where forum is a phpBB 3.x PHP board and www a Ruby on Rails 2.3.11 application (a portal). Mini-note: sometimes I notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal. Both share the same database, but the portal uses it read-only.
    The webserver is nginx, using the Phusion Passenger module to communicate with the Ruby application. For the forum, it communicates with php5-fpm via a socket. Relevant nginx configuration parts (with my comments/questions starting with ;):

        ; in case of freeze due to too high filesystem activity, maybe add a limit?
        #worker_rlimit_nofile 50000;
        user www-data;
        ; 6 cores, so I read 6 fits. maybe already wrong?
        worker_processes 6;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11;
            passenger_ruby /usr/bin/ruby1.8;
            ; the forum once featured a chat, which was working w/o websockets,
            ; so it was a hell of pull requests (deactivated now, freeze still happening)
            keepalive_timeout 65;
            keepalive_requests 50;
            gzip on;
            server {
                listen 80;
                server_name www.domain.tld;
                root /var/www/domain/rails/public;
                passenger_enabled on;
            }
            server {
                listen 80;
                server_name forum.domain.tld;
                location / {
                    root /var/www/domain/forum;
                    index index.php;
                }
                ; static stuff to be handled by nginx
                location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
                    access_log off;
                    expires 30d;
                    root /var/www/domain/forum/;
                }
                ; now the php magic, note the "backend" fcgi_pass
                location ~ .php$ {
                    fastcgi_split_path_info ^(.+\.php)(.*)$;
                    fastcgi_pass backend;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name;
                    include fastcgi_params;
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_intercept_errors on;
                    fastcgi_ignore_client_abort off;
                    fastcgi_connect_timeout 60;
                    fastcgi_send_timeout 180;
                    fastcgi_read_timeout 180;
                    fastcgi_buffer_size 128k;
                    fastcgi_buffers 256 16k;
                    fastcgi_busy_buffers_size 256k;
                    fastcgi_temp_file_write_size 256k;
                    fastcgi_max_temp_file_size 0;
                }
                location ~ /\.ht {
                    deny all;
                }
            }
            ; the php5-fpm socket. I read that /dev/shm/ would be the fastest place for this. bad idea in general?
            upstream backend {
                server unix:/dev/shm/phpfpm;
            }
            ...
        }

    php5-fpm settings (I raised these values higher and higher in response to php5-fpm error log messages; the freeze problem was there before as well):

        listen = /dev/shm/phpfpm
        user = www-data
        group = www-data
        pm = dynamic
        ; holy, 4000! well, shrinking this value down to earth level gave me
        ; 100s of 502 Bad Gateway errors. these values were quite stable.
        ; since there are only max. 520 users online, I don't get why I would need
        ; as many children as configured here. due to keep-alive, maybe?
        ; asking questions is easier for me, since restarting the server will make
        ; my community members angry ;)
        pm.max_children = 4000
        pm.start_servers = 100
        pm.min_spare_servers = 50
        pm.max_spare_servers = 150
        pm.max_requests = 10
        pm.status_path = /status
        ping.path = /ping
        ping.response = pong
        slowlog = log/$pool.log.slow
        ; should I use rlimit?
        ;rlimit_files = 1024
        chdir = /

    mysql/my.cnf:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0
        [mysqld]
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = 127.0.0.1
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        myisam-recover = BACKUP
        ; high number, but less gives some phpBB errors.
        max_connections = 450
        table_cache = 512
        ; I read twice the CPU cores; bad?
        thread_concurrency = 12
        join_buffer_size = 2084K
        concurrent_insert = 3
        query_cache_limit = 64M
        query_cache_size = 512M
        query_cache_type = 1
        log_error = /var/log/mysql/error.log
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 2
        expire_logs_days = 10
        max_binlog_size = 100M
        low_priority_updates = 1
        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M
        [isamchk]
        key_buffer = 16M
        !includedir /etc/mysql/conf.d/

    I used smartctl already; the HDDs seem to be fine. /proc/mdstat reports:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md3 : active raid1 sda3[1]
              1459264192 blocks [2/1] [_U]
        md1 : active raid1 sda1[0]
              3911680 blocks [2/1] [U_]
        unused devices: <none>

    ulimit -a:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 127727
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 127727
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    I quoted some questions in my configuration files above; these are not (intentionally) directly problem-related, but it would be nice for me to know whether they are indeed questionable or done right.

    One additional fact: my MySQL database is 12 GB in size. I don't know if that matters, but mytop sometimes shows me insert queries running 4-5 seconds, some 20-30 seconds. It is just a feeling that I am unable to prove (because I don't know how), but when I disable the database, the freeze seems not to happen. Example: I created a dummy Rails application to watch the development log. The app made some SQL queries, reads and inserts. Quite often, the log looked like this:

        DbTest Load (0.3ms)   SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1
        SQL (0.1ms)   BEGIN
        DbTest Update (0.3ms)   UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722
        <- now the log stands still for 5-60 seconds
        SQL (49.1ms)   COMMIT
        <- the SQL update time in the log does not include the freeze time
        Rendering test/index
        Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test]

    The bad part is: this mini-freeze, too, only happens from time to time. Note: meanwhile, I cannot even upload files via scp.

    I currently feel like I am running from bad to worse and back while googling for my server problem, due to an immense lack of knowledge regarding server configuration. It still makes me wonder why these problems appear at all, since 250 users at a time is not such a high number, right? So my questions:

        What's wrong, and how do I fix it? ;) or:
        What information can I provide to make the situation clearer?
        Can you point at some critically bad configuration line that I should read up on in the documentation?
        Are there any tools I can run to find possible bottlenecks?
        Any further advice? (next to "pay someone who knows what he does"; it's a private project, and the server costs enough already :))

    Thanks for your time and help. Best regards, Daniel

    P.S.: I renamed the config files to domain.tld since I don't want to add any more load to the server until it's fixed; might be an exaggerated thought. P.P.S.: if I asked a complete duplicate question, sorry; my search results seemed to be quite specific in their own way.
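
    The hung-task warnings and the stalled hdparm both point at I/O wait rather than CPU, so catching what is blocked during a freeze is the useful next step. Below is a minimal watcher sketch, assuming bash; the threshold and log path are placeholders. It logs processes in uninterruptible sleep (state D, typically blocked on disk I/O) whenever the one-minute load average spikes:

        #!/bin/bash
        # Log D-state (uninterruptible sleep) processes whenever load explodes.
        THRESHOLD=20                      # assumed trigger value; tune to taste
        LOG=/var/log/freeze-watch.log     # placeholder path
        while sleep 10; do
            load=$(cut -d' ' -f1 /proc/loadavg)
            if [ "${load%.*}" -ge "$THRESHOLD" ]; then
                {
                    date
                    # pid, state, kernel wait channel, and command of blocked tasks
                    ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'
                } >> "$LOG"
            fi
        done

    Run it in screen (or from init) before the next freeze; the wchan column usually names the kernel function the tasks are stuck in, which narrows things down to the filesystem, the RAID layer, or a single disk.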


  • How do I find out the MAC address Firefox submits for geolocation?

    - by Jim McKeeth
    I moved, but have the same router and cable box. Now when I visit Google Maps, it still shows my old location. I went to the Skyhook Wireless update page to associate my MAC address with my new location. The process is: you put in your MAC address, it shows you the current location, and then you adjust that location and submit it. When I entered the MAC address from the status page on my router, Skyhook reported that location as being in Pakistan, which isn't even the correct continent (for either my old or my new address). I tried every other MAC address I could come up with (the cable router's, the wireless router's other MAC address, my PC's, etc.), and none of them reported any location on the Skyhook page. So I am guessing there is another MAC address, or some other bit of information, that Firefox uses when it reports my current location to Google Maps. Now that you have the back story: how do I find the MAC address, or whatever information it is, that Firefox (or other browsers) use to determine my geolocation? Everything I have read online is rather vague. The next option I am considering is hooking a logging proxy onto Firefox and seeing what data it sends, but I'd rather find an easier method. Related: How do I update the geolocation of my house?
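
    For what it's worth, browser geolocation is generally keyed to the access point's BSSID (the MAC address of the Wi-Fi radio itself), which often differs from both the LAN and WAN MAC addresses printed on a router's status page, so that may be the identifier Skyhook expects. A quick sketch for seeing the BSSID your machine is currently associated with, on Windows:

        :: Show the BSSID of the access point this machine is connected to
        netsh wlan show interfaces | findstr /i "BSSID"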


  • How can I calculate the power consumption of my PC in watts?

    - by Jitendra vyas
    How can I calculate the power consumption of my PC in watts, to prove to my landlord (I live on rent) that my PC doesn't consume much power? He blames me for the huge power bills, even though he too uses a fridge, A/C, etc., and his son watches TV all the time. We share one power meter and split the bill 50/50, but he says I use the PC all the time and even keep it on at night for downloading. I just want to calculate the power consumption of my PC, then work out the monthly expense at my city's per-unit electricity price. My system:

        Windows: Microsoft Windows XP Professional 5.1.2600 Service Pack 3
        Memory (RAM): 960 MB
        CPU: AMD Sempron(tm) Processor 2500+
        CPU speed: 1399.0 MHz
        Sound card: Vinyl AC'97 Audio (WAVE)
        Display adapters: VIA/S3G UniChrome Pro IGP | NetMeeting driver | RDPDD Chained DD
        Monitor: 1x 17-inch LG LCD
        Screen resolution: 1280 x 768, 32-bit
        Network: network present
        Network adapters: Bluetooth Device (Personal Area Network) #2 | WAN (PPP/SLIP) Interface
        CD/DVD drives: I: ELBY CLONEDRIVE
        COM ports: COM1 | COM2 | COM7 | COM8 | COM9 | COM10
        LPT ports: LPT1
        Mouse: 3-button wheel mouse present
        Hard disks: C: 29.3 GB | D: 29.3 GB | E: 97.7 GB | F: 97.7 GB | G: 211.9 GB
        USB controllers: 5 host controllers
        FireWire (1394): 1 host controller
        Manufacturer: Phoenix Technologies, LTD
        Product make: MS-7142
        AC power status: online
        BIOS info: AT/AT COMPATIBLE | 01/18/06 | VIAK8M - 42302e31
        Motherboard: MICRO-STAR INTERNATIONAL CO., LTD MS-7142
        Modem: ZTE USB Modem FFFE CDMA #2
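
    The dependable way to answer this is a plug-in power meter (a Kill A Watt or similar) between the PC and the wall, but a rough estimate is simple arithmetic: watts x hours / 1000 = kWh, then kWh x tariff = cost. A sketch with assumed figures; a system of this class with a 17-inch LCD typically draws somewhere around 100-150 W, so substitute your measured wattage and your city's per-unit rate:

        # All figures are assumptions - replace with your own measurements.
        WATTS=120          # estimated average draw of PC + LCD
        HOURS=24           # on day and night for downloads
        RATE=5             # assumed price per kWh (unit) in local currency
        KWH=$(( WATTS * HOURS * 30 / 1000 ))           # ~86 kWh per month
        echo "${KWH} kWh/month, about $(( KWH * RATE )) per month"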


  • Updated to Lion, cannot boot into Boot Camp partitions, but can use them in Parallels

    - by Jon Jester
    Using Snow Leopard, I had Boot Camp partitions for both XP and Windows 7, each on a separately partitioned hard drive. Both were accessible through Parallels 7 or by booting directly via Boot Camp. After I upgraded to Lion, both are still accessible through Parallels, but I have not been able to boot directly into either. Unfortunately, it is important to me to be able to boot into at least the Windows 7 partition. I have tried virtually everything I can find online; I have seen similar issues, but nothing where the partitions were usable virtually yet not directly. Nothing works: rEFIt; correcting the master boot records in Windows from the command line; wiping the Windows 7 partition clean and reinstalling Windows 7 several times, first with Boot Camp 4 drivers and then with Boot Camp 3 drivers. I have also tried resizing the Boot Camp partitions. When booting directly into a Boot Camp partition, it goes all the way to showing the desktop before it fails with a Windows error screen. I can see all the disks and their partitions in both the OS X Disk Utility and the Windows installer utility.


  • Staying anonymous while hosting your site?

    - by jamesCroft
    I don't mean anonymous surfing; I mean hosting, having your own domain, and such. The reason is that my blog is about religious/political topics, which may cause me trouble in the future. This is the domain I am working on: www.james-croft.com. I know that with a WHOIS search my name can come up: http://www.networksolutions.com/whois-search/james-croft.com. The solution to that, as far as I understand, is to buy a privacy package from the domain registrar; in my case it is Lucky Register: http://i.stack.imgur.com/uvOdc.png. Hosting is also a concern. I use the same hosting service for multiple websites. My question is this: can my hosting be traced and used to identify me? Also: are there other ways of finding out my identity, through either Google AdSense or the Amazon affiliate program? I couldn't find any relevant articles online. If there is anything else that is relevant, please let me know. I appreciate any response.
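
    One way to audit what is already publicly visible, before and after buying the privacy package, is to query WHOIS for both the domain and the hosting IP; the IP's WHOIS record names the hosting provider rather than you, unless you run your own netblock. A sketch using standard CLI tools:

        # What the domain registration exposes
        whois james-croft.com
        # What the hosting IP exposes (resolve the site, then WHOIS the address)
        ip=$(dig +short www.james-croft.com | tail -n1)
        whois "$ip"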


  • How can the little guys effectively learn and use Puppet?

    - by drumfire
    Six months ago, in our not-for-profit project, we decided to start migrating our system management to a Puppet-controlled environment, because we expect our number of servers to grow substantially between now and a year from now. Since the decision was made, our IT guys have become a bit too annoyed a bit too often. Their biggest objections are:

        "We're not programmers, we're sysadmins."
        Modules are available online, but many differ from one another; wheels are reinvented too often. How do you decide which one fits the bill?
        Code in our repo is not transparent enough; to find out how something works, they have to recurse through manifests and modules they might even have written themselves a while ago.
        One new daemon requires writing a new module, and conventions have to be similar to those of other modules: a difficult process when the attitude is "let's just run it and see how it works".
        Tons of hardly known 'extensions' in community modules: 'trocla', 'augeas', 'hiera'... how can our sysadmins keep track?

    I can see why a large organisation would dispatch its sysadmins to Puppet courses to become Puppet masters. But how would smaller players get to learn Puppet to a professional level if they do not go to courses and basically learn it via their browser and editor?
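
    For what it's worth, one low-friction practice loop that needs no puppetmaster at all is applying small standalone manifests locally with puppet apply. A sketch, assuming Puppet is installed; the manifest content is purely illustrative:

        # Write a one-file manifest and apply it locally - instant feedback.
        cat > /tmp/ntp.pp <<'EOF'
        package { 'ntp': ensure => installed }
        service { 'ntp':
          ensure  => running,
          enable  => true,
          require => Package['ntp'],
        }
        EOF
        puppet apply --noop /tmp/ntp.pp   # --noop only reports what would change
        puppet apply /tmp/ntp.pp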


  • Solaris 10 zlogin logs in, logs out immediately

    - by Spelevink
    On a SPARC V445 running Solaris 10 9/10, I had to rebuild rpool, and I reattached the three existing mirrored zpools on the other existing disks, with their ZFS filesystems and non-global zones intact. The zones had been configured with zonecfg -z ZONENAME create, etc., and are now online: I used zoneadm -z ZONENAME attach -U and then simply booted them from the installed state. But I cannot zlogin to any of the zones except one: it shows that I am logged in, then a blank line, then I am immediately logged out again. When I try to log in using zlogin -C ZONENAME, I cannot; the error message is:

        May 15 15:43:46 <hostname> login: open_module: stat(/usr/lib/security/pam_mkhomedir.so.1) failed: no such file or directory.
        May 15 15:43:46 <hostname> login: load_modules: cannot open module /usr/lib/security/pam_mkhomedir.so.1

    But /usr/lib/security/pam_mkhomedir.so.1 does not exist on my other servers either, and those zones are accessible using zlogin. I can only zlogin to the zones with zlogin -S ZONENAME. What should I do next? Thank you.
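
    Since zlogin -S works, one thing worth checking from that safe-mode login is whether the zone's PAM configuration references pam_mkhomedir even though the module file is absent; commenting out such a line (or installing the module) is the usual fix. A sketch; back up /etc/pam.conf before editing:

        # From the global zone, enter the affected zone in safe mode
        zlogin -S ZONENAME
        # Inside the zone: find PAM entries that reference pam_mkhomedir
        grep -n pam_mkhomedir /etc/pam.conf /etc/pam.d/* 2>/dev/null
        # If a line references /usr/lib/security/pam_mkhomedir.so.1 and that
        # file does not exist, comment the line out and retry a normal zlogin.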


  • How do I effectively use WinSCP on my GoDaddy Dedicated Hosting

    - by Scott
    After being told that virtual private servers would not fit the scope of my project, I have timidly entered the world of dedicated hosting. Unfortunately, this is forcing me to learn the basics of being a Linux server admin. GoDaddy has a master account for the server; when you use SSH, they want you to use "su" to switch to the root user. So far, I have been able to do everything I needed via the command line as this root user. However, now I need to upload files to my server, and I'm used to using WinSCP for that. I can use my general server account to view the files, but when I try to drag or create files, it says that I cannot because I do not have permission to do so. I have researched the WinSCP documentation, and it seems that this "su" step is beyond the scope of the program. How do I grant myself access to upload these files using SSH? Should I create a user with the proper permissions? I'm happy to do this, but so far I have not been able to make sense of what I have found online. I'm going to try to move forward, but any help and/or insight is appreciated.
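
    Since WinSCP cannot perform an interactive su, a common approach is to give a regular account write access to the upload directory via a group, and use that account in WinSCP. A sketch, run as root over SSH; the user, group, and path names are placeholders:

        # Create a group, add your login user to it, and grant group write
        # access to the directory you upload into
        groupadd webdev
        usermod -aG webdev scott            # 'scott' is a placeholder username
        chgrp -R webdev /var/www/html       # assumed upload/web root
        chmod -R g+rwX /var/www/html        # rw on files, rwx on directories
        # The new group membership takes effect at the next login

    Alternatively, if the server has passwordless sudo configured, WinSCP's session settings allow overriding the remote SFTP server command with something like sudo /usr/lib/openssh/sftp-server, though that depends on the server's sudo policy.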


  • Loading the 'pktgen' module on Ubuntu Server

    - by StackedCrooked
    I would like to enable and use the pktgen module on Ubuntu Server. I have enabled the module by adding a line containing 'pktgen' to the /etc/modules file. After rebooting, the module seems to be loaded successfully, because the directory /proc/net/pktgen exists. However, when trying to run the first sample, I get these errors:

        root@ubuntu:~# bash ./pktgen.conf-1-1
        Removing all devices
        Adding eth4
        Setting max_before_softirq 10000
        Configuring /proc/net/pktgen/eth4
        ./pktgen.conf-1-1: line 9: /proc/net/pktgen/eth4: No such file or directory
        cat: /proc/net/pktgen/eth4: No such file or directory
        cat: /proc/net/pktgen/eth4: No such file or directory
        (the same three errors repeat five more times)
        Running... ctrl^C to stop
        Done

    It turns out the script is simply unable to write a file to the /proc/net/pktgen directory. When I try this manually, it fails as well:

        root@ubuntu:~# cd /proc/net/pktgen/
        root@ubuntu:/proc/net/pktgen# touch eth4
        touch: cannot touch `eth4': No such file or directory

    Can anyone help me make it work? I'm using Ubuntu kernel 2.6.32-21-server.

    Fixed: I apologize for not keeping this post up to date. I was able to fix it. If I remember correctly, the cause of the error was that eth4 did not exist or was not up. Anyway, it is fixed now.
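
    For reference, the per-device files under /proc/net/pktgen only appear after the interface has been registered with one of pktgen's kernel threads; they cannot be created with touch. A minimal sketch (the interface name follows the question; pick the kpktgend thread for the CPU you want to generate from):

        modprobe pktgen                     # load the module if not already loaded
        ip link set eth4 up                 # the device must exist and be up
        # Register eth4 with pktgen thread 0; this creates /proc/net/pktgen/eth4
        echo "rem_device_all" > /proc/net/pktgen/kpktgend_0
        echo "add_device eth4" > /proc/net/pktgen/kpktgend_0
        ls /proc/net/pktgen                 # eth4 should now be listed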


  • HP StorageWorks 448 tape drive input/output error with Ubuntu

    - by Dan D
    I'm trying to set up a backup to tape of a machine using flexbackup. However, any attempt to write to the tape drive (via either flexbackup or just tar) results in "/dev/st0: Input/output error". The machine seems to recognise the drive (HP StorageWorks Ultrium 448) and that there's a tape in it, and "mt status" seems to work; "mt -f /dev/st0 rewind" and "erase" throw no errors:

        root@stor001:/# mt status
        SCSI 2 tape drive:
        File number=0, block number=0, partition=0.
        Tape block size 0 bytes. Density code 0x42 (LTO-2).
        Soft error count since last status=0
        General status bits on (41010000):
        BOT ONLINE IM_REP_EN

        root@stor001:/# cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 00 Lun: 00
          Vendor: HL-DT-ST Model: DVDRAM GSA-4084N Rev: KS02
          Type: CD-ROM ANSI SCSI revision: 05
        Host: scsi2 Channel: 00 Id: 03 Lun: 00
          Vendor: HP Model: Ultrium 2-SCSI Rev: S65D
          Type: Sequential-Access ANSI SCSI revision: 03

    "tell" does, however, fail:

        root@stor001:/# mt -f /dev/st0 tell
        /dev/st0: Input/output error

    Based on a forum post I found, I tried:

        root@stor001:/# dd if=/dev/zero of=/dev/nst0 bs=1024 count=10
        10+0 records in
        10+0 records out
        10240 bytes (10 kB) copied, 5.0815 s, 2.0 kB/s

    which gave the person on the forum an error but seems to work for me. If anyone has any suggestions, I'm all ears...
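
    Two notes that may help: "mt tell" relies on a position ioctl that some drives simply do not support, so that particular I/O error may be harmless; and a classic cause of write errors that rewind/erase never trigger is a block-size mismatch between the application and the drive (the status above shows "Tape block size 0 bytes", i.e. variable mode). A hedged test sketch, using the non-rewinding device as in the question:

        # Force variable block mode and test a small tar write and read-back
        mt -f /dev/nst0 rewind
        mt -f /dev/nst0 setblk 0          # 0 = variable block size
        tar -b 128 -cvf /dev/nst0 /etc    # 128 x 512 bytes = 64 KiB records
        mt -f /dev/nst0 rewind
        tar -b 128 -tvf /dev/nst0         # list the archive back from tape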


  • Sending Adobe PDF attachments from Adobe Reader (in Outlook 2003) takes too long

    - by White Island
    I have a customer who is using Outlook 2003 (Microsoft Online Services) and Adobe Reader 9+. When they send a PDF from Adobe Reader to Outlook (via Adobe's "Send as attachment to e-mail" feature), Outlook freezes for 30 seconds to 5 minutes before the new e-mail pops up with the PDF attachment. I'm fairly sure the issue is on the Outlook side, as I've tried Adobe Reader 8 and Foxit Reader with the same results (Windows XP vs. Windows 7 doesn't seem to make a difference either). I tried Outlook in safe mode on the first (Win7) machine I worked on, and attaching was a lot faster; but when I tried to replicate the results on other machines, one wouldn't go into safe mode and the other didn't seem to show a difference. In an effort to fix the problem in Outlook's normal mode, I tried disabling all add-ins, the COM add-in (Office Communicator is the only one), the reading pane, and Word 2003 as the e-mail editor, but none of these seemed to address the issue. Does anyone have any other ideas? I need to get this resolved as soon as possible, and it doesn't seem practical to make them run in safe mode. :P


  • Cannot start listening on a certain TCP port, but there's nothing currently listening on it

    - by John Rasch
    I have a Windows service that uses a WCF service host to listen for connections on TCP port 61000. When I try to start the service, I get the error:

        Service cannot be started. System.ServiceModel.AddressAlreadyInUseException: HTTP could not register URL http://+:61000/ because TCP port 61000 is being used by another application. ---> System.Net.HttpListenerException: The process cannot access the file because it is being used by another process
        at System.Net.HttpListener.AddAll()
        at System.Net.HttpListener.Start()
        at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
        --- End of inner exception stack trace ---
        at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
        at System.ServiceModel.Channels.TransportManager.Open(TransportChannelListener channelListener)
        at System.ServiceModel.Channels.TransportManagerContainer.Open(SelectTransportManagersCallback selectTransportManagerCallback)
        at System.ServiceModel.Channels.HttpChannelListener.OnOpen(TimeSpan timeout)
        at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
        at System.ServiceModel.Dispatcher.ChannelDispatcher.OnOpen(TimeSpan timeout)
        at...

    A quick netstat -a shows there is nothing listening on port 61000. I've also found several posts online that mention reserving namespaces using netsh, but the account the service runs under has administrator privileges, so that shouldn't be necessary. Any other ideas as to why I'm getting this message? The service is running on 64-bit Windows Server 2008 R2 Standard.
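
    One caveat: the listener here is HTTP.sys (System.Net.HttpListener), whose URL registrations are managed by netsh rather than shown as an ordinary socket owner, so netstat can be misleading. A sketch of inspection commands from an elevated prompt; the delete at the end is an example, not necessarily the fix:

        :: Show configured URL ACL reservations - look for anything on :61000
        netsh http show urlacl
        :: Show what HTTP.sys currently has registered
        netsh http show servicestate
        :: If a stale conflicting reservation turns up, it can be removed, e.g.:
        :: netsh http delete urlacl url=http://+:61000/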


  • MS SQL Server slows down over time?

    - by Dave Holland
    Have any of you experienced the following, and have you found a solution? A large part of our website's back end is MS SQL Server 2005. Every week or two, the site begins running slower, and I see queries taking longer and longer to complete in SQL. I have a query that I like to use:

        USE master
        select text,wait_time,blocking_session_id AS "Block", percent_complete, *
        from sys.dm_exec_requests CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
        order by start_time asc

    It is fairly useful: it gives a snapshot of everything running at that moment against your SQL Server. What's nice is that even if your CPU is pegged at 100% for some reason and Activity Monitor refuses to load (I'm sure some of you have been there), this query still returns, and you can see which query is killing your DB. When I run this, or Activity Monitor, during the times that SQL has begun to slow down, I don't see any specific queries causing the issue; they are ALL running slower, across the board. If I restart the MS SQL service, everything is fine; it speeds right up for a week or two, until it happens again. Nothing that I can think of has changed, but this just started a few months ago... Ideas?

    Added: Please note that when this database slowdown happens, it doesn't matter whether we are getting 100K page views an hour (busier time of day) or 10K page views an hour (slow time); the queries all take longer than normal to complete. The server isn't really under stress: the CPU isn't high, and disk usage doesn't seem to be out of control. It feels like index fragmentation or something of the sort, but that doesn't seem to be the case. As for pasting the results of the query above, I really can't do that: it lists the login of the user performing the task, the entire query, and so on, and I'd rather not hand out the names of my databases, tables, columns and logins online :). I can tell you that the queries running at that time are normal, standard queries for our site that run all the time, nothing out of the norm.
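
    Since a service restart clears the slowdown, one cheap differential test is to clear only the plan cache, which a restart also does as a side effect; if performance snaps back, that points toward plan-cache/parameter-sniffing trouble rather than fragmentation. A sketch using sqlcmd; the server and database names are placeholders, and note that FREEPROCCACHE briefly costs CPU while plans recompile:

        # Clear the plan cache only - mimics the relevant side effect of a restart
        sqlcmd -S myserver -E -Q "DBCC FREEPROCCACHE"
        # For comparison, list the most fragmented indexes in the database
        sqlcmd -S myserver -E -d MyDatabase -Q "SELECT TOP 20 OBJECT_NAME(ips.object_id) AS table_name, ips.index_id, ips.avg_fragmentation_in_percent FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips ORDER BY ips.avg_fragmentation_in_percent DESC"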


  • Networking "chokes" on Windows 7 64 bit

    - by Rohit Nair
    I've been having this problem for some months now and have been unable to figure out a solution, or even the cause. At random points throughout the day, my internet connectivity "freezes". I don't get disconnected from my local wireless network, and my router doesn't get disconnected from the world; however, for some reason, my computer stops receiving packets. If I'm playing an MMO (World of Warcraft in this case, but it has happened with EVE Online as well), all activity just freezes. If I try to browse, Opera, Firefox and IE all stall at "Waiting for google.com..." or whatever the hostname may be. Inspection with a packet sniffer seems to confirm that there are no incoming packets. Here's the interesting part: disconnecting from my wireless network and reconnecting fixes the issue. Obviously this led me to conclude it was a problem with my router or wireless card. However, I have tweaked all the router settings I could think of, including things like QoS, AP isolation, etc., with no change. My wireless card doesn't really have that many options, and I have uninstalled and reinstalled the drivers a few times without any change. Windows Firewall on/off makes no difference. Does anyone have any suggestions for debugging this? It's becoming an annoyance.


  • WRTU54G-TM router with 3rd-party firmware: can custom firmware include stock binary portions?

    - by dlamblin
    I've been doing a lot of reading online about the Linksys WRTU54G-TM router model that I now own. It seems getting custom firmware onto it is not a problem. But no one is talking about retaining the VoIP features (yet); so far they're all just disappointed that it's not a SIP machine and uses GSM over IPsec. Personally, I don't care about using it with carriers other than T-Mobile. If I take the original firmware, shouldn't I be able to extract it and its SquashFS image, and then move all of the T-Mobile-specific binaries that enable the calling features over to a custom firmware installation (maybe OpenWrt)? You might ask why, and the reason is that if I do this, I could retain my calling features, which I do want, and also ssh to the router and use it to run additional software, as any OpenWrt router could do. Does anyone know whether this can be done, and how the firmware's binaries could be gotten at and installed correctly? Update: I have found someone working on third-party WRTU54G-TM firmware. I am still interested in the second part of my question: can't the stock firmware image be pulled apart so that the closed-source binary kernel modules, if any, can be moved into another, more flexible custom firmware?
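
    On the second part of the question: stock images for consumer routers can usually be unpacked on a Linux box to get at the kernel modules and binaries inside. A sketch with the usual third-party tools (binwalk and squashfs-tools; the firmware filename is a placeholder, and whether T-Mobile's VoIP binaries would actually run against a different firmware's kernel and libc is a separate problem):

        # Identify and carve out embedded filesystems from the stock image
        binwalk -e WRTU54G-TM_firmware.bin        # extracts to a *.extracted directory
        # If a SquashFS image is found, unpack it (exact filename depends on extraction)
        unsquashfs -d rootfs path/to/image.squashfs
        ls rootfs/lib/modules rootfs/usr/bin      # hunt for the VoIP-related binaries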


  • VRF Internet gateway: multiple external IPs, one internal IP, to AWS

    - by user223903
    I am trying to set up VRF for the first time, and it's not working for me even though I keep reading everything online. The IPs are different from real life. I have an Internet connection, and in the current setup below I can ping my router at 195.45.73.22. I have a block of IP addresses, 195.45.121.0/27. I want to set up multiple VPNs to AWS, so I need multiple external IPs, hence the block of addresses. I have set up the 2nd and 3rd IP addresses but cannot ping them from outside. Any help would be appreciated. Bryan

        ip source-route
        !
        ip vrf Internet
         rd 1:1
         route-target export 1:1
         route-target import 1:1
        ip vrf AWSSydney1
         rd 2:2
         route-target export 2:2
         route-target import 2:2
         route-target import 1:1
        ip vrf AWSSydney2
         rd 3:3
         route-target export 3:3
         route-target import 3:3
         route-target import 1:1
        ip cef
        no ip domain lookup
        no ipv6 cef
        multilink bundle-name authenticated
        interface FastEthernet0/0
         description Vocus Internet
         no ip address
         speed 100
         full-duplex
        interface FastEthernet0/0.1
         encapsulation dot1Q 1 native
         ip address 195.45.73.22 255.255.255.252
        interface FastEthernet0/0.2
         encapsulation dot1Q 2
         ip vrf forwarding AWSSydney1
         ip address 195.45.121.1 255.255.255.224
        interface FastEthernet0/0.3
         encapsulation dot1Q 3
         ip vrf forwarding AWSSydney2
         ip address 195.45.121.2 255.255.255.224
        interface FastEthernet0/1
         description LAN_SIDE
         ip address 10.0.0.5 255.255.255.0
         speed 100
         full-duplex
         no mop enabled
        ip forward-protocol nd
        ip route 0.0.0.0 0.0.0.0 195.45.73.21
        ip route vrf Internet 0.0.0.0 0.0.0.0 195.45.73.21


  • Outlook slow to open attachments

    - by Alistair McMillan
    When a colleague tries to open attachments in her email (Outlook 2003 talking to an Exchange 2007 server), they take ages to open. The files are relatively small, all less than 1 MB. We've tried creating a new Windows profile for the user and new Outlook profiles, but that hasn't made any difference. We've also tried accessing her account from someone else's PC, and the attachments open immediately there. The only thing that might provide a clue is that Process Monitor shows Outlook on her PC trying to write the file to a folder within the user's "Temporary Internet Files" folder with FAST I/O DISALLOWED errors. I can't find much useful information on that message online, though. What causes FAST I/O DISALLOWED errors? And would that make opening attachments so incredibly slow that opening a <1 MB file can take a matter of minutes? UPDATE: I've discovered that this isn't just an issue with Outlook; other files being accessed over the network show the same FAST I/O DISALLOWED errors in Process Monitor. The problem is just more noticeable with Outlook, because although other applications take a while to open files, it isn't a matter of minutes.


  • Qmail Patching Makes me Nervous

    - by JM4
    We have a system running CentOS 5 with Plesk 8.6 and qmail. Our primary domain is hosted through Media Temple. When Plesk and qmail are hosted on a single dedicated virtual server, qmail reads the primary server IP and domain and reports them when sending emails from the system. Our pages are written in PHP, so we use the mail() function. While our email goes out to everybody, several enterprise email domains reject our messages because they show an originating IP and domain (our primary server's) different from the domain in the 'From' address. This is not modifiable. Every domain we own does, of course, have its own IP underneath our primary server IP. I have seen several places online that provide a patch, specifically one that allows domain binding:

        "DomainBindings -- For servers that host multiple domains or have multiple IP addresses assigned to them, it is sometimes useful (or important) to have qmail use a specific IP address for its outgoing mail. By default, qmail uses whatever address the OS chooses for all outbound connections. With this patch, you can specify which address to use. It uses a control file similar to smtproutes to specify the outbound IP address to use, based on the sender's domain (local copy) (pyropus.ca)" [Qmail Link]

    First off, I do not have netqmail installed, so I'll need to find another source. But I am also completely unfamiliar with applying patches to qmail. Will I lose email service if I patch? Is it a simple apply-and-use process? Will my existing email accounts and data be preserved after the patch? I am very, very new to Unix/Linux, so this does make me a bit nervous, but I am the only person who can make the change, and it is one our company HAS to have. Any ideas?
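
    In general terms, qmail patches are applied to the source tree and the binaries are rebuilt and reinstalled, along the lines of the sketch below. This is hypothetical for a Plesk box, where qmail comes from Plesk's own packages; rebuilding from vanilla source can diverge from what Plesk expects, so test on a non-production system first. Queued and stored mail is data on disk and is not touched by swapping binaries, but a botched build will interrupt mail service until restored:

        # Generic qmail source-patch workflow (names and paths are placeholders)
        cd /usr/local/src
        tar xzf netqmail-1.06.tar.gz && cd netqmail-1.06
        patch -p1 < ../domainbindings.patch   # may need -p0 depending on the patch
        make setup check                      # builds and installs the binaries
        # Then create the control file the patch reads; see the patch's own
        # documentation for the exact file name and format.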


  • How could Google Latitude find my exact PC location with no GPS or public wifi?

    - by Mike
    I found a similar question here, but I still don't get it. You see, I live in a small town, and every time I check my IP location via online services or speed-test websites, my location appears to be my ISP's server location (which in my case is 250 miles away). But when I tried Google Latitude, it pinpointed my exact location to within less than 100 meters! I use Windows Vista and Google Chrome, and when I got the message that "Google is trying to locate you", I agreed, just to see what the result would be. It was scary, very scary! What I've come up with after reading the link above is that Google has a kind of extensive database of Wi-Fi locations. That would be understandable in the case of public and open Wi-Fi networks used by many people: some of them might be running applications that gather location data, and somehow this information ends up in giant Google databases. From those, Google could pinpoint a Wi-Fi location based on its MAC address, along with the bits of info gathered from various sources. The issue here is that my Wi-Fi is private; I don't even broadcast my Wi-Fi name. So how on earth did Google find my exact PC location? Please break the answer down in layman's terms as much as possible.


  • Gluster bricks are offline and errors in logs

    - by Roman Newaza
    I have substituted all the IP addresses with hostnames and renamed the configs (IP to hostname) in /var/lib/glusterd with my shell script. After that, I restarted the Gluster daemon and the volume. Then I checked whether all the peers are connected:

        root@GlusterNode1a:~# gluster peer status
        Number of Peers: 3

        Hostname: gluster-1b
        Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2
        State: Peer in Cluster (Connected)

        Hostname: gluster-2b
        Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9
        State: Peer in Cluster (Connected)

        Hostname: gluster-2a
        Uuid: 72405811-15a0-456b-86bb-1589058ff89b
        State: Peer in Cluster (Connected)

    I can see the mounted volume's size change on all the nodes when I execute df, so new data is coming in. But recently I noticed error messages in the application log:

        copy(/storage/152627/dat): failed to open stream: Structure needs cleaning
        readfile(/storage/1438227/dat): failed to open stream: Input/output error
        unlink(/storage/189457/23/dat): No such file or directory

    Finally, I found out that some bricks are offline:

        root@GlusterNode1a:~# gluster volume status
        Status of volume: storage
        Gluster process                                 Port    Online  Pid
        ------------------------------------------------------------------------------
        Brick gluster-1a:/storage/1a                    24009   Y       1326
        Brick gluster-1b:/storage/1b                    24009   N       N/A
        Brick gluster-2a:/storage/2a                    24009   N       N/A
        Brick gluster-2b:/storage/2b                    24009   N       N/A
        Brick gluster-1a:/storage/3a                    24011   Y       1332
        Brick gluster-1b:/storage/3b                    24011   N       N/A
        Brick gluster-2a:/storage/4a                    24011   N       N/A
        Brick gluster-2b:/storage/4b                    24011   N       N/A
        NFS Server on localhost                         38467   Y       24670
        Self-heal Daemon on localhost                   N/A     Y       24676
        NFS Server on gluster-2b                        38467   Y       4339
        Self-heal Daemon on gluster-2b                  N/A     Y       4345
        NFS Server on gluster-2a                        38467   Y       1392
        Self-heal Daemon on gluster-2a                  N/A     Y       1402
        NFS Server on gluster-1b                        38467   Y       2435
        Self-heal Daemon on gluster-1b                  N/A     Y       2441

    What can I do about this? I need to fix it. Note: CPU and network usage on all four nodes is about the same.
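
    When bricks show Online "N" while all peers are connected, the brick processes can usually be relaunched in place, and the per-brick logs on the owning nodes say why they died. A sketch, using the volume name from the output above; the log path pattern is the usual default but may differ by build:

        # Start any brick processes that are down, without touching healthy ones
        gluster volume start storage force
        gluster volume status storage
        # If a brick stays offline, inspect its log on the node that owns it, e.g.:
        tail -n 50 /var/log/glusterfs/bricks/storage-1b.log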


  • Ubuntu 12.04 on VMware Player loses network configuration

    - by d4ryl3
    I've been having this issue for two weeks now with my VMware Player-hosted Ubuntu 12.04, which I only use for my LAMP stack. I had no issues with it until about two weeks ago, when it started losing its network configuration almost every day (at least once). On boot it shows:

        Waiting for network configuration...
        Waiting up to 60 more seconds for network configuration...
        Booting system without full network configuration...

    Then, when I do ifconfig -a, it doesn't show an IP address, and the machine can't get online. The only resolutions I've found so far are either to reinstall VMware Tools or to run the VMware Player installer and choose Repair. This is frustrating, because even when the issue is resolved by one of those steps, the IP address changes, and then I have to update the remote configuration of my IDE (NetBeans) and my database manager. What could possibly cause this? Please help. Thank you. Additional details: I'm using a laptop with Windows 7, connected to the office Wi-Fi, which is unrestricted as far as I know. Thanks again.
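
    On 12.04, "Waiting for network configuration" typically means /etc/network/interfaces names an interface that no longer exists; VMware can hand the guest a NIC with a new MAC, and udev's persistent-net rules then rename the device (eth0 becomes eth1, and so on). A sketch of checks inside the guest, using the stock Ubuntu file locations:

        ip link show                                    # what the kernel actually sees
        cat /etc/udev/rules.d/70-persistent-net.rules   # which names udev pinned to which MACs
        cat /etc/network/interfaces                     # does it reference the same name?
        # Fix: either delete the stale udev rules file and reboot, or update
        # /etc/network/interfaces to the interface's new name.

    Pinning a static address in /etc/network/interfaces (or giving the VM a DHCP reservation) would also stop the IP from changing after each repair, so NetBeans and the database manager keep working.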


  • Dell Studio 17 - turning off suddenly

    - by studiohack
    I have a Dell Studio 17 laptop, a refurbished model almost 2 years old, currently running 32-bit Windows 7 Home Premium. It is a Vista-upgrade machine, upgraded via a clean install. A while back, while still running Vista, a problem started to develop: the laptop would suddenly just turn off. No warnings, messages, anything; it was as if I had pulled the battery and unplugged it from the wall, just like that. Over several months (or more) of this happening, I've observed a few things. First, it only seems to happen when I'm doing memory-intensive things, such as watching an online video full screen or running many applications in the background. Second, I can tell when it is about to "flip", as I've termed it, because the fan starts running; the computer gets really hot in places. Anyway, I'm fairly sure this is a hardware problem, because it still occurs even after the Vista-to-7 upgrade. Is that right, hardware vs. software? Is there anything I can do to fix this? Is it just a specific component, or what? What do you recommend? Thanks!!


  • How to use Windows mini-dump files?

    - by ekaj
    I have a Mini-ITX Intel DH61AG motherboard with an Intel i3 processor and 8 GB of 1600 MHz DDR3 RAM. This computer has been crashing fairly frequently. It is not an OS problem, as I have used Ubuntu (and had kernel panics), Windows 7, and Windows 8 (BSODs aren't going to keep me from tinkering =p). Each of these OSes has had problems, so I ran an HDD check, and I know it is not a heat issue, because I stress-tested the processor for a few days when I first put the computer together. When I ran memtest86+, however, I got an error, so I tested the chips individually, and both came back good; then I ran a really intense test with both of them again (it took half a day), and no errors. So I still think the problem could be RAM, but I am not sure; I've tested it pretty extensively (and might let it run all night again tonight)... which brings me to my point. Could someone explain to me (in simple terms, if possible) how to READ the minidump files of Windows computers? I've tried before with a guide I found online, but failed miserably (I can't remember the guide, either =/). I'm fine with installing the software; I will probably need it sometime in the future as well. I have seen a few other posts on SU that just ask people to post minidump logs, but I feel that is too localized. Would someone be able to explain this? Note: if someone knows how to do this but doesn't want to explain and is still willing to help me, this is the link for the minidump file =p Make sure to click
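
    The standard tool is WinDbg, from the Debugging Tools for Windows (bundled with the Windows SDK). Pointed at Microsoft's public symbol server, its automated analysis usually names the faulting driver outright. A sketch of a typical session; the dump filename is a placeholder:

        :: Open a minidump with symbols pulled from Microsoft's symbol server
        windbg -y "srv*C:\symbols*https://msdl.microsoft.com/download/symbols" ^
               -z C:\Windows\Minidump\012313-12345-01.dmp
        :: Then, in WinDbg's command window:
        ::   !analyze -v
        :: and read MODULE_NAME / IMAGE_NAME near the end of the output.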


  • Choosing the right TV tuner - USB or PCI TV tuners, hardware/software, DVB? Hybrid/combo/analog?

    - by Nucleon
    Greetings. I'll start with some background information so you know what I'm trying to accomplish, then get to my question. I work at a television station in the US, and we are setting up an online DVR/podcast system for all of our newscasts. Basically, we would record every newscast in HD, encode it to FLV/H.264 for viewing in a browser on Flash-compatible and iPhone/iPad devices, and eventually migrate to WebM once browsers support it. This task is theoretically pretty simple: it involves a TV tuner device and a program like VLC, MythTV or whatever to schedule the recording and dump it to a file, encode it with VLC/FFmpeg, and push it to the streaming server. Now to the hardware. To accomplish that task, should I use an internal PCI tuner or a USB 2.0 tuner? Is there a difference? The bus speeds of the two are not far apart; is bus speed really relevant in this case? Does it matter whether the device has a hardware encoder or a software encoder? On many sites USB was recommended for ease of setup and use, but would it overly tax the processor, or is that not a concern as long as it's a decent PC (at least dual core, 6 GB RAM)? What's the difference between the USB stick tuners and the USB box tuners? To my understanding, analog is basically gone in the US, so we would want a hybrid or combo tuner, correct? How do those differ from DVB? Are there any other features or concepts I am missing that might influence the recommended product? It would be ideal if the device could work in both Linux and Windows environments; to my knowledge most Hauppauge devices can.

        Example 1: PCI Hauppauge - http://www.newegg.com/Product/Product.aspx?Item=N82E16815116033
        Example 2: USB 2.0 box - http://www.newegg.com/Product/Product.aspx?Item=N82E16815116029
        Example 3: USB 2.0 stick - http://www.newegg.com/Product/Product.aspx?Item=N82E16815116031

    Any guidance from the superusers would be much appreciated!
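
    For the encoding leg of the pipeline, which is the same whichever tuner you pick once the recording exists as a file, here is a sketch of an FFmpeg command producing H.264/AAC playable by both Flash players and iPhone/iPad. Filenames and bitrates are placeholders, and encoder/preset names vary somewhat across FFmpeg versions:

        # Transcode a recorded newscast to H.264/AAC in MP4
        ffmpeg -i newscast.ts \
               -c:v libx264 -preset fast -b:v 1500k \
               -c:a aac -b:a 128k \
               -movflags +faststart \
               newscast.mp4

    The -movflags +faststart option relocates the index so playback can begin before the file finishes downloading, which matters for progressive web delivery.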


  • Exporting Client Data from Groupwise 6.5 to Outlook 2010 without Crashing

    - by Adam Doherty
    My employer has recently moved from Novell GroupWise 6.5 to Exchange 2010. We've imposed mailbox limits on staff, but we still need to move their old messages, contacts, calendars, etc. over to Outlook 2010. Our problem is this: using the Novell MAPI client within Outlook 2010 is slow, and exporting messages to a PST file (for later re-attachment and offline backup purposes) crashes the GroupWise server. Connecting to the server in Outlook via IMAP to export messages to PST is faster and apparently more stable, but it also crashes the server. We'll be keeping our GroupWise server online internally until the end of the year, but I have staff with mailboxes approaching 12 gigabytes, which is fine if we're going to move the data to offline storage (DVD sets); but if I keep crashing the server every time I try to get the data, I'll just be spinning my wheels. In my first attempt, I tried to move mail for a staff member with 3 GB of data; the transfer lasted roughly 8 hours before crashing. I'm wondering if there is an open-source solution to my problem. Paid solutions exist, but we're a not-for-profit organization and have too many staff to justify the cost of per-seat licenses just to migrate mail.
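
    Since IMAP access works but big monolithic transfers crash the server, one open-source route worth evaluating is imapsync, which copies mail server-to-server folder by folder and can resume after an interruption, making it gentler than a single giant PST export. A sketch; hosts, usernames and passwords are placeholders:

        # Copy one user's mail from GroupWise (IMAP) to Exchange (IMAP),
        # folder by folder, resumable if the connection drops
        imapsync --host1 groupwise.example.org --user1 jdoe --password1 'secret1' \
                 --host2 exchange.example.org  --user2 jdoe --password2 'secret2' \
                 --subfolder2 'Migrated'    # park the old mail under one folder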

