Search Results

Search found 6395 results on 256 pages for 'weird behaviour'.

Page 182/256 | < Previous Page | 178 179 180 181 182 183 184 185 186 187 188 189  | Next Page >

  • How to turn off Excel "Header Row" without losing data in it?

    - by Ken
    I've been sent an Excel spreadsheet with a weird first row. Some of the cells say "Column1", "Column2", etc., but I can't delete their contents. If I select the cell and hit backspace, it goes blank, but when I press return, it goes right back to saying "Column1". I found another answer here that suggested this could be caused by "cell validation", but the validation window says "Any value", and also "show alert" (and I'm not seeing an alert), so I don't think that's it. The first row is white text on a blue background, if that means anything. The spreadsheet was sent to me in XLSX format, but I tried resaving as XLS and opening that, and it seems to make no difference. This is with the "ribbon" version of Excel (they got rid of the Help menu, so I don't know how to see what version number it is!). Thanks!

    Update: The Excel online help says to use ribbon Home tab - Cells - Delete - ... to delete cells. When I select anything in the first row, this pop-up menu is dimmed. So maybe Excel doesn't think row 1 consists of "cells"? Though I don't know what else it would call them.

    Update 2: I found it, kind of. If I click the "Design" tab in the ribbon and uncheck "Header Row", the first row becomes a normal row of cells again. Unfortunately, the contents disappear entirely. I want to delete a few cells, not all 50+! And if I copy the first row before turning off "Header Row", it disappears from the clipboard when I uncheck that. So I kind of know what mode it's stuck in, but not a good way out of it.
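
    An editor's note, not from the original post: if the first row is a table header, converting the table back to a normal range keeps the header values as ordinary, deletable cells. A minimal VBA sketch, assuming the stuck table is the first (or only) one on the active sheet:

        ' Unlist converts the table back to a plain range; the data, including
        ' the forced header row, is kept and becomes ordinary editable cells.
        Sub ConvertTableToRange()
            ActiveSheet.ListObjects(1).Unlist
        End Sub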

    Read the article

  • Wireless card on HP laptop not working

    - by D. Strout
    I just bought an HP Envy m6-1125dx online from Best Buy. When I got it home and started it up, the wireless card did not work well - at all. I could connect, but any real usage would cause the connection to start dropping every 30 seconds or so, and it would be really slow. Taking another look at the reviews on the Best Buy site, it seems only a few others had this problem, so I took it to my local Best Buy and exchanged it for another unit. Got it home again and the card had the same issues. Which leads to my dilemma. First: does this model come with any of several different cards? Mine is a Ralink RT5390R (on both units I received). If it does, then I can keep exchanging until I get a unit with a different card. I wouldn't ask this, except it seems weird that only a few people mentioned this issue, so I thought that might be one possibility. I looked into replacing the card with a different one myself, but it seems that HP blocks certain wireless cards. However, some people reported success in replacing the card, and this site said it was only an issue on "older HP computer[s]". Can anyone confirm this? Finally, if that fails or will not work, does anyone know what I can get through Best Buy? I am concerned that they will not put in any card other than the Ralink, and after two of those, I don't want that. Can I ask Best Buy support to use a different card? Can they even get another card from HP? I guess the base question is: should I attempt to replace the card myself (two days via Amazon to get a new card), should I try to get the laptop repaired through Best Buy (two to four weeks), should I go for a different model laptop from Best Buy, or should I try a different unit of the same model (third time's the charm?).

    Read the article

  • Android web browser returns code 500 for webpage on Nginx webserver

    - by Paxxil
    I've run into some very weird behaviour of the web browser on Android mobile phones (I've tried HTC Wildfire and HTC Desire phones). I have a web server with Nginx v0.8.54. When I try to open a web page on the phone it shows me the error:

        The requested item could not be loaded! (Status code: 500)

    BUT it only happens when I am requesting the page through the mobile network; on WiFi it works just fine. But there is more: if I stop Nginx and start an Apache web server instead, it works just fine on both the mobile network and WiFi. I've also tried another mobile network and the behaviour is the same. Some server stats:

        - firewall is off
        - SELinux is off
        - the web page (using the Nginx web server) opens normally in any other browser (IE, FF, Opera, Chrome, Safari) on a laptop or PC
        - nothing in nginx error.log

    This is the only entry in access.log when the page is requested:

        xxx.xxx.xxx.xxx - - [17/Mar/2011:11:19:49 -0500] 200 "GET / HTTP/1.1" 27405 "-" "Mozilla/5.0 (Linux; U; Android 2.2; en-gb; Desire_A8181 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" "-"

    index.html has only a "Hello World" string in it. There is no fishy JavaScript or anything else. And there is even more: if I open the same page on another server, with the same Nginx build and the same server and web server configuration, it opens just fine. If anyone has any idea of what may be going on, I would really appreciate it if you let me know. Thanks!

    EDIT: I forgot to mention that the page opens OK on iPhone and Nokia.
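
    An editor's sketch, not from the original post: since the error only appears behind the carrier's network, logging the proxy-related request headers may show what differs from the WiFi path. The log path and format name here are made up for illustration:

        # Goes in the http block: log the headers most often added
        # or rewritten by mobile-network proxies.
        log_format mobiledebug '$remote_addr "$request" $status '
                               '"$http_user_agent" "$http_via" "$http_x_forwarded_for"';
        access_log /var/log/nginx/mobile-debug.log mobiledebug;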

    Read the article

  • 10 GigE interfaces limit single-connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: 3 Linux servers with Intel CX4 10 GigE controllers and an X-Serve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc). However a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1 Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s, and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s through netperf and 110 MB/s with NFS. When I open several connections from 3 of the machines to the 4th, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port can actually reach much more than 1 Gb, but individual connections are strictly limited to 1 Gb. Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server connections. Unfortunately that is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-well-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded for a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.
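
    An editor's sketch, not from the original post: a parallel-stream test distinguishes a per-flow cap from a per-port cap. The hostname is a placeholder:

        # Single TCP stream through the switch - expect ~1 Gb/s if each flow is capped.
        iperf -c 10gbe-host -t 30
        # Four parallel streams - if the aggregate climbs well past 1 Gb/s,
        # the port is fine and the limit really is per connection.
        iperf -c 10gbe-host -t 30 -P 4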

    Read the article

  • Why can't I bind to 127.0.0.1 on Mac OS X?

    - by Noah Lavine
    Hello, I'm trying to set up a simple web server on Mac OS X, and I keep getting an error when I run bind. Here's what I'm running (this transcript uses GNU Guile, but just as a convenient interface to POSIX):

        (define addr (inet-aton "127.0.0.1"))                      ; get internal representation of 127.0.0.1
        (define sockaddr (make-socket-address AF_INET addr 8080))  ; make a struct sockaddr
        (define sock (socket PF_INET SOCK_STREAM 0))               ; make a socket
        (bind sock sockaddr)                                       ; bind the socket to the address

    That gives me the error "In procedure bind: can't assign requested address". So I tried it again allowing any address:

        (define anyaddr (make-socket-address AF_INET INADDR_ANY 8080)) ; allow any address
        (bind sock anyaddr)

    And that works fine. But it's weird, because ifconfig lo0 says:

        lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
            inet6 ::1 prefixlen 128
            inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
            inet 127.0.0.1 netmask 0xff000000

    So the loopback device is assigned to 127.0.0.1. So my question is, why can't I bind to that address? Thanks.

    Update: the output of route get 127.0.0.1 is:

           route to: localhost
        destination: localhost
          interface: lo0
              flags: <UP,HOST,DONE,LOCAL>
         recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
            49152     49152         0         0         0         0     16384         0
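
    An editor's note, not part of the original post: one hypothesis worth testing is a byte-order mismatch - if inet-aton hands back the address in network byte order while make-socket-address expects host byte order, then on a little-endian Mac the bind is effectively aimed at 1.0.0.127, which is indeed unassigned. Guile's predefined constant sidesteps the conversion:

        ;; Hypothesis only: INADDR_LOOPBACK is already in host byte order,
        ;; so no string conversion is involved.
        (define sockaddr (make-socket-address AF_INET INADDR_LOOPBACK 8080))
        (bind sock sockaddr)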

    Read the article

  • Explorer.exe keeps crashing during log in

    - by asif
    I have got a weird problem. My Windows 7 machine has two user accounts (both are administrators). I can log in to one account and do all sorts of work. But whenever I try to log in to the other account, it shows a blank screen and a message box pops up with "Windows Explorer has stopped working". The options available are:

        Close the program
        Check online for a solution and close the program

    The problem signature is as follows:

        Problem Event Name:       InPageError
        Error Status Code:        c000009c
        Faulting Media Type:      00000003
        OS Version:               6.1.7601.2.1.0.256.1
        Locale ID:                1033
        Additional Information 1: 0a9e
        Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
        Additional Information 3: 0a9e
        Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

    If I press Ctrl+Alt+Del and then select Start Task Manager, it also crashes. I cannot run any program using the runas command (from the good profile) either. The Task Manager and runas programs all show the same problem signature. I read the similar question and followed all the steps, but no luck. Later, I viewed the event log and found that explorer.exe could not access a file. I checked the location, but the file is there. The actual message is:

        Windows cannot access the file C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches\{AFBF9F1A-8EE8-4C77-AF34-C647E37CA0D9}.1.ver0x0000000000000020.db for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Windows Explorer because of this error.

    The question is: how can I resolve this issue? Should I just delete the file, or replace it with another one, to stop explorer.exe from crashing? Off-topic: what is the content of this file, and why is it necessary?
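
    An editor's sketch, not from the original post: the InPageError status c000009c (STATUS_DEVICE_DATA_ERROR) usually means the read from disk itself failed, so it is worth scanning the disk as well as clearing the cache file, which Windows should rebuild on its own. Run from the working administrator profile:

        :: 1. Schedule a surface scan - c000009c points at a failed read, not just a bad file.
        chkdsk C: /r
        :: 2. Remove the cache database Explorer chokes on; it is regenerated at next logon.
        del "C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches\*.db"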

    Read the article

  • Firefox will not remember local site cookie

    - by Campo
    This is a weird one. We have a production server (Server 2008) and two staging servers (Server 2008 and Server 2003). I have sites on all of these, and they all use cookies. On the production server, when browsing to our site www.supernovainteractive.com, there is a cookie that detects when you visited the site, and it will not replay the logo animation (top left-hand side) when you click through to another page. This works in all browsers against the production server. I'm not sure what's going on, but for some reason cookies are not working on one site on the 2008 staging server only. This happens when browsing with Firefox (3.6.3); they work fine in all other browsers (IE, Chrome, Safari, Opera). In addition, the 2003 staging server works fine. You can test on the Supernova Interactive site by noticing the logo in the top left corner. It uses a cookie to detect if you've already seen the animation. Once you've seen it once, it doesn't animate again until tomorrow. Currently, it's animating every time. I have opened an outside-facing port so others can see the issue: http://exchange.supernova.com:10009 Any ideas on this one? Firewalls are off on the server. Notice you do not get a cookie from exchange.supernova.com.
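
    An editor's sketch, not from the original post: checking the raw Set-Cookie response header from both servers shows whether the staging server ever sends the cookie, and whether its Domain/Path/Expires attributes differ from production (Firefox is stricter than some browsers about malformed attributes):

        # Show only response headers; compare the Set-Cookie lines.
        curl -I http://exchange.supernova.com:10009
        curl -I http://www.supernovainteractive.com/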

    Read the article

  • New Static Website with Hosted DNS alternating 502, 503 and Page Does Not Exist Errors

    - by Dave
    This has become an increasingly frustrating ordeal. I'm mostly a web developer, so forgive me if I am using improper terminology here. I have a client that had purchased a domain at JustHost. We built him a website and have it on our own server space. Now, I'm mostly used to dealing with GoDaddy, where it is simple enough to manage DNS records and point the A record to our server IP, where Apache on our end deals with the domains via name-based virtual hosts. But for some reason, in setting this up with JustHost, when attempting to go to the domain name, I either get a 502 or 503 error or "webpage does not exist". Now, I know that the basic functionality of the webpage must be working, because I can access the index etc. straight through my server's www data (i.e. [server-ip]/website_folder). I was on the phone with JustHost technical support for over three hours yesterday, and the best I could get was "That's really weird..." I've checked my logs and there doesn't seem to be anything coming through to my end. Does anybody have an idea of what's going on here? I would love for it to be a problem on my end, because JustHost doesn't seem capable of helping further. Any help is greatly appreciated, thanks. I forgot to mention that we have several other sites up and running and completely accessible.
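
    An editor's sketch, not from the original post: since nothing reaches the server's logs, the first thing to confirm is where the A record actually resolves (the domain is a placeholder):

        # Does the published A record match your server's IP?
        dig +short clientdomain.com A
        dig +short www.clientdomain.com A
        # And which nameservers answer for the zone - JustHost's, or somewhere stale?
        dig +short clientdomain.com NS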

    Read the article

  • VMWare Setup with 2 Servers and a DAS (DELL MD3220)

    - by Kumala
    I am planning to use a VMWare-based setup consisting of two VMWare servers (2 CPU, 256GB memory) and a DAS (DELL MD3220 with 24x900GB disks). Half of the virtual machines will run MS SQL databases (Application, SharePoint, BI); the other half will be file services and IIS. To enhance the capacity of the storage, we'll be adding an MD1220 enclosure with another 24x900GB to the MD3220. Both DAS will have 2 controllers. Our current measured load is 1000 IOPS average, 7000 IOPS peak (those happen maybe twice per hour). We are in the planning phase now and are looking at the proper setup of the disks. The intention is to set up one of the DASs with RAID 10 only and the other with RAID 5. That will allow us to put each application on the DAS that supports its performance needs best. The question is how best to partition the two DASs to get the best possible IOPS/MBps; each DAS will have to have 2 hot spares.

    For the RAID 5 setup: generally speaking, would it be better to have one single disk group across all 22 disks (24 minus 2 hot spares) with both controllers assigned to the one disk group, or is it better to have 2 disk groups of 11 disks each, one assigned to each of the two controllers?

    Same question for the RAID 10 setup. The plan there is: 2 disks for logs (RAID 1), 2 hot spares and 20 disks for RAID 10.

        Option 1: 5 groups of 4 disks (RAID 10), with two groups assigned to one controller and three groups to the other
        Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group

    I would assume that there is no right or wrong, but that it all depends very much on the specific application behaviour, so I am looking for some general ideas on what the pros and cons of the different options are. If there are other meaningful options, feel free to propose them.
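
    An editor's back-of-envelope, not from the original post, assuming ~140 IOPS per 10k-rpm 900 GB SAS disk and a 50/50 read/write mix (write penalty 2 for RAID 10, 4 for RAID 5):

        RAID 10, 20 data disks: 20 x 140 = 2800 raw IOPS
            effective ~ 2800 / (0.5 + 0.5 x 2) = ~1870 IOPS
        RAID 5, 22 disks:       22 x 140 = 3080 raw IOPS
            effective ~ 3080 / (0.5 + 0.5 x 4) = ~1230 IOPS

    Under those assumptions both groups clear the 1000 IOPS average, but neither absorbs the 7000 IOPS peaks on its own; the controllers' write cache would have to soak those up.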

    Read the article

  • add_header directives in location overwriting add_header directives in server

    - by user64204
    Using nginx 1.2.1, I am able to add multiple headers using add_header as follows:

        server {
            listen 80;
            server_name localhost;
            root /var/www;
            add_header Name1 Value1;   # <=== HERE
            add_header Name2 Value2;   # <=== HERE
            location / {
                echo "Nginx localhost site";
            }
        }

        GET / HTTP/1.1
        200 OK
        Name1: Value1
        Name2: Value2

    However, as soon as I use the add_header directive inside location, the other add_header directives under server are ignored:

        server {
            listen 80;
            server_name localhost;
            root /var/www;
            add_header Name1 Value1;   # <=== HERE
            add_header Name2 Value2;   # <=== HERE
            location / {
                add_header Name3 Value3;   # <=== HERE
                add_header Name4 Value4;   # <=== HERE
                echo "Nginx localhost site";
            }
        }

        GET / HTTP/1.1
        200 OK
        Name3: Value3
        Name4: Value4

    The documentation says that both server and location are valid contexts and doesn't state that using add_header in one prevents using it in the other.
    Q1: Do you know if this is a bug or the intended behaviour, and why?
    Q2: Do you see other options to get this fixed than using the HttpHeadersMoreModule module?
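
    An editor's note: this matches nginx's documented inheritance for add_header - a level that defines any add_header of its own inherits none from the enclosing level. Two workarounds, sketched with the header names above:

        # Workaround 1: repeat the server-level headers inside the location.
        location / {
            add_header Name1 Value1;
            add_header Name2 Value2;
            add_header Name3 Value3;
            add_header Name4 Value4;
            echo "Nginx localhost site";
        }
        # Workaround 2: the headers-more module the post mentions sets headers
        # independently of this inheritance rule.
        more_set_headers "Name3: Value3" "Name4: Value4";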

    Read the article

  • Running Visual Studio 2010 in a University Campus

    - by Woondows
    We have just installed Windows 7 Enterprise x64 in one of our computer labs used by students for programming. However, when we installed Visual Studio 2010 Ultimate on the machines, we found that even launching the application (devenv.exe) required the student to enter the administrator password (the usual UAC prompt). Of course, we could just turn off UAC, but that would defeat the purpose of having it in Windows 7. On the other hand, we cannot really give the students local administrator privileges, as we are concerned that they will do some malicious stuff on the computers. Previously, when we used Windows XP Professional running Visual Studio 2005, we had no problems.

    Kindly advise if there's any workaround for this. EDIT: Thanks for the answers, guys. Mayank, your links may work for Visual Studio .NET, but they don't seem to work for Visual Studio 2010. Ryan, Tieson, I'm intrigued that you guys managed to get it working easily. FYI, I don't manage the Group Policies, but I can get them changed if necessary. Any particular GP that I should be looking at? Suggestions on how to troubleshoot further why UAC is being invoked? At least now I know for sure that this is not supposed to be the default behaviour for Visual Studio 2010, so I'm going to keep digging for a solution. Will try running Procmon and see if I can find something...
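
    An editor's guess, not a confirmed cause: an application-compatibility layer flagging devenv.exe to run elevated would produce exactly this prompt. Worth a quick check on one lab machine:

        :: Look for a RUNASADMIN entry naming devenv.exe in either hive.
        reg query "HKLM\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"
        reg query "HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"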

    Read the article

  • GtkView GtkSpell and myspell backend

    - by justadreamer
    Hi, I have the IM client Pidgin, launched under the locale ru_RU.UTF-8. When I type messages in the GtkView widget, it highlights misspelled words. However, it uses GtkSpell, which uses enchant, which uses the myspell backend (I provided it with a symlink to the OpenOffice dictionaries folder: /usr/share/enchant/myspell -> /usr/share/myspell/dict). Now the problem is that whichever language I use, it still uses ru_RU to select the dictionary, so all English text gets underlined as misspelled. When I switch the locale to en_US and then launch Pidgin under it, all Russian text becomes misspelled. I don't like this behaviour, as I use Pidgin to chat in both languages. Is there a way to somehow set up enchant/myspell so that it searches both dictionaries, en_US and ru_RU, independent of the locale I launched Pidgin (GtkView) in? I have the Debian lenny/5.0.1 distro, enchant version 1.5.0. /usr/share/enchant/enchant.ordering looks like this:

        *:myspell

    meaning that myspell is the backend for all languages. The myspell dictionary.lst file looks like this:

        DICT en GB en_GB
        DICT en US en_US
        THES en US th_en_US_v2
        THES en GB th_en_US_v2
        THES ru RU th_ru_RU_v2
        DICT ru RU ru_RU
        DICT uk UA uk
        DICT ru RU en_US
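
    An editor's sketch, not from the original post: before changing configuration, it helps to confirm which dictionaries enchant can actually see (paths follow the symlink described above; enchant 1.x ships a small listing tool):

        # List the backends enchant found.
        enchant-lsmod
        # Confirm both dictionaries are visible through the symlink.
        ls -l /usr/share/enchant/myspell/en_US.* /usr/share/enchant/myspell/ru_RU.*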

    Read the article

  • SQL SERVER 2005 with Windows 7 Problems

    - by azamsharp
    First of all, I restored the database from another server, and now all the stored procedures are named as [azamsharp].[usp_getlatestposts]. I think [azamsharp] is prefixed because it was the user on the original server. Now, on my local machine, this does not run. I don't want the [azamsharp] prefix on all the stored procedures. Also, when I right-click on the sproc, I cannot even see the Properties option. I am running SQL Server 2005 on Windows 7.

    UPDATE: The weird thing is that if I access the production database from my machine, I can see the Properties option. So there is really something wrong with Windows 7 security.

    UPDATE 2: When I ran the orphaned users stored procedure, it showed two users, "azamsharp" and "dbo1". I fixed the "azamsharp" user, but "dbo1" is not getting fixed. When I run the following script:

        exec sp_change_users_login 'update_one', 'dbo1', 'dbo1'

    I get the following error:

        Msg 15291, Level 16, State 1, Procedure sp_change_users_login, Line 131
        Terminating this procedure. The Login name 'dbo1' is absent or invalid.
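
    An editor's sketch, not from the original post: that error means there is no server-level login named dbo1 to map the orphaned database user onto. If dbo1 should exist, create the login first and then remap (the password here is a placeholder):

        -- Create the missing login, then remap the orphaned database user to it.
        CREATE LOGIN dbo1 WITH PASSWORD = 'ChangeMe!123';
        EXEC sp_change_users_login 'update_one', 'dbo1', 'dbo1';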

    Read the article

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal condition, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition, then it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because the query string is appended to the URL, so the bare base URL no longer matches. You can also fix the problem by ordering the sites in the config file so that the smallest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal, since it will generate a higher load on the server, and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
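
    An editor's sketch, not from the original config: requiring the prefix to be followed by a slash or the end of the path keeps /drupaltest from ever matching the /drupal block, and it keeps working after login, since the query string is not part of %{REQUEST_URI}:

        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]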

    Read the article

  • HP DL380 G4 won't boot with disk plugged into front USB

    - by Kev
    We outgrew a few older external USB backup drives and purchased WD My Passport 1 TB USB 3.0 drives to replace them. When they are plugged into the front of our G4, it will blink forever after the BIOS (which is current, BTW) and never boot, even though the USB disks are not "bootable" per se. Our old drives did not exhibit this behaviour (so I don't think it's the type of issue I've read about with other servers). The old drives were USB 2.0, but this shouldn't make a difference, AFAICT - the specs say all of the G4's USB ports are the same, 2.0, anyway, so I'm not sure how one port would handle a USB 3.0 device better than another. If we plug the new drives into one of the back slots, it boots fine. What's the cause? My concern is that the front USB port, and possibly the motherboard, might be starting to die. (We are experiencing other strange issues with them, or were initially, like intermittent file-permission errors despite wide-open ACLs on these local drives, but some Server Fault users have me convinced those may be coincidental software/security-related issues.)

    Read the article

  • smartctl short test doesn't seem to complete

    - by Cédric COPY
    I am working on a project which involves automated HDD testing through smartctl. The station works fine on most products, but I have two specific products that fail the smartctl test. Those two products are both WD products (WD2500BUDT series). The smartctl behaviour is quite strange: the test launches without any problem, I wait about 2 minutes (the test length), and when I check smartctl I get no result at all. It's as if I hadn't launched any test (no fail, no success in the smartctl result). No error return from the command, nothing in syslog... As I said before, the test works for other products; thousands of products have passed it. The main smartctl commands used are:

        smartctl -t short /dev/sdX      # launch test
        smartctl -l selftest /dev/sdX   # look at test result

    I have tried to use:

        smartctl -s on /dev/sdX

    or

        smartctl -o on /dev/sdX

    but that doesn't change anything. The system is Debian 6.0 with smartctl v5.40 (rev 3124) x86_64; the HDDs are plugged through SATA to a PCI controller, with 4 HDDs connected at a time. Well, if anyone has some hints to give on this problem, I would really appreciate it, because I have no idea how I can fix this. Thanks in advance. PS: Not sure if this is a Server Fault topic, sorry if I was wrong!
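
    An editor's sketch, not from the original station code: some drives silently abort a self-test on a bus reset or power-state change, and the drive itself reports both its expected test duration and the current execution status. Polling those two fields shows whether the test ever ran (the device path is a placeholder):

        #!/bin/sh
        dev=/dev/sdX
        smartctl -t short "$dev"
        # Wait a little longer than the drive's advertised short-test duration.
        sleep 150
        # -c includes the "Self-test execution status" field; 0 means completed.
        smartctl -c "$dev" | grep -A1 'Self-test execution status'
        smartctl -l selftest "$dev"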

    Read the article

  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 * 2 TB disks. The disk array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an xfs and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful:

        - writing inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, then the next 50 are written (according to the console display)
        - when I try to cancel the operation with Ctrl+C, it hangs for half a minute before it is really cancelled

    The performance of the disks individually is very good; I've run bonnie++ on each one separately with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel, the values are only reduced by about 10 MB/s. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the filesystem creation operation. The server details:

        Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux
        mdadm - v2.6.7.1

    Does anyone have a suggestion on how to debug this further?
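
    An editor's sketch, not from the original post, with two cheap checks. If the array was still resyncing, mkfs competes with the rebuild; and passing the RAID geometry to ext3 aligns the inode-table writes (the numbers assume the 64k chunk, 4-disk RAID 5 above and 4k filesystem blocks; the md device name is a placeholder):

        # Is the array still resyncing? That alone would explain slow writes.
        cat /proc/mdstat
        # stride = chunk/block = 64k/4k = 16; stripe-width = stride x 3 data disks = 48
        mkfs.ext3 -E stride=16,stripe-width=48 /dev/md0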

    Read the article

  • Redirect 301 fails with a path as destination

    - by Martijn Heemels
    I'm using a large number of Redirect 301's which are suddenly failing on a new webserver. We're in pre-production tests on the new webserver, prior to migrating the sites, but some sites are failing with 500 Internal Server Error. The content, both databases and files, is mirrored from the old to the new server, so we can test whether all sites work properly. I traced this problem to mod_alias' Redirect statement, which is used from .htaccess to redirect visitors and search engines from old content to new pages. Apparently the Apache server requires the destination to be a full URL, including protocol and hostname:

        Redirect 301 /directory/ /target/                          # Not valid
        Redirect 301 /main.html /                                  # Not valid
        Redirect 301 /directory/ http://www.example.com/target/   # Valid
        Redirect 301 /main.html http://www.example.com/            # Valid

    This contradicts the Apache documentation for Apache 2.2, which states: "The new URL should be an absolute URL beginning with a scheme and hostname, but a URL-path beginning with a slash may also be used, in which case the scheme and hostname of the current server will be added." Of course I verified that we're using Apache 2.2 on both the old and the new server. The old server is a Gentoo box with Apache 2.2.11, while the new one is a RHEL 5 box with Apache 2.2.3. The workaround would be to change all paths to full URLs, or to convert the statements to mod_rewrite rules, but I'd prefer the documented behaviour. What are your experiences?
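
    An editor's note with a hedged workaround: the URL-path form of Redirect only appeared partway through the 2.2 series (2.2.6, if the changelog is right), which would square with 2.2.11 accepting it and 2.2.3 refusing. Until the servers match, mod_rewrite accepts path targets on both (paths as in the examples above):

        RewriteEngine On
        RewriteRule ^/?directory/(.*)$ /target/$1 [R=301,L]
        RewriteRule ^/?main\.html$ / [R=301,L]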

    Read the article

  • Windows 7 BSOD when changing power plan

    - by dd5
    I have a strange problem. When I want to change the power plan on my laptop from High performance to Balanced, Windows freezes and I get a BSOD. The power plan settings are all default. Laptop specs:

        - Intel Core i3 330M/350M
        - Intel HM55 Express chipset
        - DDR3 1066 MHz SDRAM, 8 GB
        - ATI Mobility Radeon HD5730, 1 GB DDR3 VRAM
        - Intel SSD 330, 128 GB
        - Windows 7 Home Premium

    I've searched the internet but couldn't find a similar issue. The BSODs first started when I installed this SSD and stopped when I updated the chipset controller driver, then started again yesterday when I wanted to change the power plan settings. Minidump file here. Any help with this weird issue is appreciated, thanks.

    Edit: I've run the Memory Diagnostic tool and the Intel SSD diagnostics, and updated the firmware to 3.2.1. None of these steps worked or showed signs of errors, but I still get a BSOD when changing the power plan settings. After analyzing the dump file via osronline.com, here are the first few lines:

        CRITICAL_OBJECT_TERMINATION (f4)
        A process or thread crucial to system operation has unexpectedly exited or been terminated.
        Several processes and threads are necessary for the operation of the system;
        when they are terminated (for any reason), the system can no longer function.
        Arguments:
        Arg1: 0000000000000003, Process
        Arg2: fffffa8008661b30, Terminating object
        Arg3: fffffa8008661e10, Process image file name
        Arg4: fffff800033de270, Explanatory message (ascii)

    -- Solution -- Provided by Vinayak: after installing Intel Rapid Storage Technology from MajorGeeks, I haven't experienced a BSOD since. Thank you :)
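
    An editor's sketch, not from the original post: the crash can be reproduced from an elevated command prompt instead of the Control Panel, which takes the GUI out of the equation. The GUID below is the stock Windows 7 Balanced scheme; confirm it with the list first:

        :: List power schemes and their GUIDs, then switch to Balanced.
        powercfg -list
        powercfg -setactive 381b4222-f694-41f0-9685-ff5bb260df2e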

    Read the article

  • Malicious content on server - next steps advice [closed]

    - by Under435
    Possible duplicate: My server's been hacked EMERGENCY

    I just got an e-mail from my hosting company that they received a report of malicious content being hosted on my VPS. I was unaware of this and started looking into it. I discovered a file called /var/www/mysite.com/osc.htm. Soon after, I discovered some weird PHP files, wp-includes.php and ndlist.php, both recognized as the PHP/WebShell.A.1 virus. I removed all these files, but I'm unsure of what to do next. Can anyone help me analyze the output of sudo netstat -A inet -p -e below and give advice on what's best to do next? Thanks very much in advance.

        Proto Recv-Q Send-Q Local Address            Foreign Address          State        User         Inode  PID/Program name
        tcp        0      0 localhost.localdo:mysql  localhost.localdo:37495  TIME_WAIT    root             0  -
        tcp        0      1 mysite.com:50524         xnacreators.net:smtp     SYN_SENT     Debian-exim  69746  25848/exim4
        tcp        0      0 mysite.com:www           tha165.thehealtha:37065  TIME_WAIT    root             0  -
        tcp        0      0 localhost.localdo:37494  localhost.localdo:mysql  TIME_WAIT    root             0  -
        udp        0      0 mysite.com:59447         merlin.ensma.fr:ntp      ESTABLISHED  ntpd          3769  2522/ntpd
        udp        0      0 mysite.com:36432         beast.syus.org:ntp       ESTABLISHED  ntpd          4357  2523/ntpd
        udp        0      0 mysite.com:48212         formularfetischiste:ntp  ESTABLISHED  ntpd          3768  2522/ntpd
        udp        0      0 mysite.com:46690         formularfetischiste:ntp  ESTABLISHED  ntpd          4354  2523/ntpd
        udp        0      0 mysite.com:35009         stratum-2-core-a.qu:ntp  ESTABLISHED  ntpd          4356  2523/ntpd
        udp        0      0 mysite.com:58702         stratum-2-core-a.qu:ntp  ESTABLISHED  ntpd          3770  2522/ntpd
        udp        0      0 mysite.com:49583         merlin.ensma.fr:ntp      ESTABLISHED  ntpd          4355  2523/ntpd
        udp        0      0 mysite.com:56290         beast.syus.org:ntp       ESTABLISHED  ntpd          3771  2522/ntpd
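
    An editor's sketch, not from the original post: since a web shell was already planted, removing the visible files is rarely enough. A first sweep, assuming the web root above:

        # Look for PHP files modified recently anywhere under the web root.
        find /var/www -name '*.php' -mtime -14 -ls
        # Common obfuscation patterns used by web shells.
        grep -rlE 'base64_decode|eval\(|gzinflate' /var/www
        # The SYN_SENT line to a remote smtp port suggests checking the exim queue for spam.
        mailq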

    Read the article

  • Connect to Apache times out randomly

    - by Amadan
    We are trying to set up an Apache server on a remote machine, but we experience strange behaviour. Checking with telnet remote.machine 80, one of these things happens at random:

        1. Connects and serves content normally (no delay)
        2. Connects after a long pause
        3. Connects normally, then times out without a response
        4. Times out on connect

    Once connected, the request seems to be processed normally. These things do not occur if I connect from that machine directly to localhost 80. The Apache is dedicated, as is the server it runs on (it runs only this one application; no one else is using it for anything else). I am not an administrator of the remote site, and I do not know the network architecture over there, but apparently it's firewalled (the HTTP port is open, the SSH port is IP-restricted, most others are closed). If there were any one pattern, I might have some ideas, but this variety of symptoms baffles me. Any ideas as to what could be causing this? Apache is 2.2; the server version is:

        Linux version 2.6.9-22.ELsmp ([email protected]) (gcc version 3.4.4 20050721 (Red Hat 3.4.4-2)) #1 SMP Mon Sep 19 18:32:14 EDT 2005
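
    An editor's sketch, not from the original post: timing each phase from the client side helps separate connection problems from request problems (the hostname is a placeholder):

        # Repeat a few dozen times; if time_connect swings wildly while totals stay
        # steady once connected, the trouble is upstream of Apache (firewall, NAT, routing).
        curl -o /dev/null -s -w 'connect=%{time_connect} total=%{time_total}\n' http://remote.machine/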

    Read the article

  • Having Trouble Ripping Some CD's

    - by James
    Hi, when I buy CDs I tend to rip them to FLAC right away. When ripping, I use Foobar2000 or Exact Audio Copy with secure ripping enabled, which uses error correction. Recently I bought a 2-CD compilation album brand new, but when I tried to rip the second CD on my laptop using Foobar2000, it struggled with the last two tracks and was unable to finish. EAC was also unable to get an accurate rip and reported read errors. Ripping in fast mode results in audible errors in the output track. I have tried another computer and had similar problems. I cannot see any damage to the disc, and it has not been dropped or anything. The weird thing is that I had similar problems with a different album and a different PC a while back. That other CD was also a compilation disc, so it was also right up against the CD capacity limit, and again it was the last few tracks that would not rip. Dozens of other discs have ripped fine. So I am wondering whether the CD is simply defective, or whether it is something else. How common are defective CDs? Do some CD drives struggle with CDs of this capacity? Or is this some kind of copy protection? I'm thinking of asking Amazon for a replacement, but it would be annoying if I got the same problem again.

    Read the article

  • Sharing a folder with Nautilus and NTFS external drive gets errors

    - by TheLQ
    I am trying to share a folder in Lubuntu over a network; the folder is on an external NTFS drive. Due to the system that I have (rotating backup disks), this is probably only the second time the drive has been mounted. It's manually mounted with a simple (for example):

        mount /dev/sdb1 /media/BACKUP

    On an internal NTFS disk I have successfully set up a network share and can access it. However, on the external disk I can't, from any other Windows computer. When setting up the share, Nautilus said that it needed to change the "others" permissions to allow other users to write. However, afterwards it's still blank. Changing it to Read and Write just changes back to blank. Chowning the entire /media folder recursively and trying again didn't work. Running PCManFM as root and changing it didn't work. Adding "public=yes" to smb.conf and restarting didn't work. I'm out of ideas on what to do. What's weird is that it worked just fine on an internal NTFS disk, so why not the external one? Any solution needs to be manageable inside a GUI (preferably Nautilus), as the person managing the machine isn't as tech-savvy. Thanks
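
    An editor's note, hedged: NTFS has no Unix permission bits, so the effective ownership and mode are fixed at mount time, and ntfs-3g typically ignores later chown/chmod on the mounted tree - which would explain the permission boxes snapping back to blank. A sketch using the device and mountpoint above (uid/gid are assumptions for the first local user):

        # Mount so every local user (and thus Samba) can read and write;
        # ntfs-3g takes these at mount time instead of per-file Unix modes.
        mount -t ntfs-3g -o uid=1000,gid=1000,umask=000 /dev/sdb1 /media/BACKUP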

    Read the article

  • Getting 0xc00000e windows 7 "Boot device inaccessible" after random crashes

    - by Dynde
    I've been having some weird random crashes that I can't seem to pin down, and I'm unsure whether they are Windows- or hardware-related. It's a brand new computer, and very powerful. I've run into a couple of these random crashes now. I don't know what causes them, as they happen during the night, while I'm sleeping. When I wake up, all I see is a boot manager screen that says:

        Exception: 0xc00000e "Boot device inaccessible"

    A simple restart doesn't fix the problem - it seems to struggle to locate my primary HDD - but a complete shutdown works; it'll fly straight into Windows again. The Event Viewer doesn't tell me much. The most recent incident just gives me this:

        The previous system shutdown at 08:55:44 on 11-12-2011 was unexpected.

    and a kernel-power event:

        The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.

    I can see only two application event entries around that time, at 8:47 (about 8 minutes prior to the crash):

        The Windows Modules Installer service entered the running state.
        The WinHTTP Web Proxy Auto-Discovery Service service entered the running state.

    Can anyone tell me anything about this, or direct me to a forum or something that might know what's wrong? I can supply the extra details of the events too if needed. The HDD is an SSD - could that have anything to do with it? I ran a few diagnostics, and memory and HDD should be okay - at least the diagnostics report is clean. Is it a faulty drive?
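
    An editor's sketch, not from the original post: the kernel-power record (Event ID 41) logged after each unclean shutdown carries detail that Event Viewer buries; pulling the last few from an elevated prompt gives all the night-time crashes at once:

        :: Last five kernel-power events, newest first, full text.
        wevtutil qe System /q:"*[System[(EventID=41)]]" /f:text /c:5 /rd:true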

    Read the article
