Search Results

Search found 11224 results on 449 pages for 'suggestions'.

  • Resizing Partitions on Live RHEL/cPanel Server

    - by Timothy R. Butler
    I've resized many partitions over the years on Linux, Windows and Mac OS X -- but always using a GUI. However, the time has come where the preset partition sizes my data center placed on my server aren't the right sizes and I need to resize a production server's disks. I could fiddle with it and probably do OK, but given that it is a production server, I wanted to get some advice about the right way to do this. I do have KVM over IP access, so if it is best to take the server offline and boot off a rescue partition, I can do that.

        root [/var/lib/mysql]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda2             9.9G  2.1G  7.3G  23% /
        tmpfs                 7.8G     0  7.8G   0% /dev/shm
        /dev/sda1              99M   77M   18M  82% /boot
        /dev/sda8             884G  463G  376G  56% /home
        /dev/sda3             9.9G  8.0G  1.5G  85% /usr
        /dev/sda5             9.9G  9.1G  308M  97% /var
        /usr/tmpDSK           2.0G   38M  1.8G   3% /tmp

    As you can see, /var and /usr are quite close to being full, and I've actually had to symlink some logs on /usr to directories in /home to balance things out. What I would like to do is add 6-10 GB each to /usr and /var, presumably taking the space from /home. As I think about how the disk is arranged, the best thought I've come up with is to reduce /home by, say, 16 GB, move /var to the spot freed up, and then allocate /var's old space to /usr. However, that would put /var at the far end of the disk, which seems less than ideal, given that MySQL has all of its data on that partition. I'd love to take the space out of the closer end of /usr, but I assume that would take a very arduous (and perhaps risky) process of moving all of the data in /usr around. I seem to recall having such a process fail for me on a computer in the past. The other option might be to merge / and /usr, since / is underutilized, though I'm not sure if that's a good idea. Do you have any suggestions, both on the best reallocation plan and on the commands to use to accomplish it?

    UPDATE: I should add -- here's the partition table. There's one unused partition which, if memory serves, was the original tmp location before I created a tmp image:

        Name    Flags   Part Type   FS Type               [Label]    Size (MB)
        ------------------------------------------------------------------------------
                        Unusable                                          1.05*
        sda1    Boot    Primary     Linux ext2                          106.96*
        sda2            Primary     Linux ext3                        10737.42*
        sda3            Primary     Linux ext3                        10737.42*
        sda5    NC      Logical     Linux ext3                        10738.47*
        sda6    NC      Logical     Linux swap / Solaris               2148.54*
        sda7    NC      Logical     Linux ext3                         1074.80*
        sda8    NC      Logical     Linux ext3                       964098.53*
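
    For reference, the general offline procedure for shrinking an ext3 partition such as /home -- a minimal sketch, assuming a rescue boot and current backups; the target size below is illustrative, not a recommendation for this exact layout:

        # From a rescue environment -- never resize a mounted filesystem
        umount /home
        e2fsck -f /dev/sda8        # filesystem must be clean before resizing
        resize2fs /dev/sda8 860G   # shrink the filesystem smaller than the new partition
        # Then delete and recreate sda8 in fdisk/parted with the same start
        # sector but an earlier end boundary, and run resize2fs /dev/sda8 once
        # more (no size argument) to grow the filesystem to fill the partition.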

  • Setting up a WPA-PSK network card to connect to a WPA2 network

    - by mattshepherd
    I'm currently doing a spare-parts build to put a media computer in the living room, and having a devil of a time getting my Rosewill RNX-6300 wireless card to connect to my network. I'm trying to set it up using Windows as opposed to the proprietary Rosewill software -- the Rosewill software is a little over my head. It can find the network fine, but when I try to connect, I don't get the password prompt -- it moves straight to "validating identity," scans, and then says "Windows was not able to find a certificate to log you on to the wireless network Foo."

    The maddening thing is that the card was working fine a week ago, in the same box, using the same OS. I pulled everything out, swapped out the motherboard, and reinstalled Windows on a freshly wiped hard drive, and now I can't get it up and running again. Suggestions? I've taken several runs at it, including attempting to manually change the settings for the network to include WPA-PSK and AES and the password, and I'm a bit worried that I've totally boned everything.

    My router settings: [screenshot]

    ipconfig /all results from the XP box: [screenshot]

    Again, this card was working on this network a week ago. I can't figure out why I can't get it up and running now. There's no WPA2 on the card, just WPA and WPA-PSK; WPA-PSK was the only setting that would let me enter a network key. I had TKIP and AES as options there, but the cipher type is AES on the router, so I chose that. (I tried TKIP later, when this didn't work, with the same results as described below.) So I set it to WPA-PSK / AES and entered my security key. It's mixed letters and numbers, 32 characters long.

    No joy. Still "waiting for reply" in the main screen, and "cannot find certificate" on the pop-up. And if I try again and return to the settings, it has reset to Open/AES. It also re-enables 802.1x in the Authentication tab if I've deselected it with WPA-PSK, and it re-shortens the password. I have no idea how I blundered into getting this working in the past. I am, as you can tell, far from proficient at this. It was working before, though. What am I getting wrong?

  • NX Client for Windows 7 Opens Remote Desktop in Multiple Windows

    - by Corey Kennedy
    What I'm trying to do: access my Ubuntu desktop remotely via NX Client on my Windows 7 laptop.

    My environment: the server is Ubuntu 10.10 on an AMD 1GHz/512MB RAM PC, running NXServer 3.4.0 with the xfce4 window manager. The client is Windows 7 on a ThinkPad SL510, running NX Client for Windows.

    In my NX Client "Desktop" settings I've selected "Unix" and "Custom" for OS and environment. I've also specified "startxfce4" as the application to launch when NX connects. I am able to authenticate an NX session on my laptop. By this I mean: I can start the client on my laptop, enter credentials for my Linux user, and NX establishes a connection to the server and attempts to open a remote desktop window.

    The problem, though, is that this remote desktop is "fragmented" into many windows. One window will display the bulk of my desktop (complete with desktop icons for "Home," "File System," and "Trash"), another will contain the taskbar, and another will contain the application strip. I can select each of these windows individually, but I cannot click on any objects within them.

    I've searched Super User, Ubuntu Forums, NX help, and Server Fault, and tried many Google searches -- none have turned up another case of this particular problem. I'm stumped. Does anyone have any suggestions for what I might try? I'm guessing the problem has to do with my xfce config files, but I've only just set up this server -- it's been a long time since I've used Linux and there's a lot I just don't know.

    What I am NOT trying to do: use desktop sharing from Ubuntu, whereby I VNC into a desktop that I've already established on the server. I am trying to configure this Linux box as a headless server that I can stash someplace out-of-the-way in my house, then interact with through my laptop. I don't want to have a monitor or keyboard connected to the Linux box. Thanks for your help!

    Edit (1/19/2011): Well, this is truly bizarre. To my knowledge I've made no changes to either system -- the laptop or the server. But today, after starting up the server for the first time in a few days and making sure that nxserver was running, I was able to connect with the nxclient from my laptop with no problems. I have a full desktop in a single window and I am able to interact with it normally. This is really weird, but the problem seems to be resolved.

  • Postfix: How to apply header_checks only for specific Domains?

    - by Lukas
    Basically what I want to do is rewrite the From: header, using header_checks, but only if the mail goes to a certain domain. The problem with header_checks is that I can't check for a combination of To: and From: headers. Now I was wondering if it is possible to use header_checks in combination with smtpd_restriction_classes or something similar. I've found a lot of information about header_checks and multiple header fields when searching the net, all of it basically telling me that one can't combine two headers for checking. But I didn't find any information on whether it is possible to do a header check only if a condition (e.g. mail goes to example.com) is met.

    Edit: While doing some more research I found an article which suggests adding a service in the Postfix master.cf, using a transport map to pass mail for the domain to that service, and having a separate header_checks defined with -o. The thing is that I can't get it to work... What I did so far is add the service to master.cf:

        example unix - - n - - smtpd
            -o header_checks=regexp:/etc/postfix/check_headers_example

    Then I added the following line to the transport map:

        example.com example:

    Last but not least, I have two regexp files for header checks: one for the newly added service, and one to redirect answers to the rewritten domain. check_headers_example:

        /From:(.*)@mydomain.ain>(.*)/ REPLACE From:[email protected]>$2

    Obviously, if someone answers, the mail would go to nirvana, so I have the following check_headers defined in the main Postfix process:

        /To:(.*)<(.*)@mydomain.example.com>(.*)/ REDIRECT [email protected]$2

    Somehow the transport is ignored. Any help is appreciated.

    Edit 2: I'm still stuck... I did try the following:

        smtpd_restriction_classes = header_rewrite
        header_rewrite = regexp:/etc/postfix/rewrite_headers_domain
        smtpd_recipient_restrictions = (some checks)
            check_recipient_access hash:/etc/postfix/rewrite_table,
            (more checks)

    In the rewrite_table the following entry exists:

        /From:(.*)@mydomain.ain>(.*)/ REPLACE From:[email protected]>$2

    All it gets me is a NOQUEUE: reject: 451 4.3.5 Server configuration error. I couldn't find any resources on how you would do that, but some people say it isn't possible.

    Edit 3: The reason I asked this question is that we have a customer (let's say customer.com) who uses some aliases that will forward mail to a domain, let's say example.com. The mail server at example.com does not accept any mail from an external server that comes from a sender @example.com. So all mail written from example.com to [email protected] will be rejected in the end. An exception on example.com's mail server is not possible. We didn't really solve this problem, but will try to work around it by using lists (Mailman) instead of aliases. This is not really nice, though, nor a real solution. I'd appreciate all suggestions for how this could be done in a proper way.
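
    One hedged avenue worth trying -- a sketch, untested here, which assumes Postfix 2.5 or later (the release that added smtp_header_checks): header_checks attached to an smtpd service only inspect mail *arriving* on that listener, so the rewrite may belong on the *delivering* smtp transport instead, selected per destination domain via transport_maps:

        # master.cf: a dedicated delivery transport with its own header checks
        rewrite_smtp unix - - n - - smtp
            -o smtp_header_checks=regexp:/etc/postfix/check_headers_example

        # main.cf
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport: route only this destination through it
        example.com rewrite_smtp:

    After editing, run postmap /etc/postfix/transport and postfix reload so the new transport and map take effect.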

  • "Error loading operating system": Win7/Vista

    - by LookitsPuck
    Hey fellas, I've had this computer for about 2 years now. It originally had Vista installed; now it has Windows 7 installed, both on separate hard drives. I also have another drive used strictly for media. About a week ago, the Vista hard drive started going on its way out, and I was getting problems on startup. After a few BIOS settings, I was able to get into Windows 7 and everything was fine. However, I started remembering the startup issues, so I deleted the boot entry for Vista under msconfig. I didn't restart the computer at that time, though, and for a few days everything was OK.

    Last night I play a little poker, then hit the hay. I wake up to a good ole "Error loading operating system" on the screen. Just wonderful. Looks like the computer restarted overnight (auto updates, anyone?). So, after a bit of finagling and half-hearted tries, I can't get past the "Error loading operating system" screen. FWIW, the BIOS can see my hard drives fine.

    So I move on. I get my Windows 7 installation disk to try and do a repair: go into the BIOS, change boot priority to the DVD drive, and we're on our merry way. After loading from the disc, I first try jumping into the "Repair your computer" section. That opens up the System Recovery Options. However, this is where the problem comes into play: I don't see any operating systems here. Nada. What's odd, though, is that if I click on the Load Drivers button, I can see my Windows 7 partition (C:), and can go through the files and folders without issue.

    What do I do at this point? I can't repair it. It seems I can traverse the hard drive without issue from an open dialog in the System Recovery Options, but I'm still getting the good ole "Error loading operating system" on bootup. Suggestions? Thanks all!!
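
    For what it's worth, when System Recovery Options lists no installations, the usual next step -- a sketch; run it from the install disc's Command Prompt rather than the automated repair -- is to rewrite the MBR and boot sector, then rescan the disks and rebuild the BCD store:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd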

  • HAProxy causing delay

    - by user1221444
    I am trying to configure HAProxy to do load balancing for a custom webserver I created. Right now I am noticing an increasing delay with HAProxy as the size of the response grows. For example, I ran four different tests; here are the results:

        Response 15kb through HAProxy:
            Avg. response time: .34 secs
            Transaction rate:   763 trans/sec
            Throughput:         11.08 MB/sec

        Response 2kb through HAProxy:
            Avg. response time: .08 secs
            Transaction rate:   1171 trans/sec
            Throughput:         2.51 MB/sec

        Response 15kb directly to server:
            Avg. response time: .11 secs
            Transaction rate:   1046 trans/sec
            Throughput:         15.20 MB/sec

        Response 2kb directly to server:
            Avg. response time: .05 secs
            Transaction rate:   1158 trans/sec
            Throughput:         2.48 MB/sec

    All transactions are HTTP requests. As you can see, the difference in response times is much bigger when the response is larger than when it is smaller. I understand there will be a slight delay when using HAProxy. Not sure if it matters, but the tests were run using siege, and during the tests there was only one server behind HAProxy (the same one used in the direct-to-server tests). Here is my haproxy.config file:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 10000
            user haproxy
            group haproxy
            daemon
            #debug

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            option httpclose
            maxconn 10000
            contimeout 10000
            clitimeout 50000
            srvtimeout 50000
            balance roundrobin
            stats enable
            stats uri /stats

        listen lb1 10.1.10.26:80
            maxconn 10000
            server app1 10.1.10.200:8080 maxconn 5000

    I couldn't find much in this file in the way of options that would help my problem. I have heard suggestions that I may have to adjust a few of my sysctl settings. I could not find a lot of information on this, however; most documentation covers sysctl for Linux 2.4 and 2.6, and I am running 3.2 (Ubuntu Server 12.04), which seems to auto-tune, so I have no clue what I should or shouldn't be changing. Most settings changes I tried had no effect, or a negative effect, on performance. Just a note: this is a very preliminary test, and my hope is that at deployment time my HAProxy will be able to balance 10k-20k requests/sec to many servers, so if anyone could provide information to help me reach that goal, it would be much appreciated. Thank you very much for any information you can provide. And if you need any more information from me, please let me know; I will get you anything I can.
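
    Two things commonly tried here -- a hedged sketch, not a verified fix for this setup: replacing option httpclose (which tears down every connection on both sides after each request) with option http-server-close, so client-side keep-alive survives, and raising the kernel's accept backlogs:

        # haproxy.config, defaults section: keep client connections alive,
        # close only the server side after each request
        option http-server-close

        # /etc/sysctl.conf (apply with: sysctl -p); values are illustrative
        net.core.somaxconn = 4096
        net.ipv4.tcp_max_syn_backlog = 8192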

  • PHP app breaks on Nginx, but works on Apache

    - by rizon1990
    I want to migrate a PHP application from Apache to Nginx. The problem is that the app breaks, because the routing doesn't work anymore, and I'm not exactly sure how to fix it. The PHP application includes some .htaccess files and I tried to convert those to Nginx. The first one is in the document root:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ public/ [L]
            RewriteRule (.*) public/$1 [L]
        </IfModule>

    The second one is in /public/:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            # Rewrite all other URLs to index.php/URL
            RewriteRule ^(.*)$ index.php?url=$1 [PT,L]
        </IfModule>
        <IfModule !mod_rewrite.c>
            ErrorDocument 404 index.php
        </IfModule>

    The third and last one is:

        deny from all

    My nginx version of it looks like the following:

        #user nobody;
        worker_processes 1;
        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            gzip on;

            server {
                listen 8080;
                server_name localhost;
                root /Library/WebServer/Documents/admin;

                location / {
                    index index.php;
                    rewrite ^/$ /public/ break;
                    rewrite ^(.*)$ /public/$1 break;
                }

                location /public {
                    if (!-e $request_filename) {
                        rewrite ^(.*)$ /index.php?url=$1 break;
                    }
                }

                location /library {
                    deny all;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }

                location ~ \.php$ {
                    root /Library/WebServer/Documents/admin;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }
        }

    The problem I face is that the routing is broken and just returns a 404 page instead. Hopefully someone has an idea and knows how to fix it ;) Thanks

    EDIT: I got it working with this config:

        location /library {
            deny all;
        }

        location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
            access_log off;
            rewrite ^(.*)$ /public/$1 break;
        }

        location / {
            rewrite ^/(.+)$ /index.php?url=$1 last;
        }

    I'm sure there are better solutions, and I'm open to suggestions.
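
    A cleaner equivalent worth trying -- a sketch, untested against this particular app -- is try_files, which checks for a real file under /public and otherwise hands the request to the front controller, with no if or rewrite chains at all:

        location / {
            # serve static files from /public when they exist,
            # otherwise fall back to the front controller
            try_files /public$uri /public$uri/ /index.php?url=$uri&$args;
        }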

  • Updating the $PATH for running an command through SSH with LDAP user account

    - by Guillaume Bodi
    Hi all, I am setting up a Mac OS X 10.6 server to host Git repositories. As such, we need to push commits to the server through SSH. The server has only an admin account and uses a user list from an LDAP server. Now, since access is through a non-interactive shell, git operations are not able to complete, because the git executables are not in the default path. As the users are network users, they do not have a local home folder, so I cannot use a ~/.bashrc or similar solution. I browsed several articles here and there but could not get this working in a nice and clean setup. Here is the info on the methods I have gathered so far:

    I could update the default PATH environment to include the git executables folder. However, I could not manage to do it successfully. Updating /etc/paths didn't change anything, and since it's not an interactive shell, /etc/profile and /etc/bashrc are ignored.

    From the ssh manpage, I read that a BASH_ENV variable can be set to get an optional script to be executed. However, I cannot figure out how to set it system-wide on the server. If it needs to be set up on the client machine, this is not an acceptable solution. If someone has some info on how it is supposed to be done, please, by all means!

    I can fix this problem by creating a .bashrc with the PATH correction in the system root (since all network users would start there, as they do not have a home). But it just feels wrong. Additionally, if we do create a home folder for a user, the git command would fail again.

    I can install a third-party application to set up hooks on login and then run a script creating a home directory with the necessary path corrections. This smells like a backyard-tinkering and duct-tape solution.

    I can install a small script on the server and ForceCommand sshd to this script on login. This script then looks for a command to execute ($SSH_ORIGINAL_COMMAND) and triggers a login shell to run this command, or just triggers a regular login shell for an interactive session. The full details of this method can be found here: http://marc.info/?l=git&m=121378876831164

    The last one is the best method I have found so far. Any suggestions on how to deal with this properly?
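
    A minimal version of that last approach -- a sketch only; the script path and git install location are illustrative assumptions. In /etc/sshd_config:

        ForceCommand /usr/local/sbin/ssh-path-wrapper

    And the wrapper itself:

        #!/bin/bash
        # /usr/local/sbin/ssh-path-wrapper: put git on PATH for every SSH
        # session, interactive or not
        export PATH="/usr/local/git/bin:$PATH"
        if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
            exec /bin/bash -c "$SSH_ORIGINAL_COMMAND"   # remote command (e.g. git-receive-pack)
        else
            exec /bin/bash -l                           # interactive login
        fi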

  • Core i7 c1e and speedstepping - BSOD on shutdown

    - by DeaconDesperado
    I'm having an interesting problem with my recent Core i7 digital audio workstation build that I am curious to see if others have encountered. First, here are the specs on the machine:

        ASUS P6TD Deluxe Intel X58 Socket LGA1366 motherboard
        Intel Core i7-950 3.06GHz 8M LGA1366 CPU
        CORSAIR DOMINATOR 6GB (3 x 2GB) 240-pin DDR3 SDRAM, DDR3 1600
        Western Digital Caviar Black WD5001AALS 500GB

    Plus a couple of ASUS optical drives and a 750W Corsair PSU, running Windows 7 x64. All this is connected to the nefarious Digi 002 FireWire audio interface for use with Pro Tools. I mostly followed the specs posted by many other i7 users in the Digidesign community, who pooled their collective knowledge in this thread. After completing my build, I fell victim to the "UD5 squeal" described in that forum thread. So, taking the advice posted there, I disabled C1E advanced halt state and Intel SpeedStep (I would likely have done this anyway to maintain a stable clock; power consumption isn't really a relevant concern on this machine). I enabled XMP to set the RAM timings properly as well.

    What I am experiencing is a BSOD upon shutdown, but only immediately after Windows fully exits and ends all processes. The error is a MACHINE_CHECK_EXCEPTION 0x000000. The funny thing is that it is extremely intermittent and only occurs if the shutdown immediately followed a period of relative idleness. It does not generate a minidump, I suspect because Windows monitoring has terminated by the time this error occurs. No damage is evident, and one can simply turn off manually and the system will act as though a proper shutdown had occurred. If anything it is an annoyance; I just want to be certain it is not affecting my long-term stability.

    I have read that the i7-950 does not like DRAM voltages past 1.65, but that they are acceptable if they are within .5 of the BCLK setting. I have tried disabling XMP and setting all timings to auto, and the problem still manifests in an identical way. I suspect that the CPU idleness preceding shutdown is the determining factor, as C1E and SpeedStep are both settings intended to modify handling of this state. Any suggestions or prior experiences would be greatly appreciated.

    EDIT: The behavior very closely resembles what's described in this thread: http://www.tomshardware.com/forum/12003-63-shut-problem-windows The benign nature of it is identical. I can't seem to download the hotfix cited there, however.

  • Too many TIME_WAIT state connections!

    - by Hamza
    I've been reading about this everywhere all day, and from what I've gathered, TIME_WAIT is a relatively harmless state. It's supposed to be harmless even when there's too many. But if they're jumping to the numbers I've been seeing for the past 24 hours, something is really wrong!

        [root@1 ~]# netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n
              1 established)
              1 Foreign
             12 CLOSE_WAIT
             15 LISTEN
             64 LAST_ACK
            201 FIN_WAIT2
            334 CLOSING
            605 ESTABLISHED
            816 SYN_RECV
            981 FIN_WAIT1
          26830 TIME_WAIT

    That number fluctuates from 20,000 to 30,000+ (so far, the maximum I've seen it go is 32,000). What worries me is that they're all different IP addresses from all sorts of random locations. Now this is supposed to be (or was supposed to be) a DDoS attack. I know this for a fact, but I won't go into the boring details. It started out as a DDoS and it did impact my server's performance for a couple of minutes. After that, everything was back to normal. My server load is normal. My internet traffic is normal. No server resource is being abused. My sites load fine.

    I also have iptables disabled. There's an odd issue with that too. Every time I enable the firewall/iptables, my server starts experiencing packet loss. Lots of it. About 50%-60% of packets are lost. It happens within an hour or within a few hours of enabling the firewall. As soon as I disable it, ping responses from all locations I test them from start clearing up and get stable again. Very strange.

    The TIME_WAIT state connections have been fluctuating at those numbers since yesterday. For 24 hours now, I've had that, and although it hasn't impacted performance in any way, it's disturbing enough. My current tcp_fin_timeout value is 30 seconds, down from the default 60 seconds. However, that seems to not help at all. Any ideas, suggestions? Anything at all would be appreciated, really!
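
    The usual knobs for keeping TIME_WAIT under control -- a sketch; the values are illustrative, and tcp_tw_recycle is deliberately left out because it breaks clients behind NAT:

        # /etc/sysctl.conf (apply with: sysctl -p)
        net.ipv4.tcp_tw_reuse = 1             # let outbound connections reuse TIME_WAIT sockets
        net.ipv4.tcp_max_tw_buckets = 65536   # hard cap on TIME_WAIT sockets
        net.ipv4.ip_local_port_range = 1024 65535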

  • opening adobe reader results in infinite explorer.exe process creation loop

    - by irrational John
    First, apologies if the answer to this is only a Google away. I tried, honest I did, but I wasn't able to find anything about this problem posted elsewhere. I'm using Adobe Reader v9.3.2 in Windows 7 Home Premium 64-bit. If you want more system details, then just request them.

    What happens is that when I attempt to open a PDF by clicking "Open" on it, then (1) Adobe Reader never opens, and (2) the explorer.exe program is (apparently) recursively opened. I base this on opening the Task Manager and seeing a long list of explorer.exe processes under the "Processes" tab. Usually there is only one. When I recreate this problem, the list of explorer.exe processes is at least a page or two long (too many to bother counting). I "correct" this problem by logging off and then logging back on, which kills all the explorer.exe tasks. Unfortunately, I don't know another way to terminate them all.

    Now here's the curious part. This only happens when I attempt to "Open" a PDF file. If instead I use the context menu (right mouse click on the PDF) and select "Open with" and "Adobe Reader 9.3", then Adobe Reader opens the file with no problem. It seems that there is something wrong with the setting for the default open action for PDF files. However, I have been unable to fix this by changing the Windows settings. Here is what I have tried:

    When I open Control Panel > All Control Panel Items > Default Programs > Set Associations, I do not find an entry for file type .pdf; there are only entries for .pdfxml and .pdx.

    When I use "Open with" on a PDF file and select "Choose default program", the check box for "Always use the selected program to open this kind of file" is disabled (greyed out).

    I have uninstalled and reinstalled Adobe Reader, but the problem persists.

    While obviously no lives are at stake here, this problem is annoying the frickin' heck out of me. If I forget and recreate this bug, then I have to stop everything I'm doing to stop it. Any suggestions on how I might go about fixing this?
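
    One thing worth trying from an elevated Command Prompt -- a sketch; the ProgID and install path below are assumptions based on a default 32-bit Reader 9 install -- is recreating the .pdf association by hand:

        assoc .pdf=AcroExch.Document
        ftype AcroExch.Document="C:\Program Files (x86)\Adobe\Reader 9.0\Reader\AcroRd32.exe" "%1"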

  • Computer experiencing slowdowns and lockups despite low cpu useage

    - by user157145
    My setup:

        i5-2300
        NVIDIA GTX 550 Ti
        6 GB RAM
        600W OCZ modular PSU

    Recently reformatted, and already experiencing drastic slowdown as soon as Windows comes up, including repeated lockups with multiple programs reporting that they are not responsive, then recovering after 10-30 seconds. I've checked the memory and hard drive, both of which come out fine. Despite my plethora of worthless antivirus software, I'm forced to assume that my illicit downloading practices have led me into some computer trouble that I can't seem to pin down. I have used CCleaner, Spybot Search & Destroy, and Malwarebytes, all of which have found nothing to indicate what is causing this massive slowdown. In addition, according to my resource manager, my computer is operating at a load of only 30-50 percent CPU usage and 60 percent RAM usage, but taking 5-10 seconds to load files and open folders, with repeated lockups of multiple programs, especially Firefox, which seems to go unresponsive every 2-3 minutes. Any help would be appreciated. I used a program called OTL by OldTimer, but can't make any sense of the results I was given. Thank you for taking the time to read this.

    I have Avast, but it didn't find anything when I had it do a full system scan, so I'm thinking it's clueless (likewise Norton, AVG, and Ad-Aware). I also have MSE, but it has yet to complete a full scan, it takes so long (I left it on last night, but when I woke up my computer had a problem and had to restart). My hard drive has 300 GB out of 1 TB free and I already used HD Tune Pro, which said my hard drive was fine; it's not an SSD. Also, I'm a noob at computers and only have the drive that is currently inside the computer.

    I'm not sure if stuttering is the right word for what I'm suffering. The problem is that while typing these responses, Firefox has gone "not responsive" at least 5 times, each for about 5-10 seconds. When I tried Ctrl-Alt-Delete to bring up the Windows Task Manager, it took 20 seconds. Essentially, the computer is super slow to bring up anything, or to take any action whatsoever that opens a program or file, and it repeatedly locks up so that I can't even click on whatever I'm trying to do. The confusing thing about these incidents is that they happen right after restarting, when there are minimal programs running and the computer and memory load is light.

  • No login prompt displayed after updating Ubuntu 10.04, broken gdm

    - by cliff
    So here's what happens: I updated my system the other day and was prompted for a reboot to complete the update, but I was in the middle of working, so I delayed it until after I was done. I reboot and it's broken :(. It appears to boot normally, with the following exceptions:

    The purple Ubuntu load screen no longer displays (though it did for the first couple of times I tried to get in). I hear the login prompt sound, but no login prompt appears. Nor is it simply "invisible" -- pressing Enter, typing my password, and pressing Enter again do nothing. Normally my Bluetooth mouse is functional at this point, but it is not.

    GRUB displays recovery options for my current kernel and for an older one (2.6.32-24). Trying to boot into .32-24 gives me an error saying "udevadm can't do something while udev is not configured". So I tried the solutions listed here: http://superuser.com/questions/195786/ubuntu-update-went-wrong-pc-doesnt-boot-how-can-i-repair-it Nothing I tried seemed to work, and after further Googling my hunch is that it's a problem with gdm. Please correct me if I'm wrong; I don't know all that much about how Linux/Ubuntu systems work just yet.

    Things I'm able to do: boot to a live CD. Ctrl-Alt-F2 after that login sound plays brings me to a console login, which I can complete successfully (it's how I tried the solutions above); this works only under the current kernel. A hack I'd be willing to explore is removing the login prompt from the console, but I'd prefer to "simply" fix what's wrong. Like that guy, I need to repair the system rather than reinstall.

    System: Dell Inspiron 1525, Core 2 Duo, proprietary driver for Broadcom 43xx wireless, dual-boot with Windows 7 (which is how I'm posting this; unfortunately I only have this machine, and any experimenting requires constant reboots into Windows/brokenbuntu). The last package installed was Moonlight, but it appeared to install properly. Kernel: 2.6.32-25.

    Edit: After working with Karl's suggestions, it seems that the problem is with gdm: error exit status 245 when attempting to sudo apt-get install --reinstall gdm, and also an error processing gdm when running sudo apt-get -f install. How do I reinstall or repair gdm so that I can get back into my machine?
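
    A typical console sequence for repairing a package stuck half-configured -- a sketch; the exact dpkg error text will say whether gdm itself or one of its dependencies is the real culprit:

        sudo dpkg --configure -a            # finish any interrupted configuration runs
        sudo apt-get clean                  # throw away possibly corrupt downloaded .debs
        sudo apt-get -f install             # let apt repair broken dependencies
        sudo apt-get install --reinstall gdm
        sudo service gdm restart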

  • PCI-DSS compliance for business with only swipe terminals [migrated]

    - by rowatt
    I support the IT infrastructure for a small retail business which is now required to undergo a PCI-DSS assessment. The payment service and terminal provider (Streamline) has asked that we use Trustwave to do the PCI-DSS certification. The problem I face is that if I answer all questions and follow Trustwave's requirements to the letter, we will have to invest significantly in networking equipment to segment LANs and/or do internal vulnerability scanning, while at the same time Streamline assures me that the terminals we have (Verifone VX670-B and MagIC3 X-8) are secure, don't store any credit card information, and are PCI-DSS compliant, so by implication we don't need to take any action to ensure their network security. I'm looking for any suggestions as to how we can most easily meet the networking requirements for PCI-DSS.

    Some background on our current network setup:

    - single wired LAN, also with WiFi turned on (though if this creates any PCI-DSS complexities we can turn it off)
    - single Netgear ADSL router. This is the only firewall we have in place, and the firewall is in its out-of-the-box configuration (i.e. no DMZ, SNMP etc). Passwords have been changed though :-)
    - a few Windows PCs and 2 Windows-based tills, none of which ever see any credit card information at all
    - two swipe terminals. Until a few months ago (before we were told we had to be PCI-DSS certified) these terminals did auth/capture over the phone. Streamline suggested we move to their IP Broadband service, which instead uses an SSL-encrypted channel over the internet to do auth/capture, so we now use that service.
    - we don't do any ecommerce or receive payments over the internet. All transactions are either cardholder present, or MOTO with details given over the phone and typed directly into the terminal. We're based in the UK.

    As I currently understand it, we have three options in order to get PCI-DSS certification:

    1. segment our network so the POS terminals are isolated from all PCs, and set up internal vulnerability scanning on that network
    2. don't segment the network, and do more internal scanning and more onerous management of PCs than I think we need (for example, though the tills are Windows-based, they are fully managed, so I have no control over software update policies, antivirus etc; all PCs have antivirus (MSE) and Windows updates automatically applied, but we don't have any centralised management)
    3. go back to auth/capture over phone lines

    I can't imagine we are the first merchant to be in this situation. I'm looking for any recommendations for a simple, cost-effective way to be PCI-DSS compliant -- either by doing 1 or 2 above with (hopefully) simple and inexpensive equipment/software, or any other ways if there's a better way to do this. Or... should we just go back to the digital stone age and do auth/capture over the phone, which means we don't need to do anything on our network to be PCI-DSS certified?

  • Event ID 9331 MSExchangeSA & Event ID 9335 MSExchangeSA

    - by George
    I get these two Exchange 2010 global address book related event IDs:

    Event ID 9331 MSExchangeSA:

        OABGen encountered error 80004005 (internal ID 50101f1) accessing the public folder
        database while generating the offline address list for address list '/'.
        - \Default Offline Address List

    and Event ID 9335 MSExchangeSA:

        OABGen encountered error 80004005 while cleaning the offline address list public
        folders under /o=xxxxx xxxx/cn=addrlists/cn=oabs/cn=Default Offline Address List.
        Please make sure the public folder database is mounted and replicas exist of the
        offline address list folders. No offline address lists have been generated. Please
        check the event log for more information.
        - \Default Offline Address List

    It is Exchange 2010 SP2 sitting on Windows 2008 Enterprise Edition. Essentially the issue is that the global address book is not being updated on Outlook clients. We are using Outlook 2007 and 2010. So far I have tried running the following command:

        Update-FileDistributionService -Identity ExchangeServer -Type "OAB"

    And I tried this solution as well:

    1. Make sure the Microsoft Exchange System Attendant is running. It is set to start automatically by default, but it doesn't; this is a known issue. Start this service manually. When it is running, you will not get an error when trying to update the GAL.
    2. "Apply" any changes made to any address lists before the GAL will update Outlook properly. In Organization Configuration > Mailbox in EMC, view the properties of the Default Global Address Book in the Offline Address Book tab. In the properties window, select the Address Lists tab. This shows which address lists make up the GAL.
    3. Close the properties window and select the Address Lists tab in Organization Configuration > Mailbox. Right-click each address list used by the default GAL and click "Apply" (make sure the "Immediately" radio button is checked).
    4. Last, go back to the Offline Address Book tab, right-click the GAL and select "Update". After a few send/receives in the Outlook clients, their global address list should update to show the latest changes.

    Neither one of those solutions helped, so I am not really sure what to do here. Also, I am aware of changing the registry on each local computer, but that would be close to impossible, as we have 8 offices in 3 different countries. Any suggestions?

    EDIT 7.XII.2012 @ 10.35: I forgot to mention that we did rebuild the address book, and that didn't help.
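
    If the EMC route keeps failing, the equivalent Exchange Management Shell steps are worth running by hand -- a sketch; the OAB name is the default one named in the events:

        # confirm the public folder database is actually mounted
        Get-PublicFolderDatabase -Status | Format-List Name,Mounted

        # force a rebuild of the OAB, then push it out to the CAS servers
        Update-OfflineAddressBook -Identity "Default Offline Address List"
        Get-ClientAccessServer | Update-FileDistributionService -Type "OAB"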

  • New monitor connected to HDMI adaptor doesn't show output after booting

    - by Paul
    Hello out there in the multiple-monitors world. I am a very old newbie in your world and need help. I just purchased a new Asus VH236H monitor and hooked it up to the HDMI port of an ATI Radeon HD 4300/4500 Series display adapter. I left the old Princeton LCD19 (TMDS) hooked up to the DVI port of the same display adapter. Both monitors displayed the boot sequence after I fired up good old Sarastro2 (Asus P5Q Pro Turbo -- Dual Core E5300 -- 2.60 GHz). The Asus lagged half a second behind the Princeton until the Windows 7 Ultimate SP1 boot-up was complete. Then the Asus displayed "HDMI NO SIGNAL" and went into hibernation. The Princeton stayed lit up as before.

    Both monitors are displayed on the "Screen Resolution" setup display, and I played around with them for a while. The only thing I accomplished was to shove the desktop icons from the Princeton to the still-hibernating Asus. "Multiple displays:" is set to "Extend these displays", the orientation is "Landscape", and the resolutions are both set to the "recommended" one. Both monitors show that they are working properly in the advanced properties display.

    What am I doing wrong? What am I missing? Never mind the opinions about the different resolutions of the two monitors; I can always unhook the Princeton and give it to a Goodwill store if I do not like the setup. I just would like to make it work. Any constructive help is very much appreciated. Thank you.

    Thank you, Anees Bakrain. Only the ATI Radeon HD 4300/4500 Series adapter is displayed in the Device Manager, so I have to assume that the onboard display adapter is not active. All 40 drivers of Sarastro2 are up to date, and the HDMI cable cannot be the problem, because both monitors displayed the boot sequence up to the moment when Windows 7 was loaded completely. That was the moment when the Asus monitor lost its signal. Both connectors, HDMI and DVI, are connected, and removing the DVI connector would not solve my problem of running both monitors simultaneously.

    However, your suggestions shifted my seventy-one-year-old brain into the next gear. The only question remaining is why the signals to the Asus monitor stop after the boot sequence is complete. The ATI Radeon HD 4300/4500 Series adapter seems to be capable of sending simultaneous HDMI and DVI signals, which is what it does during the boot sequence. Why the signals change after the boot sequence is complete is the key question, the crux of the matter. Is this a correct assumption, slhck?

  • Samba share not accessible from Win 7 - tried advice on superuser

    - by Roy Grubb
    I have an old Red Hat Linux box that I use, amongst other things, to run Samba. My Vista and remaining Windows XP PCs can access the password-protected Samba shares. I just set up a new Windows 7 64-bit Pro PC. Attempts to access the Samba shares by clicking on the Linux box's icon in 'Network' from this machine gave a "Logon failure: unknown user name or bad password." message when I gave the correct credentials.

    So I followed the suggestions in "Windows 7, connecting to Samba shares" (I also checked here, but found LmCompatibilityLevel was already 1). This got me a little further. If I click on the Linux box's icon in 'Network' from this machine, I now see icons for the shared directories. But when I click on one of these, I get "\\LX\share is not accessible. You might not have permission..." etc. I tried making the Windows 7 password the same as my Samba password (the user name was already the same). Same result.

    The Linux box does part of what I need for ecommerce -- the in-house part; it's not accessible to the Internet. As my Linux fu is weak, I have to avoid changes to the Linux box, so I'm hoping someone can tell me what to do to Windows 7 to make it behave like XP and Vista when accessing this share. Help please!? Thanks

    Thanks for replying @Randolph. I had set 'Network security: LAN Manager authentication level' to "Send LM & NTLM - use NTLMv2 session security if negotiated" based on the advice in "Windows 7, connecting to Samba shares" and had restarted the machine, but that didn't work for me. I'll try playing with other Network security values.

    I have now tried the following:

    - Network security: Allow Local System to use computer identity for NTLM: changed from Not Defined to "Enabled". Restarted machine. Still says "\\LX\share is not accessible. You might not have permission..." etc.
    - Network security: Restrict NTLM: Add remote server exceptions for NTLM authentication (added LX). Restarted machine. Still says "\\LX\share is not accessible. You might not have permission..." etc.

    I can't see any other Network security settings that might affect this. Any other ideas please? Thanks Roy
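
    For reference, the LAN Manager authentication level can also be set directly in the registry when the Local Security Policy snap-in isn't available -- a sketch; back up the key first, and reboot afterwards:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
        "LmCompatibilityLevel"=dword:00000001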

  • How should I convert a physical drive to a VHD for use with VirtualPC?

    - by RBerteig
    I have the hard disks from a PC that was happily running Windows Me until it suffered an unknown hardware failure. The drives are intact and can be mounted and read on other PCs. We have data backups, but there is licensed software installed that may not be possible to migrate to newer versions running on a more modern platform, making the idea of just booting a virtual image attractive. Is it possible to make VHDs from the drives such that I can boot them in VirtualPC? If not VirtualPC, would it be possible in any other virtualization tool?

    Edit: Some more details... The system was running Windows Me, upgraded from Windows 95 (or possibly 98). It can't have been more than a Pentium II, but I will have to look at the motherboard to confirm that. There were no "exotic" devices installed, and nothing beyond the usual legacy stuff that would need to survive into a virtual machine. The licensed software did not have a dongle, so I won't need to worry about virtualizing a physical dongle of some kind; licenses were probably tied to the disk serial number. There were two HDs, both IDE. The boot disk is about 6GB, and the spare data disk is 12GB, but nearly empty.

    I have a small bias in favor of VirtualPC, just because it's free and I've used it successfully in the past. But this is a good excuse to revisit the state of the art. I do know from direct experience that it is possible to install and boot DOS 5.0 and Win95 in VirtualPC, but the VM extensions weren't available, so the experience isn't as seamless as I would have liked. A very old DirectX game that failed miserably under XP SP2 runs really nicely on that VM, and actually plays better in a lot of ways than it did on period hardware, so that gives me hope that this is possible.

    Edit 2: Well, I'm closer than I was when I asked... so thanks to all for the helpful suggestions and hints about what I should be trying. I used WinImage to copy the disks, and VirtualPC 2007 to attempt to boot. So far, I have it booting in safe mode, but hanging with a black screen otherwise. I strongly suspect that the copy of Artisoft LANtastic 8.0 (anyone else remember them?) that is still installed for networking with even older PCs that mostly don't exist any more is the culprit there. In my infinite free time, I will try to resolve the differences between a Safe Mode boot and a normal boot, and feel that it is likely to yield to pressure. I'd accept more than one answer if I could... this isn't as black and white a question as the one-accepted-answer convention assumes.
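
    Another path worth knowing about -- a sketch; the drive letter and output path are illustrative -- is Sysinternals Disk2vhd, which can image an attached physical disk straight into a VHD from whatever working Windows machine the old drive is plugged into:

        disk2vhd E: C:\images\winme-boot.vhd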

  • Recover harddrive data

    - by gameshints
    I have a Dell laptop that recently "died" (it would get the blue screen of death upon starting), and the hard drive was making weird cyclic clicking noises. I wanted to see if I could use some tools on my Linux machine to recover the data, so I plugged it in there. If I run fdisk I get:

        Disk /dev/sdb: 20.0 GB, 20003880960 bytes
        64 heads, 32 sectors/track, 19077 cylinders
        Units = cylinders of 2048 * 512 = 1048576 bytes
        Disk identifier: 0x64651a0a

        Disk /dev/sdb doesn't contain a valid partition table

    Fine, the partition table is messed up. However, if I run testdisk in an attempt to fix the table, it freezes at this point, making the same cyclical clicking noises:

        Disk /dev/sdb - 20 GB / 18 GiB - CHS 19078 64 32
        Analyse cylinder   158/19077: 00%

    I don't really care about the hard drive working again, just the data, so I ran gpart to figure out where the partitions used to be. I got this:

        dev(/dev/sdb) mss(512) chs(19077/64/32)(LBA) #s(39069696) size(19077mb)
        * Warning: strange partition table magic 0x2A55.
        Primary partition(1)
           type: 222(0xDE)(UNKNOWN)
           size: 15mb #s(31429) s(63-31491)
           chs:  (0/1/1)-(3/126/63)d (0/1/32)-(15/24/4)r
           hex:  00 01 01 00 DE 7E 3F 03 3F 00 00 00 C5 7A 00 00
        Primary partition(2)
           type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX) (BOOT)
           size: 19021mb #s(38956987) s(31492-38988478)
           chs:  (4/0/1)-(895/126/63)d (15/24/5)-(19037/21/31)r
           hex:  80 00 01 04 07 7E FF 7F 04 7B 00 00 BB 6F 52 02

    So I tried to mount just the old NTFS partition, but got an error:

        sudo mount -o loop,ro,offset=16123904 -t ntfs /dev/sdb /mnt/usb
        NTFS signature is missing.

    Ugh. Okay. But then I tried to get a raw data dump by running:

        dd if=/dev/sdb of=/home/erik/brokenhd skip=31492 count=38956987

    But the file only got up to 59885568 bytes, with the same cyclical clicking noises. Obviously there is a bad sector, but I don't know what to do about it! The data is still there... if I view that 57MB file in a text editor, I can see raw data from files. How can I get my data back? Thanks for any suggestions.

    Solution: I was able to recover about 90% of my data:

    1. Froze the hard drive in the freezer
    2. Used ddrescue to make a copy of the drive
    3. Since ddrescue wasn't able to get enough of my drive to use testdisk to recover my partitions/file system, I ended up using photorec to recover most of my files
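
    The ddrescue step looks roughly like this -- a sketch; device, image path, and log path are illustrative:

        # First pass: grab the easy sectors fast, skipping bad areas (-n)
        ddrescue -n /dev/sdb /mnt/recovery/disk.img /mnt/recovery/disk.log
        # Second pass: go back and retry the bad areas a few times
        ddrescue -r 3 /dev/sdb /mnt/recovery/disk.img /mnt/recovery/disk.log

    The log file lets ddrescue resume and remember which sectors are still missing, so both passes build up the same image.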

  • Unable to delete all partitions on flash drive using Windows 7 OS??

    - by irrational John
    Recently I purchased an ADATA C802 8GB flash drive. Since the drive was new, I decided to run some of the HD Tune Pro (v4.50) performance tests on it, mostly just for the heck of it. To avoid accidentally destroying data, HD Tune refuses to write to a drive unless there are no partitions on the drive; if you do attempt to write to a drive with partitions, it posts the message "Writing is disabled. To enable writing please remove all partitions."

    As you would expect, the ADATA came formatted with a single primary FAT32 partition in the Master Boot Record. But a number of unexpected things happened when I attempted to delete that partition. The first thing I tried was to use the Windows 7 (64-bit) Disk Management tool (diskmgmt.msc) to delete the partition. It would not let me: the context menu choice to delete that volume was not available. Next I opened a command prompt window with admin authority and ran diskpart. Diskpart deleted the volume for me. However, when I attempted to run an HD Tune write test on the drive, I still got the "Writing is disabled" message. Huh??? So I fired up a utility I have which allows viewing drives at the sector level and verified that the partition table in the Master Boot Record was empty. No partitions. Yet HD Tune still thought there were partitions on the drive? So why was I still getting the "Writing is disabled" message from HD Tune Pro? And why wouldn't the Windows 7 Disk Management tool let me change the partitions on this drive?

    After doing the above, I plugged the ADATA into my MacBook. I was then able to format it as either a GPT or MBR partitioned drive with no problems. I am not looking for suggestions on how to format this drive; I can do that. What I do not understand, and was hoping I might get insight into, is why this drive behaves so strangely under Windows 7. And BTW, what's up with HD Tune Pro?

    BTW, if I plug the drive I formatted on my MacBook back into my Windows 7 64-bit system, I still run into roadblocks with the Disk Management tool. For example, I cannot delete all the GPT partitions on the ADATA so I can convert it into an MBR drive. I followed Microsoft's instructions; the instructions just do not work with this ADATA flash drive. Anyone know what's up with this? It makes no sense to me. Has something changed in Windows 7 (Vista)??
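
    For completeness, the usual diskpart sequence for wiping a removable drive back to a single FAT32 partition -- a sketch; double-check the disk number first, because clean is destructive:

        DISKPART> list disk
        DISKPART> select disk 2
        DISKPART> rem verify the disk above is the flash drive before proceeding
        DISKPART> clean
        DISKPART> create partition primary
        DISKPART> format fs=fat32 quick
        DISKPART> assign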

  • Why do some games randomly turn my screen a random solid color?

    - by Emlena.PhD
    When playing some games my computer will randomly have an error that I cannot fix without turning it off and back on again. The screen changes to one solid color, which varies (off the top of my head I can remember seeing solid green, magenta, etc.), and the sound blares a single tone. The sound sometimes briefly restores and I can still hear the game sounds, and even hear and still be heard by people in my Mumble channel, but the screen doesn't right itself, so I'm still blind. What's more, this happens in some games but not in others -- while the game is actually running, not while I'm still in the menu. However, it does happen if I'm AFK or idle but the game world is still rendering.

    Games where the error occurs:

    - League of Legends
    - World of Warcraft
    - Trine
    - The Sims 2
    - Dungeon Defenders

    Safe games, where it has never occurred:

    - Tribes: Ascend
    - Star Wars: The Old Republic
    - Battlefield 3

    So relatively older games cause the problem while newer games do not? I cannot predict when it will happen; it just seems random. However, if it happens and I try playing the same game further after a restart, it does appear to occur more frequently after the first time. But if I switch to a safe game, it doesn't continue happening. Both of my RAM sticks appear fine: with their positions flipped or either one on its own, games still run and the computer still boots. I would think overheating, but then why not all games? Also, sometimes it happens immediately after I start playing, within seconds of the 3D world booting up. I'm looking to upgrade very soon, so I want to figure out what component or software is fubar and replace/repair it. Any suggestions or recommendations of tools would be helpful. Below is some system information; DxDiag does not detect any problems.

        Operating System: Windows 7 Home Premium 64-bit (6.1, Build 7601) Service Pack 1 (7601.win7sp1_gdr.120305-1505)
        System Manufacturer: Gigabyte Technology Co., Ltd.
        System Model: EP45-UD3R
        BIOS: Award Modular BIOS v6.00PG
        Processor: Intel(R) Core(TM)2 Duo CPU E8500 @ 3.16GHz (2 CPUs), ~3.2GHz
        Memory: 4096MB RAM
        DirectX Version: DirectX 11
        DxDiag Version: 6.01.7601.17514 64bit Unicode
        Graphics card name: NVIDIA GeForce GTX 285
        Driver Version: 8.17.12.9610 (the error has occurred with several driver versions)
        Sound: I do not have a sound card; I use the motherboard's built-in sound

  • Proxy Error 502 "Reason: Error reading from remote server" with Apache 2.2.3 (Debian) mod_proxy and Jetty 6.1.18

    - by Martin
    Apache is receiving requests at port 80 and proxying them to Jetty at port 8080. The proxy server received an invalid response from an upstream server; the proxy server could not handle the request GET /.

    My dilemma: everything works fine normally (fast requests, and requests taking a few seconds or a few tens of seconds, are processed OK). Problems occur when request processing takes long (a few minutes?). If I instead issue the request directly to Jetty at port 8080, the request is processed OK. So the problem is likely to sit somewhere between Apache and Jetty, where I am using mod_proxy. How to solve this? I have already tried some "tricks" related to KeepAlive settings, without luck. Here is my current configuration; any suggestions?

        #keepalive Off                      ## I have tried this, does not help
        #SetEnv force-proxy-request-1.0 1   ## I have tried this, does not help
        #SetEnv proxy-nokeepalive 1         ## I have tried this, does not help
        #SetEnv proxy-initial-not-pooled 1  ## I have tried this, does not help
        KeepAlive 20                        ## I have tried this, does not help
        KeepAliveTimeout 600                ## I have tried this, does not help
        ProxyTimeout 600                    ## I have tried this, does not help

        NameVirtualHost *:80
        <VirtualHost _default_:80>
            ServerAdmin [email protected]
            ServerName www.mydomain.fi
            ServerAlias mydomain.fi mydomain.com mydomain www.mydomain.com
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyRequests Off
            ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600
            ProxyPassReverse / http://www.mydomain.fi:8080/
            RewriteEngine On
            RewriteCond %{SERVER_NAME} !^www\.mydomain\.fi
            RewriteRule /(.*) http://www.mydomain.fi/$1 [redirect=301L]
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
        </VirtualHost>

    Here is also the debug log from a failing request:

        74.125.43.99 - - [29/Sep/2010:20:15:40 +0300] "GET /?wicket:bookmarkablePage=newWindow:com.mydomain.view.application.reports.SaveReportPage HTTP/1.1" 502 355 "https://www.mydomain.fi/?wicket:interface=:0:2:::" "Mozilla/5.0 (Windows; U; Windows NT 6.1; fi; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10"
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: error reading status line from remote server www.mydomain.fi, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: Error reading from remote server returned by /, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
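
    Since requests sent straight to port 8080 succeed, the timeout that matters may be on the Jetty side: its connector closes connections that sit idle longer than maxIdleTime while a long request is being processed. A sketch of the relevant jetty.xml fragment, assuming the stock Jetty 6 SelectChannelConnector (the value is illustrative):

        <Call name="addConnector">
          <Arg>
            <New class="org.mortbay.jetty.nio.SelectChannelConnector">
              <Set name="port">8080</Set>
              <!-- allow ten minutes (in milliseconds) of idle time -->
              <Set name="maxIdleTime">600000</Set>
            </New>
          </Arg>
        </Call>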

  • How to resolve `bootpd` crashing constantly on Mac OS X 10.6.4 Snow Leopard Server?

    - by morgant
    I've got a Mac Pro running Mac OS X 10.6.4 Snow Leopard Server and it's recently started getting numerous 'kNetworkError's in Server Admin.app when viewing services. It's acting as a gateway w/NAT and has been so for quite some time. There is one glaring issue: bootpd crashes all the time, with the following errors in /var/log/system.log:

        Aug 12 16:54:59 servername bootpd[3572]: server starting
        Aug 12 16:54:59 servername bootpd[3572]: server name servername.domain.tld
        Aug 12 16:54:59 servername bootpd[3572]: interface en0: ip 10.0.1.9 mask 255.255.255.0
        Aug 12 16:54:59 servername bootpd[3572]: bsdpd: re-reading configuration
        Aug 12 16:54:59 servername bootpd[3572]: bsdpd: shadow file size will be set to 48 megabytes
        Aug 12 16:54:59 servername bootpd[3572]: bsdpd: age time 00:15:00
        Aug 12 16:54:59 servername bootpd[3572]: [3572] detected buffer overflow
        Aug 12 16:54:59 servername com.apple.launchd[1] (com.apple.bootpd[3572]): Job appears to have crashed: Abort trap
        Aug 12 16:54:59 servername com.apple.ReportCrash.Root[3571]: 2010-08-12 16:54:59.828 ReportCrash[3571:2807] Saved crash report for bootpd[3572] version ??? (???) to /Library/Logs/DiagnosticReports/bootpd_2010-08-12-165459_localhost.crash

    It is correctly configured to serve DHCP through en1 (not en0), the "LAN" port. This happens even with no hardware (even switches) connected to the "LAN" port. There are no DHCP clients listed. Oddly, the "Overview" shows 1 static map, but nothing is listed under "Static Maps" and there are no "Computers" in Open Directory. /var/db/dhcp_leases is empty. /Library/Logs/DiagnosticReports/bootpd_2010-08-12-165459_localhost.crash is as follows:

        Process:         bootpd [3572]
        Path:            /usr/libexec/bootpd
        Identifier:      bootpd
        Version:         ??? (???)
        Code Type:       X86-64 (Native)
        Parent Process:  launchd [1]

        Date/Time:       2010-08-12 16:54:59.713 -0400
        OS Version:      Mac OS X Server 10.6.4 (10F569)
        Report Version:  6

        Exception Type:  EXC_CRASH (SIGABRT)
        Exception Codes: 0x0000000000000000, 0x0000000000000000
        Crashed Thread:  0  Dispatch queue: com.apple.main-thread

        Application Specific Information:
        __abort() called

        Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
        0   libSystem.B.dylib   0x00007fff803c13d6 __kill + 10
        1   libSystem.B.dylib   0x00007fff80461913 __abort + 103
        2   libSystem.B.dylib   0x00007fff80456157 mach_msg_receive + 0
        3   libSystem.B.dylib   0x00007fff803b92cf __strncpy_chk + 14
        4   bootpd              0x0000000100014e5d PLCache_read + 782
        5   bootpd              0x0000000100004a3d BSDPClients_init + 68
        6   bootpd              0x00000001000053b5 bsdp_init + 2396
        7   bootpd              0x000000010000200b S_update_services + 1228
        8   bootpd              0x0000000100002344 S_server_loop + 571
        9   bootpd              0x0000000100003963 main + 1766
        10  bootpd              0x0000000100000984 start + 52

        Thread 0 crashed with X86 Thread State (64-bit):
          rax: 0x0000000000000000  rbx: 0x00007fff5fbfe220  rcx: 0x00007fff5fbfe218  rdx: 0x0000000000000000
          rdi: 0x0000000000000df4  rsi: 0x0000000000000006  rbp: 0x00007fff5fbfe240  rsp: 0x00007fff5fbfe218
           r8: 0x0000000000000001   r9: 0x0000000100114280  r10: 0x00007fff803bd412  r11: 0xffffff80002e1680
          r12: 0xffffffffffffffff  r13: 0x00007fff5fbfe330  r14: 0x00007fff5fbfe33b  r15: 0x00007fff7009bec0
          rip: 0x00007fff803c13d6  rfl: 0x0000000000000202  cr2: 0x000000010004c000

    Any thoughts or suggestions as to resolving this?
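
    The backtrace dies in PLCache_read during BSDPClients_init, which parses bootpd's on-disk databases -- consistent with that phantom "1 static map". A hedged guess worth testing (a sketch; move files aside rather than deleting them, and skip any mv whose file doesn't exist) is a corrupt entry in one of those files:

        sudo serveradmin stop dhcp
        # bootpd keeps static bindings in /etc/bootptab and BSDP client
        # state in /var/db/bsdpd_clients; stash both away if present
        sudo mv /etc/bootptab /etc/bootptab.bad
        sudo mv /var/db/bsdpd_clients /var/db/bsdpd_clients.bad
        sudo serveradmin start dhcp
        tail -f /var/log/system.log     # check whether bootpd now stays up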

  • How to get physical partition name from iSCSI details on Windows?

    - by Barry Kelly
    I've got a piece of software that needs the name of a partition in \Device\Harddisk2\Partition1 style, as shown e.g. in WinObj. I want to get this partition name from details of the iSCSI connection that underlies the partition. The trouble is that disk order is not fixed -- depending on what devices are connected and initialized in what order, it can move around. So suppose I have the portal name (DNS of the iSCSI target), target IQN, etc. I'd like to somehow discover which volumes in the system relate to it, in an automated fashion.

    I can write some PowerShell WMI queries that get somewhat close to the desired info:

        PS> get-wmiobject -class Win32_DiskPartition

        NumberOfBlocks   : 204800
        BootPartition    : True
        Name             : Disk #0, Partition #0
        PrimaryPartition : True
        Size             : 104857600
        Index            : 0
        ...

    From the Name here, I think I can fabricate the corresponding name by adding 1 to the partition number: \Device\Harddisk0\Partition1 -- Partition0 appears to be a fake partition mapping to the whole disk. But the above doesn't have enough information to map to the underlying physical device, unless I take a guess based on exact size matching. I can get some info on SCSI devices, but it's not helpful in joining things up (iSCSI target is Nexenta/Solaris COMSTAR):

        PS> get-wmiobject -class Win32_SCSIControllerDevice

        __GENUS    : 2
        __CLASS    : Win32_SCSIControllerDevice
        ...
        Antecedent : \\COBRA\root\cimv2:Win32_SCSIController.DeviceID="ROOT\\ISCSIPRT\\0000"
        Dependent  : \\COBRA\root\cimv2:Win32_PnPEntity.DeviceID="SCSI\\DISK&VEN_NEXENTA&PROD_COMSTAR...

    Similarly, I can run queries like these:

        PS> get-wmiobject -namespace ROOT\WMI -class MSiSCSIInitiator_TargetClass
        PS> get-wmiobject -namespace ROOT\WMI -class MSiSCSIInitiator_PersistentDevices

    These guys return information relating to my iSCSI target name and the GUID volume name respectively (a volume name like \\?\Volume{guid-goes-here}), but the GUID volume name is no good to me, and there doesn't appear to be a reliable correspondence between the target name and the volume that I can join on.

    I simply can't find an easy way of getting from an IQN (e.g. iqn.1992-01.com.example:storage:diskarrays-sn-a8675309) to the physical partitions mapped from that target. The way I do it by hand? I start Disk Management, and look for a partition of the correct size, verify that its driver says NEXENTA COMSTAR, and look at the disk number. But even this is unreliable if I have multiple iSCSI volumes of the exact same size. Any suggestions?
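
    One more joining point that may close the gap -- a sketch, untested; the class and property names come from the iSCSI initiator WMI provider, so treat them as assumptions to verify: MSiSCSIInitiator_SessionClass exposes a Devices array whose LegacyName is the \\.\PhysicalDriveN path, and N is the same disk number as in \Device\HarddiskN:

        $iqn = "iqn.1992-01.com.example:storage:diskarrays-sn-a8675309"
        Get-WmiObject -Namespace ROOT\WMI -Class MSiSCSIInitiator_SessionClass |
            Where-Object { $_.TargetName -eq $iqn } |
            ForEach-Object {
                foreach ($dev in $_.Devices) {
                    # LegacyName is e.g. \\.\PhysicalDrive2 -> \Device\Harddisk2
                    $dev.LegacyName
                }
            }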

  • Event ID 17890 (A significant part... paged out.) with SQL Server 2008

    - by Godeke
    I have a machine that has SQL Server 2008 Standard installed. Periodically (about once an hour) I am getting Event ID 17890 several times in a row. An example:

        6:28:54  A significant part of sql server process memory has been paged out. This may
                 result in a performance degradation. Duration: 0 seconds. Working set (KB):
                 10652, committed (KB): 628428, memory utilization: 1%.
        6:34:27  A significant part of sql server process memory has been paged out. This may
                 result in a performance degradation. Duration: 332 seconds. Working set (KB):
                 169780, committed (KB): 546124, memory utilization: 31%.
        6:38:55  A significant part of sql server process memory has been paged out. This may
                 result in a performance degradation. Duration: 600 seconds. Working set (KB):
                 245068, committed (KB): 546124, memory utilization: 44%.

    This pattern repeated at 7:26-7:37, 8:26-8:36, 9:24-9:35, and so on, with the same increasing working set and memory utilization pattern. I don't have any (known) background tasks running at this time; backups run at 2:00. It subsided from 11:00 at night until it resumed at 4:00 in the morning, and has continued with the intermittent 10-minute glitch periods.

    As this server has plenty of RAM (the commit charge has peaked at 2,871,564 of 4,194,012 physical), I disabled the paging files after reading several items I dug up searching Google, none of which changed the situation. The pattern I documented above is from after removing the paging files, so I'm not even sure where the SQL process's memory could be paging to. I also changed the SQL process memory to have a minimum of 500MB and a maximum of 2GB of RAM (as this is a light-duty database server serving only a small workgroup).

    Has anyone encountered this? Prior to disabling the page files, this error would cause 5 minutes of disk thrashing that blocked access to the databases, files, IIS webs and so on. Since disabling the page files it just logs strange things, but at least I'm not seeing a performance drop. Any suggestions would be welcome.
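
    The two standard mitigations for working-set trim on SQL Server 2008 -- a sketch; the memory cap is illustrative, and note that on Standard Edition, Lock Pages in Memory reportedly also requires startup trace flag -T845 (SP1 CU2 or later) after the right is granted to the service account in gpedit.msc:

        -- cap the buffer pool so Windows keeps headroom for everything else
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 2048;
        RECONFIGURE;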
