Search Results

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed; these sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server.

    A couple of times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:

    - Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule (see the sketch below).
    - Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
    - (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard
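
    For the first workaround, it looks like the module removal can be scripted per-site with appcmd (a sketch only - the site name is a placeholder, and the native module name "WindowsAuthenticationModule" is an assumption worth verifying before trusting this):

        rem remove the Windows auth module for just this application
        %windir%\system32\inetsrv\appcmd.exe delete module WindowsAuthenticationModule /app.name:"LegacySite/"
        rem or leave the module in place but disable windows auth for the site
        %windir%\system32\inetsrv\appcmd.exe set config "LegacySite" -section:system.webServer/security/authentication/windowsAuthentication /enabled:false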

    Read the article

  • Our VPS is being used as a Warez mule

    - by Mikuso
    The company I work for runs a series of ecommerce stores on a VPS - a WAMP stack with 50 GB of storage. We use an archaic piece of ecommerce software which operates almost entirely client-side. When an order is taken, it is written to disk, and a scheduled task downloads the orders once every 10 minutes.

    A few days ago, we ran out of disk space, which caused orders to fail to be written. I quickly hopped on to delete some old logs from the mail server and freed up a couple of GB pretty quickly, but I wondered how we could fill 50 GB with nothing much more than logs. Turns out, we didn't. Hidden deep within the c:\System Volume Information directory, we have a stack of pirated videos, which seem to have appeared (looking at the timestamps) over the past three weeks. Porn, American sports, Australian cooking shows. A very odd collection. It doesn't look like an individual's personal tastes - more like the VPS is being used as a mule.

    We have a 5-attempts-and-you're-blocked policy on our FTP server (plus, there is no FTP account with access to that directory), and the Windows user account has had its password changed recently. The main avenues are sealed - and logs can verify that. I thought I'd watch and see if it happened again, and yes, another cooking show appeared this morning.

    I am the only one at my company who knows of this problem, and only one of two people with access to the VPS (the other being my boss, but no - it's not him). So how is this happening? Is there a vulnerability in some of the software on the VPS? Are the VPS owners peddling warez across our rented space? (Can they do this?) I don't want to delete the warez in case it is seen as a hostile action against this outside force, and they choose to retaliate.

    What should I do? How do I troubleshoot this? Has this happened to anyone else before?
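
    Before touching the files, it might be worth snapshotting what the box is exposing (all standard Windows tools; the output paths are just examples):

        rem listening ports with the owning executable/PID (needs an admin prompt)
        netstat -abno > c:\forensics\netstat.txt
        rem current shares and inbound sessions
        net share > c:\forensics\shares.txt
        net session >> c:\forensics\shares.txt
        rem scheduled tasks, a common persistence mechanism
        schtasks /query /fo LIST /v > c:\forensics\tasks.txt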

    Read the article

  • Change owner/UID of mount point upon mount

    - by Shiplu
    The scenario is like this: Bob had a computer. It crashed, and now he only has the hard drive, which is in ext3 format. He goes to his office and asks the sysadmin, John, to mount the drive and put the mount point in his home directory. John uses the following fstab entries:

        # Bob's hard disk
        /media/TAPE4/Bobs-hdd.img /home/bob/myhdd/windows ntfs ro,loop,offset=32256 0 0
        /media/TAPE4/Bobs-hdd.img /home/bob/myhdd/linux ext3 ro,loop,offset=14048810496 0 0
        /media/TAPE4/Bobs-hdd.img /home/bob/myhdd/extra ntfs ro,loop,offset=28015335936 0 0

    Bob is happy. He can access his old "extra" and "windows" partitions; especially the Documents and Settings folder in windows is helpful for him. But he finds a problem. He is a web developer, and all his websites are in the /home/bob/public_html directory of the Linux partition. When he tries to access that public_html directory he gets permission denied. Running ls -lh he sees this:

        drwxr-xr-x 2 john john 4.0K Nov  9  2011 Desktop
        drwxr-xr-x 3 john john 4.0K Aug 12  2011 Documents
        drwxr-xr-x 3 john john 4.0K Aug 21  2011 public_html

    He contacts John, thinking John might have done this by mistake. But John can't find a way this could have happened. Then one thing comes to mind: filesystems hardly ever store usernames - they store UIDs. So he runs ls -ln:

        drwxr-xr-x 2 1000 1000 4096 Nov  9  2011 Desktop
        drwxr-xr-x 3 1000 1000 4096 Aug 12  2011 Documents
        drwxr-xr-x 3 1000 1000 4096 Aug 21  2011 public_html

    John figures that 1000 is the first regular UID on a Linux system. As he is the admin of the current system, he created his account first, so John's UID is 1000. Bob also set up his private system and created his account first, so Bob's UID was 1000 too. So that's expected behavior. But the problem remains: how can Bob access those websites in public_html?
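
    One low-risk approach, since ext3 ignores any uid= mount option: keep the read-only loop mount as-is and re-present it to Bob with bindfs, which maps ownership on the fly (a sketch - it assumes the bindfs package is installed, and the intermediate mount point is illustrative):

        # mount the ext3 image privately, as in the fstab above
        sudo mount -o ro,loop,offset=14048810496 /media/TAPE4/Bobs-hdd.img /mnt/bobs-linux
        # re-expose it with every file appearing to belong to bob
        sudo bindfs --force-user=bob --force-group=bob /mnt/bobs-linux /home/bob/myhdd/linux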

    Read the article

  • Amazon Elastic Terms and Conditions

    - by PP
    WARNING: Have you really read Amazon's Terms and Conditions? Would anybody seriously agree to this term on Amazon's Elastic services sign-up page?

        6.2. Restrictions with Respect to Use of Marks. Your use of any trademarks, service marks, service or
        trade names, logos, and other designations of AWS and its affiliates or licensors, hereinafter "Marks",
        shall strictly comply with the following provisions. You may use the Marks in conjunction with the
        display of the AWS Content and for the purpose of indicating that your Application was created using
        the Services. You may use the Marks only in the form in which we make them available to you and not
        in any manner that disparages Amazon, its affiliates or its licensors, or that otherwise dilutes any
        Mark. Other than your limited right to use the Marks as provided in this Agreement, we and our
        licensors retain all right, title, and interest in and to the Marks. You will not at any time now or
        in the future challenge or assist others to challenge the validity of the Marks, or attempt to
        register confusingly similar trademarks, trade names, service marks or logos. You agree to follow our
        the Trademark Use Guidelines posted on the Amazon Web Services™ Trademark Guidelines page (the
        "Trademark Guidelines") as those guidelines may change from time to time. The Trademark Guidelines
        are incorporated herein by reference. You must immediately discontinue use of any Mark as specified
        by us at any time in writing. We may modify any Marks provided to you at any time, and upon notice,
        you will use only the modified Marks and not the old Marks. Other than as specified in this
        Agreement, you may not use any trademark, service mark, trade name or other business identifier of
        Amazon or its affiliates unless you obtain Amazon's or its affiliates' prior written consent. The
        foregoing prohibition includes the use of "amazon," any other trademark of AWS, Amazon or its
        affiliates, or variations or misspellings of any of them, in the name of an Application or in a URL
        to the left of the top-level domain name (e.g., ".com", ".net", "co.uk", etc.)-for example, a URL
        such as "amazon.mydomain.com", "amaozn.com" or "amazonauctions.net" are expressly prohibited. Any use
        you make of the Marks shall inure to our benefit and you hereby irrevocably assign to us all right,
        title and interest in the same. In addition, you agree not to misrepresent or embellish the
        relationship between us and you, for example by implying that we support, sponsor, endorse, or
        contribute money to you or your business endeavors.

    In other words, if you are a large company and you want to use Amazon's services, you must agree that:

    - you may not use the word "amazon" in any domain name you control (even if you are a forestry company)
    - you may not use any word Amazon chooses to trademark in any domain you control (regardless of whether the name has a different meaning/purpose in your industry)
    - from now until forever you will never dispute any claim Amazon makes on any word you or anybody else uses

    Seriously, who would sign such a thing?

    Read the article

  • LACP with Cisco 3550/3560: help with configuration

    - by Flamewires
    Hey all, this is a repost of a question I asked on the Cisco forums but never got a useful reply to. I'm trying to convert the FreeBSD servers at work to dual-gig lagg links from regular gigabit links. Our production servers are on a 3560; I have a small test environment on a 3550. I have achieved failover, but am having trouble achieving the speed increase. All servers are running gig Intel (em) cards. The config for the servers is:

        #!/bin/sh
        # bring up both interfaces
        ifconfig em0 up media 1000baseTX mediaopt full-duplex
        ifconfig em1 up media 1000baseTX mediaopt full-duplex
        # create the lagg interface
        ifconfig lagg0 create
        # set lagg0's protocol to lacp, add both cards to the interface,
        # and assign it em1's ip/netmask
        ifconfig lagg0 laggproto lacp laggport em0 laggport em1 ***.***.***.*** netmask 255.255.255.0

    The switches are configured as follows:

        #clear out old junk
        no int Po1
        default int range GigabitEthernet 0/15 - 16
        # config ports
        interface range GigabitEthernet 0/15 - 16
          description lagg-test
          switchport
          duplex full
          speed 1000
          switchport access vlan 192
          spanning-tree portfast
          channel-group 1 mode active
          channel-protocol lacp
          **** switchport trunk encapsulation dot1q ****
          no shutdown
          exit
        interface Port-channel 1
          description lagginterface
          switchport access vlan 192
          exit
        port-channel load-balance src-mac
        end

    (Obviously change the 1000s to 100s and GigabitEthernet to FastEthernet for the 3550's config, as that switch has 100Mbit ports.)

    With this config on the 3550, I get failover and 92Mbit/sec on both links simultaneously, connecting to 2 hosts (tested with iperf). Success. However, this only works with the "switchport trunk encapsulation dot1q" line. First, I do not understand why I need this - I thought it was only for connecting switches. Is there some other setting which this turns on that is actually responsible for the speed increase?

    Second, this config does not work on the 3560. I get failover, but not the speed increase: speeds drop from gig/sec to 500Mbit/sec when I make 2 simultaneous connections to the server, with or without the encapsulation line. I should mention that both switches are using source-MAC load balancing.

    In my tests I am using iperf. I have the server (the lagg box) set up as the server (iperf -s), and the client computers run iperf -c server-ip-address, so the source MAC (and IP) is different for both connections. Any ideas/corrections/questions would be helpful, as the gig switches are what I actually need the lagg links on. Ask if you need more information.
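
    For what it's worth, the standard status commands for verifying the bundle on both ends:

        ! on the switch: Po1 should be flagged SU with both ports bundled (P)
        show etherchannel 1 summary
        show lacp neighbor
        # on the FreeBSD box: both laggports should show ACTIVE,COLLECTING,DISTRIBUTING
        ifconfig lagg0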

    Read the article

  • Computer experiencing slowdowns and lockups despite low CPU usage

    - by user157145
    My setup: i5-2300, Nvidia GTX 550 Ti, 6 GB RAM, 600W OCZ modular PSU.

    I recently reformatted and am already experiencing drastic slowdowns as soon as Windows comes up, including repeated lockups where multiple programs report "not responding" and then recover after 10-30 seconds. I've checked the memory and hard drive, both of which come out fine. Despite my plethora of worthless antivirus software, I'm forced to assume that my illicit downloading practices have led me into some computer trouble that I can't seem to pin down. I have used CCleaner, Spybot Search & Destroy and Malwarebytes, all of which found nothing to indicate what is causing this massive slowdown. In addition, according to Resource Monitor my computer is operating at a load of only 30-50 percent CPU usage and 60 percent RAM usage, yet it takes 5-10 seconds to load files and open folders, with repeated lockups of multiple programs - especially Firefox, which seems to go unresponsive every 2-3 minutes. I used a program called OTL by OldTimer, but can't make any sense of the results I was given. Any help or suggestions would be appreciated; thank you for taking the time to read this.

    I have Avast, but it didn't find anything when I had it do a full system scan, so I'm thinking it's clueless (likewise Norton, AVG, and Ad-Aware). I also have MSE, but it has yet to complete a full scan, it takes so long (I left it on last night, but when I woke up my computer had hit a problem and had to restart). My hard drive has 300 GB out of 1 TB free, and I've already run HD Tune Pro, which said the drive was fine; it's not an SSD. Also, I'm a noob at computers and only have the one hard drive currently inside the machine, and I'm not sure "stuttering" is the issue I'm suffering.

    My problem is that during my typing of these responses, Firefox has gone "not responding" at least 5 times, each for about 5-10 seconds. When I tried Ctrl-Alt-Del to bring up Task Manager, it took 20 seconds. Essentially, the computer is super slow to bring anything up, or to take any action whatsoever that opens a program or file, and it repeatedly locks up so I can't even click on whatever I'm trying to do. The confusing thing about these incidents is that they happen right after restarting, when there are minimal programs running and the CPU and memory load is light.

    Here's what I got from HW Monitor:
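
    If it helps, a quick way to pull recent errors out of the Event Log and schedule a disk check (standard built-in tools; the event count and output file name are just examples):

        rem dump the 20 most recent System-log events, newest first, to a text file
        wevtutil qe System /c:20 /rd:true /f:text > recent-system-events.txt
        rem schedule a file-system check for the next reboot
        chkdsk C: /f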

    Read the article

  • Partition table corrupted (USB flash drive)

    - by 13ren
    It's an 8 GB Patriot thumb drive, which I've used extensively with lots of data. Today, it is detected, but all data is gone. (EDIT: at least some data is still there, but the partition table is gone.)

    EDIT @Sathya (thanks): here's the relevant output from sudo fdisk -l:

        Disk /dev/sdc: 8019 MB, 8019509248 bytes
        247 heads, 62 sectors/track, 1022 cylinders
        Units = cylinders of 15314 * 512 = 7840768 bytes

        Disk /dev/sdc doesn't contain a valid partition table

    It looks like it is /dev/sdc, with that 8 GB... and no partition table. I tried to mount /dev/sdc (and then dmesg | tail):

        /media> sudo mount /dev/sdc mytmp
        mount: wrong fs type, bad option, bad superblock on /dev/sdc,
               missing codepage or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

        /media> dmesg | tail
        [   24.300000] sdc: unknown partition table
        [   24.320000] sd 2:0:0:0: Attached scsi removable disk sdc
        [   24.370000] usb-storage: device scan complete
        [   26.870000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [   26.870000] EXT2-fs: group descriptors corrupted!
        [   50.420000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [ 5565.470000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [ 5565.470000] EXT2-fs: group descriptors corrupted!

    EDIT @Col: results from testdisk:

        Disk /dev/sdc - 8013 MB / 7642 MiB - CHS 1022 247 62
        Current partition structure:
             Partition                  Start        End    Size in sectors
        Partition sector doesn't have the endmark 0xAA55

    After I hit [Proceed], it says:

        Structure: Ok.
        Keys A: add partition, L: load backup, Enter: to continue

    The "Structure: Ok." seems reassuring... will "A: add partition" make my old data accessible (if it's still there), or will it make a new, fresh partition? Another option is "[ MBR Code ] Write TestDisk MBR code to first sector" - would it be better to do this?

    EDIT: I found that at least some of my data is still on the flash drive, by using the command below and searching for English text in less (like " the "):

        cat /dev/sde | tr -cd '\11\12\40\1540-\176' | less

    (The drive changed from "/dev/sdb" to "/dev/sde" because I connected some extra drives today.) I've learnt that "/dev/sde1" would be the first partition and "/dev/sde" is the whole drive. Because unix treats these devices just like files, you can use all the ordinary unix file commands on them, like cat, and then process them like any other stream of data. The tr above removes non-printable characters ("\40" is space, which I wanted to preserve). In less, you can use "/" to search, similar to Vim.

    How can I get my data back (assuming it's still there)? If only the partition table is corrupted, is there a standard "partition recovery tool"? Is there a way to "repartition" without deleting everything?
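
    Before letting testdisk write anything, the safer order of operations (assuming GNU ddrescue is installed; the device name is from the fdisk output above, and the file names are examples) is to image the raw device first and aim the recovery tools at the copy:

        # pass 1: grab the easy sectors, recording bad spots in a map file
        sudo ddrescue -n /dev/sdc usbstick.img usbstick.map
        # pass 2: retry the bad spots a few times
        sudo ddrescue -r3 /dev/sdc usbstick.img usbstick.map
        # run testdisk (or photorec) against the image, not the stick
        testdisk usbstick.img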

    Read the article

  • Apache won't serve images larger than ~2K

    - by dtbaker
    Hello. Just upgraded an old box to Ubuntu 10.04.2 LTS. Apache will not display images to a browser that are over about 2K; small images seem to display fine, and static HTML and PHP continue to work fine as well. Installed:

        apache2 2.2.14-5ubuntu8.4
        apache2-mpm-prefork 2.2.14-5ubuntu8.4
        apache2-utils 2.2.14-5ubuntu8.4
        apache2.2-bin 2.2.14-5ubuntu8.4
        apache2.2-common 2.2.14-5ubuntu8.4

    Here is an ngrep of an image request that doesn't display in the browser:

        T 192.168.0.4:32907 - 192.168.0.54:80 [AP]
        GET /path/path/logo.png HTTP/1.1..Host: 192.168.0.54..Connection: keep-alive..Accept:
        application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5..
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.13 (KHTML, like Gecko)
        Chrome/9.0.597.98 Safari/534.13..Accept-Encoding: gzip,deflate,sdch..Accept-Language:
        en-US,en;q=0.8..Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3....

        T 192.168.0.54:80 - 192.168.0.4:32907 [A]
        HTTP/1.1 200 OK..Date: Wed, 09 Mar 2011 05:28:38 GMT..Server: Apache/2.2.14 (Ubuntu)..
        Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT..ETag: "17b6f4-15fe-491dd63eb2f40"..
        Accept-Ranges: bytes..Content-Length: 5630..Keep-Alive: timeout=15, max=100..Connection:
        Keep-Alive..Content-Type: image/png.....PNG........IHDR...!...v.......%.....sRGB.........
        bKGD..............pHYs.................tIME.. etc...

    This looks OK to me! I have tried Firefox and Chrome; both display small images fine, but when a large image is requested the browser prompts to download the file. When the image file is saved to the local computer it is corrupt; it also takes a long time to save, which makes me think the browser cannot see the Content-Length header sent from Apache. Also, when I look at the saved image file it includes the headers from Apache, along with a bit of garbage at the top, like so (vi logo.png):

        ^@^UÅd^@$^]V^S^H^@E^@^Q,n!@^@@^F^@^@À¨^@6À¨^@^D^@P^Y¬rÇŹéw^P^@Ú^@^@^A^A^H ^@^GÝ^]^@pbSHTTP/1.1 200 OK^M
        Date: Wed, 09 Mar 2011 04:47:04 GMT^M
        Server: Apache/2.2.14 (Ubuntu)^M
        Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT^M
        ETag: "17b6ff-157c-491dd63eb2f40"^M
        Accept-Ranges: bytes^M
        Content-Length: 5500^M
        Keep-Alive: timeout=15, max=94^M
        Connection: Keep-Alive^M
        Content-Type: image/png^M
        ^M
        PNG^M
        etc...

    Any ideas? It's driving me nuts. There is nothing in the Apache error logs, and permissions are fine (because the image data is there, it's just somewhat corrupt). There's no proxy or iptables on this Ubuntu box either. Thanks heaps!! Dave

    PS: just tried IE from a different computer, same problem :(
    PPS: rebooted the server, no help.
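
    One thing worth trying first (a guess from the corruption pattern, not a confirmed diagnosis): Apache's sendfile/mmap optimizations are known to corrupt static files larger than a few KB on some kernels and on virtualized or network filesystems, and they can be switched off harmlessly:

        # disable zero-copy file serving, then restart (Ubuntu 10.04 layout assumed)
        sudo sh -c 'printf "EnableSendfile Off\nEnableMMAP Off\n" > /etc/apache2/conf.d/no-sendfile'
        sudo /etc/init.d/apache2 restart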

    Read the article

  • Ruby on Rails Gitorious setup on Ubuntu

    - by dogmatic69
    I've been trying to install Gitorious for a while now, which requires Ruby, Rails, etc. I've finally got Rails pages serving, but I can't finish the Gitorious installation because the gem version is too new. The error logs say to run 'rake ultrasphinx:configure', and that gives:

        rake ultrasphinx:configure
        (in /var/www/apps/gitorious)
        rake aborted!
        uninitialized constant ActiveSupport::Dependencies::Mutex
        /var/www/apps/gitorious/Rakefile:10:in `require'
        (See full trace by running task with --trace)

    From searching Google, this is because of the wrong gem version, and I can't find a way to downgrade it. Apparently sudo gem update --system 1.4.2 should do the trick, but Ubuntu 10.10 does not like this:

        ERROR:  While executing gem ... (RuntimeError)
            gem update --system is disabled on Debian, because it will overwrite the content of
            the rubygems Debian package, and might break your Debian system in subtle ways. The
            Debian-supported way to update rubygems is through apt-get, using Debian official
            repositories. If you really know what you are doing, you can still update rubygems by
            setting the REALLY_GEM_UPDATE_SYSTEM environment variable, but please remember that
            this is completely unsupported by Debian.

    So I added export REALLY_GEM_UPDATE_SYSTEM=1 to .bashrc and reloaded it with . ~/.bashrc - still the same. I've tried various ways of setting this environment variable, with no luck. I've also been told on the #gitorious IRC channel to add the file config/initializers/rubygems.rb with the line require "thread" in it. This has done nothing.

    EDIT: Just found another way, rvm install rubygems 1.4.2, and it gave:

        Removing old Rubygems files...
        Installing rubygems dedicated to ruby-1.8.7-p334...
        Retrieving rubygems-1.4.2
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100  288k  100  288k    0     0   282k      0  0:00:01  0:00:01 --:--:--  414k
        Extracting rubygems-1.4.2 ...
        Installing rubygems for /home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby
        ERROR: Error running 'GEM_PATH="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global" GEM_HOME="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334" "/home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby" "/home/ubuntu/.rvm/src/rubygems-1.4.2/setup.rb"', please read /home/ubuntu/.rvm/log/ruby-1.8.7-p334/rubygems.install.log
        WARN: Installation of rubygems did not complete successfully.

    TL;DR: please tell me how to downgrade rubygems on Ubuntu 10.10, or how to upgrade Gitorious to work with 1.6.2 gems.
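
    For the record, exporting REALLY_GEM_UPDATE_SYSTEM in .bashrc doesn't survive sudo's environment scrubbing, which would explain the identical error. Passing it on the sudo command line itself should work (this is the documented Debian escape hatch, though I haven't verified it on every setup):

        sudo REALLY_GEM_UPDATE_SYSTEM=1 gem update --system 1.4.2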

    Read the article

  • Command-line connect to wired network for Ubuntu

    - by Tim
    I'd like to know how to use the command line to connect to a wired network, in general, on Ubuntu 8.10. In my case, I connect a cable to my laptop but it doesn't work with WICD, so I'd like to try the command-line method. Here is the ifconfig of my network adapters:

        $ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:c0:9f:8d:23:74
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:19 Base address:0x1800

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4457 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4457 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:493002 (493.0 KB)  TX bytes:493002 (493.0 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1
                  RX packets:1508929 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:768144 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:806027375 (806.0 MB)  TX bytes:78834873 (78.8 MB)

        wlan0:avahi Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  inet addr:169.254.5.92  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1

        wmaster0  Link encap:UNSPEC  HWaddr 00-0E-9B-AB-56-19-00-00-00-00-00-00-00-00-00-00
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Thanks and regards!

    UPDATE: Tried what oyvindio suggested. Here is the failing output:

        $ sudo dhclient3 eth0
        There is already a pid file /var/run/dhclient.pid with pid 18279
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        Listening on LPF/eth0/00:c0:9f:8d:23:74
        Sending on   LPF/eth0/00:c0:9f:8d:23:74
        Sending on   Socket/fallback
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 15
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 11
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.
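
    In case DHCP keeps yielding no offers, two things worth trying from the command line (the addresses are examples for a typical home router):

        # check for a link beat first; "Link detected: no" points to cable/switch trouble
        sudo ethtool eth0
        # static fallback, bypassing DHCP entirely
        sudo ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
        sudo route add default gw 192.168.1.1
        echo "nameserver 192.168.1.1" | sudo tee /etc/resolv.conf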

    Read the article

  • Ubuntu Lucid: Erratic screen behaviour after boot

    - by fgysin
    In short: about 50% of the time I have a screwed-up monitor setup after reboot; about 50% of the time it is totally correct.

    Now the longer version. I updated my machine from 9.04 to 10.04 (via 9.10). At first I ran into some monitor problems (I have a 3-monitor setup) because of the known bug in the new X server driver for Xinerama. This messes up behaviour if the mouse goes either left of or above screen number 0, i.e. I had to make my left-most monitor screen 0. Everything finally worked out: I got my 3-monitor setup back, with Xinerama enabled to give one big desktop stretched over 3 screens.

    Now the fun part. Every time I start up my machine, only one of the 3 monitors gets a signal and is woken up: it only recognizes the left-most monitor (screen 0) and crams all the desktop stuff into this one screen. If I go into nvidia-settings I see only one physical device, although all 3 are connected and have power. When I look into xorg.conf I can still see my old setup with 3 devices, 3 screens, Xinerama active, etc., but I have been totally unable to get the 3 monitors working (I tried unplugging monitors, reconfiguring the whole nvidia setup, ...).

    But it gets even better. When I restart my machine (i.e. choose the restart option from the Ubuntu menu), it shuts down and tries to restart. The restart then gets stuck after showing the Ubuntu splash screen with the 'loading bar' (the moving-dots thingy), and I am forced to kill the machine by cutting power. But after the power cut, the machine boots up normally, and suddenly I get my 3-monitor setup back, working. That is, until the next time I shut down and start up, when it all starts over again and I only have one monitor... (see above)

    I really have a hard time seeing where the error is. It must be that the restart boot somehow differs from the 'normal' boot, but the fact that it gets stuck - and that cutting power then basically triggers a 'normal' boot - does not really support this theory...

    My setup (please tell me if you need further info):

    - 3 monitors as 3 screens as one desktop (with Xinerama)
    - 2 nvidia cards, where screens 0 and 1 are on card 0 and screen 2 is on card 1
    - Ubuntu 10.04 Lucid Lynx (updated from 9.10, 9.04, ...)

    I would appreciate every idea on the subject; at the moment I really don't have any clue what to do...
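
    In case it helps the diagnosis, what X actually detected on a good boot can be diffed against a bad one (Ubuntu's default log paths; the archive name is arbitrary):

        # confirm both nvidia cards are visible on the bus
        lspci | grep -i vga
        # archive this boot's X log for later comparison
        cp /var/log/Xorg.0.log ~/xorg-$(date +%Y%m%d-%H%M).log
        # show the errors/warnings X logged while probing displays
        grep -E '\(EE\)|\(WW\)' /var/log/Xorg.0.log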

    Read the article

  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtpd daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite... and others (including the possibility of custom components).

    Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page.

    My question: does anyone have access to a resource that describes these hooks, the components/delivery stages that each hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support?

    For example, given the requirement "if the recipient primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice", my software would need to 1) integrate into the proper part of the SMTP process, 2) use some method to perform the address check (TCP map server? regular expressions? mysql?), and 3) implement the required action (connection termination). Additionally, there will probably be several methods to accomplish this, and another requirement would be to find the one that best fits (e.g. a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible).

    Real-world example: the apolicy policy server (which performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that, when Postfix first receives an item of mail to be delivered, it will create a TCP connection to the policy server address:port to determine whether the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action will be taken based on the server's response.

    Notes: 1) The Postfix architecture page describes some of this information in ASCII art; what I am hoping for is distilled, condensed reference material. 2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;)

    Thanks!
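
    To make the real-world example reproducible, the hook itself is a one-line main.cf change (a sketch; the inet address/port must match wherever the policy daemon actually listens):

        # attach an external policy server at the client-restriction stage
        sudo postconf -e 'smtpd_client_restrictions = check_policy_service inet:127.0.0.1:10001'
        sudo postfix reload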

    Read the article

  • Why do my Application Compatibility Toolkit Data Collectors fail to write to my ACT Log Share?

    - by Jay Michaud
    I am trying to get the Microsoft Application Compatibility Toolkit 5.6 (version 5.6.7320.0) to work, but I cannot get the Data Collectors to write to the ACT Log Share. The configuration is as follows:

        Machine: ACT-Server
        Domain: mydomain.example.com
        OS: Windows 7 Enterprise 64-bit Edition
        Windows Firewall configuration: File and Printer Sharing (SMB-In) is enabled
            for Public, Domain, and Private networks
        ACT Log Share: ACT

    Share permissions*:

        Group/user names   Allow permissions
        ---------------------------------------
        Everyone           Full Control
        Administrator      Full Control
        Domain Admins      Full Control
        Administrators     Full Control
        ANONYMOUS LOGON    Full Control

    Folder permissions*:

        Group/user name    Allow permissions       Apply to
        -------------------------------------------------
        ANONYMOUS LOGON    Read, write & execute   This folder, subfolders, and files
        Domain Admins      Full control            This folder, subfolders, and files
        Everyone           Read, write & execute   This folder, subfolders, and files
        Administrators     Full control            This folder, subfolders, and files
        CREATOR OWNER      Full control            Subfolders and files
        SYSTEM             Full control            This folder, subfolders, and files
        INTERACTIVE        Traverse folder / execute file, List folder / read data,
                           Read attributes, Read extended attributes, Create files /
                           write data, Create folders / append data, Write attributes,
                           Write extended attributes, Delete subfolders and files,
                           Delete, Read permissions
                                                   This folder, subfolders, and files
        SERVICE            (same as INTERACTIVE)
        BATCH              (same as INTERACTIVE)

    *I am fully aware that these permissions are excessive, but that is beside the point of this question.

    Some of the clients running the Data Collector are domain members, but some are not. I am working under the assumption that this is a Windows file sharing permission issue or a network access policy issue, but of course, I could be wrong. It is my understanding that the Data Collector runs in the security context of the SYSTEM account, which for domain members appears on the network as MYDOMAIN\machineaccount. It is also my understanding, from reading numerous pieces of documentation, that setting the ANONYMOUS LOGON permissions as I have above should allow these computer accounts and non-domain-joined computers to access the share.

    To test connectivity, I set up the Windows XP Mode virtual machine (VM) on ACT-Server. In the VM, I opened a command prompt running as SYSTEM (using the old "at" command trick) and used it to run explorer.exe. In this Windows Explorer instance, I typed \\ACT-Server\ACT into the address bar, and then I was prompted for logon credentials. The goal, though, was not to be prompted. I also used the "net use /delete" command in the command prompt window to delete connections to the \\ACT-Server\IPC$ share each time my connection attempt failed. I have made sure that the appropriate firewall exceptions are in place.

    Since ACT-Server is a domain member, the "Network access: Sharing and security model for local accounts" security policy is set to "Classic - local users authenticate as themselves". In spite of this, I still tried enabling the Guest account and adding permissions for it on the share, to no effect.

    What am I missing here? How do I allow anonymous logons to a shared folder as a step toward getting my ACT Data Collectors to deposit their data correctly? Am I even on the right track, or is the issue elsewhere?
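
    A further avenue, hedged heavily since it deliberately weakens security and the 'Network access: Shares that can be accessed anonymously' policy can override it: declaring ACT a null-session share in ACT-Server's registry:

        rem allow anonymous (null session) connections to the ACT share
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v NullSessionShares /t REG_MULTI_SZ /d ACT /f
        rem let anonymous users be covered by Everyone ACLs
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v EveryoneIncludesAnonymous /t REG_DWORD /d 1 /f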

    Read the article

  • Samba share not accessible from Win 7 - tried advice on superuser

    - by Roy Grubb
    I have an old Red Hat Linux box that I use, amongst other things, to run Samba. My Vista and remaining Win XP PCs can access the password-protected Samba shares. I just set up a new Windows 7 64-bit Pro PC. Attempts to access the Samba shares by clicking on the Linux box's icon in 'Network' from this machine gave a "Logon failure: unknown user name or bad password" message when I gave the correct credentials.

    So I followed the suggestions in "Windows 7, connecting to Samba shares" (I also checked here, but found LmCompatibilityLevel was already 1). This got me a little further: if I click on the Linux box's icon in 'Network' from this machine, I now see icons for the shared directories. But when I click on one of these, I get "\\LX\share is not accessible. You might not have permission..." etc. I tried making the Win 7 password the same as my Samba password (the user name was already the same). Same result.

    The Linux box does part of what I need for ecommerce - the in-house part; it's not accessible from the Internet. As my Linux fu is weak, I have to avoid changes to the Linux box, so I'm hoping someone can tell me what to do to Win 7 to make it behave like XP and Vista when accessing this share. Help please!? Thanks

    Thanks for replying @Randolph. I had set 'Network security: LAN Manager authentication level' to "Send LM & NTLM - use NTLMv2 session security if negotiated" based on the advice in "Windows 7, connecting to Samba shares", and had restarted the machine, but that didn't work for me. I'll try playing with other Network security values. I have now tried the following:

    - Network security: Allow Local System to use computer identity for NTLM: changed from Not Defined to "Enabled". Restarted machine. Still says "\\LX\share is not accessible. You might not have permission..." etc.
    - Network security: Restrict NTLM: Add remote server exceptions for NTLM authentication (added LX). Restarted machine. Still says "\\LX\share is not accessible. You might not have permission..." etc.

    I can't see any other Network security settings that might affect this. Any other ideas please? Thanks, Roy
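
    For anyone following along, the registry equivalent of that policy change, plus a quick test that takes Explorer out of the picture (LX, share, and roy are from my setup; run from an elevated prompt):

        rem LmCompatibilityLevel 1 = "Send LM & NTLM - use NTLMv2 session security if negotiated"
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f
        rem map the share with explicit credentials; the * prompts for the password
        net use \\LX\share /user:roy *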

    Read the article

  • What is Remote Desktop Services in Windows Server 2008 R2 all about?

    - by fejesjoco
    Seriously, I'm lost in all that sales mumbo-jumbo. Let's say I want 1 or 2 users to be able to remotely log on to a server and run Word, Visual Studio, Firefox, and whatever. Do I gain anything at all if I install Remote Desktop Services? Or do I just install the Desktop Experience feature pack, enable remote desktop, and voila - nobody will ever notice the difference?

    Here's what TechNet says about the Remote Desktop Session Host:

        A Remote Desktop Session Host (RD Session Host) server is the server that hosts Windows-based
        programs or the full Windows desktop for Remote Desktop Services clients. Users can connect to an
        RD Session Host server to run programs, to save files, and to use network resources on that server.
        Users can access an RD Session Host server by using Remote Desktop Connection or by using RemoteApp.

    The good old simple remote desktop can also host a full Windows desktop for remote clients so that they can run programs, save files and do all that stuff. Why do they write about it like it's such a great new invention, besides that they want to sell it? RDSH doesn't seem all that different at all. What do I actually install when I install RDSH, since all those features are already there in Windows?

    What's even more confusing is that you need to take special care when you want to install applications on an RDSH so that they will be usable by many concurrent users. Why? All modern applications install their program files in one directory, store some common settings in the ProgramData folder and the HKLM hive, and store user-specific settings in the Users folder and the HKCU hive. They are designed to be usable by many users on the same machine: 2 or 2000 users can use them concurrently without any effort. I can sign in with 2 users on a server with only remote desktop enabled, and both of us can run Word or anything else without any problems, can't we? So what changes if I set RDSH to install mode, and what happens if I don't? Why is the feature to switch between install and execute mode there at all? (See the sketch below.)

    Yes, I know of some advantages of Remote Desktop Services: there's no 2-user limit, it supports virtualization, video acceleration and stuff, and it has a whole infrastructure with a gateway, web access, connection broker, etc. But I don't need those - so if you take these away, how are these two technologies different? From the articles it seems like they are completely different technologies, whereas it looks to me like they are completely the same at the core, and Remote Desktop Services just adds some additional features without reinventing anything.
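
    As far as I can tell, the install/execute switch the documentation keeps hinting at is just this (standard commands on a session host; whether it matters for well-behaved modern apps is exactly my question):

        rem put the server into install mode before running a setup program
        change user /install
        rem ... run the installer here ...
        rem return to normal multi-user operation
        change user /execute
        rem show which mode the session is currently in
        change user /query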

    Read the article

  • A dusty server room

    - by pauska
    Here's the story. The owners of the building we lease office space from decided to do a renovation of the exterior. This involved some pretty heavy work at the level of our server room, including replacing windows which are set into a concrete wall. My red alert went off when I heard that they were going to do the same thing with our server room (yes, our server room has a window; we're a small shop with 3 racks, and the window is secured with steel bars).

    I explicitly told the contractor that they needed to put up a temporary wall between our racks and the original wall, and to make sure that the temporary wall was 100% air- and water-tight. They promised to do so. The temporary wall has a small door in it, so that workers can go in and out through the day (through our server room, which was the only option...). On several occasions I found the small door half-shut while working evenings/nights. I locked the door and hoped they would get the point and keep it shut. I even gave an electrician a mouthful when I saw that he didn't close the door properly.

    By this point, I bet most of you have a picture of what happened. Yes, they probably left the door open while drilling into the concrete. I present to you our 4-week-old EMC VNX:

    I'll even put in a little bonus: here is the APC UPS one rack further away from the temporary wall. See the nice little landing strip from my finger?

    What should I do? The only thing that comes to mind is to either call all our suppliers (EMC, HP, Dell, Cisco) and get them to send technicians to check out all the gear in the server room, or get some kind of certified third-party consultant to check all of it. Would you run production systems on this gear? For how long?

    Edit: I should also note that our air conditioning isn't exactly enterprise-grade, given the nature of our small room. It's just a single inverter, which failed once before I started working here (failed inverters usually lead to water dripping out).

    Read the article

  • How should I convert a physical drive to a VHD for use with VirtualPC?

    - by RBerteig
    I have the hard disks from a PC that was happily running Windows Me until it suffered an unknown hardware failure. The drives are intact, and can be mounted and read on other PCs. We have data backups, but there is licensed software installed that may not be possible to migrate to newer versions running on a more modern platform, making the idea of just booting a virtual image attractive.

    Is it possible to make VHDs from the drives such that I can boot them in VirtualPC? If not VirtualPC, would it be possible in any other virtualization tool?

    Edit: Some more details... The system was running Windows Me, upgraded from Windows 95 (or possibly 98). It can't have been more than a Pentium II, but I will have to look at the motherboard to confirm that. There were no "exotic" devices installed, and nothing beyond the usual legacy stuff that would need to survive into a virtual machine. The licensed software did not have a dongle, so I won't need to worry about virtualizing a physical dongle of some kind; licenses were probably tied to the disk serial number. There were two hard disks, both IDE. The boot disk is about 6 GB, and the spare data disk is 12 GB, but nearly empty.

    I have a small bias in favor of VirtualPC, just because it's free and I've used it successfully in the past. But this is a good excuse to revisit the state of the art. I do know from direct experience that it is possible to install and boot DOS 5.0 and Win95 in VirtualPC, but the VM extensions weren't available, so the experience wasn't as seamless as I would have liked. A very old DirectX game that failed miserably under XP SP2 runs really nicely in that VM, and actually plays better in a lot of ways than it did on period hardware, so that gives me hope that this is possible.

    Edit 2: Well, I'm closer than I was when I asked... so thanks to all for the helpful suggestions and hints about what I should be trying. I used WinImage to copy the disks, and VirtualPC 2007 to attempt to boot. So far, I have it booting in safe mode, but hanging with a black screen otherwise. I strongly suspect that the copy of Artisoft LANtastic 8.0 (anyone else remember them?) that is still installed - for networking with even older PCs that mostly don't exist any more - is the culprit there. In my infinite free time, I will try to resolve the differences between a safe-mode boot and a normal boot, and I feel that it is likely to yield to pressure. I'd accept more than one answer if I could... this isn't as black-and-white a question as the one-accepted-answer convention assumes.
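
    If WinImage hadn't panned out, Sysinternals Disk2vhd is the other obvious tool, and it can be driven from the command line (a sketch - it must run on a working PC where the old disks are mounted, and the drive letters here are examples):

        rem capture the two mounted IDE disks (E: and F: here) into a single VHD
        disk2vhd.exe E: F: c:\vhds\winme-box.vhd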

    Read the article

  • Recover hard drive data

    - by gameshints
    I have a Dell laptop that recently "died" (it would get the blue screen of death upon starting), and the hard drive makes weird cyclic clicking noises. I wanted to see if I could use some tools on my Linux machine to recover the data, so I plugged it in there. If I run fdisk I get:

        Disk /dev/sdb: 20.0 GB, 20003880960 bytes
        64 heads, 32 sectors/track, 19077 cylinders
        Units = cylinders of 2048 * 512 = 1048576 bytes
        Disk identifier: 0x64651a0a

        Disk /dev/sdb doesn't contain a valid partition table

    Fine, the partition table is messed up. However, if I run testdisk in an attempt to fix the table, it freezes at this point, making the same cyclic clicking noises:

        Disk /dev/sdb - 20 GB / 18 GiB - CHS 19078 64 32
        Analyse cylinder   158/19077: 00%

    I don't really care about the hard drive working again, just the data, so I ran gpart to figure out where the partitions used to be. I got this:

        dev(/dev/sdb) mss(512) chs(19077/64/32)(LBA) #s(39069696) size(19077mb)
        * Warning: strange partition table magic 0x2A55.
        Primary partition(1)
           type: 222(0xDE)(UNKNOWN)
           size: 15mb #s(31429) s(63-31491)
           chs:  (0/1/1)-(3/126/63)d (0/1/32)-(15/24/4)r
           hex:  00 01 01 00 DE 7E 3F 03 3F 00 00 00 C5 7A 00 00
        Primary partition(2)
           type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX) (BOOT)
           size: 19021mb #s(38956987) s(31492-38988478)
           chs:  (4/0/1)-(895/126/63)d (15/24/5)-(19037/21/31)r
           hex:  80 00 01 04 07 7E FF 7F 04 7B 00 00 BB 6F 52 02

    So I tried to mount just the old NTFS partition, but got an error:

        sudo mount -o loop,ro,offset=16123904 -t ntfs /dev/sdb /mnt/usb
        NTFS signature is missing.

    Ugh. Okay. But then I tried to get a raw data dump by running:

        dd if=/dev/sdb of=/home/erik/brokenhd skip=31492 count=38956987

    But the file got up to 59885568 bytes, and the drive made the same cyclic clicking noises. Obviously there is a bad sector, but I don't know what to do about it! The data is still there: if I view that 57 MB file in TextPad, I can see raw data from files. How can I get my data back?

    Thanks for any suggestions.

    Solution: I was able to recover about 90% of my data:

    - Froze the hard drive in the freezer
    - Used ddrescue to make a copy of the drive
    - Since ddrescue wasn't able to recover enough of the drive for testdisk to rebuild my partitions/file system, I ended up using photorec to recover most of my files
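
    The shape of the commands behind steps 2 and 3 above (a sketch with arbitrary file names, not the exact invocations):

        # pass 1: copy everything readable, mapping bad areas instead of stalling on them
        sudo ddrescue -n /dev/sdb laptop.img laptop.map
        # pass 2: retry the bad areas a few times
        sudo ddrescue -r3 /dev/sdb laptop.img laptop.map
        # carve recoverable files out of the image
        photorec laptop.img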

    Read the article

  • Mid-2010 MacBook Pro can't see D-Link DIR-655 signal after sleep, etc.

    - by user88114
    This is my son's MacBook Pro 7,1 running Snow Leopard 10.6.7. The router signal is fine, since an iPad and a Wintel machine on the same table, 20 feet from the router, are fine. The MBP, however, frequently wakes and fails to find the internet. iStumbler can see one neighbour's hub and my garden hub, but the MBP can't get to the normal DIR-655 wifi: no ping, and no en0 or en1 device seems to exist. Turning AirPort off and on does not help. He just resets the router and it all works, but this does not please me! I must admit the Wintel machine sometimes seems to lose its connection too, but less so.

    The DIR-655 (hardware rev A3) is on the original EU firmware 1.10. I'm cautious about jumping to the latest 1.31EU, since no downgrade seems to be possible, and that feels a bit risky when so much is set up and working fine.

    If I use the DIR-655 admin web interface to release the lease the MBP holds, then wake it, it all works OK. So I suspect a lease timing/locking issue, but I'm unsure how to check, plus iStumbler seems to say the network is not visible at all while I sit on the iPad right next to it, connected just fine. I do not think there are any channel overlaps, and we also have RF-quiet DECT phones (Orchid) that are silent until lifted or called. Anyway, all signals show low interference and high throughput, except for this failure to connect.

    Just walked the MBP to the garden office: iStumbler now sees the more distant DIR-655 signal, although it will not connect to it (it does not show in the network names list under System Preferences > Network) even after AirPort off and on. It also refuses to connect to my garden network (an old Belkin acting as an AP, wired to the DIR-655), whose signal - and even network name - it can see. Two minutes later both names ARE visible, but both fail to accept the correct WPA2 password and keep asking again after failing to connect. IT ALL MAKES NO SENSE TO ME.

    Just revoked the MBP's lease on the DIR-655: no change, although this seemed to help the MBP wake into a connection an hour ago.

    OK, a bit of walking about to report. Carried the MBP across the garden towards the DIR-655; a few other wifi networks show up in iStumbler, low signals, all channel 1. Right next to the DIR-655, iStumbler does not show it, although most of the other wifi networks have gone - I'd say iStumbler is suffering timeouts and hangs, but it's hard to be sure. After lots of attempts at AirPort off/on, Join Other, etc., suddenly I get a connection, am given a new IP (I revoked the old one), and can browse. Walking away, the connection drops quite soon at 30 feet, then reconnects briefly, then dies again. MUST ATTEND ELSEWHERE FOR A BIT...
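
    A way to check from the command line whether the radio sees the SSID at all (this is the stock location of the airport tool on Snow Leopard):

        # list visible SSIDs with channel and signal, bypassing the menu-bar UI
        /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -s
        # show current association details (channel, RSSI, security)
        /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I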

    Read the article

  • Windows 7 Administrator HomeUsers Account

    - by Charles Carrington
    I'm trying to log in to my Windows 7 PC from another PC so that I can transfer files to it. I've just installed Visual Studio 2008 on my new PC, and I want to transfer all of my work from my old machine to the new one.

    When I first set up a user on the Windows 7 PC after a reformat, the account had a Group field that read "HomeUsers; Administrators" when viewed from the User Accounts screen (you get to this screen by typing "netplwiz" in the search field of the Start Menu). I changed the Group of this account to Administrators before I realized that it was assigned to two groups. I was trying to make sure that it was an Administrator account so I didn't have to type in a password every time I wanted to install software.

    I can now use this computer normally without being asked for an administrator password all the time when I want to install software, but I can't log in to this PC from another PC on the network, because I no longer have an account that is in the HomeUsers group. I should have left the account alone; everything would've been fine. But there doesn't seem to be a way to assign the account to two groups after the initial assignment that takes place automatically when you set up your computer for the first time: if you assign "HomeUsers" to the account, the Group field on the User Accounts screen just reads "HomeUsers"; if you assign "Administrators", it just reads "Administrators". There's no way to make it read "HomeUsers; Administrators" again.

    If you don't have at least one account in "HomeUsers", you cannot log in to the PC from another PC on the network. If you don't have an account in "Administrators", you cannot install software on your machine without being asked for an administrator password all the time, which is very annoying. I want an account on my Windows 7 PC that I can use to install software without being asked for a password AND that I can log into from another PC on the network to transfer files. If I could make the Group field of my primary account read "HomeUsers; Administrators" again, it would do what I want it to do.

    Does anybody know how to make an account in Windows 7 both a "HomeUsers" account AND an "Administrators" account? As I said before, Windows 7 does this for you automatically when you first set up your computer, but if you change it inadvertently, there seems to be no way to change it back - at least I don't know how. If anybody has any ideas on how to fix this, I would greatly appreciate it. Thanks, Charles Carrington
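
    Based on answers I've seen elsewhere, group memberships can be stacked from an elevated command prompt even though netplwiz only displays one group (assuming the "HomeUsers" group exists on the machine, which it should once a HomeGroup has been joined or created; the username is a placeholder):

        rem add the account back to both groups; netplwiz will still show only one
        net localgroup HomeUsers charles /add
        net localgroup Administrators charles /add
        rem verify the resulting memberships
        net user charles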

    Read the article

  • Laptop turning off when fan is spinning hard

    - by Ieyasu Sawada
    My laptop seems to have reached the end of its lifespan. It's an Acer laptop, so I guess that's normal. But I'd like to hear your opinions about this. My laptop is only 2 years old, and I hadn't heard the fan spinning like crazy until these past 5 months.

    What I did:

    - Hoping it was just installed applications consuming the life of my laptop from the background, I used PC Decrapifier to uninstall some of the things that I don't need.
    - Reformatted my computer, but only the primary partition, since my files are on the second partition.
    - Bought a cooling pad.

    None of these worked. I notice the fan spins so hard when:

    - I have a lot of browser tabs open.
    - I full-screen a flash video that I'm viewing online.
    - I use VLC to watch encoded videos. There's this thing called minicoder (http://sourceforge.net/projects/minicoder/) to reduce the size of videos without affecting much of the quality. I suspect it needs additional software (to make life easier for the hardware), even though the video works fine in VLC. VLC consumes about 300,000K and above (as seen from Task Manager) while watching these videos (.mkv).

    The problem: the laptop suddenly turns off when the fan has been spinning like crazy for about 20 minutes. I keep checking whether it's already too hot (using my fingers to feel the side of the laptop), but it's not, so I continue watching - and then poof! The computer turns off. The laptop also won't turn on immediately after turning itself off: the light for the power goes on but turns off immediately, and I have to wait about 10-20 seconds before it boots up without problems.

    So how do I go about this? Is this just normal for Acer laptops after about 2 years of heavy usage (8-12 hours a day)? My usage is heavy, but I normally only have a text editor (Sublime) and a browser (Chrome) open. Here's what I got from HW Monitor:
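
    A diagnostic worth running here (assuming the OS is Windows 7, whose powercfg can flag thermal and power problems; it writes energy-report.html into the current directory):

        rem observe the system for 60 seconds and report thermal/power issues
        powercfg /energy /duration 60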

    Read the article

  • Why do some games randomly turn my screen a random solid color?

    - by Emlena.PhD
    When playing some games, my computer will randomly hit an error that I cannot fix without turning it off and back on again. The screen changes to one solid color, which varies (off the top of my head I can remember seeing solid green, magenta, etc.), and the sound blares a single tone. The sound sometimes briefly restores and I can still hear the game sounds, and even hear and still be heard by people in my Mumble channel, but the screen doesn't right itself, so I'm still blind.

    What's more, this happens in some games but not in others - while the game is actually running, not while I'm still in the menu. However, it does happen if I'm AFK or idle while the game world is still rendering.

    Games where the error occurs:

    - League of Legends
    - World of Warcraft
    - Trine
    - The Sims 2
    - Dungeon Defenders

    Safe games - games where it has never occurred:

    - Tribes: Ascend
    - Star Wars: The Old Republic
    - Battlefield 3

    So relatively older games cause the problem while newer games do not? I cannot predict when it will happen; it just seems random. However, if it happens and I try playing the same game further after a restart, it does appear to occur more frequently after the first time. But if I switch to a safe game, it doesn't continue happening. Both of my RAM sticks appear fine: with their positions flipped, or with either one on its own, games still run and the computer still boots. I would think overheating, but then why not all games? Also, sometimes it happens immediately after I start playing, within seconds of the 3D world booting up.

    I'm looking to upgrade very soon, so I want to figure out which component or software is fubar and replace or repair it. Any suggestions or recommendations of tools would be helpful. Below is some system information; dxdiag does not detect any problems.

        Operating System: Windows 7 Home Premium 64-bit (6.1, Build 7601) Service Pack 1 (7601.win7sp1_gdr.120305-1505)
        System Manufacturer: Gigabyte Technology Co., Ltd.
        System Model: EP45-UD3R
        BIOS: Award Modular BIOS v6.00PG
        Processor: Intel(R) Core(TM)2 Duo CPU E8500 @ 3.16GHz (2 CPUs), ~3.2GHz
        Memory: 4096MB RAM
        DirectX Version: DirectX 11
        DxDiag Version: 6.01.7601.17514 64bit Unicode
        Graphics card name: NVIDIA GeForce GTX 285
        Driver Version: 8.17.12.9610 (the error has occurred with several driver versions)
        Sound: no sound card; using the motherboard's built-in sound

    Read the article

  • What is it that kills laptop batteries?

    - by Mala
    There are many superstitions about what you must never do lest your battery become worthless - and by worthless I mean holding about 16-24 seconds of charge. This has happened to every laptop I have ever owned, and I just got a new one, so please help me sort fact from fiction. Here are some of the things I've heard:
    1. Do not keep your laptop fully charged. You must run it completely down every so often.
    2. Do not use your laptop plugged in to the wall. Only plug it in when it needs charge.
    3. If you will not be using your laptop for a long period of time, don't leave it at full charge.
    4. Do not leave your laptop running 24/7.
    The first two I know to be complete fiction: this was true of old batteries such as you might have had in an iPod in 2003, but modern batteries function better when kept at or near full charge. Devices even have circuitry to prevent you from completely depleting your battery, as this is dangerous. The third point sounds probable, and I'd be interested to know if it's true. However, it doesn't really apply to me, because I'm not the type to leave my laptop alone for a day, much less a "long period of time". The fourth seems most likely of the above, but only because of causality: I have always done this, and my batteries have always crapped out on me. I've generally treated a laptop like a desktop with a battery backup that I can move from one room to another if necessary. The fact that my batteries tend to last less than 30 seconds has further entrenched this behavior. Should I be trying to break this habit? Are there any other things that ruin laptop batteries? I love that I can actually use my new laptop unplugged :) and I'd like to keep it that way.
    Update: An additional question: if the computer will be used for an extended period of time plugged in, does it make sense to remove the battery first?
    Update 2: I know people with laptops older than mine, who actively use their laptops as much as I do, and their batteries still hold about an hour's charge while mine holds less than 30 seconds - hence my belief that something I'm doing kills them.

    Read the article

  • What's the best way to do user profile/folder redirect/home directory archiving?

    - by tpederson
    My company is in dire need of a redesign of how we handle user account administration, and I've been tasked with automating the process. The end goal is to have the whole works triggered by the business, with IT only looking in when an error is reported. The interim phase will be semi-manual: a level-2 tech inputs the user's info and supervises the process. The current hurdle I'm facing is user profile archiving. Our security team requires us to archive the profile directories of any terminated user for 60 days, in case the legal team requires access to their files. Our AD is as much a mess as everything else, so some users have home directories and some have profiles. Anyone who has a profile directory in AD also has a good deal of their profile redirected to our file servers over DFS. To complete the process manually, you find the user in AD, disable them, find their home/profile directory, go there and take ownership, create an archive folder, move all their files over, then delete the old directory. Some users have many, many gigs of nonsense, and this can take quite some time; even automated, the process would not be a quick one. I'm thinking I need a client-side C# GUI for the quick stuff and a server-side batch script or console app to offload this long-running process. I have a batch script that works decently using takeown and robocopy (see the sketch below), but I wonder if a C# console app would do a better job. So, my question at long last is: what do you think is the best way to handle this? I can't imagine this is a unique problem - how do other admins get this done? The last place I worked was easily 10x larger than the place I'm in now; if we had been doing this manual crap there, they'd have needed a team of at least 30 full-time workers to keep up. I have decent skills in C#.NET and batch scripting, but I'm a quick study and have used most every language once or twice. Thank you for reading this; I look forward to seeing what imaginative solutions you all come up with.
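    A minimal sketch of the takeown/robocopy approach the poster describes, as a batch file. The server names, share paths, and log location are hypothetical placeholders, not details from the post:

        @echo off
        rem archive_user.cmd -- archive a terminated user's profile directory (sketch)
        rem usage: archive_user.cmd username
        rem NOTE: the \\fileserver\... paths below are hypothetical placeholders
        set USERDIR=\\fileserver\profiles\%1
        set ARCHIVE=\\fileserver\archive\%1
        rem take ownership of the whole tree, answering Yes to any prompts
        takeown /F "%USERDIR%" /R /D Y
        rem move everything (including empty subdirectories) to the archive,
        rem deleting files and folders from the source as they are copied
        robocopy "%USERDIR%" "%ARCHIVE%" /E /MOVE /R:1 /W:1 /LOG:%TEMP%\archive_%1.log
        rem remove the source root if robocopy left it behind
        if exist "%USERDIR%" rmdir "%USERDIR%"

    A C# console app that shells out to the same two tools would mostly add structured error handling and reporting; either way, takeown and robocopy do the heavy lifting, so the batch version is a reasonable interim step.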

    Read the article

  • Hot-swap drive got new name, can I change it on-the-fly?

    - by T.J. Crowder
    One of the HDDs in my server's RAID config failed, so I took it out of the array and had the data center hot-swap it. They've done that, but now the new drive is /dev/sdc rather than /dev/sda. I suspect (correct me if I'm wrong) that if I reboot the server it will be /dev/sda again, so I'm hesitant to add it back to the array as /dev/sdc: I don't want to lay a trap for myself to fall into on the next reboot. I'd just as soon not reboot the server if I don't need to (if I do need to, well, too bad for me). Is there a way I can change the device name from /dev/sdc to /dev/sda without rebooting? This is on Ubuntu 10.04 LTS. It's an md array ("Linux Software RAID"), where currently one of the devices (there are a couple of them) looks like this ("degraded" because I've removed the old /dev/sda from it):

        # mdadm --detail /dev/md0
        /dev/md0:
                Version : 00.90.03
          Creation Time : Sun Oct 11 21:07:54 2009
             Raid Level : raid1
             Array Size : 97536 (95.27 MiB 99.88 MB)
          Used Dev Size : 97536 (95.27 MiB 99.88 MB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Thu Jun 30 09:31:16 2011
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0
                   UUID : 496be7a5:ab9177ed:7792c71e:7dc17aa4
                 Events : 0.112
            Number   Major   Minor   RaidDevice State
               0       8       17        0      active sync   /dev/sdb1
               1       0        0        1      removed

    Thanks.
    Update: Reading through the kernel md documentation, I suspect that if the name changes on reboot, it won't matter. (Good design, that.) Here's why:

        Boot time autodetection of RAID arrays
        When md is compiled into the kernel (not as module), partitions of
        type 0xfd are scanned and automatically assembled into RAID arrays.
        This autodetection may be suppressed with the kernel parameter
        "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0
        superblock can be autodetected and run at boot time.
        The kernel parameter "raid=partitionable" (or "raid=part") means
        that all auto-detected arrays are assembled as partitionable.

    I do have md compiled into the kernel, so I'm rebuilding the array now and will reboot to see what happens. Even if it works, the above doesn't answer the question I actually asked, so unless someone comes along and answers that question in the meantime (I'd be interested, even if it's not necessary for what I'm doing this very moment), I'll just delete the question to keep the noise down.
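    Since md identifies array members by the UUID in their superblocks rather than by kernel device name, re-adding the replacement disk under whatever name it currently has should be safe even if the name changes on the next boot. A minimal sketch of the re-add, assuming (as the mdadm output above indicates) that /dev/sdb is the surviving member, and additionally that the disks are the same size and the new disk is blank:

        # copy the surviving disk's partition table to the new disk
        sfdisk -d /dev/sdb | sfdisk /dev/sdc
        # add the new partition back into the degraded RAID-1 array
        mdadm --manage /dev/md0 --add /dev/sdc1
        # watch the rebuild progress
        watch cat /proc/mdstat

    The persistent symlinks under /dev/disk/by-id/ sidestep the naming question entirely, since they don't change when the kernel reorders sd* devices.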

    Read the article
