Search Results



  • PXE-E32 TFTP Open Timeout While Attempting to PXE Boot from Windows Deployment Services

    - by bschafer
    I'm running Windows Deployment Services on Windows Server 2008 R2 on top of an ESX 4.0 box. This is the only function of this VM instance, although it had previously functioned as an AD Domain Controller. My DHCP server is running on our primary Domain Controller, which is also Server 2008 R2, but running on metal. Everything was working perfectly until we recently had our backup generator fail during a power outage, causing all of our servers and networking equipment to lose power for a period of time. When we brought all of our equipment back up, everything was working as expected except for WDS. Our network is split up into several different vlans. Now, depending on which vlan the client computer is on, it's behaving differently when attempting to PXE boot into WDS. Our servers are located on the 10.55.x.x vlan, which, due to the nature of it, has no DHCP server active in it. The first computer we plugged in happened to be in the 10.99.x.x vlan, which is supposed to be reserved for network management devices (i.e. switches), but we've been using it occasionally otherwise. That computer gave us PXE-E11 ARP Timeout errors. When we moved to a different computer on the 10.19.x.x vlan (for general purpose use), it finally gets an IP from DHCP, but it presents us with a very stumping PXE-E32 TFTP Open Timeout error. Before the power outage, it didn't matter which vlan a device was on; it would PXE boot and image just fine. I've made no changes to anything server-side. Everything is configured exactly the same way it was on my WDS and DHCP servers as before the power outage. I've tried several different computers, including different models. All of this, combined with the quirky behavior depending on the vlan, makes me think something went wrong in one or more of our switches, probably because of the power outage. Unfortunately, I'm no network guy, and I know very little about how to configure our switches properly. Is this an issue with switches, etc? If so, how can I fix it? Is there some magical option I'm not aware of? Does anybody out there have any hunches? I've pretty much exhausted my ideas. Our main switch is an HP Procurve 5406. We also have 3x HP Procurve 4208 switches. The ESX Server is an HP ProLiant DL380 G6. The WDS VM is currently using the VMXNET3 network adaptor, but we've also tried the E1000 adaptor.


  • Best available technology for layered disk cache in linux

    - by SpliFF
    I've just bought a 6-core Phenom with 16G of RAM. I use it primarily for compiling and video encoding (and occasional web/db). I'm finding all activities get disk-bound and I just can't keep all 6 cores fed. I'm buying an SSD raid to sit between the HDD and tmpfs. I want to set up a "layered" filesystem where reads are cached on tmpfs but writes safely go through to the SSD. I want files (or blocks) that haven't been read lately on the SSD to then be written back to an HDD using a compressed FS or block layer. So basically:

    Reads:
        - Check tmpfs
        - Check SSD
        - Check HDD

    Writes:
        - Straight to SSD (for safety), then tmpfs (for speed)

    And periodically, or when space gets low:
        - Move the least frequently accessed files down one layer.

    I've seen a few projects of interest. CacheFS, cachefsd and bcache seem pretty close, but I'm having trouble determining which are practical. bcache seems a little risky (early adoption), and cachefs seems tied to specific network filesystems. There are "union" projects, unionfs and aufs, that let you mount filesystems over each other (a USB device over a DVD, usually), but both are distributed as a patch, and I get the impression this sort of "transparent" mounting was going to become a kernel feature rather than a FS.

    I know the kernel has a built-in disk cache, but it doesn't seem to work well with compiling. I see a 20x speed improvement when I move my source files to tmpfs. I think it's because the standard buffers are dedicated to a specific process and compiling creates and destroys thousands of processes during a build (just guessing there). It looks like I really want those files precached.

    I've read that tmpfs can use virtual memory. In that case, is it practical to create a giant tmpfs with swap on the SSD? I don't need to boot off the resulting layered filesystem; I can load grub, kernel and initrd from elsewhere if needed.

    So that's the background. The question has several components, I guess:
        - Recommended FS and/or block layer for the SSD and compressed HDD
        - Recommended mkfs parameters (block size, options etc.)
        - Recommended cache/mount technology to bind the layers transparently
        - Required mount parameters
        - Required kernel options / patches, etc.
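
    One of the pieces above -- a giant tmpfs backed by swap on the SSD -- is easy to prototype. The lines below are only a sketch: the device name /dev/sdb1 and the sizes are assumptions, not recommendations.

        # assumed layout: /dev/sdb1 is a partition on the SSD raid
        sudo mkswap /dev/sdb1
        sudo swapon -p 32767 /dev/sdb1      # highest priority, so it is used before any HDD swap
        sudo mkdir -p /mnt/build
        sudo mount -t tmpfs -o size=12G tmpfs /mnt/build
        # pages of /mnt/build that fall out of RAM now land on the SSD swap

    This only covers the tmpfs/SSD pair; it does nothing about migrating cold data down to a compressed HDD layer.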


  • IIS 7.5 FTPS external access - 534 Policy requires SSL

    - by markmnl
    I have set up an FTP site that requires SSL, but when I try to connect to it externally I get the error:

        220 Microsoft FTP Service
        534 Policy requires SSL.

    I know - I set it so! Why doesn't it fetch the SSL cert from the site and allow me to log on?! (Incidentally, beware of all the tutorials that Allow but do not Require SSL - while that will make the error go away, it will be because SSL is not being used!) I suspect I need a client that supports FTPS (FTP over SSL), and Windows Explorer just uses IE, which does not. But trying FileZilla and WinSCP I get a little further, and then it hangs on TLS/SSL negotiation expecting a response from the server...

    UPDATE: I have tried (from http://learn.iis.net/page.aspx/309/configuring-ftp-firewall-settings/):
        - Configuring the passive port range for the FTP service.
        - Configuring the external IPv4 address for the specific FTP site.
        - Configuring the firewall to allow the FTP service to listen on all ports that it opens.
        - Disabling stateful FTP filtering so that Windows Firewall will not block FTP traffic.

    And still I get (in FileZilla, trying both Active and Passive):

        Status:   Connecting to 203.x.x.x:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command:  AUTH TLS
        Response: 234 AUTH command ok. Expecting TLS Negotiation.
        Status:   Initializing TLS...
        Error:    Connection timed out
        Error:    Could not connect to server

    The Windows Firewall logs unhelpfully have nothing to say.

    UPDATE 2: Turning the firewall off does not resolve the problem. I cannot believe how difficult it is to get something so simple to work, and even after following the documentation it does not work.

    UPDATE 3: Running FileZilla locally and connecting through the loopback works in Active mode. In Passive mode I get as far as:

        Command:  LIST
        Response: 150 Opening BINARY mode data connection.
        Error:    GnuTLS error -53: Error in the push function.

    Turning the firewall off at both ends, I still cannot connect the client and get the same error as above.


  • Windows 7 is shutting down unexpectedly, according to the logs.

    - by dlamblin
    Here's a message from my eventvwr EventLog (Windows Logs System): The previous system shutdown at 11:51:15 AM on ?7/?29/?2009 was unexpected. This is funny because I was wondering why the system shut down while I was playing Civilizations IV full screen. Now I know. It was unexpected. Has anyone encountered and resolved this? A little background: I am running Windows 7 RC inside VMWare Fusion 2 (just updated a few months back) on a MacBook (Bitterly not Pro) aluminum-body. Windows 7 occasionally will shut down. This isn't a quick turn-off, it's a shutdown where all the programs are exited, the system waits until they quit (and Civ4 doesn't prompt me to save), it even installed Windows Updates before restarting. And yes it is restarting right after the shutdown. Because I run a game in full screen mode I do not notice any dialog with a countdown timer or anything like that that might be a warning. As I have iStat on my dashboard widgets I can see about 8 temperature monitors. I have seen the CPU get up to 74C before, but during the shutdown, though it seemed hot to the touch (always is), it read 61C for the CPU, 60C for heatsink A, 50C for heatsink B and in the 30s-40s for the enclosure and harddrives. As I type this now, the temps are actually higher, so I don't think the temperature caused it. I have at least six such events dating first from 5/17 which was a week after installing Windows 7. I did find one information level warning from USER32 in the system log that says: The process C:\Windows\system32\svchost.exe (DLAMBLIN-WIN7) has initiated the restart of computer DLAMBLIN-WIN7 on behalf of user NT AUTHORITY\SYSTEM for the following reason: Operating System: Recovery (Planned) Reason Code: 0x80020002 Shutdown Type: restart Comment: And another 15 minutes before that from Windows Update: Restart Required: To complete the installation of the following updates, the computer will be restarted within 15 minutes: - Cumulative Security Update for Internet Explorer 8 for Windows 7 Release Candidate for x64-based Systems (KB972260) Which I think kind of explains it. Though I don't know why restarting after an update would create an error event of "shutdown was unexpected", isn't that pretty odd? Now, how do I set it to never restart after an update unless I click something. Application of solution: As fretje reminded me, there's a couple of configurable settings for this, in windows 7 they're much in the same place as in Windows 2000 SP3 and XP SP1. Running gpedit.msc pops up a window that looks like: Windows 7 has changed the order and added a couple of newer options I've italicized: Do not display 'Install Updates and Shut Down' in Shut Down Windows dialog box Do not adjust default option to 'Install Updates and Shut Down' in Shut Down Windows dialog box Enabling Windows Power Management to automatically wake up the system to install scheduled updates Configure Automatic Updates Specify intranet Microsoft update service location Automatic Updates detection frequency Allow non-administrators to receive update notifications Turn on Software Notifications Allow Automatic Updates immediate installation Turn on recommended updates via Automatic Updates No auto-restart with logged-on users for scheduled Automatic Updates Re-prompt for restart with scheduled installations. Delay Restart for scheduled installations Reschedule Automatic Updates schedule


  • Getting macro keys from a razer blackwidow to work on linux

    - by Journeyman Geek
    I picked up a Razer BlackWidow Ultimate, which has additional keys meant for macros that are set using a tool installed on Windows. I'm assuming that these aren't some fancypants joojoo keys and should emit scancodes like any other keys. Firstly, is there a standard way to check these scancodes in Linux? Secondly, how do I set these keys to do things in command-line and X-based Linux setups? My current Linux install is Xubuntu 10.10, but I'll be switching to Kubuntu once I have a few things fixed up. Ideally the answer should be generic and system-wide.

    Things I have tried so far:
        - showkey from the built-in kbd package (in a separate VT) - macro keys not detected
        - xev - macro keys not detected
        - lsusb and evdev output
        - this AHK script's output suggests the M keys are not outputting standard scancodes

    Things I need to try:
        - SnoopyPro + reverse engineering (oh dear)
        - Wireshark - preliminary futzing around seems to indicate no scancodes are emitted when what I think is the keyboard is monitored and keys are pressed. This might indicate the additional keys are a separate device or need to be initialised somehow. I need to cross-reference that with lsusb output from Linux in 3 scenarios: standalone, passed through to a Windows VM without the drivers installed, and the same with them. lsusb only detects one device on a standalone Linux install.
        - It might be useful to check if the mice use the same Razer Synapse driver, since that means some variation of razercfg might work (not detected; it only seems to work for mice).

    Things I have worked out:
        - On a Windows system with the driver, the keyboard is seen as a keyboard and a pointing device. And said pointing device uses, in addition to your bog-standard mouse drivers, a driver for something called a Razer Synapse.
        - The mouse driver is seen in Linux under evdev and lsusb as well.
        - It is a single device under OS X, apparently, though I have yet to try the lsusb equivalent on that.
        - The keyboard goes into pulsing-backlight mode in OS X upon initialisation with the driver. This probably indicates that there's some initialisation sequence sent to the keyboard on activation.
        - They are, in fact, fancypants joojoo keys.

    Extending this question a little: I have access to a Windows system, so if I need to use any tools on that to help answer the question, that's fine. I can also try it on systems with and without the config utility. The expected end result is still to make those keys usable on Linux, however. I also realise this is a very specific family of hardware. I would be willing to test anything that makes sense on a Linux system if I have detailed instructions - this should open up the question to people who have Linux skills but no access to this keyboard.

    The minimum end result I require: I need these keys detected, and usable in any fashion, on any of the current mainstream graphical Ubuntu variants.
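
    A quick way to see whether the extra keys produce any input events at all is to watch each event device directly. This is a minimal sketch; it assumes the evtest package is available, and the event node numbers will differ on your machine.

        ls -l /dev/input/by-id/                  # find the *-event-kbd nodes for the BlackWidow
        sudo apt-get install evtest
        sudo evtest /dev/input/event3            # pick the keyboard's node, then press the M keys
        # if nothing appears, repeat against the keyboard's other event node(s) -
        # the macro keys may be exposed as a second HID interface

    If evtest shows events with unknown key codes, they can usually be mapped with setkeycodes (console) or an xmodmap/udev keymap (X); if it shows nothing on any node, the keys really do need whatever initialisation the Windows driver performs.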


  • swapping or thrashing with vast amounts of unmapped pagecache

    - by Marco
    EDIT: I noticed that this is more appropriate for superuser.com, I apologize. I don't know how to delete this question.

    I'm using Kubuntu Jaunty (i386, 32-bit), kernel 2.6.28-13-generic. I've 4Gb of RAM, of which only 3317Mb are seen by the system (I guess because of the 32-bit system). I'm seeing that the pagecache utilization is continually growing, up to the point that the system is unusable (after a few days). This happens also when I don't do anything (all user applications closed and the bare minimum of services enabled). If enabled, the system starts to use swap space (using it all in the end). Even if swap is disabled, disk activity becomes continuous, with the system unresponsive. For example, right now the system is working (albeit a tad slow), with only Firefox and Wing IDE running, and I have 2Gb cached with only 45Mb mapped:

        $ free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3247328      99060          0       8416    2117980
        -/+ buffers/cache:    1120932    2225456
        Swap:      2144668     519448    1625220

        $ cat /proc/meminfo
        MemTotal:        3346388 kB
        MemFree:           97128 kB
        Buffers:            7872 kB
        Cached:          2120224 kB
        SwapCached:       413860 kB
        Active:          2304596 kB
        Inactive:         865984 kB
        Active(anon):    2279168 kB
        Inactive(anon):   830236 kB
        Active(file):      25428 kB
        Inactive(file):    35748 kB
        Unevictable:          32 kB
        Mlocked:              32 kB
        HighTotal:       2492940 kB
        HighFree:           5456 kB
        LowTotal:         853448 kB
        LowFree:           91672 kB
        SwapTotal:       2144668 kB
        SwapFree:        1625244 kB
        Dirty:                84 kB
        Writeback:             0 kB
        AnonPages:        629304 kB
        Mapped:            45768 kB
        Slab:              45600 kB
        SReclaimable:      21756 kB
        SUnreclaim:        23844 kB
        PageTables:         4468 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     3817860 kB
        Committed_AS:    3735020 kB
        VmallocTotal:     122880 kB
        VmallocUsed:        9352 kB
        VmallocChunk:      66600 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       4096 kB
        DirectMap4k:       16376 kB
        DirectMap4M:      888832 kB

    If I try to drop the caches, little happens:

        # sync ; echo 3 > /proc/sys/vm/drop_caches ; free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3220580     125808          0       3020    2100600
        -/+ buffers/cache:    1116960    2229428
        Swap:      2144668     519356    1625312

    Right now I've vm.swappiness = 5, but I've tried also with 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said, the pagecache eats all memory even with swapping turned off.

    What is happening? How do I avoid this?

    TIA, Marco
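
    When the numbers above don't add up, it can help to watch where the memory is actually going over time. A minimal sketch (slabtop ships with the procps package on most distributions; the intervals are arbitrary):

        vmstat 5                          # watch the si/so and bi/bo columns while the box degrades
        sudo slabtop -o | head -n 20      # largest kernel slab caches (dentries, inodes, etc.)
        df -h -t tmpfs                    # a growing tmpfs or /dev/shm also shows up as "cached"
        sudo lsof +L1 | head              # deleted-but-open files that still pin pagecache/disk space

    None of these change anything; they just narrow down whether the growth is reclaimable cache, slab, tmpfs or anonymous memory.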


  • svn 503 error when commit new files

    - by philipp
    I am struggling with a strange error when I try to commit to my repository. I have a V-server with Webmin installed on it. Via Webmin I installed an svn module, created repositories, and everything worked fine until three days ago. Trying to commit brings the following error:

        Commit failed (details follow):
        Server sent unexpected return value (503 Service unavailable) in response to PROPFIND request for '/svn/rle/!svn/wrk/a1f963a7-0a33-fa48-bfde-183ea06ab958/RLE/.htaccess'
        Server sent unexpected return value (503 Service unavailable) in response to PROPFIND request for '/svn/rle/RLE/.htaccess'

    I googled everywhere and found only very few solutions. One indicated that a wrong error document is set, another dealt with the possibility that filenames might cause this error, and last but not least a wrong proxy configuration in the local svn config could be the reason. After trying all of the suggested solutions I got nowhere. Only after a server reboot was there a little difference in the error message, telling me that the server was not able to move a temp file because the operation was not permitted. So I also checked the permissions of the svn directory, but again with no success. An svn update then restored the "normal" error from above, and nothing has changed since then.

    The only change I made on the server - and I guess this could be the reason why svn does not work anymore - was to install the php5_mysql module for Apache via apt-get install php5_mysql. At the moment I have totally no idea where to search. I don't know if the problem is on my server or in my repository, and I would be glad to get any hint to solve this. Thanks in advance, greetings, philipp

    Error log:

        [Tue Oct 25 19:23:02 2011] [error] [client 217.50.254.18] Could not create activity /svn/rle/!svn/act/d8dd436f-d014-f047-8e87-01baac46a593. [500, #0]
        [Tue Oct 25 19:23:02 2011] [error] [client 217.50.254.18] could not begin a transaction [500, #1]
        [Tue Oct 25 19:24:21 2011] [error] [client 217.50.254.18] Could not create activity /svn/rle/!svn/act/adac52c2-6f46-f540-b218-2f2ff03b51a4. [500, #0]

    http.conf:

        <Location /svn>
            DAV svn
            SVNParentPath /home/xxx/svn
            AuthType Basic
            AuthName xxx.de
            AuthUserFile /home/xxx/etc/svn.basic.passwd
            Require valid-user
            AuthzSVNAccessFile /home/xxx/etc/svn-access.conf
            Satisfy Any
            ErrorDocument 404 default
            RewriteEngine off
        </Location>

    The permissions for the repository directory are rwxrwxrwx (0777). The directory /svn/rle/!svn/act/adac52c2-6f46-f540-b218-2f2ff03b51a4 does not exist on the server; I think this is part of the repository.

    I just want to add that I tried to reach the repository via browser and it worked - I could see everything - so the error only occurs when I try to commit new files. I also created a second repository and tried to commit files in there, which gave me the same error.
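
    Since the 500s say "could not create activity", the first things worth checking are the Apache error log around a failed commit and who actually owns the repository's db directories. A rough sketch - the www-data user and the exact paths are assumptions based on a Debian/Ubuntu-style Apache and the SVNParentPath above:

        sudo tail -n 50 /var/log/apache2/error.log
        ls -ld /home/xxx/svn/rle/db /home/xxx/svn/rle/db/transactions
        ps aux | grep apache | head -n 2                     # confirm which user Apache runs as
        sudo chown -R www-data:www-data /home/xxx/svn/rle    # only if that user doesn't already own it

    0777 on the top-level directory doesn't help if a parent directory in the path isn't traversable by the Apache user, or if files created earlier by root sit deeper in the tree with restrictive ownership.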


  • Setting up a transparent SSL proxy

    - by badunk
    I've got a Linux box set up with 2 network cards to inspect traffic going through port 80. One card is used to go out to the internet; the other one is hooked up to a network switch. The point is to be able to inspect all HTTP and HTTPS traffic on devices hooked up to that switch for debugging purposes. I've written the following rules for iptables:

        *nat
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
        -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On 192.168.2.1:1337, I've got a transparent HTTP proxy using Charles (http://www.charlesproxy.com/) for recording. Everything's fine for port 80, but when I add similar rules for port 443 (SSL) pointing to port 1337, I get an error about an invalid message through Charles. I've used SSL proxying on the same computer before with Charles (http://www.charlesproxy.com/documentation/proxying/ssl-proxying/), but have been unsuccessful doing it transparently for some reason. Some resources I've googled say it's not possible - I'm willing to accept that as an answer if someone can explain why. As a note, I have full access to the described setup, including all the clients hooked up to the subnet - so I can accept self-signed certs issued by Charles. The solution doesn't have to be Charles-specific since, in theory, any transparent proxy will do. Thanks!

    Edit: After playing with it a little, I was able to get it working for a specific host. When I modify my iptables to the following (and open 1338 in Charles for reverse proxying):

        *nat
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.2.1:1338
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 1338
        -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    I am able to get a response, but with no destination host. In the reverse proxy, if I just specify that everything from 1338 goes to a specific host that I want to hit, it performs the handshake properly and I can turn on SSL proxying to inspect the communication. The setup is less than ideal, because I don't want to assume everything from 1338 goes to that host - any idea why the destination host is being stripped? Thanks again
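
    On why the destination looks "stripped": once REDIRECT/DNAT rewrites the packet, the only place the original destination survives is the kernel's connection-tracking table (a transparent proxy normally recovers it with the SO_ORIGINAL_DST socket option, which a plain reverse-proxy listener doesn't ask for). A small sketch for confirming this from the shell - it assumes the conntrack-tools package is installed:

        sudo iptables -t nat -L PREROUTING -n -v --line-numbers   # confirm the 443 rules are matching
        sudo conntrack -L -p tcp | grep 'dport=443' | head        # left side shows the client's original dst,
                                                                  # right side the rewritten 192.168.2.1:1338

    So the information is still there; the limitation is that the proxy on 1338 has to look it up per connection rather than receiving it in the TCP stream.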


  • Sendmail - Multiple Domains, One Box - Blocking One Or Two Domains

    - by TangoOversway
    I have a number of domains hosted at a web hosting service. They use sendmail to handle incoming email. I have six domains on this service (which we can call aaa.com, bbb.com and so on). Each email account has the same name and one email box. In other words, tango@aaa.com, tango@bbb.com, tango@ccc.com and all the others go into one box, /var/spool/mail/tango, where my email program on my desktop picks it up.

    I have done very little work in sendmail. I haven't had to, and I've been warned it has a steep learning curve. But now I'm running into an issue. I was in a business situation where, for years, my email address was on the website for aaa.com. (We won't go into why this was necessary - it wasn't my preference and it's in the past.) Now I'm using the same account on another of the domains, say tango@bbb.com, instead of tango@aaa.com. I was getting about 1,000 or more pieces of spam a day, but SpamAssassin and my own email program caught about 75% of that. (Which still left stuff to delete.) Now, after checking, I see that 90% or more goes to tango@aaa.com, the one that was on the web for years.

    I'd like to deactivate tango@aaa.com, and possibly the same account on one or two of the other domains, but I want to keep using tango@bbb.com. Remember, email to tango at any of these domains will go into one email box. I've had people tell me that sendmail can be configured so I can deactivate tango@aaa.com (and other domains) and still use tango@bbb.com (and others, if I want to). In other words, I can configure sendmail to use this account on some domains and not others. One of the people telling me this was in tech support at the hosting service. But I wrote to tech support with a work order to do this, and now I'm told it can't be done. I can modify config files myself on this account if needed, but I was hoping to just let them do it. (I love delegation -- it means I spend more time doing my stuff.)

    Is it possible to keep an email account active on one domain and not others with sendmail, when all domains are hosted on the same server? Is there a name for this process or setting? Any information would be helpful - either pointers to instructions so I can do it, or enough info so I can tell tech support, "This is where to look, and it can be done, so please pass my request on to someone who works with sendmail and knows how to do it." Is this something sendmail can do?
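
    For what it's worth, the usual sendmail name for this is the virtual user table. The lines below are only a sketch of the idea - they assume the host's sendmail.mc already enables FEATURE(`virtusertable') and that the map lives in /etc/mail, neither of which is stated above:

        # /etc/mail/virtusertable
        tango@aaa.com    error:nouser 550 Address no longer in use
        tango@ccc.com    error:nouser 550 Address no longer in use
        tango@bbb.com    tango

        # rebuild the map and restart sendmail
        cd /etc/mail
        sudo makemap hash virtusertable < virtusertable
        sudo /etc/init.d/sendmail restart

    The error: entries reject mail to that account on specific domains with a permanent 550, while entries mapping to "tango" (and any domains left unlisted) keep delivering to the one local mailbox.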


  • Automating the choice between JPEG and PNG with a script

    - by MHC
    Choosing the right format to save your images in is crucial for preserving image quality and reducing artifacts. Different formats follow different compression methods and come with their own sets of advantages and disadvantages. JPG, for instance, is suited to real-life photographs that are rich in color gradients. The lossless PNG, on the other hand, is far superior when it comes to schematic figures. Picking the right format can be a chore when working with a large number of files, which is why I would love to find a way to automate it.

    A little background on my particular use case: I am working on a number of handouts for a series of lectures at my university. The handouts are rich in figures, which I have to extract from PDF-formatted slides. Extracting these images gives me lossless PNGs, which are needlessly large at times. Converting these particular files to JPEG can reduce their size to less than 20% of the original file size while maintaining the same quality. This is important, as working with hundreds of large images in word processors is pretty crash-prone.

    Batch-converting all extracted PNGs to JPEGs is not an option I am willing to follow, as many if not most images are better suited to staying PNGs. Converting these would result in insignificant size reductions and sometimes even increases in file size - that's at least what my test runs showed. What we can take from this is that file size after compression can serve as an indicator of which format is suited best for a particular image. It's not a particularly accurate predictor, but it works well enough. So why not use it in the form of a script?

    I included inotifywait because I would prefer the script to be executed automatically as soon as I drag an extracted image into a folder. This is a simpler version of the script that I've been using for the last couple of weeks:

        #!/bin/bash
        inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' | while read file; do
            mogrify -format jpg -quality 92 "$file"
        done

    The advanced version of the script would have to be able to:
        - handle spaces in file names and directory names
        - preserve the original file names
        - flatten PNG images if an alpha value is set
        - compare the file size between the temporary converted image and its original
        - determine if the difference is greater than a given percentage
        - act accordingly

    The actual conversion could be done with ImageMagick tools:

        convert -quality 92 -flatten -background white file.png file.jpg

    Unfortunately, my bash skills aren't anywhere near advanced enough to turn the scheme above into an actual script, but I am sure many of you can. My reputation points on here are pretty low, but I will gladly award the most helpful answer with the highest bounty I can set.

    References: http://www.formortals.com/introducing-cnb-imageguide/, http://www.turnkeylinux.org/blog/png-vs-jpg

    Edit: Also see my comments below for some more information on why I think this script would be the best solution to the problem I am facing.
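
    Purely as a sketch of the scheme above - not a drop-in replacement for the watcher already shown - something along these lines could do the size comparison. The 30% threshold, the watch directory and the decision to delete the losing file are all assumptions to adjust:

        #!/bin/bash
        # Sketch: convert each new PNG to a temporary JPEG and keep whichever is
        # meaningfully smaller. THRESHOLD and WATCHDIR are made-up values.
        THRESHOLD=30                        # required saving, in percent
        WATCHDIR="$HOME/extracted-figures"

        inotifywait -m -r --format '%w%f' -e create -e moved_to "$WATCHDIR" |
        while IFS= read -r file; do
            case "$file" in *.png|*.PNG) ;; *) continue ;; esac
            tmp="$(mktemp --suffix=.jpg)"
            # flatten transparency onto white, then compress
            convert "$file" -background white -flatten -quality 92 "$tmp" || { rm -f "$tmp"; continue; }
            png_size=$(stat -c%s "$file")
            jpg_size=$(stat -c%s "$tmp")
            [ "$png_size" -gt 0 ] || { rm -f "$tmp"; continue; }
            saved=$(( (png_size - jpg_size) * 100 / png_size ))
            if [ "$saved" -ge "$THRESHOLD" ]; then
                mv "$tmp" "${file%.*}.jpg" && rm -f "$file"    # JPEG wins: keep it, drop the PNG
            else
                rm -f "$tmp"                                   # PNG wins: leave it untouched
            fi
        done

    File names keep their base name and only the extension changes; whether to delete the original PNG when the JPEG wins is a policy choice, not something the comparison itself requires.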


  • Confusion on networking service start/stop in Ubuntu

    - by Daniel Ball
    I'm preparing to move and took down two of my servers, leaving only one with some essential services running. What I neglected to consider was that one of them was the DHCP server (which I realized when somebody contacted me saying they couldn't connect. Whups). So, because I only have a few hosts on this small network, I opted to just statically configure them for now. One of these is a new Ubuntu 11.04 server, where I have very little experience.

    I edited /etc/network/interfaces and /etc/hosts to reflect my changes. I ran:

        $ sudo /etc/init.d/networking stop
         * Deconfiguring network interfaces...

    So yay. Then I try to start, and it gives me the mumbo jumbo about using the service command (why didn't it do that for the stop?). So instead I run:

        $ sudo service networking start
        networking stop/waiting

    Now, to me that says the status of the service is stopped. But when I ping another computer, I get a successful reply. So is it not actually stopped? More importantly, am I doing something wrong?

    Edit:

        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking stop
        stop: Unknown instance:
        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking start
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting

    So you can see why I ran /etc/init.d/networking stop instead. For some reason upstart (that is what "service" is, right?) isn't working with stop.

        $ cat /etc/hosts
        127.0.0.1       localhost
        127.0.1.1       FOOBAR
        198.3.9.2       FOOBAR        #Added entry July 19 2011

        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

        $ cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        #auto eth0
        #iface eth0 inet dhcp
        # hostname FOOBAR
        auto eth0
        iface eth0 inet static
                address 198.3.9.2
                netmask 255.255.255.0
                network 198.3.9.0
                broadcast 198.3.9.255
                gateway 198.3.9.15

    No, I didn't save backups; it was just a minor change, so I just commented out the old DHCP setting.

    Edit: I set everything back to the original settings and set up a DHCP server. "Starting" networking does the same thing. I can only assume this is normal, I just don't know WHY. It can't be anything to do with the configuration files, since they've been restored.
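
    A small aside that may explain the confusion: on 11.04 the "networking" upstart job is mostly a boot-time hook, so its stop/waiting status doesn't say whether eth0 is configured. Individual interfaces are brought up and down with ifup/ifdown - a quick sketch:

        sudo ifdown eth0        # deconfigure just this interface
        sudo ifup eth0          # re-read /etc/network/interfaces and bring it back
        ip addr show eth0       # confirm the static address took effect
        ip route show           # confirm the default gateway

    That would be why pinging still works after the job reports stop/waiting: the interface itself was never taken down.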


  • Linux Software RAID recovery

    - by Zoredache
    I am seeing a discrepancy between the output of mdadm --detail and mdadm --examine, and I don't understand why. This output:

        mdadm --detail /dev/md2
        /dev/md2:
                Version : 0.90
          Creation Time : Wed Mar 14 18:20:52 2012
             Raid Level : raid10
             Array Size : 3662760640 (3493.08 GiB 3750.67 GB)
          Used Dev Size : 1465104256 (1397.23 GiB 1500.27 GB)
           Raid Devices : 5
          Total Devices : 5
        Preferred Minor : 2
            Persistence : Superblock is persistent

    seems to contradict this (the same for every disk in the array):

        mdadm --examine /dev/sdc2
        /dev/sdc2:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 1f54d708:60227dd6:163c2a05:89fa2e07 (local to host)
          Creation Time : Wed Mar 14 18:20:52 2012
             Raid Level : raid10
          Used Dev Size : 1465104320 (1397.23 GiB 1500.27 GB)
             Array Size : 2930208640 (2794.46 GiB 3000.53 GB)
           Raid Devices : 5
          Total Devices : 5
        Preferred Minor : 2

    The array was created like this:

        mdadm -v --create /dev/md2 \
            --level=raid10 --layout=o2 --raid-devices=5 \
            --chunk=64 --metadata=0.90 \
            /dev/sdg2 /dev/sdf2 /dev/sde2 /dev/sdd2 /dev/sdc2

    Each of the 5 individual drives has partitions like this:

        Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00057754

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1            2048       34815       16384   83  Linux
        /dev/sdc2           34816  2930243583  1465104384   fd  Linux raid autodetect

    Backstory: The SATA controller failed in a box I provide some support for. The failure was an ugly one, and individual drives fell out of the array a little at a time. While there are backups, they are not done as frequently as we really need. There is some data that I am trying to recover if I can.

    I got additional hardware and I was able to access the drives again. The drives appear to be fine, and I can get the array and filesystem active and mounted (using read-only mode). I am able to access some data on the filesystem and have been copying that off, but I am seeing lots of errors when I try to copy the most recent data. When I am trying to access that most recent data I am getting errors like the ones below, which makes me think that the array size discrepancy may be the problem.

        Mar 14 18:26:04 server kernel: [351588.196299] dm-7: rw=0, want=6619839616, limit=6442450944
        Mar 14 18:26:04 server kernel: [351588.196309] attempt to access beyond end of device
        Mar 14 18:26:04 server kernel: [351588.196313] dm-7: rw=0, want=6619839616, limit=6442450944
        Mar 14 18:26:04 server kernel: [351588.199260] attempt to access beyond end of device
        Mar 14 18:26:04 server kernel: [351588.199264] dm-7: rw=0, want=20647626304, limit=6442450944
        Mar 14 18:26:04 server kernel: [351588.202446] attempt to access beyond end of device
        Mar 14 18:26:04 server kernel: [351588.202450] dm-7: rw=0, want=19973212288, limit=6442450944
        Mar 14 18:26:04 server kernel: [351588.205516] attempt to access beyond end of device
        Mar 14 18:26:04 server kernel: [351588.205520] dm-7: rw=0, want=8009695096, limit=6442450944
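
    A few read-only checks can help pin down which size the running array is actually using, versus what the superblocks and the dm/LVM layer on top of it think. These only read; the device names are taken from the question:

        for d in /dev/sd{c,d,e,f,g}2; do
            echo "== $d"
            sudo mdadm --examine "$d" | grep -E 'Array Size|Used Dev Size|Update Time|Events'
        done
        sudo blockdev --getsz /dev/md2      # size of md2 in 512-byte sectors as the kernel sees it
        sudo dmsetup table                  # dm-7's table length should match, not exceed, that number
        cat /proc/mdstat

    Comparing the Events counters across members also shows whether one superblock is stale from the controller failure, which would be one way for --detail and --examine to disagree.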


  • How can you know what w3wp.exe is doing? (or how to diagnose a performance problem)

    - by Daniel Magliola
    I'm having a performance problem on a site we've made, and I'm not exactly sure how to start diagnosing it. The short description is: we have a very small site (http://hearablog.com) with very little traffic, on a crappy dedicated server; CPU is always very high, sometimes staying at 100% for minutes, and w3wp.exe is taking most of it. A typical scenario is w3wp.exe taking 60% and SQL Server taking about 30%. Our DB is pretty small too.

    Long description and more details: The site is hosted on a very crappy server by Cari.Net. From the beginning we had the feeling that the server didn't quite behave correctly, like some things would take just too long, so this could be a configuration problem from the get-go. It may also be that we are getting a virtual server while we're supposed to have a dedicated one, although we have no evidence that'd indicate this, except for the fact that the server tends to be quite slow.

    The server is Windows 2008 Standard 64-bit, with SQL 2008 Express. Hardware is a Celeron 2.80 GHz with 1Gb RAM. The website is developed in ASP.Net MVC, using Entity Framework for data access.

    Now, this is pretty crappy hardware, but I've had other servers with these guys, with equivalent (or worse) hardware, and performance is much better than this one. That said, the other servers have W2003 and SQL 2005, and I'm using ASP.Net "WebForms" 2.0, no MVC, no LINQ, no EF; so I'm not sure whether going to 2008 / the other stuff means a big performance penalty is to be expected.

    I'm serving MP3 files (5-20 Mb) regularly, which is a slightly unusual load; maybe that is causing some kind of problem? Would that cause w3wp to use a lot of CPU? Disk usage seems very low. Memory is usually around 90%, but disk usage seems to indicate it's not paging much. I get tons of e-mails every day about SQL timeouts, for queries taking over 30 seconds, although all our queries are pretty straightforward (or should be, but EF may be screwing it up).

    This is what Resource Monitor looks like in one of these "sprints" of 100% CPU, in case there's anything useful there. And a snapshot of some performance counters:

    Now, what confuses me very much is that the CPU usage of w3wp is just so high. It shouldn't be doing much, really... So my questions are:
        - Is there any way of finding out "what" it is doing? Maybe even profile it?
        - Any performance counters I should be looking at?
        - Is this to be expected given this hardware/software configuration?
        - Could this be caused by some kind of configuration failure? If so, where would you start looking?

    Thank you VERY much.
    Daniel Magliola


  • Outlook Anywhere inconsistencies with authentication methods

    - by gravyface
    So I've read this question and attempted just about every other workaround I've found online. Problem seems completely illogical to me, anyways: SBS 2011, vanilla install; haven't touched anything in IIS or Exchange outside of what's been done through the checklist (brand new domain, completely new customer) except to import an existing wildcard certificate for *.example.com (which is valid, Remote Web Workplace and Outlook Web Access work fine). On the two test machines and one production machine running a mixture of Windows XP Pro, Windows 7 and Outlook 2003 through to 2010, I've had no problem saving the password after configuring Outlook Anywhere using the wrong authentication method. I repeat, I have had no issues using the wrong authentication method on these test machines; password saves the first time, no problem, can verify it exists in the credentials manager (Start Run control userpasswords2), close Outlook, reboot, go make a sammie, come back, credentials are still saved. When I say wrong, it's because I was choosing NTLM and Exchange (under Exchange Console Server Configuration Client Access) was set by default to use Basic. On two completely different machines setup by a co-worker, they had (under my guidance) used NTLM as well... except that frustratingly, Outlook would always ask for a password. One machine was Windows XP with Outlook 2010, the other was Windows 7 with Outlook 2003. When these two machines were set to use Basic -- the correct settings -- the option to save was there and now works without issue. Puzzled by how my machines could possibly work with the wrong authentication, I then went into one of them and changed the authentication method to Basic. Now here's where it gets a little crazy: if I go under Outlook and change the authentication to use the correct setting (Basic) it fails to save the password and Outlook prompts every time (without a "remember me" checkbox). I have not had a chance to change it to Basic on the other two machines to see if this is just a fluke or not, but something just isn't right here. My two hunches are either a missing/installed KB Update or perhaps a local security policy. I should add that none of the 5 test machines in the equation here have ever been joined to the domain.


  • Ubuntu and Belkin N150 f6d4050 Wireless USB adapter v2

    - by Andrew
    I'm new to Ubuntu, and I'm trying to get my Belkin USB adapter to work. There are plenty of discussions out there already about this, but none really helped me out. Here's what I've done - Installed ndiswrapper Installed ndisgtk Installed the driver (rt2870.inf) via ndisgtk ndisgtk reported that the driver was installed and the hardware was present. The green light on the adapter is solid green, which I assume means that Ubuntu is aware of it's presence. However, when I click the little wireless symbol at the navigation bar, there's no option to choose my adapter (assuming that it's supposed to show up there...) My adapter version is F6D4050 - Where do I go from here? I'm a Ubuntu newb, so speak slowly. :P lsusb - andrew@ubuntu:~$ lsusb Bus 002 Device 003: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser Bus 002 Device 002: ID 04f9:0229 Brother Industries, Ltd Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 004: ID 050d:935b Belkin Components Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub lsmod - andrew@ubuntu:~$ lsmod Module Size Used by binfmt_misc 7960 1 fbcon 39270 71 tileblit 2487 1 fbcon font 8053 1 fbcon bitblit 5811 1 fbcon softcursor 1565 1 bitblit vga16fb 12757 0 vgastate 9857 1 vga16fb snd_cmipci 37557 2 snd_intel8x0 31155 2 snd_ac97_codec 125394 1 snd_intel8x0 ac97_bus 1450 1 snd_ac97_codec snd_mpu401 6875 0 snd_pcm_oss 41394 0 snd_mixer_oss 16299 1 snd_pcm_oss snd_pcm 87882 4 snd_cmipci,snd_intel8x0,snd_ac97_codec,snd_pcm_oss snd_opl3_lib 10846 1 snd_cmipci snd_hwdep 6924 1 snd_opl3_lib snd_mpu401_uart 6857 2 snd_cmipci,snd_mpu401 snd_seq_dummy 1782 0 snd_seq_oss 31219 0 snd_seq_midi 5829 0 snd_rawmidi 23420 2 snd_mpu401_uart,snd_seq_midi snd_seq_midi_event 7267 2 snd_seq_oss,snd_seq_midi snd_seq 57481 6 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event nouveau 515227 2 ttm 60847 1 nouveau snd_timer 23649 3 snd_pcm,snd_opl3_lib,snd_seq snd_seq_device 6888 6 snd_opl3_lib,snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq ns558 3704 0 ppdev 6375 0 drm_kms_helper 30742 1 nouveau joydev 11072 0 ndiswrapper 244768 0 gameport 10966 3 snd_cmipci,ns558 usblp 12407 0 asus_atk0110 10033 0 parport_pc 29958 1 serio_raw 4918 0 drm 199204 4 nouveau,ttm,drm_kms_helper i2c_algo_bit 6024 1 nouveau edac_core 45423 0 edac_mce_amd 9278 0 k8temp 3912 0 snd 71106 23 snd_cmipci,snd_intel8x0,snd_ac97_codec,snd_mpu401,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_opl3_lib,snd_hwdep,snd_mpu401_u art,snd_seq_oss,snd_rawmidi,snd_seq,snd_timer,snd_seq_device soundcore 8052 1 snd snd_page_alloc 8500 2 snd_intel8x0,snd_pcm i2c_nforce2 6099 0 lp 9336 0 parport 37160 3 ppdev,parport_pc,lp hid_logitech 8820 0 ff_memless 5109 1 hid_logitech ohci1394 30260 0 usbhid 41084 1 hid_logitech hid 83440 2 hid_logitech,usbhid usb_storage 49833 0 skge 41049 0 ieee1394 94771 1 ohci1394 sata_sil 8895 0 forcedeth 55592 0 sata_nv 23778 1 pata_amd 11962 1 floppy 63156 0
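
    Before going further with ndiswrapper, it can save time to check whether any native kernel driver claims the stick at all - the USB ID shown above (050d:935b) is what determines that, not the Belkin branding. A generic sketch; nothing here is specific to this adapter:

        lsusb -d 050d:935b                      # confirm the ID
        usb-devices | grep -A 10 -i belkin      # the "Driver=" line shows which module, if any, has bound
        sudo modprobe -r ndiswrapper            # take ndiswrapper out of the picture while testing
        dmesg | tail -n 30                      # look for firmware or driver messages after replugging
        rfkill list                             # make sure the radio isn't soft- or hard-blocked

    If Driver=(none) shows up for the wireless interface, the .inf chosen for ndiswrapper may simply be for the wrong chipset revision - the v2 hardware is a different chip from the v1 - so the matching 32/64-bit .inf from Belkin's own package for this exact version is the one to feed ndisgtk.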


  • Multi-Role Domain Controllers for Small Offices (< 50 clients)

    - by kce
    Warning: I'm a Linux/*NIX admin so this is all new to me. I understand that it's not considered a good idea to have only a single domain controller, and that it is also probably a good idea for a domain controller to only do AD/DHCP/DNS (Here). We have two offices, location A with 30 users and location B with 10 users. Our two offices are separated by a WAN that is not particularly robust so I have be instructed that we need to have standalone services in each office. This means that according to "best practices" we will need to build a domain controller and a separate file server in each office. Again, I am not knowledgeable in the ways of Windows but this seems a little unnecessary for an organization of 40 users. People have commented that I could "get away with" running file services on the domain controller as long as the "load is light". That just seems to generate more questions than it answers. What constitutes light load? What are the potential consequences of mixing these roles? Ideally I would prefer to only have one physical machine at each location. The one in location A (the location with IT staff) can act as the primary domain controller and the one in the smaller office can act as the backup domain controller. If either domain controller fails we can still use the other one for authentication (albeit with some latency) and if the WAN connection fails each office still has access to their respective "local" domain controller. If the file services are ALSO run on each server (and synchronized with something like DFS), a similar arrangement in terms of redundancy can be had without having to purchase, build and install two additional separate servers. It's not that I'm adverse to that (well, any more adverse than I am to whole thing to begin with) but to my simple mind it just seems, well a bit overkill. I can definitely see the benefits of functional separation when we're talking larger organizations, but I need to consider the additional overhead too. None of this excludes having a DRP setup for the domain controller/s. I assume you can lose two domain controllers just as easily as one.


  • Virtual Network Interface and NAT disables localhost access for MySQL and Apache

    - by Interarticle
    I'm running an Ubuntu Server 12.04, and recently I configured it to do NAT for my laptop. Since the server has only one NIC, I followed instructions online to create a virtual network device (eth0:0) that has a LAN IP address, then further configured iptables and UFW to allow internet sharing. However, just a few days ago, I discovered that one of the PHP pages hosted on the server failed for no apparent reason. A little digging revealed that the MySQL server started refusing connections from localhost. The same happened with a page (PhpMyAdmin) that was configured to be accessible only from localhost (in Apache2). The error, as shown by

        $ mysql --protocol=tcp -u root -p

    looks like

        ERROR 1130 (HY000): Host '<host name of eth0>' is not allowed to connect to this MySQL server

    However, the funny thing is, I configured the MySQL server to allow root access from localhost (only). Moreover, the MySQL server listens only on 127.0.0.1:3306, as shown by:

        $ sudo netstat -npa | head
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
        tcp        0      0 127.0.0.1:3306     0.0.0.0:*          LISTEN   1029/mysqld

    which means that the connection could have only come from 127.0.0.1. (Note that MySQL is working, because I can still connect to it via unix domain sockets.) In effect, it seems that all tcp connections originating from 127.0.0.1 to 127.0.0.1 appear to any local daemon to come from the eth0 IP address. Indeed, apache2 allowed me to access PhpMyAdmin after I added allow <eth0 IP address>.

    The following are my network configurations (redacted):

        # /etc/hosts
        127.0.0.1       localhost
        211.x.x.x       <host name of eth0> <server name>
        #IPv6 Defaults follows ....

        # /etc/network/interfaces
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
                address 211.x.x.x
                netmask 255.255.255.0
                gateway 211.x.x.x
                dns-nameservers 8.8.8.8
                # dns-* options are implemented by the resolvconf package, if installed
                dns-search xxxxxxx.com
                hwaddress ether xx:xx:xx:xx:xx:xx

        auto eth0:0
        iface eth0:0 inet static
                address 192.168.57.254
                netmask 255.255.254.0
                broadcast 192.168.57.255
                network 192.168.57.0

        # /etc/ufw/sysctl.conf
        # Uncommented the following lines
        net/ipv4/ip_forward=1
        net/ipv6/conf/default/forwarding=1

        # /etc/default/ufw
        DEFAULT_FORWARD_POLICY="ACCEPT"    # Changed DROP to ACCEPT

        # /etc/init/internet-sharing.conf (upstart script I wrote), section pre-start script
        iptables -A FORWARD -o eth0 -i eth0:0 -s 192.168.57.22 -m conntrack --ctstate NEW -j ACCEPT
        iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        iptables -A POSTROUTING -t nat -j MASQUERADE

    Note again that my problem here is that programs cannot access localhost tcp services from the server itself, and that access is blocked because the services have access control allowing only 127.0.0.1. I have no problem connecting (as in TCP connections) to services via tcp, even if the services listen only on 127.0.0.1. I do NOT want to connect to the services from another computer.
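
    One detail in the pre-start script above stands out: the MASQUERADE rule has no -o (output interface) or -s (source) match, so it can also rewrite locally generated loopback connections, which would make them appear to come from the eth0 address exactly as described. As a hedged suggestion - this assumes nothing else depends on the catch-all rule - restricting it looks like:

        # remove the catch-all rule, then re-add it limited to traffic actually leaving via eth0
        sudo iptables -t nat -D POSTROUTING -j MASQUERADE
        sudo iptables -t nat -A POSTROUTING -s 192.168.57.0/24 -o eth0 -j MASQUERADE
        # verify: loopback connections should keep their 127.0.0.1 source again
        sudo iptables -t nat -L POSTROUTING -n -v
        mysql --protocol=tcp -u root -p -e 'select 1'

    The same change would also belong in the upstart job so it survives a reboot.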


  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this:

        Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
        Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
        Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
        Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory

    Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters):

        current values are: 45984 61312 91968
        new values are: 183936 245248 367872

    After that, we started seeing a bizarre error message:

        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval!

    Shh.. it's a secret!!

    This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache. The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one).

    While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively.

    The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here.

        /proc/sys/net/ipv4/route/gc_elasticity (8)
        /proc/sys/net/ipv4/route/gc_interval (60)
        /proc/sys/net/ipv4/route/gc_min_interval (0)
        /proc/sys/net/ipv4/route/gc_timeout (300)
        /proc/sys/net/ipv4/route/secret_interval (600)
        /proc/sys/net/ipv4/route/gc_thresh (?)
        rhash_entries (kernel parameter, default unknown?)

    We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high-traffic HAProxy instance?
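
    Whatever tuning is eventually chosen, it is worth making it persistent and verifiable rather than echoing into /proc by hand. A sketch only - the values shown are the ones already mentioned above, not recommendations:

        # inspect the current values
        sysctl net.ipv4.tcp_mem net.ipv4.route.gc_elasticity net.ipv4.route.secret_interval

        # /etc/sysctl.conf (or a file in /etc/sysctl.d/)
        net.ipv4.tcp_mem = 183936 245248 367872
        net.ipv4.route.gc_elasticity = 4

        # apply without rebooting
        sudo sysctl -p

    Keeping the change in sysctl.conf also documents, the next time a failover happens, exactly which knobs were moved away from the kernel defaults.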


  • script to recursively check for and select dependencies

    - by rp.sullivan
    I have written a script that does this, but it is one of my first scripts ever, so I am sure there is a better way. :) Let me know how you would go about doing this; I'm looking for a simple yet efficient way to do it.

    Here is some important background info (it might be a little confusing, but hopefully by the end it will make sense):

    1) This image shows the structure/location of the relevant dirs and files.

    2) The packages.file located at ./config/default/config/packages is a space-delimited file. Field 5 is the "package name", which I will call $a for explanation's sake. Field 4 is the name of the dir containing the $a.dir, which I will call $b. Field 1 shows whether the package is selected or not: "X" (capital x) for selected and "O" (capital o, as in orange) for not selected. Here is an example of what the packages.file might contain:

        ...
        X ---3------ 104.800 database gdbm 1.8.3 / base/library CROSS 0
        O -1---5---- 105.000 base libiconv 1.13.1 / base/tool CROSS 0
        X 01---5---- 105.000 base pkgconfig 0.25 / base/tool CROSS 0
        X -1-3------ 105.000 base texinfo 4.13a / base/tool CROSS DIETLIBC 0
        O -----5---- 105.000 develop duma 2_5_15 / base/development CROSS NOPARALLEL 0
        O -----5---- 105.000 develop electricfence 2_4_13 / base/development CROSS 0
        O -----5---- 105.000 develop gnupth 2.0.7 / extra/development CROSS NOPARALLEL FPIC-QUIRK 0
        ...

    3) For almost every package listed in the packages.file there is a corresponding ".cache file". The .cache file for package $a would be located at ./package/$b/$a/$a.cache. The .cache files contain a list of dependencies for that particular package. Here is an example of what one of the .cache files might look like. Note that the dependencies are field 2 of the lines containing "[DEP]", and these dependencies are all names of packages in the packages.file:

        [TIMESTAMP] 1134178701 Sat Dec 10 02:38:21 2005
        [BUILDTIME] 295 (9)
        [SIZE] 11.64 MB, 191 files
        [DEP] 00-dirtree
        [DEP] bash
        [DEP] binutils
        [DEP] bzip2
        [DEP] cf
        [DEP] coreutils
        ...

    So with all that in mind, I'm looking for a shell script that, from within the main dir:
        - looks at the ./config/default/config/packages file, finds the "selected" packages, and reads the corresponding .cache of each;
        - compiles a list of dependencies that excludes the already selected packages;
        - selects those dependencies (by changing field 1 to X) in the ./config/default/config/packages file;
        - and repeats until all dependencies are met.

    A sketch along these lines is included below. Note: the script will ultimately end up in the scripts dir and be called from the main dir. If this is not clear, let me know what needs clarification. For those interested, I'm playing around with T2 SDE. If you are into playing around with Linux, it might be worth taking a look.
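
    Purely as a starting point, here is one way the loop could look. It is a sketch, not a tested tool: it assumes package names contain no regex metacharacters and never collide with other fields when matched as field 5, it rewrites packages.file in place (keep a backup), and it silently skips missing .cache files.

        #!/bin/bash
        # Resolve dependencies of selected packages in a T2-style packages file.
        # Run from the main dir, e.g. as ./scripts/select-deps.sh
        PKGFILE=./config/default/config/packages

        while :; do
            changed=0
            # names ($5) of currently selected (X) packages
            for pkg in $(awk '$1 == "X" { print $5 }' "$PKGFILE"); do
                dir=$(awk -v p="$pkg" '$5 == p { print $4; exit }' "$PKGFILE")
                cache="./package/$dir/$pkg/$pkg.cache"
                [ -f "$cache" ] || continue
                # dependencies are field 2 of the [DEP] lines
                for dep in $(awk '$1 == "[DEP]" { print $2 }' "$cache"); do
                    # is the dependency present but still marked O?
                    if grep -Eq "^O([[:space:]]+[^[:space:]]+){3}[[:space:]]+$dep[[:space:]]" "$PKGFILE"; then
                        # flip its field 1 from O to X, leaving the rest of the line untouched
                        sed -i -E "s/^O(([[:space:]]+[^[:space:]]+){3}[[:space:]]+$dep[[:space:]])/X\1/" "$PKGFILE"
                        changed=1
                    fi
                done
            done
            # stop once a full pass selects nothing new
            [ "$changed" -eq 0 ] && break
        done

    The outer while loop is what makes it recursive in effect: newly selected dependencies get their own .cache files read on the next pass, until a pass changes nothing.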


  • Can an administrative extraction of an MSI file perform registry and/or system-wide changes?

    - by Wil
    I am always getting MSI (or setup EXEs which are basically MSI) files, and half the time they really do not need to be a setup. Microsoft is probably one of the biggest sources - almost every time I want to download a little source code sample, it has a MSI which if you install, only usually has three files. I would rather not do an install and add it to the add/remove programs and who knows what else (although I am sure it wouldn't be that bad) for the sake of three files! For this reason, I always use the following command: MSIEXEC /a <filename.msi> /qb TARGETDIR=<directory name> Now, this works fine and I have never had problems... However, I was just browsing some articles on Technet and found the following resource about administration installs. Apparently, MSI files can have two sequences: The AdminUISequence Table and the AdminExecuteSequence Table. I am not so worried about the AdminUISequence Table as it states that "The installer skips the actions in this table if the user interface level is set to basic UI or no UI", and this is what the /qb switch I use does. However, there is nothing similar written against AdminExecuteSequence Table. I realise that many people who write MSI files simply do it for a single end user and probably do not even touch the admin install options, however, is it possible for them to set items that can affect the system and if so, is there a fail proof way of extracting? I do already use 7-zip, however despite it being on the "supported" page, MSI support is lacking... well... completely sucks. It looses the file names and is generally useless. They have a bug which was closed with no reason/resolution over three years ago, and I opened a forum post and haven't had a reply. I would not really want to install any additional programs if I could help it and just want peoples opinions on this. Thanks. edit - Should also say, I run with UAC on, and I have never ever had a elevation prompt whilst performing the MSIEXEC operation, so I am guessing I have never had a system wide change, however, I am still curious as to if it is possible... As if changes (even just to the user) are possible I would do this locally/in a VM and never on a server or place of importance!


  • HP DAT72x6 autoloader

    - by ericmayo
    Hoping someone here has seen a similar issue and can offer some advice... I have an HP DAT72x6 autoloader tape backup unit - the external kind; here is a link to an owner's manual I found for it: http://www.dectrader.com/docs/set2/emr_na-c00070400-1.pdf

    I purchased the unit used about 6 months ago. The unit stopped working after 3-4 backups; it's used one day a month to do a monthly backup of another system. Suffice it to say the unit gets very little usage. There is an amber light on the front of the unit called the OAR (Operator Attention Required) light. The manual states to call for service when this light comes on and stays on. I've tried a few things to resolve it, but none are working: power cycling, and re-seating the SCSI cables at both ends. The unit was used, so I didn't pay much ($500), and I don't want to spend a lot to have it fixed; I might as well buy something new if fixing this is going to cost more than $100-$150.

    I'm curious whether anyone here has been around these devices, or is possibly an HP repair person, who can give me some things to try. The manual states that a solid amber OAR light indicates a hardware failure. When I power cycle the unit I see one of two scenarios so far:
        - The unit powers up, shows self test in the LCD, then the LCD changes to show all possible images and the OAR light comes on.
        - The unit powers up, the LCD is completely blank, the green lights go through some sort of process of going on and off, and later the amber OAR light comes on and stays on.

    If it's a simple misalignment issue I may be able to fix it myself, but not knowing what could cause the OAR light to come on gives me nowhere to even start. Googling around gave no help either. I'm hoping someone here has experience with this and can help or point me in the right direction. Also, I don't have the HP diagnostic tools mentioned in many manuals. The unit is connected to a Linux box; the 3-4 backups I've done with it so far have had no issues, and we run Amanda backup. Before this incident the unit was backing up and reading tapes fine. Thanks for any help or suggestions.
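
    Since the unit hangs off a Linux host running Amanda, a few host-side checks can at least separate "the library is dead" from "the SCSI path is unhappy". The device names below are guesses; lsscsi shows the real ones, and mtx usually has to be installed separately:

        lsscsi -g                        # the drive and the changer should both appear, each with an sg node
        dmesg | grep -i -E 'st[0-9]|scsi|tape' | tail -n 20
        sudo mt -f /dev/nst0 status      # talks to the tape drive itself
        sudo mtx -f /dev/sg3 inquiry     # talks to the autoloader/changer; adjust /dev/sg3 per lsscsi
        sudo mtx -f /dev/sg3 status

    If the changer no longer answers an inquiry at all while the drive does, that points at the loader mechanism or its controller, which matches the manual's "hardware failure" reading of a solid OAR light.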


  • Linux Debian Security Breach - what now? [closed]

    - by user897075
    Possible Duplicate: My server's been hacked EMERGENCY

    I installed Debian (Squeeze) a while back on my home network to host some personal sites (thank god). During the installation it prompted me to enter a user other than root, so in a rush I used my name as both user and password (alex/alex, for what it's worth). I know it's horrible practice, but during the setup of this server I'm always logged in as root to perform configurations, etc. A few days or a week passes and I forget to change the password. Then I finally get my web site finished, and I set up port forwarding on my router and DynDNS to point to the server in my home. I've done this many times in the past and never had issues, but I use a cryptic root password and I guess had disabled regular accounts.

    Today I reformatted my Windows 7 machine, and after spending all day tweaking and updating to SP1 I looked for cloning apps, found Clonezilla, and saw it supports SSH cloning. I went through the process only to discover I needed a user, so I logged into my web server, saw I already had the user 'alex', and realized I didn't know the password. I changed the password to something cryptic and visited the 'home' directory, only to realize there are contents such as passfile, bengos, etc. My heart sinks - I've been hacked!!! Sure as hell there are all sorts of scripts and password files. I ran the 'last' command and it seems they last logged in April 3rd.

    Questions:

    1. What can I do to see if they did anything destructive? Should I reformat and reinstall?
    2. How restrictive is Debian Squeeze in terms of user permissions out of the box? All my personal website content was created as 'root', so changing those files does not seem to have occurred.
    3. How did they determine there was a user 'alex' on the machine? Can you query any machine and figure out what its users are?
    4. It looks like they tried to run an IP scan... the other nodes on the network run Windows 7, one of which seems a little wonky as of late - is it possible they buggered up that system?
    5. What corrective action can I take to avoid this happening again, and to figure out what might have been changed or hacked?

    I'm hoping Debian out of the box is fairly secure and at best they managed to read some of my source code. :p Regards, Alex
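
    For anyone else in the same boat, this is the rough checklist I'm working through before deciding whether to wipe it (a sketch; debsums, rkhunter and chkrootkit are extra packages I'd have to install, and nothing found on a compromised box can be fully trusted):

        # Who logged in, from where and when (wtmp may have been scrubbed)
        last -a -f /var/log/wtmp

        # Any accounts, groups or sudo rights I didn't create?
        cut -d: -f1,3 /etc/passwd
        cat /etc/group /etc/sudoers

        # Verify installed Debian packages against their recorded checksums
        debsums -c

        # Common rootkit scanners
        rkhunter --check
        chkrootkit

        # Anything listening that I don't recognise?
        netstat -tulpn

        # Recently changed files (adjust -mtime to cover the suspected window)
        find /etc /home /var/www -mtime -30 -ls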

    Read the article

  • Internal/external DNS with subdomains

    - by ScottMcGready
    I've got an internal DNS server (part of OS X Server) acting as the main DNS server for a specific (physical) site. When it can't resolve hostnames itself, it forwards requests to Google's DNS servers. Everything works well apart from a couple of issues, which I think may be related but can't get to the bottom of.

    I've got a number of intranet sites set up that people can access by going to something like:

    intranet.mydomainname.com
    selfservice.mydomainname.com

    These point to various servers in the building that host the sites. Whether internal or external (without VPN), I can access these sites just dandy. The issue comes when I want to host, say, test.mydomainname.com on an external server: it fails to resolve, because the primary zone for mydomainname.com is internal. How can I get the server to look up Google's DNS (or another external one) for names in that zone that it doesn't hold itself? I've tried everything I can think of (adding my host's nameservers, etc.) but nothing seems to work fully. Also, I can't access intranet sites when connected via VPN; from what I can gather this might be related to the DNS issue, but I just wanted to give as much information as possible.

    Edit: The domain mydomainname.com is hosted externally and pointed at the site's public IP. From there we can forward the requests to the relevant internal server. Externally everything works; internally, though, any subdomain of mydomainname.com is served locally, and I want it to be served from Google's DNS / externally.

    DNS configuration: as per a request, here's the current DNS configuration (OS X Server's DNS tab). I've blurred out the .private address as it's not really relevant - it's the server's name. The colored dots are just there to link everything together. Screenshot:

    In an attempt to clarify, this is what I want:

    intranet.mydomain.com -> 192.168.0.12
    selfservice.mydomain.com -> 192.168.0.13
    *.mydomain.com -> forward to external DNS
    mydomain.com -> forward to external DNS

    At the moment no subdomain of mydomain.com is forwarded on (I think this is because the primary zone is mydomain.com, with an NS of intranet.mydomain.com), but I could do with a little nod in the right direction.
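
    In case it clarifies what I'm aiming for, these are the checks I've been using while experimenting (a sketch - 192.168.0.1 is just standing in for the internal DNS server's address, and the assumption is that only the intranet hosts keep local zones while everything else under the domain falls through to the forwarders):

        # Internal names should still be answered from the local zones
        dig +short intranet.mydomain.com @192.168.0.1       # expect 192.168.0.12
        dig +short selfservice.mydomain.com @192.168.0.1    # expect 192.168.0.13

        # Everything else under the domain should match the external answer
        dig +short test.mydomain.com @192.168.0.1
        dig +short test.mydomain.com @8.8.8.8

        # The bare domain should also match the external view
        dig +short mydomain.com @192.168.0.1
        dig +short mydomain.com @8.8.8.8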

    Read the article

  • Why do Ping and Dig provide a different IP address than nslookup?

    - by user1032531
    When pinging my domain name, which points to my home public IP, from two different servers on my LAN, they ping different IPs. Further investigation shows dig and nslookup also providing different results. See below.

    A little history: my IP used to be 11.22.33.444 and is hosted by Comcast. I changed routers, and it somehow got changed to 55.66.77.888. I've since updated my 1and1 domain name to point to 55.66.77.888. "desktop" is a basic server, runs the web server, and connects wirelessly to my LAN; "laptop" is a GUI machine connected via CAT5. Both run CentOS 6.4. My old router was a D-Link, and I used its "Virtual Server" feature to pass port 80 to desktop. My new router is a Linksys, and I use its "Port Forwarding" feature to pass port 80 to desktop (however, I haven't gotten this part working yet).

    What is going on? Why the different IPs? Obviously it must somehow be stored on the server, but why does the actual machine even know the public IP, since it is on a LAN? How do I purge the old IP?

        [root@desktop etc]# dig +short myDomain.com
        11.22.33.444
        [root@desktop etc]# nslookup www.myDomain.com
        Server:         8.8.8.8
        Address:        8.8.8.8#53

        Non-authoritative answer:
        Name:   www.myDomain.com
        Address: 55.66.77.888

        [root@desktop etc]# dig myDomain.com

        ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> myDomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13822
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;myDomain.com.                  IN      A

        ;; ANSWER SECTION:
        myDomain.com.           16031   IN      A       11.22.33.444

        ;; Query time: 21 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Mon Oct 21 04:36:52 2013
        ;; MSG SIZE  rcvd: 44

        [root@desktop etc]#

        [root@laptop ~]# dig +short myDomain.com
        55.66.77.888
        [root@laptop ~]# nslookup www.myDomain.com
        Server:         192.168.0.1
        Address:        192.168.0.1#53

        Non-authoritative answer:
        Name:   www.myDomain.com
        Address: 55.66.77.888

        [root@laptop ~]#
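
    One thing that might narrow it down (a sketch - ns1.example-registrar.com is only a placeholder for whatever 1and1's authoritative name servers really are) is comparing the cached answers against the authoritative record and its remaining TTL:

        # Walk the delegation from the root servers, bypassing every cache
        dig +trace myDomain.com

        # Ask the authoritative server directly (placeholder host name)
        dig +short myDomain.com @ns1.example-registrar.com

        # Check which resolver each box actually uses
        cat /etc/resolv.conf

        # See how long the old record will stay cached (the TTL column)
        dig myDomain.com @8.8.8.8 | grep -A 2 'ANSWER SECTION'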

    Read the article

  • Profiles and using the local profile for a domain user

    - by Harry
    I'm having some trouble with profiles and would like to reach out for some help. I've tried to do some research to help myself along, but I'm not making much progress on my own. I've pretty much taken over the sysadmin duties for my small lab; I don't have much experience to justify it, other than being the only one with the time and dedication to go at it (the environment was in a state of disrepair).

    The network and domain I look after are extremely small by most standards - about 10 users at a time. Their activity on the network is pretty intensive, though, and we work with fairly large files. None of the network is online, which is nice at the moment because it's one less headache.

    On to my profile problem. I have set up roaming profiles for the users on the network. After a little research, I think I will be switching this to a hybrid of folder redirection and roaming profiles, as this seems to be best practice; I also don't want users having to wait a long time if they have a bloated profile.

    I've finally got a build working using MDT. We have Mac Pros, and it wasn't fun getting everything to play nice. The way I did this was by setting up a reference computer, installing all the software and tools that each user would need, and editing the settings and preferences to how we need them. I then used MDT to do a sysprep and capture to create the image of my reference computer. Using the reference image I can push out images to the rest of the desktops in my environment.

    The issue I'm having is when we join a computer to the domain. The user can log in and operate fine on the computer, but I'd like more. When the user is logged on with their domain user name, they lose a lot of the icons I had on my reference image, as well as the desktop background and some other miscellaneous settings. I would love to have the user log on with their domain user name and see the icons and desktop environment as I had them set up on the reference computer. I'm not sure if this is possible, or if it's something simple that I'm missing, but any help would be greatly appreciated!
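
    For what it's worth, the direction I'm currently looking at (a rough sketch, not something I've confirmed in my environment - the answer file path below is a placeholder) is re-capturing the reference machine with CopyProfile enabled, so the customised local profile is copied over the Default profile and new domain users pick it up on first logon. The CopyProfile setting lives under Microsoft-Windows-Shell-Setup in the answer file:

        REM Run on the reference machine after customising the local profile.
        REM With <CopyProfile>true</CopyProfile> in the referenced answer file,
        REM sysprep copies that profile over the Default profile, so new domain
        REM users get the same icons, background and settings at first logon.
        C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\Windows\Panther\unattend.xml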

    Read the article
