Search Results

Search found 3942 results on 158 pages for 'stick it to the man'.


  • Value of Itanium or Sparc over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment for our Oracle database, which currently runs on SUSE (potentially migrating to RedHat). The database is approximately 100GB and performs adequately on our current x86_64 hardware with approximately 6GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value from each license by choosing the most appropriate CPU to run the software on. The questions are:

    - Are there substantial benefits to looking at Itanium or Sparc hardware, and are there any drawbacks? Is there a point where one starts to scale out better?
    - What are the long-term support options for Itanium? Given the dominance of x86, would it be safer long term to stick with x86?
    - On average, what would be the performance benefit of implementing an Oracle database on Itanium or Sparc over x86_64? Is this an issue at all, or will other factors (IO/RAM) cap out first?

    If anyone can point me towards some solid documentation comparing the platforms, with good case analysis of when to choose which, I'm more than happy to accept that as an answer.

    Edit: Added Sparc as an option, as it was previously not considered; with the recent Oracle-Sun acquisition it seems very relevant.

    Read the article

  • Is there any way to force my Linux box to always boot up with a self-assigned IP address?

    - by Jeremy Friesner
    This is perhaps an unusual request: I'm trying to get a Debian Linux box to always give itself a self-assigned IP address (i.e. 169.254.x.y) on boot. In particular, I want it to do that even when there is a DHCP server present on the LAN. That is, it should not request an IP address from the DHCP server.

    From what I can see in the "man interfaces" text, there is an option for "manual" and an option for "dhcp". Manual assignment won't do, since I need multiple boxes to work on the same LAN without requiring any manual configuration... and "dhcp" does what I want, but only if there is no DHCP server on the LAN. (A requirement is that the functionality of these boxes should not be affected by the presence or absence of a DHCP server.) Is there a trick that I can use to get this behavior?

    EDIT: By "no manual configuration", I mean that I should be able to take this box (headless) to any LAN anywhere, plug in the Ethernet cable, and have it do its thing. I shouldn't have to ssh to the box and edit files to get it working each time it is moved to a different LAN.
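
    One possibility, sketched here under the assumption that the avahi-autoipd package is installed: interfaces(5) on Debian supports an "ipv4ll" method that assigns a 169.254.x.y link-local address directly, without ever sending a DHCP request:

        # /etc/network/interfaces -- a minimal sketch, assuming avahi-autoipd is installed
        auto eth0
        iface eth0 inet ipv4ll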

    Read the article

  • 3 simple questions about file permissions

    - by Camran
    1- I wonder, is this a good setup of permissions in the /var directory?

        drwxr-xr-x  2 root root  4096 2010-05-30 03:34 backups
        drwxr-xr-x  7 root root  4096 2010-05-29 17:55 cache
        drwxr-xr-x 29 root root  4096 2010-05-29 17:55 lib
        drwxrwsr-x  2 root staff 4096 2009-07-14 04:36 local
        drwxrwxrwt  3 root root    60 2010-06-02 03:34 lock
        drwxr-xr-x  9 root root  4096 2010-06-02 03:34 log
        drwxrwsr-x  2 root man   4096 2009-09-20 20:36 mail
        drwxr-xr-x  2 root root  4096 2009-09-20 20:36 opt
        drwxrwxrwt 12 root root   420 2010-06-02 12:12 run
        drwxr-xr-x  4 root root  4096 2009-09-20 20:37 spool
        drwxrwxrwt  2 root root  4096 2009-07-14 04:36 tmp
        drwxr-xr-x 14 user root  4096 2010-05-30 22:21 www

    2- Could you give me a brief explanation of the columns above? The first one shows which permissions they have. The second is a number. The third and fourth say "root root", for example. The fifth is another number (4096, for example), and the others are obvious.

    3- Could you give me a brief explanation of the folders above? Especially the "lock" and "tmp" folders. Lock contains an apache2 folder which seems empty. Thanks
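
    For reference, the fields in that listing are the standard ls -l columns, annotated here on one of the lines above:

        drwxrwsr-x  2  root  staff  4096  2009-07-14 04:36  local
        # type+mode |  owner group  size  modification time  name
        #           +-- hard-link count (for a directory: its subdirectories plus 2)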

    Read the article

  • Mail not piping in postfix

    - by user220912
    I have set up a postfix server and wanted to test the piping of mail to my perl script, where I can make use of it and filter the mails. I wrote a test script which just logs the information in a txt file, but I don't see any changes on sending the mail.

    My postconf -n output:

        alias_database = hash:/etc/aliases
        append_dot_mydomain = no
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = yantratech.co.in, localhost.localdomain, localhost
        myhostname = tcmailer8.in
        mynetworks = 103.8.128.62, 103.8.128.69/101, 168.100.189.0/28, 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        recipient_delimiter = +
        relayhost =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/pki/tls/certs/tcmailer8.in.cert
        smtpd_tls_key_file = /etc/pki/tls/private/localhost.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = /etc/postfix/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmaps
        virtual_minimum_uid = 1000
        virtual_uid_maps = static:5000

    Here's my transport:

        [email protected] email_route

    My main.cf declaration:

        transport_maps = hash:/etc/postfix/transport

    My master.cf declaration:

        email_route unix  -       n       n       -       -       pipe
          flags=FR user=nobody argv=/etc/postfix/test.php -f $(sender) -- $(recipient)

    And my php script:

        #!/usr/bin/php
        <?php
        $fh = fopen('/etc/postfix/testmail.txt','a');
        fwrite($fh, "Hello it works\n");
        fclose($fh);
        ?>

    I am sending mails through telnet on localhost.
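
    A few hedged first steps worth checking (paths taken from the question; nothing here is confirmed against the actual box): a pipe script must be executable by the user= named in master.cf, the transport map must be recompiled after editing, and the mail log shows whether the pipe transport fired at all:

        chmod 755 /etc/postfix/test.php    # "nobody" must be able to execute it
        postmap /etc/postfix/transport     # rebuild the hash: map after editing
        postfix reload
        tail -f /var/log/mail.log          # watch for the pipe delivery or its error

    Note too that the script runs as nobody, which typically cannot create files under /etc/postfix, so the fwrite may be failing silently; a path like /tmp/testmail.txt is a safer test target.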

    Read the article

  • Problems Installing slapd On Ubuntu Server 11.10

    - by Zach Dziura
    I know that there's a Ubuntu-specific StackExchange website, but I thought that I'd ask here because it's a server-specific question. If I'm wrong in my logic... well, you people are better at this than I am! O=) On with the show!

    I'm in the process of installing Oracle Database 11g R2 Standard Edition onto Ubuntu Server 11.10. I found a guide on the Oracle Support Forums that walks you through the process fairly easily. Unfortunately, I'm running into issues installing one particular dependency: slapd. When I go to install it, I get this error message:

        (Reading database ... 64726 files and directories currently installed.)
        Unpacking slapd (from .../slapd_2.4.25-1.1ubuntu4.1_amd64.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ufw ...
        Processing triggers for ureadahead ...
        Setting up slapd (2.4.25-1.1ubuntu4.1) ...
        Usage: slappasswd [options]
          -c format     crypt(3) salt format
          -g            generate random password
          -h hash       password scheme
          -n            omit trailing newline
          -s secret     new password
          -u            generate RFC2307 values (default)
          -v            increase verbosity
          -T file       read file for new password
        Creating initial configuration...
        Loading the initial configuration from the ldif file () failed with
        the following error while running slapadd:
        str2entry: invalid value for attributeType olcRootPW #0 (syntax 1.3.6.1.4.1.1466.115.121.1.15)
        slapadd: could not parse entry (line=1051)
        dpkg: error processing slapd (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         slapd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    After many Google searches and much forum trolling, I have yet to find a definitive answer as to what's going wrong. The error messages seem straightforward enough, but I have no idea how to debug this. Can anyone offer some assistance? Again, if I'm asking in the wrong place, I apologize. If I'm indeed asking properly, then thank you for any and all help!
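
    The slappasswd usage dump together with the invalid (apparently empty) olcRootPW value suggests the package's postinst script generated a configuration with a blank admin password. A common remedy, offered here as a guess rather than a confirmed fix, is to purge the half-configured package and reinstall, supplying a non-empty password when debconf prompts:

        sudo apt-get purge slapd
        sudo rm -rf /var/lib/ldap /etc/ldap/slapd.d   # WARNING: destroys any existing directory data
        sudo apt-get install slapd
        sudo dpkg-reconfigure slapd                   # set a non-empty admin password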

    Read the article

  • Hard Drive Compatibility with Motherboard

    - by Wesley
    Here are the current specs, to put things in context:

    - ECS P4VXASD2+ V5.0
    - Intel Pentium 4 Northwood 2.8 GHz
    - 2x 512MB PC2100 DDR266 SDRAM
    - Maxtor DiamondMax 10 250 GB PATA (IDE) HDD
    - Gigabyte 52x CD-ROM
    - NVIDIA TNT2 Pro 16 MB
    - OKIA 300W ATX PSU
    - USB bracket
    - PCI modem

    Before, I actually had a 300 GB hard drive installed. However, I read the FAQ for the motherboard and discovered that a maximum of 250 GB was supported, so I ended up finding the drive listed above and put that in. Upon booting up, I reset the BIOS to defaults and auto-detected all the installed drives. The 250 GB came up as something like 251.0 GB. I didn't think much about it until I tried to boot up a Windows XP installation disc. It booted up successfully and ran for about a minute before the computer randomly rebooted.

    I've made sure that all the jumpers and settings are correct and everything has been installed correctly. I've tried running it without the add-ons and with only one stick of RAM, but still the same thing. What else could be causing this problem?

    Read the article

  • Syntax error on line 494 of httpd.conf: Cannot load .../php5apache2_2.dll into server

    - by pikachu
    I have been learning PHP. I had installed an Apache server (not in a combination suite like USBWebserver). Now I'm trying to put my sites on a portable stick, using USBWebserver. I have used that program before to carry MySQL databases with me (and Apache worked as well, because I used the included phpMyAdmin for managing the databases), but now it doesn't work anymore. When I start the program, I keep getting the text saying Apache is offline.

    I've tried to open Apache using the command line (I don't know what that would do, but it's just a try). I got an error message saying:

        Syntax error on line 494 of C:/.../httpd.conf:
        Cannot load C:/.../php5apache2_2.dll into server:
        (The following is translated from Dutch) An initialization routine of the dynamic link library (dll-file) has failed.

    Line 494 says this:

        LoadModule php5_module "C:/Users/School/Downloads/USBWebserver v8_en/php/php5apache2_2.dll"

    My first Apache installation (its service) is not running. The ports are different. And I also uninstalled the service (using the httpd.exe -k uninstall command). What can be the problem? Thanks for help.

    Read the article

  • How can I keep SSH's known_hosts up to date (semi-securely)?

    - by Chas. Owens
    Just to get this out in front, so I am not told not to do this:

    - The machines in question are all on a local network with little to no internet access (they aren't even well connected to the corporate network).
    - Everyone who has the ability to set up a man-in-the-middle attack already has root on the machines.
    - The machines are reinstalled as part of QA procedures, so having new host keys is important (we need to see how the other machines react); I am only trying to make my machine nicer to use.

    I do a lot of reinstalls on machines, which changes their host keys. This necessitates going into ~/.ssh/known_hosts on my machine, blowing away the old key, and adding the new key. This is a massive pain in the tuckus, so I have started considering ways to automate it.

    I don't want to just blindly accept any host key, so patching OpenSSH to ignore host keys is out. I have considered creating a wrapper around the ssh command that will detect the error coming back from ssh and present me with a prompt to delete the old key or quit. I have also considered creating a daemon that would fetch the latest host key from a machine on a whitelist (there are about twenty machines that are being constantly reinstalled) and replace the old host key in known_hosts. How would you automate this process?
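
    For the whitelist approach, the two standard tools are ssh-keygen -R (remove a stale entry) and ssh-keyscan (fetch the current key). A minimal sketch, reasonable only because the LAN is already trusted as described above; the hostnames are hypothetical stand-ins for the ~20 whitelisted machines:

        for host in buildbox01 buildbox02; do
            ssh-keygen -R "$host"                               # drop the stale key
            ssh-keyscan -t rsa "$host" >> ~/.ssh/known_hosts    # record the new one
        done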

    Read the article

  • Windows 7 DVD doesn't boot up, neither does USB. :'(

    - by Manan Shah
    My problem is that I'm not able to install Windows 7. I've been trying for the past week. The methods I've tried:

    - I have a Windows 7 bootable DVD which doesn't boot up. (I've set the BIOS to boot from the DVD-ROM first, but it just won't boot from the DVD.) I tried to install Windows 7 from the same DVD on a friend's PC and it worked, so the DVD has no issues.
    - I tried to run 'Setup.exe' from within the DVD. Two options pop up: 'Check compatibility' and 'Install now'. On clicking Install now, after some time, an error is encountered with the message 'Windows was unable to create a required installation folder', error code 0x8007000D. I am running Windows XP Professional and there's only one user on the PC, which is the Admin, so I don't know why the setup is not getting permissions. I've also uninstalled my antivirus and CD burning software, disabled the firewall, and disconnected all other devices, but it's still the same.
    - I tried to install it from a USB device by making it bootable, but that too doesn't work. (Yes, the mobo supports booting from USB.) The problem is that XP does not recognize a 'USB' device on boot; rather, it shows this USB stick as a removable 'Hard Drive'. Furthermore, when I changed the boot order to boot from this removable hard drive first, it still boots my existing OS.

    Is there anything else that can be done? Any help would be greatly appreciated. :) Please ask if any other information is required; this post is becoming increasingly long to add any other details.

    PS: I want to dual boot Windows 7 with my existing XP, but that would be after I manage to run the Windows 7 setup in the first place.
    PPS: Please bear with any 'not-so-technical' terms; I am a beginner with this. Again, thank you for taking the time and trying to help, really appreciate it. :)

    Read the article

  • Mangling traffic from a Mikrotik Router

    - by TiernanO
    I have a MikroTik-powered router in the house with a couple of internet connections (two 200/10Mb cable modems and a 100/20Mb VDSL line). I am using mangle rules to set routing marks and NAT rules to do some load balancing, and everything seems to be going grand... but it only works for traffic from outside the router. Let me explain.

    I have 4 GigE ports on the machine: WAN1, 2 and 3, and a LAN port named LAN1. All traffic from LAN1 is getting mangled (as it should be), but traffic from the local router itself (proxy traffic, IPv6 tunnels, VPN connections) is not being mangled. It gets the first route to 0.0.0.0/0, which in my case is WAN2, and sticks with it.

    So, how do I get traffic from the local router to be mangled? Originally it was proxy traffic that caused the problem, but now with IPv6 and VPN, it is more important that it be mangled... Last time I enabled IPv6, all traffic went only through WAN2 and the rest were unused. Any ideas?
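
    Worth noting as background: in the RouterOS/netfilter model, locally-originated packets traverse the output chain rather than prerouting, so mangle rules written for chain=prerouting never see the router's own traffic. A hedged sketch of the kind of rule that would catch it (the routing-mark name is made up, and the matchers would need adapting to the real setup):

        /ip firewall mangle add chain=output connection-mark=no-mark \
            action=mark-routing new-routing-mark=to_WAN1 passthrough=yes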

    Read the article

  • Wired to wireless bridge in Linux

    - by adrianmcmenamin
    I am attempting to set up my Raspberry Pi as a bridge (but I think this is not a question specific to the hardware), using Debian wheezy. I have a hostapd.conf (some details changed for security):

        interface=wlan0
        bridge=br0
        driver=nl80211
        auth_algs=1
        macaddr_acl=0
        ignore_broadcast_ssid=0
        logger_syslog=-1
        logger_syslog_level=0
        hw_mode=g
        ssid=MY_SSID
        channel=11
        wep_default_key=0
        wep_key0=MY_KEY
        wpa=0

    (Yes, I know WEP is no good.) And this in /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        iface eth0 inet dhcp

        allow-hotplug wlan0
        iface wlan0 inet manual
        wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
        iface default inet dhcp

        auto br0
        iface br0 inet dhcp
        bridge-ports eth0 wlan0

    Everything seems to come up OK, but I cannot associate with the bridged wireless connection, even though the flashing lights on the USB stick suggest packets are being exchanged. I have read somewhere that not all cards/devices will run in hostap mode (they won't pass packets in one direction): is that right? (The info was a bit old.) This is my card:

        [    3.663245] usb 1-1.3.1: new high-speed USB device number 5 using dwc_otg
        [    3.794187] usb 1-1.3.1: New USB device found, idVendor=0cf3, idProduct=9271
        [    3.804321] usb 1-1.3.1: New USB device strings: Mfr=16, Product=32, SerialNumber=48
        [    3.816994] usb 1-1.3.1: Product: USB2.0 WLAN
        [    3.823790] usb 1-1.3.1: Manufacturer: ATHEROS
        [    3.830645] usb 1-1.3.1: SerialNumber: 12345

    So, what have I got wrong here?
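
    One detail that stands out in the config above: wlan0 is being claimed twice, once by wpa_supplicant (the wpa-roam line, which runs the stick as a client) and once by hostapd (which wants it as an AP). A hedged guess at a cleaner interfaces file; hostapd's own bridge=br0 directive adds wlan0 to the bridge by itself, so wlan0 would not need to appear in bridge_ports:

        auto lo
        iface lo inet loopback

        iface eth0 inet manual      # enslaved to the bridge; no address of its own

        auto br0
        iface br0 inet dhcp
            bridge_ports eth0       # hostapd attaches wlan0 via bridge=br0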

    Read the article

  • Borked ubuntu uninstall - need to delete boot partition (i think)

    - by Max Williams
    I just got a new PC laptop with Windows 7 and wanted to install Ubuntu on it, which I did, no problem there, by downloading the installer, burning it to DVD, then booting off the DVD and installing. Then I realised that the new Ubuntu 12.04 uses the Unity desktop, which I immediately disliked and, after some research, began to hate. So I decided (after a little googling) to install Linux Mint instead.

    Thinking I'd better start from scratch, I went to the Windows 7 disk manager and wiped the Ubuntu partition that had been created. Now, when I start up, I get an error from grub, the Ubuntu boot manager:

        error: unknown filesystem
        grub rescue> _

    and a blinking cursor where I can enter commands. I suspect that what I've done is deleted the main Ubuntu partition but NOT deleted another partition which is a boot partition, or something like that? Can anyone tell me how I can rescue or unbork this? I'd like to either a) get back to my original Windows-only setup OR b) install Linux Mint off DVD (which I have) into the empty partition, fixing any grub confusion in the process. Any suggestions?

    Thanks, Max

    BTW please don't answer if you're just going to tell me to stick with 12.04, or install a different distro or something. I definitely want Mint and just want to fix this mess - thanks :)
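
    For option (a), the usual route (sketched here from memory, so treat it as a starting point rather than a recipe) is to boot the Windows 7 install/repair disc, open the recovery command prompt, and rewrite the MBR so it no longer points at the deleted grub files:

        bootrec /fixmbr
        bootrec /fixboot

    For option (b), simply installing Mint into the empty space should also clear it up, since the Mint installer writes a fresh grub to the MBR.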

    Read the article

  • Package pinning in Debian lenny

    - by bronto
    I need your advice, as I don't know if I've hit a bug or am misunderstanding something. On Debian Lenny, I am trying to prevent the installation of two particular packages when they are requested as dependencies from other packages. I am using the same syntax I successfully used in Squeeze, but with no success at all. On Squeeze, the following works as expected:

        # cat /etc/apt/preferences.d/local-no-pike.pref
        Package: pike7.6-core
        Pin: version *
        Pin-Priority: -1000

    If I try to install pike7.6, which depends on pike7.6-core, apt and aptitude refuse to do so. On Lenny, the only difference is that there is no support for "fragments" in /etc/apt/preferences.d, and all preferences must be in the /etc/apt/preferences file. But it's not working. E.g., if the file contains:

        Package: grub-common
        Pin: version *
        Pin-Priority: -1000

    apt doesn't stop me from installing grub, which depends on grub-common. I used strace to see if the file is being read, and it is. I was advised to use some Debug:: options, but they didn't help to pinpoint the problem either. I have googled a lot with combinations of "lenny", "prevent", "package", "installation", "pinning" and the like, but nothing nice came out. And of course I read man apt_preferences. What am I missing here?
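
    A quick way to see whether the pin is being applied at all (standard apt tooling, nothing Lenny-specific assumed):

        apt-cache policy grub-common
        # The candidate version and pin priority are printed here.
        # If the priority still shows as 500 rather than -1000, the pin never matched.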

    Read the article

  • daily rsync backups with hard links, checksums, and a new computer

    - by user75058
    I back up my laptop to a Fedora desktop daily using rsync with hard links. This has worked great for almost a year. I recently purchased a new computer, transferred over my data, and would like to continue backing up this computer daily. However, due to the data transfer from the old laptop to the new laptop, the timestamps have obviously changed, and will thus cause my daily rsync backup to re-transfer all of the data.

    I thought that by adding the -c (checksum) switch to my rsync backup it would match files based on checksum, instead of timestamp and size, and only transfer those files that are different or not present. This appeared to work, but upon examining the new backup, hard links are not being created; it appears the files that should be hard linked are simply being copied to the new backup directory from the previous backup directory on the backup server.

    This is very peculiar behavior to me, and I am having trouble figuring out why it is occurring. Checksums match for files that I think should be hard linked. I have looked through the rsync man page and Googled around a bit, and have been unable to find anything that helps me better understand this behavior.
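
    One likely explanation, offered as a hypothesis: with --link-dest, rsync only hard-links a destination file to the reference copy when the two match in attributes as well as content, and since hard links share a single inode (including its mtime), a file whose content matches but whose timestamp differs cannot be represented as a link; rsync has to make a fresh copy so it can set the new timestamp independently. If that is what's happening, one -a run without -c (which resyncs the timestamps) should let subsequent days hard-link again. For reference, a sketch of the usual rotation, with hypothetical paths:

        # daily.1 holds yesterday's snapshot; unchanged files become hard links into it
        rsync -a --link-dest=/backups/daily.1 user@laptop:/home/user/ /backups/daily.0/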

    Read the article

  • Flickering dual screens in Virtual Box Ubuntu 13.10 Guest

    - by alexleonard
    I have Ubuntu 13.10 x64 installed as a guest in VirtualBox (under a Windows 8.1 host), with the virtual machine set up with a monitor count of 2, 128MB video memory, and 3D acceleration enabled. In the guest I have the VirtualBox guest additions installed (which allowed me to have two 1920x1080 screens). Here's a screenshot of my VM settings.

    My laptop is an Asus N550JV, which has both Intel's HD Graphics 4600 GPU and Nvidia's GeForce GT 750M. By default, though, I believe the Intel graphics card is being used to render the VM. When I boot up the VM it loads perfectly on dual screens, but whenever I move the mouse from one screen to the other (I have a Dell S2340L running over an HDMI connection as the second screen), the screen flickers. I've tried a variety of settings changes in both Ubuntu and the VM settings, but cannot seem to stop this flicker.

    I also used the Nvidia control panel in Windows to force the dedicated graphics card to always be used, but found that the display driver sometimes crashed while I was working in the VM, destroying my VM session, so I figured it's better to stick with the Intel graphics as they appear to be more stable. I also tried without 3D acceleration, but that was much worse, and if I ran the VM with a low amount of graphics memory it really struggled.

    Here's my dmesg output: http://pastebin.com/1LJuYWMj (not sure if this is helpful in this situation). I read some posts suggesting changes to /etc/X11/xorg.conf, but I don't appear to have an xorg.conf file. There were also a few posts (though related to Synergy) suggesting running xset -dpms, but this command doesn't appear to have had any effect for me.

    As an additional note, I'm finding that window drawing in the guest is a little laggy/glitchy. For example, quickly scrolling through a web page may leave parts of the viewport displaying the original content. I notice drawing issues most in the web browser, but it also affects other software, with parts of the window not being redrawn when, say, switching between accounts in Thunderbird. Any suggestions greatly appreciated!

    Read the article

  • Openfire: Granular alerts

    - by R.S.
    Our organization has had an Openfire server up and running for about a year now. So far we have used it for messaging in the I.T. department and alerts to all users. We hit a snag this week when one system went down and several notifications were sent out to inform users of progress. Some of the users were radiologists who do not use the particular system in question, and they found the notices more of an annoyance than informative. Since then I have been tasked with finding a more granular system for alerts. I am confident that Openfire can handle this, and I have just about settled on a way of getting it to work.

    My idea is to create a half dozen or so users, for example: Staff, Doctor, Assistant and Supervisor. Using Spark as our messenger has worked great so far, so I would like to stick with that if possible. With that in mind, under advanced login features the resource name can be changed to something unique, and non-unique users can log in under the same account; however, when a message is sent to one of these users, the delivery is inconsistent. Currently I have 4 people logged in under the Assistant user, and it seems only 1 of them receives the messages.

    Is this scenario even possible? I am avoiding working with the groups in Openfire because the function is atrocious. I could possibly integrate the system into our Active Directory, but I don't think that will get us to a workable solution any quicker or more efficiently.

    Read the article

  • DHCPD (Slackware) - Disabling auto-generation of gateway as DNS server

    - by Dogbert
    Good day,

    I am using a Linux workstation on Slackware 13.37. One "problem" I have had to deal with ever since 11.0 is the following:

    - DNS servers are queried and determined at startup by the DHCP client daemon (dhcpcd), which is invoked at startup by a script located at /etc/rc.d/rc.dhcpd.
    - The DNS servers for my ISP are resolved correctly and are stored in a list located at /etc/resolv.conf.
    - However, the one annoying problem is that my gateway IP (i.e. 192.168.1.1) is always automatically put at the top of the list in resolv.conf, meaning I always have to wait for a timeout before a valid DNS server is used to resolve an address (i.e., the query to 192.168.1.1 times out because it is not actually a DNS server, and then the next server in the list is used).

    I could lower my DNS resolution timeout so the gateway query times out quicker, but that's not what I want, as I don't want to degrade the abilities of legitimate DNS servers. What I would like to do is change how the DHCP client operates so that it does NOT put my gateway IP address at the beginning of this list. I've searched via "man dhcpcd", etc., and haven't found the exact answer yet. Any help on this issue is appreciated. Thank you all in advance for your time and assistance.
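
    Assuming the client really is dhcpcd (Slackware's default), one option is to stop it from rewriting resolv.conf entirely and maintain the file by hand. A hedged sketch; the option spelling varies between dhcpcd versions, so check man dhcpcd on the box:

        # /etc/dhcpcd.conf
        nohook resolv.conf      # don't let dhcpcd touch /etc/resolv.conf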

    Read the article

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account.

    A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people). We need the same capability with SFTP.

    My first thought is to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically inclined and would prefer to stick with passwords. SSH is not an issue; only SFTP is available.

    How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH.

    Currently our SSH configuration has this appended to it in order to jail the users in their own directories:

        # all customers have group 'customer'
        Match group customer
            ChrootDirectory /home/%u    # jail in home directories
            AllowTcpForwarding no
            X11Forwarding no
            ForceCommand internal-sftp  # force SFTP
            PasswordAuthentication yes  # for non-customer accounts we use keys instead

    Our servers are running Ubuntu 12.04 LTS.
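
    One approach that fits the "files stay owned by customer" constraint is to give each extra login the same UID as the main account (useradd's -o/--non-unique flag), so the kernel sees them all as the same owner even though each has its own password. A sketch, with hypothetical names; note that the chroot above keys off %u, so shared-UID logins would need ChrootDirectory pointed at /home/customer explicitly rather than at /home/%u:

        uid=$(id -u customer)
        useradd --non-unique -u "$uid" -g customer \
            -d /home/customer -s /usr/sbin/nologin customer_developer1
        passwd customer_developer1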

    Read the article

  • I think my laptop just died

    - by Joel Coehoorn
    I have a Dell 1330M that, as of about 15 minutes ago, will no longer POST. What happened: I was working, stepped away for a moment, and when I came back it was turned off. I thought that was odd, but turned it on and things seemed fine. About half an hour later it crashed and restarted, but came up fine again. It did this once more. At this point I was starting to get worried, but I hadn't had any problems with the laptop before, and every crash came after doing some work in a virtual machine that I don't often use, so I put the blame there. It didn't feel like it was overheating anywhere, and there was no ozone smell of overheated electronics.

    Then it crashed a final time, and now when I turn it on all I see is a bright screen with a bunch of vertical lines (noise). I've tried removing the memory sticks one at a time, but I get the same result with either memory stick in either slot. With no memory at all it stops earlier in the POST process and the screen is completely blank (black, no backlight). As I type this, I hear a double beep from the system about once every 10 minutes.

    I'm pretty sure the hard drive is fine, because it fails during POST, before anything off the drive is needed. The power supply seems good, because the screen is nice and bright. It's not the RAM, because swapping that around made no difference. That leaves the motherboard (which I doubt I can replace) and the CPU (which just might be changeable). Any ideas? Is there any hope for this laptop? I'm rather fond of it and I'd have a hard time replacing it with anything near as nice.

    Read the article

  • How do I prevent a tar pipe from causing swapping?

    - by Jeff Shattock
    I have a rather large filesystem that I need to transfer from one Linux server to another. I figured the best way to do this was via a tar/netcat pipe arrangement, something like:

        tar c . | pv | nc blah blah blah

    And it works great; the network stays fairly saturated; life is good. Until the source machine starts swapping.

    The files are on a RAID on the source system, so the read speed is much faster than the write speed on the other end. Since the destination machine hasn't picked up the data yet, the source machine needs to stick it somewhere, so into RAM it goes, until there is no more free RAM. It then starts swapping, which is horribly painful since that machine has its OS installed on a somewhat slow CF card. Both machines have 4GB of physical RAM and 64-bit Ubuntu 9.04 server, with a GigE link between them.

    How do I prevent this swapping? Can I put a "speed limit" on the tar or netcat process so that the transfer speed doesn't overwhelm the write throughput on the destination end? The man pages didn't list anything, but there might be something I'm overlooking.
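
    For what it's worth, pv itself can act as the throttle: its -L flag caps throughput at a given rate, and it's already in the pipeline. A sketch assuming the destination can sustain roughly 30 MB/s; the hostname and port are placeholders, and the number should be tuned to just under the destination's write speed:

        tar c . | pv -L 30m | nc desthost 9000    # -L 30m caps the pipe at ~30 MB/s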

    Read the article

  • Equivalent of scp -l bandwidth_cap for .ssh/config?

    - by Mark Bennett
    Short form: you can limit the bandwidth scp uses with the -l switch, passing a number in kbit/s. I'd rather set this in my .ssh/config file for certain named machines. What's the equivalent named setting for -l? I haven't been able to find it.

    Followup question: generally, I'm not sure how to map back and forth between ssh command-line options and config names, short of doing Google searches or manually comparing man pages on a case-by-case basis. Is there a table that directly equates the two?

    Longer form of the first question, with context: I've started using ssh config quite a bit, especially now that I need to go through a proxy and do lots of port mappings. I even define the same machine more than once depending on what type of tunneling I need. However, when uploading a large file, it's difficult to do anything else on my machine. Even though I have more download bandwidth than up, I think scp saturates the link, so even my small requests can't reach the Internet. There's a fix for this, using the -l bandwidth command-line switch for scp:

        scp -l 1000 bigfile.zip titan:

    I'd like to use this in my config instead, so I'd create an additional named entry called "titan-upload" and use that as the target whenever I upload. So instead of:

        scp bigfile.zip titan:

    I'd say:

        scp bigfile.zip titan-upload

    Or even set different caps depending on where I am:

        scp bigfile.zip titan-upload-from-home

    vs.

        scp bigfile.zip titan-upload-from-work

    I'm generally on Mac and Linux.
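
    For what it's worth, ssh_config has no bandwidth-limit keyword that I know of (-l here is an scp option, not an ssh one), so one workaround is a host alias in ~/.ssh/config plus a tiny shell wrapper that adds the cap. A sketch with hypothetical names:

        # ~/.ssh/config
        Host titan-upload
            HostName titan.example.com

        # shell wrapper, used as: scpup bigfile.zip titan-upload:
        scpup() { scp -l 1000 "$@"; }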

    Read the article

  • Windows 7 hangs with 100% disk activity but only when online

    - by jeremy
    I have the same problem as seemingly many other people here, and I think we might all be experiencing the same issue: a compatibility problem in Windows 7 between the hard drive and the network controller or its drivers.

    I've tried firmware updates for my entire board, and wiping my drive and reinstalling from scratch. And yet the problem persists, which suggests it is an operating system error, as the hard drive checks out 100% physically. Additionally, the only time it does not occur is in safe mode WITHOUT networking. With networking, there are spikes in disk access every so often, and a huge flow of processes accessing the disk simultaneously literally "sticks" the disk; physically jolting my computer unsticks it. Again, this has been tested for hours in a professional service environment, and without network access on, things are fine. As soon as network access is available, disk access occasionally cranks up to 100% and sticks everything.

    I'm using Microsoft Security Essentials, but this also happened under Norton, then McAfee. Again, it happened after a complete wipe, so the likelihood of malware causing it seems low, and I don't visit unsecure sites anyway, as far as I know. This, to me, narrows it down to a Windows 7 process that is somehow repeatedly corrupted, perhaps a corrupt .dll or driver, causing a conflict at the operating system level and temporary hard drive failure.

    I would encourage anyone who knows more about this stuff (which is probably most people!) to take a shot at this one, and I would encourage anyone else with a sticking hard drive in Windows 7 64-bit to check whether it occurs during safe mode without networking.

    Read the article

  • How do I get started with the M-Project is a Mobile HTML5 JavaScript Framework on Windows?

    - by Bruce Whealton
    The website for this great tool, The M-Project, says that I will need to add a doskey like this:

        doskey espresso=node C:\Path\To\Espresso\bin\espresso.js $1 $2 $3 $4

    (It is a tool for creating native mobile apps with the PhoneGap/Cordova library, and it seems to be something that would be very helpful in this process.) If I enter that at a command prompt in Windows 7 or 8, it's not going to stick around or persist. Is it an environment variable?

    Then it says at this page, http://www.the-m-project.org/, that it will work on Windows with some additional tools installed. The next line says that Node.js is needed, so I don't know if that is the additional tools mentioned above.

    Also, in an old discussion I read that one could just install Cygwin. What would that do? It doesn't actually install any of the Linux distributions. I did install Ubuntu 12.04 server with VirtualBox, because I thought it would be good to learn more about using Linux as I manage websites that are on a dedicated host. Anyway, the suggestion to install Cygwin did not go into any details... I guess it would allow one to create a bash profile?? Which would only work in a Cygwin command-line window. Is that right? Isn't there a similar file that one could use in Windows, or an environment variable that one could set, to achieve the same result?

    Thanks, Bruce
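
    On the persistence point: doskey macros are not environment variables, and they die with the console session. The usual trick (sketched here; the macro-file path is arbitrary) is to save the macros to a file and have cmd.exe load it automatically via the AutoRun registry value:

        rem save the macro definition once
        echo espresso=node C:\Path\To\Espresso\bin\espresso.js $1 $2 $3 $4 > %USERPROFILE%\macros.doskey

        rem make every new cmd.exe window load it
        reg add "HKCU\Software\Microsoft\Command Processor" /v AutoRun /t REG_SZ /d "doskey /macrofile=%USERPROFILE%\macros.doskey"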

    Read the article

  • Recovering ZFS pool with errors on import.

    - by Sqeaky
    I have a machine that had some trouble with some bad RAM. After I diagnosed it and removed the offending stick of RAM, the ZFS pool in the machine was trying to access drives by using incorrect device names. I simply exported the pool and re-imported it to correct this. However, I am now getting this error.

    The pool Storage no longer automatically mounts:

        sqeaky@sqeaky-media-server:/$ sudo zpool status
        no pools available

    A regular import says it's corrupt:

        sqeaky@sqeaky-media-server:/$ sudo zpool import
          pool: Storage
            id: 13247750448079582452
         state: UNAVAIL
        status: The pool is formatted using an older on-disk version.
        action: The pool cannot be imported due to damaged devices or data.
        config:

                Storage                UNAVAIL  insufficient replicas
                  raidz1               UNAVAIL  corrupted data
                    805066522130738790 ONLINE
                    sdd3               ONLINE
                    sda3               ONLINE
                    sdc                ONLINE

    A specific import says the vdev configuration is invalid:

        sqeaky@sqeaky-media-server:/$ sudo zpool import Storage
        cannot import 'Storage': invalid vdev configuration

    I should have 4 devices in my ZFS pool:

        /dev/sda3
        /dev/sdd3
        /dev/sdc
        /dev/sdb

    I have no clue what 805066522130738790 is, but I plan on investigating further. I am also trying to figure out how to use zdb to get more information about what the pool thinks is going on.

    For reference, this was set up this way because at the time this machine/pool was built, it needed certain Linux features and booting from ZFS wasn't yet supported on Linux. The partitions sda1 and sdd1 are in a RAID 1 for the operating system, and sdd2 and sda2 are in a RAID 1 for the swap.

    Any clue on how to recover this ZFS pool?
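
    That long number is how ZFS names a vdev it can no longer find, presumably the missing /dev/sdb. Two hedged starting points: read the ZFS labels straight off the suspect device with zdb, and retry the import while scanning persistent device names rather than the sdX names that shifted:

        sudo zdb -l /dev/sdb                     # print the four ZFS labels, if any survive
        sudo zpool import -d /dev/disk/by-id     # scan by persistent names instead of sdX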

    Read the article

  • Choosing gateway router/firewall for small datacenter network [closed]

    - by rvs
    I'm choosing a gateway router/firewall for the small internal network of a medium-sized web service. Currently there are 5 servers in the internal network, up to 50 http(s) requests/second, and up to 1000 simultaneous connections; the uplink is 100 Mbit. So the network is relatively small and not very busy, and we don't want to buy some pricey monster like a Cisco or Juniper for this site. Instead we'd like to buy two affordable devices (one as a spare) which can handle our workload now and for some time into the future (it might be up to 2x more in 1 year).

    I had some experience with the SonicWall NSA, but it seems to be too complex for this site (we don't need most of its features) and too pricey when buying two of them. So, after some research, I've come up with the following options:

    - Netgear ProSecure UTM series (probably the UTM25)
    - Zyxel ZyWALL series (USG100 or USG200)
    - SonicWall TZ 210

    Is this a good idea? All of the above seem to be more office products than datacenter ones. Or should we stick with the SonicWall NSA? Does anyone have any hands-on experience with these models? Maybe some other advice? Thanks.

    Read the article
