Search Results

Search found 7606 results on 305 pages for 'raam dev'.

Page 252/305 | < Previous Page | 248 249 250 251 252 253 254 255 256 257 258 259  | Next Page >

  • Bad sectors, S.M.A.R.T., SpinRite, firmware on platter and drive id questions.

    - by Christopher Galpin
    Is it possible for S.M.A.R.T. to give false readings (say I was fiddling with lots of recovery programs, transfers, and so on) or is it strictly a read-only, direct reflection of the physical status of a drive? Does SpinRite level 5 "recover bad sectors" operate on those marked at the factory? Are they on the same level as a generic bad sector, with SpinRite thus having full access? (I'm also curious whether SMART's bad sector count is zeroed afterward or whether it includes factory-marked sectors.) The main firmware of some drives, like a WD Passport, is stored on the platter. How is it protected? Is it through marking those sectors as bad? If so, I'm wondering if SpinRite's sector recovery could bring about firmware corruption on these drives. Is the failure of a drive to report valid identity information (hdparm -I /dev/xx) consistent with corrupted firmware, or just general disk failure? I may be misunderstanding the role of firmware here. I've read that a drive's identity information is on the platter, just like the partition tables and so on. Is this true? (Apologies if this is more appropriate for SuperUser.)
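
    One way to sanity-check the SMART data directly is to read the raw attribute counters and the drive's own self-test log; a hedged sketch using smartctl from smartmontools (assumed to be installed, and the device name is a placeholder):

      # Read SMART attributes; Reallocated_Sector_Ct and Current_Pending_Sector
      # are the counters most relevant to bad-sector questions.
      smartctl -A /dev/sdX | grep -Ei 'reallocat|pending|uncorrect'

      # Run the drive's own offline surface scan, then check the result later.
      smartctl -t long /dev/sdX
      smartctl -l selftest /dev/sdX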

    Read the article

  • 2nd instance of mysql closes/doesn't start, no warnings/errors?

    - by acidzombie24
    I have an external HD and I'd like to run a 2nd MySQL instance on it. I used the Windows installer to install/configure mysqld as a service on Windows 7. I took the my.ini from C:\Program Files\MySQL\MySQL Server 5.5\my.ini, then edited the port (client and mysqld), datadir and innodb_data_home_dir. After running this command:

      "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" --defaults-file="f:/dev/my.ini"

    I found an error which was all about the innodb_data_home_dir directory not existing. After that I ran the command again. mysqld simply starts up for a second then immediately closes. I see no message in my command prompt. I know the command-line args are correct, as I see the mysqld service using the same ones except for a different my.ini path. Also, it did tell me about the directory not existing, so I know it is reading the new ini file. How do I figure out why this 2nd instance of mysqld is closing? How do I get two instances running? I'm using v5.5.
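
    A hedged way to surface why the second instance exits silently (the flags are standard mysqld options on Windows, but the log path is a placeholder): run it once with console or error-file logging enabled and read what it prints before exiting.

      REM Print startup errors to the command prompt instead of exiting silently
      "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" --defaults-file="f:/dev/my.ini" --console

      REM Or log them to a file and inspect it afterwards
      "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" --defaults-file="f:/dev/my.ini" --log-error="f:/dev/mysql-error.log"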

    Read the article

  • Failed loading ioncube

    - by time
    I recently upgraded a small server to Ubuntu 12.10 (from 12.04), thus upgrading PHP from 5.3 to 5.4. However, I'm getting this in root's mailbox several times a day:

      Subject: Cron <root@xxxxxxx> [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -ignore_readdir_race -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete
      Content-Type: text/plain; charset=ANSI_X3.4-1968
      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <HOME=/root>
      X-Cron-Env: <PATH=/usr/bin:/bin>
      X-Cron-Env: <LOGNAME=root>
      Message-Id: xxxxxxxxxxxxxxxxxxxxxxxx
      Date: Sun, 9 Dec 2012 05:09:02 -0500 (EST)

      Failed loading /usr/lib/php5/20090626+lfs/ioncube_loader_lin_5.3.so: /usr/lib/php5/20090626+lfs/ioncube_loader_lin_5.3.so: undefined symbol: php_body_write

    I assume that's coming up because it was built for PHP 5.3. How can I just get rid of ionCube? I have no need for it; I don't even remember installing it. That .so file doesn't exist, and I've grepped several locations for "ioncube", but I can't figure out how to stop that message from flooding the mailbox.
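
    The message typically comes from a leftover zend_extension line in one of PHP's ini files rather than from an installed package, so a hedged cleanup sketch (paths are the usual Ubuntu locations, adjust as needed) is to find and remove that line:

      # Find whichever ini file still references the old loader
      grep -rn ioncube /etc/php5/ 2>/dev/null

      # Comment out or delete the matching "zend_extension = .../ioncube_loader_lin_5.3.so"
      # line (or remove the conf.d snippet that contains it), then restart the web server
      service apache2 restart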

    Read the article

  • Getting Windows (VMware) to load from OSX's localhost without an Internet Connection

    - by Jonah Goldstein
    I'm using MAMP to host my local sites, and VirtualHostX so that I can access sites during local development via a convenient URL like mysite.dev. I'm also running Windows XP via VirtualBox, and it would be great to be able to load up any of my local sites within Windows while offline, since I'm currently often working without a connection, on the move. I know that I can append my IP and a nice domain name to the hosts file in C:/WINDOWS/system32/drivers/etc, and I can find my IP simply through Terminal with "ifconfig" while I'm online. The problem is that when I'm not online, there's no IP. Even when there is an IP (when I have a connection), I still have to grab it and update the Windows hosts file all the time, since I'm developing from a laptop and get a new IP at the drop of a hat. I found a tutorial where the author is able to get a permanent IP. He uses VMware Fusion as his virtual machine, which is the only difference between his setup and mine. By running the terminal command "ifconfig vmnet1" he gets a secret IP the virtual machine uses to talk to OS X, and that doesn't change - which is awesome. I'm assuming it exists even if he's offline. His tutorial is here: http://bit.ly/U2lq It would be pretty fantabulous if I could replicate this with VirtualBox. Anyone have ideas? Thanks :)
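
    VirtualBox's rough equivalent of vmnet1 is a host-only network, which gives the Mac a fixed virtual interface (vboxnet0, 192.168.56.1 by default) that exists even with no outside connection. A hedged sketch from Terminal (interface name, addresses and the VM name "Windows XP" are defaults/placeholders, adjust to taste):

      # Create a host-only interface and give it a fixed address on the Mac side
      VBoxManage hostonlyif create
      VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0

      # Attach a second adapter of the XP VM to that network
      VBoxManage modifyvm "Windows XP" --nic2 hostonly --hostonlyadapter2 vboxnet0

    Inside XP the hosts-file entries could then point at 192.168.56.1 and keep working offline.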

    Read the article

  • How to manage mounted partitions (fstab + mount points) from puppet

    - by Cristian Ciupitu
    I want to manage the mounted partitions from puppet, which includes both modifying /etc/fstab and creating the directories used as mount points. The mount resource type updates fstab just fine, but using file for creating the mount points is a bit tricky. For example, by default the owner of the directory is root, and if the root (/) of the mounted partition has another owner, puppet will try to change it and I don't want this. I know that I can set the owner of that directory, but why should I care what's on the mounted partition? All I want to do is mount it. Is there a way to make puppet not care about the permissions of the directory used as the mount point? This is what I'm using right now:

      define extra_mount_point(
        $device,
        $location = "/mnt",
        $fstype   = "xfs",
        $owner    = "root",
        $group    = "root",
        $mode     = 0755,
        $seltype  = "public_content_t",
        $options  = "ro,relatime,nosuid,nodev,noexec",
      ) {
        file { "${location}/${name}":
          ensure  => directory,
          owner   => "${owner}",
          group   => "${group}",
          mode    => $mode,
          seltype => "${seltype}",
        }
        mount { "${location}/${name}":
          atboot  => true,
          ensure  => mounted,
          device  => "${device}",
          fstype  => "${fstype}",
          options => "${options}",
          dump    => 0,
          pass    => 2,
          require => File["${location}/${name}"],
        }
      }

      extra_mount_point { "sda3":
        device  => "/dev/sda3",
        fstype  => "xfs",
        owner   => "ciupicri",
        group   => "ciupicri",
        options => "relatime,nosuid,nodev,noexec",
      }

    In case it matters, I'm using puppet-0.25.4-1.fc13.noarch.rpm and puppet-server-0.25.4-1.fc13.noarch.rpm.
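
    One hedged workaround (a sketch, not necessarily the only approach): puppet only manages the attributes a resource explicitly declares, so leaving owner/group/mode off the file resource makes it ensure the directory exists without fighting over who owns whatever is mounted there:

      # Only ensure the mount point exists; don't manage who owns it
      file { "${location}/${name}":
        ensure  => directory,
        seltype => "${seltype}",
      }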

    Read the article

  • VNC on Xen failure

    - by BCable
    The following config works and creates a good VM in Xen:

      # Kernel Setup
      kernel = "/boot/vmlinuz-2.6.18.8-xenU"
      # Memory
      memory = "256"
      # Disk
      disk = [ "file:/opt/xen/domains/110/sda1.img,sda1,w", "file:/opt/xen/domains/110/swap.img,sda2,w" ]
      # container name
      name = "110"
      hostname = "boo"
      # Networking
      vif = ["type=ieomu, bridge=xenbr0"]
      # VNC
      vnc = 1
      #vfb = [ 'type=vnc,vncdisplay=2,vnclisten=0.0.0.0,vncpasswd=110' ]
      # Behavior Settings
      root = "/dev/sda1"
      extra = "fastboot"

    But when I uncomment the VFB line, I get the following error after it hangs for at least 30 seconds:

      [root@customer 110]# xm create boo.cfg
      Using config file "./boo.cfg".
      Error: Device 0 (vkbd) could not be connected. Hotplug scripts not working.

    Any ideas? Part two of this question: Sometimes it actually works, and a port is opened. When this happens, nmap shows the VNC ports open and I can connect via the VNC client, but it just hangs at "Connection established." and no VNC display shows up. I've tried multiple VNC clients (TightVNC, TightVNC Java Console, RealVNC), but they all fail to connect. Does VNC through Xen require X to be started in order to function? I was under the impression that it would show the console screen, so I'm confused as to why all these issues are occurring. Thanks!

    Read the article

  • Ubuntu Device-mapper seems to be invincible!

    - by Andrew Bolster
    I'm working on a hopefully unrelated question and I've got into a strange situation. First: I know very little about the very low-level hardware/kernel storage driver magic, so I'm hoping a) someone can help and b) someone can explain it to me better. I've been trying a dozen different configurations of my 2x500GB SATA drives over the past few hours, involving switching between AHCI/IDE/RAID in my BIOS; after each attempt I've reset the BIOS option, booted into a live CD, and deleted the partitions and rewritten the partition tables left on the drives. Now, however, I'm sitting with a /dev/mapper/nvidia_XXXXXXX1 that seems to be impossible to kill! It's the only 'partition' that I see in the Ubuntu install (but I can see the others in parted), yet it is only the size of one of the drives, and I know I did not set any RAID levels other than RAID0. Anyone have any ideas how I can kill this and get back to just two independent IDE drives? Or can anyone convince me of a reason to go the AHCI route? Many thanks in advance.
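
    That device-mapper node is usually created by dmraid from leftover NVIDIA fakeRAID metadata stored near the end of the disks, so a hedged cleanup sketch (run from the live CD, and double-check the device names first, since this erases the RAID signature) would be:

      # Show any fakeRAID metadata dmraid still sees
      dmraid -r

      # Erase the stale NVIDIA RAID signature from each drive
      dmraid -r -E /dev/sda
      dmraid -r -E /dev/sdb

      # Or just deactivate the mappings for the current session
      dmraid -an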

    Read the article

  • Ubuntu: unattended-upgrades from a local package archive

    - by Novelocrat
    I have a local apt archive with a bunch of packages I built in it. The Packages and Release files are generated by apt-ftparchive. The Release file looks like:

      Date: Thu, 06 May 2010 23:04:33 UTC
      Label: PPL
      Origin: PPL
      Suite: ppl
      MD5Sum:
       ebec3527ebc8351468b2ef8796c19855 37325 Packages
       d41d8cd98f00b204e9800998ecf8427e 0 Release
      SHA1:
       a0593b663d77fde88ee35b56ae1f3c17801cfe99 37325 Packages
       da39a3ee5e6b4b0d3255bfef95601890afd80709 0 Release
      SHA256:
       dd73a02846aee111cac58a869c6bf650886632ba82c2172ffddd81aa4429981c 37325 Packages
       e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 Release

    I'm using unattended-upgrades to keep the machines in the lab up to date on security and bug fixes, but I'm finding that it doesn't pull from my local archive. The configuration file for it looks like:

      // Automatically upgrade packages from these (origin, archive) pairs
      Unattended-Upgrade::Allowed-Origins {
        "Ubuntu hardy-security";
        "Ubuntu hardy-updates";
        "PPL ppl";
      };

      // List of packages to not update
      Unattended-Upgrade::Package-Blacklist {
      //  "vim";
      //  "libc6";
      //  "libc6-dev";
      //  "libc6-i686";
      };

      // Send email to this address for problems or packages upgrades
      // If empty or unset then no email is sent, make sure that you
      // have a working mail setup on your system. The package 'mailx'
      // must be installed or anything that provides /usr/bin/mail.
      //Unattended-Upgrade::Mail "root@localhost";

    Yet, when I run sudo unattended-upgrade on one of these machines, newer package versions don't get installed. Can anyone point out what I'm getting wrong?
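
    unattended-upgrades matches each Allowed-Origins pair against the origin and archive labels apt derives from the Release file, so one hedged thing to check (a sketch, not a confirmed fix; field values follow the archive above) is that the Release file carries all of those fields explicitly and that apt actually labels the repository the way the config expects:

      # Regenerate the Release file with explicit origin/suite metadata
      apt-ftparchive \
        -o APT::FTPArchive::Release::Origin="PPL" \
        -o APT::FTPArchive::Release::Label="PPL" \
        -o APT::FTPArchive::Release::Suite="ppl" \
        -o APT::FTPArchive::Release::Codename="ppl" \
        release . > Release

      # On a client, confirm the o= and a= labels apt reports for the repo
      apt-cache policy | grep -i ppl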

    Read the article

  • Virtualisation with KVM: export services from guest to the host

    - by ascobol
    Hello, I would like to export some services from the guest OS to the host OS, via kvm, and by the same token learn some things about networking. I have tried the following commands. On the host (Kubuntu 10.4):

      $ sudo tunctl -u ascobol
      Set 'tap0' persistent and owned by uid 2401
      $ sudo ifconfig tap0 192.168.2.1 netmask 255.255.255.0 broadcast 192.168.2.255

    The ifconfig command returns:

      $ /sbin/ifconfig tap0
      tap0  Link encap:Ethernet  HWaddr 3e:4e:e3:cc:bc:92
            inet addr:192.168.2.1  Bcast:192.168.2.255  Mask:255.255.255.0
            inet6 addr: fe80::3c4e:e3ff:fecc:bc92/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:17 overruns:0 carrier:0
            collisions:0 txqueuelen:500
            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

      $ route -n
      Kernel IP routing table
      Destination   Gateway   Genmask         Flags Metric Ref Use Iface
      192.168.2.0   0.0.0.0   255.255.255.0   U     0      0   0   tap0

    Then I run the virtual machine (Ubuntu Server 10.4):

      $ sudo kvm -hda ubuntuserver104.qcow2 -net nic -net tap,name=tap0,script=no

    (I'm using sudo because without it, it fails with the following message:)

      warning: could not configure /dev/net/tun: no virtual network emulation

    With sudo the virtual machine boots; I just get this message:

      pci_add_option_rom: failed to find romfile "pxe-rtl8139.bin"

    In the virtual machine:

      $ ifconfig eth0 192.168.2.2 netmask 255.255.255.0 broadcast 192.168.2.255

    Now if I run:

      $ ssh 192.168.2.2

    I just get a "No route to host". What is wrong with this setup? Thanks!
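
    A hedged observation (worth verifying against the kvm man page for that version): with -net tap the option that binds the guest to an existing tap device is ifname=, not name=, so without it kvm may be creating a brand-new tap interface instead of reusing tap0. A sketch of the adjusted invocation:

      # Attach the guest explicitly to the pre-created tap0 device
      sudo kvm -hda ubuntuserver104.qcow2 \
          -net nic \
          -net tap,ifname=tap0,script=no,downscript=no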

    Read the article

  • Getting 502 instead of 503 when all backend servers are down running HAProxy behind Apache

    - by scarba05
    I'm testing running HAProxy as a dedicated load balancer behind Apache 2.2, replacing our current configuration where we use Apache's load balancer. In our current, Apache-only setup, if all the backend (origin) servers are down, Apache will serve a 503 Service Unavailable message. With HAProxy I get a 502 Bad Gateway response instead. I'm using a simple reverse proxy rewrite rule in Apache:

      RewriteRule ^/(.*) http://127.0.0.1:8000/$1 [last,proxy]

    In HAProxy I have the following (running in the default tcp mode):

      defaults
        log global
        option tcp-smart-accept
        timeout connect 7s
        timeout client 60s
        timeout queue 120s
        timeout server 60s

      listen my_server 127.0.0.1:8000
        balance leastconn
        server backend1 127.0.0.1:8001 check observe layer4 maxconn 2
        server backend1 127.0.0.1:8001 check observe layer4 maxconn 2

    Testing connecting directly to the load balancer when the backend servers are down:

      [root@dev ~]# wget http://127.0.0.1:8000/ test.html
      --2012-05-28 11:45:28--  http://127.0.0.1:8000/
      Connecting to 127.0.0.1:8000... connected.
      HTTP request sent, awaiting response... No data received.

    So presumably this is down to the fact that HAProxy accepts the connection and then closes it.
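
    In tcp mode HAProxy has no HTTP response to send when no backend is available, so the client just sees the connection drop (which Apache's proxy reports as 502). A hedged sketch of the usual approach, switching to HTTP mode so HAProxy itself can answer with a 503 (the errorfile path is the common package default and may differ; without it, http mode still returns a built-in 503):

      defaults
        mode http
        log global
        timeout connect 7s
        timeout client 60s
        timeout queue 120s
        timeout server 60s
        errorfile 503 /etc/haproxy/errors/503.http

      # the existing listen section can stay as it is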

    Read the article

  • Booting Debian5 (Lenny) on 2.6.16 Kernel

    - by bk
    Due to a proprietary kernel module that I don't have the source to, and which is very picky about what kernel versions it will load into (even with modprobe --f), I find myself needing to run a 2.6.16.XX kernel on my Debian 5 machine. The machine boots fine with the 2.6.26-2 stock kernel, and I have successfully built and booted 2.6.26- and 2.6.31-based kernels by making a .deb and then doing dpkg -i. However, when I take the same approach for 2.6.16, the kernel hangs at boot. I'm testing this in a VMware image, so I don't think it's an issue of newer hardware not being supported by the older kernel. For a working kernel, at boot I get:

      Uncompressing Linux.. OK booting the kernel
      Loading, please wait...
      mdadm: No devices listed in the conf file were found
      kinit name_to_dev_t /dev/hda5 (dev5,3)
      ...

    With 2.6.16.60, I never get the kinit message. It hangs after the mdadm line. There are no mdadm arrays on this machine, so I doubt it's an issue inside the mdadm stuff, which is supposed to just error out as it does in the 2.6.26 case above, but for some reason I'm getting stuck getting into kinit. I've been banging my head against this wall, so I'm very open to suggestions on how to go about troubleshooting this.

    Read the article

  • Setting my NIC to full duplex

    - by David
    I am trying to optimize the network speed of my Solaris x86 server, and have discovered that the Cisco 3548 it is connected to has issues with the NIC in my server. The NIC appears not to have been configured fully, and is coming up at 100 half-duplex. The 3548 ports are all set to 100 full. Ideally I'd like to have the server set to 100 full, and I have been attempting to configure it using ndd commands, but with no results. The following command shows:

      -bash-3.00# dladm show-dev rtls0
      link: unknown  speed: 100 Mbps  duplex: unknown

    The NIC shows up as:

      pci bus 0x0001 cardnum 0x06 function 0x00: vendor 0x10ec device 0x8139
      Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+

    which should be configurable. I have modified the configuration file from auto-config (5) to 100 fdx (4) to no avail. If there is no other choice, I could alter the Cisco 3548 to 100 half-duplex, but that solution causes a huge performance loss. Currently throughput is about 500Kbps, when it should be around 40Mbps.
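
    For what it's worth, the usual ndd sequence for forcing 100 full on Solaris NICs looks like the sketch below; this is hedged, though, because the rtls driver may not expose these standard parameters at all (many Realtek-era drivers only honour settings from their driver.conf), so listing the supported parameters first is the safer move:

      # List which parameters the driver actually supports
      ndd /dev/rtls0 \?

      # Generic Solaris duplex forcing; parameter support varies by driver
      ndd -set /dev/rtls0 adv_autoneg_cap 0
      ndd -set /dev/rtls0 adv_100fdx_cap 1
      ndd -set /dev/rtls0 adv_100hdx_cap 0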

    Read the article

  • Using git through cygwin on windows 8

    - by 9point6
    I've got a Windows 8 dev preview machine (not sure if it's relevant, but I never had this hassle on Windows 7) and I'm trying to clone a git repo from GitHub. The problem is that my ~/.ssh/id_rsa has 440 permissions and it needs to be 400. I've tried chmodding it, but any change to the user permissions gets reflected in the group permissions (i.e. chmod 600 results in 660, etc.). This appears to be consistent throughout any file in the whole filesystem. I've tried messing with the ACLs but to no avail (full control for my user and deny for everyone resulted in 000). Here are a few outputs to help:

      $ git clone [removed]
      Cloning into [removed]...
      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      @         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
      @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      Permissions 0660 for '/home/john/.ssh/id_rsa' are too open.
      It is required that your private key files are NOT accessible by others.
      This private key will be ignored.
      bad permissions: ignore key: /home/john/.ssh/id_rsa
      Permission denied (publickey).
      fatal: The remote end hung up unexpectedly

      $ ll ~/.ssh
      total 6
      -r--r----- 1 john None 1675 Nov 30 19:15 id_rsa
      -rw-rw---- 1 john None  411 Nov 30 19:15 id_rsa.pub
      -rw-rw-r-- 1 john None  407 Nov 30 18:43 known_hosts

      $ chmod -v 400 ~/.ssh/id_rsa
      mode of `/home/john/.ssh/id_rsa' changed from 0440 (r--r-----) to 0400 (r--------)

      $ ll ~/.ssh
      total 6
      -r--r----- 1 john None 1675 Nov 30 19:15 id_rsa
      -rw-rw---- 1 john None  411 Nov 30 19:15 id_rsa.pub
      -rw-rw-r-- 1 john None  407 Nov 30 18:43 known_hosts

      $ set | grep CYGWIN
      CYGWIN='sbmntsec ntsec server ntea'

    I realize I could use msysgit or something, but I'd prefer to be able to do everything from a single terminal. Edit: msysgit doesn't work either, for the same reasons.
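
    A hedged workaround that often helps when Cygwin can't map POSIX permission bits onto the underlying NTFS ACLs is to remount with the noacl option in Cygwin's /etc/fstab, so chmod is emulated instead of translated into ACLs (paths below are the Cygwin defaults and may need adjusting):

      # /etc/fstab inside Cygwin: add noacl to the mounts that hold your files
      none /cygdrive cygdrive binary,posix=0,user,noacl 0 0
      C:/cygwin/home /home ntfs binary,noacl 0 0

      # Then close all Cygwin processes, reopen the terminal and retry
      chmod 400 ~/.ssh/id_rsa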

    Read the article

  • Force local IP traffic to an external interface

    - by calandoa
    I have a machine with several interfaces that I can configure as I want, for instance:

      eth1: 192.168.1.1
      eth2: 192.168.2.2

    I would like to forward all the traffic sent to one of these local addresses through the other interface. For instance, all requests to an iperf, ftp or http server at 192.168.1.1 should not just be routed internally, but forwarded through eth2 (and the external network will take care of re-routing the packet to eth1). I tried and looked at several commands, like iptables, ip route, etc., but nothing worked. The closest behaviour I could get was with:

      ip route change to 192.168.1.1/24 dev eth2

    which sends all 192.168.1.x out on eth2, except for 192.168.1.1, which is still routed internally. Maybe I could then do NAT forwarding of all traffic directed to a fake 192.168.1.2 on eth1, rerouted to 192.168.1.1 internally? I am actually struggling with iptables, but it is too tough for me. The goal of this setup is to do interface driver testing without using two PCs. I am using Linux, but if you know how to do this with Windows, I'll buy it!

    Read the article

  • Recovering a damaged microSDHC

    - by djechelon
    I just bought from eBay a Kingston 32GB microSDHC that was advertised as defective. The seller said that there could be formatting problems or problems transferring large files. Unfortunately, when I got it, it was a total mess:

      - My Nikon camera doesn't read it at all (OK, maybe it doesn't support 32GB)
      - My Linux laptop doesn't mount it: can't read superblock
      - The same laptop refuses to mkfs.msdos because it failed while writing the reserved sector
      - The same laptop, under Windows, doesn't read or format the card
      - My HTC HD2 mounts the MMC and lets me write to it via USB, but is unable to open the just-written files

    OK, folks, now you would say I should go through a PayPal complaint... it's not that easy. I consciously bought a half-price card that was known to show some defects, and PayPal complaints take time. Obviously, I can't accept that somebody sold me a completely useless computer decoration, so I'll keep that as a last option. My question is: do you know a way, under either Linux or Windows, to thoroughly scan, test and possibly repair memory cards, even if I have to lose some percentage of space because of bad sectors? If I can keep at least half of the card intact it would certainly be fine. I used to do bad-sector marking with hard disks in the past. I almost forgot:

      MONSTR:/home/djechelon # fsck /dev/mmcblk0p1
      fsck from util-linux-ng 2.17.2
      dosfsck 3.0.9, 31 Jan 2010, FAT32, LFN
      Read 512 bytes at 0:Input/output error
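
    For the scanning part, a hedged sketch under Linux (destructive: it overwrites the whole card, and the device names are placeholders) is to map the bad areas with badblocks and then let mkdosfs check for bad blocks while recreating the filesystem:

      # Destructive read-write surface test; writes the list of bad blocks to a file
      badblocks -wsv -o /tmp/mmc-badblocks.txt /dev/mmcblk0

      # Recreate the FAT32 filesystem, checking for bad blocks as it formats
      mkdosfs -F 32 -c /dev/mmcblk0p1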

    Read the article

  • Setup git repository on gentoo server using gitosis & ssh

    - by ikso
    I installed git and gitosis as described here in this guide. Here are the steps I took (server: Gentoo; client: Mac OS X):

      1) git install:

         emerge dev-util/git

      2) gitosis install:

         cd ~/src
         git clone git://eagain.net/gitosis.git
         cd gitosis
         python setup.py install

      3) added git user:

         adduser --system --shell /bin/sh --comment 'git version control' --no-user-group --home-dir /home/git git

         In /etc/shadow now: git:!:14665::::::

      4) On the local computer (Mac OS X; local login is ipx, server login is expert):

         ssh-keygen -t dsa

         got 2 files: ~/.ssh/id_dsa.pub and ~/.ssh/id_dsa

      5) Copied id_dsa.pub onto the server as ~/.ssh/id_dsa.pub and added the content of ~/.ssh/id_dsa.pub to ~/.ssh/authorized_keys:

         cp ~/.ssh/id_dsa.pub /tmp/id_dsa.pub
         sudo -H -u git gitosis-init < /tmp/id_rsa.pub
         sudo chmod 755 /home/git/repositories/gitosis-admin.git/hooks/post-update

      6) Added 2 params to /etc/ssh/sshd_config:

         RSAAuthentication yes
         PubkeyAuthentication yes

         Full sshd_config:

         Protocol 2
         RSAAuthentication yes
         PubkeyAuthentication yes
         PasswordAuthentication no
         UsePAM yes
         PrintMotd no
         PrintLastLog no
         Subsystem sftp /usr/lib64/misc/sftp-server

      7) Local settings in file ~/.ssh/config:

         Host myserver.com.ua
           User expert
           Port 22
           IdentityFile ~/.ssh/id_dsa

      8) Tested: ssh [email protected] - done!

      9) Next step - and there I have the problem:

         git clone [email protected]:gitosis-admin.git
         cd gitosis-admin

    SSH asked for a password for the user git. Why should ssh allow me to log in as user git? The git user doesn't have a password. The ssh key I created is for the user expert. How should this work? Do I have to add some params to sshd_config?

    Read the article

  • Linux: How do I remove bootchart from the boot process?

    - by java.is.for.desktop
    Hello, everyone! I have OpenSUSE 11.2. I removed bootchart and forgot to run mkinitrd. Now, right at the start of the boot process, I get:

      boot/93-bootchart.sh: line 17: 462 Terminated stopinitrd 5

    I can't find any 93-bootchart.sh anywhere. Failsafe boot mode doesn't help. Earlier I got an error message about a non-existing /sbin/bootchartd, but I just copied /bin/cat to /sbin/bootchartd using a GParted boot disk. I tried to use chroot with an OpenSUSE boot disk, but mkinitrd can't find the root device, which is actually there (/dev/sda5). How can I make my system boot again? EDIT: OK, now I've managed to re-install the bootchart rpm, using the OpenSUSE boot disk and chroot. The system starts again. But that annoying bootchart is still there. I will not try to remove it again; first I will try to figure out how to disable it during the boot process. Hopefully with your help ;)
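
    For the original problem (mkinitrd not seeing the root device from the rescue system), the usual hedged recipe is to bind-mount the virtual filesystems into the installed system before chrooting, so mkinitrd can see the real devices; a sketch, assuming /dev/sda5 is the root partition:

      # From the OpenSUSE rescue/boot disk
      mount /dev/sda5 /mnt
      mount --bind /dev  /mnt/dev
      mount --bind /proc /mnt/proc
      mount --bind /sys  /mnt/sys
      chroot /mnt /bin/bash

      # Inside the chroot: rebuild the initrd after removing the bootchart hook
      mkinitrd
      exit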

    Read the article

  • Add IPv6 support to DirectAdmin server

    - by George Boot
    I just set up a new DirectAdmin server, and I want to prepare it for IPv6 use. My ISP has given me a range of IPv6 addresses that I can use; let's say the address is 2a01:7c8:**:1f::. My network adapter uses DHCP to obtain its IP addresses. When I type ifconfig eth0 I get the following result:

      eth0  Link encap:Ethernet  HWaddr 52:**:**:**:ce:f3
            inet addr:37.**.**.44  Bcast:37.**.**.255  Mask:255.255.255.0
            inet6 addr: 2a01:7c8:****:1f::/64 Scope:Global
            inet6 addr: fe80::5054:ff:fe87:cef3/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:38941 errors:0 dropped:0 overruns:0 frame:0
            TX packets:29439 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:3779534 (3.6 MiB)  TX bytes:5089379 (4.8 MiB)

    As you can see, I have an IPv6 address set, but I can't ping6 an IPv6 host; I get the error connect: Network is unreachable. I decided that I needed a gateway, so I tried to add one:

      ip -6 route add default via 2a01:7c8:****::1 dev eth0

    (2a01:7c8:**::1 is the gateway of my ISP.) But it throws an error: RTNETLINK answers: No route to host. Does somebody know what to do, and how to solve this issue? Thanks a lot!
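
    One hedged thing to check (a sketch; the addresses keep the masked placeholders from the question): the gateway 2a01:7c8:****::1 is not inside the 2a01:7c8:****:1f::/64 configured on eth0, so the kernel has no route to reach it. Adding an explicit on-link route to the gateway before the default route usually avoids the RTNETLINK error:

      # Tell the kernel the gateway is reachable directly on eth0
      ip -6 route add 2a01:7c8:****::1/128 dev eth0

      # Then install the default route via that gateway
      ip -6 route add default via 2a01:7c8:****::1 dev eth0

      # Also consider giving eth0 a proper host address rather than the bare
      # network address, e.g. ::2 within the delegated /64
      ip -6 addr add 2a01:7c8:****:1f::2/64 dev eth0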

    Read the article

  • Linux DHCPD Mac-Address based Groups

    - by GruffTech
    Our current dhcpd.conf looks like the following:

      subnet 10.0.32.0 netmask 255.255.255.0 {
        range 10.0.32.100 10.0.32.254;
        option subnet-mask 255.255.255.0;
        option broadcast-address 10.0.32.255;
        option domain-name-servers 208.67.222.222,208.67.220.220;
        option routers 10.0.32.5;

        host Dev-ABaird-W {
          hardware ethernet 00:1D:09:3E:49:13;
          fixed-address 10.0.32.94;
        }
        ... more static hosts ...
      }

    About as basic as it gets. The old router is 10.0.32.1. Our company wanted to implement a Squid proxy to better monitor web traffic while at work and, if necessary, block large time-wasters, e.g. Facebook.com. However, we've quickly realized that this change has played a mean prank on our Polycom SIP phones. Occasionally our phones will not ring: the end recipient hears ringing (this is artificially created by our PBX), but the handset never rings. The ONLY thing that has changed in our network is the option routers line. So, since all Polycom MAC addresses begin with 00:04:F2, would it be possible in DHCP to say that any 00:04:F2:* MAC address gets option routers 10.0.32.1, and anything else must talk to our gateway?
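
    This is what ISC dhcpd classes are for; a hedged sketch (the range split and pool layout are illustrative, adjust to the real address plan) matching on the Polycom OUI and handing those clients a different router:

      class "polycom-phones" {
        # hardware is <type byte> followed by the 6 MAC bytes; skip the type byte
        match if substring(hardware, 1, 3) = 00:04:f2;
      }

      subnet 10.0.32.0 netmask 255.255.255.0 {
        option subnet-mask 255.255.255.0;
        option broadcast-address 10.0.32.255;
        option domain-name-servers 208.67.222.222,208.67.220.220;

        pool {
          allow members of "polycom-phones";
          range 10.0.32.60 10.0.32.99;
          option routers 10.0.32.1;     # phones keep the old router
        }
        pool {
          deny members of "polycom-phones";
          range 10.0.32.100 10.0.32.254;
          option routers 10.0.32.5;     # everyone else goes via the proxy gateway
        }
      }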

    Read the article

  • Squid SSL transparent proxy - SSL_connect:error in SSLv2/v3 read server hello A

    - by larryzhao
    I am trying to set up an SSL proxy with Squid for one of my internal servers, so that my Rails application on that server can reach https://www.googleapis.com via the proxy. I am new to this, so my approach is to set up an SSL transparent proxy with Squid. I built Squid 3.3 on Ubuntu 12.04, generated a pair of SSL key and crt files, and configured Squid like this:

      http_port 443 transparent cert=/home/larry/ssl/server.csr key=/home/larry/ssl/server.key

    leaving almost all other configuration at the defaults. The permissions of the dir that holds the key/crt are:

      drwxrwxr-x 2 proxy proxy 4096 Oct 17 15:45 ssl

    Back on my dev laptop, I put <proxy-server-ip> www.googleapis.com in my /etc/hosts to make the calls go to my proxy server. But when I try it in my Rails application, I get:

      SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol

    And I also tried with openssl on the CLI:

      openssl s_client -state -nbio -connect www.googleapis.com:443 2>&1 | grep "^SSL"
      SSL_connect:before/connect initialization
      SSL_connect:SSLv2/v3 write client hello A
      SSL_connect:error in SSLv2/v3 read server hello A
      SSL_connect:error in SSLv2/v3 read server hello A

    Where did I go wrong?
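
    For reference, a plain http_port with a cert won't speak TLS to clients, which is consistent with the "unknown protocol" error. A hedged sketch of how Squid 3.3 is usually set up for transparent HTTPS interception instead (directive names come from Squid 3.3's ssl-bump feature; the cert should be an actual certificate, not the .csr, and traffic has to be redirected into the port, e.g. with iptables):

      # squid.conf (sketch)
      https_port 3130 intercept ssl-bump \
          cert=/home/larry/ssl/server.crt key=/home/larry/ssl/server.key
      ssl_bump server-first all

      # Redirect outbound 443 traffic into Squid on the same box
      iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3130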

    Read the article

  • MacBook Pro (OSX Lion) - shutdown automatically before reaching login screen

    - by mkk
    When I try to boot my MacBook Pro I can see a progress bar on the loading screen. It gets to about 1/15 of the way and then the machine shuts down - I cannot even reach the login screen. This happened to me 2 months ago, and I 'fixed' it by formatting my hard drive and installing OS X (Lion) again. This time I think the situation is a little bit different: I am able to enter single-user mode by pressing Cmd+S. If I then type /sbin/fsck -yf, I get the error:

      ** Checking Journaled HFS Plus volume.
         The volume name is Macintosh HD
      ** Checking extents overflow file.
      ** Checking catalog file.
         Invalid node structure (4, 24704)
      ** The volume Macintosh HD could not be verified completely.
      /dev/rdisk0s2 (hfs) EXITED WITH SIGNAL 8

    but when I type exit, I reach the login screen and I can log in. I have tried a lot of things, including booting from the recovery partition and choosing Disk Utility to repair the disk, but I get an error that it cannot be repaired. I have googled for hours and the only real solution I have found was to buy DiskWarrior, which might fix the issue. Any other suggestions? A secondary question is what causes this issue. I thought the reason was bad sectors, but SMART Utility hasn't found any. I found a suggestion that RAM could cause this kind of issue as well, so I downloaded Rember and ran a memory test - all tests passed. Right now my 'solution' is entering single-user mode and then typing exit, but I am not sure how long that will keep working. Of course I have backed up what I consider important. Thanks for the help in advance! UPDATE: I guess SMART Utility was not very useful; I managed to get an input/output error, which I believe is equivalent to a bad sector.
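
    "Invalid node structure" points at a damaged catalog B-tree, and fsck_hfs has a rebuild option that may be worth trying before paying for third-party tools. A hedged sketch from single-user mode (the disk identifier is the one from the error above; a backup first is strongly advised, and the -r flag should be checked against the fsck_hfs man page on that OS X release):

      # From single-user mode (Cmd+S), with the volume not yet mounted read-write
      /sbin/fsck_hfs -fy -r /dev/rdisk0s2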

    Read the article

  • Small Business HP Virtualisation and iSCSI SAN Options

    - by Robin Day
    We are a small business that hosts our core product on a number of HP servers. Our core production setup is:

      - 1x HP DL380, high powered, for a SQL Server database
      - 1x HP DL360, mid powered, for our core application server
      - 6x HP DL320, low powered, for our front ends

    We run our training / testing / support systems on a similar setup; the servers are just older and less powerful. Unfortunately this is now causing us issues as the system has grown beyond the capabilities of these older servers. Upgrading these servers would be expensive, and we believe that virtualisation is probably the way to go for the future. Locally we run a number of test / dev environments on ESXi using direct storage on a couple of high-powered DL360s, and these are performing fairly well. We're thinking that instead of replacing all of our test servers, we can implement an iSCSI SAN and one or two high-powered hosts, and hopefully, when it comes time to replace our live servers as well, we can just expand the virtual environment to cope. So my question is: can anyone offer any advice on some suitable options? We have generally always been extremely happy with HP servers, and all of our kit is currently HP, so our preference would be to stick with HP; however, I'm always happy to hear about other options. I'm hoping that initially a budget of around 15-25k (GBP) would be suitable; this could potentially be increased if I had confidence that the system would pave the way for a cost-effective upgrade of our live systems in the future as well. I am new to SANs and my only real experience is playing with OpenFiler on some old desktops. I think iSCSI should be suitable, but I've not done any research into how SQL Server may perform. I've had a browse through HP's sites and see plenty of information about EVA, MSA, LeftHand, etc. However, from looking at all that, I don't see which options would be best and, more importantly, I don't know exactly what I would need to buy. Any help, links or opinions would be much appreciated. Thanks

    Read the article

  • Debootstrap Ubuntu over NFS leads to mknod I/O error

    - by Aaron B. Russell
    Hi everyone, I'm trying to prepare an Ubuntu environment for a diskless machine that will PXE boot and mount an NFS share as its root. I've currently got another Ubuntu machine mounting the NFS share and I'm trying to debootstrap into it, but it has trouble creating devices over NFS:

      root@kimiko:~# mount | grep Seiuchi
      192.168.0.203:/mnt/user/Seiuchi on /mnt type nfs (rw,addr=192.168.0.203)
      root@kimiko:~# debootstrap --arch i386 maverick /mnt http://gb.archive.ubuntu.com/ubuntu/
      mknod: `/mnt/test-dev-null': Input/output error
      E: Cannot install into target '/mnt' mounted with noexec or nodev

    My NFS rule on the unRAID server is 192.168.0.201/32(rw,no_root_squash,sync). I don't have the noexec or nodev options set. I've not got much experience with NFS, so I'm probably missing something basic in the way I'm sharing this, but my attempts at Googling for an answer aren't really turning anything useful up. Does anyone have suggestions on what I might have missed, or maybe relevant docs? Edit: creating normal files (and directories) works just fine; I just can't create devices...

      root@kimiko:/mnt# mkdir foo
      root@kimiko:/mnt# cd foo
      root@kimiko:/mnt/foo# touch bar
      root@kimiko:/mnt/foo# mknod quux c 4 64
      mknod: `quux': Input/output error
      root@kimiko:/mnt/foo# ls
      bar

    Read the article

  • Scanning a website for vulnerabilities

    - by Kristen
    I have found that the local school's website has a Perl calendar installed - this was years ago, it has not been used for ages, but Google has it indexed (which is how I found it) and it's full of Viagra links and the like. The program was by Matt Kruse; here are details of the exploit: http://www.securiteam.com/exploits/5IP040A1QI.html I've got the school to remove that, but I think they also have MySQL installed, and I'm aware that out of the box there have been some exploits of the admin tools / login in old versions. For all I know they also have phpBB and the like installed... The school is just using some cheap, shared hosting; the HTTP response header I get is:

      Apache/1.3.29 (Unix) (Red-Hat/Linux) Chili!Soft-ASP/3.6.2 mod_ssl/2.8.14 OpenSSL/0.9.6b PHP/4.4.9 FrontPage/5.0.2.2510

    I'm looking for some means of checking whether they have other junk installed (quite possibly from way back, and now unused) that might put the site at risk. I'm more interested in something that can scan for things like the MySQL admin exploit than in open ports etc. My guess is that they have little control over the hosting space that they have - but I'm a Windows dev, so this *nix stuff is all Greek to me. I found http://www.beyondsecurity.com/ which looks like it might do what I want (within their evaluation :) ), but I'm worried about how to find out whether they are well known / honest - otherwise I will be tipping them the wink about a domain name that may be at risk! Many thanks.

    Read the article

  • Forcing a particular SSL protocol for an nginx proxying server

    - by vitch
    I am developing an application against a remote HTTPS web service. While developing, I need to proxy requests from my local development server (running nginx on Ubuntu) to the remote HTTPS web server. Here is the relevant nginx config:

      server {
        server_name project.dev;
        listen 443;
        ssl on;
        ssl_certificate /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;

        location / {
          proxy_pass https://remote.server.com;
          proxy_set_header Host remote.server.com;
          proxy_redirect off;
        }
      }

    The problem is that the remote HTTPS server can only accept connections over SSLv3, as can be seen from the following openssl calls. Not working:

      $ openssl s_client -connect remote.server.com:443
      CONNECTED(00000003)
      139849073899168:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
      ---
      no peer certificate available
      ---
      No client certificate CA names sent
      ---
      SSL handshake has read 0 bytes and written 226 bytes
      ---
      New, (NONE), Cipher is (NONE)
      Secure Renegotiation IS NOT supported
      Compression: NONE
      Expansion: NONE
      ---

    Working:

      $ openssl s_client -connect remote.server.com:443 -ssl3
      CONNECTED(00000003)
      <snip>
      ---
      SSL handshake has read 1562 bytes and written 359 bytes
      ---
      New, TLSv1/SSLv3, Cipher is RC4-SHA
      Server public key is 1024 bit
      Secure Renegotiation IS NOT supported
      Compression: NONE
      Expansion: NONE
      SSL-Session:
          Protocol  : SSLv3
          Cipher    : RC4-SHA
      <snip>

    With the current setup my nginx proxy gives a 502 Bad Gateway when I connect to it in a browser. Enabling debug in the error log, I can see the message:

      [info] 1451#0: *16 peer closed connection in SSL handshake while SSL handshaking to upstream

    I tried adding ssl_protocols SSLv3; to the nginx configuration, but that didn't help. Does anyone know how I can set this up to work correctly?
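
    One hedged note: ssl_protocols only controls the TLS that nginx speaks to browsers; the upstream side of proxy_pass is governed by the proxy_ssl_* directives, and proxy_ssl_protocols in particular (it only exists in newer nginx releases than may be packaged for this Ubuntu, so this is a sketch to verify against the installed version):

      location / {
          proxy_pass https://remote.server.com;
          proxy_set_header Host remote.server.com;
          proxy_redirect off;

          # Force SSLv3 only for the upstream handshake (requires an nginx
          # build that supports proxy_ssl_protocols)
          proxy_ssl_protocols SSLv3;
      }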

    Read the article
