Search Results

Search found 5727 results on 230 pages for 'routed commands'.

Page 183/230 | < Previous Page | 179 180 181 182 183 184 185 186 187 188 189 190  | Next Page >

  • Connect through remote computer connection

    - by Didac
    First, sorry for my English and my limited knowledge of this subject. I have a dedicated server in Germany (Windows 2008 R2) and I live in Spain. I would like to access the internet from my home computer (Windows 7 Pro x64) through my server in Germany, so that I can use a German IP, which I need sometimes. I have full access to both computers, but I just don't know where to start (my knowledge is limited to software development :/ ). I'd like to know where to begin, and whether I need to create a VPN and so on. Thanks in advance!

    Update 1: I tried a lot of OpenVPN options, but sadly I know nothing about networking, so I have to accept that I do not know what I'm doing :( Here are my config files (note that most of the options come from the sample config files).

    server.conf:

        # server config file start
        port 1194
        proto udp
        dev tun
        server 10.0.0.0 255.255.255.224   # you may choose any subnet; 10.0.0.x is used for this example
        ca "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\ca.crt"
        cert "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\server.crt"
        key "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\server.key"
        dh "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\dh1024.pem"
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        # the following commands are optional
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        verb 5
        # config file ends

    client.conf:

        # client config file start
        client
        dev tun
        proto udp
        remote 176.9.99.180 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\ca.crt"
        cert "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\client1.crt"
        key "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\client1.key"
        ns-cert-type server
        comp-lzo
        verb 5
        explicit-exit-notify 2
        ping 10
        ping-restart 60
        route-method exe
        route-delay 2
        # end of client config file

    And here are the server's network settings:

        IP address: 176.9.99.180
        Subnet mask: 255.255.255.224
        Default gateway: 176.9.99.161
        Preferred DNS server: 127.0.0.1
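
    Note that pushing "redirect-gateway def1" only gives the client internet access if the server also forwards and NATs the VPN subnet out of its public interface; on Windows Server 2008 R2 that is normally done with RRAS NAT or Internet Connection Sharing on the TAP adapter. Purely as a hedged illustration of that missing step (the actual server here is Windows, and the interface name is assumed), the Linux equivalent would look something like this:

        # hypothetical Linux-syntax sketch of the NAT step that redirect-gateway relies on
        sysctl -w net.ipv4.ip_forward=1                    # enable packet forwarding
        iptables -t nat -A POSTROUTING -s 10.0.0.0/27 \
                 -o eth0 -j MASQUERADE                     # NAT the VPN subnet ("eth0" is assumed)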

    Read the article

  • File permission woes on an Ubuntu ec2 instance

    - by Pardoner
    I've set up an Amazon EC2 instance and I'm having some file permission issues. I've created myself a new user and added myself to the following groups:

        adm:x:4:me,ubuntu
        sudo:x:27:me
        www-data:x:33:me,www-data
        ssh:x:108:me
        admin:x:111:me
        ubuntu:x:1000:www-data,me
        me:x:1001:me

    but when I cd /var/www I can't do simple commands without using sudo. So I ran chown -R www-data:www-data /var/www to ensure that I'm in the owning group, but I still have to type sudo for everything. If I sudo su www-data it works fine. Since I'm in the www-data group, shouldn't I have the same privileges as www-data? One strange thing I'm noticing is that when I ls -l it lists the owner but not the group names. Could this possibly be part of the issue? Is it possible for a directory to not be part of a group?

        drwxr-xr-x  4 www-data 4.0K Oct 24 16:39 .
        drwxr-xr-x 14 root     4.0K Oct 10 16:58 ..
        drwxrwxr-x  9 www-data 4.0K Oct 23 04:03 admin.mywebsite.com
        drwxrwxr-x  2 www-data 4.0K Oct  4 00:29 mywebsite.com
        drwxrwxr-x  9 www-data 4.0K Oct 23 04:03 staging.mywebsite.com

    Edit: It appears I had an alias messing with my ls command. By calling \ls -l I can see that all my files are in the correct group.
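
    Two things commonly bite in this situation: a newly added group membership only takes effect in a fresh login session, and being in the owning group only helps if the group write bit is actually set. A quick sanity check along these lines (paths taken from the question):

        id                            # groups of the *current* session; log out/in or run `newgrp www-data` if www-data is missing
        ls -ld /var/www               # the group permission bits must include "w" for group members to write
        sudo chmod -R g+w /var/www    # grant group write if it is missing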

    Read the article

  • Removing a device in "removed" state from Linux software RAID array

    - by Sahasranaman MS
    My workstation has two disks (/dev/sd[ab]), both with similar partitioning. /dev/sdb failed, and cat /proc/mdstat stopped showing the second (sdb) partition. I ran mdadm --fail and mdadm --remove for all partitions from the failed disk on the arrays that use them, although all such commands failed with:

        mdadm: set device faulty failed for /dev/sdb2: No such device
        mdadm: hot remove failed for /dev/sdb2: No such device or address

    Then I hot-swapped the failed disk, partitioned the new disk and added the partitions to the respective arrays. All arrays got rebuilt properly except one, because in /dev/md2 the failed disk doesn't seem to have been removed from the array properly. Because of this, the new partition keeps getting added as a spare, and the array's status remains degraded. Here's what mdadm --detail /dev/md2 shows:

        [root@ldmohanr ~]# mdadm --detail /dev/md2
        /dev/md2:
            Version : 1.1
            Creation Time : Tue Dec 27 22:55:14 2011
            Raid Level : raid1
            Array Size : 52427708 (50.00 GiB 53.69 GB)
            Used Dev Size : 52427708 (50.00 GiB 53.69 GB)
            Raid Devices : 2
            Total Devices : 2
            Persistence : Superblock is persistent

            Intent Bitmap : Internal

            Update Time : Fri Nov 23 14:59:56 2012
            State : active, degraded
            Active Devices : 1
            Working Devices : 2
            Failed Devices : 0
            Spare Devices : 1

            Name : ldmohanr.net:2 (local to host ldmohanr.net)
            UUID : 4483f95d:e485207a:b43c9af2:c37c6df1
            Events : 5912611

            Number   Major   Minor   RaidDevice   State
               0       8       2         0        active sync   /dev/sda2
               1       0       0         1        removed

               2       8      18         -        spare   /dev/sdb2

    To remove a disk, mdadm needs a device filename, which was /dev/sdb2 originally, but that no longer refers to device number 1. I need help with removing device number 1 with 'removed' status and making /dev/sdb2 active.
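
    A hedged sketch of one way people clear a stale slot and force the stuck spare to rebuild; this zeroes the RAID metadata on /dev/sdb2, so it is only appropriate because that partition holds no data yet, and it is not verified against this exact array state:

        mdadm /dev/md2 --remove detached       # drop any member whose device node no longer exists
        mdadm /dev/md2 --remove /dev/sdb2      # detach the partition that is stuck as a spare
        mdadm --zero-superblock /dev/sdb2      # forget its old metadata so it is treated as a fresh disk
        mdadm /dev/md2 --add /dev/sdb2         # re-add it; it should now take the empty slot and resync
        cat /proc/mdstat                       # watch the rebuild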

    Read the article

  • Hearing a clicking noise from soundcard all the time

    - by Mehrdad
    I have installed Fedora 17 on my laptop. A few days ago I updated my Fedora (but did not upgrade it). I shut down my computer, and since the next time I turned it on I have been hearing a clicking noise from the speakers all the time. Even when I plug my headphones in, I hear the noise through the headphones. I searched the internet and found the following shell commands:

        su -c 'echo "options snd_hda_intel power_save=0" >> /etc/modprobe.d/snd_hda_intel.conf'
        su -c 'echo 0 > /sys/module/snd_hda_intel/parameters/power_save'

    I tried them but they didn't work. Here is the part of the lspci output related to my sound card:

        00:1b.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller (rev 03)

    I should add that my sound card is working and I can play audio files; I hear the sound and the noise simultaneously. But everything is OK in Windows XP, which is also installed on my laptop. Could it be related to the sound-card driver? If so, how can I revert it to the previous version?

    Read the article

  • Logfile deleted on Oracle database: how to re-create it?

    - by Daniel
    For my database assignment we were looking into 'database corruption', and I was asked to delete the second redo log file, which I did with the command:

        rm log02a.rdo

    This was in the $HOME/ORADATA/u03 directory. I then started up my database using:

        startup pfile=$PFILE nomount

    then I mounted it using the command:

        alter database mount;

    Now when I try to open it with:

        alter database open;

    it gives me the error:

        ORA-03113: end-of-file on communication channel
        Process ID: 22125
        Session ID: 25 Serial number: 1

    I am assuming this is because the second redo log file is missing. There is still log01a.rdo, but not the one I deleted. How can I go about recovering from this so that I can open my database again? I have looked into the database creation scripts, and they specify the log02a.rdo file to be 10M in size and part of group 2. If I run:

        select group#, member from v$logfile;

    I get:

        1 /oradata/student_db/user06/ORADATA/u03/log01a.rdo
        2 /oradata/student_db/user06/ORADATA/u03/log02a.rdo
        3 /oradata/student_db/user06/ORADATA/u03/log03a.rdo
        4 /oradata/student_db/user06/ORADATA/u03/log04a.rdo

    So it is part of group 2. If I try to add the log02a.rdo file again, I get "already part of the database". If I drop group 2 and then add it again with this command:

        ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M;

    Nothing. It supposedly alters the database, but it still won't start up. Any ideas what I can do to re-create this and be able to open my database again?
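
    For a lost redo log that was not the current group, the usual approach is to clear the group rather than drop and re-add it. A hedged sketch from SQL*Plus, using the group number from the question (not verified against this particular database):

        -- run as SYSDBA with the database mounted
        ALTER DATABASE CLEAR LOGFILE GROUP 2;
        -- or, if that group had not yet been archived:
        ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
        ALTER DATABASE OPEN;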

    Read the article

  • Cisco 851 (IOS) router: FastEthernet 4 (WAN) got the shutdown flag.

    - by cjavapro
    At a customer location there was a Cisco 851 router (which runs IOS). The PCs on location were all of a sudden unable to connect. We came on site and found that FastEthernet 4 (the WAN port) was "administratively down". We ran these commands to resolve it:

        config t
        interface fa4
        no shutdown
        exit
        exit
        write

    Now the mystery is how the shutdown flag got there in the first place. The router was on battery backup, but during the outage it was power-cycled by the customer. It is possible that there was a short outage by the ISP and that the power cycle caused the shutdown flag to come up. There may have been a hack or an attack pattern that caused the shutdown flag to come up, or one that caused the router to become unavailable and then caused the shutdown flag to be added on startup.

    Question: Does anybody have any clues? Or at least remember having a shutdown flag come up on their WAN port as well?
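
    One way to narrow it down, offered as a hedged sketch: a power cycle boots the startup configuration, so if "shutdown" had been saved there the interface would come up disabled after any reboot. Comparing the two configs, and checking the reload reason, can show which scenario fits:

        show running-config interface FastEthernet4
        show startup-config | include shutdown
        show version   ! look for the "System returned to ROM by ..." line, which reports why it last reloaded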

    Read the article

  • How can I create a VLAN on my Extreme switch for a separate subnet/domain?

    - by drpcken
    I'm putting together a small Active Directory implementation for a buddy of mine. I currently have 2 servers (one is the primary domain controller) and a couple of clients. I need to test and run updates on every machine on this domain, but I would have to plug them into my current live domain to get internet access. From what I've read, having two separate domains on a single subnet is a bad idea (even though it is temporary), so I don't want to risk messing anything up on my production domain. I'm pretty sure I can create a separate VLAN on my Extreme 48-port switch and plug this smaller domain into it on a different subnet, but I don't know the commands. Both subnets would need internet access of course; one of the things I can't wrap my head around is routing internet traffic between subnets (the gateway is on the production subnet). My production domain is on subnet 192.168.200.0. The new domain I want to put online would go onto subnet 192.168.10.0. A shove in the right direction would be greatly appreciated. Thank you!
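
    A hedged sketch in ExtremeXOS-style syntax (older ExtremeWare is similar but not identical); the VLAN name, tag and port range are made up for illustration, and whether the switch itself can route between VLANs depends on the model and licence:

        create vlan "testlab"
        configure vlan testlab tag 10
        configure vlan testlab add ports 45-48 untagged
        configure vlan testlab ipaddress 192.168.10.1/24
        enable ipforwarding vlan testlab

    For the new subnet to reach the internet, the production gateway also needs a route back to 192.168.10.0/24 (or NAT for it), otherwise return traffic has nowhere to go.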

    Read the article

  • Booting Ubuntu as VM with KVM on Ubuntu 12.04

    - by CrazycodeMonkey
    I am trying to boot my very first VM using KVM. I have Ubuntu 12.04 installed, and I made sure the BIOS had the right virtualization flag enabled for the Intel processor by running kvm-ok. I have researched this on Google and all the instructions that I have found so far are outdated. For example, most instructions talk about booting a virtual machine with the following commands:

        qemu-img create -f qcow2 foo.img 100G                                # create a virtual disk for your VM
        kvm --name foo -m 1024 -hda foo.img -cdrom whatever.iso -boot d     # this runs kvm

    This command line is incomplete. First, you need to be root to run it. Second, it is missing an option for the video device. When you run this command you get the following error:

        Could not initialize SDL(No available video device) - exiting

    I googled this error and looked it up on Stack Overflow: http://stackoverflow.com/questions/4841908/sdl-init-failure-reason-is-no-available-video-device The answer provided there does not work on Ubuntu 12.04. Googling this problem further, I found out that I need to specify a video device, so I finally ran the following command:

        sudo kvm --name mymachine -m 8096 -hda myimage.img --cdrom ubuntu.iso -boot d -vga cirruss -k en-us -vmc :0

    This was after I had created the myimage.img image on the drive. Now this command does not give me an error, but it just hangs. Does anyone have clear instructions on how to run a VM using KVM on Ubuntu?
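
    For what it is worth, the display option is normally spelled -vnc (not -vmc) and the VGA model is spelled cirrus, so the last command above may not be doing what was intended. A minimal headless invocation sketch, with the names and image taken from the question and not tested here:

        sudo kvm -name mymachine -m 1024 \
            -hda myimage.img -cdrom ubuntu.iso -boot d \
            -vga cirrus -vnc :0     # then connect a VNC viewer to the host on display :0 (port 5900)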

    Read the article

  • Static file download from browser breaking in varnish but works fine in Apache

    - by Ron
    I would first like to thank everyone at Server Fault for this great website; I also end up on this site while searching in Google for various server-related issues and setups. I have an issue today, so I am posting here and hope that the seniors will help me out.

    I set up a website on a dedicated server a few days ago, and I used Varnish 3 as the frontend to Apache 2 on a Debian Lenny server, as the traffic was a bit high. There are several static file downloads of around 10-20 MB in size on the website. The website looked fine in the first few days after setup. I was checking from a 5 Mbps+ broadband connection, and the file downloads also completed in seconds and worked fine.

    But today I realized that on a slow internet connection the file downloads were breaking off. When I tried to download the files from the website using a browser, the download broke off after a minute or so. It kept on happening again and again, so it had nothing to do with the internet connection. The connection was around 512 kbps, so it was not dial-up speed either, but a decent speed where files should download easily, though not that fast.

    Then I thought of trying the Apache backend port directly to check whether the problem still occurs. On adding the Apache port to the static file download URL, the files downloaded easily and did not break even once. I tried it several times to make sure that it was not a coincidence, but every time I used the Apache port in the file download URL it downloaded fine, while it broke each time with the normal link, which is routed through Varnish I suppose. So it seems Varnish has somehow resulted in the broken file downloads. Could anyone give any idea as to why it is happening and how to fix the problem?

    For more clarification, take this example: Apache backend set on port 8008, Varnish frontend set on port 80. When I download, say:

        http://mywebsite.com/directory/filename.extension

    the download breaks off after a minute or so. I cannot be sure whether it is due to time or size; I am just assuming, and it may be some other reason too. But when I download using:

        http://mywebsite.com:8008/directory/filename.extension

    the file download does not break at all and it downloads fine. So it seems that Varnish is somehow causing the broken file downloads, not Apache. Does anybody have any idea as to why this is happening and how it can be fixed? Any help would be highly appreciated. My Varnish default.vcl is:

        backend apache {
            set backend.host = "127.0.0.1";
            set backend.port = "8008";
        }
        sub vcl_deliver {
            remove resp.http.X-Varnish;
            remove resp.http.Via;
            remove resp.http.Age;
            remove resp.http.Server;
            remove resp.http.X-Powered-By;
        }
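
    One common workaround in this situation is to stop pushing the large static downloads through Varnish's cache at all and hand those requests straight to Apache. A hedged sketch in Varnish 3 VCL; the extension list is only an example:

        sub vcl_recv {
            # hand large static downloads straight to the backend, bypassing the cache
            if (req.url ~ "\.(zip|iso|tar\.gz|mp4)$") {
                return (pipe);
            }
        }

    The varnishd runtime parameter send_timeout, which limits how long Varnish will keep delivering a response to a slow client, may also be worth checking (varnishadm param.show send_timeout).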

    Read the article

  • IPv6 host route is deleted after PMTU expires

    - by SAPikachu
    I am experimenting with my new IPv6 tunnel setup between my local Ubuntu box and a scratch Linode. I set up some Docker containers, and configured a 6in4 tunnel server and IPv6 forwarding on the Linode:

        # uname -a
        Linux argo 3.15.4-x86_64-linode45 #1 SMP Mon Jul 7 08:42:36 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux
        # ip addr
        .. snipped ..
        48: sit-sapikachu: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN group default
            link/sit 106.185.41.115 peer 1.2.3.4
            inet6 fd00::1/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::6ab9:2973/64 scope link
               valid_lft forever preferred_lft forever
        13: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
            link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
            inet 172.17.42.1/16 scope global docker0
               valid_lft forever preferred_lft forever
            inet6 fc00::1/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::5484:7aff:fefe:9799/64 scope link
               valid_lft forever preferred_lft forever
        // Docker containers are bridged to docker0

    On my local box, I configured a 6in4 tunnel interface to connect to the Linode box, and added a host route to one of the Docker containers:

        # uname -a
        Linux sapikachu-netbox 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        # ip addr
        .. snipped ..
        16: sit-argo: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default
            link/sit 0.0.0.0 peer 106.185.41.115
            inet6 fd00::2/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::a97:302/64 scope link
               valid_lft forever preferred_lft forever
            inet6 fe80::ac19:1/64 scope link
               valid_lft forever preferred_lft forever
            inet6 fe80::c0a8:1f0/64 scope link
               valid_lft forever preferred_lft forever
            inet6 fe80::c0a8:1fa/64 scope link
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
            link/ether *** brd ff:ff:ff:ff:ff:ff
            .. snipped ..
            inet6 fd00:0:1::1/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::2e0:6fff:fe0e:365e/64 scope link
               valid_lft forever preferred_lft forever

        # ip route replace fc00::1875:8606:d8c1:8a9d via fd00::1    # Add route to docker container
        # ip -6 route
        .. snipped unrelated routes ..
        fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires 590sec mtu 1472
        fd00::/64 dev sit-argo proto kernel metric 256
        fd00:0:1::/64 dev eth0 proto kernel metric 256
        fe80::/64 dev sit-argo proto kernel metric 256

    (Note that the tunnel MTU on my local box is different from the server's; this is intentional, for testing.)

    After adding the host route to the Docker container (fc00::1875:8606:d8c1:8a9d), I can ping the container without problems until the route expires. After that I cannot get a reply any more. If I run ip -6 route within a few seconds of expiration, the expiration time of the host route is a negative number:

        fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires -1sec

    And the output of ip route get fc00::1875:8606:d8c1:8a9d shows that it is routed to my default IPv6 gateway (which of course fails to route it correctly, since the address is not globally routable). After some time, the host route disappears without a trace.

    This problem does not happen if I do either one of the following things:

    1. Set the MTU of the tunnel on my local box to be the same as the server's (1472). The route has no expiration time in either ip -6 route or ip route get in this case.
    2. Instead of adding a host route, add a route with a network mask (even /127 works). In this case ip -6 route shows the route without an expiration time; ip route get shows an expiration time, but it is correctly refreshed after expiration.

    Although this problem can be easily resolved, I am curious to know why it happens. Is there an error in my configuration, or is this a kernel bug?

    Read the article

  • IP to IP forwarding with iptables [centos]

    - by FunkyChicken
    I have 2 servers: server 1 with IP 1.1.1.1 and server 2 with IP 2.2.2.2. My domain example.com points to 1.1.1.1 at the moment, but very soon I'm going to switch to IP 2.2.2.2. I have already set a low TTL for example.com, but some people will still hit the old IP address after I change the IP address of the domain. Both machines run CentOS 5.8 with iptables and nginx as a webserver. I want to forward all traffic that still hits server 1.1.1.1 to 2.2.2.2 so there won't be any downtime. I found this tutorial: http://www.debuntu.org/how-to-redirecting-network-traffic-a-new-ip-using-iptables but I cannot seem to get it working. I have enabled IP forwarding:

        echo "1" > /proc/sys/net/ipv4/ip_forward

    After that I ran these 2 commands:

        /sbin/iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE

    But when I load http://1.1.1.1 in my browser, I still get the pages hosted on 1.1.1.1 and not the content from 2.2.2.2. What am I doing wrong?
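
    One detail worth noting, as a hedged observation: the PREROUTING rule above matches on the source address (-s), while traffic arriving *at* 1.1.1.1 is normally matched on the destination (-d), and the reply path needs SNAT so answers return via server 1. A sketch of the usual recipe, with the IPs from the question:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        /sbin/iptables -t nat -A PREROUTING  -d 1.1.1.1 -p tcp --dport 80 \
                -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -d 2.2.2.2 -p tcp --dport 80 \
                -j SNAT --to-source 1.1.1.1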

    Read the article

  • Plesk FTP not working but SFTP and shell are working

    - by shamittomar
    I am facing a strange problem. FTP on my Plesk VPS is not working. Whenever I try to connect, the FileZilla FTP client says:

        Status: Resolving address of xxxxxxxxxxxxx.com
        Status: Connecting to xxx.xxx.xxx.xxx:21...
        Status: Connection established, waiting for welcome message...
        Error:  Could not connect to server

    So it is not even getting to the step of asking for a username/password; it is something else. SFTP on port 22 is working fine. I can also get shell access and run commands without problems. But I NEED FTP access on port 21 too. I have searched everywhere but cannot find any setting to enable it. This is the Plesk version info:

        Parallels Plesk Panel version 9.5.2
        Operating system: Linux 2.6.26.8-57.fc8
        CPU: GenuineIntel, Intel(R) Pentium(R) 4 CPU 3.00GHz

    Any help is appreciated.

    [EDIT]: The firewall is not blocking it. I have checked on the server and there are absolutely no blocking rules; the firewall states "All incoming/outgoing connections are accepted on FTP". On the client side (my PC), I can connect to other FTP servers, so this is not an issue with my PC's firewall either. Moreover, I cannot even connect to the FTP from online FTP clients like net2ftp.
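
    "Connection established, waiting for welcome message..." followed by a failure usually means something accepted the TCP connection but no FTP daemon ever answered, so a reasonable first check is whether anything is actually listening and serving on port 21. A hedged diagnostic sketch (Plesk 9 normally runs ProFTPD through xinetd; the exact xinetd file name may differ):

        netstat -tlnp | grep ':21 '              # is anything listening on port 21, and which process?
        ls /etc/xinetd.d/ && cat /etc/xinetd.d/ftp_psa   # Plesk's ProFTPD xinetd service (name assumed)
        /etc/init.d/xinetd restart
        tail -f /var/log/messages                # watch for xinetd/proftpd errors while connecting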

    Read the article

  • Linux: CIFS/Samba mount hangs for several minutes

    - by Pistos
    I have a small local network which has a Gentoo box and a Windows box. I mount a share originating on the Windows box onto the Gentoo box with a command like:

        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos //192.168.0.103/Users /mnt/windowsbox

    Most of the time everything Just Works, and I can read and write without problems. However, every few weeks or so, the connection or the mount point seems to go dead or hang, such that any process that tries to access the mount point gets stuck in D state (disk, or I/O wait). These processes become impervious to TERM and KILL signals. Disconnecting and reconnecting the Windows box from the network does not help. The frozen state lasts for 5+ minutes. It is really frustrating and gets in the way of normal work, because it freezes Save As dialogues, ls commands, etc. If I issue a umount on the mount point, it either hangs as well or reports that the mount point is in use. Eventually the dead state resolves itself, and the mount point gets unmounted, or it becomes possible to umount with no delay. My guess is that this happens when the connection/mount has gone idle, or when the Windows machine has been idle. I am not really sure. Why is this happening, and what can I do to prevent it? Or how can I successfully kill these D-state processes at will?

    Possibly related: CIFS mounts hang on read
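
    D-state waits cannot be killed, but the hang time can often be limited: mounting with the soft option lets stalled CIFS calls eventually return an error instead of blocking forever, and a lazy unmount detaches a wedged mount point without waiting for it. A hedged sketch, with the server, share and options copied from the question; whether soft is acceptable depends on how much a failed write would matter:

        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos,soft \
            //192.168.0.103/Users /mnt/windowsbox    # 'soft' lets stalled operations fail instead of hanging
        umount -l /mnt/windowsbox                    # lazy unmount: detach now, clean up when no longer busy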

    Read the article

  • How to disable monitor "sleep" on Ubuntu without access to X?

    - by exhuma
    I just received a CuBox (basically a tiny ARM-based PC). It comes pre-installed with Ubuntu, and I did not (yet) want to fiddle with the OS itself. My aim is to have it automatically start a browser in fullscreen upon boot. Using Chromium with the "--kiosk" flag works perfectly in that regard. But now I have the problem that the screen turns off after a certain time. I managed to turn off the screen saver using:

        gconftool-2 -s /apps/gnome-screensaver/idle_activation_enabled --type=bool false

    and tried to turn off the power management using:

        gconftool-2 -s /apps/gnome-power-manager/ac_sleep_display --type=int 0

    and:

        gconftool-2 -s /apps/gnome-power-manager/timeout/sleep_display_ac --type=int 0

    Neither of the power-management commands worked. Theoretically I could hook up a mouse and keyboard and configure it manually, but I want to learn how to do it over the console. The box will eventually be reachable only via SSH, so I'd like to be able to troubleshoot it later. I don't quite know where to look. I searched the gconf tree using gconftool-2 -S for anything related to the terms power, idle and sleep, but did not find anything promising. Maybe it's not even gconf-related... Any ideas what else I could look for?
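
    Despite the title, an X session is running (the kiosk browser), so display power management can usually be driven from an SSH shell by pointing at that display; on a bare virtual console, setterm does the same job. A hedged sketch, assuming the kiosk session is on display :0:

        # from SSH, against the running X/kiosk session (may also need XAUTHORITY pointing at that session's auth file)
        DISPLAY=:0 xset s off     # disable the X screensaver
        DISPLAY=:0 xset -dpms     # disable display power management
        # on a plain virtual console instead:
        setterm -blank 0 -powerdown 0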

    Read the article

  • SSH Connection Error : No route to host

    - by dewbot
    There are three machines in this scenario:

        Desktop A : [email protected]
        Laptop A  : [email protected]
        Machine B : [email protected]

    All the machines run Ubuntu 11.04 (Desktop A is a 64-bit one) and have both openssh-server and openssh-client. Now when I try to connect Desktop A to Laptop A, or vice versa, with ssh [email protected], I get an error in both cases:

        port 22: No route to host

    I own both of those machines. If I try the same commands from my friend's machine, i.e. via Machine B, I can access both my laptop and my desktop. But if I try to access Machine B from my laptop or my desktop, I get:

        port 22: Connection timed out

    I even tried changing the ssh port number in the ssh_config file, but with no success. Note that Laptop A uses a WiFi connection while Desktop A uses an Ethernet connection, and Machine B is on an entirely different network. Laptop A and Desktop A share a router/nano-receiver provided to me by the ISP, so two machines are connected to one router and can be used at the same time. Here is my ifconfig output for both machines:

    Laptop:

        wlan0   Link encap:Ethernet  HWaddr X:X:X:X:00:bc
                inet addr:1.23.73.111  Bcast:1.23.95.255  Mask:255.255.224.0
                inet6 addr: fe80::219:e3ff:fe04:bc/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:108409 errors:0 dropped:0 overruns:0 frame:0
                TX packets:82523 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:44974080 (44.9 MB)  TX bytes:22973031 (22.9 MB)

    Desktop:

        eth0    Link encap:Ethernet  HWaddr X:X:X:X:c5:78
                inet addr:1.23.68.209  Bcast:1.23.95.255  Mask:255.255.224.0
                inet6 addr: fe80::227:eff:fe04:c578/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:10380 errors:0 dropped:0 overruns:0 frame:0
                TX packets:4509 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:1790366 (1.7 MB)  TX bytes:852877 (852.8 KB)
                Interrupt:43 Base address:0x2000

    Read the article

  • Having trouble with a workaround for booting from a USB stick, using GRUB and a minimal Linux kernel to load USB drivers

    - by s hanley
    I'm trying to boot from a USB stick. I formatted it to FAT32, and later to ext2, and installed DSL (Damn Small Linux) on it using UNetbootin, and later using the USB install guide on the DSL wiki (http://www.damnsmalllinux.org/wiki/index.php/Install_to_USB_From_within_Linux). The BIOS doesn't have a setting for booting from USB. GRUB doesn't "see" the USB drive when I use the root and find commands explained in http://www.damnsmalllinux.org/wiki/index.php/USB_Booting. This happens even when I set boot-from-floppy at the top of the boot order. However, my USB keyboard is recognised by the BIOS and by GRUB. How can it recognise the keyboard but not the USB drive? Also, the USB LED does flash even before GRUB starts up, so surely something must be happening USB-wise?

    I am now following an Ubuntu guide to booting from a USB stick, using a HDD-based minimal Linux kernel to supply the USB drivers, but I'm having difficulty adapting it to other OSes (Slax/DSL/Aptosid). I believe I have to alter the initrd.gz file to include USB drivers and then copy that file along with vmlinuz to a partition on my HDD. But what is the GRUB command for the kernel line supposed to look like? From the Ubuntu example it's:

        title USB FLASH DRIVE
        root (hd0,6)
        kernel /boot/usb-boot/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper noprompt cdrom-detect/try-usb=true persistent
        initrd /boot/usb-boot/initrd.lz
        boot

    Should mine just be:

        title USB FLASH DRIVE
        root (hd0,6)
        kernel /boot/usb-boot/vmlinuz cdrom-detect/try-usb=true
        initrd /boot/usb-boot/initrd.lz
        boot

    Read the article

  • Can't run utilities/.exe's that use the network from a [DFS] windows share on Windows 2008 servers. Can this be overcome?

    - by Jim Lawhon
    Under Windows Server 2008 I'm unable to run many utilities that use network resources. This works just fine under Windows Server 2003. For example:

        \\domain\dfs\tools$\bin\sendmail.exe ...
        \\domain\dfs\tools$\bin\psexec.exe ...
        echo %_metric% %_value% %_unixtime% | \\domain\dfs\bin\foo$\nc graphite.domain 2003 -w1

    Reproducing and maintaining this folder on a large number of servers/VMs is not desirable. Is there a way to allow Windows Server 2008 to run these tools? If so, can this be enabled via GPO or in a fashion that can be scripted during automated builds?

    Edit: The commands/tools do work just fine when run from local drives.

    Edit 2: Wget example:

        d:\scripts\helpers>z:\bin\wget http://www.google.com
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = z:/etc/wgetrc
        --2011-04-11 00:32:15--  http://www.google.com/
        Resolving www.google.com... failed: Host not found.
        z:\bin\wget: unable to resolve host address `www.google.com'

    wget can neither use DNS to resolve the IP, nor can it use HTTP if provided an IP directly.

    Edit 3: The problem seems to be tied to DFS/DFS shares. The tools run correctly from other normal Windows Server file shares. They also run correctly when run directly from the file servers behind the DFS. They only fail when we attempt to run them from the DFS UNC path or mapped drives.

    Read the article

  • How to kill ostensibly immortal process?

    - by DeeDee
    I had some huge file transfers operating on an NFS mount. The server on which the mount point resided was carelessly rebooted, and now the server from which these large transfers were initiated seems to be bogged down by them. If I run top, I see the following:

    [screenshot of top output omitted]

    The first thing I tried was to run kill with each of the -1, -2, -9 and -15 flags, with each of the process IDs shown above in turn. This allowed me to proceed, but didn't kill the processes. The next thing I attempted was to reboot the server, but neither reboot nor shutdown -r now worked. When I ran shutdown -r now the standard broadcast message was sent out, but the server did not reboot. I confirmed this by looking at the server uptime, which was 25 days. So now I'm a little stuck. I'm running these commands as root.

    EDIT: Here's another interesting tidbit: in top, I don't see any other processes using more than a fraction of a percent of memory or more than 5% of CPU.

    EDIT 2: output of /var/log/messages
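
    Processes in uninterruptible (D) sleep ignore signals by design; what can sometimes be done is to see what they are blocked on and take the stale NFS mount away from underneath them. A hedged sketch; the mount point path is a placeholder:

        ps -eo pid,stat,wchan:30,cmd | awk '$2 ~ /^D/'   # list D-state processes and the kernel call they are stuck in
        umount -f /mnt/nfs     # force-unmount the unreachable NFS export ("/mnt/nfs" is hypothetical)
        umount -l /mnt/nfs     # or lazily detach it so new accesses stop piling up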

    Read the article

  • Fresh install CentOS 6.4 64b with directadmin slowly consumes all memory and crashes

    - by Coen Ponsen
    Dear Server Fault community, this is my first question here. I'm new to server (mis)configuration, so please forgive me for asking something stupid :) I'm running DirectAdmin on CentOS 6.4 64-bit, on a virtual machine with 4GB memory and over 10000Gh. I migrated my websites because my former VPS couldn't keep up anymore. Only half of the websites from that 1GB machine have been migrated yet, so the migration is still in progress, and already my server crashes every day. The server performance up until that moment is perfect. The DirectAdmin log files show nothing out of the ordinary. Yesterday only the MySQL server crashed, but it has also crashed the entire machine before. The memory usage in DA seems to be normal:

        directadmin (pid 3923 22158 22159 22160 22161 22162)                                  8.75 MB
        dovecot     (pid 3851)                                                                47.8 MB
        exim        (pid 1350)                                                                1.29 MB
        httpd       (pid 21525 21528 21529 21530 21531 21532 21546 21571 21742 21743 21744)   490.4 MB
        mysqld      (pid 1299)                                                                287.8 MB
        named       (pid 3807)                                                                16.3 MB
        proftpd     (pid 1481)                                                                1.91 MB
        sshd        (pid 1173 21494)                                                          5.16 MB

    Restarting services immediately frees up memory, but slowly over time the memory usage increases (it takes about 24 hours to crash). The commands:

        # sync
        # echo 3 > /proc/sys/vm/drop_caches

    free all memory correctly. I could just create a cronjob, but that seems the wrong way around to me. I can't seem to pinpoint the cause. Any advice, references or tips are highly appreciated! Greetings, Coen

    Edit: free -m after drop_caches:

                     total       used       free     shared    buffers     cached
        Mem:          3830        735       3095          0          0         21
        -/+ buffers/cache:        712       3117
        Swap:          991          0        991

    I'll post another one this evening.

    Read the article

  • need to bring back win 7

    - by user290513
    I like making music and playing games, and occasionally do some Photoshop. I had a Windows 8 computer, but my mouse pointer always got stuck, so to try out something new I installed Ubuntu. Here is how I installed it:

    1. Went to the advanced startup options.
    2. Clicked on "use a device" after plugging in my bootable USB with Ubuntu.
    3. Replaced my Windows 8 and installed Ubuntu 14.04 LTS.

    I hope I did it correctly. After a few months I couldn't really find good audio production software (not LMMS, because I use Stagelight), nor anything with a UI familiar from Photoshop. So I decided to bring back Windows, but because of the bad experience with 8, I thought about bringing back Win 7 instead.

    So I used an app named WinUSB to make my bootable USB drive, after formatting it to NTFS in GParted. But when I go to my GRUB menu, my USB doesn't show up, my PC being a UEFI device. I don't know how to get to the BIOS of my device. Can somebody tell me how to install Windows 7 completely and delete Ubuntu, or at least give me a link to a tutorial? I have a netbook: an Acer Aspire One 725. I'm fine with using commands in a terminal. One more thing: my laptop doesn't have a CD drive or reader, so I can't put a CD inside.

    Read the article

  • Is there a local yubnub.org replacement?

    - by Justin Keogh
    I use YubNub very often... every Google search I do is just (in Firefox) Ctrl-T, then (now in the URL bar) "y g searchterms" [Enter]. "y" in this case is a search keyword I added by right-clicking in the yubnub.org command box. It's really fast, and I just do it automatically now... but the problem is that I am now stuck with whatever the YubNub command I am so used to does. I can't change it... for example, what if I don't want to use Google, but I still want to use the "g" command to search? Or say I want to use Google's HTTPS search, etc. I suppose this would be kinda trivial to implement locally... but I would hate to re-invent the code if it's already done and in use... ideas? Also, a local yubnub.org replacement would save me the DNS lookup and the traffic to yubnub.org. I don't expect to be able to import all commands from yubnub.org, but that would be cool if possible.

    Read the article

  • Route web traffic through a separate interface

    - by tkane
    I'd like to route web traffic through the wlan0 interface and the rest through eth1. Can you please help me with the iptables commands to achieve this? Below is my configuration. Thank you :)

    Edit: This is about a desktop configuration, not a web server setup. Basically I want to use one of my connections to browse the web and the other one for everything else.

    ifconfig:

        eth1    Link encap:Ethernet  HWaddr 00:1d:09:59:80:70
                inet addr:192.168.2.164  Bcast:192.168.2.255  Mask:255.255.255.0
                inet6 addr: fe80::21d:9ff:fe59:8070/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:33 errors:0 dropped:0 overruns:0 frame:0
                TX packets:41 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:4771 (4.7 KB)  TX bytes:7081 (7.0 KB)
                Interrupt:17

        wlan0   Link encap:Ethernet  HWaddr 00:1c:bf:90:8a:6d
                inet addr:192.168.1.70  Bcast:192.168.1.255  Mask:255.255.255.0
                inet6 addr: fe80::21c:bfff:fe90:8a6d/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:77 errors:0 dropped:0 overruns:0 frame:0
                TX packets:102 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:14256 (14.2 KB)  TX bytes:14764 (14.7 KB)

    route:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.2.0     *               255.255.255.0   U     1      0        0 eth1
        192.168.1.0     *               255.255.255.0   U     2      0        0 wlan0
        link-local      *               255.255.0.0     U     1000   0        0 wlan0
        default         adsl            0.0.0.0         UG    0      0        0 eth1
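
    iptables alone cannot choose an outgoing interface; the usual recipe is to mark web traffic with iptables and let a policy-routing rule send marked packets out via the wlan0 gateway. A hedged sketch, assuming the wlan0 gateway is 192.168.1.1 (it is not shown in the route table above) and that web traffic means ports 80 and 443:

        # mark locally generated HTTP/HTTPS traffic
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1
        # marked packets consult routing table 100, whose default route goes out wlan0
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.1.1 dev wlan0 table 100
        ip route flush cache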

    Read the article

  • Servers/Websites Keep Going Down

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers, server commands, etc. I've recently had a problem with all of the sites on our servers going down all at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:

    1. I have "strange logs" in my Apache webserver log, error: sh: fetch: command not found
    2. My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
    3. The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
    4. I have some WordPress sites with plugins getting errors like:

        PHP Warning: pack(): Type H: illegal hex digit G in...
        PHP Fatal error: Cannot use object of type stdClass as array in...
        PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
        PHP Fatal error: Call to undefined function file_exists() in...
        PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at wits' end and have no idea what to do now. If anyone could give me some advice or point me in the right direction I would greatly appreciate it! Thanks!

    Oh, and here are the specs for my server:

        RAM: 2048MB
        CPU Shares: 40
        Primary Disk: 50GB
        Data Transfer: 75GB
        Port Speed: 5Mbps
        Type: Linux

    Read the article

  • What does the parameter -MailboxCredential mean?

    - by cotablise
    Hello, I am writing regarding the Exchange PowerShell commands. When I want to use the following cmdlets, I have to supply the -MailboxCredential parameter:

        Test-OwaConnectivity
        Test-OutlookWebServices
        Test-ImapConnectivity
        Test-PopConnectivity

    The official Microsoft site says: "The MailboxCredential parameter specifies the mailbox credential for a single URL test." I am not sure why this parameter is needed... I entered incorrect credentials, yet the command finished successfully... Could you tell me why this parameter is needed?

    Example with wrong/incorrect credentials:

        [PS] C:\>Test-WebServicesConnectivity -ClientAccessServer EXhub1 -MailboxCredential (Get-Credential blablabla)

        CasServer LocalSite     Scenario   Result   Latency(MS) Error
        --------- ---------     --------   ------   ----------- -----
        EXhub1    Default-Fi... GetFolder  Failure              [System.Net.WebExcept...

    Without the parameter:

        [PS] C:\>Test-WebServicesConnectivity -ClientAccessServer EXhub1
        WARNING: Test user 'extest_91ef41d34eef4' isn't accessible, so this cmdlet won't be able to test Client Access
        server connectivity. Could not find or sign in with user ********\extest_91ef41d34eef4. If this task is being run
        without credentials, sign in as a Domain Administrator, and then run Scripts\new-TestCasConnectivityUser.ps1 to
        verify that the user exists on Mailbox server EXHUB1.******
            + CategoryInfo          : ObjectNotFound: (:) [Test-WebServicesConnectivity], CasHealthCouldN...edInfoException
            + FullyQualifiedErrorId : FB9A14B6,Microsoft.Exchange.Monitoring.TestWebServicesConnectivity
        WARNING: No Client Access servers were tested.

    Thank you in advance.

    Read the article

  • How to use a common library of environment variables among different languages?

    - by JDS
    We have three main languages with which we perform system tasks: Bash, Ruby, and PHP, and Perl. Four. Four main languages. We use managed environment variables to provide authorization info that automated scripts need, for example a MySQL user account and password. We'd like to use one single managed file to maintain these variables. In some instances, for example in cron, these environment variables are not available. They are made available to CLI scripts because we source the env file in everyone's profile, but something like cron doesn't do that. On the CLI, when the env file is sourced, any given script can access those variables: Bash has them directly, PHP in $_ENV, Ruby in ENV, etc. We can't source the file into non-Bash scripts, because most languages implement shell commands by running them in a subshell. We considered parsing the Bash, converting it to the script's language, and running the equivalent of "exec(parsed_output)" on the resulting strings. What is a good solution for providing managed environment vars to scripts running in cron, or similar?
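
    One common pattern, sketched here with a hypothetical file path and variable names: keep the shared file to plain KEY=VALUE lines (no Bash syntax), so Bash can source it, every other language can parse it as simple text, and cron jobs can load it explicitly on their command line.

        # /etc/myapp/env.conf  (hypothetical path) -- plain KEY=VALUE lines only, e.g.:
        #   MYSQL_USER=automation
        #   MYSQL_PASS=secret

        # in Bash profiles and wrapper scripts: export everything the file defines
        set -a
        . /etc/myapp/env.conf
        set +a

        # crontab entry: load the file before running the script (path and script are placeholders)
        # * * * * *  . /etc/myapp/env.conf; /usr/local/bin/nightly-job.sh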

    Read the article
