Search Results

Search found 6433 results on 258 pages for 'trouble shooting'.


  • pfSense 2.1 OpenVPN client not using tunnelled interface

    - by Brian M. Hunt
    I'm having some trouble getting OpenVPN working on my pfSense box. The issue is quite strange to me. When I have the OpenVPN turned on, only my router is able to connect to the Internet. From the router I can use ping, links, etc., and connections work exactly as expected - through the VPN, with the IP address assigned by my VPN provider (Proxy.sh, incidentally). However, none of the clients on the local network can connect to the Internet. I get timeouts when using ping or a web browser. I can ping my router, and the IP address of the gateway. When I switch the default gateway from the VPN to my ISP's gateway, all works exactly as expected.

    Here is the routing table (netstat -r) when in VPN mode, and a key for it:

        IPv4
        Destination          Gateway        Flags   Refs  Use   Mtu    Netif   Expire
        0.0.0.0/1            10.XX.X.53     UGS     0     122   1500   ovpnc1  =
        default              10.XX.X.53     UGS     0     235   1500   ovpnc1
        8.8.8.8              10.XX.X.53     UGHS    0     82    1500   ovpnc1
        10.XX.X.1/32         10.11.0.53     UGS     0     0     1500   ovpnc1
        10.XX.X.53           link#12        UH      0     0     1500   ovpnc1
        10.XX.X.54           link#12        UHS     0     0     16384  lo0
        ZZ.XX.XXX.0/20       link#1         U       0     83    1500   re0
        ZZ.XX.XXX.XXX        link#1         UHS     0     0     16384  lo0
        127.0.0.1            link#9         UH      0     12    16384  lo0
        128.0.0.0/1          10.11.0.53     UGS     0     123   1500   ovpnc1
        192.168.1.0/24       link#11        U       0     1434  1500   ue0
        192.168.1.1          link#11        UHS     0     0     16384  lo0
        YYY.YYY.YYY.YYY/32   ZZ.XX.XXX.1    UGS     0     249   1500   re0

    IP addresses:
        10.XX.X.53/54 - My DHCP-assigned IP address/pair from the VPN provider
        ZZ.XX.XXX.XXX - My external IP assigned by my ISP
        YYY.YYY.YYY.YYY - The external IP assigned by the VPN provider

    Interfaces:
        ovpnc1 - My VPN client interface
        re0 - My LAN interface
        ue0 - My WAN interface

    This looks essentially like what I would expect it to be. The default route is through the VPN provider. The VPN address is routed through the ISP-assigned IP address. I am not sure what would be wrong here. So figuring this was a firewall issue, I basically tried enabling all in/out traffic. This did not seem to remedy the problem. Also figuring it could possibly be some client networking issue, I restarted the clients on the LAN. This did not help. I also ran route flush and reset the routes manually. So I am a bit stumped, and would be very grateful for any thoughts on what the problem might be.
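    One thing worth checking before anything else is whether outbound NAT is actually being applied on the VPN interface - LAN clients whose packets leave ovpnc1 with a 192.168.1.x source address will never see replies. A hedged way to verify this from the pfSense shell (interface and subnet names are taken from the question above):

        # List the NAT rules pf has loaded and look for an outbound rule on the VPN interface
        pfctl -s nat | grep ovpnc1

        # While a LAN client pings 8.8.8.8, watch what actually leaves the tunnel interface;
        # seeing 192.168.1.x source addresses here means outbound NAT is missing for ovpnc1
        tcpdump -ni ovpnc1 icmp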

    Read the article

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this and I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and onto port 7000 on x.x.x.218. Here is some ASCII art:

        |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>-----|Squid on x.x.x.218|-----|www|
          3128            6999                                                            7000                   80

    When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth). This is because it's a satellite link. Here's the ssh tunnel I've been using:

        ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N

    The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel? Am I going about this the wrong way? Should I create an ssh tunnel on x.x.x.218? I use iptables to stop squid on x.x.x.224 from reading port 80, but to feed from port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance.

    Regarding Eduardo Ivanec's question, here is a tcpdump -i any port 7000 -nn dump from x.x.x.218:

        14:42:15.386462 IP 172.16.1.224.40006 > 172.16.1.218.7000: Flags [S], seq 2804513708, win 14600, options [mss 1460,sackOK,TS val 86702647 ecr 0,nop,wscale 4], length 0
        14:42:15.386690 IP 172.16.1.218.7000 > 172.16.1.224.40006: Flags [R.], seq 0, ack 2804513709, win 0, length 0

    Update 2: When I run the second command, I get the following error in my browser:

        ERROR
        The requested URL could not be retrieved

        The following error was encountered while trying to retrieve the URL: http://109.123.109.205/index.php

        Zero Sized Reply

        Squid did not receive any data for this request.

        Your cache administrator is webmaster.
        Generated Fri, 01 Jul 2011 16:06:06 GMT by remote-site (squid/2.7.STABLE9)

    remote-site is 172.16.1.224. When I do a tcpdump -i any port 7000 -nn I get the following:

        root@remote-site:~# tcpdump -i any port 7000 -nn
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
        channel 2: open failed: connect failed: Connection refused
        (the same line repeated eleven times)
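    For what it's worth, one common way to wire these pieces together is to create the tunnel on x.x.x.224 itself, so that a local port there leads to Squid on x.x.x.218, and then point the local Squid at that port as a parent cache. The sketch below is only an illustration - the user name is a placeholder, and the squid.conf lines are the standard cache_peer/never_direct directives rather than a tested config:

        # On x.x.x.224: local port 6999 now leads, over ssh, to Squid's port 7000 on x.x.x.218
        ssh -f -N -C -o CompressionLevel=9 -L 6999:127.0.0.1:7000 user@x.x.x.218

        # In squid.conf on x.x.x.224: relay requests to the tunnel instead of fetching directly
        #   cache_peer 127.0.0.1 parent 6999 0 no-query default
        #   never_direct allow all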

    Read the article

  • Weird random application hang problem

    - by haridsv
    I am trying to understand an application hang problem that started up lately on my windows xp system. The system runs fine for days (sometimes) without ever shutting down or putting it to sleep, but the problem first shows up as one of the apps hanging. The application's UI stops responding or one or more background threads hang, so even though the GUI is responding, it is not doing anything (e.g., in VirtualDub's case, the UI responds fine, but the job doesn't progress and I won't even be able to abort it). The weirdness part comes from the fact that if I try to kill such an app, the program that is used to kill it goes into the same mode (i.e, that hangs instead of the original). E.g., if I use Process Explorer to kill it, the original program exits, but procexp now hangs. If I use another instance of procexp to kill the one that is hanging, this repeats, so there is always at least one program hanging in that state. This is not specific to procexp, I tried the native task manager and even the "End Process" dialog from windows explorer that shows up when you try to close a non-responsive GUI (in this case, the explorer itself hangs). The only program that didn't hang after the kill, is the command line taskkill. However, in this case, explorer hangs instead of taskkill. Also, once this problem starts manifesting, it soon ends up freezing the whole system to the extent that even a clean shutdown is not possible, so I have learned to reboot as soon as I notice this problem, however this is very inconvenient, as I often have encoding batch jobs going on which can't continue the job after the restart. The longer I leave the system running after seeing this problem, the more applications get into this state. I have tried to do a repair install but that didn't make any difference. I also uninstalled some of the newer installs, but again no difference. I tried to search online, but got inundating results for generic hang and crash related problems. Though I couldn't notice any pattern, it seems as though the problem is more frequent if I have some video encoding going on at that time. I had the system running for days when I only do browsing and internet audio/video chat before I decide to start encoding something and the problem starts to show up. I am not too sure if it is the encoding program that first hangs, though I almost always noticed that too hanging (like the VirtualDub stopping to make progress). I also had to reboot 3 times on one day when I was heavily experimenting with encoding. I would appreciate any help in narrowing down this problem and save me the trouble of reinstalling. I don't especially want to loose my gotd installs.

    Read the article

  • Profiles and using the local profile for a domain user

    - by Harry
    I’m having some trouble with profiles and would like to reach out for some help. I’ve tried to do some research to help myself along, but I’m not making much progress on my own. I’ve pretty much taken over the sys admin duties for my small lab, I don’t have much experience to justify it besides I’m the only with the time and dedication to go at it (The environment was in a state of disrepair). My network and domain I look over are extremely small by most standards, about 10 users at a time. They are pretty intensive activity on the network, and we do work with fairly large files. None of the network is online, which is nice at the moment because it allows me not to have another headache. On to my profile problem, I have set up roaming profiles for the users in the network. Now after a little research, I think I will be switching this to a hybrid of folder redirection and roaming profiles as this seems to best practice. I also don’t want the users having to wait for a long time if they have a bloated profile. Now I’ve finally got a build working using MDT. We have Mac Pros, and it wasn’t fun getting everything to play nice. The way I did this was by setting up a reference computer and installing all the software and tools that each user would need and editing the settings preferences to how we would need them. I think used MDT to do a sys prep and capture to create the image of my reference computer. Using the reference image I can push out my images to the rest of the desktops in my environment. The issue I’m having is when we join the computer to domain. The user can login and operate fine on the computer, but I’d like a more. When the user is logged on with their domain user name they lose a lot of the icons I had on my reference image, as well as the desktop background and some other miscellaneous settings. I would love to have the user log on using their domain user name and see the icons and desktop environment as I had it setup on the reference computer. I’m not sure if it is possible, or something simple that I’m missing, but any help would be greatly appreciated!

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn. Manual file transfers (like FTP) is wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of and I am hoping that someone can submit a more elegant solutions. 1) File Based The system loads a folder structure based on the URL requested. /site.com /site.fakeTLD /lib index.php For example, if the url is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To setup this I edit my hosts file and add site.fakeTLD to point to my own computer (127.0.0.1/localhost) and then create a vhost in apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack. So someone loading site.com could set the host to site.fakeTLD and then the system would load my development config files instead of production. 2) Config Based The config files contain on section for development - and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used. $use = 'production'; //'development'; This leaves the repo open to human error should one of the developers forget to enable the right setting. 3) File System Check Based All the development machines have an extra empty file called "development.txt" or something. Each time the system loads it checks for this file - if found then it knows it is in development mode - if missing then it knows it is in production mode. Since the file is NEVER ADDED to the repo then it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right and causes a slight slow down since all filesystem checks are slow.
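    As a small illustration of option 3, the marker file only ever exists on development machines and is explicitly ignored by version control, so nothing has to be edited before a push (the file and variable names here are just examples):

        # On each development machine only - never committed:
        touch development.txt
        echo "development.txt" >> .gitignore    # or the svn:ignore equivalent

        # The application then tests for the file once at bootstrap, e.g. in shell terms:
        if [ -f development.txt ]; then APP_ENV=development; else APP_ENV=production; fi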

    Read the article

  • Secure ldap problem

    - by neverland
    Hi there,

    I have tried to configure my openldap to have a secure connection by using openssl on Debian 5, but I ran into trouble with the command below.

        ldap:/etc/ldap# slapd -h 'ldap:// ldaps://' -d1
        >>> slap_listener(ldaps://)
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_read(15): unable to get TLS client DN, error=49 id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        ber_get_next
        ber_get_next on fd 15 failed errno=0 (Success)
        connection_closing: readying conn=7 sd=15 for close
        connection_close: conn=7 sd=15

    Then I searched for "unable to get TLS client DN, error=49 id=7" but it seems nowhere has a good solution to this yet. Please help. Thanks.

    Well, I tried to fix something to get it to work, but now I get this:

        ldap:~# slapd -d 256 -f /etc/openldap/slapd.conf
        @(#) $OpenLDAP: slapd 2.4.11 (Nov 26 2009 09:17:06) $
            root@SD6-Casa:/tmp/buildd/openldap-2.4.11/debian/build/servers/slapd
        could not stat config file "/etc/openldap/slapd.conf": No such file or directory (2)
        slapd stopped.
        connections_destroy: nothing to destroy.

    What should I do now? Here is the log:

        ldap:~# /etc/init.d/slapd start
        Starting OpenLDAP: slapd - failed.
        The operation failed but no output was produced. For hints on what went wrong
        please refer to the system's logfiles (e.g. /var/log/syslog) or try running the
        daemon in Debug mode like via "slapd -d 16383" (warning: this will create copious output).
        Below, you can find the command line options used by this script to run slapd.
        Do not forget to specify those options if you want to look to debugging output:
          slapd -h 'ldaps:///' -g openldap -u openldap -f /etc/ldap/slapd.conf

        ldap:~# tail /var/log/messages
        Feb 8 16:53:27 ldap kernel: [  123.582757] intel8x0_measure_ac97_clock: measured 57614 usecs
        Feb 8 16:53:27 ldap kernel: [  123.582801] intel8x0: measured clock 172041 rejected
        Feb 8 16:53:27 ldap kernel: [  123.582825] intel8x0: clocking to 48000
        Feb 8 16:53:27 ldap kernel: [  131.469687] Adding 240932k swap on /dev/hda5. Priority:-1 extents:1 across:240932k
        Feb 8 16:53:27 ldap kernel: [  133.432131] EXT3 FS on hda1, internal journal
        Feb 8 16:53:27 ldap kernel: [  135.478218] loop: module loaded
        Feb 8 16:53:27 ldap kernel: [  141.348104] eth0: link up, 100Mbps, full-duplex
        Feb 8 16:53:27 ldap rsyslogd: [origin software="rsyslogd" swVersion="3.18.6" x-pid="1705" x-info="http://www.rsyslog.com"] restart
        Feb 8 16:53:34 ldap kernel: [  159.217171] NET: Registered protocol family 10
        Feb 8 16:53:34 ldap kernel: [  159.220083] lo: Disabled Privacy Extensions
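    Two checks that often help separate a config-file problem from a TLS problem (paths follow the Debian /etc/ldap layout that the init script output above is already using; adjust them if your files really live elsewhere):

        # Does the config file slapd is pointed at actually exist and parse?
        slaptest -f /etc/ldap/slapd.conf

        # Exercise the TLS layer on its own, independent of LDAP
        openssl s_client -connect localhost:636 -showcerts

        # Then try a simple anonymous search over ldaps
        ldapsearch -x -H ldaps://localhost -b '' -s base '(objectclass=*)'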

    Read the article

  • Gateway GT5220 Boot/POST Failure

    - by John Rudy
    I have a Gateway GT5220 I'm troubleshooting. It is, in fact, the machine I just gave my father for his birthday a couple months ago. (Prior to that, it was my home PC. My home PC is now the MacBook on which I'm writing this.) Before going any further, I suspect that the answer will be, "It's worse than that, it's dead, Jim, it's dead, Jim, it's dead, Jim." At least, mobo and/or CPU. The initial symptoms were as follows: Turn on power All fans fire up (thus making it so I can't hear if the hard drive is spinning or not, nor are my hands sensitive enough anymore to feel it) No LEDs remained lit on the front panel. (Initially, the hard drive indicator flashed briefly.) No beep, no video, no nothing. Following some advice I found here, I tried to "drain the stored power." After following those steps, the new symptoms were: Turn on power All fans fire up The front panel LEDs remained lit! After about 20, maybe 30 seconds, we had video! Sort of. We got to the Gateway splash/POST screen, which appeared thoroughly corrupted. How corrupted? Well, I imagine it's what a POST screen would look like after reading the wrong passage out of the Necronomicon: It stayed there. I gave it at least 5, maybe 6 minutes, and it didn't move. So I shut her down, started her up again, and now (this is where we currently stand, symptomatically) we have this: Turn on power All fans fire up The front panel LEDs remain lit No video, no beep, no nothing. I'm a software guy; haven't done real hardware troubleshooting in years. My gut tells me that the mobo and/or CPU is fried, and unfortunately my gut didn't get to be as big as it is being wrong all the time. :( In addition to the link above, I have read all of the following (trying to save you some LMGTFY trouble): Gateway Support POST Error Messages and Handling About a zillion (useless) POST beep code sites A kioskea.net post indicating that most likely we're at what I consider "total loss" (mobo and/or CPU) My questions: Are there any conditions other than mobo/CPU that could cause symptoms like these? Is it worth my time to try the next hardware troubleshooting step?(IE, remove all non-critical hardware from the machine, try to boot, systematically replace one by one until we find the failing component) Which mobos will fit in the Gateway GT5220 case (with rear ports correctly aligned)? (Why this is not a dupe: I wouldn't have posted this question if it hadn't been for the funkadelic possessed video display on the one occasion we got video out. I think that justified this not being an exact dupe. Of course, if the community overrules, I will understand.)

    Read the article

  • What are some of the best wireless routers for a price-conscious home power-user?

    - by Alain
    I'm extremely dissatisfied with the 'popular' choice for routers in homes and small offices. They are expensive (upwards of 60$), lack a great deal of useful configuration options, and seem to need to be restarted quite often. (Linksys comes to mind). I've been on the market for a good router lately, and slowly collecting a set of requirements I feel good routers should meet. Maximum number of TCP/IP connections. - This isn't something I see any routers advertise, but in terms of supporting torrent applications, I've been screwed by routers that support less than 20 here. From what I understand a fairly standard number is 200, but there are not so expensive routers that support thousands. Router configuration menu - Most have standard menu's that let you set up basic things like your wireless network encryption settings, uPnP, and maybe even DMZ (demilitarized zones). An absolute requirement for me, however, are routers with good enough firmware to support: Explicit Port forwarding Assigning static local ips to specific mac addresses, or at least Port forwarding by MAC address Port, IP and MAC filtering Dynamic DNS service for home users who want to set up a server but have a dynamic IP Traffic shaping (ideally) - giving priority to packets from certain machines or over certain ports. Strong wireless signal - If getting a reliable signal requires me to be so close to the router that I can connect an Ethernet cable, it's not good enough. As many Ethernet ports as possible. - Because I want to be able to switch from console gaming to PC gaming without visiting my router. So far, the best thing I've stumbled upon (in the bargain bin at staples) was a 20$ retail plus router. It was meant to be the cheapest alternative until I could find something better to purchase online, but I was actually blown away by the firmware capabilities. It supports defining reserved bandwidth for certain network traffic, dynamic DNS, reserving local IPs for specific MAC addresses, etc. At 2 am when my roommate is killing our Internet with their torrents, I can limit their bandwidth without outright blacklisting them. I have, however, met serious limitations when it comes to network traffic between local machines. It claims a 300Mbps connection, but I have trouble streaming videos from my PC to my console or other laptops wirelessly. It has a meltdown and needs to be reset once in a while (no more than a couple times a month), and it's got a 200 connection limit. There 4 Ethernet ports in the back but I'm pretty sure the first doesn't work. So some great answers to this question would be: Any metrics you use to compare routers, and requirements you have for new candidates. The best routers you've found for supporting home servers, file management systems, high volume torrent traffic, good price/feature ratio, etc. Good configuration advice (aside from 'use Ethernet whenever possible') Thanks for your feedback and experiences!

    Read the article

  • Varnish + Plesk : vhost broken

    - by Raphaël
    I have an e-commerce site with 300,000 products and 20,000 categories. It is slow and currently in production. I decided to install Varnish to speed it up. The trouble is that during installation, I got a Guru Meditation. Since the site is in production, I am not allowed to leave this error up for more than a second, thinking I have made an enormous stupidity. I followed the following tutorial: http://www.euperia.com/linux/setting-up-varnish-with-apache-tutorial

    I'm sure I followed it all without error. I suspect that there may be a specific configuration needed with plesk. Has anyone already installed Varnish on a ubuntu 11.04 server with plesk 10? Does anyone have a better resource? I know it is "very vague" as an error, but maybe some of you have had this problem.

    Edit 24/11/2011: I continued to work on Varnish + Plesk ... but it still does not work.

    1) I changed the port for apache in plesk General

        # mysql -uadmin -p`cat /etc/psa/.psa.shadow` -D psa -e'replace into misc (param, val) values ("http_port", 8008)'

    1.1) I rebuilt the server conf

        # /usr/local/psa/admin/bin/httpdmng --reconfigure-all

    2) I changed the apache conf files (if those were not taking full plesk top)

        vim /etc/apache2/ports.conf
        NameVirtualHost *:8008
        Listen 8008

    2.1) I do the same with /etc/apache2/sites-enables/000-default

    3) I changed the port of my vhost (a single server)

        vim /var/www/vhosts/MYDOMAIN.COM/conf/XXXXXXXXX.http.include

    Replace the port 80 by the one I want, then rebuild the vhost conf

        /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=<domain_name>

    with / without www (see my issue in serverfault: Edit vhost port in plesk 10.3).

    4) I installed varnish by following this tutorial: http://www.euperia.com/linux/setting-up-varnish-with-apache-tutorial

    5) I restarted apache2 + varnish

        service apache2 restart
        service varnish restart

    When I go to my site, I come across a page of apache:

        It works!
        This is the default web page for this server.
        The web server software is running but no content has been added, yet.

    Can somebody help me? This means that my vhost does not point to the right place. Why? What to do? How?
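    The default "It works!" page usually means the request reached an Apache vhost other than the intended one, so before changing more Plesk templates it can help to confirm which daemon owns which port and what each layer answers (the 80/8008 split follows the tutorial; the Host header below is a placeholder):

        # Which process is listening on which port?
        netstat -tlnp | egrep ':(80|8008) '

        # Ask Apache and Varnish directly and compare the responses
        curl -sI -H 'Host: MYDOMAIN.COM' http://127.0.0.1:8008/ | head
        curl -sI -H 'Host: MYDOMAIN.COM' http://127.0.0.1:80/ | head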

    Read the article

  • Issues connection to Ubuntu via PuTTy

    - by user1787262
    I'm not sure this is the appropriate stack exchange site to post this question on. If not, please flag this for migration. I am trying to use PuTTy to ssh into my ubuntu machine, which is wirelessly connected to the same network. I originally ran ifconfig to get my ubuntu machine's private network IP address. I then verified that ssh was running; I even ssh'd into my school network and then into the ubuntu machine itself. No problems yet.

    On my windows 8 machine I ran ipconfig to get my private network IPv4 address. I then pinged my ubuntu machine's IP and 100% of packets were received. I figured, "OK we are ready to use PuTTy to connect to my Ubuntu Machine". Keep in mind this was my first time using PuTTy. I tried entering the IP of my ubuntu machine in the PuTTy Config GUI but I got a connection timeout. At this moment I don't really know what's going on; SSH is running on port 22 of my Ubuntu machine and I can ping the machine, so why is it not connecting? (I tried [username]@ip too.)

    So I went on my Ubuntu machine and ran nmap -sP 192.168.0.1/24 and found that my windows machine's IP did not show up - the host is down. I'm at a loss in something I am not very familiar with. Would anyone be able to help me or direct me to some resources that would help trouble shoot my problem? Thank you

    EDIT (ADDITION):

        tyler@tyler-Aspire-5250:~$ nmap -v 192.168.0.123

        Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-06 01:56 MDT
        Initiating Ping Scan at 01:56
        Scanning 192.168.0.123 [2 ports]
        Completed Ping Scan at 01:56, 3.00s elapsed (1 total hosts)
        Nmap scan report for 192.168.0.123 [host down]
        Read data files from: /usr/bin/../share/nmap
        Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
        Nmap done: 1 IP address (0 hosts up) scanned in 3.14 seconds

        tyler@tyler-Aspire-5250:~$ nmap -Pn 192.168.0.123

        Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-06 01:56 MDT
        Nmap scan report for 192.168.0.123
        Host is up (0.022s latency).
        Not shown: 998 filtered ports
        PORT     STATE SERVICE
        2869/tcp open  icslap
        5357/tcp open  wsdapi
        Nmap done: 1 IP address (1 host up) scanned in 72.51 seconds
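    One way to see which side is dropping the connection is to watch port 22 on the Ubuntu machine while PuTTY retries (netstat -tlnp works equally well if ss is not available):

        # Is sshd listening on all interfaces (0.0.0.0:22) rather than only on localhost?
        sudo ss -tlnp | grep ':22'

        # Watch for the Windows machine's connection attempts while PuTTY connects:
        # no packets at all points at the router (e.g. wireless client isolation),
        # packets arriving with no reply points at the Ubuntu side
        sudo tcpdump -ni any tcp port 22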

    Read the article

  • Use launchctl to fire an AppleScript script periodically

    - by Daktari
    I have written an AppleScript that lets me back up a particular file. The script runs fine inside AppleScript Editor: it does what it's supposed to do perfectly. So far so good. Now I'd like to run this script at timed intervals. So I use launchctl & .plist to make this happen. That's where the trouble starts.

    - the script is loaded at set interval by launchctl
    - the AppleScript Editor (when open) brings its window (with that script) to the foreground but no code is executed
    - when AppleScript Editor is not running, nothing seems to be happening at all

    Any ideas as to why this is not working?

    After editing (as per Daniel Beck's suggestions) my plist now looks like:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>KeepAlive</key>
            <false/>
            <key>Label</key>
            <string>com.opera.autosave</string>
            <key>ProgramArguments</key>
            <array>
                <string>osascript</string>
                <string>/Users/user_name/Library/Scripts/opera_autosave_bak.scpt</string>
            </array>
            <key>StartInterval</key>
            <integer>30</integer>
        </dict>
        </plist>

    and the AppleScript I'm trying to run:

        on appIsRunning(appName)
            tell application "System Events" to (name of processes) contains appName
        end appIsRunning

        --only run this script when Opera is running
        if appIsRunning("Opera") then
            set base_path to "user_name:Library:Preferences:Opera Preferences:sessions:"
            set autosave_file to "test.txt"
            set autosave_file_old to "test_old.txt"
            set autosave_file_older to "test_older.txt"
            set autosave_file_oldest to "test_oldest.txt"
            set autosave_path to base_path & autosave_file
            set autosave_path_old to base_path & autosave_file_old
            set autosave_path_older to base_path & autosave_file_older
            set autosave_path_oldest to base_path & autosave_file_oldest
            set copied_file to "test copy.txt"
            set copied_path to base_path & copied_file

            tell application "Finder"
                duplicate file autosave_path
                delete file autosave_path_oldest
                set name of file autosave_path_older to autosave_file_oldest
                set name of file autosave_path_old to autosave_file_older
                set name of file copied_path to autosave_file_old
            end tell
        end if
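    Two things that commonly bite with launchd-run AppleScripts are relying on launchd's limited environment (using the full path /usr/bin/osascript in ProgramArguments removes any dependence on it) and forgetting to reload the job after editing the plist. A hedged checklist, assuming the plist is installed as a per-user agent in ~/Library/LaunchAgents:

        # Reload the agent after every edit of the plist, then kick it off once by hand
        launchctl unload ~/Library/LaunchAgents/com.opera.autosave.plist
        launchctl load ~/Library/LaunchAgents/com.opera.autosave.plist
        launchctl start com.opera.autosave

        # Watch for errors from the job while it fires
        tail -f /var/log/system.log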

    Read the article

  • samba 3.5 "force user" doesn't seem to be sticking

    - by myCubeIsMyCell
    After installing a new OS with newer version of samba, I'm having trouble accessing my shares. I can browse to the specific share, but only to the top level. As best I can tell from the logs, it seems the "force user" in the samba config isn't sticking beyond the initial connection. Details below.

    I installed a new version of CentOS on my storage server. My old CentOS (4?) install had samba version 3.0.33; new CentOS is using 3.5.10. No domain/AD involved ... just home workgroup. No real security... just some shares hidden & some defined as read-only. Here's my config:

        [global]
        workgroup = WORKGROUP
        server string = Samba Server Version %v
        netbios name = luna
        security = share
        # logs split per machine
        log file = /var/log/samba/log.%m
        log level = 2
        # max 50KB per log file, then rotate
        max log size = 50
        winbind use default domain = Yes

        [strge]
        comment = please
        path = /storage
        browseable = yes
        read only = no
        force user = windowsguest
        force group = users
        guest ok = yes

    So... the problem I'm running into is that the 'force user' only seems to hold for the initial connection & I see all the top level folders fine. When I drill into a folder I get access denied - which appears to be due to my windows user info being sent (trys to authenticate xuser - a non-existant user to samba, so maps to nobody & fails). Here's the smb error msg:

        [2012/11/29 14:30:27.326195,  2] auth/auth.c:314(check_ntlm_password)
          check_ntlm_password:  Authentication for user [xuser] -> [xuser] FAILED with error NT_STATUS_NO_SUCH_USER
        [2012/11/29 14:30:27.326251,  2] auth/auth.c:314(check_ntlm_password)
          check_ntlm_password:  Authentication for user [nobody] -> [nobody] FAILED with error NT_STATUS_NO_SUCH_USER

    Most of the top level directories are 755, some 777. Either way, can not access them. If I do a chown -R windowsguest.users ... no change... but if I do a chmod -R to 777 or 755 they become browsable... but still can't create files (even for 777 ones). Not sure what role it plays if any... but had to recreate the user windowsguest under the new os install, uid & gid match old user. Seems the main issue as far as I can tell is that samba isn't maintaining the 'force user' - but I could be wildly off base. Client OS is win7 pro x64. Thanks for any suggestions or advice!
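    Before digging deeper into force user, it may be worth confirming what smbd has actually loaded for the share and what the Unix side allows (share and account names as in the config above; the subdirectory name is a placeholder):

        # Effective share definition as samba parsed it
        testparm -s | sed -n '/\[strge\]/,/^$/p'

        # Try the share as a guest from the server itself and drill into a subdirectory
        smbclient //localhost/strge -U guest% -c 'ls; cd somedir; ls'

        # The forced account needs execute (x) on every directory level it traverses
        ls -ld /storage /storage/somedir
        id windowsguest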

    Read the article

  • How to move a ruby on rails application to a new server

    - by ManiacZX
    I have a rails app on an old Ubuntu server I need to move onto a new machine. I haven't worked with ruby on rails so I don't really know anything about the structure of the app. I want to load this onto an Ubuntu 8.04 AMI on Amazon EC2 and am looking for any information regarding the migration process, such as:

    - Do I copy over the entire folder defined as the application root in the mongrel config (for ex: /u/apps/myapp/current) or just certain folders?
    - Am I looking for trouble if I go with the latest versions of ruby and the various gems?
    - Any general gotchas to look out for in the process.

    Current server information:

        root@webnode001:/# cat /proc/version
        Linux version 2.6.15-27-server (buildd@terranova) (gcc version 4.0.3 (Ubuntu 4.0.3-1ubuntu5)) #1 SMP Fri Dec 8 18:43:54 UTC 2006
        root@webnode001:/# rails -v
        Rails 1.2.3
        root@webnode001:/# mongrel_rails cluster::configure --version
        Version 1.0.1
        root@webnode001:/# gem -v
        0.9.0
        root@webnode001:/# gem list -l

        *** LOCAL GEMS ***

        actionmailer (1.3.3, 1.2.5)
            Service layer for easy email delivery and testing.
        actionpack (1.13.3, 1.12.5)
            Web-flow and rendering framework putting the VC in MVC.
        actionwebservice (1.2.3, 1.1.6)
            Web service support for Action Pack.
        activerecord (1.15.3, 1.15.2, 1.14.4)
            Implements the ActiveRecord pattern for ORM.
        activesupport (1.4.2, 1.4.1, 1.3.1)
            Support and utility classes used by the Rails framework.
        cgi_multipart_eof_fix (2.1)
            Fix an exploitable bug in CGI multipart parsing which affects Ruby <= 1.8.5 when multipart boundary attribute contains a non-halting regular expression string.
        daemons (1.0.7, 1.0.5, 1.0.4, 1.0.2)
            A toolkit to create and control daemons in different ways
        eventmachine (0.7.2, 0.7.0)
            Ruby/EventMachine socket engine library
        fastercsv (1.2.0, 1.1.0)
            FasterCSV is CSV, but faster, smaller, and cleaner.
        fastthread (1.0)
            Optimized replacement for thread.rb primitives
        ferret (0.11.4)
            Ruby indexing library.
        gem_plugin (0.2.2, 0.2.1)
            A plugin system based only on rubygems that uses dependencies only
        mongrel (1.0.1, 0.3.13.4)
            A small fast HTTP library and server that runs Rails, Camping, Nitro and Iowa apps.
        mongrel_cluster (0.2.1)
            Mongrel plugin that provides commands and Capistrano tasks for managing multiple Mongrel processes.
        mysql (2.7)
            MySQL/Ruby provides the same functions for Ruby programs that the MySQL C API provides for C programs.
        piston (1.3.3)
            Piston is a utility that enables merge tracking of remote repositories.
        rails (1.2.3, 1.1.6)
            Web-application framework with template engine, control-flow layer, and ORM.
        rake (0.7.3, 0.7.1)
            Ruby based make-like utility.
        sources (0.0.1)
            This package provides download sources for remote gem installation
        swiftiply (0.5.1)
            A fast clustering proxy for web applications.
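    In rough outline, a move like this is usually a copy of the application directory, a database dump/restore, and installing the same gem versions on the new machine rather than the latest ones. A hedged sketch - the host name and database name are placeholders, and the versions are taken from the gem list above:

        # 1. Copy the whole application tree the mongrel config points at
        rsync -az /u/apps/myapp/ newserver:/u/apps/myapp/

        # 2. Dump the database on the old box and load it on the new one
        mysqldump -u root -p myapp_production > myapp.sql
        mysql -u root -p myapp_production < myapp.sql

        # 3. Pin the old versions instead of jumping to current Rails
        gem install rails --version 1.2.3
        gem install mongrel --version 1.0.1
        gem install mongrel_cluster --version 0.2.1
        gem install mysql --version 2.7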

    Read the article

  • GPO Software Uninstall Not Taking Place

    - by burmat
    I am having some trouble with my software GPO's and can't seem to find any answers using Google. I successfully deployed software using my policy but when I delete another, the uninstallation of the software does not take place. What I did: Deployed software using a GPO, used gpupdate /force on the workstation to update, reboot, and install the software Deleted another software installation by: Right-Click All Tasks Remove 'Immediately uninstall the software from users and computers' From there, I did another gpupdate /force to try and get the GPO to refresh and uninstall the software on the workstation. This did not work. I then forced replication between my domain controllers and ran another gpupdate /force on the workstation and this did not uninstall the software. There are not error logs or indications that the uninstall is being triggered when I go into the event viewer, and I know for a fact that the policy is working in other aspects. So my questions is: Where do I look next to find the answer as to why GPO software deployments are working but un-installations are not, based off of what I have already tried? Thank you in advance. UPDATE: After using gpresult /z, there is no indication of a pending un-installation or removal of software. Under the section entitled "Software Installations", the software I am trying to uninstall is not listed. There is no other indication that the software I am trying to uninstall even exists. I also turned on RSoP logging and did (yet another) gpupdate /force to yield no blatant results. There is no indication that an uninstall event was even triggered, let alone incapability or failure. Although I am sure I marked it to uninstall in case of two events (the falling out of the scope of management, as well as the removal of the entry), I am beginning to think the entry just never triggered something that should have been triggered. UPDATE #2: After troubleshooting this (frustrating) application assignment, I have chalked it up as a fluke. I have tested with other software to make sure that the uninstall of other application assignments is actually working, so I am assuming it is something related to the package directly. There is the possibility that my problem resides in something related to what @joeqwerty linked in a comment below but because I can't go back in time, I don't think I will be able to prove it. I will probably be running a script via another GPO to guarantee the un-installation of left over package installs. For now, Evan Anderson is getting the answer because of the debugging information I was able to put to good use. Thank you to everyone that helped contribute so far!
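    When an assignment refuses to generate its removal, a common fallback is a computer startup script that uninstalls the orphaned package directly by its MSI product code; the GUID below is a placeholder for the package's real ProductCode:

        REM Startup-script fallback for the package the GPO never removed
        msiexec /x {00000000-0000-0000-0000-000000000000} /qn /norestart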

    Read the article

  • Linux RAID-0 performance doesn't scale up over 1 GB/s

    - by wazoox
    I have trouble getting the max throughput out of my setup. The hardware is as follow : dual Quad-Core AMD Opteron(tm) Processor 2376 16 GB DDR2 ECC RAM dual Adaptec 52245 RAID controllers 48 1 TB SATA drives set up as 2 RAID-6 arrays (256KB stripe) + spares. Software : Plain vanilla 2.6.32.25 kernel, compiled for AMD-64, optimized for NUMA; Debian Lenny userland. benchmarks run : disktest, bonnie++, dd, etc. All give the same results. No discrepancy here. io scheduler used : noop. Yeah, no trick here. Up until now I basically assumed that striping (RAID 0) several physical devices should augment performance roughly linearly. However this is not the case here : each RAID array achieves about 780 MB/s write, sustained, and 1 GB/s read, sustained. writing to both RAID arrays simultaneously with two different processes gives 750 + 750 MB/s, and reading from both gives 1 + 1 GB/s. however when I stripe both arrays together, using either mdadm or lvm, the performance is about 850 MB/s writing and 1.4 GB/s reading. at least 30% less than expected! running two parallel writer or reader processes against the striped arrays doesn't enhance the figures, in fact it degrades performance even further. So what's happening here? Basically I ruled out bus or memory contention, because when I run dd on both drives simultaneously, aggregate write speed actually reach 1.5 GB/s and reading speed tops 2 GB/s. So it's not the PCIe bus. I suppose it's not the RAM. It's not the filesystem, because I get exactly the same numbers benchmarking against the raw device or using XFS. And I also get exactly the same performance using either LVM striping and md striping. What's wrong? What's preventing a process from going up to the max possible throughput? Is Linux striping defective? What other tests could I run?
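    Two knobs that often explain a "striping doesn't scale" gap on md/LVM stripes are the readahead on the top-level device and the stripe chunk size relative to the underlying 256KB RAID-6 stripes. A hedged test sequence (device names are examples, and the write test is destructive, so only run it against a scratch array):

        # Readahead on the striped device (value is in 512-byte sectors)
        blockdev --getra /dev/md0
        blockdev --setra 65536 /dev/md0

        # Re-test with O_DIRECT so the page cache doesn't mask the difference
        dd if=/dev/md0 of=/dev/null bs=1M count=20000 iflag=direct
        dd if=/dev/zero of=/dev/md0 bs=1M count=20000 oflag=direct

        # When re-creating the stripe, experiment with the chunk size
        mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=1024 /dev/sdb /dev/sdc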

    Read the article

  • How do I parse file paths separated by a space in a string?

    - by user1130637
    Background: I am working in Automator on a wrapper to a command line utility. I need a way to separate an arbitrary number of file paths delimited by a single space from a single string, so that I may remove all but the first file path to pass to the program.

    Example input string:

        /Users/bobby/diddy dum/ding.mp4 /Users/jimmy/gone mia/come back jimmy.mp3 ...

    Desired output:

        /Users/bobby/diddy dum/ding.mp4

    Part of the problem is the inflexibility on the Automator end of things. I'm using an Automator action which returns unescaped POSIX filepaths delimited by a space (or comma). This is unfortunate because: 1. I cannot ensure file/folder names will not contain either a space or comma, and 2. the only inadmissible character in Mac OS X filenames (as far as I can tell) is :. There are options which allow me to enclose the file paths in double or single quotes, or angle brackets. The program itself accepts the argument of the aforementioned input string, so there must be a way of separating the paths. I just do not have a keen enough eye to see how to do it with sed or awk.

    At first I thought I'll just use sed to replace every [space]/ with [newline]/ and then trim all but the first line, but that leaves the loophole open for folders whose names end with a space. If I use the comma delimiter, the same happens, just for a comma instead. If I encapsulate in double or single quotation marks, I am opening another can of worms for filenames with those characters. The image/link is the relevant part of my Automator workflow.

    -- UPDATE --

    I was able to achieve what I wanted in a rather roundabout way. It's hardly elegant but here is working generalized code:

        path="/Users/bobby/diddy dum/ding.mp4 /Users/jimmy/gone mia/come back jimmy.mp3"

        # using colon because it's an inadmissible Mac OS X
        # filename character, perfect for separating
        # also, unlike [space], multiple colons do not collapse
        IFS=:

        # replace all spaces with colons
        colpath=$(echo "$path" | sed 's/ /:/g')

        # place words from colon-ized file path into array
        # e.g. three spaces -> three colons -> two empty words
        j=1
        for word in $colpath
        do
            filearray[$j]="$word"
            j=$j+1
        done

        # reconstruct file path word by word
        # after each addition, check file existence
        # if non-existent, re-add lost [space] and continue until found
        name=""
        for seg in "${filearray[@]}"
        do
            name="$name$seg"
            if [[ -f "$name" ]]
            then
                echo "$name"
                break
            fi
            name="$name "
        done
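    A slightly tighter variant of the same idea - accumulate words until the string names an existing file - needs neither the colon substitution nor a changed IFS; it shares the assumption that the first path refers to a file that actually exists:

        #!/bin/bash
        input="/Users/bobby/diddy dum/ding.mp4 /Users/jimmy/gone mia/come back jimmy.mp3"

        candidate=""
        for word in $input; do                       # default IFS: split on whitespace
            candidate="${candidate:+$candidate }$word"
            if [[ -f "$candidate" ]]; then           # stop at the first existing file
                printf '%s\n' "$candidate"
                break
            fi
        done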

    Read the article

  • Optimal setup for ASUS P6X58D Premium BIOS (no OC)?

    - by rumtscho
    Normally, I'd trust the mainboard manufacturer to choose the best options as defaults. But I had trouble with the board, because even with Quick Boot enabled, it booted twice as slowly as a Pentium 4 Celeron. Then I changed lots of options at once (most of them weren't explained in the manual, just mentioned with a single sentence) and the boot time is only marginally worse than the Pentium 4 (54 sec against 46 sec from button to pw entering screen). Now I don't know if I have turned something off which should have stayed on. I guess I even won't be able to boot from a CD now, because even though it is present in the boot sequence, I took off a timeout I think it needs to check whether there is a disk in the drive. The second reason is that I don't have an internal HDD, only a SSD. I forgot my sources blush but I am under the impression that today's BIOS and OS options are geared toward booting from a HDD, which is often less than optimal when one boots from a SSD, especially when there are functions which cause avoidable writing cycles, as a SSD wears out after too many writing cycles. Most of the things I've read concern the OS, but there are some BIOS-relevant options too. I am especially confused about the disk mode. The board supports AHCI, IDE-simulation and RAID, but of the different articles I've read, there is a proponent for each and no clear arguments for any. So can one tell me which options are important in general and which are important for a SSD-only system? I don't want to overclock the CPU, so you don't have to say anything about this (yes I know the board is meant for OC:)). I am thinking of overclocking the RAM, since they sold me 1600er heatsinked modules which are running at 1066 now, but I'm not sure yet about that. The rest of the system: i7-930, Intel X25-m G2, 6 GB RAM, GTS 250, some no-name Blue-ray ROM. 2 external HDDs over USB 2.0. Lots of other USB-connected hardware (12 devices I think), no SATA 3 drives (will disabling the controller have an impact on performance?), no LAN, only WiFi. Lucid Lynx 64 bit, no dual boot, no virtual installations. The main uses of the system are: managing and playing/showing all the media stored on the external disks, lots of image manipulation, some video editing, a bit of (non-demanding) gaming, rarely development. Lots of Internet surfing too, but this shouldn't have much impact on performance.

    Read the article

  • Configuring Windows 2003 As A Router

    - by Sean M
    I am trying to configure a Windows 2003 server to act as a router, so that the two subnetworks that I'm dealing with can communicate with one another without NAT. I am mostly sure that I have configured Windows 2003 incorrectly, and I'm finding it very difficult to drill down through Google results to something helpful. I have a 192.168.1.0/24 network that is my "production" network (in the sense that I'm in trouble if I screw it up) and a 10.0.0.0/8 network that is my test network. The 192.168.1.0 network is ruled by a gateway whose routing table looks like this (my address redacted): The Windows 2003 server, "prime," is multihomed. Its network adapters are at 192.168.1.122, (as seen above), 10.0.0.1, and 10.0.0.2. I added the Routing and Remote Access role to it, and enabled LAN routing. I do not have it using RIP or other routing protocols. Its current routing table is shown below. To me, it looks like all of the right routes are there for traffic to pass between the 192.168.1.0 network and the 10.0.0.0 network. However, traffic does not pass. The 10.0.0.11 and .12 clients cannot be contacted from the 192.168.1.0 network. When I use traceroute to try to get to them, the trace gets to the Windows 2003 server's 192.168.1.122 address, then produces nothing but "* * *" timeouts. When I try to traceroute to 192.168.1.1 from a 10.0.0.0-network client, I get "destination host unreachable." However, I know that the routing is working at least a little, because from the 192.168.1.0 network, I can connect to the Windows server just fine by referring to it as 10.0.0.1. What static routes would allow me to contact 10.0.0.11 and .12 from the 192.168.1.0 network? Is it possible to tell the Windows server "since you are a DHCP/DNS server, you already know routes to get to machines that are getting IP addresses from you, please add those to your routing table" ? Will using RIP or OSPF on the Windows server actually be helpful in this situation?
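    Assuming packets really do leave the 192.168.1.0 side, the piece most often missing in a setup like this is the return route: the 192.168.1.1 gateway (or, failing that, each 192.168.1.x client) has to be told that 10.0.0.0/8 lives behind 192.168.1.122. On a Windows client that would look roughly like this (run elevated; -p makes the route persistent):

        route add 10.0.0.0 mask 255.0.0.0 192.168.1.122 -p
        route print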

    Read the article

  • libpam-ldapd not looking for secondary groups

    - by Jorge Suárez de Lis
    I'm migrating from libpam-ldap to libpam-ldapd. I'm having some trouble gathering the secondary groups from LDAP. On libpam-ldap, I had this on the /etc/ldap.conf file:

        nss_schema rfc2307bis
        nss_base_passwd ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es
        nss_base_shadow ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es
        nss_base_group ou=Groups,ou=CITIUS,dc=inv,dc=usc,dc=es
        nss_map_attribute uniqueMember member

    The mapping is there because I'm using groupOfNames instead of the groupOfUniqueNames LDAP class for groups, so the attribute naming the members is named member instead of uniqueMember. Now, I want to do the same using libpam-ldapd but I can't get it to work. Here's the relevant part of my /etc/nslcd.conf:

        base passwd ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es
        base shadow ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es
        base group ou=Groups,ou=CITIUS,dc=inv,dc=usc,dc=es
        map group uniqueMember member

    And this is the debug output from nslcd when a user is authenticated:

        nslcd: [8b4567] DEBUG: connection from pid=12090 uid=0 gid=0
        nslcd: [8b4567] DEBUG: nslcd_passwd_byuid(4004)
        nslcd: [8b4567] DEBUG: myldap_search(base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es", filter="(&(objectClass=posixAccount)(uidNumber=4004))")
        nslcd: [8b4567] DEBUG: ldap_initialize(ldap://172.16.54.31/)
        nslcd: [8b4567] DEBUG: ldap_set_rebind_proc()
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_PROTOCOL_VERSION,3)
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_DEREF,0)
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_TIMELIMIT,10)
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_TIMEOUT,10)
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_NETWORK_TIMEOUT,10)
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_REFERRALS,LDAP_OPT_ON)
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_RESTART,LDAP_OPT_ON)
        nslcd: [8b4567] DEBUG: ldap_simple_bind_s("uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es","*****") (uri="ldap://172.16.54.31/")
        nslcd: [8b4567] connected to LDAP server ldap://172.16.54.31/
        nslcd: [8b4567] DEBUG: ldap_result(): end of results
        nslcd: [7b23c6] DEBUG: connection from pid=15906 uid=0 gid=2000
        nslcd: [7b23c6] DEBUG: nslcd_pam_authc("jorge.suarez","","su","***")
        nslcd: [7b23c6] DEBUG: myldap_search(base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es", filter="(&(objectClass=posixAccount)(uid=jorge.suarez))")
        nslcd: [7b23c6] DEBUG: ldap_initialize(ldap://172.16.54.31/)
        nslcd: [7b23c6] DEBUG: ldap_set_rebind_proc()
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_PROTOCOL_VERSION,3)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_DEREF,0)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_TIMELIMIT,10)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_TIMEOUT,10)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_NETWORK_TIMEOUT,10)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_REFERRALS,LDAP_OPT_ON)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_RESTART,LDAP_OPT_ON)
        nslcd: [7b23c6] DEBUG: ldap_simple_bind_s("uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es","*****") (uri="ldap://172.16.54.31/")
        nslcd: [7b23c6] connected to LDAP server ldap://172.16.54.31/
        nslcd: [7b23c6] DEBUG: ldap_initialize(ldap://172.16.54.31/)
        nslcd: [7b23c6] DEBUG: ldap_set_rebind_proc()
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_PROTOCOL_VERSION,3)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_DEREF,0)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_TIMELIMIT,10)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_TIMEOUT,10)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_NETWORK_TIMEOUT,10)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_REFERRALS,LDAP_OPT_ON)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_RESTART,LDAP_OPT_ON)
        nslcd: [7b23c6] DEBUG: ldap_simple_bind_s("uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es","*****") (uri="ldap://172.16.54.31/")
        nslcd: [7b23c6] connected to LDAP server ldap://172.16.54.31/
        nslcd: [7b23c6] DEBUG: myldap_search(base="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es", filter="(objectClass=posixAccount)")
        nslcd: [7b23c6] DEBUG: ldap_unbind()
        nslcd: [3c9869] DEBUG: connection from pid=15906 uid=0 gid=2000
        nslcd: [3c9869] DEBUG: nslcd_pam_sess_o("jorge.suarez","uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es","su","/dev/pts/7","","jorge.suarez")

    It seems to me that it won't even try to look for groups. What am I doing wrong? I can't see anything relevant to my problem in the docs. I'm probably not understanding how the map option works.
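    Since the trace only shows passwd and PAM requests, it may be worth ruling out the NSS side first - secondary groups only come from LDAP if nsswitch.conf lists ldap for the group database. Two quick checks:

        # Lists LDAP groups alongside local ones only if /etc/nsswitch.conf has "group: ... ldap"
        getent group

        # Shows the primary and secondary groups NSS resolves for the user
        id jorge.suarez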

    Read the article

  • Are FC and SAS DAS devices standard enough?

    - by user222182
    Before I ask my questions, here is some background info that may or may not be useful: For the first time I find myself needing a DAS solution. My priority is data through-put in a single direction. I can write large blocks, and I don't need to read at the same time. The server (the data producing device) is not really a typical server, its a very powerful single board computer. As such I have limited options when it comes to the add-in cards I can install since it must use the fairly uncommon interface, XMC. Currently I believe I am limited PCIex8 gen 1 which means that the likely bottle neck for me will be this 16gbps connection. XMC Boards I have found so far offer the following connections: a) Dual 10GBE ethernet controller, total throughput 20gbps b) Dual Quad SAS 2.0 Connectors (SFF-8XXX) HBA (no raid), total throughput 48 gbps c) Dual FC 8gb HBA (no raid), total throughput 16gbps My questions for you guys are: 1) Are SAS and/or FC, and by extension their HBAs, standard enough that I could purchase a Dell or Aberdeen storage server with a raid controller that has external SAS or FC ports and expect that I can connect it to my SAS or FC HBA, be presented with a single volume (if I so configured the storage server), all without having to check for HBA compatibility? 2) On a device like a Dell PowerVault (either DAS or NAS) is there an OS on it to concern myself with, or is it meant to be remotely managed? Is there a local interface in case I cant remotely manage it (i.e. if my single board computer uses an OS not supported by Dell OpenManage). Would this be true of nearly any device which calls itself a DAS? 3) If I purchase some sort of Supermicro storage chassis, installed a raid controller with external connections, is there a nice lightweight OS I can run just for management of the controller? Would I even need an OS since the raid card would be configured pre-boot anyway? 4) It is much easier to buy XMC based 10gigabit ethernet cards (generally dual port). In what ways would I be getting into trouble by using iSCSI as a DAS are direct cabling with SFP+ cables? Thanks in advance

    Read the article

  • How to create an EFI System Partition?

    - by Alex Popov
    TL; DR How do I create an EFI system partition from scratch? How do I put the EFI firmware on it onces it is created? Long version I hava Toshiba T430 laptop. I received it with Windows 7 installed (but I think originally it has shipped with Windows 8). I installed Ubuntu on it, but deleted some partitions on the disk so that I ended up wiping out the Windows and only having Ubuntu. Among the deleted partitions was the EFI System partition. I discovered that Ubuntu now boots in Legacy mode (and not UEFI). I am trying to follow this guide on converting my Ubuntu installation from Legacy to UEFI. The problem - since there is no EFI partition whenever I choose from BIOS to boot using UEFI I cannot boot. That counts not only for the harddrive, but usb and DVD as well. I think this is logical - it expects an EFI partition and since it can't find it, it cannot continue booting futher, be it from HDD or DVD. So how do I recreate the EFI partition? The guide above says: Creating an EFI partition If you are manually partitioning your disk in the Ubuntu installer, you need to make sure you have an EFI partition set up. If your disk already contains an EFI partition (eg if your computer had Windows8 preinstalled), it can be used for Ubuntu too. Do not format it. It is strongly recommended to have only 1 EFI partition per disk. An EFI partition can be created via a recent version of GParted (the Gparted version included in the 12.04 disk is OK), and must have the following attributes: Mount point: /boot/efi (remark: no need to set this mount point when using the manual partitioning, the Ubuntu installer will detect it automatically) Size: minimum 100Mib. 200MiB recommended. Type: FAT32 Other: needs a "boot" flag. I had some trouble creating this partition: I boot from a live Ubuntu DVD, open GParted, create a 200MB partition and format it to FAT32. In GParted I cannot set the mount point and thus cannot set the bootflag. I didn't set the mount point in /etc/fstab since it's a live CD and fstab looked quite differently from what I expected compared to a normal boot. Anyway, I just didn't know what values to set. I booted again via the live DVD and then chose to install Ubuntu. I then created a partition with the mentioned criteria - mount point, 200MB, FAT32, boot flag. However, I continue to have this problem and I suppose it's because on that partition there is no EFI firmware, it's just an empty partition, which is suitable to have EFI firmware. So again, how do I create an EFI partition, which has the EFI software, so that the laptop can once again boot in UEFI mode?
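    In rough outline, recreating the partition and making it bootable might look like the sketch below (from a live session or a chroot into the installed system). There is no separate "EFI firmware" to copy onto it - the firmware lives on the board; the partition only has to hold a boot loader, which grub-install places there. Device names are examples and assume the new 200 MB partition ends up as /dev/sda1:

        # Format the new partition as FAT32 and flag it as the EFI System Partition
        sudo mkfs.vfat -F 32 /dev/sda1
        sudo parted /dev/sda set 1 boot on

        # With the installed Ubuntu mounted (or from within it), install the EFI boot loader
        sudo mount /dev/sda1 /boot/efi
        sudo apt-get install grub-efi-amd64
        sudo grub-install /dev/sda
        sudo update-grub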

    Read the article

  • Windows XP installation problems

    - by Samurai Waffle
    I recently asked a question on here, and thought I had it working... Here is a link to it. Windows XP Installation problems So basically I'm having trouble getting XP installed. To sum it up, a computer I have had a boot sector virus, and I used Darik's Nuke and Boot to wipe the hard drive clean. So the hard drive has nothing on it. I had to try and install Windows through a DOS prompt, because for some reason it won't read it off the DVD. The UBCD is able to look at the files located on the DVD I have in, but I can't boot from it for some reason. So I extracted it to a USB drive, booted to DOS and started the setup process. Here's the weird thing with DOS... It can only find the C: drive. The C: drive in DOS is the flash drive that I have in, running DOS. I can't find the hard drive anywhere! So anyways, after starting the setup process, it copied the files over to the "hard drive" (which took 16 hours because the version of DOS I ran couldn't run smartdrv.exe), and it said the computer had to reboot. So I let it reboot, and it stopped and said there is no boot device. So I popped in UBCD that I have installed on a flash drive, and I discovered that it had copied the Windows files over to the flash drive and not the hard drive. It never asked where it should extract the files... So I toyed around with UBCD, ran a memory test on the hard drive to make sure it was fine, and it came out clean. So I'm stuck now. How can I get this installed? Writing this, I came up with an idea. If I copy the DOS startup files over to the hard drive, would I be able to start DOS from it? If so, I believe that could fix my problems. Any help is greatly appreciated, because I am running out of ideas and am at my wits end with this computer.

    Read the article

  • Handling site not found and page not found with dynamic mass virtual hosting

    - by Rick Moynihan
    I have recently setup mass virtual hosting in Apache so that all we need to do is create a directory to create a new vhost. We're then also using wildcard DNS to map all subdomains to the server running our Apache instance. This works excellently, however I'm now having trouble configuring it to fail over to an appropriate default/error page when the vhost directory does not exist. The problem appears to be complicated by my desire to handle two error conditions:

    - vhost not found, i.e. there was no directory found matching the host supplied in the HTTP Host header. I'd like this to display an appropriate site-not-found error page.
    - The 404 page not found condition of the vhost.

    Additionally I have a specialised "api" vhost in its own vhost block. I've tried a number of variations and none seem to exhibit the behaviour I want. Here's what I'm working with right now:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot /var/www/site-not-found
            ServerName sitenotfound.mydomain.org
            ErrorDocument 500 /500.html
            ErrorDocument 404 /500.html
        </VirtualHost>

        <VirtualHost *:80>
            ServerName api.mydomain.org
            DocumentRoot /var/www/vhosts/api.mydomain.org/current
            # other directives, e.g. setting up passenger/rails etc...
        </VirtualHost>

        <VirtualHost *:80>
            # get the server name from the Host: header
            UseCanonicalName Off
            VirtualDocumentRoot /var/www/vhosts/%0/current
            # other directives ... e.g proxy passing to api etc...
            ErrorDocument 404 /404.html
        </VirtualHost>

    My understanding is that the first vhost block is used as the default, so I have this here as my catch-all site. Next I have my API vhost, and then finally my mass vhost block. So for a domain that doesn't match the first two ServerNames and has no corresponding directory in /var/www/vhosts/, I'd expect it to fall over to the first vhost; however with this setup, all domains resolve to my default site-not-found. Why is this?

    By putting the mass-vhost block first, I can get the mass-vhosts to resolve properly, but not my site-not-found vhost... and in this case I can't seem to find a way to distinguish between a page-level 404 in the vhost, and the case where the VirtualDocumentRoot fails to find a vhost directory (this appears to use the 404 also). Any help out of this bind is much appreciated!
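    One pattern that sidesteps the vhost-ordering question entirely is to let the wildcard vhost detect the missing directory itself and bounce the request to the not-found site with mod_rewrite. A hedged sketch using the same paths and names as the config above, to be placed inside the mass-vhost block:

        RewriteEngine On
        # If /var/www/vhosts/<host>/current does not exist, redirect to the site-not-found vhost
        RewriteCond /var/www/vhosts/%{HTTP_HOST}/current !-d
        RewriteRule ^ http://sitenotfound.mydomain.org/ [L,R=302]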

    Read the article

  • Problems migrating an EBS backed instance over AWS Regions

    - by gshankar
    Note: I asked this question on the EC2 forums too but haven't received any love there. Hopefully the ServerFault community will be more awesome. The new AWS Sydney region opening up is something we've been waiting for for a long time, but I'm having a lot of trouble migrating our instances over from N. California. I managed to migrate one instance by using CloudyScripts to move a snapshot and then firing up a new instance in the Sydney region. That was a very new instance, so both the source and destination were running Ubuntu 12.04 LTS and I had no issues there. However, the rest of our instances are all Ubuntu 10.04 LTS, and with these I'm having a lot of problems. I've tried the following: 1- Following the AWS whitepaper on moving instances, which was given to us at the recent Customer Appreciation Day in Sydney where the new region was launched. The problem with this approach was the last step (Step 19), where you register the image: ec2-register -s snap-0f62ec3f -n "Wombat" -d "migrated Wombat" --region ap-southeast-2 -a x86_64 --kernel aki-937e2ed6 --block-device-mapping "/dev/sdk=ephemeral0" I keep getting this error: Client.InvalidAMIID.NotFound: The AMI ID 'ami-937e2ed6' does not exist, which I think is due to the kernel_id not existing in the Sydney region? 2- Using CloudyScripts to move a snapshot and then creating a new volume and attaching it to a new instance in Sydney. This results in the instance just hanging on boot and failing the status checks; I can't SSH in or look at the server log. I suspect my issue is finding the right kernel_id for the volume in the new region. However, I can't work out how to find this kernel_id: the ones I've tried (from the original instance) don't result in the Client.InvalidAMIID.NotFound error, and any other kernel_id just won't boot. I've tried both 12.04 and 10.04 versions of Ubuntu. Nothing seems to work; I've been banging my head against a wall for a while now, please help! New (broken) instance: i-a1acda9b ami-9b8611a1 aki-31990e0b Source instance: i-08a6664e ami-b37e2ef6 aki-937e2ed6 p.s. I also tried following this guide on upgrading my Ubuntu LTS version to 12.04 before doing the migration, but it didn't seem to work either; still getting stuck on updating the kernel_id: http://ubuntu-smoser.blogspot.com.au/2010/04/upgrading-ebs-instance.html
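
    Since kernel (AKI) IDs are region-specific, an aki-... copied from the N. California region will never exist in ap-southeast-2; the usual approach is to register the image against the equivalent Amazon-published pv-grub kernel in the destination region. A rough sketch with the classic EC2 API tools (the grep pattern and aki-xxxxxxxx are placeholders, and you would pick the pv-grub kernel that matches the instance's architecture and whether the root filesystem is on a partition or the whole disk):

        # list the kernel images Amazon publishes in the Sydney region
        ec2-describe-images -o amazon --region ap-southeast-2 --filter "image-type=kernel" | grep pv-grub

        # then register the migrated snapshot against a kernel that actually exists there
        ec2-register -s snap-0f62ec3f -n "Wombat" -d "migrated Wombat" \
          --region ap-southeast-2 -a x86_64 --kernel aki-xxxxxxxx \
          --block-device-mapping "/dev/sdk=ephemeral0"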

    Read the article

  • Problem connecting to SSH in office network

    - by Jeune
    I have trouble connecting via SSH to a server whenever I am in the office. I get as far as being prompted for my password, and then there's a long wait which always ends in a Write failed: Broken pipe. This only happens when connecting via SSH. I use svn to commit files to a repository hosted on the same server and there are no hitches. Furthermore, this only happens in our office. When I go to the university, or whenever I am at home or at the coffee shop, I am able to connect seamlessly. There are no firewalls in our office; it's just a basic wireless router connected to a modem. It's the same setup I have at home and, I guess, the same setup in the coffee shop. What causes a broken pipe, and why does this only happen when I try to connect via SSH and not when I work with svn on the same server? Updated: Some debug logs after authentication: debug3: packet_send2: adding 48 (len 64 padlen 16 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. debug3: Ignored env ORBIT_SOCKETDIR debug3: Ignored env SSH_AGENT_PID debug3: Ignored env TERM debug3: Ignored env SHELL debug3: Ignored env XDG_SESSION_COOKIE debug3: Ignored env WINDOWID debug3: Ignored env GNOME_KEYRING_CONTROL debug3: Ignored env GTK_MODULES debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env LIBGL_DRIVERS_PATH debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env DEFAULTS_PATH debug3: Ignored env SESSION_MANAGER debug3: Ignored env USERNAME debug3: Ignored env XDG_CONFIG_DIRS debug3: Ignored env DESKTOP_SESSION debug3: Ignored env LIBGL_ALWAYS_INDIRECT debug3: Ignored env PATH debug3: Ignored env PWD debug3: Ignored env GDM_KEYBOARD_LAYOUT debug1: Sending env LANG = en_PH.utf8 debug2: channel 0: request env confirm 0 debug3: Ignored env GNOME_KEYRING_PID debug3: Ignored env MANDATORY_PATH debug3: Ignored env GDM_LANG debug3: Ignored env GDMSESSION debug3: Ignored env SHLVL debug3: Ignored env HOME debug3: Ignored env GNOME_DESKTOP_SESSION_ID debug3: Ignored env LOGNAME debug3: Ignored env XDG_DATA_DIRS debug3: Ignored env DBUS_SESSION_BUS_ADDRESS debug3: Ignored env LESSOPEN debug3: Ignored env WINDOWPATH debug3: Ignored env DISPLAY debug3: Ignored env LESSCLOSE debug3: Ignored env XAUTHORITY debug3: Ignored env COLORTERM debug3: Ignored env OLDPWD debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 UPDATE 2011-14-07: I am able to connect to the server via SSH now. I didn't do anything, but that's because there is no one in the office but me! Having said that, is it possible that it has something to do with the number of sessions an SSH server can handle? UPDATE 2011-14-07: I tried to log in via SSH through PuTTY on another machine running Windows, alongside my current SSH session in Ubuntu, and now it seems my SSH session in Ubuntu has been dropped. I can't type into the terminal. Is PuTTY the culprit now?
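
    Since the office router is the only thing that differs, a common first experiment is client-side keepalives, because consumer routers sometimes drop or mangle long-lived TCP connections that go quiet while the interactive session is being set up. A minimal ~/.ssh/config sketch (the Host alias and HostName are hypothetical, and the intervals are just reasonable guesses):

        Host work-server
            # hypothetical alias; substitute the real server address
            HostName server.example.org
            # send an application-level keepalive every 30 seconds
            ServerAliveInterval 30
            # only declare the connection dead after four missed replies
            ServerAliveCountMax 4
            TCPKeepAlive yes

    The same options can be passed ad hoc with ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=4 user@server, which makes it easy to compare behaviour with and without them from the office network.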

    Read the article
