Search Results

Search found 4864 results on 195 pages for 'resolv conf'.

Page 26/195 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • Can't upload project to PPA using Quickly

    - by RobinJ
    I can't get Quickly to upload my project into my PPA. I've set up my PGP key and used it to sign the code of conduct, and the PPA exists. I don't know what other useful information I can supply.

        robin@RobinJ:~/Ubuntu One/Python/gtkreddit$ quickly share --ppa robinj/gtkreddit
        Get Launchpad Settings
        Launchpad connection is ok
        gpg: WARNING: unsafe permissions on configuration file `/home/robin/.gnupg/gpg.conf'
        gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/robin/.gnupg/gpg.conf'
        gpg: WARNING: unsafe permissions on configuration file `/home/robin/.gnupg/gpg.conf'
        gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/robin/.gnupg/gpg.conf'
        Traceback (most recent call last):
          File "/usr/share/quickly/templates/ubuntu-application/share.py", line 138, in <module>
            license.licensing()
          File "/usr/share/quickly/templates/ubuntu-application/license.py", line 284, in licensing
            {'translatable': 'yes'})
          File "/usr/share/quickly/templates/ubuntu-application/internal/quicklyutils.py", line 166, in change_xml_elem
            xml_tree.find(parent_node).insert(0, new_node)
        AttributeError: 'NoneType' object has no attribute 'insert'
        ERROR: share command failed
        Aborting

    I reported this as a bug on Launchpad, because I assume that it is a bug. If you know a quick workaround, please let me know. https://bugs.launchpad.net/ubuntu/+source/quickly/+bug/1018138
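
    The gpg warnings, at least, have a standard fix that is independent of the traceback: tighten the permissions gpg is complaining about. A minimal sketch:

        chmod 700 ~/.gnupg
        chmod 600 ~/.gnupg/gpg.conf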

    Read the article

  • Mod Rewrite - directing HTTP/HTTPS traffic to the appropriate virtual hosts

    - by kce
    I have an Apache2 web server (v. 2.2.16) running on Debian, hosting three virtual hosts. The first two hosts are HTTP-only (server1 and server2). The last host is HTTPS-only (server3). My virtual host configuration files can be found at pastebin. I would like to use mod_rewrite to get the following behavior: any request for http://server3 is redirected to https://server3, and any request for https://server1 or https://server2 is redirected to http://server1 or http://server2 as appropriate. Currently, requesting http://server3 gives you a 403 because indexing is disabled for that host, and a request for https://server1 or https://server2 will resolve as https://server3 (as it's the only virtual host running SSL). This behavior is not desirable. So far I have added a rewrite rule to the central configuration file (myServerWideConfs.conf), unfortunately with no effect. I was under the impression that this rule (or something similar) should rewrite all https:// requests for server1 and server2 to the proper http:// request:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^server3 [NC]
        RewriteRule (.*) http://%{HTTP_HOST}

    My question is two-fold: What mod_rewrite rules should I use to accomplish this, and where should they go? Debian's packaging of Apache has a pretty granular (i.e., fractured) configuration file layout; should my rewrite rules go in /etc/apache2/apache2.conf, /etc/apache2/conf.d/myServerWideConfs.conf, or the individual virtual host files? Is mod_rewrite the right tool to accomplish this, or am I missing something in my greater Apache configuration?
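
    A hedged sketch of the usual per-vhost approach (server names taken from the question): the http-to-https hop lives in server3's port-80 vhost, while the https-to-http hop has to live in the SSL vhost, because that is where requests for https://server1 and https://server2 actually land.

        # in the *:80 VirtualHost for server3
        RewriteEngine On
        RewriteRule ^(.*)$ https://server3$1 [R=301,L]

        # in the *:443 VirtualHost (the only SSL host)
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^server1 [NC]
        RewriteRule ^(.*)$ http://server1$1 [R=301,L]
        RewriteCond %{HTTP_HOST} ^server2 [NC]
        RewriteRule ^(.*)$ http://server2$1 [R=301,L]

    One caveat: browsers will still show a certificate warning on https://server1 before the redirect fires, since the TLS handshake against server3's certificate happens before any rewrite can run.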

    Read the article

  • Use 3 monitors w/built-in intel adapter + two old nvidia PCI cards on 10.10?

    - by Kendall Gifford
    I'd like to move from windows with my current workstation. The only thing holding me back is that I have 3 monitors connected to the system and I really take advantage of the real estate when working. I just installed Ubuntu 10.10 on the system and one of the monitors is up and running just fine. This monitor is connected to the built-in Intel adapter. I also have two old nVidia GeForce4 MX 4000 (nv19pl) cards in my two PCI slots with two monitors connected to them respectively. I installed the legacy (and proprietary) nVidia drivers (the nvidia-96 package) that claims to support these old cards. Now the question is how to get X configured to use all adapters (using two different drivers) so I can use all three monitors (and is this even possible)? From what I've read, it looks like I'll have to write an xorg.conf file since the nVidia driver doesn't support the auto-magic configuration supported by other drivers. On this site: http://wiki.ubuntu.com/X/Config it says that on 10.10 I just need to write an xorg.conf "containing only those sections and options that you need to override Xorg's autoconfigurated settings". So, does this mean I can get away with only including the nVidia-specific configuration stuff and all else will get auto-configured? Or, will providing a config with a "Device" section overrule the auto-magic from detecting/using the Intel adapter? I ran the included nvidia-xconfig to generate a basic, nVidia-specific xorg.conf but I'm hesitant to reboot with it in place, suspecting I'll have a screwed up display. Also, is there any way (any tool or command) to generate an xorg.conf from the current, auto-configured running state of an X session? If I have to write a full, complete config, I'd rather start with one that includes everything that's been auto-detected thus far (and merge it with my nVidia version). Anyhow, any info and thoughts are greatly appreciated (as are answers).
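
    On the last question: Xorg can dump a config matching what it detects. A sketch (run from a text console with the display manager stopped):

        sudo service gdm stop
        sudo Xorg :1 -configure    # writes xorg.conf.new in root's home directory
        sudo service gdm start

    That generated file can then be merged with the nvidia-xconfig output rather than writing a full config from scratch.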

    Read the article

  • Missing X-Spam-Status header

    - by Walt Stoneburner
    I recently upgraded to Ubuntu 14.04.1 LTS (trusty), have followed the directions in https://help.ubuntu.com/14.04/serverguide/mail-filtering.html, and am sending and receiving mail just fine. While I do see X-Virus-Scanned headers in my messages, which suggests mail is indeed being processed, I do not see any X-Spam-Level or X-Spam-Score headers being added to messages. This makes downstream procmailrc and client-side filtering ...more difficult. While having $final_spam_destiny = D_DISCARD in /etc/amavis/conf.d/20-debian_defaults does greatly reduce spam to my inbox, I had concerns about false positives prior to tuning and didn't know where they were going, so I have set it to D_PASS for the time being. This exposed the problem. I'm not sure where to look to start diagnosing the problem (otherwise I'd post a suspect configuration file). /etc/amavis/conf.d/15-content_filter_mode has the lines uncommented to enable virus and spam checks, and virus checking appears to be working according to the headers. SpamAssassin certainly seems to be starting just fine, too:

        SpamAssassin debug facilities: info
        SA info: zoom: able to use 360/360 'body_0' compiled rules (100%)
        SpamAssassin loaded plugins: AskDNS, AutoLearnThreshold, Bayes, BodyEval, Check, DKIM, DNSEval, FreeMail, HTMLEval, HTTPSMismatch, Hashcash, HeaderEval, ImageInfo, MIMEEval, MIMEHeader, Pyzor, Razor2, RelayEval, ReplaceTags, Rule2XSBody, SPF, SpamCop, URIDNSBL, URIDetail, URIEval, VBounce, WLBLEval, WhiteListSubject
        SpamControl: init_pre_fork on SpamAssassin done

    I've also set $log_level = 2; in /etc/amavis/conf.d/50-user and don't see any obvious errors rolling by in the logs. Q: Any recommendations of what to try next? UPDATE (it appears that I have the right setting already):

        /etc/amavis/conf.d$ grep sa_tag_level_deflt *
        20-debian_defaults:# $sa_tag_level_deflt = 2.0; # add spam info headers if at, or above that level
        20-debian_defaults:$sa_tag_level_deflt = -999; # add spam info headers if at, or above that level
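
    A diagnostic sketch for checking which value actually wins after every conf.d fragment loads; the showconfig subcommand is an assumption about the Debian amavisd-new wrapper, so treat it as such:

        sudo amavisd-new showconfig | grep -i sa_tag   # assumption: the wrapper passes 'showconfig' through to amavisd
        grep -rn sa_tag /etc/amavis/conf.d/            # conf.d files load in lexical order; a later file wins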

    Read the article

  • Package Manager cannot access repositories but internet is working

    - by kazman
    I am currently at a conference in another country and my package manager cannot access repositories. My internet is working fine and I can ping the repositories or go to them in a browser, but the package manager fails to access them. If I sudo apt-get update, it throws "Something wicked happened resolving 'wwwproxy:3128' (-5 - No address associated with hostname)" (or Ign lines). This proxy corresponds to my proxy at my office back home, but I have disabled the proxy in the package manager. Scanning for the best repository doesn't work either; it doesn't manage to connect to any. I have searched for this online and have checked things about my apt.conf file. My apt.conf contains:

        Acquire::http::proxy "http://wwwproxy:3128/";
        Acquire::https::proxy "https://wwwproxy:3128/";
        Acquire::ftp::proxy "ftp://wwwproxy:3128/";
        Acquire::socks::proxy "socks://wwwproxy:3128/";

    If I remove apt.conf (or replace it with a blank file), it makes no difference. I don't see why it should make a difference, since I am connecting directly (and have set it so in my network options in Package Manager network settings). I have also tried some things with resolv.conf (changing the name address to primary and secondary DNS) to no avail (I'm not sure if this would help; I was following other advice). I am running 12.04. (I wrote this very quickly and wrote down everything I have tried to possibly shorten the troubleshooting process; I have very limited time between lectures and need this sorted asap, my apologies.)
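
    Since deleting apt.conf changed nothing, the proxy is probably coming from somewhere else apt looks. A sketch of the usual hiding places:

        grep -ri proxy /etc/apt/apt.conf.d/    # fragment files are read even when apt.conf is gone
        env | grep -i proxy                    # http_proxy etc. in the shell environment
        sudo env | grep -i proxy               # apt-get runs under sudo's environment, which can differ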

    Read the article

  • About the use of dotted hostname with avahi

    - by BenZen
    Hi. I recently discovered Avahi. It helps when you want to resolve hostnames on the local network, but in my situation I've got an issue. I decided to host a machine called "a.alpha" and another called "b.alpha". In the near future I will also use some machines called "a.beta" and "b.beta". My problem is that from "a.alpha" I can resolve the "a.alpha.local" hostname, but currently I can't resolve "a.alpha.local" from b.alpha. So when I decide to use the ".beta" extension, I will have some issues. Is it normal that the machine "a.alpha" doesn't expose the entire hostname to mDNS? I know I can change the naming method (say, use a-alpha instead of a.alpha), but I like it this way. So the question is: Is it possible to use a dotted name in /etc/hostname and to resolve it using Avahi?
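
    A quick way to see what each box is actually advertising, sketched with standard avahi-utils commands:

        avahi-resolve -n a.alpha.local    # run from b.alpha; tests mDNS name resolution
        avahi-browse -a                   # lists everything being advertised on the LAN

    If the published name turns out to be truncated at the first dot, that would confirm Avahi is treating "alpha" as part of the domain rather than the host label.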

    Read the article

  • Internet Good but update manager not good

    - by Raja
    I am using Ubuntu 12.04 and connecting to the Internet with a 236 kbps USB modem. My issue is: if I access a webpage through the browser it works fine, but if I run sudo apt-get update in the terminal, it doesn't respond and just hangs there. I can't work out what my issue is; please help me solve this. One more thing: I am unable to change my DNS either.

        raja@badfox:~$ cat /etc/resolv.conf
        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 220.226.100.40
        nameserver 220.226.6.104
        nameserver 127.0.0.1

    I want to replace them with the DNS IPs listed here: http://www.cyberciti.biz/faq/free-dns-server/ See the picture to get some idea about my issue.
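
    Since resolvconf rewrites /etc/resolv.conf (as its warning says), the override belongs in resolvconf's 'head' file. A sketch, with 8.8.8.8/8.8.4.4 standing in for whichever servers you pick from that list:

        echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolvconf/resolv.conf.d/head
        echo "nameserver 8.8.4.4" | sudo tee -a /etc/resolvconf/resolv.conf.d/head
        sudo resolvconf -u    # regenerate /etc/resolv.conf from its sources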

    Read the article

  • Some websites are not opening in any web browser (firefox/chrome). What should I do? [closed]

    - by Jamal
    Some websites are not opening on my system. I am using Ubuntu 12.10. Earlier, when I was using Ubuntu 10.10 and 11.10, there was no such issue. The problem started in Ubuntu 12.04, and it remained the same even after formatting and installing Ubuntu 12.10. I have tried Firefox as well as Chromium, and I am sure the issue is not with the browser: the same websites open perfectly under Windows. Google, Twitter and Ubuntu-related websites run perfectly. Other websites like www.downrightnow.com, easy-mantra.com, facebook.com and sourceforge.net are not opening. Installed dual boot with Windows XP. Ubuntu 12.10 is 64-bit. Processor Intel Core i3, RAM 4 GB.

        $ cat /etc/resolv.conf
        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 127.0.1.1
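
    A hedged first test to separate DNS trouble from connectivity trouble (dig is in the dnsutils package; 127.0.1.1 is the local dnsmasq forwarder that resolv.conf points at):

        dig www.downrightnow.com @127.0.1.1    # through the local forwarder
        dig www.downrightnow.com @8.8.8.8      # straight to a public resolver

    If both return addresses but the pages still stall, the problem is more likely MTU or routing than name resolution.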

    Read the article

  • Some websites are not opening. What should I do? [closed]

    - by Jamal
    Some websites are not opening on my system. I am using Ubuntu 11.04. Earlier, when I was using Ubuntu 10.10, there was no such issue. I have tried Firefox as well as Chromium, and I am sure the issue is not with the browser: the same websites open perfectly under Windows. Google, Twitter and Ubuntu-related websites run perfectly. Other websites like www.downrightnow.com and easy-mantra.com are not opening. Installed Wubi on Windows 7 (32-bit). Ubuntu 12.10 is 64-bit. Processor Intel Core 2 Duo.

        $ cat /etc/resolv.conf
        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 127.0.1.1

    Read the article

  • Upgraded to Ubuntu 13.10 - Apache not able to start

    - by 0R10N
    I updated to Ubuntu 13.10 (from Ubuntu 13.04) last weekend, and now Apache is not able to start. It was working perfectly well until the upgrade, and I haven't changed anything myself. When I run a restart, this is what I get:

        apache2: Syntax error on line 260 of /etc/apache2/apache2.conf: Could not open configuration file /etc/apache2/conf.d/: No such file or directory

    So I created the directory, and then I get this:

        * Starting web server apache2
        * The apache2 configtest failed.
        Output of config test was:
        [Wed Oct 30 11:17:42.921934 2013] [proxy_html:notice] [pid 2496] AH01425: I18n support in mod_proxy_html requires mod_xml2enc. Without it, non-ASCII characters in proxied pages are likely to display incorrectly.
        AH00526: Syntax error on line 84 of /etc/apache2/apache2.conf: Invalid command 'LockFile', perhaps misspelled or defined by a module not included in the server configuration
        Action 'configtest' failed.
        The Apache error log may have more information.

    Thanks!
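
    Both errors are hallmarks of the Apache 2.2 to 2.4 jump that 13.10 ships: a 2.2-era apache2.conf survived the upgrade. A sketch of the two edits in /etc/apache2/apache2.conf (the directive names are standard 2.4 syntax):

        # LockFile ${APACHE_LOCK_DIR}/accept.lock   <- remove; 2.4 replaces it with:
        Mutex file:${APACHE_LOCK_DIR} default

        # Include conf.d/                           <- remove; 2.4 uses conf-enabled:
        IncludeOptional conf-enabled/*.conf

    Alternatively, restoring the packaged apache2.conf from the 13.10 apache2 package and re-applying local changes tends to be cleaner than patching the old file line by line.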

    Read the article

  • Ubuntu 14.04 LTS: HPLIP loses USB connection to HP LaserJet

    - by Gareth
    This is my first post, so please let me know if I have inadvertently broken any rules.

    Problem: There seems to be a problem with HPLIP and USB connections in Ubuntu 14.04 LTS. After upgrading I managed to get printing to work, but today it has broken.

    Initial issue (solved): After upgrading to Ubuntu 14.04 LTS, my HP LaserJet 1018 stopped printing (code=12). Looking through the forums, there are several issues with printing and HPLIP, so I was able to troubleshoot this. The steps I took were: reran HPdoctor; ran hp-check; uninstalled and installed the latest version of HPLIP (3.14.4); checked the USB connections with lsusb and lsusb -v; re-ran hp-check; removed the printer from HPLIP; re-ran hp-check; manually configured HPLIP to the printer with hp-setup -g <xxx:yyy>. And this worked: HPLIP was able to see the printer on USB, the test page printed, and it was happily working for a few weeks.

    Current issue, printer not working: Today my wife complains that the printer is not working. Although HPLIP reports the same error code and does not seem to see the printer, running lsusb can see it. Initially I thought this may be due to the USB device getting a new bus/device number after being turned on and off, so I went to repeat the steps above; at the moment I am still seeing an error in that HPLIP complains it cannot see the device:

        error: Device not found. Please make sure your printer is properly connected and powered-on.

    Current observations. lsusb output:

        Bus 002 Device 007: ID 03f0:4117 Hewlett-Packard LaserJet 1018

    sudo hp-check output:

        duan@duan-Lenovo-B550:~$ sudo hp-check
        [sudo] password for duan:
        Saving output in log file: /home/duan/hp-check.log
        HP Linux Imaging and Printing System (ver. 3.14.4)
        Dependency/Version Check Utility ver. 15.1
        Copyright (c) 2001-13 Hewlett-Packard Development Company, LP
        This software comes with ABSOLUTELY NO WARRANTY.
        This is free software, and you are welcome to distribute it under certain conditions. See COPYING file for more details.
        Note: hp-check can be run in three modes:
        1. Compile-time check mode (-c or --compile): Use this mode before compiling the HPLIP supplied tarball (.tar.gz or .run) to determine if the proper dependencies are installed to successfully compile HPLIP.
        2. Run-time check mode (-r or --run): Use this mode to determine if a distro supplied package (.deb, .rpm, etc) or an already built HPLIP supplied tarball has the proper dependencies installed to successfully run.
        3. Both compile- and run-time check mode (-b or --both) (Default): This mode will check both of the above cases (both compile- and run-time dependencies).
        [full output truncated]

    Output of hp-setup -g 002:007 (a dialog box also appears: "device not found please make sure your printer is properly connected and powered on"):

        duan@duan-Lenovo-B550:~$ sudo hp-setup -g 002:007
        [sudo] password for duan:
        HP Linux Imaging and Printing System (ver. 3.14.4)
        Printer/Fax Setup Utility ver. 9.0
        Copyright (c) 2001-13 Hewlett-Packard Development Company, LP
        This software comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to distribute it under certain conditions. See COPYING file for more details.
        hp-setup[18461]: debug: param=002:007
        hp-setup[18461]: debug: selected_device_name=None
        Fontconfig error: "/etc/fonts/conf.d/65-khmer.conf", line 14: out of memory
        Fontconfig error: "/etc/fonts/conf.d/65-khmer.conf", line 23: out of memory
        Fontconfig error: "/etc/fonts/conf.d/65-khmer.conf", line 32: out of memory
        hp-setup[18461]: debug: Sys.argv=['/usr/bin/hp-setup', '-g', '002:007'] printer_name=None param=002:007 jd_port=1 device_uri=None remove=False
        Searching for device...
        hp-setup[18461]: debug: Trying USB with bus=002 dev=007...
        hp-setup[18461]: debug: Not found.
        hp-setup[18461]: debug: Trying serial number 002:007
        hp-setup[18461]: debug: Probing bus: usb
        hp-setup[18461]: debug: Probing bus: par
        error: Device not found. Please make sure your printer is properly connected and powered-on.
        hp-setup[18461]: debug: Starting GUI loop.

    The USB lead works with the Windows 7 laptop, and the printer works with the Windows 7 laptop.

    Questions: Is this a bug with HPLIP, or an issue with the laptop/printer? Supplementary question: if it is a bug, what information is needed and where should it be sent? Any suggestions on how to get the printer to work correctly with Ubuntu 14.04 LTS / HPLIP 3.14.4 so that it stays working?
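
    One hedged way to re-bind HPLIP to wherever the printer now sits on the bus, using standard HPLIP tools; the 002:007 numbers come from the lsusb output above and will change across power cycles, so treat them as placeholders:

        lsusb | grep -i hewlett     # confirm the current bus:device pair first
        hp-makeuri 002:007          # prints the hp:/usb/... device URI for that pair
        hp-setup -i <device-uri>    # re-run setup interactively with the printed URI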

    Read the article

  • Screen brightness control not working on Lenovo T530

    - by Matt
    My brightness control doesn't work with a fresh install of 12.10 (brand-new laptop). It is set to the brightest setting when I boot up, and when I try to change it, I see the notification bar come up but the brightness doesn't actually change. I've tried all the solutions I could find around the Internet, but none of them work. Things I have tried include:

        Editing /sys/class/backlight/acpi_video0/brightness
        In /usr/share/X11/xorg.conf.d/10-brightness-control.conf: Option "RegistryDwords" "EnableBrightnessControl=1"
        In /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux acpi_backlight=vendor"

    There is no xorg.conf file in 12.10 that I have found, so the solutions that suggest editing that file don't do me a whole lot of good. I am currently using the Nouveau driver, but switching to the Nvidia proprietary drivers made no difference. Any other ideas? When is this bug going to be fixed? With all the reports I've come across, I would think it would get a lot of attention. Thanks.
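
    A sketch for checking whether the kernel backlight interface responds at all before blaming the desktop side (the acpi_video0 path is from the question; other drivers register their own directory under /sys/class/backlight/):

        ls /sys/class/backlight/                              # which backlight drivers registered
        cat /sys/class/backlight/acpi_video0/max_brightness
        echo 4 | sudo tee /sys/class/backlight/acpi_video0/brightness   # any value <= max

    If writing there does change the panel, the hardware path works and the issue is in how the desktop maps the keys; if not, it's a kernel/ACPI problem and the grub-parameter experiments are the right lever.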

    Read the article

  • What packages should I install in Ubuntu 12.04 to fulfill OpenGL requirements for using the nouveau driver?

    - by karolszk
    I am trying to switch from the nvidia driver to nouveau via this script:

        #!/bin/bash
        stop gdm
        rmmod nvidia
        sed -i "s/nouveau/nvidia/" /etc/modprobe.d/blacklist-nvidia-nouveau.conf
        update-alternatives --set gl_conf /usr/lib/mesa/ld.so.conf
        ldconfig
        modprobe nouveau
        cp /etc/X11/xorg.conf{.nouveau,}
        start gdm

    The driver is loaded and X starts, but compiz doesn't. In .xsession-errors I see:

        Compiz (opengl) - Fatal: Root visual is not a GL visual
        compiz (opengl) - Error: initScreen failed
        compiz (core) - Error: Couldn't activate plugin 'opengl'
        Compiz (opengl) - Fatal: Root visual is not a GL visual
        (the "Root visual" line repeats several more times)
        gnome-session[19075]: WARNING: App 'compiz.desktop' respawning too quickly
        gnome-session[19075]: WARNING: Application 'compiz.desktop' killed by signal
        gnome-session[19075]: WARNING: App 'compiz.desktop' respawning too quickly

    What am I doing wrong?
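
    For the question in the title, a hedged guess at the Mesa pieces compiz needs once nouveau is active (package names as they exist in the 12.04 archive; the gl_conf alternative name is taken from the script above):

        sudo apt-get install --reinstall libgl1-mesa-glx libgl1-mesa-dri libglapi-mesa
        sudo update-alternatives --config gl_conf    # verify mesa's ld.so.conf won, not nvidia's
        sudo ldconfig
        glxinfo | grep "direct rendering"            # glxinfo is in mesa-utils; should print "Yes"

    "Root visual is not a GL visual" usually means GLX is still binding against a leftover nvidia libGL, which is exactly what the gl_conf alternative controls.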

    Read the article

  • Why doesn't apache2 respect my envvars file?

    - by Avery Chan
    My envvars file has these lines in it:

        export APACHE_RUN_USER=www-data
        export APACHE_RUN_GROUP=www-data

    My apache2.conf has these lines in it:

        # These need to be set in /etc/apache2/envvars
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}

    But when I run apache2 -M I get this:

        apache2: bad user name ${APACHE_RUN_USER}

    A temporary fix is to hard-code www-data into my apache2.conf file. There was some speculation here that this was because some configuration script didn't replace the env vars correctly in my apache2.conf file. Regardless, how do I get apache2 to consult my envvars file? As another data point, this site seems to indicate that envvars is generated at build but read by apache2ctl at runtime, suggesting that this file isn't just poop left over by the build process.
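
    That last data point is the key one: the apache2 binary never reads envvars itself; the apache2ctl wrapper sources it first and then launches apache2. A sketch:

        apache2ctl -M    # wrapper sources /etc/apache2/envvars, then runs apache2

        # or, equivalently, load the variables by hand:
        source /etc/apache2/envvars && apache2 -M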

    Read the article

  • MacOSX VirtualHost: "You don't have permission to access / on this server" error

    - by David Casillas
    The Apache installation on MacOSX is running OK. I have tried to create a VirtualHost called test.local, but as soon as I uncomment the line Include /private/etc/apache2/extra/httpd-vhosts.conf from /private/etc/apache2/httpd.conf and try to access the test.local virtual host, I get the error "You don't have permission to access / on this server". The VirtualHost configuration in /private/etc/apache2/extra/httpd-vhosts.conf is:

        <VirtualHost *:80>
            ServerName test.local
            DocumentRoot "/Users/username/Sites/Test/public"
            <Directory "/Users/username/Sites/Test/public">
                Options Indexes FollowSymLinks Includes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I have also included the VirtualHost in the hosts file:

        127.0.0.1 test.local
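
    A first thing to check, sketched: Apache's user needs execute (traverse) permission on every directory above the DocumentRoot, and home directories are usually the link that's missing (the hosting walkthrough later on this page hit exactly this and fixed it the same way).

        chmod +x /Users/username /Users/username/Sites /Users/username/Sites/Test
        apachectl configtest && sudo apachectl restart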

    Read the article

  • Virtual Host under MacOSX not working

    - by David Casillas
    I have set up a virtual host for the MacOSX Apache installation. These are my steps:

    1. Edit /private/etc/apache2/httpd.conf, removing the comment from: Include /private/etc/apache2/extra/httpd-vhosts.conf
    2. Edit /private/etc/apache2/extra/httpd-vhosts.conf, adding:

        <VirtualHost *:80>
            ServerName test.local
            DocumentRoot "/Users/myusername/Sites/Test/public"
            <Directory "/Users/myusername/Sites/Test/public">
                Options Indexes FollowSymLinks Includes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    3. Edit /private/etc/hosts, adding: 127.0.0.7 test.local
    4. Restart Apache

    But the VirtualHost does not work. To further isolate the problem I checked the same configuration with MAMP, and the virtual host worked right there, so the configuration files should be fine. What can be wrong?
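
    One quick sanity check, sketched with stock macOS tools, is to confirm what test.local actually resolves to before digging into Apache (the hosts entry above is worth comparing against what the resolver returns):

        dscacheutil -q host -a name test.local    # shows the address the system resolver returns
        ping -c 1 test.local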

    Read the article

  • Installing old Loki games on 12.04 64-bit results in no audio

    - by FlabbergastedPickle
    All, here's an interesting problem. I followed the instructions provided online for installing Loki Games' Heroes of Might and Magic 3 (see http://www.swanson.ukfsn.org/loki/ and http://wtanaka.com/node/7641) and got it installed and patched to the latest version. However, every time I start it, regardless of whether pulseaudio is running, I get the following error:

        LD_LIBRARY_PATH=/usr/local/lib/Loki_Compat/ /usr/local/lib/Loki_Compat/ld-linux.so.2 /usr/local/games/Heroes3/heroes3.dynamic
        ALSA lib conf.c:3314:(snd_config_hooks_call) Cannot open shared library libasound_module_conf_pulse.so
        ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM default
        Couldn't open audio:

    My first sound card is the HDMI output and my second one is the actual sound card (HP DM1 running 12.04 64-bit with the latest updates). I did set up /etc/asound.conf as follows:

        pcm.!default {
            type hw
            card 1
        }
        ctl.!default {
            type hw
            card 1
        }

    So the default sound card should work OK. Between Shadowgrounds, which also stopped working, and this, it appears there may be some unfinished business/regressions in 32-bit support on 64-bit systems in 12.04. Any thoughts?
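
    The library that fails to load is the 32-bit ALSA-to-PulseAudio bridge, which the old 32-bit Loki binary pulls in through its own loader. A hedged sketch, assuming 12.04's multiarch packaging:

        sudo apt-get install libasound2-plugins:i386    # supplies the 32-bit libasound_module_conf_pulse.so

    If the private Loki_Compat loader doesn't search /usr/lib/i386-linux-gnu, the module may additionally need to be copied or symlinked into the Loki_Compat directory.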

    Read the article

  • Sound is not working correctly on Ubuntu 12.04

    - by Jeggy
    I know this is my own fault, but what I did was this: first I ran 'sudo apt-get remove pulseaudio', and then I ran 'sudo apt-get install pulseaudio'. Now the sound doesn't work properly, and the indicator doesn't work either; it's just grayed out. The shortcuts are not working either. Alsamixer is working, and this is the only way I can change the volume at the moment:

        jeggy@jeggy-XPS:~$ cat /proc/asound/cards
        0 [PCH ]: HDA-Intel - HDA Intel PCH
                  HDA Intel PCH at 0xf1c00000 irq 52
        jeggy@jeggy-XPS:~$ aplay -l
        **** List of PLAYBACK Hardware Devices ****
        ALSA lib conf.c:1686:(snd_config_load1) _toplevel_:11:0:Unexpected end of file
        ALSA lib conf.c:3406:(config_file_open) /etc/asound.conf may be old or corrupted: consider to remove or fix it
        /usr/bin/pulseaudio: error while loading shared libraries: libpulsecommon-1.1.so: cannot open shared object file: No such file or directory
        card 0: PCH [HDA Intel PCH], device 0: ALC665 Analog [ALC665 Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 1: ALC665 Digital [ALC665 Digital]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    VLC sound is not working either; I am getting this error:
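
    The aplay output itself names two separate problems: a truncated /etc/asound.conf and a missing libpulsecommon. A recovery sketch (package names from the 12.04 archive):

        sudo mv /etc/asound.conf /etc/asound.conf.bak    # ALSA says the file is corrupted at line 11
        sudo apt-get install --reinstall pulseaudio libpulse0 pulseaudio-utils
        pulseaudio --start                               # or simply log out and back in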

    Read the article

  • How to uninstall Sphinx server

    - by misterjinx
    Yesterday I installed Sphinx server (from source) and didn't think about configuring it properly, so I left the default installation directory (/usr/local). Today I realized that it created all of its directories inside that directory, instead of creating its own (as I had wrongly assumed). So today I started the installation again and explicitly specified the location to install to. After that, I removed the previous directories (bin, etc, var) that the first installation had created in /usr/local. Then I thought I'd give it a test. I created the .conf file and wanted to run the indexer. But now the indexer always tries to find the .conf file in the old location (and obviously finds nothing), and when I specify where to find the .conf file, it gives me errors for each of the configuration settings present. I know I did something wrong when I deleted those files manually, and that's why I'm asking if you have any ideas how to correctly uninstall Sphinx. Thank you.
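
    For reference, a source build has no 'make uninstall'; removing the installed files is the uninstall. A sketch for checking which binary and config are actually in play ('--config' is Sphinx's standard option; the new prefix path is a placeholder):

        which indexer                      # make sure the old /usr/local/bin/indexer isn't first in $PATH
        hash -r                            # clear the shell's cached command locations
        indexer --config /new/prefix/etc/sphinx.conf --all

    Per-setting errors when the file is found usually mean the config was written for a different Sphinx version than the binary parsing it.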

    Read the article

  • SSL issue and redirects from https to http

    - by Asghar
    I have a site, www.example.com, for which I purchased and installed an SSL cert, and it was working fine. I also have a subdomain, app.example.com, which was not on SSL. Both www.example.com and app.example.com are on the same IP address. Later we decided to put SSL only on app.example.com, so I configured SSL for app.example.com, and it worked fine. Now the issue is that Google has indexed my site as https://www.example.com/, and when users hit the site, an invalid-security warning is issued; when users click through the security warning, they are shown my app.example.com contents. Note: my SSL configuration files are in /etc/httpd/conf.d/ssl.conf. The contents of ssl.conf are below: http://pastebin.com/GCWhpQJq NOTE: I tried solutions in .htaccess, like 301 redirects etc., but none of those worked.

    Read the article

  • Ubuntu server apt-get says "(-5 - No address associated with hostname)"

    - by Srini
    I have an Ubuntu 12.04 server. Running sudo apt-get update on it produces errors like this:

        W: Failed to fetch http://au.archive.ubuntu.com/ubuntu/dists/precise-backports/main/binary-i386/Packages Something wicked happened resolving 'au.archive.ubuntu.com:http' (-5 - No address associated with hostname)

    I am able to ping all the other hosts on the network and also Google's DNS at 8.8.8.8, but I am unable to ping www.google.com. So I'm guessing something is wrong with my DNS setup, but I'm not sure what. I use a static IP, and my /etc/network/interfaces looks like this:

        auto eth0
        iface eth0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.0.255
        gateway 192.168.1.1
        #dns-nameserver 203.12.160.35 203.12.160.36
        #nameserver 203.12.160.35 203.12.160.36

    My /etc/resolv.conf and /etc/resolvconf/resolv.conf.d/base are both empty, and my /etc/resolvconf/resolv.conf.d/original says:

        nameserver 192.168.1.1

    Any help would be greatly appreciated. P.S. I've googled a bit and the common resolution is to switch to DHCP, which I don't want to do since this is my home server.
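
    An empty resolv.conf is consistent with both dns lines being commented out. A sketch of the resolvconf-era fix; note the working directive is plural, dns-nameservers (the servers shown are the gateway plus Google's as an example):

        iface eth0 inet static
            ...
            dns-nameservers 192.168.1.1 8.8.8.8

        sudo ifdown eth0 && sudo ifup eth0    # or: sudo resolvconf -u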

    Read the article

  • Software Center won't take into account my changed OpenID name: any idea?

    - by Pascal Av
    Failing to install IntelliJ from the Software Center, I realized my login is wrong in the /etc/apt/auth.conf entry that the install process generates. In this file I see my original OpenID, the one automatically generated when I signed up on Launchpad. It contains my last name, so I changed it. I purged the conf and binaries for Ubuntu One, reinstalled, deleted all listed "Devices" from the app and all "Applications" from Launchpad, deleted ~/.cache/software-center/, and rebooted, but still: when installing IntelliJ, the auth.conf file receives my original OpenID, not the modified one. The problem is that the commercial subscription for the IntelliJ private PPA uses my modified OpenID, so the authentication attempt fails. I can't remove or modify this subscription, even by changing my OpenID back in Launchpad. Any idea how to solve this?

    Read the article

  • How can I run everything as root

    - by Hermione
    I have dual-booted Lubuntu (with Windows XP), and every now and then I get asked for my password. How do I run everything as root and not get asked for a password again? Ideally I wanted to run nginx, but it has permission-denied issues:

        apathetic@ubuntu:~$ service nginx start
        Starting nginx: nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
        2012/08/03 20:06:25 [warn] 4762#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
        nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
        2012/08/03 20:06:25 [emerg] 4762#0: open() "/var/run/nginx.pid" failed (13: Permission denied)
        nginx: configuration file /etc/nginx/nginx.conf test failed
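
    Every message in that output is nginx running without root privileges. The narrow fix, sketched, is to elevate just this command rather than the whole session:

        sudo service nginx start    # root can open the log, the pid file, and honor the "user" directive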

    Read the article

  • Redirect traffic from 127.0.0.1 to 127.0.0.1 on port 53 to port 5300 with iptables

    - by Zagorax
    I'm running a local DNS server on port 5300 to develop a piece of software. I need my machine to use that DNS server, but I wasn't able to tell /etc/resolv.conf to check on a different port. I searched a bit on Google and didn't find a solution. I set 127.0.0.1 as the nameserver in /etc/resolv.conf; this is my whole /etc/resolv.conf:

        nameserver 127.0.0.1

    Could you please tell me how I can redirect outbound traffic on port 53 to another port? I tried the following, but it didn't work:

        iptables -t nat -A PREROUTING -p tcp --dport 53 -j DNAT --to 127.0.0.1:5300
        iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to 127.0.0.1:5300

    Here is the output of iptables -t nat -L -v -n (with suggested rules):

        Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in   out   source      destination
            0     0 REDIRECT   tcp  --  *    *     0.0.0.0/0   0.0.0.0/0    tcp dpt:53 redir ports 5300
            0     0 REDIRECT   udp  --  *    *     0.0.0.0/0   0.0.0.0/0    udp dpt:53 redir ports 5300

        Chain POSTROUTING (policy ACCEPT 302 packets, 19213 bytes)
         pkts bytes target     prot opt in   out   source      destination

        Chain OUTPUT (policy ACCEPT 302 packets, 19213 bytes)
         pkts bytes target     prot opt in   out   source      destination
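
    The zero packet counters on PREROUTING are the tell: locally generated packets never traverse PREROUTING; they go through the nat table's OUTPUT chain instead. A sketch:

        sudo iptables -t nat -A OUTPUT -p udp -d 127.0.0.1 --dport 53 -j REDIRECT --to-ports 5300
        sudo iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 53 -j REDIRECT --to-ports 5300
        dig example.com @127.0.0.1    # should now hit the server listening on 5300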

    Read the article

  • Set up Linux box for secure local hosting A-Z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line.

    The details: CentOS 5.5 x86_64; httpd: Apache/2.2.3; mysql: 5.0.77 (to be upgraded); php: 5.1 (to be upgraded).

    The requirements: SECURITY!! Secure file transfer, secure client access (SSL certs and CA), secure data storage, virtualhosts/multiple subdomains. Local email would be nice, but not critical.

    The steps:

    Download the latest CentOS DVD iso (torrent worked great for me).

    Install CentOS: While going through the install, I checked the Server Components option, thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.

    Basic config: Set up users, networking/IP address, etc. Yum update/upgrade.

    Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! Add the IUS repository to our package manager:

        cd /tmp
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
        rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
        yum list | grep -w \.ius\.  # list all the packages in the IUS repository; use this to find the PHP/MySQL versions and libraries you want to install

    Remove the old version of PHP and install the newer version from IUS:

        rpm -qa | grep php   # list all of the installed php packages we want to remove
        yum shell            # open an interactive yum shell
        remove php-common php-mysql php-cli   # remove installed PHP components
        install php53 php53-mysql php53-cli php53-common   # add packages you want
        transaction solve    # important!! checks for dependencies
        transaction run      # important!! does the actual installation of packages
        [control+d]          # exit yum shell
        php -v
        PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)

    Upgrade MySQL from the IUS repository:

        /etc/init.d/mysqld stop
        rpm -qa | grep mysql   # to see installed mysql packages
        yum shell
        remove mysql mysql-server   # remove installed MySQL components
        install mysql51 mysql51-server mysql51-devel
        transaction solve    # important!! checks for dependencies
        transaction run      # important!! does the actual installation of packages
        [control+d]          # exit yum shell
        service mysqld start
        mysql -v
        Server version: 5.1.42-ius Distributed by The IUS Community Project

    Upgrade instructions courtesy of the IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide

    Install rssh (restricted shell) to provide scp and sftp access without allowing ssh login:

        cd /tmp
        wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        useradd -m -d /home/dev -s /usr/bin/rssh dev
        passwd dev

    Edit /etc/rssh.conf to grant SFTP access to rssh users:

        vi /etc/rssh.conf
        # uncomment or add:
        allowscp
        allowsftp

    This allows me to connect to the machine via the SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html

    Set up virtual interfaces:

        ifconfig eth1:1 192.168.1.3 up   # start up the virtual interface
        cd /etc/sysconfig/network-scripts/
        cp ifcfg-eth1 ifcfg-eth1:1       # copy default script and match name to our virtual interface
        vi ifcfg-eth1:1                  # modify the eth1:1 script so it looks like this:

        DEVICE=eth1:1
        IPADDR=192.168.1.3
        NETMASK=255.255.255.0
        NETWORK=192.168.1.0
        ONBOOT=yes
        NAME=eth1:1

    Add more virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots, or when the network starts/restarts.

        service network restart
        Shutting down interface eth0:     [ OK ]
        Shutting down interface eth1:     [ OK ]
        Shutting down loopback interface: [ OK ]
        Bringing up loopback interface:   [ OK ]
        Bringing up interface eth0:       [ OK ]
        Bringing up interface eth1:       [ OK ]

        ping 192.168.1.3
        64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms

    Virtualhosts: In the rssh section above I added a user to use for SFTP. In this user's home directory, I created a folder called 'https'. This is where the documents for this site will live, so I need to add a virtualhost that will point to it. I will use the above virtual interface for this site (herein called dev.site.local). Edit /etc/httpd/conf/httpd.conf and add the following to the end:

        <VirtualHost 192.168.1.3:80>
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    I put a dummy index.html file in the https directory just to check everything out. I tried browsing to it, and was met with permission denied errors. The logs only gave an obscure reference to what was going on:

        [Mon May 17 14:57:11 2010] [error] [client 192.168.1.100] (13)Permission denied: access to /index.html denied

    I tried chmod 777 et al., but to no avail. It turns out I needed to chmod +x the https directory and its parent directories:

        chmod +x /home
        chmod +x /home/dev
        chmod +x /home/dev/https

    This solved that problem.

    DNS: I'm handling DNS via our local Windows Server 2003 box. However, the CentOS documentation for BIND can be found here: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-bind.html

    SSL: To get SSL working, I changed the following in httpd.conf:

        NameVirtualHost 192.168.1.3:443   # make sure this line is in httpd.conf

        <VirtualHost 192.168.1.3:443>     # change port to 443
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    Unfortunately, I kept getting (Error code: ssl_error_rx_record_too_long) errors when trying to access a page with SSL. As JamesHannah gracefully pointed out below, I had not set up the locations of the certs in httpd.conf, so the plain error page was being served on the SSL port as if it were the certificate, making the browser balk.

    So first, I needed to set up a CA and make certificate files. I found a great (if old) walkthrough on the process here: http://www.debian-administration.org/articles/284. Here are the relevant steps I took from that article:

        mkdir /home/CA
        cd /home/CA/
        mkdir newcerts private
        echo '01' > serial
        touch index.txt   # this and the above command are for the database that will keep track of certs

    Create an openssl.cnf file in the /home/CA/ dir and edit it per the walkthrough linked above. (For reference, my finished openssl.cnf file looked like this: http://pastebin.com/raw.php?i=hnZDij4T)

        openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out cacert.pem -days 3650 -config ./openssl.cnf
        # this creates the cacert.pem which gets distributed and imported to the browser(s)

    Modified openssl.cnf again per the walkthrough instructions.

        openssl req -new -nodes -out dev.req.pem -config ./openssl.cnf
        # generates the certificate request, and key.pem, which I renamed dev.key.pem

    Modified openssl.cnf again per the walkthrough instructions.

        openssl ca -out dev.cert.pem -config ./openssl.cnf -infiles dev.req.pem   # create and sign certificate
        cp dev.cert.pem /home/dev/certs/cert.pem
        cp dev.key.pem /home/dev/certs/key.pem

    I updated httpd.conf to reflect the certs and turn SSLEngine on:

        NameVirtualHost 192.168.1.3:443

        <VirtualHost 192.168.1.3:443>
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            SSLEngine on
            SSLCertificateFile /home/dev/certs/cert.pem
            SSLCertificateKeyFile /home/dev/certs/key.pem
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    Put the CA cacert.pem in a web-accessible place, and downloaded/imported it into my browser. Now I can visit https://dev.site.local with no errors or warnings.

    And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure SSL email would be appreciated.

    Read the article
