Search Results

Search found 6200 results on 248 pages for 'lib'.

  • rpm file conflict after alien conversion

    - by Zitrax
    I have a program for which I generate a .deb file. The .deb file works fine on the systems I have tried it on (it also passes lintian). Previously it worked to use alien to convert this to .rpm and install it on Suse. However, it is now about a year since I last tried, and when I install the alien-made rpm on Fedora 11 I get this error:

        file /usr/share/icons/default.kde from install of testpkg-0.2-2.i386 conflicts with file from package kdelibs3-3.5.10-13.fc11.1.i586

    Listing the content of the rpm file:

        $ rpm -qlp testpkg-0.2-2.i386.rpm
        /
        /usr
        /usr/games
        /usr/games/testpkg
        /usr/lib
        /usr/lib/libfmod-3.75.so
        /usr/share
        /usr/share/app-install
        /usr/share/app-install/icons
        /usr/share/app-install/icons/testpkg.png
        /usr/share/applications
        /usr/share/applications/testpkg.desktop
        /usr/share/doc
        /usr/share/doc/testpkg
        /usr/share/doc/testpkg/changelog.gz
        /usr/share/doc/testpkg/copyright
        /usr/share/games
        /usr/share/games/testpkg
        /usr/share/games/testpkg/images
        /usr/share/games/testpkg/images/bb.dat
        /usr/share/games/testpkg/images/bb_bg.dat
        /usr/share/games/testpkg/images/bubblemad_8x8.png
        /usr/share/games/testpkg/images/goldfont.png
        /usr/share/games/testpkg/lvl
        /usr/share/games/testpkg/lvl/lvl001.txt
        /usr/share/games/testpkg/lvl/lvl002.txt
        /usr/share/games/testpkg/lvl/lvl003.txt
        /usr/share/games/testpkg/lvl/lvl004.txt
        /usr/share/games/testpkg/lvl/lvl005.txt
        /usr/share/games/testpkg/lvl/lvl006.txt
        /usr/share/games/testpkg/lvl/lvl007.txt
        /usr/share/games/testpkg/music
        /usr/share/games/testpkg/music/alfa.it
        /usr/share/games/testpkg/music/beta.it
        /usr/share/games/testpkg/sounds
        /usr/share/games/testpkg/sounds/bounce.wav
        /usr/share/games/testpkg/sounds/click.wav
        /usr/share/games/testpkg/sounds/warning.wav
        /usr/share/icons
        /usr/share/icons/default.kde
        /usr/share/icons/default.kde/16x16
        /usr/share/icons/default.kde/16x16/apps
        /usr/share/icons/default.kde/16x16/apps/testpkg.png
        /usr/share/man
        /usr/share/man/man6
        /usr/share/man/man6/testpkg.6.gz

    Am I wrong to put the KDE icons in /usr/share/icons/default.kde, which seems to be a symbolic link? It is a symbolic link on both Kubuntu 9.10 and Fedora 11, though. It sounds like a common situation for the same directory to be needed by different packages, so why is it a conflict?
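
    A quick way to see the clash is to ask rpm who owns the path on the target box; on Fedora, /usr/share/icons/default.kde is a symlink owned by kdelibs3, so a package that ships it as a real directory collides with it. One hedged workaround (a sketch, not the only fix) is to ship the icon through the freedesktop hicolor fallback theme instead, so the package never claims the default.kde path at all; the debian staging path below is illustrative:

        # who owns the conflicting path on Fedora?
        rpm -qf /usr/share/icons/default.kde

        # sketch: install the icon under hicolor in the .deb build instead,
        # then reconvert with alien
        install -D -m 644 testpkg.png \
            debian/testpkg/usr/share/icons/hicolor/16x16/apps/testpkg.png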

  • Trying to run an ASP.NET MVC application using Mono on Apache with FastCGI

    - by Arda Xi
    I have a hosting account with DreamHost and I would like to use the same account to run ASP.NET applications. I have an application deployed in a subdomain, with a .htaccess containing a handler like this:

        # Define the FastCGI Mono launcher as an Apache handler and let
        # it manage this web-application (its files and subdirectories)
        SetHandler monoWrapper
        Action monoWrapper /home/arienh4/<domain>/cgi-bin/mono.fcgi virtual

    My mono.fcgi is set up as such:

        #!/bin/sh
        #umask 0077
        exec >>/home/arienh4/tmp/mono-fcgi.log
        exec 2>>/home/arienh4/tmp/mono-fcgi.err

        echo $(date +"[%F %T]") Starting fastcgi-mono-server2
        cd /
        chmod 0700 /home/arienh4/tmp/mono-fcgi.sock
        echo $$>/home/arienh4/tmp/mono-fcgi.pid
        # stdin is the socket handle

        export PATH="/home/arienh4/mono/bin:$PATH"
        export LD_LIBRARY_PATH="/home/arienh4/mono/lib:$LD_LIBRARY_PATH"
        export TMP="/home/arienh4/tmp"
        export MONO_SHARED_DIR="/home/arienh4/tmp"

        exec /home/arienh4/mono/bin/mono \
            /home/arienh4/mono/lib/mono/2.0/fastcgi-mono-server2.exe \
            /logfile=/home/arienh4/logs/fastcgi-mono-web.log /loglevels=All \
            /applications=/:/home/arienh4/<domain>

    I took this from the Mono site for CGI. I'm not sure if I'm doing it correctly, though. This code results in this error:

        Request exceeded the limit of 10 internal redirects due to probable
        configuration error. Use 'LimitInternalRecursion' to increase the
        limit if necessary. Use 'LogLevel debug' to get a backtrace.

    I have no idea what's causing this. As far as I can see, Mono isn't even hit (no log files are created).
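
    Since no log files appear at all, a reasonable first step is to rule the launcher itself in or out before touching Apache: check that it is executable and that it can actually start Mono when run by hand (a smoke test only; kill the process afterwards). Paths follow the question:

        chmod +x /home/arienh4/<domain>/cgi-bin/mono.fcgi
        sh /home/arienh4/<domain>/cgi-bin/mono.fcgi &
        tail /home/arienh4/tmp/mono-fcgi.err /home/arienh4/tmp/mono-fcgi.log
        kill %1

    If that works, the redirect loop points at the Apache side, for example the SetHandler also matching the mono.fcgi request that the Action itself generates.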

  • openvpn TCP/UDP slow SSH/SMB performance

    - by Petr Latal
    I have a question about strange behavior of my OpenVPN configuration on Debian lenny. I have two server configs (one proto tcp-server based and one proto udp based). ISP bandwidth is 7 Mbit/7 Mbit. When I use proto tcp-server, my server-side download rate is fine, around 6.4 Mbit/s, but the upload rate is about 3 Mbit/s. When I use proto udp, my server-side download rate is around 3 Mbit/s and the upload rate around 6.4 Mbit/s. I tried tuning the MTU, MSSFIX and cipher on/off on server and client configs to even out the rates, but found no solution. Here is the TCP-based SERVER config:

        mode server
        tls-server
        port 1194
        proto tcp-server
        dev tap0
        ifconfig 11.10.15.1 255.255.255.0
        ifconfig-pool 11.10.15.2 11.10.15.20 255.255.255.0
        push "route 192.168.1.0 255.255.255.0"
        push "dhcp-option DNS 192.168.1.200"
        push "route-gateway 11.10.15.1"
        push "dhcp-option WINS 192.168.1.200"
        route-up /etc/openvpn/routeup.sh
        duplicate-cn
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh2048.pem
        log-append /var/log/openvpn.log
        status /var/run/vpn.status 10
        user nobody
        group nogroup
        keepalive 10 120
        comp-lzo
        verb 3
        script-security 3
        plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth
        persist-tun
        persist-key
        mssfix
        cipher BF-CBC

    Here is the UDP-based SERVER config:

        port 1194
        proto udp
        dev tun0
        local xx.xx.xx.xx
        server 11.10.15.0 255.255.255.0
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh2048.pem
        log-append /var/log/openvpn.log
        status /var/run/vpn.status 10
        user nobody
        group nogroup
        keepalive 10 120
        comp-lzo
        verb 3
        duplicate-cn
        script-security 3
        plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth
        persist-tun
        persist-key
        tun-mtu 1500
        mssfix 1212
        client-to-client
        ifconfig-pool-persist ipp.txt

    Here is the TCP/UDP-based Windows CLIENT config:

        remote xx.xx.xx.xx
        --socket-flags TCP_NODELAY
        tls-client
        port 1194
        proto tcp-client
        #proto udp
        dev tap
        #dev tun
        pull
        ca ca.crt
        cert latis.crt
        key latis.key
        mute 0
        comp-lzo adaptive
        verb 3
        resolv-retry infinite
        nobind
        persist-key
        auth-user-pass
        auth-nocache
        script-security 2
        mssfix
        cipher BF-CBC
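
    To pin down whether the asymmetry comes from the tunnel transport or from MTU/MSS clamping, it can help to measure each direction independently through the tunnel with plain iperf (assuming it can be installed on both ends; 11.10.15.1 is the server's tunnel address from the config above):

        iperf -s                        # on the VPN server
        iperf -c 11.10.15.1 -t 30       # on the client: client -> server
        iperf -c 11.10.15.1 -t 30 -r    # then the reverse direction as well

    Comparing those numbers per protocol should show whether one direction suffers from TCP-over-TCP behaviour or from the mssfix/MTU settings only on one side.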

  • fsck on LVM snapshots

    - by Alpha01
    I'm trying to do some file system checks using LVM snapshots of our logical volumes to see if any of them have dirty file systems. The problem is that our LVM has only one volume group, with no available space. I was able to do fscks on some of the logical volumes using a loopback file system. My question is: is it possible to create a 200GB loopback file system and save it on the same partition/logical volume that I'll be taking a snapshot of? Is LVM smart enough not to take a snapshot copy of the actual snapshot?

        [root@server z]# vgdisplay
          --- Volume group ---
          VG Name               Web2-Vol
          System ID
          Format                lvm2
          Metadata Areas        1
          Metadata Sequence No  29
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                6
          Open LV               6
          Max PV                0
          Cur PV                1
          Act PV                1
          VG Size               544.73 GB
          PE Size               4.00 MB
          Total PE              139450
          Alloc PE / Size       139450 / 544.73 GB
          Free  PE / Size       0 / 0
          VG UUID               BrVwNz-h1IO-ZETA-MeIf-1yq7-fHpn-fwMTcV

        [root@server z]# df -h
        Filesystem                             Size  Used Avail Use% Mounted on
        /dev/sda2                              9.7G  3.6G  5.6G  40% /
        /dev/sda1                              251M   29M  210M  12% /boot
        /dev/mapper/Web2--Vol-var               12G  1.1G   11G  10% /var
        /dev/mapper/Web2--Vol-var--spool        12G  184M   12G   2% /var/spool
        /dev/mapper/Web2--Vol-var--lib--mysql   30G   15G   14G  52% /var/lib/mysql
        /dev/mapper/Web2--Vol-usr               13G  3.3G  8.9G  27% /usr
        /dev/mapper/Web2--Vol-z                468G  197G  267G  43% /z
        /dev/mapper/Web2--Vol-tmp              3.0G   76M  2.8G   3% /tmp
        tmpfs                                  7.9G   92K  7.9G   1% /dev/shm

    The logical volume in question is /dev/mapper/Web2--Vol-z. I'm afraid that if I create the loopback file system in /dev/mapper/Web2--Vol-z and take a snapshot of it, the disk usage will triple, running out of available disk space.
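
    For what it's worth, a snapshot only copies blocks that change after it is created, so it would not duplicate the pre-existing loop file; but writes into a loop file that lives on the origin LV generate copy-on-write traffic into a snapshot that is itself backed by that file, which is circular and risky. A safer hedged sketch is to back the snapshot space with a loop device on a filesystem outside the LV being checked (sizes and paths below are illustrative):

        dd if=/dev/zero of=/var/snap-pv.img bs=1M count=10240   # 10G backing file
        losetup /dev/loop0 /var/snap-pv.img
        pvcreate /dev/loop0
        vgextend Web2-Vol /dev/loop0
        lvcreate -s -L 9G -n z-snap /dev/Web2-Vol/z
        fsck -n /dev/Web2-Vol/z-snap
        lvremove /dev/Web2-Vol/z-snap
        vgreduce Web2-Vol /dev/loop0
        losetup -d /dev/loop0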

  • SSL certificates work fine from command line but fail in script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:

        #!/bin/bash
        echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

    When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server, which executes the same script, I get the following error:

        nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
        nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
        Error with certificate at depth: 0
         issuer  = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
         subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
         err 20: unable to get local issuer certificate
        Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        . . . message not sent.

    I must be missing some configuration, any ideas?
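
    The first two lines of the error are a clue: under the CI server, nail is picking up the bundled Bitnami OpenSSL (via LD_LIBRARY_PATH), and in that environment it evidently cannot find the local CA store. Two hedged things to try, a sketch rather than a definitive fix, and the CA bundle path varies by distro:

        # 1) run the script with the Bitnami library path masked out:
        LD_LIBRARY_PATH= sh build-worked

        # 2) or, in ~/.mailrc, point nail at the system CA bundle
        #    (heirloom mailx/nail option):
        #      set ssl-ca-file=/etc/ssl/certs/ca-certificates.crt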

  • [Zend Framework - Ubuntu 10.04 - LAMP - First Project] I get a 500 error on http://localhost/zftutorial/public

    - by meyosef
    Hi, I am new to Zend and LAMP. My software: Zend Framework, Ubuntu 10.04, LAMP. I made my first Zend project with Zend tool (following this tutorial: http://akrabat.com/wp-content/uploads/Getting-Started-with-Zend-Framework.pdf). But when I go to http://localhost/zftutorial/public I get a 500 error. My $ dir -l of zftutorial:

        drwxr-xr-x 6 root root 4096 2010-06-01 23:54 application
        drwxr-xr-x 2 root root 4096 2010-06-01 23:54 docs
        drwxr-xr-x 3 root root 4096 2010-06-02 00:23 library
        drwxr-xr-x 3 root root 4096 2010-06-02 00:00 nbproject
        drwxr-xr-x 2 root root 4096 2010-06-01 23:54 public
        drwxr-xr-x 4 root root 4096 2010-06-01 23:54 tests

    My /etc/apache2/sites-available/default:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Thanks,
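
    With a Zend Framework skeleton, a 500 at /public very often means the .htaccess rewrite rules are being read (AllowOverride All is set above) while mod_rewrite is not enabled. A quick hedged check, and the exact cause will be in the error log either way:

        sudo a2enmod rewrite
        sudo /etc/init.d/apache2 restart
        tail -n 20 /var/log/apache2/error.log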

  • mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect

    - by photon
    While trying to install MySQL 5.5 community edition on my Ubuntu 10.04 by compiling the source code, I ran into the following problem:

        $ fg % 1
        sudo ../bin/mysqld_safe --basedir=/usr/local/mysql_community_5.5/data --user=mysql --defaults-file=/etc/my.cnf
        [sudo] password for linnan:
        Sorry, try again.
        [sudo] password for linnan:
        121023 09:26:21 mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect.
        Internal program error (non-fatal): unknown logging method '/usr/local/mysql_community_5.5/log/mysql.log'
        121023 09:26:21 mysqld_safe Logging to '/var/log/mysql/error.log'.
        Internal program error (non-fatal): unknown logging method '/usr/local/mysql_community_5.5/log/mysql.log'
        121023 09:26:22 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121023 09:26:23 mysqld_safe mysqld from pid file /var/lib/mysql/ubuntu.pid ended

    It seems the problem is related to the log configuration. I've noticed a bugfix related to this problem: http://bugs.mysql.com/bug.php?id=50083 But I still have no idea how to solve it. The relevant content of /etc/my.cnf:

        [mysqld]
        port            = 3306
        socket          = /tmp/mysql.sock
        skip-external-locking
        key_buffer_size = 384M
        max_allowed_packet = 1M
        table_open_cache = 512
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 32M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 8
        character-set-server=utf8

        [mysqld-safe]
        basedir=/usr/local/mysql_community_5.5
        datadir=/usr/local/mysql_community_5.5/data

    And /etc/mysql/conf.d/mysqld_safe_syslog.cnf:

        [mysqld_safe]
        syslog
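
    Two details in those files stand out. The section header in /etc/my.cnf is spelled [mysqld-safe] with a hyphen, which mysqld_safe does not read (its option group is [mysqld_safe], with an underscore), and mysqld_safe_syslog.cnf turns syslog on, which conflicts with any file log destination. A hedged sketch of the my.cnf section, one possible resolution rather than the definitive one:

        [mysqld_safe]
        basedir=/usr/local/mysql_community_5.5
        datadir=/usr/local/mysql_community_5.5/data
        log-error=/var/log/mysql/error.log
        skip-syslog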

  • Hudson Mercurial checkout throws exception on Debian

    - by Jack
    I'm trying to configure Hudson to check out my site's sources from Mercurial, but it throws an exception. The /var/lib/hudson/jobs/jobname directory does exist, and I can create a workspace directory in there (even after su hudson), but as soon as I run the Hudson job again this directory disappears and the job ends with the same error:

        java.io.IOException: Cannot run program "hg" (in directory "/var/lib/hudson/jobs/jobname/workspace"): java.io.IOException: error=2, No such file or directory
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
            at hudson.Proc$LocalProc.<init>(Proc.java:192)
            at hudson.Proc$LocalProc.<init>(Proc.java:164)
            at hudson.Launcher$LocalLauncher.launch(Launcher.java:639)
            at hudson.Launcher$ProcStarter.start(Launcher.java:274)
            at hudson.Launcher$ProcStarter.join(Launcher.java:281)
            at hudson.plugins.mercurial.MercurialSCM.joinWithPossibleTimeout(MercurialSCM.java:298)
            at hudson.plugins.mercurial.HgExe.popen(HgExe.java:191)
            at hudson.plugins.mercurial.HgExe.tip(HgExe.java:171)
            at hudson.plugins.mercurial.MercurialSCM.calcRevisionsFromBuild(MercurialSCM.java:254)
            at hudson.scm.SCM._calcRevisionsFromBuild(SCM.java:304)
            at hudson.model.AbstractProject.calcPollingBaseline(AbstractProject.java:1183)
            at hudson.model.AbstractProject.checkout(AbstractProject.java:1172)
            at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:499)
            at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:415)
            at hudson.model.Run.run(Run.java:1362)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:145)
        Caused by: java.io.IOException: java.io.IOException: error=2, No such file or directory
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
            at java.lang.ProcessImpl.start(ProcessImpl.java:65)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)

    Running on Debian 6.0.1. I wonder if anyone has run into this before, and hopefully solved it?
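
    The error=2 refers to the "hg" program itself: the Hudson daemon cannot find a Mercurial binary on its PATH, which is typically much shorter than an interactive shell's. A hedged check from the Hudson user's point of view:

        sudo -u hudson sh -c 'echo $PATH; which hg; hg version'
        sudo apt-get install mercurial    # if it is simply not installed

    If hg lives somewhere non-standard, the Mercurial plugin's installation setting can also be pointed at the absolute path to hg.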

  • "chown mysql:mysql /data/tmp" command

    - by Mellon
    I am on a Linux Ubuntu machine with MySQL installed. On such a machine, I saw some people run the following:

        sudo chown mysql:mysql /data/tmp

    I get confused. I know the meaning of the above command: change the owner of /data/tmp to user 'mysql' and its group to the 'mysql' group. But (my questions):

    1. Why would one run the above command? If I create a table in the my_db database, by default the .frm, .MYD and .MYI files (data files) are created automatically by MySQL under /var/lib/mysql/my_db/. So does the above command change the default MySQL data directory to /data/tmp/ instead of /var/lib/mysql/my_db/? Basically, I would like to know the purpose and effect of the above command (preferably with examples).

    2. Where do the 'mysql' owner and group come from? Does the installation of MySQL on a Linux machine automatically create the 'mysql' user and group, or do people need to create a mysql account for the machine manually?
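
    On the second question, a quick way to see for yourself is below: on Debian/Ubuntu the MySQL package creates a system user and group as part of installation. And chown itself never changes any MySQL setting; it only matters because mysqld runs as that user and must be able to write wherever my.cnf points it (for example a relocated tmpdir or datadir):

        getent passwd mysql    # the account the package created
        getent group mysql
        my_print_defaults mysqld | grep -E 'datadir|tmpdir'   # where mysqld is told to write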

  • Apache2: not defined domains directing to the same virtual host

    - by rafaame
    I have Apache2 configured on a Debian box with virtual hosts, and several domains pointing to the box's IP address. The domains whose virtual hosts are configured work perfectly. But if I type into the browser a domain that points to the box but whose virtual host is not configured, I land on one of the other domains' virtual hosts (not a random one, always the same) and I don't know why it's that one. The correct behaviour would be for domains that are not configured as virtual hosts to return a hostname error or something, right? Does someone know how to fix the problem? One of my virtual host config files:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName dl.domain.com
            DocumentRoot /var/www/dl.domain.com/public_html/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/dl.domain.com/public_html/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    My apache2.conf: http://www.speedyshare.com/files/29107024/apache2.conf Thanks for the help
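
    This is documented Apache behaviour rather than a bug: for a Host header that matches no ServerName or ServerAlias, Apache serves the first name-based vhost loaded for that address (with sites-enabled, the alphabetically first file), which is why it is always the same one. A hedged sketch of an explicit catch-all, named so it loads first, that refuses unknown hosts (the file name and response are illustrative):

        # /etc/apache2/sites-available/000-catchall
        <VirtualHost *:80>
            ServerName catchall.invalid
            DocumentRoot /var/www/catchall
            # or simply deny everything:
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>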

  • Openconnect for Cisco VPN doesn't recognize private key file - asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag

    - by Alexander Skwar
    I'm trying to make my Synology DS212 NAS box also act as a VPN gateway to my company's VPN. Sadly, they only use Cisco ASA, and to complicate things further, we have to use personal certificates (which is of course more secure, but more complicated to get going…). So I compiled OpenConnect v4.06 from http://www.infradead.org/openconnect/. As a very basic test, I tried to build a connection by manually invoking openconnect, passing along the key and cert files, like so:

        /lib/ld-linux.so.3 --library-path /opt/lib \
            /opt/openconnect/sbin/openconnect \
            --certificate=$VPN_CFG/alexander.crt \
            --sslkey=$VPN_CFG/alexander.key \
            --cafile=$VPN_CFG/Company_VPN_CA.crt \
            --user=alexander --verbose <ip>:443

    It fails :(

        Attempting to connect to <ip>:443
        Using certificate file $VPN_CFG/alexander.crt
        Using client certificate '/[email protected]/OU=Company VPN'
        5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315:
        Loading private key failed (see above errors)
        Loading certificate failed. Aborting.
        Failed to open HTTPS connection to <ip>
        Failed to obtain WebVPN cookie

    When I run the same command with the same cert/key files on an Ubuntu 12.04 box, it works:

        openconnect \
            --certificate=$VPN_CFG/alexander.crt \
            --sslkey=$VPN_CFG/alexander.key \
            --cafile=$VPN_CFG/Company_VPN_CA.crt \
            --user=alexander --verbose <ip>:443

        Attempting to connect to <ip>:443
        Using certificate file $VPN_CFG/alexander.crt
        Extra cert from cafile: '/CN=Company AG VPN CA/O=Company AG/L=Zurich/ST=ZH/C=CH'
        SSL negotiation with <ip>
        Server certificate verify failed: self signed certificate
        Certificate from VPN server "<ip>" failed verification.
        Reason: self signed certificate
        Enter 'yes' to accept, 'no' to abort; anything else to view: yes
        Connected to HTTPS on <ip>
        GET https://<ip>/
        […]

    Well… The error on the NAS is this:

        5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315:

    Any ideas what's causing this? On the Syno I use OpenConnect 4.06. On Ubuntu, I also compiled OpenConnect 4.06 and installed it to a custom location. Thanks, Alexander
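
    That ASN.1 "wrong tag" error comes from the SSL library on the NAS failing to parse the private key file, not from the connection itself. Since the two boxes link against different crypto builds, one hedged thing to try is re-encoding the key into the traditional PEM RSA format, which even old OpenSSL builds parse, and pointing --sslkey at the converted file:

        # converts a PKCS#8 ("BEGIN PRIVATE KEY") PEM into the
        # traditional "BEGIN RSA PRIVATE KEY" encoding
        openssl rsa -in alexander.key -out alexander-rsa.key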

  • How do I fix issue causing "incomplete startup packet" log message trying to implement replication in Postgresql?

    - by colour me brad
    I've got two cloud servers running Ubuntu 13.04 and PostgreSQL 9.2. I've primarily used this blog post to aid me in setting things up. However, to do the initial database dump to the slave I'm using the pg_start_backup/pg_stop_backup strategy used in this other blog post. I've read through the docs and the postgres wikis as well. I ran into several problems I was able to solve, but I can't get past this wretched "the database is starting up" failure. I'm not sure if seeing "cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory" after "consistent recovery state reached" is normal or the first sign of a problem. The searching I've done on "the database is starting up" and "incomplete startup packet" tells me that something is sending empty TCP packets to the slave. The only thing that even knows about the slave is the master, so I'm not sure why it's sending empty packets... Has anyone worked with this and have an idea what might be going wrong? The postgres log on the slave looks like so:

        2013-08-26 13:01:38 CDT LOG: entering standby mode
        2013-08-26 13:01:38 CDT LOG: restored log file "000000010000000000000039" from archive
        2013-08-26 13:01:38 CDT LOG: incomplete startup packet
        2013-08-26 13:01:39 CDT LOG: redo starts at 0/39000020
        2013-08-26 13:01:39 CDT LOG: consistent recovery state reached at 0/390000E0
        cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory
        2013-08-26 13:01:39 CDT LOG: streaming replication successfully connected to primary
        2013-08-26 13:01:39 CDT FATAL: the database system is starting up
        2013-08-26 13:01:39 CDT FATAL: the database system is starting up
        2013-08-26 13:01:40 CDT FATAL: the database system is starting up
        2013-08-26 13:01:40 CDT FATAL: the database system is starting up
        2013-08-26 13:01:41 CDT FATAL: the database system is starting up
        2013-08-26 13:01:42 CDT FATAL: the database system is starting up
        2013-08-26 13:01:42 CDT FATAL: the database system is starting up
        2013-08-26 13:01:43 CDT FATAL: the database system is starting up
        2013-08-26 13:01:43 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT LOG: incomplete startup packet
        2013-08-26 13:03:27 CDT FATAL: the database system is starting up
        2013-08-26 13:03:27 CDT FATAL: the database system is starting up
        2013-08-26 13:03:30 CDT FATAL: the database system is starting up
        2013-08-26 13:03:30 CDT FATAL: the database system is starting up

    thanks! brad
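
    Two notes that may narrow this down (hedged, based on how 9.2 standbys behave): "incomplete startup packet" is what a connection that opens the port and sends nothing looks like (port probes, monitoring checks) and is usually harmless; "FATAL: the database system is starting up" is the message a standby gives ordinary client connections when it is not open for read-only queries. With streaming replication already connected, the first thing worth checking is that hot standby is actually enabled on the slave (it defaults to off in 9.2):

        # postgresql.conf on the slave
        hot_standby = on
        # then restart and watch the log for:
        #   LOG:  database system is ready to accept read only connections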

  • Newbie: get access privilege

    - by Mellon
    I am a newbie on a Linux Ubuntu machine. I am logged in to Ubuntu with username student. Some directories allow only the root user to access them, for example /var/lib/mysql (I know I can use sudo to access them, but that is not what I want). If I want the access privilege on those directories with the student account, can I run the following command:

        chown student: PATH_TO_ROOT_USER_PRIVILEGED_DIR

    and after that access the directory using my own account? Am I right? If I am right, will the root user lose its access privilege because I changed it to the student user? If I am wrong, please tell me the right solution. P.S. Please don't be concerned about what I am going to do with the /var/lib/mysql directory; that is only my example. As mentioned above, I mean generally, for those directories which only root may access: can I use chown to change the access privilege, and will root then lose access because of the change made by chown? I just want to know the effect of chown.
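
    For the record, root is not subject to normal permission checks, so it keeps access no matter who owns a directory; the chown above only adds access for student (and may break the service that owns the files, which is why it is rarely advisable on system directories). A gentler sketch that grants one user read access without changing ownership at all, using POSIX ACLs:

        sudo setfacl -R -m u:student:rX /var/lib/mysql
        getfacl /var/lib/mysql                          # inspect the result
        sudo setfacl -R -x u:student /var/lib/mysql     # undo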

  • Birt on Tomcat unable to find JARs

    - by LostInTheWoods
    First, my setup:

        BiRT Runtime: 3.7.2
        Ubuntu 10.04
        Tomcat 6
        Sun Java 1.6.0

    I have a jar file I want to deploy onto the Tomcat server so it is usable by the runtime, so I placed the jar file in /var/lib/tomcat6/webapps/birt/WEB-INF/lib. As I understand it, this is the default location for JAR files that are going to be used by a BiRT report. But the jar file is not accessible by the report that is trying to call it. In the BiRT logs I see:

        Error evaluating Javascript expression. Script engine error: ReferenceError: "DynDSinfo" is not defined. (/report/data-sources/oda-data-source[@id="54"]/method[@name="beforeOpen"]#20)
        Script source: /report/data-sources/oda-data-source[@id="54"]/method[@name="beforeOpen"], line: 0, text: __bm_beforeOpen()
        org.eclipse.birt.data.engine.core.DataException: Fail to execute script in function __bm_beforeOpen(). Source:

    "DynDSinfo" is the class I am trying to reference... and now for the kicker: this works fine on Tomcat 6 on Windows 7. The same files in the same places. So is there some additional configuration or some environmental variable that needs to be set, or something different on the Linux (Ubuntu) platform? All help or ideas gratefully received, Stephen
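
    Two platform differences are worth ruling out before anything exotic, sketched below: Linux filesystems are case-sensitive (a jar or directory name that matched on Windows may not match here), and the tomcat6 user must be able to read the jar. The jar name below is illustrative:

        # can the tomcat6 user see and read the jar, and does it contain the class?
        sudo -u tomcat6 ls -l /var/lib/tomcat6/webapps/birt/WEB-INF/lib/
        unzip -l /var/lib/tomcat6/webapps/birt/WEB-INF/lib/your-ds.jar | grep -i DynDSinfo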

  • nginx does not use variables set in /etc/environment on system reboot, but does when restarted from shell

    - by Dave Nolan
    I have a Rails app running on nginx/passenger. It restarts happily in a shell using sudo /etc/init.d/nginx stop|start|restart. But Passenger throws an error when the system is rebooted: "Missing the Rails #{version} gem". GEM_HOME and GEM_PATH are both set in /etc/environment, so surely they would be available to all processes during reboot?

    /etc/environment:

        PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11:/usr/games"
        GEM_HOME=/var/lib/gems/1.8
        GEM_PATH=/var/lib/gems/1.8

    /etc/init.d/nginx:

        #! /bin/sh

        ### BEGIN INIT INFO
        # Provides:          nginx
        # Required-Start:    $all
        # Required-Stop:     $all
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: starts the nginx web server
        # Description:       starts nginx using start-stop-daemon
        ### END INIT INFO

        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        DAEMON=/opt/nginx/sbin/nginx
        NAME=nginx
        DESC=nginx

        test -x $DAEMON || exit 0

        # Include nginx defaults if available
        if [ -f /etc/default/nginx ] ; then
            . /etc/default/nginx
        fi

        set -e

        case "$1" in
          start)
            echo -n "Starting $DESC: "
            start-stop-daemon --start --quiet --pidfile /var/log/nginx/$NAME.pid \
                --exec $DAEMON -- $DAEMON_OPTS
            echo "$NAME."
            ;;
          stop)
            echo -n "Stopping $DESC: "
            start-stop-daemon --stop --quiet --pidfile /var/log/nginx/$NAME.pid \
                --exec $DAEMON
            echo "$NAME."
            ;;
          restart|force-reload)
            echo -n "Restarting $DESC: "
            start-stop-daemon --stop --quiet --pidfile \
                /var/log/nginx/$NAME.pid --exec $DAEMON
            sleep 1
            start-stop-daemon --start --quiet --pidfile \
                /var/log/nginx/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS
            echo "$NAME."
            ;;
          reload)
            echo -n "Reloading $DESC configuration: "
            start-stop-daemon --stop --signal HUP --quiet --pidfile /var/log/nginx/$NAME.pid \
                --exec $DAEMON
            echo "$NAME."
            ;;
          *)
            N=/etc/init.d/$NAME
            echo "Usage: $N {start|stop|restart|force-reload}" >&2
            exit 1
            ;;
        esac

        exit 0

    Version:

        $ /opt/nginx/sbin/nginx -v
        nginx version: nginx/0.7.67

    Ubuntu lucid
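
    The catch is that /etc/environment is read by PAM at login, so it applies to your shell session (and hence a manual restart) but not to init scripts run at boot, which explains the difference. Since the script above already sources /etc/default/nginx, a hedged fix is to export the gem variables there:

        # /etc/default/nginx (sourced by the init script at boot)
        export GEM_HOME=/var/lib/gems/1.8
        export GEM_PATH=/var/lib/gems/1.8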

  • Linux can't find file that exists

    - by Joe
    I'm trying to get Google's Dart language up and running, but it errors when running dart2js. I'm running Arch Linux and I installed dart-sdk from the AUR. Some relevant terminal output lies below.

        % dart2js main.dart
        /usr/local/bin/dart2js: line 7: /usr/local/bin/dart: No such file or directory

        % cat /usr/local/bin/dart2js
        #!/bin/sh
        # Copyright (c) 2012, the Dart project authors. Please see the AUTHORS file
        # for details. All rights reserved. Use of this source code is governed by a
        # BSD-style license that can be found in the LICENSE file.

        BIN_DIR=`dirname $0`
        exec $BIN_DIR/dart --allow_string_plus=false $BIN_DIR/../lib/dart2js/lib/compiler/implementation/dart2js.dart "$@"

        % file /usr/local/bin/dart
        /usr/local/bin/dart: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, BuildID[sha1]=0x27fe166ca015c1adfeaf3a6f9c018e7d7af46d9f, stripped

        % ls -alh /usr/local/bin
        total 4.9M
        drwxr-xr-x  2 root root 4.0K Jun 10 22:51 .
        drwxr-xr-x 12 root root 4.0K Jun 10 22:51 ..
        -rwxr-xr-x  1 root root 422K May 10 22:41 cargo
        -rwxr-xr-x  1 root root 2.7M Jun 10 22:50 dart
        -rwxr-xr-x  1 root root  360 Jun  6 16:20 dart2js
        -rwxr-xr-x  1 root root  176 Jun  6 16:20 pub
        -rwxr-xr-x  1 root root 166K May 10 22:41 rustc
        -rwxr-xr-x  1 root root 1.6M May 10 22:41 rustdoc

        % uname -rm
        3.3.7-1-ARCH x86_64

    Could it be because I'm running a 64-bit OS and the dart binary is 32-bit?
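
    That is exactly the classic symptom: for a 32-bit ELF on a 64-bit system without a 32-bit loader, the kernel cannot find /lib/ld-linux.so.2, and the shell reports "No such file or directory" for the binary itself even though it plainly exists. A hedged Arch sketch (requires the [multilib] repository to be enabled in /etc/pacman.conf):

        ldd /usr/local/bin/dart      # "not a dynamic executable" hints at a missing loader
        sudo pacman -S lib32-glibc   # installs /lib/ld-linux.so.2 and the 32-bit libc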

  • Hiera can't find puppet environment

    - by quickshiftin
    I'm testing out hiera and hitting a snag on the hierarchy configuration. What I have is extremely simple; the part that isn't working is the specification of hiera datadir files based on environment. Here's the config file (/etc/hiera.yaml) I'm trying:

        ---
        :backends:
          - yaml
        :logger: console
        :hierarchy:
          - "%{::environment}"
        :yaml:
          :datadir: /var/lib/hiera

    Now, I have a file /var/lib/hiera/development.yaml:

        blah: meh

    When I run hiera, it's not finding the file or the value:

        $ hiera -d blah
        DEBUG: Fri Oct 25 15:50:52 -0600 2013: Hiera YAML backend starting
        DEBUG: Fri Oct 25 15:50:52 -0600 2013: Looking up blah in YAML backend
        nil

    I've verified this agent is configured for development:

        $ sudo puppet agent --configprint environment
        development

    Now let me prove hiera is capable of finding something; a change to the hiera.yaml file:

        :hierarchy:
          - development

    And now hiera finds the file and the value:

        $ hiera -d blah
        DEBUG: Fri Oct 25 15:53:25 -0600 2013: Hiera YAML backend starting
        DEBUG: Fri Oct 25 15:53:25 -0600 2013: Looking up blah in YAML backend
        DEBUG: Fri Oct 25 15:53:25 -0600 2013: Looking for data source development
        DEBUG: Fri Oct 25 15:53:25 -0600 2013: Found blah in development
        meh

    So why isn't it working with the dynamic environment configuration? I got that straight from the documentation. Note, I have tried running the hiera command via sudo with no change in the result.
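
    One likely explanation: the standalone hiera CLI has no Puppet facts or scope, so %{::environment} interpolates to an empty string and no data source matches, while inside a real Puppet run the variable would be populated. The CLI lets you supply scope values by hand, which is a quick way to test the dynamic hierarchy:

        hiera -d blah ::environment=development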

  • localhost won't load after adding config data to httpd

    - by OldWest
    I am not very experienced with configuring httpd, and I am following a tutorial to view my site with a domain name under localhost. My localhost just blanks out, and my Apache services won't restart. I checked all of my paths and they are correct. I am editing the windows/system32/drivers/etc/hosts file and my Apache httpd file. This is what I am putting in my hosts file:

        127.0.0.1 www.cars_v1.0.com.localhost

    And in the footer of my httpd file I am putting this:

        <VirtualHost 127.0.0.1:80>
            ServerName www.cars_v1.0.com.localhost
            DocumentRoot "C:\wamp\www\symfony\cars_v1.0\web"
            DirectoryIndex index.php
            <Directory "C:\wamp\www\symfony\cars_v1.0\web">
                AllowOverride All
                Allow from All
            </Directory>
            Alias /sf C:\wamp\www\symfony\cars_v1.0\lib\vendor\symfony-1.4.8\data\web\sf
            <Directory "C:\wamp\www\symfony\cars_v1.0\lib\vendor\symfony-1.4.8\data\web\sf">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>
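
    Since Apache refuses to restart, its own config check is the fastest lead; and note that underscores are not legal in hostnames (RFC 952/1123), which can trip up both resolvers and name-based vhosts. A hedged sketch with a syntax check and a dash-only name:

        httpd -t     # run from the Apache bin directory: prints the exact syntax error

        # hosts file variant without underscores:
        127.0.0.1 www.cars-v1.localhost

        # and the matching line in httpd.conf:
        ServerName www.cars-v1.localhost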

  • Can't connect to MySQL server from Java application

    - by RN
    This is on a VPS/CentOS server. The MySQL server is preconfigured. I am running the Java application on Tomcat. My Java web application is not able to connect to the MySQL server; I get an error: "Caused by: java.net.ConnectException: Connection refused". I suspect this to be a configuration problem rather than a coding problem, hence I have posted this on Server Fault. And yes, the same web app is able to connect to MySQL on a different Linux box. This is the URL that I provided to my Java application (note: it assumes the default port):

        url = "jdbc:mysql://localhost/pickupgames"

    My first suspicion was that MySQL is running on a non-default port, so I tried to find the port where the MySQL server is running. I tried every trick mentioned in http://serverfault.com/questions/116100/how-to-check-what-port-mysql-is-running-on but no luck!

        SHOW GLOBAL VARIABLES LIKE 'PORT';

    shows port 0.

        netstat -tlnp

    doesn't show mysql at all. /etc/my.cnf has no port entry.

        telnet localhost 3306

    doesn't connect. And in case you are wondering whether the mysql server is running at all or not: it is, and I know for sure, because I have been able to log in using the mysql command. Also:

        # ps -ef|grep 'mysql'
        root 31839 27662 0 00:49 pts/3 00:00:00 grep mysql
        root 32452 1 0 Apr02 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe --skip-grant-tables --skip-networking
        mysql 32504 32452 0 Apr02 ? 00:00:06 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock --skip-grant-tables --skip-networking

    Please note the --skip-networking parameter. Does this have something to do with the issue? Any explanation why I can't connect to the mysql server on port 3306 by telnet? Or why it doesn't show up under netstat? Any suggestions on what I should try next?
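
    Yes, that flag is the whole story: --skip-networking makes mysqld listen on the Unix socket only, so there is no TCP listener for netstat, telnet, or Connector/J (which always uses TCP) to find, and port 0 in SHOW VARIABLES says the same thing. The instance was evidently started in recovery mode (--skip-grant-tables goes with it). A hedged sketch of putting it back to normal:

        sudo kill $(cat /var/run/mysqld/mysqld.pid)   # stop the recovery-mode instance
        sudo /etc/init.d/mysqld start                 # start it normally
        netstat -tlnp | grep mysqld                   # should now show *:3306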

  • mcelog fails to start on PUIAS 6.4 AMD hardware

    - by Predrag Punosevac
    Folks, I am a total Linux n00b. I am trying to deploy mcelog on one of my computing nodes running PUIAS 6.4 (x86_64), a free clone of Red Hat 6.4, on AMD hardware.

        [root@lov3 edac]# uname -a
        Linux lov3.mylab.org 2.6.32-358.18.1.el6.x86_64 #1 SMP Tue Aug 27 22:40:32 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

        [root@lov3 mcelog]# lscpu
        Architecture:          x86_64
        CPU op-mode(s):        32-bit, 64-bit
        Byte Order:            Little Endian
        CPU(s):                64
        On-line CPU(s) list:   0-63
        Thread(s) per core:    2
        Core(s) per socket:    8
        Socket(s):             4
        NUMA node(s):          8
        Vendor ID:             AuthenticAMD
        CPU family:            21
        Model:                 2
        Stepping:              0
        CPU MHz:               1400.000
        BogoMIPS:              4999.30
        Virtualization:        AMD-V
        L1d cache:             16K
        L1i cache:             64K
        L2 cache:              2048K
        L3 cache:              6144K
        NUMA node0 CPU(s):     0-7
        NUMA node1 CPU(s):     8-15
        NUMA node2 CPU(s):     16-23
        NUMA node3 CPU(s):     24-31
        NUMA node4 CPU(s):     32-39
        NUMA node5 CPU(s):     40-47
        NUMA node6 CPU(s):     48-55
        NUMA node7 CPU(s):     56-63

    My mcelog.conf file is more or less default, apart from the fact that I would like to run mcelog as a daemon and to log errors. When I start mcelog:

        [root@lov3 mcelog]# mcelog --config-file mcelog.conf
        AMD Processor family 21: Please load edac_mce_amd module.

    However, the module is present:

        [root@lov3 mcelog]# locate edac_mce_amd.ko
        /lib/modules/2.6.32-358.18.1.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko
        /lib/modules/2.6.32-358.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko

    and loaded:

        [root@lov3 edac]# lsmod | grep mce
        edac_mce_amd           14705  1 amd64_edac_mod

    Is there anything that I can do to get mcelog working?
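
    For what it's worth, that message is mcelog's way of bowing out: on AMD family 15h (21 decimal) and newer, mcelog does not decode machine checks itself and defers to the in-kernel edac_mce_amd/amd64_edac decoder, which is already loaded here. A hedged sketch of where the same information surfaces instead:

        dmesg | grep -iE 'mce|edac|machine check'    # decoded MCEs land in the kernel log
        grep . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null   # corrected-error counters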

  • How to setup a virtual host in Ubuntu?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to point to /var/www/myapp if I type apps.mydomain.com/myapp; how do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because it's now in production and it's a little different. Here's my virtual host config:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName apps.mydomain.com/myapp
            DocumentRoot /var/www/myapp/public
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Any idea why I still see the files instead of being pointed to the document root? Just in case someone might ask, the app is based on the Laravel 4 framework. It's really bad right now because anyone can access the files from the browser.
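
    The immediate problem is that ServerName takes a bare hostname; a path does not belong there and is never matched against URLs. To serve the app under the /myapp path of the subdomain, a hedged sketch is a plain ServerName plus an Alias onto Laravel's public directory:

        <VirtualHost *:80>
            ServerName apps.mydomain.com
            Alias /myapp /var/www/myapp/public
            <Directory /var/www/myapp/public>
                Options FollowSymLinks    # no Indexes: stops directory listings
                AllowOverride All         # lets Laravel's .htaccess rewrite to index.php
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>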

  • Sticky connection and HTTPS support for HAProxy

    - by Saif
    We have two HTTP load balancers with HAProxy and heartbeat. There are four Apache nodes in this cluster, doing round-robin load balancing. The HTTP cluster is working fine. We are having a problem with our portal because it uses SSO: we need sticky connection support in our HAProxy, and we also need load balancing for HTTPS traffic. Here's our HAProxy conf file:

        global
            # to have these messages end up in /var/log/haproxy.log you will
            # need to:
            #
            # 1) configure syslog to accept network log events. This is done
            #    by adding the '-r' option to the SYSLOGD_OPTIONS in
            #    /etc/sysconfig/syslog
            #
            # 2) configure local2 events to go to the /var/log/haproxy.log
            #    file. A line like the following can be added to
            #    /etc/sysconfig/syslog
            #
            #    local2.*    /var/log/haproxy.log
            #
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            chroot /var/lib/haproxy
            pidfile /var/run/haproxy.pid
            maxconn 4000
            user haproxy
            group haproxy
            daemon

            # turn on stats unix socket
            stats socket /var/lib/haproxy/stats

        #---------------------------------------------------------------------
        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        #---------------------------------------------------------------------
        defaults
            mode http
            log global
            option httplog
            option dontlognull
            option http-server-close
            option forwardfor except 127.0.0.0/8
            option redispatch
            retries 3
            timeout http-request 10s
            timeout queue 1m
            timeout connect 10s
            timeout client 1m
            timeout server 1m
            timeout http-keep-alive 10s
            timeout check 10s
            maxconn 3000

        #---------------------------------------------------------------------
        # main frontend which proxys to the backends
        #---------------------------------------------------------------------
        frontend main *:5000
            acl url_static path_beg -i /static /images /javascript /stylesheets
            acl url_static path_end -i .jpg .gif .png .css .js
            use_backend static if url_static
            default_backend app

        #---------------------------------------------------------------------
        # static backend for serving up images, stylesheets and such
        #---------------------------------------------------------------------
        backend static
            balance roundrobin
            server static 127.0.0.1:4331 check

        #---------------------------------------------------------------------
        # round robin balancing between the various backends
        #---------------------------------------------------------------------
        backend app

        listen ha-http 10.190.1.28:80
            mode http
            stats enable
            stats auth admin:xxxxxx
            balance roundrobin
            cookie JSESSIONID prefix
            option httpclose
            option forwardfor
            option httpchk HEAD /haproxy.txt HTTP/1.0
            server apache1 portal-04:80 cookie A check
            server apache2 im-01:80 cookie B check
            server apache3 im-02:80 cookie B check
            server apache4 im-03:80 cookie B check

    Please advise. Thanks for your help in advance.
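
    Two hedged observations on the config above. The listen section already has a cookie directive, but three servers share the cookie value B, so HAProxy cannot tell them apart and stickiness breaks; each server needs a unique value. And an HAProxy of this vintage cannot terminate SSL itself, so the usual pattern for HTTPS is a TCP-mode listener that passes 443 straight through, with source-address stickiness since the cookie is invisible inside TLS. A sketch in the same style:

        listen ha-http 10.190.1.28:80
            # ... as above, but with unique cookie values:
            server apache1 portal-04:80 cookie A check
            server apache2 im-01:80 cookie B check
            server apache3 im-02:80 cookie C check
            server apache4 im-03:80 cookie D check

        listen ha-https 10.190.1.28:443
            mode tcp
            option ssl-hello-chk
            balance source
            server apache1 portal-04:443 check
            server apache2 im-01:443 check
            server apache3 im-02:443 check
            server apache4 im-03:443 check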

  • Nagios NTP, discarding peer

    - by picca
    We're using Nagios check_ntp_time for monitoring time on our servers. Unfortunately the service is flapping and reporting a lot of false positives. It happens for a random server at a random time of day and lasts for ~10-30 minutes. When the problem occurs we get:

        watch01:~ # /usr/lib/nagios/plugins/check_ntp_time -H lb01 -w 1 -c 2 -v
        sending request to peer 0
        response from peer 0: offset 0.07509887218
        sending request to peer 0
        response from peer 0: offset 0.07508444786
        sending request to peer 0
        response from peer 0: offset 0.07499825954
        sending request to peer 0
        response from peer 0: offset 0.07510817051
        discarding peer 0: stratum=0
        overall average offset: 0
        NTP CRITICAL: Offset unknown|

    When everything is OK, we get (I used a different server so as not to have to wait):

        watch01:~ # /usr/lib/nagios/plugins/check_ntp_time -H web02 -w 1 -c 2 -v
        sending request to peer 0
        response from peer 0: offset 0.0002282857895
        sending request to peer 0
        response from peer 0: offset 0.0002194643021
        sending request to peer 0
        response from peer 0: offset 0.0002347230911
        sending request to peer 0
        response from peer 0: offset 0.0002293586731
        overall average offset: 0.0002282857895
        NTP OK: Offset 0.0002282857895 secs|offset=0.000228s;1.000000;2.000000;

    We are using check_ntp_time v1.4.15 (nagios-plugins 1.4.15) on Debian squeeze. The remote ntp daemon is: ntpd - NTP daemon program - Ver. 4.2.4p4. I have already found some forums where the problem is described: 1, 2, 3. Every time they advise upgrading nagios-plugins, because in versions prior to 1.4.13 there was a bug with the inserted leap second. But we already have a newer version of nagios-plugins.
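
    The "discarding peer 0: stratum=0" line says the monitored ntpd answered with stratum 0, meaning it considered itself unsynchronised at that moment, and the plugin rightly refuses such a reply; the question then becomes why lb01's ntpd periodically loses sync. A hedged place to start looking, on the host that flaps, while the problem is occurring:

        ntpq -pn      # a '*' in the first column marks the currently selected peer
        ntpq -c rv    # 'leap=3' or 'stratum=16' would mean the daemon is unsynchronised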

  • Apache2 on Raspbian: Multiviews is enabled but not working [closed]

    - by Christian L
    I recently moved webserver, from an Ubuntu server set up by my brother (I have sudo) to a Raspbian server set up by myself. On the other server MultiViews worked out of the box, but on Raspbian it does not seem to work, although it seems to be enabled out of the box there as well. What I am trying to do is get it to find my.doma.in/mobile.php when I enter my.doma.in/mobile in the address field. I am using the same available-site file as I did before; the file looks like this:

        <VirtualHost *:80>
            ServerName my.doma.in
            ServerAdmin [email protected]
            DocumentRoot /home/christian/www/do
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /home/christian/www/do>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    From what I have read in various places while googling this issue, the negotiation module has to be enabled, so I tried to enable it:

        sudo a2enmod negotiation

    giving me this result:

        Module negotiation already enabled

    I have read through /etc/apache2/apache2.conf and did not find anything in particular that seemed helpful, but please do ask if you think I should post it. Any ideas on how to solve this by getting MultiViews to work?
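
    Before digging into negotiation itself, it is worth confirming that this vhost is the one actually answering, since a stale default site on the new box would produce exactly "same config, different behaviour". A hedged sketch ('mysite' stands for this file's name in sites-available):

        apache2ctl -S                                         # which vhost owns my.doma.in?
        sudo a2ensite mysite && sudo service apache2 reload
        curl -sI http://my.doma.in/mobile                     # a 404 here plus error.log usually names the cause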

  • How can I password-protect cgi-bin & still let it work?

    - by jaaaaaaax
    This is taken from the sites-available directory; it's a virtual host setting for Apache. Accessing myiphere/cgi-bin/ throws a 403. The directory setting for /var/www2/:

        drwxrwxrwx 8 www-data www-data

    The vhost:

        NameVirtualHost myiphere

        <VirtualHost myiphere>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www2/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www2/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>
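
    Note that a 403 on /cgi-bin/ itself is expected with this config: ScriptAlias'd directories do not produce indexes, so only a concrete script URL (and the script must be executable) will run. For the password-protection half of the question, a hedged sketch using basic auth on the same Directory block (Apache 2.2 syntax to match the config above; user and file names are illustrative):

        # create the password file once:
        sudo htpasswd -c /etc/apache2/.htpasswd someuser

        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            AuthType Basic
            AuthName "Restricted CGI"
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Directory>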
