Search Results

Search found 9202 results on 369 pages for 'package structuring'.

Page 288/369

  • systemd: enabling cherokee service as a `unit file`

    - by Calvin Cheng
    So I am learning how to use systemd to initialize my services automatically on server reboot. Of course, I first make sure I have systemd and some optional systemd-related packages installed: pacman -S systemd initscripts-systemd. Installation seems to go well and, checking, I can see that systemd and its dependency libsystemd are installed, and the optional package initscripts-systemd is also installed: [root@li280-195 ~]# pacman -Ss systemd extra/libsystemd 44-5 [installed] systemd client libraries extra/systemd 44-5 [installed] system and service manager extra/systemd-sysvcompat 2-2 sysvinit compat symlinks for systemd community/initscripts-systemd 20120412-1 [installed] Arch specific systemd initialization/bootup scripts for systemd community/systemd-arch-units 20120412-2 Arch specific Systemd unit files Next, I ensure that systemd is loaded up when my server reboots, via grub's /boot/grub/menu.lst file, like this: kernel /boot/vmlinuz root=/dev/xvda ro init=/bin/systemd Rebooting my server to check, everything loads up well and I can confirm that systemd is operational via systemctl list-unit-files. However, I don't see my cherokee initialization script (which was simply created at /etc/rc.d/cherokee when I installed cherokee earlier via pacman -S cherokee) listed as one of my unit files. So the question is, how do I do that? How do I put my cherokee initialization script under systemd's control?
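    For reference, a minimal native unit file for a daemon like cherokee might look like the sketch below (the binary path and options are assumptions, not taken from the Arch cherokee package); saved as /etc/systemd/system/cherokee.service, it would be enabled with systemctl enable cherokee.service.

        [Unit]
        Description=Cherokee web server
        After=network.target

        [Service]
        # Assumption: the package installs the daemon at /usr/sbin/cherokee and it stays in the foreground
        ExecStart=/usr/sbin/cherokee
        Restart=on-failure

        [Install]
        WantedBy=multi-user.target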

    Read the article

  • Possible to host CentOS netinstall files on a local HTTP/FTP?

    - by garlicman
    I'm running XenServer on a Dell R610 and am running into a catch-22. During install from DVD, CentOS can't find the DVD package catalogue. It's a reported error for some: XenServer + CentOS 6 + DVD install in some hardware configurations = failed install. Yes, I checked the MD5 and let the disc test pass. In every reported case, the netinstall was the solution. The issue is that my net access is required to go through a web proxy that prompts before you can download a file. This naturally breaks any download automation. I've been waiting on our IT to put in an exception rule to allow my lab to bypass the prompt, but it's been over 3 weeks now and they don't seem responsive. (I've been working on this a day or two a week.) I want to try to host the netinstall files locally in my Xen network. Right now I only have a bunch of Windows-based VMs; CentOS won't install, so I don't have any Linux tools. I had tried simply hosting all the DVD contents off one of the Windows servers using Mongoose. (I didn't want to set up IIS.) I copied them to a hosted sub-directory similar to all the mirrors out there (e.g. http:///centos/6.2/os/i386/) with no auth or anything. Then in the netinstall I correctly pointed to it. I now realize just copying the DVD files over won't work. The repodata will point to a local device, not the site I'm hosting. (e.g. the DVD repodata includes XML that points to where the packages are) Clearly I'm hosting them over HTTP, not from a DVD. Is there an easy way to sort this out? I'm just trying to install CentOS6 on Xen. If there's a turnkey downloadable Xen image with CentOS 6.2 on it, or a downloadable repo image, I'll take that too! Thank you in advance!
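    If the copied repodata really is what's wrong, one possibility (a sketch only, and it assumes access to some Linux environment such as a live CD, since it cannot be run from the Windows VMs) is to regenerate the repository metadata in place against the hosted directory tree:

        # rebuild repodata so it describes exactly the packages in the hosted tree
        # (path is an assumption: point it at whatever directory Mongoose is serving)
        createrepo /var/www/centos/6.2/os/i386/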

    Read the article

  • Has anyone managed to build php5-xapian on Ubuntu 12.04?

    - by jetboy
    As Xapian's been dropped from the Ubuntu repositories, I'm attempting to build my own .deb from the instructions here: http://article.gmane.org/gmane.comp.search.xapian.general/8855 http://beeznest.wordpress.com/2011/07/06/howto-build-your-own-binaries-of-php-xapian-bindings-for-debian/ I can only get things to progress beyond the first few seconds by leaving out 'rm debian/control', but if I do, it looks as if the Python and Ruby bindings are building and passing their versions of smoketest correctly. However, the PHP part of the build is failing with this error: /home/charlie/xapian-bindings-1.2.8/php/smoketest.php:38: include(xapian.php): failed to open stream: No such file or directory FAIL: smoketest.php There's a xapian.php file in /home/charlie/xapian-bindings-1.2.8/php/php5/ but if I copy it to /home/charlie/xapian-bindings-1.2.8/php/ or change the path to it in smoketest.php, the build fails right near the start with: dpkg-source: error: aborting due to unexpected upstream changes Unfortunately I'm out of my comfort zone building from source. Anyone got any ideas? Edit post James' answer: Builds fine if I follow instructions exactly. I built it on a test VM initially, but that didn't build the PHP package as PHP itself wasn't installed. Obvious gotcha, but worth mentioning. Installing generated the following error: Setting up php5-xapian (1.2.8-1) ... Processing triggers for libapache2-mod-php5 ... dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/libapache2-mod-php5.postinst): Permission denied dpkg: error processing libapache2-mod-php5 (--install): subprocess installed post-installation script returned error exit status 2 Errors were encountered while processing: libapache2-mod-php5 It's only a script for restarting Apache. Stopping Apache before running sudo dpkg -i php5-xapian_*.deb prevents the error. Xapian now shows up in phpinfo(). Job done. Thanks.

    Read the article

  • Windows and file system abstraction - how much does it matter where something comes from?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is or if there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Bootcamp driver package includes file system drivers (right term?) that enable Windows to access the Mac OS HFS+ partition. AFAIK it's a read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that lead to all sorts of non-solutions on Google, one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine. My question now is, why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well since I was wondering: Does the file system of a network share matter? Does the client system need to understand the file system of the share host or is that abstracted away in SMB?

    Read the article

  • Cannot start Postgres daemon after installing with Yum

    - by Sean the Bean
    I was trying to install Postgres 9.1.4 on Fedora 17 using Yum. If I do sudo yum install postgres-libs, sudo yum install postgres, and sudo yum install postgis, all the installs appear to complete successfully (i.e., no errors), but I cannot start the Postgres daemon using service postgresql initdb, like the official Postgres download guide says to do (http://www.postgresql.org/download/linux/redhat/). The error says Unknown operation initdb. RPM tells me that it installed psql to /usr/bin/, which I confirmed. It turns out that only a few components installed correctly (psql, pg_dump, pg_configure, and a few others), but most are missing (e.g., pg_ctl and postgres). I've tried several different configurations and had several of my coworkers (with more Linux experience than me) look at it, but so far nothing has worked. Two of them have also run into similar issues installing Postgres using apt-get on Ubuntu, which makes me think the rpm isn't doing its job. It seems the only solution is to build it from source, which is more robust anyway, but of course it takes longer. I'm wondering, though, if anyone else has run into this issue and/or has successfully installed Postgres on either Fedora or Ubuntu using a package manager like yum or apt-get? Is the rpm broken?
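    Two details worth checking, sketched below under the assumption that the stock Fedora 17 packages are in use: the server side (the postgres binary, pg_ctl, etc.) comes from the postgresql-server package rather than a package named postgres, and on a systemd-based Fedora the initdb step is done by the postgresql-setup helper instead of the SysV-style service command.

        sudo yum install postgresql postgresql-server postgis
        sudo postgresql-setup initdb             # replaces 'service postgresql initdb'
        sudo systemctl start postgresql.service
        sudo systemctl enable postgresql.service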

    Read the article

  • Mouse receiver stopped working after pairing mouse to a unifying receiver

    - by mp19uy
    I bought this mouse, a Logitech M510, which came with a nano receiver (non-unifying). Yes, I know the mouse is supposed to come with a unifying receiver, but I bought it knowing that it wouldn't come in its original package and would come with a non-unifying receiver. When I received it, everything worked OK, but after connecting the mouse to another computer using a unifying receiver (which also worked OK, btw), when I tried to connect the mouse back to my computer using the non-unifying receiver, I couldn't connect it. I tried everything from removing the batteries and reinstalling the drivers to restarting the computer and trying on different computers, but I couldn't connect them. What I think happened is this: if you check the documentation of the Logitech M510, it says that it works with unifying receivers only, and furthermore there is the following article explaining it: http://logitech-en-amr.custhelp.com/app/answers/detail/a_id/18001/~/using-my-m510-with-a-different-usb-receiver So my theory is that the problem was connecting it to a unifying receiver, and now it isn't recognized by any other receiver. The receiver (the non-unifying one) itself is recognized by Windows, and if I connect the mouse using a unifying receiver, it works. I would like to know if there is any known solution for this, or if I can try something else to solve it.

    Read the article

  • Is it a good idea to run Redmine using Webrick through Nginx?

    - by Rohit
    The task here is to get Redmine setup for a small (<20) team. There may be a few users who would access the setup as business clients. I am familiar with setting up PHP for Apache, and recently, Nginx. I am not familiar with Ruby, Ruby-On-Rails, etc. I prefer to use the OS's (Ubuntu Linux LTS) package manager to install the different components as it takes care of dependencies and updates. I have setup Nginx with PHP-FPM successfully and am struggling with Redmine. As suggested here, I got Redmine running on port 3000. # /etc/init/redmine.conf # Redmine description "Redmine" start on runlevel [2345] stop on runlevel [!2345] expect daemon exec ruby /usr/share/redmine/script/server webrick -e production -b 0.0.0.0 -d And using the Nginx config on this page, I used Nginx to proxy requests to Webrick. server { listen 80; server_name myredmine.example.com; location / { proxy_pass http://127.0.0.1:3000; } } This works well locally. I wanted some opinions before trying this out on the live box (a 256 MB VPS). Further, should I use something like monit to monitor webrick for failure?
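    On the monitoring question, a rough monit stanza for the WEBrick process could look like the sketch below; the process-matching pattern and the start/stop commands are assumptions based on the upstart job above, not tested configuration.

        check process redmine matching "script/server webrick"
            start program = "/sbin/start redmine"
            stop program  = "/sbin/stop redmine"
            if failed host 127.0.0.1 port 3000 then restart
            if 5 restarts within 5 cycles then timeout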

    Read the article

  • SSL timeout on some sites, across all browsers, on Mac OS X Snow Leopard

    - by dansays
    For the past several weeks, I've been receiving "Error 7 (net::ERR_TIMED_OUT): The operation timed out" when I attempt to connect to either Twitter or Paypal via SSL. I get this specific error in Google Chrome, but the same problem occurs in both Safari and Firefox. Other sites work fine, and other computers on my network can access these two sites. I have no firewall settings that would prevent me from accessing these sites over port 443. I notice that both Twitter and Paypal both have "Verisign Class 3 Extended Validation SSL CA" certificates. It is unclear whether this is related to the problem. In an effort to troubleshoot, I attempted to open the test sites referenced on Verisign's root certificate support page, which worked fine. Just to be sure, I downloaded and installed the root package file and installed all included Verisign certificates. No joy. I feel like I've hit a dead end. Any ideas? Update the first: I also cannot connect to FedEx.com, who also has a Verisign Class 3 Extended Validation cert. Update the second: Aaaaaaand it fixed itself. I did nothing. Or, I did something that worked, but in a delayed fashion. Frustrating, but a win is a win. I'll take it.

    Read the article

  • samba4 not building in Arch

    - by kmplsv
    cp bin/tdbtool bin/tdbdump bin/tdbbackup /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/bin cp ./include/tdb.h /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/include cp tdb.pc /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/pkgconfig cp libtdb.a libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1 ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1 mkdir -p /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"` cp tdb.so /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"` /bin/install -c -d /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8 for I in manpages/*.8; do \ /bin/install -c -m 644 $I /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8; \ done /bin/install: cannot stat `manpages/*.8': No such file or directory make: *** [installdocs] Error 1 Aborting... ==> ERROR: Makepkg was unable to build samba4. ==> Restart building samba4 ? [y/N] ==> ------------------------------- ==> Any ideas as to what is causing my build to fail? I assume it's an issue with the manpages, but I can't figure out exactly what package it is looking for that I don't have.
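    The immediate failure is just an unmatched shell glob: no manpages were generated, so manpages/*.8 stays a literal pattern and /bin/install cannot stat it. That points at the samba4 docs not being built (often a missing documentation toolchain) rather than a missing package on the system. As a stopgap only, the install loop can be made tolerant of the empty glob, e.g. by patching it from the PKGBUILD; $MANDIR below is a stand-in for the man8 destination path the Makefile already uses.

        # stopgap: skip man page installation when none were built
        for I in manpages/*.8; do \
            [ -e "$I" ] && /bin/install -c -m 644 "$I" "$MANDIR"; \
        done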

    Read the article

  • How to work around blocked outbound hkp port for apt keys

    - by kief_morris
    I'm using Ubuntu 9.10, and need to add some apt repositories. Unfortunately, I get messages like this when running sudo apt-get update: W: GPG error: http://ppa.launchpad.net karmic Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 5A9BF3BB4E5E17B5 W: GPG error: http://ppa.launchpad.net karmic Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 1DABDBB4CEC06767 So, I need to install the keys for these repositories. Under 9.10 we now have the option to do this: sudo add-apt-repository ppa:nvidia-vdpau/ppa See this Ubuntu help article for details. This is great, except that I'm running this on a workstation behind a firewall which blocks outbound connections to pretty much all ports except those required by secretaries running Windows and IE. The port in question here is the hkp service, port 11371. There appear to be ways to manually download keys and install them on apt's keyring. There may even be a way to use add-apt-repository or wget or something to download a key from an alternative server making it available on port 80. However, I haven't yet found a concise set of steps for doing so. What I'm looking for is: How to find a public key for an apt-package (recommendations for resources which have these, and/or tips for searching. Searching for the key hash doesn't seem all that effective so far.) How to retrieve a key (can it be done automatically using gpg or add-apt-repository?) How to add a key to apt's keyring Thanks in advance.
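    For what it's worth, most keyservers also answer on port 80, which sidesteps the blocked hkp port entirely; a sketch using the key IDs from the errors above (assuming keyserver.ubuntu.com is reachable, directly or via the proxy):

        sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 5A9BF3BB4E5E17B5
        sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 1DABDBB4CEC06767
        sudo apt-get update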

    Read the article

  • Concatenating ogg video files from the command line

    - by Noufal Ibrahim
    Okay. I've got a few ogg files I've created using a desktop recording tool. I've transcoded them using ffmpeg once (mainly to clip out the beginnings and the ends). Now, I have 3 such files which I want to concatenate into a single .ogv file. I tried using oggCat, but it crashed with some kind of error (I tried concatenating a file to itself using oggCat and that failed too, leading me to believe that my distro is shipping a broken version of the package). Simply cat-ing the files works, but I can't seek, which is not cool. I ran mencoder like this: mencoder -ovc lavc -oac lavc file1.ogv file2.ogv file3.ogv -o complete.ogv. It transcodes the files into an AVI and clips off a little of the 3 videos. So, how do I do this? Update 1: My current workaround is to transcode the 3 files into .mpg using ffmpeg, then cat them together and then transcode them back into ogv. Update 2: PiTiVi works for this kind of thing but I need something from the command line that I can automate and script.
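    The MPEG round trip from Update 1 can be scripted roughly as below. It is a lossy workaround rather than a proper lossless concatenation, and it assumes an ffmpeg build with Theora/Vorbis support; the quality flags are guesses chosen to limit generation loss.

        for f in file1 file2 file3; do ffmpeg -i "$f.ogv" -qscale 2 "$f.mpg"; done
        cat file1.mpg file2.mpg file3.mpg > merged.mpg   # MPEG-PS streams tolerate plain concatenation
        ffmpeg -i merged.mpg -qscale 5 complete.ogv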

    Read the article

  • Can't ping through default gateway

    - by Andrew G.H.
    I have the following configuration. Routing table on M3 is:

        Destination     Gateway         Genmask           Flags  MSS  Window  irtt  Iface
        0.0.0.0         192.168.2.1     0.0.0.0           UG     0    0       0     eth1
        192.168.2.0     0.0.0.0         255.255.255.0     U      0    0       0     eth1
        192.168.3.0     0.0.0.0         255.255.255.192   U      0    0       0     eth0

    Routing table on M1 is:

        Destination     Gateway         Genmask           Flags  MSS  Window  irtt  Iface
        0.0.0.0         192.168.0.1     0.0.0.0           UG     0    0       0     eth0
        169.254.0.0     0.0.0.0         255.255.0.0       U      0    0       0     eth1
        192.168.0.0     0.0.0.0         255.255.255.0     U      0    0       0     eth0
        192.168.2.0     0.0.0.0         255.255.255.0     U      0    0       0     eth1

    So basically M3's gateway is M1, and M1's gateway is M2's wireless internet interface. If I ping 8.8.8.8 from M1, everything is OK; replies are received. Pinging from M1 to M3 and vice versa is also possible. I have configured M1 as a gateway traffic forwarder using the firestarter package and stopped the firewall with it. The iptables policies are ACCEPT for everything. Problem: I have tried pinging 8.8.8.8 from M3 but without success. What could be the source of this problem?
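    Two things worth verifying on M1 while it acts as the gateway for 192.168.2.0/24, sketched below; the interface names follow the routing tables above, and this assumes M1 is supposed to NAT the traffic rather than M2 having a return route for 192.168.2.0/24.

        # is kernel forwarding actually enabled on M1?
        cat /proc/sys/net/ipv4/ip_forward        # should print 1
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # masquerade M3's network out of M1's upstream interface (eth0)
        iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE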

    Read the article

  • How can I enable pid and ppid fields in psacct dump-acct?

    - by annavt
    I am currently using the psacct package on Centos to perform accounting on processes run by users. The info file1 suggests that it is possible to output pid and ppid depending on what information your operating system provides in it's struct acct. pid and ppid are listed in /usr/include/linux/acct.h on my system: struct acct_v3 { char ac_flag; /* Flags */ char ac_version; /* Always set to ACCT_VERSION */ __u16 ac_tty; /* Control Terminal */ __u32 ac_exitcode; /* Exitcode */ __u32 ac_uid; /* Real User ID */ __u32 ac_gid; /* Real Group ID */ __u32 ac_pid; /* Process ID */ __u32 ac_ppid; /* Parent Process ID */ ... But pid and ppid are not output when I run dump-acct: # dump-acct /var/account/pacct.1 | tail awk | 0.0| 0.0| 81.0| 0| 0|8792.0|Thu Nov 24 04:03:04 2011 tmpwatch | 0.0| 0.0| 1.0| 0| 0|3816.0|Thu Nov 24 04:03:04 2011 cups | 0.0| 0.0| 4.0| 0| 0|8728.0|Thu Nov 24 04:03:04 2011 awk | 0.0| 0.0| 4.0| 0| 0|8792.0|Thu Nov 24 04:03:04 2011 runlevel | 0.0| 0.0| 0.0| 0| 0|3804.0|Thu Nov 24 04:03:04 2011 chkconfig | 0.0| 0.0| 0.0| 0| 0|3840.0|Thu Nov 24 04:03:04 2011 inn-cron-expire | 0.0| 0.0| 0.0| 0| 0|8728.0|Thu Nov 24 04:03:04 2011 awk | 0.0| 0.0| 0.0| 0| 0|8792.0|Thu Nov 24 04:03:04 2011 gzip | 5.0| 0.0| 9.0| 0| 0|4044.0|Thu Nov 24 04:03:04 2011 accton | 0.0| 0.0| 1.0| 0| 0| 0.0|Thu Nov 24 04:03:04 2011 Is it likely that there is no support in my kernel for this feature or that my psacct version does not support this? How can I add pid and ppid to my accounting logs? CentOS release 5.6 Kernel 2.6.18-238.19.1.el5 psacct 6.3.2 Thanks in advance Anna

    Read the article

  • Affordable combined Ruby/Rails/Redmine + Subversion hosting?

    - by Pekka
    I'm a self-employed web developer and after nine years of hard work, I'm looking to become a bit more "vagrant" starting next year, do some much-needed traveling and work a bit off and on, making use of one of the greatest advantages of a programming job: the ability to work virtually from everywhere. For that, I am looking for a reliable hosting company I can entrust my code to in the form of a number of Subversion repositories, and an installation of the Redmine project management tool. As my financial situation may vary during traveling, I am looking for something I can pay up front for a year or two, and that is obviously not too pricey. I don't care where the company is located, as long as it's trustworthy and solid, meaning it's not likely to go out of business next month. Does anybody have good recommendations? Preferably from your own personal, positive experience. I have looked at CVSDude / Codesion and while they are certainly great, they don't offer Redmine of course, and seem to be aiming mainly toward bigger organizations. What I would need:

    - 2-5 GB of space minimum, freely distributable between SVN and Redmine attachments
    - Unlimited number of Subversion projects
    - Access control (team members / checkout-only accounts / etc.); I don't mind configuring the svn settings on a file basis myself
    - The possibility to map a custom domain, hosted elsewhere, to the package
    - Frequent backups and access to those backups through FTP or other means

    I have been running my own virtual server for this until now, but I don't want the hassle, especially on the security side, while I may not always have the internet connection to fix problems that may come up.

    Read the article

  • Massive Memory Leaks?

    - by Mads
    Hi, I seem to have huge memory leaks, which are confusing me. I'm running fusion 3.1 / Windows 7 on Snow Leopard. It's a clean install with all upgrades applied. I've given fusion 8GB on a 14GB machine. I've installed VS2008 & Eclipse in Windows 7. Nothing unusual. Inside Task Manager in Windows 7, my memory footprint stays reasonable, at <2GB. But in OSX, Activity Monitor shows the footprint of vmware-vmx to be much larger. It starts at 2 GB, which seems fine, but whenever I'm actually doing anything in Windows, vmware-vmx's footprint grows at a few MB per second. After 20 mins or so it's using ~10GB and everything grinds to a halt. Throughout this, Task Manager still says I'm only using 2GB. And whatever I do in windows seems to increase vmware-vmx's memory footprint. Even closing down an application seems to make it go up. So is this par for the course in fusion? I was previously using parallels 3 / Vista under Leopard, and it worked fine. I'd assumed my new fusion config would work better, but this makes it completely unusable. (And apparently I can't even ask tech support unless I buy a support package...) Any advice much appreciated. Thanks

    Read the article

  • All commands stopped working in CentOS 6.5

    - by Michael
    I have made a big mistake while removing some duplicate packages because yum appeared to be broken:

        1036  rpm -e --nodeps glibc-2.12-1.132.el6_5.2.x86_64
        1037  rpm -e --nodeps nscd-2.12-1.132.el6_5.2.x86_64
        1038  rpm -e --nodeps glibc-common-2.12-1.132.el6_5.2.x86_64
        1040  rpm -e --nodeps glibc-common-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6.x86_64 glibc-headers-2.12-1.132.el6.x86_64
        1041  rpm -e glibc.x86_64
        1042  rpm -e --nodeps glibc.x86_64

    The issue happened after step 1042. None of the commands work (including yum, rpm, ls, cp, etc.) and I get the error /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory. I thought that installing glibc after removing all the current ones would help to resolve the duplicate package error :( Now I realise that it is used as the C library in the GNU system and most systems with the Linux kernel. It defines the "system calls" and other basic facilities such as open, malloc, printf, exit, etc. Is there any possible solution other than reinstalling? I have lost ssh access. Maybe something can be done using a rescue CD? Thanks
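    A rescue CD can indeed fix this. A typical sequence (a sketch; exact RPM file names and mount points will differ) is to boot the CentOS install media in rescue mode, let it mount the installed system under /mnt/sysimage, and then reinstall glibc into that root from the working rescue environment, without chrooting first (the broken root has no usable libc to chroot into):

        rpm -Uvh --force --root /mnt/sysimage \
            /path/to/glibc-2.12-1.132.el6_5.2.x86_64.rpm \
            /path/to/glibc-common-2.12-1.132.el6_5.2.x86_64.rpm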

    Read the article

  • Motion - takes snapshot without motion detected

    - by Emmanuel Brunet
    I've installed the standard motion 3.2.12 package on debian 7.5. I would like to get snapshots ONLY when motion is detected, but it still saves a picture every second without any activity in front of the camera. I'm using a TENVIS JPT3815W IP camera. Here is my configuration file, motion.conf:

        setup_mode off
        target_dir /media/videos/log/webcam
        netcam_url http://webcam/snapshot.cgi
        netcam_tolerant_check on
        netcam_userpass admin:alpha1237
        # Output frames at 1 fps when no motion is detected and increase to the
        # rate given by webcam_maxrate when motion is detected (default: off)
        webcam_motion off
        output_all off
        # detection settings 1-255 default 32
        noise_level 50
        # Maximum framerate for webcam streams (default: 1)
        webcam_maxrate 25
        pre_capture 0
        framerate 25
        gap 30
        locate on
        mail [email protected]
        text_right "FRONT CAMERA %Y/%m/%d - %T"
        text_double on
        ffmpeg_cap_new on
        ffmpeg_cap_motion on
        ffmpeg_video_codec mpeg4
        output_motion off
        snapshot_interval 0
        # Quality of the jpeg (in percent) images produced (default: 50)
        quality 90
        # Restrict webcam connections to localhost only (default: on)
        webcam_localhost off
        # Limits the number of images per connection (default: 0 = unlimited)
        # Number can be defined by multiplying actual webcam rate by desired number of seconds
        # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate
        webcam_limit 0

    Issue: when I start motion, images are stored in /media/videos/log/webcam nearly every second. I just want to get images when motion is detected, plus the corresponding video clip. Any idea where the configuration fails?
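    A picture every second usually means motion is being "detected" continuously (for instance from netcam noise), not that pictures are saved unconditionally. Two settings worth experimenting with, not present in the config above; option names follow the motion 3.2.x documentation and the values are guesses:

        threshold 4500        # default 1500 changed pixels; raise it if noise triggers constant detection
        output_normal on      # save pictures only when motion is detected (the default behaviour)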

    Read the article

  • Enabling mod_wsgi in Apache for a Django app on Gentoo

    - by hobbes3
    I installed Apache, Django, and mod_wsgi on Gentoo using emerge (on Amazon EC2). I know that the mod_wsgi is configured in /etc/apache2/modules.d/70_mod_wsgi.conf: <IfDefine WSGI> LoadModule wsgi_module modules/mod_wsgi.so </IfDefine> # vim: ts=4 filetype=apache So in my /etc/conf.d/apache I added the WSGI module: APACHE2_OPTS="-D DEFAULT_VHOST -D INFO -D SSL -D SSL_DEFAULT_VHOST -D LANGUAGE -D WSGI" But when I try to list the loaded module, mod_wsgi isn't listed. root ~ # apache2 -M | grep wsgi Syntax OK I also know that mod_wsgi isn't loading properly because the Apache configuration file doesn't recognize WSGIScriptAlias. By the way for Django to work I need to include a custom Apache configuration file. Where should I insert the line below? Include "/var/www/localhost/htdocs/mysite/apache/apache_django_wsgi.conf" I currently have that in the httpd.conf file but I feel like that file will get reseted whenever I upgrade Gentoo or related package. EDIT: it seems the mod_wsgi file is located in /usr/lib64/apache2/modules/mod_wsgi.so. Here is my detailed Apache settings: root@ip-99-99-99-99 /usr/portage/eclass # apache2 -V Server version: Apache/2.2.21 (Unix) Server built: Mar 7 2012 06:52:30 Server's Module Magic Number: 20051115:30 Server loaded: APR 1.4.5, APR-Util 1.3.12 Compiled using: APR 1.4.5, APR-Util 1.3.12 Architecture: 64-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... -D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/usr" -D SUEXEC_BIN="/usr/sbin/suexec" -D DEFAULT_PIDLOG="/var/run/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="/var/run/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types" -D SERVER_CONFIG_FILE="/etc/apache2/httpd.conf"
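    Two notes, offered as likely explanations rather than verified fixes: apache2 -M run straight from a shell does not pick up APACHE2_OPTS from /etc/conf.d/apache2, so the IfDefine'd wsgi_module will never show up that way even when the init script loads it (apache2 -D WSGI -M is a fairer test); and on Gentoo a custom include tends to survive upgrades better as a file under /etc/apache2/vhosts.d/ than as an edit to httpd.conf. A sketch, with an assumed file name:

        # /etc/apache2/vhosts.d/10_django_wsgi.conf
        <IfDefine WSGI>
            Include "/var/www/localhost/htdocs/mysite/apache/apache_django_wsgi.conf"
        </IfDefine>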

    Read the article

  • Piecing together low-powered hardware for an RS-232 terminal server

    - by Fred
    I'm working on reconstructing my Cisco lab for training/educational purposes and I found that the actual terminal server I have is dead. I have a couple of 8-port PCI serial cards which would be more than ample for my lab, but I don't want to leave my personal computer running to be able to access the console ports. Ideally I would access the terminal server remotely, either by SSH/RDP to the box (depending on what OS I go with) or by installing a software package that allows me to telnet directly to a serial port. I know I've found a program that does this under Linux in the past, but its name escapes me at the moment. I'm thinking about scavenging for some old hardware, on eBay or something, to put together a low-powered PC. It needs to be something that:

    - Has low power consumption
    - Has at least 2 PCI slots (though I certainly wouldn't complain about having more)
    - Has onboard Ethernet (or, if not, another PCI or ISA slot (not shared))
    - Can be headless once an OS is installed (probably Linux)

    I'm currently leaning towards an old-fashioned Pentium (sub-133MHz era), but I am wondering if anybody else knows of another platform/mobo that would suit these needs. Alternatively, I've been considering buying a Raspberry Pi and a big USB hub along with a bunch of USB-Serial adapters, but this sounds like it'd get messy quickly, with cables and adapters all over the place, and I may not even have the same ttyS#'s between boots.
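    The half-remembered "telnet straight to a serial port" program may well be ser2net; it maps a TCP port to a tty. A sketch of its classic /etc/ser2net.conf line format (the values here are illustrative, not tested against the cards in question):

        # <TCP port>:<mode>:<timeout>:<device>:<options>
        2001:telnet:600:/dev/ttyS0:9600 8DATABITS NONE 1STOPBIT banner
        2002:telnet:600:/dev/ttyS1:9600 8DATABITS NONE 1STOPBIT banner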

    Read the article

  • CentOS iscsi initiator has session but there is no block device

    - by jcalfee314
    I have installed the scsi-target-utils package on CentOS and I used it to perform a discovery. The discovery did give me an active session. I restarted the iscsi service but I do not see any new devices (fdisk -l). I see in /var/log/messages that my connection is operational now. I'm not sure how to debug this further. Can someone direct me into fixing this? discovery: iscsiadm -m discovery -t sendtargets -p 192.168.0.155 returns: 192.168.0.155:3260,-1 iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3 Just to verify it actually worked: iscsiadm -m session returns tcp: [1] 192.168.0.155:3260,1 iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3 restarting as the directions say to do: service iscsi restart output written to /var/log/message Stopping iscsi: Sep 20 12:14:22 localhost kernel: connection1:0: detected conn error (1020) [ OK ] Starting iscsi: Sep 20 12:14:22 localhost kernel: scsi1 : iSCSI Initiator over TCP/IP Sep 20 12:14:22 localhost iscsid: Connection1:0 to [target: iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3, portal: 192.168.0.155,3260] through [iface: default] is shutdown. Sep 20 12:14:22 localhost iscsid: Could not set session2 priority. READ/WRITE throughout and latency could be affected. [ OK ] [root@db iscsi]# Sep 20 12:14:23 localhost iscsid: Connection2:0 to [target: iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3, portal: 192.168.0.155,3260] through [iface: default] is operational now Ran a login command: iscsiadm -m node -T iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3 -p 192.168.0.155 -l No errors, no logging occurred. Next I compared the output from "fdisk -l|egrep dev" both with the iscsi session and without. There is no difference. I suppose I could just look in /etc/mtab. Any ideas on how I can get an iscsi device?
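    A few follow-up commands that usually narrow this kind of problem down (standard open-iscsi usage, nothing specific to this target); if the first one shows no attached SCSI devices, the target is most likely not exporting a LUN to this initiator. As an aside, iscsiadm comes from the iscsi-initiator-utils package; scsi-target-utils is the target-side toolset.

        iscsiadm -m session -P 3       # per-session detail, including any attached SCSI disks
        iscsiadm -m session --rescan   # force a LUN rescan on the logged-in session
        dmesg | tail                   # look for 'Attached SCSI disk' or LUN errors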

    Read the article

  • Installing Apache MPM Worker on Centos 5.5

    - by mrmartinblue
    I have a CentOS 5.5. server and am trying to switch from MPM Prefork to MPM worker. I have the standard yum httpd packages installed currently and from my reading I did the following: Uncomment the httpd.worker line in the /etc/sysconfig/httpd file. I also made sure that the httpd.worker file exists in the /usr/sbin/ directory. I also made sure that httpd service is stopped before making the above change. Ensured PHP was disabled for Apache. I'm fine with this and will use FastCGI to handle PHP files once I get the MPM worker up and running. Restart the httpd service, everything starts fine. Do a # httpd -V The console tells me it's still using prefork. If I do a # vi /etc/init.d/httpd the httpd.worker line is still commented out. I've tried changed this as well to no difference. Any suggestions? Things to look at? My application requires the worker MPM so the only choice I can think of is to go with ubuntu or another flavor that has the dedicated apache2-mpm-worker package. Is there something similar in the yum repos somewhere? Thanks in advance!

    Read the article

  • Puppet: How to override / redefine outside a child class (use case and example detailed)

    - by alex8657
    The use case I am trying to illustrate is declaring some item (e.g. the mysqld service) with a default configuration that can be included on every node (class stripdown in the example, for basenode), while still being able to override this same item in some specific class (e.g. mysql::server) to be included by specific nodes (e.g. myserver.local). I illustrated this use case with the example below, where I want to disable the mysql service on all nodes but activate it on a specific node. But of course, Puppet parsing fails because Service[mysql] is included twice. And of course, class mysql::server has no reason to be a child of class stripdown. Is there a way to override the Service["mysql"], or mark it as the main one, or whatever? I was thinking about virtual items and the realize function, but that only permits applying an item in multiple places, not redefining or overriding it.

        # In stripdown.pp:
        class stripdown {
            service { "mysql": enable => "false", ensure => "stopped" }
        }

        # In mysql.pp:
        class mysql::server {
            service { mysqld:
                enable     => true,
                ensure     => running,
                hasrestart => true,
                hasstatus  => true,
                path       => "/etc/init.d/mysql",
                require    => Package["mysql-server"],
            }
        }

        # Then nodes in nodes.pp:
        node basenode { include stripdown }
        node myserver.local inherits basenode {
            include mysql::server   # BOOM, fails here because of Service["mysql"] redefinition
        }
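    For reference, the classic answer to this in Puppet of that era is class inheritance plus a resource override: the overriding class inherits the class that declares the resource, which is exactly what makes the override legal. A sketch (it assumes both classes refer to the same resource title; "mysql" versus "mysqld" above would otherwise just be two unrelated services):

        class mysql::server inherits stripdown {
            Service["mysql"] {
                enable => true,
                ensure => running,
            }
        }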

    Read the article

  • Firefox cannot render icons from Font Awesome webfont set

    - by ADTC
    In Firefox (Windows 7), icons and glyphs that are called from the Font Awesome package do not render properly. An example of this can be seen on the Khan Academy website. Below the video the icons are shown as boxes with hex codes in them. This means that it isn't getting downloaded by Firefox. How it appears on Chrome (Windows 7), Safari (Mac OS X) and Stainless (Mac OS X): I found this question on Stack Overflow that may explain why this happens -- the CSS does use single quotes to enclose the font's src location. However, I don't have any write access to Khan Academy servers so I can't modify the actual website. I want to know if this can be fixed in Firefox, and how. I can run Greasemonkey scripts if that would help. I've already tried manually downloading the font and adding it to Windows' Fonts folder but this does not help. For reference, the CSS that sets this font up (not processed properly by Firefox) is: @font-face { font-family:'FontAwesome'; src:url('./fontawesome-webfont.eot'); src:url('./fontawesome-webfont.eot?#iefix') format('embedded-opentype'), url('./fontawesome-webfont.woff') format('woff'), url('./fontawesome-webfont.ttf') format('truetype'), url('./fontawesome-webfont.svg#FontAwesome') format('svg'); font-weight:normal; font-style:normal } [class^="icon-"]:before, [class*=" icon-"]:before { font-family:FontAwesome; font-weight:normal; font-style:normal; display:inline-block; text-decoration:inherit }
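    With Firefox specifically, the usual culprit is its same-origin policy for webfonts: if the .woff/.ttf files are served from a different host (a CDN, say) without an Access-Control-Allow-Origin header, Firefox refuses to use them while Chrome and Safari of that era do not. A Greasemonkey script can't change the response headers, but for reference the standard server-side fix looks roughly like this (Apache syntax, assuming mod_headers is enabled):

        <FilesMatch "\.(ttf|otf|eot|woff)$">
            Header set Access-Control-Allow-Origin "*"
        </FilesMatch>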

    Read the article

  • Install VirtualBox on Ubuntu 12.04.1 (on [Samsung] Chromebook)

    - by iphonedev7
    I have dual booted Ubuntu Linux 12.04.1 LTS on my Samsung Series 5 ChromeBook, and am trying to run/install Oracle VirtualBox (from the generic .run file downloaded from their website). However, every time I try to run it (as root from the command line), the following error occurs: Please install the build and header files for your current Linux kernel. The current kernel version is 3.4.0 Problems were found which would prevent VirtualBox from installing. I have tried the version from the Software Center, as well as the command line installation, both of which gave me errors based on my linux-headers/linux-kernel/linux-[kernel]-image. Here's an error I keep getting (on the command line): First Installation: checking all kernels... It is likely that 3.4.0 belongs to a chroot's host Building only for 3.5.0-18-generic Building initial module for 3.5.0-18-generic ERROR (dkms apport): kernel package linux-headers-3.5.0-18-generic is not supported Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64) Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information. Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place ... And one of the more cryptic errors I get when trying to start any Virtual Machine: Result Code: NS_ERROR_FAILURE (0x80004005) Component: Machine Interface: IMachine {5eaa9319-62fc-4b0a-843c-0cb1940f8a91}
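    The message is VirtualBox's kernel-module (dkms) build failing for lack of headers matching the running kernel. On a stock Ubuntu kernel the usual fix is the sketch below; the catch on a Chromebook-style dual boot is that the running 3.4.0 ChromeOS kernel has no matching linux-headers package in the Ubuntu archive, so this only helps when booted on an Ubuntu kernel such as the 3.5.0-18 one mentioned in the log.

        sudo apt-get install build-essential dkms linux-headers-$(uname -r)
        sudo dpkg-reconfigure virtualbox-dkms   # for the Ubuntu-packaged VirtualBox
        # or, for the Oracle .run install:
        sudo /etc/init.d/vboxdrv setup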

    Read the article

  • Why can't I download the JDK from the Oracle web site directly, without AuthParam?

    - by hugemeow
    That is, downloading with the following command: why does it fail to download that file?

        wget http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin

    The following command works, but that AuthParam may not work after a while. Why?

        wget http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin?AuthParam=1346955572_27e44512fe8ef5cb920c4c329e5f0fd8

    How is this AuthParam option implemented? Why can't I download without this parameter, and why can I only get this parameter using a browser? Is a rewrite used on the Oracle server when dealing with wget requests? Why does the same command not work after an hour? Has the value of AuthParam expired? And how does the server check whether the AuthParam value has expired?

        wget http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin?AuthParam=1346955572_27e44512fe8ef5cb920c4c329e5f0fd8
        --2012-09-07 03:51:01-- http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin?AuthParam=1346955572_27e44512fe8ef5cb920c4c329e5f0fd8
        Resolving download.oracle.com... 23.67.251.50, 23.67.251.57
        Connecting to download.oracle.com|23.67.251.50|:80... connected.
        HTTP request sent, awaiting response... 403 Forbidden
        2012-09-07 03:51:01 ERROR 403: Forbidden.

    @KJ-SRS: is it some kind of CGI program that is used to judge whether AuthParam is right? Is it possible to download the JDK package purely using wget, without needing to get that AuthParam from a browser?
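    One observation, not an authoritative answer: the first half of the AuthParam value (1346955572) looks like a Unix timestamp and the second half like a signature, which would explain both why the link only works for a limited window and why the server can reject it later without looking anything up. A quick check with GNU date:

        date -ud @1346955572
        # prints a time in early September 2012, right around when the link above was generated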

    Read the article
