Search Results

Search found 3423 results on 137 pages for 'mod authz ldap'.

  • Apache reverse proxy for the intranet and other integrated applications on the intranet

    - by user1433448
    I'm trying to configure a reverse proxy (SSL) with Apache 2.2 on Debian Squeeze, but I have some problems, especially with some absolute paths and with HTTPS. I'll try to detail what I have done and what I'm trying to achieve. I have a Debian Squeeze server with Apache 2.2 + mod_proxy_html, set up with:
      # apt-get install libapache2-mod-proxy-html libxml2-dev
      # a2enmod proxy
      # a2enmod proxy_http
      # a2enmod proxy_html
      # a2enmod headers
    After that I configured a virtual host: reverse_proxy_ssl.conf. The goal is to allow access to our intranet from the internet through a reverse proxy (an Apache instance located in the DMZ). With this configuration domain.com/intranet works correctly and we can reach the intranet. The problem is that, from domain.com/intranet, we also need to use another internal application that the intranet links to with an absolute path (https://192.168.10.25/application/), so from the internet the browser tries to reach the internal IP directly and the link is broken for external users. We only need to reach a few internal applications that live on a different internal server, and we want to expose as little as possible to the internet; all the applications that live on the same server as the intranet itself are working. The second problem is with HTTPS: with the reverse proxy our firewall logs some errors about invalid packets, although HTTPS itself seems to work. What can I do to solve these two problems (absolute paths and SSL)? Thanks
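
    The kind of mapping that usually addresses the absolute-path problem is mod_proxy_html's URL rewriting; the sketch below is illustrative only and assumes the internal application should appear as /application/ behind the proxy (directive names are from the mod_proxy_html 3.x package on Squeeze, paths are examples):
      # inside the SSL virtual host on the DMZ proxy
      # the back end is reached over https, so mod_ssl must proxy TLS
      SSLProxyEngine On
      ProxyRequests Off
      ProxyPass        /application/ https://192.168.10.25/application/
      ProxyPassReverse /application/ https://192.168.10.25/application/
      # rewrite absolute links inside the returned HTML
      ProxyHTMLEnable On
      ProxyHTMLURLMap https://192.168.10.25/application/ /application/
      # mod_proxy_html cannot parse gzip-compressed bodies
      RequestHeader unset Accept-Encoding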

  • Debian apache2 fails to restart after some updates

    - by Ripeed
    Can anyone give me some advice with this, please? I ran updates on my Debian server through Webmin. It updated apache2 and a few other packages, and the update failed. After that I can't start apache2 normally. I have to run netstat -ltnp | grep ':80', then kill the PID it reports (kill -9 1047), and only then can I start apache2. When I started it for the first time after the update, some websites using FastCGI would not work; I had to switch them to mod-PHP in ISPConfig 3 and now they work. The remaining problem is that I cannot restart Apache without killing the PID first. In the ISPConfig log I see: Unable to open logs (98)Address already in use: make_sock: could not bind to address [::]:80 (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down. In the log of one website I see: [emerg] (13)Permission denied: mod_fcgid: can't lock process table in pid 19264. Do you think running apt-get update and apt-get upgrade to complete all pending updates would solve this? I'm a little scared that doing so will only cause more errors. The Apache log also shows: Debian Python version mismatch, expected '2.6.5+', found '2.6.6', but that was already there before this problem. Thanks a lot for any help.
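
    A sequence along these lines is what I would try first; it is a sketch, not a guaranteed fix, and the mod_fcgid state directory path is the usual Debian default rather than something taken from this system:
      # see what is still holding port 80 before restarting
      netstat -ltnp | grep ':80'
      # finish any half-applied upgrade that Webmin started
      dpkg --configure -a
      apt-get -f install
      # sanity-check the config, then restart cleanly
      apachectl configtest
      /etc/init.d/apache2 restart
      # the mod_fcgid "can't lock process table" error usually points at the
      # ownership/permissions of its shared-memory directory
      ls -ld /var/lib/apache2/fcgid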

  • Switching from prefork MPM to worker MPM + php-fpm on ubuntu

    - by Shane
    All the tutorials I found describe a fresh install of the worker MPM + PHP-FPM, but my WordPress blog is already up and running on the prefork MPM, so correct me if I'm wrong about the planned installation process. I'm on Ubuntu, and according to some tutorials the following lines should do the trick:
      apt-get install apache2-mpm-worker libapache2-mod-fastcgi php5-fpm php5-gd
      a2enmod actions fastcgi alias
    Then you set up the configuration in /etc/apache2/conf.d/php5-fpm.conf:
      <IfModule mod_fastcgi.c>
          AddHandler php5-fcgi .php
          Action php5-fcgi /php5-fcgi
          Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
          FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
      </IfModule>
    After all of that, restart:
      service apache2 restart && service php5-fpm restart
    Questions: 1) Would the switch cause any downtime for the sites currently running on the prefork MPM? 2) Do I have to change any existing configuration files (PHP, MySQL or Apache), or will everything take effect immediately after the switch without doing anything else? 3) I already have APC up and running; do I have to re-install or re-configure it after the switch? 4) How do I find out whether Apache is working in worker MPM mode as expected? Thanks a lot!
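
    For question 4, a quick check along these lines (a sketch, using the stock Debian/Ubuntu tools) shows which MPM the running binary uses and whether PHP-FPM is listening where the fastcgi config expects it:
      # prints a "Server MPM:" line (Prefork or Worker)
      apache2ctl -V | grep -i mpm
      # php5-fpm should be listening on 127.0.0.1:9000 as configured above
      netstat -ltnp | grep ':9000'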

  • What permission(s) does an application pool identity require to manage other application pools?

    - by Mr Shoubs
    I have a web site (used to manage various parts of our software) that needs to be able to start/stop other application pools. I've created a user and set the app pool identity to that custom account, but the web app still can't start/stop the app pools. I get the following error:
      System.UnauthorizedAccessException: Filename: redirection.config Error: Cannot read configuration file due to insufficient permissions
        at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath)
        at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath)
        at Microsoft.Web.Administration.ServerManager.get_ApplicationPoolsSection()
        at Microsoft.Web.Administration.ServerManager.get_ApplicationPools()
    Discussion here suggests setting the application pool to LocalSystem or an administrator account; this does work, but I don't want to do it for security reasons (external support will need access to this site). I did give the user higher permissions (as suggested here), starting by making it part of the local Administrators group, but initially that didn't work, and giving the user read/write/modify permission on C:\Windows\System32\inetsrv\config didn't work either. I must have done something wrong, because local administrator does work now, but that still isn't what I want. So can anyone suggest which permissions I need to grant this user, and how to apply them? An answer to my problem (but to a different question) is here; to clarify, I think I need to give an individual user "IIS Runtime Operation Permissions", but does anyone know how to do that, if indeed these are the permissions I require?
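
    A hedged sketch of the narrow grant that matches the specific error above; the pool name ManagementPool is hypothetical, and read access to the config store alone may still not be enough to start/stop other pools (that part of the API may simply need an administrative identity):
      rem grant the app pool's virtual identity read access to the IIS config store
      icacls %windir%\System32\inetsrv\config /grant "IIS AppPool\ManagementPool":(OI)(CI)R
      icacls %windir%\System32\inetsrv\config\redirection.config /grant "IIS AppPool\ManagementPool":R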

  • Where to get grub files without using grub-install

    - by Jacky
    I am in a particular situation. I have a MacBook Pro with no internal CD drive, with both Mac OS X (a minimal setup) and Linux (my main system) installed. During a cross-grade to Ubuntu 12.04 I messed up GRUB, so my /boot/grub directory is basically empty. This means I can't boot Linux on the laptop any more; I only get into grub rescue. Normally this would be no issue, since you'd just boot from a rescue CD or USB stick, but with a MacBook Pro that is unfortunately not possible (I have rEFIt installed and it attempts to boot, but it fails, and the manual says Apple's EFI firmware is not able to handle this situation). From Mac OS X, however, I still have write access to the Linux partition. I've been trying to figure out how to populate the /boot/grub folder with the necessary files, to no avail so far. The ISO image of Ubuntu 12.04 contains an EFI folder, which is not what I am looking for; instead I need the *.mod files (normal.mod and friends) for the GRUB version shipped with Ubuntu 12.04. I do not have any other machine on which to set up an Ubuntu 12.04 virtual machine and extract the files after a grub-install, so I am asking for ideas on how to solve this mess. P.S.: I installed Linux back when the internal CD drive still worked; it is gone now.
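
    One possible route, sketched under the assumption that you can download files under OS X and write them onto the mounted Linux partition: the i386-pc modules live in the grub-pc-bin package, and a .deb is just an ar archive, so it can be unpacked without dpkg (the package filename below is a placeholder, and the data member may be .gz or .xz depending on the build):
      curl -O http://archive.ubuntu.com/ubuntu/pool/main/g/grub2/grub-pc-bin_VERSION_amd64.deb
      ar x grub-pc-bin_VERSION_amd64.deb
      tar -xf data.tar.gz ./usr/lib/grub/i386-pc
      cp usr/lib/grub/i386-pc/* /Volumes/linux/boot/grub/
    With normal.mod and friends in place, grub rescue should be able to continue with set prefix=(hdX,Y)/boot/grub, then insmod normal, then normal.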

  • installing lots of perl modules

    - by Colin Pickard
    Hi, I've been landed with the job of documenting how to install a very complicated application onto a clean server. Part of the application requires a lot of Perl scripts, each of which seems to require lots of different Perl modules. I don't know much about Perl, and I only know one way to install the required modules, which means my documentation now looks like this: "Type each of these commands and accept all the defaults:"
      sudo perl -MCPAN -e 'install JSON'
      sudo perl -MCPAN -e 'install Date::Simple'
      sudo perl -MCPAN -e 'install Log::Log4perl'
      sudo perl -MCPAN -e 'install Email::Simple'
      (.... continues for 2 more pages... )
    Is there any way I can do all of this in one line, like I can with aptitude, i.e. "Type the following command and go get a coffee:"
      sudo aptitude install openssh-server libapache2-mod-perl2 build-essential ...
    Thank you (on behalf of the long-suffering people who will be reading my document). EDIT: The best way to do this is to use the packaged versions. For the modules that were not packaged for Ubuntu 10.10, I ended up with a little Perl script (which I found here):
      #!/usr/bin/perl -w
      use CPANPLUS;
      use strict;
      CPANPLUS::Backend->new( conf => { prereqs => 1 } )->install(
          modules => [ qw( Date::Simple File::Slurp LWP::Simple MIME::Base64 MIME::Parser MIME::QuotedPrint ) ]
      );
    This means I can put a nice one-liner in my document:
      sudo perl installmodules.pl
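
    A couple of one-liners in this spirit may be all the document needs; the Debian package names follow the usual libfoo-bar-perl pattern and should be double-checked against the release being targeted, and cpanminus is only an option if it is packaged for (or installed on) that release:
      # packaged modules where they exist
      sudo apt-get install libjson-perl libdate-simple-perl liblog-log4perl-perl libemail-simple-perl
      # everything else in one go via the cpan shell ...
      sudo cpan JSON Date::Simple Log::Log4perl Email::Simple
      # ... or via cpanminus, which also takes a plain list
      sudo cpanm JSON Date::Simple Log::Log4perl Email::Simple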

  • Can't set screen brightness on a Gentoo system

    - by Real Yang
    My system: Linux gentoo 3.10.7-gentoo-r1, VGA compatible controller: NVIDIA Corporation GT216M [GeForce GT 240M] (rev a2). Output of xbacklight:
      No outputs have backlight property
    Output of xrandr:
      xrandr: Failed to get size of gamma for output default
      Screen 0: minimum 640 x 480, current 1280 x 720, maximum 1280 x 768
      default connected 1280x720+0+0 0mm x 0mm
      1280x720 0.0*
      1024x768 61.0
      800x600 61.0
      640x480 60.0
      1280x768 0.0
    Output of ls /proc/acpi:
      button/ event
    When I'm on kernel 3.8.13 I can change the brightness using xbacklight. I compiled 3.10.7-r1 using genkernel all. Before the upgrade I did get a notice from emerge about "compatibility issues for Nvidia users", but I still don't know the details. Is there any way to let me set the brightness? I then found the ebuild app-laptop/nvidiabl-0.81 and tried to emerge nvidiabl, and got this message:
      Your kernel does not support FB_BACKLIGHT. To enable it you can enable any
      frame buffer with backlight control or nouveau. Note that you cannot use
      FB_NVIDIA with nvidia's proprietary driver.
      Please check to make sure these options are set correctly. Failure to do so
      may cause unexpected problems. Once you have satisfied these options, please
      try merging this package again.
      ERROR: app-laptop/nvidiabl-0.81::gentoo failed (pretend phase): Incorrect kernel configuration options
      Call stack:
        ebuild.sh, line 93: Called pkg_pretend
        nvidiabl-0.81.ebuild, line 31: Called linux-mod_pkg_setup
        linux-mod.eclass, line 559: Called linux-info_pkg_setup
        linux-info.eclass, line 911: Called check_extra_config
        linux-info.eclass, line 805: Called die
      The specific snippet of code: die "Incorrect kernel configuration options"
    [SOLVED] I entered menuconfig again, checked Device Drivers -> Graphics support -> Support for frame buffer devices, and found this:
      <*> nVidia Framebuffer Support
      [*] Support for backlight control (NEW)
    What can I say. Recompiling...
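
    For anyone hitting the same wall, a check along these lines (a sketch; symbol names as used by current kernels) confirms whether the freshly built kernel actually carries the framebuffer backlight support the ebuild is asking for, before spending time recompiling:
      grep -E 'CONFIG_FB_NVIDIA|CONFIG_FB_BACKLIGHT|CONFIG_BACKLIGHT_CLASS_DEVICE' /usr/src/linux/.config
      # rebuild with the options enabled
      genkernel --menuconfig all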

  • ProFTPD server on Ubuntu gives an "Access denied" message even when successfully authenticated?

    - by exxoid
    I have an Ubuntu box with a ProFTPD 1.3.4a server. When I try to log in via my FTP client I cannot do anything, because it does not allow me to list directories. I have tried logging in as root and as a regular user, and tried accessing different paths within the FTP server. The error I get in my FTP client is:
      Status:   Retrieving directory listing...
      Command:  CDUP
      Response: 250 CDUP command successful
      Command:  PWD
      Response: 257 "/var" is the current directory
      Command:  PASV
      Response: 227 Entering Passive Mode (172,16,4,22,237,205).
      Command:  MLSD
      Response: 550 Access is denied.
      Error:    Failed to retrieve directory listing
    Any idea? Here is the config of my proftpd:
      #
      # /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file.
      # To really apply changes, reload proftpd after modifications, if
      # it runs in daemon mode. It is not required in inetd/xinetd mode.
      #
      # Includes DSO modules
      Include /etc/proftpd/modules.conf
      # Set off to disable IPv6 support which is annoying on IPv4 only boxes.
      UseIPv6 off
      # If set on you can experience a longer connection delay in many cases.
      IdentLookups off
      ServerName "Drupal Intranet"
      ServerType standalone
      ServerIdent on "FTP Server ready"
      DeferWelcome on
      # Set the user and group that the server runs as
      User nobody
      Group nogroup
      MultilineRFC2228 on
      DefaultServer on
      ShowSymlinks on
      TimeoutNoTransfer 600
      TimeoutStalled 600
      TimeoutIdle 1200
      DisplayLogin welcome.msg
      DisplayChdir .message true
      ListOptions "-l"
      DenyFilter \*.*/
      # Use this to jail all users in their homes
      # DefaultRoot ~
      # Users require a valid shell listed in /etc/shells to login.
      # Use this directive to release that constrain.
      # RequireValidShell off
      # Port 21 is the standard FTP port.
      Port 21
      # In some cases you have to specify passive ports range to by-pass
      # firewall limitations. Ephemeral ports can be used for that, but
      # feel free to use a more narrow range.
      # PassivePorts 49152 65534
      # If your host was NATted, this option is useful in order to
      # allow passive tranfers to work. You have to use your public
      # address and opening the passive ports used on your firewall as well.
      # MasqueradeAddress 1.2.3.4
      # This is useful for masquerading address with dynamic IPs:
      # refresh any configured MasqueradeAddress directives every 8 hours
      <IfModule mod_dynmasq.c>
      # DynMasqRefresh 28800
      </IfModule>
      # To prevent DoS attacks, set the maximum number of child processes
      # to 30. If you need to allow more than 30 concurrent connections
      # at once, simply increase this value. Note that this ONLY works
      # in standalone mode, in inetd mode you should use an inetd server
      # that allows you to limit maximum number of processes per service
      # (such as xinetd)
      MaxInstances 30
      # Set the user and group that the server normally runs at.
      # Umask 022 is a good standard umask to prevent new files and dirs
      # (second parm) from being group and world writable.
      Umask 022 022
      # Normally, we want files to be overwriteable.
      AllowOverwrite on
      # Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords:
      # PersistentPasswd off
      # This is required to use both PAM-based authentication and local passwords
      AuthPAMConfig proftpd
      AuthOrder mod_auth_pam.c* mod_auth_unix.c
      # Be warned: use of this directive impacts CPU average load!
      # Uncomment this if you like to see progress and transfer rate with ftpwho
      # in downloads. That is not needed for uploads rates.
      # UseSendFile off
      TransferLog /var/log/proftpd/xferlog
      SystemLog /var/log/proftpd/proftpd.log
      # Logging onto /var/log/lastlog is enabled but set to off by default
      #UseLastlog on
      # In order to keep log file dates consistent after chroot, use timezone info
      # from /etc/localtime. If this is not set, and proftpd is configured to
      # chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight
      # savings timezone regardless of whether DST is in effect.
      #SetEnv TZ :/etc/localtime
      <IfModule mod_quotatab.c>
      QuotaEngine off
      </IfModule>
      <IfModule mod_ratio.c>
      Ratios off
      </IfModule>
      # Delay engine reduces impact of the so-called Timing Attack described in
      # http://www.securityfocus.com/bid/11430/discuss
      # It is on by default.
      <IfModule mod_delay.c>
      DelayEngine on
      </IfModule>
      <IfModule mod_ctrls.c>
      ControlsEngine off
      ControlsMaxClients 2
      ControlsLog /var/log/proftpd/controls.log
      ControlsInterval 5
      ControlsSocket /var/run/proftpd/proftpd.sock
      </IfModule>
      <IfModule mod_ctrls_admin.c>
      AdminControlsEngine off
      </IfModule>
      #
      # Alternative authentication frameworks
      #
      #Include /etc/proftpd/ldap.conf
      #Include /etc/proftpd/sql.conf
      #
      # This is used for FTPS connections
      #
      #Include /etc/proftpd/tls.conf
      #
      # Useful to keep VirtualHost/VirtualRoot directives separated
      #
      #Include /etc/proftpd/virtuals.con
      # A basic anonymous configuration, no upload directories.
      # <Anonymous ~ftp>
      #   User ftp
      #   Group nogroup
      #   # We want clients to be able to login with "anonymous" as well as "ftp"
      #   UserAlias anonymous ftp
      #   # Cosmetic changes, all files belongs to ftp user
      #   DirFakeUser on ftp
      #   DirFakeGroup on ftp
      #
      #   RequireValidShell off
      #
      #   # Limit the maximum number of anonymous logins
      #   MaxClients 10
      #
      #   # We want 'welcome.msg' displayed at login, and '.message' displayed
      #   # in each newly chdired directory.
      #   DisplayLogin welcome.msg
      #   DisplayChdir .message
      #
      #   # Limit WRITE everywhere in the anonymous chroot
      #   <Directory *>
      #     <Limit WRITE>
      #       DenyAll
      #     </Limit>
      #   </Directory>
      #
      #   # Uncomment this if you're brave.
      #   # <Directory incoming>
      #   #   # Umask 022 is a good standard umask to prevent new files and dirs
      #   #   # (second parm) from being group and world writable.
      #   #   Umask 022 022
      #   #   <Limit READ WRITE>
      #   #     DenyAll
      #   #   </Limit>
      #   #   <Limit STOR>
      #   #     AllowAll
      #   #   </Limit>
      #   # </Directory>
      #
      # </Anonymous>
      # Include other custom configuration files
      Include /etc/proftpd/conf.d/
      UseReverseDNS off
      <Global>
      RootLogin on
      UseFtpUsers on
      ServerIdent on
      DefaultChdir /var/www
      DeleteAbortedStores on
      LoginPasswordPrompt on
      AccessGrantMsg "You have been authenticated successfully."
      </Global>
    Any idea what could be wrong? Thanks for your help!
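
    Nothing in the posted configuration obviously forbids MLSD, so the most direct way to see why ProFTPD answers 550 is to run it once in the foreground with debugging turned up (a troubleshooting sketch, not a fix):
      /etc/init.d/proftpd stop
      # syntax-check the configuration
      proftpd -t
      # run in the foreground at debug level 10 and watch the MLSD attempt
      proftpd -n -d10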

  • Can I make a TCP/IP session last less than 60 seconds?

    - by par
    Our server is overloaded with TCP/IP sessions; we have 1200 - 1500 of them, and most of them are hanging in the TIME_WAIT state. It turns out that a connection in TIME_WAIT occupies a socket until the 60-second timeout has elapsed. The problem is that the server becomes unresponsive and many clients are not getting served. I ran a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I can see the TCP/IP connection hanging in TIME_WAIT for 60 seconds. Is there any way to get rid of that wait, or to shorten it, so the socket is freed for new connections? I understand why a TCP/IP connection enters TIME_WAIT, but I don't understand why Internet Explorer does not close the connection once the XML download is over. The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). OS: Ubuntu. The Apache "KeepAlive" option is set to 0.
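
    On Linux the TIME_WAIT interval is effectively fixed at 60 seconds (it is compiled into the kernel, not a sysctl), so the practical lever is usually to stop creating a brand-new connection per request rather than to shorten the state itself; a sketch of the Apache side, with illustrative values:
      # apache2.conf / httpd.conf
      KeepAlive On
      MaxKeepAliveRequests 100
      KeepAliveTimeout 5
    With keep-alive enabled, Internet Explorer and the Flash client can reuse one connection instead of leaving a fresh socket behind in TIME_WAIT for every download.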

  • Intermittent extremely long response times when downloading documents

    - by pap
    I have a Java web application running on Tomcat 7 with an Apache httpd 2.2 front end connected via mod_jk/AJP. One part of the application serves files (up to 4 MB in size). Normally this all runs very smoothly, with stable, low response times. However, in rare instances (<0.1% of downloads) the download time goes beyond 1 minute. After activating the stuck-thread detection valve in Tomcat, I can see that the long responses seem to be stuck at org.apache.tomcat.jni.Socket.sendbb (native method), i.e. network I/O. At most, these long-running downloads take 5 minutes, which I strongly suspect is because of the default 300-second timeout in Apache 2.2 (http://httpd.apache.org/docs/2.2/mod/core.html, the "TimeOut" directive). To me this looks like a network problem. The Apache timeout (if that is what kicks in at the 5-minute mark) indicates that ACK packets are not being transmitted correctly. My questions are: what could be causing this? A browser closed at the receiving end whose socket was never properly signalled as closed? Packet loss or some other network failure in transit? Where would I start troubleshooting this? We're running Tomcat and Apache on Windows Server 2008 R2 on a VMware virtualized server.
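
    If it helps to rule the timeouts in or out, these are the knobs usually involved (a sketch; the worker name tomcat1 is hypothetical and the values are only examples):
      # workers.properties (mod_jk): fail a stuck AJP exchange instead of waiting for httpd
      worker.tomcat1.type=ajp13
      worker.tomcat1.socket_timeout=60
      worker.tomcat1.reply_timeout=120000
      # httpd.conf: the 300-second ceiling suspected above
      TimeOut 300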

  • nginx reverse proxy to apache mod_wsgi doesn't work

    - by user11243
    I'm trying to run a Django site with Apache mod_wsgi, with nginx as the front end reverse-proxying into Apache. In my Apache ports.conf file:
      NameVirtualHost 192.168.0.1:7000
      Listen 192.168.0.1:7000
      <VirtualHost 192.168.0.1:7000>
          DocumentRoot /var/apps/example/
          ServerName example.com
          WSGIDaemonProcess example
          WSGIProcessGroup example
          Alias /m/ /var/apps/example/forum/skins/
          Alias /upfiles/ /var/apps/example/forum/upfiles/
          <Directory /var/apps/example/forum/skins>
              Order deny,allow
              Allow from all
          </Directory>
          WSGIScriptAlias / /var/apps/example/django.wsgi
      </VirtualHost>
    In my nginx config:
      server {
          listen 80;
          server_name example.com;
          location / {
              include /usr/local/nginx/conf/proxy.conf;
              proxy_pass http://192.168.0.1:7000;
              proxy_redirect default;
              root /var/apps/example/forum/skins/;
          }
          #error_page 404 /404.html;
          # redirect server error pages to the static page /50x.html
          error_page 500 502 503 504 /50x.html;
          location = /50x.html {
              root html;
          }
      }
    After restarting both Apache and nginx, nothing works: example.com simply hangs, or serves the index.html in my /var/www/ folder. I'd appreciate any advice to point me in the right direction. I've tried several tutorials online to no avail.
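
    One thing worth ruling out first (a sketch, assuming the included proxy.conf does not already do this): if the request is being answered by nginx's default server block, or reaches Apache without the original Host header, you can end up with the /var/www/ index page instead of the Django site. Forwarding the headers explicitly makes that easy to test:
      location / {
          proxy_pass http://192.168.0.1:7000;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }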

  • Unable to get ejabberd prebind to work

    - by cdecker
    I'm trying to get pre-binding of BOSH sessions to work. I want to be able to authenticate a user in my CMS and then log him in when he accesses the chat. For this I found https://github.com/smokeclouds/http_prebind; it all works fine and I was able to compile it with the following steps:
      rake configure
      sed -i 's/AUTH_USER/a_user/g' src/http_prebind.erl
      sed -i 's/AUTH_PASSWORD/a_password/g' src/http_prebind.erl
      sed -i 's/EJABBERD_DOMAIN/jabber.my.tld/g' src/http_prebind.erl
      rake build
      rake install
    and then adding the HTTP request bindings to the configuration:
      {5280, ejabberd_http, [
          {request_handlers, [
              {["http-prebind"], http_prebind}
          ]},
          %%captcha, http_bind, http_poll, http_prebind, web_admin
      ]}
      ]}.
    As far as I understand it, I should now be able to simply request a new session like this:
      curl -u a_user:a_password http://jabber.my.tld:5280/http-prebind/some_user
    But no matter what, I always get Unauthorized as the response. Any idea about this one? PS: I also tried Mod-Http-Pre-Bind, but as it does not require a password I would prefer to use http_prebind. PPS: Does the user with username AUTH_USER and password AUTH_PASSWORD actually have to exist? I'm currently using an admin account.

  • Bacula virtual backup job doesn't run, no output?

    - by Zoredache
    I am trying to get virtual backups working, but when I try to run a virtual backup job it appears to get created and then never actually runs. I have a full and a couple of incremental backups:
      status director
      JobId  Level  Files   Bytes    Status  Finished         Name
      ====================================================================
      1283   Full   10,565  1.963 G  OK      21-Dec-12 09:47  nms-Job
      1284   Incr   314     129.6 M  OK      21-Dec-12 09:49  nms-Job
      1285   Incr   230     147.2 M  OK      21-Dec-12 09:51  nms-Job
      1288   Incr   525     138.8 M  OK      21-Dec-12 11:25  nms-Job
    I attempt to start a job from bconsole like this:
      *run job=nms-Job level=VirtualFull
      Using Catalog "MySQL"
      Run Backup job
      JobName:  nms-Job
      Level:    VirtualFull
      Client:   nms-FileDaemon
      FileSet:  nms-FileSet
      Pool:     nms-pool (From Job resource)
      Storage:  File_d1 (From Pool resource)
      When:     2012-12-21 13:07:54
      Priority: 10
      OK to run? (yes/mod/no): Job queued. JobId=1291
    Then my new job just sits there, doing nothing. The JobStatus shows that the job was created, but it appears never to run. All the full and incremental backups terminate normally.
      *llist jobid=1291
      JobId: 1,291
      Job: nms-Job.2012-12-21_13.07.56_07
      Name: nms-Job
      PurgedFiles: 0
      Type: B
      Level: F
      ClientId: 4
      Name: nms-FileDaemon
      JobStatus: C
      SchedTime: 2012-12-21 13:07:54
      StartTime: 2012-12-21 13:07:56
      EndTime: 0000-00-00 00:00:00
      RealEndTime: 0000-00-00 00:00:00
      JobTDate: 1,356,124,076
      VolSessionId: 0
      VolSessionTime: 0
      JobFiles: 0
      JobErrors: 0
      JobMissingFiles: 0
      PoolId: 19
      PooLname: nms-pool
      PriorJobId: 0
      FileSetId: 11
      FileSet: nms-FileSet
    I am getting very frustrated that this isn't working, mostly because it isn't giving me any error logs or output at all: I submit the job and, as far as I can tell, nothing happens. Is there some status or debugging level I can set to get useful information about why this isn't working? What can I do to make this work? I was originally running Bacula 5.0.2 on Debian Squeeze; out of frustration I upgraded to 5.2.6 from the backports repository, hoping that a newer version might give me better results.
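
    Two things worth checking from bconsole (a sketch): turning up the debug level on the director and the storage daemon usually explains why a job stays in the Created state, and a VirtualFull consolidation also needs somewhere to write to, which Bacula takes from the pool's Next Pool setting:
      *setdebug level=200 dir
      *setdebug level=200 storage=File_d1
      *run job=nms-Job level=VirtualFull yes
      *messages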

  • Proper web server setup

    - by DMin
    I just got myself a Slicehost basic slice to play around with so I can learn how to set up web servers. I have Ubuntu 10.04.2 installed on the server. I was able to successfully get the server up and running from scratch; these were the things I did, following this tutorial. I know this is probably just a starter tutorial, so I was wondering if you can tell me what you like to do while setting up production servers. These are the steps that were followed: update and upgrade Ubuntu; sudo apt-get install apache2 php5-mysql libapache2-mod-php5 mysql-server; back up a copy of apache2.conf and edit it, setting 'ServerTokens Full' to 'ServerTokens Prod' and 'ServerSignature On' to 'ServerSignature Off'; back up php.ini and then change "expose_php = On" to "expose_php = Off"; restart Apache; install the Shorewall firewall; configure Shorewall to only accept HTTP and SSH connections (in the rules file); enable Shorewall on startup; add the website to the server: sudo usermod -g www-data root, sudo chown -R www-data:www-data /var/www, sudo chmod -R 775 /var/www. I want to make this Community Wiki but can't seem to find the option to do it. Please feel free to add any feedback on the process and on the things I am doing right/wrong. Much appreciated, thanks! :)
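
    One small addition that fits this checklist: verify from outside that the header changes actually took effect (a quick sketch):
      curl -I http://localhost/
      # expect a terse "Server: Apache" line and no "X-Powered-By: PHP/..." header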

  • Apache2 doesn't serve PHP-scripts correctly

    - by cmbrnt
    I've run into a problem with my Apache 2.2.16 configuration, running on Debian Squeeze. The problem is that it has stopped serving PHP5 scripts completely. When I try to access the sites with Google Chrome, it instead downloads a file called "download", which contains the contents of the script. This is of course not a good thing. It does serve plain HTML files perfectly. I've been at this for quite a while now, and after all the googling and troubleshooting I thought it was time to ask you guys. Here's what I've got: the php5 and libapache2-mod-php5 packages are installed; /etc/apache2/mods-available contains both php5.load and php5.conf, and these are symlinked from the mods-enabled directory; the /etc/php5/ directory is left untouched since the installation. Here's the contents of /etc/apache2/mods-available/php5.load:
      LoadModule php5_module /usr/lib/apache2/modules/libphp5.so
    And /etc/apache2/mods-available/php5.conf:
      <IfModule mod_php5.c>
          <FilesMatch "\.ph(p3?|tml)$">
              SetHandler application/x-httpd-php
          </FilesMatch>
          <FilesMatch "\.phps$">
              SetHandler application/x-httpd-php-source
          </FilesMatch>
          <IfModule mod_userdir.c>
              <Directory /home/*/public_html>
                  php_admin_value engine Off
              </Directory>
          </IfModule>
      </IfModule>
    What am I missing? This is a server with modified virtual hosts and the like, so I might have changed some setting that causes this problem, but simply purging and reinstalling is not an option so far, since the configuration is quite extensive. Any help would be great. Thanks.
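
    A short checklist in the same spirit as the points above (a sketch, using the stock Debian tools) that separates "module not loaded" from "handler overridden somewhere in a vhost":
      # is the module actually loaded into the running config?
      apache2ctl -t -D DUMP_MODULES | grep php5
      a2enmod php5
      /etc/init.d/apache2 restart
      # does any enabled vhost switch the engine off or remap the handler?
      grep -RiE "php_admin_value engine|SetHandler|AddType" /etc/apache2/sites-enabled/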

  • Relaying to tech "support" that the computer is actually broken

    - by Sion
    First, some background: I have a Dell Inspiron 15R M050; it is still under the Dell limited warranty and the Best Buy extended warranty. I am currently dual-booting Debian Squeeze and Windows 7; the only reason I go into Windows is to play video games, specifically Steam games. Issue: when I play my games in Windows I can play for anywhere from 5 minutes to 2 hours before I suffer a hard lock. I cannot alt-tab, ctrl-alt-delete or ctrl-shift-escape; nothing works for 2-3 minutes. After this hard-lock period everything runs fine, and I can continue the game for probably another hour at least before I suffer another lock. Games: Borderlands, Splinter Cell: Chaos Theory, StarCraft 2, Garry's Mod. What I have tried: running the diagnostic suite in the Dell BIOS, restoring the OEM Windows recovery partition on the HD, a fresh install of Windows 7 Professional, updating the BIOS, and calling tech support and having them run a software hardware-diagnostics suite. The question: from the research I have done, I think it might be a lack of thermal paste on the CPU. Could I go to Best Buy and have them run a hardware-level diagnostic, and would they then be able to tell Dell that there is a hardware issue? Or is this likely to be a different problem?

  • Arch Linux: How to handle patches which only you will use?

    - by user12932
    I'm using FreeRDP together with xmonad and it has been giving me a lot of trouble. The Super key (or "Windows key") is my mod key in xmonad, and it has been interfering with my FreeRDP usage rather annoyingly. Whenever I switched workspaces (or did anything else in xmonad involving the Super key), Windows (controlled by the FreeRDP instance in focus) registered a keypress as well. That event, combined with the loss of focus, got the Super key stuck in Windows indefinitely: pressing the keys d and r would first show my desktop and then open the Run dialog (as if I were holding the Windows key down constantly). I've tried several versions of FreeRDP, but all exhibited this annoying behavior, so I resorted to patching FreeRDP myself to simply ignore the left Super key on my keyboard. I love free software for a lot of reasons (especially the ability to alter things like this myself), but I still find it annoying to patch and rebuild FreeRDP on every version (and dependency) change. How do you deal with situations like this? Is there even a "right way" to resolve this issue?
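
    On Arch, the usual way to keep a one-off patch from becoming a chore is to carry it in a local PKGBUILD, so every rebuild is a single makepkg run. A sketch, after adding the patch file to the source array and its checksum (the patch filename and source directory name are hypothetical, and older makepkg applies patches at the top of build() rather than in prepare()):
      # fragment of a locally kept freerdp PKGBUILD
      prepare() {
        cd "$srcdir/freerdp-$pkgver"
        patch -p1 -i "$srcdir/ignore-left-super.patch"
      }
      # rebuild and install in one go
      makepkg -si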

  • Move a screen session back to its original PID

    - by cron410
    I installed McMyAdmin (a Minecraft manager) on Ubuntu 12.04 32-bit. I wrote my own service to start McMyAdmin (a .NET app running in Mono) in its own screen session and to inject McMyAdmin commands into that session from the init.d script. It has been running great! Today I decided to start installing a Source dedicated server (a Counter-Strike pro mod). I determined it was going to be a long download, so I quit the process and fired up a fresh screen session called "source". I pasted the command in, but it had a space at the beginning and bash complained about ignoring semaphores or some such. I detached and reattached the session, and it was sliding like butter. I ctrl+a d out of the session and started exploring the new folder structure to figure out where I needed to place a symbolic link. I went to resume that screen and this is what I saw:
      $ screen -r source
      There are several suitable screens on:
        20091.source    (12/02/12 22:59:53)  (Detached)
        19972.source    (12/02/12 22:57:31)  (Detached)
        917.minecraft   (11/30/12 15:30:37)  (Attached)
    It appears I am connected to the minecraft screen?!? So I attached to the other screens one at a time: minecraft is running in 19972.source and the Source dedicated server is running in 20091.source. So how on earth did I move the minecraft process to another session called source, and why is my main terminal now "attached" to the minecraft screen? More: I just used exit to quit the PuTTY session, then logged back in; it's still the same. I did that 3 more times and now the minecraft screen is gone and everything is acting as it should, except of course for the session name and start time of the "new" minecraft screen. Should I just submit this as a bug for GNU screen?

  • Using wildcard domains to serve images without http blocking

    - by iopener
    I read that browsers sometimes block waiting for multiple images from the same host, and I'm trying to do everything I can to speed up page load times. One caveat: I need to serve files over HTTPS. Any opinions on whether this is feasible: set up a wildcard cert for *.domain.com; whenever I need an image, generate a number from a hash of the filename mod 5 and append it to an 'img' subdomain (e.g. img1.domain.com, img4.domain.com, img3.domain.com, etc.); the hash means a given filename always uses the same subdomain, so the browser should still be able to cache the images; configure a dynamic virtual host record to point all img#. subdomains to /var/www/img. I am looking for feedback on this plan. My concerns are: Will I get warnings when my page has https:// links to multiple subdomains? Is the dynamic virtual host record I'm talking about even possible? Considering the amount of processing this would require, is it likely to produce any kind of overall benefit? I'm probably averaging half a dozen images per page, with only half being changed on each page refresh. Thanks in advance for your feedback.
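
    The "dynamic virtual host record" part is the easy bit: a single vhost whose aliases cover every img subdomain serves them all from one directory. A sketch with illustrative names; the certificate paths are placeholders:
      <VirtualHost *:443>
          ServerName img.domain.com
          ServerAlias img0.domain.com img1.domain.com img2.domain.com img3.domain.com img4.domain.com
          DocumentRoot /var/www/img
          SSLEngine on
          SSLCertificateFile    /etc/ssl/certs/wildcard.domain.com.crt
          SSLCertificateKeyFile /etc/ssl/private/wildcard.domain.com.key
      </VirtualHost>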

  • Merely installing PHP5 causes my AWS Ubuntu server to die minutes later from a massive CPU spike

    - by Mark Amery
    I have an AWS server with Ubuntu 11.04 as the OS, running an Apache2 web server (incidentally Python-based, using Django). We recently needed to add PHP5 support so we could use a third-party PHP library (incidentally for serving minified versions of JS and CSS files). However, for no reason any of us can discern, if we simply run sudo apt-get install php5 on the server, the install appears to finish successfully, but without us taking any further action (including not yet running sudo apt-get install libapache2-mod-php5, which I think would be the next step if everything worked), and without actually running any PHP scripts on the server, a few minutes later the server becomes impossible to connect to. The 'Monitoring' tab for the server in the EC2 Management Console shows that, a while after the installation, CPU usage spikes to 100% and stays there permanently (until we reboot the server from the AWS Console). After rebooting, the server also reliably dies within a few (between 0 and 10) minutes. We restored the server to a pre-PHP state from an AMI image, observed that it was stable, then tried installing PHP5 again and watched the server die in exactly the same way, so we're pretty much certain that installing PHP5 is what causes the symptoms. What on earth could be causing this behaviour, and how can we get PHP installed on the server without it dying?
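
    Whatever the trigger turns out to be, catching the spike in the act narrows it down quickly; a troubleshooting sketch for the minutes right after the install:
      # see which process climbs to the top as the CPU ramps up
      top -b -n 1 | head -n 20
      # and what Apache and the system logged around the spike
      tail -n 100 /var/log/apache2/error.log /var/log/syslog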

  • Nginx flv audio pseudo stream works but video is not loading

    - by sarah
    I am working on a development server for a company and they want the nginx web server for it. Their requirements are that it should be capable of the following: hotlink protection, MP4 and FLV pseudo-streaming, and secure streaming. nginx fulfills these requirements. I have been configuring the server for the past 2 days; as I am new to this field, I have only achieved hotlink prevention so far. The problem I am stuck on is FLV pseudo-streaming: getting MP4 pseudo-streaming to work was trivial, but FLV has me stuck. I converted my FLV videos with the flvmdi tool to insert many keyframes, but when I try to seek the video to one of the keyframes generated by flvmdi (e.g. test.flv?start=2681223), the video does not load, although audio pseudo-streaming works fine, so it does not look like a problem with my FLV configuration in nginx.conf. The guide I used to compile nginx-1.2.1 is http://h264.code-shop.com/trac/wiki/Mod-H264-Streaming-Nginx-Version2, adding the additional module --with-http_flv_module. This forum is really active, so I hope I can resolve my problem as soon as you guys provide some guidance.
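
    For reference, the flv module side of the config is just a location with the flv handler; a sketch that assumes the ?start= value the player sends is the byte offset of a keyframe taken from the injected metadata, which is what the module expects:
      location ~ \.flv$ {
          flv;
      }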

  • Is there a faster way to change default apps associated with file types on OS X?

    - by Lri
    Is there anything more convenient than using RCDefaultApp or Magic Launch, or than repeatedly pressing the Change All button in Finder's information panels? I thought about writing a shell script that would modify the CFBundleDocumentTypes arrays in Info.plist files, but each app has multiple keys (sometimes an icon) that would need to be changed, and lsregister can't be used to make specific modifications to the Launch Services database:
      $ `locate lsregister` -h
      lsregister: [OPTIONS] [ <path>... ]
                  [ -apps <domain>[,domain]... ]
                  [ -libs <domain>[,domain]... ]
                  [ -all  <domain>[,domain]... ]
      Paths are searched for applications to register with the Launch Service database.
      Valid domains are "system", "local", "network" and "user". Domains can also be specified using only the first letter.
       -kill     Reset the Launch Services database before doing anything else
       -seed     If database isn't seeded, scan default locations for applications and libraries to register
       -lint     Print information about plist errors while registering bundles
       -convert  Register apps found in older LS database files
       -lazy n   Sleep for n seconds before registering/scanning
       -r        Recursive directory scan, do not recurse into packages or invisible directories
       -R        Recursive directory scan, descending into packages and invisible directories
       -f        force-update registration even if mod date is unchanged
       -u        unregister instead of register
       -v        Display progress information
       -dump     Display full database contents after registration
       -h        Display this help
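
    One scriptable route that is sometimes suggested is writing handler entries straight into the Launch Services preferences rather than into each app's Info.plist. This is a hypothetical sketch: the LSHandlers key layout is assumed from the pre-10.10 plist format, the content type and bundle ID are only examples, and the database may need to be reseeded (or Finder relaunched) before the change is picked up:
      defaults write com.apple.LaunchServices LSHandlers -array-add \
        '{ LSHandlerContentType = "public.plain-text"; LSHandlerRoleAll = "com.macromates.textmate"; }'
      killall Finder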

  • How can I make grub2 boot into Windows 7?

    - by Grzenio
    I had Windows 7 installed on my system, then I installed Debian testing with GRUB 2 as its boot manager. Initially I couldn't see a Windows entry in GRUB at all, so I ran:
      aptitude install os-prober kcpuload
      update-grub
    Now I can see the entry, but when I select it I only get the Windows 7 system-restore environment instead of the real thing. Any ideas how to make it work? EDIT: I tried the suggested approach of adding a new file to /etc/grub.d, which generated an entry in grub.cfg, but it does not appear in the GRUB menu on boot :( I have this:
      grzes:/home/ga# cat /etc/grub.d/11_Windows
      #! /bin/sh -e
      echo Adding Windows >&2
      cat << EOF
      menuentry "Windows 7" {
          set root=(hd0,2)
          chainloader +1
      }
    And I have the following grub.cfg file:
      grzes:/home/ga# cat /boot/grub/grub.cfg
      #
      # DO NOT EDIT THIS FILE
      #
      # It is automatically generated by /usr/sbin/grub-mkconfig using templates
      # from /etc/grub.d and settings from /etc/default/grub
      #
      ### BEGIN /etc/grub.d/00_header ###
      if [ -s $prefix/grubenv ]; then
        load_env
      fi
      set default="0"
      if [ ${prev_saved_entry} ]; then
        set saved_entry=${prev_saved_entry}
        save_env saved_entry
        set prev_saved_entry=
        save_env prev_saved_entry
        set boot_once=true
      fi
      function savedefault {
        if [ -z ${boot_once} ]; then
          saved_entry=${chosen}
          save_env saved_entry
        fi
      }
      insmod ext2
      set root=(hd0,3)
      search --no-floppy --fs-uuid --set 6ce3ff31-0ef7-41df-a6f5-b6b886db3a94
      if loadfont /usr/share/grub/unicode.pf2 ; then
        set gfxmode=640x480
        insmod gfxterm
        insmod vbe
        if terminal_output gfxterm ; then true ; else
          # For backward compatibility with versions of terminal.mod that don't
          # understand terminal_output
          terminal gfxterm
        fi
      fi
      set locale_dir=/boot/grub/locale
      set lang=en
      insmod gettext
      set timeout=5
      ### END /etc/grub.d/00_header ###
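
    For comparison, the usual shape of a hand-written entry is a 40_custom chainloader stanza; a sketch in which (hd0,2) is taken from the snippet above, and if that turns out to be the recovery partition rather than the real Windows system partition the entry will keep landing in system restore. Note that update-grub only picks a /etc/grub.d file up if it is executable, and the here-document above will only emit its entry if the closing EOF is actually present in the file:
      # appended to /etc/grub.d/40_custom (keep the exec tail line that ships with it)
      menuentry "Windows 7" {
          insmod part_msdos
          insmod ntfs
          set root='(hd0,2)'
          chainloader +1
      }
    followed by chmod +x /etc/grub.d/40_custom and update-grub.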

  • Optimizing Apache for large file serving

    - by D_Guy13
    I have a random problem with Apache that I can't quite figure out. Here is my setup: Windows Server 2008 R2, 64-bit, 5 GB RAM, an SSD with 200 MB/s read/write, and a dual-core CPU @ 2.1 GHz. A dump from mod_status:
      Server Version: Apache/2.4.7 (Win32) mod_limitipconn/0.24 mod_antiloris/0.5.2 PHP/5.5.9
      Server MPM: WinNT
      Apache Lounge VC11
      Server Built: Nov 21 2013 20:13:01
      Current Time: Thursday, 21-Aug-2014 23:38:06 W. Europe Daylight Time
      Restart Time: Thursday, 21-Aug-2014 20:30:47 W. Europe Daylight Time
      Parent Server Config. Generation: 1
      Parent Server MPM Generation: 1
      Server uptime: 3 hours 7 minutes 18 seconds
      Server load: -1.00 -1.00 -1.00
      Total accesses: 283025 - Total Traffic: 1172.2 GB
      25.2 requests/sec - 106.8 MB/second - 4.2 MB/request
      62 requests currently being processed, 388 idle workers
    I am serving large .zip and .iso files (500 MB - 1.5 GB) using mod_xsendfile. The setup works and is running fine, but CPU usage is very unstable: it jumps between 10% and 90% all the time, and the server goes down when it hits 100%. In that case I have to hard-restart the server. The server is pushing traffic at 30 Mbps. Is there anything else I should think about to get more stable CPU usage? Is that CPU usage normal? Could switching to Linux help me achieve better CPU usage?
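
    The knobs that usually matter for this workload on the WinNT MPM are the thread count and kernel sendfile; a sketch with illustrative values, not a tuned recommendation:
      # httpd.conf
      <IfModule mpm_winnt_module>
          ThreadsPerChild 256
          MaxConnectionsPerChild 0
      </IfModule>
      # hand static transfers to the kernel where possible
      EnableSendfile On
      EnableMMAP Off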
