Search Results

Search found 11601 results on 465 pages for 'obiee installs and config'.


  • New LAMP server, all links redirecting to localhost

    - by serilain
    I've got a very frustrating issue with what should be a bespoke install of Ubuntu 12.04, the LAMP config provided in apt-get install lamp-server^, and a web application called The Fascinator. After installing those three things and making no changes to any of them, I can access the application through a public IP (http://lib-hf1.lib.sfu.ca:9997 for the curious), but the domain of every link within that page is changed to localhost, including links to images and CSS, so nothing loads correctly and all of the links are broken. I've Googled around and found some people who appear to be having this issue with WP and Drupal, but nothing makes reference to a system-wide setting, and no one using the Fascinator seems to be having this issue. I have a faint memory that this might have something to do with mod_rewrite, but I'm pretty well stumped.
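    One thing worth checking first (a sketch, not specific to The Fascinator): Apache falls back to localhost when no global ServerName is set, and applications that build absolute URLs from what the web server reports will inherit it. The hostname below is taken from the question; the file locations are the Ubuntu defaults.

        # /etc/apache2/conf.d/servername.conf (or apache2.conf)
        ServerName lib-hf1.lib.sfu.ca

        # then reload Apache
        sudo service apache2 reload

    If The Fascinator keeps its own base-URL setting in its config, that value would need the public hostname as well.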

    Read the article

  • 3 or 4 monitors with Nvidia and Ubuntu

    - by Jason
    I saw that you are (were?) running 4 monitors with Ubuntu 8.10 and two Nvidia cards (http://stackoverflow.com/questions/27113/how-to-use-3-monitors). I was curious whether you were doing this with Xinerama, a hacked-up TwinView config, multiple X screens, or some other method? Does it work with compiz? I intend to run my Dell 30" in the middle with two 1280x1024 monitors on the sides, continue to use one X screen, and run compiz, on Ubuntu 9.04. Currently I am using 2 monitors with TwinView and compiz, which runs fantastically. I just can't get the third monitor running (unless I enable it in its own X screen and then enable Xinerama so windows can be dragged as if it were all one X screen, but this breaks compiz, and I don't care much for having separate X screens). I am very interested in knowing how you set up 4 monitors with 2 GPUs. Thanks!
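    For what it's worth, a common layout from that era was TwinView within each card plus Xinerama across the two X screens. A sketch of the xorg.conf pieces (BusIDs and identifiers are assumptions; check lspci), with the caveat that compiz generally refuses to run once Xinerama is enabled:

        Section "Device"
            Identifier "GPU0"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"        # assumption; check lspci
        EndSection

        Section "Device"
            Identifier "GPU1"
            Driver     "nvidia"
            BusID      "PCI:2:0:0"        # assumption
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "GPU0"
            Option     "TwinView" "1"     # two monitors on the first card
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device     "GPU1"
        EndSection

        Section "ServerLayout"
            Identifier "Layout0"
            Screen     0 "Screen0"
            Screen     1 "Screen1" RightOf "Screen0"
            Option     "Xinerama" "1"     # merges the screens, but compiz will generally not run
        EndSection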

    Read the article

  • Getting VSFTP running on Fedora 14

    - by Louis W
    Having trouble getting vsftpd running on Fedora 14. Here is what I have done so far; please let me know if I am missing something. When I try to connect through FTP it says the connection timed out.
    Installed vsftpd with yum:
        yum install vsftpd
    Edited the config file:
        vi /etc/vsftpd/vsftpd.conf
    Started the service and made sure it would always start up:
        service vsftpd start
        chkconfig vsftpd on
    Added and configured a new user:
        /usr/sbin/useradd upload
        /usr/bin/passwd upload
        usermod -c "This user cannot login to a shell" -s /sbin/nologin upload
    Added firewall rules:
        iptables -A INPUT -p tcp --dport 21 -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 20 -j ACCEPT
        service iptables save
        service iptables restart
    Checked netstat (in reply to comment below):
        tcp  0  0 0.0.0.0:21  0.0.0.0:*  LISTEN  23752/vsftpd
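    A common cause of this exact symptom is that the control connection on port 21 works but the passive data connection is blocked. A sketch of one way to handle it (the port range is an assumption):

        # /etc/vsftpd/vsftpd.conf -- pin the passive data ports
        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100

        # open that range and load the FTP connection-tracking helper
        iptables -I INPUT -p tcp --dport 40000:40100 -j ACCEPT
        modprobe nf_conntrack_ftp
        service iptables save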

    Read the article

  • CF64 on Server 2008x64 breaks pretty much everything else, how to fix?

    - by agarren
    I just installed 64-bit ColdFusion (CF64) on a Windows Server 2008 x64 machine. Previously a whole array of classic ASP and ASP.NET apps were functioning; after the CF install they're not. I'm getting an HTTP 500 on everything that isn't ColdFusion. I believe it's a mapping issue: CF seems to have dropped a wildcard handler mapping into the IIS config.
        Module        IsapiModule
        Notification  ExecuteRequestHandler
        Handler       AboMapperCustom-89420
        Error Code    0x800700c1
    The upside (if you can call it that) is that the CF install took and seems to be functioning. It appears to have dropped in
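    A sketch of how to inspect and remove the offending mapping with appcmd (the handler name is the one from the error above; the site name is a placeholder, and scoping the removal to the non-CF sites rather than the server level keeps ColdFusion working):

        %windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/handlers | findstr AboMapper
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/handlers /-[name='AboMapperCustom-89420']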

    Read the article

  • Reject recipient in postfix mail relay

    - by galets
    I have about 3 known email addresses in my domain which don't exist and to which a lot of spam is sent. Some of this spam is pretty heavy, and I'm wasting a lot of traffic on it, so I don't even want to receive emails if their destination is one of those 3 addresses. Since I know that the users don't exist, I would like postfix to reject the emails during RCPT TO: negotiation. Basically, all I want is to update some config with those 3 addresses so that every email sent to them fails to come in. I want to stress the following:
      - postfix works as a relay for the domain; there are no local users
      - postfix has no knowledge about the validity of other emails within the domain, so it cannot simply reject unknown recipients
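    A minimal sketch of the usual approach (addresses and paths are placeholders): a recipient access map consulted under smtpd_recipient_restrictions, which rejects at RCPT TO time before any message data is transferred.

        # /etc/postfix/reject_recipients
        ghost1@example.com    REJECT User unknown
        ghost2@example.com    REJECT User unknown
        ghost3@example.com    REJECT User unknown

        # /etc/postfix/main.cf
        smtpd_recipient_restrictions =
            check_recipient_access hash:/etc/postfix/reject_recipients,
            permit_mynetworks,
            reject_unauth_destination

        # rebuild the map and reload
        postmap /etc/postfix/reject_recipients
        postfix reload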

    Read the article

  • IIS7 Custom ASP.NET Errors

    - by Nathan
    I'm trying to set up a custom error page for the IIS 7 404.13 (Content Length Too Large) error. Here are the relevant sections of my web.config file:
        <system.webServer>
          <httpErrors errorMode="Custom" existingResponse="Replace">
            <remove statusCode="404" subStatusCode="13" />
            <error statusCode="404" subStatusCode="13" prefixLanguageFilePath="" path="/FileUpload/Test.aspx" responseMode="ExecuteURL" />
          </httpErrors>
          <security>
            <requestFiltering>
              <requestLimits maxAllowedContentLength="10240" />
            </requestFiltering>
          </security>
        </system.webServer>
    The response that is being sent back to the browser is blank. The Test.aspx file is not blank. Any idea what's going on here?
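    One thing worth trying (a sketch; the file path is an assumption): the blank response often comes from re-entering the ASP.NET pipeline via ExecuteURL for a request whose body was already rejected for being too large. Serving a static file for this sub-status avoids that round trip.

        <httpErrors errorMode="Custom" existingResponse="Replace">
          <remove statusCode="404" subStatusCode="13" />
          <error statusCode="404" subStatusCode="13" path="C:\inetpub\custerr\upload-too-large.html" responseMode="File" />
        </httpErrors>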

    Read the article

  • Bash: Variable substitution in variable name with default value

    - by krissi
    I have the following variables:
        # config file
        MYVAR_DEFAULT=123
        MYVAR_FOO=456
        #MYVAR_BAR unset

        # program
        USER_INPUT=FOO
        TARGET_VAR=<need to be set>
    If USER_INPUT is "foo", I want TARGET_VAR to be the value of MYVAR_FOO (TARGET_VAR=456). If USER_INPUT is "bar", I want TARGET_VAR to be set to MYVAR_DEFAULT (123), because MYVAR_BAR is unset. I would prefer it to be sh-compatible and a substitution string, but it could also be bash-only and/or in a function. I have these snippets:
        # Default values for a variable (sh-compatible)
        echo ${MYVAR_FOO-$MYVAR_DEFAULT}

        # Uppercase (bash-compatible)
        echo ${USER_INPUT^^}
    I would need something like this:
        TARGET_VAR="${MYVAR_${USER_INPUT^^}-$MYVAR_DEFAULT}"
        # or
        somecommand -foo "${MYVAR_${USER_INPUT^^}-$MYVAR_DEFAULT}"
    This is to switch a bunch of variables between multiple "profiles". In the example, FOO and BAR are profiles. New profiles should be easy to add; in this example there would also be an implicit profile named BAZ, with all variables at their default values. Unfortunately it is not that easy. Do you have an idea how to solve this? Thanks in advance, krissi
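    A bash-only sketch using indirect expansion (nested expansion like ${MYVAR_${USER_INPUT^^}} is not allowed, so the variable name is built first and then dereferenced):

        profile="${USER_INPUT^^}"                   # FOO, BAR, ...
        varname="MYVAR_${profile}"
        TARGET_VAR="${!varname:-$MYVAR_DEFAULT}"    # falls back when MYVAR_BAR is unset

    For plain sh, the same effect needs eval and tr instead of ${^^}:

        varname="MYVAR_$(echo "$USER_INPUT" | tr '[:lower:]' '[:upper:]')"
        eval "TARGET_VAR=\${$varname:-\$MYVAR_DEFAULT}"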

    Read the article

  • Easily recreate a server's "state" [closed]

    - by Brandon Wamboldt
    I want the ability to set up new servers for dev/testing/prod very easily. The reasons for being able to set up a new dev VM are obvious, but for prod my concern is adding a new production server or migrating to a new one. I assume a traditional backup solution won't work, as the hardware may be different, so the binaries/config might be different. I want to get experience with puppet anyway, so I was thinking about creating a manifest that would set up my users, install Postgres, Nginx, PHP-FPM, etc., and configure them the way I specify. Then I could install puppet on a new server, copy down my manifest, and apply it locally. This would make keeping my server configs in sync easier too. Is there a better approach I'm not aware of, and does my approach have any pitfalls?
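    A minimal sketch of the kind of standalone manifest described (package names, the "base" module layout, and paths are assumptions), applied locally with puppet apply:

        # site.pp
        user { 'deploy':
          ensure     => present,
          managehome => true,
        }

        package { ['postgresql', 'nginx', 'php5-fpm']:
          ensure => installed,
        }

        file { '/etc/nginx/nginx.conf':
          source => 'puppet:///modules/base/nginx.conf',   # assumes a "base" module holding the configs
          notify => Service['nginx'],
        }

        service { ['postgresql', 'nginx', 'php5-fpm']:
          ensure  => running,
          enable  => true,
          require => Package['postgresql', 'nginx', 'php5-fpm'],
        }

        # on a new box:
        #   apt-get install puppet && puppet apply site.pp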

    Read the article

  • How to set the PHP Api Version for phpize

    - by Tom Frost
    I'm upgrading PHP on my server, but I'm running into a problem with phpize and compiling external modules. phpize -v reports:
        Configuring for:
        PHP Api Version:         20041225
        Zend Module Api No:      20090115
        Zend Extension Api No:   220090115
    But on my test server (which I'm trying to replicate) I get this:
        Configuring for:
        PHP Api Version:         20090626
        Zend Module Api No:      20090626
        Zend Extension Api No:   220090626
    I'm running Debian squeeze, pulling the php 5.3.0-2 packages from the experimental repo. The difference between the two servers is that the first server has had old versions of PHP on it, and the test server was installed with php 5.3.0-2 from the start. I've attempted uninstalling all PHP packages from the first server (using --purge to get rid of all the config files) and re-installing 5.3 fresh, but I'm still having the same issue. Help!
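    phpize reads those API numbers from the installed development headers, so a leftover php5-dev (or a second phpize earlier on the PATH) is the usual culprit. A few checks, as a sketch (the include path is the usual Debian location):

        which -a phpize                 # more than one phpize installed?
        dpkg -S "$(which phpize)"       # which package owns it
        dpkg -l | grep php5-dev         # is php5-dev at the same version as php5?
        grep PHP_API_VERSION /usr/include/php5/main/php.h
        apt-get install --reinstall php5-dev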

    Read the article

  • How to configure a custom error page in Plesk 9.3 for a non-existent folder?

    - by Junior Mayhé
    I'm trying to configure Plesk to show website visitors a custom error HTML page. The currently hosted site is an ASP.NET site, which shows its custom errors via the error403.aspx and error404.aspx files. Now, to comply with Plesk, I've created error_docs with the required files like forbidden.html, etc. When a user tries to navigate to http://mysite.com/a_missing_page.aspx, the visitor is redirected to error404.aspx correctly. But when a user tries to navigate to a non-existent directory, http://mysite.com/a_missing_folder/, the site shows the regular IIS 404 page. Plesk has Custom error documents activated in the web hosting settings. The ASP.NET error pages defined in web.config show fine, but it seems Plesk won't show its custom HTML error documents. The bottom line here is setting up a custom error page for a directory. Is it possible to do this using Plesk, or do I have to change it manually in IIS?
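    A request for a missing directory never reaches ASP.NET, so it has to be handled at the IIS level. A sketch of the web.config section that covers it (this is the manual-IIS route; whether Plesk preserves a hand-edited httpErrors section is a separate question):

        <system.webServer>
          <httpErrors errorMode="Custom">
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" path="/error404.aspx" responseMode="ExecuteURL" />
            <remove statusCode="403" subStatusCode="-1" />
            <error statusCode="403" path="/error403.aspx" responseMode="ExecuteURL" />
          </httpErrors>
        </system.webServer>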

    Read the article

  • What is the correct configuration for multiple apache2 vhosts and multiple php5-fpm pools?

    - by farinspace
    I have a group of sites (group A) which I would like to run using one php5-fpm pool, and a second group of sites (group B) which I would like to run using a second php5-fpm pool. I can define/create the pool in the fpm.conf file, and I have confirmed that it is running with the different user/group I've defined. However, I am unclear as to how to set up the Apache virtual host config. I've tried a few apache2 configurations, but I can't seem to get the second pool wired in. If you've done this, please help.
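    A sketch of one way to wire this up (assumes Apache 2.4 with mod_proxy_fcgi; pool names, ports, and paths are placeholders; on Apache 2.2 the same idea works with mod_fastcgi instead):

        ; php5-fpm pools
        [groupA]
        user = groupa
        group = groupa
        listen = 127.0.0.1:9001

        [groupB]
        user = groupb
        group = groupb
        listen = 127.0.0.1:9002

        # Apache vhosts -- each group of sites points at its own pool
        <VirtualHost *:80>
            ServerName sitea.example.com
            DocumentRoot /var/www/sitea
            ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9001/var/www/sitea/$1
        </VirtualHost>

        <VirtualHost *:80>
            ServerName siteb.example.com
            DocumentRoot /var/www/siteb
            ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9002/var/www/siteb/$1
        </VirtualHost>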

    Read the article

  • kerberos5 unable to authenticate

    - by wolfgangsz
    We have a Debian file server, configured to serve up Samba shares using winbind and Kerberos. It is configured to authenticate against a Windows 2003 DC. All worked fine until recently, when I did a maintenance update on all packages. Since then, all attempts to connect to any of the shares (and also to just log into the box) fail. The logs contain this message, which seems to be at the root of the evil:
        [2009/09/14 12:04:29, 10] libsmb/clikrb5.c:get_krb5_smb_session_key(685)
          Got KRB5 session key of length 16
        [2009/09/14 12:04:29, 10] libsmb/clikrb5.c:unwrap_pac(280)
          authorization data is not a Windows PAC (type: 141)
        [2009/09/14 12:04:29, 3] libads/kerberos_verify.c:ads_verify_ticket(430)
          ads_verify_ticket: did not retrieve auth data. continuing without PAC
    From there on it fails to find the user account on the DC, subsequently remaps the user to the user nobody, and then (rightly) refuses to grant access to the share. However, the following works just fine:
        wbinfo -a user%password
    I was wondering whether anybody has had this problem and could provide some insight. I would be happy to provide neutralised config files.
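    A few checks worth running after an update like this (a sketch; they only verify the domain join and ticket path, they are not a guaranteed fix, and the realm is a placeholder):

        net ads testjoin                     # is the machine account still valid?
        wbinfo -t                            # winbind's trust secret against the DC
        kinit administrator@EXAMPLE.LOCAL
        klist
        # if the join or keytab is broken, rejoining regenerates both:
        net ads join -U administrator
        /etc/init.d/winbind restart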

    Read the article

  • IIS7 FTP Setup - An error occured during the authentication process. 530 End Login failed

    - by robmzd
    I'm having a problem very similar to "IIS 7.5 FTP IIS Manager Users Login Fail (530) on Windows Server 2008 R2 Standard". I have created an FTP site and an IIS Manager user, but am having trouble logging in. I could really do with getting this working with the IIS Manager user rather than by creating a new system user, since I'm fairly restricted with those accounts. Here is the output when connecting locally through the command prompt:
        C:\Windows\system32>ftp localhost
        Connected to MYSERVER.
        220 Microsoft FTP Service
        User (MYSERVER:(none)): MyFtpLogin
        331 Password required for MyFtpLogin.
        Password: ***
        530-User cannot log in.
         Win32 error:   Logon failure: unknown user name or bad password.
         Error details: An error occured during the authentication process.
        530 End Login failed.
    I have followed the guides "Configure FTP with IIS Manager Authentication in IIS 7" and "Adding FTP Publishing to a Web Site in IIS 7". Things I have done and checked:
      - The FTP Service is installed (along with FTP Extensibility).
      - Local Service and Network Service have been given access to the site folder.
      - Permission has been given to the config files.
      - Granted read/write permissions to the FTP root folder.
      - The Management Service is installed and running.
      - Enable remote connections is ticked, with 'Windows credentials or IIS Manager credentials' selected.
      - The IIS Manager user has been added to the server (root connection in the IIS connections branch).
      - The new FTP site has been added.
      - IIS Manager Authentication has been added to the FTP authentication providers.
      - The IIS Manager user has been added to the IIS Manager Permissions list for the site.
      - Added read/write permissions for the user in the FTP Authorization Rules.
    Here's a section of the applicationHost.config file associated with the FTP site:
        <site name="MySite" id="8">
          <application path="/" applicationPool="MyAppPool">
            <virtualDirectory path="/" physicalPath="D:\Websites\MySite" />
          </application>
          <bindings>
            <binding protocol="http" bindingInformation="*:80:www.mydomain.co.uk" />
            <binding protocol="ftp" bindingInformation="*:21:www.mydomain.co.uk" />
          </bindings>
          <ftpServer>
            <security>
              <ssl controlChannelPolicy="SslAllow" dataChannelPolicy="SslAllow" />
              <authentication>
                <basicAuthentication enabled="true" />
                <customAuthentication>
                  <providers>
                    <add name="IisManagerAuth" enabled="true" />
                  </providers>
                </customAuthentication>
              </authentication>
            </security>
          </ftpServer>
        </site>
        ...
        <location path="MySite">
          <system.ftpServer>
            <security>
              <authorization>
                <add accessType="Allow" users="MyFtpLogin" permissions="Read, Write" />
              </authorization>
            </security>
          </system.ftpServer>
        </location>
    If I connect to the site (not FTP) from my local IIS Manager using the same IIS Manager account details, then it connects fine; I can browse files and change settings as I would locally (though I don't seem to have an option to upload files). Trying to connect via FTP, though, either through the browser or FileZilla etc., gives me:
        Status:   Resolving address of www.mydomain.co.uk
        Status:   Connecting to 123.456.12.123:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command:  USER MyFtpLogin
        Response: 331 Password required for MyFtpLogin.
        Command:  PASS *********
        Response: 530 User cannot log in.
        Error:    Critical error
        Error:    Could not connect to server
    I have tried collecting ETW traces for FTP sessions; in the logs I get a FailBasicLogon followed by a FailCustomLogon, but no other info:
        FailBasicLogon    SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | ErrorCode=0x8007052E
        StartCustomLogon  SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | LogonProvider=IisManagerAuth
        StartCallProvider SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | provider=IisManagerAuth
        EndCallProvider   SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55}
        EndCustomLogon    SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55}
        FailCustomLogon   SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | ErrorCode=0x8007052E
        FailFtpCommand    SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | ReturnValue=0x8007052E | SubStatus=ERROR_DURING_AUTHENTICATION
    In the normal FTP logs I just get:
        2012-10-23 16:13:11 123.456.12.123 - 123.456.12.123 21 ControlChannelOpened - - 0 0 e2d4e935-fb31-4f2c-af79-78d75d47c18e -
        2012-10-23 16:13:11 123.456.12.123 - 123.456.12.123 21 USER MyFtpLogin 331 0 0 e2d4e935-fb31-4f2c-af79-78d75d47c18e -
        2012-10-23 16:13:11 123.456.12.123 - 123.456.12.123 21 PASS *** 530 1326 41 e2d4e935-fb31-4f2c-af79-78d75d47c18e -
        2012-10-23 16:13:11 123.456.12.123 - 123.456.12.123 21 ControlChannelClosed - - 0 0 e2d4e935-fb31-4f2c-af79-78d75d47c18e -
    If anyone has any ideas then I would be very grateful to hear them. Many thanks.
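    Two quick, read-only checks (a sketch; the site and user names are the ones from the config above) that narrow down whether the failure is the credential itself or the FTP-side wiring, since 0x8007052E is only the generic logon-failure code:

        rem is the IIS Manager user actually registered at server level?
        findstr /i "MyFtpLogin" %windir%\system32\inetsrv\config\administration.config

        rem is the FTP authorization rule in effect for the site?
        %windir%\system32\inetsrv\appcmd.exe list config "MySite" -section:system.ftpServer/security/authorization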

    Read the article

  • Redirect email based on Source IP / Hostname

    - by PostMan
    So, we have a situation where we would like to be able to redirect (ignoring the original destination) emails coming through our Exchange server (locally sent) to a different email address. The situation is as follows: we are testing a new server which hosts a lot of our custom applications / scheduled tasks (as does the production version of this server). Instead of changing config files / parameters for these applications we'd rather leave them as they are, as we want to be as close to production as possible; however, we don't want the emails to go to the staff they would normally go to; we'd rather they were sent to a test email address where we are able to confirm the results. The emails look exactly like the production emails (same From / To / CC / BCC) except for their source, hence why we'd like to know whether the ability to redirect based on hostname exists. Cheers.
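    A sketch of a transport rule keyed on the sending server's IP (the IP and mailbox are placeholders; the sender-IP-range condition is only available on newer Exchange versions such as 2013+, so on older versions a dedicated receive connector or a header-based condition would be the fallback):

        New-TransportRule -Name "Redirect test-server mail" `
            -SenderIpRanges 10.0.0.50 `
            -RedirectMessageTo "test-results@example.com"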

    Read the article

  • Force10 S60 remote management

    - by StaringSkyward
    We've got a Force10 S60 switch to replace an older Cisco. I can't find a way to give the switch itself an IP address on the local VLAN so I can SSH to it. The config guide talks about using either a management interface on a separate management network or dedicating, e.g., a gigabit port as a management port with a dedicated IP address. Ideally I would like to do what we currently do with the Cisco switches, which is in effect to give the entire switch an IP so it can be reached from any host on the same VLAN, without having to use up a physical port on the switch or physically connect the management port to another device. Is this possible with the S60, and if so, how would you give it, say, the address 10.0.1.1 in VLAN 10 (10.0.1.1/24)? Thanks!
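    A sketch of the FTOS-style commands for an in-band VLAN IP (written from memory, so verify each line against the S60 configuration guide; it assumes VLAN 10 already carries the relevant ports):

        enable
        configure
        interface vlan 10
          ip address 10.0.1.1/24
          no shutdown
          exit
        ip ssh server enable
        username admin password secret privilege 15
        write memory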

    Read the article

  • SYSLOG-NG - Having trouble with a destination

    - by Samuurai
    Hi, I'm trying to set up a separate log file for all Windows messages. I've set up a match for MSWinEventLog, but it's completely ignoring my configuration. Here's my config, which comes straight after the src object:
        filter f_windows { match("MSWinEventLog"); };
        destination winFIFO { file("/var/log/splunk/syslog-ng/winFIFO"); };
        log { source(src); filter(f_windows); destination(winFIFO); flags(final); };
    It all ends up in this one instead:
        filter f_messages { not facility(news, mail) and not filter(f_iptables); };
        destination messages { file("/var/log/messages"); };
        log { source(src); filter(f_messages); destination(messages); };
    Can anyone see what I'm doing wrong?
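    A sketch of two adjustments that often sort this out (syslog-ng 3.x syntax): depending on how the Windows agent formats the event, the MSWinEventLog token can land in the PROGRAM field rather than the message body, and the catch-all f_messages path will still pick the messages up unless it excludes them or runs after the flags(final) block.

        filter f_windows {
            match("MSWinEventLog" value("MESSAGE")) or match("MSWinEventLog" value("PROGRAM"));
        };

        # keep the winFIFO log path above the catch-all, and/or exclude the windows traffic there:
        filter f_messages { not facility(news, mail) and not filter(f_iptables) and not filter(f_windows); };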

    Read the article

  • firehol (firewall) with bridge: how to filter

    - by Leon
    I have two interfaces: eth0 (public address) and lxcbr0 with 10.0.3.1. I have an LXC guest running with IP 10.0.3.10. This is my firehol config:
        version 5

        trusted_ips=`/usr/local/bin/strip_comments /etc/firehol/trusted_ips`
        trusted_servers=`/usr/local/bin/strip_comments /etc/firehol/trusted_servers`
        blacklist full `/usr/local/bin/strip_comments /etc/firehol/blacklist`

        interface lxcbr0 virtual
            policy return
            server "dhcp dns" accept

        router virtual2internet inface lxcbr0 outface eth0
            masquerade
            route all accept

        interface any world
            protection strong
            #Outgoing these protocols are allowed to everywhere
            client "smtp pop3 dns ntp mysql icmp" accept
            #These (incoming) services are available to everyone
            server "http https smtp ftp imap imaps pop3 pop3s passiveftp" accept
            #Outgoing, these protocols are only allowed to known servers
            client "http https webcache ftp ssh pyzor razor" accept dst "${trusted_servers}"
    On my host I can connect only to "trusted servers" on port 80. In my guest I can connect to port 80 on every host. I assumed that firehol would block that. Is there something I can add or change so that my guest(s) inherit the rules of the eth0 interface?
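    Forwarded guest traffic is evaluated by the router block, not by the "interface any world" block, so the "route all accept" line is what lets the guest out unrestricted. A sketch of a tighter router, with the service lists copied from the interface above (adjust to taste):

        router virtual2internet inface lxcbr0 outface eth0
            masquerade
            client "smtp pop3 dns ntp mysql icmp" accept
            client "http https webcache ftp ssh" accept dst "${trusted_servers}"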

    Read the article

  • Virtual DNS recommended setup...

    - by luison
    Hi. We are new to virtualization, which we are setting up with Proxmox VE (OpenVZ + KVM). I am a bit lost about the recommended DNS forwarder config, especially in the OpenVZ (Virtuozzo-type) environment. Our intention was to have a small dnsmasq running in one of the VMs, acting as a backup DHCP server and serving our in-office local addresses (and PCs) via an additional resolv.conf file, which dnsmasq supports; but I've read that all VMs should share DNS pointing to the host machine, in which case it would make more sense to have it there. My problem is that I would like to have as few apps as possible on the host, so that a reinstall of the environment (Proxmox VE) and a machine restore can be as quick as possible. Does anyone have a similar setup? Does it make sense to have the first virtual machine running the local DNS forwarder? Also, dnsmasq seems to want root permissions when running in an OpenVZ container... are there any workarounds anyone knows of for that?
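    A minimal dnsmasq sketch for the container (values are placeholders; the user=root line is the blunt workaround for the privilege-drop problem inside a restricted container, and granting the container the needed capability is the cleaner route if the OpenVZ config allows it):

        # /etc/dnsmasq.conf
        user=root                         # skip dropping privileges inside the container
        resolv-file=/etc/resolv.upstream  # the "additional resolv.conf" mentioned above
        domain=office.lan
        dhcp-range=192.168.1.100,192.168.1.200,12h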

    Read the article

  • How do I tar dot files but not dot directories

    - by bjackfly
    The following tar command will exclude all dot files and dot directories:
        tar -cvzf /media/bjackfly/bkup/bkup.gz --exclude '.*' --one-file-system /home/bjackfly
    In my case I want the dot files in the home directory (.vimrc, .bashrc, etc.) to be backed up, but not the dot directories (.config, .cache, .eclipse, etc.). Any Linux gurus with a command for this, or do I need to pipe a find into tar, or run two different tar commands (one for the dot files in the home directory and one for everything else), which is non-ideal?
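    One way to do it in a single tar run (a sketch; uses GNU find and GNU tar): build an exclude list containing only the top-level dot directories, then archive everything else, dot files included.

        cd /home/bjackfly
        find . -maxdepth 1 -type d -name '.*' ! -name '.' -printf './%P\n' > /tmp/dotdirs.txt
        tar -cvzf /media/bjackfly/bkup/bkup.gz --one-file-system --exclude-from=/tmp/dotdirs.txt .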

    Read the article

  • How To Start NX/VNC Session At Ubuntu Boot Time?

    - by darkAsPitch
    I want an NX or VNC session to start automatically when an Ubuntu server boots up - without a monitor connected - loading a certain user's desktop and keeping it loaded and ready until I log in via NX or VNC. How would one accomplish that? This command works from my terminal when I am logged in as the NX user, but not as root, and not from the init.d folder (no idea why):
        /usr/NX/bin/nxclient --session /home/user/.nx/config/SavedSession.nxs
    Please provide somewhat simplified instructions! I am a certified Linux newb.
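    The VNC route is the simpler of the two for a headless box (a sketch; it assumes a VNC server such as vnc4server or tightvncserver is installed and the user has run vncpasswd once). An @reboot cron entry in that user's crontab starts a persistent virtual desktop at boot:

        # run as the desktop user: crontab -e
        @reboot /usr/bin/vncserver :1 -geometry 1280x1024 -depth 24

    The nxclient command above fails as root because it needs that user's environment and session files; wrapping it the same way from a boot script (su - user -c '...') would be the equivalent NX approach, though it also needs an X display to attach to.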

    Read the article

  • CentOS 6.5 as WebServer for Django Dev

    - by Charlesliam
    During the CentOS 6.5 installation I chose the Web Server type for this computer. The server has a static IP address, 192.168.111.100. CentOS was updated, and I managed to install virtualenv with Python 2.7. Within the virtualenv I'll be using the Django framework. I ran the following command as the root user:
        python manage.py runserver 0.0.0.0:8000
    but I can't see the website from other computers within the LAN when I type 192.168.111.100:8000/admin into my browser. I already disabled the firewall using:
        service iptables stop
    I can ping 192.168.111.100 and I get good feedback from nslookup. What seems to be the problem with my config?
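    A few checks worth running on the server itself (a sketch); they confirm whether the dev server is really bound to all interfaces and whether anything is still filtering the port:

        ss -tlnp | grep 8000                          # is runserver bound to 0.0.0.0:8000?
        iptables -L -n                                # any leftover rules? (check ip6tables too)
        curl -I http://127.0.0.1:8000/admin/          # works locally?
        curl -I http://192.168.111.100:8000/admin/    # works against the LAN address from the server itself?

    If a newer Django version is involved, ALLOWED_HOSTS in settings.py also needs the server's address, although that misconfiguration produces a 400 response rather than a timeout.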

    Read the article

  • PostgreSQL: performance decrease due to index bloat

    - by Henry-Nicolas Tourneur
    I'm running PgSQL 8.1 on a CentOS 4.4 system (not upgradable, unfortunately). There's a Java app running on top of the PgSQL daemon, and we have to reindex the database every 2 months or so. Also important: the database isn't growing. It looks like the bloat is now coming faster than before, and this tends to increase. My config is available here (the autovacuum daemon is enabled and running quite often): pastebin.com/RytNj7dK
    You can also find the output of the query from wiki.postgresql.org/wiki/Show_database_bloat:
        3 hours after running reindex: http://pastebin.com/raw.php?i=75fybKyd
        72 hours after running reindex: http://pastebin.com/raw.php?i=89VKd7PC
    Does anyone have any idea what I should tweak to get rid of that growing bloat? Thanks for your help. PS: due to the antispam prevention system, I had to remove the http:// prefixes from my first two links.
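    On 8.1 the free space map is the usual suspect when bloat outruns autovacuum (a sketch; the numbers are starting points, not tuned values). A database-wide VACUUM VERBOSE reports at the end whether the map is too small, and postgresql.conf is where both knobs live:

        -- run as superuser and read the last few lines of output
        VACUUM VERBOSE;

        # postgresql.conf
        max_fsm_pages = 500000        # must exceed the total number of pages with free space
        max_fsm_relations = 2000
        autovacuum = on
        autovacuum_naptime = 60       # seconds between autovacuum runs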

    Read the article

  • Asp.net 4.0 Handler Mappings Missing in IIS7

    - by Marc
    I have two Windows 2008 R2 servers running an ASP.NET 4.0 app. The server that is having problems actually loads ASP.NET pages just fine, but if there are any AJAX calls they don't work. I noticed there are no .NET 4.0-specific handler mappings in IIS for this server like the other server has. It's literally missing all .NET 4.0 mappings (.axd, .soap, .cshtm, .ashx and even .aspx). I've tried running "aspnet_regiis -ir" but that didn't help. Should I reinstall the .NET 4.0 framework? Manually add all these missing mappings? Is there something else going on? What I don't want to do is add a ton of handlers to a web.config; they aren't needed on the server that works, so they shouldn't be needed on the broken one.
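    A sketch of the usual repair path (run from an elevated prompt): -i does the full registration including script maps, whereas -ir registers without updating script maps, which is exactly the part that is missing here.

        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i
        iisreset

        rem compare what each server actually has registered
        %windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/handlers | findstr 4.0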

    Read the article

  • How do I find out what's preventing DELETE requests from working in IIS 7.5 and IIS 8?

    - by Simon
    Our site has an MVC REST API. Recently, both the live servers and my development machine stopped accepting DELETE requests, instead returning a 501 Not Implemented response. On my development machine, which is Windows 7 running IIS 7.5, the solution was to add these lines to our Web.config, under system.webServer / handlers:
        <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
        ...
        <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE" type="System.Web.Handlers.TransferRequestHandler" resourceType="Unspecified" requireAccess="Script" preCondition="integratedMode,runtimeVersionv4.0" />
    However, this didn't work on any of our live servers; not on Server 2008 + IIS 7.5 and not on Server 2012 + IIS 8. There are no verbs set up in Request Filtering, and WebDAV is not installed on any of our live servers. The error page gives no further information, and nothing gets recorded in the logs. How do I find out what's preventing DELETE requests from working in IIS 7.5 and IIS 8?
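    A sketch of where to look when the rejection leaves no trace in the site logs (the site name is a placeholder): a 501 for a verb is usually decided before the handler runs, so the effective server-level config and Failed Request Tracing are the two useful views.

        rem what handlers and verb filters are actually in effect for the site
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" -section:system.webServer/handlers | findstr /i delete
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" -section:system.webServer/security/requestFiltering

        rem list installed modules -- WebDAV is the classic culprit, but other filters can claim the verb too
        %windir%\system32\inetsrv\appcmd.exe list modules

    Enabling Failed Request Tracing for status 501 on one of the live servers will then name the module that short-circuits the request.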

    Read the article

  • Different settings for secure & non-secure versions of Django site using WSGI

    - by Jordan Reiter
    I have a Django website where some of the URLs need to be served over HTTPS and some over a normal connection. It's running on Apache using WSGI. Here's the config:
        <VirtualHost example.org:80>
            ServerName example.org
            DocumentRoot /var/www/html/mysite
            WSGIDaemonProcess mysite
            WSGIProcessGroup mysite
            WSGIScriptAlias / /path/to/mysite/conferencemanager.wsgi
        </VirtualHost>

        <VirtualHost *:443>
            ServerName example.org
            DocumentRoot /var/www/html/mysite
            WSGIProcessGroup mysite
            SSLEngine on
            SSLCertificateFile /etc/httpd/certs/aace.org.crt
            SSLCertificateKeyFile /etc/httpd/certs/aace.org.key
            SSLCertificateChainFile /etc/httpd/certs/gd_bundle.crt
            WSGIScriptAlias / /path/to/mysite/conferencemanager_secure.wsgi
        </VirtualHost>
    When I restart the server, the first site that gets called -- https or http -- appears to select which WSGI script alias gets used. I just need a few settings to be different for the secure server, which is why I'm using a different WSGI script. Alternatively, if there's a way to change settings in the settings.py file based on whether the connection is secure or not, that would also work. Thanks
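    Both vhosts point WSGIProcessGroup at the same "mysite" daemon, so whichever .wsgi file loads first owns that process and serves both hosts. A sketch of the fix: one daemon process per vhost (the names are arbitrary), so each vhost loads its own script.

        <VirtualHost example.org:80>
            ...
            WSGIDaemonProcess mysite_http
            WSGIProcessGroup  mysite_http
            WSGIScriptAlias / /path/to/mysite/conferencemanager.wsgi
        </VirtualHost>

        <VirtualHost *:443>
            ...
            WSGIDaemonProcess mysite_https
            WSGIProcessGroup  mysite_https
            WSGIScriptAlias / /path/to/mysite/conferencemanager_secure.wsgi
        </VirtualHost>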

    Read the article
