Search Results

Search found 4083 results on 164 pages for 'mod vhost alias'.

Page 147 of 164

  • Doing arithmetic and passing it to the next command

    - by neurolysis
    I know how to do this in /bin/sh, but I'm struggling a bit in Windows. I know you can do arithmetic on 32-bit signed integers with SET /A 2+2 (which prints 4), but how do I pass the result to the next command? For example, the process I want to perform is as follows. Consumer editions of Windows have no native sleep command (I believe?) -- the usual way to sleep is to use PING with the -n switch, which waits that many seconds, minus one. The following command gives a silent sleep: PING localhost -n 3 > NUL But I want to alias this into a sleep command, and I'd like it to be elegant enough that you enter the actual number of seconds to sleep after the command. Right now I can do DOSKEY SLEEP=PING 127.0.0.1 -n $1 > NUL which works, but the pause is always 1 second less than your input, so to sleep for one second you would have to type SLEEP 2. That's not exactly ideal. Is there some way to compute $1+1 and pass the result on to the next command in Windows? I assume there is some way of using STDOUT...
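    A DOSKEY macro only does text substitution, so one workaround (a sketch, not the only option; the file path is just an example) is to wrap the arithmetic in a small batch file and alias that instead:

        @ECHO OFF
        REM sleep.cmd -- approximate sleep built on PING; usage: SLEEP <seconds>
        REM PING waits (count - 1) seconds between echoes, so add 1 to the argument.
        SET /A COUNT=%1+1
        PING 127.0.0.1 -n %COUNT% > NUL

    The macro then becomes DOSKEY SLEEP=C:\tools\sleep.cmd $* (the C:\tools path is hypothetical), so SLEEP 1 really pauses for about one second.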

    Read the article

  • How to call a JavaScript function from JSF EL conditionally?

    - by Paul
    I have to call a JavaScript function based on a bean value. I use the following code: onmouseover="#{occasionBean.user.userPreference.defaultPreview==true?'':'Tip()'})" I need to send some parameters to Tip(), like this: Tip('') The error I am getting is javax.servlet.jsp.JspException: javax.faces.el.EvaluationException: com.sun.faces.el.impl.parser.ParseException: Encountered "test" at line 1, column 60. Was expecting one of: "}" ... "." ... "" ... "gt" ... "<" ... "lt" ... "==" ... "eq" ... "<=" ... "le" ... "=" ... "ge" ... "!=" ... "ne" ... "[" ... "+" ... "-" ... "*" ... "/" ... "div" ... "%" ... "mod" ... "and" ... "&&" ... "or" ... "||" ... "?" ... '

    Read the article

  • Squid reverse proxy array - siblings not communicating with each other

    - by V. Romanov
    I want to set up 2 squid servers to act as a reverse proxy and cache for a webserver on our intranet. The load balancing will be done with DNS round robin or just different mappings for different clients. The thing is, I want both servers to try to contact each other to see if they have the required object in cache before contacting the webserver for it (the network that serves the webserver is the bottleneck, and I'm trying to eliminate it). Both squids are configured the same; here are the relevant config lines: acl dvr1_cache_it_best_tv_com dstdomain dvr1.cache.it.best-tv.com acl squid1_it_best_tv_com dstdomain squid1.it.best-tv.com acl squid2_it_best_tv_com dstdomain squid2.it.best-tv.com http_access allow dvr1_cache_it_best_tv_com http_access allow squid1_it_best_tv_com http_access allow squid2_it_best_tv_com http_access allow all http_port 8081 accel defaultsite=dvr1.cache.it.best-tv.com cache_peer dvr1.origin.it.best-tv.com parent 80 0 no-query originserver name=Proxy_dvr1_origin_it_best_tv_com cache_peer squid1.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid1_it_best_tv_com cache_peer squid2.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid2_it_best_tv_com cache_peer_access Proxy_dvr1_origin_it_best_tv_com allow dvr1_cache_it_best_tv_com cache_peer_access Proxy_squid1_it_best_tv_com allow squid1_it_best_tv_com cache_peer_access Proxy_squid1_it_best_tv_com allow squid2_it_best_tv_com cache_peer_access Proxy_squid1_it_best_tv_com allow dvr1_cache_it_best_tv_com cache_peer_access Proxy_squid2_it_best_tv_com allow squid1_it_best_tv_com cache_peer_access Proxy_squid2_it_best_tv_com allow squid2_it_best_tv_com cache_peer_access Proxy_squid2_it_best_tv_com allow dvr1_cache_it_best_tv_com Just to make it clear - dvr1.cache is the alias for the proxy servers, and dvr1.origin is the web server. Both servers work; both serve content, cache it, and work fine. However, when I clear the cache on one server and then access it, it gets the content from the parent (DVR1_ORIGIN) instead of going to the sibling squid. What did I configure wrong? Or perhaps I don't understand the architecture correctly? I read the squid manuals, and as far as I can see I did it all by the book, yet it doesn't work right. Any help will be appreciated!
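    Sibling-to-sibling lookups in a setup like this normally travel over ICP, the 3130 in the cache_peer lines, so each box also has to answer ICP queries. A sketch of the extra squid.conf lines, assuming ICP is not already enabled (the allow rule should be narrowed to the sibling's address in practice):

        icp_port 3130
        icp_access allow all
        # proxy-only keeps this cache from storing a second copy of objects fetched from the sibling
        cache_peer squid2.it.best-tv.com sibling 8081 3130 proxy-only weight=10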

    Read the article

  • Hide a single content block from search engines?

    - by jonas
    A header is automatically added on top of each content URL, but it's not relevant for search and it's messing up all the results by being the first line of every page (in the code it's the last line but visually it's the first, which Google is able to notice). Solution 1: You could put the header (the content to exclude from Google searches) in an iframe with a static URL domain.com/header.html and a <meta name="robots" content="noindex" /> ? - are there drawbacks to this solution? Solution 2: You could deliver it conditionally via Apache mod_rewrite, PHP or JavaScript - drawback(?): Google does not like it? Will Google ever try pages with a standard user's user agent and compare? - drawback: the hidden content will be missing in the Google cache version as well... example: add-header.php: <?php $path = $_GET['path']; echo file_get_contents($_SERVER["DOCUMENT_ROOT"].$path); ?> apache virtual host config: RewriteCond %{HTTP_USER_AGENT} !.*spider.* [NC] RewriteCond %{HTTP_USER_AGENT} !Yahoo.* [NC] RewriteCond %{HTTP_USER_AGENT} !Bing.* [NC] RewriteCond %{HTTP_USER_AGENT} !Yandex.* [NC] RewriteCond %{HTTP_USER_AGENT} !Baidu.* [NC] RewriteCond %{HTTP_USER_AGENT} !.*bot.* [NC] RewriteCond %{SCRIPT_FILENAME} \.htm$ [NC,OR] RewriteCond %{SCRIPT_FILENAME} \.html$ [NC,OR] RewriteCond %{SCRIPT_FILENAME} \.php$ [NC] RewriteRule ^(.*)$ /var/www/add-header.php?path=%1 [L]
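    A sketch of Solution 1, assuming the header can live at a fixed URL: header.html declares noindex for itself, and every content page only embeds it, so crawlers that honour the meta tag never index the header text as part of the page.

        <!-- header.html: the excluded block, marked noindex in its own document -->
        <!DOCTYPE html>
        <html>
          <head><meta name="robots" content="noindex"></head>
          <body><!-- header markup goes here --></body>
        </html>

        <!-- in each content page, embed it instead of inlining it -->
        <iframe src="http://domain.com/header.html" frameborder="0" scrolling="no"></iframe>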

    Read the article

  • How to use VBA to colour pie chart

    - by Timon Heinomann
    I have the following code, which tries to create a bubble chart with pie charts as the bubbles. In this version, colour themes are used in the function part to give each pie chart (bubble) a different colour, and my problem is that it only works depending on the paths to the colour palettes. Is there an easy way to make the function work independently of those paths, either by coding a colour for each pie chart segment or by using standardised paths (probably not possible, and not preferable)? Sub PieMarkers() Dim chtMarker As Chart Dim chtMain As Chart Dim intPoint As Integer Dim rngRow As Range Dim lngPointIndex As Long Dim thmColor As Long Dim myTheme As String Application.ScreenUpdating = False Set chtMarker = ActiveSheet.ChartObjects("chtMarker").Chart Set chtMain = ActiveSheet.ChartObjects("chtMain").Chart Set chtMain = ActiveSheet.ChartObjects("chtMain").Chart Set rngRow = Range(ThisWorkbook.Names("PieChartValues").RefersTo) For Each rngRow In Range("PieChartValues").Rows chtMarker.SeriesCollection(1).Values = rngRow ThisWorkbook.Theme.ThemeColorScheme.Load GetColorScheme(thmColor) chtMarker.Parent.CopyPicture xlScreen, xlPicture lngPointIndex = lngPointIndex + 1 chtMain.SeriesCollection(1).Points(lngPointIndex).Paste thmColor = thmColor + 1 Next lngPointIndex = 0 Application.ScreenUpdating = True End Sub Function GetColorScheme(i As Long) As String Const thmColor1 As String = "C:\Program Files\Microsoft Office\Document Themes 15\Theme Colors\Blue Green.xml" Const thmColor2 As String = "C:\Program Files\Microsoft Office\Document Themes 15\Theme Colors\Orange Red.xml" Select Case i Mod 2 Case 0 GetColorScheme = thmColor1 Case 1 GetColorScheme = thmColor2 End Select End Function The code copies a single chart again and again onto the bubbles, so I would like to alter the function (currently GetColorScheme) into one that assigns a unique RGB colour to each segment of each pie chart.
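    A sketch of that idea: colour the marker chart's slices directly from a fixed palette so nothing depends on theme XML files on disk (the palette values below are arbitrary, and an offset argument could rotate the palette per bubble if each bubble should look different):

        ' A sketch: give every slice of a pie chart its own RGB colour.
        Sub ColourPieSegments(cht As Chart)
            Dim colours As Variant
            Dim i As Long
            colours = Array(RGB(31, 119, 180), RGB(255, 127, 14), _
                            RGB(44, 160, 44), RGB(214, 39, 40))
            With cht.SeriesCollection(1)
                For i = 1 To .Points.Count
                    ' cycle through the palette, one colour per segment
                    .Points(i).Format.Fill.ForeColor.RGB = _
                        colours((i - 1) Mod (UBound(colours) + 1))
                Next i
            End With
        End Sub

    Called as ColourPieSegments chtMarker just before the CopyPicture line, it replaces the ThemeColorScheme.Load call entirely.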

    Read the article

  • RTNETLINK answers: File exists... maybe because I assigned a new MAC address

    - by steven
    I got a "RTNETLINK answers: File exists Failed to bring up eth0:1" on "ifup eth0:1". I suspect it happens because i assigned a new mac adress in my VM's network adapter. Can you tell me how to fix the issue? My configuration looks like this: # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 allow-hotplug eth0 iface eth0 inet static address 192.168.1.80 netmask 255.255.255.0 gateway 192.168.1.1 dns-nameservers 192.168.1.1 # Alias being connected to 192.168.10.x Network auto eth0:1 allow-hotplug eth0:1 iface eth0:1 inet static address 192.168.10.83 netmask 255.255.255.0 gateway 192.168.10.10 dns-nameservers 192.168.10.1 Why do I get "RTNETLINK answer: File exists.." suddenly? I worked with this configuration before without problems. All i did in the past is to renew the adapters mac adress. At the moment I am connected to the 192.168.10.x Network and if I do /etc/init.d/networking stop /etc/init.d/networking start then i got "RTNETLINK [...] falied to bring up eth0:1" but the strage thing is that i am able to connect to 192.168.10.83 via ssh from my host machine. But I cannot reach the internet from the debian client. I hope it is clear what my problem is, now. update if i change my /etc/network/interfaces like this then "ifup eth0" fails, too with the same error! # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 allow-hotplug eth0 iface eth0 inet static address 192.168.10.83 netmask 255.255.255.0 gateway 192.168.10.10 dns-nameservers 192.168.10.1 with verbose option enabled i got: Configuring interfache eth0=eth0 (inet) run-parts --verbose /etc/network/if-pre-up.d ip addr add 192.168.10.83/255.255.255.0 broadcast 192.168.10.255 dev eth0 label eth0 RTNETLINK answers: File exists Failed to bring up eth0. same if i type this manually: ip addr add 192.168.10.83/255.255.255.0 broadcast 192.168.10.255 dev eth0 label eth0

    Read the article

  • Checking if a console application is still running using the Process class

    - by Ced
    I'm making an application that will monitor the state of another process and restart it when it stops responding, exits, or throws an error. However, I'm having trouble making it reliably check whether the process (a C++ console window) has stopped responding. My code looks like this: public void monitorserver() { while (true) { server.StartInfo = new ProcessStartInfo(textbox_srcdsexe.Text, startstring); server.Start(); log("server started"); log("Monitor started."); while (server.Responding) { if (server.HasExited) { log("server exitted, Restarting."); break; } log("server is running: " + server.Responding.ToString()); Thread.Sleep(1000); } log("Server stopped responding, terminating.."); try { server.Kill(); } catch (Exception) { } } } The application I'm monitoring is Valve's Source Dedicated Server, running Garry's Mod, and I'm over-stressing the physics engine to simulate it stopping responding. However, this never triggers the Process class recognising it as 'stopped responding'. I know there are ways to directly query the Source server using its own protocol, but I'd like to keep it simple and universal (so that I can maybe use it for different applications in the future). Any help appreciated.
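    Two Process quirks may matter here (the sketch below reuses the names from the snippet and is not a drop-in fix): property values are cached until Refresh() is called, and Responding only reflects a GUI message pump, so a windowless console process will normally report true forever.

        // a sketch of the inner loop with those two quirks handled
        while (!server.HasExited)
        {
            server.Refresh();                // discard the cached process snapshot
            if (!server.Responding)          // only meaningful if the process owns a window
            {
                log("Server stopped responding, terminating..");
                break;
            }
            Thread.Sleep(1000);
        }

    For a truly windowless server, a liveness probe of its own protocol (or a watchdog heartbeat) is usually the reliable signal instead.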

    Read the article

  • Mac dev folder missing, SSH not working

    - by SamGoody
    A few days ago, SSH stopped working. When I try logging in I get the following message: PTY allocation request failed on channel 0 stdin: is not a tty fatal: unrecognized command '' Connection to 74.52.61.194 closed. Web searches have shown me that there might be something wrong with /dev/std. But my computer lacks a /dev/ drive. There is an Alias to /dev/ [hidden, but I've revealed hidden files to do this search], but when I try to open it I am told that it cannot find the folder it is aliasing. Now, many a web search tells me that without a dev folder the computer doesn't work, but it does seem to work, except for SSH. Also, are there any tools that can save my SSH preferences so that I don't have to type out the username@address, password, and path each time, all of which are long and complex? Not looking for a FileZilla-type client, there are many of those. Looking for a command-line tool like PuTTY that lets me use bash on the remote machine. I am on a MacBook Pro, latest version of Tiger.
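    For the second part of the question, a per-host entry in ~/.ssh/config covers it; a sketch, with placeholder values for the alias, user, and key:

        # ~/.ssh/config
        Host myserver
            HostName 74.52.61.194
            User myusername
            Port 22
            # a key pair removes the password prompt as well
            IdentityFile ~/.ssh/id_rsa

    After that, ssh myserver (or scp myserver:path .) is all that needs typing.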

    Read the article

  • Java applet loading images from external jars

    - by Mathias
    I have a jar on a server, and users should be able to develop extensions for it. Therefore the jar's main class should be extended, and some resources should be added to a second, user-created jar which will be loaded from another server or locally. Now I have problems accessing the resources (images) from the user-loaded jars. Here is the structure: My Server: game.jar containing game.class images.class ... image1.png (...) Local: user.jar containing: user.class extends game userimage.png The extension is loaded via Greasemonkey; it modifies the "archive" attribute to "/home/username/user.jar, game.jar" and the "code" attribute to "user.class". The user should be able to overwrite already defined images. If the image does not exist in game.jar, it is loaded correctly from user.jar. But the images loaded early in the game are always loaded from game.jar; others seem to be overwritten correctly by the user. Is there a way to make sure they are always loaded in the correct order? This might be because of some caching mechanism. Because Greasemonkey removes the game from the page, changes the archive and code, and reinserts it, the game is loaded without a mod for a brief second. In that time, images are loaded as expected from game.jar, but those are the ones that cannot be overridden by the user. But how can I avoid that? Another thing: if I override the "run" method in user.class, the game is unable to load any image at all - not from user.jar and not from game.jar. Java doesn't find the image, as the URL object "getClass().getResource(imagename)" returns null. I tried to overwrite image.class, but that doesn't fix the problem unless I overwrite every class from game.class involved in calling image.class.

    Read the article

  • Can I make a TCP/IP session last less than 60 seconds?

    - by Pavel
    Our server is overloaded with TCP/IP sessions; we have 1200 - 1500 of them. Most of them are hanging in the TIME_WAIT state. It turns out that a connection in TIME_WAIT state occupies a socket until a 60-second timeout has elapsed. The problem is that the server becomes unresponsive and many clients are not getting served. I have made a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I see that the TCP/IP connection hangs in TIME_WAIT state for 60 seconds. Is there any way to get rid of the TIME_WAIT wait, or shorten it, to free the socket for new connections? I understand why a TCP/IP connection enters TIME_WAIT, but I don't understand why Internet Explorer does not close the connection after the XML file download is over. The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). The Apache "KeepAlive" option is set to 0.
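    One knob is visible in the description itself: with KeepAlive off, every request costs a fresh connection that later sits in TIME_WAIT. A sketch of the Apache side (the directives are standard; the values are only examples):

        # let each client reuse one connection for many requests,
        # so far fewer sockets end up parked in TIME_WAIT
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5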

    Read the article

  • Apache + Passenger not passing on custom status codes

    - by harm
    I'm currently building an API. This API communicates with the client via status codes. I created several custom status codes (as per http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html#sec6) in order to inform the client about certain things. For example, I introduced the 481 status code to signify a specific client error. The Rails app I wrote works like a charm, but when Apache and Passenger are serving it things run aground. When I provoke a 481 error the response header looks like this: HTTP/1.1 500 Internal Server Error Date: Wed, 19 May 2010 06:37:05 GMT Server: Apache/2.2.9 (Debian) Phusion_Passenger/2.2.5 mod_ssl/2.2.9 OpenSSL/0.9.8g X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.5 Cache-Control: no-cache X-Runtime: 1938 Set-Cookie: _session_id=32bc259dc763193ad57ae7dc19d5f57e; path=/; HttpOnly Content-Length: 62 Status: 481 Content-Type: application/json; charset=utf-8 As you can see, the original Status header is still there, almost at the end. But the 'true' status header (the very first line) is quite different. It seems that Apache doesn't like Status headers it has no knowledge of and thus assumes an error. Is there any way to fix this? Maybe via the mod_headers (http://httpd.apache.org/docs/2.2/mod/mod_headers.html) module? I don't know enough about Apache to figure this out on my own. Thanks,

    Read the article

  • 403 forbidden for cgi-bin/ and cannot protect site with password

    - by gasgdasdgasdg
    The first problem I have is that I am getting a 403 Forbidden error for cgi-bin/. I have created a new /var/www2/ and I can access it fine; PHP runs fine. The second problem is that I cannot password-protect it. I first tried htpasswd: it asks for a login, but every time I log in it keeps asking again. It's getting frustrating; I have tried all the tricks and nothing seems to work. This is the virtual host config inside sites-available. httpd.conf is empty, but I have apache2.conf. Code: NameVirtualHost 12.12.12.12. <VirtualHost 12.12.12.12> ServerAdmin webmaster@localhost DocumentRoot /var/www2/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www2/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /var/www2/cgi-bin/ <Directory "/var/www2/cgi-bin/"> AllowOverride Options Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch AddHandler cgi-script cgi pl Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined ServerSignature On Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost>
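    For the password side, a sketch of a basic-auth block that would sit inside the vhost above; the htpasswd path is a placeholder, and the endless re-prompting described is very often a wrong or unreadable AuthUserFile path:

        <Directory /var/www2/>
            AuthType Basic
            AuthName "Restricted area"
            # created with: htpasswd -c /etc/apache2/.htpasswd someuser
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Directory>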

    Read the article

  • File permission woes on an Ubuntu ec2 instance

    - by Pardoner
    I've set up an Amazon EC2 instance and I'm having some file permission issues. I've created myself a new user and added myself to the following groups: adm:x:4:me,ubuntu sudo:x:27:me www-data:x:33:me,www-data ssh:x:108:me admin:x:111:me ubuntu:x:1000:www-data,me me:x:1001:me but when I cd /var/www I can't do simple commands without doing sudo. So I chown -R www-data:www-data /var/www to ensure that I'm in the owning group, but I still have to type sudo for everything. If I sudo su www-data it works fine. Since I'm in the www-data group, shouldn't I have the same privileges as www-data? One strange thing I'm noticing is that when I ls -l it lists the owner but not the group names. Could this possibly be part of the issue? Is it possible for a directory to not be part of a group? drwxr-xr-x 4 www-data 4.0K Oct 24 16:39 . drwxr-xr-x 14 root 4.0K Oct 10 16:58 .. drwxrwxr-x 9 www-data 4.0K Oct 23 04:03 admin.mywebsite.com drwxrwxr-x 2 www-data 4.0K Oct 4 00:29 mywebsite.com drwxrwxr-x 9 www-data 4.0K Oct 23 04:03 staging.mywebsite.com Edit: It appears I had some alias messing with my ls command. By calling \ls -l I can see that all my files are in the correct group.
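    One thing worth ruling out (a sketch, nothing EC2-specific): group membership added with usermod or adduser only applies to sessions started after the change, so the shell that keeps demanding sudo may simply not carry www-data yet.

        id                  # does the *current* session actually list www-data?
        newgrp www-data     # pick up the new membership without logging out
        \ls -l /var/www     # bypass any ls alias so the group column shows again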

    Read the article

  • How to change a value inside a JSON string

    - by Jeremy Roy
    I have a JSON string containing an array of objects like this: [{"id":"4","rank":"adm","title":"title 1"}, {"id":"2","rank":"mod","title":"title 2"}, {"id":"5","rank":"das","title":"title 3"}, {"id":"1","rank":"usr","title":"title 4"}, {"id":"3","rank":"ref","title":"title 5"}] I want to change the title value once the id matches. So if my variable myID is 5, I want to change the title "title 5" to a new title, and so on. Then I want to write the new JSON array back with $("#rangArray").val(jsonStr); Something like $.each(jsonStr, function(k,v) { if (v==myID) { this.title='new title'; $("#myTextArea").val(jsonStr); } }); Here is the full code. $('img.delete').click(function() { var deltid = $(this).attr("id").split('_'); var newID = deltid[1]; var jsonStr = JSON.stringify(myArray); $.each(jsonStr, function(k,v) { if (v==newID) { // how to change the title jsonStr[k].title = 'new title'; alert(jsonStr); $("#rangArray").val(jsonStr); } }); }); The above is not working. Any help please?
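    A sketch of the usual pattern, reusing the names from the snippet above: iterate over the array itself rather than the stringified text, change the matching object, and serialise once at the end (older browsers may need json2.js for JSON.stringify):

        $('img.delete').click(function () {
            var newID = $(this).attr('id').split('_')[1];
            $.each(myArray, function (i, item) {
                if (item.id == newID) {        // ids are strings in the JSON, so == rather than ===
                    item.title = 'new title';
                }
            });
            $('#rangArray').val(JSON.stringify(myArray));
        });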

    Read the article

  • Snow Leopard takes a long time to connect to Windows/Samba server

    - by hood
    We run a very heterogeneous network here: there are some XP, Vista, 7, Leopard, and Snow Leopard clients, and Windows 2003 (one remaining legacy app), 2008, and Linux servers. The main file server runs Ubuntu Linux, has been added to the Windows domain, and has been used for many years; SBS 2008 is the PDC (the 2003 and 2008 are on the domain also). In Leopard there were no problems at all authenticating to the file servers. We've upgraded one of the Leopard iMacs to Snow Leopard, and the same problem occurs in a new MBP which came with the newer OS, as well as a clean install on another iMac. It does not matter whether it is connected through wired or wireless. In the Finder, when clicking on the server - whether on first boot or after it is connected - it will display "Connecting..." for up to a few minutes before either generally working (if the username/password is in the keychain) or displaying "Connection Failed" - at which time clicking "Connect As" and typing in the username/password will take some more time and eventually work. Sometimes it will display "Connecting..." indefinitely. (I've left it as long as 15 minutes before trying something else.) Accessing shares on the 2003 and SBS servers has the problem (so I don't think it's a Samba server issue). The Server 2008 Standard is connecting instantly at the moment. Accessing the share through an alias/stacks doesn't have this problem. Leopard and Windows clients still have no problem. I've searched Google, but it hasn't yielded any working result. How do I get rid of this delay?

    Read the article

  • Postfix auto create Maildir

    - by Eugene
    I've been beating my head against a wall for a while now on this one. Basically, here is the rundown: our MX record points to a frontend SMTP server, which contains aliases for actually routing the mail. No alias, no access to the backend storage server, which is what our clients connect to. I'm upgrading the backend email server. Currently, a user is created for every email user on the server, which creates the mailbox. On the new server, everything authenticates through PAM to an LDAP server (all of which is working properly). My goal is to get Postfix to create the Maildir directory for the user automatically. This works fine when the /home directory has 777 permissions, but for obvious reasons this should be avoided. I would like to do this with 775 permissions on /home with a group owner of whatever user Postfix is running as, but I can't seem to figure out what user to use. With the 777 permissions, the /home/$user/Maildir directory is created on message delivery. Does anybody know how I can do this without 777 permissions? The system I am working on is a 64-bit Debian Lenny 5.07 install. Any advice would be appreciated.
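    One common way around the permission problem (a sketch, with Debian's usual paths assumed): hand local delivery to an LDA that creates the Maildir itself, such as Dovecot's deliver, so Postfix never needs write access under /home.

        # /etc/postfix/main.cf
        mailbox_command = /usr/lib/dovecot/deliver

        # /etc/dovecot/dovecot.conf -- tell the LDA where the Maildir lives
        mail_location = maildir:~/Maildir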

    Read the article

  • PHP request variables: assigning $_GET

    - by chris
    If you take a look at a previous question, http://stackoverflow.com/questions/2690742/mod-rewrite-title-slugs-and-htaccess, I am using the solution that Col. Shrapnel proposed - but when I assign values to $_GET in the actual file, and not from a request, the code doesn't work. It defaults away from the file as if the $_GET variables are not set. The code I have come up with is: if(!empty($_GET['cat'])){ $_GET['target'] = "category"; if(isset($_GET['page'])){ $_GET['pageID'] = $_GET['page']; } $URL_query = "SELECT category_id FROM cats WHERE slug = '".$_GET['cat']."';"; $URL_result = mysql_query($URL_query); $URL_array = mysql_fetch_array($URL_result); $_GET['category_id'] = $URL_array['category_id']; }elseif($_GET['product']){ $_GET['target'] = "product"; $URL_query = "SELECT product_id FROM products WHERE slug = '".$_GET['product']."';"; $URL_result = mysql_query($URL_query); $URL_array = mysql_fetch_array($URL_result); print_r($URL_array); $_GET['product_id'] = $URL_array['product_id']; The original variable string that I'm trying to represent is /cart.php?Target=product&product_id=16142&category_id=249 and I'm trying to build the query-string variables in code and include cart.php so I can use cleaner URLs. So I have product/product-title-with-clean-url/ going to slug.php?product=slug. The slug script then searches the DB for a record with the matching slug and returns the product_id as in the code above, then builds the query string and includes cart.php.

    Read the article

  • Viewing Postscript (or PDF) on OS X: Aliasing issues

    - by mankoff
    I am generating PostScript graphics and am trying to find a balance between no anti-aliasing and too much anti-aliasing. If I use the raw Ghostscript viewer gs on the PostScript, it looks good. The text appears anti-aliased, but the image remains nice and blocky. Unfortunately, gs has no real user interface and loses all of the nice things that Preview.app has. I could install gv, but the dependency bloat is huge! It requires all of GNOME. And even that isn't a great viewer compared to Preview.app or Skim.app. (The original post included a screenshot of an image viewed with gs.) From a user-interaction and Mac-ish perspective, Preview.app (or Skim.app) is a much nicer program to use. They have the option to turn aliasing on or off, but neither option looks very good. With aliasing on, the image is blurry. When it is off, the graphic matches what is seen from gs, but there are two issues. Minor issue: the font is ugly, uglier than with gs. Major issue: every PDF is un-aliased, making it hard to read regular PDFs full of text. So, in summary: Is there a way to manually generate the PDF from the PS that overcomes these issues? Is there a way to find a middle ground of alias/unalias with Preview.app? Is there another app that displays with quality like gs but has a decent UI like Skim.app or Preview.app? Is there a way to have Preview.app turn off aliasing for only one file (containing graphics) but leave it enabled in general so that text PDFs are still readable?
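    For the first bullet, a sketch of converting the PS to PDF with Ghostscript itself, so the file Preview opens was produced by the renderer that already looks right; the flags are standard gs options:

        gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=out.pdf input.ps
        # or the wrapper script that ships with Ghostscript:
        ps2pdf input.ps out.pdf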

    Read the article

  • Httpd and LDAP Authentication not working for sub-pages

    - by DavisTasar
    I just recently installed a Nagios implementation, and I'm trying to get LDAP authentication working for httpd on Red Hat. (nagios.conf for Apache is below, sanitized of course.) ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin" <Directory "/usr/local/nagios/sbin"> #SSLRequireSSL Options ExecCGI AllowOverride none AuthType Basic AuthName "LDAP Authentication" AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE AuthzLDAPAuthoritative off AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller" AuthLDAPBindPassword "myPassword" require valid-user </Directory> Alias /nagios "/usr/local/nagios/share" <Directory /usr/local/nagios/share> #SSLRequireSSL Options None AllowOverride none AuthBasicProvider ldap AuthType Basic AuthName "LDAP Authentication" AuthzLDAPAuthoritative off AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller" AuthLDAPBindPassword "myPassword" require valid-user </Directory> Now, the initial authentication works, so when you first hit the page you can log in just fine. However, when you go anywhere else, it prompts you for authentication, fails (asking for a re-prompt), and gives this error message: [Mon Oct 21 14:46:23 2013] [error] [client 172.28.9.30] access to /nagios/cgi-bin/statusmap.cgi failed, reason: verification of user id '<myuseraccount>' not configured, referer: http://<nagiosserver>/nagios/side.php I'm almost certain it's a simple flag or option, but I just can't find it, and I don't have a lot of experience working with Apache. Any assistance or help would be greatly appreciated.
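    One difference is visible between the two Directory blocks above: the /usr/local/nagios/sbin block never names an authentication provider, while the share block has AuthBasicProvider ldap, and "verification of user id ... not configured" is the complaint Apache makes when no configured provider can actually check the user. A sketch of the cgi-bin block with only that line added:

        <Directory "/usr/local/nagios/sbin">
            #SSLRequireSSL
            Options ExecCGI
            AllowOverride none
            AuthBasicProvider ldap
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthzLDAPAuthoritative off
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>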

    Read the article

  • UIDs for service users in Mac OS X

    - by LaC
    Some third-party servers should be run under a special user for security reasons (e.g., PostgreSQL is typically run by "postgres"). Of course, these service users should not show up in the Mac OS X login window. I know how to create hidden users using dscl or dsimport, but I'm wondering what the best policy is for assigning UIDs (and matching GIDs). Apple's documentation states that UIDs from 0 to 100 are reserved (pg. 69), but OS X comes with several special users and groups outside that range. I used to use IDs from 401 onwards for services, but I noticed that OS X 10.6 has started using that range for groups created by the Sharing pane in System Preferences. What is the recommended ID range to use for third-party services, then? Perhaps I should just use IDs in the 500 range, since all that is needed to hide a user in Snow Leopard is setting his password to "*"? Also, most of Apple's services have names starting with an underscore, with an alias sans underscore; e.g., _sandbox and sandbox. Is there any special significance to this? Should I do the same for my services?

    Read the article

  • Problem creating ODBC connection to SQL Server 2008 with Vista

    - by earlz
    Well, I'm trying to get a database schema thing working. First I tried just doing it in Linux, where I'm more comfortable, but ODBC seems to be a hack there and I couldn't get it to work, so I figured it shouldn't be too hard in Windows. OK, so I created a SQL Server Client Alias so that I can simply use the name windowsserver to refer to my SQL server. Then I went to the ODBC configuration in Control Panel. I clicked Add in the User DSN section. I chose Native SQL Server (10) and clicked Next. Then I typed a short name and a description and gave the server name as windowsserver/SQLEXPRESS. Then I click Next, give it my user name and password, and click Next. Then, after about 2 minutes, it says "Login Timeout Expired". What can be wrong here? I know the server is configured because I have SQL Server Management Studio opened up with that server in it. I'm also just trying to connect over regular TCP/IP, and my firewall is disabled.

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site configuration options and aliases, or similar shortcuts like .ssh/config? I'm curious because I have a minimal ssh server installed on my Android phone and I also have a minimal rsync tool on it. I'm getting tired of having to log in as root on the phone and symlink both tools to the standard places the Android OS looks for executables, as the ssh server is bare bones and has a typical multi-link binary for the basic unix commands (which does not include rsync). I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to do any rsyncing of the files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') so that specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc. Tre`
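    As far as I know rsync has no client-side per-host config file of its own (rsyncd.conf only configures the daemon), so the usual workaround is a thin shell wrapper; a sketch, reusing the --rsync-path from the question and treating the option set as an example:

        # a shell function that bakes in the per-host options
        android() {
            rsync --rsync-path=/path/to/rsync/android/files/rsync \
                  --no-g --no-p --no-t "$@"
        }
        # usage (the host alias 'android' itself still comes from ~/.ssh/config):
        #   android -av android:/mnt/sdcard/ ./sdcard-backup/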

    Read the article

  • Smarty - foreach loop 4 times and create a new list

    - by Harold
    I want to achieve a list like this with Smarty. <ul> <li> <a>img1</a> <a>img2</a> <a>img3</a> <a>img4</a> </li> <li> <a>img5</a> <a>img6</a> <a>img7</a> <a>img8</a> </li> <li> <a>img9</a> <a>img10</a> <a>img11</a> <a>img12</a> </li> </ul> Using this sample code <ul class="bullet"> {foreach from=$manufacturers item=manufacturer name=manufacturer_list} {if $smarty.foreach.manufacturer_list.index < 4} <li class="{if $smarty.foreach.manufacturer_list.last}last_item{elseif $smarty.foreach.manufacturer_list.first}first_item{else}item{/if}"> <a href="{$link->getmanufacturerLink($manufacturer.id_manufacturer, $manufacturer.link_rewrite)}" title="{l s='More about' mod='blockmanufacturer'}{$manufacturer.name}"> <img src="{$img_manu_dir}{$manufacturer.id_manufacturer}.jpg"><span>{$manufacturer.name}<span></a> </li> {/if} {/foreach} First, with the given array $manufacturers, it should loop inside a <li> a maximum of 4 times and create 4 <img> tags. Then, when it reaches the 4th index, it should start a new <li>. Thanks for the help!
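    A sketch of the chunking idea, keeping the variable names from the snippet above: open a <li> whenever the zero-based index is a multiple of 4, and close it after every 4th item or at the very end (the anchor markup is abbreviated):

        <ul class="bullet">
        {foreach from=$manufacturers item=manufacturer name=manufacturer_list}
          {if $smarty.foreach.manufacturer_list.index % 4 == 0}<li>{/if}
            <a href="{$link->getmanufacturerLink($manufacturer.id_manufacturer, $manufacturer.link_rewrite)}">
              <img src="{$img_manu_dir}{$manufacturer.id_manufacturer}.jpg" alt="{$manufacturer.name}">
            </a>
          {if $smarty.foreach.manufacturer_list.iteration is div by 4 || $smarty.foreach.manufacturer_list.last}</li>{/if}
        {/foreach}
        </ul>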

    Read the article

  • Glassfish and SSL [closed]

    - by Richard
    I'm struggling to get SSL working on GlassFish 3.1.1. I've been following tutorials like http://javadude.wordpress.com/2010/04/06/getting-started-with-glassfish-v3-and-ssl/ and SO posts like this: Issues with setting up SSL on Glassfish v3. The above links are for information only; I've summarised what I've done below. As far as I can tell I'm doing everything correctly, but I'm getting this error: SSL configuration is invalid due to No available certificate or key corresponds to the SSL cipher suites which are enabled Some background on what I have done: My cert is from GoDaddy. I generated the CSR from a new keystore (keystore.jks), then imported the resulting certs back into the same keystore and set the keystore password to the same pwd as the GF master password. Then I created a new SSL listener in GF and pointed it at my keystore file (which I copied into domains/domain1/config). I set the Nickname to the alias of my cert (which is something like 'mydomain.org', i.e. the name that I get when I run keytool -list). In the ciphers section on the network listeners page, I leave the defaults in place (empty, which means all ciphers are available, I think). In domain.xml I've replaced all instances of s1as with 'mydomain.org'. This is the question: What exactly is causing the error highlighted? I'm guessing it's a mismatch between my listener config and the aliases in my keystore, or something similar, but I'm not really sure what. Thanks
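    A sketch of the checks that usually narrow this down (paths are GlassFish defaults, the alias is a placeholder): the listener's certificate nickname must exactly match an alias that really exists in the keystore the listener references, and the key entry must open with the master password.

        # list the aliases actually present in the keystore the listener points at
        keytool -list -v -keystore domains/domain1/config/keystore.jks
        # or inspect just the one entry named in the listener's Nickname field
        keytool -list -keystore domains/domain1/config/keystore.jks -alias mydomain.org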

    Read the article

  • Core jQuery event modification problem

    - by DSKVR
    I am attempting to override a core jQuery event, in this case the keydown event. My intention is to preventDefault() the Left (37), Up (38), Right (39) and Down (40) keys to maintain the consistency of hot keys in my web application. I am using the solution provided here for the conditional charCode preventDefault problem. For some reason, my function override is simply not firing, and I cannot put my finger on the problem. I am afraid that over the past 30 minutes this issue has resulted in some hair loss. Anybody have the remedy? /* Modify Keydown Event to prevent default PageDown and PageUp functionality */ (function(){ var charCodes = new Array(37,38,39,40); var original = jQuery.fn.keydown; jQuery.fn.keydown = function(e){ var key=e.charCode ? e.charCode : e.keyCode ? e.keyCode : 0; alert('now why is my keydown mod not firing?'); if($.inArray(key,charCodes)) { alert('is one of them, do not scroll page'); e.preventDefault(); return false; } original.apply( this, arguments ); } })();
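    Two things stand out in the snippet (the sketch below is an alternative, not a drop-in fix): replacing jQuery.fn.keydown only wraps later calls to .keydown(), it does not intercept events the browser delivers, and $.inArray returns an index (-1 when absent), so the if test is truthy for every key except the one at position 0. A document-level handler sidesteps both:

        (function () {
            var blocked = [37, 38, 39, 40];            // Left, Up, Right, Down
            $(document).keydown(function (e) {
                var key = e.which || e.keyCode || 0;
                if ($.inArray(key, blocked) !== -1) {  // inArray yields an index, not a boolean
                    e.preventDefault();
                    return false;
                }
            });
        })();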

    Read the article
