Search Results

Search found 30912 results on 1237 pages for 'load path'.

  • With a username passed to a script, find the user's home directory

    - by Clinton Blackmore
    I am writing a script that gets called when a user logs in and check if a certain folder exists or is a broken symlink. (This is on a Mac OS X system, but the question is purely bash). It is not elegant, and it is not working, but right now it looks like this: #!/bin/bash # Often users have a messed up cache folder -- one that was redirected # but now is just a broken symlink. This script checks to see if # the cache folder is all right, and if not, deletes it # so that the system can recreate it. USERNAME=$3 if [ "$USERNAME" == "" ] ; then echo "This script must be run at login!" >&2 exit 1 fi DIR="~$USERNAME/Library/Caches" cd $DIR || rm $DIR && echo "Removed misdirected Cache folder" && exit 0 echo "Cache folder was fine." The crux of the problem is that the tilde expansion is not working as I'd like. Let us say that I have a user named george, and that his home folder is /a/path/to/georges_home. If, at a shell, I type: cd ~george it takes me to the appropriate directory. If I type: HOME_DIR=~george echo $HOME_DIR It gives me: /a/path/to/georges_home However, if I try to use a variable, it does not work: USERNAME="george" cd ~$USERNAME -bash: cd: ~george: No such file or directory I've tried using quotes and backticks, but can't figure out how to make it expand properly. How do I make this work?
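    The underlying issue is that tilde expansion happens before variable expansion, so ~$USERNAME is never expanded. A minimal sketch of looking the home directory up explicitly instead (dscl is the Mac OS X directory-services tool; the eval fallback is portable but assumes a trusted username):

        #!/bin/bash
        USERNAME=$3
        # Ask Directory Services for the home directory (Mac OS X)
        HOMEDIR=$(dscl . -read "/Users/$USERNAME" NFSHomeDirectory | awk '{print $2}')
        # Fallback: force a second round of expansion (only safe for sane usernames)
        [ -z "$HOMEDIR" ] && HOMEDIR=$(eval echo "~$USERNAME")
        DIR="$HOMEDIR/Library/Caches"
        if ! cd "$DIR" 2>/dev/null; then
            rm -f "$DIR" && echo "Removed misdirected Cache folder"
        fi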

  • Jira access with AJP-Proxy

    - by user60869
    I want to configure Jira access over an AJP proxy. I proceeded as follows (following this howto: http://confluence.atlassian.com/display/JIRA/Configuring+Apache+Reverse+Proxy+Using+the+AJP+Protocol): 1) In server.xml I activated the AJP connector. 2) Edited the vhost configuration: # Load Proxy-Modules LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so # Load AJP-Modules LoadModule proxy_ajp_module /usr/lib/apache2/modules/mod_proxy_ajp.so # Proxy Configuration <IfModule proxy_http_module> ProxyRequests Off ProxyPreserveHost On # Basic AuthType configuration <Proxy *> AuthType Basic AuthName Bamboo-Server AuthUserFile /var/www/userdb Require valid-user AddDefaultCharset off Order deny,allow Deny from all Allow from 192.168.0.1 satisfy any </Proxy> ProxyPass /bamboo http://localhost:8085/bamboo ProxyPassReverse /bamboo http://localhost:8085/bamboo ProxyPass /jira ajp://localhost:8009/ ProxyPassReverse /jira ajp://localhost:8009/ </IfModule> EDIT: In the logs I found the following: //localhost:8080/ [Fri Nov 19 14:51:13 2010] [debug] proxy_util.c(1819): proxy: worker ajp://localhost:8080/ already initialized [Fri Nov 19 14:51:13 2010] [debug] proxy_util.c(1913): proxy: initialized single connection worker 1 in child 5578 for (localhost) [Fri Nov 19 14:51:32 2010] [error] ajp_read_header: ajp_ilink_receive failed [Fri Nov 19 14:51:32 2010] [error] (120006)APR does not understand this error code: proxy: read response failed from (null) (localhost) [Fri Nov 19 14:51:32 2010] [debug] proxy_util.c(2008): proxy: AJP: has released connection for (localhost) [Fri Nov 19 14:51:32 2010] [debug] mod_deflate.c(615): [client xx.xx.xx.xx Zlib: Compressed 468 to 320 : URL /jira But it doesn't work. Does anybody have an idea?
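    One detail worth checking before anything else: the error log shows a worker for ajp://localhost:8080/ while the ProxyPass lines point at port 8009, so Tomcat's AJP connector and the vhost may simply disagree on the port. A quick sanity check (the Jira paths below are assumptions for a typical standalone install):

        # Is the AJP connector uncommented, and on which port?
        grep -n "AJP" /opt/atlassian/jira/conf/server.xml
        # Expect something like: <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/>

        # Is anything actually listening on the AJP port?
        netstat -tlnp | grep -E ':(8009|8080)'

        # Restart Jira and reload Apache after changing server.xml
        /opt/atlassian/jira/bin/stop-jira.sh && /opt/atlassian/jira/bin/start-jira.sh
        apache2ctl graceful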

  • Cannot connect to server via SSH

    - by Rayne
    I'm running RHEL 6.0, and I accidentally moved the /bin, /boot, /cgroup, console.txt, /data, /dev, /etc to another folder. I think I managed to move these folders back, but now I'm having trouble connecting to the server using SSH, but am able to access the server via VNC. When I tried to connect to the server using a terminal from another server, I get the error ssh_exchange_identification: Connection closed by remote host I'm currently still connected via SSH to the server (haven't closed the window yet), and am still able to access it normally. But if I try to open a new SSH terminal from my current session, I see /bin/bash: Permission denied If I try to open a new SSH File Transfer window from my current session, I get the error File transfer server could not be started or it exited unexpectedly. Exit value 0 was returned. Most likely the sftp-server is not in the path of the user on the server-side I checked and I have Subsystem sftp /usr/libexec/openssh/sftp-server which is the same path as the output of locate sftp-server Also, when I tried to restart sshd, I get the error Couldn't open /dev/null: Permission denied But my /dev/null has the permissions crw-rw-rw- for root,root. How can I resolve this? ETA: Thanks for all your help! I was able to start ssh by running the application directly /usr/sbin/sshd Even though the status of the openssh-daemon is still "stopped".
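    Given the symptoms ("/bin/bash: Permission denied" and "Couldn't open /dev/null: Permission denied" even though the mode looks right), the likely damage from moving the top-level directories is wrong permissions and/or wrong SELinux contexts rather than missing files. A hedged sketch of what to inspect and restore on RHEL 6:

        # Compare with a healthy box: / and /bin must be world-readable/executable,
        # and /dev/null must be crw-rw-rw-
        ls -ld / /bin /dev /etc /dev/null

        # If SELinux is enforcing, put the default file contexts back
        getenforce
        restorecon -Rv /bin /boot /dev /etc

        # Recreate /dev/null if its device type or mode is wrong
        rm -f /dev/null && mknod -m 666 /dev/null c 1 3

        # Then try the init script again
        service sshd restart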

  • Ubuntu 12.04 - Pound Reverse Proxy and Adobe Flex/Flash Auth

    - by James
    First time posting, I have a completely fresh install of ubuntu 12.04 Client as a reverse proxy gateway to our internal network. Our setup is we have one external ip but three domains we would like to point to various webservers on our internal network. It's not so much a load balancing issue or cacheing etc. Merely routing some Client browsers to a port 80 webpage (to adhere to some stricter corporate policies regarding placing port numbers after domain names). I have gone with pound and everything seems to be working fine. Static pages load etc. Everything is good with the exception of a Flash/Flex based WebClient for a Digital Asset Management program. The actual static page loads fine, it is just at the moment of entering credentials, be they correct or incorrect, and hitting login, there is no response whatsoever. Either a rejection or confirmation etc. So the request back to the internal server can't be getting through. I have googled extensively and there might be a solution in a crossdomain.xml file? Documentation isn't very clear. And we are not the authors of the DAM app, and have no control over the code on the Flash/Flex side. Questions: Is there a particular config file/solution for pound that allows Flash/Flex auth information to be forwarded? Is there another reverse proxy program (nginx?)that allows this type of config? Am I looking at this the entire wrong way, should Flash/Flex fundamentally not be allowed to have this access?
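    If crossdomain.xml turns out to be the missing piece, the usual first test is a deliberately permissive policy file served from the document root of the backend the Flex client talks to (a sketch only; the path is an assumption and the wildcard should be tightened before production):

        # Assumed web root of the proxied DAM application
        cat > /var/www/html/crossdomain.xml <<'EOF'
        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM
          "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
          <site-control permitted-cross-domain-policies="master-only"/>
          <allow-access-from domain="*" secure="false"/>
          <allow-http-request-headers-from domain="*" headers="*"/>
        </cross-domain-policy>
        EOF

        # Verify it is reachable through Pound from the outside
        curl -i http://your-external-domain/crossdomain.xml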

  • Change the default program for a filetype to something not in "Program Files" in Windows Vista

    - by Carson Myers
    I'm trying to make my python scripts run in python 2.6 by default when run from the command line. This is paired with adding certain scripts to the PATH variable and .py to PATHEXT for convenience. But I'll be damned if I can get the file type association to work. In the default programs dialog (found in control panel) I find .py, and click "Change Program." This gives me the same dialog as clicking "Open with..." on a file's context menu. I search for python. Tell it to use python, but it doesn't add it to the list of programs I am allowed to use. I tried making a shortcut to python in Program Files, but that won't work either. If I copy python into a folder in Program Files, then that works. But why can't I just point it at C:\python26\python.exe (which is in the PATH variable) in the first place? Is there a way around this, or do I have to just reinstall python into Program Files?
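    For reference, the association can also be set from an elevated command prompt, which sidesteps the "Open with" dialog entirely — a Windows cmd sketch, assuming python.exe really is at C:\python26\python.exe:

        assoc .py=Python.File
        ftype Python.File="C:\python26\python.exe" "%1" %*
        rem make .PY part of PATHEXT permanently for the current user
        setx PATHEXT %PATHEXT%;.PY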

  • Nginx with PAM authentication through pam_script

    - by Envek
    Has anyone set up such a configuration? It's not working for me. I've installed nginx-extras on Ubuntu 12.04 (it's built with the PAM module) and added this to the site config: location ^~ /restricted_place/ { auth_pam "Please specify login and password from main_site"; auth_pam_service_name "nginx"; } Then, in /etc/pam.d/nginx: auth required pam_script.so dir=/path/to/my/auth_scripts And I wrote the simplest possible /path/to/my/auth_scripts/pam_script_auth (I've also tried more complicated scripts): #!/bin/sh exit 0 # should allow anyone It doesn't work. The script is launched (I've written a fully functional script that executes successfully, checks credentials, writes to its own log, returns the correct exit code, and takes noticeably long to run), but no access is granted — only rejected. The following record appears in /var/log/nginx/error.log: 2012/09/13 10:44:42 [alert] 1666#0: waitpid() failed (10: No child processes) If I specify auth required pam_unix.so in /etc/pam.d/nginx and grant the www-data user the right to read /etc/shadow, Unix authorization works fine. But script auth doesn't work. I can't figure out where the trouble is: in the nginx module, or in the pam_script module.
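    The waitpid() alert is the interesting part: it suggests the nginx worker reaps (or ignores) the child before pam_script can collect its exit status, which pam_script then treats as a failure — i.e. the problem may sit in the interaction between the two modules rather than in either one alone. Two hedged checks that narrow things down:

        # pam_script runs the script in the context of the authenticating process
        # (the nginx worker, www-data), so it must be executable by that user
        chmod 755 /path/to/my/auth_scripts/pam_script_auth
        sudo -u www-data /path/to/my/auth_scripts/pam_script_auth; echo "exit code: $?"

        # Exercise the PAM stack without nginx in the loop
        sudo apt-get install pamtester
        pamtester nginx someuser authenticate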

  • Lock down Wiki access to password only but remain open to a subnet via .htaccess

    - by Treffynnon
    Basically we have a Wiki that has some sensitive information stored in it - not the best I know but my predecessor set it up. I want to be able to request password access from any one who is not on the local network subnet. Those on the local subnet should be able to proceed without entering a password. The following .htaccess does not seem to work any more as it is letting non-local access without requiring the password: AuthName "Our Wiki" AuthType Basic AuthUserFile /path/to/passwd/file AuthGroupFile /dev/null Require valid-user Allow from 192.168 Satisfy Any order deny,allow And I cannot work out why. The WikkaWiki it is supposed to be protecting was recently upgraded, which clobbered the .htaccess file so I restored the above from memory/googling. Maybe I am missing an important directive? The full .htaccess is as follows: AuthName "Our Wiki" AuthType Basic AuthUserFile /path/to/passwd/file AuthGroupFile /dev/null Require valid-user Allow from 192.168 Satisfy Any SetEnvIfNoCase Referer ".*($LIST_OF_ADULT_WORDS).*" BadReferrer order deny,allow deny from env=BadReferrer <IfModule mod_rewrite.c> # turn on rewrite engine RewriteEngine on RewriteBase / # if request is a directory, make sure it ends with a slash RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^(.*/[^/]+)$ $1/ # if not rewritten before, AND requested file is wikka.php # turn request into a query for a default (unspecified) page RewriteCond %{QUERY_STRING} !wakka= RewriteCond %{REQUEST_FILENAME} wikka.php RewriteRule ^(.*)$ wikka.php?wakka= [QSA,L] # if not rewritten before, AND requested file is a page name # turn request into a query for that page name for wikka.php RewriteCond %{QUERY_STRING} !wakka= RewriteRule ^(.*)$ wikka.php?wakka=$1 [QSA,L] </IfModule>
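    One likely culprit, hedged: with the default "Order deny,allow" in effect and no "Deny from all" present, the host-based check passes for every client, so "Satisfy Any" never forces the password prompt. A minimal block to test with, written as a heredoc so it lands next to the live file rather than over it:

        cat > /path/to/wiki/.htaccess.auth-test <<'EOF'
        AuthName "Our Wiki"
        AuthType Basic
        AuthUserFile /path/to/passwd/file
        AuthGroupFile /dev/null
        Require valid-user
        Order deny,allow
        Deny from all
        Allow from 192.168
        Satisfy Any
        EOF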

  • Migrating Windows XP BOOT.INI Settings to Windows 7 Boot-loader

    - by Synetech inc.
    Hi, Two months ago my motherboard died, so I bought a used computer that came with Windows 7. I have since installed my old hard-drive, which had Windows XP on it, in this system. What I am trying to do now is to figure out a way to migrate the settings from XP's BOOT.INI into 7's boot-loader. Below is the BOOT.INI I used in XP (I have reduced the strings and updated the disks to point to the new location of the old HD. Oh and I am not clear on the drive letters. In XP, I could boot the recovery console or MS-DOS from a file in C:\ that contains the boot-sector. I am not sure what drive letter it would be called now—I had to manually change all the drive letters of the old partitions in Windows 7 because it auto-assigned them all wrong/differently). [boot loader] timeout=10 default=multi(0)disk(0)rdisk(1)partition(1)\WINDOWS [operating systems] multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="XP" /fastdetect multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="XP (Safe)" /safeboot:network /sos /bootlog /noguiboot C:\CMDCONS\BOOTSECT.DAT="Recovery Console" /cmdcons C:\BOOTSECT.DOS="MS-DOS 7.10" /win95 I have looked around, and have only been able to find some bcdedit commands to add XP to the boot-loader, but none that include information on setting safe-mode for it (or changing any of the XP load options for that matter). Not surprisingly I suppose, I have not found anything on adding the XP recovery console or DOS to the Windows 7 boot-loader. (Yes, I tried EasyBCD, but that did not help; it had no options for XP, and the best I managed was to get a choice of booting 7 or normal-mode XP—choosing XP didn't even give the old XP boot menu.) Can anyone please tell me how to export the entries in XP's boot.ini to 7's boot-loader so that on boot, I can choose to load the following: Windows 7 Windows 7 (Safe-mode) (Windows 7 (The Win7 counterpart of the Recovery Console)) Windows XP Windows XP (Safe-mode) Windows XP (Recovery Console) MS-DOS 7.10
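    For the XP side specifically, the usual route is bcdedit's well-known {ntldr} entry, which chain-loads XP's own ntldr/boot.ini menu — so the XP safe-mode, Recovery Console and MS-DOS lines can stay in boot.ini on the old disk, and only one Windows 7 menu entry is needed to reach them. A Windows cmd sketch from an elevated prompt (the drive letter is an assumption: whichever letter Windows 7 now assigns to the partition holding ntldr and boot.ini):

        bcdedit /create {ntldr} /d "Windows XP (ntldr menu)"
        bcdedit /set {ntldr} device partition=D:
        bcdedit /set {ntldr} path \ntldr
        bcdedit /displayorder {ntldr} /addlast
        bcdedit /timeout 10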

  • route propagation using OSPF in a network

    - by liv2hak
    I am using Juniper J-series routers to emulate a small telco and a VPN customer. The internal routing will be configured with OSPF, MPLS including a default and backup path, RSVP for distributing labels within the telco, OSPF for distributing routes from the customer edge (CE) routers to the VRFs in the adjacent PEs, and finally iBGP for distributing customer routes between VRFs in different PEs. The topology of the network is shown below. The addressing scheme for the network is as follows. UOW-TAU ******* ge-0/0/0 192.168.3.1 TAU-PE1 ******* ge-0/0/0 10.0.1.0 ge-0/0/1 10.0.2.0 ge-0/0/2 192.168.3.2 TAU-P1 ****** ge-0/0/0 172.16.1.0 ge-0/0/1 172.16.3.1 ge-0/0/2 10.0.2.2 HAM-P1 ****** ge-0/0/0 172.16.3.2 ge-0/0/1 172.16.2.1 ge-0/0/3 10.0.3.2 ACK-P1 ****** ge-0/0/0 172.16.1.2 ge-0/0/2 172.16.2.2 ge-0/0/3 10.0.1.2 HAM-PE1 ******* ge-0/0/0 10.0.3.1 ge-0/0/2 192.168.4.2 UOW-HAM ******* ge-0/0/0 192.168.4.1 I also set up a loopback address for each node. I want to set up OSPF so that the path to each internal subnet and each router's loopback address is propagated to all PE and P nodes. I also want to select a single area for the PE and P nodes, and on each node I should add each interface that should be propagated. How do I accomplish this? My understanding of the procedure is below — is it correct? I set up OSPF on the UOW-TAU ge-0/0/0 and ge-0/0/1 interfaces and the UOW-HAM ge-0/0/0 and ge-0/0/1 interfaces; let me call this Area 100. Once I have done this I should be able to reach each node from the others using ping and traceroute. Any help is highly appreciated.
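    A sketch of the matching Junos CLI configuration for one node (a single OSPF area, here area 100, with the core-facing interfaces added and the loopback advertised as passive); interface units and the area number are assumptions used to illustrate the shape, and the customer-facing interfaces stay out of the core IGP:

        # On TAU-PE1, for example; repeat on each P/PE node with its own core interfaces
        configure
        set protocols ospf area 0.0.0.100 interface ge-0/0/0.0
        set protocols ospf area 0.0.0.100 interface ge-0/0/1.0
        set protocols ospf area 0.0.0.100 interface lo0.0 passive
        commit

        # Verify adjacencies and learned routes
        run show ospf neighbor
        run show route protocol ospf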

  • chrooting php-fpm with nginx

    - by dragonmantank
    I'm setting up a new server with PHP 5.3.9 and nginx, so I compiled PHP with the php-fpm SAPI options. By itself it works great using the following server entry in nginx: server { listen 80; server_name domain.com www.domain.com; root /var/www/clients/domain.com/www/public; index index.php; log_format gzip '$remote_addr - $remote_user [$time_local] "$request" $status $bytes_sent "$http_referer" "$http_user_agent" "$gzip_ratio"'; access_log /var/www/clients/domain.com/logs/www-access.log; error_log /var/www/clients/domain.com/logs/www-error.log error; location ~\.php$ { fastcgi_pass 127.0.0.1:9001; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/clients/domain.com/www/public$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include /etc/nginx/fastcgi_params; } } It servers my PHP files just fine. For added security I wanted to chroot my FPM instance, so I added the following lines to my conf file for this FPM instance: # FPM config chroot = /var/www/clients/domain.com and changed the nginx config: #nginx config for chroot location ~\.php$ { fastcgi_pass 127.0.0.1:9001; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME www/public$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include /etc/nginx/fastcgi_params; } With those changes, nginx gives me a File not found message for any PHP scripts. Looking in the error log I can see that it's prepending the root path to my DOCUMENT_ROOT variable that's passed to fastcgi, so I tried to override it in the location block like this: fastcgi_param DOCUMENT_ROOT /www/public/; fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; but I still get the same error, and the debug log shows the full, unchrooted path being sent to PHP-FPM. What am I missing to get this to work?
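    The usual explanation, hedged: once php-fpm is chrooted to /var/www/clients/domain.com, every path PHP receives must be expressed relative to that chroot (and given a leading slash), and the stock fastcgi_params file also sets DOCUMENT_ROOT, so it is safer to include it first and put the chroot-relative overrides after it. An nginx location sketch along those lines:

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9001;
            fastcgi_index index.php;
            include /etc/nginx/fastcgi_params;
            # Paths as seen from inside the chroot /var/www/clients/domain.com
            fastcgi_param DOCUMENT_ROOT /www/public;
            fastcgi_param SCRIPT_FILENAME /www/public$fastcgi_script_name;
        }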

  • Unusual Caching Issue with IE 7/8 and IIS 7

    - by Daniel A. White
    We recently moved a site into production running Server 2008 x64 and IIS 7. The ASP.NET pages apparently load just fine, but when it comes to IE 7 and 8, a weird caching issue has cropped up with the CSS and JavaScript files on the page. On a very sporadic schedule, IE does not get all the files necessary to compose the page (i.e. CSS and JS files). When I manually go to the missing files from the address bar, they come back from local cache as empty. I F5 these source files and magically they come down properly. I refresh the site after loading a few files and the cache seems to hold. This problem has only been reproduced (again, sporadically) on IE 7 and 8 running XP. Chrome and Firefox appear to be immune. We have set IIS to use server-side kernel caching for CSS, JS and images. We also have set to expire content for the App_Themes and Scripts directories to expire immediately. One initial thought it was a SWF loading an FLV on page load. These fixes have not remedied the problem. We had no problems on our staging server which is using Server 2003 and IIS 6. Any ideas would be greatly appreciated. P.S. It sounds similar to this problem: but we do have the Static Content module installed. http://serverfault.com/questions/115099/iis-content-length-0-for-css-javascript-and-images

  • using pf for packet filtering and ipfw's dummynet for bandwidth limiting at the same time

    - by krdx
    I would like to ask if it's fine to use pf for all packet filtering (including using altq for traffic shaping) and ipfw's dummynet for bandwidth limiting certain IPs or subnets at the same time. I am using FreeBSD 10 and I couldn't find a definitive answer to this. Googling returns such results as: It works It doesn't work Might work but it's not stable and not recommended It can work as long as you load the kernel modules in the right order It used to work but with recent FreeBSD versions it doesn't You can make it work provided you use a patch from pfsense Then there's a mention that this patch might had been merged back to FreeBSD, but I can't find it. One certain thing is that pfsense uses both firewalls simultaneously so the question is, is it possible with stock FreeBSD 10 (and where to obtain the patch if it's still necessary). For reference here's a sample of what I have for now and how I load things /etc/rc.conf ifconfig_vtnet0="inet 80.224.45.100 netmask 255.255.255.0 -rxcsum -txcsum" ifconfig_vtnet1="inet 10.20.20.1 netmask 255.255.255.0 -rxcsum -txcsum" defaultrouter="80.224.45.1" gateway_enable="YES" firewall_enable="YES" firewall_script="/etc/ipfw.rules" pf_enable="YES" pf_rules="/etc/pf.conf" /etc/pf.conf WAN1="vtnet0" LAN1="vtnet1" set skip on lo0 set block-policy return scrub on $WAN1 all fragment reassemble scrub on $LAN1 all fragment reassemble altq on $WAN1 hfsc bandwidth 30Mb queue { q_ssh, q_default } queue q_ssh bandwidth 10% priority 2 hfsc (upperlimit 99%) queue q_default bandwidth 90% priority 1 hfsc (default upperlimit 99%) nat on $WAN1 from $LAN1:network to any -> ($WAN1) block in all block out all antispoof quick for $WAN1 antispoof quick for $LAN1 pass in on $WAN1 inet proto icmp from any to $WAN1 keep state pass in on $WAN1 proto tcp from any to $WAN1 port www pass in on $WAN1 proto tcp from any to $WAN1 port ssh pass out quick on $WAN1 proto tcp from $WAN1 to any port ssh queue q_ssh keep state pass out on $WAN1 keep state pass in on $LAN1 from $LAN1:network to any keep state /etc/ipfw.rules ipfw -q -f flush ipfw -q add 65534 allow all from any to any ipfw -q pipe 1 config bw 2048KBit/s ipfw -q pipe 2 config bw 2048KBit/s ipfw -q add pipe 1 ip from any to 10.20.20.4 via vtnet1 out ipfw -q add pipe 2 ip from 10.20.20.4 to any via vtnet1 in
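    Setting aside whether the combination is officially supported, one concrete thing to verify on a stock FreeBSD 10 box is that both firewalls' modules really are loaded, and in a deliberate order — one of the conflicting reports above comes down to exactly that. A hedged sketch:

        # Which of the relevant modules are in the kernel right now?
        kldstat -v | grep -E 'pf|ipfw|dummynet'

        # Load them explicitly, ipfw/dummynet first, then pf
        kldload ipfw
        kldload dummynet
        kldload pf

        # Or pin the order at boot time
        printf 'ipfw_load="YES"\ndummynet_load="YES"\npf_load="YES"\n' >> /boot/loader.conf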

  • mysql mass insert data

    - by user12145
    Edit: I realized that if I construct a large query in memory, the speed increases by almost an order of magnitude: "insert ignore into xxx(col1, col2) values('a',1), values('b',1), values('c',1)..." Edit: since I have an index on the first column, the insert time creeps up as I insert more. Can I delay the index until the end? Original: I'm using the following to batch-insert 10 million rows into a MySQL db (not all at once, since they don't all fit into memory), and it's too slow (taking many hours). Should I use LOAD DATA INFILE to improve performance? I would have to create a second file to store all 10 million rows, then load that into the db. Are there better ways? PreparedStatement st=con.prepareStatement("insert ignore into xxx (col1, col2) "+ " values (?, 1)"); Iterator d=data.iterator(); while(d.hasNext()){ st.clearParameters(); st.setString(1, (d.next()).toLowerCase()); st.addBatch(); } int[]updateCounts=st.executeBatch();
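    A sketch of the two ideas raised above — bulk-loading from a file and postponing index maintenance — with placeholder table, column and file names (note that ALTER TABLE ... DISABLE KEYS only skips non-unique indexes, and only on MyISAM):

        # Write the 10 million rows to a tab-separated file first, then:
        mysql --local-infile=1 -u user -p mydb <<'SQL'
        ALTER TABLE xxx DISABLE KEYS;
        LOAD DATA LOCAL INFILE '/tmp/rows.tsv'
          IGNORE INTO TABLE xxx
          FIELDS TERMINATED BY '\t'
          (col1, col2);
        ALTER TABLE xxx ENABLE KEYS;
        SQL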

  • Apache 2.4 Prefork vs. PHP-FPM Event shows significant decrease in requests per second

    - by Mark
    On my Apache 2.4.2 server with a standard mod_php Prefork setup these are my server-status results Current Time: Wednesday, 24-Oct-2012 19:36:24 CDT Restart Time: Wednesday, 24-Oct-2012 01:27:30 CDT Parent Server Config. Generation: 1 Parent Server MPM Generation: 0 Server uptime: 18 hours 8 minutes 54 seconds Total accesses: 14304233 - Total Traffic: 342.3 GB CPU Usage: u12584.6 s721.93 cu.66 cs3.43 - 20.4% CPU load 219 requests/sec - 5.4 MB/second - 25.1 kB/request 507 requests currently being processed, 355 idle workers ______KKKKR_K______W_KKC___CKK_K_K_W__CC_KKK_KK._K_K_KK._KKKK_K_ K_____KK_KKKK_K_KK__K___KK_K___K_____CKKK_WK_K_____KCKK__K___K_K K_CK_K_K_____K__KKKK_K__K___K_KK_K_K_KKKCK____________KK_CK__KKK __C_KKKKKKK___CK___C_KKK_K__C__K_CK____KKK__K__K__K_K__KK_CK_K__ _KKKKK_K_W__KK______K___K__W___C_K__K____KKKKKKKK.KKKKKKKCK_K___ _C_KK_K_WK__K_KK__K__RK_KK___K____K_KK_K_K___RKC_KKKK___KKKC_K_W _C_KK_KK__W____KC__KKK__KKK___K___KKK_KK_K_KKW__K_KR_KK_KK__KKK_ R__KKK__KKKKKK__K_KKKKK_K__K_K___KKW_________KK_K___KKK___KK.K_C KKKKKKW_____K__K_KKC_KCKK_K_KK_K__KK__K___K__KK_KK__________KK__ __K___KK_K__K_C_KK_K___KK__KK__K__KCK_K__KK_________K_K_KK__.K__ K_CKK.CCRW__KKKKKKKKKKKC__W____K___KWK_KK_KKC______.K_K_KK_KKKC_ __KKK_W_KCKKK_K_K____CCCK__KC_KKKK_K____K_CK_K____K__K____KKK_KK KK___K_K_K__KW__KCKKKK____WKWK__K_KKRKK__C_K_KK_KK_K__KKCC_K__C_ KK_K___K_KK______K_____CKK_K_______KK_CKCK__KKKKK____K__K..K____ __KKWK_KW__KKK__K_KKK___K_KK_KKK__KK___KK___KK_KK___KK____KKWKKC KK_KKKK_................................` When I switch to a PHP-FPM setup with the Event MPM with no other variables changes, my requests/sec plummet and overall apache response is garbage. Current Time: Wednesday, 24-Oct-2012 19:51:21 CDT Restart Time: Wednesday, 24-Oct-2012 19:48:03 CDT Parent Server Config. Generation: 1 Parent Server MPM Generation: 0 Server uptime: 3 minutes 18 seconds Total accesses: 18720 - Total Traffic: 307.1 MB CPU Usage: u16.57 s4.74 cu0 cs0 - 10.8% CPU load 94.5 requests/sec - 1.6 MB/second - 16.8 kB/request 15 requests currently being processed, 49 idle workers PID Connections Threads Async connections total accepting busy idle writing keep-alive closing 11701 114 no 10 22 0 66 38 11702 134 no 5 27 0 81 48 Sum 248 15 49 0 147 86 __R_R__W___RRW________RR__R___W_W_______W_____W_____________R_R_ Is there any obvious reason anyone could think of why this would be the case. I can provide any other additional stats or server setup info to help out. Ive tried tweaking everything up and down and nothing really helps get the PHP-FPM setup anywhere near a baseic prefork/mod-php setup. Thanks!
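    One thing worth ruling out before blaming the MPM, hedged: under prefork with mod_php, each of the 500-odd busy Apache children was a PHP worker, while a default php-fpm pool starts with only a handful of children, so the pool itself can become the bottleneck. An illustrative pool sketch (values are placeholders to be sized against the old prefork numbers, not recommendations):

        ; /etc/php-fpm.d/www.conf
        listen = 127.0.0.1:9000
        pm = dynamic
        pm.max_children = 300
        pm.start_servers = 50
        pm.min_spare_servers = 25
        pm.max_spare_servers = 75
        pm.max_requests = 1000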

  • What is the uninstall procedure for software installed via "make install" on CentOS 6.2?

    - by gkdsp
    I installed OCILIB on my CentOS 6.2 server some time ago, and now I want to install a newer version. The vendor requires an uninstall, but doesn't provide instructions. I'm guessing that's because it's trivial for people with a Linux background. http://orclib.sourceforge.net/doc/html/group_g_install.html If I installed this software using: step 1: # ./configure --with-oracle-headers-path=/usr/include/oracle/11.2/client64 --with-oracle-lib-path=/usr/lib/oracle/11.2/client64/lib step 2: # make step 3: # su root step 4: # make install step 5: # gcc -g -DOCI_IMPORT_LINKAGE -DOCI_CHARSET_ANSI -L/usr/lib/oracle/11.2/client64/lib -lclntsh -L/usr/local/lib -locilib conn.c -o conn How would I go about uninstalling this? I tried following this http://www.cyberciti.biz/faq/delete-uninstall-software-linux-commands/ but nothing was found on my disk using rpm -qa *oci* or yum list *oci*. Maybe since it wasn't installed with yum or rpm then I shouldn't expect either of these to find it. Are there generic instructions for uninstalling software on Linux that I could use, or do the instructions really depend on the specific software? Any help much appreciated.
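    A sketch of the generic options, in order of preference, assuming the original (or an identical re-extracted) source tree is still available:

        # 1) Most autotools-based projects built with ./configure && make provide an
        #    uninstall target; run it from a tree configured the same way as the install
        cd /path/to/ocilib-source
        ./configure --with-oracle-headers-path=/usr/include/oracle/11.2/client64 \
                    --with-oracle-lib-path=/usr/lib/oracle/11.2/client64/lib
        sudo make uninstall

        # 2) Failing that, do a scratch install to see exactly what would be copied where,
        #    then remove those files from the real prefix by hand
        make install DESTDIR=/tmp/ocilib-files
        find /tmp/ocilib-files -type f

        # 3) For the new version, consider checkinstall so yum/rpm can remove it later
        #    (checkinstall may need to come from a third-party repo on CentOS 6)
        sudo checkinstall make install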

  • Apple Service Diagnostic application on USB key?

    - by Matt 'Trouble' Esse
    I found the following in a text file, and I would like to use the Apple Service Diagnostic Application from a bootable USB key but I cannot find where to download it or set it up? Also is this free software or does it require a separate licence? It sounds like it would be a useful tool for diagnosing Mac problems. The Apple Service Diagnostic application is designed to run both EFI and Mac OS X tests from an external USB hard drive. Apple Service Diagnostic (EFI) runs low-level tests of the hardware directly and does not require Mac OS X, while Apple Service Diagnostic (OS) uses Mac OS X to run tests. Booting and Using the Apple Service Diagnostic Application - Before using Apple Service Diagnostic, disconnect any Ethernet network, USB, and audio cables. - With the USB hard drive containing ASD 3S123 plugged into a USB port, restart the computer and hold down the option key as the computer boots up into the Startup Manager. To run ASD (EFI) select the "ASD EFI 3S123" drive icon and press return or select it with a mouse click. To run ASD (OS) select the "ASD OS 3S123" drive icon and press return or select it with a mouse click. ASD (EFI) will load in 20-30 seconds; ASD (OS) will load in 2-3 minutes. - After running ASD (OS) or ASD (EFI), press the Restart button to restart the computer back into the normal startup volume, or hold down the option key to get back to the Startup Manager. ASD is no longer delivered as an image to be restored onto a DVD. ASD 3S117 and newer versions requires installation onto an external USB hard drive. For more information, please refer to the document "Installing ASD on a USB hard drive".

  • How to keep Ubuntu 11.10 and Kate editor w/terminal from changing command line when changing tabs?

    - by Kairan
    I am programming C using Kate editor in Ubuntu 11.10. It works great, but when I change tabs in Kate, the terminal line changes to the file path of the tab I click on. Normally this is not a big deal (other than annoyingly adding extra text to my terminal) however if I am currently RUNNNING a C program, it obviously will type at the command line, which is not so cool. Example terminal window for my C program (its at a menu): 1) select opt 1 2) select opt 2 Enter choice: (here it waits for prompt from user) Now when I click a tab in Kate, it wants to put in the cd / path of the file in that tab, such as: cd /home/user/os/files And of course since my terminal was waiting for prompt from user it gets that command.. not good. Perhaps there is no fix, but maybe someone knows? Obviously I could choose NOT to switch tabs or end program before switching tabs... Note: I probably made the mistake of putting this under StackOverflow which is more of a programming area - so though repost here might be best (I am not sure how to link the questions but will paste hyperlink to that post - I dont want to violate any stackoverflow/superuser violations) Suggestions on merging them are welcome or if I should delete one? StackOverFlow Question

  • Using bind (named) as a public proxy server

    - by TrentDavis
    We have a Python DNS server that does a bunch of stuff to figure out values it should return for various DNS records. This works nicely, however as it is Python, the performance under high load won't be great. What I would like to do is have a "proxy" bind server sit in front of it to return results to the public internet. This will cache the results (typically 15 minutes, some records are a few seconds), so the load on the Python server will be greatly reduced as it will only see one look up per domain (only about 100 domains) every 15 minutes. The data in these domains changes a lot, so using a master won't work as it will constantly be changing. I have something setup that looked like it would work great (using a forwarder for the zone), and tested it with dig etc, all going great. However when we went to go live with it, things weren't working, and we figured out that named is not setting the "Authoritative" bit (fair enough, it is a forwarder). So my question is, can we tell bind to set the Authoritative bit for forwarded domains? I have looked at all the doco I can find, and can't find anything about doing things this way. Most of the doco about using it as a proxy if for a LAN to the internet. Ideally I would like to use bind as it is there and installed (CentOS 5 servers). But at a pinch we could look at a different name server to do the work if it just can't be done with bind. Thanks.
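    For reference, the forward-only zone shape being described looks roughly like this in named.conf (a sketch; the zone name and the address/port of the Python backend are placeholders). Whether BIND will ever set the AA bit on those answers is exactly the open question — as a caching forwarder it normally answers non-authoritatively:

        options {
            recursion yes;
            allow-query { any; };
        };

        zone "dynamic.example.com" {
            type forward;
            forward only;
            forwarders { 127.0.0.1 port 5353; };   // the Python DNS server
        };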

  • Samba Share - MS Excel when saved (can't access the file, there are several possible reasons)

    - by brain90
    Dear fellow ServerFaulters, I have a weird problem with my Samba share. I have one share definition for 3 clients (A, B, C). The share contains some Excel files which have a lot of formulas and are linked to each other. Client A accesses the files with LibreOffice (Ubuntu), client B with WinXP & MS Office 2003, and reading and writing work successfully on both of them. The problem occurs when client C accesses the same file with MS Excel 2003 (Windows XP). This message box appears when he saves the file: Microsoft Office Excel cannot access the file \\192.168.1.23\myshare\ There are several possible reasons: - The file name or path does not exist. - The file is being used by another program. - The workbook you are trying to save has the same name as a currently open workbook. I tried http://support.microsoft.com/kb/291204 but it didn't work. Below is my share definition: [brainshare] comment = brainshare path = /opt/brainshare/ valid users = @brainshare force group = brainshare read only = No create mask = 0775 veto files = /*.scr/*.eml/thumbs.com/ Help me please... Thanks in advance! Server: Ubuntu 10.10, Samba version 3.5.4
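    A commonly suggested experiment for exactly this Excel-on-a-share symptom (an assumption, not something from the original post) is to take oplocks out of the picture for the spreadsheet files and see whether the save error disappears — an smb.conf sketch extending the existing share definition:

        [brainshare]
           comment = brainshare
           path = /opt/brainshare/
           valid users = @brainshare
           force group = brainshare
           read only = No
           create mask = 0775
           veto files = /*.scr/*.eml/thumbs.com/
           ; experiment: disable opportunistic locking for Office files on this share
           oplocks = No
           level2 oplocks = No
           veto oplock files = /*.xls/*.xlsx/

    Reload Samba afterwards with "smbcontrol all reload-config" (or restart smbd) before retesting from client C.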

  • Google Chrome doesn't respond to user actions correctly

    - by Carlos A. Junior
    Recently I changed my OS from Mint 13 (KDE, 64-bit) to Ubuntu 12.04 (Cinnamon, 64-bit), and the same bug still appears on the new installation. Google Chrome doesn't seem to refresh (repaint) the page in response to my interactions. Example: when I try to comment on a YouTube video and click on the textarea, the cursor doesn't appear inside the textarea, BUT if I switch to another tab and return, the cursor appears. If I start typing some text, the characters don't appear as I type; again, if I switch to another tab and return, the typed text appears in the textarea. Other cases where this bug appears: modal box links don't open the modal; forms inside modal boxes don't show typed characters; the common Disqus comment plugin doesn't work when focused. I have no idea what causes this bug (video driver, window manager, a Chrome bug? I don't know). Any idea how to solve this? Additional information: Google Chrome 22.0.1229.79 (Official Build 158531) OS Linux WebKit 537.4 (@129177) JavaScript V8 3.12.19.11 Flash 11.3.31.331 User Agent Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4 Command Line /opt/google/chrome/google-chrome --flag-switches-begin --flag-switches-end Executable Path /opt/google/chrome/google-chrome Profile Path /home/carlos/.config/google-chrome/Default Kernel version: 3.2.0-31-generic-pae Ubuntu 12.04 Best regards.

  • Is 30 calls / second a lot for one IIS server?

    - by Lieven Cardoen
    We have a RIA application that 300 clients concurrently use in an intranet environment. Together they make 30 calls / second to IIS (asp.net) (actually it's 60 but calls are loadbalanced over two IIS servers). Half of the calls is getting an asset (Caching Profile is used so most of the time cache is hit), the other half is saving data to a sql server. Retrieving an asset is done with a aspx page. Saving the data happens via WebORB, asp.net and Sql Server. So some processing is needed by WebORB (amf decoding, GZIP, ...). We also use Spring.NET, and some of the container objects have a request scope (not a lot). IIS servers -- Virtual machines, 4 CPU, 2 gb RAM. They are based on Windows 2008 x64 SP2 Enterprise Edition. Sql Server 2008 is used. Apparently CPU of both IIS serers is constantly around 60-70%. Now, my question, is the load of 60-70% acceptable and how could we possible bring that down to less % (maybe using only one IIS server)? + Is 2 gb RAM enough? Assets can be up to 20mb, but on average, they are about 30kb. (the load of 60-70% is achieved with assets around 30kb). The data that gets saved with weborb is very small (2kb) and is just one object.

  • Cannot open Pivot Table source file

    - by Ken
    Excel Pivot table error is: Cannot open Pivot Table source file C:\Users\UserName\AppData\Roaming\Microsoft\Excel\DatabaseName (version1).TableName I've seen other questions and answers on the same topic, but I think this is different. I believe I know why the error is occurring: Excel closed unexpectedly and did an autosave with (version1) attached to the original file name, saving it in the C:\Users\... location above, which is the default recovery location. I opened the recovered file in Excel, saved it as version1 on the server where the original file was located, deleted the original file, and renamed version1 to the original name. When I go to PivotTable Tools > Options > Change Data Source, it shows only the Table and Range, which are correct, but it does not show the file name or path. The version1 file and the renamed file both had the same structure, so the same source table was in both, but they were different files. How do I change the source file from what it is looking for to my renamed file? PS - The (version1) file that it says it is looking for is not in the autosave location, i.e. it is not at the path where it says it is looking. Thank you for any help. Ken

  • How to configure fastcgi to work with lighttpd in Ubuntu

    - by michael
    I am able to run lighttpd on ubuntu 9.10. But when i tried to setup fastcgi with lighttpd by putting this in the ligttpd.conf file: #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => "9098", "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", "docroot" => "/" # remote server may use # it's own docroot )) ) This is what I get in the error.log in ligttpd: 2010-03-07 21:00:11: (log.c.166) server started 2010-03-07 21:00:11: (mod_fastcgi.c.1104) the fastcgi-backend /usr/local/bin/cgi-fcgi failed to start: 2010-03-07 21:00:11: (mod_fastcgi.c.1108) child exited with status 1 /usr/local/bin/cgi-fcgi 2010-03-07 21:00:11: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags. 2010-03-07 21:00:11: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed. 2010-03-07 21:00:11: (server.c.931) Configuration of plugins failed. Going down. I do have cgi-fcgi in /usr/local/bin: $ which cgi-fcgi /usr/local/bin/cgi-fcgi '/usr/local/bin/cgi-fcgi' is the executable after I download and compile fast-cgi. Here is my lighttpd conf file: $ more lighttpd.conf # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. 
server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) server.port = 9090 ## bind to localhost (default: all interfaces) server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => 1026, "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", #"docroot" => "/" # remote server may use # it's own docroot )) ) ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.s ocket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => 
"download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "ac cess plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 Thank you for your help.

  • Bridged network on OS X only gets UDP broadcast traffic

    - by a paid nerd
    I've created a bridged network Mac OS X 10.8.5 using ifconfig and TUNTAP for OS X to bridge my wireless connection, en0, with a virtual interface, tap0, which I can use for guest VMs: $ sudo sysctl -w net.inet.ip.forwarding=1 $ sudo sysctl -w net.link.ether.inet.proxyall=1 $ sudo sysctl -w net.inet.ip.fw.enable=1 $ sudo ifconfig bridge0 create $ sudo ifconfig bridge0 addm en0 addm tap0 $ sudo ifconfig bridge0 up $ ifconfig en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 28:cf:xx:xx:xx:xx inet6 xxxx::xxxx:xxxx:xxxx:xxxx%en0 prefixlen 64 scopeid 0x4 inet 192.168.100.64 netmask 0xffffff00 broadcast 192.168.100.1 media: autoselect status: active bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether ac:de:xx:xx:xx:xx Configuration: priority 0 hellotime 0 fwddelay 0 maxage 0 ipfilter disabled flags 0x2 member: en0 flags=3<LEARNING,DISCOVER> port 4 priority 0 path cost 0 member: tap0 flags=3<LEARNING,DISCOVER> port 8 priority 0 path cost 0 tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether ca:3d:xx:xx:xx:xx open (pid 88244) However, if I tcpdump -i tap0, I only see broadcast traffic. Shouldn't I see a mirror of everything on en0? (192.168.100.33, the host doing the broadcasting, is another unrelate, noisy server on my LAN.) (I asked a similar question here and will probably close it.)

  • mrepo and grouplist/groupinstall? mrepo not working as expected with groups

    - by user52874
    All, I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo: EXAMPLES Here is an example of a repository with a groups file. Note that the groups file should be in the same directory as the rpm packages (i.e. /path/to/rpms/comps.xml). createrepo -g comps.xml /path/to/rpms So here's what I'm doing: wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/ lots of output, no apparent errors or warnings BUT.. from a client: yum grouplist Loaded plugins: refresh-packagekit Setting up Group Process Error: No group data available for configured repositories Here's /etc/mrepo.conf: ### Configuration file for mrepo ### The [main] section allows to override mrepo's default settings ### The mrepo-example.conf gives an overview of all the possible settings [main] srcdir = /var/mrepo wwwdir = /var/www/mrepo confdir = /etc/mrepo.conf.d arch = x86_64 mailto = root@localhost smtp-server = localhost pxelinux = /usr/lib/syslinux/pxelinux.0 tftpdir = /tftpboot #rhnlogin = username:password ### Any other section is considered a definition for a distribution ### You can put distribution sections in /etc/mrepo.conf.d ### Examples can be found in the documentation. Here's /etc/mrepo.conf.d/sl6.mrepo: ### Scientific Linux 6 [SL6] name = Scientific Linux 6 release = 6 arch = x86_64 metadata = repomd repoview os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/ updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/ security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/ fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
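    Two hedged things to check, since createrepo itself reported no errors: whether the group data actually made it into the repodata the clients read (mrepo regenerates repodata under wwwdir on its own runs, which can silently drop a hand-added comps file), and whether yum on the client is still serving cached metadata. Paths below follow the mrepo.conf above but are assumptions:

        # Server: does the repomd.xml the clients fetch reference a group file at all?
        grep -A2 '<data type="group"' /var/www/mrepo/SL6-x86_64/os/repodata/repomd.xml

        # Server: make sure createrepo ran against the directory the clients use as the
        # repo root (repodata/ must sit next to what baseurl points at, not a level down)
        createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/

        # Client: discard cached metadata before testing again
        yum clean all
        yum grouplist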
