Search Results

Search found 27143 results on 1086 pages for 'include path'.

Page 288 of 1086

  • Nagios shell script cannot be executed

    - by MeinAccount
    I'm trying to monitor GitLab with nagios. I've created the following command definition and shell script, but when checking the service I'm receiving the following e-mail. How can I solve this? The file is executable.

        [...] nagios : 3 incorrect password attempts ; TTY=unknown ; PWD=/ ; USER=git ;
        COMMAND=/bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh

    Command definition:

        define command {
            command_name custom_check_gitlab
            command_line /var/lib/nagios/custom_plugins/check_gitlab.sh
        }

    Shell script:

        #! /bin/sh
        # [...]
        RAILS_ENV="production"

        # Script variable names should be lower-case not to conflict with internal
        # /bin/sh variables such as PATH, EDITOR or SHELL.
        app_root="/home/git/gitlab"
        app_user="git"
        unicorn_conf="$app_root/config/unicorn.rb"
        pid_path="$app_root/tmp/pids"
        socket_path="$app_root/tmp/sockets"
        web_server_pid_path="$pid_path/unicorn.pid"
        sidekiq_pid_path="$pid_path/sidekiq.pid"

        ### Here ends user configuration ###

        # Switch to the app_user if it is not he/she who is running the script.
        if [ "$USER" != "$app_user" ]; then
            sudo -u "$app_user" -H -i $0 "$@"; exit;
        fi

        # Switch to the gitlab path, if it fails exit with an error.
        if ! cd "$app_root" ; then
            echo "Failed to cd into $app_root, exiting!"; exit 1
        fi

        ### Init Script functions

        check_pids(){
            if ! mkdir -p "$pid_path"; then
                echo "Could not create the path $pid_path needed to store the pids."
                exit 1
            fi
            # If there exists a file which should hold the value of the Unicorn pid: read it.
            if [ -f "$web_server_pid_path" ]; then
                wpid=$(cat "$web_server_pid_path")
            else
                wpid=0
            fi
            if [ -f "$sidekiq_pid_path" ]; then
                spid=$(cat "$sidekiq_pid_path")
            else
                spid=0
            fi
        }

        # Checks whether the different parts of the service are already running or not.
        check_status(){
            check_pids
            # If the web server is running kill -0 $wpid returns true, or rather 0.
            # Checks of *_status should only check for == 0 or != 0, never anything else.
            if [ $wpid -ne 0 ]; then
                kill -0 "$wpid" 2>/dev/null
                web_status="$?"
            else
                web_status="-1"
            fi
            if [ $spid -ne 0 ]; then
                kill -0 "$spid" 2>/dev/null
                sidekiq_status="$?"
            else
                sidekiq_status="-1"
            fi
        }

        check_pids
        check_status

        if [ "$web_status" != "0" -a "$sidekiq_status" != "0" ]; then
            echo "GitLab is not running."
            exit 2
        fi
        if [ "$web_status" != "0" ]; then
            printf "The GitLab Unicorn webserver is \033[31mnot running\033[0m.\n"
            exit 1
        fi
        if [ "$sidekiq_status" != "0" ]; then
            printf "The GitLab Sidekiq job dispatcher is \033[31mnot running\033[0m.\n"
            exit 1
        fi
        if [ "$web_status" = "0" -a "$sidekiq_status" = "0" ]; then
            printf "GitLab and all it's components are \033[32mup and running\033[0m.\n"
            exit 0
        fi
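    The e-mail shown is sudo on the monitored host rejecting the switch from the nagios user to git, so the plugin dies waiting for a password it cannot type. A minimal sudoers sketch of one way around that (this is an assumption, not part of the original setup; add it with visudo and adjust the user names and path to your system):

        # /etc/sudoers.d/nagios-gitlab  (hypothetical file)
        # let the nagios user run the GitLab check as git without a password prompt
        nagios ALL=(git) NOPASSWD: /var/lib/nagios/custom_plugins/check_gitlab.sh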

    Read the article

  • Windows 8 - Ubuntu dual boot

    - by Serkan Özkan
    I bought a new Toshiba S855 notebook with Windows 8 preinstalled. Secure Boot was enabled by default. I installed the latest version of Ubuntu after disabling Secure Boot (it was not possible to install Ubuntu without disabling it). But now, when I enable Secure Boot, the system automatically boots into Windows 8; it only boots into Ubuntu when I disable Secure Boot. EasyBCD lists the following boot entries, but I can only see Windows 8 in the boot menu:

        Default: Windows 8
        Timeout: 7 seconds
        EasyBCD Boot Device: C:\

        Entry #1
        Name: Ubuntu
        BCD ID: {971641cd-304a-11e2-be82-806e6f6e6963}
        Device: \Device\HarddiskVolume2
        Bootloader Path: \EFI\ubuntu\grubx64.efi
        ...
        Entry #5
        Name: Windows 8
        BCD ID: {current}
        Drive: C:\
        Bootloader Path: \windows\system32\winload.efi

    Any recommendations will be appreciated.
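    Since the firmware itself decides what to boot while Secure Boot is on, it can help to look at the UEFI boot entries directly rather than at EasyBCD's view of the BCD store. A hedged sketch from the Ubuntu side (the entry numbers below are placeholders, not values taken from this machine):

        sudo efibootmgr -v             # list the firmware boot entries and the current BootOrder
        sudo efibootmgr -o 0002,0000   # example only: put the ubuntu entry ahead of Windows Boot Manager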

    Read the article

  • How to change theme in Windows 7 with Powershell script?

    - by Greg McGuffey
    I would like to have a script that changes the current theme of Windows 7. I found the registry entry where this is stored, but apparently I need to take some further action to get Windows to load the theme. Any ideas? Here is the script I'm trying to use, but it isn't working (the registry is updated, but the theme is not changed):

        ######################################
        # Change theme by updating registry. #
        ######################################

        # Define argument which defines which theme to apply.
        param (
            [string] $theme = $(Read-Host -prompt "Theme")
        )

        # Define the themes we know about.
        $knownThemes = @{
            "myTheme" = "mytheme.theme";
            "alien"   = "oem.theme"
        }

        # Identify paths to user themes.
        $userThemes = "C:\Users\yoda\AppData\Local\Microsoft\Windows\"

        # Get name of theme file, based on theme provided.
        $themeFile = $knownThemes["$theme"]

        # Build path to theme and set registry.
        $newThemePath = "$userThemes$themeFile"
        $regPath = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Themes\"
        Set-ItemProperty -path $regPath -name CurrentTheme -value $newThemePath

        # Update system with this info... this isn't working!
        rundll32.exe user32.dll, UpdatePerUserSystemParameters

    Thanks!

    Read the article

  • Finding header files

    - by rwallace
    A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if "" was used), then along the specified and default include paths, fail if still not found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may for a number of reasons not have the benefit of the setup performed by a complex build process, and have to make the best of what it is given. In other words, it may find a header file not present in the include paths it knows, and have to take its best shot at finding the file itself. I'm currently thinking of using the following algorithm:

    1. Start in the directory of the including file.
    2. Is the header file found in the current directory or any subdirectory thereof? If so, done.
    3. If we are at the root directory, the file doesn't seem to be present on this machine, so skip it.
    4. Otherwise move to the parent of the current directory and go to step 2.

    Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?
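    A sketch of that search in Python (illustrative only; the function name and the give-up rule at the filesystem root are assumptions on my part, not details from the question):

        import os

        def find_header(including_file, header_name):
            """Walk upward from the includer's directory; at each level, search
            that directory and all of its subdirectories for the header."""
            current = os.path.dirname(os.path.abspath(including_file))
            while True:
                for root, dirs, files in os.walk(current):
                    if header_name in files:
                        return os.path.join(root, header_name)
                parent = os.path.dirname(current)
                if parent == current:   # reached the root; give up on this header
                    return None
                current = parent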

    Read the article

  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address that dispatches HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen for HTTP requests, and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run Ubuntu and also run an nginx instance. I'm having trouble with caching on a particular website that is fully static: nginx keeps serving me stale content after files are updated, until I either clear /var/cache/nginx/ and restart nginx, or set proxy_cache off for this server and reload the config. Here's the detail of my configuration.

    On the server (Proxmox), /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 8;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
            use epoll;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            #tcp_nopush on;
            tcp_nodelay on;
            #keepalive_timeout 65;
            types_hash_max_size 2048;
            server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            client_body_buffer_size 1k;
            client_max_body_size 8m;
            large_client_header_buffers 1 1K;
            ignore_invalid_headers on;
            client_body_timeout 5;
            client_header_timeout 5;
            keepalive_timeout 5 5;
            send_timeout 5;
            server_name_in_redirect off;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_vary on;
            gzip_proxied any;
            gzip_comp_level 6;
            # gzip_buffers 16 8k;
            gzip_http_version 1.1;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            limit_conn_zone $binary_remote_addr zone=gulag:1m;
            limit_conn gulag 50;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/conf.d/proxy.conf:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_hide_header X-Powered-By;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

    /etc/nginx/sites-available/my-domain.conf:

        server {
            listen 80;
            server_name .my-domain.com;
            access_log off;

            location / {
                proxy_pass http://my-domain.local:80/;
                proxy_cache cache;
                proxy_cache_valid 12h;
                expires 30d;
                proxy_cache_use_stale error timeout invalid_header updating;
            }
        }

    On the container (my-domain.local), nginx.conf (everything is inside the main config file; it was done quickly):

        user www-data;
        worker_processes 1;
        error_log logs/error.log;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            gzip off;

            server {
                listen 80;
                server_name .my-domain.com;
                root /var/www;
                access_log logs/host.access.log;
            }
        }

    I've read many blog posts and answers before resolving to post my own question. Most answers I've seen suggest setting sendfile off, but that didn't work for me. I have tried many other things and double-checked my settings, and all seems fine, so I'm wondering whether I'm expecting nginx's cache to do something it's not meant to do.
    Basically, I thought that if one of the static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file when it requests it. But I now have the feeling I misunderstood many things. Above all, I now wonder how nginx on the server could even know that a file in the container has changed. I have seen a directive called proxy_header_pass (or something alike); should I use this to let the nginx instance in the container somehow inform the one on Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?
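    For what it's worth, stock nginx gives the backend no way to push an invalidation to the proxy cache: entries only expire by time (proxy_cache_valid / inactive) or get refreshed when the cache is bypassed. A hedged sketch of a manual refresh hook, assuming you are willing to send a secret header whenever you want the proxy to refetch (the header name X-Refresh-Cache is my invention, not an nginx built-in):

        location / {
            proxy_pass http://my-domain.local:80/;
            proxy_cache cache;
            proxy_cache_valid 200 12h;
            # when the request carries X-Refresh-Cache, skip the cached copy,
            # go to the backend, and store the fresh response in the cache
            proxy_cache_bypass $http_x_refresh_cache;
            proxy_cache_use_stale error timeout invalid_header updating;
        }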

    Read the article

  • Compiling GCC or Clang for thumb drive on OSX

    - by user105524
    I have a MacBook that I don't have admin rights on, and I would like to be able to use either GCC or Clang on it. Since I lack admin rights I can't install binutils or a compiler into /usr. My plan is to install both of these (using an old MacBook that I do have admin rights for) onto a flash drive and then run the compiler off of there. How would one go about building GCC or Clang so that it could run just off of a thumb drive? I've tried both but haven't had any success. I've tried defining as many of the directories as possible through configure, but haven't been able to successfully build. My current configure script for gcc-4.8.1 is (where USB20FD is the thumb drive):

        ../gcc-4.8.1/configure --prefix=/Volumes/USB20FD/usr \
            --with-local-prefix=/Volumes/USB20FD/usr/local \
            --with-native-system-header-dir=/Volumes/USB20FD/usr/include \
            --with-as=/Volumes/USB20FD/usr/bin/as \
            --enable-languages=c,c++,fortran \
            --with-ld=/Volumes/USB20FD/usr/bin/ld \
            --with-build-time-tools=/Volumes/USB20FD/usr/bin \
            AR=/Volumes/USB20FD/usr/bin/ar \
            AS=/Volumes/USB20FD/usr/bin/as \
            RANLIB=/Volumes/USB20FD/usr/bin/ranlib \
            LD=/Volumes/USB20FD/usr/bin/ld \
            NM=/Volumes/USB20FD/usr/bin/nm \
            LIPO=/Volumes/USB20FD/usr/bin/lipo \
            AR_FOR_TARGET=/Volumes/USB20FD/usr/bin/ar \
            AS_FOR_TARGET=/Volumes/USB20FD/usr/bin/as \
            RANLIB_FOR_TARGET=/Volumes/USB20FD/usr/bin/ranlib \
            LD_FOR_TARGET=/Volumes/USB20FD/usr/bin/ld \
            NM_FOR_TARGET=/Volumes/USB20FD/usr/bin/nm \
            LIPO_FOR_TARGET=/Volumes/USB20FD/usr/bin/lipo \
            CFLAGS=" -nodefaultlibs -nostdlib -B/Volumes/USB20FD/bin -isystem/Volumes/USB20FD/usr/include -static-libgcc -v -L/Volumes/USB20FD/usr/lib " \
            LDFLAGS=" -Z -lc -nodefaultlibs -nostdlib -L/Volumes/USB20FD/usr/lib -lgcc -syslibroot /Volumes/USB20FD/usr/lib/crt1.10.6.o "

    Any obvious ideas of which of these options need to be turned on to install the appropriate files on the thumb drive during installation? What other magic occurs during an Xcode installation which isn't occurring here? Thanks for any suggestions.

    Read the article

  • Ways to organize interface and implementation in C++

    - by Felix Dombek
    I've seen that there are several different paradigms in C++ concerning what goes into the header file and what into the cpp file. AFAIK, most people, especially those from a C background, do:

        // foo.h
        class foo {
        private:
            int mem;
            int bar();
        public:
            foo();
            foo(const foo&);
            foo& operator=(foo);
            ~foo();
        };

        // foo.cpp
        #include "foo.h"
        int foo::bar() { return mem; }
        foo::foo() { mem = 42; }
        foo::foo(const foo& f) { mem = f.mem; }
        foo& foo::operator=(foo f) { mem = f.mem; return *this; }
        foo::~foo() {}
        int main(int argc, char *argv[]) { foo f; }

    However, my lecturers usually teach C++ to beginners like this:

        // foo.h
        class foo {
        private:
            int mem;
            int bar() { return mem; }
        public:
            foo() { mem = 42; }
            foo(const foo& f) { mem = f.mem; }
            foo& operator=(foo f) { mem = f.mem; return *this; }
            ~foo() {}
        };

        // foo.cpp
        #include "foo.h"
        int main(int argc, char* argv[]) { foo f; }
        // other global helper functions, DLL exports, and whatnot

    Originally coming from Java, I have also always stuck to this second way, for several reasons: I only have to change something in one place if the interface or method names change, I like the different indentation of things inside classes when I look at their implementation, and I find names more readable as foo compared to foo::foo. I want to collect pros and cons for either way. Maybe there are even still other ways? One disadvantage of my way is of course the need for occasional forward declarations.

    Read the article

  • Is learning how to use C (or C++) a requirement in order to be a good (excellent) programmer?

    - by blueberryfields
    When I first started to learn how to program, real programmers could write assembly in their sleep. Any serious schooling in computer science would include a hefty bit of training and practice in programming using assembly. That has since changed, to the point where I see computer science degrees in which assembly, if included at all, is relegated to one assignment and one chapter, for a total of two weeks' work out of four years of schooling. C/C++ programming seems to have followed a similar path. I'm no longer surprised to interview university graduates who have not spent more than two weeks programming in C++, and have only read of C in a book somewhere. While the most serious CS degrees still seem to include significant time learning and using one or both of these languages, the trend is clearly towards less enforced C/C++ in school. It's clearly possible to make a career producing good work without ever reading or writing a single line of C or C++ code. Given all of that, is learning the two languages worth the effort? Are they at all required to excel? (Beyond the obvious, non-language-specific advice, such as "a good selection of languages is probably important for a comprehensive education" and "it's probably a good idea to keep trying out and learning new languages throughout a programmer's career, just to stretch the gray cells".)

    Read the article

  • SSL support with Apache and Proxytunnel

    - by whuppy
    I'm inside a strict corporate environment. HTTPS traffic goes out via an internal proxy (for this example it's 10.10.04.33:8443) that's smart enough to block ssh'ing directly to ssh.glakspod.org:443. I can get out via proxytunnel. I set up an apache2 VirtualHost at ssh.glakspod.org:443 thus:

        ServerAdmin [email protected]
        ServerName ssh.glakspod.org

        <!-- Proxy Section -->
        <!-- Used in conjunction with ProxyTunnel -->
        <!-- proxytunnel -q -p 10.10.04.33:8443 -r ssh.glakspod.org:443 -d %host:%port -->
        ProxyRequests on
        ProxyVia on
        AllowCONNECT 22
        <Proxy *>
            Order deny,allow
            Deny from all
            Allow from 74.101
        </Proxy>

    So far so good: I hit the Apache proxy with a CONNECT, then PuTTY and my ssh server shake hands and I'm off to the races. There are, however, two problems with this setup:

    1. The internal proxy server can sniff my CONNECT request and also see that an SSH handshake is taking place. I want the entire connection between my desktop and ssh.glakspod.org:443 to look like HTTPS traffic no matter how closely the internal proxy inspects it.
    2. I can't get the VirtualHost to be a regular https site while proxying. I'd like the proxy to coexist with something like this:

        SSLEngine on
        SSLProxyEngine on
        SSLCertificateFile /path/to/ca/samapache.crt
        SSLCertificateKeyFile /path/to/ca/samapache.key
        SSLCACertificateFile /path/to/ca/ca.crt

        DocumentRoot /mnt/wallabee/www/html
        <Directory /mnt/wallabee/www/html/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

        <!-- Need a valid client cert to get into the sanctum -->
        <Directory /mnt/wallabee/www/html/sanctum>
            SSLVerifyClient require
            SSLOptions +FakeBasicAuth +ExportCertData
            SSLVerifyDepth 1
        </Directory>

    So my question is: how do I enable SSL support on the ssh.glakspod.org:443 VirtualHost in a way that will work with ProxyTunnel? I've tried various combinations of proxytunnel's -e, -E, and -X flags without any luck. The only lead I've found is Apache Bug No. 29744, but I haven't been able to find a patch that will install cleanly on Ubuntu Jaunty's Apache version 2.2.11-2ubuntu2.6. Thanks in advance.

    Read the article

  • .desktop shortcuts aren't working for java applications in LXDE

    - by chaz
    I just installed Minecraft on my LXDE desktop / Lubuntu machine and I'm trying to create a .desktop file on the desktop that executes java -jar ~/minecraftlauncher.jar. The command works in bash scripts and in the terminal, but refuses to work when I click on my .desktop shortcut, which is supposed to execute the same command. I've experimented with other jars and they don't seem to start either. Here is my xsession log:

        ** (pcmanfm:1572): DEBUG: launch command: <java -jar ~/Downloads/minecraft_server.jar>
        ** (pcmanfm:1572): DEBUG: sn_id = pcmanfm-1572-administrator-Dimension-3000-java-14_TIME14031891
        Unable to access jarfile ~/Downloads/minecraft_server.jar
        ** (pcmanfm:1572): DEBUG: launch command: <java -jar ~/minecraftlauncher.jar>
        ** (pcmanfm:1572): DEBUG: sn_id = pcmanfm-1572-administrator-Dimension-3000-java-15_TIME14070158
        Unable to access jarfile ~/minecraftlauncher.jar

    UPDATE: Whoops, it seems to work when I give an absolute path. I guess the home path is something else.

    UPDATE: I guess X doesn't resolve the home specifier. I ran a .desktop file that executed a script that outputs the current directory, and it seems to be correct.
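    That matches how Exec lines are handled: the launcher does not run the command through a shell, so ~ is never expanded and java receives the literal path, which is exactly what the log shows. A sketch of a .desktop entry with the path spelled out (the home directory below is a placeholder for the real one):

        [Desktop Entry]
        Type=Application
        Name=Minecraft
        Exec=java -jar /home/youruser/minecraftlauncher.jar
        Terminal=false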

    Read the article

  • URI Scheme, launch program in its directory

    - by ZaKlaus
    I have registered a URI scheme for my app. When I open it with "Run..." or in a browser, it runs with the hosting application's working directory. For example, when I open the URL from a web page, the program's working directory is the browser's. What do I want? I want to run the program test.exe located at C:\data\test.exe with C:\data as its working directory, so it can reach other data by relative path; test.exe should be able to access the file .\file.txt without using an absolute path. I hope this is understandable; sorry for the bad English.

    Read the article

  • Using pscp and getting permission denied

    - by Espen
    I'm using pscp to transfer files to a virtual Ubuntu server using this command:

        pscp test.php user@server:/var/www/test.php

    and I get the error "permission denied". If I try to transfer to the folder /home/user/ I have no problems. I guess this is because the user I'm using doesn't have write access to /var/www/. When I use SSH I have to use sudo to get access to the /var/www/ path, and I do. Is it possible to tell pscp to "sudo" transfers to the server, so I can get access to /var/www/ and actually transfer files into that folder?
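    As far as I know pscp has no option to elevate on the remote side, so a common workaround is to copy to somewhere the user can write and then move the file with sudo. A sketch using the paths from the question (plink is PuTTY's command-line ssh client; -t gives sudo a terminal for its password prompt):

        pscp test.php user@server:/home/user/test.php
        plink -t user@server "sudo mv /home/user/test.php /var/www/test.php"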

    Read the article

  • How to handle Real Time Data from a database perspective?

    - by balexandre
    I have an idea in mind, but the database side still confuses me. Imagine that I want to show real-time data; using one of the latest browser technologies (WebSockets, with fallbacks for older browsers) it is very easy to show all observers (user browsers) what everyone is doing. Remy Sharp has an example of how simple this is. But I still don't get the database part. Imagine (using Remy's Tron game) that I want to save the path of each connected user in a database, and that a client who wants to watch with a 5-second delay should see not only the state up to 5 seconds ago but also its continuation as time goes on. How can I query a DB like that?

        SELECT x, y FROM run WHERE time >= DATEADD(second, -5, rundate);

    is not the recommended path, right? And pulling this query every x seconds is not a real data feed, correct? If someone can help me understand the database point of view, I would greatly appreciate it.
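    One common pattern here is to poll incrementally rather than by timestamp window: remember the last row already sent to a client and fetch only what came after it. A sketch, assuming the run table has (or is given) a monotonically increasing id column, which is my assumption and not stated in the question:

        -- the client (or server-side session) remembers the last id it has seen
        SELECT id, x, y
        FROM run
        WHERE id > @last_seen_id
        ORDER BY id;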

    Read the article

  • IIS permissions issue pointing docroot to Samba share

    - by lalalalalalalambda
    I have an IIS project which is stored on a Samba share, network-mounted with the following line:

        X: \\my-samba-server\dev /user:freddie

    Connectivity is fine; I can read/write files on X:. In IIS, I'm trying to set the site's physical path to \\my-samba-server\dev\folder\to\my\files, which results in the following 500.19 error:

        Config Error | Cannot read configuration file due to insufficient permissions

    By default it tries to use pass-through authentication. If I try to set it to connect as the specific user freddie, I receive "The specified user does not exist". What is the correct way to connect to a path which has been set up as described above? (The Samba man pages indicate version 3.6 is on the Debian host.)

    Read the article

  • Mac OS X 10.5/6: authenticate against NIS or LDAP when both servers have your username

    - by Wang
    We have an organization-wide LDAP server and a department-only NIS server. Many users have accounts with the same name on both servers. Is there any way to get Leopard/Snow Leopard machines to query one server, and then the other, and let the user log in if his username/password combination matches at least one record? I can get either NIS authentication or LDAP authentication. I can even enable both, with LDAP set as higher priority, and authenticate using the name and password listed on the LDAP server. However, in the last case, if I set the LDAP domain as higher-priority in Directory Utility's search path and then provide the username/password pair listed in the NIS record, then my login is rejected even though the NIS server would accept it. Is there any way to make the OS check the rest of the search path after it finds the username?

    Read the article

  • After upgrading to trusty, ALSA midi connection (aconnect) doesn't seem to work right

    - by SougonNaTakumi
    Previously, in Kubuntu 13.10, I was able to open vmpk or plug in a MIDI keyboard and, provided that TiMidity was running in server mode, run

        aconnect [keyboard port (129:0 for vmpk)] 14:0
        aconnect 14:0 128:0

    and I could play the keyboard and get sound. But now, a while after upgrading to trusty, I tried to do that and didn't get any sound. TiMidity itself still plays files fine, but if I try to play them with aplaymidi, I still just get silence. Oddly, the MIDI files are clearly being read. When I ran (where 130:0 was vmpk's input port)

        aplaymidi -p 130:0 ~/path/to/midi.mid

    vmpk was highlighting notes on the piano as if it were playing the MIDI file. One time I tried this, TiMidity (?) very briefly played a fraction of a second of the first chord of my song before everything went silent and vmpk just highlighted the first voice on the keyboard as usual. Now the weirdest part is that, probably about 40% of the time, once I've played at least one note with either aplaymidi or vmpk, running

        aconnect -x

    produces a sudden burst of a note or chord from my speakers (that is, if I played one note I get a note; if I played multiple sequential notes they turn into a chord), as if the notes were being queued up but not played, and removing the connections somehow liberated them. I have no idea what's going on there. A little while ago I remember having a problem with Audacity playing wav files sped up and also locking up if I tried to pause it, which it stopped doing when I set the audio devices to the actual audio devices rather than pulse. But now when I checked again, it's doing the opposite: it won't play audio at all and/or acts weirdly if I don't set the audio devices to pulse, and either way it will very occasionally randomly do the speeding-up thing regardless. Oddly, in the midst of what looks like a pretty screwed-up sound system, sound in VLC and Firefox has been working fine, and if I play a wav file with

        aplay ~/path/to/sound.wav

    that works fine too. Any idea what I could do to figure out what's wrong with ALSA and/or fix it?

    Read the article

  • Add user in CentOS 5

    - by Ron
    I created a new user on my CentOS web server with useradd and added a password with passwd, but I can't log in with that user via SSH. I keep getting "access denied". I checked to make sure that the password was assigned and that the account is active. /var/log/secure shows the following error:

        Aug 13 03:41:40 server1 su: pam_unix(su:auth): authentication failure; logname= uid=500 euid=0 tty=pts/0 ruser=rwade rhost= user=root

    Please help. Thanks.

    Thanks for the responses so far. I should add that it is a VPS on a remote computer, fresh out of the box. I can log in as the root user quite fine. I can also su to the new user, but I cannot log in as the new user. Here is my sshd_config file:

        # $OpenBSD: sshd_config,v 1.73 2005/12/06 22:38:28 reyk Exp $

        # This is the sshd server system-wide configuration file. See
        # sshd_config(5) for more information.

        # This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin

        # The strategy used for options in the default sshd_config shipped with
        # OpenSSH is to specify options with their default value where
        # possible, but leave them commented. Uncommented options change a
        # default value.

        #Port 22
        #Protocol 2,1
        Protocol 2
        #AddressFamily any
        #ListenAddress 0.0.0.0
        #ListenAddress ::

        # HostKey for protocol version 1
        #HostKey /etc/ssh/ssh_host_key
        # HostKeys for protocol version 2
        #HostKey /etc/ssh/ssh_host_rsa_key
        #HostKey /etc/ssh/ssh_host_dsa_key

        # Lifetime and size of ephemeral version 1 server key
        #KeyRegenerationInterval 1h
        #ServerKeyBits 768

        # Logging
        # obsoletes QuietMode and FascistLogging
        #SyslogFacility AUTH
        SyslogFacility AUTHPRIV
        #LogLevel INFO

        # Authentication:
        #LoginGraceTime 2m
        #PermitRootLogin yes
        #StrictModes yes
        #MaxAuthTries 6

        #RSAAuthentication yes
        #PubkeyAuthentication yes
        #AuthorizedKeysFile .ssh/authorized_keys

        # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
        #RhostsRSAAuthentication no
        # similar for protocol version 2
        #HostbasedAuthentication no
        # Change to yes if you don't trust ~/.ssh/known_hosts for
        # RhostsRSAAuthentication and HostbasedAuthentication
        #IgnoreUserKnownHosts no
        # Don't read the user's ~/.rhosts and ~/.shosts files
        #IgnoreRhosts yes

        # To disable tunneled clear text passwords, change to no here!
        #PasswordAuthentication yes
        #PermitEmptyPasswords no
        PasswordAuthentication yes

        # Change to no to disable s/key passwords
        #ChallengeResponseAuthentication yes
        ChallengeResponseAuthentication no

        # Kerberos options
        #KerberosAuthentication no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        #KerberosGetAFSToken no

        # GSSAPI options
        #GSSAPIAuthentication no
        GSSAPIAuthentication yes
        #GSSAPICleanupCredentials yes
        GSSAPICleanupCredentials yes

        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication mechanism.
        # Depending on your PAM configuration, this may bypass the setting of
        # PasswordAuthentication, PermitEmptyPasswords, and
        # "PermitRootLogin without-password". If you just want the PAM account and
        # session checks to run without PAM authentication, then enable this but set
        # ChallengeResponseAuthentication=no
        #UsePAM no
        UsePAM yes

        # Accept locale-related environment variables
        AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
        AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
        AcceptEnv LC_IDENTIFICATION LC_ALL

        #AllowTcpForwarding yes
        #GatewayPorts no
        #X11Forwarding no
        X11Forwarding yes
        #X11DisplayOffset 10
        #X11UseLocalhost yes
        #PrintMotd yes
        #PrintLastLog yes
        #TCPKeepAlive yes
        #UseLogin no
        #UsePrivilegeSeparation yes
        #PermitUserEnvironment no
        #Compression delayed
        #ClientAliveInterval 0
        #ClientAliveCountMax 3
        #ShowPatchLevel no
        #UseDNS yes
        #PidFile /var/run/sshd.pid
        #MaxStartups 10
        #PermitTunnel no
        #ChrootDirectory none

        # no default banner path
        #Banner /some/path

        # override default of no subsystems
        Subsystem sftp /usr/libexec/openssh/sftp-server

    Read the article

  • Automatically mount a remote folder on boot

    - by Andrew
    I'm trying to mount a Windows folder on my Ubuntu machine at start-up. I've tried following this page here, modifying /etc/fstab and appending

        sshfs#my_user@remote_host:/path/to/directory <local_mount_point> fuse user 0 0

    to it, but it fails: on start-up I get an error saying that the mounting failed, and I can press S to skip or M to recover manually. I also tried following this page here, appending

        /usr/bin/sshfs -o idmap=user my_user@remote_host:/path/to/directory <local_mount_point>

    to the /etc/rc.local file, but this doesn't help either; Ubuntu just boots up normally without mounting. I have Cygwin installed on my Windows machine, and everything works smoothly, such as sshing without passwords and mounting manually. I've also tried running the modified rc.local file with $ /etc/rc.local, and it works perfectly, but I just can't seem to get the folder mounted at start-up. Can someone help me?
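    One detail that often bites here, offered as an assumption about this setup rather than a diagnosis: at boot the mount is performed by root, so it is root's ssh key (not my_user's) that sshfs gets to use, and the network may not be up yet when fstab is processed. A sketch of an fstab line that addresses both (option values are placeholders to adapt):

        sshfs#my_user@remote_host:/path/to/directory <local_mount_point> fuse defaults,_netdev,allow_other,IdentityFile=/home/my_user/.ssh/id_rsa 0 0

    The allow_other option also needs user_allow_other enabled in /etc/fuse.conf.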

    Read the article

  • Location of development solutions on disk - Common or up to the individual

    - by dreza
    In our team meeting today a senior member brought up the proposal that we should have a common location/structure for our development solutions. A couple of his points were:

    1. Making it common means that when talking about projects and e-mailing stuff, everyone is on the same wavelength and knows where to look.
    2. If there is ever a need to hard-code a location path, it will work across all developers' PCs.

    He had a few more points to back up his suggestion, but I unfortunately got distracted during the discussion and so didn't hear all of them. I have no issue with the idea and can see its merits, but I was wondering whether it is common or even recommended that all developers place their code in the same folder structure, or do developers prefer the flexibility of locating solutions wherever they want? We currently use SVN for our version control. In this case his recommendation was to place all code in:

        c:\Work\Development\<Customer>\<project>\Code\<solution>\

    I guess the actual path is irrelevant for this question, but it's added for completeness.

    Read the article

  • Handling FreeBSD package upgrades using pkg_add

    - by larsks
    I'm trying to use FreeBSD's pkg_add command to install and upgrade binary packages in a build-once-install-on-multiple-machines sort of scenario. It works well when installing a new package, but upgrades are baffling me. For example, if I want to upgrade a package that is depended on by another package, I can't just install it:

        # pkg_add /path/to/somepackage-2.0.tbz
        pkg_add: package 'somepackage' or its older version already installed

    At this point, I can delete the older version of the package if I pass -f to the pkg_delete command:

        # pkg_delete -f somepackage-1.0
        pkg_delete: package 'somepackage-1.0' is required by these other packages and may not be deinstalled (but I'll delete it anyway): anotherpackage-1.0

    But, and this is the killer, now the dependency information is gone! I can install the upgrade:

        # pkg_add /path/to/somepackage-2.0.tbz

    And now attempts to delete it will succeed without any errors:

        # pkg_delete somepackage-2.0

    How do I handle this gracefully (whereby "gracefully" means "in a fashion that preserves dependency information without requiring me to rebuild/reinstall an entire dependency chain")? Thanks!
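    For what it's worth, the base pkg_* tools of that era had no real upgrade verb, so people usually reached for a wrapper that records and restores the dependency edges instead of deleting and re-adding by hand. A hedged sketch using portupgrade in packages-only mode (this assumes portupgrade is installed on the target machines and that a package repository or PKG_PATH is set up; it is one possible route, not the only one):

        # upgrade an installed package from binary packages only, never the ports tree
        portupgrade -PP somepackage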

    Read the article

  • OBIEE 11.1.1.6.5 Bundle Patch released Oct 2012

    - by user554629
    October 2012: the OBIEE 11.1.1.6.5 bundle patch has been released. Bundle patches are collections of controlled, well-tested critical bug fixes for a specific product, which may include security content and occasionally minor enhancements. They are cumulative, meaning the latest bundle patch in a particular series includes the contents of the previous bundle patches released. A suite bundle patch is an aggregation of multiple product bundle patches that are part of a product suite. For OBIEE on 11.1.1.6.0, we plan to run a monthly bundle patch cadence.

    The 11.1.1.6.5 bundle patch:
    - is available for download from My Oracle Support
    - is cumulative, so it includes everything from previous updates
    - is available for supported platforms (Windows, Linux, Solaris, AIX, HPUX-IA)

    To find it, navigate to https://support.oracle.com and log in, open the Knowledge Base tab, select a product line [Business Intelligence], select a task [Patching and Maintenance], and click Search. See "OBIEE 11g: Required and Recommended Patches and Patch Sets", ID 1488475.1 (Oct 23, 2012); 11.1.1.6.5 was published 19th October 2012.

    Note: the 11.1.1.6.x versions on top of 11.1.1.6.0 are not upgrades, they are opatch fixes. This is not an upgrade process like from OBIEE 10g to 11g, or from OBIEE 11.1.1.5 to 11.1.1.6. It is much safer than applying any one-off fixes, which are not regression tested. You will be more successful using 11.1.1.6.5.

    Read the article

  • USB sector 0 not found, Kingston USB DT100 G2

    - by java
    Windows constantly asks me to "Format disk". When I go to a command prompt and type

        format H: /fs:ntfs

    or

        format H: /fs:fat32

    the response is "Cannot determine the number of sectors on this volume." If I use DISKPART, "detail disk" shows:

        Kingston DT 100 G2 USB Device
        Disk ID: 00000000
        Type    : USB
        Status  : Online
        Path    : 0
        Target  : 0
        LUN ID  : 0
        Location Path : UNAVAILABLE
        Current Read-only State : No
        Read-only : No
        Boot Disk : No
        Pagefile Disk : No
        Hibernation File Disk : No
        Crashdump Disk : No
        Clustered Disk : No

    and "detail volume" shows:

        Read-only               : No
        Hidden                  : No
        No Default Drive Letter : No
        Shadow Copy             : No
        Offline                 : No
        BitLocker Encrypted     : No
        Installable             : No
        Volume Capacity         : 0 B
        Volume Free Space       : 0 B

    What is the problem?
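    Since the existing volume reports a capacity of 0 B, format has nothing it can work with, so one thing commonly tried is wiping the partition table and recreating it from diskpart. A hedged sketch (this erases everything on the stick; "disk 1" is a placeholder, so double-check the number from "list disk" to avoid wiping a hard drive):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> clean
        DISKPART> create partition primary
        DISKPART> format fs=fat32 quick
        DISKPART> assign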

    Read the article

  • Linking Libraries in iOS?

    - by Bob Dole
    This is probably a totally noob question, but I have missing links in my mind when thinking about linking libraries in iOS. I usually just add a new library that's been cross-compiled and set the build and linker paths without really knowing what I'm doing. I'm hoping someone can help me fill in some gaps. Let's take the OpenCV library for instance. I have this totally working, by the way, because of a really well-written tutorial (http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en), but I just want to know what exactly is going on. What I think is happening is that when I build OpenCV for iOS, I'm creating object code that gets placed in the .a files. This object code is just the implementation files (.m) compiled. One reason you would want to do this is to make it hard to see the source code, and so that you don't have to compile that source code every time. The .h files won't be put in the library (.a). You include the .h files in your source files, and these header files communicate with the object code library (.a) in some way. You also have to include the header files for your library in the Build Path and the library itself in the Linker Path. So, is the way I view linking libraries correct? If not, can someone correct me on this?
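    That mental model is roughly how a static library works, and it's easy to see outside of Xcode. A tiny sketch with plain C on the command line (the file names are made up; the same idea applies to a cross-compiled OpenCV .a):

        clang -c foo.c -o foo.o                       # compile the implementation into object code
        ar rcs libfoo.a foo.o                         # bundle the object files into a static library
        clang main.c -I include -L . -lfoo -o app     # headers come from -I, object code from libfoo.a

    The headers only describe what is in the library; the linker pulls the actual machine code out of the .a at link time.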

    Read the article

  • Win 7 Explorer backup and long paths

    - by user53299
    I use Explorer to do backups because Windows 7's backup program asks me to take backups previously done and put them back in the drive, and I am opposed to that idea since I believe backups should remain in storage. With Explorer backups (Burn / Burn to Disc) I have encountered the "destination path too long" error message, and it shows the name of a folder "Debug" three times. I have hundreds of folders named "Debug", thanks to Visual Studio. At this moment I'm too angry at Microsoft to write a program to determine my 3 longest paths. (Aside: this is all after coincidentally reading two articles about path junctions earlier this evening, which already made me kind of unhappy.) Please, is there an easy way to continue making backups with Explorer?

    Edit: I should add that renaming paths wrecks Visual Studio projects, so I really need to isolate the small number of problem paths or find a cleaner solution.
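    Finding the three longest paths doesn't need a whole program; a short sketch in Python (the root folder is a placeholder, and this assumes Python is installed on the machine):

        import os

        paths = []
        for root, dirs, files in os.walk(r"C:\source"):
            for name in files:
                paths.append(os.path.join(root, name))

        # print the three longest full paths and their lengths
        for p in sorted(paths, key=len, reverse=True)[:3]:
            print(len(p), p)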

    Read the article

  • NGiNX performance degrades over time.

    - by Rylea Stark
    So here's the situation: I run a small cluster with a dedicated box for MySQL and a dedicated PHP-FPM/nginx box. Nginx talks to php-fpm via a socket. As far as I can tell the problem does not lie in php-fpm; it lies somewhere in my configuration. What happens is that the site loads instantly for a few moments after starting, then slowly degrades to load times of greater than 2 seconds, eventually taking 12 seconds to complete a load. PHP is configured to close a child after 175 requests, spawn 20 at start, and allow a max of 60. I'm not really sure where the bottleneck is; most of my code is optimized and works flawlessly, but these issues with nginx will most likely force me to switch back to Apache, and I really don't want to do that. My nginx.conf is below.

        user www-data;
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 512;
            multi_accept on;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            resolver_timeout 5s;
            satisfy all;

            ## Size Limits
            limit_zone brainbug $binary_remote_addr 5m;
            client_body_buffer_size 8k;
            client_header_buffer_size 75M;
            client_max_body_size 1k;
            large_client_header_buffers 2 1k;

            ## Timeouts
            client_body_timeout 60;
            client_header_timeout 60;
            keepalive_timeout 60;
            send_timeout 60;

            ## General Options
            ignore_invalid_headers on;
            recursive_error_pages on;
            sendfile on;
            server_name_in_redirect off;
            server_tokens off;

            ## TCP options
            tcp_nodelay on;
            #tcp_nopush on;
            output_buffers 128 512k;

            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 7;
            gzip_proxied any;
            gzip_min_length 0;
            gzip_buffers 32 32k;
            gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg image/png image/gif;
            ## Disable GZIP for MSIE 1-6
            gzip_disable "MSIE [1-6].(?!.*SV1)";
            ## Set a vary header so downstream proxies don't send cached gzipped content to IE6
            gzip_vary on;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Read the article
