Search Results

Search found 14142 results on 566 pages for 'missing symbols'.

Page 387/566

  • ubuntu: Installed php-mcrypt but it doesn't show up in phpinfo()

    - by jules
    A web app I'm trying to install on my Ubuntu 10.04 LTS server requires mcrypt, and is generating this error: Fatal error: Call to undefined function mcrypt_module_open(). I know this is the same question as this one: Installed php-mcrypt but it doesn't show up in phpinfo(), but I tried several things, none of which worked, and I have additional questions. I would comment on the original thread but don't have enough reputation to do so; forgive me for the duplicate question. My versions of php and mcrypt are (both installed via apt-get):
      php: 5.3.2-1ubuntu4.10
      mcrypt: 5.3.2-0ubuntu
    Doing a php -m shows that the mcrypt module is installed. I installed mcrypt and php5-mcrypt via apt-get. Also, I'm using nginx as my web server. I have tried reinstalling mcrypt and restarting nginx, but still can't get mcrypt to show up in phpinfo(), and calls to mcrypt are still broken. Here is some more info:
      $ php -i | grep "mcrypt"
      /etc/php5/cli/conf.d/mcrypt.ini,
      mcrypt
      mcrypt support => enabled
      mcrypt.algorithms_dir => no value => no value
      mcrypt.modes_dir => no value => no value
    I also checked that mcrypt is on in /etc/php5/cli/conf.d/mcrypt.ini and /etc/php5/cgi/conf.d/mcrypt.ini. Lastly, I'm using FastCGI with nginx. I googled around and saw suggestions to restart php5-fpm. I couldn't find php5-fpm in apt-get, and I'm not sure if I still need php5-fpm since I already have FastCGI. Is there anything else I'm missing?
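    For reference, a diagnostic sketch (not a confirmed fix): php -m only reports what the CLI binary loads, while the long-running php-cgi workers behind nginx read their configuration once at startup and keep serving without mcrypt until they are restarted. The init script name below is an assumption.
      # check the web-facing SAPI, not the CLI (the binary may be php-cgi or php5-cgi)
      php-cgi -m | grep -i mcrypt
      php-cgi -i | grep -i 'loaded configuration\|mcrypt'
      # then restart whatever spawns the FastCGI workers (no php5-fpm here, so the
      # php-cgi pool itself; init script name assumed), and reload nginx
      sudo /etc/init.d/php-fastcgi restart
      sudo /etc/init.d/nginx reload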

    Read the article

  • puppet master --compile logs errors to stdout

    - by danny
    I see a bug about this that was accepted and then closed a year ago: http://projects.puppetlabs.com/issues/3670 but I'm using puppet 2.7.14 and am getting the same issue. I'm trying to use "puppet solo" (i.e. just running puppet apply on each server to be configured) as I only have 2 or 3 servers in this project and adding another server as a puppetmaster would be complete overkill. Unless I'm mistaken, the best way to apply a node manually to a server is to do:
      puppet master --compile=mynode > catalog.json
      puppet apply --catalog catalog.json
    But the puppet master command outputs a couple of warnings and notices to stdout, mixed in with the desired json content. And it uses colored output, so I can't just pipe it through egrep -v '^warning:'
    EDIT: I guess it's not too big of a deal to use grep - since puppet 2.7 pretty-prints the actual content and the warnings don't ever start with spaces, piping the output through egrep '^( |{|})' works.
    So my questions are basically: Is there a better way than this to apply a puppet node without using a puppetmaster? I can't really find any good references online to using puppet without a puppetmaster, even though that seems like a perfectly reasonable thing to do for a small project. Is there a setting or flag that I'm missing that will get puppet master to stop being an asshole and send its errors to stderr instead of stdout? Or do I really have to turn off color logging, then grep to exclude warning: and notice: lines?
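    For reference, a sketch of the workaround described above plus a variant that may avoid grep entirely; it assumes puppet 2.7 accepts --color and --logdest as global options on the master subcommand:
      # the grep workaround from the question: keep only the pretty-printed catalog lines
      puppet master --compile=mynode --color=false | egrep '^( |{|})' > catalog.json
      puppet apply --catalog catalog.json
      # possible alternative: send log messages somewhere other than stdout
      puppet master --compile=mynode --color=false --logdest=/tmp/compile.log > catalog.json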

    Read the article

  • Skype can not find libssl.so.10 on 64-bit Fedora Linux

    - by itpastorn
    Skype will not start:
      $ skype &
      skype: error while loading shared libraries: libssl.so.10: wrong ELF class: ELFCLASS64
      $ ldd /usr/bin/skype | grep ssl
      libssl.so.10 => not found
    OK, missing libssl. Where is it?
      $ ls -l /usr/lib/libssl.so*
      lrwxrwxrwx. 1 root root ... /usr/lib/libsssl.so -> libcrypto.so.1.0.1e
      lrwxrwxrwx. 1 root root ... /usr/lib/libssl.so.10 -> libssl.so.6
      -rwxr-xr-x. 1 root root ... /usr/lib/libssl.so.1.0.1e
      lrwxrwxrwx. 1 root root ... /usr/lib/libssl.so.6 -> /usr/lib64/libssl.so.10
    OK, it points to libssl.so.6 which in turn points to the 64-bit version.
      $ ls -l /usr/lib64/libssl.so*
      lrwxrwxrwx. 1 root root ... /usr/lib64/libssl.so.10 -> libssl.so.1.0.1e
      -rwxr-xr-x. 1 root root ... /usr/lib64/libssl.so.1.0.1e
      lrwxrwxrwx. 1 root root ... /usr/lib64/libssl.so.6 -> /usr/lib64/libssl.so.10
    So, why is my link chain not picked up by Skype? (An identical problem exists with libcrypto, BTW.)
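    A hedged note: "wrong ELF class: ELFCLASS64" means the 32-bit Skype binary is being handed a 64-bit library; the symlinks in /usr/lib chain back into /usr/lib64, so a real 32-bit libssl appears to be absent. A commonly suggested sketch (package name assumed for this Fedora release):
      # install the 32-bit OpenSSL runtime alongside the 64-bit one
      sudo yum install openssl.i686
      # re-check; the 32-bit loader should now resolve it without hand-made symlinks
      ldd /usr/bin/skype | grep ssl
      # if /usr/lib/libssl.so.10 is still a stale link into /usr/lib64, remove it so the
      # library installed by the 32-bit package is found instead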

    Read the article

  • Outside VPN traffic not able to ping site-to-site VPN remote site

    - by Siriss
    We have two ASA 5510s running 8.4 in a site-to-site VPN setup. All internal traffic is working smoothly.
      Site/Subnet A: 192.100.0.0 - local
      Site/Subnet B: 192.200.0.0 - remote
      VPN Users: 192.100.40.0 - assigned by ASA
    When you VPN into the network, all traffic hits Site A, and everything on subnet A is accessible. Site B, however, is completely inaccessible for VPN users. All machines on subnet B, the firewall itself, etc. are not reachable by ping or otherwise. I know I am missing a NAT rule, and in 8.2 it was easy as pie to set up using ASDM, but now I can't get it for the life of me, as 8.4 apparently made a lot of changes to NAT rules. I am not too comfortable in the ASA command line, but if there is a command I need to add, or if you could direct me to where I can add this in 8.4 ASDM, I would really appreciate it. I have tried NAT Exempt, Static NAT, Static NAT Policies, etc. - I think I tried all the options. I also might have my interfaces confused with the new look and feel of ASDM. Thank you much in advance and I hope I have been thorough enough.

    Read the article

  • ADSL with RFC 2684 Bridging

    - by Axel Isouard
    My new ADSL line is now enabled, and I can finally use my Netgear DM111Pv2 to connect to the Internet. My ISP had a surprise for me: I don't need a login and a password to connect to the Internet, so I must use the RFC 2684 bridging mode. It works fine on the ADSL modem's side, but I've spent one night trying to figure out how to connect to the Internet through this modem. I only have a Fonera 2.0n and a computer running Gentoo Linux. I've been trying to use the br2684ctl utility with brctl on my Gentoo box. First I configured my kernel this way:
      CONFIG_PPP=y
      CONFIG_PPP_BSDCOMP=y
      CONFIG_PPP_DEFLATE=y
      # CONFIG_PPP_FILTER is not set
      CONFIG_PPP_MPPE=y
      # CONFIG_PPP_MULTILINK is not set
      CONFIG_PPPOATM=y
      CONFIG_PPPOE=y
      CONFIG_PPP_ASYNC=y
      CONFIG_PPP_SYNC_TTY=y
      [...]
      CONFIG_ATM=y
      CONFIG_ATM_CLIP=y
      CONFIG_ATM_CLIP_NO_ICMP=y
      CONFIG_ATM_LANE=y
      CONFIG_ATM_MPOA=y
      CONFIG_ATM_BR2684=y
      # CONFIG_ATM_BR2684_IPFILTER is not set
    And I still get these messages:
      cirus nais # br2684ctl -b -c 0 -e 0 -a 8.35
      br2684ctl[8041]: Interface "nas0" created sucessfully
      br2684ctl[8041]: Communicating over ATM 0.8.35, encapsulation: LLC
      br2684ctl[8041]: Fatal: failed to connect on socket; No such device
    The brctl utility keeps telling me "Invalid argument" each time I try to add the nas0 interface to my bridge; I'm honestly hoping I'm just doing something wrong. I've been following this README carefully, along with this tutorial on setting up a PPPoE connection with Gentoo, but the PPPoE interface just tries to start, nothing special related to PPP happens, and I can't see the interface when I do ifconfig. So, I'm asking if there's something huge I've been missing since the beginning! Maybe I should wait and buy a new router that fully supports the RFC 2684 bridging mode, but I'm more interested in setting this mode up on my Fonera 2.0n and even my Raspberry Pi!
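    One hedged observation: br2684ctl only applies when the ATM link terminates on a local ATM card, which would explain "No such device" on a machine that has none. If the external DM111Pv2 is already doing the RFC 2684 bridging and the ISP needs no PPPoE login, the host side may only need a plain DHCP (or static) configuration on the NIC facing the modem. A minimal sketch under that assumption (eth0 assumed to be that NIC):
      ifconfig eth0 up
      dhclient -v eth0        # or: dhcpcd eth0 on a default Gentoo install
      ip route show           # check that a default route via the ISP gateway appeared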

    Read the article

  • Creating mdraid device on top of other existing mdraid devices

    - by Dmitriusan
    I'm considering creating something like a "hierarchical raid" and wondering whether it is possible using pure mdraid. Moreover, I'm going to boot from this device. I'm using Ubuntu Server 12.04 LTS with the Grub2 bootloader. The motivation behind doing this is: I have 4 x 1tb 7200rpm disks. Two are newer and faster (up to 200mb/sec) and the other two are slower (up to 140mb/sec). I want to create a RAID-0 device from them. When creating such a RAID-0 directly from the 4 hard disks, I get a summary speed of up to ~480mb/sec. That is roughly 4*120mb/sec, so RAID-0 works at the speed of the slowest device. I have an idea to create a separate RAID-0 md0 device from 500gb partitions of the slower hard disks. Theoretically, this md0 device will have a speed of 2*140=240~280mb/sec. After that, I'm going to add this md0 device to a RAID-0 with the faster disks, finishing with up to 3*200=600mb/sec. The stripe-width for this raid will be 2x bigger than for the underlying raid with the slow disks. My questions are:
      is it possible, or am I missing something?
      will that work as expected?
      can I boot from such a consolidated raid device?
      any better ideas? any pitfalls?
    I don't want to use fakeraid for consolidating the slow disks, for multiple reasons (portability, ability to customize parameters and so on). PS Speed is needed for a home virtualization server and just for experience/fun. Reliability is provided via regular automatic backups to a separate device. PPS I also considered using different stripe-widths for hard disks with different speeds in a single raid, but mdraid does not seem to support that.
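    For what it's worth, mdadm does accept an existing md device as a member, so the layered layout can at least be expressed. A minimal sketch with device names assumed (sdc/sdd as the slower pair, sda/sdb as the faster pair); it says nothing about whether Grub2 will boot from it:
      # inner stripe over the two slower disks' 500gb partitions
      mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
      # outer stripe over the faster disks plus the inner array
      mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/md0 /dev/sda1 /dev/sdb1
      # record both arrays so the initramfs can assemble them in order at boot
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf
      update-initramfs -u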

    Read the article

  • CheckPoint/Amazon VPC VPN tunnel working inconsistently

    - by Lee
    First time poster, so please be gentle and correct me if there's Server Fault etiquette I'm missing. We have two CheckPoint edge devices at sites A & B, independently managed, connecting to two Amazon private clouds. In both cases, the two Amazon VPCs are in the same community on the CheckPoint device. A VPN tunnel exists between the two CheckPoint devices as well. Between Sites A & B and the Amazon VPC in Northern Virginia, we are unable to keep more than one tunnel up. Both will come up, but tunnel 2 will drop an hour after initiation and will not come back up while tunnel 1 is up. We believe the 1-hour period is due to IPsec phase 2 renegotiation, but can't be sure. On our side, we see the tunnel 2 remote endpoint as not responding to phase 2 negotiation. Between Sites A & B and the Amazon VPC in Oregon, we have no issues. Both tunnels are up and fail over properly. The CheckPoint gateways are using domain-based VPNs. According to CheckPoint's advice to Amazon, this won't work. Yet, in Oregon, it does. We've pursued this with Amazon and, despite the fact that it's working in Oregon, they've refused to troubleshoot with us further. Can anyone suggest anything we can do to try to get this stabilized? Going to route-based VPNs is not an option for us.

    Read the article

  • chrooting php-fpm with nginx

    - by dragonmantank
    I'm setting up a new server with PHP 5.3.9 and nginx, so I compiled PHP with the php-fpm SAPI options. By itself it works great using the following server entry in nginx:
      server {
          listen 80;
          server_name domain.com www.domain.com;
          root /var/www/clients/domain.com/www/public;
          index index.php;
          log_format gzip '$remote_addr - $remote_user [$time_local] "$request" $status $bytes_sent "$http_referer" "$http_user_agent" "$gzip_ratio"';
          access_log /var/www/clients/domain.com/logs/www-access.log;
          error_log /var/www/clients/domain.com/logs/www-error.log error;
          location ~\.php$ {
              fastcgi_pass 127.0.0.1:9001;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /var/www/clients/domain.com/www/public$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_script_name;
              include /etc/nginx/fastcgi_params;
          }
      }
    It serves my PHP files just fine. For added security I wanted to chroot my FPM instance, so I added the following lines to my conf file for this FPM instance:
      # FPM config
      chroot = /var/www/clients/domain.com
    and changed the nginx config:
      # nginx config for chroot
      location ~\.php$ {
          fastcgi_pass 127.0.0.1:9001;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME www/public$fastcgi_script_name;
          fastcgi_param PATH_INFO $fastcgi_script_name;
          include /etc/nginx/fastcgi_params;
      }
    With those changes, nginx gives me a File not found message for any PHP scripts. Looking in the error log I can see that it's prepending the root path to my DOCUMENT_ROOT variable that's passed to fastcgi, so I tried to override it in the location block like this:
      fastcgi_param DOCUMENT_ROOT /www/public/;
      fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
    but I still get the same error, and the debug log shows the full, unchrooted path being sent to PHP-FPM. What am I missing to get this to work?
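    A diagnostic sketch that may help narrow this down: talk to the chrooted pool directly and see which SCRIPT_FILENAME it accepts, bypassing nginx entirely. cgi-fcgi comes from the FastCGI developer tools; the pool address and chroot path are taken from the question, everything else is assumed.
      # ask the chrooted pool for a script using a path relative to the chroot
      SCRIPT_FILENAME=/www/public/index.php \
      SCRIPT_NAME=/index.php \
      REQUEST_METHOD=GET \
      cgi-fcgi -bind -connect 127.0.0.1:9001
      # if this returns the page while nginx gets "File not found", the problem is in the
      # values nginx sends; note that the stock fastcgi_params also sets DOCUMENT_ROOT from
      # $document_root, so both the override and the included value end up being passed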

    Read the article

  • Unusual Caching Issue with IE 7/8 and IIS 7

    - by Daniel A. White
    We recently moved a site into production running Server 2008 x64 and IIS 7. The ASP.NET pages apparently load just fine, but when it comes to IE 7 and 8, a weird caching issue has cropped up with the CSS and JavaScript files on the page. On a very sporadic schedule, IE does not get all the files necessary to compose the page (i.e. CSS and JS files). When I manually go to the missing files from the address bar, they come back from local cache as empty. I F5 these source files and magically they come down properly. I refresh the site after loading a few files and the cache seems to hold. This problem has only been reproduced (again, sporadically) on IE 7 and 8 running XP. Chrome and Firefox appear to be immune. We have set IIS to use server-side kernel caching for CSS, JS and images. We have also set content for the App_Themes and Scripts directories to expire immediately. One initial thought was that it was a SWF loading an FLV on page load. These fixes have not remedied the problem. We had no problems on our staging server, which is using Server 2003 and IIS 6. Any ideas would be greatly appreciated. P.S. It sounds similar to this problem: http://serverfault.com/questions/115099/iis-content-length-0-for-css-javascript-and-images - but we do have the Static Content module installed.

    Read the article

  • syslog-ng and nginx logs to mysql

    - by Katafalkas
    So a couple of days ago I asked how to log php and nginx logs to a centralized MySQL database, and m0ntassar gave a perfect answer :) cheers! The problem I am facing now is that I can not seem to get it working.
    syslog-ng version:
      # syslog-ng --version
      syslog-ng 3.2.5
    This is my nginx log format:
      log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    syslog-ng source:
      source nginx {
          file(
              "/var/log/nginx/tg-test-3.access.log"
              follow_freq(1)
              flags(no-parse)
          );
      };
    syslog-ng destination:
      destination d_sql {
          sql(type(mysql)
              host("127.0.0.1") username("syslog") password("superpasswd")
              database("syslog")
              table("nginx")
              columns("remote_addr","remote_user","time_local","request","status","body_bytes_sent","http_referer","http_user_agent","http_x_forwarded_for")
              values("$REMOTE_ADDR", "$REMOTE_USER", "$TIME_LOCAL", "$REQUEST", "$STATUS", "$BODY_BYTES_SENT", "$HTTP_REFERER", "$HTTP_USER_AGENT", "$HTTP_X_FORWARDED_FOR"));
      };
    MySQL table for testing purposes:
      CREATE TABLE `nginx` (
        `remote_addr` varchar(100) DEFAULT NULL,
        `remote_user` varchar(100) DEFAULT NULL,
        `time` varchar(100) DEFAULT NULL,
        `request` varchar(100) DEFAULT NULL,
        `status` varchar(100) DEFAULT NULL,
        `body_bytes_sent` varchar(100) DEFAULT NULL,
        `http_referer` varchar(100) DEFAULT NULL,
        `http_user_agent` varchar(100) DEFAULT NULL,
        `http_x_forwarded_for` varchar(100) DEFAULT NULL,
        `time_local` text,
        `datetime` text,
        `host` text,
        `program` text,
        `pid` text,
        `message` text
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    Now the first thing that goes wrong is when I restart syslog-ng:
      # /etc/init.d/syslog-ng restart
      Stopping syslog-ng: [ OK ]
      Starting syslog-ng: WARNING: You are using the default values for columns(), indexes() or values(), please specify these explicitly as the default will be dropped in the future; [ OK ]
    I have tried creating a file destination and it all works fine, and then I have tried replacing my destination with:
      destination d_sql {
          sql(type(mysql)
              host("127.0.0.1") username("syslog") password("kosmodromas")
              database("syslog")
              table("nginx")
              columns("datetime", "host", "program", "pid", "message")
              values("$R_DATE", "$HOST", "$PROGRAM", "$PID", "$MSGONLY")
              indexes("datetime", "host", "program", "pid", "message"));
      };
    which did work and was writing stuff to mysql. The problem is that I want to write it in exactly the same format as the nginx log format. I assume that I am missing something really simple, or I need to do some parsing between source and destination. Any help will be much appreciated :)

    Read the article

  • Windows 7 - system error 5 problem

    - by Ian
    My wife has just had a new computer for Christmas (with an upgrade from Vista to Windows 7), and has joined the home network. We are using a mix of Windows XP and Ubuntu boxes linked via a switch. We are all in the same workgroup (no domain). Internet access, DHCP, and DNS are provided by an SME server that thinks it is a domain controller (although we are not using a domain). I need to run a script to back up my wife's machine (venus). In the past the script creates a share on a machine with lots of space (leda), and then executes the line:
      PSEXEC \\venus -u admin -p adminpassword -c -f d:\Progs\snapshot.exe C: \\leda\Venus\C-drive.SNA
    With the wife's old XP machine, this would run the Sysinternals utility, copy snapshot.exe to her machine and run it, which would then back up her C: drive to the share on leda. I cannot get this to work with Windows 7, nor can I link through to the C$ share on her machine. This gives me a permissions error (system error 5). The admin account is a full admin account. And yes - I do know the password. The ordinary shares on her machine work fine! I guess I'm missing something that Microsoft has built into Windows 7 - but what? The machine is running Windows 7 Business, with Windows Firewall, AVG antivirus, and all the crap-ware you get with a new PC removed. Thanks

    Read the article

  • Fedora, ssh and sudo

    - by Ricky Robinson
    I have to run a script remotely on several Fedora machines through ssh. Since the script requires root privileges, I do:
      $ ssh me@remost_host "sudo touch test_sudo"   # just a simple example
      sudo: no tty present and no askpass program specified
    The remote machines are configured in such a way that the password for sudo is never asked for. For the above error, the most common fix is to allocate a pseudo-terminal with the -t option in ssh:
      $ ssh -t me@remost_host "sudo touch test_sudo"
      sudo: no tty present and no askpass program specified
    Let's try to force this allocation with -t -t:
      $ ssh -t -t me@remost_host "sudo touch test_sudo"
      sudo: no tty present and no askpass program specified
    Nope, it doesn't work. In /etc/sudoers of course I have this line:
      #Defaults requiretty
    ... but I can't manually change it on tens of remote machines. Am I missing something here? Is there an easy fix?
    EDIT: Here is the sudoers file of a host where ssh me@host "sudo stat ." works. Here is the sudoers file of a host where it doesn't work.
    EDIT 2: Running tty on a host where it works:
      $ ssh me@host_ok tty
      not a tty
      $ ssh -t me@host_ok tty
      /dev/pts/12
      Connection to host_ok closed.
      $ ssh -t -t me@host_ok tty
      /dev/pts/12
      Connection to host_ok closed.
    Now on a host where it doesn't work:
      $ ssh me@host_ko tty
      not a tty
      $ ssh -t me@host_ko tty
      not a tty
      Connection to host_ko closed.
      $ ssh -t -t me@host_ko tty
      not a tty
      Connection to host_ko closed.
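    One hedged observation: since EDIT 2 shows that even a forced ssh -t -t never yields a tty on the broken host, the difference may be on the ssh side rather than in sudoers. A speculative place to look is a pty restriction attached to the key or account; a small check sketch:
      # a "no-pty" or "command=" prefix on the authorized key would explain the missing tty
      ssh me@host_ko "grep -n 'no-pty\|command=' ~/.ssh/authorized_keys"
      # also worth comparing the account's login shell on a good and a bad host
      ssh me@host_ko "getent passwd me"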

    Read the article

  • How do I troubleshoot a segfault in Ubuntu that occurs when typing a bogus command?

    - by Alan
    We've got a production server running Ubuntu 11.10. We're encountering segfaults that appear under various conditions. The simplest reproducible case is when we log in to an ssh session as our administrative user and enter a bogus command. You'd expect the standard "command not found" error message. Instead, we get a segfault in python. The user's default shell is /bin/bash. For example:
      $ asdf
      Segmentation fault
    Info from /var/log/syslog:
      Jul 6 15:39:20 PROD001 kernel: [2155960.605695] python[7873]: segfault at 0 ip (null) sp 00007fffd030b808 error 14 in python2.7[400000+233000]
    Some details about the server:
      $ uname -a
      Linux PROD001 3.0.0-16-server #29-Ubuntu SMP Tue Feb 14 13:08:12 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
      $ cat /etc/issue
      Ubuntu 11.10 \n \l
    Before we ask the IT department to reinstall the O.S., I'd like to understand what got us here. The system and/or this particular user's environment is suspect. Many people have touched this server over the past year, so I'm wondering if it is missing libraries, incorrectly installed packages, etc. I'm hoping that if we can understand what's going wrong in this case, it will help explain why we're getting segfaults in a couple of other scenarios. Any tips on troubleshooting this segfault will be appreciated!
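    For context: on Ubuntu, the "command not found" message in an interactive bash session comes from the command-not-found helper, a Python program hooked in via bash's command_not_found_handle function, which is consistent with python2.7 segfaulting on a bogus command. A hedged diagnostic sketch:
      # run the handler directly; if it also segfaults, bash itself is off the hook
      /usr/lib/command-not-found asdf
      # see whether the interpreter is healthy at all for this user
      python2.7 -c 'print "ok"'
      # if the interpreter or the helper looks damaged, reinstalling is cheap
      sudo apt-get install --reinstall command-not-found python2.7-minimal
      # quick workaround for the session: disable the hook
      unset -f command_not_found_handle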

    Read the article

  • Reinserted a RAID disk. Defined as foreign. Is import or clear the correct choice?

    - by Petrus
    I have re-inserted a RAID disk on a DELL server with Windows Server 2008. The drive-status indicator was alternating between a green and amber light, and the monitor gave the following message:
      There are offline or missing virtual drives with preserved cache. Please check the cables and ensure that all drives are present. Press any key to enter the configuration utility.
    I pressed a key and the PERC 6/I Integrated BIOS Configuration Utility showed that the RAID status for that disk was Offline. After reinsertion of the disk the monitor is giving the following message:
      Foreign configuration(s) found on adapter. Press any key to continue or 'C' to load the configuration utility, or press 'F' to import foreign configuration(s) and continue.
    After checking around on the net I am uncertain whether I should choose import or clear. I cannot find out whether an import means importing information from the array/system to the now-foreign disk, or the other way around, i.e. importing information from the foreign disk to the array/system that was actually working fine. Also: is clear a necessary step ahead of a rebuild of that disk, or does clear mean clearing the system to somehow make it ready to import the information from the foreign disk to the array/system, which is not what I want? I imagine that making the wrong choice here might be fatal. Please help clear this up by telling me what to choose and why.

    Read the article

  • nameserver spoiling avahi multicast name resolution of .local domain

    - by Doug Coburn
    After trying to ping a machine on my local network, I noticed that I was trying to hit the address 66.152.109.24. This is an external public address. Resolution should have occurred via avahi mDNS. I ran dig to see how the name resolution worked, and my Qwest/CenturyLink name server was returning results for my .local domain queries! I tried a random name and got the same IP address result.
      $ dig jakdafj.local
      ; <<>> DiG 9.8.1-P1-RedHat-9.8.1-3.P1.fc15 <<>> jakdafj.local
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58410
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
      ;; QUESTION SECTION:
      ;jakdafj.local.               IN      A
      ;; ANSWER SECTION:
      jakdafj.local.         10     IN      A       66.152.109.24
      jakdafj.local.         10     IN      A       204.232.231.46
      ;; Query time: 104 msec
      ;; SERVER: 205.171.3.25#53(205.171.3.25)
      ;; WHEN: Sat Mar 24 20:40:17 2012
      ;; MSG SIZE  rcvd: 63
    Am I missing something, or is my DNS name server at 205.171.3.25 corrupted?
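    A hedged note: dig queries the unicast DNS server directly and never touches mDNS, so the answer above mostly shows that the ISP resolver wildcards non-existent names; what matters for ping and other applications is the lookup order in nsswitch.conf. A quick check sketch (somehost.local is a placeholder):
      # mdns should come before dns so .local never reaches the ISP resolver
      grep '^hosts:' /etc/nsswitch.conf
      # expected something like: hosts: files mdns4_minimal [NOTFOUND=return] dns
      # resolve via avahi directly to confirm mDNS itself works
      avahi-resolve -n somehost.local
      getent hosts somehost.local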

    Read the article

  • DFS replication and the SYSTEM user (NTFS permissions)

    - by HopelessN00b
    Question for which I'm having trouble finding an answer on Google or Technet... Does granting the SYSTEM user permissions to DFS-shared files and folders have any effect on DFS replication? (And while we're at it, is there any good reason not to let SYSTEM have permissions to DFS-shared files?) It comes up because I have a collection of DFS namespaces and folders that I'm not able to make someone else's problem, and while troubleshooting a problem where one DFS replica just wasn't replicating with another for no discernible reason, I observed that the SYSTEM didn't have any permissions granted to any of the files or folders in the folder in question. So I set SYSTEM to have full control and propagated it down, and our DFS health diagnostic reports went from showing a backlog of ~80 files to a backlog of ~100,000... and things started replicating, including a number of files that had been missing on one server or the other (so more than just the permissions changes started replicating). Naturally, this made me curious as to whether or not DFS needs the SYSTEM account to have permissions to do its work, or if perhaps it was just any change to folder tree in question that prompted DFS to jump into action. If it matters, our DFS namespaces were set up under 2000/2003, and I have just recently finished upgrading all the servers to 2008 R2 or 2012 (with UAC enabled, blech), but have not yet gotten around to raising the DFS namespace functional levels to Server 2008. (And bonus points if anyone has an official Microsoft article on NTFS file permissions and the SYSTEM account as it pertains to DFS or network files.)

    Read the article

  • Weird .#filename files on remote ssh-connected systems after mcedit

    - by etranger
    I'm using MacFusion sshfs in combination with Midnight Commander, and when I edit remote text files with mcedit, weird symlinks are created on the remote system.
      $ ls -l .*
      lrwxr-xr-x 1 user group 34 Jun 27 01:54 .#filename.txt -> [email protected]
    where etranger is my local login name, and mbp is the hostname of my notebook running MacOS. The symlinks can be removed by running a remote rm command, but cannot be deleted on the mac-fuse mounted volume, and thus they pollute the filesystem. I cannot figure out which part of the software is responsible for this, or how I could fix it; any help is appreciated.
    EDIT: This appears to be mcedit behavior as documented here: https://dev.openwrt.org/ticket/8245 Apparently, sshfs fails to remove the symlink to the lock file for some reason (".#" in the filename, perhaps), and it pollutes the filesystem. A quick workaround is possible, using another bug of Midnight Commander: editing (F4) the broken symlink effectively converts it to the missing lock file it was supposed to point to, and removes the symlink itself. The newly created file may then be deleted normally.
    EDIT 2: Unchecking "Follow symlink" in MacFusion apparently allows sshfs to remove dead symlinks, so the problem disappears completely.
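    For anyone left with a batch of these dead lock links, a small cleanup sketch to run on the remote side (the path is a placeholder); mcedit's lock links all start with ".#":
      # list, then delete, ".#*" symlinks under the affected directory
      find /path/to/remote/files -name '.#*' -type l -print
      find /path/to/remote/files -name '.#*' -type l -delete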

    Read the article

  • Configure J2EE Agent with OpenAM behind Reverse Proxy

    - by Troy
    I have a reverse proxy with two SSL-enabled NamedVirtualHosts on different ports. The containers on both internal hosts are GF 2.1.1. The proxy configuration is as follows (Proxy URL -> Internal URL):
      https://apps.mydomain.com -> http://apps.internal.com
      https://secure.otherdomain.com:8080/ -> http://secure.internal.com
    I initially tried configuring the J2EE agent in OpenAM and the web app container to use the internal URLs (I appended /openam and /agentapp respectively). However, I received the following error when trying to access a secured application such as https://apps.mydomain.com/webapp:
      java.lang.RuntimeException: Failed to load configuration: ApplicationSSOTokenProvider.getApplicationSSOToken(): Unable to get Application SSO Token
    A second attempt gives the following error:
      java.lang.NoClassDefFoundError: Could not initialize class com.sun.identity.agents.filter.AmFilterManager
    Along with these in the agent debug.out:
      ERROR: Failed to obtain auth service url from server: null://null:null
      ...
      SiteMonitor: Site URL http://secure.internal.com/openam/namingservice is not available.
    If I specify the server and agent urls using the proxy urls, then the agent appears to be working and I am redirected to the OpenAM login page. However, the goto in the URL is http://apps.mydomain.com/webapp instead of https://apps.mydomain.com/webapp (missing https). So after authentication, the redirect fails. Now I could possibly get by with mod_rewrite, but it feels hackish and I really want to know what's going on. Any ideas?

    Read the article

  • Read Only Domain Controllers and DNS zone updates

    - by Mike M
    I have a Windows 2003 domain and just added a new DC that runs 2008 R2. I updated the schema accordingly for both forest and domain levels. I also made sure to run /rodcprep at the time I did this. I have a branch office with a 2008 R2 file/print server that is a read-only domain controller (DC). The one problem I have been having is with AD-integrated DNS records updates. In the data center, we had to make an IP address change on a particular server. All our other sites' DCs (2003) updated the record fine. The 2008 R2 DC in the data center also updates its record fine. However, the RODC in the branch office does not. So if I nslookup the target server on a 2003 DC, the IP address is correct. Same with the 2008 R2 DC in the data center. But an nslookup on the branch office RODC still pulls in the old IP address. Moreover, any new records we've created (e.g., just added a new terminal server) do not get updated on the branch RODC either. Is there something simple I'm missing? How do I get the RODC to sync its AD-integrated DNS records with the rest of my world? Thank you in advance for your responses. Mike

    Read the article

  • Can I edit the snapshots of websites on the most visited sites page in Chrome?

    - by arik
    I saw the previous chain of discussion, but did not find a way to continue the thread - I have found the "Preference" (I have Windows 7), but in that, I did not find clearly what to modify. I did find a section called 'URLs pinned" or something like this, but it did NOT match fully the ones I have. I have activated the 'profile sync' for Chrome - don't know if it has any effect. Can you manually edit the icons of the most visited sites, for the new tab page in Chrome? When you open a new tab in Chrome, I get the 'new page' tab, where I selected the 'most popular', so I have 8 icons with the 8 most popular sites I visited; I can also pin any one of those I see on screen, such that they remain 'permanent' there. We are missing one which would point to, say, "hotmail". So, I was looking for a way to add 'hotmail' to be one of the 8. (and, by the way, we ticked the 'X' on one of those 8, and now it remains grey / shows nothing in it). So, my double-question: How can I add a URL of my choice into one of those 8 spaces? How can I restore usage of the last one?

    Read the article

  • time sync with ntpd

    - by guthrie
    I run Debian on several systems, and their times do not seem to stay in sync. I can run ntpdate manually, but I thought that I should have an ntpd running that would automate that. I did check with apt and apt-cache but don't find any ntpd (or the associated ntpq), nor any such names on my system (locate...), but ntp-doc does still describe them. Looking around I see that there is an ntpdate-debian command, and it uses /etc/default/ntpdate for servers (instead of the standard /etc/ntp.conf), but even though that file is there and has "yes" indicated to use ntp.conf, it fails with "no servers can be used", although ntpdate works fine. Is this just a layer over ntpdate, and is there any reason to use it instead? So, why are they missing, do I need them, and how do I automate time updates? Relatedly, two of my machines are virtualized on a MSoft VM; how is it that their clocks drift, and both to different values? (The underlying Windows machine clock seems stable.) I see a few old notes about time & ntp problems on VMware, but didn't find anything either current or relating to MSoft VMs. Anything I did see says just to use ntpd, but as above, ...?!
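    For reference, a short sketch: on Debian the daemon is not packaged under the name "ntpd"; the package is simply called ntp and it ships both ntpd and ntpq, which would explain the empty searches.
      # the daemon lives in the "ntp" package, not an "ntpd" one
      apt-cache show ntp
      sudo apt-get install ntp
      # after a minute or two, confirm it is syncing against its configured peers
      ntpq -p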

    Read the article

  • PHP extension causes symbol lookup error

    - by Christian
    Dear all, I installed - or better, tried to install - the NMCryptGate extension for PHP on my Debian 5.0.8 server. I did this by compiling the sources, which came up with no error message. Calling phpinfo() I can see the extension as enabled. BUT, whenever I try calling a method from this extension I get an error logged to the apache error log:
      /usr/sbin/apache2: symbol lookup error: /usr/lib/php5/20060613+lfs/nmcryptgate.so: undefined symbol: nmlistalloc
    What is missing? I got two packages from the software company: the php module sources and some files which should - according to their path inside the tar - go to /usr/local/bin|doc|include|lib. I moved them there without any effect. Each of these two packages has its own config file, both looking almost the same:
      # libnmcryptgate.la - a libtool library file
      # Generated by ltmain.sh - GNU libtool 1.3.4 (1.385.2.196 1999/12/07 21:47:57)
      #
      # Please DO NOT delete this file
      # It is necessary for linking the library
      # The name that we can dlopen(3)
      dlname=''
      # Names of this library
      library_names='libnmcryptgate.so.1 libnmcryptgate.so libnmcryptgate.so'
      # The name of the static archive
      old_library=''
      # Libraries that this one depends upon
      dependency_libs=' -L. -L/usr/ssl/lib -L/usr/local/ssl/lib -L/usr/local/lib -lssl -lcrypto'
      # Version information for libnmcryptgate
      current=1
      age=0
      revision=29
      # Is this an already installed library
      installed=yes
      # Directory that this library needs to be installed in
      libdir='/usr/local/lib'
    I tried several ways to get it right: moving files, symlinking, changing configurations - always followed by restarting apache - no success. I guess I just have to move the files to the correct location or change the libdir inside the config files, but meanwhile I'm totally confused by the two packages: do I need both, which config rules what, do I have to use the libdir variable? And for what? ... Can anybody point me to the source of my failure? Thank you in advance, regards, Christian
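    A hedged diagnostic sketch: an "undefined symbol" at call time often means the PHP module was built without being linked against the library that provides that symbol, or that the library is not visible to the dynamic linker. The paths below are taken from the question; everything else is an assumption.
      # see which libraries the PHP module declares and whether they resolve
      ldd /usr/lib/php5/20060613+lfs/nmcryptgate.so
      nm -D /usr/lib/php5/20060613+lfs/nmcryptgate.so | grep -i nmlistalloc
      # if libnmcryptgate.so lives in /usr/local/lib, make sure the dynamic linker knows it
      echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/nmcryptgate.conf
      sudo ldconfig
      ldconfig -p | grep nmcryptgate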

    Read the article

  • Using Exchange to host email for a domain name that wasn't our primary domain name

    - by drpcken
    Exchange 2007 on a Server 2003 Active Directory. My primary domain (MyMainDomain.com) controller also hosts DNS and DHCP. I have a secondary domain name (MySecondDomain.net) that my Exchange server allows emails from. It wasn't a physical domain, just accepted by Exchange and set up as the Active Directory users' main SMTP and outgoing address. Its MX records point to MyMainDomain.com's public Exchange address. I've taken MySecondDomain.net and moved the mailboxes to a hosted Exchange 2010 environment. MX records now point to this new Exchange system, and when I send an email from OUTSIDE the MyMainDomain.com environment (say gmail), it works and delivers to the hosted Exchange setup for MySecondDomain.net. However, when I send an email from a user on MyMainDomain.com, it goes to the old Exchange 2007 server I am hosting internally. I have removed MySecondDomain.net from the allowed domains, removed the DNS zone for MySecondDomain.net, and cleared the DNS cache. I was convinced it was my internal DNS server, but I've cleared the DNS cache. Is there something I'm missing somewhere in Exchange 2007? Or is it my domain controller/DNS? Sorry if this is confusing. Thank you!

    Read the article

  • SharePoint Backup/Restore without stsadm

    - by Kevin
    Due to problems we found with the restore of sites/site collections using stsadm (our tasks generated from workflows were not restored), we've taken a different route for backup/restore. We plan a major customization to our SP site and want to take a backup so we can roll back in case the install fails. In our System Testing (not production) environment, we've backed up the 12 hive, the virtual directories that IIS points at for SharePoint, and the SharePoint databases in SQL (using SQL Server to do the db backups).
    We have custom event handlers and workflows built with Visual Studio, and deploy the dlls to the GAC as version 2 (signed and versioned in Visual Studio). So when we deploy, the GAC will contain 2 versions of the workflows - version 1 and version 2. During the deploy we use SP stsadm features to install/activate the WFs. We also go to each library and add the new version 2 WFs. This automatically sets the version 1 WFs to "Not Allow" new instances (which is what we want) and the version 2 as active - perfect so far.
    When we've completed the install, we then assume a failure and attempt to restore to the same machines (SharePoint on one server, SQL on another). We start by uninstalling the version 2 WFs from the GAC, reset IIS (to clear the cache of these ver. 2 WF dlls), restore the 12-hive and virtual directory folders, then restore the SQL dbs. This is all just as manual as you read it - no stsadm here. All seems to work after our restore; it appears the restore was successful - the mods we made to column names, data changes, etc. during the install are all reverted back to the original pre-install state. With one exception: when we run a workflow, it always fails, and the logs in the 12-hive indicate the WF is still trying to use version 2 of the dll (System.IO file not found error).
    We think we've backed up and restored all the moving pieces of SharePoint, but we're missing something here. Does anybody have any ideas why the version 2 WF dlls are still being referenced even though we restored all the folders and dbs of SharePoint? Thanks, Kevin

    Read the article

  • How do I fix a corrupt calendar cache?

    - by Blacklight Shining
    I was tailing /var/log/system.log and noticed a sudden wall of text. Looking closer, I saw it was an error CalendarAgent got while trying to save something:
      Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: CoreData: error: (11) Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed'
      Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: Core Data: annotation: -executeRequest: encountered exception = Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed' with userInfo = { NSFilePath = "/Users/blackl/Library/Calendars/Calendar Cache"; NSSQLiteErrorDomain = 11; }
      2 messages repeated several times
      Nov 18 11:42:49 rainbow-dash.local CalendarAgent[12321]: [com.apple.calendar.store.log.subscription] [WARNING: CalSubscriptionSession :: persistError :: save failed]
    This entire sequence is repeated many times throughout the log. file said the file in question was a SQLite 3.x database, so I did a bit of searching and came up with a way to check those.
      blackl% cp -i ~/Library/Calendars/Calendar\ Cache /tmp
      blackl% sqlite3 /tmp/Calendar\ Cache
      SQLite version 3.7.12 2012-04-03 19:43:07
      Enter ".help" for instructions
      Enter SQL statements terminated with a ";"
      sqlite> pragma integrity_check ;
      *** in database main ***
      Main freelist: Bad ptr map entry key=863 expected=(2,0) got=(5,21)
      On page 21 at right child: 2nd reference to page 863
    This is followed by a few dozen lines like these:
      rowid <number> missing from index <name>
    and then:
      wrong # of entries in index <name>
    I'm at a bit of a loss as to what to do now - I couldn't find anything on how to fix the errors that I found. Also, it would probably be a good idea to disable Calendar Agent so it doesn't try to use the database while it's being fixed (that's why I copied it to /tmp before running sqlite3 on it.) How do I disable CalendarAgent and fix its cache?
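    A hedged recovery sketch: since the file is a cache, the least risky route is often to stop the agent, move the corrupt file aside, and let it be rebuilt; a dump-and-reload of the SQLite file is an alternative. The launchd plist path is an assumption for this OS X release.
      # stop the agent (plist path assumed)
      launchctl unload /System/Library/LaunchAgents/com.apple.CalendarAgent.plist
      # move the corrupt cache out of the way so it is rebuilt on the next launch
      mv ~/Library/Calendars/Calendar\ Cache ~/Desktop/Calendar-Cache.corrupt
      launchctl load /System/Library/LaunchAgents/com.apple.CalendarAgent.plist
      # alternatively, try a dump-and-reload of the damaged copy
      sqlite3 "/tmp/Calendar Cache" .dump | sqlite3 /tmp/Calendar-Cache-rebuilt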

    Read the article
