Search Results

Search found 5636 results on 226 pages for 'abcs extension'.

  • How to install RMagick RubyGem on Mac OS X 10.6 Snow Leopard?

    - by misbehavens
    I am getting this error while trying to install RMagick: $ sudo gem install rmagick Building native extensions. This could take a while... ERROR: Error installing rmagick: ERROR: Failed to build gem native extension. /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb checking for Ruby version >= 1.8.5... yes checking for gcc... yes checking for Magick-config... no Can't install RMagick 2.13.1. Can't find Magick-config in /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin:~/bin:/usr/local/bin:/usr/local/mysql/bin:/usr/local/pear/bin *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby Gem files will remain installed in /Library/Ruby/Gems/1.8/gems/rmagick-2.13.1 for inspection. Results logged to /Library/Ruby/Gems/1.8/gems/rmagick-2.13.1/ext/RMagick/gem_make.out How can I install the RMagick RubyGem on Snow Leopard?
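
    The "Can't find Magick-config" line means ImageMagick itself (or at least its development files) is not installed or not on the PATH, so there is nothing for the gem to build against. A hedged sketch using MacPorts, since /opt/local/bin already appears in the PATH above; Homebrew users would run brew install imagemagick instead, and the exact package names are assumptions:

        # Install ImageMagick so that Magick-config exists on the PATH
        sudo port install ImageMagick
        # Verify the build script the gem looks for is now found
        which Magick-config
        # Then retry the gem build
        sudo gem install rmagick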

  • Install self-signed certificate on local server (iis)

    - by ile
    On this page there are instructions on how to create a self-signed cert (on Apache) and how to install the certificate on the server. I found this page (http://www.visualwin.com/SelfSSL/) with instructions on how to create a self-signed certificate on Windows (IIS). I followed the instructions, and when I type https://myip/myapp (this leads to localhost because I set my router's port forwarding to go to localhost on my PC) that part works. From the first link, the most important part is this: What needs to be installed in IE is actually the Root CA Certificate. In the how-to above, the Root CA Certificate is called ca.crt. Copy this file to the server that is running QuickBooks. The following is for IE6: - Open IE - Tools - Internet Options - Content - Certificates - Trusted Root Certification Authorities Tab - Import, Next, Browse to 'ca.crt' - Next, Next, Finish, Close, OK The part that is missing from the second link is that there is no instruction on how to get the .crt file, so I tried to get it myself. What I did was the following: I opened https://myip/myapp in Firefox and the "This Connection is Untrusted" screen appeared. Then I clicked "Add Exception" and, below "Certificate Status", clicked "View". Under the Details tab I clicked Export, chose Save as type: "X 509 Certificate (PEM)", and the file was saved with a .crt extension. Then I opened IE8 and followed the instructions above. After opening https://myip/myapp in IE8 I still always get the warning screen. Does anyone know what I am doing wrong? Thanks, Ile
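
    Two things may be worth checking here. First, a certificate issued for the machine's host name will still trigger a warning when the site is opened as https://myip/..., because the name in the URL has to match the certificate's subject regardless of trust. Second, the import can be done from an elevated command prompt instead of the IE dialogs; the file name below is an assumption:

        REM Import the exported certificate into the machine's Trusted Root store (run as Administrator)
        certutil -addstore -f Root C:\certs\myapp.crt
        REM Confirm it landed in the store
        certutil -store Root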

  • Cannot send email outside of network using Postfix

    - by infmz
    I've set up an Ubuntu server with Request Tracker following this guide (the section about inbound mail would be relevant). However, while I'm able to send mail to other users within the network/domain, I cannot seem to reach beyond - such as my personal accounts etc. Now I have no idea what is causing this, I thought that all it takes is for the system to fetch mail through our exchange server and be able to deliver in the same way. However, that hasn't been the case. I have found another server setup in a similar fashion (CentOS 5, Request Tracker but using Sendmail), however it is a dated server and whoever's built it has kindly left no documentation on how it works, making it a pain to use that as a reference system! :) At one point, I was told I need to set up a relay between the local server's email add and our AD server but this didn't seem to work. Sorry, I know next to nothing about mailservers, my colleagues nothing about Linux so it's a hard one for me. Thank you! EDIT: Result of postconf -N with details masked =) alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases append_dot_mydomain = no biff = no config_directory = /etc/postfix inet_interfaces = all mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 mydestination = myhost.mydomain.com, localhost.mydomain.com, , localhost myhostname = myhost.mydomain.com mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 myorigin = /etc/mailname readme_directory = no recipient_delimiter = + relayhost = EXCHANGE IP smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes Sample log message: Sep 4 12:32:05 theedgesupport postfix/smtp[9152]: 2147B200B99: to=<[email protected]>, relay= RELAY IP :25, delay=0.1, delays=0.05/0/0/0.04, dsn=5.7.1, status=bounced (host HOST IP said: 550 5.7.1 Unable to relay for [email protected] (in reply to RCPT TO command))
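
    The "550 5.7.1 Unable to relay" in the log is coming back from the Exchange relay, not from Postfix, so external delivery is being refused upstream; the usual fixes are allowing this host's IP on an Exchange receive connector or authenticating to the relay. A hedged sketch of the authenticated variant; the host name and credentials file are assumptions:

        # /etc/postfix/sasl_passwd would contain one line such as:
        #   [exchange.example.local]:25   rt-user@example.local:password
        sudo postmap /etc/postfix/sasl_passwd
        sudo postconf -e 'smtp_sasl_auth_enable = yes' \
            'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd' \
            'smtp_sasl_security_options = noanonymous'
        sudo postfix reload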

  • Installing sqlite gem fails on AWS Linux instance with sqlite-devel libraries installed

    - by Scott
    Hi, I'm running an instance built off ami-595a0a1c. I am trying to install the sqlite3 (or sqlite) gem and it's failing with the below error: $ sudo gem install sqlite3 Building native extensions. This could take a while... ERROR: Error installing sqlite3: ERROR: Failed to build gem native extension. /usr/bin/ruby extconf.rb checking for sqlite3.h... no sqlite3.h is missing. Try 'port install sqlite3 +universal' or 'yum install sqlite3-devel' and check your shared library search path (the location where your sqlite3 shared library is located). extconf.rb failed * Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/bin/ruby --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3 for inspection. Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3/ext/sqlite3/gem_make.out Typically, this just means you need to install the development libraries and everything is cool. However, I have installed the sqlite-devel packages and still no dice. Since this is the Amazon Linux instance, I'd rather not add more repositories than the ones Amazon provides if possible. What can i do to get this thing to compile? Thanks for any insight! From a brand new instance, here's what I've done: $ sudo yum install rubygems ruby-devel $ sudo gem update --system $ sudo gem install rails $ rails new app $ cd app $ rails server Could not find gem 'sqlite3 (= 0)' in any of the gem sources listed in your Gemfile. $ sudo yum install sqlite-devel $ sudo gem install sqlite (or sqlite3 -- same result) See breakage above
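
    On 64-bit Amazon Linux the sqlite-devel headers normally end up in /usr/include and the library in /usr/lib64, so telling extconf.rb exactly where to look sometimes gets past this; a sketch, with the paths being assumptions worth verifying via rpm -ql:

        sudo yum install -y sqlite-devel gcc make ruby-devel
        rpm -ql sqlite-devel | grep -E 'sqlite3\.h|libsqlite3'
        # Pass the locations through to the gem's extconf.rb
        sudo gem install sqlite3 -- --with-sqlite3-include=/usr/include \
                                    --with-sqlite3-lib=/usr/lib64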

  • VNC error: "Could not connect to session bus: Failed to connect to socket"

    - by GJ
    I started a vncserver on display :1 on an ubuntu machine. When I connect to it, I get a grey X window with an error message Could not connect to session bus: Failed to connect to socket. The vnc log is: Xvnc Free Edition 4.1.1 - built Apr 9 2010 15:59:33 Copyright (C) 2002-2005 RealVNC Ltd. See http://www.realvnc.com for information on VNC. Underlying X server release 40300000, The XFree86 Project, Inc Sun Mar 20 15:33:59 2011 vncext: VNC extension running! vncext: Listening for VNC connections on port 5901 vncext: created VNC server for screen 0 error opening security policy file /etc/X11/xserver/SecurityPolicy Could not init font path element /usr/X11R6/lib/X11/fonts/Type1/, removing from list! Could not init font path element /usr/X11R6/lib/X11/fonts/Speedo/, removing from list! Could not init font path element /usr/X11R6/lib/X11/fonts/misc/, removing from list! Could not init font path element /usr/X11R6/lib/X11/fonts/75dpi/, removing from list! Could not init font path element /usr/X11R6/lib/X11/fonts/100dpi/, removing from list! cat: /var/run/gdm/auth-for-link2-eGnVvf/database: No such file or directory gnome-session[24880]: WARNING: Could not make bus activated clients aware of DISPLAY=:1.0 environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused gnome-session[24880]: WARNING: Could not make bus activated clients aware of GNOME_DESKTOP_SESSION_ID=this-is-deprecated environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused gnome-session[24880]: WARNING: Could not make bus activated clients aware of SESSION_MANAGER=local/dell:@/tmp/.ICE-unix/24880,unix/dell:/tmp/.ICE-unix/24880 environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused Sun Mar 20 15:34:10 2011 Connections: accepted: 0.0.0.0::51620 SConnection: Client needs protocol version 3.8 SConnection: Client requests security type VncAuth(2) VNCSConnST: Server default pixel format depth 16 (16bpp) little-endian rgb565 VNCSConnST: Client pixel format depth 16 (16bpp) little-endian rgb565 gnome-session[24880]: Gtk-CRITICAL: gtk_main_quit: assertion `main_loops != NULL' failed gnome-session[24880]: CRITICAL: dbus_g_proxy_new_for_name: assertion `connection != NULL' failed Any ideas how to fix it?
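
    The gnome-session warnings suggest the VNC desktop has no session D-Bus daemon of its own, so clients cannot reach a session bus. A common workaround is to start the desktop through dbus-launch from ~/.vnc/xstartup; a minimal sketch (the exact session command varies by Ubuntu release):

        #!/bin/sh
        # ~/.vnc/xstartup: give the VNC session its own D-Bus session bus
        unset DBUS_SESSION_BUS_ADDRESS
        exec dbus-launch --exit-with-session gnome-session

    Afterwards the server would need restarting, e.g. vncserver -kill :1 && vncserver :1.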

  • OS X Apache giving 503 error for anything in /api directory

    - by WilliamMayor
    I have a locally hosted website that uses Smarty templates, I'm trying to get started on building an API for the site. I've used virtualhost.sh to create a local virtual host for this and other sites. I've discovered that if I put a directory called api at the root of any of these virtual hosts I will get a 503 error when I try to access anything inside. I am using mod-rewrite but so far only to append a .php extension when needed. Here are the error logs for a request: [Thu Feb 09 13:42:37 2012] [error] proxy: HTTP: disabled connection for (localhost) [Thu Feb 09 13:49:06 2012] [error] (61)Connection refused: proxy: HTTP: attempt to connect to [fe80::1]:8080 (localhost) failed [Thu Feb 09 13:49:06 2012] [error] ap_proxy_connect_backend disabling worker for (localhost) The middle line gave me a clue to look in my hosts file because why would a request go to [fe80::1]:8080? I commented out that line and tried again, this time the error was in connecting to the standard 127.0.0.1 localhost. I have concluded that perhaps there is some config file somewhere picking up the underlying request of localhost/api and pointing it somewhere other than my virtual host. At this point my ability to fix the problem fails me. Can anyone help?
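
    The proxy errors mean some piece of configuration hands /api to mod_proxy and points it at port 8080, where nothing is listening, hence the 503. Searching the whole Apache configuration tree for the directive is usually the quickest way to find it; the paths below assume the stock OS X Apache layout plus the include directory virtualhost.sh typically uses:

        # Look for ProxyPass or proxying rules that mention port 8080
        grep -RinE 'proxypass|8080' /etc/apache2/
        grep -RinE 'api' /etc/apache2/virtualhosts/ 2>/dev/null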

  • None of the USB controllers are working after a crash

    - by Cray
    So I have a GA-P35-DS4 motherboard with a Q6600, running Windows 7 x64. After a random crash with a bluescreen (not caused by anything in particular, as I recall), none of the USB controllers are working (and none of the devices connected to those USB ports). All the controllers show up in Device Manager, but every one has a warning icon (as they are not functioning properly). Windows identifies them correctly: it shows the exact model of each controller, and it says that the driver is installed. Now, when I try to reinstall the driver (the Update Driver menu item in Device Manager), it looks for the driver, finds it, but quits, saying that the driver was found but could not be installed. Additionally, it displays "This operation requires an interactive window station", whatever that means... Now get this: the same thing happens with a new PCI USB controller card! It is found (actually, each time the machine starts it is found as new hardware), but trying to install drivers leads nowhere; I get the same message about an interactive window station (I tried to install the drivers from the accompanying CD). I have tried deleting those devices from Device Manager and letting Windows find them again, but that leads to the same results. None of the ports on this extension card work either. This should not be a hardware problem: a USB keyboard connected to the built-in USB controller works in the BIOS, and even in another OS running on the same computer (an old XP installation). What to do? Will I have to reinstall? Can deleting infcache.1 help? Is there some way to let Windows remove all the old drivers and try to find all the hardware again, this time only looking for drivers on a Windows install disk or something?

  • What's wrong with this HTTP POST request?

    - by bigboy
    I'm trying to fuzz a server using the Sulley fuzzing framework. I observe the following stream in Wireshark. The error talks about a problem with JSON parsing, however, when I try the same HTTP POST request using Google Chrome's Postman extension, it succeeds. Can anyone please explain what could be wrong about this HTTP POST request? The JSON seems valid. POST /restconf/config HTTP/1.1 Host: 127.0.0.1:8080 Accept: */* Content-Type: application/yang.data+json { "toaster:toaster" : { "toaster:toasterManufacturer" : "Geqq", "toaster:toasterModelNumber" : "asaxc", "toaster:toasterStatus" : "_." }} HTTP/1.1 400 Bad Request Server: Apache-Coyote/1.1 Content-Type: */* Transfer-Encoding: chunked Date: Sat, 07 Jun 2014 05:26:35 GMT Connection: close 152 <?xml version="1.0" encoding="UTF-8" standalone="no"?> <errors xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf"> <error> <error-type>protocol</error-type> <error-tag>malformed-message</error-tag> <error-message>Error parsing input: Root element of Json has to be Object</error-message> </error> </errors> 0
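
    One visible difference from a normal client request is that the fuzzed POST carries no Content-Length (and no Transfer-Encoding) header, so the server may be parsing an empty body and then complaining that the JSON root is missing. Replaying the same payload with curl, which adds Content-Length automatically, would confirm whether the body itself is acceptable; the URL and JSON are taken from the capture above:

        curl -v -X POST 'http://127.0.0.1:8080/restconf/config' \
             -H 'Content-Type: application/yang.data+json' \
             -d '{ "toaster:toaster" : { "toaster:toasterManufacturer" : "Geqq", "toaster:toasterModelNumber" : "asaxc", "toaster:toasterStatus" : "_." } }'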

  • Cannot perform a PECL installation

    - by Petrusa
    I have been trying to do a few PECL installations, but all of them return the same type of error. Something related to timezones? Im running RedHat x86_64 es5. Attempting to install geoip-1.0.7: root@server [~]# pecl install geoip-1.0.7 downloading geoip-1.0.7.tgz ... Starting to download geoip-1.0.7.tgz (9,416 bytes) .....done: 9,416 bytes Warning: strtotime(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead in PEAR/Validate.php on line 489 Warning: strtotime(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead in /usr/local/lib/php/PEAR/Validate.php on line 489 3 source files, building running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 building in /var/tmp/pear-build-root/geoip-1.0.7 running: /root/tmp/pear/geoip/configure checking for egrep... grep -E checking for a sed that does not truncate output... /bin/sed checking for cc... cc checking for C compiler default output file name... a.out checking whether the C compiler works... configure: error: cannot run C compiled programs. If you meant to cross compile, use `--host'. See `config.log' for more details. ERROR: `/root/tmp/pear/geoip/configure' failed What is going on? Anyone that could assist please...
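
    The strtotime() notices are only warnings (settable via date.timezone in php.ini); the build actually stops at "cannot run C compiled programs", which on many hardened hosts means the PEAR build area under /var/tmp sits on a filesystem mounted noexec. Two hedged checks; the replacement directory is an assumption:

        # Is the partition holding the build directory mounted noexec?
        mount | grep -E ' /var| /tmp'
        # If so, point PEAR/PECL's temporary build area somewhere executable and retry
        pecl config-set temp_dir /root/tmp/pear-build
        pecl install geoip-1.0.7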

  • Compiled ruby fails to find curses

    - by Hamish Downer
    I'm attempting to install the sup MUA but I'm having trouble. When I try to run it, it can't find curses: /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- curses (LoadError) ... I am installing on a server running CentOS 5. I compiled Ruby and RubyGems from source and then installed sup using RubyGems, following this article to compile Ruby. I found a report of a similar problem on Ubuntu; the fix suggested there is to install libcurses-ruby, but I can't find a similarly named package for CentOS. I have installed the ncurses-devel package, as that was required for installing sup using gem. I have also installed the ncurses, cursesx and rbcurse gems, but none of these have fixed the problem. The article above about compiling Ruby said you had to recompile the zlib extension by doing: cd ext/zlib sudo ruby extconf.rb --with-zlib-include=/usr/include --with-zlib-lib=/usr/lib cd ../.. sudo make sudo make install So I've tried a few variants in ext/curses. The top few lines of ext/curses/extconf.rb are: require 'mkmf' dir_config('curses') dir_config('ncurses') dir_config('termcap') So I've tried a few variants of setting paths: sudo ruby extconf.rb --with-curses-include=/usr/include --with-curses-lib=/usr/lib --with-ncurses-include=/usr/include --with-ncurses-lib=/usr/lib --with-termcap-lib=/lib sudo ruby extconf.rb --with-curses-include=/usr/include --with-curses-lib=/usr/lib --with-termcap-lib=/lib and re-done the make, but to no avail so far. Any ideas to move this forward are welcome.
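
    Since this Ruby was compiled before ncurses-devel was installed, the curses extension was most likely skipped silently at build time; now that the headers are present, rebuilding just that extension from the same source tree (mirroring the zlib steps quoted above) is usually enough. A sketch, assuming the original source directory is still around:

        cd /path/to/ruby-1.8-source/ext/curses   # wherever the source was unpacked
        ruby extconf.rb
        make && sudo make install
        ruby -rcurses -e 'puts "curses loaded"'  # quick smoke test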

  • Printing to a parallel printer over an Ethernet cable

    - by Crudler
    I have a bit of an interesting challenge :) I have a machine with a parallel printer output, and I want it to print to a printer in a different room instead; I know that parallel isn't great over long distances. I found this: http://www.amazon.com/over-Cat5-Extension-Cable-Adapter/dp/B002WJ9S6Y%3FSubscriptionId%3DAKIAINHICTCYYZGJWT4Q%26tag%3Dusbprintercables.net-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3DB002WJ9S6Y which will let me connect over Cat5, but it's USB to Cat5. My machine can only output on parallel (it's not a computer), so what I was thinking of getting is a parallel (F) to USB adapter and a USB to parallel (M) adapter for either side, i.e. machine - parallel - USB - Cat5 - USB - parallel - printer. It just seems a bit messy :) Suggestions? Another thing I would like to try is to get rid of the old-school parallel printer and instead use a network-based multifunction printer. Would this be possible? I.e. machine - parallel - USB - Cat5 - Ethernet print server - network printer. This might be rougher because the machine cannot "know" that we are using a network printer; it can ONLY print to LPT1. Thanks!

  • Cyrus: In practical terms, how do end users administer their shared mailboxes?

    - by Nick
    Let's say we have four customer service reps: Billy, Bob, Joe, and Tom. Tom is the department manager. There's a shared Customer Service mailbox on the Cyrus server that they all have access to. Tom, as the manager also has administrative privileges for the shared mailbox. They decide they want to create sub-folders a certain way, and Tom creates them. They're all running Thunderbird, so Tom right-clicks the main folder and chooses "New Subfolder". Now Tom has the Subfolders he needs and the other sales reps have... nothing! Because Cyrus created the Subfolders giving Tom "Full Access" permissions, and everyone else gets no access. So how does Tom give the other reps in his department access to the new folders? As far as Cyrus is concerned, Tom has permission to grant others access to his new mailboxes- But as far as I can tell, there's no option in Thunderbird for granting mailbox permissions. An IT staff member should not have to receive a support request every time someone wants to add a Subfolder to a shared mailbox. That's why we make certain users into mailbox admins in the first place! But asking (non-technical) users to SSH into an IMAP server to run cyradm seems like a bad idea too. Certainly someone has found a solution for this dilemma. Perhaps a Thunderbird extension for setting Cyrus permissions? Or something like umask that forces subfolders to have identical permissions to their parents on creation? And related, what about Sieve configuration? Is there anyway that can be done from the client machine too? Thanks, Nick
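
    Until a friendlier client-side option turns up, the ACLs themselves do not require shell access to the server: cyradm speaks IMAP, so Tom could run it from his own workstation with his own credentials. A sketch; the mailbox name, server name and rights string are assumptions to adapt:

        cyradm --user tom imap.example.com
        # grant the other reps read/write rights on the new subfolder
        setaclmailbox shared.customerservice.Returns billy lrswipd
        setaclmailbox shared.customerservice.Returns bob lrswipd
        setaclmailbox shared.customerservice.Returns joe lrswipd

    Sieve scripts can likewise be managed remotely over the MANAGESIEVE protocol (for example with sieveshell) rather than on the server itself.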

  • How can I connect an integrated webcam to VirtualBox?

    - by Mike Stumpf
    I am trying to use a Windows XP VM for VirtualBox on my Windows 8.1 laptop. I have tried the usual attaching USB device but I get an error saying "USB device is busy with previous request". My webcam is not active in any applications and this happens after a clean reboot of the host, the guest, and VirtualBox. Here are the details: Host -HP Pavilion 17 Notebook PC (stock) -Windows 8.1 -AMD A10-5750M APU -HP Truevision HD (integrated webcam) VM I got the VM here: http://www.modern.ie/en-us/virtualization-tools VirtualBox -VirtualBox 4.3.12 installed -VirtualBox Extension pack installed -Guest additions are installed for 4.3.12 -Enable USB Controller is checked -It does not matter if enable 2.0 controller is checked or not -It does not matter if a USB device filter is set up for the webcam or not -Here is the error message: Failed to attach the USB device DDFEQ01G45BFBV HP Truevision HD [0004] to the virtual machine IE8 - WinXP. USB device 'DDFEQ01G45BFBV HP Truevision HD' with UUID {7a2e2a45-974d-482b-9b4e-9f9abbcd0ebb} is busy with a previous request. Please try again later. Result Code: E_INVALIDARG (0x80070057) Component: HostUSBDevice Interface: IHostUSBDevice {173b4b44-d268-4334-a00d-b6521c9a740a} Callee: IConsole {8ab7c520-2442-4b66-8d74-4ff1e195d2b6} I read on some VirtualBox forums that disabling USB 2.0 support in the host BIOS solved their issue but I wanted to know if there were any other ideas before I muck around in there. Thanks
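
    VirtualBox 4.3 with the extension pack also offers a dedicated webcam pass-through interface that does not go through the generic USB attach dialog, which may sidestep the "busy with a previous request" error. A hedged sketch using VBoxManage; the alias .0 refers to the host's default webcam:

        VBoxManage list webcams
        VBoxManage controlvm "IE8 - WinXP" webcam attach .0
        VBoxManage controlvm "IE8 - WinXP" webcam list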

  • memcache fast-cgi php apache 2.2 windows 7 creating problems

    - by Ahmad
    Hi, I am trying to run memcache and FastCGI with Apache 2.2 + PHP on a Windows 7 machine. If I don't use memcache, everything works fine; the moment I disable extension=php_memcache.dll in php.ini, everything returns to normal. Once I start Apache, the Apache logs say: [Wed Jan 12 18:19:23 2011] [notice] Apache/2.2.17 (Win32) mod_fcgid/2.3.6 configured -- resuming normal operations [Wed Jan 12 18:19:23 2011] [notice] Server built: Oct 18 2010 01:58:12 [Wed Jan 12 18:19:23 2011] [notice] Parent: Created child process 412 [Wed Jan 12 18:19:23 2011] [notice] Child 412: Child process is running [Wed Jan 12 18:19:23 2011] [notice] Child 412: Acquired the start mutex. [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting 64 worker threads. [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting thread to listen on port 80. After accessing the page (the page just has echo phpinfo()), I get this error in error.log: [Wed Jan 12 18:20:54 2011] [warn] [client 127.0.0.1] (OS 109)The pipe has been ended. : mod_fcgid: get overlap result error [Wed Jan 12 18:20:54 2011] [error] [client 127.0.0.1] Premature end of script headers: index.php I have php_memcache.dll in my ext directory and httpd.conf is like this: LoadModule fcgid_module modules/mod_fcgid.so FcgidInitialEnv PHPRC "c:/php" FcgidInitialEnv PATH "c:/php;C:/WINDOWS/system32;C:/WINDOWS;C:/WINDOWS/System32/Wbem;" FcgidInitialEnv SystemRoot "C:/Windows" FcgidInitialEnv SystemDrive "C:" FcgidInitialEnv TEMP "C:/WINDOWS/Temp" FcgidInitialEnv TMP "C:/WINDOWS/Temp" FcgidInitialEnv windir "C:/WINDOWS" FcgidIOTimeout 64 FcgidConnectTimeout 32 FcgidMaxRequestsPerProcess 500 <Files ~ "\.php$"> AddHandler fcgid-script .php FcgidWrapper "c:/php/php-cgi.exe" .php </Files> So the problem has to be related to memcache, because if I disable it, FastCGI seems to be working fine. Any possible reasons for this? The memcache service is running; I can check it through Control Panel > Services.
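
    Since the crash only appears with extension=php_memcache.dll enabled, the usual suspect is a DLL built for a different PHP flavour (thread-safe vs non-thread-safe, VC6 vs VC9, x86 vs x64). Running the same CGI binary by hand shows whether PHP dies while loading the extension, independent of Apache:

        REM Does php-cgi start and list its modules, or crash while loading the DLL?
        c:\php\php-cgi.exe -v
        c:\php\php-cgi.exe -m
        REM Compare the build's "Thread Safety" and compiler with the php_memcache.dll that was downloaded
        c:\php\php-cgi.exe -i | findstr /i "thread compiler architecture"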

  • Asterisk terminating outbound call when picked up, sends 'BYE' message

    - by vo
    I'm running Asterisk 1.6.1.10 / FreePBX 2.5.2.2 and I've got an outbound trunk set up. Everything used to work fine until recently (perhaps due to the upgrade to FC12, or something else, I'm not sure). Anyway, the setup does not appear to have issues registering and setting up the call: RTP packets go both ways and you can hear the ringing from the other side. However, it appears that when the call is picked up, or thereabouts, the incoming RTP packets cease. Upon closer inspection with Wireshark, these particular packets seem to be the cause: trunk->asterisk SIP/SD Status: 200 OK, with session description asterisk->trunk SIP Request: ACK sip:<phone>@trunk:6889 asterisk->trunk SIP Request: BYE sip:<phone>@trunk:6889 [..about a dozen RTP packets in/outbound..] trunk->asterisk SIP Status: 200 OK, CSeq: 104 Bye [..outbound RTP continues, phone is silent..] Then the inbound RTP packets cease; however, the Asterisk logs don't show any activity at this point. The last entry reads 'SIP/ is answered SIP/'. Then when you hang up the extension, you get asterisk->trunk SIP Request: BYE sip:<phone>@trunk:6889 trunk->asterisk SIP Status: 481 Call Leg/Transaction does not exist My trunk peer settings in FreePBX are: username=<user> fromuser=<user> canreinvite=no type=friend secret=<pass> qualify=no [qualify yes produces 401/forbidden messages] nat=yes insecure=very host=<sip trunk gateway> fromdomain=<sip trunk gateway> disallow=all context=from-pstn allow=ulaw dtmfmode=inband Under sip_general_custom.conf I have: stunaddr=stun.xten.com externrefresh=120 localnet=192.168.1.1/255.255.255.0 nat=yes What's causing Asterisk to prematurely end the call while still thinking the call is in progress? I have no idea where to look next.
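
    A SIP trace taken from the Asterisk console usually says why Asterisk itself issued the BYE (the reason is logged next to the dialplan activity), which the packet capture alone cannot show. Commands below are for the 1.6 CLI; older releases use "sip set debug" without the trailing "on":

        asterisk -rvvvvv
        core set verbose 5
        sip set debug on

    Then place a test call and check the console output (or the full log, if enabled in logger.conf) for the messages logged around the BYE.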

  • Transferring a call in Asterisk to a different context

    - by Necronet
    I have a small and basic PBX with two contexts, which are basically sales and supervisor; both have different roles and privileges. I notice that it is possible to transfer a call within the same context, but it has been impossible to transfer anything to another context. Any insight? I am kind of a rookie with Asterisk, but currently there is no one else in charge... Thanks. Edit: This is the extensions.conf: [supervisor] include => from-internal exten => _40XX,1,Answer exten => _40XX,n,Set(calltime=${STRFTIME(${EPOCH},,%C%y%m%d.%H.%M.%S)}) exten => _40XX,n,Set(CALLEDNUMBER=${EXTEN}) exten => _40XX,n,MixMonitor(/tmp/Para_${CALLEDNUMBER}-${calltime}-De_${CALLERID(num)}.wav) exten => _40XX,n,Dial(SIP/${EXTEN},40,TtRr) exten => _40XX,n,Hangup [sales] include => out-trunksip exten => _41XX,1,Answer exten => _41XX,n,Set(calltime=${STRFTIME(${EPOCH},,%C%y%m%d.%H.%M.%S)}) exten => _41XX,n,Set(CALLEDNUMBER=${EXTEN}) exten => _41XX,n,MixMonitor(/tmp/Para_${CALLEDNUMBER}-${calltime}-De_${CALLERID(num)}.wav) exten => _41XX,n,Dial(SIP/${EXTEN},40,TtRr) exten => _41XX,n,Hangup and the sip.conf looks like this: [supervisor] username=sales secret=ASUPERSECRETPASSWORD type=peer ..... context=supervisor mailbox=supervisor [sales] username=sales secret=ASUPERSECRETPASSWORD type=peer ..... context=sales mailbox=sales What do you suggest in order to keep the supervisor's existing privileges while letting sales transfer calls to him?
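
    Extensions are only reachable from contexts that include them, and a transfer started by a sales phone is looked up in the sales context, which currently only includes the outbound trunk. Adding an include of the supervisor context to [sales], as sketched below, would make the _40XX pattern dialable and transferable from sales without widening the supervisor's own privileges; adjust if outbound rights are meant to stay separate:

        [sales]
        include => out-trunksip
        include => supervisor        ; lets 41XX phones dial/transfer to the 40XX extensions
        ; ...rest of the context unchanged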

  • What is stopping postfix from delivering mail to the local transport agent?

    - by Dark Star1
    I have the following settings ( as grabbed from my postconf -n output) alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix content_filter = smtp-amavis:[127.0.0.1]:10024 inet_interfaces = all mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 maximal_backoff_time = 8000s maximal_queue_lifetime = 7d minimal_backoff_time = 1000s mydestination = $mydomain, localhost.$mydomain, localhost myhostname = //redacted mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 myorigin = /etc/mailname readme_directory = no recipient_delimiter = + relayhost = smtp_helo_timeout = 60s smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_hard_error_limit = 12 smtpd_recipient_limit = 10 smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = smtpd_sasl_path = private/auth smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_soft_error_limit = 3 smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes unknown_local_recipient_reject_code = 450 virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cf, mysql:/etc/postfix/mysql_virtual_alias_domainaliases_maps.cf virtual_gid_maps = static:8 virtual_mailbox_base = /var/vmail virtual_mailbox_domains = mysql:/etc/postfix/mysql_virtual_domains_maps.cf virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf, mysql:/etc/postfix/mysql_virtual_mailbox_domainaliases_maps.cf virtual_transport = virtual virtual_uid_maps = static:5000 postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_overquota_bounce=yes postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_mailbox_limit_maps=mysql:/etc/postfix/mysql_virtual_mailbox_limit_maps.cf postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_maildir_limit_message=Sorry, the your maildir has overdrawn your diskspace quota, please free up some of spaces of your mailbox try again. postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_create_maildirsize=yes postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_mailbox_extended=yes postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_mailbox_limit_override=yes postconf: warning: /etc/postfix/main.cf: unused parameter: smtpd_relay_restrictions=reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unauth_destination, check_policy_service inet:127.0.0.1:10023, permit I am nwe to mail server configurations but as I understand it from this message: status=deferred (mail transport unavailable) It means it can't deliver to the LDA. I am using postifx 2.9.6 on ubuntu 12.04 with dovecot 2.0.19
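
    "mail transport unavailable" with virtual_transport = virtual usually points at the transport's entry in master.cf (missing, commented out, or misnamed) rather than at main.cf. Two quick checks on Postfix 2.9:

        # Is the virtual delivery agent defined in master.cf?
        postconf -M | grep -E '^virtual'
        # Watch the log while sending a test message for the transport's own error line
        tail -f /var/log/mail.log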

  • Wrong CSS mime type with Roundcube 0.5 beta and nginx

    - by Julien Vehent
    I'm running into a CSS problem. This is a setup based on Debian Squeeze (nginx/0.7.67, php5/cgi) on which I installed the latest Roundcube 0.5 beta. PHP is properly processed and login works fine, but the CSS files are not loaded and Firefox is throwing the following errors: Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/common.css?s=1290600165 was not loaded because its MIME type, "text/html", is not "text/css". Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0 Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/mail.css?s=1290156319 was not loaded because its MIME type, "text/html", is not "text/css". Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0 As far as I understand, nginx doesn't see the .css extension (because of the ?s= argument) and thus sets the MIME type to the default value, text/html. Should I fix this in nginx (and how?), or is it Roundcube-related? Edit: It seems that it's nginx-related. The content type isn't set for any type other than text/html. I had to manually include the following declarations to force the CSS and JS content types. That's ugly, and I never had the problem before... any idea? location ~ \.css { add_header Content-Type text/css; } location ~ \.js { add_header Content-Type application/x-javascript; }
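
    nginx decides the Content-Type from the URI's extension via its mime.types map, and the query string (?s=...) is not part of that lookup, so forcing headers per location shouldn't be necessary. Every static file coming back as text/html usually means the types table isn't loaded in the http block, or a location regex is sending .css/.js to the PHP backend. The usual baseline looks like this (paths assume the Debian package layout):

        http {
            include       /etc/nginx/mime.types;
            default_type  application/octet-stream;
            # ...and make sure only "location ~ \.php$" is passed to the php5 fastcgi backend
        }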

  • Should I use Evernote or Org-mode for taking notes?

    - by tobeannounced
    I am looking for an app that will help me manage my notes, and after coming across Org-mode, I was wondering whether Org-mode's functionality is strong enough that it can remove the need for me to use another note-taking app such as Evernote (because Org is more of a task-management app). My wishes for a note-taking app are:
    - Can be accessed offline in some form, e.g. through an iPhone app or desktop client. Org-mode and Evernote can both do this, however it seems like MobileOrg is aimed more at tasks than notes? If this is the case, I would probably use Evernote in addition to MobileOrg.
    - I can clip web content into it easily for research. Evernote has the browser extension; how is it with Org-mode? I know I can use C-c C-l, but how suited is it really for taking notes on stuff I am browsing in Chrome/Firefox?
    - Has voice notes on the iPhone and computer too, if possible. Org-mode cannot do this on the iPhone; on the computer, could I record audio externally and then link the files in?
    - I can add notes on my iPhone and computer while not connected to the internet. Both can do this.
    The types of notes I am likely to have include: howtos/things I have learnt, documentation on my setup/stuff, research on things I may do in the future, ideas, and task-specific notes. I have thought about where I would want to access each of these notes and will post that here if you think it would help.

  • TFS 2012: Backup Plan Fails with empty log file

    - by Vitor
    I have a Team Foundation Server 2012 installation with Power Tools, and I defined a backup plan using the wizard found in the "Database Backup Tools" in the Team Foundation Server Administration Console. I set the backup plan to do a full database backup on Sunday mornings, to another server in the network. I followed the wizard with no problems and the Backup Plan was set successfully. However when the backup runs it returns Error as result and when I go to the log file I only get the header and no further info: [Info @01:00:01.078] ==================================================================== [Info @01:00:01.078] Team Foundation Server Administration Log [Info @01:00:01.078] Version : 11.0.50727.1 [Info @01:00:01.078] DateTime : 11/25/2012 02:00:01 [Info @01:00:01.078] Type : Full Backup Activity [Info @01:00:01.078] User : <backup user> [Info @01:00:01.078] Machine : <TFS Server> [Info @01:00:01.078] System : Microsoft Windows NT 6.2.9200.0 (AMD64) [Info @01:00:01.078] ==================================================================== I can imagine it's a permission problem, but I have no idea where to start ... Can anyone help? Thank you for your time! EDIT I'm not sure if it is related, but I logged in with "backup user" in "TFS Server" and there was this crash window opened with "TFS Power Tool Shell Extension (TfsComProviderSvr) has stopped working". The full crash log is here: Problem signature: Problem Event Name: APPCRASH Application Name: TfsComProviderSvr.exe Application Version: 11.0.50727.0 Application Timestamp: 5050cd2a Fault Module Name: StackHash_e8da Fault Module Version: 6.2.9200.16420 Fault Module Timestamp: 505aaa82 Exception Code: c0000374 Exception Offset: PCH_72_FROM_ntdll+0x00040DA8 OS Version: 6.2.9200.2.0.0.272.7 Locale ID: 1043 Additional Information 1: e8da Additional Information 2: e8dac447e1089515a72386afa6746972 Additional Information 3: d903 Additional Information 4: d9036f986c69f4492a70e4cf004fb44d Does it help? Thanks everyone!

  • postfix smtpd rejecting mail from outside network match_list_match: no match

    - by Loopo
    My postfix (V: 2.5.5-1.1) running on ubuntu server (9.04) started to reject mail arriving in from outside about 2 weeks ago. Doing a "manual" session via telnet shows that the connection is always closed after the MAIL FROM: [email protected] line is input, with the message "Connection closed by foreign host." Doing the same from another client inside the LAN works fine. In the log files I get the line "lost connection after MAIL from xxxxx.tld[xxx.xxx.xxx.xxx]" This is after some lines like: match_hostaddr: XXX.XXX.XXX.XXX ~? [::1]/128 match_hostname: XXXX.tld ~? 192.168.1.0/24 ... match_list_match: xxx.xxx.xxx.xxx: no match which seem to suggest some kind of filter which checks for allowed addresses. I have been unable to locate where this filter lives, or how to turn it off. I'm not even sure if that's what's causing my problem. Connections from inside the LAN don't get disconnected even though they also show a "match_list_match: ... no match" line. I didn't change any configuration files recently, below is my main.cf as it currently stands. I don't really know what all the parameters do and how they interact. I just set it up initially and it worked fine (up to recently). smtpd_banner = $myhostname ESMTP $mail_name (GNU) biff = no readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/server.crt smtpd_tls_key_file=/etc/ssl/private/server.key #smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_sasl_auth_enable = no smtp_use_tls=no smtp_sasl_password_maps = hash:/etc/postfix/smtp_auth myhostname = XXXXXXX.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = XXXX.XXXX.com, XXXX.com, localhost.XXXXX.com, localhost relayhost = XXX.XXX.XXX.XXX mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.1.0/24 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all smtpd_sasl_local_domain = #smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous smtpd_sasl_authenticated_header = yes broken_sasl_auth_clients = yes smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_ when checking the process list, postfix/smtpd runs as smtpd -n smtp -t inet -u -c -o stress -v -v Any clues?
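
    Since the disconnect happens right after MAIL FROM and only for clients outside mynetworks, the client/sender/recipient restriction lists are the first thing to compare against what Postfix actually loaded; the smtpd_recipient_restrictions value pasted above ends in an incomplete "reject_unauth_", which is worth confirming in the actual main.cf. A hedged check:

        postconf smtpd_client_restrictions smtpd_helo_restrictions \
                 smtpd_sender_restrictions smtpd_recipient_restrictions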

  • How to get multipath working for Ubuntu Server 12.04

    - by mlampi
    I'm working on a project which aims to make use of Ubuntu servers running on enterprise class hardware. In our case that means IBM HS23E blade servers, QLogic 4GB fibre channel extension cards and quite old IBM DS4500 disk array with two controllers. At the moment we have fibre channel as only boot option and Ubuntu Server 12.04 installed just fine and is able to boot without multipath. I'm not a linux professional myself but in our team we have people who will understand the technical stuff. Don't let my post confuse :) The current situation is that we have only one fibre channel connection to a single disk array controller. Real life case would be of course quite different. At minimum we should have two fibre channel ports connected to two different switches and two different controllers. However, we have no idea how to set up multipath tool. Is the DM-MPIO the right software? At minimum we should be able to boot when multiple connections are available and achieve fault tolerance when any of them should be down. Since the disk array is not the latest hardware, I managed to find RDAC driver sources only for 2.6.x kernel. And we have 3.2.x. Another issue is to build a multipath.conf. The said driver sources are from IBM support and the QLogic drivers provided to Ubuntu installer are from Ubuntu site. It seems that RHEL and SLES would have near out of the box support but that is not an option for our project. Actual questions: - What is the recommended software tool for multipath for Ubuntu Server 12.04? - Is there available pre-made configurations or templates? Does it require disk array / controller specific settings or do a more generic config work? - Do you have expriences on similar setup and like to share the knowledge? I'll provide you with any additional information you might require. Thanks in advance.
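
    On Ubuntu 12.04 DM-MPIO is provided by the multipath-tools packages (multipath-tools-boot adds the initramfs pieces needed when booting from a multipathed LUN), and a generic configuration is usually enough to bring up a second path before adding any DS4500/RDAC-specific sections. A hedged starting point:

        sudo apt-get install multipath-tools multipath-tools-boot
        sudo multipath -ll            # lists the discovered paths and the resulting multipath maps
        sudo update-initramfs -u      # after editing /etc/multipath.conf so the boot image picks it up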

  • Procmail Postfix issue

    - by Blucreation
    Our server runs CentOS and uses Postfix: Nov 1 11:31:52 webserver postfix/smtpd[30424]: 822A91872F: client=unknown[5.133.168.42], sasl_method=PLAIN, [email protected] Nov 1 11:31:52 webserver postfix/cleanup[30427]: 822A91872F: message-id=<[email protected]> Nov 1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: from=<[email protected]>, size=620, nrcpt=1 (queue active) Nov 1 11:31:52 webserver postfix/virtual[30505]: 822A91872F: to=<[email protected]>, relay=virtual, delay=0.12, delays=0.12/0/0/0, dsn=2.0.0, status=sent (delivered to maildir) Nov 1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: removed Nov 1 11:31:52 webserver postfix/smtpd[30424]: disconnect from unknown[5.133.168.42] I have this in my /etc/postfix/main.cf: mailbox_command = /usr/bin/procmail -a "$EXTENSION" My /etc/procmailrc contains: PATH="/usr/bin" SHELL="/bin/bash" LOGFILE="/var/log/procmail.log" VERBOSE="YES" LOG="#TEST#" I don't think procmail is picking up my procmailrc, as nothing ever gets logged for normal emails. If I type this: procmail DEFAULT=/dev/null VERBOSE=yes LOGFILE=/var/log/procmail.log /dev/null </dev/null I get entries in my log file, so I know procmail is working. Am I doing something wrong? Am I missing something? I eventually want my rule to call a PHP script only if the subject contains "SUPPORT TICKET" and the To address is "[email protected]", but that's once I have this issue solved.
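
    One detail in the log above is relay=virtual ("delivered to maildir"): the message was handled by Postfix's virtual(8) delivery agent, and mailbox_command/procmail only applies to the local(8) transport, which would explain why /etc/procmailrc never runs for normal mail. If delivery for that address can be routed through local delivery (or procmail invoked another way), the eventual rule could look roughly like the sketch below; the script path is an assumption and the To pattern is left incomplete on purpose:

        # hypothetical recipe: hand matching messages to a PHP script
        :0
        * ^To:.*support@
        * ^Subject:.*SUPPORT TICKET
        | /usr/bin/php /path/to/ticket-script.php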

  • Postfix: errors using Google Apps for SMTP

    - by Zed Said
    I am using postfix and need to send the mail using google apps smtp. I am getting errors after I thought I had set everything up correctly: May 11 09:50:57 zedsaid postfix/error[22214]: 00E009693FB: to=<[email protected]>, relay=none, delay=2466, delays=2462/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available) May 11 09:50:57 zedsaid postfix/error[22213]: 0ACB36D1B94: to=<[email protected]>, relay=none, delay=2486, delays=2482/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available) May 11 09:50:57 zedsaid postfix/error[22232]: 067379693D3: to=<[email protected]>, relay=none, delay=2421, delays=2417/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available) main.cf: # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters #smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem #smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = zedsaid.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = #relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all delay_warning_time = 4h smtpd_recipient_limit = 16 # how many error before back off. smtpd_soft_error_limit = 3 # how many max errors before blocking it. smtpd_hard_error_limit = 12 ## Gmail Relay relayhost = [smtp.gmail.com]:587 smtp_use_tls = yes smtp_sasl_auth_enable = yes smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_security_options = noanonymous smtp_sasl_tls_security_options = noanonymous smtp_sasl_mechanism_filter = login smtp_tls_eccert_file = smtp_tls_eckey_file = smtp_use_tls = yes smtp_enforce_tls = no smtp_tls_CAfile = /etc/postfix/cacert.pem smtpd_tls_received_header = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport debug_peer_list = smtp.gmail.com debug_peer_level = 3 What am I doing wrong?
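
    "SASL authentication failed ... no mechanism available" usually means the Cyrus SASL client plugins Postfix needs for PLAIN/LOGIN are not installed, rather than a problem in main.cf itself (the smtp_sasl_mechanism_filter = login line further narrows what can be negotiated). A hedged first step on a Debian/Ubuntu host, then flushing the deferred queue:

        sudo apt-get install libsasl2-modules
        sudo postmap /etc/postfix/sasl_passwd
        sudo postfix reload
        sudo postqueue -f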

  • TCP: Address already in use exception - possible causes for client port? NO PORT EXHAUSTION

    - by TomTom
    Hello, stupid problem. I get these from a client connecting to a server. Sadly, the setup is complicated, making debugging complex, and we have run out of options. The environment: a client/server system, both running on the same machine. The client is actually a service doing some database manipulation at specific times. The connection comes from C#, going through OleDb to an EasySoft JDBC driver to a custom-written JDBC server that then hosts logic in C++. Yeah, complex, but the third-party supplier decided to expose the extension mechanisms for their server through a JDBC interface. Not a lot can be done here ;) The symptom: at (ir)regular intervals we get an "Address already in use: connect" error from the JDBC driver. They seem to come from one particular service we run. Now, I did read all the stuff about port exhaustion. This is why we have a little tool running now that counts ports and their states every minute. Last time this happened, we had an astonishing 370 ports in use, with the count rising to about 900 AFTER the error. We already patched the registry (it is a Windows machine) to allow more than the standard 5000 client ports, but even then, we are far, far from that limit to start with. Which is why I am asking here. Anyone have an idea what ELSE could cause this? It is a Windows 2003 Server machine, 64-bit. The only other thing I can see that may cause it (but this functionality is supposedly disabled) is the Symantec Endpoint Protection that is installed on the server: being capable of acting as a firewall, it could possibly intercept network traffic. I don't want to open a can of worms by pointing to Symantec prematurely (if pointing to Symantec can ever be seen as such). So, anyone have an idea what else may be the cause? Thanks
