Search Results

Search found 23084 results on 924 pages for 'jeff main'.

  • therubyracer on Ubuntu install issues with ruby-2.0.0-p247

    - by Victor S
    I can't seem to get the therubyracer gem installed on Ubuntu; I get the following errors. Can anyone help?

        ERROR: Error installing therubyracer:
        ERROR: Failed to build gem native extension.
        /home/victorstan/.rvm/rubies/ruby-2.0.0-p247/bin/ruby extconf.rb
        checking for main() in -lpthread... yes
        creating Makefile
        make
        compiling constraints.cc
        compiling array.cc
        compiling value.cc
        compiling invocation.cc
        compiling primitive.cc
        compiling trycatch.cc
        compiling context.cc
        compiling exception.cc
        compiling template.cc
        compiling accessor.cc
        compiling object.cc
        compiling script.cc
        compiling external.cc
        compiling stack.cc
        compiling gc.cc
        compiling backref.cc
        compiling heap.cc
        compiling v8.cc
        compiling constants.cc
        compiling date.cc
        compiling function.cc
        compiling rr.cc
        compiling message.cc
        compiling init.cc
        compiling string.cc
        compiling handles.cc
        compiling signature.cc
        compiling locker.cc
        linking shared-object v8/init.so
        g++: /home/victorstan/.rvm/gems/ruby-2.0.0-p247@global/gems/libv8-3.11.8.17-x86_64-linux/vendor/v8/out/ia32.release/obj.target/tools/gyp/libv8_base.a: No such file or directory
        g++: /home/victorstan/.rvm/gems/ruby-2.0.0-p247@global/gems/libv8-3.11.8.17-x86_64-linux/vendor/v8/out/ia32.release/obj.target/tools/gyp/libv8_snapshot.a: No such file or directory
        make: *** [init.so] Error 1

    Update: it seems it's trying to use a 32-bit library instead of a 64-bit one. Has anyone else seen Ubuntu pull in the 32-bit build of a package even though the system is 64-bit Linux? Related: https://github.com/cowboyd/therubyracer/issues/262
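
    One thing that sometimes helps here (a sketch under assumptions, not a confirmed fix: the Ubuntu package name is a guess, and the --with-system-v8 build flag is assumed to be supported by this libv8 release) is rebuilding libv8 against a system-installed V8 so the gem never reaches for its bundled ia32 static libraries:

        # assumed package name and libv8 version; adjust to match your setup
        sudo apt-get install libv8-dev
        gem uninstall libv8 therubyracer
        gem install libv8 -v 3.11.8.17 -- --with-system-v8
        gem install therubyracer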

  • How to reset Bash on Mac OSX, .bash_profile corrupted and bash no longer works

    - by user1463172
    I am on a MacBook Pro running the latest version of Mountain Lion. I really need some help: I have somehow managed to damage my .bash_profile (I think), so that every time I open the terminal I get the errors listed below.

        -bash: export: `/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/go/bin': not a valid identifier
        -bash: export: `/Users/rob/Applications/sbt/bin:': not a valid identifier
        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        -bash: tar: command not found
        -bash: grep: command not found
        -bash: cat: command not found
        -bash: find: command not found

    I am not sure what has happened; I have no sudo, cd or any normal commands. The only way I have been able to get to any of the main directories is through Finder's "Go to Folder" command, and even then I can't find the file. To top it all off, I think I created a file that might be causing the issue: I wanted to edit the .bash_profile, so I typed

        sudo nano ./bash_profile

    which opened a new file in nano that I think was then saved. After this I opened the real .bash_profile to add in the path for node.js. If I can get to the .bash_profile I think I can get it back on track, but I can't find it. Should I reinstall Bash? If so, how would I do that on a Mac? I tried using brew install bash, to which I get

        -bash: brew: command not found

    I am really stuck; if anyone can help I would really appreciate it. Many thanks.
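
    For context, "export: ... not a valid identifier" is what Bash prints when an export line has lost its variable name (for example export $PATH=... or spaces around the =), after which PATH stops resolving and every external command "disappears". A minimal sketch of a well-formed PATH line, plus one way to reach the file while PATH is broken by calling the editor with an absolute path (the exact paths are assumptions):

        # a valid export: variable name, '=', no spaces, old value preserved
        export PATH="/usr/local/bin:/usr/local/go/bin:$HOME/Applications/sbt/bin:$PATH"

        # built-ins still work, and absolute paths bypass the broken PATH
        /usr/bin/nano ~/.bash_profile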

  • Configure Postfix to send/relay emails to Gmail (smtp.gmail.com) via port 587

    - by tom smith
    Hi. I'm using CentOS 5.4 with Postfix. I can do a

        mail [email protected]
        subject: blah
        test
        .
        Cc:

    and the message gets sent to Gmail, but it lands in the spam folder, which is to be expected. My goal is to be able to generate email messages and have them appear in the regular Inbox. As I understand Postfix/Gmail, it's possible to configure Postfix to send/relay mail via an authenticated, valid user over port 587, after which the mail would no longer be seen as spam. I've tried a number of parameters based on different sites and articles from the net, with no luck; some of the articles actually seem to conflict with others. I've also looked over the Stack Exchange postings on this, but I'm still missing something, and I've talked to a few people on IRC (CentOS/Postfix) and still have questions. So I'm turning to Server Fault once again!

    If there's someone who has managed to accomplish this, would you mind posting your main.cf, sasl-passwd, and any other conf files you use to get this working? If I can review your config files, I can hopefully see where I've screwed up and figure out how to correct the issue. Thanks for reading this, and for any help/pointers you provide!

    PS: if there is a Stack Exchange posting that speaks to this that I may have missed, feel free to point it out to me! -tom
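
    For reference, the usual shape of this setup (a sketch only — the values, the CA bundle path and the Gmail credentials below are assumptions/placeholders, not a tested config for this server) is a relayhost of smtp.gmail.com on port 587 with SASL authentication and TLS:

        # assumed values; run as root, then watch /var/log/maillog
        postconf -e 'relayhost = [smtp.gmail.com]:587'
        postconf -e 'smtp_sasl_auth_enable = yes'
        postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
        postconf -e 'smtp_sasl_security_options = noanonymous'
        postconf -e 'smtp_use_tls = yes'
        postconf -e 'smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt'
        echo '[smtp.gmail.com]:587 you@gmail.com:your-password' > /etc/postfix/sasl_passwd
        postmap /etc/postfix/sasl_passwd && chmod 600 /etc/postfix/sasl_passwd*
        service postfix restart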

  • Fix ACPI DSDT of Amilo Pa 1538

    - by kayahr
    I have a Fujitsu-Siemens Amilo Pa 1538 laptop and installed Ubuntu 10.04 on it (kernel 2.6.32). The main problem is that the fans of the notebook are always on, even when the temperature is low. Another problem is that the display brightness cannot be adjusted. For some years I have used Dell laptops and never experienced any ACPI problems, so I thought fixing ACPI tables was no longer needed, but now I have this crap of a laptop and I think I have to do it. Unfortunately I need some help repairing the DSDT. The dsdt.dat and dsdt.dsl of the laptop can be found here: http://www.ailis.de/~k/permdata/20100420/dsdt/

    Compiling the DSDT gives the following output:

        # iasl -sa dsdt.dsl
        Intel ACPI Component Architecture
        ASL Optimizing Compiler version 20090521 [Jun 30 2009]
        Copyright (C) 2000 - 2009 Intel Corporation
        Supports ACPI Specification Revision 3.0a

        dsdt.dsl    81:  Method (\_WAK, 1, NotSerialized)
        Warning  1080 -  ^ Reserved method must return a value (_WAK)
        dsdt.dsl   207:  Method (_L10, 0, NotSerialized)
        Warning  1087 -  ^ Not all control paths return a value (_L10)
        dsdt.dsl  2861:  Method (NVIF, 3, NotSerialized)
        Warning  1087 -  ^ Not all control paths return a value (NVIF)
        dsdt.dsl  4551:  Store (\_SB.PCI0.LPC0.PMRD (0xFA), Local0)
        Warning  1099 -  Statement is unreachable ^

        ASL Input:  dsdt.dsl - 4962 lines, 162828 bytes, 2300 keywords
        AML Output: dsdt.aml - 17627 bytes, 591 named objects, 1709 executable opcodes
        Compilation complete. 0 Errors, 4 Warnings, 0 Remarks, 519 Optimizations

    Is anyone here with DSDT experience able to help me fix the DSDT?

  • OTRS upgrade 3.0 to 3.1 fails

    - by Valentin0S
    Today I started upgrading OTRS from version 2.3 to 2.4, 2.4 to 3.0, and 3.0 to 3.1. Everything went smoothly except the upgrade from 3.0 to 3.1. OTRS provides a few Perl scripts which make the upgrade easier, and I used these scripts for each upgrade step. The upgrade from 3.0 to 3.1 fails after running the upgrade script scripts/DBUpdate-to-3.1.pl. The error is:

        root@tickets:/opt/otrs# su - otrs
        $ scripts/DBUpdate-to-3.1.pl
        Migration started...
        Step 1 of 24: Refresh configuration cache...
        If you see warnings about 'Subroutine Load redefined', that's fine, no need to worry!
        Subroutine Load redefined at /opt/otrs/Kernel/Config/Files/ZZZAAuto.pm line 5.
        Subroutine Load redefined at /opt/otrs/Kernel/Config/Files/ZZZAuto.pm line 4.
        done.
        Step 2 of 24: Check framework version... done.
        Step 3 of 24: Creating DynamicField tables (if necessary)... done.
        DBD::mysql::db do failed: Cannot add or update a child row: a foreign key constraint fails
          (`pp_otrs`.`dynamic_field`, CONSTRAINT `FK_dynamic_field_create_by_id` FOREIGN KEY (`create_by`) REFERENCES `users` (`id`))
          at /opt/otrs-3.1.10/Kernel/System/DB.pm line 478.
        ERROR: OTRS-DBUpdate-to-3.1-10 Perl: 5.14.2 OS: linux Time: Wed Sep 5 15:36:20 2012
        Message: Cannot add or update a child row: a foreign key constraint fails
          (`pp_otrs`.`dynamic_field`, CONSTRAINT `FK_dynamic_field_create_by_id` FOREIGN KEY (`create_by`) REFERENCES `users` (`id`)),
          SQL: 'INSERT INTO dynamic_field (name, label, field_order, field_type, object_type, config, valid_id, create_time, create_by, change_time, change_by)
                VALUES (?, ?, ?, 'Text', 'Ticket', '--- {} ', 1, '2012-09-05 15:36:20', 1, '2012-09-05 15:36:20', 1)'
        Traceback (20405):
          Module: main::_DynamicFieldCreation (v1.85) Line: 466
          Module: scripts/DBUpdate-to-3.1.pl (v1.85) Line: 95
        Could not create new DynamicField TicketFreeKey1 at scripts/DBUpdate-to-3.1.pl line 477.
        Step 4 of 24: Create new dynamic fields for free fields (text, key, date)...
        $

    Did anyone else face the same issue? Thanks in advance.
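
    The foreign-key message says the row being inserted uses create_by = 1, and the constraint requires a matching id in the users table, so one quick sanity check is whether that user still exists in the migrated database. A hedged one-liner (database name taken from the error above; credentials and column names assumed from a standard OTRS schema):

        mysql -u otrs -p pp_otrs -e "SELECT id, login, valid_id FROM users WHERE id = 1;"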

  • postfix + opendkim not signing correctly. how to debug this?

    - by Dean Hiller
    EDIT: I did get a little further, but every post my searches turn up says the permissions are wrong or tells me to regenerate the key. I have fixed the permissions to 644 and made the key owned by the opendkim user, and I keep regenerating the key, but it is not helping. My latest error is now this:

        Apr 21 21:19:12 Sniffy opendkim[8729]: BB5BF3AA66: dkim_eom(): resource unavailable: d2i_PrivateKey_bio() failed
        Apr 21 21:19:12 Sniffy postfix/cleanup[8627]: BB5BF3AA66: milter-reject: END-OF-MESSAGE from localhost[127.0.0.1]:
            4.7.0 resource unavailable; from=<[email protected]> to=<[email protected]> proto=SMTP helo=<abcs.com>

    I am looking for a way to simply debug this (I don't necessarily need the answer, but a way to get logs out of opendkim would be good). If I stop opendkim, I see postfix log "connection refused", which is good, but when I send mail with opendkim started, I see no logs whatsoever. I even added the "LogWhy Yes" line to my opendkim.conf file and still see nothing there. Since I see opendkim running under the opendkim user, I changed the owner of /etc/opendkim/*, /etc/opendkim and /etc/opendkim.conf all to the opendkim user. I am running on Ubuntu. My opendkim.conf file is:

        # Log to syslog
        Syslog                  yes
        # Required to use local socket with MTAs that access the socket as a non-
        # privileged user (e.g. Postfix)
        UMask                   002
        # Sign for example.com with key in /etc/mail/dkim.key using
        # selector '2007' (e.g. 2007._domainkey.example.com)
        #Domain                 example.com
        Domain                  sniffyapp.com
        #KeyFile                /etc/mail/dkim.key
        KeyFile                 /etc/opendkim/keys/sniffyapp.com/default.private
        #Selector               2007
        Selector                default
        # Commonly-used options; the commented-out versions show the defaults.
        #Canonicalization       simple
        Mode                    sv
        #SubDomains             no
        #ADSPDiscard            no
        Socket                  inet:8891:localhost
        ExternalIgnoreList      refile:/etc/opendkim/TrustedHosts
        InternalHosts           refile:/etc/opendkim/TrustedHosts
        LogWhy                  Yes

    And of course I have these lines added to main.cf in Postfix:

        smtpd_milters = inet:127.0.0.1:8891
        non_smtpd_milters = $smtpd_milters
        milter_default_action = accept
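
    One debugging avenue (a sketch; the domain, selector and key path come from the config above, everything else is an assumption) is to let opendkim and OpenSSL inspect the key directly — d2i_PrivateKey_bio() failing usually means the key file is empty, in an unexpected format, or unreadable to the signing process rather than a signing-logic problem:

        opendkim-testkey -d sniffyapp.com -s default -vvv                                   # checks the key/DNS pairing and complains loudly
        openssl rsa -in /etc/opendkim/keys/sniffyapp.com/default.private -check -noout      # can OpenSSL itself parse the PEM?
        sudo -u opendkim head -c1 /etc/opendkim/keys/sniffyapp.com/default.private          # is it readable as the opendkim user?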

  • Symfony2 app in subdirectory nginx

    - by Frido
    I'm trying to set up a Symfony2 app in a subdirectory of our server.

        Webserver: nginx 1.1.6 + PHP-FPM
        OS: Gentoo

    My target is to get the app working from a subdirectory: subdomain.xy.domain.tld/tool. My nginx config looks like this:

        server {
            listen 80;
            server_name subdomain.xy.domain.tld;

            error_log  /var/log/nginx/subdomain.xy.error.log info;
            access_log /var/log/nginx/subdomain.xy.access.log main;

            location /tool {
                root /var/www/vhosts/subdomain.xy/tool/web;
                index app.php;

                location ~ \.php($|/) {
                    include fastcgi_params;
                    set $script $uri;
                    set $path_info "";
                    if ($uri ~ "^(.+\.php)($|/)") {
                        set $script $1;
                    }
                    if ($uri ~ "^(.+\.php)(/.+)") {
                        set $script $1;
                        set $path_info $2;
                    }
                    fastcgi_param SCRIPT_FILENAME /var/www/vhosts/subdomain.xy/tool/web$fastcgi_script_name;
                    #fastcgi_intercept_errors on;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_param SCRIPT_NAME $script;
                    fastcgi_param PATH_INFO $path_info;
                }
            }
        }

    I really have no clue how to do this. I've searched the web for hours and tried dozens of different configs, but nothing worked. I hope someone has an idea. =)

  • Nginx case-insensitive reverse proxy rewrites

    - by BrianM
    I'm looking to setup an nginx reverse proxy to make some upcoming server moves and load balanced implementations much easier within our apps. Since our servers are all IIS case sensitivity hasn't been an issue, but now with nginx it's becoming one for me. I am simply looking to do a rewrite regardless of case.

    Infrastructure notes:

        All backend servers are IIS
        Most services are WCF services
        I am trying to simplify the URLs so I can move services around as we continue to build out

    I can't set my location to case insensitive due to the following error:

        nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location,
        or inside "if" statement, or inside "limit_except" block in /etc/nginx/sites-enabled/test.conf:101

    The main part of my conf file where I am trying to handle the rewrite is as follows:

        location /svc_test {
            proxy_set_header x-real-ip $remote_addr;
            proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
            proxy_set_header host $http_host;
            proxy_pass http://backend/serviceSite/WFCService.svc;
        }

        location ~* /test {
            rewrite ^/(.*)/$ /svc_test/$1 last;
        }

    It's the /test location that I can't get figured out. If I call http://nginxserver/svc_test/help I get the WCF help page to display correctly and I can make all available REST calls. This HAS to be a boneheaded regex issue on my part, but I have tried several variations and all I can get are 404 or 500 errors from nginx. This is NOT rocket science so can someone point me in the right direction so I can look like an idiot and just move on?

  • Couldn't upload files to Sharepoint site while passing through Squid Proxy

    - by Ecio
    Hi all, we have this issue: one of our employees is collaborating with a supplier, and he needs to upload documents to a SharePoint site hosted on the supplier's main site. In our environment we use a Squid proxy to let people browse the net (we have NTLM authentication, and users authenticate transparently when using IE and Firefox). It seems that this specific SharePoint site uses Integrated Windows Authentication only, and according to some research on the net this can have trouble with proxies.

    More specifically, we have tried two Squid versions:

        With Squid 3.0 we are unable to log in to the site (the browser loads an empty page).
        With Squid 2.7 (which supports "connection pinning") we are able to log in to the site and move between the different sections, BUT when we try to upload a file bigger than a couple of kilobytes (e.g. 10 KB) the browser loads an error page (I think it's a 401 Unauthorized, but I must verify it).

    We've tried changing a couple of Squid options (in 2.7); what we get is that when you try to upload the file you get an authentication box (just like the initial login) and it refuses to go on even if you enter the same authentication credentials. What's really strange is that when you try to upload a small file (e.g. a 1 KB text or binary file) the upload succeeds. I initially thought that maybe something was misconfigured on their SharePoint site, but I've also tried this site: www.xsolive.com (a SharePoint 2007 demo site) and I've experienced the same problem.

    Has any of you experienced such behaviour? Thanks! Of course we've suggested that the supplier also activate Basic+SSL, and we're waiting for their reply.

  • ORA-12705: Cannot access NLS data files or invalid environment specified

    - by Mohit Nanda
    I am getting this error while trying to create a connection pool on my Oracle database (Oracle 10gR2):

        java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
        ORA-12705: Cannot access NLS data files or invalid environment specified

    I am able to connect to the database with the sqlplus and iSQL*Plus clients, but when I try to connect using this Java program I get the error just as the connection pool is being initialised, and the pool never comes up. Can someone please help me resolve it?

        DB version: Oracle 10.2.0.1
        OS: RHEL 4.0

    Here is a bare-bones Java program which throws this error while connecting to my database:

        import java.sql.*;

        public class connect {
            public static void main(String[] args) {
                Connection con = null;
                CallableStatement cstmt = null;
                String url = "jdbc:oracle:thin:@hostname:1521:oracle";
                String userName = "username";
                String password = "password";
                try {
                    System.out.println("Registering Driver ...");
                    DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
                    System.out.println("Creating Connection ...");
                    con = DriverManager.getConnection(url, userName, password);
                    System.out.println("Success!");
                } catch (Exception ex) {
                    ex.printStackTrace(System.err);
                } finally {
                    if (cstmt != null) try { cstmt.close(); } catch (Exception _ex) {}
                    if (con != null) try { con.close(); } catch (Exception _ex) {}
                }
            }
        }
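
    With the thin JDBC driver, ORA-12705 at connect time usually points at the client-side locale/NLS environment rather than the database itself. A couple of hedged checks from the shell on the RHEL client (the NLS_LANG value and JVM locale below are examples only, not values confirmed for this site):

        echo "$NLS_LANG"                                               # an unset or malformed value here commonly triggers ORA-12705
        export NLS_LANG=AMERICAN_AMERICA.UTF8                          # example value; pick one valid for your site
        java -Duser.language=en -Duser.country=US connect              # force a locale the Oracle driver recognises, using the test program above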

  • phpMyAdmin: The additional features for working with linked tables have been deactivated.

    - by The Disintegrator
    I'm getting this error on the main page of phpMyAdmin (version 3.2.1deb1):

        The additional features for working with linked tables have been deactivated. To find out why click here.

    When I click the link I get this report:

        $cfg['Servers'][$i]['pmadb']            ... OK
        $cfg['Servers'][$i]['relation']         ... not OK [ Documentation ]
        General relation features: Disabled
        $cfg['Servers'][$i]['table_info']       ... not OK [ Documentation ]
        Display Features: Disabled
        $cfg['Servers'][$i]['table_coords']     ... not OK [ Documentation ]
        $cfg['Servers'][$i]['pdf_pages']        ... not OK [ Documentation ]
        Creation of PDFs: Disabled
        $cfg['Servers'][$i]['column_info']      ... not OK [ Documentation ]
        Displaying Column Comments: Disabled
        Bookmarked SQL query: Disabled
        Browser transformation: Disabled
        $cfg['Servers'][$i]['history']          ... not OK [ Documentation ]
        SQL history: Disabled
        $cfg['Servers'][$i]['designer_coords']  ... not OK [ Documentation ]
        Designer: Disabled

    I already used the script to create the tables, I assigned the permissions to the pma user, and everything is set in /etc/phpmyadmin/conf.inc.php, but it's still not working. The tables are empty; I assume they should contain something. I'm interested in the relations and history features. Obviously I have read the documentation. Maybe something else is unsetting those values? Any thoughts?

  • sendmail: Error during delivery. Service not available, closing transmission channel

    - by user2810332
    I have a module in my system that triggers an email and sends it to a user. The email is sent when a product in my system is about to expire. I tested the whole module on localhost and there was no problem with it. Then I finally moved the module to my server, but there it gives this error:

        sendmail: Error during delivery: Service not available, closing transmission channel.

    It also creates a text file on my desktop that contains information like this:

        command line      : C:\wamp\sendmail\sendmail.exe -t -i
        executable        : sendmail.exe
        exec. date/time   : 2011-06-18 01:10
        compiled with     : Delphi 2006/07
        madExcept version : 3.0l
        callstack crc     : $fecf9b34, $5562b2fa, $5562b2fa
        exception number  : 1
        exception class   : EIdSMTPReplyError
        exception message : Service not available, closing transmission channel.

        main thread ($15b0):
        0045918a +003e sendmail.exe IdReplySMTP      501   +1 TIdReplySMTP.RaiseReplyError
        0043ff28 +0008 sendmail.exe IdTCPConnection  576   +0 TIdTCPConnection.RaiseExceptionForLastCmdResult
        004402f4 +003c sendmail.exe IdTCPConnection  751  +10 TIdTCPConnection.CheckResponse
        0043feba +002a sendmail.exe IdTCPConnection  565   +2 TIdTCPConnection.GetResponse
        004403fd +002d sendmail.exe IdTCPConnection  788   +4 TIdTCPConnection.GetResponse
        0045ab97 +0033 sendmail.exe IdSMTP           375   +4 TIdSMTP.Connect
        004b5f14 +1060 sendmail.exe sendmail         808 +326 initialization
        77013675 +0010 kernel32.dll BaseThreadInitThunk

        thread $cf8:
        77a400e6 +0e ntdll.dll NtWaitForMultipleObjects
        77013675 +10 kernel32.dll BaseThreadInitThunk

        thread $1088:
        77a41ecf +0b ntdll.dll NtWaitForWorkViaWorkerFactory
        77013675 +10 kernel32.dll BaseThreadInitThunk

    May I know what the problem behind this error is? Is it something like a firewall on the server blocking my sendmail.exe, or something else? FYI, I'm using WAMP and sendmail.exe to send the email. This is my first time seeing an error like this, and I need an explanation. Thank you.

  • Strange ZFS hidden filesystem problem

    - by RandomInsano
    Half of my ZFS filesystems are hidden in zfs-fuse. Here's my story.

    I love ZFS. I used it for about six months on FreeBSD, but after it kept crashing the kernel under heavy inter-filesystem I/O load, I tried switching to Solaris 5.10. That was good, but when I attempted to import my version-13 pool into its version-4 ZFS, there were some hefty problems; it may have tried to correct the filesystem definitions, I don't know. Since that version wasn't compatible with my pool, I've now switched to Ubuntu Server 10.04. That version more than supports my pool's version, but I can only see half of my filesystems.

    The filesystems I can see are the same ones Solaris could see. Despite the other filesystems not being present in the output of 'zfs list', I can still set properties on them and even mount them and read and write files; they just plain don't show up in 'zfs list'. I've mounted the major ones, but I'm not sure what other filesystems there are anymore (there are about eight I can't see).

    Does anyone have any idea what the heck is going on? I think I might try booting back into FreeBSD 8 (I still have the main boot drive lying around for that) and see if at least it is able to view the filesystems. I've also run a scrub while in Linux, and it found no errors with any of the data. Oddly, the DMA read errors which caused problems with ZFS on FreeBSD are reported by Linux, but zfs-fuse doesn't find an error. That's a topic for another post, however.
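
    A couple of hedged zfs-fuse checks that can show whether the missing datasets are really gone or just not being listed (the pool name below is a placeholder):

        zfs list -t filesystem,volume,snapshot       # ask for every dataset type explicitly
        zfs get -r mountpoint,mounted tank           # a recursive property walk straight from the pool
        zpool history tank                           # the pool's own record of every dataset ever created or destroyed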

  • Drop outs when accessing share by DFS name.

    - by Stephen Woolhead
    I have a strange problem — aren't they all! I have a DFS root \\domain\files\vms; it has a single target on a different server than the namespace. I can copy a test file set from the target directly via \\server\vms$\testfiles and all is well: the files copy fine, and I have repeated these tests many times. If I try to copy the files from the DFS root, I get big pauses in the network traffic — about 50 seconds every couple of minutes, during which all traffic for the copy just stops. If I start another copy between the same two machines during this pause, it starts copying fine, so I know it's not an issue with the disks on the server.

    Every once in a while the copy will fail: no errors, the progress bar just zips all the way to 100% and the copy dialog closes. Checking the target folder shows the copy is incomplete. I've moved the LUN to another server and had the same problem. The servers are all 2008 R2; the clients are Vista x64, Windows 7 x64 and 2008 R2, and all have the same problem. Has anyone got any ideas? Cheers, Stephen

    More information: I've been running a NetMon trace on the connection when the file copy fails, and what seems to stand out is that when opening a file for which the copy completes, the SMB command looks like this:

        SMB2: C CREATE (0x5), Name=Training\PDC2008\BB34 Live Services Notifications, Awareness, and Communications.wmv@#422082,
              Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 245376
        SMB2: R CREATE (0x5), Context=MxAc, Context=RqLs, Context=DHnQ, Context=QFid, FID=0xFFFFFFFF00000015, Mid = 245376

    But for the last file, when the copy dialog closes, it looks like this:

        SMB2: C CREATE (0x5), Name=gt\files\Media\Training\PDC2008\BB36 FAST Building Search-Driven Portals with Microsoft Office
              SharePoint Server 2007 and Microsoft Silverlight.wmv@#859374, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 77
        SMB2: R , Mid = 77 - NT Status: System - Error, Code = (58) STATUS_OBJECT_PATH_NOT_FOUND

    The main difference seems to be in the name: one is relative to the open file share, while the other has gained the gt\files\Media prefix, which is the name of the DFS target. These failures are always preceded by a logoff and log back on of the SMB target. Might have to bump this one to PSS.

  • postfix sasl "cannot connect to saslauthd server: No such file or directory"

    - by innotune
    I am trying to set up Postfix with SMTP authentication, and I want to use /etc/shadow as my realm. Unfortunately I get a "generic failure" when I try to authenticate:

        # nc localhost 25
        220 mail.foo ESMTP Postfix
        AUTH PLAIN _base_64_encoded_user_name_and_password_
        535 5.7.8 Error: authentication failed: generic failure

    In the mail.warn logfile I get the following entries:

        Oct 8 10:43:40 mail postfix/smtpd[1060]: warning: SASL authentication failure: cannot connect to saslauthd server: No such file or directory
        Oct 8 10:43:40 mail postfix/smtpd[1060]: warning: SASL authentication failure: Password verification failed
        Oct 8 10:43:40 mail postfix/smtpd[1060]: warning: _ip_: SASL PLAIN authentication failed: generic failure

    However, the SASL setup itself seems to be fine:

        $ testsaslauthd -u _user_ -p _pass_
        0: OK "Success."

    I added smtpd_sasl_auth_enable = yes to main.cf. This is my smtpd.conf:

        $ cat /etc/postfix/sasl/smtpd.conf
        pwcheck_method: saslauthd
        mech_list: PLAIN LOGIN
        saslauthd_path: /var/run/saslauthd/mux
        autotransition: true

    I tried this conf with and without the last two lines. I'm running Debian stable. How can Postfix find and connect to the saslauthd server?

    Edit: I'm not sure whether Postfix runs in a chroot. The master.cf looks like this: http://pastebin.com/Fz38TcUP. saslauthd is located in /usr/sbin:

        $ which saslauthd
        /usr/sbin/saslauthd

    The EHLO gets this response:

        EHLO _server_name_
        250-_server_name_
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
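
    On Debian the stock master.cf runs smtpd chrooted to /var/spool/postfix, in which case a socket at /var/run/saslauthd/mux is simply invisible to it and you get exactly this "No such file or directory" warning. A hedged way to confirm that and apply the Debian workaround documented for sasl2-bin (paths and package defaults are assumptions for this install):

        ls -l /var/run/saslauthd/mux                       # the socket exists outside the chroot...
        ls -ld /var/spool/postfix/var/run/saslauthd        # ...but probably not inside it
        # if the second listing fails: move the socket inside the chroot and let postfix reach it
        sudo sed -i 's|^OPTIONS=.*|OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"|' /etc/default/saslauthd
        sudo dpkg-statoverride --force --add root sasl 710 /var/spool/postfix/var/run/saslauthd
        sudo adduser postfix sasl
        sudo service saslauthd restart && sudo service postfix restart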

  • Filesystems for a webserver with a SATA and a solid-state disk

    - by Jorisslob
    We have just ordered a new webserver with a 120 GB solid-state disk and a SATA disk. I am trying to plan ahead for what sort of filesystem to use. This system will be running Linux and Apache/Tomcat to host Java services. The main service is a system where people can upload reasonably large files (on the order of 100 MB: images, image stacks and video), which people will be able to annotate and which will be sent to a database server when annotation is complete.

    So far, I plan to put most of the operating system's utility programs on the SSD and put the large media files there as well. The SATA disk will hold the less volatile data like Apache, Tomcat and the servlets. For filesystems I have considered going with the stable ext3, because I hear it is best supported; the downside seems to be that it is not the ideal choice for large files. That is why I am leaning towards using XFS for the SSD and ext3 for the SATA disk.

    My questions are: 1) Does this sound like a reasonable setup? 2) What filesystems would you recommend for the SSD and for the SATA disk? Thanks

  • Can't access Dell BMC IPMI over IP

    - by Bobb
    I have a Dell R210 with the iDRAC BMC (the new name for the old BMC), which is an on-board feature with a shared NIC (I believe). The server is in colocation and I didn't set it up before it was sent there, so I asked the remote hands to set up IPMI over IP. They enabled it and set the IP and everything; the IP is different from the main box IP. Also, the box is cabled to NIC1, and the BMC is supposed to share it (am I right?). I can see the new IP in the OpenManage Server Administrator installed on the box.

    I tried the Supermicro IPMI tool and I tried the Dell ipmish.exe command like this:

        ipmish -ip xxx -u root -p calvin sysinfo

    which gives:

        BMC is not detected

    What could be wrong? Is there a diagnostics tool I can try? It must be something obvious; I have just never used things like this before.

    P.S. I read something about encryption keys in the Dell docs, but I understand that is for encrypted IPMI 2.0, and ipmish can use IPMI 1.5 without encryption.

  • Flask and lighttpd with FastCGI: can't get it to work

    - by kurojishi
    I'm trying to deploy a simple Flask script to a lighttpd server with FastCGI. This is the configuration file for lighttpd, built following the Flask documentation (http://flask.pocoo.org/docs/deploying/fastcgi/#configuring-lighttpd):

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
        )

        server.document-root = "/var/www"
        server.upload-dirs   = ( "/var/cache/lighttpd/uploads" )
        server.errorlog      = "/var/log/lighttpd/error.log"
        server.pid-file      = "/var/run/lighttpd.pid"
        server.username      = "www-data"
        server.groupname     = "www-data"

        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )
        url.access-deny  = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

        var.home_dir   = "/var/lib/lighttpd"
        var.socket_dir = home_dir + "sockets/"

        ## Use ipv6 if available
        #include_shell "/usr/share/lighttpd/use-ipv6.pl"

        dir-listing.encoding = "utf-8"
        server.dir-listing   = "enable"

        compress.cache-dir = "/var/cache/lighttpd/compress/"
        compress.filetype  = ( "application/x-javascript", "text/css", "text/html", "text/plain" )

        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

        fastcgi.server = ("weibo/callback.fcgi" => ((
            "socket"      => "/tmp/weibocrawler-fcgi.sock",
            "bin-path"    => "/var/www/weibo/callback.fcgi",
            "check-local" => "disable",
            "max-procs"   => 1
        )))

        url.rewrite-once = (
            "^(/weibo($|/.*))$" => "$1",
            "^(/.*)$" => "weibo/callback.fcgi$1"

    And this is the script I'm trying to run:

        #!/home/nrl/kuro/weiboenv/bin/python
        from flup.server.fcgi import WSGIServer
        from callback import app

        if __name__ == '__main__':
            WSGIServer(application, bindAddress='/tmp/weibocrawler-fcgi.sock').run()

    But testing the configuration file I get this error:

        2013-07-02 17:15:42: (configfile.c.912) source: lighttpd.conf.new line: 52 pos: 1
        parser failed somehow near here: weibo/callback.fcgi$1

    When I remove the url.rewrite-once block I get these errors in the log, even though the daemon starts:

        2013-07-02 16:25:53: (log.c.166) server started
        2013-07-02 16:25:53: (mod_fastcgi.c.1104) the fastcgi-backend fcgi.py failed to start:
        2013-07-02 16:25:53: (mod_fastcgi.c.1108) child exited with status 2 fcgi.py
        2013-07-02 16:25:53: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
        2013-07-02 16:25:53: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
        2013-07-02 16:25:53: (server.c.938) Configuration of plugins failed. Going down.

  • OpenVPN iptables error

    - by Mook
    I mean a real Linux newbie here... Please help me configure my OpenVPN through iptables. My main goal is to open ports for regular browsing (80, 443), email (110, 25), etc., just like an ISP does, but to block P2P traffic, so I need to open only a few ports. Here is my iptables config:

        # Flush all current rules from iptables
        #
        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        #
        # Allow SSH connections on tcp port 22 (or whatever port you want to use)
        #
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        #
        # Set default policies for INPUT, FORWARD and OUTPUT chains
        #
        iptables -P INPUT DROP   # using DROP for INPUT is not always recommended. Change to ACCEPT if you prefer.
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        #
        # Set access for localhost
        #
        iptables -A INPUT -i lo -j ACCEPT
        #
        # Accept packets belonging to established and related connections
        #
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        #
        # Accept connections on 1194 for vpn access from clients
        # Take note that the rule says "UDP", and ensure that your OpenVPN server.conf says UDP too
        #
        iptables -A INPUT -p udp --dport 1194 -j ACCEPT
        #
        # Apply forwarding for OpenVPN Tunneling
        #
        iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT   # 10.8.0.0 ? Check your OpenVPN server.conf to be sure
        iptables -A FORWARD -j REJECT
        iptables -t nat -A POSTROUTING -o venet0 -j SNAT --to-source 100.200.255.256   # Use your OpenVPN server's real external IP here
        #
        # Enable forwarding
        #
        echo 1 > /proc/sys/net/ipv4/ip_forward

        iptables -A INPUT -p tcp --dport 25 -j ACCEPT
        iptables -A INPUT -p tcp --dport 26 -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 110 -j ACCEPT
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        iptables -L -v

    But when I connect to my VPN, I can't browse, and I also get request timeouts when pinging Yahoo, etc.
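
    A few hedged checks for the "connected but can't browse" symptom (the interface name and tunnel subnet are taken from the rules above; note that the placeholder 100.200.255.256 is not a valid address, so the real external IP has to go into the SNAT rule):

        cat /proc/sys/net/ipv4/ip_forward        # should print 1 after the script has run
        iptables -t nat -L POSTROUTING -n -v     # the SNAT counters should climb while a client browses
        tcpdump -ni venet0 icmp                  # do the client's pings actually leave with the server's source IP?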

  • Vagrant synced folders aren't case sensitive

    - by lvmisooners
    For our web stack, we are moving from a Windows server to CentOS. To facilitate development, we're using Vagrant to run CentOS VMs locally, and we're using Vagrant's synced folders feature to let devs use their favourite IDEs on the host machine. But we're finding that one key feature is missing from this setup: filesystem case sensitivity.

    The synced folder inside the VM apparently takes on the properties of the host's filesystem, so if I'm developing from a Windows machine, or even OS X, the filesystem isn't case-sensitive. This is a big issue, as our production servers will be pure CentOS and their filesystems will be case-sensitive. Case sensitivity is one of the main reasons we wanted a local VM: we want to prevent "it works on my machine!"

    Some workarounds we've considered or tried:

        Use lsyncd to sync from the Vagrant share to a location inside the VM that is case-sensitive — but updating files on the host doesn't seem to generate the events in the VM that lsyncd listens for.
        Make a case-sensitive partition on the host — doesn't work for Windows.
        Use Samba — this may be an option, but we haven't vetted it yet.

    Is there a better way? Note that we have developers using Windows, OS X, and Ubuntu, and the solution needs to work everywhere.

  • Audio Static/Interference regardless of audio interface?

    - by Tom
    I currently run a media center/server on a Lubuntu machine. The machine specs:

        Core 2 Duo Extreme
        EVGA SLI 680i motherboard
        2 GB DDR2 RAM
        3 hard drives, no RAID - WD Caviar Black, Green, and Samsung Spinpoint
        Galaxy GTX 220 1 GB
        External USB Creative X-Fi Extreme card
        550 W power supply

    This machine is hooked up through an optical cable to an Onkyo HT-R340 receiver via the X-Fi card. Whenever I play any audio — regardless of whether it is through XBMC, the default audio player, a Flash video, etc. — I get a horrible static sound that randomly gets louder. Here is a video of the sound: http://www.youtube.com/watch?v=SqKQkxYRVA4

    This static comes in randomly, sometimes going away for short periods, but it eventually always comes back. So far I have tried everything I could think of:

        Reinstalling the OS
        Installing/upgrading/repairing PulseAudio/ALSA
        Installing alternate OSes: straight Ubuntu, Lubuntu, Xubuntu, Arch, Mint, Windows 7
        Switching audio from the external card to internal optical, audio out through HDMI, audio out through headphones
        Different ports on the receiver (my main desktop sounds fine on the same sound system)
        Different optical cables
        Unplugging everything unnecessary from the motherboard (1 hard drive, 1 stick of RAM, 1 keyboard)
        Swapping out RAM
        Swapping out the motherboard
        Replacing the graphics card (it was replaced because the fan was noisy, not specifically for this problem)
        Different hard drives
        Swapping the power supply
        Disabling onboard audio

    Pretty much everything short of swapping the CPU. I haven't been able to narrow down the problem and it is getting frustrating. Is it possible that the CPU is faulty and might cause a problem such as this, or that the PC case is shorting out the motherboard? Any suggestions will be appreciated.

  • How do I renew an expired Ubuntu OpenLDAP SSL Certificate

    - by Doug Symes
    We went through the steps of revoking an SSL certificate used by our OpenLDAP server and renewing it, but we are unable to start slapd. Here are the commands we used.

    We verified the existing certificate:

        openssl verify hostname_domain_com_cert.pem

    We got back that the certificate was expired, but otherwise "OK". We revoked the certificate we'd been using:

        openssl ca -revoke /etc/ssl/certs/hostname_domain_com_cert.pem

    Revoking worked fine. We created the new certificate request by passing it the key file as input:

        openssl req -new -key hostname_domain_com_key.pem -out newreq.pem

    We generated a new certificate using the newly created request file newreq.pem:

        openssl ca -policy policy_anything -out newcert.pem -infiles newreq.pem

    We looked at our cn=config.ldif file, found the locations of the key and cert, and placed the newly dated certificate in the needed path. Still, we are unable to start slapd with service slapd start. We get this message:

        Starting OpenLDAP: slapd - failed.
        The operation failed but no output was produced. For hints on what went wrong
        please refer to the system's logfiles (e.g. /var/log/syslog) or try running the
        daemon in Debug mode like via "slapd -d 16383" (warning: this will create copious output).

        Below, you can find the command line options used by this script to run slapd. Do not
        forget to specify those options if you want to look to debugging output:
        slapd -h 'ldap:/// ldapi:/// ldaps:///' -g openldap -u openldap -F /etc/ldap/slapd.d/

    Here is what we found in /var/log/syslog:

        Oct 23 20:18:25 ldap1 slapd[2710]: @(#) $OpenLDAP: slapd 2.4.21 (Dec 19 2011 15:40:04) $#012#011buildd@allspice:/build/buildd/openldap-2.4.21/debian/build/servers/slapd
        Oct 23 20:18:25 ldap1 slapd[2710]: main: TLS init def ctx failed: -1
        Oct 23 20:18:25 ldap1 slapd[2710]: slapd stopped.
        Oct 23 20:18:25 ldap1 slapd[2710]: connections_destroy: nothing to destroy.

    We are not sure what else to try. Any ideas?
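
    "TLS init def ctx failed: -1" generally means slapd could not load or reconcile the certificate, key, and CA files named in cn=config. A few hedged checks (file names are taken from the question; the CA bundle path is an assumption):

        openssl x509 -noout -modulus -in newcert.pem | openssl md5
        openssl rsa  -noout -modulus -in hostname_domain_com_key.pem | openssl md5    # both digests must match
        openssl verify -CAfile /etc/ssl/certs/cacert.pem newcert.pem                  # adjust the CA bundle path for your CA
        sudo -u openldap cat newcert.pem hostname_domain_com_key.pem > /dev/null      # readable by the user slapd runs as?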

  • Samba: share home directories when home directories are symbolic links

    - by Owen
    I have set up a new Ubuntu 9.10 system for five users. In the system is a large LVM volume where all the data is to be kept; the main system disk is not for this purpose, so I attempted to move the home directories using

        usermod -d /var/data/username -m

    and started creating my shares for these new home locations. But then I thought: hey, Samba has built-in home directory sharing! So I enabled that, and it didn't work — the shares were not published to the network. Only the share for user 'owen' was published; his folder hadn't been moved.

    So I thought: maybe Samba home sharing only works for the default home locations, so how about I move the home directories back to where they were and make them symbolic links?

        root@boxenmkiv:/home# ls -l
        total 4
        lrwxrwxrwx 1 brett brett   25 2010-04-03 08:48 brett -> /var/data/brett/
        lrwxrwxrwx 1 carly carly   23 2010-04-03 08:48 carly -> /var/data/carly/
        lrwxrwxrwx 1 dave  dave    21 2010-04-03 08:48 dave -> /var/data/dave/
        lrwxrwxrwx 1 kate  kate    23 2010-04-03 08:47 kate -> /var/data/kate/
        drwxr-xr-x 4 owen  owen  4096 2010-04-03 08:44 owen

    Like so. Still no go. I have also added the following to my smb.conf:

        [global]
        follow symlinks = yes
        wide symlinks = yes
        unix extensions = no

    with no luck. Am I going about this entirely the wrong way? Should I just give up and manually create shares for the users? Thanks in advance.
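
    A couple of hedged checks on whether Samba's [homes] machinery ever sees the moved directories (the username below comes from the listing above; everything else is an assumption about a default smbd/passdb setup):

        testparm -s | grep -B2 -A6 '\[homes\]'     # what Samba actually loaded for the homes section
        smbclient -L localhost -U brett            # does a share named 'brett' get enumerated at all?
        pdbedit -L -v brett                        # the home path Samba believes the user has
        getent passwd brett                        # the home path the system believes the user has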

  • APC fragmentation woes on an Apache AWS EC2 Small instance with WordPress and W3TC

    - by two7s_clash
    AWS EC2 Small instance, Apache 2 running WordPress and W3TC. Within an hour, my APC fragmentation hits 100%. My APC settings (/etc/php.d/apc.ini) are:

        apc.enabled = 1
        apc.shm_segments = 1
        apc.shm_size = 100M
        apc.optimization = 0
        apc.num_files_hint = 512
        apc.user_entries_hint = 1024
        apc.ttl = 7200
        apc.user_ttl = 7200
        apc.gc_ttl = 3600
        apc.cache_by_default = 1
        apc.use_request_time = 1
        apc.filters = "apc\.php$"
        apc.mmap_file_mask = "/tmp/apc.XXXXXX"
        apc.slam_defense = 0
        apc.file_update_protection = 2
        apc.enable_cli = 0
        apc.max_file_size = 2M
        apc.stat = 1
        apc.write_lock = 1
        apc.report_autofilter = 0
        apc.include_once_override = 0
        apc.rfc1867 = 0
        apc.rfc1867_prefix = "upload_"
        apc.rfc1867_name = "APC_UPLOAD_PROGRESS"
        apc.rfc1867_freq = 0
        apc.localcache = 0
        apc.localcache.size = 256M
        apc.coredump_unmap = 0
        apc.stat_ctime = 0
        apc.canonicalize = 1
        apc.lazy_functions = 0
        apc.lazy_classes = 0

    More detail can be seen here; most of the settings were cribbed from here. The shm_size was meant to be whittled down from such a high value after some observation, but apparently even such a large value isn't high enough. I found a similar question/answer here. I do have some virtual hosts set up, but they aren't being touched much at all. Having users logged into the WordPress admin panel does make things worse, but that's certainly not the main culprit.

    The question asker there seems to suggest that W3TC is probably causing the problem, which the plugin author seems to agree with, but there aren't any helpful details beyond that. Why is it causing the problem? Do I just accept it for now and turn off object caching with APC? Is there nothing I can do? Does having it turned on without being used for object caching actually help anything? Would memcache be an OK substitute just for object caching here? Finally, maybe I just shouldn't worry so much about the fragmentation?

  • FFMPEG Install on EC2 - Amazon Linux

    - by Oliver Holmberg
    Hello Server Fault friends, I am about two days into attempting to install FFmpeg with dependencies on an AWS EC2 instance running the Amazon Linux AMI. I've installed FFmpeg on Ubuntu and Fedora systems with no problems in the past, and have read reportedly successful instructions for installing on Red Hat/Fedora. I have followed a number of tutorials and forum articles, but have had no luck yet. As far as I can tell, the main problems are as follows:

        The Amazon Linux (most similar to Red Hat/CentOS) yum repositories don't have FFmpeg available. I have found instructions for updating the repositories to include the required packages, but adding these repositories causes yum to fail when updating packages. (Also, I've read some cautionary tales about adding Red Hat/CentOS repositories to Amazon Linux that lead me to believe it may be a bad idea: https://forums.aws.amazon.com/thread.jspa?messageID=229166)
        I have tried the more complicated method of downloading the source tarball, compiling, and installing, but this always fails due to missing dependencies and other errors.

    On to my question: has anyone successfully installed FFmpeg on Amazon Linux? Is there a fundamental incompatibility? If anyone could share specific instructions on installing FFmpeg on Amazon Linux I would be greatly appreciative. Any other insights/experiences would also be appreciated. Thanks in advance, Oliver
