Search Results

Search found 19625 results on 785 pages for 'local groups'.

Page 124/785

  • Git from localhost to remotehost with a team of three

    - by Mark McDonnell
    Hi, I'm completely new to Git. I've only just worked out how to use GitHub in a basic way (e.g. pushing my local file changes to GitHub - so I've not yet pulled content down from GitHub and merged it into my localhost version, or anything like that). I had a look over at this existing question - Git: localhost remote development remote production - but I think it may have been a bit advanced for me at this stage, as I didn't quite understand the terminology that most of the people were using. What I would like to achieve is to have a local server set up that my team of developers can all 'push' to/'pull' from etc., and then have that local server upload any updated files automatically to our web server so we could see the updates live in the browser. I'm happy to get a server set up in the office running Mac OS X Server, install Git on it, and then get the devs to write a shell script to push to the remote server, but only if it is fairly easy for the devs' local Git to push to this new local server. I'm not a network engineer so I don't know what would need to be set up for that to work. I know we could obviously make the server accessible via a local IP address like 192.168.0.xxx, but I'm not sure how that works with pushing to a Git repository on that server. Would that literally be something like doing this on my local machine: git remote add MyGitFile git://192.168.0.xxx/MyGitFile.git ? Any ideas or advice you can give to a total Git newbie trying to help his team get a better workflow. Kind regards, Mark
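
    As a rough illustration of the kind of setup described above (the paths, user name and IP are placeholders, not taken from the question), a bare repository on the office server can be added as a remote over SSH rather than git://, which avoids running a separate git daemon:

      # on the office server (e.g. 192.168.0.xxx): create a bare repository to push to
      mkdir -p /srv/git/project.git
      cd /srv/git/project.git
      git init --bare

      # on each developer's machine: point a remote at it over SSH and push
      git remote add office ssh://devuser@192.168.0.xxx/srv/git/project.git
      git push office master

      # later, fetch colleagues' changes
      git pull office master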

    Read the article

  • Django: DatabaseError column does not exist

    - by Rosarch
    I'm having a problem with Django 1.2.4. Here is a model: class Foo(models.Model): # ... ftw = models.CharField(blank=True) bar = models.ForeignKey(Bar) Right after flushing the database, I use the shell: Python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39) [GCC 4.4.5] on linux2 Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> from apps.foo.models import Foo >>> Foo.objects.all() Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 67, in __repr__ data = list(self[:REPR_OUTPUT_SIZE + 1]) File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 82, in __len__ self._result_cache.extend(list(self._iter)) File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 271, in iterator for row in compiler.results_iter(): File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 677, in results_iter for rows in self.execute_sql(MULTI): File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 732, in execute_sql cursor.execute(sql, params) File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 15, in execute return self.cursor.execute(sql, params) File "/usr/local/lib/python2.6/dist-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute return self.cursor.execute(query, args) DatabaseError: column foo_foo.bar_id does not exist LINE 1: ...t_omg", "foo_foo"."ftw", "foo_foo... What am I doing wrong here?
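
    For what it's worth, in Django 1.2 neither flush nor syncdb alters columns on a table that already exists, so a hedged first check for this kind of error is to compare the schema Django expects with what is actually in the database (app, model and database names below follow the Foo example above and are otherwise placeholders):

      # show the CREATE TABLE SQL Django 1.2 would use for the app
      python manage.py sqlall foo

      # inspect the real table in PostgreSQL and look for the bar_id column
      psql mydatabase -c '\d foo_foo'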

    Read the article

  • Error in django using Apache & mod_wsgi

    - by Ignacio
    Hey, I've been making some changes to my Django development environment, as some of you suggested. So far I've managed to configure and run it successfully with Postgres. Now I'm trying to run the app using Apache2 and mod_wsgi, but I ran into this little problem after I followed the guidelines from the Django docs. When I access localhost/myapp/tasks this error is raised: Request Method: GET Request URL: http://localhost/myapp/tasks/ Exception Type: TemplateSyntaxError Exception Value: Caught an exception while rendering: argument 1 must be a string or unicode object Original Traceback (most recent call last): File "/usr/local/lib/python2.6/dist-packages/django/template/debug.py", line 71, in render_node result = node.render(context) File "/usr/local/lib/python2.6/dist-packages/django/template/defaulttags.py", line 126, in render len_values = len(values) File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 81, in __len__ self._result_cache = list(self.iterator()) File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 238, in iterator for row in self.query.results_iter(): File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 287, in results_iter for rows in self.execute_sql(MULTI): File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 2369, in execute_sql cursor.execute(sql, params) File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) TypeError: argument 1 must be a string or unicode object ... ... ... And then it highlights a {% for t in tasks %} template tag, as if the source of the problem were there, but it worked fine on the built-in server. The view associated with that page is really simple: it just fetches all Task objects, and the template just displays them in a table. Also, some pages render fine. I don't want to fill this question with code, so if you need more info I'd be glad to provide it. Thanks

    Read the article

  • OWA, Outlook Anywhere, RPCPing Inconsistencies

    - by pk.
    I'm troubleshooting an Outlook Anywhere issue with a new Exchange 2010 server. The server in question, MS2010, is behind a SonicWALL NSA 2400 device and works wonderfully except for Outlook Anywhere. Outlook Anywhere works internally and I've verified (through Ctrl-Right Click --> Connection Status) that I'm able to connect to MS2010 over HTTPS. When trying to connect to the server using HTTPS from outside the firewall, I'm unable to do so. A Wireshark trace shows 30 or so successful HTTPS packet transmissions, and then it fails with 3 straight transmissions to a destination port of 135. I have no idea why my computer is attempting to access anything on port 135 since I've setup my profile to use HTTPS on both slow and fast connections. I'm 99% certain that the firewall is configured correctly. I run Outlook Web Access (also HTTPS) on the same server and there are no issues with access. EDIT: My Autodiscover settings are correct (as far as I can tell). My server passes the Outlook Anywhere and Autodiscover tests at https://www.testexchangeconnectivity.com/. I've been using the RPCPing utility to troubleshoot and have come across the following results: Internally- >rpcping -t ncacn_http -s mail.mydomain.com -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -E -R none RPCPing v2.12. Copyright (C) Microsoft Corporation, 2002 OS Version is: 6.1, Service Pack 1 RPCPinging proxy server mail.mydomain.com with Echo Request Packet Sending ping to server Response from server received: 200 Pinging successfully completed in 93 ms Externally- >rpcping -t ncacn_http -s mail.mydomain.com -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -E -R none RPCPing v6.0. Copyright (C) Microsoft Corporation, 2002-2006 Enter password for RPC/HTTP proxy: RPCPing set Activity ID: {fc8411ba-2987-4175-b37b-801dc69d5ff9} RPCPinging proxy server mail.mydomain.com with Echo Request Packet Setting autologon policy to high WinHttpSetCredentials for target server called Error 87 : The parameter is incorrect. returned in WinHttpSetCredentials Ping failed What should I be checking in order to troubleshoot my Outlook Anywhere issues? I'm using Windows 7 SP1 for internal and external access. 
EDIT: Autodiscover.xml content <?xml version="1.0"?> <Autodiscover xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/exchange/autodiscover/responseschema/2006"> <Response xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a"> <User> <DisplayName>John Doe</DisplayName> <LegacyDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=pk</LegacyDN> <DeploymentId>d35170cc-f3a7-42c5-9427-1f554a469126</DeploymentId> </User> <Account> <AccountType>email</AccountType> <Action>settings</Action> <Protocol> <Type>EXCH</Type> <Server>MS2010.MYDOMAIN.local</Server> <ServerDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=MS2010</ServerDN> <ServerVersion>738180DA</ServerVersion> <MdbDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=MS2010/cn=Microsoft Private MDB</MdbDN> <ASUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</ASUrl> <OOFUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</OOFUrl> <OABUrl>http://MS2010.MYDOMAIN.local/OAB/2c34c9f5-5521-4c8c-b684-538df815052a/</OABUrl> <UMUrl>https://MS2010.MYDOMAIN.local/EWS/UM2007Legacy.asmx</UMUrl> <Port>0</Port> <DirectoryPort>0</DirectoryPort> <ReferralPort>0</ReferralPort> <PublicFolderServer>MS2007.MYDOMAIN.local</PublicFolderServer> <AD>dc1.MYDOMAIN.local</AD> <EwsUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</EwsUrl> <EcpUrl>https://MS2010.MYDOMAIN.local/ecp/</EcpUrl> <EcpUrl-um>?p=customize/voicemail.aspx&amp;exsvurl=1</EcpUrl-um> <EcpUrl-aggr>?p=personalsettings/EmailSubscriptions.slab&amp;exsvurl=1</EcpUrl-aggr> <EcpUrl-mt>PersonalSettings/DeliveryReport.aspx?exsvurl=1&amp;IsOWA=&lt;IsOWA&gt;&amp;MsgID=&lt;MsgID&gt;&amp;Mbx=&lt;Mbx&gt;</EcpUrl-mt> <EcpUrl-ret>?p=organize/retentionpolicytags.slab&amp;exsvurl=1</EcpUrl-ret> <EcpUrl-sms>?p=sms/textmessaging.slab&amp;exsvurl=1</EcpUrl-sms> </Protocol> <Protocol> <Type>EXPR</Type> <Server>mail.mycompany.com</Server> <ASUrl>https://mail.mycompany.com/ews/exchange.asmx</ASUrl> <OOFUrl>https://mail.mycompany.com/ews/exchange.asmx</OOFUrl> <OABUrl>https://mail.mycompany.com/OAB/2c34c9f5-5521-4c8c-b684-538df815052a/</OABUrl> <UMUrl>https://mail.mycompany.com/ews/UM2007Legacy.asmx</UMUrl> <Port>0</Port> <DirectoryPort>0</DirectoryPort> <ReferralPort>0</ReferralPort> <SSL>On</SSL> <AuthPackage>Basic</AuthPackage> <CertPrincipalName>msstd:mail.mycompany.com</CertPrincipalName> <EwsUrl>https://mail.mycompany.com/ews/exchange.asmx</EwsUrl> <EcpUrl>https://mail.mycompany.com/owa/</EcpUrl> <EcpUrl-um>?p=customize/voicemail.aspx&amp;exsvurl=1</EcpUrl-um> <EcpUrl-aggr>?p=personalsettings/EmailSubscriptions.slab&amp;exsvurl=1</EcpUrl-aggr> <EcpUrl-mt>PersonalSettings/DeliveryReport.aspx?exsvurl=1&amp;IsOWA=&lt;IsOWA&gt;&amp;MsgID=&lt;MsgID&gt;&amp;Mbx=&lt;Mbx&gt;</EcpUrl-mt> <EcpUrl-ret>?p=organize/retentionpolicytags.slab&amp;exsvurl=1</EcpUrl-ret> <EcpUrl-sms>?p=sms/textmessaging.slab&amp;exsvurl=1</EcpUrl-sms> </Protocol> <Protocol> <Type>WEB</Type> <Port>0</Port> <DirectoryPort>0</DirectoryPort> <ReferralPort>0</ReferralPort> <Internal> <OWAUrl AuthenticationMethod="Basic, Fba">https://MS2010.MYDOMAIN.local/owa/</OWAUrl> <Protocol> <Type>EXCH</Type> <ASUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</ASUrl> </Protocol> </Internal> <External> <OWAUrl AuthenticationMethod="Fba">https://mail.mycompany.com/owa/</OWAUrl> <Protocol> <Type>EXPR</Type> 
<ASUrl>https://mail.mycompany.com/ews/exchange.asmx</ASUrl> </Protocol> </External> </Protocol> </Account> </Response> </Autodiscover>

    Read the article

  • How to configure FastCGI to work with lighttpd in Ubuntu

    - by michael
    I am able to run lighttpd on Ubuntu 9.10, but when I tried to set up FastCGI with lighttpd by putting this in the lighttpd.conf file: #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => "9098", "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", "docroot" => "/" # remote server may use # its own docroot )) ) this is what I get in lighttpd's error.log: 2010-03-07 21:00:11: (log.c.166) server started 2010-03-07 21:00:11: (mod_fastcgi.c.1104) the fastcgi-backend /usr/local/bin/cgi-fcgi failed to start: 2010-03-07 21:00:11: (mod_fastcgi.c.1108) child exited with status 1 /usr/local/bin/cgi-fcgi 2010-03-07 21:00:11: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags. 2010-03-07 21:00:11: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed. 2010-03-07 21:00:11: (server.c.931) Configuration of plugins failed. Going down. I do have cgi-fcgi in /usr/local/bin: $ which cgi-fcgi /usr/local/bin/cgi-fcgi Thank you for your help.
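
    As a rough way to see why the backend exits (the port matches the config above, paths are otherwise illustrative, and spawn-fcgi may need to be installed as a separate package), the program named in bin-path can be started by hand outside lighttpd and its exit status inspected:

      # try spawning the configured backend manually on the port lighttpd expects
      spawn-fcgi -a 127.0.0.1 -p 9098 -f /usr/local/bin/cgi-fcgi
      echo "exit status: $?"

      # note: cgi-fcgi is a CGI-to-FastCGI bridge rather than a FastCGI application itself,
      # so bin-path more typically points at a real FastCGI responder such as php-cgi
      spawn-fcgi -a 127.0.0.1 -p 9098 -f /usr/bin/php-cgi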

    Read the article

  • Dynamic DNS updates for Linux and Mac OS X machines with a Windows DNS server

    - by DanielGibbs
    My network has a Windows machine running Server 2008 R2 which provides DHCP and DNS. I'm not particularly familiar with Windows domains, but the domain is set to home.local and that is the DNS domain name provided with DHCP leases. Everything works fine for Windows machines: they get the lease and update the server with their hostname, and the server creates a DNS record for windowshostname.home.local. I am having problems obtaining the same functionality on Linux (Debian) and Mac OS X (Mountain Lion) machines. They receive DHCP just fine, but DNS entries are not being created on the server for them. On the Mac OS X machine, hostname gives an output of machostname.local, and on the Linux machine hostname --fqdn also gives an output of linuxhostname.local. I'm assuming that the server is not creating DNS entries because the domain does not match that of the server (home.local). I don't want to statically configure these machines to be part of the home.local domain; I just want them to pick it up from DHCP and be able to have entries in the DNS server. How should I go about doing this?
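
    Purely as an illustrative sketch (server and host names are placeholders, and the Windows DNS zone must allow nonsecure dynamic updates for the plain form to work - otherwise nsupdate -g with Kerberos credentials is the usual variant), a Linux client can register its own record against the Windows DNS server with nsupdate, or dhclient can be told to send a fully qualified name in the home.local domain with its requests:

      # one-off manual registration against the Windows DNS server
      nsupdate <<'EOF'
      server dc01.home.local
      update add linuxhostname.home.local 3600 A 192.168.1.50
      send
      EOF

      # or ask dhclient to send an FQDN in the home.local domain (path may vary by Debian release)
      cat >> /etc/dhcp/dhclient.conf <<'EOF'
      send fqdn.fqdn "linuxhostname.home.local.";
      send fqdn.encoded on;
      send fqdn.server-update off;
      EOF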

    Read the article

  • ssh tunnel - bind: Cannot assign requested address

    - by JosephK
    I'm trying to create a SOCKS (-D) SSH tunnel from one Linux box to another (both CentOS); sshd is running on the remote side OK. From the local machine we do / see this: ssh -D 1080 [email protected]. [email protected]'s password: bind: Cannot assign requested address (where 8.8.8.8 is really my server's IP and 'user' is my real username) I am logged into the remote side in this terminal window. I can verify that the local port was unused prior to this command, and then used by an ssh process after the command, via: netstat -lnp | grep 1080 So, unlike most googled responses to this error, the problem would not seem to be the loopback interface assignment. If I try to use this tunnel with a mail client, the local side permits the attempt (no 'proxy-failed' error), but no data / reply is returned. On the remote side, I do have "PermitTunnel yes" in my sshd_config (though 'yes' should be the default anyway). Ideas or clues? Here is the relevant debug output OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * .... debug1: Authentication succeeded (password). debug1: Local connections to LOCALHOST:1080 forwarded to remote address socks:0 debug1: Local forwarding listening on 127.0.0.1 port 1080. debug1: channel 0: new [port listener] debug1: Local forwarding listening on ::1 port 1080. bind: Cannot assign requested address debug1: channel 1: new [client-session] debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_US.utf8 Other clue: if I run a Windows virtual machine (VirtualBox) on the client and open a tunnel with PuTTY inside it, that tunnel to the same remote server works.
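
    In the debug output above the bind error follows the second listener, the one on ::1; a hedged way to see whether the IPv6 loopback is the culprit is to restrict the dynamic forward to IPv4 explicitly (addresses and URL below are placeholders):

      # bind the SOCKS listener to the IPv4 loopback only
      ssh -4 -D 127.0.0.1:1080 user@8.8.8.8

      # confirm the listener and test it through the SOCKS proxy
      netstat -lnp | grep 1080
      curl --socks5 127.0.0.1:1080 http://example.com/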

    Read the article

  • Ubuntu 9.10 CUPS server cupsd.conf

    - by aaron
    I have a CUPS server running on Ubuntu 9.10 on my home network. Right now I can access it at 192.168.1.101:631, but when I try to access it at myservername.local:631 I get a 400 Bad Request. Here's the relevant section from my current cupsd.conf: ServerName 192.168.1.101 # Only listen for connections from the local machine. Listen localhost:631 Listen /var/run/cups/cups.sock # any of the below 'Listen' directives all yield the same result Listen 192.168.1.101:631 #Listen *:631 #Listen myservername.local:631 # Show shared printers on the local network. Browsing On BrowseOrder allow,deny BrowseAllow all BrowseLocalProtocols CUPS dnssd BrowseAddress 192.168.1.255 # Default authentication type, when authentication is required... DefaultAuthType Basic # Restrict access to the server... <Location /> Order deny,allow Deny from All Allow from 127.0.0.1 Allow from 192.168.1.* </Location> # Restrict access to the admin pages... <Location /admin> Order deny,allow Deny from All #Allow from 127.0.0.1 #Allow from 192.168.1.* </Location> # Restrict access to configuration files... <Location /admin/conf> AuthType Default Require user @SYSTEM Order deny,allow Deny from All #Allow from 127.0.0.1 #Allow from 192.168.1.* </Location> I get the following in /var/log/cups/error_log: E [03/Jan/2010:18:33:41 -0600] Request from "192.168.1.100" using invalid Host: field "myservername.local:631" What do I need to do to be able to access the CUPS server at both 192.168.1.101:631 and myservername.local:631?
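
    For reference, CUPS 1.4 (the version shipped with Ubuntu 9.10) rejects requests whose Host: header does not match a name it knows about, which is what the "invalid Host: field" log line reflects. A minimal sketch of telling it about the extra name (the restart command assumes the stock init script):

      # allow requests addressed to myservername.local (or use "ServerAlias *" to accept any name)
      echo 'ServerAlias myservername.local' | sudo tee -a /etc/cups/cupsd.conf
      sudo /etc/init.d/cups restart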

    Read the article

  • Sending USR2 to mongrel_rails sometimes results in an “Address already in use” on the restart

    - by Ben
    We have a rolling-restart mode for our mongrel cluster that sends a USR2 signal to each running process. This works great, most of the time. But very occasionally the mongrel process will shut down and then fail to restart, with the following error: /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/tcphack.rb:12:in `initialize_without_backlog': Address already in use - bind(2) (Errno::EADDRINUSE) from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/tcphack.rb:12:in `initialize' from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:93:in `new' from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:93:in `initialize' from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:139:in `new' from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:139:in `listener' Looking through the mongrel source, the USR2 handler calls a synchronous stop on the running server, so it ought to block until the socket has been released. Has anyone seen this error? Does anyone have any ideas what might cause it? (I asked this question over on Stack Overflow initially, but thought it might be more appropriate here)

    Read the article

  • Winbind group lookup painfully slow

    - by Marty
    I am running winbind on an RHEL 6 system. Everything works fine except group lookups, so many commands (including sudo) are painfully slow. I did an strace which shows that winbind looks up every group and every user within each group for the current user. Some of these groups have 20000+ users so a simple sudo can take 60 seconds to complete. I really only care about speeding up the sudo command. Ideal solutions would make it so either: groups with more than X number of users will not be looked up, or sudo bypasses group lookups altogether. Here is my current "smb.conf" for winbind: workgroup = EXAMPLE password server = AD1.EXAMPLE.ORG realm = EXAMPLE.ORG security = ads idmap uid = 10000-19999 idmap gid = 10000-19999 idmap config EXAMPLE:backend = rid idmap config EXAMPLE:range = 10000000-19999999 winbind enum users = no winbind enum groups = no winbind separator = + template homedir = /home/%U template shell = /bin/bash winbind use default domain = yes winbind offline logon = false
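
    As a rough way to confirm where the time is going before changing the configuration (the user and group names below are placeholders), the same lookups sudo triggers can be timed directly against winbind:

      # time the user's full group resolution, which sudo performs on every run
      time id someaduser

      # time a single large group lookup
      time getent group "Domain Users"

      # compare with winbind's own view of the user's groups (wbinfo -r)
      time wbinfo -r someaduser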

    Read the article

  • how to get Geo::Coder::Many with cpan?

    - by mnemonic
    Ubuntu is installed for development of a Perl project. aptitude search Geo-Coder i libgeo-coder-googlev3-perl - Perl module providing access to Google Map Aptitude does not refer to Geo::Coder::Many, and cpan cannot build it. sudo cpan Geo::Coder::Many Then: CPAN: Storable loaded ok (v2.27) Going to read '/home/jh/.cpan/Metadata' Database was generated on Wed, 16 Oct 2013 06:17:04 GMT Running install for module 'Geo::Coder::Many' Running make for K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz CPAN: Digest::SHA loaded ok (v5.61) CPAN: Compress::Zlib loaded ok (v2.033) Checksum for /home/jh/.cpan/sources/authors/id/K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz ok CPAN: File::Temp loaded ok (v0.22) CPAN: Parse::CPAN::Meta loaded ok (v1.4401) CPAN: CPAN::Meta loaded ok (v2.110440) CPAN: Module::CoreList loaded ok (v2.49_02) CPAN: Module::Build loaded ok (v0.38) CPAN.pm: Going to build K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz Can't locate Geo/Coder/Many/Google.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27. Can't locate Geo/Coder/Many/Google in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27. BEGIN failed--compilation aborted at Build.PL line 54. Warning: No success on command[/usr/bin/perl Build.PL --installdirs site] CPAN: YAML loaded ok (v0.77) KAORU/Geo-Coder-Many-0.42.tar.gz /usr/bin/perl Build.PL --installdirs site -- NOT OK Running Build test Make had some problems, won't test Running Build install Make had some problems, won't install Could not read metadata file. Falling back to other methods to determine prerequisites Any suggestions on how to resolve this issue?

    Read the article

  • unlinked libraries in a makefile?

    - by wyatt
    I'm trying to install a libspopc, but when I run the make I get the following output: cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c session.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c queries.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c parsing.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c format.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c objects.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c libspopc.c rm -f libspopc*.a ar r libspopc-0.9n.a session.o queries.o parsing.o format.o objects.o libspopc.o ar: creating libspopc-0.9n.a ranlib libspopc-0.9n.a ln -s libspopc-0.9n.a libspopc.a rm -f libspopc*.so cc -o libspopc-0.9n.so -shared session.o queries.o parsing.o format.o objects.o libspopc.o ln -s libspopc-0.9n.so libspopc.so cc -o poptest1 -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL examples/poptest1.c -L. -lspopc -lssl -lcrypto /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_globallookup': dso_dlfcn.c:(.text+0x2d): undefined reference to `dlopen' dso_dlfcn.c:(.text+0x43): undefined reference to `dlsym' dso_dlfcn.c:(.text+0x4d): undefined reference to `dlclose' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_pathbyaddr': dso_dlfcn.c:(.text+0x8f): undefined reference to `dladdr' dso_dlfcn.c:(.text+0xe9): undefined reference to `dlerror' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_bind_func': dso_dlfcn.c:(.text+0x491): undefined reference to `dlsym' dso_dlfcn.c:(.text+0x570): undefined reference to `dlerror' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_bind_var': dso_dlfcn.c:(.text+0x5f1): undefined reference to `dlsym' dso_dlfcn.c:(.text+0x6d0): undefined reference to `dlerror' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_unload': dso_dlfcn.c:(.text+0x735): undefined reference to `dlclose' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_load': dso_dlfcn.c:(.text+0x817): undefined reference to `dlopen' dso_dlfcn.c:(.text+0x88e): undefined reference to `dlclose' dso_dlfcn.c:(.text+0x8d5): undefined reference to `dlerror' collect2: ld returned 1 exit status make: *** [poptest1] Error 1 A quick search suggested that this was due to libdl being unlinked, though this seems unlikely in a distributed library, particularly a seemingly relatively popular one. Could anything else be causing this? And if it is due to an unlinked library, how would I go about fixing it? Thanks
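
    Since the undefined symbols in the output above (dlopen, dlsym, dlclose, dladdr, dlerror) all live in libdl on Linux, one hedged check is to repeat the failing link command by hand with -ldl appended and see whether the example then links (the command below is copied from the make output, with only the extra library added):

      cc -o poptest1 -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL \
         examples/poptest1.c -L. -lspopc -lssl -lcrypto -ldl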

    Read the article

  • Python easy_install confused on Mac OS X

    - by slf
    environment info: $ echo $PATH /opt/local/bin:/opt/local/sbin:/sw/bin:/sw/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/X11R6/bin:/opt/local/bin:/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:~/.utility_scripts $ which easy_install /usr/bin/easy_install specifically, let's try the simplejson module (I know it's the same thing as import json in 2.6, but that isn't the point) $ sudo easy_install simplejson Searching for simplejson Reading http://pypi.python.org/simple/simplejson/ Reading http://undefined.org/python/#simplejson Best match: simplejson 2.1.0 Downloading http://pypi.python.org/packages/source/s/simplejson/simplejson-2.1.0.tar.gz#md5=3ea565fd1216462162c6929b264cf365 Processing simplejson-2.1.0.tar.gz Running simplejson-2.1.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Ojv_yS/simplejson-2.1.0/egg-dist-tmp-AypFWa The required version of setuptools (>=0.6c11) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U setuptools'. (Currently using setuptools 0.6c9 (/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python)) error: Setup script exited with 2 ok, so I'll update setuptools... $ sudo easy_install -U setuptools Searching for setuptools Reading http://pypi.python.org/simple/setuptools/ Best match: setuptools 0.6c11 Processing setuptools-0.6c11-py2.6.egg setuptools 0.6c11 is already the active version in easy-install.pth Installing easy_install script to /usr/local/bin Installing easy_install-2.6 script to /usr/local/bin Using /Library/Python/2.6/site-packages/setuptools-0.6c11-py2.6.egg Processing dependencies for setuptools Finished processing dependencies for setuptools I'm not going to speculate, but this could have been caused by any number of environment changes like the Leopard - Snow Leopard upgrade, MacPorts or Fink updates, or multiple Google App Engine updates.
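
    A quick, hedged way to see which interpreter and which setuptools each tool is actually bound to (relevant here because the system, MacPorts and /usr/local Pythons can all differ; the MacPorts path is only an example and applies if a MacPorts Python is installed):

      # which interpreter does the easy_install script on PATH use?
      head -1 "$(which easy_install)"

      # which setuptools does each interpreter import?
      /usr/bin/python -c 'import setuptools; print setuptools.__version__'
      /opt/local/bin/python -c 'import setuptools; print setuptools.__version__'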

    Read the article

  • MemCache-repcached compile error

    - by Ramy Allam
    I'm trying to install memcached-1.2.8-repcached-2.2.1 (http://sourceforge.net/projects/repcached/files/latest/download?source=files) and I get the following error after running the make command: make all-recursive make[1]: Entering directory `/usr/local/src/pro/memcached-1.2.8-repcached-2.2.1' Making all in doc make[2]: Entering directory `/usr/local/src/pro/memcached-1.2.8-repcached-2.2.1/doc' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/local/src/pro/memcached-1.2.8-repcached-2.2.1/doc' make[2]: Entering directory `/usr/local/src/pro/memcached-1.2.8-repcached-2.2.1' gcc -DHAVE_CONFIG_H -I. -DNDEBUG -g -O2 -MT memcached-memcached.o -MD -MP -MF .d eps/memcached-memcached.Tpo -c -o memcached-memcached.o test -f 'memcached.c' || echo './'memcached.c memcached.c: In function ‘add_iov’: memcached.c:697: error: ‘IOV_MAX’ undeclared (first use in this function) memcached.c:697: error: (Each undeclared identifier is reported only once memcached.c:697: error: for each function it appears in.) make[2]: * [memcached-memcached.o] Error 1 make[2]: Leaving directory `/usr/local/src/pro/memcached-1.2.8-repcached-2.2.1' make[1]: * [all-recursive] Error 1 make[1]: Leaving directory `/usr/local/src/pro/memcached-1.2.8-repcached-2.2.1' make: * [all] Error 2 OS: CentOS 5.7 64-bit, gcc-4.1.2-51.el5, gcc-c++-4.1.2-51.el5, libgcc-4.1.2-51.el5 Note: memcached and the memcache extension for PHP are already installed root@server[~]# memcached -h memcached 1.4.5 php ext http://pecl.php.net/get/memcache-2.2.6.tgz
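
    IOV_MAX is a limit that some platforms expose via <limits.h> and others do not, which is why the compile stops at add_iov. Purely as an illustrative workaround (the value 1024 is the usual Linux limit, not something taken from the repcached documentation), the build can be retried with the macro supplied to the preprocessor:

      cd /usr/local/src/pro/memcached-1.2.8-repcached-2.2.1
      make clean
      ./configure CPPFLAGS="-DIOV_MAX=1024"
      make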

    Read the article

  • Linux can't find file that exists

    - by Joe
    I'm trying to get Google's Dart language up and running, but it errors out when running dart2js. I'm running Arch Linux and I installed dart-sdk from the AUR. Some relevant terminal output is below. % dart2js main.dart /usr/local/bin/dart2js: line 7: /usr/local/bin/dart: No such file or directory % cat /usr/local/bin/dart2js #!/bin/sh # Copyright (c) 2012, the Dart project authors. Please see the AUTHORS file # for details. All rights reserved. Use of this source code is governed by a # BSD-style license that can be found in the LICENSE file. BIN_DIR=`dirname $0` exec $BIN_DIR/dart --allow_string_plus=false $BIN_DIR/../lib/dart2js/lib/compiler/implementation/dart2js.dart "$@" % file /usr/local/bin/dart /usr/local/bin/dart: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, BuildID[sha1]=0x27fe166ca015c1adfeaf3a6f9c018e7d7af46d9f, stripped % ls -alh /usr/local/bin total 4.9M drwxr-xr-x 2 root root 4.0K Jun 10 22:51 . drwxr-xr-x 12 root root 4.0K Jun 10 22:51 .. -rwxr-xr-x 1 root root 422K May 10 22:41 cargo -rwxr-xr-x 1 root root 2.7M Jun 10 22:50 dart -rwxr-xr-x 1 root root 360 Jun 6 16:20 dart2js -rwxr-xr-x 1 root root 176 Jun 6 16:20 pub -rwxr-xr-x 1 root root 166K May 10 22:41 rustc -rwxr-xr-x 1 root root 1.6M May 10 22:41 rustdoc % uname -rm 3.3.7-1-ARCH x86_64 Could it be because I'm running a 64-bit OS and the dart binary is 32-bit?
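
    Quite possibly, yes: a 32-bit ELF binary on a 64-bit Arch install needs the 32-bit loader and libraries from the [multilib] repository before the shell can even report it as existing. A hedged way to check (the package name assumes [multilib] is enabled in /etc/pacman.conf):

      # "not a dynamic executable" or missing libraries here points at the 32-bit runtime being absent
      ldd /usr/local/bin/dart

      # install the 32-bit C library from [multilib]
      sudo pacman -S lib32-glibc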

    Read the article

  • Group Policy GPO not 'seen' at client

    - by fukawi2
    I have a new OU (natorg.local\NATO\Users) that I am trying to apply GP to. I have created a new user in this OU, and linked the 3 GPO's to this OU: DESKTOP - Folder Redirection (AppData) DESKTOP - Folder Redirection (Desktop) DESKTOP - Folder Redirection (Documents) Hopefully the names are sufficient to suggest what they do exactly. The settings are under User Settings so there is no Loopback processing required (if my understanding is correct). GP Modelling for the user and specific computer says that the GPOs will/should be applied, however on the client, gpresult doesn't even appear to see the GPOs under either "Applied" or "Not Applied": USER SETTINGS -------------- CN=Amir,OU=Users,OU=NATO,DC=natorg,DC=local Last time Group Policy was applied: 25/06/2012 at 11:07:13 AM Group Policy was applied from: svr-addc-01.natorg.local Group Policy slow link threshold: 500 kbps Applied Group Policy Objects ----------------------------- LAPTOPS - Power Settings WSUS - Set Server Address OUTLOOK - Auto Archive SECURITY - Lock Screen After Idle Default Domain Policy DESKTOP - Regional Settings NETWORK - Proxy Configuration NETWORK - IE General Config OFFICE - Trusted Locations OFFICE - Increase Privacy OUTLOOK - Disable Junk Filter DESKTOP - Disable Windows Error Reporting DESKTOP - Hide Language Bar NETWORK - Disable Skype DESKTOP - Disable Thumbs.db Creation WSUS - Set Server Address The following GPOs were not applied because they were filtered out ------------------------------------------------------------------- Local Group Policy Filtering: Not Applied (Empty) NETWORK - Google Chrome Configuration Filtering: Not Applied (Empty) SYSTEM - Event Log Configuration Filtering: Not Applied (Empty) SECURITY - Local Administrator Password Filtering: Not Applied (Empty) NETWORK - Disable Windows Messenger Filtering: Not Applied (Empty) SECURITY - Audit Policy Filtering: Not Applied (Empty) WSUS - Automatic Install Filtering: Not Applied (Empty) NETWORK - Firewall Configuration Filtering: Not Applied (Empty) DESKTOP - Enable Offline Files Filtering: Not Applied (Empty) I haven't altered permissions on the GPO's at all, no WMI filtering... As I said, GP Modelling says that they should be applied. GPResult on the client correctly identifies itself as being the correct OU (CN=Amir,OU=Users,OU=NATO,DC=natorg,DC=local) There are 2 x 2008R2 and a 2003 DC, domain is 2003 level, client is Windows XP SP3. Can anyone suggest why these GP Objects would be "invisible" to the client?

    Read the article

  • GPO best practices: Security-Group Filtering Versus OU

    - by Olivier Rochaix
    Good afternoon everyone, I'm quite new to Active Directory stuff. After upgrading the functional level of our AD from 2003 to 2008 R2 (I needed it to put a fine-grained password policy in place), I started to reorganize my OUs. I keep in mind that a good OU organization facilitates the application of GPOs (and maybe GPPs). But in the end, it feels more natural for me to use security-group filtering (from the Scope tab) to apply my policies, instead of linking them directly to OUs. Do you think that is a good practice, or should I stick to OUs? We are a small organisation with 20 users and 30-35 computers, so we have a simple OU tree, but a more subtle split with security groups. The OU tree doesn't contain any objects except at the bottom level. Each bottom-level OU contains computers, users, and of course security groups. These security groups contain the users and computers of the same OU. Thanks for your advice, Olivier

    Read the article

  • Puppet and Vim fighting over Ruby version

    - by devians
    I have installed Puppet from the .dmg from Puppet Labs. If I remove Ruby 1.9.3, Puppet works, but other things, like my Vim install (plugins that depend on it), do not. According to http://docs.puppetlabs.com/guides/platforms.html#ruby-versions 1.9.3 is supported, so what's going wrong with Puppet? % uname -a Darwin Kusanagi.local 11.4.2 Darwin Kernel Version 11.4.2: Thu Aug 23 16:25:48 PDT 2012; root:xnu-1699.32.7~1/RELEASE_X86_64 x86_64 % which ruby /usr/local/bin/ruby % ruby --version ruby 1.9.3p327 (2012-11-10 revision 37606) [x86_64-darwin11.4.2] % /usr/bin/ruby --version ruby 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin11.0] % brew info ruby 1 ? ruby: stable 1.9.3-p327, HEAD http://www.ruby-lang.org/en/ Depends on: pkg-config, readline, gdbm, libyaml /usr/local/Cellar/ruby/1.9.3-p327 (796 files, 17M) * https://github.com/mxcl/homebrew/commits/master/Library/Formula/ruby.rb ==> Options --with-tcltk Install with Tcl/Tk support --with-suffix Suffix commands with "19" --universal Build a universal binary --with-doc Install documentation ==> Caveats NOTE: By default, gem installed binaries will be placed into: /usr/local/Cellar/ruby/1.9.3-p327/bin You may want to add this to your PATH. % puppet /usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- puppet/util/command_line (LoadError) from /usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from /usr/bin/puppet:3:in `<main>'

    Read the article

  • Configure Nagios To Alert Depending On Host Group That Service Alert Originates From

    - by StrangeWill
    So my setup: Services are shared between all hosts (CPU/RAM/Disk/Services). Hosts are split into two main groups: "Production" and "Development". We have two contact groups: "Production" and "Development". Lets say my development SQL server runs low on RAM, I want it to only alert those in "Development" contact group (this service is of course assigned to a host in the "Development" host group, using the shared RAM monitoring service). I'm pretty much stumped on this... I can't configure it at the service level (they're shared there), and I can't seem to get escalations to do it for me either... Do I need to use service groups along with escalations and bite the bullet on building that list? Or am I missing something stupidly simple? I'm using Centreon for configuration if that helps.

    Read the article

  • VirtualHost not using correct SSL certificate file

    - by Shawn Welch
    I got a doozy of a setup with my virtual hosts and SSL. I found the problem, I need a solution. The problem is, the way I have my virtual hosts and server names setup, the LAST VirtualHost directive is associating the SSL certificate file with the ServerName regardless of IP address or ServerAlias. In this case, SSL on www.site1.com is using the cert file that is established on the last VirtualHost; www.site2.com. Is this how it is supposed to work? This seems to be happening because both of them are using the same ServerName; but I wouldn't think this would be a problem. I am specifically using the same ServerName for a purpose and I really can't change that. So I need a good fix for this. Yes, I could buy another UCC SSL and have them both on it but I have already done that; these are actually UCC SSLs already. They just so happen to be two different UCC SSLs. <VirtualHost 11.22.33.44:80> ServerName somename ServerAlias www.site1.com UseCanonicalName On RewriteEngine On RewriteOptions Inherit </VirtualHost> <VirtualHost 11.22.33.44:443> ServerName somename ServerAlias www.site1.com UseCanonicalName On SSLEngine on SSLCertificateFile /usr/local/apache/conf/ssl.crt/cert1.crt SSLCertificateKeyFile /usr/local/apache/conf/ssl.key/cert1.key SSLCertificateChainFile /usr/local/apache/conf/chain/gd_bundle.crt RewriteEngine On RewriteOptions Inherit </VirtualHost> <VirtualHost 55.66.77.88:80> ServerName somename ServerAlias www.site2.com UseCanonicalName On RewriteEngine On RewriteOptions Inherit </VirtualHost> <VirtualHost 55.66.77.88:443> ServerName somename ServerAlias www.site2.com UseCanonicalName On SSLEngine on SSLCertificateFile /usr/local/apache/conf/ssl.crt/cert2.crt SSLCertificateKeyFile /usr/local/apache/conf/ssl.key/cert2.key SSLCertificateChainFile /usr/local/apache/conf/chain/gd_bundle.crt RewriteEngine On RewriteOptions Inherit </VirtualHost>
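
    One hedged way to see exactly which certificate each address/name combination is handing out (useful for telling an IP-based selection problem from an SNI one; the IPs below mirror the placeholders in the config above):

      # certificate served on the first IP, with and without SNI
      openssl s_client -connect 11.22.33.44:443 -servername www.site1.com </dev/null 2>/dev/null | openssl x509 -noout -subject
      openssl s_client -connect 11.22.33.44:443 </dev/null 2>/dev/null | openssl x509 -noout -subject

      # and on the second IP
      openssl s_client -connect 55.66.77.88:443 -servername www.site2.com </dev/null 2>/dev/null | openssl x509 -noout -subject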

    Read the article

  • ssh key questions

    - by Tim
    I have some questions regarding generating keys for SSH access: (1) Suppose there are two computers running the SSH server service, and I have generated a pair of key files on computer A and copied the public file to computer B. Is it true that this is only a one-way key: we have only given computer A permission to access computer B, not given computer B permission to access computer A? If I now want to ssh from computer B to computer A, must I generate another pair of key files on computer B and copy the public file to computer A? (2) If I would like to connect a single local computer to several remote servers, should I generate a common pair of key files only once on the local computer and copy the same public file to all the remote servers, or generate different pairs of key files on the local computer for the different remote servers? (3) If I would like to connect several local computers to a single remote server, when copying the public files from the different local computers to the remote server, should they be combined into a single authorized_keys file or stored in different authorized_keys files? (4) If there are several servers sharing the same file system via, for example, NFS, how do I generate keys and arrange the key files for accessing one server from another? Also, how do I generate keys and arrange the key files for a local computer to access any one of the servers? All the machines above are Linux. Please provide examples and commands in your reply so that I can better understand how to solve the problems. Thanks and regards!
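
    By way of a generic illustration only (host and user names are placeholders, not an answer tailored to each case above): key generation and installation on Linux usually looks like the sketch below, and a server-side authorized_keys file simply holds one public key per line, so keys from several clients can coexist in the same file.

      # on the client: create a key pair (the private key never leaves this machine)
      ssh-keygen -t rsa -f ~/.ssh/id_rsa

      # install the public half on a server you can already log in to
      ssh-copy-id user@serverA

      # equivalent manual form: append the public key to the server's authorized_keys
      cat ~/.ssh/id_rsa.pub | ssh user@serverA 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'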

    Read the article

  • Compile PHP 5.3.2 with intl extension on Snow Leopard 10.6.3

    - by fsb
    Does anyone have some tips on compiling PHP's intl extension on Snow Leopard? I'm getting compile errors each way I try it and I've been googling for ages and getting nowhere. Any help greatly appreciated. When make gets to the huge gcc command to compile libphp5.bundle, I get the following error: Undefined symbols: "___gxx_personality_v0", referenced from: icu_4_2::MessageFormatAdapter::getArgTypeList(icu_4_2::MessageFormat const&, int&)in msgformat_helpers.o _umsg_parse_helper in msgformat_helpers.o _umsg_format_arg_count in msgformat_helpers.o _umsg_format_helper in msgformat_helpers.o CIE in msgformat_helpers.o ld: symbol(s) not found collect2: ld returned 1 exit status make: *** [libs/libphp5.bundle] Error 1 My compile commands are: MACOSX_DEPLOYMENT_TARGET=10.6 CFLAGS="-arch x86_64 -g -Os -pipe -no-cpp-precomp" CCFLAGS="-arch x86_64 -g -Os -pipe" CXXFLAGS="-arch x86_64 -g -Os -pipe" LDFLAGS="-arch x86_64 -bind_at_load" export CFLAGS CXXFLAGS LDFLAGS CCFLAGS MACOSX_DEPLOYMENT_TARGET ./configure --prefix=/usr \ --mandir=/usr/share/man \ --infodir=/usr/share/info \ --sysconfdir=/private/etc \ --with-apxs2=/usr/sbin/apxs \ --enable-cli \ --with-config-file-path=/etc \ --with-libxml-dir=/usr \ --with-openssl=/usr \ --with-zlib=/usr \ --with-bz2=/usr \ --with-curl=/usr \ --with-gd \ --with-jpeg-dir=/src/jpeg/jpeg-local \ --with-png-dir=/usr/X11R6 \ --with-freetype-dir=/usr/X11R6 \ --with-xpm-dir=/usr/X11R6 \ --with-ldap=/usr \ --with-ldap-sasl=/usr \ --enable-mbstring \ --enable-mbregex \ --with-mysql=mysqlnd \ --with-mysqli=mysqlnd \ --with-pdo-mysql=mysqlnd \ --with-mysql-sock=/var/mysql/mysql.sock \ --with-iodbc=/usr \ --enable-shmop \ --with-snmp=/usr \ --enable-soap \ --enable-sockets \ --enable-sysvmsg \ --enable-sysvsem \ --enable-sysvshm \ --with-xmlrpc \ --with-iconv-dir=/usr \ --with-xsl=/usr \ --with-pcre-regex=/src/pcre/pcre-local/usr/local \ --with-pcre-dir=/src/pcre/pcre-local/usr/local \ --with-icu-dir=/usr/local \ --enable-intl export EXTRA_CFLAGS="-lresolv" make
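
    ___gxx_personality_v0 is a symbol from the C++ runtime, and the msgformat helper objects named in the error are built from C++ sources while the final link is driven by gcc. Purely as a hedged experiment (not an official fix), one thing to try is adding the C++ runtime library to the link flags before configuring:

      # same environment as above, with the C++ runtime added to LDFLAGS
      export LDFLAGS="-arch x86_64 -bind_at_load -lstdc++"
      # re-run the same ./configure command shown above, then:
      make clean
      make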

    Read the article

  • Grant a user access to directories shared by root (mod: 770)

    - by Paul Dinham
    I want to grant a user (username: paul) access to all directories shared by root with mode 770. I do it this way: groups root (this prints the list of groups the root user is in) usermod -a -G group1 paul usermod -a -G group2 paul usermod -a -G group3 paul ... 'group1', 'group2' and 'group3' all appear in root's group list. However, after adding 'paul' to all of the groups above, he still cannot write to the directories shared by the root user with mode 770. Did I do something wrong?
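
    A couple of hedged checks that often explain this kind of symptom: supplementary group membership is only picked up by new login sessions, and the directory's group owner has to be one of the groups in question (the directory path below is a placeholder):

      # does a *fresh* login session for paul actually show the new groups?
      su - paul -c id

      # which group owns the shared directory, and does it really have mode 770?
      ls -ld /path/to/shared/dir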

    Read the article

  • How to install RMagick RubyGem on Mac OS X 10.6 Snow Leopard?

    - by misbehavens
    I am getting this error while trying to install RMagick: $ sudo gem install rmagick Building native extensions. This could take a while... ERROR: Error installing rmagick: ERROR: Failed to build gem native extension. /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb checking for Ruby version >= 1.8.5... yes checking for gcc... yes checking for Magick-config... no Can't install RMagick 2.13.1. Can't find Magick-config in /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin:~/bin:/usr/local/bin:/usr/local/mysql/bin:/usr/local/pear/bin *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby Gem files will remain installed in /Library/Ruby/Gems/1.8/gems/rmagick-2.13.1 for inspection. Results logged to /Library/Ruby/Gems/1.8/gems/rmagick-2.13.1/ext/RMagick/gem_make.out How can I install the RMagick RubyGem on Snow Leopard?
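
    The extconf step above is looking for ImageMagick's Magick-config script on PATH; a hedged sketch of checking for it and installing ImageMagick first (MacPorts shown here only because /opt/local already appears in the PATH listed in the error):

      # is ImageMagick's config script visible at all?
      which Magick-config

      # if not, install ImageMagick, then retry the gem
      sudo port install ImageMagick
      sudo gem install rmagick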

    Read the article

  • Remote installation and configuration software, preferably open source

    - by Brimstedt
    Hello, I'm looking for some remote-installation software. I've looked briefly at Unattended, OPSI and a bunch of other tools, but there is nowhere near enough time to evaluate them all, and they are rather complex to set up, so some insight would be very much appreciated. This is foremost for Windows clients, but Linux support would be good. Something like apt-get would've been great. Requirements: Simple to set up and use Set up groups of users (developers, management, sales, etc) Choose which software is installed for different groups Add new software to groups and have it automatically installed on clients Dependencies between software Nice-to-have: Linux client support OS-unattended installation Thanks in advance

    Read the article
