Search Results

Search found 27238 results on 1090 pages for 'local variable'.


  • "Exception: No extension found at None" when trying on use Selenium Firefox WebDriver on a Mac

    - by Gj
    Any ideas? In [1]: from selenium.firefox.webdriver import WebDriver In [2]: d=WebDriver() --------------------------------------------------------------------------- Exception Traceback (most recent call last) /usr/local/selenium-read-only/<ipython console> in <module>() /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/selenium-2.0_dev-py2.6.egg/selenium/firefox/webdriver.pyc in __init__(self, profile, timeout) 48 profile = FirefoxProfile(name=profile) 49 if not profile: ---> 50 profile = FirefoxProfile() 51 self.browser.launch_browser(profile) 52 RemoteWebDriver.__init__(self, /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/selenium-2.0_dev-py2.6.egg/selenium/firefox/firefox_profile.pyc in __init__(self, name, port, template_profile, extension_path) 72 73 if name == ANONYMOUS_PROFILE_NAME: ---> 74 self._create_anonymous_profile(template_profile) 75 self._refresh_ini() 76 else: /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/selenium-2.0_dev-py2.6.egg/selenium/firefox/firefox_profile.pyc in _create_anonymous_profile(self, template_profile) 82 self._copy_profile_source(template_profile) 83 self._update_user_preference() ---> 84 self.add_extension(extension_zip_path=self.extension_path) 85 self._launch_in_silent() 86 /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/selenium-2.0_dev-py2.6.egg/selenium/firefox/firefox_profile.pyc in add_extension(self, force_create, extension_zip_path) 152 not os.path.exists(extension_source_path)): 153 raise Exception( --> 154 "No extension found at %s" % extension_source_path) 155 156 logging.debug("extension_source_path : %s" % extension_source_path) Exception: No extension found at None
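    For reference, a minimal workaround sketch (not the project's documented recipe). The exception means this selenium 2.0-dev build never located its bundled WebDriver Firefox extension, so extension_path ends up None; the extension_path keyword below is taken from the FirefoxProfile signature visible in the traceback, while the .zip path is purely hypothetical and must point at the WebDriver extension built from the selenium checkout:

        import os
        from selenium.firefox.firefox_profile import FirefoxProfile

        # Hypothetical location of the built WebDriver Firefox extension.
        ext = "/usr/local/selenium-read-only/firefox/prebuilt/webdriver-extension.zip"

        print os.path.exists(ext)                     # confirm the artifact exists before blaming the profile code
        profile = FirefoxProfile(extension_path=ext)  # should no longer raise "No extension found at None"

    How a profile object is then handed to WebDriver varies between 2.0-dev revisions, so that wiring is left out here.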

    Read the article

  • Where does django look for sqlite3 installation/libraries?

    - by gath
    I'm having a bit of a problem making my django application run on SUSE Linux 9. I have Python 2.5 installed fine and Django 1.0 installed fine. I am able to execute the django command django-admin startproject fine, but when I run the runserver command I get the error below. I have a folder with sqlite3, and I can go in there and actually run the sqlite3 application, so now I am wondering: where does Django look for the sqlite libraries, and how can I fix this?

    Validating models... Unhandled exception in thread started by <function inner_run at 0x2a96cb4f50> Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/django/core/management/commands/runserver.py", line 48, in inner_run self.validate(display_num_errors=True) File "/usr/local/lib/python2.5/site-packages/django/core/management/base.py", line 122, in validate num_errors = get_validation_errors(s, app) File "/usr/local/lib/python2.5/site-packages/django/core/management/validation.py", line 22, in get_validation_errors from django.db import models, connection File "/usr/local/lib/python2.5/site-packages/django/db/__init__.py", line 16, in <module> backend = __import__('%s%s.base' % (_import_path, settings.DATABASE_ENGINE), {}, {}, ['']) File "/usr/local/lib/python2.5/site-packages/django/db/backends/sqlite3/base.py", line 27, in <module> raise ImproperlyConfigured, "Error loading %s module: %s" % (module, exc) django.core.exceptions.ImproperlyConfigured: Error loading sqlite3 module: No module named _sqlite3

    Gath
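    The backend in that traceback is simply importing Python's own sqlite bindings, so a quick probe run with the same python2.5 binary shows whether the interpreter build itself is missing the _sqlite3 C module (a minimal diagnostic sketch in Python 2 syntax to match the interpreter in question; nothing Django-specific is assumed):

        # Run this with the exact python2.5 that runs "manage.py runserver".
        try:
            from sqlite3 import dbapi2            # standard-library bindings; need the compiled _sqlite3 module
            print "stdlib sqlite3 OK:", dbapi2.sqlite_version
        except ImportError, exc:
            print "stdlib sqlite3 unavailable:", exc
            try:
                from pysqlite2 import dbapi2      # alternative package some installs provide
                print "pysqlite2 OK:", dbapi2.sqlite_version
            except ImportError, exc:
                print "pysqlite2 unavailable too:", exc

    If both imports fail, the Python 2.5 build itself lacks sqlite support and Django has nothing to load; rebuilding Python against the sqlite development headers, or installing a sqlite package for that interpreter, is the usual remedy.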

    Read the article

  • Working with mongodb from Java

    - by demas
    I have launched the mongodb server: [demas@arch.local.net][~]% mongod --dbpath /home/demas/temp/ Mon Apr 19 09:44:18 Mongo DB : starting : pid = 4538 port = 27017 dbpath = /home/demas/temp/ master = 0 slave = 0 32-bit ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data ** see http://blog.mongodb.org/post/137788967/32-bit-limitations for more Mon Apr 19 09:44:18 db version v1.4.0, pdfile version 4.5 Mon Apr 19 09:44:18 git version: nogitversion Mon Apr 19 09:44:18 sys info: Linux arch.local.net 2.6.33-ARCH #1 SMP PREEMPT Mon Apr 5 05:57:38 UTC 2010 i686 BOOST_LIB_VERSION=1_41 Mon Apr 19 09:44:18 waiting for connections on port 27017 Mon Apr 19 09:44:18 web admin interface listening on port 28017

    I have created documents with the console client: [demas@arch.local.net][~]% mongo MongoDB shell version: 1.4.0 url: test connecting to: test type "help" for help > db.some.find(); { "_id" : ObjectId("4bcbef3c3be43e9b7e04ef3d"), "name" : "mongo" } { "_id" : ObjectId("4bcbef423be43e9b7e04ef3e"), "x" : 3 }

    Now I am trying to work with MongoDB from Java: import com.mongodb.*; import java.net.UnknownHostException; public class test1 { public static void main(String[] args) { System.out.println("Start"); try { Mongo m = new Mongo("localhost", 27017); DB db = m.getDB("test"); DBCollection coll = db.getCollection("some"); coll.insert(makeDocument(10, "James", "male")); System.out.println("Finish"); } catch (UnknownHostException ex) { ex.printStackTrace(); } catch (MongoException ex) { ex.printStackTrace(); } } public static BasicDBObject makeDocument(int id, String name, String gender) { BasicDBObject doc = new BasicDBObject(); doc.put("id", id); doc.put("name", name); doc.put("gender", gender); return doc; } }

    But execution stops on the coll.insert() line: [demas@arch.local.net][~/dev/study/java/mongodb]% javac test1.java [demas@arch.local.net][~/dev/study/java/mongodb]% java test1 Start There are no messages from the mongodb server regarding an accepted connection. Why?

    Read the article

  • Git tutorial: Understanding git pull and branches (using a specific example repo)

    - by dreftymac
    Background: Suppose I have the following Git URLs (hosted on github): http://github.com/mikl/drupal.git and git://github.com/mikl/drupal.git (Git read-only). I am interested in having a local copy of this repository so I can practice working with branches in git and see how my local working tree can change depending on which branch I am working with. Questions:

    1. To get started, I set up a local directory and do git clone git://github.com/mikl/drupal.git ... Will this clone all of the branches? Or will it only clone master?
    2. The web front-end for github gives me a "drop down" menu that allows me to switch branches ... Does changing this drop-down actually change which branch I will be grabbing when I run git clone?
    3. If I want a new copy of this repository on my local machine, but I am interested in only two branches of this repository and I want to ignore all the rest, what command do I use to ensure I clone only those two branches and nothing else (assume one of the branches is master)?

    Read the article

  • HttpSendRequest not getting latest file from server

    - by Doug Kavendek
    I am having an issue with my HTTP requests in my app, such that if the remote file is the same size as the local file (even though its modified time is different, as its contents have been changed), attempts to download it return quickly and the newer file is not downloaded. In short, the process I am following is: Setting up an HTTP connection with the INTERNET_FLAG_RESYNCHRONIZE flag and calling HttpSendRequest(); then checking the HTTP status code and finding it to be "200". If the remote file is updated, but remains the same size as the local copy: The local file is unchanged after running the app. If I call HttpQueryInfo() with HTTP_QUERY_LAST_MODIFIED after sending the request, it gives me the actual last modified time of the server's file, which I can see is different from the local file I am trying to have it overwrite. If the remote file is updated, and the file size becomes different from the local copy: It is downloaded and overwrites the local copy as expected. Here's a fairly abridged version of the code, to cut out helpers and error checking: // szAppName = our app name HINTERNET hInternetHandle = InternetOpen( szAppName, INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0 ); // szServerName = our server name hInternetHandle = InternetConnect( hInternetHandle, szServerName, INTERNET_DEFAULT_HTTP_PORT, NULL, NULL, INTERNET_SERVICE_HTTP, NULL, 0 ); // szPath = the file to download LPCSTR aszDefault[2] = { "*/*", NULL }; DWORD dwFlags = 0 | INTERNET_FLAG_IGNORE_REDIRECT_TO_HTTP | INTERNET_FLAG_IGNORE_REDIRECT_TO_HTTPS | INTERNET_FLAG_KEEP_CONNECTION | INTERNET_FLAG_NO_AUTH | INTERNET_FLAG_NO_AUTO_REDIRECT | INTERNET_FLAG_NO_COOKIES | INTERNET_FLAG_NO_UI | INTERNET_FLAG_RESYNCHRONIZE; HINTERNET hHandle = HttpOpenRequest( hInternetHandle, "GET", szPath, NULL, NULL, aszDefault, dwFlags, 0 ); DWORD dwTimeOut = 10 * 1000; // In milliseconds InternetSetOption( hInternetHandle, INTERNET_OPTION_CONNECT_TIMEOUT, &dwTimeOut, sizeof( dwTimeOut ) ); InternetSetOption( hInternetHandle, INTERNET_OPTION_RECEIVE_TIMEOUT, &dwTimeOut, sizeof( dwTimeOut ) ); InternetSetOption( hInternetHandle, INTERNET_OPTION_SEND_TIMEOUT, &dwTimeOut, sizeof( dwTimeOut ) ); DWORD dwRetries = 5; InternetSetOption( hInternetHandle, INTERNET_OPTION_CONNECT_RETRIES, &dwRetries, sizeof( dwRetries ) ); HttpSendRequest( hInternetHandle, NULL, 0, NULL, 0 ); Since I have found I can query the remote file's last modified time, and find it to be accurate, I know it's actually getting to the server. I thought that specifying INTERNET_FLAG_RESYNCHRONIZE would force the file to resynch if it's out of date. Do I have it all wrong? Is this just how it's supposed to work?

    Read the article

  • Is it possible to use Google Gears inside of another Firefox extension?

    - by Dmitry Nedbaylo
    Basically, I want to implement an Offline/Online XUL application with the ability to upload data to a server. Yes, I know there is the Mozilla Storage API, but it looks like it is much easier with Gears to keep a local database and to upload local changes to the server using WorkerPool. Without Gears, I have no idea how to upload local changes to a remote server. Any thoughts, friends? Thanks in advance for any help.

    Read the article

  • mod_wsgi | linux installation error

    - by MMRUser
    I'm getting the following error when I try to install mod_wsgi ./configure checking for apxs2... no checking for apxs... /usr/sbin/apxs checking Apache version... 2.2.3 configure: creating ./config.status config.status: creating Makefile make /usr/sbin/apxs -c -I/usr/local/include/python2.6 -DNDEBUG mod_wsgi.c -L/usr/local/lib -L/usr/local/lib/python2.6/config -lpython2.6 -lpthread -ldl -lutil -lm /apr-1/build/libtool --silent --mode=compile gcc -prefer-pic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -fno-strict-aliasing -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -D_LARGEFILE64_SOURCE -pthread -I/usr/include/httpd -I/usr/include/apr-1 -I/usr/include/apr-1 -I/usr/local/include/python2.6 -DNDEBUG -c -o mod_wsgi.lo mod_wsgi.c && touch mod_wsgi.slo sh: /apr-1/build/libtool: No such file or directory apxs:Error: Command failed with rc=8323072 . make: *** [mod_wsgi.la] Error 1 libtool is installed on my system.. mod_wsgi 3.2 *Apache 2.2* *Python 2.6*

    Read the article

  • django+mod_wsgi on virtualenv not working

    - by jwesonga
    I've just finished setting up a django app on virtualenv, deployment went smoothly using a fabric script, but now the .wsgi is not working, I've tried every variation on the internet but no luck. My .wsgi file is: import os import sys import django.core.handlers.wsgi # put the Django project on sys.path root_path = os.path.abspath(os.path.dirname(__file__) + '../') sys.path.insert(0, os.path.join(root_path, 'kcdf')) sys.path.insert(0, root_path) os.environ['DJANGO_SETTINGS_MODULE'] = 'kcdf.settings' application = django.core.handlers.wsgi.WSGIHandler() I keep getting the same error: [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] mod_wsgi (pid=16938): Exception occurred processing WSGI script '/home/kcdfweb/webapps/kcdf.web/releases/current/kcdf/apache/kcdf.wsgi'. [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] Traceback (most recent call last): [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/wsgi.py", line 230, in __call__ [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] self.load_middleware() [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 33, in load_middleware [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] for middleware_path in settings.MIDDLEWARE_CLASSES: [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] File "/usr/local/lib/python2.6/dist-packages/django/utils/functional.py", line 269, in __getattr__ [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] self._setup() [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] File "/usr/local/lib/python2.6/dist-packages/django/conf/__init__.py", line 40, in _setup [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] self._wrapped = Settings(settings_module) [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] File "/usr/local/lib/python2.6/dist-packages/django/conf/__init__.py", line 75, in __init__ [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] raise ImportError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e) [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] ImportError: Could not import settings 'kcdf.settings' (Is it on sys.path? Does it have syntax errors?): No module named kcdf.settings my virtual environment is on /home/user/webapps/kcdfweb my app is /home/user/webapps/kcdf.web/releases/current/project_name my wsgi file home/user/webapps/kcdf.web/releases/current/project_name/apache/project_name.wsgi
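    Two things stand out in the script above, noted here as a hedged sketch rather than a definitive fix. First, root_path is built by string concatenation (os.path.dirname(__file__) + '../'), which yields a path ending in a component literally named 'apache..' instead of the parent directory, so nothing useful lands on sys.path. Second, the error log shows Django loading from the system /usr/local/lib/python2.6/dist-packages, meaning mod_wsgi is not seeing the virtualenv at all. A possible kcdf.wsgi, assuming the virtualenv lives at /home/kcdfweb/webapps/kcdfweb and uses Python 2.6 (both assumptions; adjust to the real layout):

        import os
        import site
        import sys

        # Hypothetical virtualenv root; its site-packages must be added explicitly.
        VENV = '/home/kcdfweb/webapps/kcdfweb'
        site.addsitedir(os.path.join(VENV, 'lib', 'python2.6', 'site-packages'))

        # This file lives in <release>/kcdf/apache/, so the directory containing
        # the 'kcdf' package is two levels up; os.path.join treats '..' as a path
        # component, unlike the string concatenation above.
        here = os.path.dirname(os.path.abspath(__file__))
        release_root = os.path.abspath(os.path.join(here, '..', '..'))
        sys.path.insert(0, os.path.join(release_root, 'kcdf'))
        sys.path.insert(0, release_root)

        os.environ['DJANGO_SETTINGS_MODULE'] = 'kcdf.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()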

    Read the article

  • Ubuntu: pip not working with python3.4

    - by val_
    Trying to get pip working on my Ubuntu PC. pip seems to be working for python2.7, but not for others. Here's the problem:

    $ pip Traceback (most recent call last): File "/usr/local/bin/pip", line 9, in <module> load_entry_point('pip==1.4.1', 'console_scripts', 'pip')() File "/usr/local/lib/python3.4/dist-packages/setuptools-1.1.5-py3.4.egg/pkg_resources.py", line 357, in load_entry_point def get_entry_info(dist, group, name): File "/usr/local/lib/python3.4/dist-packages/setuptools-1.1.5-py3.4.egg/pkg_resources.py", line 2394, in load_entry_point break File "/usr/local/lib/python3.4/dist-packages/setuptools-1.1.5-py3.4.egg/pkg_resources.py", line 2108, in load name = some.module:some.attr [extra1,extra2] ImportError: No module named 'pip'

    $ which pip /usr/local/bin/pip

    $ python2.7 -m pip //here can be just python, btw Usage: /usr/bin/python2.7 -m pip <command> [options] //and so on...

    $ python3.4 -m pip /usr/bin/python3.4: No module named pip

    From /home/user/.pip/pip.log: Exception: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 283, in run requirement_set.install(install_options, global_options, root=options.root_path) File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1431, in install requirement.uninstall(auto_confirm=True) File "/usr/lib/python2.7/dist-packages/pip/req.py", line 598, in uninstall paths_to_remove.remove(auto_confirm) File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1836, in remove renames(path, new_path) File "/usr/lib/python2.7/dist-packages/pip/util.py", line 295, in renames shutil.move(old, new) File "/usr/lib/python2.7/shutil.py", line 303, in move os.unlink(src) OSError: [Errno 13] Permission denied: '/usr/bin/pip'

    There's no /usr/bin/pip btw. How can I fix this issue so that pip works normally with Python 3.4? I am trying to use PyCharm, but its package manager also gets stuck on this problem. Thanks for your attention!
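    A small probe makes the mismatch in those tracebacks visible: /usr/local/bin/pip was installed into one interpreter's package directory, and python3.4 simply cannot import it. This is only a diagnostic sketch (standard library only, no assumptions about the pip installation itself); run it as python2.7 probe.py and again as python3.4 probe.py and compare:

        from __future__ import print_function
        import sys
        import sysconfig

        print("interpreter:   ", sys.executable)
        print("site-packages: ", sysconfig.get_path("purelib"))
        try:
            import pip
            print("pip importable from:", pip.__file__)
        except ImportError:
            print("pip is NOT importable by this interpreter")

    Whichever interpreter reports pip as missing needs a pip installed specifically for it, for example by running get-pip.py with python3.4, or python3.4 -m ensurepip where that module is enabled.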

    Read the article

  • Why such a long time span in creating the Session Factory?

    - by vijay.shad
    Hi My project is web application running in the tomcat container. This application is a spring framework based hibernate application. The problem with this is it takes a lot of time when creates session factory. here is the logs 2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] Session factory constructed with filter configurations : {} 2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] instantiating session factory with properties: {java.vendor=Sun Microsystems Inc., sun.java.launcher=SUN_STANDARD, catalina.base=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.management.compiler=HotSpot Tiered Compilers, catalina.useNaming=true, os.name=Linux, sun.boot.class.path=/usr/java/jdk1.6.0_17/jre/lib/resources.jar:/usr/java/jdk1.6.0_17/jre/lib/rt.jar:/usr/java/jdk1.6.0_17/jre/lib/sunrsasign.jar:/usr/java/jdk1.6.0_17/jre/lib/jsse.jar:/usr/java/jdk1.6.0_17/jre/lib/jce.jar:/usr/java/jdk1.6.0_17/jre/lib/charsets.jar:/usr/java/jdk1.6.0_17/jre/classes, java.util.logging.config.file=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/conf/logging.properties, java.vm.specification.vendor=Sun Microsystems Inc., hibernate.generate_statistics=true, java.runtime.version=1.6.0_17-b04, hibernate.cache.provider_class=org.hibernate.cache.EhCacheProvider, user.name=root, shared.loader=, tomcat.util.buf.StringCache.byte.enabled=true, hibernate.connection.release_mode=auto, user.language=en, java.naming.factory.initial=org.apache.naming.java.javaURLContextFactory, sun.boot.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386, java.version=1.6.0_17, java.util.logging.manager=org.apache.juli.ClassLoaderLogManager, user.timezone=Canada/Pacific, sun.arch.data.model=32, java.endorsed.dirs=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/endorsed, sun.cpu.isalist=, sun.jnu.encoding=UTF-8, file.encoding.pkg=sun.io, package.access=sun.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper.,sun.beans., file.separator=/, java.specification.name=Java Platform API Specification, java.class.version=50.0, user.country=US, java.home=/usr/java/jdk1.6.0_17/jre, java.vm.info=mixed mode, os.version=2.6.18-128.el5, path.separator=:, java.vm.version=14.3-b01, hibernate.jdbc.batch_size=25, java.awt.printerjob=sun.print.PSPrinterJob, sun.io.unicode.encoding=UnicodeLittle, package.definition=sun.,java.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper., java.naming.factory.url.pkgs=org.apache.naming, sun.rmi.dgc.client.gcInterval=3600000, user.home=/root, java.specification.vendor=Sun Microsystems Inc., java.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386/server:/usr/java/jdk1.6.0_17/jre/lib/i386:/usr/java/jdk1.6.0_17/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib, java.vendor.url=http://java.sun.com/, java.vm.vendor=Sun Microsystems Inc., hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect, sun.rmi.dgc.server.gcInterval=3600000, common.loader=${catalina.home}/lib,${catalina.home}/lib/*.jar, java.runtime.name=Java(TM) SE Runtime Environment, java.class.path=:/usr/local/InstalledPrograms/apache-tomcat-6.0.20/bin/bootstrap.jar, hibernate.bytecode.use_reflection_optimizer=false, java.vm.specification.name=Java Virtual Machine Specification, java.vm.specification.version=1.0, catalina.home=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.cpu.endian=little, sun.os.patch.level=unknown, hibernate.cache.use_query_cache=true, hibernate.connection.provider_class=org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider, 
java.io.tmpdir=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/temp, java.vendor.url.bug=http://java.sun.com/cgi-bin/bugreport.cgi, server.loader=, os.arch=i386, java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment, java.ext.dirs=/usr/java/jdk1.6.0_17/jre/lib/ext:/usr/java/packages/lib/ext, user.dir=/, line.separator=, java.vm.name=Java HotSpot(TM) Server VM, hibernate.cache.use_second_level_cache=true, file.encoding=UTF-8, java.specification.version=1.6, hibernate.show_sql=true} 2010-04-15 23:08:53,516 DEBUG [AbstractEntityPersister] Static SQL for entity: com.vsd.model.Order There you can see the time delay of more than 3 mins in executing these processes. My database is mysql and database server is running on the local machine only. The container environment is Centos Linux system. I am clueless about why it takes that much of time in executing these process, But when i do the same task from under eclipse it does not take that much of time. Development environment is Windows.

    Read the article

  • Cannot ping router with a static IP assigned?

    - by Uriah
    Alright. I am running Ubuntu LTS 12.04 and am trying to configure a local caching/master DNS server so I am using Bind9. First, here are some things via default DHCP: /etc/network/interfaces cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet dhcp # The primary network interface - STATIC #auto eth0 #iface eth0 inet static # address 192.168.2.113 # netmask 255.255.255.0 # network 192.168.2.0 # broadcast 192.168.2.255 # gateway 192.168.2.1 # dns-search uclemmer.net # dns-nameservers 192.168.2.113 8.8.8.8 /etc/resolv.conf cat /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver 192.168.2.1 search uclemmer.net ifconfig ifconfig eth0 Link encap:Ethernet HWaddr 00:14:2a:82:d4:9e inet addr:192.168.2.103 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::214:2aff:fe82:d49e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1067 errors:0 dropped:0 overruns:0 frame:0 TX packets:2504 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:153833 (153.8 KB) TX bytes:214129 (214.1 KB) Interrupt:23 Base address:0x8800 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:915 errors:0 dropped:0 overruns:0 frame:0 TX packets:915 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:71643 (71.6 KB) TX bytes:71643 (71.6 KB) ping ping -c 4 192.168.2.1 PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data. 64 bytes from 192.168.2.1: icmp_req=1 ttl=64 time=0.368 ms 64 bytes from 192.168.2.1: icmp_req=2 ttl=64 time=0.224 ms 64 bytes from 192.168.2.1: icmp_req=3 ttl=64 time=0.216 ms 64 bytes from 192.168.2.1: icmp_req=4 ttl=64 time=0.237 ms --- 192.168.2.1 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 2997ms rtt min/avg/max/mdev = 0.216/0.261/0.368/0.063 ms ping -c 4 google.com PING google.com (74.125.134.102) 56(84) bytes of data. 64 bytes from www.google-analytics.com (74.125.134.102): icmp_req=1 ttl=48 time=15.1 ms 64 bytes from www.google-analytics.com (74.125.134.102): icmp_req=2 ttl=48 time=11.4 ms 64 bytes from www.google-analytics.com (74.125.134.102): icmp_req=3 ttl=48 time=11.6 ms 64 bytes from www.google-analytics.com (74.125.134.102): icmp_req=4 ttl=48 time=11.5 ms --- google.com ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3003ms rtt min/avg/max/mdev = 11.488/12.465/15.118/1.537 ms ip route ip route default via 192.168.2.1 dev eth0 metric 100 192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.103 As you can see, with DHCP everything seems to work fine. Now, here are things with static IP: /etc/network/interfaces cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). 
# The loopback network interface auto lo iface lo inet loopback # The primary network interface #auto eth0 #iface eth0 inet dhcp # The primary network interface - STATIC auto eth0 iface eth0 inet static address 192.168.2.113 netmask 255.255.255.0 network 192.168.2.0 broadcast 192.168.2.255 gateway 192.168.2.1 dns-search uclemmer.net dns-nameservers 192.168.2.1 8.8.8.8 I have tried dns-nameservers in various combos of *.2.1, *.2.113, and other reliable, public nameservers. /etc/resolv.conf cat /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver 192.168.2.1 nameserver 8.8.8.8 search uclemmer.net Obviously, when I change the nameservers in the /etc/network/interfaces file, the nameservers change here too. ifconfig ifconfig eth0 Link encap:Ethernet HWaddr 00:14:2a:82:d4:9e inet addr:192.168.2.113 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::214:2aff:fe82:d49e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1707 errors:0 dropped:0 overruns:0 frame:0 TX packets:2906 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:226230 (226.2 KB) TX bytes:263497 (263.4 KB) Interrupt:23 Base address:0x8800 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:985 errors:0 dropped:0 overruns:0 frame:0 TX packets:985 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:78625 (78.6 KB) TX bytes:78625 (78.6 KB) ping ping -c 4 192.168.2.1 PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data. --- 192.168.2.1 ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 3023ms ping -c 4 google.com ping: unknown host google.com Lastly, here are my bind zone files: /etc/bind/named.conf.options cat /etc/bind/named.conf.options options { directory "/etc/bind"; // // // query-source address * port 53; notify-source * port 53; transfer-source * port 53; // If there is a firewall between you and nameservers you want // to talk to, you may need to fix the firewall to allow multiple // ports to talk. See http://www.kb.cert.org/vuls/id/800113 // If your ISP provided one or more IP addresses for stable // nameservers, you probably want to use them as forwarders. // Uncomment the following block, and insert the addresses replacing // the all-0's placeholder. // forwarders { // 0.0.0.0; // }; forwarders { // My local 192.168.2.113; // Comcast 75.75.75.75; 75.75.76.76; // Google 8.8.8.8; 8.8.4.4; // DNSAdvantage 156.154.70.1; 156.154.71.1; // OpenDNS 208.67.222.222; 208.67.220.220; // Norton 198.153.192.1; 198.153.194.1; // Verizon 4.2.2.1; 4.2.2.2; 4.2.2.3; 4.2.2.4; 4.2.2.5; 4.2.2.6; // Scrubit 67.138.54.100; 207.255.209.66; }; // // // //allow-query { localhost; 192.168.2.0/24; }; //allow-transfer { localhost; 192.168.2.113; }; //also-notify { 192.168.2.113; }; //allow-recursion { localhost; 192.168.2.0/24; }; //======================================================================== // If BIND logs error messages about the root key being expired, // you will need to update your keys. 
See https://www.isc.org/bind-keys //======================================================================== dnssec-validation auto; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; /etc/bind/named.conf.local cat /etc/bind/named.conf.local // // Do any local configuration here // // Consider adding the 1918 zones here, if they are not used in your // organization //include "/etc/bind/zones.rfc1918"; zone "example.com" { type master; file "/etc/bind/zones/db.example.com"; }; zone "2.168.192.in-addr.arpa" { type master; file "/etc/bind/zones/db.2.168.192.in-addr.arpa"; /etc/bind/zones/db.example.com cat /etc/bind/zones/db.example.com ; ; BIND data file for example.com interface ; $TTL 604800 @ IN SOA yossarian.example.com. root.example.com. ( 1343171970 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; @ IN NS yossarian.example.com. @ IN A 192.168.2.113 @ IN AAAA ::1 @ IN MX 10 yossarian.example.com. ; yossarian IN A 192.168.2.113 router IN A 192.168.2.1 printer IN A 192.168.2.200 ; ns01 IN CNAME yossarian.example.com. www IN CNAME yossarian.example.com. ftp IN CNAME yossarian.example.com. ldap IN CNAME yossarian.example.com. mail IN CNAME yossarian.example.com. /etc/bind/zones/db.2.168.192.in-addr.arpa cat /etc/bind/zones/db.2.168.192.in-addr.arpa ; ; BIND reverse data file for 2.168.192.in-addr interface ; $TTL 604800 @ IN SOA yossarian.example.com. root.example.com. ( 1343171970 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; @ IN NS yossarian.example.com. @ IN A 255.255.255.0 ; 113 IN PTR yossarian.example.com. 1 IN PTR router.example.com. 200 IN PTR printer.example.com. ip route ip route default via 192.168.2.1 dev eth0 metric 100 192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.113 I can SSH in to the machine locally at *.2.113 or at whatever address is dynamically assigned when in DHCP "mode". *2.113 is in my router's range and I have ports open and forwarding to the server. Pinging is enabled on the router too. I briefly had a static configuration working but it died after the first reboot. Please let me know what other info you might need. I am beyond frustrated/baffled.

    Read the article

  • changing default my.cnf path in mysql

    - by user377941
    I have two mysql instances on the same machine. The installations are in /usr/local/mysql1 and /usr/local/mysql2. I have separate my.cnf files located in /etc/mysql1 and /etc/mysql2. I installed the first instance of mysql from the source distribution with the --prefix=/usr/local/mysql1 option. The second one I got by copying and pasting the same directory to /usr/local/mysql2. When I start the mysql daemon in /usr/local/mysql1/libexec it reads the my.cnf file in /etc/mysql1, and if I start the mysql daemon in /usr/local/mysql2 it reads the same my.cnf file. I have separate port numbers and .sock files defined in the .cnf files in those two locations. I can make it read the my.cnf file in the second location by using the --defaults-file=/etc/mysql2/my.cnf option on mysqld startup, but I don't want to enter this each and every time I start the daemon. If I am going to have more instances, how can I point each mysql daemon at the correct my.cnf file? What is the rationale behind how mysqld locates the my.cnf file, and how can I predefine the location of the my.cnf file for each instance?

    Read the article

  • python-iptables: Cryptic error when allowing incoming TCP traffic on port 1234

    - by Lucas Kauffman
    I wanted to write an iptables script in Python. Rather than calling iptables itself I wanted to use the python-iptables package. However I'm having a hard time getting some basic rules setup. I wanted to use the filter chain to accept incoming TCP traffic on port 1234. So I wrote this: import iptc chain = iptc.Chain(iptc.TABLE_FILTER,"INPUT") rule = iptc.Rule() target = iptc.Target(rule,"ACCEPT") match = iptc.Match(rule,'tcp') match.dport='1234' rule.add_match(match) rule.target = target chain.insert_rule(rule) However when I run this I get this thrown back at me: Traceback (most recent call last): File "testing.py", line 9, in <module> chain.insert_rule(rule) File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1133, in insert_rule self.table.insert_entry(self.name, rbuf, position) File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1166, in new obj.refresh() File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1230, in refresh self._free() File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1224, in _free self.commit() File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1219, in commit raise IPTCError("can't commit: %s" % (self.strerror())) iptc.IPTCError: can't commit: Invalid argument Exception AttributeError: "'NoneType' object has no attribute 'get_errno'" in <bound method Table.__del__ of <iptc.Table object at 0x7fcad56cc550>> ignored Does anyone have experience with python-iptables that could enlighten on what I did wrong?
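    For comparison, a sketch of the same rule with the one addition that commonly clears this "can't commit: Invalid argument" failure: a protocol-specific match such as tcp is only valid once the rule's protocol is set (the equivalent of -p tcp). The constructor style below just mirrors the snippet above; newer python-iptables releases spell the table as iptc.Table(iptc.Table.FILTER) instead of iptc.TABLE_FILTER.

        import iptc

        chain = iptc.Chain(iptc.TABLE_FILTER, "INPUT")

        rule = iptc.Rule()
        rule.protocol = "tcp"            # required for the tcp match below

        match = iptc.Match(rule, "tcp")
        match.dport = "1234"
        rule.add_match(match)

        rule.target = iptc.Target(rule, "ACCEPT")
        chain.insert_rule(rule)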

    Read the article

  • FreeBSD high load loopback interface

    - by user1740915
    I have a problem with a FreeBSD server. There is a FreeBSD 9.0 amd64, two network cards em1 (internet), em0 (local network) configured firewall ipfw, natd, squid (not transparent), the server acts as a gateway for access to the Internet. Next problem: upload via squid is very low. At this moment I see next: natd, dhcpd load the cpu at that time when uploading through squid and there are a lot of traffic through the loopback interface. ipfw show output 0100 655389684 36707144666 allow ip from any to any via lo0 00200 0 0 deny ip from any to 127.0.0.0/8 00300 0 0 deny ip from 127.0.0.0/8 to any 00400 0 0 deny ip from any to ::1 00500 0 0 deny ip from ::1 to any 00600 4 292 allow ipv6-icmp from :: to ff02::/16 00700 0 0 allow ipv6-icmp from fe80::/10 to fe80::/10 00800 1 76 allow ipv6-icmp from fe80::/10 to ff02::/16 00900 0 0 allow ipv6-icmp from any to any ip6 icmp6types 1 01000 0 0 allow ipv6-icmp from any to any ip6 icmp6types 2,135,136 01100 1615 76160 deny ip from 192.168.1.1 to any in via em1 01200 0 0 deny ip from 199.69.99.11 to any in via em0 01300 46652 3705426 deny ip from any to 172.16.0.0/12 via em1 01400 3936404 345618870 deny ip from any to 192.168.0.0/16 via em1 01500 4 336 deny ip from any to 0.0.0.0/8 via em1 01600 4129 387621 deny ip from any to 169.254.0.0/16 via em1 01700 0 0 deny ip from any to 192.0.2.0/24 via em1 01800 917566 33777571 deny ip from any to 224.0.0.0/4 via em1 01900 147872 22029252 deny ip from any to 240.0.0.0/4 via em1 02000 1132194739 1190981955947 divert 8668 ip4 from any to any via em1 02100 3 248 deny ip from 172.16.0.0/12 to any via em1 02200 35925 2281289 deny ip from 192.168.0.0/16 to any via em1 02300 1808 122494 deny ip from 0.0.0.0/8 to any via em1 02400 3 174 deny ip from 169.254.0.0/16 to any via em1 02500 0 0 deny ip from 192.0.2.0/24 to any via em1 02600 0 0 deny ip from 224.0.0.0/4 to any via em1 02700 0 0 deny ip from 240.0.0.0/4 to any via em1 02800 960156249 1095316736582 allow tcp from any to any established 02900 64236062 8243196577 allow ip from any to any frag 03000 34 1756 allow tcp from any to me dst-port 25 setup 03100 193 11580 allow tcp from any to me dst-port 53 setup 03200 63 4222 allow udp from any to me dst-port 53 03300 64 8350 allow udp from me 53 to any 03400 417 24140 allow tcp from any to me dst-port 80 setup 03500 211 10472 allow ip from any to me dst-port 3389 setup 05300 77 4488 allow ip from any to me dst-port 1723 setup 05400 3 156 allow ip from any to me dst-port 8443 setup 05500 9882 590596 allow tcp from any to me dst-port 22 setup 05600 1 60 allow ip from any to me dst-port 2000 setup 05700 0 0 allow ip from any to me dst-port 2201 setup 07400 4241779 216690096 deny log logamount 1000 ip4 from any to any in via em1 setup proto tcp 07500 21135656 1048824936 allow tcp from any to any setup 07600 474447 35298081 allow udp from me to any dst-port 53 keep-state 07700 532 40612 allow udp from me to any dst-port 123 keep-state 65535 1990638432 1122305322718 allow ip from any to any systat -ifstat when uploading via squid Load Average ||| Interface Traffic Peak Total tun0 in 79.507 KB/s 232.479 KB/s 42.314 GB out 2.022 MB/s 2.424 MB/s 59.662 GB lo0 in 4.450 MB/s 4.450 MB/s 43.723 GB out 4.450 MB/s 4.450 MB/s 43.723 GB em1 in 2.629 MB/s 2.982 MB/s 464.533 GB out 2.493 MB/s 2.875 MB/s 484.673 GB em0 in 240.458 KB/s 296.941 KB/s 442.368 GB out 512.508 KB/s 850.857 KB/s 416.122 GB top output PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 66885 root 1 92 0 26672K 2784K CPU3 3 528:43 65.48% natd 9160 
dhcpd 1 45 0 31032K 9280K CPU1 1 7:40 32.96% dhcpd 66455 root 1 20 0 18344K 2856K select 1 119:27 1.37% openvpn 16043 squid 1 20 0 44404K 17884K kqread 2 0:22 0.29% squid squid.conf cat /usr/local/etc/squid/squid.conf # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 # Example rule allowing access from your local networks. # Adapt to list your (internal) IP networks from where browsing # should be allowed acl localnet src 10.0.0.0/8 # RFC1918 possible internal network acl localnet src 172.16.0.0/12 # RFC1918 possible internal network acl localnet src 192.168.0.0/16 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT # # Recommended minimum Access Permission configuration: # # Only allow cachemgr access from localhost http_access allow manager localhost http_access deny manager # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on "localhost" is a local user http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow localnet http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port 192.168.1.1:3128 # Uncomment and adjust the following to add a disk cache directory. #cache_dir ufs /var/squid/cache 100 16 256 # Leave coredumps in the first cache dir coredump_dir /var/squid/cache I understand that the traffic passes through the SQUID several times. But can not find why.

    Read the article

  • Tomcat 7 on Ubuntu 12.04 with JRE 7 not starting

    - by Andreas Krueger
    I am running a virtual server in the web on Ubuntu 12.04 LTS / 32 Bit. After a clean install of JRE 7 and Tomcat 7, following the instructions on http://www.sysadminslife.com, I don't get Tomcat 7 up and running. > java -version java version "1.7.0_09" Java(TM) SE Runtime Environment (build 1.7.0_09-b05) Java HotSpot(TM) Client VM (build 23.5-b02, mixed mode) > /etc/init.d/tomcat start Starting Tomcat Using CATALINA_BASE: /usr/local/tomcat Using CATALINA_HOME: /usr/local/tomcat Using CATALINA_TMPDIR: /usr/local/tomcat/temp Using JRE_HOME: /usr/lib/jvm/java-7-oracle Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar > telnet localhost 8080 Trying ::1... Trying 127.0.0.1... telnet: Unable to connect to remote host: Connection refused netstat sometimes shows a Java process, most of the times not. If it does, nothing works either. Does anyone have a solution or encountered similar situations? Here are the contents of catalina.out: 16.11.2012 18:36:39 org.apache.catalina.core.AprLifecycleListener init INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-6-oracle/lib/i386/client:/usr/lib/jvm/java-6-oracle/lib/i386:/usr/lib/jvm/java-6-oracle/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib 16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["http-bio-8080"] 16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["ajp-bio-8009"] 16.11.2012 18:36:40 org.apache.catalina.startup.Catalina load INFO: Initialization processed in 1509 ms 16.11.2012 18:36:40 org.apache.catalina.core.StandardService startInternal INFO: Starting service Catalina 16.11.2012 18:36:40 org.apache.catalina.core.StandardEngine startInternal INFO: Starting Servlet Engine: Apache Tomcat/7.0.29 16.11.2012 18:36:40 org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /usr/local/tomcat/webapps/manager Here come the results of ps -ef, iptables --list and netstat -plut: > ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 Nov16 ? 00:00:00 init root 2 1 0 Nov16 ? 00:00:00 [kthreadd/206616] root 3 2 0 Nov16 ? 00:00:00 [khelper/2066167] root 4 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 5 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 6 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 7 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 8 2 0 Nov16 ? 00:00:00 [nfsiod/2066167] root 119 1 0 Nov16 ? 00:00:00 upstart-udev-bridge --daemon root 125 1 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 157 125 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 158 125 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 205 1 0 Nov16 ? 00:00:00 upstart-socket-bridge --daemon root 276 1 0 Nov16 ? 00:00:00 /usr/sbin/sshd -D root 335 1 0 Nov16 ? 00:00:00 /usr/sbin/xinetd -dontfork -pidfile /var/run/xinetd.pid -stayalive -inetd root 348 1 0 Nov16 ? 00:00:00 cron syslog 368 1 0 Nov16 ? 00:00:00 /sbin/syslogd -u syslog root 472 1 0 Nov16 ? 00:00:00 /usr/lib/postfix/master postfix 482 472 0 Nov16 ? 00:00:00 qmgr -l -t fifo -u root 520 1 0 Nov16 ? 00:00:04 /usr/sbin/apache2 -k start www-data 523 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start www-data 525 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start www-data 526 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start tomcat 1074 1 0 Nov16 ? 00:01:08 /usr/lib/jvm/java-6-oracle/bin/java -Djava.util.logging.config.file=/usr/ postfix 1351 472 0 Nov16 ? 
00:00:00 tlsmgr -l -t unix -u -c postfix 3413 472 0 17:00 ? 00:00:00 pickup -l -t fifo -u -c root 3457 276 0 17:31 ? 00:00:00 sshd: root@pts/0 root 3459 3457 0 17:31 pts/0 00:00:00 -bash root 3470 3459 0 17:31 pts/0 00:00:00 ps -ef > iptables --list Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt ACCEPT tcp -- anywhere anywhere tcp dpt:8005 ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination > netstat -plut Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:smtp *:* LISTEN 472/master tcp 0 0 *:3213 *:* LISTEN 276/sshd tcp6 0 0 [::]:smtp [::]:* LISTEN 472/master tcp6 0 0 [::]:8009 [::]:* LISTEN 1074/java tcp6 0 0 [::]:3213 [::]:* LISTEN 276/sshd tcp6 0 0 [::]:http-alt [::]:* LISTEN 1074/java tcp6 0 0 [::]:http [::]:* LISTEN 520/apache2

    Read the article

  • Error trying to run rails server

    - by David87
    I am trying to get a basic Rails application to run on my Mac OS X 10.6.5. I created a new app called demo (rails new demo), then went into the demo directory and tried to start the app with rails server. Here is the error message I received: "/Users/dpetrovi/.gem/ruby/1.8/gems/sqlite3-ruby-1.3.2/lib/sqlite3/sqlite3_native.bundle: [BUG] Segmentation fault ruby 1.8.7 (2010-12-23 patchlevel 330) [i686-darwin10] Abort trap" I checked bundle install in the demo folder: "Using rake (0.8.7) Using abstract (1.0.0) Using activesupport (3.0.3) Using builder (2.1.2) Using i18n (0.5.0) Using activemodel (3.0.3) Using erubis (2.6.6) Using rack (1.2.1) Using rack-mount (0.6.13) Using rack-test (0.5.6) Using tzinfo (0.3.23) Using actionpack (3.0.3) Using mime-types (1.16) Using polyglot (0.3.1) Using treetop (1.4.9) Using mail (2.2.13) Using actionmailer (3.0.3) Using arel (2.0.6) Using activerecord (3.0.3) Using activeresource (3.0.3) Using bundler (1.0.7) Using thor (0.14.6) Using railties (3.0.3) Using rails (3.0.3) Using sqlite3-ruby (1.3.2) Your bundle is complete! Use bundle show [gemname] to see where a bundled gem is installed." Ruby, RubyGems, and sqlite3 were installed using MacPorts. Then I used gem to try to install the sqlite3-ruby interface. (sudo gem install sqlite3-ruby). Here is where I first noticed something could be off: "Successfully installed sqlite3-ruby-1.3.2 1 gem installed Installing ri documentation for sqlite3-ruby-1.3.2... No definition for libversion Enclosing class/module 'mSqlite3' for class Statement not known Installing RDoc documentation for sqlite3-ruby-1.3.2... No definition for libversion Enclosing class/module 'mSqlite3' for class Statement not known " I had rails running well on my system a few months ago, so I figured maybe I had some duplicates and it was trying to use the wrong one. I ran: "for cmd in ruby irb gem rake; do which $cmd; done" and got: "/opt/local/bin/ruby /opt/local/bin/irb /opt/local/bin/gem /opt/local/bin/rake" Checking where sqlite3 also gets me: "/opt/local/bin/sqlite3" so they all seem to be in the right place. Obviously /opt/local/bin is in my system path. If I check gems server, it shows that I have installed sqlite3-ruby 1.3.2 gem. Not sure what the problem could be? I am using ruby 1.8.7 (2010-12-23 patchlevel 330) [i686-darwin10]. Macports claims this is the latest (although ive seen 1.9.1) One more thing-- in irb, I tried to check which version of sqlite3 my sqlite3-ruby is bound to, but I can only get this far: ":irb(main):001:0 require 'rubygems' = true irb(main):002:0 require 'sqlite3' /Users/dpetrovi/.gem/ruby/1.8/gems/sqlite3-ruby-1.3.2/lib/sqlite3/sqlite3_native.bundle: [BUG] Segmentation fault ruby 1.8.7 (2010-12-23 patchlevel 330) [i686-darwin10] Abort trap" Any suggestions? Im hoping I overlooked something obvious. Thanks

    Read the article

  • data directory in automake

    - by Alex Farber
    I have some data files that should be distributed with my program. Using dist_pkgdata_DATA in Makefile.am, I get these files installed to /usr/local/data/share/package-name. The problem is that the data is read-only, and my program needs to modify it. Playing with the dist_sharedstate_DATA, dist_localstate_DATA, and dist_data_DATA variables, I got different installation directories, like /usr/local/com and /usr/local/var, but the data is always read-only. How can I distribute modifiable data files with my package? I need some common directory for all users, or maybe local data in a user directory.

    Read the article

  • How to add the ImageMagick install to my path on Ubuntu

    - by Josh
    I have been on a roller coaster trying to get ImageMagick to work on my Ubuntu slice. Whenever I try to upload an image I get the following error: /tmp/stream.1170.0 is not recognized by the 'identify' command. If I type 'which identify' I get: /usr/local/bin/identify If I run '/usr/local/bin/identify' or just 'identify', I get the following error: /usr/local/bin/identify: error while loading shared libraries: libMagickCore.so.3: cannot open shared object file: No such file or directory If I run '/usr/bin/identify', ImageMagick runs just fine. How can I set my path so that when Paperclip runs the identify command, it points to /usr/bin/identify? Thanks. p.s. I have tried adding this to paperclip.rb: Paperclip.options[:command_path] = '/usr/bin' and Paperclip.options[:command_path] = '/usr/local/bin'

    Read the article

  • Curious about python installation paths, especially on OSX.

    - by chiggsy
    First: I'm running MacPorts. No problems with that, except that /opt/local/Library/Frameworks/Python.framework/Versions/2.6/bin is the value of sys.exec_prefix for my MacPorts python, even though /opt/local/lib/python2.6/site-packages/ seems to be quite a logical place to put things, /opt/local being the MacPorts --prefix, as it were. Why does easy_install put things in this odd Frameworks/Python.framework tree? More importantly, can I use the methods here to ensure that all my system-wide Python scripts, particularly the ones I really want in /opt/local/bin and use all over the place, (i|b)python for example, are accessible?
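    A small probe (standard library only, nothing MacPorts-specific assumed) prints where a given interpreter believes packages and scripts belong; run with /opt/local/bin/python it shows why easy_install targets the Frameworks tree, since that tree is sys.prefix for a framework build of Python, and the /opt/local/lib/python2.6/site-packages path is commonly just a link into it (worth checking with ls -l):

        from __future__ import print_function
        import sys
        from distutils.sysconfig import get_python_lib

        print("sys.prefix:        ", sys.prefix)
        print("sys.exec_prefix:   ", sys.exec_prefix)
        print("site-packages:     ", get_python_lib())
        print("script dir (guess):", sys.exec_prefix + "/bin")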

    Read the article

  • What permissions needed to connect to SQL Server Integration Services

    - by rwmnau
    I need to allow a consultant to connect to SSIS on a SQL Server 2008 box without making him a local administrator. If I add him to the local administrators group, he can connect to SSIS just fine, but it seems that I can't grant him enough permissions through SQL Server to give him these rights without being a local admin. I've added him to every role on the server, every database role in MSDB shy of DBO, and he's still not able to connect. I don't see any SSIS-related Windows groups on the server. Is membership in the Local Administrators group really required to connect to the SSIS instance on a SQL Server? It seems like there should be somewhere I can grant "SSIS Admin" rights to a user (even if it's a Windows account and not a SQL account), but I can't find that place. UPDATE: I've found an MSDN article (see the section titled "Eliminating the 'Access is Denied' Error") that describes how to resolve the problem, but even after following the steps I'm still not able to connect. Just wanted to add it to the discussion.

    Read the article

  • Keeping a large volume of data in Session - Suggestions / alternatives?

    - by Fishcake
    I'm developing a web app for which the client wants us to query their data as little as possible. The data will be coming from a Microsoft CRM instance. So we've agreed that data will only be queried as and when it is needed, therefore if a web user wants to see a list of contacts (for example) that list is fetched into a local DataTable. Then if a new contact is created on the website the new contact is sent to CRM and added to the local DataTable at the same time. Likewise for edits. If the user then looks at their contacts again the data will just come from the local DataTable. At the moment local data is being kept in Session but my concern is that too much memory will start being used up. However traffic is expected to be pretty small, perhaps no more than 20 concurrent users so am I worrying about nothing or is there a better way you can suggest to handle this?

    Read the article

  • MySQL 5.5.8 server won't start on Mac OS 10.6.5

    - by EdwardLau
    I installed MySQL 5.5.8 on Mac OS 10.6.5, and after restarting the computer I get the message: "/Library/StartupItems/MySQLCOM" has not been started because it does not have the proper security settings. I ran sudo /Applications/TextEdit.app/Contents/MacOS/TextEdit /usr/local/mysql/support-files/mysql.server, located the configuration defining the basedir, and set the following: basedir=/usr/local/mysql datadir=/usr/local/mysql/data. But when I click "Start MySQL Server" in the MySQL preference pane, the server doesn't start. I then ran sudo chown -R root:wheel /Library/StartupItems/MySQLCOM and restarted again; now there is no warning message, but the mysql server still does not start. Why?

    Read the article

  • Oracle Linux Tips and Tricks: Using SSH

    - by Robert Chase
    Out of all of the utilities available to systems administrators ssh is probably the most useful of them all. Not only does it allow you to log into systems securely, but it can also be used to copy files, tunnel IP traffic and run remote commands on distant servers. It’s truly the Swiss army knife of systems administration. Secure Shell, also known as ssh, was developed in 1995 by Tau Ylonen after the University of Technology in Finland suffered a password sniffing attack. Back then it was common to use tools like rcp, rsh, ftp and telnet to connect to systems and move files across the network. The main problem with these tools is they provide no security and transmitted data in plain text including sensitive login credentials. SSH provides this security by encrypting all traffic transmitted over the wire to protect from password sniffing attacks. One of the more common use cases involving SSH is found when using scp. Secure Copy (scp) transmits data between hosts using SSH and allows you to easily copy all types of files. The syntax for the scp command is: scp /pathlocal/filenamelocal remoteuser@remotehost:/pathremote/filenameremote In the following simple example, I move a file named myfile from the system test1 to the system test2. I am prompted to provide valid user credentials for the remote host before the transfer will proceed.  If I were only using ftp, this information would be unencrypted as it went across the wire.  However, because scp uses SSH, my user credentials and the file and its contents are confidential and remain secure throughout the transfer.  [user1@test1 ~]# scp /home/user1/myfile user1@test2:/home/user1user1@test2's password: myfile                                    100%    0     0.0KB/s   00:00 You can also use ssh to send network traffic and utilize the encryption built into ssh to protect traffic over the wire. This is known as an ssh tunnel. In order to utilize this feature, the server that you intend to connect to (the remote system) must have TCP forwarding enabled within the sshd configuraton. To enable TCP forwarding on the remote system, make sure AllowTCPForwarding is set to yes and enabled in the /etc/ssh/sshd_conf file: AllowTcpForwarding yes Once you have this configured, you can connect to the server and setup a local port which you can direct traffic to that will go over the secure tunnel. The following command will setup a tunnel on port 8989 on your local system. You can then redirect a web browser to use this local port, allowing the traffic to go through the encrypted tunnel to the remote system. It is important to select a local port that is not being used by a service and is not restricted by firewall rules.  In the following example the -D specifies a local dynamic application level port forwarding and the -N specifies not to execute a remote command.   ssh –D 8989 [email protected] -N You can also forward specific ports on both the local and remote host. The following example will setup a port forward on port 8080 and forward it to port 80 on the remote machine. ssh -L 8080:farwebserver.com:80 [email protected] You can even run remote commands via ssh which is quite useful for scripting or remote system administration tasks. The following example shows how to  log in remotely and execute the command ls –la in the home directory of the machine. Because ssh encrypts the traffic, the login credentials and output of the command are completely protected while they travel over the wire. 
[rchase@test1 ~]$ ssh rchase@test2 'ls -la'rchase@test2's password: total 24drwx------  2 rchase rchase 4096 Sep  6 15:17 .drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..-rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history-rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout-rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile-rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc You can execute any command contained in the quotations marks as long as you have permission with the user account that you are using to log in. This can be very powerful and useful for collecting information for reports, remote controlling systems and performing systems administration tasks using shell scripts. To make your shell scripts even more useful and to automate logins you can use ssh keys for running commands remotely and securely without the need to enter a password. You can accomplish this with key based authentication. The first step in setting up key based authentication is to generate a public key for the system that you wish to log in from. In the following example you are generating a ssh key on a test system. In case you are wondering, this key was generated on a test VM that was destroyed after this article. [rchase@test1 .ssh]$ ssh-keygen -t rsaGenerating public/private rsa key pair.Enter file in which to save the key (/home/rchase/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/rchase/.ssh/id_rsa.Your public key has been saved in /home/rchase/.ssh/id_rsa.pub.The key fingerprint is:7a:8e:86:ef:59:70:ef:43:b7:ee:33:03:6e:6f:69:e8 rchase@test1The key's randomart image is:+--[ RSA 2048]----+|                 ||  . .            ||   o .           ||    . o o        ||   o o oS+       ||  +   o.= =      ||   o ..o.+ =     ||    . .+. =      ||     ...Eo       |+-----------------+ Now that you have the key generated on the local system you should to copy it to the target server into a temporary location. The user’s home directory is fine for this. [rchase@test1 .ssh]$ scp id_rsa.pub rchase@test2:/home/rchaserchase@test2's password: id_rsa.pub                  Now that the file has been copied to the server, you need to append it to the authorized_keys file. This should be appended to the end of the file in the event that there are other authorized keys on the system. [rchase@test2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys Once the process is complete you are ready to login. Since you are using key based authentication you are not prompted for a password when logging into the system.   [rchase@test1 ~]$ ssh test2Last login: Fri Sep  6 17:42:02 2013 from test1 This makes it much easier to run remote commands. Here’s an example of the remote command from earlier. With no password it’s almost as if the command ran locally. [rchase@test1 ~]$ ssh test2 'ls -la'total 32drwx------  3 rchase rchase 4096 Sep  6 17:40 .drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..-rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history-rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout-rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile-rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc As a security consideration it's important to note the permissions of .ssh and the authorized_keys file.  .ssh should be 700 and authorized_keys should be set to 600.  This prevents unauthorized access to ssh keys from other users on the system.   An even easier way to move keys back and forth is to use ssh-copy-id. 
Instead of copying the file and appending it manually to the authorized_keys file, ssh-copy-id does both steps at once for you.  Here’s an example of moving the same key using ssh-copy-id.The –i in the example is so that we can specify the path to the id file, which in this case is /home/rchase/.ssh/id_rsa.pub [rchase@test1]$ ssh-copy-id -i /home/rchase/.ssh/id_rsa.pub rchase@test2 One of the last tips that I will cover is the ssh config file. By using the ssh config file you can setup host aliases to make logins to hosts with odd ports or long hostnames much easier and simpler to remember. Here’s an example entry in our .ssh/config file. Host dev1 Hostname somereallylonghostname.somereallylongdomain.com Port 28372 User somereallylongusername12345678 Let’s compare the login process between the two. Which would you want to type and remember? ssh somereallylongusername12345678@ somereallylonghostname.somereallylongdomain.com –p 28372 ssh dev1 I hope you find these tips useful.  There are a number of tools used by system administrators to streamline processes and simplify workflows and whether you are new to Linux or a longtime user, I'm sure you will agree that SSH offers useful features that can be used every day.  Send me your comments and let us know the ways you  use SSH with Linux.  If you have other tools you would like to see covered in a similar post, send in your suggestions.

    Read the article
