Search Results

Search found 27637 results on 1106 pages for 'source packages'.

  • Getting MSDeploy working on our build/integration server - Is an MSBuild upgrade necessary?

    - by Jeff D
    We have what I think is a fairly standard build process: 1. Developer: Check in code 2. Build: Polls repo, sees change, and kicks off build that: 3. Build: Updates from repo, builds w/ MSBuild, runs unit tests w/ NUnit, 4. Build: creates installer package. Our security team allows us to pull from the build server, but does not allow the build server to push. So we generally RDP in, download the installers, and run them, which rules out the slick deployment services, so I would need to generate packages instead. I'd like to use MSDeploy, except that we have the following issues: We're on .NET 3.5, and the MSBuild target (Package) that uses MSDeploy requires 4.0. Is there anything I'd need to install other than .NET 4.0 RC for this? (Would MSBuild be part of that upgrade?) When I generate packages with MSDeploy, I see that I don't have just one file. There's a zip, deploy.cmd, SourceManifest.xml, and SetParameters.xml. What are all the other files for, and why wouldn't they all be in the 'package'? It sounds as if you can create packages by telling the system to look at a working IIS site. But if the packages are built from a CI environment, aren't you basically out of luck here? It feels like they designed some of this for small-scale developers deploying from their dev environment. That's a fine use case, but I'm interested in seeing what everyone's enterprise experience is with the tool. Any suggestions?

    Read the article

  • Accessing the Atlassian Crowd SOAP API with Suds (python SOAP library)

    - by SeanOC
    Has anybody had any recent success with accessing the Crowd SOAP API via the Suds Python library? I've found a few people successfully doing it in the past, but Atlassian seems to have changed their WSDL since then, making the existing advice not entirely helpful. Below is the simplest example I've been trying: from suds.client import Client url = 'https://crowd.hugeinc.com/services/SecurityServer?wsdl' client = Client(url) Unfortunately that generates the following error: Traceback (most recent call last): File "<input>", line 1, in <module> File "/Users/soconnor/.virtualenvs/hugeface/lib/python2.6/site-packages/suds/client.py", line 116, in __init__ sd = ServiceDefinition(self.wsdl, s) File "/Users/soconnor/.virtualenvs/hugeface/lib/python2.6/site-packages/suds/servicedefinition.py", line 58, in __init__ self.paramtypes() File "/Users/soconnor/.virtualenvs/hugeface/lib/python2.6/site-packages/suds/servicedefinition.py", line 137, in paramtypes item = (pd[1], pd[1].resolve()) File "/Users/soconnor/.virtualenvs/hugeface/lib/python2.6/site-packages/suds/xsd/sxbasic.py", line 63, in resolve raise TypeNotFound(qref) TypeNotFound: Type not found: '(AuthenticatedToken, http://authentication.integration.crowd.atlassian.com, )' I've tried using both bindings and doctors to fix this problem, to no avail. Neither approach resulted in any change. Any further recommendations or suggestions would be incredibly helpful.
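
    A workaround that is often suggested for suds TypeNotFound errors is to pre-register the schema namespace with an ImportDoctor so the WSDL's types can be resolved. This is only a sketch and has not been verified against the Crowd WSDL; the URL is the one from the question, and the namespace is the one named in the error.

        from suds.client import Client
        from suds.xsd.doctor import Import, ImportDoctor

        url = 'https://crowd.hugeinc.com/services/SecurityServer?wsdl'

        # Register the XML Schema namespace and apply it to the Crowd
        # authentication namespace that the TypeNotFound error points at.
        imp = Import('http://www.w3.org/2001/XMLSchema',
                     location='http://www.w3.org/2001/XMLSchema.xsd')
        imp.filter.add('http://authentication.integration.crowd.atlassian.com')
        client = Client(url, doctor=ImportDoctor(imp))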

    Read the article

  • local variable 'sresult' referenced before assignment

    - by user288558
    I have had multiple problems trying to use PP. I am running python2.6 and pp 1.6.0 rc3. Using the following test code: import pp nodes=('mosura02','mosura03','mosura04','mosura05','mosura06', 'mosura09','mosura10','mosura11','mosura12') def pptester(): js=pp.Server(ppservers=nodes) tmp=[] for i in range(200): tmp.append(js.submit(ppworktest,(),(),('os',))) return tmp def ppworktest(): return os.system("uname -a") gives me the following result: In [10]: Exception in thread run_local: Traceback (most recent call last): File "/usr/lib64/python2.6/threading.py", line 525, in __bootstrap_inner self.run() File "/usr/lib64/python2.6/threading.py", line 477, in run self.__target(*self.__args, **self.__kwargs) File "/home/wkerzend/python_coala/lib/python2.6/site-packages/pp.py", line 751, in _run_local job.finalize(sresult) UnboundLocalError: local variable 'sresult' referenced before assignment Exception in thread run_local: Traceback (most recent call last): File "/usr/lib64/python2.6/threading.py", line 525, in __bootstrap_inner self.run() File "/usr/lib64/python2.6/threading.py", line 477, in run self.__target(*self.__args, **self.__kwargs) File "/home/wkerzend/python_coala/lib/python2.6/site-packages/pp.py", line 751, in _run_local job.finalize(sresult) UnboundLocalError: local variable 'sresult' referenced before assignment Exception in thread run_local: Traceback (most recent call last): File "/usr/lib64/python2.6/threading.py", line 525, in __bootstrap_inner self.run() File "/usr/lib64/python2.6/threading.py", line 477, in run self.__target(*self.__args, **self.__kwargs) File "/home/wkerzend/python_coala/lib/python2.6/site-packages/pp.py", line 751, in _run_local job.finalize(sresult) UnboundLocalError: local variable 'sresult' referenced before assignment Exception in thread run_local: Traceback (most recent call last): File "/usr/lib64/python2.6/threading.py", line 525, in __bootstrap_inner self.run() File "/usr/lib64/python2.6/threading.py", line 477, in run self.__target(*self.__args, **self.__kwargs) File "/home/wkerzend/python_coala/lib/python2.6/site-packages/pp.py", line 751, in _run_local job.finalize(sresult) UnboundLocalError: local variable 'sresult' referenced before assignment any help greatly appreciated
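
    The UnboundLocalError inside pp.py usually masks an underlying failure to hand the job to a worker (for example, a remote ppserver that is not running or not reachable). A minimal local-only test, assuming pp 1.6.0rc3, can help separate a pp installation problem from a networking problem:

        import pp

        def ppworktest():
            import os                      # import inside the function so workers have it
            return os.system("uname -a")

        js = pp.Server()                   # no ppservers: local workers only
        jobs = [js.submit(ppworktest) for _ in range(4)]
        print [job() for job in jobs]      # if this works, suspect the remote ppserver.py nodes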

    Read the article

  • How do I stop Python install on Mac OS X from putting things in my home directory?

    - by Rob
    Hi, I'm trying to install Python from source on my Mac (OS X 10.6.2, Python-2.6.5.tar.bz2). I've done this before and it was easy, but for some reason, this time after ./configure and make, the sudo make install puts some things in my home directory instead of in /usr/local/... where I expect. The .py files are okay, but not the .so files... RobsMac Python-2.6.5 $ sudo make install [...] /usr/bin/install -c -m 644 ./Lib/anydbm.py /usr/local/lib/python2.6 /usr/bin/install -c -m 644 ./Lib/ast.py /usr/local/lib/python2.6 /usr/bin/install -c -m 644 ./Lib/asynchat.py /usr/local/lib/python2.6 [...] running build_scripts running install_lib creating /Users/rob/Library/Python creating /Users/rob/Library/Python/2.6 creating /Users/rob/Library/Python/2.6/site-packages copying build/lib.macosx-10.4-x86_64-2.6/_AE.so -> /Users/rob/Library/Python/2.6/site-packages copying build/lib.macosx-10.4-x86_64-2.6/_AH.so -> /Users/rob/Library/Python/2.6/site-packages copying build/lib.macosx-10.4-x86_64-2.6/_App.so -> /Users/rob/Library/Python/2.6/site-packages [...] Later, this causes imports that require those .so files to fail. For example... RobsMac Python-2.6.5 $ python Python 2.6.5 (r265:79063, Apr 28 2010, 13:40:18) [GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import zlib Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named zlib Any ideas what is wrong? Thanks, Rob
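
    One possible cause worth ruling out is a per-user distutils configuration redirecting install paths: the .so extension modules are installed by distutils (unlike the Makefile-copied .py files), and distutils honors a ~/.pydistutils.cfg and any setup.cfg it finds. A quick, hedged check, run from the Python-2.6.5 source directory:

        # Any file reported as present here may be redirecting distutils
        # installs into ~/Library/Python instead of /usr/local.
        import os.path
        for cfg in (os.path.expanduser("~/.pydistutils.cfg"), "setup.cfg"):
            print cfg, "->", "present" if os.path.exists(cfg) else "not found"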

    Read the article

  • Fitting Gaussian KDE in numpy/scipy in Python

    - by user248237
    I am fitting a Gaussian kernel density estimator to a variable that is the difference of two vectors, called "diff", as follows: gaussian_kde_covfact(diff, smoothing_param) -- where gaussian_kde_covfact is defined as: class gaussian_kde_covfact(stats.gaussian_kde): def __init__(self, dataset, covfact = 'scotts'): self.covfact = covfact scipy.stats.gaussian_kde.__init__(self, dataset) def _compute_covariance_(self): '''not used''' self.inv_cov = np.linalg.inv(self.covariance) self._norm_factor = sqrt(np.linalg.det(2*np.pi*self.covariance)) * self.n def covariance_factor(self): if self.covfact in ['sc', 'scotts']: return self.scotts_factor() if self.covfact in ['si', 'silverman']: return self.silverman_factor() elif self.covfact: return float(self.covfact) else: raise ValueError, \ 'covariance factor has to be scotts, silverman or a number' def reset_covfact(self, covfact): self.covfact = covfact self.covariance_factor() self._compute_covariance() This works, but there is an edge case where the diff is a vector of all 0s. In that case, I get the error: File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/stats/kde.py", line 334, in _compute_covariance self.inv_cov = linalg.inv(self.covariance) File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/linalg/basic.py", line 382, in inv if info>0: raise LinAlgError, "singular matrix" numpy.linalg.linalg.LinAlgError: singular matrix What's a way to get around this? In this case, I'd like it to return a density that's essentially peaked completely at a difference of 0, with no mass everywhere else. thanks.
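
    Since a zero-variance sample has no well-defined KDE (its covariance matrix is singular by construction), one option is to detect that case before fitting and substitute a degenerate "all mass at one point" stand-in. A rough sketch under that assumption; gaussian_kde_covfact is the class from the question, and returning np.inf at the point is just one convention for the spike:

        import numpy as np

        def fit_diff_kde(diff, smoothing_param='scotts'):
            diff = np.asarray(diff, dtype=float)
            if np.allclose(diff, diff[0]):
                # Degenerate case: all observations identical; return a spike "density".
                point = diff[0]
                return lambda x: np.where(
                    np.abs(np.asarray(x, dtype=float) - point) < 1e-12, np.inf, 0.0)
            return gaussian_kde_covfact(diff, smoothing_param)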

    Read the article

  • Django Admin Page missing CSS

    - by super9
    I saw this question and recommendation from Django Projects here but still can't get this to work. My Django Admin pages are not displaying the CSS at all. This is my current configuration. settings.py ADMIN_MEDIA_PREFIX = '/media/admin/' httpd.conf <VirtualHost *:80> DocumentRoot /home/django/sgel ServerName ec2-***-**-***-***.ap-**********-1.compute.amazonaws.com ErrorLog /home/django/sgel/logs/apache_error.log CustomLog /home/django/sgel/logs/apache_access.log combined WSGIScriptAlias / /home/django/sgel/apache/django.wsgi <Directory /home/django/sgel/media> Order deny,allow Allow from all </Directory> <Directory /home/django/sgel/apache> Order deny,allow Allow from all </Directory> LogLevel warn Alias /media/ /home/django/sgel/media/ </VirtualHost> <VirtualHost *:80> ServerName sgel.com Redirect permanent / http://www.sgel.com/ </VirtualHost> In addition, I also ran the following to create (I think) the symbolic link ln -s /home/djangotest/sgel/media/admin/ /usr/lib/python2.6/site-packages/django/contrib/admin/media/ UPDATE In my httpd.conf file, User django Group django When I run ls -l in my /media directory drwxr-xr-x 2 root root 4096 Apr 4 11:03 admin -rw-r--r-- 1 root root 9 Apr 8 09:02 test.txt Should that root user be django instead? UPDATE 2 When I enter ls -la in my /media/admin folder total 12 drwxr-xr-x 2 root root 4096 Apr 13 03:33 . drwxr-xr-x 3 root root 4096 Apr 8 09:02 .. lrwxrwxrwx 1 root root 60 Apr 13 03:33 media -> /usr/lib/python2.6/site-packages/django/contrib/admin/media/ The thing is, when I navigate to /usr/lib/python2.6/site-packages/django/contrib/admin/media/, the folder was empty. So I copied the CSS, IMG and JS folders from my Django installation into /usr/lib/python2.6/site-packages/django/contrib/admin/media/ and it still didn't work
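
    Since the symlink above points at an empty directory, one hedged step is to confirm where the installed Django actually keeps its admin media (older releases that use ADMIN_MEDIA_PREFIX ship it under contrib/admin/media) and alias /media/admin/ at that path, with read permissions for the Apache user. A small check, assuming the site runs under the same Python as the shell:

        # Print the admin media directory shipped with the installed Django.
        import os
        import django
        print os.path.join(os.path.dirname(django.__file__), 'contrib', 'admin', 'media')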

    Read the article

  • installing paramiko

    - by fixxxer
    This may sound like a repeated question on SF, but I could not find a clear answer to it yet. So: I installed Paramiko 1.7 with the "setup.py install" command, and while running the demo.py program I got this error: Traceback (most recent call last): File "C:\Documents and Settings\fixavier\Desktop\paramiko-1.7\demos\demo.py", line 33, in <module> import paramiko File "C:\Python26\lib\site-packages\paramiko\__init__.py", line 69, in <module> from transport import randpool, SecurityOptions, Transport File "C:\Python26\lib\site-packages\paramiko\transport.py", line 32, in <module> from paramiko import util File "C:\Python26\lib\site-packages\paramiko\util.py", line 31, in <module> from paramiko.common import * File "C:\Python26\lib\site-packages\paramiko\common.py", line 99, in <module> from Crypto.Util.randpool import PersistentRandomPool, RandomPool ImportError: No module named Crypto.Util.randpool I'm getting this error even after installing PyCrypto 2.1. On running test.py (which comes with the installation), I got the following error - Traceback (most recent call last): File "C:\Documents and Settings\fixavier\Desktop\pycrypto-2.0.1\pycrypto-2.0.1\test.py", line 18, in <module> from Crypto.Util import test File "C:\Documents and Settings\fixavier\Desktop\pycrypto-2.0.1\pycrypto-2.0.1\build/lib.win32-2.6\Crypto\Util\test.py", line 17, in <module> import testdata File "C:\Documents and Settings\fixavier\Desktop\pycrypto-2.0.1\pycrypto-2.0.1\test\testdata.py", line 450, in <module> from Crypto.Cipher import AES ImportError: cannot import name AES I don't have the confidence to go ahead and install AES after all this; for all I know I may get another ImportError! Please advise. Is it the way of installation that's problematic?
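
    Both tracebacks point at the Crypto package rather than at paramiko itself, so before reinstalling anything else it may help to confirm which PyCrypto build the C:\Python26 interpreter actually sees. A hedged sanity check, run with that same interpreter:

        import Crypto
        print getattr(Crypto, '__version__', 'unknown'), Crypto.__file__
        from Crypto.Cipher import AES               # fails if the compiled ciphers are missing
        from Crypto.Util import randpool            # the module paramiko 1.7 imports
        import paramiko
        print paramiko.__version__

    If either Crypto import fails here, the PyCrypto installation (not paramiko) is what needs fixing first.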

    Read the article

  • How to Select Items in Dropdown in Selenium

    - by Marcus Gladir
    Firstly, I have been trying to get the dropdown from this web page: http://solutions.3m.com/wps/portal/3M/en_US/Interconnect/Home/Products/ProductCatalog/Catalog/?PC_Z7_RJH9U5230O73D0ISNF9B3C3SI1000000_nid=RFCNF5FK7WitWK7G49LP38glNZJXPCDXLDbl This is the code I have: import urllib2 from bs4 import BeautifulSoup import re from pprint import pprint import sys from selenium import common from selenium import webdriver import selenium.webdriver.support.ui as ui from boto.s3.key import Key import requests url = 'http://solutions.3m.com/wps/portal/3M/en_US/Interconnect/Home/Products/ProductCatalog/Catalog/?PC_Z7_RJH9U5230O73D0ISNF9B3C3SI1000000_nid=RFCNF5FK7WitWK7G49LP38glNZJXPCDXLDbl' element_xpath = '//*[@id="Component1"]' driver = webdriver.PhantomJS() driver.get(url) element = driver.find_element_by_xpath(element_xpath) element_xpath = '/option[@value="02"]' all_options = element.find_elements_by_tag_name("option") for option in all_options: print("Value is: %s" % option.get_attribute("value")) option.click() source = driver.page_source.encode('utf-8', 'ignore') driver.quit() source = str(source) soup = BeautifulSoup(source, 'html.parser') print soup What prints out is this: Traceback (most recent call last): File "../../../../test.py", line 58, in <module> Value is: XX main() File "../../../../test.py", line 46, in main option.click() File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 54, in click self._execute(Command.CLICK_ELEMENT) File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 228, in _execute return self._parent.execute(command, params) File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/webdriver.py", line 165, in execute self.error_handler.check_response(response) File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/errorhandler.py", line 158, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.ElementNotVisibleException: Message: u'{"errorMessage":"Element is not currently visible and may not be manipulated","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"81","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:51413","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\\"sessionId\\": \\"30e4fd50-f0e4-11e3-8685-6983e831d856\\", \\"id\\": \\":wdc:1402434863875\\"}","url":"/click","urlParsed":{"anchor":"","query":"","file":"click","directory":"/","path":"/click","relative":"/click","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/click","queryKey":{},"chunks":["click"]},"urlOriginal":"/session/30e4fd50-f0e4-11e3-8685-6983e831d856/element/%3Awdc%3A1402434863875/click"}}' ; Screenshot: available via screen And the weirdest most infuriating bit of it all is that sometimes it actually all works out. I have no clue what's going on here.
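
    One common way around ElementNotVisibleException when driving a select control is to let Selenium's Select helper set the value instead of clicking the individual option elements, after waiting for the control to appear. A sketch, untested against that 3M page; the element id and the "02" value are taken from the question:

        from selenium import webdriver
        from selenium.webdriver.support.ui import Select, WebDriverWait

        url = ('http://solutions.3m.com/wps/portal/3M/en_US/Interconnect/Home/Products/'
               'ProductCatalog/Catalog/?PC_Z7_RJH9U5230O73D0ISNF9B3C3SI1000000'
               '_nid=RFCNF5FK7WitWK7G49LP38glNZJXPCDXLDbl')

        driver = webdriver.PhantomJS()
        driver.get(url)
        dropdown = WebDriverWait(driver, 10).until(
            lambda d: d.find_element_by_id('Component1'))
        Select(dropdown).select_by_value('02')      # instead of option.click()
        print driver.page_source[:200].encode('utf-8', 'ignore')
        driver.quit()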

    Read the article

  • Yahoo BOSS Python Library, ExpatError

    - by Wraith
    I tried to install the Yahoo BOSS mashup framework, but am having trouble running the examples provided. Examples 1, 2, 5, and 6 work, but 3 & 4 give Expat errors. Here is the output from ex3.py: gpython examples/ex3.py examples/ex3.py:33: Warning: 'as' will become a reserved keyword in Python 2.6 Traceback (most recent call last): File "examples/ex3.py", line 27, in <module> digg = db.select(name="dg", udf=titlef, url="http://digg.com/rss_search?search=google+android&area=dig&type=both&section=news") File "/usr/lib/python2.5/site-packages/yos/yql/db.py", line 214, in select tb = create(name, data=data, url=url, keep_standards_prefix=keep_standards_prefix) File "/usr/lib/python2.5/site-packages/yos/yql/db.py", line 201, in create return WebTable(name, d=rest.load(url), keep_standards_prefix=keep_standards_prefix) File "/usr/lib/python2.5/site-packages/yos/crawl/rest.py", line 38, in load return xml2dict.fromstring(dl) File "/usr/lib/python2.5/site-packages/yos/crawl/xml2dict.py", line 41, in fromstring t = ET.fromstring(s) File "/usr/lib/python2.5/xml/etree/ElementTree.py", line 963, in XML parser.feed(text) File "/usr/lib/python2.5/xml/etree/ElementTree.py", line 1245, in feed self._parser.Parse(data, 0) xml.parsers.expat.ExpatError: syntax error: line 1, column 0 It looks like both examples are failing when trying to query Digg.com. Here is the query that is constructed in ex3.py's code: diggf = lambda r: {"title": r["title"]["value"], "diggs": int(r["diggCount"]["value"])} digg = db.select(name="dg", udf=diggf, url="http://digg.com/rss_search?search=google+android&area=dig&type=both&section=news") Any help is appreciated. Thanks!
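
    An ExpatError at line 1, column 0 usually means the response was not XML at all (an HTML error page, a redirect, or an empty body). A small check that fetches the Digg URL from the question directly, outside the BOSS framework, shows what actually comes back:

        import urllib2

        url = ('http://digg.com/rss_search?search=google+android'
               '&area=dig&type=both&section=news')
        resp = urllib2.urlopen(url)
        print resp.info().gettype()   # expect something like text/xml for a feed
        print resp.read(300)          # a leading '<html' would explain the ExpatError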

    Read the article

  • Distutils - Where Am I going wrong?

    - by RJBrady
    I wanted to learn how to create python packages, so I visited http://docs.python.org/distutils/index.html. For this exercise I'm using Python 2.6.2 on Windows XP. I followed along with the simple example and created a small test project: person/ setup.py person/ __init__.py person.py My person.py file is simple: class Person(object): def __init__(self, name="", age=0): self.name = name self.age = age def sound_off(self): print "%s %d" % (self.name, self.age) And my setup.py file is: from distutils.core import setup setup(name='person', version='0.1', packages=['person'], ) I ran python setup.py sdist and it created MANIFEST, dist/ and build/. Next I ran python setup.py install and it installed it to my site packages directory. I run the python console and can import the person module, but I cannot import the Person class. >>>import person >>>from person import Person Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name Person I checked the files added to site-packages and checked the sys.path in the console, they seem ok. Why can't I import the Person class. Where did I go wrong?
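
    Given the layout in the question, the most likely explanation is that Person lives one level deeper than the import expects: it is defined in the person module inside the person package. Either import it from there, or re-export it from the package's __init__.py:

        # Option 1: import from the inner module
        from person.person import Person

        # Option 2: add "from .person import Person" to person/__init__.py,
        # after which "from person import Person" works as originally attempted.

        p = Person('Alice', 30)
        p.sound_off()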

    Read the article

  • SQLAlchemy DetachedInstanceError with regular attribute (not a relation)

    - by haridsv
    I just started using SQLAlchemy and get a DetachedInstanceError and can't find much information on this anywhere. I am using the instance outside a session, so it is natural that SQLAlchemy is unable to load any relations if they are not already loaded, however, the attribute I am accessing is not a relation, in fact this object has no relations at all. I found solutions such as eager loading, but I can't apply to this because this is not a relation. I even tried "touching" this attribute before closing the session, but it still doesn't prevent the exception. What could be causing this exception for a non-relational property even after it has been successfully accessed once before? Any help in debugging this issue is appreciated. I will meanwhile try to get a reproducible stand-alone scenario and update here. Update: This is the actual exception message with a few stacks: File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/attributes.py", line 159, in __get__ return self.impl.get(instance_state(instance), instance_dict(instance)) File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/attributes.py", line 377, in get value = callable_(passive=passive) File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/state.py", line 280, in __call__ self.manager.deferred_scalar_loader(self, toload) File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/mapper.py", line 2323, in _load_scalar_attributes (state_str(state))) DetachedInstanceError: Instance <ReportingJob at 0xa41cd8c> is not bound to a Session; attribute refresh operation cannot proceed The partial model looks like this: metadata = MetaData() ModelBase = declarative_base(metadata=metadata) class ReportingJob(ModelBase): __tablename__ = 'reporting_job' job_id = Column(BigInteger, Sequence('job_id_sequence'), primary_key=True) client_id = Column(BigInteger, nullable=True) And the field client_id is what is causing this exception with a usage like the below: Query: jobs = session \ .query(ReportingJob) \ .filter(ReportingJob.job_id == job_id) \ .all() if jobs: # FIXME(Hari): Workaround for the attribute getting lazy-loaded. jobs[0].client_id return jobs[0] This is what triggers the exception later out of the session scope: msg = msg + ", client_id: %s" % job.client_id
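
    One frequent cause of this pattern is that the Session expires all loaded attributes on commit, so a plain column that was readable inside the session has to be re-loaded later, when the instance is already detached. If that is what is happening here, either keep the session open while the object is used, or disable expire_on_commit (or expunge/merge the instance). A hedged sketch; engine and job_id are assumed to exist as in the question:

        from sqlalchemy.orm import sessionmaker

        Session = sessionmaker(bind=engine, expire_on_commit=False)
        session = Session()
        try:
            job = (session.query(ReportingJob)
                          .filter(ReportingJob.job_id == job_id)
                          .first())
        finally:
            session.close()

        # client_id keeps the value loaded during the query, because it was
        # not expired when the session committed or closed.
        print job.client_id if job else None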

    Read the article

  • virtualenv macosX --no-site-package ignored

    - by Tristram Gräbener
    Hello, I'm having problems with Mac OS X and virtualenv. It seems to ignore --no-site-packages. Using exactly the same commands on Linux (Arch Linux) it works. This is Mac OS X 10.5 with Python 2.5. curl -o virtualenv.py 'http://bitbucket.org/ianb/virtualenv/raw/tip/virtualenv.py' Create a new environment: python virtualenv.py --no-site-packages foo New python executable in foo/bin/python Installing setuptools...........................done. Activate it: source foo/bin/activate Try to install something in it. Despite the virtualenv, it looks for the system-wide install: easy_install cherrypy Searching for cherrypy Best match: CherryPy 3.1.2 Adding CherryPy 3.1.2 to easy-install.pth file Using /Library/Python/2.5/site-packages Processing dependencies for cherrypy Finished processing dependencies for cherrypy Yet it doesn't find the module: (foo)guidage-multimodal:~ tristram$ python Python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12) [GCC 4.0.1 (Apple Inc. build 5465)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import cherrypy Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named cherrypy I tried pip after looking at http://stackoverflow.com/questions/1382925/virtualenv-no-site-packages-and-pip-still-finding-global-packages but it fails installing psycopg2 (some problems with gcc). Also, I would like to be able to have a setup.py (from distribute) that does the whole work.
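
    The "Using /Library/Python/2.5/site-packages" line above suggests the easy_install being run is the system-wide one, not the one the virtualenv put in foo/bin. Two quick checks from inside the activated environment:

        # Confirm which interpreter and site-packages are actually active.
        import sys
        print sys.executable                 # should point into .../foo/bin
        print [p for p in sys.path if 'site-packages' in p]

    and running foo/bin/easy_install (or installing pip into the environment) explicitly, rather than the bare easy_install found on the shell PATH.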

    Read the article

  • "Exception: No extension found at None" when trying on use Selenium Firefox WebDriver on a Mac

    - by Gj
    Any ideas? In [1]: from selenium.firefox.webdriver import WebDriver In [2]: d=WebDriver() --------------------------------------------------------------------------- Exception Traceback (most recent call last) /usr/local/selenium-read-only/<ipython console> in <module>() /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/selenium-2.0_dev-py2.6.egg/selenium/firefox/webdriver.pyc in __init__(self, profile, timeout) 48 profile = FirefoxProfile(name=profile) 49 if not profile: ---> 50 profile = FirefoxProfile() 51 self.browser.launch_browser(profile) 52 RemoteWebDriver.__init__(self, /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/selenium-2.0_dev-py2.6.egg/selenium/firefox/firefox_profile.pyc in __init__(self, name, port, template_profile, extension_path) 72 73 if name == ANONYMOUS_PROFILE_NAME: ---> 74 self._create_anonymous_profile(template_profile) 75 self._refresh_ini() 76 else: /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/selenium-2.0_dev-py2.6.egg/selenium/firefox/firefox_profile.pyc in _create_anonymous_profile(self, template_profile) 82 self._copy_profile_source(template_profile) 83 self._update_user_preference() ---> 84 self.add_extension(extension_zip_path=self.extension_path) 85 self._launch_in_silent() 86 /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/selenium-2.0_dev-py2.6.egg/selenium/firefox/firefox_profile.pyc in add_extension(self, force_create, extension_zip_path) 152 not os.path.exists(extension_source_path)): 153 raise Exception( --> 154 "No extension found at %s" % extension_source_path) 155 156 logging.debug("extension_source_path : %s" % extension_source_path) Exception: No extension found at None

    Read the article

  • HTTPSConnection module missing in Python 2.6 on CentOS 5.2

    - by d2kagw
    Hi guys, I'm playing around with a Python application on CentOS 5.2. It uses the Boto module to communicate with Amazon Web Services, which requires communication through a HTTPS connection. When I try running my application I get an error regarding HTTPSConnection being missing: "AttributeError: 'module' object has no attribute 'HTTPSConnection'" Google doesn't really return anything relevant, I've tried most of the solutions but none of them solve the problem. Has anyone come across anything like it? Here's the traceback: Traceback (most recent call last): File "./chatter.py", line 114, in <module> sys.exit(main()) File "./chatter.py", line 92, in main chatter.status( ) File "/mnt/application/chatter/__init__.py", line 161, in status cQueue.connect() File "/mnt/application/chatter/tools.py", line 42, in connect self.connection = SQSConnection(cConfig.get("AWS", "KeyId"), cConfig.get("AWS", "AccessKey")); File "/usr/local/lib/python2.6/site-packages/boto-1.7a-py2.6.egg/boto/sqs/connection.py", line 54, in __init__ self.region.endpoint, debug, https_connection_factory) File "/usr/local/lib/python2.6/site-packages/boto-1.7a-py2.6.egg/boto/connection.py", line 418, in __init__ debug, https_connection_factory) File "/usr/local/lib/python2.6/site-packages/boto-1.7a-py2.6.egg/boto/connection.py", line 189, in __init__ self.refresh_http_connection(self.server, self.is_secure) File "/usr/local/lib/python2.6/site-packages/boto-1.7a-py2.6.egg/boto/connection.py", line 247, in refresh_http_connection connection = httplib.HTTPSConnection(host) AttributeError: 'module' object has no attribute 'HTTPSConnection'
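
    httplib only defines HTTPSConnection when the interpreter was built with SSL support, so a locally compiled Python 2.6 on CentOS without the OpenSSL development headers present at build time produces exactly this AttributeError. A quick check with the same python2.6 binary:

        import socket, httplib
        print hasattr(socket, 'ssl'), hasattr(httplib, 'HTTPSConnection')
        import _ssl   # ImportError here means this build has no SSL support compiled in

    If the check fails, the usual fix is to install the OpenSSL development package and rebuild/reinstall that Python.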

    Read the article

  • Cocoa App with Python extension which use Scipy -> ImportError: No module named scipy

    - by Snej
    Hi: I have installed Scipy (via macports) for Python on my Mac and it runs fine when running Python scripts. But now I'm using Scipy (via PyObjc) for calculations embedded in a Cocoa App frontend. The following error occurs: ImportError: No module named scipy I am using the "Python.framework" in XCode. Does anybody know why Scipy module is not found? I even added it manually to the module search path via sys.path.append("/opt/local/var/macports/software/py26-scipy/0.7.1_0+gcc43/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/") EDIT: I found the problem myself. The path should be without "/scipy" at the end. But now I got an architecture problem: ImportError: dlopen(/opt/local/var/macports/software/py26-scipy/0.7.1_0+gcc43/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so, 2): no suitable image found. Did find: /opt/local/var/macports/software/py26-scipy/0.7.1_0+gcc43/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so: mach-o, but wrong architecture EDIT 2: I checked the architectures: Yes, sure it is an architecture problem. But when I run: file /opt/local/var/macports/software/py26-scipy/0.7.1_0+gcc43/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so I get a result Mach-O 64-bit bundle x86_64. And the Mac OS 10.6 PYTHON is: Mach-O universal binary with 3 architectures /usr/bin/python (for architecture x86_64): Mach-O 64-bit executable x86_64 /usr/bin/python (for architecture i386): Mach-O executable i386 /usr/bin/python (for architecture ppc7400): Mach-O executable ppc I build the XCode project as x86_64.
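
    A quick way to confirm the mismatch is to print the architecture of the interpreter actually running inside the Cocoa app; the embedded Python.framework may be starting as i386 or ppc even though the .so files and the Xcode target are x86_64. A small diagnostic to run from the embedded interpreter:

        import platform, sys
        print platform.machine()        # e.g. 'i386' vs 'x86_64'
        print sys.maxint > 2 ** 32      # True only under a 64-bit interpreter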

    Read the article

  • How do I recover from pushing a gitosis.conf file with parsing errors due to line breaks?

    - by Kasia
    I have successfully set up gitosis for an Android mirror (containing multiple git repositories). While adding a new .git path following writable= in gitosis.conf I managed to insert a few line breaks. I saved, committed, and pushed to the server, at which point I received the following parsing error: Traceback (most recent call last): File "/usr/bin/gitosis-run-hook", line 8, in load_entry_point('gitosis==0.2', 'console_scripts', 'gitosis-run-hook')() File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 24, in run return app.main() File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 38, in main self.handle_args(parser, cfg, options, args) File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 75, in handle_args post_update(cfg, git_dir) File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 33, in post_update cfg.read(os.path.join(export, '..', 'gitosis.conf')) File "/usr/lib/python2.5/ConfigParser.py", line 267, in read self._read(fp, filename) File "/usr/lib/python2.5/ConfigParser.py", line 490, in _read raise e ConfigParser.ParsingError: File contains parsing errors: ./gitosis-export/../gitosis.conf (...) I have removed the line break and amended the commit with git commit -m "fix linebreak" --amend. However, git push still yields the exact same error, which leads me to believe gitosis is preventing me from doing any further pushes. How do I recover from this?
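
    Since the post-update hook parses gitosis.conf with ConfigParser, the same parser can confirm locally that the file is clean before pushing again; the server keeps rejecting pushes until it receives a parseable config. A minimal local check, run from the gitosis-admin working copy:

        # ParsingError pinpoints any remaining bad line in the config.
        import ConfigParser
        cfg = ConfigParser.ConfigParser()
        cfg.read('gitosis.conf')
        print cfg.sections()

    If the file parses locally but the push still fails, the copy already on the server may need to be repaired by hand (or the broken commit reverted) so the hook can run cleanly again.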

    Read the article

  • BeautifulSoup HTMLParseError. What's wrong with this?

    - by user1915496
    This is my code: from bs4 import BeautifulSoup as BS import urllib2 url = "http://services.runescape.com/m=news/recruit-a-friend-for-free-membership-and-xp" res = urllib2.urlopen(url) soup = BS(res.read()) other_content = soup.find_all('div',{'class':'Content'})[0] print other_content Yet an error comes up: /Library/Python/2.7/site-packages/bs4/builder/_htmlparser.py:149: RuntimeWarning: Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help. "Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help.")) Traceback (most recent call last): File "web.py", line 5, in <module> soup = BS(res.read()) File "/Library/Python/2.7/site-packages/bs4/__init__.py", line 172, in __init__ self._feed() File "/Library/Python/2.7/site-packages/bs4/__init__.py", line 185, in _feed self.builder.feed(self.markup) File "/Library/Python/2.7/site-packages/bs4/builder/_htmlparser.py", line 150, in feed raise e I've let two other people use this code, and it works for them perfectly fine. Why is it not working for me? I have bs4 installed...
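
    As the warning itself suggests, this page trips up Python's built-in HTMLParser on some setups; installing an external parser and naming it explicitly usually gets around it. A sketch assuming lxml or html5lib has been installed (pip install lxml / pip install html5lib):

        from bs4 import BeautifulSoup as BS
        import urllib2

        url = 'http://services.runescape.com/m=news/recruit-a-friend-for-free-membership-and-xp'
        res = urllib2.urlopen(url)
        soup = BS(res.read(), 'lxml')        # or 'html5lib'
        print soup.find_all('div', {'class': 'Content'})[0]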

    Read the article

  • tracd server problems

    - by deddihp
    Hello, I got the following error while accessing tracd server, what's going on ? Thanks. [oke@localhost Trac-0.11.7]$ sudo tracd -p 8000 /home/deddihp/trac/ Server starting in PID 5082. Serving on 0.0.0.0:8000 view at http://127.0.0.1:8000/ ---------------------------------------- Exception happened during processing of request from ('127.0.0.1', 47804) Traceback (most recent call last): File "/usr/lib/python2.6/SocketServer.py", line 558, in process_request_thread self.finish_request(request, client_address) File "/usr/lib/python2.6/SocketServer.py", line 320, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/lib/python2.6/SocketServer.py", line 615, in __init__ self.handle() File "/usr/lib/python2.6/BaseHTTPServer.py", line 329, in handle self.handle_one_request() File "/usr/lib/python2.6/site-packages/Trac-0.11.7-py2.6.egg/trac/web/wsgi.py", line 194, in handle_one_request gateway.run(self.server.application) File "/usr/lib/python2.6/site-packages/Trac-0.11.7-py2.6.egg/trac/web/wsgi.py", line 94, in run response = application(self.environ, self._start_response) File "/usr/lib/python2.6/site-packages/Trac-0.11.7-py2.6.egg/trac/web/standalone.py", line 100, in __call__ return self.application(environ, start_response) File "/usr/lib/python2.6/site-packages/Trac-0.11.7-py2.6.egg/trac/web/main.py", line 346, in dispatch_request locale.setlocale(locale.LC_ALL, environ['trac.locale']) File "/usr/lib/python2.6/locale.py", line 513, in setlocale return _setlocale(category, locale) Error: unsupported locale setting ----------------------------------------
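
    The last frame shows locale.setlocale() rejecting whatever locale the environment advertises, so the failure can be reproduced (and the offending value inspected) directly from Python before touching Trac:

        import locale, os
        print os.environ.get('LANG'), os.environ.get('LC_ALL')
        locale.setlocale(locale.LC_ALL, '')   # raises locale.Error if that locale is not installed

    If this raises, exporting a locale that actually exists on the machine (for example LC_ALL=C, or an installed UTF-8 locale) before starting tracd is a common way around the error.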

    Read the article

  • Cannot open a SQL2000 DTS package I imported into SQL2008

    - by RJ
    I am running into a problem trying to open a SQL2000 DTS package I imported into SQL2008. I set up a new server and installed a fresh install of SQL2008. The database I need to run is a SQL2000 database. I moved the database over with no problem but there are a few DTS packages that need to run in legacy on SQL2008. I exported the DTS packages I need out of SQL2000 and imported them successfully into SQL2008. My SQL2008 server is x64. I can see the DTS packages under Data Transformation Service in Legacy but when I try to open the package I get this message. "SQL Server 2000 DTS Designer components are required to edit DTS packages. Install the special web download, "SQL Server 2000 DTS Designer components" to use this feature. (Microsoft.SqlServer.DtsObjectExplorerUI)" I downloaded the components and installed them and still get this error. I researched and found an article about this not working on x64 so I have an x86 machine that I installed the SQL2008 tools and tried to open the package from there and got the same error. I have spent days on this and need help. Has anyone run across this and can tell me what to do. If you have solved this problem, please help me out. Thanks.

    Read the article

  • how to solve a weired swig python c++ interfacing type error

    - by user2981648
    I want to use swig to switch a simple cpp function to python and use "scipy.integrate.quadrature" function to calculate the integration. But python 2.7 reports a type error. Do you guys know what is going on here? Thanks a lot. Furthermore, "scipy.integrate.quad" runs smoothly. So is there something special for "scipy.integrate.quadrature" function? The code is in the following: File "testfunctions.h": #ifndef TESTFUNCTIONS_H #define TESTFUNCTIONS_H double test_square(double x); #endif File "testfunctions.cpp": #include "testfunctions.h" double test_square(double x) { return x * x; } File "swig_test.i" : /* File : swig_test.i */ %module swig_test %{ #include "testfunctions.h" %} /* Let's just grab the original header file here */ %include "testfunctions.h" File "test.py": import scipy.integrate import _swig_test print scipy.integrate.quadrature(_swig_test.test_square, 0., 1.) error info: UMD has deleted: _swig_test Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 523, in runfile execfile(filename, namespace) File "D:\data\haitaliu\Desktop\Projects\swig_test\Release\test.py", line 4, in <module> print scipy.integrate.quadrature(_swig_test.test_square, 0., 1.) File "C:\Python27\lib\site-packages\scipy\integrate\quadrature.py", line 161, in quadrature newval = fixed_quad(vfunc, a, b, (), n)[0] File "C:\Python27\lib\site-packages\scipy\integrate\quadrature.py", line 61, in fixed_quad return (b-a)/2.0*sum(w*func(y,*args),0), None File "C:\Python27\lib\site-packages\scipy\integrate\quadrature.py", line 90, in vfunc return func(x, *args) TypeError: in method 'test_square', argument 1 of type 'double'
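
    The traceback suggests scipy.integrate.quadrature evaluates the integrand on whole NumPy arrays (via fixed_quad), while the SWIG wrapper only accepts a single C double; that is also why scipy.integrate.quad, which makes scalar calls, runs smoothly. Wrapping the SWIG function so it accepts arrays is one straightforward workaround:

        import numpy as np
        import scipy.integrate
        import _swig_test

        # np.vectorize calls the scalar SWIG wrapper element by element.
        square = np.vectorize(_swig_test.test_square)
        print scipy.integrate.quadrature(square, 0., 1.)

    Depending on the SciPy version, quadrature may also accept a vec_func=False argument to signal a scalar-only integrand.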

    Read the article

  • WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    - by Mladen Prajdic
    In previous posts I’ve shown you our SuperForm test application solution structure and how the main wxs and wxi include file look like. In this post I’ll show you how to automate inclusion of files to install into your build process. For our SuperForm application we have a single exe to install. But in the real world we have 10s or 100s of different files from dll’s to resource files like pictures. It all depends on what kind of application you’re building. Writing a directory structure for so many files by hand is out of the question. What we need is an automated way to create this structure. Enter Heat.exe. Heat is a command line utility to harvest a file, directory, Visual Studio project, IIS website or performance counters. You might ask what harvesting means? Harvesting is converting a source (file, directory, …) into a component structure saved in a WiX fragment (a wxs) file. There are 2 options you can use: Create a static wxs fragment with Heat and include it in your project. The pro of this is that you can add or remove components by hand. The con is that you have to do the pro part by hand. Automation always beats manual labor. Run heat command line utility in a pre-build event of your WiX project. I prefer this way. By always recreating the whole fragment you don’t have to worry about missing any new files you add. The con of this is that you’ll include files that you otherwise might not want to. There is no perfect solution so pick one and deal with it. I prefer using the second way. A neat way of overcoming the con of the second option is to have a post-build event on your main application project (SuperForm.MainApp in our case) to copy the files needed to be installed in a special location and have the Heat.exe read them from there. I haven’t set this up for this tutorial and I’m simply including all files from the default SuperForm.MainApp \bin directory. Remember how we created a System Environment variable called SuperFormFilesDir? This is where we’ll use it for the first time. The command line text that you have to put into the pre-build event of your WiX project looks like this: "$(WIX)bin\heat.exe" dir "$(SuperFormFilesDir)" -cg SuperFormFiles -gg -scom -sreg -sfrag -srd -dr INSTALLLOCATION -var env.SuperFormFilesDir -out "$(ProjectDir)Fragments\FilesFragment.wxs" After you install WiX you’ll get the WIX environment variable. In the pre/post-build events environment variables are referenced like this: $(WIX). By using this you don’t have to think about the installation path of the WiX. Remember: for 32 bit applications Program files folder is named differently between 32 and 64 bit systems. $(ProjectDir) is obviously the path to your project and is a Visual Studio built in variable. You can view all Heat.exe options by running it without parameters but I’ll explain some that stick out the most. dir "$(SuperFormFilesDir)": tell Heat to harvest the whole directory at the set location. That is the location we’ve set in our System Environment variable. –cg SuperFormFiles: the name of the Component group that will be created. This name is included in out Feature tag as is seen in the previous post. -dr INSTALLLOCATION: the directory reference this fragment will fall under. You can see the top level directory structure in the previous post. -var env.SuperFormFilesDir: the name of the variable that will replace the SourceDir text that would otherwise appear in the fragment file. 
-out "$(ProjectDir)Fragments\FilesFragment.wxs": the full path and name under which the fragment file will be saved. If you have source control you have to include the FilesFragment.wxs into your project but remove its source control binding. The auto generated FilesFragment.wxs for our test app looks like this: <?xml version="1.0" encoding="utf-8"?><Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"> <Fragment> <ComponentGroup Id="SuperFormFiles"> <ComponentRef Id="cmp5BB40DB822CAA7C5295227894A07502E" /> <ComponentRef Id="cmpCFD331F5E0E471FC42A1334A1098E144" /> <ComponentRef Id="cmp4614DD03D8974B7C1FC39E7B82F19574" /> <ComponentRef Id="cmpDF166522884E2454382277128BD866EC" /> </ComponentGroup> </Fragment> <Fragment> <DirectoryRef Id="INSTALLLOCATION"> <Component Id="cmp5BB40DB822CAA7C5295227894A07502E" Guid="{117E3352-2F0C-4E19-AD96-03D354751B8D}"> <File Id="filDCA561ABF8964292B6BC0D0726E8EFAD" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.exe" /> </Component> <Component Id="cmpCFD331F5E0E471FC42A1334A1098E144" Guid="{369A2347-97DD-45CA-A4D1-62BB706EA329}"> <File Id="filA9BE65B2AB60F3CE41105364EDE33D27" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.pdb" /> </Component> <Component Id="cmp4614DD03D8974B7C1FC39E7B82F19574" Guid="{3443EBE2-168F-4380-BC41-26D71A0DB1C7}"> <File Id="fil5102E75B91F3DAFA6F70DA57F4C126ED" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe" /> </Component> <Component Id="cmpDF166522884E2454382277128BD866EC" Guid="{0C0F3D18-56EB-41FE-B0BD-FD2C131572DB}"> <File Id="filF7CA5083B4997E1DEC435554423E675C" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe.manifest" /> </Component> </DirectoryRef> </Fragment></Wix> The $(env.SuperFormFilesDir) will be replaced at build time with the directory where the files to be installed are located. There is nothing too complicated about this. In the end it turns out that this sort of automation is great! There are a few other ways that Heat.exe can compose the wxs file but this is the one I prefer. It just seems the clearest. Play with its options to see what can it do. It’s one awesome little tool.   WiX 3 tutorial by Mladen Prajdic navigation WiX 3 Tutorial: Solution/Project structure and Dev resources WiX 3 Tutorial: Understanding main wxs and wxi file WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    Read the article

  • CodePlex Daily Summary for Monday, February 22, 2010

    CodePlex Daily Summary for Monday, February 22, 2010New ProjectsAVDB: System to keep track of orders and the inventory of televisions, DVDs, VCRs etcBooky: Booky is an online Bookmark Management Tool. Gear Up for Lord of the Rings Online (lotro): Windows utility for checking what your LOTRO character currently has equipped and figuring out gear you should get to improve your stats.GotSharp Extensions: GotSharp Extensions is a set of helpful classes and extension methods that can make your coding experience easier and cleaner. Halfwit: A minimalist WPF Twitter client.HOA Starter Kit: A community subdivision website starter kit. First draft.Lua For Irony: Project to define the Lua language using the Irony (http://irony.codeplex.com/) development kit. This work is based heavily on the work done for V...MimeCloud: Scalable .NET Digital Asset & Media Management: MimeCloud is a scalable digital asset library & media management toolset. Founded by Alex Norcliffe and Peter Miller Written by people who have b...Parallel Mandelbrot Set solver: Solving the Mandelbrot set using the Parallel class in .NET 4.0. Showing the resulting image in a WPF application. The solution file requires VS 2010.Pomogad - Pomodoro Windows Gadget: Você usa Pomodoro Technique? Não sabe o que é? Veja aqui http://www.pomodorotechnique.com Agora que você já sabe, que tal usar essa técnica? E p...PostCrap - flyweight .NET AOP post compiler: PostCrap is a flyweight attribute based aspect injection .NET post compiler It is written in C# and uses Mono.Cecil to modify assemblies and injec...Software + Service Reference Demo Kit: MS China Developer and Platform Evangelism team created an End-2-End demo for Software + Service. Yet Another SharePoint Tool: YEAST provides you with a simple to integrate approach to generating SharePoint solution packages as part of a Visual Studio project. Zen Coding Visual Studio Plugin: Zen Coding for Visual Studio is plugin for HTML and CSS hi-speed codingNew Releases.Net MSBuild Google Closure Compiler Task: .Net MSBuild Google Closure Compiler Task 1.1: - Corrected issue with regular expression source file and renamingdotNails: dotNails_0.5.9: NOTE - the latest source code has been moved to google code to take advantage of Mercurial source control - http://code.google.com/p/dotnails/sourc...EasyWFUnit: EasyWFUnit-2.2: Release 2.2 of EasyWFUnit, an extension library to support unit testing of Windows Workflow, includes a revised WinForm GUI Test Builder that utili...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite BETA2 (for .NET 4.0RC): Includes Fluent.dll (with .pdb and .xml) and test application compiled with .NET 4.0 RC.FolderSize: FolderSize.Win32.1.0.3.0: FolderSize.Win32.1.0.3.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...Fusion Charts Free for SharePoint: 1.3: Fix release for issue #11833 : Feature Must Be Activated on Root of Web Application.GotSharp Extensions: 1.0: First release, containing only a few extension methods for the System.String and System.IO.Stream classes, and a Range utility class.Jeremy's Experimental Repository: FluentValidation with IoC Sample: Sample code for the blog post Using FluentValidation with an IoC containerMiniTwitter: 1.08: MiniTwitter 1.08 更新内容 修正 自動更新が CodePlex の変更で動いていなかった問題を修正 自動更新に失敗すると落ちるバグを修正 通知領域アイコン右クリックで表示されるメニューが消えないバグを修正 変更 ハッシュタグの抽出条件を変更 API のエンドポイ...MSTS Editors & Tools: Simis Editor v0.3: Simis Editor v0.3 Enabled Edit > Undo and Edit > Redo. 
Undoing/redoing back to last saved state is identified as saved (no prompt on exit, etc.)....Parallel Mandelbrot Set solver: Alpha 1: First releaseParallelTasks: ParallelTasks 2.0 beta1: ParallelTasks 2.0 is a total re-write of the original version. Featuring improved performance and stability and a more consistent API.Personal Expense Tracker: Personal Expense Tracker v0.1 beta: This is the first beta release. Please provide me with your feedback.PostCrap - flyweight .NET AOP post compiler: PostCrap 1.0 AOP source and binaries: PostCrap 1.0 source and binaries (the unit test project contains sample interceptor attributes for exception handling & logging)Protoforma | Tactica Adversa: Skilful 0.1.3.276: AlphaRawr: Rawr 2.3.10: - More improvements to the default filters - Further improvement on avoiding useless gem swaps from the Optimizer. - Normal/Heroic ICC items shou...Reusable Library: v1.0.2: A collection of reusable abstractions for enterprise application developer.Sem.Sync: 2010-02-21 - Synchronization Manager - Beta: This release is not tested very well, so you should use this version only to evaluate new features. - Changed way of handling source-ids in order ...Survey - web survey & form engine: Survey 1.1.0: Release Survey v. 1.1.0.0 Major changes: - layout & graphics completely overhauled - several technical changes & repairs (e.g. matrix question iss...Yet Another SharePoint Tool: Version 1: Version 1Zeta Resource Editor: Release 2010-02-21: New source code release.Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Image Resizer Powertoy Clone for WindowsASP.NETDotNetNuke® Community EditionMicrosoft SQL Server Community & SamplesMost Active ProjectsDinnerNow.netRawrBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSharpyjQuery Library for SharePoint Web ServicesSharePoint ContribInfoServicepatterns & practices – Enterprise LibraryPHPExcel

    Read the article

  • Ops Center Solaris 11 IPS Repository Management: Using ISO Images

    - by S Stelting
    Please join us for a live WebEx presentation of this topic on Tuesday, November 20th at 9am MDT. Details for the call are provided below: https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209834017&UID=1512096072&PW=NYTVlZTYxMzdm&RT=MiMxMQ%3D%3D Meeting password: oracle123 Call-in toll-free number: 1-866-682-4770 International numbers: http://www.intercall.com/oracle/access_numbers.htm Conference Code: 762 9343 # Security Code: 7777 # With Enterprise Manager Ops Center 12c, you can provision, patch, monitor and manage Oracle Solaris 11 instances. To do this, Ops Center creates and maintains a Solaris 11 Image Packaging System (IPS) repository on the Enterprise Controller. During the Enterprise Controller configuration, you can load repository content directly from Oracle's Support Web site and subsequently synchronize the repository as new content becomes available. Of course, you can also use Solaris 11 ISO images to create and update your Ops Center repository. There are a few excellent reasons for doing this: You're running Ops Center in disconnected mode, and don't have Internet access on your Enterprise Controller You'd rather avoid the bandwidth associated with live synchronization of a Solaris 11 package repository This demo will show you how to use Solaris 11 ISO images to set up and update your Ops Center repository. Prerequisites This tip assumes that you've already installed the Enterprise Controller on a Solaris 11 OS instance and that you're ready for post-install configuration. In addition, there are specific Ops Center and OS version requirements depending on which version of Solaris 11 you plan to install. You can get full details about the requirements in the Release Notes for Ops Center 12c update 2. Additional information is available in the Ops Center update 2 Readme document. Part 1: Using a Solaris 11 ISO Image to Create an Ops Center Repository Step 1 – Download the Solaris 11 Repository Image The Oracle Web site provides a number of download links for official Solaris 11 images. Among those links is a two-part downloadable repository image, which provides repository content for Solaris 11 SPARC and X86 architectures. In this case, I used the Solaris 11 11/11 image. First, navigate to the Oracle Web site and accept the OTN License agreement: http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html Next, download both parts of the Solaris 11 repository image. I recommend using the Solaris 11 11/11 image, and have provided the URLs here: http://download.oracle.com/otn/solaris/11/sol-11-1111-repo-full.iso-a and http://download.oracle.com/otn/solaris/11/sol-11-1111-repo-full.iso-b Finally, use the cat command to generate an ISO image you can use to create your repository: # cat sol-11-1111-repo-full.iso-a sol-11-1111-repo-full.iso-b > sol-11-1111-repo-full.iso The process is very similar if you plan to set up a Solaris 11.1 release in Ops Center. In that case, navigate to the Solaris 11 download page, accept the license agreement and download both parts of the Solaris 11.1 repository image. Use the cat command to create a single ISO image for Solaris 11.1. Step 2 – Mount the Solaris 11 ISO Image in your Local Filesystem Once you have created the Solaris 11 ISO file, use the mount command to attach it to your local filesystem.
After the image has been mounted, you can browse the repository from the ./repo subdirectory, and use the pkgrepo command to verify that Solaris 11 recognizes the content: Step 3 – Use the Image to Create your Ops Center Repository When you have confirmed the repository is available, you can use the image to create the Enterprise Controller repository. The operation will be slightly different depending on whether you configure Ops Center for Connected or Disconnected Mode operation.For connected mode operation, specify the mounted ./repo directory in step 4.1 of the configuration wizard, replacing the default Web-based URL. Since you're synchronizing from an OS repository image, you don't need to specify a key or certificate for the operation. For disconnected mode configuration, specify the Solaris 11 directory along with the path to the disconnected mode bundle downloaded by running the Ops Center harvester script: Ops Center will run a job to import package content from the mounted ISO image. A synchronization job can take several hours to run – in my case, the job ran for 3 hours, 22 minutes on a SunFire X4200 M2 server. During the job, Ops Center performs three important tasks: Synchronizes all content from the image and refreshes the repository Updates the IPS publisher information Creates OS Provisioning profiles and policies based on the content When the job is complete, you can unmount the ISO image from your Enterprise Controller. At that time, you can view the repository contents in your Ops Center Solaris 11 library. For the Solaris 11 11/11 release, you should see 8,668 packages and patches in the contents. You should also see default deployment plans for Solaris 11 provisioning. As part of the repository import, Ops Center generates plans and profiles for desktop, small and large servers for the SPARC and X86 architecture. Part 2: Using a Solaris 11 SRU to update an Ops Center Repository It's possible to use the same approach to upgrade your Ops Center repository to a Solaris 11 Support Repository Update, or SRU. Each SRU provides packages and updates to Solaris 11 - for example, SRU 8.5 provided the packaged for Oracle VM Server for SPARC 2.2 SRUs are available for download as ISO images from My Oracle Support, under document ID 1372094.1. The document provides download links for all SRUs which have been released by Oracle for Solaris 11. SRUs are cumulative, so later versions include the packages from earlier SRUs. After downloading an ISO image for an SRU, you can mount it to your local filesystem using a mount command similar to the one shown for Solaris 11 11/11. When the ISO image is mounted to the file system, you can perform the Add Content action from the Solaris 11 Library to synchronize packages and patches from the mounted image. I used the same mount point, so the repository URL was file://mnt/repo once again: After the synchronization of an SRU is complete, you can verify its content in the Solaris 11 library using the search function. The version pattern is 0.175.0.#, where the # is the same value as the SRU. In this example, I upgraded to SRU 1. The update job ran in just under 8 minutes, and a quick search shows that 22 software components were added to the repository: It's also possible to search for "Support Repository Update" to confirm the SRU was successfully added to the repository. Details on any of the update content are available by clicking the "View Details" button under the Packages/Patches entry.

    Read the article

  • LightDM will not start after stopping it

    - by Sweeters
    I am running Ubuntu 11.10 "Oneiric Ocelot", and in trying to install the nvidia CUDA developer drivers I switched to a virtual terminal (Ctrl-Alt-F5) and stopped lightdm (installation required that no X server instance be running) through sudo service lightdm stop. Re-starting lightdm with sudo service lightdm start did not work: A couple of * Starting [...] lines where displayed, but the process hanged. (I do not remember at which point, but I think it was * Starting System V runlevel compatibility. I manually rebooted my laptop, and ever since booting seems to hang, usually around the * Starting anac(h)ronistic cron [OK] log line (not consistently at that point, though). From that point on, I seem to be able to interact with my system only through a tty session (Ctrl-Alt-F1). I've tried purging and reinstalling both lightdm and gdm, as well as selecting both as the default display managers (through sudo dpkg-reconfigure [lightdm / gdm] or by manually editing /etc/X11/default-display-manager) through both apt-get and aptitude (that shouldn't make a difference anyway) after updating the packages, but the problem persists. Some of the responses I'm getting are the following: After running sudo dpkg-reconfigure lightdm (but not ... gdm) I get the following message: dpkg-maintscript-helper:warning: environment variable DPKG_MATINSCRIPT_NAME missing dpkg-maintscript-helper:warning: environment variable DPKG_MATINSCRIPT_PACKAGE missing After trying sudo service lightdm start or sudo start lightdm I get to see the boot loading screen again but nothing changes. If I go back to the tty shell I see lightdm start/running, process <num> but ps -e | grep lightdm gives no output. After trying sudo service gdm start or sudo starg gdm I get the gdm start/running, process <num> message, and gdm-binary is supposedly an active process, but all that happens is that the screen blinks a couple of times and nothing else. Other candidate solutions that I'd found on the web included running startx but when I try that I get an error output [...] Fatal server error: no screens found [...]. Moreover, I made sure that lightdm-gtk-greeter is installed but that did not help either. Please excuse my not including complete outputs/logs; I am writing this post from another computer and it's hard to manually copy the complete logs. Also, I've seen several posts that had to do with similar problems, but either there was no fix, or the one suggested did not work for me. In closing: Please help! I very much hope to avoid re-installing Ubuntu from scratch! :) Alex @mosi I did not manage to fix the NVIDIA kernel driver as per your instructions. I should perhaps mention that I'm on a Dell XPS15 laptop with an NVIDIA Optimus graphics card, and that I have bumblebee installed (which installs nvidia drivers during its installation, I believe). Issuing the mentioned commands I get the following: ~$uname -r 3.0.0-12-generic ~$lsmod | grep -i nvidia nvidia 11713772 0 ~$dmesg | grep -i nvidia [ 8.980041] nvidia: module license 'NVIDIA' taints kernel. 
[ 9.354860] nvidia 0000:01:00.0: power state changed by ACPI to D0
[ 9.354864] nvidia 0000:01:00.0: power state changed by ACPI to D0
[ 9.354868] nvidia 0000:01:00.0: enabling device (0006 -> 0007)
[ 9.354873] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[ 9.354879] nvidia 0000:01:00.0: setting latency timer to 64
[ 9.355052] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 280.13 Wed Jul 27 16:53:56 PDT 2011

Also, running aptitude search nvidia gives me the following:

p nvidia-173 - NVIDIA binary Xorg driver, kernel module a
p nvidia-173-dev - NVIDIA binary Xorg driver development file
p nvidia-173-updates - NVIDIA binary Xorg driver, kernel module a
p nvidia-173-updates-dev - NVIDIA binary Xorg driver development file
p nvidia-96 - NVIDIA binary Xorg driver, kernel module a
p nvidia-96-dev - NVIDIA binary Xorg driver development file
p nvidia-96-updates - NVIDIA binary Xorg driver, kernel module a
p nvidia-96-updates-dev - NVIDIA binary Xorg driver development file
p nvidia-cg-toolkit - Cg Toolkit - GPU Shader Authoring Language
p nvidia-common - Find obsolete NVIDIA drivers
i nvidia-current - NVIDIA binary Xorg driver, kernel module a
p nvidia-current-dev - NVIDIA binary Xorg driver development file
c nvidia-current-updates - NVIDIA binary Xorg driver, kernel module a
p nvidia-current-updates-dev - NVIDIA binary Xorg driver development file
i nvidia-settings - Tool of configuring the NVIDIA graphics dr
p nvidia-settings-updates - Tool of configuring the NVIDIA graphics dr
v nvidia-va-driver -
v nvidia-va-driver -

I've tried manually installing (sudo aptitude install <package>) packages nvidia-common and nvidia-settings-updates but to no avail. For example, sudo aptitude install nvidia-settings-updates returns the following log:

Reading package lists...
Building dependency tree...
Reading state information...
Reading extended state information...
Initializing package states...
Writing extended state information...
No packages will be installed, upgraded, or removed.
0 packages upgraded, 0 newly installed, 0 to remove and 83 not upgraded.
Need to get 0 B of archives. After unpacking 0 B will be used.
Writing extended state information...
Reading package lists...
Building dependency tree...
Reading state information...
Reading extended state information...
Initializing package states...
Writing extended state information...

The same happens with the Linux headers (i.e. I cannot seem to install linux-headers-3.0.0-12-generic).
The output of aptitude search linux-headers is as follows:

v linux-headers -
v linux-headers -
v linux-headers-2.6 -
i linux-headers-2.6.38-11 - Header files related to Linux kernel versi
i linux-headers-2.6.38-11-generic - Linux kernel headers for version 2.6.38 on
i A linux-headers-2.6.38-8 - Header files related to Linux kernel versi
i A linux-headers-2.6.38-8-generic - Linux kernel headers for version 2.6.38 on
v linux-headers-3 -
v linux-headers-3.0 -
v linux-headers-3.0 -
i A linux-headers-3.0.0-12 - Header files related to Linux kernel versi
p linux-headers-3.0.0-12-generic - Linux kernel headers for version 3.0.0 on
p linux-headers-3.0.0-12-generic- - Linux kernel headers for version 3.0.0 on
p linux-headers-3.0.0-12-server - Linux kernel headers for version 3.0.0 on
p linux-headers-3.0.0-12-virtual - Linux kernel headers for version 3.0.0 on
p linux-headers-generic - Generic Linux kernel headers
p linux-headers-generic-pae - Generic Linux kernel headers
v linux-headers-lbm -
v linux-headers-lbm -
v linux-headers-lbm-2.6 -
v linux-headers-lbm-2.6 -
p linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo
p linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo
p linux-headers-lbm-3.0.0-12-serv - Header files related to linux-backports-mo
p linux-headers-server - Linux kernel headers on Server Equipment.
p linux-headers-virtual - Linux kernel headers for virtual machines

@heartsmagic I did try purging and reinstalling any nvidia driver packages, but it did not seem to make a difference. My xorg.conf file contains the following:

# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 280.13 ([email protected]) Wed Jul 27 17:15:58 PDT 2011

Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen0" 0 0
    InputDevice "Keyboard0" "CoreKeyboard"
    InputDevice "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/psaux"
    Option "Emulate3Buttons" "no"
    Option "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Keyboard0"
    Driver "kbd"
EndSection

Section "Monitor"
    Identifier "Monitor0"
    VendorName "Unknown"
    ModelName "Unknown"
    HorizSync 28.0 - 33.0
    VertRefresh 43.0 - 72.0
    Option "DPMS"
EndSection

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Device0"
    Monitor "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection
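
In case it helps, this is the short list of checks I can still run from the tty (a sketch only; the paths are the Oneiric defaults, as far as I know):

    # Which display manager is currently configured
    cat /etc/X11/default-display-manager
    # Upstart's view of the lightdm job
    status lightdm
    # LightDM and X server logs, for any obvious errors
    sudo tail -n 50 /var/log/lightdm/lightdm.log
    grep EE /var/log/Xorg.0.log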

    Read the article

  • DRY and SRP

    - by Timothy Klenke
Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/06/11/dry-and-srp.aspx

Kent Beck's XP Simplicity Rules (aka Four Rules of Simple Design) are a prioritized list of rules that, when applied to your code, generally yield a great design. As you'll see from the above link, the list has slightly evolved over time. I find today they are usually listed as: (1) All Tests Pass, (2) Don't Repeat Yourself (DRY), (3) Express Intent, and (4) Minimalistic.

These are prioritized. If your code doesn't work (rule 1) then everything else is forfeit. Go back to rule one and get the code working before worrying about anything else. Over the years the community has debated whether the priority of rules 2 and 3 should be reversed. Some say a little duplication in the code is OK as long as it helps express intent. I've debated it myself. This recent post got me thinking about this again, hence this post.

I don't think it is fair to compare "Expressing Intent" against "DRY". This is a comparison of apples to oranges. "Expressing Intent" is a principle of code quality. "Repeating Yourself" is a code smell. A code smell is merely an indicator that there might be something wrong with the code. It takes further investigation to determine if a violation of an underlying principle of code quality has actually occurred. For example, "using nouns for method names", "using verbs for property names", or "using Booleans for parameters" are all code smells that indicate that code probably isn't doing a good job at expressing intent. They are usually very good indicators. But what principle is the code smell of Duplication pointing to, and how good of an indicator is it?

Duplication in the code base is bad for a couple of reasons. If a change needs to be made in a number of locations, it is difficult to know whether you have caught all of them, which can lead to bugs if/when one of those locations is overlooked. By refactoring the code to remove all duplication, you are left with only one place to change, thereby eliminating this problem.

With most projects the code becomes the single source of truth for a project. If a production code base is inconsistent with a five-year-old requirements or design document, the production code that people are currently living with is usually declared as the current reality (or truth). Requirement or design documents at this age in a project life cycle are usually of little value. Although comparing production code to external documentation is usually straightforward, duplication within the code base muddles this declaration of truth. When code is duplicated, small discrepancies will creep in between the two copies over time. The question then becomes: which copy is correct? As different factions debate how the software should work, trust in the software and the team behind it erodes.

The code smell of Duplication points to a violation of the "Single Source of Truth" principle. Let me define that as: a stakeholder's requirement for a software change should never cause more than one class to change.

Violation of the Single Source of Truth principle will always result in duplication in the code. However, the inverse is not always true. Duplication in the code does not necessarily indicate that there is a violation of the Single Source of Truth principle. To illustrate this, let's look at a retail system where the system will (1) send a transaction to a bank and (2) print a receipt for the customer.
Although these are two separate features of the system, they are closely related. The reason for printing the receipt is usually to provide an audit trail back to the bank transaction. Both features use the same data: amount charged, account number, transaction date, customer name, retail store name, and so on. Because both features use much of the same data, there is likely to be a lot of duplication between them. This duplication can be removed by making both features use the same data access layer.

Then the divergent requirements start coming in. The receipt stakeholder wants a change so that the account number has the last few digits masked out to protect the customer's privacy. That can be solved with a small IF statement whilst still eliminating all duplication in the system. Then the bank wants to take a picture of the customer as well as capture their signature and/or PIN number for enhanced security. Then the receipt owner wants to pull data from a completely different system to report the customer's loyalty program point total.

After a while you realize that the two stakeholders have somewhat similar, but ultimately different, responsibilities. They have their own reasons for pulling the data access layer in different directions. Then it dawns on you, the Single Responsibility Principle: there should never be more than one reason for a class to change.

In this example we have two stakeholders giving two separate reasons for the data access class to change. It is a clear violation of the Single Responsibility Principle. That's a problem, because it can often lead to the project owner pitting the two stakeholders against each other in a vain attempt to get them to work out a mutual single source of truth. But that doesn't exist. There are two completely valid truths that the developers need to support. How can this be supported while honouring the Single Responsibility Principle? The solution is to duplicate the data access layer and let each stakeholder control their own copy.

The Single Source of Truth and Single Responsibility Principles are very closely related. SST tells you when to remove duplication; SRP tells you when to introduce it. They may seem to be fighting each other, but really they are not. The key is to clearly identify the different responsibilities (or sources of truth) over a system. Sometimes there is a single person with that responsibility, other times there are many. This can be especially difficult if the same person has dual responsibilities. They might not even realize they are wearing multiple hats.

In my opinion, Single Source of Truth should be listed as the second rule of simple design, with Express Intent at number three. Investigation of the DRY code smell should yield to the proper application of SST, without violating SRP. When necessary, leave duplication in the system and let the class names express the different people that are responsible for controlling them. Knowing all the people with responsibilities over a system is the higher priority, because you'll need to know this before you can express it. Although it may be a code smell when there is duplication in the code, it does not necessarily mean that the coder has chosen to be expressive over DRY or that the code is bad.
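
To make the retail example concrete, here is a minimal sketch in Python (the class names are illustrative only and do not appear in the original post):

    # One shared data access class serving both features: it has two reasons
    # to change (bank requirements and receipt requirements), violating SRP.
    class TransactionData:
        def __init__(self, amount, account_number, customer_name):
            self.amount = amount
            self.account_number = account_number
            self.customer_name = customer_name

    # Splitting it per responsibility deliberately reintroduces some duplication,
    # but each stakeholder now controls their own copy: SRP is honoured and each
    # class remains a single source of truth for its owner.
    class BankTransactionRecord:
        def __init__(self, amount, account_number, customer_name, signature=None):
            self.amount = amount
            self.account_number = account_number    # the bank needs the full number
            self.customer_name = customer_name
            self.signature = signature              # bank-driven requirement

    class ReceiptRecord:
        def __init__(self, amount, account_number, customer_name, loyalty_points=0):
            self.amount = amount
            self.masked_account = "****" + account_number[-4:]  # receipt-driven masking
            self.customer_name = customer_name
            self.loyalty_points = loyalty_points    # receipt-driven requirement

    # Each class now changes only for its own stakeholder's reasons.
    bank_copy = BankTransactionRecord(25.00, "4111111111111111", "Jane Doe", signature="jd")
    receipt_copy = ReceiptRecord(25.00, "4111111111111111", "Jane Doe", loyalty_points=120)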

    Read the article
