Search Results



  • Django custom managers - how do I return only objects created by the logged-in user?

    - by Tom Tom
    I want to override the default objects model manager so that it only returns objects a specific user created. Admin users should still get all objects through the objects manager. I have found an approach that could work: they propose creating your own middleware that looks like this: #### myproject/middleware/threadlocals.py try: from threading import local except ImportError: # Python 2.3 compatibility from django.utils._threading_local import local _thread_locals = local() def get_current_user(): return getattr(_thread_locals, 'user', None) class ThreadLocals(object): """Middleware that gets various objects from the request object and saves them in thread local storage.""" def process_request(self, request): _thread_locals.user = getattr(request, 'user', None) #### end In the custom manager you could then call get_current_user() to return only the objects a specific user created: class UserContactManager(models.Manager): def get_query_set(self): return super(UserContactManager, self).get_query_set().filter(creator=get_current_user()) Is this a good approach to this use case? Will it work? Or is it like "using a sledgehammer to crack a nut"? ;-) Just using Contact.objects.filter(created_by=user) in each view doesn't look very neat to me. EDIT: Do not use this middleware approach! Use the approach stated by Jack M. below. After a while of testing, the middleware approach behaved pretty strangely, and it mixes global state with the current request. The approach presented below is really easy, and there is no need to hack around with middleware: create a custom manager on your model with a method that takes the current user (or any other user) as input. #in your models.py class HourRecordManager(models.Manager): def for_user(self, user): return self.get_query_set().filter(created_by=user) class HourRecord(models.Model): #Managers objects = HourRecordManager() #in your view you can call the manager like this and get back only the objects created by the currently logged-in user. hr_set = HourRecord.objects.for_user(request.user)
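
    For readability, here is the manager approach from the edit above laid out as a standalone sketch. The model and field names are illustrative rather than taken from the original project, and newer Django versions spell the method get_queryset() instead of get_query_set():

        # models.py -- sketch of the per-user manager pattern
        from django.db import models
        from django.conf import settings
        from django.shortcuts import render

        class HourRecordManager(models.Manager):
            def for_user(self, user):
                # Django >= 1.6 uses get_queryset(); older releases,
                # like the one in the question, use get_query_set().
                return self.get_queryset().filter(created_by=user)

        class HourRecord(models.Model):
            created_by = models.ForeignKey(settings.AUTH_USER_MODEL,
                                           on_delete=models.CASCADE)
            objects = HourRecordManager()

        # views.py -- the user is passed in explicitly, so there is no hidden
        # global state, unlike the thread-local middleware variant.
        def my_hours(request):
            hr_set = HourRecord.objects.for_user(request.user)
            return render(request, 'hours.html', {'records': hr_set})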

    Read the article

  • Problem sorting RSS feed by date using XSL

    - by Buckers
    I'm creating a website where I need to show the top 5 records from an RSS feed, and these need to be sorted by date and time. The date fields in the RSS feed are in the following format: "Mon, 16 Feb 2009 16:02:44 GMT" I'm having big problems getting the records to sort correctly - I've tried lots of different code examples, but none of them sorts the records correctly. The code for my XSL sheet is shown below, and the feed in question is here. Very grateful for anyone's help! Thanks, Chris. XSL CODE: <xsl:stylesheet version="1.1" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:digg="http://digg.com//docs/diggrss/" xmlns:dc="http://purl.org/dc/elements/1.1/"> <xsl:template match="/"> <xsl:for-each select="//*[local-name()='item'][position() < 6]"> <p> <a> <xsl:attribute name="href"> <xsl:value-of select="*[local-name()='link']"/></xsl:attribute> <xsl:attribute name="target"> <xsl:text>top</xsl:text> </xsl:attribute> <xsl:value-of select="*[local-name()='title']"/> </a> <br/> <span class="smaller"><xsl:value-of select="*[local-name()='pubDate']" disable-output-escaping="yes"/></span> </p> </xsl:for-each>
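
    Sorting these RFC-822 dates in XSLT 1.x means building a sortable key out of substrings of pubDate, which is fiddly. As a point of comparison, if preprocessing the feed outside XSL is an option, the same sort is only a few lines of Python; this is a sketch of an alternative approach, not a fix for the stylesheet, and the feed URL below is a placeholder:

        # Sketch: sort RSS items newest-first by pubDate and keep the top 5.
        import urllib2
        import xml.etree.ElementTree as ET
        from email.utils import parsedate

        feed = urllib2.urlopen('http://example.com/feed.rss')   # placeholder URL
        items = ET.parse(feed).findall('.//item')

        def pub_key(item):
            # "Mon, 16 Feb 2009 16:02:44 GMT" -> time tuple usable as a sort key
            return parsedate(item.findtext('pubDate'))

        for item in sorted(items, key=pub_key, reverse=True)[:5]:
            print item.findtext('pubDate'), '-', item.findtext('title')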

    Read the article

  • Status of Data in Rollback of Large Transaction in SQL Server

    - by Lloyd Banks
    I have a data warehousing procedure that downloads and replaces dozens of tables from a linked server to a local database. Every once in a while, the code gets stuck on one of the tables on the linked server because that table is in a state of transition. I am under the assumption that, since the entire procedure is considered one transaction commit, none of the changes made by the procedure so far would have committed when it gets stuck. But the opposite seems to be true: tables that were "downloaded" before the procedure got stuck have been updated with today's versions on the local server. Shouldn't SQL Server wait for the entire procedure to finish before the changes are durable? CREATE PROCEDURE MYIMPORT AS BEGIN SET NOCOUNT ON IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE1') DROP TABLE TABLE1 SELECT COLUMN1, COLUMN2, COLUMN3 INTO TABLE1 FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE1') IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE2') DROP TABLE TABLE2 SELECT COLUMN1, COLUMN2, COLUMN3 INTO TABLE2 FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE2') --IF THE PROCEDURE GETS STUCK HERE, THEN CHANGES TO TABLE1 WOULD HAVE BEEN MADE ON THE LOCAL SERVER WHILE NO CHANGES WOULD HAVE BEEN MADE TO TABLE3 ON THE LOCAL SERVER IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE3') DROP TABLE TABLE3 SELECT COLUMN1, COLUMN2, COLUMN3 INTO TABLE3 FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE3') END

    Read the article

  • GlassFish Security Realm, Active Directory and Referral

    - by Allan Lykke Christensen
    I've set up a Security Realm in GlassFish to authenticate against an Active Directory server. The configuration of the realm is as follows: Class Name: com.sun.enterprise.security.auth.realm.ldap.LDAPRealm JAAS context: ldapRealm Directory: ldap://172.16.76.10:389/ Base DN: dc=smallbusiness,dc=local search-filter: (&(objectClass=user)(sAMAccountName=%s)) group-search-filter: (&(objectClass=group)(member=%d)) search-bind-dn: cN=Administrator,CN=Users,dc=smallbusiness,dc=local search-bind-password: abcd1234! The realm is functional and I can log in, but whenever I log in I get the following error in the log: SEC1106: Error during LDAP search with filter [(&(objectClass=group)(member=CN=Administrator,CN=Users,dc=smallbusiness,dc=local))]. SEC1000: Caught exception. javax.naming.PartialResultException: Unprocessed Continuation Reference(s); remaining name 'dc=smallbusiness,dc=local' at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2820) .... .... ldaplm.searcherror While searching for a solution, I found a recommendation to add java.naming.referral=follow to the properties of the realm. However, after I add this it takes 20 minutes for GlassFish to authenticate against Active Directory. I suspect it is a DNS problem on the Active Directory server. The Active Directory server is a vanilla Windows Server 2003 setup in a virtual machine. Any help/recommendation is highly appreciated!

    Read the article

  • Python libusb pyusb "mach-o, but wrong architecture"

    - by Jon
    I am having some trouble with the pyusb module. I have narrowed down the problem to a single line, and have created a small example script to replicate the error. #!/usr/bin/env python """ This module was created to isolate the problem in the pyusb package. Operating system: Mac OS 10.6.3 Python Version: 2.6.4 libusb 1.0.8 has been successfully installed using: sudo port install libusb I have also tried modifying /opt/local/etc/macports/macports.conf to force the i386 architecture instead of x86_64. """ from ctypes import * import ctypes.util libname = ctypes.util.find_library('usb-1.0') print 'libname: ', libname l = CDLL(libname, RTLD_GLOBAL) # RESULT: #libname: /usr/local/lib/libusb-1.0.dylib #Traceback (most recent call last): # File "./pyusb_problem.py", line 7, in <module> # l = CDLL(libname, RTLD_GLOBAL) # File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ctypes/__init__.py", line 353, in __init__ # self._handle = _dlopen(self._name, mode) #OSError: dlopen(/usr/local/lib/libusb-1.0.dylib, 10): no suitable image found. Did find: # /usr/local/lib/libusb-1.0.dylib: mach-o, but wrong architecture # End of File This same script runs on Ubuntu 10.04 successfully. I have tried building the libusb module (directly from source AND through macports) for 32-bit (i386) instead of x86_64 (default for OS 10.6), but I receive the same error. Thank-you in advance for your help!
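
    A quick way to narrow this down is to check which architecture the running interpreter actually is, since dlopen() only accepts a dylib built for that architecture. This is a diagnostic sketch, not a fix; the usual remedies (rebuilding libusb with MacPorts' +universal variant, or launching Python under the matching architecture) are suggestions to verify, not confirmed solutions:

        # Diagnostic sketch: print the architecture of the running Python.
        # A mismatch between the interpreter's architecture and the one the
        # dylib was built for produces exactly this "wrong architecture" error.
        import platform
        import struct

        print 'machine:', platform.machine()
        print 'pointer size (bits):', struct.calcsize('P') * 8  # 32 vs 64 bit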

    Read the article

  • git pull not working

    - by dorelal
    I am not using github. We have git set up on our own machine. I created a branch from master called experiment. However, when I try to do git pull I get the following message. > git pull You asked me to pull without telling me which branch you want to merge with, and 'branch.experiment.merge' in your configuration file does not tell me either. Please specify which branch you want to merge on the command line and try again (e.g. 'git pull <repository> <refspec>'). See git-pull(1) for details. Here is the result of git remote show origin: > git remote show origin * remote origin Fetch URL: ssh://git.domain.com/var/git/app.git Push URL: ssh://git.domain.com/var/git/app.git HEAD branch: master Remote branches: experiment tracked master tracked Local branches configured for 'git pull': master merges with remote master Local refs configured for 'git push': experiment pushes to experiment (local out of date) master pushes to master (up to date) As I read the output above, experiment is mapped to origin/experiment, and my local repository knows that it is out of date. Then why am I not able to do git pull?

    Read the article

  • How to Select Items in Dropdown in Selenium

    - by Marcus Gladir
    Firstly, I have been trying to get the dropdown from this web page: http://solutions.3m.com/wps/portal/3M/en_US/Interconnect/Home/Products/ProductCatalog/Catalog/?PC_Z7_RJH9U5230O73D0ISNF9B3C3SI1000000_nid=RFCNF5FK7WitWK7G49LP38glNZJXPCDXLDbl This is the code I have: import urllib2 from bs4 import BeautifulSoup import re from pprint import pprint import sys from selenium import common from selenium import webdriver import selenium.webdriver.support.ui as ui from boto.s3.key import Key import requests url = 'http://solutions.3m.com/wps/portal/3M/en_US/Interconnect/Home/Products/ProductCatalog/Catalog/?PC_Z7_RJH9U5230O73D0ISNF9B3C3SI1000000_nid=RFCNF5FK7WitWK7G49LP38glNZJXPCDXLDbl' element_xpath = '//*[@id="Component1"]' driver = webdriver.PhantomJS() driver.get(url) element = driver.find_element_by_xpath(element_xpath) element_xpath = '/option[@value="02"]' all_options = element.find_elements_by_tag_name("option") for option in all_options: print("Value is: %s" % option.get_attribute("value")) option.click() source = driver.page_source.encode('utf-8', 'ignore') driver.quit() source = str(source) soup = BeautifulSoup(source, 'html.parser') print soup What prints out is this: Traceback (most recent call last): File "../../../../test.py", line 58, in <module> Value is: XX main() File "../../../../test.py", line 46, in main option.click() File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 54, in click self._execute(Command.CLICK_ELEMENT) File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 228, in _execute return self._parent.execute(command, params) File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/webdriver.py", line 165, in execute self.error_handler.check_response(response) File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/errorhandler.py", line 158, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.ElementNotVisibleException: Message: u'{"errorMessage":"Element is not currently visible and may not be manipulated","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"81","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:51413","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\\"sessionId\\": \\"30e4fd50-f0e4-11e3-8685-6983e831d856\\", \\"id\\": \\":wdc:1402434863875\\"}","url":"/click","urlParsed":{"anchor":"","query":"","file":"click","directory":"/","path":"/click","relative":"/click","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/click","queryKey":{},"chunks":["click"]},"urlOriginal":"/session/30e4fd50-f0e4-11e3-8685-6983e831d856/element/%3Awdc%3A1402434863875/click"}}' ; Screenshot: available via screen And the weirdest most infuriating bit of it all is that sometimes it actually all works out. I have no clue what's going on here.
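
    The ElementNotVisibleException usually means the option is being clicked while the control is still hidden. A sketch of the usual waiting pattern is below, assuming the select with id Component1 eventually becomes visible in PhantomJS - which is the open question here, so treat this as the pattern rather than a verified fix. The URL and the "02" value come from the question; the 30-second timeout is arbitrary:

        # Sketch: wait for the <select> to be visible, then use the Select
        # helper instead of clicking raw <option> elements.
        from selenium import webdriver
        from selenium.webdriver.support.ui import WebDriverWait, Select
        from selenium.webdriver.support import expected_conditions as EC
        from selenium.webdriver.common.by import By

        url = ('http://solutions.3m.com/wps/portal/3M/en_US/Interconnect/Home/'
               'Products/ProductCatalog/Catalog/'
               '?PC_Z7_RJH9U5230O73D0ISNF9B3C3SI1000000_nid='
               'RFCNF5FK7WitWK7G49LP38glNZJXPCDXLDbl')

        driver = webdriver.PhantomJS()
        driver.get(url)

        # Blocks until the element is displayed, or raises TimeoutException.
        element = WebDriverWait(driver, 30).until(
            EC.visibility_of_element_located((By.ID, 'Component1')))

        select = Select(element)
        select.select_by_value('02')        # value taken from the question's XPath
        print driver.page_source[:200]
        driver.quit()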

    Read the article

  • Retain count problem iphone sdk

    - by neha
    Hi all, I'm facing a memory leak problem which is like this: I'm allocating an object of class A in class B. // RETAIN COUNT OF CLASS A OBJECT BECOMES 1 I'm placing the object in an NSMutableArray. // RETAIN COUNT OF CLASS A OBJECT BECOMES 2 In another class, C, I'm grabbing this NSMutableArray, fetching all the elements of that array into a local NSMutableArray, and releasing the first array from class B. // RETAIN COUNT OF CLASS A OBJECTS IN LOCAL ARRAY BECOMES 1 Now in this class C, I'm creating an object of class A and fetching the elements of the local NSMutableArray. //RETAIN COUNT OF NEW CLASS A OBJECT IN LOCAL ARRAY BECOMES 2 [ALLOCATION + FETCHED OBJECT WITH RETAIN COUNT 1] My question is, I want to retain this array, which I'm displaying in a table view, and release it after new elements are filled into the array. I'm doing this in class B. So before adding new elements, I'm removing all the elements and releasing this array in class B. And in class C I'm releasing the object of class A in dealloc. But in Instruments' Leaks it shows a leak for this class A object in class C. Can anybody please tell me where I'm going wrong? Thanks in advance.

    Read the article

  • GHC 6.12 and MacPorts

    - by absz
    I recently installed GHC 6.12 and the Haskell Platform 2010.1.0.1 on my Intel MacBook running OS X 10.5.8, and initially, everything worked fine. However, I discovered that if I use cabal install to install a package which depends on a MacPorts library (e.g., cabal install --extra-lib-dirs=/opt/local/lib --extra-include-dirs=/opt/local/include gd), things work fine in GHCi, but if I try to compile, I get the error Linking test ... Undefined symbols: "_iconv_close", referenced from: _hs_iconv_close in libHSbase-4.2.0.0.a(iconv.o) "_iconv", referenced from: _hs_iconv in libHSbase-4.2.0.0.a(iconv.o) "_iconv_open", referenced from: _hs_iconv_open in libHSbase-4.2.0.0.a(iconv.o) ld: symbol(s) not found collect2: ld returned 1 exit status After some Googling, I found a long Haskell-cafe thread discussing this problem. The upshot seems to be that MacPorts installs an updated version of libiconv, and the binary interface is slightly different from the version included with the system. Consequently, if you try to link with any MacPorts library, the MacPorts libiconv gets linked in too; and since the base library was built to link against a different version of libiconv, things break. I've tried setting LD_LIBRARY_PATH and DYLD_LIBRARY_PATH and adding more flags to try to get it to look at /usr/lib again (e.g. cabal install --extra-lib-dirs=/opt/local/lib --extra-include-dirs=/opt/local/include --extra-lib-dirs=/usr/lib --extra-include-dirs=/usr/include gd), but neither worked. Uninstalling the MacPorts libiconv isn't really an option, since I have a bunch of ports installed which depend on it---including some ports I want Haskell to link to, like gd2. From what I've seen online, the upshot really seems to be "you're boned": you cannot link against any MacPorts library while compiling with GHC, and there doesn't seem to be a solution. However, that thread was from the end of 2009, so I figure there's a chance that someone has a solution, workaround, ridiculous hack… anything, really. So: does anybody know how to get GHC 6.12 to link against the system libiconv at the same time as it links to libraries from MacPorts? Or, failing that, a way to make linking not break in some other clever way?

    Read the article

  • maven dependencies and jetty - avoiding deploy

    - by James Cooper
    Hi, I have a project with 3 artifacts: common - entities, business logic. no UI code webapp-a - a public web app webapp-b - an admin web app webapp-a and webapp-b depend on common. common is configured to deploy to a local maven repo. so far so good. I have IntelliJ configured so that each artifact is a separate module. Module dependencies are configured properly. I can add a new method to a class in common and immediately use that method in a class in a webapp. However, when I run mvn jetty:run it uses the currently deployed common snapshot in my repository. It does not use my local classes. If I add a method to a class in common, it compiles fine, but blows up at runtime. So is it possible to either: a) Convince jetty:run to use my local common build output or b) Deploy my common output to my local ~/.m2/repo while I'm testing locally before I want to commit/deploy or c) some other solution? thank you! -- James

    Read the article

  • MPI hypercube broadcast error

    - by luvieere
    I've got a one to all broadcast method for a hypercube, written using MPI: one2allbcast(int n, int rank, void *data, int count, MPI_Datatype dtype) { MPI_Status status; int mask, partner; int mask2 = ((1 << n) - 1) ^ (1 << n-1); for (mask = (1 << n-1); mask; mask >>= 1, mask2 >>= 1) { if (rank & mask2 == 0) { partner = rank ^ mask; if (rank & mask) MPI_Recv(data, count, dtype, partner, 99, MPI_COMM_WORLD, &status); else MPI_Send(data, count, dtype, partner, 99, MPI_COMM_WORLD); } } } Upon calling it from main: int main( int argc, char **argv ) { int n, rank; MPI_Init (&argc, &argv); MPI_Comm_size (MPI_COMM_WORLD, &n); MPI_Comm_rank (MPI_COMM_WORLD, &rank); one2allbcast(floor(log(n) / log (2)), rank, "message", sizeof(message), MPI_CHAR); MPI_Finalize(); return 0; } compiling and executing on 8 nodes, I receive a series of errors reporting that processes 1, 3, 5, 7 were stopped before the point of receiving any data: MPI_Recv: process in local group is dead (rank 1, MPI_COMM_WORLD) Rank (1, MPI_COMM_WORLD): Call stack within LAM: Rank (1, MPI_COMM_WORLD): - MPI_Recv() Rank (1, MPI_COMM_WORLD): - main() MPI_Recv: process in local group is dead (rank 3, MPI_COMM_WORLD) Rank (3, MPI_COMM_WORLD): Call stack within LAM: Rank (3, MPI_COMM_WORLD): - MPI_Recv() Rank (3, MPI_COMM_WORLD): - main() MPI_Recv: process in local group is dead (rank 5, MPI_COMM_WORLD) Rank (5, MPI_COMM_WORLD): Call stack within LAM: Rank (5, MPI_COMM_WORLD): - MPI_Recv() Rank (5, MPI_COMM_WORLD): - main() MPI_Recv: process in local group is dead (rank 7, MPI_COMM_WORLD) Rank (7, MPI_COMM_WORLD): Call stack within LAM: Rank (7, MPI_COMM_WORLD): - MPI_Recv() Rank (7, MPI_COMM_WORLD): - main() Where do I go wrong?

    Read the article

  • WTK emulator bluetooth connection problem

    - by Gokhan B.
    Hi! I'm developing a J2ME program with Eclipse / WTK 2.5.2 and having a problem connecting two emulators using Bluetooth. There is one server and one client running on two different emulators. The problem is that the client program cannot discover any Bluetooth device. Here are the server and client code: public Server() { try { LocalDevice local = LocalDevice.getLocalDevice(); local.setDiscoverable(DiscoveryAgent.GIAC); server = (StreamConnectionNotifier) Connector.open("btspp://localhost:" + UUID_STRING + ";name=" + SERVICE_NAME); Util.Log("EchoServer() Server connector open!"); } catch (Exception e) {} } After calling Connector.open, I get the following warning in the console, which I believe is related: Warning: Unregistered device: unspecified And the client code that searches for devices: public SearchForDevices(String uuid, String nm) { UUIDStr = uuid; srchServiceName = nm; try { LocalDevice local = LocalDevice.getLocalDevice(); agent = local.getDiscoveryAgent(); deviceList = new Vector(); agent.startInquiry(DiscoveryAgent.GIAC, this); // non-blocking } catch (Exception e) {} } The system never calls deviceDiscovered, but calls inquiryCompleted() with the INQUIRY_COMPLETED parameter, so I suppose the client program itself runs fine. Bluetooth is enabled in the emulator settings. Any ideas?

    Read the article

  • WCF identity when moving from dev to prod. environment

    - by Anders Abel
    I have a web service developed with WCF. In the development environment the endpoint has the following identity section under the endpoint configuration. <identity> <dns value="myservice.devdomain.local" /> </identity> myservice.devdomain.local is the dns name used to reach the development version of the service. The binding used is: <basicHttpBinding> <binding name ="myBinding"> <security mode ="TransportCredentialOnly"> <transport clientCredentialType="Windows"/> </security> </binding> </basicHttpBinding> I am about to put this into production. The binding will be the same, but the address will be a new production address myservice.proddomain.local. I have planned to change the dns value in the configuration to myservice.proddomain.local in the production environment. However this MSDN article on WCF Identity makes me worried about the impact on the clients when I change the identity. There are two clients - one .NET and one Java using this service. Both of those have been developed against the dev instance of the service. The idea is to just reconfigure the endpoint used by the clients, without reloading the WSDL. But if the identity is somehow part of the WSDL and the identity changes when deploying to prod that might not work. Will the new identity in the prod version cause issues for the clients that were developed using the dev wsdl? Do the Java and the .NET clients handle this differently?

    Read the article

  • Different locations of assemblies stopped the type casting.

    - by smwikipedia
    I am writing a custom Control class in C# for my main project. There are 2 projects, one for my Control and one for my main project. These 2 projects are in the same solution. I add a reference from my main project to my Control project. I notice that the first time I drag my Control from the Tool Panel onto my main winform, an assembly folder is generated at C:\Users\XXX\AppData\Local\Microsoft\VisualStudio\9.0\ProjectAssemblies, and the folder name is something like "jlebh-py01". The first build is always OK, but after I rebuild my Control class or the whole solution, a new assembly folder is generated at C:\Users\XXX\AppData\Local\Microsoft\VisualStudio\9.0\ProjectAssemblies, and then the problem arises: my Control fails to behave well because Visual Studio says that the two types "originate from different locations". The error message is as below: [A]MyControl.TypeXXX cannot be cast to [B]MyControl.TypeXXX. Type A originates from assemblyXXX at location 'C:\Users\XXX\AppData\Local\Microsoft\VisualStudio\9.0\ProjectAssemblies\jlebh-py01\MyControl.dll' Type B originates from assemblyXXX at location 'C:\Users\XXX\AppData\Local\Microsoft\VisualStudio\9.0\ProjectAssemblies\ue4i-z3j01\MyControl.dll' If I reference the Control DLL directly instead of through a project reference, or never rebuild the Control project after using my Control in the main project, things seem to be OK. Does anyone know why? Is this the proper way to develop a control and a main project within the same solution? Many thanks...

    Read the article

  • Conflict When Two Storyboards Set the Opacity Property?

    - by kennethkryger
    Hi. Background: I have a WPF UserControl (MainControl - not shown in the code below) that contains another one (called MyControl in the code below). MainControl has its DataContext set to an object that has a Project property. When MainControl loads, the Project property is always null. The problem: when MainControl loads, I want to fade in MyControl using a special storyboard that is only used this one time (this "specialFadeInStoryboard" changes the Opacity property of MyControl from 0 to 1). When the Project property is set to a value other than null, I want MyControl to fade out using the "fadeOutStoryboard" (changes the Opacity property of MyControl to 0), and if it's set to null afterwards I want to fade it in again, this time using the "fadeInStoryboard" (changes the Opacity property of MyControl to 1). However, after adding the code for the "specialFadeInStoryboard", MyControl is never faded out... What am I doing wrong? <local:MyControl Visibility="{Binding RelativeSource={RelativeSource Self}, Path=Opacity, Converter={StaticResource opacityToVisibilityConverter}, Mode=OneWay}"> <local:MyControl.Style> <Style> <Style.Triggers> <EventTrigger RoutedEvent="FrameworkElement.Loaded"> <BeginStoryboard Storyboard="{StaticResource specialFadeInStoryboard}"/> </EventTrigger> <DataTrigger Binding="{Binding Project, Converter={StaticResource nullToBooleanConverter}, Mode=OneWay}" Value="True"> <DataTrigger.EnterActions> <BeginStoryboard Storyboard="{StaticResource fadeOutStoryboard}"/> </DataTrigger.EnterActions> <DataTrigger.ExitActions> <BeginStoryboard Storyboard="{StaticResource fadeInStoryboard}"/> </DataTrigger.ExitActions> </DataTrigger> </Style.Triggers> </Style> </local:MyControl.Style> </local:MyControl>

    Read the article

  • python protobufs - avoid the install step?

    - by orion elenzil
    I'm writing a small Python utility which will be consumed by moderately non-technical users and which needs to interface with some protobufs. Ideally, I would like the only prerequisites to using this on a local machine to be: have Python installed * have an SVN checkout of the repository * run a simple bash script to build the local proto .py definitions * run "python myutility" I'm running into trouble around importing descriptor_pb2.py, though. I've seen Why do I see "cannot import name descriptor_pb2" error when using Google Protocol Buffers?, but would like to avoid adding the additional prerequisite of having run the proto SDK installer. I've modified the bash script to also generate descriptor_pb2.py in the local hierarchy, which works for the first level of imports from my other _pb2.py files, but it looks like descriptor_pb2.py itself tries to import descriptor_pb2 and can't find it: $ python myutility.py Traceback (most recent call last): File "myutility.py", line 4, in <module> import protos.myProto_pb2 File "/myPath/protos/myProto_pb2.py", line 8, in <module> from google.protobuf import descriptor_pb2 File "/myPath/google/protobuf/descriptor_pb2.py", line 8, in <module> from google.protobuf import descriptor_pb2 ImportError: cannot import name descriptor_pb2 My local folder looks like: * myutility.py * google/ * protobuf/ * descriptor.py * descriptor_pb2.py * protos * myProto_pb2.py Also, I'm a Python n00b, so it's possible I'm overlooking something obvious. Thanks in advance, orion
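
    Before digging deeper, it may be worth ruling out two mundane causes: every directory in the vendored package chain needs an __init__.py, and the local checkout has to win over any half-installed system copy on sys.path. The sketch below only checks those two things; it is a guess at the cause, not a confirmed fix, and the paths and module names simply mirror the folder listing above. The generated descriptor_pb2.py also expects the rest of the pure-Python protobuf runtime (reflection.py and friends) to sit next to descriptor.py, which may be the real gap:

        # Sketch: sanity-check the vendored package layout before importing.
        import os
        import sys

        HERE = os.path.dirname(os.path.abspath(__file__))
        sys.path.insert(0, HERE)        # local checkout wins over site-packages

        # A directory is only importable as a package if it has an __init__.py.
        for pkg in ('google', os.path.join('google', 'protobuf'), 'protos'):
            init_py = os.path.join(HERE, pkg, '__init__.py')
            if not os.path.exists(init_py):
                print 'missing package marker:', init_py
                open(init_py, 'w').close()   # create an empty one

        import protos.myProto_pb2            # module name from the question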

    Read the article

  • Git push current branch to a remote with Heroku

    - by cmaughan
    I'm trying to create a staging branch on Heroku, but there's something I don't quite get. Assuming I've already created a heroku app and setup the remote to point to staging-remote, If I do: git checkout -b staging staging-remote/master I get a local branch called 'staging' which tracks staging-remote/master - or that's what I thought.... But: git remote show staging-remote Gives me this: remote staging Fetch URL: [email protected]:myappname.git Push URL: [email protected]:myappname.git HEAD branch: master Remote branch: master tracked Local branch configured for 'git pull': staging-remote merges with remote master Local ref configured for 'git push': master pushes to master (up to date) As you can see, the pull looks reasonable, but the default push does not. It implies that if I do: git push staging-remote I'm going to push my local master branch up to the staging branch. But that's not what I want.... Basically, I want to merge updates into my staging branch, then easily push it to heroku without having to specify the branch like so: git push staging-remote mybranch:master The above isn't hard to do, but I want to avoid accidentally doing the previous push and pushing the wrong branch... This is doubly important for the production branch I'd like to create! I've tried messing with git config, but haven't figured out how to get this right yet...

    Read the article

  • Including variables inside curly braces in a Zend config ini file on Linux

    - by Dave Morris
    I am trying to include a variable in a .ini file setting by surrounding it with curly braces, and Zend is complaining that it cannot parse it properly on Linux. It works properly on Windows, though: welcome_message = Welcome, {0}. This is the error that is being thrown on Linux: : Uncaught exception 'Zend_Config_Exception' with message 'Error parsing /var/www/html/portal/application/configs/language/messages.ini on line 10 ' in /usr/local/zend/share/ZendFramework/library/Zend/Config/Ini.php:181 Stack trace: 0 /usr/local/zend/share/ZendFramework/library/Zend/Config/Ini.php(201): Zend_Config_Ini-&gt;_parseIniFile('/var/www/html/p...') 1 /usr/local/zend/share/ZendFramework/library/Zend/Config/Ini.php(125): Zend_Config_Ini-&gt;_loadIniFile('/var/www/html/p...') 2 /var/www/html/portal/library/Ingrain/Language/Base.php(49): Zend_Config_Ini-&gt;__construct('/var/www/html/p...', NULL) 3 /var/www/html/portal/library/Ingrain/Language/Base.php(23): Ingrain_Language_Base-&gt;setConfig('messages.ini', NULL, NULL) 4 /var/www/html/portal/library/Ingrain/Language/Messages.php(7): Ingrain_Language_Base-&gt;__construct('messages.ini', NULL, NULL, NULL) 5 /var/www/html/portal/library/Ingrain/Helper/Language.php(38): Ingrain_Language_Messages-&gt;__construct() 6 /usr/local/zend/share/ZendFramework/library/Zend/Contr in We are able to get the error to go away on Linux if we surround the braces with quotes, but that seems like a strange solution: welcome_message = Welcome, "{"0"}". Is there a better way to solve this issue for all platforms? Thanks for your help, Dave

    Read the article

  • Installing mongrel service on Windows 2008

    - by akirekadu
    We use InstallAnywhere to install our product. One of the components that it needs to install is mongrel. IA invokes the following command line during installation: mongrel_rails service::install -N service-1 -D "Service 1" -c "C:\app_dir\\rails\rails_apps\service-1" -p 19000 -e production Apparently, under the hood "sc create..." is used. The installation works great on Windows 2003. On Windows 2008, though, this operation requires elevated privileges. When I log in as the local administrator (i.e. the 'local-machine\administrator' user), the installation works just fine. However, when I log in as a domain user that is part of the local administrators group, the service fails to install with the error "access is denied". How can I make it possible to install the product without having to log in as the local administrator? Thanks! A couple of notes I would like to add: one solution I tried is to execute the installer as administrator. The service does get installed. However, it creates another problem: an embedded 3rd-party product and its files get installed with admin-only rights. So we do need to run the installer as the logged-in user.

    Read the article

  • autoconf libtool library linker path incorrect (need drive-letter) for MinGW ld.exe in Cygwin

    - by Tam Toucan
    I use autoconf, and when the target is MinGW I was using the -mno-cygwin flag. This has been removed, so I'm trying to use the MinGW tool chain. The problem is that the linker isn't finding my libraries: /bin/sh ../../../libtool --tag=CXX --mode=link mingw32-g++ -g -Wall -pedantic -DNOMINMAX -D_REENTRANT -DWIN32 -I /usr/local/include/w32api -L/usr/local/lib/w32api -o testRandom.exe testRandom.o -L../../../lib/Random -lRandom libtool: link: mingw32-g++ -g -Wall -pedantic -DNOMINMAX -D_REENTRANT -DWIN32 -I /usr/local/include/w32api -o .libs/testRandom.exe testRandom.o -L/usr/local/lib/w32api -L/home/Tam/src/3DS_Games/lib/Random -lRandom D:\cygwin\opt\MinGW\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\mingw32\bin\ld.exe: cannot find -lRandom To link this from the command line using the MinGW linker, the -L path needs the drive letter, i.e. mingw32-ld testRandom.o -LD:/home/Tam/src/3DS_Games/lib/Random -lRandom works. The -L path is generated from the makefile.am files, which have LDADD = -L$(top_builddir)/lib/Random -lRandom However, I can't find how to set top_builddir to a relative path or to make it start with the drive letter (my autoconf skills are weak). As a temporary "solution" I have removed the use of libtool. I could hack a $(DRIVE_LETTER) in front of every -L option, but I'd like to find something better.

    Read the article

  • Error 1053: the service did not respond to the start or control request in a timely fashion

    - by deejjaayy
    I know this is very much a "how long is a piece of string" type of question. However, I have recently inherited a couple of applications that run as Windows services, and I am having problems providing a GUI (accessible from a context menu in the system tray) with both of them. Before you ask, the reason we need a GUI for a Windows service is to be able to re-configure the behaviour of the service(s) without resorting to stopping/re-starting. My code works fine in debug mode: I get the context menu come up, and everything behaves correctly. When I install the service via "installutil" using a named account (i.e., not the Local System account), the service runs fine but doesn't display the icon in the system tray (I know this is normal behaviour because I don't have the "interact with desktop" option). Here is the problem, though - when I choose the "LocalSystemAccount" option and check the "interact with desktop" option, the service takes ages to start up for no obvious reason, and I just keep getting "Could not start the ... service on Local Computer. Error 1053: the service did not respond to the start or control request in a timely fashion". Incidentally, I increased the Windows service timeout from the default 30 seconds to 2 minutes via a registry hack (see http://support.microsoft.com/kb/824344, search for TimeoutPeriod in section 3), but the service start-up still times out. My first question is: why might the "Local System Account" login take so much longer than when the service logs in with the non-LocalSystemAccount, causing the service time-out? What could the difference be between these two to cause such different behaviour at start-up? Secondly, taking a step back, all I'm trying to achieve is simply a Windows service that provides a GUI for configuration. I'd be quite happy to run using the non-Local System account (with a named user/password), if I could get the service to interact with the desktop (that is, have a context menu available from the system tray). Is this possible, and if so, how? Any pointers to the above questions would be very much appreciated! Thanks in advance for your help.

    Read the article

  • PHP Magento SOAP-ERROR: Parsing WSDL: Couldn't load from urlpath

    - by dan.codes
    I am trying to create a SOAP client by passing a URL that is hosted on my local machine (my dev environment), and I keep getting this error. I used to be able to make this call and it worked just fine. Basically all I am doing is this: $client = new SoapClient('http://virtual.website.com:81/api/?wsdl'); If I go to the URL in a browser it comes up, so I know it is the right location. On the Magento forums there are some similar posts, but I don't know that this is a Magento-specific problem. Everything they mention as a solution I already have. They say to edit the hosts file, for example 127.0.0.1 website.com - I already have this since it is set up as a virtual host. Here is the error in my error_log: [Fri Jun 04 12:30:37 2010] [error] [client 127.0.0.1] PHP Fatal error: SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://virtual.website.com:81/api/soap/?wsdl' : XML declaration allowed only at the start of the document\n in /usr/local/sites/virtual.website.com/www/CUSTOMSCRIPTS/removeProductImages.php on line 6 [Fri Jun 04 12:30:37 2010] [error] [client 127.0.0.1] PHP Stack trace: [Fri Jun 04 12:30:37 2010] [error] [client 127.0.0.1] PHP 1. {main}() /usr/local/sites/virtual.website.com/www/CUSTOMSCRIPTS/removeProductImages.php:0 [Fri Jun 04 12:30:37 2010] [error] [client 127.0.0.1] PHP 2. SoapClient->SoapClient(*uninitialized*) /usr/local/sites/virtual.website.com/www/CUSTOMSCRIPTS/removeProductImages.php:6

    Read the article

  • Unable to Git-push master to Github

    - by Masi
    This question is related to my problem in understanding rebase, branch and merge, and to the problem How can you commit to your github account as you have a teamMate in your remote list? I found out that other people have had the same problem. The problem seems to be related to /etc/xinet.d/. Problem: unable to push my local branch to my master branch at GitHub. I run git push origin master I get fatal: 'origin' does not appear to be a git repository fatal: The remote end hung up unexpectedly The error message suggests to me that the branch 'origin' is not in my local Git repository, and because of this Git never connects to GitHub. This is strange, since I have not removed the branch 'origin'. My git tree is dev * master ticgit remotes/Math/Math remotes/Math/master remotes/origin/master remotes/Masi/master How can you push your local branch to GitHub while you have a teammate's branch in your local Git? VonC's answer solves the main problem. I put a passphrase on my SSH keys. I run $git push github master I get Permission denied (publickey). fatal: The remote end hung up unexpectedly It seems that I need to give the passphrase to Git somehow. How can you make Git ask for the passphrase?

    Read the article

  • Python and csv help

    - by user353064
    I'm trying to create a script that will check the computer's host name, then search a master list for that value and return the corresponding value from the CSV file, then open another file and do a find and replace. I know this should be easy, but I haven't done much in Python before. Here is what I have so far... masterlist.txt (tab delimited) Name UID Bob-Smith.local bobs Carmen-Jackson.local carmenj David-Kathman.local davidk Jenn-Roberts.local jennr Here is the script that I have created thus far #GET CLIENT HOST NAME import socket host = socket.gethostname() print host #IMPORT MASTER DATA import csv, sys filename = "masterlist.txt" reader = csv.reader(open(filename, "rU")) #PRINT MASTER DATA for row in reader: print row #SEARCH ON HOSTNAME AND RETURN UID #REPLACE VALUE IN FILE WITH UID #import fileinput #for line in fileinput.FileInput("filetoreplace",inplace=1): # line = line.replace("replacethistext","UID") # print line Right now, it's just set to print the master list. I'm not sure if the list needs to be parsed and placed into a dictionary or what. I really need to figure out how to search the first field for the hostname and then return the field in the second column. Thanks in advance for your help, Aaron
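
    Putting those pieces together, a minimal sketch of the whole flow might look like the following. The master list is tab-delimited, so the reader needs delimiter='\t'; "filetoreplace" and "replacethistext" are placeholders carried over from the commented-out block above, not known file contents:

        # Sketch: look up this machine's UID in masterlist.txt, then replace a
        # marker token in another file with that UID.
        import csv
        import fileinput
        import socket
        import sys

        host = socket.gethostname()

        # Build a hostname -> UID dictionary from the tab-delimited master list.
        with open('masterlist.txt', 'rU') as f:
            reader = csv.reader(f, delimiter='\t')
            next(reader)                                  # skip the header row
            lookup = dict((row[0], row[1]) for row in reader if len(row) >= 2)

        uid = lookup.get(host)
        if uid is None:
            sys.exit('host %r not found in masterlist.txt' % host)

        # In-place find-and-replace; fileinput redirects stdout into the file.
        for line in fileinput.FileInput('filetoreplace', inplace=1):
            sys.stdout.write(line.replace('replacethistext', uid))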

    Read the article

  • Slow MySQL query using filesort instead of an index

    - by Canadaka
    I have a query on my homepage that is getting slower and slower as my database table grows larger. tablename = tweets_cache rows = 572,327 This is the query I'm currently using; it is slow, over 5 seconds: SELECT * FROM tweets_cache t WHERE t.province='' AND t.mp='0' ORDER BY t.published DESC LIMIT 50; If I take out either the WHERE or the ORDER BY, then the query is super fast: 0.016 seconds. I have the following indexes on the tweets_cache table: PRIMARY published mp category province author So I'm not sure why it's not using the indexes, since mp, province and published all have indexes. Doing a profile of the query shows that it's not using an index to sort and is using filesort, which is really slow. possible_keys = mp,province Extra = Using where; Using filesort I tried adding a new multi-column index on "province & mp". The explain shows this new index listed under "possible_keys" and "key", but the query time is unchanged, still over 5 seconds. Here is a screenshot of the profiler info on the query. http://i355.photobucket.com/albums/r469/canadaka_bucket/slow_query_profile.png Something weird: I made a dump of my database to test on my local desktop so I don't screw up the live site. The same query on my local machine runs super fast, in milliseconds. So I copied all the same MySQL startup variables from the server to my local machine to make sure there wasn't some setting causing this. But even after that, the local query runs super fast while the one on the live server still takes over 5 seconds. My database server is only using around 800MB of the 4GB it has available. Here are the related my.ini settings I'm using: default-storage-engine = MYISAM max_connections = 800 skip-locking key_buffer = 512M max_allowed_packet = 1M table_cache = 512 sort_buffer_size = 4M read_buffer_size = 4M read_rnd_buffer_size = 16M myisam_sort_buffer_size = 64M thread_cache_size = 8 query_cache_size = 128M # Try number of CPU's*2 for thread_concurrency thread_concurrency = 8 # Disable Federated by default skip-federated key_buffer = 512M sort_buffer_size = 256M read_buffer = 2M write_buffer = 2M key_buffer = 512M sort_buffer_size = 256M read_buffer = 2M write_buffer = 2M

    Read the article
