Search Results

Search found 9417 results on 377 pages for 'auth module'.

Page 87/377 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • Postfix to deliver mail to a virtual address mailbox

    - by Chloe
    Postfix version 2.6.6, Dovecot version 2.0.9. I want to set up Postfix + Dovecot. Dovecot seems to be working: I can authenticate, but the mailbox is empty and nothing gets delivered. I followed many Postfix + Dovecot tutorials, but they complicate things by using the Dovecot LDA or MySQL. I just want it to be very simple; having Postfix deliver to the virtual mailboxes is fine, and I don't need MySQL either. I have already set up a custom password file that Dovecot uses for authentication, and I can log in to POP3 with SSL. The SMTP + SSL authentication also seems to work. From the logs I can see that Postfix is delivering to the system user accounts (the catch-all) instead of to the virtual users I set up in Dovecot, and that Dovecot is checking the correct virtual mail folder. I just need to figure out how to get Postfix to deliver to the virtual mailboxes. I have the following settings, which I believe are relevant; let me know what other settings you need to see.

    Postfix main.cf:

        alias_maps = hash:/etc/aliases
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = xxx.com
        myhostname = mail.xxx.com
        mynetworks = 99.99.99.99, 99.99.99.99
        myorigin = $mydomain
        relay_domains = $mydestination, xxx.com, domain2.net, domain3.com
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_recipient_restrictions = reject_non_fqdn_sender reject_non_fqdn_recipient reject_unknown_recipient_domain permit_sasl_authenticated check_relay_domains
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = check_sender_mx_access cidr:/etc/postfix/bogus_mx reject_invalid_hostname reject_unknown_sender_domain reject_non_fqdn_sender
        virtual_mailbox_base = /var/spool/vmail
        virtual_mailbox_domains = xxx.com, domain2.net, domain3.com
        virtual_minimum_uid = 444

    Postfix master.cf:

        submission inet n - - - - smtpd
          -o smtpd_tls_security_level=encrypt
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_sasl_type=dovecot
          -o smtpd_sasl_path=private/auth
          -o smtpd_sasl_security_options=noanonymous
          -o smtpd_sasl_local_domain=$myhostname
          -o smtpd_client_restrictions=permit_sasl_authenticated,reject
          -o smtpd_sender_login_maps=hash:/etc/postfix/virtual
          -o smtpd_sender_restrictions=reject_sender_login_mismatch
          -o smtpd_recipient_restrictions=reject_non_fqdn_recipient,reject_unknown_recipient_domain,permit_sasl_authenticated,reject

    Dovecot related:

        mail_location = maildir:~/Maildir
        passdb {
          args = /etc/dovecot/users.conf
          driver = passwd-file
        }
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            mode = 0660
            user = postfix
          }
        }

    The virtual mail user:

        vmail:x:444:99:virtual mail users:/var/spool/vmail:/sbin/nologin

    Here is /var/log/maillog when I try to send something to myself:

        Oct 25 22:10:05 308321 postfix/smtpd[2200]: connect from user-999.cable.mindspring.com[99.99.99.99]
        Oct 25 22:10:05 308321 postfix/smtpd[2200]: D224BD4753: client=user-999.cable.mindspring.com[99.99.99.99], sasl_method=LOGIN, [email protected]
        Oct 25 22:10:06 308321 postfix/cleanup[2207]: D224BD4753: message-id=<7DC3C163CFFC483AB6226F8D3D9969D2@dumbopc>
        Oct 25 22:10:06 308321 postfix/qmgr[2168]: D224BD4753: from=<[email protected]>, size=1385, nrcpt=1 (queue active)
        Oct 25 22:10:06 308321 postfix/smtpd[2200]: disconnect from user-999.cable.mindspring.com[99.99.99.99]
        Oct 25 22:10:06 308321 postfix/local[2208]: D224BD4753: to=<[email protected]>, orig_to=<[email protected]>, relay=local, delay=1.1, delays=0.53/0.02/0/0.51, dsn=2.0.0, status=sent (delivered to mailbox)
        Oct 25 22:10:06 308321 postfix/qmgr[2168]: D224BD4753: removed
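    In the maillog above the message is handed to postfix/local (relay=local), which is what happens when the recipient domain is matched by mydestination ($mydomain = xxx.com) rather than by virtual_mailbox_domains; there is also no virtual_mailbox_maps telling Postfix which maildir each virtual address belongs to. The following is only a hedged sketch of the main.cf changes that usually apply in this situation, not a verified fix; the vmailbox map path and its sample entry are placeholders:

        # main.cf - a domain must not be listed in both mydestination and virtual_mailbox_domains
        mydestination = $myhostname, localhost.$mydomain, localhost
        virtual_mailbox_domains = xxx.com, domain2.net, domain3.com
        virtual_mailbox_base = /var/spool/vmail
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox
        virtual_uid_maps = static:444
        virtual_gid_maps = static:99

        # /etc/postfix/vmailbox (run "postmap /etc/postfix/vmailbox" afterwards);
        # each virtual address maps to a maildir path relative to virtual_mailbox_base
        [email protected]    xxx.com/someuser/Maildir/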

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

        # module xfactory.py
        import x

        class XFactory:
            _registry = {}
            def get_x(self, arg1, arg2, use_cache = True):
                if use_cache:
                    hash_id = hash((arg1, arg2))
                    if hash_id in _registry:
                        return _registry[hash_id]
                obj = x.X(arg1, arg2)
                _registry[hash_id] = obj
                return obj

        # module x.py
        class X:
            # ...

    Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change?

    Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id).

    Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies and reloads it while the program is running. So far I have ignored this danger since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do?

    I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last-modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation.

    Thus, I figured I might as well give up some safety and invalidate the cache only when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry.

    Since developers may forget to update the validation identifier or not realize that they invalidated existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well.

    I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
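    As a point of comparison, here is a minimal sketch (not a drop-in answer) of the memory-plus-disk variant described above, with an explicit class-level cache-version token folded into the key. The cache directory name, the token value and the X constructor arguments are all made up for illustration:

        # xfactory_sketch.py -- a hedged sketch of the pattern discussed above
        import os
        import pickle
        import hashlib

        class X(object):
            CACHE_VERSION = "2010-05-01-a"   # bumped by hand when cached objects become invalid
            def __init__(self, arg1, arg2):
                self.arg1, self.arg2 = arg1, arg2

        class XFactory(object):
            def __init__(self, cache_dir="x_cache"):
                self._memory = {}                # key -> X instance
                self._cache_dir = cache_dir
                if not os.path.isdir(cache_dir):
                    os.makedirs(cache_dir)

            def _key(self, arg1, arg2):
                # the version token is part of the key, so stale pickles are simply never hit
                raw = repr((X.CACHE_VERSION, arg1, arg2))
                return hashlib.sha1(raw.encode("utf-8")).hexdigest()

            def get_x(self, arg1, arg2, use_cache=True):
                key = self._key(arg1, arg2)
                if use_cache:
                    if key in self._memory:
                        return self._memory[key]
                    path = os.path.join(self._cache_dir, key + ".pickle")
                    if os.path.exists(path):
                        with open(path, "rb") as f:
                            obj = pickle.load(f)
                        self._memory[key] = obj
                        return obj
                obj = X(arg1, arg2)
                if use_cache:
                    self._memory[key] = obj
                    with open(os.path.join(self._cache_dir, key + ".pickle"), "wb") as f:
                        pickle.dump(obj, f)
                return obj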

    Read the article

  • IIS7 authentication doesn't work on servername

    - by nLL
    Not sure if I chose the right title, but here is my problem: on my local (home network) IIS 7, the default web site is bound to any IP on port 81, with Anonymous Authentication disabled and Windows Authentication enabled. If I go to 192.168.1.101:81/ I am asked for a username and password. If I go to server:81/ nothing is asked, regardless of whether I connect from the local machine or another machine on the network. Why is that? Am I doing something wrong?

    Read the article

  • apache: bypass authentication for a specific ip

    - by Tevez G
    In Apache I have set up basic authentication for the /protected location. Now I need to bypass authentication for a specific IP address but keep auth intact for everyone else. Can anyone guide me on this one? Here is my current snippet for the auth-protected location:

        <Location "/protected">
            Order allow,deny
            Allow from all
            AuthName "Protected folder"
            AuthType Basic
            AuthUserFile /etc/htpasswd
            require valid-user
        </Location>
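    On Apache 2.2 the usual way to do this is to combine a host rule with the password rule and let either one satisfy the request via Satisfy Any. This is a hedged sketch rather than a tested config; the IP address is a placeholder, and Apache 2.4 would express the same thing with Require ip inside a RequireAny block instead:

        <Location "/protected">
            AuthType Basic
            AuthName "Protected folder"
            AuthUserFile /etc/htpasswd
            Require valid-user
            Order deny,allow
            Deny from all
            Allow from 192.168.1.50
            Satisfy Any
        </Location>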

    Read the article

  • pylucene: install error

    - by Pradeep
    I am trying to install PyLucene (pylucene-3.3-3-src.tar.gz) on Ubuntu Linux 11.10. I have Python 2.7.2. I was able to compile JCC (I think), because I didn't see any error when I installed it. When I try to install PyLucene I get the following error. Can someone help? Thanks.

        ICU not installed
        /usr/bin/python -m jcc --shared --jar lucene-java-3.3/lucene/build/lucene-core-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/analyzers/common/lucene-analyzers-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/memory/lucene-memory-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/highlighter/lucene-highlighter-3.3.jar --jar build/jar/extensions.jar --jar lucene-java-3.3/lucene/build/contrib/queries/lucene-queries-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/grouping/lucene-grouping-3.3.jar --package java.lang java.lang.System java.lang.Runtime --package java.util java.util.Arrays java.util.HashMap java.util.HashSet java.text.SimpleDateFormat java.text.DecimalFormat java.text.Collator --package java.util.regex --package java.io java.io.StringReader java.io.InputStreamReader java.io.FileInputStream --exclude org.apache.lucene.queryParser.Token --exclude org.apache.lucene.queryParser.TokenMgrError --exclude org.apache.lucene.queryParser.QueryParserTokenManager --exclude org.apache.lucene.queryParser.ParseException --exclude org.apache.lucene.search.regex.JakartaRegexpCapabilities --exclude org.apache.regexp.RegexpTunnel --exclude org.apache.lucene.analysis.cn.smart.AnalyzerProfile --python lucene --mapping org.apache.lucene.document.Document 'get:(Ljava/lang/String;)Ljava/lang/String;' --mapping java.util.Properties 'getProperty:(Ljava/lang/String;)Ljava/lang/String;' --sequence java.util.AbstractList 'size:()I' 'get:(I)Ljava/lang/Object;' --rename org.apache.lucene.search.highlight.SpanScorer=HighlighterSpanScorer --version 3.3 --module python/collections.py --module python/ICUNormalizer2Filter.py --module python/ICUFoldingFilter.py --module python/ICUTransformFilter.py --files 3 --build
        /usr/bin/python: No module named jcc
        make: *** [compile] Error 1

    Here is my Makefile configuration, which I uncommented:

        PREFIX_PYTHON=/usr
        ANT=ant
        PYTHON=$(PREFIX_PYTHON)/bin/python
        JCC=$(PYTHON) -m jcc --shared
        NUM_FILES=3
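    The last two lines are the real failure: the Makefile runs /usr/bin/python, and that interpreter has no jcc package, so JCC was most likely built and installed for a different Python than the one PREFIX_PYTHON points at. A small hedged check (the file name is arbitrary), run with the same interpreter the Makefile uses, shows which side is wrong:

        # check_jcc.py -- run as: /usr/bin/python check_jcc.py
        import sys
        try:
            import jcc
            print("jcc is importable from %s (found at %s)" % (sys.executable, jcc.__file__))
        except ImportError:
            print("jcc is NOT importable from %s; rebuild/reinstall JCC with this interpreter" % sys.executable)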

    Read the article

  • WM_CONCAT use CASE

    - by Ruslan
    I have a select:

        select substr(acc,1,4), currency, amount, module, count(*), wm_concat(trn_ref_no) trn
        from all_entries
        where date = to_date('01012010','DDMMYYYY')
        group by substr(acc,1,4), currency, amount, module

    In this case I get the error: ORA-06502: PL/SQL: character string buffer too small ... "WMSYS.WM_CONCAT_IMPL". If I change it to:

        select substr(acc,1,4), currency, amount, module, count(*),
               (case when count(*) < 10 then wm_concat(trn_ref_no) else null end) trn
        from fcc.acvw_all_ac_entries
        where trn_dt = to_date('05052010','DDMMYYYY')
        group by substr(acc,1,4), currency, amount, module

    to avoid the buffer limit error, I still get the same error even in this case. How can I avoid this error?

    Read the article

  • How do I send automated e-mails from Drupal using Messaging and Notifications?

    - by Adrian
    I am working on a Notifications plugin, and after starting to write down my notes about how to do this, I decided to just post them here. Please feel free to make modifications and changes. Eventually I hope to post this on the Drupal handbook as well. Thanks. --Adrian

    Sending automated e-mails from Drupal using Messaging and Notifications

    To implement a notifications plugin, you must implement the following functions:
    - Use hook_messaging, hook_token_list and hook_token_values to create the messages that will be sent.
    - Use hook_notifications to create the subscription types.
    - Add code to fire events (eg in hook_nodeapi).
    - Add all UI elements to allow users to subscribe/unsubscribe.

    Understanding Messaging

    The Messaging module is used to compose messages that can be delivered using various formats, such as simple mail, HTML mail, Twitter updates, etc. These formats are called "send methods." The backend details do not concern us here; what is important are the following concepts:

    TOKENS: Tokens are provided by the "tokens" module. They allow you to write keywords in square brackets, [like-this], that can be replaced by any arbitrary value. Note: the token groups you create must match the keys you add to the $event->objects[$key] array.

    MESSAGE KEYS: A key is a part of a message, such as the greetings line. Keys can be different for each send method. For example, a plaintext mail's greeting might be "Hi, [user]," while an HTML greeting might be "Hi, [user]," and Twitter's might just be "[user-firstname]: ". Keys can have any arbitrary name. Keys are very simple and only have a machine-readable name and a user-readable description, the latter of which is only seen by admins.

    MESSAGE GROUPS: A group is a bunch of keys that often, but not always, might be used together to make up a complete message. For example, a generic group might include keys for a greeting, body, closing and footer. Groups can also be "subclassed" by selecting a "fallback" group that will supply any keys that are missing. Groups are also associated with modules; I'm not sure what these are used for.

    Understanding Notifications

    The Notifications module revolves around the following concepts:

    SUBSCRIPTIONS: Notifications plugins may define one or more types of subscriptions. For example, notifications_content defines subscriptions for:
    - Threads (users are notified whenever a node or its comments change)
    - Content types (users are notified whenever a node of a certain type is created or is changed)
    - Users (users are notified whenever another user is changed)

    Subscriptions refer to the user who's subscribed, how often they wish to be notified, the send method (for Messaging) and what's being subscribed to. This last part is defined in two steps. Firstly, a plugin defines several "subscription fields" (through a hook_notifications op of the same name), and secondly, "subscription types" (also an op) defines which fields apply to each type of subscription. For example, notifications_content defines the fields "nid," "author" and "type," and the subscriptions "thread" (nid), "nodetype" (type), "author" (author) and "typeauthor" (type and author), the latter referring to something like "any STORY by JOE." Fields are used to link events to subscriptions; an event must match all fields of a subscription (for all normal subscriptions) to be delivered to the recipient. The $subscriptions object is defined in subsequent sections.
    Notifications prefers that you don't create these objects yourself, preferring you to call the notifications_get_link() function to create a link that users may click on, but you can also use notifications_save_subscription and notifications_delete_subscription to do it yourself.

    EVENTS: An event is something that users may be notified about. Plugins create the $event object and then call notifications_event($event). This either sends out notifications immediately, queues them to send out later, or both. Events include the type of thing that's changed (eg 'node', 'user'), the ID of the thing that's changed (eg $node->nid, $user->uid) and what's happened to it (eg 'create'). These are, respectively, $event->type, $event->oid (object ID) and $event->action. Warning: notifications_content_nodeapi also adds an $event->node field, referring to the node itself and not just $event->oid = $node->nid. This is not used anywhere in the core notifications module; however, when the $event is passed back to the 'query' op (see below), we assume the node is still present. Events do not refer to the user they will be delivered to; instead, Notifications makes the connection between subscriptions and events using the subscriptions' fields.

    MATCHING EVENTS TO SUBSCRIPTIONS: An event matches a subscription if the subscription has the same event type (eg "node") and if the event matches all of the subscription's fields. This second step is determined by the "query" hook op, which is called with the $event object as a parameter. The query op is responsible for giving Notifications a value for all the fields defined by the plugin. For example, notifications_content defines the 'nid', 'type' and 'author' fields, so its query op looks like this (ignore the case where $event_or_user = 'user' for now):

        $event_or_user = $arg0;
        $event_type = $arg1;
        $event_or_object = $arg2;

        if ($event_or_user == 'event' && $event_type == 'node' && ($node = $event_or_object->node)
            || $event_or_user == 'user' && $event_type == 'node' && ($node = $event_or_object)) {
          $query[]['fields'] = array(
            'nid' => $node->nid,
            'type' => $node->type,
            'author' => $node->uid,
          );
          return $query;
        }

    After extracting the $node from the $event, we set $query[]['fields'] to a dictionary defining, for this event, all the fields defined by the module. As you can tell from the presence of the $query object, there's way more you can do with this op, but it is not covered here.

    DIGESTING AND DEDUPING: Understanding the relationship between Messaging and Notifications. Usually the name of a message group doesn't matter, but when it is used with Notifications, the names must follow very strict patterns. Firstly, they must start with the name "notifications," and then are followed by either "event" or "digest," depending on whether the message group is being used to represent a single event or a group of events. For 'event' messages, the third part of the name is the "type," which we get from Notification's $event->type (eg: notifications_content uses 'node'). The last part of the name is the operation being performed, which comes from Notification's $event->action. For example:
    - notifications-event-node-comment might refer to the message group used when someone comments on a node
    - notifications-event-user-update to a user who's updated their profile
    Hyphens cannot appear anywhere other than to separate the parts of these words.
    For 'digest' messages, the third and fourth part of the name come from hook_notification's "event types" callback, specifically this line:

        $types[] = array(
          'type' => 'node',
          'action' => 'insert',
          ...
          'digest' => array('node', 'type'),
        );
        $types[] = array(
          'type' => 'node',
          'action' => 'update',
          ...
          'digest' => array('node', 'nid'),
        );

    In this case, the first event type (node insertion) will be digested with the notifications-digest-node-type message template providing the header and footer, likely saying something like "the following [type] was created." The second event type (node update) will be digested with the notifications-digest-node-nid message template.

    Data Structure and Callback Reference

    $event
    The $event object has the following members:
    - $event->type: The type of event. Must match the type in hook_notification::"event types". {notifications_event}
    - $event->action: The action the event describes. Most events are sorted by [$event->type][$event->action]. {notifications_event}
    - $event->object[$object_type]: All objects relevant to the event. For example, $event->object['node'] might be the node that the event describes. $object_type can come from the 'event types' hook (see below). The main purpose appears to be to be passed to token_replace_multiple as the second parameter. $event->object[$event->type] is assumed to exist in the short digest processing functions, but this doesn't appear to be used anywhere. Not saved in the database; loaded by hook_notifications::"event load"
    - $event->oid: apparently unused. The id of the primary object relevant to this event (eg the node's nid).
    - $event->module: apparently unused
    - $event->params[$key]: Mainly a place for plugins to save random data. The main module will serialize the contents of this array but does not use it in any way. However, notifications_ui appears to do something weird with it, possibly by using subscriptions' fields as keys into this array. I'm not sure why though.

    hook_notifications
    op 'subscription types': returns an array of subscription types provided by the plugin, in the form $key => array(...) with the following members:
    - event_type: this subscription can only match events whose $event->type has this value. Stored in the database as notifications.event_type for every individual subscription. Apparently this can be overridden in code, but I wouldn't try it (see notifications_save_subscription).
    - fields: an unkeyed array of fields that must be matched by an event (in addition to the event_type) for it to match this subscription. Each element of this array must be a key of the array returned by op 'subscription fields', which in turn must be used by op 'query' to actually perform the matching.
    - title: user-readable title for their subscriptions page (eg the 'type' column in user/%uid/notifications/subscriptions)
    - description: a user-readable description.
    - page callback: used to add a supplementary page at user/%uid/notifications/blah. This and the following are used by notifications_ui as a part of hook_menu_alter. Appears to be partially deprecated.
    - user page: user/%uid/notifications/blah.

    op 'event types': returns an array of event types, with each event type being an array with the following members:
    - type: this will match $event->type
    - action: this will match $event->action
    - digest: an array with two ordered (non-keyed) elements, "type" and "field." 'type' is used as an index into $event->objects. 'field' is also used to group events like so: $event->objects[$type]->$field.
      For example, 'field' might be 'nid' - if the object is a node, the digest lines will be grouped by node ID. Finally, both are used to find the correct Messaging template; see the discussion above.
    - description: used on the admin "Notifications-Events" page
    - name: unused, use Messaging instead
    - line: deprecated, use Messaging instead

    Other Stuff

    This is an example of the main query that inserts an event into the queue:

        INSERT INTO {notifications_queue} (uid, destination, sid, module, eid, send_interval, send_method, cron, created, conditions)
        SELECT DISTINCT s.uid, s.destination, s.sid, s.module,
            %d, // event ID
            s.send_interval, s.send_method, s.cron,
            %d, // time of the event
            s.conditions
        FROM {notifications} s
        INNER JOIN {notifications_fields} f ON s.sid = f.sid
        WHERE (s.status = 1)
          AND (s.event_type = '%s') // subscription type
          AND (s.send_interval >= 0)
          AND (s.uid <> %d)
          AND (
               (f.field = '%s' AND f.intval IN (%d)) // everything from the 'query' op
            OR (f.field = '%s' AND f.intval = %d)
            OR (f.field = '%s' AND f.value = '%s')
            OR (f.field = '%s' AND f.intval = %d))
        GROUP BY s.uid, s.destination, s.sid, s.module, s.send_interval, s.send_method, s.cron, s.conditions
        HAVING s.conditions = count(f.sid)

    Read the article

  • Can't install ErlyBird on Netbeans 6.8

    - by Jonas
    I would like to use ErlyBird, so I downloaded the latest Netbeans (6.8), and then downloaded and unzipped ErlyBird. After that I imported the plugins in Netbeans, but during installation I was prompted with: Warning - could not install some modules: Erlang Project - The module named org.netbeans.modules.parsing.api was needed and not found. Erlang Editor - The module named org.netbeans.modules.parsing.api was needed and not found. Another module could not be installed due to the above problems. Is there a workaround to this?

    Read the article

  • NSURLConnection and Basic HTTP Authentication

    - by Justin Galzic
    I need to invoke an initial GET HTTP request with Basic Authentication. This would be the first time the request is sent to the server, and I already have the username & password, so there's no need for a challenge from the server for authorization. First questions: 1) Does NSURLConnection have to be set as synchronous to do Basic Auth? According to the answer on this post, it seems that you can't do Basic Auth if you opt for the async route. 2) Does anyone know of any sample code that illustrates Basic Auth on a GET request without the need for a challenge response? Apple's documentation shows an example, but only after the server has issued the challenge request to the client. I'm kind of new to the networking portion of the SDK and I'm not sure which of the other classes I should use to get this working. (I see the NSURLCredential class, but it seems that it is used only with NSURLAuthenticationChallenge after the client has requested an authorized resource from the server.)

    Read the article

  • exe created by py2exe gives an error

    - by user283405
    I created an exe with py2exe. After successfully building it, I get the following error when I run main.exe:

        File "_mssql.pyc", line 12, in <module>
        File "_mssql.pyc", line 10, in __load
        ImportError: DLL load failed: The specified module could not be found.

    I am using the pymssql module for SQL Server.
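    This usually means py2exe did not bundle a module or DLL that the _mssql C extension pulls in at runtime. Below is a hedged setup.py sketch; the script name and the exact list of modules to force-include are assumptions based on common pymssql/py2exe reports, not a verified fix:

        # setup.py -- build with: python setup.py py2exe
        from distutils.core import setup
        import py2exe

        setup(
            console=["main.py"],
            options={
                "py2exe": {
                    # pymssql's C extension and its indirect imports are often missed
                    # by py2exe's dependency scan, so list them explicitly
                    "includes": ["pymssql", "_mssql", "uuid", "decimal"],
                }
            },
        )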

    Read the article

  • pyscripter Rpyc error

    - by jf328
    PyScripter 2.5.3.0 x64, Python 2.7.7, Anaconda 2.0.1, Windows 7.

    I was using PyScripter and EPD Python happily in 32-bit with no problems. I just changed to the 64-bit Anaconda version and re-installed everything, but now PyScripter cannot import rpyc -- it is running with the internal engine (not Anaconda), and there is no such error in plain Python. Thanks very much! By the way, there is a similar SO post from a few years ago, but the answer there does not work.

        *** Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)] on win32. ***
        *** Internal Python engine is active ***
        *** Internal Python engine is active ***
        >>> import rpyc
        Traceback (most recent call last):
          File "<interactive input>", line 1, in <module>
          File "C:\Anaconda\lib\site-packages\rpyc\__init__.py", line 44, in <module>
            from rpyc.core import (SocketStream, TunneledSocketStream, PipeStream, Channel,
          File "C:\Anaconda\lib\site-packages\rpyc\core\__init__.py", line 1, in <module>
            from rpyc.core.stream import SocketStream, TunneledSocketStream, PipeStream
          File "C:\Anaconda\lib\site-packages\rpyc\core\stream.py", line 7, in <module>
            import socket
          File "C:\Anaconda\Lib\socket.py", line 47, in <module>
            import _socket
        ImportError: DLL load failed: The specified procedure could not be found.
        >>>

        C:\research>python
        Python 2.7.7 |Anaconda 2.0.1 (64-bit)| (default, Jun 11 2014, 10:40:02) [MSC v.1500 64bit (AMD64)] on win32
        Type "help", "copyright", "credits" or "license" for more information.
        Anaconda is brought to you by Continuum Analytics.
        Please check out: http://continuum.io/thanks and https://binstar.org
        >>> import rpyc
        >>>
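    The traceback shows the internal engine (a Python 2.7.3 build) walking into Anaconda's site-packages and then failing when socket.py loads the _socket extension DLL, which points at an interpreter/DLL mismatch between the engine and the Anaconda 2.7.7 install rather than at rpyc itself. A small hedged diagnostic, run inside the PyScripter engine, shows which build and which files are actually being used:

        import sys, imp
        print(sys.version)                  # which Python build the engine really is
        print(sys.executable)               # internal engine vs. C:\Anaconda\python.exe
        print(imp.find_module("_socket"))   # where the _socket extension would be loaded from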

    Read the article

  • problem bex error

    - by Vaibhav Gandhi
    Problem signature:
      Problem Event Name: BEX
      Application Name: iexplore.exe
      Application Version: 8.0.6001.18882
      Application Timestamp: 4b3ed243
      Fault Module Name: msjava.dll
      Fault Module Version: 5.0.2752.0
      Fault Module Timestamp: 35747274
      Exception Offset: 000c5a6c
      Exception Code: c0000005
      Exception Data: 00000008
      OS Version: 6.0.6001.2.1.0.768.3
      Locale ID: 1033
      Additional Information 1: fd00
      Additional Information 2: ea6f5fe8924aaa756324d57f87834160
      Additional Information 3: fd00
      Additional Information 4: ea6f5fe8924aaa756324d57f87834160

    Read the article

  • Using SQL Alchemy and pyodbc with IronPython 2.6.1

    - by beargle
    I'm using IronPython and the clr module to retrieve SQL Server information via SMO. I'd like to retrieve/store this data in a SQL Server database using SQL Alchemy, but am having some trouble loading the pyodbc module. Here's the setup:

    - IronPython 2.6.1 (installed at D:\Program Files\IronPython)
    - CPython 2.6.5 (installed at D:\Python26)
    - SQL Alchemy 0.6.1 (installed at D:\Python26\Lib\site-packages\sqlalchemy)
    - pyodbc 2.1.7 (installed at D:\Python26\Lib\site-packages)

    I have these entries in the IronPython site.py to import CPython standard and third-party libraries:

        # Add CPython standard libs and DLLs
        import sys
        sys.path.append(r"D:\Python26\Lib")
        sys.path.append(r"D:\Python26\DLLs")
        sys.path.append(r"D:\Python26\lib-tk")
        sys.path.append(r"D:\Python26")

        # Add CPython third-party libs
        sys.path.append(r"D:\Python26\Lib\site-packages")

        # sqlite3
        sys.path.append(r"D:\Python26\Lib\sqlite3")

        # Add SQL Server SMO
        sys.path.append(r"D:\Program Files\Microsoft SQL Server\100\SDK\Assemblies")
        import clr
        clr.AddReferenceToFile('Microsoft.SqlServer.Smo.dll')
        clr.AddReferenceToFile('Microsoft.SqlServer.SqlEnum.dll')
        clr.AddReferenceToFile('Microsoft.SqlServer.ConnectionInfo.dll')

    SQL Alchemy imports OK in IronPython, but I receive this error message when trying to connect to SQL Server:

        IronPython 2.6.1 (2.6.10920.0) on .NET 2.0.50727.3607
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import sqlalchemy
        >>> e = sqlalchemy.MetaData("mssql://")
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "D:\Python26\Lib\site-packages\sqlalchemy\schema.py", line 1780, in __init__
          File "D:\Python26\Lib\site-packages\sqlalchemy\schema.py", line 1828, in _bind_to
          File "D:\Python26\Lib\site-packages\sqlalchemy\engine\__init__.py", line 241, in create_engine
          File "D:\Python26\Lib\site-packages\sqlalchemy\engine\strategies.py", line 60, in create
          File "D:\Python26\Lib\site-packages\sqlalchemy\connectors\pyodbc.py", line 29, in dbapi
        ImportError: No module named pyodbc

    This code works just fine in CPython, but it looks like the pyodbc module isn't accessible from IronPython. Any suggestions? I realize that this may not be the best way to approach the problem, so I'm open to tackling this a different way. I just wanted to get some experience with using SQL Alchemy and pyodbc.
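    pyodbc is a CPython C extension, so IronPython cannot load it no matter what sys.path contains, which is why the same code runs fine under CPython. As a point of comparison only, one hedged alternative for the storage side is to talk to SQL Server through ADO.NET from IronPython (the connection string and query below are placeholders), although that means giving up SQL Alchemy:

        import clr
        clr.AddReference("System.Data")
        from System.Data.SqlClient import SqlConnection, SqlCommand

        # placeholder connection string - adjust server and authentication to your environment
        conn = SqlConnection("Data Source=localhost;Initial Catalog=master;Integrated Security=True")
        conn.Open()
        cmd = SqlCommand("SELECT name FROM sys.databases", conn)
        reader = cmd.ExecuteReader()
        while reader.Read():
            print(reader["name"])
        reader.Close()
        conn.Close()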

    Read the article

  • Using Tweepy API behind proxy

    - by user1505819
    I am using Tweepy, a Python wrapper for Twitter. I am writing a small GUI application in Python which updates my Twitter account. Currently I am just testing whether I can connect to Twitter, hence the test() call. I am behind a Squid proxy server. What changes should I make to this snippet so that it works? Setting http_proxy in the bash shell did not help me.

        def printTweet(self):
            # extract tweet string
            tweet_str = str(self.ui.tweet_txt.toPlainText())
            # tweet string extracted
            self.ui.tweet_txt.clear()
            self.tweet_on_twitter(tweet_str)

        def tweet_on_twitter(self, my_tweet):
            auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
            auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
            api = tweepy.API(auth)
            if api.test():
                print 'Test successful'
            else:
                print 'Test unsuccessful'
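    Tweepy opens its own HTTP connections, so shell environment variables like http_proxy generally have no effect on it; the proxy has to be handed to the library itself. Newer Tweepy releases accept a proxy argument on tweepy.API -- whether the version installed here supports it is an assumption, and the proxy URL below is a placeholder:

        import tweepy

        auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
        auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)

        # route all API traffic through the Squid proxy (host:port is a placeholder)
        api = tweepy.API(auth, proxy="https://proxy.example.com:3128")
        print(api.verify_credentials().screen_name)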

    Read the article

  • Send an email using python script

    - by nimmyliji
    Hi, today I needed to send email from a Python script. As always, I searched Google and found the following script that fits my need.

        import smtplib

        SERVER = "localhost"

        FROM = "[email protected]"
        TO = ["[email protected]"] # must be a list

        SUBJECT = "Hello!"
        TEXT = "This message was sent with Python's smtplib."

        # Prepare actual message
        message = """\
        From: %s
        To: %s
        Subject: %s

        %s
        """ % (FROM, ", ".join(TO), SUBJECT, TEXT)

        # Send the mail
        server = smtplib.SMTP(SERVER)
        server.sendmail(FROM, TO, message)
        server.quit()

    But when I tried to run the program, I got the following error message:

        Traceback (most recent call last):
          File "C:/Python26/email.py", line 1, in <module>
            import smtplib
          File "C:\Python26\lib\smtplib.py", line 46, in <module>
            import email.utils
          File "C:/Python26/email.py", line 24, in <module>
            server = smtplib.SMTP(SERVER)
        AttributeError: 'module' object has no attribute 'SMTP'

    How can I solve this problem? Can anyone help me? Thanks in advance, Nimmy.
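    The traceback gives the cause away: the script is saved as C:/Python26/email.py, so when smtplib executes "import email.utils" it finds this script instead of the standard library's email package, and the half-initialized module then breaks smtplib. Renaming the script (and deleting any leftover email.pyc) is the usual fix; a small check like the one below, saved under a different name, shows what "email" actually resolves to:

        # check_email_module.py -- must NOT be named email.py
        import email
        print(email.__file__)   # should point into the Python standard library, not at your own script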

    Read the article

  • google-app-engine deploy error..

    - by zjm1126
        2010-04-20 15:33:39,421 WARNING appengine_rpc.py:399 ssl module not found.
        Without the ssl module, the identity of the remote host cannot be verified, and connections may NOT be secure. To fix this, please install the ssl module from http://pypi.python.org/pypi/ssl . To learn more, see http://code.google.com/appengine/kb/general.html#rpcssl .

    How can I fix this? Thanks.

    Read the article

  • How to get roles with JSR 196 authentication in GlassFish?

    - by deamon
    I want to use a custom authentication module conforming to JSR 196 in GlassFish 3. The interface javax.security.auth.message.ServerAuth has the method:

        AuthStatus validateRequest(
            MessageInfo messageInfo,
            javax.security.auth.Subject clientSubject,
            javax.security.auth.Subject serviceSubject
        )

    AuthStatus can be one of several constants, like FAILURE or SUCCESS. The question is: how can I get the roles from a "role database" with JSR 196? Example: the server receives a request with an SSO token (a CAS token, for example), checks whether the token is valid, and populates the remote user object with roles fetched from a database via JDBC or from a REST service via HTTP. Is the role fetching in the scope of JSR 196? How could that be implemented? Do I have to use JSR 196 together with JSR 115 to use custom authentication and a custom role source?

    Read the article

  • Why is Assembly.GetCustomAttributes suddenly throwing TypeLoadException on build machine with Silverlight 4?

    - by andrej351
    A short while back i had to display the current version of our Silverlight app. After some googling the following code gave me the desired result: var fileVersionAttributes = typeof(MyClass).Assembly. GetCustomAttributes(typeof(AssemblyFileVersionAttribute), false) as AssemblyFileVersionAttribute[]; var version = fileVersionAttributes[0].Version; This worked a treat in our .NET 3.5 Silverlight 3 environment. However, we recently upgraded to .NET 4 and Silverlight 4. We just finished getting our build machine working and found that the unit test for this code was throwing the following exception: Exception Message: System.TypeLoadException: Error 0x80131522. Debugging resource strings are unavailable. See http://go.microsoft.com/fwlink/?linkid=106663&Version=3.0.50106.0&File=mscorrc.dll&Key=0x80131522 at System.ModuleHandle.ResolveType(ModuleHandle module, Int32 typeToken, RuntimeTypeHandle* typeInstArgs, Int32 typeInstCount, RuntimeTypeHandle* methodInstArgs, Int32 methodInstCount) at System.ModuleHandle.ResolveTypeHandle(Int32 typeToken, RuntimeTypeHandle[] typeInstantiationContext, RuntimeTypeHandle[] methodInstantiationContext) at System.Reflection.Module.ResolveType(Int32 metadataToken, Type[] genericTypeArguments, Type[] genericMethodArguments) at System.Reflection.CustomAttribute.FilterCustomAttributeRecord(CustomAttributeRecord caRecord, MetadataImport scope, Assembly& lastAptcaOkAssembly, Module decoratedModule, MetadataToken decoratedToken, RuntimeType attributeFilterType, Boolean mustBeInheritable, Object[] attributes, IList derivedAttributes, RuntimeType& attributeType, RuntimeMethodHandle& ctor, Boolean& ctorHasParameters, Boolean& isVarArg) at System.Reflection.CustomAttribute.GetCustomAttributes(Module decoratedModule, Int32 decoratedMetadataToken, Int32 pcaCount, RuntimeType attributeFilterType, Boolean mustBeInheritable, IList derivedAttributes, Boolean isDecoratedTargetSecurityTransparent) at System.Reflection.CustomAttribute.GetCustomAttributes(Module decoratedModule, Int32 decoratedMetadataToken, Int32 pcaCount, RuntimeType attributeFilterType, Boolean isDecoratedTargetSecurityTransparent) at System.Reflection.CustomAttribute.GetCustomAttributes(Assembly assembly, Type caType) at System.Reflection.Assembly.GetCustomAttributes(Type attributeType, Boolean inherit) at MyCode.VersionTest() I have never seen this exception before and the link in it points nowhere. It is only throwing on the build machine and not on my development box, so i'm going through a process of trial and error to see any differences between the two. Any idea why this might be happening?? Cheers, Andrej.

    Read the article

  • Blowery HttpCompress with ASP.NET MVC

    - by Iceman
    I'm using the HttpCompress module from Blowery. It works great with ASP.NET MVC except on the root URL, www.samplesite.com/. On all other URLs it's fine, www.samplesite.com/Countries for example. Is anyone here using this module, or any other compression module, with ASP.NET MVC?

    Read the article

  • Rails. How to extend controller class from plugin without any modification in controller file?

    - by potapuff
    I'm using Rails 2.2.2. The Rails manual says the way to extend a controller from a plug-in is:

    Plugin:

        module Plug
          def self.included(base)
            base.extend ClassMethods
            base.send :include, InstanceMethods
            base.helper JumpLinksHelper
          end

          module InstanceMethods
            def new_controller_metod
              ...
            end
          end

          module ClassMethods
          end
        end

    app/controller/name_controller.rb:

        class NameController < ApplicationController
          include Plug
          ...
        end

    Question: is there any way to extend the controller from the plug-in, without any modification of the controller file, if we know the controller name?

    Read the article

  • API Wrapper Architecture Best Practice

    - by Adam Taylor
    Hi, so I'm writing a Perl wrapper module around a REST webservice and I'm hoping to have some advice on how best to architect the module. I've been looking at a couple of different Perl modules for inspiration:

    - Flickr::Simple2 - this is basically one big file with methods wrapping around the different methods in the Flickr API, e.g. getPhotos() etc.
    - Flickr::API - this is a sub-class of another module (LWP) for making HTTP requests. So basically it just allows you to make calls through the module, using LWP, that go to the correct API method/URL without defining any wrapper methods itself. (That's explained pretty poorly, but basically it has a method that takes an argument (an API method name) and constructs the correct API call.) e.g. request() / response().

    An alternative design would be like the first described, but less monolithic, with separate classes for separate "areas" of the API. I'd like to follow modern/best practice Perl methods, so I'm using Dist::Zilla to build the module and Moose for the OO stuff, but I'd appreciate some input on how to actually design/architect my wrapper. Guides/tutorials or pointers to other well designed modules would be appreciated. Cheers

    Read the article

  • Getting traceback from Python C API

    - by TheObserver
    I have a Python C API extension module which occasionally falls over with an uninformative "MemoryError". It's clearly not an error that's catered for by the module's exception handlers. How do I get a more informative error traceback so I can figure out what's gone wrong in the extension module?

    Read the article

  • How do you work out the IIS Virtual Path for an application?

    - by joshcomley
    When I try to change the ASP.NET version to v4 on IIS 6, I receive the following warning:

        Changing the Framework version requires a restart of the W3SVC service. Alternatively, you can change the Framework version without restarting the W3SVC service by running: aspnet_regiis.exe -norestart -s IIS-Virtual-Path
        Do you want to continue (this will change the Framework version and restart the W3SVC service)?

    How do I work out IIS-Virtual-Path? I have tried the obvious paths, i.e.:

        aspnet_regiis.exe -norestart -s "/WebSites/Extranet/AppName"

    where WebSites is the name of the folder in IIS, Extranet the name of the root app and AppName the name of the Virtual Directory application I am trying to change. Thanks!

    Edit: How do I work out the virtual path for the Auth virtual directory in the following IIS6 setup? I have tried:

        aspnet_regiis.exe -norestart -s "/Web Sites/Extranet/Auth"
        aspnet_regiis.exe -norestart -s "Auth"

    I get:

        Installation stopped because the specified path (WhateverIPutIn) is invalid.
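    For aspnet_regiis, the -s argument is an IIS metabase path rather than the folder layout shown in IIS Manager: it starts with W3SVC, then the numeric site identifier, then ROOT, then the application path beneath the site root. As a hedged example, the site identifier 1 below is a placeholder (the real one is shown in the Identifier column of the Web Sites list in IIS Manager), and a nested application keeps its nesting in the path:

        aspnet_regiis.exe -norestart -s "W3SVC/1/ROOT/Auth"
        aspnet_regiis.exe -norestart -s "W3SVC/1/ROOT/Extranet/Auth"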

    Read the article

  • Doxygen groups and modules index

    - by cppdev
    Hi, I am creating a doxygen document for my project. Recently I grouped related classes using the \addtogroup tag. After this, I got a Modules tab in my documentation which shows all modules. I want to add some description right below the module name on that same page. How can I do this with doxygen? Here's my tag:

        /*! \addtogroup test test
         * Test Testing a group in doxygen
         * @{
         */
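    Doxygen treats any text in the group's comment block after the \addtogroup (or \defgroup) line as the group's detailed description and places it on that module's page, directly under the module name; \brief sets the one-line summary shown in the Modules index. A hedged sketch based on the tag above (the description wording is made up for illustration):

        /*! \addtogroup test Test
         *  \brief One-line summary shown in the Modules index.
         *
         *  Longer description text goes here; doxygen shows it on the module's
         *  own page, right below the module name.
         *  @{
         */

        /* ... grouped classes and functions ... */

        /*! @} */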

    Read the article

  • CodeIgniter Twitter Library: What is my callback URL and where can I find it?

    - by Tapha
    I would like to know where I can find my callback URL in CI. I'm quite new to it, so I'm not really sure. Here is the library I'm using:

        <?php

        class Home extends Controller {

            function Home()
            {
                parent::Controller();
            }

            public function index()
            {
                // This is how we do a basic auth:
                // $this->twitter->auth('user', 'password');

                // Fill in your twitter oauth client keys here
                $consumer_key = '';
                $consumer_key_secret = '';

                // For this example, we're going to get and save our access_token and access_token_secret
                // in session data, but you might want to use a database instead :)
                $this->load->library('session');

                $tokens['access_token'] = NULL;
                $tokens['access_token_secret'] = NULL;

                // GET THE ACCESS TOKENS
                $oauth_tokens = $this->session->userdata('twitter_oauth_tokens');

                if ( $oauth_tokens !== FALSE )
                    $tokens = $oauth_tokens;

                $this->load->library('twitter');

                $auth = $this->twitter->oauth($consumer_key, $consumer_key_secret, $tokens['access_token'], $tokens['access_token_secret']);

                if ( isset($auth['access_token']) && isset($auth['access_token_secret']) )
                {
                    // SAVE THE ACCESS TOKENS
                    $this->session->set_userdata('twitter_oauth_tokens', $auth);

                    if ( isset($_GET['oauth_token']) )
                    {
                        $uri = $_SERVER['REQUEST_URI'];
                        $parts = explode('?', $uri);

                        // Now we redirect the user since we've saved their stuff!
                        header('Location: '.$parts[0]);
                        return;
                    }
                }

                // This is where you can call a method.
                $this->twitter->call('statuses/update', array('status' => 'Testing CI Twitter oAuth sexyness by @elliothaughin'));

                // Here's the calls you can make now.
                // Sexy!

                /*
                $this->twitter->call('statuses/friends_timeline');

                $this->twitter->search('search', array('q' => 'elliot'));
                $this->twitter->search('trends');
                $this->twitter->search('trends/current');
                $this->twitter->search('trends/daily');
                $this->twitter->search('trends/weekly');

                $this->twitter->call('statuses/public_timeline');
                $this->twitter->call('statuses/friends_timeline');
                $this->twitter->call('statuses/user_timeline');
                $this->twitter->call('statuses/show', array('id' => 1234));

                $this->twitter->call('direct_messages');

                $this->twitter->call('statuses/update', array('status' => 'If this tweet appears, oAuth is working!'));
                $this->twitter->call('statuses/destroy', array('id' => 1234));

                $this->twitter->call('users/show', array('id' => 'elliothaughin'));

                $this->twitter->call('statuses/friends', array('id' => 'elliothaughin'));
                $this->twitter->call('statuses/followers', array('id' => 'elliothaughin'));

                $this->twitter->call('direct_messages');
                $this->twitter->call('direct_messages/sent');
                $this->twitter->call('direct_messages/new', array('user' => 'jamierumbelow', 'text' => 'This is a library test. Ignore'));
                $this->twitter->call('direct_messages/destroy', array('id' => 123));

                $this->twitter->call('friendships/create', array('id' => 'elliothaughin'));
                $this->twitter->call('friendships/destroy', array('id' => 123));
                $this->twitter->call('friendships/exists', array('user_a' => 'elliothaughin', 'user_b' => 'jamierumbelow'));

                $this->twitter->call('account/verify_credentials');
                $this->twitter->call('account/rate_limit_status');
                $this->twitter->call('account/rate_limit_status');

                $this->twitter->call('account/update_delivery_device', array('device' => 'none'));
                $this->twitter->call('account/update_profile_colors', array('profile_text_color' => '666666'));
                $this->twitter->call('help/test');
                */
            }
        }

        /* End of file welcome.php */
        /* Location: ./system/application/controllers/home.php */

    Thank you all

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >