Search Results

Search found 6461 results on 259 pages for 'hosted exchange'.


  • Why does Celery work in Python shell, but not in my Django views? (import problem)

    - by TIMEX
    I installed Celery (latest stable version). I have a directory called /home/myuser/fable/jobs. Inside this directory, I have a file called tasks.py:

        from celery.decorators import task
        from celery.task import Task

        class Submitter(Task):
            def run(self, post, **kwargs):
                return "Yes, it works!!!!!!"

    Inside this directory, I also have a file called celeryconfig.py:

        BROKER_HOST = "localhost"
        BROKER_PORT = 5672
        BROKER_USER = "abc"
        BROKER_PASSWORD = "xyz"
        BROKER_VHOST = "fablemq"
        CELERY_RESULT_BACKEND = "amqp"
        CELERY_IMPORTS = ("tasks", )

    In my /etc/profile, I have these set as my PYTHONPATH:

        PYTHONPATH=/home/myuser/fable:/home/myuser/fable/jobs

    So I run my Celery worker from the console ($ celeryd --loglevel=INFO), and I try it out. I open the Python console and import the tasks. Then I run the Submitter:

        >>> import fable.jobs.tasks as tasks
        >>> s = tasks.Submitter()
        >>> s.delay("abc")
        <AsyncResult: d70d9732-fb07-4cca-82be-d7912124a987>

    Everything works, as you can see in my console:

        [2011-01-09 17:30:05,766: INFO/MainProcess] Task tasks.Submitter[d70d9732-fb07-4cca-82be-d7912124a987] succeeded in 0.0398268699646s:

    But when I go into my Django views.py and run the exact same 3 lines of code as above, I get this:

        [2011-01-09 17:25:20,298: ERROR/MainProcess] Unknown task ignored: "Task of kind 'fable.jobs.tasks.Submitter' is not registered, please make sure it's imported.": {'retries': 0, 'task': 'fable.jobs.tasks.Submitter', 'args': ('abc',), 'expires': None, 'eta': None, 'kwargs': {}, 'id': 'eb5c65b4-f352-45c6-96f1-05d3a5329d53'}
        Traceback (most recent call last):
          File "/home/myuser/mysite-env/lib/python2.6/site-packages/celery/worker/listener.py", line 321, in receive_message
            eventer=self.event_dispatcher)
          File "/home/myuser/mysite-env/lib/python2.6/site-packages/celery/worker/job.py", line 299, in from_message
            eta=eta, expires=expires)
          File "/home/myuser/mysite-env/lib/python2.6/site-packages/celery/worker/job.py", line 243, in __init__
            self.task = tasks[self.task_name]
          File "/home/myuser/mysite-env/lib/python2.6/site-packages/celery/registry.py", line 63, in __getitem__
            raise self.NotRegistered(str(exc))
        NotRegistered: "Task of kind 'fable.jobs.tasks.Submitter' is not registered, please make sure it's imported."

    It's weird, because celeryd does show the task as registered when I launch it:

        [2011-01-09 17:38:27,446: WARNING/MainProcess] Configuration ->
            . broker -> amqp://GOGOme@localhost:5672/fablemq
            . queues ->
                . celery -> exchange:celery (direct) binding:celery
            . concurrency -> 1
            . loader -> celery.loaders.default.Loader
            . logfile -> [stderr]@INFO
            . events -> OFF
            . beat -> OFF
            . tasks ->
                . tasks.Decayer
                . tasks.Submitter

    Can someone help?
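    A likely culprit, sketched below as an assumption rather than a confirmed diagnosis: with CELERY_IMPORTS = ("tasks",), the worker registers the class under the name tasks.Submitter (visible in the startup banner above), while the Django view imports it as fable.jobs.tasks.Submitter, so the worker is asked for a task name it never registered. One way to make both sides agree on a single dotted path:

        # celeryconfig.py -- register the task module under the same dotted
        # path the Django code imports. Assumes /home/myuser/fable is on
        # PYTHONPATH and that fable/ and fable/jobs/ are packages with
        # __init__.py files (an assumption, not stated in the post).
        CELERY_IMPORTS = ("fable.jobs.tasks",)

        # views.py -- import via the identical path:
        import fable.jobs.tasks as tasks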


  • Asp.net MVC and MOSS 2010 integration

    - by Robert Koritnik
    Just a sidenote: I'm not sure whether I should post this to Server Fault as well, because some MOSS admin may have some info for me.

    A bit of explanation first (without ASP.NET MVC)

    Is it possible to integrate the two? Is it possible to write an application that would share at least credential information with MOSS? I have to write a MOSS application that has to do with these technologies:

      - MOSS 2010
      - Personal client certificate authentication (most probably on USB keys)
      - Active Directory Federation Services
      - A separate SQL DB that would serve application-specific data (separate as in not being part of the MOSS DB)

    How should it work?

      - Users should authenticate into MOSS 2010 using personal certificates
      - There would be a certain part of MOSS that would be related to my custom application
      - This application should only authorize certain users via AD FS - I guess these users should have a certain security claim attached to them
      - This application should manage users (that have access to this app) with additional (app-specific) security claims related to this application (as additional application-level authorization rights for individual application parts)
      - This application should use a custom SQL 2008 DB heavily, with its own data
      - This application should have the possibility to integrate with external systems as well (Exchange, for instance, to inject calendar entries; ERP systems; etc.)
      - This application should be able to export its data (from its DB) to files. I don't know if it's possible, but it would be nice if the app could add these files to MOSS and attach authorization info to them, so only users with sufficient rights would be able to view/open these files.

    Why ASP.NET MVC then?

    I'm very well versed in ASP.NET MVC (also with the latest version) and I haven't done anything on SharePoint since version 2003 (which doesn't do me any good or prepare me for the latest version in any way, shape or form). This project will most probably be a death-march project, so I would rather write my application as a UI-rich ASP.NET MVC application and somehow integrate it into MOSS. But not only via a link, because I would like to at least share credentials, so users wouldn't need to re-login when accessing my app. Using ASP.NET MVC I would at least have the possibility to finish on time or be less death-marching. Is this at all possible?

    Questions

      1. Is it possible to integrate ASP.NET MVC into MOSS as described above?
      2. If integration is not possible, would it be possible to create a completely MOSS-based application that would work as described?
      3. Which parts of MOSS 2010 should I use to accomplish what I need?


  • Correlated SQL Join Query from multiple tables

    - by SooDesuNe
    I have two tables like the ones below. I need to find which exchangeRate was in effect at the dateOfPurchase. I've tried some correlated subqueries, but I'm having difficulty getting the correlated record to be used in the subqueries. I expect a solution will need to follow this basic outline:

      1. SELECT only the exchangeRates for the applicable countryCode
      2. From 1, SELECT the newest exchangeRate less than the dateOfPurchase
      3. Fill in the query table with all the fields from 2 and the purchasesTable.

    My tables:

    purchasesTable:

        dateOfPurchase  | costOfPurchase | countryOfPurchase
        29-March-2010   | 20.00          | EUR
        29-March-2010   | 3000           | JPN
        30-March-2010   | 50.00          | EUR
        30-March-2010   | 3000           | JPN
        30-March-2010   | 2000           | JPN
        31-March-2010   | 100.00         | EUR
        31-March-2010   | 125.00         | EUR
        31-March-2010   | 2000           | JPN
        31-March-2010   | 2400           | JPN

    costOfPurchase is in whatever the local currency is for a given countryCode.

    exchangeRateTable:

        effectiveDate   | countryCode | exchangeRate
        29-March-2010   | JPN         | 90
        29-March-2010   | EUR         | 1.75
        30-March-2010   | JPN         | 92
        31-March-2010   | JPN         | 91

    The results of the query that I'm looking for:

        dateOfPurchase  | costOfPurchase | countryOfPurchase | exchangeRate
        29-March-2010   | 20.00          | EUR               | 1.75
        29-March-2010   | 3000           | JPN               | 90
        30-March-2010   | 50.00          | EUR               | 1.75
        30-March-2010   | 3000           | JPN               | 92
        30-March-2010   | 2000           | JPN               | 92
        31-March-2010   | 100.00         | EUR               | 1.75
        31-March-2010   | 125.00         | EUR               | 1.75
        31-March-2010   | 2000           | JPN               | 91
        31-March-2010   | 2400           | JPN               | 91

    So, for example, in the results the exchange rate in effect for EUR on 31-March was 1.75. I'm using Access, but a MySQL answer would be fine too.

    UPDATE: Modification to Allan's answer (I had to make a couple of small changes):

        SELECT dateOfPurchase, costOfPurchase, countryOfPurchase, exchangeRate
        FROM purchasesTable p
        LEFT OUTER JOIN
            (SELECT e1.exchangeRate, e1.countryCode, e1.effectiveDate,
                    min(e2.effectiveDate) AS enddate
             FROM exchangeRateTable e1
             LEFT OUTER JOIN exchangeRateTable e2
                 ON e1.effectiveDate < e2.effectiveDate
                 AND e1.countryCode = e2.countryCode
             GROUP BY e1.exchangeRate, e1.countryCode, e1.effectiveDate) e
        ON p.dateOfPurchase >= e.effectiveDate
        AND (p.dateOfPurchase < e.enddate OR e.enddate IS NULL)
        AND p.countryOfPurchase = e.countryCode
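    For reference, the same result can also be written in the correlated-subquery style the question originally aims for - a sketch assuming the table and column names above; Access accepts TOP 1 here, while MySQL would use LIMIT 1 instead:

        SELECT p.dateOfPurchase, p.costOfPurchase, p.countryOfPurchase,
               (SELECT TOP 1 e.exchangeRate
                FROM exchangeRateTable e
                WHERE e.countryCode = p.countryOfPurchase
                  AND e.effectiveDate <= p.dateOfPurchase
                ORDER BY e.effectiveDate DESC) AS exchangeRate
        FROM purchasesTable p;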


  • not sure if this is my mistake ~~ vs2010 Add Service Reference fails

    - by gerryLowry
    https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter

    The above is from Taleo's API guide. I'm trying to create a WCF client (e.g. "Creating Your First WCF Client", http://channel9.msdn.com/shows/Endpoint/Endpoint-Screencasts-Creating-Your-First-WCF-Client/ ). The tbe.taleo... link is from Taleo's API documentation.

    Likely my understanding is flawed. My assumption is that when the link from Taleo is entered into the VS2010 "Add Service Reference" dialog and GO is clicked, VS2010 should retrieve a proper WSDL/SOAP envelope back from the Taleo link. That does not happen; instead an error occurs. Fiddler2 (http://fiddler2.com) displays the status code 500, "HTTP/1.1 500 Internal Server Error" [full details below]. "WcfTestClient.exe" gives a similar error [WcfTestClient details below].

    QUESTION: is it me, or is the Taleo link flawed?

    Thank you, Gerry

    [FULL DETAILS, "Add Service Reference"]

        The HTML document does not contain Web service discovery information.
        Metadata contains a reference that cannot be resolved: 'https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter'.
        The content type text/xml;charset=utf-8 of the response message does not match the content type of the binding (application/soap+xml; charset=utf-8). If using a custom encoder, be sure that the IsContentTypeSupported method is implemented properly. The first 544 bytes of the response were: ' SOAP-ENV:Protocol Unsupported content type "application/soap+xml; charset=utf-8", must be: "text/xml". /MANAGER/dispatcher/servlet/rpcrouter '.
        The remote server returned an error: (500) Internal Server Error.
        If the service is defined in the current solution, try building the solution and adding the service reference again.

    [WcfTestClient DETAILS]

        Error: Cannot obtain Metadata from https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter
        If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at http://go.microsoft.com/fwlink/?LinkId=65455.
        WS-Metadata Exchange Error
        URI: https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter
        Metadata contains a reference that cannot be resolved: 'https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter'.
        The content type text/xml;charset=utf-8 of the response message does not match the content type of the binding (application/soap+xml; charset=utf-8). If using a custom encoder, be sure that the IsContentTypeSupported method is implemented properly. The first 544 bytes of the response were: 'SOAP-ENV:Protocol Unsupported content type "application/soap+xml; charset=utf-8", must be: "text/xml". /MANAGER/dispatcher/servlet/rpcrouter'.
        The remote server returned an error: (500) Internal Server Error.
        HTTP GET Error
        URI: https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter
        The HTML document does not contain Web service discovery information.
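    A hedged reading of the error text, not a confirmed diagnosis: "Unsupported content type ... must be: text/xml" suggests the rpcrouter endpoint speaks SOAP 1.1 (text/xml), while the metadata tools probe it with SOAP 1.2 (application/soap+xml), and the URL itself serves the RPC endpoint rather than a WSDL document. If Taleo ships the WSDL as a file in its API kit, generating the proxy from that file and pointing a SOAP 1.1 binding at the endpoint might work - a sketch, with the contract name being hypothetical:

        <!-- basicHttpBinding is WCF's SOAP 1.1 binding (text/xml). -->
        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="TaleoSoap11">
                <security mode="Transport" /> <!-- the endpoint is https -->
              </binding>
            </basicHttpBinding>
          </bindings>
          <client>
            <endpoint address="https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter"
                      binding="basicHttpBinding"
                      bindingConfiguration="TaleoSoap11"
                      contract="TaleoService.IRpcRouter" /> <!-- hypothetical name -->
          </client>
        </system.serviceModel>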


  • Why do I get a AssociationTypeMismatch when creating my model object?

    - by Maxm007
    Hi, I get the following error:

        ActiveRecord::AssociationTypeMismatch in ContractsController#create
        ExchangeRate(#2183081860) expected, got HashWithIndifferentAccess(#2159586480)

    Params:

        {"commit"=>"Create", "authenticity_token"=>"g2/Vm2pTcDGk6uRas+aTgpiQiGDY8lsc3UoL8iE+7+E=",
         "contract"=>{"side"=>"BUY", "currency_id"=>"488525179", "amount"=>"1000",
         "user_id"=>"633107804", "exchange_rate"=>{"rate"=>"1.7"}}}

    My relevant models are:

        class Contract < ActiveRecord::Base
          belongs_to :currency
          belongs_to :user
          has_one :exchange_rate
          has_many :trades
          accepts_nested_attributes_for :exchange_rate
        end

        class ExchangeRate < ActiveRecord::Base
          belongs_to :denccy, :class_name => "Currency"
          belongs_to :numccy, :class_name => "Currency"
          belongs_to :contract
        end

    My view is:

        <% form_for @contract do |contractForm| %>
          Username: <%= contractForm.collection_select(:user_id, User.all, :id, :username) %> <br>
          B/S: <%= contractForm.select(:side, options_for_select([['BUY', 'BUY'], ['SELL', 'SELL']], 'BUY')) %> <br>
          Currency: <%= contractForm.collection_select(:currency_id, Currency.all, :id, :ccy) %> <br>
          Amount: <%= contractForm.text_field :amount %> <br>
          <% contractForm.fields_for @contract.exchange_rate do |rateForm| %>
            Rate: <%= rateForm.text_field :rate %> <br>
          <% end %>
          <%= submit_tag :Create %>
        <% end %>

    My controller:

        class ContractsController < ApplicationController
          def new
            @contract = Contract.new
            @contract.build_exchange_rate
            respond_to do |format|
              format.html # new.html.erb
              format.xml  { render :xml => @contract }
            end
          end

          def create
            @contract = Contract.new(params[:contract])
            respond_to do |format|
              if @contract.save
                flash[:notice] = 'Contract was successfully created.'
                format.html { redirect_to(@contract) }
                format.xml  { render :xml => @contract, :status => :created, :location => @contract }
              else
                format.html { render :action => "new" }
                format.xml  { render :xml => @contract.errors, :status => :unprocessable_entity }
              end
            end
          end
        end

    I'm not sure why it's not recognizing the exchange rate attributes? Thank you
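    For comparison, the usual shape of the fix - a sketch, assuming the models above: fields_for is given the association symbol rather than the associated object, so that Rails names the parameters contract[exchange_rate_attributes], which is the key accepts_nested_attributes_for actually looks for (the plain exchange_rate hash in the params above is what Contract.new then tries, and fails, to assign as an association):

        <%# Inside form_for @contract do |contractForm| ... %>
        <% contractForm.fields_for :exchange_rate do |rateForm| %>
          Rate: <%= rateForm.text_field :rate %>
        <% end %>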


  • Understanding CLR 2.0 Memory Model

    - by Eloff
    Joe Duffy gives 6 rules that describe the CLR 2.0+ memory model (its actual implementation, not any ECMA standard). I'm writing down my attempt at figuring this out, mostly as a way of rubber ducking, but if I make a mistake in my logic, at least someone here will be able to catch it before it causes me grief.

    Rule 1: Data dependence among loads and stores is never violated.
    Rule 2: All stores have release semantics, i.e. no load or store may move after one.
    Rule 3: All volatile loads are acquire, i.e. no load or store may move before one.
    Rule 4: No loads and stores may ever cross a full barrier (e.g. Thread.MemoryBarrier, lock acquire, Interlocked.Exchange, Interlocked.CompareExchange, etc.).
    Rule 5: Loads and stores to the heap may never be introduced.
    Rule 6: Loads and stores may only be deleted when coalescing adjacent loads and stores from/to the same location.

    I'm attempting to understand these rules.

        x = y
        y = 0 // Cannot move before the previous line according to Rule 1.

        x = y
        z = 0
        // equates to this sequence of loads and stores before possible re-ordering
        load y
        store x
        load 0
        store z

    Looking at this, it appears that the load 0 can be moved up to before load y, but the stores may not be re-ordered at all. Therefore, if a thread sees z == 0, then it also will see x == y. If y was volatile, then load 0 could not move before load y; otherwise it may. Volatile stores don't seem to have any special properties; no stores can be re-ordered with respect to each other (which is a very strong guarantee!). Full barriers are like a line in the sand which loads and stores cannot be moved over.

    No idea what Rule 5 means.

    I guess Rule 6 means that if you do:

        x = y
        x = z

    then it is possible for the CLR to delete both the load of y and the first store to x.

        x = y
        z = y
        // equates to this sequence of loads and stores before possible re-ordering
        load y
        store x
        load y
        store z
        // could be re-ordered like this
        load y
        load y
        store x
        store z
        // rule 6 applied means this is possible?
        load y
        store x // but don't pop y from stack (or first duplicate item on top of stack)
        store z

    What if y was volatile? I don't see anything in the rules that prohibits the above optimization from being carried out. This does not violate double-checked locking, because the lock() between the two identical conditions prevents the loads from being moved into adjacent positions, and according to Rule 6, that's the only time they can be eliminated.

    So I think I understand all but Rule 5 here. Anyone want to enlighten me (or correct me, or add something to any of the above)?
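    One common reading of Rule 5 - offered here as an interpretation, not something from the question: the JIT may not invent heap reads or writes the program didn't contain, because an introduced re-read could observe a different value than the one already loaded. A minimal C# sketch of the pattern the rule protects:

        // Without Rule 5 the JIT could replace 'local' with two separate
        // reads of the field, and the null check would no longer protect
        // the call -- 'callback' may be nulled by another thread between
        // the two introduced reads.
        class Handler
        {
            private Action callback; // written by other threads

            public void Invoke()
            {
                Action local = callback;  // one heap load into a local
                if (local != null)
                    local();              // must use 'local', not re-read 'callback'
            }
        }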


  • Performance Optimization for Matrix Rotation

    - by Summer_More_More_Tea
    Hello everyone: I'm trapped by a performance-optimization lab in the book "Computer Systems: A Programmer's Perspective", described as follows: in an N*N matrix M, where N is a multiple of 32, the rotate operation can be represented as:

      1. Transpose: interchange elements M(i,j) and M(j,i)
      2. Exchange rows: row i is exchanged with row N-1-i

    An example of matrix rotation (N is 3 instead of 32 for simplicity):

        -------                    -------
        |1|2|3|                    |3|6|9|
        -------                    -------
        |4|5|6|   after rotate is  |2|5|8|
        -------                    -------
        |7|8|9|                    |1|4|7|
        -------                    -------

    A naive implementation is:

        #define RIDX(i,j,n) ((i)*(n)+(j))

        void naive_rotate(int dim, pixel *src, pixel *dst)
        {
            int i, j;
            for (i = 0; i < dim; i++)
                for (j = 0; j < dim; j++)
                    dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)];
        }

    I came up with an idea based on inner-loop unrolling. The results:

        Code Version     Speed Up
        original         1x
        unrolled by 2    1.33x
        unrolled by 4    1.33x
        unrolled by 8    1.55x
        unrolled by 16   1.67x
        unrolled by 32   1.61x

    I also got a code snippet from pastebin.com that seems to solve this problem:

        void rotate(int dim, pixel *src, pixel *dst)
        {
            int stride = 32;
            int count = dim >> 5;
            src += dim - 1;
            int a1 = count;
            do {
                int a2 = dim;
                do {
                    int a3 = stride;
                    do {
                        *dst++ = *src;
                        src += dim;
                    } while(--a3);
                    src -= dim * stride + 1;
                    dst += dim - stride;
                } while(--a2);
                src += dim * (stride + 1);
                dst -= dim * dim - stride;
            } while(--a1);
        }

    After carefully reading the code, I think the main idea of this solution is to treat each block of 32 rows as a data zone and perform the rotating operation on each zone separately. The speed-up of this version is 1.85x, overwhelming all the loop-unroll versions.

    Here are the questions:

      1. In the inner-loop-unroll version, why does the improvement slow down as the unrolling factor increases - especially when changing the unrolling factor from 8 to 16, which does not have the same effect as switching from 4 to 8? Does the result have some relationship with the depth of the CPU pipeline? If the answer is yes, could the degradation in improvement reflect the pipeline length?
      2. What is the probable reason for the optimization in the data-zone version? It seems that there is not too much essential difference from the original naive version.

    EDIT: My test environment is an Intel Centrino Duo processor and the version of gcc is 4.4.

    Any advice will be highly appreciated! Kind regards!


  • Android - Start service on boot

    - by Gady
    From everything I've seen on Stack Exchange and elsewhere, I have everything set up correctly to start an IntentService when the Android OS boots. Unfortunately it is not starting on boot, and I'm not getting any errors. Maybe the experts can help...

    Manifest:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.phx.batterylogger"
            android:versionCode="1"
            android:versionName="1.0"
            android:installLocation="internalOnly">

            <uses-sdk android:minSdkVersion="8" />
            <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
            <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
            <uses-permission android:name="android.permission.BATTERY_STATS" />

            <application android:icon="@drawable/icon" android:label="@string/app_name">
                <service android:name=".BatteryLogger"/>
                <receiver android:name=".StartupIntentReceiver">
                    <intent-filter>
                        <action android:name="android.intent.action.BOOT_COMPLETED" />
                    </intent-filter>
                </receiver>
            </application>
        </manifest>

    BroadcastReceiver for startup:

        package com.phx.batterylogger;

        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;

        public class StartupIntentReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                Intent serviceIntent = new Intent(context, BatteryLogger.class);
                context.startService(serviceIntent);
            }
        }

    UPDATE: I tried just about all of the suggestions below, and I added logging such as Log.v("BatteryLogger", "Got to onReceive, about to start service"); to the onReceive handler of the StartupIntentReceiver, and nothing is ever logged. So it isn't even making it to the BroadcastReceiver.

    I think I'm deploying the APK and testing correctly: just running Debug in Eclipse, and the console says it successfully installs it to my Xoom tablet at \BatteryLogger\bin\BatteryLogger.apk. Then, to test, I reboot the tablet, look at the logs in DDMS, and check the running services in the OS settings. Does this all sound correct, or am I missing something? Again, any help is much appreciated.
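    A hedged test idea (the component name below is taken from the manifest above): the receiver can be exercised without rebooting by broadcasting BOOT_COMPLETED at it directly from adb. One caveat worth checking on a Xoom: from Android 3.1 onward, a newly installed application sits in a "stopped state" and receives no broadcasts, including BOOT_COMPLETED, until the user has launched one of its components at least once.

        # Fire the boot broadcast at the receiver directly (a sketch):
        adb shell am broadcast \
            -a android.intent.action.BOOT_COMPLETED \
            -n com.phx.batterylogger/.StartupIntentReceiver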


  • A SelfHosted WCF Service over Basic HTTP Binding doesn't support more than 1000 concurrent requests

    - by Krishnan
    I have a self-hosted WCF service over BasicHttpBinding, consumed by an ASMX client. I'm simulating a concurrent user load of 1200 users. The service method takes a string parameter and returns a string. The data exchanged is less than 10KB. The processing time for a request is fixed at 2 seconds by a Thread.Sleep(2000) statement; nothing additional. I have removed all the DB hits / business logic.

    The same piece of code runs fine for 1000 concurrent users. I get the following error when I bump the number up to 1200 users:

        System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
           at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
           at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           --- End of inner exception stack trace ---
           at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead)
           --- End of inner exception stack trace ---
           at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
           at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
           at WCF.Throttling.Client.Service.Function2(String param)

    This exception is often reported on DataContract mismatch and large data exchange, but never when doing a load test. I have browsed enough and have tried most of the options, which include:

      - Enabled trace & message logging on the server side, but no errors were logged.
      - To overcome port exhaustion, MaxUserPort is set to 65535 and TcpTimedWaitDelay to 30 secs.
      - MaxConcurrentCalls is set to 600 and MaxConcurrentInstances is set to 1200.
      - The Open, Close, Send and Receive timeouts are set to 10 minutes.
      - HTTPWebRequest KeepAlive set to false.

    I have not been able to nail down the issue for the past two days. Any help would be appreciated. Thank you.
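    One knob the list above doesn't mention - offered as a sketch to try, not a confirmed fix: raise all three throttle values together, since the session throttle stays at its own default even when maxConcurrentCalls and maxConcurrentInstances have been raised, and a saturated throttle can surface on the client as an abruptly closed connection rather than a clean fault:

        <behaviors>
          <serviceBehaviors>
            <behavior name="ThrottleBehavior">
              <serviceThrottling maxConcurrentCalls="1200"
                                 maxConcurrentSessions="1200"
                                 maxConcurrentInstances="2400" />
            </behavior>
          </serviceBehaviors>
        </behaviors>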


  • How can you make an emacs macro wait for cscope query results?

    - by Sudhanshu
    I am trying to write a macro which calls cscope-find-functions-calling-this-function on each and every tag in a file displayed in the *Tags List* buffer (created by the list-tags command). This should create a buffer which contains a list of all functions calling a set of functions defined in a certain file. This is the sequence of keystrokes:

         1. <f11>      ;; cscope-find-functions-calling-this-function
         2. RET        ;; newline [shows results of cscope in a split window]
         3. C-x C-p    ;; mark-page
         4. C-x C-x    ;; icicle-exchange-point-and-mark
         5. <up>       ;; previous-line
         6. <end>      ;; end-of-line [region to copy has been marked]
         7. <f7>       ;; append-results-to-buffer
         8. C-x ESC O  ;; [move back to split window on the right]
         9. C-x b      ;; icicle-buffer [switch back to *Tags List* buffer]
        10. *Tags      ;; self-insert-command * 5
        11. SPC        ;; self-insert-command
        12. List*      ;; self-insert-command * 5
        13. RET        ;; newline
        14. <down>     ;; next-line [position point on next tag in the list]

    Problem: I get no results in the buffer, and I found out that's because steps 3-7 execute before cscope prints the results of the query made in steps 1-2. I can insert a pause in the macro by using C-x q, but I'd rather have the macro wait after step 2 until cscope has returned with the results, and only then continue. I suspect this is not possible through a macro - maybe a Lisp function... I'm not a Lisp expert myself. Can someone please help? Thanks!

    Details:

      - I have Icicles installed, so by default I get the word at point in the current buffer as input in the minibuffer.
      - F11 is bound to cscope-find-functions-calling-this-function.
      - windmove is installed, and C-x ESC o (as shown above) takes you to the right window.
      - F7 is bound to append-results-to-buffer, which is defined as:

            (defun append-results-to-buffer ()
              (interactive)
              (append-to-buffer (get-buffer-create "c1") (point) (mark)))

        This function just appends the currently marked region to a buffer named "c1".
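    One Lisp-level alternative - a sketch built on generic Emacs process machinery, not on any documented xcscope hook, so the process name "cscope" and the helper collect-cscope-results (standing in for steps 3-7) are assumptions: attach a sentinel to the cscope subprocess and do the copying when it finishes, instead of inside a keyboard macro:

        ;; Sketch: run the per-tag work from a sentinel that fires when the
        ;; cscope process exits, rather than from a keyboard macro.
        ;; Check the actual process name with (process-list).
        (defun my-cscope-wait-then-collect ()
          (let ((proc (get-process "cscope")))
            (when proc
              (set-process-sentinel
               proc
               (lambda (process event)
                 (when (string-match "finished" event)
                   (collect-cscope-results)))))))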


  • Emails sent using Flex app are delayed

    - by user363825
    I'm currently building an application in Flex that utilizes SMTPMailer to automatically send emails to the user when a particular condition is satisfied. The application checks this condition every 30 seconds. The condition is satisfied based on new records being returned from a database table. The problem is as follows:

    When the condition is first satisfied, the email is delivered to the user with no issues. The second time the condition is satisfied, the email is not delivered. In the SMTP logs, the delivery attempt appears to get hung up on the following line:

        354 Start mail input; end with <CRLF>.<CRLF>

    No error codes are present in the SMTP logs, but I do trace the following event from the SMTPMailer class:

        [Event type="mailError" bubbles=false cancelable=false eventPhase=2]

    When the condition is satisfied a third time, the email that was not delivered the previous time is now delivered, along with the email for this instance. This pattern then repeats itself: the next email is not sent, followed by two emails being sent simultaneously when the condition is met again.

    The SMTP server is Windows 2003, on an internal network. The email is sent to an Outlook account hosted on an Exchange server that is also on this internal network.

    Here is the ActionScript code that creates the SMTPMailer object:

        public var testMail:SMTPMailer = null;

        public function alertNotify()
        {
            Security.loadPolicyFile("crossdomain.xml");
            this.testMail = new SMTPMailer("myserver.ec.local", 25);
            this.testMail.addEventListener(SMTPEvent.MAIL_SENT, onEmailEvent);
            this.testMail.addEventListener(SMTPEvent.MAIL_ERROR, onEmailError);
            this.testMail.addEventListener(SMTPEvent.DISCONNECTED, onEmailConn);
            this.testMail.addEventListener(SecurityErrorEvent.SECURITY_ERROR, onEmailError);
        }

    Here is the code that creates the email body and calls the method to send the email:

        public function alertUser(emailAC:ArrayCollection):void
        {
            var testStr:String = " Key Location Event Type Comment Update Time ";
            for each (var event:rEntity in emailAC)
            {
                testStr = testStr + "" + event.key.toString() + "" + event.xml.address.toString()
                    + " " + [email protected]() + "" + [email protected]()
                    + "" + [email protected]() + ""
                    + event.xml.attribute("update-time").toXMLString() + "";
            }
            testStr = testStr + "";
            testMail.flush();
            testMail.sendHTMLMail("[email protected]", "[email protected]",
                                  "Event Notification", testStr);
        }

    I'm really not sure where the email that gets hung up is stored until it is finally sent... Any suggestions as to how to begin to remedy this issue would be much appreciated.


  • polymorphism, inheritance in c# - base class calling overridden method?

    - by Andrew Johns
    This code doesn't work, but hopefully you'll get what I'm trying to achieve here. I've got a Money class, which I've taken from http://www.noticeablydifferent.com/CodeSamples/Money.aspx and extended a little to include currency conversion. The implementation of the actual conversion rate could be different in each project, so I decided to move the method for retrieving a conversion rate (GetCurrencyConversionRate) into a derived class. The ConvertTo method, however, contains code that would work for any implementation, assuming the derived class has overridden GetCurrencyConversionRate, so it made sense to me to keep it in the parent class.

    So what I'm trying to do is get an instance of SubMoney and be able to call the ConvertTo() method, which would in turn use the overridden GetCurrencyConversionRate and return a new instance of SubMoney. The problem is, I'm not really understanding some concepts of polymorphism and inheritance yet, so I'm not quite sure what I'm trying to do is even possible in the way I think it is. What currently happens is that I end up with an Exception, because the base GetCurrencyConversionRate method is used instead of the derived one. Something tells me I need to move the ConvertTo method down to the derived class, but this seems like I'll be duplicating code in multiple implementations, so surely there's a better way?

        public class Money
        {
            public decimal CurrencyConversionRate
            {
                get { return GetCurrencyConversionRate(_regionInfo.ISOCurrencySymbol); }
            }

            public static decimal GetCurrencyConversionRate(string isoCurrencySymbol)
            {
                throw new Exception("Must override this method if you wish to use it.");
            }

            public Money ConvertTo(string cultureName)
            {
                // convert to base USD first by dividing current amount by its exchange rate.
                Money someMoney = this;
                decimal conversionRate = this.CurrencyConversionRate;
                decimal convertedUSDAmount = Money.Divide(someMoney, conversionRate).Amount;

                // now convert to new currency
                CultureInfo cultureInfo = new CultureInfo(cultureName);
                RegionInfo regionInfo = new RegionInfo(cultureInfo.LCID);
                conversionRate = GetCurrencyConversionRate(regionInfo.ISOCurrencySymbol);
                decimal convertedAmount = convertedUSDAmount * conversionRate;
                Money convertedMoney = new Money(convertedAmount, cultureName);
                return convertedMoney;
            }
        }

        public class SubMoney : Money
        {
            public SubMoney(decimal amount, string cultureName) : base(amount, cultureName) {}

            public static new decimal GetCurrencyConversionRate(string isoCurrencySymbol)
            {
                // This would get the conversion rate from some web or database source
                decimal result = new Decimal(2);
                return result;
            }
        }
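    For comparison, the usual restructuring - a sketch under the assumption that only the rate lookup varies per project: static methods never participate in overriding in C# (the new keyword merely hides the base member), so the extension point has to be a virtual, here abstract, instance method:

        public abstract class Money
        {
            // Each project supplies its own rate source by overriding this.
            protected abstract decimal GetCurrencyConversionRate(string isoCurrencySymbol);

            public Money ConvertTo(string cultureName)
            {
                // ...same conversion steps as above, but every call to
                // GetCurrencyConversionRate now dispatches to the override...
                throw new NotImplementedException("body as in the original ConvertTo");
            }
        }

        public class SubMoney : Money
        {
            protected override decimal GetCurrencyConversionRate(string isoCurrencySymbol)
            {
                return 2m; // e.g. fetched from a web or database source
            }
        }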


  • AJAX, same-origin Policy and working XML Requests

    - by Joern
    Hello guys. Currently I develop widgets for smartphones and am going a bit deeper into data exchange between client and server applications. My problem is: for my current project I want my client file to request data from a PHP script with the help of an AJAX XmlHttpRequest and the POST method:

        function xmlRequestNotes() {
            var parameter = 'p=1234';
            xmlhttp = new XMLHttpRequest();
            xmlhttp.open("POST", url, true);

            // Http Header
            xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            xmlhttp.setRequestHeader("Content-length", parameter.length);
            xmlhttp.setRequestHeader("Connection", "close");

            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    json = JSON.parse(xmlhttp.responseText);
                    // Doing stuff with the response
                }
            };
            xmlhttp.send(parameter);
        }

    This works perfectly fine on my local server set up in XAMPP and the local widget emulator. But once it gets onto the device (also with access to the target network), I receive the 101 Network Error. As far as I have read, this is due to the "same-origin policy" of XmlHttpRequests.

    My problem is really understanding that. Although the idea of this policy is clear to me, I'm a bit confused by the fact that another XmlHttpRequest, for a Yahoo Weather XML feed, works fine. Now, could anyone be so helpful as to enlighten me?

    Here is the request that returns a city name from Yahoo's weather feed:

        function getCityName() {
            xmlhttp = new XMLHttpRequest();
            xmlhttp.open("GET", "http://weather.yahooapis.com/forecastrss?w=645458&u=c", true);
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    xmlhttp.responseXML;
                    var yweather = "http://xml.weather.yahoo.com/ns/rss/1.0";
                    alert(xmlhttp.responseXML.getElementsByTagNameNS(yweather, "location")[0].getAttribute("city"));
                }
            };
            xmlhttp.send(null);
        }

    Obvious differences are the POST and GET methods, for one, but seeing that the same-origin policy takes effect no matter what method, I can't really make much sense of it. Why does the latter request work but not the first? I would really appreciate some help here. Greetings and a merry Christmas to you guys!
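    One thing worth checking - a hedged guess, since the widget runtime isn't named: many smartphone widget platforms don't apply the browser's same-origin policy at all, but instead enforce a whitelist declared in the widget's configuration, which would explain why the Yahoo domain is reachable while the project's own server is not. In a W3C-style config.xml the declaration looks roughly like this (the origin URL is a placeholder):

        <widget xmlns="http://www.w3.org/ns/widgets">
            <!-- Whitelist the server the XmlHttpRequest targets; without an
                 <access> entry many runtimes block the request outright. -->
            <access origin="http://example-backend.mycompany.com" subdomains="true"/>
        </widget>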


  • Supporting multiple instances of a plugin DLL with global data

    - by Bruno De Fraine
    Context: I converted a legacy standalone engine into a plugin component for a composition tool. Technically, this means that I compiled the engine code base to a C DLL which I invoke from a .NET wrapper using P/Invoke; the wrapper implements an interface defined by the composition tool. This works quite well, but now I have received a request to load multiple instances of the engine, for different projects. Since the engine keeps the project data in a set of global variables, and since the DLL with the engine code base is loaded only once, loading multiple projects means that the project data is overwritten.

    I can see a number of solutions, but they all have some disadvantages:

      1. You can create multiple DLLs with the same code, which are seen as different DLLs by Windows, so their code is not shared. This probably already works if you have multiple copies of the engine DLL with different names. However, the engine is invoked from the wrapper using DllImport attributes, and I think the name of the engine DLL needs to be known when compiling the wrapper. Obviously, if I have to compile different versions of the wrapper for each project, this is quite cumbersome.

      2. The engine could run as a separate process. This means that the wrapper would launch a separate process for the engine when it loads a project, and it would use some form of IPC to communicate with this process. While this is a relatively clean solution, it requires some effort to get working, and I don't know which IPC technology would be best for setting up this kind of construction. There may also be a significant overhead in the communication: the engine needs to frequently exchange arrays of floating-point numbers.

      3. The engine could be adapted to support multiple projects. This means that the global variables should be put into a project structure, and every reference to the globals should be converted to a corresponding reference that is relative to a particular project. There are about 20-30 global variables, but as you can imagine, these global variables are referenced from all over the code base, so this conversion would need to be done in some automatic manner. A related problem is that you should be able to reference the "current" project structure in all places, but passing this along as an extra argument in each and every function signature is also cumbersome. Does there exist a technique (in C) to consider the current call stack and find the nearest enclosing instance of a relevant data value there? (See the sketch after this list.)

    Can the stackoverflow community give some advice on these (or other) solutions?
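    A sketch of one common stand-in for "find the value on the call stack" from option 3: a thread-local current-project pointer that the wrapper sets before each engine call. The Project struct and the MSVC-specific __declspec(thread) are assumptions for illustration, not part of the original engine:

        /* Former globals move into a per-project structure. */
        typedef struct Project {
            double *samples;      /* ...the 20-30 former globals live here... */
            int     sample_count;
        } Project;

        /* One "current project" per thread; no function signatures change. */
        static __declspec(thread) Project *g_current_project = NULL;

        void engine_set_current_project(Project *p) { g_current_project = p; }

        /* Each former global reference becomes an access through the pointer: */
        int engine_sample_count(void) { return g_current_project->sample_count; }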


  • Security review of an authenticated Diffie Hellman variant

    - by mtraut
    EDIT: I'm still hoping for some advice on this; I have tried to clarify my intentions...

    When I came upon device pairing in my mobile communication framework, I studied a lot of papers on this topic and also got some input from previous questions here. But I didn't find a ready-to-implement protocol solution, so I invented a derivative, and as I'm no crypto geek I'm not sure about the security caveats of the final solution.

    The main questions are:

      - Is SHA256 sufficient as a commit function?
      - Is the addition of the shared secret as authentication info in the commit string safe?
      - What is the overall security of the 1024-bit group DH? I assume at most a 2^-24 probability of a successful MITM attack (because of the 24-bit challenge). Is this plausible?
      - What may be the most promising attack (besides ripping the device out of my numb, cold hands)?

    This is the algorithm sketch:

    For first-time pairing, a solution proposed in "Key agreement in peer-to-peer wireless networks" (DH-SC) is implemented. I based it on a commitment derived from:

      - A fixed "UUID" for the communicating entity/role (128 bits, sent at protocol start, before commitment)
      - The public DH key (192-bit private key, based on the 1024-bit Oakley group)
      - A 24-bit random challenge

    The commit is computed using SHA256:

        c = sha256( UUID || DH pub || Chall )

    Both parties exchange this commitment, then open it and transfer the plain content of the above values. The 24-bit random value is displayed to the user for manual authentication. The DH session key (128 bytes, see above) is computed. When the user opts for persistent pairing, the session key is stored with the remote UUID as a shared secret.

    The next time the devices connect, the commit is computed by additionally hashing the previous DH session key before the random challenge. For sure, it is not transferred when opening:

        c = sha256( UUID || DH pub || DH sess || Chall )

    Now the user is not bothered with authenticating when the local party can derive the same commitment using its own stored previous DH session key. After a successful connection, the new DH session key becomes the new shared secret.

    As this does not exactly fit the protocols I have found so far (and as such their security proofs), I'd be very interested to get an opinion from some more crypto-enabled guys here. BTW, I did read about the "EKE" protocol, but I'm not sure what the extra security level is.
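    To pin down the byte layout of the commitment described above, here is a minimal sketch - the field widths and the decision to concatenate raw bytes big-endian are assumptions the real protocol would have to fix explicitly:

        # Sketch of the commit computation; not a complete protocol.
        import hashlib
        import os

        def commit(uuid16, dh_pub, prev_session_key=None):
            """uuid16: 16 bytes; dh_pub: 128 bytes (1024-bit group element)."""
            chall = os.urandom(3)  # 24-bit random challenge
            h = hashlib.sha256()
            h.update(uuid16)
            h.update(dh_pub)
            if prev_session_key is not None:   # re-pairing: mix in shared secret
                h.update(prev_session_key)     # 128-byte previous DH session key
            h.update(chall)
            return h.digest(), chall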


  • What IPC method should I use between Firefox extension and C# code running on the same machine?

    - by Rory
    I have a question about how to structure communication between a (new) Firefox extension and existing C# code. The Firefox extension will use configuration data and will produce other data, so it needs to get the config data from somewhere and save its output somewhere. The data is produced/consumed by existing C# code, so I need to decide how the extension should interact with the C# code.

    Some pertinent factors:

      - It's only running on Windows, in a relatively controlled corporate environment.
      - I have a Windows service running on the machine, built in C#.
      - Storing the data in a local datastore (like SQLite) would be useful for other reasons.
      - The volume of data is low, e.g. 10KB of uncompressed XML every few minutes, and isn't very 'chatty'.
      - The data exchange can be asynchronous for the most part, if not completely.
      - As with all projects, I have limited resources, so I want an option that's relatively easy.
      - It doesn't have to be ultra-high performance, but it shouldn't add significant overhead.
      - I'm planning on building the extension in JavaScript rather than C++ (although I could be convinced otherwise if really necessary).

    Some options I'm considering:

      1. Use an XPCOM to .NET/COM bridge.
      2. Use a SQLite DB: the extension would read from and save to it. The C# code would run in the service, populating the DB and then processing data created by the extension.
      3. Use TCP sockets to communicate between the extension and the service. Let the service manage a local data store.

    My problem with (1) is that I think this will be tricky and not so easy - but I could be completely wrong. The main problem I see with (2) is the locking of SQLite: only a single process can write data at a time, so there'd be some blocking. However, it would be nice generally to have a local datastore, so this is an attractive option if the performance impact isn't too great. I don't know whether (3) would be particularly easy or hard, or what approach to take on the protocol: something custom or HTTP.

    Any comments on these ideas or other suggestions?


  • Maximum nametable char count exceeded

    - by doc
    I'm having issues with the maximum nametable character count quota. I followed a couple of answers here and they solved the problem for a while, but now I'm having the same issue again. My server-side config is as follows:

        <system.serviceModel>
          <bindings>
            <netTcpBinding>
              <binding name="GenericBinding" maxBufferPoolSize="2147483647"
                       maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                              maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                              maxNameTableCharCount="2147483647" />
                <security mode="None" />
              </binding>
            </netTcpBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior>
                <serviceMetadata httpGetEnabled="false" />
                <serviceDebug includeExceptionDetailInFaults="true" />
                <dataContractSerializer maxItemsInObjectGraph="1000000" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service name="REMWCF.RemWCFSvc">
              <endpoint address="" binding="netTcpBinding" contract="REMWCF.IRemWCFSvc"
                        bindingConfiguration="GenericBinding" />
              <endpoint address="mex" binding="mexTcpBinding" contract="IMetadataExchange" />
              <host>
                <baseAddresses>
                  <add baseAddress="net.tcp://localhost:9081/RemWCFSvc" />
                </baseAddresses>
              </host>
            </service>
          </services>
        </system.serviceModel>

    I also have the same TCP binding in the devenv configuration. Have I reached the limit of contracts supported? Is there a way to turn off that quota?

    EDIT - Error message:

        Error: Cannot obtain Metadata from net.tcp://localhost:9081/RemWCFSvc/mex
        If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at http://go.microsoft.com/fwlink/?LinkId=65455.
        WS-Metadata Exchange Error
        URI: net.tcp://localhost:9081/RemWCFSvc/mex
        Metadata contains a reference that cannot be resolved: 'net.tcp://localhost:9081/RemWCFSvc/mex'.
        There is an error in the XML document.
        The maximum nametable character count quota (16384) has been exceeded while reading XML data. The nametable is a data structure used to store strings encountered during XML processing - long XML documents with non-repeating element names, attribute names and attribute values may trigger this quota. This quota may be increased by changing the MaxNameTableCharCount property on the XmlDictionaryReaderQuotas object used when creating the XML reader.

    I'm getting that error when trying to obtain metadata from the WCF service (which is hosted in a Windows service app).
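    One detail worth knowing - a sketch of the usual workaround, not a verified fix: the quota tripping here belongs to the metadata exchange itself, and the built-in mexTcpBinding exposes no readerQuotas settings, so an equivalent customBinding with raised quotas is typically substituted for the mex endpoint (the consuming tool also needs its own quotas raised, which is presumably what the devenv configuration edit was for):

        <bindings>
          <customBinding>
            <binding name="mexTcpRelaxed">
              <binaryMessageEncoding>
                <readerQuotas maxNameTableCharCount="2147483647" />
              </binaryMessageEncoding>
              <tcpTransport />
            </binding>
          </customBinding>
        </bindings>

        <!-- and on the service: -->
        <endpoint address="mex" binding="customBinding"
                  bindingConfiguration="mexTcpRelaxed"
                  contract="IMetadataExchange" />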


  • What do I need to distribute (keys, certs) for Python w/ SSL-socket connection?

    - by fandingo
    I'm trying to write a generic server-client application that will be able to exchange data amongst servers. I've read over quite a few OpenSSL documents, and I have successfully set up my own CA and created a cert (and private key) for testing purposes. I'm stuck with Python 2.3, so I can't use the standard "ssl" library. Instead, I'm stuck with PyOpenSSL, which doesn't seem bad, but there aren't many documents out there about it.

    My question isn't really about getting it working. I'm more confused about the certificates and where they need to go. Here are my two programs that do work:

    Server:

        #!/bin/env python
        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER|SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb)
        # ??????
        ctx.use_privatekey_file('./Dmgr-key.pem')
        ctx.use_certificate_file('Dmgr-cert.pem')
        # ??????
        ctx.load_verify_locations('./CAcert.pem')

        server = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        server.bind(('', 50000))
        server.listen(3)
        a, b = server.accept()
        c = a.recv(1024)
        print(c)

    Client:

        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER, verify_cb)
        # ??????????
        ctx.use_privatekey_file('/home/justin/code/work/CA/private/Dmgr-key.pem')
        ctx.use_certificate_file('/home/justin/code/work/CA/Dmgr-cert.pem')
        # ?????????
        ctx.load_verify_locations('/home/justin/code/work/CA/CAcert.pem')

        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.connect(('10.0.0.3', 50000))
        a = Tester(2, 2)
        b = pickle.dumps(a)
        sock.send("Hello, world")
        sock.flush()
        sock.send(b)
        sock.shutdown()
        sock.close()

    I found this information in ftp://ftp.pbone.net/mirror/ftp.pld-linux.org/dists/2.0/PLD/i586/PLD/RPMS/python-pyOpenSSL-examples-0.6-2.i586.rpm which contains some example scripts. As you might gather, I don't fully understand the sections between the "# ??????" comments. I don't get why the certificate and private key are needed on both the client and server. I'm not sure where each should go, but shouldn't I only need to distribute one part of the key (probably the public part)? It undermines the purpose of having asymmetric keys if you still need both on each server, right?

    I tried alternately removing either the pkey or cert on either box, and I get the following error no matter which I remove:

        OpenSSL.SSL.Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure'), ('SSL routines', 'SSL3_WRITE_BYTES', 'ssl handshake failure')]

    Could someone explain whether this is the expected behavior for SSL? Do I really need to distribute the private key and public cert to all my clients? I'm trying to avoid any huge security problems, and leaking private keys would tend to be a big one... Thanks for the help!
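    A hedged observation about the code above: the server's VERIFY_FAIL_IF_NO_PEER_CERT flag demands a certificate from the client, which is why the client also needs a cert and key of its own (ideally a separate client certificate, never a copy of the server's private key). If mutual authentication isn't wanted, dropping that flag leaves the client needing only the CA certificate - a sketch reusing the file names from the question:

        # Server that does NOT demand a client certificate.
        from OpenSSL import SSL

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.use_privatekey_file('./Dmgr-key.pem')    # stays on the server only
        ctx.use_certificate_file('./Dmgr-cert.pem')
        # No set_verify(... VERIFY_FAIL_IF_NO_PEER_CERT ...): without it,
        # the server never asks the client for a certificate.

        # The client then needs only the CA cert to verify the server:
        client_ctx = SSL.Context(SSL.TLSv1_METHOD)
        client_ctx.load_verify_locations('./CAcert.pem')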


  • Directly Jump to another C++ function

    - by maligree
    I'm porting a small academic OS from TriCore to ARM Cortex (Thumb-2 instruction set). For the scheduler to work, I sometimes need to JUMP directly to another function without modifying the stack or the link register. On TriCore (or, rather, with tricore-g++), this wrapper template (for any three-argument function) works:

        template< class A1, class A2, class A3 >
        inline void __attribute__((always_inline))
        JUMP3( void (*func)( A1, A2, A3), A1 a1, A2 a2, A3 a3 )
        {
            typedef void (* __attribute__((interrupt_handler)) Jump3)( A1, A2, A3);
            ( (Jump3)func )( a1, a2, a3 );
        }

        // example of using the template:
        JUMP3( superDispatch, this, me, next );

    This generates the assembler instruction J (a.k.a. JUMP) instead of CALL, leaving the stack and CSAs unchanged when jumping to the (otherwise normal) C++ function superDispatch(SchedulerImplementation* obj, Task::Id from, Task::Id to).

    Now I need equivalent behaviour on ARM Cortex (or, rather, with arm-none-linux-gnueabi-g++), i.e. generate a B (a.k.a. BRANCH) instruction instead of BLX (a.k.a. BRANCH with link and exchange). But there is no interrupt_handler attribute for arm-g++, and I could not find any equivalent attribute. So I tried to resort to asm volatile and writing the asm code directly:

        template< class A1, class A2, class A3 >
        inline void __attribute__((always_inline))
        JUMP3( void (*func)( A1, A2, A3), A1 a1, A2 a2, A3 a3 )
        {
            asm volatile (
                "mov.w r0, %1;"
                "mov.w r1, %2;"
                "mov.w r2, %3;"
                "b %0;"
                :
                : "r"(func), "r"(a1), "r"(a2), "r"(a3)
                : "r0", "r1", "r2" );
        }

    So far, so good - in my theory, at least. Thumb-2 requires function arguments to be passed in registers, i.e. r0..r2 in this case, so it should work. But then the linker dies with "undefined reference to `r6'" on the closing bracket of the asm statement... and I don't know what to make of it.

    OK, I'm not an expert in C++, and the asm syntax is not very straightforward... so has anybody got a hint for me? A hint to the correct __attribute__ for arm-g++ would be one way; a hint to fix the asm code would be another. Another way might be to tell the compiler that a1..a3 should already be in the registers r0..r2 when the asm statement is entered (I looked into that a bit, but did not find any hint).
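    A sketch of one repair, under the assumption that a register-indirect branch is acceptable here: "b" only takes a label, so "b %0" expands to something like "b r6" and the assembler treats r6 as an undefined symbol - hence the linker error. "bx" branches to an address held in a register (with interworking), and GCC's explicit register variables express the "arguments already in r0..r2" idea the last paragraph asks about:

        template< class A1, class A2, class A3 >
        inline void __attribute__((always_inline))
        JUMP3( void (*func)(A1, A2, A3), A1 a1, A2 a2, A3 a3 )
        {
            // Pin the arguments to the AAPCS argument registers.
            register A1 arg0 asm("r0") = a1;
            register A2 arg1 asm("r1") = a2;
            register A3 arg2 asm("r2") = a3;
            // Register-indirect branch; no link register update, no stack use.
            asm volatile ("bx %0"
                          : /* no outputs; control never returns here */
                          : "r"(func), "r"(arg0), "r"(arg1), "r"(arg2));
        }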


  • How solve consumer/producer task using semaphores

    - by user1074896
    I have a SimpleProducerConsumer class that illustrates the consumer/producer problem (I am not sure that it's correct):

        public class SimpleProducerConsumer {
            private Stack<Object> stack = new Stack<Object>();
            private static final int STACK_MAX_SIZE = 10;

            public static void main(String[] args) {
                SimpleProducerConsumer pc = new SimpleProducerConsumer();
                new Thread(pc.new Producer(), "p1").start();
                new Thread(pc.new Producer(), "p2").start();
                new Thread(pc.new Consumer(), "c1").start();
                new Thread(pc.new Consumer(), "c2").start();
                new Thread(pc.new Consumer(), "c3").start();
            }

            public synchronized void push(Object d) {
                while (stack.size() >= STACK_MAX_SIZE)
                    try {
                        wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                stack.push(new Object());
                System.out.println("push " + Thread.currentThread().getName() + " " + stack.size());
                notify();
            }

            public synchronized Object pop() {
                while (stack.size() == 0)
                    try {
                        wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                try {
                    Thread.sleep(50);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                stack.pop();
                System.out.println("pop " + Thread.currentThread().getName() + " " + stack.size());
                notify();
                return null;
            }

            class Consumer implements Runnable {
                @Override
                public void run() {
                    while (true) {
                        pop();
                    }
                }
            }

            class Producer implements Runnable {
                @Override
                public void run() {
                    while (true) {
                        push(new Object());
                    }
                }
            }
        }

    I found a simple implementation of a semaphore (here: http://docs.oracle.com/javase/tutorial/essential/concurrency/guardmeth.html - I know that there is the java.util.concurrent package). How do I need to change the code to exchange the Java object monitors for my custom semaphore (to illustrate the C/P problem using semaphores)?

    Semaphore:

        class Semaphore {
            private int counter;

            public Semaphore() {
                this(0);
            }

            public Semaphore(int i) {
                if (i < 0)
                    throw new IllegalArgumentException(i + " < 0");
                counter = i;
            }

            public synchronized void release() {
                if (counter == 0) {
                    notify();
                }
                counter++;
            }

            public synchronized void acquire() throws InterruptedException {
                while (counter == 0) {
                    wait();
                }
                counter--;
            }
        }
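    For reference, the classic three-semaphore arrangement, written against the Semaphore class above - a sketch of the replacement for the synchronized/wait/notify in push() and pop(), not a drop-in edit of the original class:

        Semaphore empty = new Semaphore(STACK_MAX_SIZE); // free slots
        Semaphore full  = new Semaphore(0);              // filled slots
        Semaphore mutex = new Semaphore(1);              // guards the stack

        void push(Object d) throws InterruptedException {
            empty.acquire();   // wait for a free slot
            mutex.acquire();   // enter critical section
            stack.push(d);
            mutex.release();
            full.release();    // signal one filled slot
        }

        Object pop() throws InterruptedException {
            full.acquire();    // wait for a filled slot
            mutex.acquire();
            Object d = stack.pop();
            mutex.release();
            empty.release();   // signal one free slot
            return d;
        }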


  • Php pdo_dblib - cannot find/unable to load freetds

    - by MaxPowers
    Self-hosted box, RHEL 6
    PHP 5.3.3
    PDO installed
    FreeTDS installed
    pdo_dblib - so far, no luck installing

    My goal is to use PDO with Sybase. I am attempting to install pdo_dblib from the appropriate version of the PHP source code. I have tried a variety of methods and searched quite a bit for help on this topic, but have yet to be successful.

    Method 1

    Install FreeTDS:

        $ ./configure
        $ make
        $ su root
        Password:
        $ make install

    This is successful.

    Install pdo_dblib, inside the /ext/pdo_dblib folder:

        $ phpize
        $ ./configure
        $ make
        $ make test

    Error output:

        PHP Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0

    That doesn't look good... I researched this and found an interesting hack for this here. But changing pdo.ini to pdo_0.ini was not the solution, as I still got the same errors on make test.

        $ su
        $ make install

    Output:

        Installing shared extensions: /usr/lib64/php/modules/

    That seems strange... and no, it doesn't actually install (not showing up in phpinfo after an Apache restart).

    Method 2

    Install FreeTDS, following the instructions exactly, except I add the prefix:

        $ ./configure --prefix=/usr/local/freetds
        $ make
        $ su root
        Password:
        $ make install

    This is successful.

    Install pdo_dblib, inside the /ext/pdo_dblib folder:

        $ phpize
        $ ./configure --with-sybase=/usr/local/freetds

    This produces the following error at the bottom of the output:

        checking for PDO_DBLIB support via FreeTDS... yes, shared
        configure: error: Cannot find FreeTDS in known installation directories

    Method 3

    (A FreeTDS ./configure variation - including or not including the --prefix - did not change the result of this, so I'll skip it.)

    Install the pdo_dblib PECL extension following the method specified here:

        pecl download pdo_dblib
        tar -xzvf PDO_DBLIB-1.0.tgz

    Removed the line

        <dep type="ext" rel="ge" version="1.0">pdo</dep>

    saved the package.xml file, and moved it into the PDO_DBLIB directory:

        mv package.xml ./PDO_DBLIB-1.0

    Navigated to the PDO_DBLIB directory, then installed the package from the directory:

        cd ./PDO_DBLIB-1.0
        pecl install package.xml

    But this command gives me the following error output, same as Method 2:

        checking for PDO_DBLIB support via FreeTDS... yes, shared
        configure: error: Cannot find FreeTDS in known installation directories
        ERROR: `/home/sybase/Install_items/pecl_pdo_dblib/PDO_DBLIB-1.0/configure' failed
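    For reference, a sketch of the flag combination that usually applies here - the configure option name differs from the one tried above: pdo_dblib's configure looks for --with-pdo-dblib, while --with-sybase belongs to the old non-PDO sybase extension. Paths assume the layout from the question:

        # Build pdo_dblib in-tree so the PDO headers are found
        # (the undefined php_pdo_register_driver symbol typically means
        # the extension was built without them).
        cd /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib
        phpize
        ./configure --with-pdo-dblib=/usr/local/freetds
        make
        su -c 'make install'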


  • Getting hp-snmp-agents on HP ProLiant DL360 on Lenny working

    - by mark
    After receiving our HP ProLiant DL360 I'd like to integrate the machine into our Munin system and thus expose ProLiant-specific information via SNMP. I'm running Debian Lenny with kernel 2.6.26-2-vserver-amd64. I've followed http://downloads.linux.hp.com/SDR/getting_started.html and the HP repository has been added to /etc/apt/sources.list.d/HP-ProLiantSupportPack.list. Setting up SNMP on Lenny itself is not a problem: I configured it with a public v1 community string that can read all data, and it works. I installed hp-snmp-agents, ran hpsnmpconfig, and it added this line to the top of /etc/snmp/snmpd.conf:

        dlmod cmaX /usr/lib64/libcmaX64.so

    snmpd gets restarted. Via lsof I can see that libcmaX64 was loaded and is used by snmpd, but I do not get any additional information out of SNMP. I use snmpwalk -v 1 -c public ... and I can see many OIDs, but I do not see the new ones I'd expect, most notably temperatures, fan speeds and such. The OIDs I'm expecting are e.g. 1.3.6.1.4.1.232.6.2.6.8.1.4.1; this is from the existing Munin plugin at http://exchange.munin-monitoring.org/plugins/snmp__hp_temp/version/1. snmpd logs the following:

        snmpd[19007]: cmaX: Parsing shared as a type was unsucessful
        snmpd[19007]: cmaX: listening for subagents on port 25375
        snmpd[19007]: cmaX: subMIB 1 handler has disconnected
        snmpd[19007]: cmaX: subMIB 2 handler has disconnected
        snmpd[19007]: cmaX: subMIB 3 handler has disconnected
        snmpd[19007]: cmaX: subMIB 5 handler has disconnected
        snmpd[19007]: cmaX: subMIB 6 handler has disconnected
        snmpd[19007]: cmaX: subMIB 8 handler has disconnected
        snmpd[19007]: cmaX: subMIB 9 handler has disconnected
        snmpd[19007]: cmaX: sent ColdStarts on ports 25376 to 25393
        snmpd[19007]: cmaX: subMIB 10 handler has disconnected
        snmpd[19007]: cmaX: subMIB 11 handler has disconnected
        snmpd[19007]: cmaX: subMIB 14 handler has disconnected
        snmpd[19007]: cmaX: subMIB 15 handler has disconnected
        snmpd[19007]: cmaX: subMIB 16 handler has disconnected
        snmpd[19007]: cmaX: subMIB 21 handler has disconnected
        snmpd[19007]: cmaX: subMIB 22 handler has disconnected
        snmpd[19007]: cmaX: subMIB 23 handler has disconnected
        snmpd[19007]: cmaX: subMIB 1 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 2 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 3 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 5 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 6 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 8 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 9 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 10 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 11 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 14 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 15 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 16 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 21 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 22 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 23 will be sent on port 25376 to hp Advanced Server Management_Peer
        snmpd[19007]: cmaX: subMIB 18 handler has disconnected
        snmpd[19007]: cmaX: subMIB 18 will be sent on port 25393 to cpqnicd
        snmpd[19007]: NET-SNMP version 5.4.1

    This doesn't look particularly bad to me; I guess it's just informational. I've compared the snmpwalk output with and without the module loaded, and there's no difference at all in the OIDs served back. Are there any other prerequisites I'm missing? I've also noticed that from the time I installed hp-snmp-agents it starts a lot of additional daemons and my load suddenly jumps to 1. I've uninstalled the package for now. Is this expected behavior?
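    To double-check whether cmaX answers at all, the HP (Compaq) enterprise subtree can be walked directly. A minimal sketch, assuming the agent runs locally and uses the public v1 community string from above:

        # Walk the whole HP/Compaq enterprise subtree (1.3.6.1.4.1.232);
        # a working cmaX setup should return rows here
        snmpwalk -v 1 -c public localhost 1.3.6.1.4.1.232

        # Narrow the walk to the temperature table the Munin plugin reads
        # (prefix taken from the OID 1.3.6.1.4.1.232.6.2.6.8.1.4.1 quoted above)
        snmpwalk -v 1 -c public localhost 1.3.6.1.4.1.232.6.2.6.8

    If the first walk returns nothing, the subagents never registered with cmaX, regardless of what the module logged at startup.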

    Read the article

  • Can't send mail from Windows Phone (Postfix server)

    - by Dominic Williams
    Some background: I have a Dovecot/Postfix setup to handle email for a few domains. We have IMAP and SMTP set up on various devices (Macs, iPhones, PCs, etc.) and it works without problems. I've recently bought a Windows Phone and I'm trying to set up the mail account on it. I've got the IMAP part working great, but for some reason it won't send mail.

    mail.log with debug_peer_list: I've put this on Pastebin because it's quite long: http://pastebin.com/KdvMDxTL

    dovecot.log with verbose_ssl:

        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x10, ret=1: before/accept initialization [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: before/accept initialization [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 read client hello A [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write server hello A [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write certificate A [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write server done A [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 flush data [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2002, ret=-1: SSLv3 read client certificate A [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2002, ret=-1: SSLv3 read client certificate A [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 read client key exchange A [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 read finished A [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write change cipher spec A [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write finished A [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 flush data [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x20, ret=1: SSL negotiation finished successfully [109.151.23.129]
        Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2002, ret=1: SSL negotiation finished successfully [109.151.23.129]
        Apr 14 22:43:51 imap-login: Info: Login: user=<pixelfolio>, method=PLAIN, rip=109.151.23.129, lip=94.23.254.175, mpid=24390, TLS
        Apr 14 22:43:53 imap(pixelfolio): Info: Disconnected: Logged out bytes=9/331
        Apr 14 22:43:53 imap-login: Warning: SSL alert: where=0x4008, ret=256: warning close notify [109.151.23.129]

    postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        debug_peer_list = 109.151.23.129
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        message_size_limit = 50240000
        milter_default_action = accept
        milter_protocol = 2
        mydestination = ks383809.kimsufi.com, localhost.kimsufi.com, localhost
        myhostname = ks383809.kimsufi.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        non_smtpd_milters = inet:127.0.0.1:8891,inet:localhost:8892
        readme_directory = no
        recipient_delimiter = +
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_milters = inet:127.0.0.1:8891,inet:localhost:8892
        smtpd_recipient_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_type = dovecot
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        virtual_alias_domains = domz.co.uk ruck.in vjgary.co.uk scriptees.co.uk pixelfolio.co.uk filmtees.co.uk nbsbar.co.uk
        virtual_alias_maps = hash:/etc/postfix/alias_maps

    doveconf -n:

        # 2.0.13: /etc/dovecot/dovecot.conf
        # OS: Linux 2.6.38.2-grsec-xxxx-grs-ipv6-64 x86_64 Ubuntu 11.10
        auth_mechanisms = plain login
        log_path = /var/log/dovecot.log
        mail_location = mbox:~/mail/:INBOX=/var/mail/%u
        passdb {
          driver = pam
        }
        protocols = imap
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            group = postfix
            mode = 0660
            user = postfix
          }
        }
        ssl_cert = </etc/ssl/certs/dovecot.pem
        ssl_key = </etc/ssl/private/dovecot.pem
        userdb {
          driver = passwd
        }
        verbose_ssl = yes

    Any suggestions or help greatly appreciated; I've been pulling my hair out with this for hours!

    EDIT: This seems to be my exact problem, but I already have broken_sasl_auth_clients set to yes and the 'login' auth mechanism added: http://forums.gentoo.org/viewtopic-t-898610-start-0.html
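    For what it's worth, the SMTP AUTH offer can be checked by hand to see what the phone actually gets after STARTTLS. A minimal sketch, where mail.example.com stands in for the real server and the base64 strings literally encode the words "username" and "password":

        # Open a STARTTLS session to the MTA on port 25
        openssl s_client -connect mail.example.com:25 -starttls smtp

        # Then, inside the session, type:
        #   EHLO test.local          (the reply should advertise AUTH PLAIN LOGIN)
        #   AUTH LOGIN
        #   dXNlcm5hbWU=             (base64 of "username", sent at the first prompt)
        #   cGFzc3dvcmQ=             (base64 of "password", sent at the second prompt)

    If AUTH never appears in the EHLO reply over TLS, the problem sits in the Postfix/Dovecot SASL wiring rather than in the phone's settings.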

    Read the article

  • What free space thresholds/limits are advisable for 640 GB and 2 TB hard disk drives with ZEVO ZFS on OS X?

    - by Graham Perrin
    Assuming that free space advice for ZEVO will not differ from advice for other modern implementations of ZFS …

    Question

    Please, what percentages or amounts of free space are advisable for hard disk drives of the following sizes?

    - 640 GB
    - 2 TB

    Thoughts

    A standard answer for modern implementations of ZFS might be "no more than 96 percent full". However, if I apply that to (say) a single-disk 640 GB dataset where some of the most commonly used files (by VirtualBox) are larger than 15 GB each, then I guess that blocks for those files will become suboptimally spread across the platters with around 26 GB free. I read that in most cases, fragmentation and defragmentation should not be a concern with ZFS. Still, I like the mental picture of most fragments of a large .vdi being in reasonably close proximity to each other. (Do features of ZFS make that wish for proximity too old-fashioned?)

    Side note: the question of how to optimise performance after a threshold is 'broken' might arise. If it does, I'll keep it separate.

    Background

    On a 640 GB StoreJet Transcend (product ID 0x2329) I probably went beyond an advisable threshold in the past. Currently the largest file is around 17 GB, and I doubt that any .vdi or other file on this disk will grow beyond 40 GB. (In the screenshot, ignore the purple masses; those are bundles of 8 MB band files.) Without HFS Plus, the thresholds of twenty, ten and five percent that I associate with the Mobile Time Machine file system need not apply. I currently use ZEVO Community Edition 1.1.1 with Mountain Lion, OS X 10.8.2, but I'd like answers to be not too version-specific.

    References, in chronological order:

    - ZFS Block Allocation (Jeff Bonwick's Blog) (2006-11-04)
    - Space Maps (Jeff Bonwick's Blog) (2007-09-13)
    - Doubling Exchange Performance (Bizarre ! Vous avez dit Bizarre ?) (2010-03-11), which explains: "… So to solve this problem, what went in the 2010/Q1 software release is multifold. The most important thing is: we increased the threshold at which we switched from 'first fit' (go fast) to 'best fit' (pack tight) from 70% full to 96% full. With TB drives, each slab is at least 5GB and 4% is still 200MB, plenty of space and no need to do anything radical before that. This gave us the biggest bang. Second, instead of trying to reuse the same primary slabs until it failed an allocation, we decided to stop giving the primary slab this preferential treatment as soon as the biggest allocation that could be satisfied by a slab was down to 128K (metaslab_df_alloc_threshold). At that point we were ready to switch to another slab that had more free space. We also decided to reduce the SMO bonus. Before, a slab that was 50% empty was preferred over slabs that had never been used. In order to foster more write aggregation, we reduced the threshold to 33% empty. This means that a random write workload now spreads to more slabs, where each one will have a larger amount of free space, leading to more write aggregation. Finally, we also saw that slab loading was contributing to lower performance and implemented a slab prefetch mechanism to reduce the downtime associated with that operation. The conjunction of all these changes led to 50% improved OLTP and 70% reduced variability from run to run …"
    - OLTP Improvements in Sun Storage 7000 2010.Q1 (Performance Profiles) (2010-03-11)
    - Alasdair on Everything » ZFS runs really slowly when free disk usage goes above 80% (2010-07-18), where commentary includes: "… OpenSolaris has changed this in onnv revision 11146 …"
    - [CFT] Improved ZFS metaslab code (faster write speed) (2010-08-22)
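    As a quick way to see where a pool sits relative to such a threshold, the capacity column of zpool list can be watched. A minimal sketch, assuming ZEVO's zpool accepts the same column names as upstream ZFS:

        # Show size, allocated space, free space and percent-full per pool
        zpool list -o name,size,alloc,free,cap

        # Headroom left under a 96%-full ceiling on a 640 GB pool:
        # 640 GB x (100 - 96) / 100 = 25.6 GB, matching the ~26 GB figure above
        echo "640 * 0.04" | bc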

    Read the article

  • Real server, Multiple IP Addresses, HyperV Virtual Server, How to partition IPs across real and Virtual NICs

    - by Steven_W
    This is a slightly difficult problem to explain without some basic background information; I'll try to refine the question later as necessary. Originally, I have a single hosted server (Win 2008 R2) with the following range of 8 IP addresses:

    - Single NIC
      - IP: x.x.128.72 -> x.x.128.79
      - Subnet: x.x.255.192
      - GW: x.x.128.65

    After installing Hyper-V and setting up a single virtual server on the same box, I then wanted to assign one of the IP addresses to the virtual server, leaving everything else running normally.

    Firstly, I tried using the "External" network, but (even after setting IPs on the "Virtual Adapter", similar to Here) I struggled to get networking running at all. I needed to keep the server running, otherwise I would have spent more time pursuing this approach.

    Q1 ... Was this a sensible thing to do? Should I have carried on down this route?

    I then decided to try a different approach: set the Hyper-V network to "Internal" (visible to the management OS).

    - Physical NIC
      - IP: x.x.128.72 -> x.x.128.75
      - Subnet: x.x.255.192
      - GW: x.x.128.65
    - Virtual NIC
      - IP: x.x.128.78
      - Subnet: x.x.255.252
      - GW: x.x.128.72 (the same as the IP of the physical NIC)
    - Virtual OS-NIC
      - IP: x.x.128.77
      - Subnet: x.x.255.252
      - GW: x.x.128.78 (the same as the IP of the host's virtual NIC)

    Surprisingly enough, this approach actually worked, and I was able to connect across all of the following:

    - Internet to/from the physical NIC (x.x.128.72)
    - Physical NIC (x.x.128.72) to the virtual-OS-NIC (x.x.128.77), e.g. testing via ping + FTP
    - Internet to/from the virtual-OS-NIC (x.x.128.77)

    The problem I have is that this approach seems to last only a short while (a few hours). After this time, the virtual OS loses the ability to connect to/from the internet (but I can still connect from the host OS to the virtual OS, and from the host OS to the internet). I have re-tested this a couple of times with the same results: I leave the server on for a few hours (e.g. overnight), and when I come back in the morning, the virtual OS can no longer route to the internet.

    I'm not quite sure what to look at next (or whether I'm going about this completely the wrong way). One possibly relevant item is that the host OS is also running RRAS (Routing and Remote Access), but this is only to run a simple VPN.

    Q2 ... What should I be looking at next? (Any good references/recommendations of what to try?) I would appreciate any thoughts or comments (even if you tell me I'm going about this the wrong way).
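    Since the host is effectively routing between the physical and internal subnets, one thing worth ruling out when connectivity dies overnight is IP forwarding being switched off on one of the host interfaces; RRAS also touches routing state, so the two can interact. A sketch in cmd, with placeholder interface names:

        rem List the interfaces to get their exact names on this host
        netsh interface ipv4 show interfaces

        rem Enable forwarding on both the physical NIC and the Hyper-V internal NIC
        rem ("Local Area Connection" and "Local Area Connection 2" are placeholders)
        netsh interface ipv4 set interface "Local Area Connection" forwarding=enabled
        netsh interface ipv4 set interface "Local Area Connection 2" forwarding=enabled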

    Read the article
