Search Results

Search found 11364 results on 455 pages for 'port blocking'.

Page 276 of 455

  • Video-codec rater by image comparison algorithm?

    - by Andreas Hornig
    Hi, perhaps someone knows if this is possible. Comparing image quality is almost impossible to describe without subjective influences. When someone rates an image's quality as good, there is at least one person who doesn't think so; human preferences always differ. So I would like to know whether there is a way to "rate" image quality with an algorithm that compares the original image to the produced one on the following points: colour change (difference pixel by pixel), blur rate, and artifacts/macroblocking. The first one would be the easiest, because you could just check the difference in colours and give three values (a +/- for each hex value). For the last two I don't know if it is possible, but the blocking could perhaps be detected by edge-finding. And the king's quest would be to do that for more than just one image, because video consists of several frames. Perhaps you expert programmers could tell me whether such an automated algorithm can be built, to bring some objective measurement device into rating image quality. This could perhaps calm down some of the "h.264 is better than x264 and better than vp8 and blaaah" people :) Andreas (first posted here: http://www.hdtvtotal.com/index.php?name=PNphpBB2&file=viewtopic&p=9705)
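
    The "colour change" part is easy to sketch. Below is a minimal Python example (assuming Pillow and NumPy are installed, that both frames have the same dimensions, and with placeholder file names) that computes the per-channel pixel difference and the PSNR, which together with SSIM is the usual objective metric this kind of comparison builds toward.

        # Minimal sketch: per-pixel colour difference and PSNR between an
        # original frame and an encoded frame.
        import numpy as np
        from PIL import Image

        def frame_difference(original_path, encoded_path):
            a = np.asarray(Image.open(original_path).convert("RGB"), dtype=np.float64)
            b = np.asarray(Image.open(encoded_path).convert("RGB"), dtype=np.float64)
            diff = np.abs(a - b)                       # per-pixel, per-channel difference
            mse = np.mean((a - b) ** 2)                # mean squared error over all pixels
            psnr = float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
            return diff.mean(axis=(0, 1)), psnr        # mean R/G/B deviation, PSNR in dB

        channel_diff, psnr = frame_difference("original.png", "encoded.png")
        print("mean channel difference (R, G, B):", channel_diff)
        print("PSNR: %.2f dB" % psnr)

    Blur and blocking detection would build on the same arrays (e.g. comparing edge maps, as suggested above), and a video rater would average these per-frame scores over all decoded frames.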

    Read the article

  • OpenGL/Carbon/Cocoa Memory Management Autorelease issue

    - by Stephen Furlani
    Hoooboy, I've got another doozy of a memory problem. I'm creating a Carbon (AGL) window in C++, and it's telling me that I'm autoreleasing it without a pool in place. Uh... what? I thought Carbon existed outside of the NSAutoreleasePool. When I call glEnable(GL_TEXTURE_2D) to do some stuff, it gives me an EXC_BAD_ACCESS warning, but if the AGL window is never getting released, shouldn't it still exist? Setting "set objc-non-blocking-mode" at (gdb) doesn't make the problem go away. So I guess my question is: WHAT IS UP WITH CARBON/COCOA/NSAutoreleasePool? And... are there any resources for Objective-C++? Because crap like this keeps happening to me. Thanks, -Stephen

    --- CODE ---

    Test draw function:

        void Channel::frameDraw( const uint32_t frameID )
        {
            eq::Channel::frameDraw( frameID );
            getWindow()->makeCurrent(false);
            glEnable(GL_TEXTURE_2D); // Throws the error here
        }

    Make current (Equalizer API from Eyescale):

        void Window::makeCurrent( const bool useCache ) const
        {
            if( useCache && getPipe()->isCurrent( this ))
                return;
            _osWindow->makeCurrent();
        }

        void AGLWindow::makeCurrent() const
        {
            aglSetCurrentContext( _aglContext );
            AGLWindowIF::makeCurrent();
            if( _aglContext )
            {
                EQ_GL_ERROR( "After aglSetCurrentContext" );
            }
        }

    _aglContext is a valid memory location (i.e. not NULL) when I step through. -S!

    Read the article

  • Managing lots of callback recursion in Nodejs

    - by Maciek
    In Node.js, there are virtually no blocking I/O operations. This means that almost all Node.js I/O code involves many callbacks. This applies to reading and writing to/from databases, files, processes, etc. A typical example is the following:

        var useFile = function(filename, callback) {
            posix.stat(filename).addCallback(function (stats) {
                posix.open(filename, process.O_RDONLY, 0666).addCallback(function (fd) {
                    posix.read(fd, stats.size, 0).addCallback(function (contents) {
                        callback(contents);
                    });
                });
            });
        };
        ...
        useFile("test.data", function(data) {
            // use data..
        });

    I am anticipating writing code that will make many I/O operations, so I expect to be writing many callbacks. I'm quite comfortable with using callbacks, but I'm worried about all the recursion. Am I in danger of running into too much recursion and blowing through a stack somewhere? If I make thousands of individual writes to my key-value store with thousands of callbacks, will my program eventually crash? Am I misunderstanding or underestimating the impact? If not, is there a way to get around this while still using Node.js' callback coding style?

    Read the article

  • Asynchronous crawling F#

    - by jlezard
    When crawling web pages I need to be careful not to make too many requests to the same domain; for example, I want to put 1 s between requests. From what I understand, it is the time between requests that matters. So to speed things up I want to use async workflows in F#, the idea being to make the requests with a 1-second interval but avoid blocking while waiting for the response.

        let getHtmlPrimitiveAsyncTimer (uri : System.Uri) (timer : int) =
            async {
                let req = (WebRequest.Create(uri)) :?> HttpWebRequest
                req.UserAgent <- "Mozilla"
                try
                    Thread.Sleep(timer)
                    let! resp = (req.AsyncGetResponse())
                    Console.WriteLine(uri.AbsoluteUri + " got response")
                    use stream = resp.GetResponseStream()
                    use reader = new StreamReader(stream)
                    let html = reader.ReadToEnd()
                    return html
                with
                | _ as ex -> return "Bad Link"
            }

    Then I do something like:

        let uri1 = System.Uri "http://rue89.com"
        let timer = 1000
        let jobs = [| for i in 1..10 -> getHtmlPrimitiveAsyncTimer uri1 timer |]
        jobs
        |> Array.mapi (fun i job ->
            Console.WriteLine("Starting job " + string i)
            Async.StartAsTask(job).Result)

    Is this alright? I am very unsure about two things: does the Thread.Sleep call actually work for delaying the request, and is using StartAsTask a problem? I am a beginner (as you may have noticed) in F# (and in coding in general, actually), and everything involving threads scares me :) Thanks!!

    Read the article

  • eventmachine and external scripts via backticks

    - by Maciek
    I have a small HTTP server script I've written using EventMachine which needs to call external scripts/commands and does so via backticks (``). When serving requests that don't run backticked code, everything is fine; however, as soon as my EM code executes any backticked external script, it stops serving requests and stops executing in general. I noticed EventMachine seems to be sensitive to sub-processes and/or threads, and appears to have the popen method for this purpose, but EM's source warns that this method doesn't work under Windows. Many of the machines running this script are running Windows, so I can't use popen. Am I out of luck here? Is there a safe way to run an external command from an EventMachine script under Windows? Is there any way I could fire off some commands to be run externally without blocking EM's execution?

    Edit: the culprit that seems to be screwing up EM the most is my usage of the Windows start command, as in "start java myclass". The reason I'm using start is that I want those external scripts to start running and keep running after the EM request is served.

    Read the article

  • EPIPE blocks server

    - by timn
    I have written a single-threaded asynchronous server in C running on Linux: the socket is non-blocking and, for polling, I am using epoll. Benchmarks show that the server performs fine, and according to Valgrind there are no memory leaks or other problems.

    The only problem is that when a write() call is interrupted (because the client closed the connection), the server encounters a SIGPIPE. I trigger the interruption artificially by running the benchmarking utility "siege" with the parameter -b. It does lots of requests in a row which all work perfectly. Now I press CTRL-C and restart "siege". Sometimes I am "lucky" and the server does not manage to send the full response because the client's fd is invalid. As expected, errno is set to EPIPE. I handle this situation, execute close() on the fd and then free the memory related to the connection. Now the problem is that the server blocks and does not answer properly anymore. Here is the strace output:

        accept(3, {sa_family=AF_INET, sin_port=htons(50611), sin_addr=inet_addr("127.0.0.1")}, [16]) = 5
        fcntl64(5, F_GETFD) = 0
        fcntl64(5, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
        epoll_ctl(4, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLET, {u32=158310248, u64=158310248}}) = 0
        epoll_wait(4, {{EPOLLIN, {u32=158310248, u64=158310248}}}, 128, -1) = 1
        read(5, "GET /user/register HTTP/1.1\r\nHos"..., 4096) = 161
        write(5, "HTTP/1.1 200 OK\r\nContent-Type: t"..., 106) = 106
        write(5, "00001000\r\n", 10) = -1 EPIPE (Broken pipe)   <<<<< why did the previous write() work fine but not this one?
        --- SIGPIPE (Broken pipe) @ 0 (0) ---

    As you can see, the client establishes a new connection, which is accepted and added to the epoll queue. epoll_wait() signals that the client sent data (EPOLLIN). The request is parsed and a response is composed. Sending the headers works fine, but when it comes to the body, write() results in an EPIPE. It is not a bug in "siege", because the server then blocks any incoming connection, no matter from which client.
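
    For reference, the usual way to keep a broken client from disturbing an event-loop server is to disable the SIGPIPE default action (signal(SIGPIPE, SIG_IGN) in C, or passing MSG_NOSIGNAL to send()) and to treat EPIPE/ECONNRESET as "forget this connection": unregister the fd from the poller before closing it, so stale bookkeeping keyed by a reused fd number cannot confuse the loop. A minimal sketch of that pattern in Python (the function name and the connections dict are illustrative, not from the post):

        # Tolerate a client that disappears mid-response: ignore SIGPIPE so a
        # write to a dead socket returns EPIPE instead of delivering a signal,
        # then drop the connection cleanly.
        import errno
        import signal

        signal.signal(signal.SIGPIPE, signal.SIG_IGN)   # write errors become EPIPE

        def send_response(epoll, connections, fd, data):
            conn = connections[fd]
            try:
                conn.sendall(data)
            except OSError as exc:
                if exc.errno not in (errno.EPIPE, errno.ECONNRESET):
                    raise
                epoll.unregister(fd)   # unregister *before* closing the fd
                conn.close()
                del connections[fd]    # forget all state tied to this fd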

    Read the article

  • How to find minimum cut-sets for several subgraphs of a graph of degrees 2 to 4

    - by Tore
    I have a problem. I'm trying to make A* searches through a grid-based game like Pac-Man or Sokoban, but I need to find "enclosures". What do I mean by enclosures? Subgraphs with as few cut edges as possible, given a maximum and minimum size for the number of vertices, which act as soft constraints. Alternatively you could say I am looking to find the bridges between subgraphs, but it's generally the same problem.

    Given a game that looks like this, what I want to do is find enclosures so that I can properly find entrances to them and thus get a good heuristic for reaching vertices inside these enclosures. So what I want is to find these colored regions on any given map. The reason for bothering with this, rather than staying content with the performance of a simple Manhattan-distance heuristic, is that an enclosure heuristic can give more optimal results; I would not have to actually run A* to get proper distance calculations, and it would later allow competitive blocking of opponents within these enclosures when playing Sokoban-type games. The enclosure heuristic could also be used in a minimax approach to finding goal vertices more properly. Do you know of a good algorithm for solving this problem, or have any suggestions on things I should explore?
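
    Since the movement graph has degree at most 4, one cheap building block is classic bridge finding: the cut edges are exactly the single-edge corridors between candidate enclosures, and removing them leaves the enclosed regions as connected components. A minimal sketch (Tarjan-style DFS over an adjacency list; recursive for clarity, so very large maps may need an iterative rewrite or a raised recursion limit):

        # Find the bridges (cut edges) of an undirected graph given as an
        # adjacency list {vertex: [neighbours, ...]}.
        def find_bridges(adj):
            disc, low = {}, {}
            bridges = []
            counter = [0]

            def dfs(u, parent):
                disc[u] = low[u] = counter[0]
                counter[0] += 1
                for v in adj[u]:
                    if v not in disc:
                        dfs(v, u)
                        low[u] = min(low[u], low[v])
                        if low[v] > disc[u]:        # no back edge climbs above u
                            bridges.append((u, v))  # (u, v) is a cut edge
                    elif v != parent:
                        low[u] = min(low[u], disc[v])

            for start in adj:
                if start not in disc:
                    dfs(start, None)
            return bridges

        # Two "rooms" (triangles) joined by a single corridor edge 2-3:
        rooms = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
                 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
        print(find_bridges(rooms))   # [(2, 3)]

    This only covers single-edge cuts; enforcing the soft size constraints, or cutting two-edge "doorways", would need something heavier on top, such as a minimum-cut or graph-partitioning pass.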

    Read the article

  • Can't get a connection using MongoTor

    - by Abdelouahab Pp
    I was trying to change my code to make it asynchronous using MongoTor. Here is my simple code:

        class BaseHandler(tornado.web.RequestHandler):
            @property
            def db(self):
                if not hasattr(self, "_db"):
                    _db = Database.connect('localhost:27017', 'essog')
                return _db

            @property
            def fs(self):
                if not hasattr(BaseHandler, "_fs"):
                    _fs = gridfs.GridFS(self.db)
                return _fs

        class LoginHandler(BaseHandler):
            @tornado.web.asynchronous
            @tornado.gen.engine
            def post(self):
                email = self.get_argument("email")
                password = self.get_argument("pass1")
                try:
                    search = yield tornado.gen.Task(self.db.users.find, {"prs.mail": email})
                    ....

    I got this error:

        Traceback (most recent call last):
          File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\web.py", line 1043, in _stack_context_handle_exception
            raise_exc_info((type, value, traceback))
          File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\web.py", line 1162, in wrapper
            return method(self, *args, **kwargs)
          File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\gen.py", line 122, in wrapper
            runner.run()
          File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\gen.py", line 365, in run
            yielded = self.gen.send(next)
          File "G:\Mon projet\essog\handlers.py", line 92, in post
            search = yield tornado.gen.Task(self.db.users.find, {"prs.mail":email})
          File "G:\Mon projet\essog\handlers.py", line 62, in db
            _db = Database.connect('localhost:27017', 'essog')
          File "build\bdist.win-amd64\egg\mongotor\database.py", line 131, in connect
            database.init(addresses, dbname, read_preference, **kwargs)
          File "build\bdist.win-amd64\egg\mongotor\database.py", line 62, in init
            ioloop_is_running = IOLoop.instance().running()
        AttributeError: 'SelectIOLoop' object has no attribute 'running'
        ERROR:tornado.access:500 POST /login (::1) 3.00ms

    And, excuse me for this other question, but how do I make distinct in this case? Here is what worked in blocking mode:

        search = self.db.users.find({"prs.mail": email}).distinct("prs.mail")[0]

    Update: it seems that this error happens when there is no Tornado IOLoop running! It's the same error raised when using only the module from the console:

        test = Database.connect("localhost:27017", "essog")

        AttributeError                            Traceback (most recent call last)
        ----> 1 test = Database.connect("localhost:27017", "essog")

        C:\Python27\lib\site-packages\mongotor-0.0.10-py2.7.egg\mongotor\database.pyc in connect(cls, addresses, dbname, read_preference, **kwargs)
            131
            132         database = Database()
        --> 133         database.init(addresses, dbname, read_preference, **kwargs)
            134
            135         return database

        C:\Python27\lib\site-packages\mongotor-0.0.10-py2.7.egg\mongotor\database.pyc in init(self, addresses, dbname, read_preference, **kwargs)
             60             self._nodes.append(node)
             61
        ---> 62         ioloop_is_running = IOLoop.instance().running()
             63         self._config_nodes(callback=partial(self._on_config_node, ioloop_is_running))
             64

        AttributeError: 'SelectIOLoop' object has no attribute 'running'
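
    For what it's worth, the traceback shows mongotor calling IOLoop.instance().running() inside Database.connect(), so connect() expects to run in a process that is driving (or about to drive) a Tornado IOLoop, and the AttributeError suggests the installed Tornado's SelectIOLoop simply has no running() method, i.e. a Tornado/mongotor version mismatch to resolve first. Below is a minimal sketch of the usual wiring for an async driver, connecting once at application startup instead of lazily per request (handler and database names are reused from above; the port and everything else are illustrative):

        # Sketch: connect the async driver once, right before starting the IOLoop.
        import tornado.ioloop
        import tornado.web
        from mongotor.database import Database

        class LoginHandler(tornado.web.RequestHandler):
            def post(self):
                ...   # use self.application.db, the shared mongotor Database

        def make_app():
            app = tornado.web.Application([(r"/login", LoginHandler)])
            app.db = Database.connect('localhost:27017', 'essog')   # the call from the traceback
            return app

        if __name__ == "__main__":
            make_app().listen(8888)
            tornado.ioloop.IOLoop.instance().start()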

    Read the article

  • Squid handling of concurrent cache misses

    - by Oliver H-H
    We're using a Squid cache to off-load traffic from our web servers, i.e. it's set up as a reverse proxy responding to inbound requests before they hit our web servers. When we get blitzed with concurrent requests for the same resource that's not in the cache, Squid proxies all the requests through to our web ("origin") servers. For us, this behavior isn't ideal: our origin servers get bogged down trying to fulfill N identical requests concurrently. Instead, we'd like the first request to proxy through to the origin server, the rest of the requests to queue at the Squid layer, and then all be fulfilled by Squid when the origin server has responded to that first request.

    Does anyone know how to configure Squid to do this? We've read through the documentation multiple times and thoroughly web-searched the topic, but can't figure out how to do it. We use Akamai too and, interestingly, this is its default behavior. (However, Akamai has so many nodes that we still see lots of concurrent requests in certain traffic-spike scenarios, even with Akamai's super-node feature enabled.) This behavior is clearly configurable for some other caches; e.g. the Ehcache documentation offers the option "Concurrent Cache Misses: A cache miss will cause the filter chain, upstream of the caching filter to be processed. To avoid threads requesting the same key to do useless duplicate work, these threads block behind the first thread." Some folks call this behavior a "blocking cache," since the subsequent concurrent requests block behind the first request until it's fulfilled or timed out. Thx for looking over my noob question! Oliver

    Read the article

  • .NET TCP socket with session

    - by Zé Carlos
    Is there any way of dealing with sessions with sockets in C#? An example of my problem: I have a server with a socket listening on port 5672.

        TcpListener socket = new TcpListener(localAddr, 5672);
        socket.Start();
        Console.Write("Waiting for a connection... ");

        // Perform a blocking call to accept requests.
        TcpClient client = socket.AcceptTcpClient();
        Console.WriteLine("Connected to client!");

    And I have two clients that will each send one byte: client A sends 0x1 and client B sends 0x2. On the server side, I read this data like this:

        Byte[] bytes = new Byte[256];
        String data = null;
        NetworkStream stream = client.GetStream();
        while ((stream.Read(bytes, 0, bytes.Length)) != 0)
        {
            byte[] answer = new ...
            stream.Write(answer, 0, answer.Length);
        }

    Then client A sends 0x11. I need a way to know that this client is the same one that sent 0x1 before.

    Read the article

  • How can I create a WebBrowser control (ActiveX / IWebBrowser2) without a UI?

    - by wangminhere
    I cannot figure out how to use the WebBrowser control without having it create a window in the taskbar. I am using the IWebBrowser2 ActiveX control directly because I need some of the advanced features, like blocking the downloading of Java/ActiveX/images etc., which apparently are not available in the WPF or WinForms WebBrowser wrappers (though those wrappers do have the ability to create the control with no UI). Here is my code for creating the control:

        Type webbrowsertype = Type.GetTypeFromCLSID(Iid_Clsids.CLSID_WebBrowser, true);
        m_WBWebBrowser2 = (IWebBrowser2)System.Activator.CreateInstance(webbrowsertype);
        m_WBWebBrowser2.Visible = false;
        m_WBOleObject = (IOleObject)m_WBWebBrowser2;
        int iret = m_WBOleObject.SetClientSite(this);
        iret = m_WBOleObject.SetHostNames("me", string.Empty);
        tagRECT rect = new tagRECT(0, 0, 0, 0);
        tagMSG nullMsg = new tagMSG();
        m_WBOleInPlaceObject = (IOleInPlaceObject)m_WBWebBrowser2;

        // INPLACEACTIVATE the WB
        iret = m_WBOleObject.DoVerb((int)OLEDOVERB.OLEIVERB_INPLACEACTIVATE, ref nullMsg, this, 0, IntPtr.Zero, ref rect);

        IConnectionPointContainer cpCont = (IConnectionPointContainer)m_WBWebBrowser2;
        Guid guid = typeof(DWebBrowserEvents2).GUID;
        IConnectionPoint m_WBConnectionPoint = null;
        cpCont.FindConnectionPoint(ref guid, out m_WBConnectionPoint);
        m_WBConnectionPoint.Advise(this, out m_dwCookie);

    This code works perfectly, but it shows a window in the taskbar. If I omit the DoVerb(OLEDOVERB.OLEIVERB_INPLACEACTIVATE) call, then navigating to a web page does not work properly: Navigate() will not download everything on the page and it never fires the DocumentComplete event. If I add a DoVerb(OLEIVERB_HIDE), then I get the same behavior as if I had omitted the DoVerb(OLEDOVERB.OLEIVERB_INPLACEACTIVATE) call. This seems like a pretty basic question, but I couldn't find any examples anywhere.

    Read the article

  • Memory allocation in detached NSThread to load an NSDictionary in background?

    - by mobibob
    I am trying to launch a background thread to retrieve XML data from a web service. I developed it synchronously, without threads, so I know that part works. Now I am ready to make it a non-blocking service by spawning a thread to wait for the response and parse it. I create an NSAutoreleasePool inside the thread and release it at the end of the parsing. The code to spawn the thread is as follows.

    Spawn from the main-loop code:

        [NSThread detachNewThreadSelector:@selector(spawnRequestThread:) toTarget:self withObject:url];

    Thread (inside 'self'):

        - (void)spawnRequestThread:(NSURL *)url
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            parser = [[NSXMLParser alloc] initWithContentsOfURL:url];
            [self parseContentsOfResponse];
            [parser release];
            [pool release];
        }

    The parseContentsOfResponse method fills an NSMutableDictionary with the parsed document contents. I would like to avoid moving the data around a lot and allocate it back in the main loop that spawned the thread rather than making a copy. First, is that possible, and if not, can I simply pass in an allocated pointer from the main thread and allocate with the dictionaryWithDictionary method? That just seems so inefficient. Are there preferred designs?

    Read the article

  • twisted deferred/callbacks and asynchronous execution

    - by NetSkay
    Hey guys, quick question about Twisted and Python. I'm trying to figure out how I can make my code more asynchronous using Twisted, and I've come to sort of a dead end. If a function of mine returns a deferred object and I add a list of callbacks, the first callback will be called after the deferred function provides some result through deferred_obj.callback; then, in the chain of callbacks, the first callback does something with the data and calls the second callback, and so on. However, chained callbacks will not be considered asynchronous, because they're chained and the event loop keeps firing each one of them, one after the other, until there are no more, right?

    However, if I have a deferred object and I attach deferred_obj.callback as its callback, as in d.addCallback(deferred_obj.callback), then this would be considered asynchronous, because deferred_obj is waiting for the data, and the method that will pass the data is waiting on data as well; however, once I call d.callback, the 'd' object processes the data and then calls deferred_obj.callback, and since that object is deferred, unlike the case of chained callbacks, it will execute asynchronously... correct? Meaning chained callbacks are NOT asynchronous while chained deferreds are, correct? Thank you.

    PS: assuming all of my code is non-blocking
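
    The distinction is easy to see in a few lines of Twisted. In a minimal sketch like the one below (labels and delays are only for illustration): once a Deferred's result arrives, its whole callback chain runs synchronously in that same reactor turn (unless a callback itself returns another Deferred), and firing a second Deferred from within the chain also runs that Deferred's callbacks right away. The asynchrony comes from when the result arrives (network, callLater, etc.), not from how the callbacks are wired; chaining through a second Deferred mainly lets other code wait on the same eventual value.

        # Two Deferreds: d's chain runs when d is fired; the second Deferred's
        # chain runs once d's chain hands it the value via other.callback.
        from twisted.internet import defer, reactor

        def show(label):
            def _cb(value):
                print(label, value)
                return value                       # pass the value down the chain
            return _cb

        d = defer.Deferred()
        d.addCallback(show("step 1:"))
        d.addCallback(show("step 2:"))             # fires right after step 1
        other = defer.Deferred()
        other.addCallback(show("other deferred:"))
        d.addCallback(other.callback)              # d's result fires 'other'

        reactor.callLater(1, d.callback, "data")   # result arrives later; nothing blocks
        reactor.callLater(2, reactor.stop)
        reactor.run()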

    Read the article

  • Using Pylint with Django

    - by rcreswick
    I would very much like to integrate pylint into the build process for my Python projects, but I have run into one show-stopper: one of the error types that I find extremely useful, "E1101: %s %r has no %r member", constantly reports errors when using common Django fields. For example:

        E1101:125:get_user_tags: Class 'Tag' has no 'objects' member

    which is caused by this code:

        def get_user_tags(username):
            """
            Gets all the tags that username has used.

            Returns a query set.
            """
            return Tag.objects.filter(  ## This line triggers the error.
                tagownership__users__username__exact=username).distinct()

        # Here is the Tag class; models.Model is provided by Django:
        class Tag(models.Model):
            """
            Model for user-defined strings that help categorize Events
            on a per-user basis.
            """
            name = models.CharField(max_length=500, null=False, unique=True)

            def __unicode__(self):
                return self.name

    How can I tune pylint to properly take fields such as objects into account? (I've also looked into the Django source, and I have been unable to find the implementation of objects, so I suspect it is not "just" a class field. On the other hand, I'm fairly new to Python, so I may very well have overlooked something.)

    Edit: The only way I've found to tell pylint not to warn about these is by blocking all errors of that type (E1101), which is not an acceptable solution, since that is (in my opinion) an extremely useful error. If there is another way, without modifying the pylint source, please point me to the specifics :) See here for a summary of the problems I've had with pychecker and pyflakes; they've proven to be far too unstable for general use. (In pychecker's case, the crashes originated in the pychecker code, not the source it was loading/invoking.)
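
    One narrower option than turning E1101 off globally is pylint's inline pragma, which silences the check only on the line (or block) where Django's dynamically added manager trips it; depending on the installed pylint version, the generated-members setting in the rcfile's TYPECHECK section is the other knob worth looking at. A sketch reusing the function above:

        # Silence E1101 only where Tag.objects is known to be added at runtime.
        def get_user_tags(username):
            """Gets all the tags that username has used. Returns a query set."""
            return Tag.objects.filter(  # pylint: disable=E1101
                tagownership__users__username__exact=username).distinct()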

    Read the article

  • How can I return a value from GM_xmlhttprequest?

    - by GeoffreyF67
    I have this code here:

        var infiltrationResult;
        while (thisOption) {
            var trNode = document.createElement('tr');
            var tdNode = document.createElement('td');
            var hrefNode = document.createElement('a');
            infPlanetID = thisOption.getAttribute('value');
            var myURL = "http://www.hyperiums.com/servlet/Planetinf?securitylevel=90&newinfiltr=New+infiltration&planetid=" + PlanetID + "&infplanetid=" + infPlanetID;
            GM_xmlhttpRequest({
                method: 'GET',
                url: myURL,
                headers: {
                    'User-agent': 'Mozilla/4.0 (compatible) Greasemonkey',
                    'Accept': 'application/atom+xml,application/xml,text/xml',
                },
                onload: function(responseDetails) {
                    if (responseDetails.responseText.match(/<b>Invalid order<\/td><\/tr><tr><td><BR><center><font color=#AAAA77 face=verdana,arial size=2>The target planet is blocking all infiltrations[\s\S]<BR><BR>/im)) {
                        // Successful match
                        infiltrationResult = 'Invalid Order';
                    } else {
                        // Match attempt failed
                        infiltrationResult = 'Infiltration Successfully Created';
                    }
                }
            });

    When I add alert(infiltrationResult); right after it is assigned, I correctly see the string. However, after the function has exited, I try the same alert and I get: undefined. Any ideas what I'm doing wrong? I'm guessing it's some simple silly JavaScript syntax I'm missing here :) G-Man

    Read the article

  • Monitor web sites visited using Internet Explorer, Opera, Chrome, Firefox and Safari in Python

    - by Zachary Brown
    I am working on a project for work and seem to have run into a small problem. The project is a program similar to Web Nanny, but branded for my client's company. It will have features such as website blocking by URL or keyword, and web-activity logs. I would also need it to be able to "pause" downloads until an acceptable username and password is entered. I found a script to monitor the URLs visited in Internet Explorer (shown below), but it seems to slow the browser down considerably, and I have not found any support or ideas on how to implement this in other browsers. So, my questions are: 1) How do I monitor other browsers' activity / visited URLs? 2) How do I prevent downloading unless an acceptable username and password is entered?

        from win32com.client import Dispatch, WithEvents
        import time, threading, pythoncom, sys

        stopEvent = threading.Event()

        class EventSink(object):
            def OnNavigateComplete2(self, *args):
                print "complete", args
                stopEvent.set()

        def waitUntilReady(ie):
            if ie.ReadyState != 4:
                while 1:
                    print "waiting"
                    pythoncom.PumpWaitingMessages()
                    stopEvent.wait(.2)
                    if stopEvent.isSet() or ie.ReadyState == 4:
                        stopEvent.clear()
                        break

        time.clock()
        ie = Dispatch('InternetExplorer.Application', EventSink)
        ev = WithEvents(ie, EventSink)
        ie.Visible = 1
        ie.Navigate("http://www.google.com")
        waitUntilReady(ie)
        print "location", ie.LocationName
        ie.Navigate("http://www.aol.com")
        waitUntilReady(ie)
        print "location", ie.LocationName
        print ie.LocationName, time.clock()
        print ie.ReadyState

    Read the article

  • Slow Databinding setup time in C# .NET 4.0

    - by Svisstack
    Hello, I have got a problem. I have a Windows Forms application with a dynamically generated layout, but I have a performance problem. In this form I use data binding from .NET 4.0; once set up, the binding works fine, but the binding setup time for ONE control blocks my application for approx. 0.7 seconds. I have a number of controls, so the total time for setting up the bindings is around 2 minutes. I have tried all the solutions I could think of and I am out of ideas short of writing my own binding class. What is wrong with my code?

        case "Boolean":
        {
            Binding b = new Binding("Checked", __bindingsource, __ep.Name);
            CheckBox cb = new CheckBox();
            /*
             * HERE is the problem
             */
            cb.DataBindings.Add(b);
            /*
             * HERE is the end of the problem
             */
            __flp.Controls.Add(cb);
            __bindingcontrol.AddBinding(b);
            break;
        }

    Without the problem lines everything works fast, but of course without binding ;-( and I want binding turned on at normal speed.

    PS1. I have the layout suspended during generation.
    PS2. I have the same problem when binding TextBoxes and PictureBoxes; CheckBox is only an example. How can I do that?

    Read the article

  • Mono doesn't write settings defaults

    - by Petar Minchev
    Hi guys! Here is my problem. If I use only one Windows Forms project and call just Settings.Default.Save() when running it, Mono creates a user.config file with the default value for each setting. That is fine; so far so good. But now I add a class library project, which is referenced from the Windows Forms project, and I move the settings from the Windows Forms project to the class library. Now I do the same, Settings.Default.Save(), and to my great surprise Mono creates a user.config file with EMPTY values (NOT the default ones) for each setting?! What's the difference between having the settings in the Windows Forms project and in the class library one? And by the way, it is not an operating-system issue; it is a Mono issue, because it fails under both Windows and Linux. If I don't use Mono everything is fine, but I have to port my application to Linux, so I have to use Mono. I am really frustrated; it is blocking a project :( Thanks in advance for any suggestions. Regards, Petar

    Read the article

  • C# TcpClient, getting back the entire response from a telnet device

    - by Dan Bailiff
    I'm writing a configuration tool for a device that can communicate via telnet. The tool sends a command via TcpClient.GetStream().Write(...) and then checks for the device's response via TcpClient.GetStream().ReadByte(). This works fine in unit tests or when I'm stepping through code slowly. If I run the config tool so that it performs multiple commands consecutively, the behavior of the read is inconsistent. By inconsistent I mean sometimes the data is missing, incomplete or partially garbled, so even though the device performed the operation and responded, my code to check for the response was unreliable. I "fixed" this problem by adding a Thread.Sleep to make sure the read method waits long enough for the device to complete its job and send back the whole response (in this case 1.5 seconds was a safe amount). I feel like this is wrong, blocking everything for a fixed time, and I wonder if there is a better way to get the read method to wait only long enough to get the whole response from the device.

        private string Read()
        {
            if (!tcpClient.Connected)
                throw (new Exception("Read() failed: Telnet connection not available."));

            StringBuilder sb = new StringBuilder();
            do
            {
                ParseTelnet(ref sb);
                System.Threading.Thread.Sleep(1500);
            } while (tcpClient.Available > 0);

            return sb.ToString();
        }

        private void ParseTelnet(ref StringBuilder sb)
        {
            while (tcpClient.Available > 0)
            {
                int input = tcpClient.GetStream().ReadByte();
                switch (input)
                {
                    // parse the input
                    // ... do other things in special cases
                    default:
                        sb.Append((char)input);
                        break;
                }
            }
        }

    Read the article

  • How do I launch background jobs w/ paramiko?

    - by sophacles
    Here is my scenario: I am trying to automate some tasks using Paramiko. The tasks need to be started in this order (using the notation (host, task)): (A, 1), (B, 2), (C, 2), (A, 3), (B, 3), essentially starting servers and clients for some testing in the correct order. Further, because the network may get mucked up during the tests, and because I need some of the output from them, I would like to just redirect output to a file. In similar scenarios the common answer is to use 'screen -m -d' or 'nohup'. However, with Paramiko's exec_command, nohup doesn't actually exit. Using "bash -c -l nohup test_cmd &" doesn't work either; exec_command still blocks until the process ends. In the screen case, output redirection doesn't work very well (actually, doesn't work at all, as best I can figure out). So, after all that explanation, my question is: is there an easy, elegant way to detach processes and capture output in such a way as to stop Paramiko's exec_command from blocking?

    Update: The dtach command works nicely for this!
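
    For reference, one pattern that commonly achieves the same effect as dtach is to let the remote shell itself do the detaching: redirect stdout/stderr to a file, detach stdin, and background the job, so exec_command returns as soon as the shell does and the log can be fetched later. A minimal sketch (host, credentials, command and log path are all placeholders, not from the post):

        # Start a remote job detached from the SSH channel and capture its
        # output in a remote log file that can be downloaded afterwards.
        import paramiko

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect("hostA.example.com", username="tester", password="secret")

        # nohup + redirections + '&' let the remote shell exit immediately
        # while test_cmd keeps running and logging on the remote machine.
        cmd = "nohup test_cmd > /tmp/test_cmd.log 2>&1 < /dev/null &"
        stdin, stdout, stderr = client.exec_command(cmd)
        stdout.channel.recv_exit_status()   # returns promptly: the shell has exited

        client.close()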

    Read the article

  • Winsock tcp/ip Socket listening but connection refused, race condition?

    - by Wayne
    Hello folks. This involves two automated unit tests which each start up a TCP/IP server that creates a non-blocking socket, then bind()s and listen()s in a loop on select() for a client that connects and downloads some data. The catch is that they work perfectly when run separately, but when run as a test suite, the second test client fails to connect with WSAECONNREFUSED... UNLESS there is a Thread.Sleep() of several seconds between them??!!! Interestingly, there is retry logic that retries the connect every 1 second after any failure, so the second test loops for a while until it times out after 10 minutes. During that time, netstat -na shows the correct port number in the LISTEN state for the server socket. So if it is in the LISTEN state, why won't it accept the connection? In the code, there are log messages that show that select NEVER even gets a socket ready to read (which, for a listening socket, means ready to accept a connection).

    Obviously the problem must be related to some race condition between finishing one test, which means close() and shutdown() on each end of the socket, and the start-up of the next. This wouldn't be so bad if the retry logic allowed it to connect eventually after a couple of seconds; however, it seems to get "gummed up" and won't even retry. Yet for some strange reason the listening socket SAYS it's in the LISTEN state even though it keeps refusing connections. So that means it's the Windoze O/S which is actually catching the SYN packet and returning a RST packet (which means "connection refused"). The only other time I ever saw this error was when the code had a problem that caused hundreds of sockets to get stuck in the TIME_WAIT state. But that's not the case here: netstat shows only about a dozen sockets, with only 1 or 2 in TIME_WAIT at any given moment. Please help.

    Read the article

  • PHP: Need help with simple XML

    - by Jack
    I am a beginner in PHP. I am trying to parse this XML:

        <relationship>
          <target>
            <following type="boolean">true</following>
            <followed_by type="boolean">true</followed_by>
            <screen_name>xxxx</screen_name>
            <id type="integer">xxxx</id>
          </target>
          <source>
            <notifications_enabled nil="true"/>
            <following type="boolean">true</following>
            <blocking nil="true"/>
            <followed_by type="boolean">true</followed_by>
            <screen_name>xxxx</screen_name>
            <id type="integer">xxxxx</id>
          </source>
        </relationship>

    I need to get the value of the <following type="boolean"> field for the target, and here's my code:

        $xml = simplexml_load_string($response);
        foreach ($xml->children() as $child) {
            if ($child->getName() == 'target') {
                foreach ($child->children() as $child_1)
                    if ($child_1->getName() == 'following') {
                        $is_my_friend = (bool)$child_1;
                        break;
                    }
                break;
            }
        }

    but I am not getting the correct output. I think the type="boolean" part of the field is creating problems. Please help.

    Read the article

  • Prevent Rails link_to_remote multiple submits w Javascript

    - by Chris
    In a Rails project I need to keep a link_to_remote from getting double-clicked. It looks like :before and :after are my only choices; they get prepended/appended to the onclick Ajax call, respectively. But if I try something like :before => "self.stopObserving()", the Ajax is never run. If I try it with :after, the Ajax is run but the link never stops observing. The solutions I've seen rely on creating a variable and blocking the whole form, but there are multiple link_to_remote rows on this page and it is valid to click more than one of them at a time, just not the same one twice. One variable per row declared outside of link_to_remote seems very kludgey...

    Instead of using Prototype I originally tried plain JavaScript for this proof of concept, but it fails too:

        <a href="#" onclick="self.onclick = function(){alert('foo');};">click</a>

    just puts up an alert when clicked (the lambda here does nothing?). This next one is closer to the desired goal and should only alert the first time, but instead it alerts every time:

        <a href="#" onclick="alert('bar'); self.onclick = function(){return false;};">click</a>

    All ideas appreciated!

    Read the article

  • Concurrent WCF calls via shared channel

    - by Kent Boogaart
    I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application-tier services in question are stateless and have concurrency enabled, but they are not being called concurrently. If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I wanted to avoid that cost, since it is functionally unnecessary for my scenario: I have no session state, and nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible. I found this article on MSDN that states: "While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete." Firstly, I'm not sending large messages (just a lot of small ones, since I'm doing load testing), but I am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation: it says they "might not" support writing more than one message, but doesn't explain the scenarios under which they would support concurrent messages. Can anyone shed some light on this?

    Read the article

  • Replace low level web-service reference call transport with custom one

    - by hoodoos
    I'm not sure the title sounds right, actually, so I will give more explanation here, starting from the very beginning :) I'm using C# and .NET for my development. I have an application that makes requests to a SOAP web service, and for each user request it produces 3 to 10 requests to the web service. They should all run asynchronously and finish at about the same time, so I use the Async method of the generated web-service reference and then wait for the result in a callback. But it seems to start a thread (or take one from the pool) for every async call I make, so if I have 10 clients I end up spawning 30 to 100 threads, which sounds terrible even for my 16-core server :)

    So I wanted to replace the low-level transport implementation with my own, which uses non-blocking sockets and can handle at least 50 sockets running in parallel in one thread without much overhead. But I actually don't know where best to put my override. I analyzed the System.Web.Services.Protocols.SoapHttpClientProtocol class and see that it has a GetWebRequest method which I could actually use. If only I could somehow intercept the object it creates, get an HTTP request with all headers and body from there, and then send it with my own sockets... Any ideas on what approach to use? Or maybe there's something built into the framework I can use?

    Read the article
