Search Results

Search found 20978 results on 840 pages for 'input stream'.

Page 3/840

  • No USB 04b4:0307 mike input after 10.04

    - by Papou
    I am using a USB phone that is in fact a so-called "sound card" based on the 04b4:0307 chip. In fact, I have two different phones using 04b4:0307, and I have a USB sound key too. This, I believe, is the start of why they call 04b4:0307 "ubiquitous" (instead of oh four bee...). But not "eternal". The mike worked in Ubuntu 9.10 and 10.04 but not later ([email protected]). "Not working" means that 04b4:0307 shows in Sound Preferences but that its vu-meter is mute. I have posted the full lsusb output and the results of tests on various systems here: http://www.papou.byethost9.com/tmp/1043601.html Note: tests were done on VirtualBox (thankfully). But Ubuntu's Unity no longer works on VirtualBox, so I used the Linux Mint equivalent. That's fate. I could not find any problem report close enough. What should be my next step? I believe the problem occurs in the snd-usb-audio module. One thing I might try, if I knew how, is building a DEB from that module's 10.04 source. I can hack DEBs. Any hint is welcome on how to make a DEB that overrides a module packaged with the kernel; I mean that the newly installed module should take precedence and be loaded instead of the module installed with the kernel. TIA!

    Read the article

  • Best ways to collect location-based user input

    - by user359650
    I'm working on a website where users will be able to register and provide information about their location. In order to prevent users from inputting incorrect data, we don't want users to provide free-text information but instead choose from predefined values as much as possible. We believe there are 2 ways of providing those values: use an API to an external service provider or create your own local database. APIs Some resources: - https://developers.facebook.com/docs/reference/ads-api/get-autocomplete-data/ - http://developer.yahoo.com/geo/geoplanet/ Pros: -accuracy and completeness of data. -no maintenance related to updates of data, as this is taken care of by the API provider. -easier/faster to get started (no need to create a local database, just implement the API). Cons: -degradation of performance when there are availability issues with the external API. -outage due to changes to the external API (until your code is updated to reflect those changes). -lock-in with the external provider. Local database Some resources: - http://developer.yahoo.com/geo/geoplanet/data/ - http://www.maxmind.com/app/geolitecity - http://download.geonames.org/export/dump/ Pros: -no external dependency: improved stability and performance. Cons: -more work to get started (you need to create the database and code to interact with it). -risks of inaccurate/incomplete data, either initially or over time. -more maintenance work to keep the database up to date. Assuming the depth of information requested from users is as follows: -country: interested in value. Also used to narrow down the list of regions. -region (state in the US, county in the UK...): not interested in the value itself, only used to narrow down the list of cities. -city: interested in value (which can be used to work out the related region should we need regional statistics). -address: interested in value, although OPTIONAL. Which option (API or local database) would you choose? What tips would you give for the implementation? What other resources can you share?

    Read the article

  • Configuring keyboard input to eliminate unused diacritics

    - by David Cesarino
    I'd like to change the way diacritics work under Xubuntu. My problem My native language is pt-BR and my notebook has an American keyboard. Thus, I use ' and " followed by keys like u and c to achieve things like ú, ç and ü. It all works well. However, in the case of apostrophes and quotes, that creates a problem when I use ' followed by: letters that won't accept the acute accent ( ´ , ACUTE ACCENT -- 0x00B4) at all, like t; and letters that won't accept the acute in pt-BR, like r. In 1, ' with t does absolutely nothing. In 2, ' with r creates ŕ (LATIN SMALL LETTER R WITH ACUTE -- 0x0155), which is used, afaik, for some Eastern European languages like Slovak. It isn't used in Portuguese, just like other acute-accented consonants such as ś, ź and ń (Portuguese does not take diacritics on consonants). Question That said, is there a way to better support Brazilian Portuguese using an American keyboard? It is very common here --- I actually prefer the American keyboard to our own, known as “ABNT”. Desired solution I'd like to deactivate unused diacritics, so case 2 would behave just like case 1. Additionally, if possible, I'd like case 1 to behave like it does under Windows. As an example, typing ' followed by t should write 't (acute followed by t) instead of doing nothing. About 2, in my humble opinion, doing nothing is counterproductive. I realize the behavior is reasonable according to logic ("there isn't t-acute, so please tell the computer to typeset apostrophe --- ', SPACE --- instead of acute"). But from a human, practical point of view, I think it makes more sense (to me, at least). Additional comments I believe this also applies to Spanish, French, Italian and other Western European Latin languages. On the console (Ctrl+Alt+F?), case 1 is not a problem. I don't need to press space as apostrophes are automatically added. However, there, I'm unable to access the cedilla (ç). Two completely different behaviors. If it's just a matter of customizing text config files (possibly creating a custom layout or whatever), I'd be glad to share my efforts. I just need guidance on the "howto" part. Somehow Google only points me to the "enough" people (those who cope with the situation and think that it works "well enough"). And since I have definitely migrated to Linux/Xubuntu after years, I'd like to have it set up just as I like it (and I'm sure others would as well). For example, if there is some kind of scripting or definition to tell the computer to do what I described, so be it.

    Read the article

  • Game Input mouse filtering

    - by aaron
    I'm having a problem with filtering mouse input: the method I'm using right now moves the cursor back to the center of the screen each frame, but I can't do this because it messes with other things. Does anyone know how to implement this with delta mouse movement? Here is the relevant code. void update() { static float oldX = 0; static float oldY = 0; static float walkSpeed = .05f; static float sensitivity = 0.002f;//mouse sensitivity static float smooth = 0.7f;//mouse smoothing (0.0 - 0.99) float w = ScreenResolution.x/2.0f; float h = ScreenResolution.y/2.0f; Vec2f scrc(w,h); Vec2f mpos(getMouseX(),getMouseY()); float x = scrc.x-mpos.x; float y = scrc.y-mpos.y; oldX = (oldX*smooth + x*(1.0-smooth)); oldY = (oldY*smooth + y*(1.0-smooth)); x = oldX * sensitivity; y = oldY * sensitivity; camera->rotate(Vec3f(y,0,0)); transform->setRotation(transform->getRotation()*Quaternionf::fromAxisAngle(0.0f,1.0f,0.0f,-x)); setMousePosition((int)scrc.x,(int)scrc.y);//THIS IS THE PROBLEM LINE HOW CAN I AVOID THIS .... }

    Read the article

  • Input of mouseclick not always registered in XNA Update method

    - by LordrAider
    I have a problem where not all of my mouse click inputs seem to be registered. The update logic checks a 10x10 two-dimensional array; it's the logic for a jewel-matching game. When I switch my jewel I can't click on another jewel for about half a second. I tested it with a click counter variable, and the breakpoint isn't hit when I click a second time right after the jewel switch, only when I do the second click after waiting half a second longer. Could it be that the update logic is so heavy that my click happens while it is still executing and so isn't registered? What am I not seeing here :)? Or doing wrong? It is my first game. My update method looks like this. public void UpdateBoard() { MouseState currentMouseState; currentMouseState = Mouse.GetState(); if (currentMouseState.LeftButton == ButtonState.Pressed && prevMouseState.LeftButton != ButtonState.Pressed) { UpdatingLogic = true; // this.CheckDropJewels(currentMouseState); //this.CheckMatches(3); //this.RemoveMatches(); this.CheckForSwitch(currentMouseState); this.MarkJewel(currentMouseState); UpdatingLogic = false; //reIndexMissingJewels = true; reIndexSwitchedJewels = true; } prevMouseState = currentMouseState; this.ReIndex(); this.UpdateJewels(); }
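    One way to test the "update logic is too heavy" suspicion is simply to time UpdateBoard against the roughly 16.7 ms frame budget that XNA's default fixed 60 fps update allows. This is a hypothetical helper, not code from the question; the class name, method name and 16 ms threshold are assumptions.

```csharp
using System;
using System.Diagnostics;

// Hypothetical timing helper: wraps a call to the board update and reports
// when a single call exceeds the ~16 ms frame budget, which would explain
// clicks being sampled late or missed in the following Update calls.
static class UpdateTimer
{
    private static readonly Stopwatch stopwatch = new Stopwatch();

    public static void Measure(Action updateBoard)
    {
        stopwatch.Restart();
        updateBoard();          // e.g. UpdateTimer.Measure(this.UpdateBoard);
        stopwatch.Stop();

        if (stopwatch.ElapsedMilliseconds > 16)
        {
            Debug.WriteLine("UpdateBoard took " + stopwatch.ElapsedMilliseconds + " ms");
        }
    }
}
```

    If the reported numbers stay well under a frame, the half-second gap is probably not a performance problem but a state one, and the timing at least rules one hypothesis out cheaply.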

    Read the article

  • Problems with input and suspend (separately)

    - by VaultPrisoner
    I'm still very new to Ubuntu and any other Linux OS, so this might be an easy fix for most. I installed Ubuntu 14.04 a few days ago; it's dual-booted with Windows 8.1 and I'm using a Toshiba laptop. Whenever I suspend/close my computer and then open it again, the front LEDs light up telling me that it's running, but the screen does nothing. I can hear the fan spinning, but it's almost like the screen doesn't power on like it should. Another problem I'm having is that ~60% of the time my trackpad and keyboard do not work at all; this starts as soon as Ubuntu boots up. To fix it, I just turn the machine off and back on until it works. My USB mouse works no matter what, though. Both problems are only present in Ubuntu and not in Windows 8.1.

    Read the article

  • Writing String to Stream and reading it back does not work

    - by Binary255
    I want to write a String to a Stream (a MemoryStream in this case) and read the bytes one by one. stringAsStream = new MemoryStream(); UnicodeEncoding uniEncoding = new UnicodeEncoding(); String message = "Message"; stringAsStream.Write(uniEncoding.GetBytes(message), 0, message.Length); Console.WriteLine("This:\t\t" + (char)uniEncoding.GetBytes(message)[0]); Console.WriteLine("Differs from:\t" + (char)stringAsStream.ReadByte()); The (undesired) result I get is: This: M Differs from: ? It looks like it's not being read correctly, as the first char of "Message" is 'M', which works when getting the bytes from the UnicodeEncoding instance but not when reading them back from the stream. What am I doing wrong? The bigger picture: I have an algorithm which will work on the bytes of a Stream, I'd like to be as general as possible and work with any Stream. I'd like to convert an ASCII-String into a MemoryStream, or maybe use another method to be able to work on the String as a Stream. The algorithm in question will work on the bytes of the Stream.
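    For reference, a minimal sketch (not part of the original question) of how the read-back can be made to work: write the byte count returned by GetBytes rather than the character count, and rewind Position before reading, because Write leaves the position at the end of the data.

```csharp
using System;
using System.IO;
using System.Text;

class StreamReadBackSketch
{
    static void Main()
    {
        var uniEncoding = new UnicodeEncoding();
        string message = "Message";
        byte[] messageBytes = uniEncoding.GetBytes(message);

        using (var stringAsStream = new MemoryStream())
        {
            // Write the full byte count (UTF-16 uses 2 bytes per char),
            // not message.Length, which counts characters.
            stringAsStream.Write(messageBytes, 0, messageBytes.Length);

            // Rewind: Write advanced Position to the end of the data,
            // so ReadByte would otherwise return -1 (printed as '?').
            stringAsStream.Position = 0;

            Console.WriteLine("This:\t\t" + (char)messageBytes[0]);
            Console.WriteLine("Same as:\t" + (char)stringAsStream.ReadByte());
        }
    }
}
```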

    Read the article

  • How to create a virtual input in Windows from an audio stream

    - by Brian
    Great to find this forum full of knowledge. I was wondering if anyone knew of an application or other workaround to create a virtual input device in Windows. I have an IP cam app on my Android phone that I would like to use as a Skype webcam. It comes with a port for getting the video feed into Skype, and that works great. However, the only audio available is an OGG stream. Both video and audio work great with media players such as VLC, but not with Skype, since Skype only works with Windows input devices. Is there such software out there, in which I could simply name my audio stream address and pipe that to a virtual input device to allow Skype to find it?

    Read the article

  • Implementing an async "read all currently available data from stream" operation

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a console's standard output Stream. Console output streams are of type FileStream; the implementation can cast to that, if needed. There is also an associated StreamReader already present to leverage. There is only one thing I need to implement in this class to achieve my desired functionality: an async "read all the data available this moment" operation. Reading to the end of the stream is not viable because the stream will not end unless the process closes the console output handle, and it will not do that because it is interactive and expecting input before continuing. I will be using that hypothetical async operation to implement event-based notification, which will be more convenient for my callers. The public interface of the class is this: public class ConsoleAutomator { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents(); { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Remember that the goal here is to read all of the chunk and call event subscribers exactly once for each chunk. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream. 
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we 're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer (in which case we know that there was no more data to be read during the last read operation), all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Never more than one event for each time data is available to be read Is almost agnostic to the buffer size The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this. Update: I definitely did not communicate the scenario well in my initial writeup. I have since revised the writeup quite a bit, but to be extra sure: The question is about how to implement an async "read all the data available this moment" operation. 
My apologies to the people who took the time to read and answer without me making my intent clear enough.
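    As a rough illustration of the timeout idea discussed above, here is a standalone sketch that flushes the accumulator after a short quiet period, using a one-shot Timer rather than the dedicated waiting thread the question describes. The class, the 50 ms window and all member names are assumptions, not code from the question, and it still needs the synchronization the question was hoping to avoid.

```csharp
using System;
using System.Text;
using System.Threading;

// Standalone sketch of a "flush after a quiet period" accumulator.
// All names and the 50 ms period are illustrative assumptions.
class QuietPeriodAccumulator
{
    public event Action<string> ChunkReceived;

    private readonly StringBuilder accumulator = new StringBuilder();
    private readonly object gate = new object();
    private Timer flushTimer;

    // Called from the async read callback with each piece of decoded input.
    // bufferWasFull mirrors the bytesRead == buffer.Length case above.
    public void Append(string input, bool bufferWasFull)
    {
        lock (gate)
        {
            accumulator.Append(input);
            if (flushTimer != null) { flushTimer.Dispose(); flushTimer = null; }

            if (bufferWasFull)
            {
                // The chunk may continue; give the next read a short window
                // to deliver more data before giving up and flushing anyway.
                flushTimer = new Timer(_ => Flush(), null, 50, Timeout.Infinite);
            }
            else
            {
                Flush(); // the buffer was not filled, so the chunk is complete
            }
        }
    }

    private void Flush()
    {
        string chunk;
        lock (gate)
        {
            if (accumulator.Length == 0) return;
            chunk = accumulator.ToString();
            accumulator.Clear();
        }
        var handler = ChunkReceived;
        if (handler != null) handler(chunk);
    }
}
```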

    Read the article

  • Implementing a robust async stream reader

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a Stream in an event-based manner. The stream, in my scenario, is guaranteed to be a FileStream and there is also an associated StreamReader already present to leverage. The public interface of the class is this: public class MyStreamManager { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } Obviously this specific scenario has to do with a console's standard output, but that is a detail and does not play an important role. StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents(); { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Since we are only handing off data from the stream to a consumer, and that consumer may well have inside knowledge about the size and/or format of these chunks, I want to call event subscribers exactly once for each chunk. Otherwise the abstraction breaks down and the subscribers have to buffer the incoming data and reconstruct the chunks themselves using said knowledge. This is much less convenient to the calling code, and detracts from the usefulness of my class. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream (thus preserving the chunks). 
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we 're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer, all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Maintains the "chunkiness" of the data; this allows the calling code to use inside knowledge of the data without doing any extra work Is almost agnostic to the buffer size (it will work correctly with any size buffer irrespective of the data being read) The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.

    Read the article

  • How to make gpg2 flush the stream?

    - by Vi
    I want to get some slowly flowing data saved in encrypted form on a device which can be turned off abruptly. But gpg2 seems not to flush its output frequently, and I get broken files when I try to read such a truncated file. vi@vi-notebook:~$ cat asdkfgmafl asdkfgmafl ggggg ggggg 2342 2342 cat behaves normally. I see the output right after input. vi@vi-notebook:~$ gpg2 -er _Vi --batch ?pE??x...(more binary data here)....???-??.... asdfsadf 22223 sdfsdfasf Still no data... Still no output... ^C gpg: signal Interrupt caught ... exiting vi@vi-notebook:~$ gpg2 -er _Vi --batch > /tmp/qqq skdmfasldf gkvmdfwwerwer zfzdfdsfl ^\ gpg: signal Quit caught ... exiting Quit vi@vi-notebook:~$ gpg2 < /tmp/qqq You need a passphrase to unlock the secret key for user: "Vitaly Shukela (_Vi) " 2048-bit ELG key, ID 78F446CA, created 2008-01-06 (main key ID 1735A052) gpg: [don't know]: 1st length byte missing vi@vi-notebook:~$ # Where is my "skdmfasldf" How can I make gpg2 handle such a case? I want it to emit enough output to reconstruct each incoming chunk of input. (Also, fsyncing after each output could be beneficial as an additional option.) Should I use another tool (I need pubkey encryption)?

    Read the article

  • How to make gpg2 flush the stream?

    - by Vi
    I want to get some slowly flowing data saved in encrypted form on a device which can be turned off abruptly. But gpg2 seems not to flush its output frequently, and I get broken files when I try to read such a truncated file. vi@vi-notebook:~$ cat asdkfgmafl asdkfgmafl ggggg ggggg 2342 2342 cat behaves normally. I see the output right after input. vi@vi-notebook:~$ gpg2 -er _Vi --batch ?pE??x...(more binary data here)....???-??.... asdfsadf 22223 sdfsdfasf Still no data... Still no output... ^C gpg: signal Interrupt caught ... exiting vi@vi-notebook:~$ gpg2 -er _Vi --batch > /tmp/qqq skdmfasldf gkvmdfwwerwer zfzdfdsfl ^\ gpg: signal Quit caught ... exiting Quit vi@vi-notebook:~$ gpg2 < /tmp/qqq You need a passphrase to unlock the secret key for user: "Vitaly Shukela (_Vi) " 2048-bit ELG key, ID 78F446CA, created 2008-01-06 (main key ID 1735A052) gpg: [don't know]: 1st length byte missing vi@vi-notebook:~$ # Where is my "skdmfasldf" How can I make gpg2 handle such a case? I want it to emit enough output to reconstruct each incoming chunk of input. (Also, fsyncing after each output could be beneficial as an additional option.) Should I use another tool (I need pubkey encryption)?

    Read the article

  • User input and automated input separation

    - by tpaksu
    I have a MySQL database and an automation script which modifies the data inside once a day. These columns may also have been changed by a user manually. What is the best approach to make the system update only the automated data, not the manually edited values? I mean yes, flagging the cell which is manually edited is one way to do it, but I want to know if there's another way to accomplish this. Just curious.

    Read the article

  • User input and automated input separation

    - by tpaksu
    I have a MySQL database and an automation script which modifies the data inside once a day. These columns may also have been changed by a user manually. What is the best approach to make the system update only the automated data, not the manually edited values? I mean yes, flagging the cell which is manually edited is one way to do it, but I want to know if there's another way to accomplish this. Just curious. BTW, the question is about cell values, not rows.

    Read the article

  • Stream classes ... design, pattern for creating views over streams

    - by ToxicAvenger
    A question regarding the design of stream classes - I need a pattern to create independent views over a single stream instance (in my case for reading). A view would be a consecutive part of the stream. The problem I have with the stream classes is that the state (the current reading/writing position) is coupled with the underlying data/storage. So if I need to partition a stream into different segments (whether segments overlap or not doesn't matter), I cannot easily create views over the stream; the views would store a start and end position. Reading from a view - which would translate to reading from the underlying stream adjusted based on the start/end positions - would change the state of the underlying stream instance. So what I could do is, to serve a read on a view instance, adjust the Position of the stream and read the chunks I need. But I cannot do that concurrently. Why is it designed in such a way, and what kind of pattern could I implement to create independent views over a single stream instance which would allow reading/writing independently (and concurrently)?
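    A rough sketch of one pattern that can work here, assuming the underlying stream is seekable: each view remembers its own offset and seeks before every read, and a shared lock serializes the Seek+Read pair so views do not trample each other's Position. The class below is illustrative, not a full Stream subclass.

```csharp
using System;
using System.IO;

// A read-only "view" that keeps its own position over a shared, seekable
// stream. The lock serializes the Seek+Read pair so several views can share
// one underlying stream without corrupting each other's reads.
class StreamView
{
    private readonly Stream source;
    private readonly object sourceLock;
    private readonly long start;
    private readonly long end;
    private long position; // position relative to the view's start

    public StreamView(Stream source, object sourceLock, long start, long end)
    {
        if (!source.CanSeek) throw new ArgumentException("source must be seekable");
        this.source = source;
        this.sourceLock = sourceLock;
        this.start = start;
        this.end = end;
    }

    public int Read(byte[] buffer, int offset, int count)
    {
        long remaining = end - (start + position);
        if (remaining <= 0) return 0;
        int toRead = (int)Math.Min(count, remaining);

        lock (sourceLock) // the underlying Position is shared state
        {
            source.Seek(start + position, SeekOrigin.Begin);
            int read = source.Read(buffer, offset, toRead);
            position += read;
            return read;
        }
    }
}
```

    True concurrency is still limited by that lock, because Position is per-instance state on the underlying stream; fully independent concurrent reads need either one stream handle per view (for example, opening a separate FileStream over the same file for each view) or an offset-based read API.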

    Read the article

  • ffmpeg, vlc - Unable to find input stream

    - by zozo
    Good day to all... I have some "little" problems with ffserver and ffmpeg... What I need to do is broadcast live video. So I got the cam, used VLC and its send-stream option, and sent the stream to 192.168.1.9:64555, which is a virtual machine on the same computer, running CentOS. On the virtual machine I run the command ffmpeg -i 192.168.1.9:64555 output.mpg. The response is "unable to find file whatever". Can anyone tell me what I did wrong? Thank you and have a great day. Print-screen with error:

    Read the article

  • How to find other end of unix socket connection?

    - by depesz
    I have a process (dbus-daemon) which has many open connection over UNIX sockets. One of these connections is fd #36: =$ ps uw -p 23284 USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND depesz 23284 0.0 0.0 24680 1772 ? Ss 15:25 0:00 /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session =$ ls -l /proc/23284/fd/36 lrwx------ 1 depesz depesz 64 2011-03-28 15:32 /proc/23284/fd/36 -> socket:[1013410] =$ netstat -nxp | grep 1013410 (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) unix 3 [ ] STREAM CONNECTED 1013410 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD =$ netstat -nxp | grep dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013953 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013825 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013726 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013471 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013410 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1012325 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1012302 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1012289 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1012151 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011957 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011937 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011900 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011775 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011771 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011769 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011766 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011663 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011635 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011627 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011540 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011480 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011349 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011312 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011284 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011250 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011231 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011155 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011061 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011049 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011035 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011013 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1010961 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1010945 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD Based on number connections, I assume that dbus-daemon is actually server. Which is OK. But how can I find which process is connected to it - using the connection that is 36th file handle in dbus-launcher? Tried lsof and even greps on /proc/net/unix but I can't figure out a way to find the client process.

    Read the article

  • Stream data with Node.js

    - by Saif Bechan
    I want to know if it is possible to stream data from the server to the client with Node.js. From what I can gather all over the internet, this should be possible, yet I fail to find a correct example or solution. What I want is a single HTTP POST to Node.js with AJAX, then to leave the connection open and continuously stream data to the client. The client will receive this stream and update the page continuously.

    Read the article

  • Concepts: Channel vs. Stream

    - by hotzen
    Hello, is there a conceptual difference between the terms "Channel" and "Stream"? Do the terms require/determine, for example, the allowed number of concurrent Consumers or Producers? I'm currently developing a Channel/Stream of DataFlowVariables, which may be written by one producer and read by one consumer, as the implementation is destructive/mutable. Would this be a Channel or a Stream? Is there any difference at all? Thanks

    Read the article

  • Does isEmpty method in Stream evaluate the whole Stream?

    - by abhin4v
    In Scala, does calling the isEmpty method on an instance of the Stream class cause the stream to be evaluated completely? My code is like this: import Stream.cons private val odds: Stream[Int] = cons(3, odds.map(_ + 2)) private val primes: Stream[Int] = cons(2, odds filter isPrime) private def isPrime(n: Int): Boolean = n match { case 1 => false case 2 => true case 3 => true case 5 => true case 7 => true case x if n % 3 == 0 => false case x if n % 5 == 0 => false case x if n % 7 == 0 => false case x if (x + 1) % 6 == 0 || (x - 1) % 6 == 0 => true case x => primeDivisors(x) isEmpty } import Math.{sqrt, ceil} private def primeDivisors(n: Int) = primes takeWhile { _ <= ceil(sqrt(n))} filter {n % _ == 0 } So, does the call to isEmpty on the line case x => primeDivisors(x) isEmpty cause all the prime divisors to be evaluated or only the first one?

    Read the article

  • I get an error when trying to write a response stream to a file

    - by MemphisDeveloper
    I am trying to test a REST web service, but when I do a POST and try to retrieve and save the response stream to a file I get an exception saying "Stream was not readable." What am I doing wrong? Public Sub PostAndRead() Dim flReader As FileStream = New FileStream("~\testRequest.xml", FileMode.Open, FileAccess.Read) Dim flWriter As FileStream = New FileStream("~\testResponse.xml", FileMode.Create, FileAccess.Write) Dim address As Uri = New Uri(restAddress) Dim req As HttpWebRequest = DirectCast(WebRequest.Create(address), HttpWebRequest) req.Method = "POST" req.ContentLength = flReader.Length req.AllowWriteStreamBuffering = True Dim reqStream As Stream = req.GetRequestStream() ' Get data from upload file to inData Dim inData(flReader.Length) As Byte flReader.Read(inData, 0, flReader.Length) ' put data into request stream reqStream.Write(inData, 0, flReader.Length) flReader.Close() reqStream.Close() ' Post Response req.GetResponse() ' Save results in a file Copy(req.GetRequestStream(), flWriter) End Sub
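    The last line above copies from req.GetRequestStream(), the write-only request body that has already been closed, which is a likely source of the "Stream was not readable" error; the response body comes from GetResponse().GetResponseStream() instead. For comparison, a C# sketch of that flow, with a placeholder URL and file names:

```csharp
using System.IO;
using System.Net;

class PostAndReadSketch
{
    static void Main()
    {
        // Placeholder endpoint; stands in for restAddress in the question.
        var req = (HttpWebRequest)WebRequest.Create("http://example.com/service");
        req.Method = "POST";

        byte[] requestBody = File.ReadAllBytes("testRequest.xml");
        req.ContentLength = requestBody.Length;
        using (Stream reqStream = req.GetRequestStream())
        {
            reqStream.Write(requestBody, 0, requestBody.Length); // send the request body
        }

        // Read the *response* stream, not the request stream, and copy it to disk.
        using (WebResponse resp = req.GetResponse())
        using (Stream respStream = resp.GetResponseStream())
        using (FileStream outFile = File.Create("testResponse.xml"))
        {
            respStream.CopyTo(outFile); // CopyTo requires .NET 4 or later
        }
    }
}
```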

    Read the article

  • FQL query using stream table doesn't accept app access token

    - by tougher
    I've searched stackoverflow all day long to find an answer without luck, so here we go. I'm trying to fetch data from the stream table like this: FQL: SELECT post_id, message, created_time FROM stream WHERE source_id = 131559313586863 URL: http:// graph.facebook.com/fql?q=SELECT+post_id%2C+message%2C+created_time+FROM+stream+WHERE+source_id+%3D+131559313586863&access_token=10669xxxxx74470|PF-7GSdBx0Nxxxxxkdi1KwSQG-w But I get a 400 Bad Request as the response, with the error message: "An access token is required to request this resource.". I'm fetching an application access token with this URL: https:// graph.facebook.com/oauth/access_token?client_id=FACEBOOK_APP_ID&client_secret=FACEBOOK_APP_SECRET&grant_type=client_credentials Facebook states in this blog post that "You will need to pass a valid app or user access token to access this functionality.". Functionality refers to /feed and /posts (the stream table). Furthermore, this wiki tells the same story about using the stream table: "From June 3 2011 a token is required to query this table. You can use any application or user token to make the query.". Does anyone see my hopefully obvious flaw? Please note: The profile in the FQL query is public. I need this to run userless through a cronjob. No user interaction is possible. The request works if I replace the app access token with my own user token from https:// developers.facebook.com/tools/explorer Sorry for breaking the URLs. I need more reputation to post more than 2 links :-/

    Read the article

  • How to deal with the position in a C# stream

    - by CapsicumDreams
    The (entire) documentation for the Position property on a stream says: When overridden in a derived class, gets or sets the position within the current stream. The Position property does not keep track of the number of bytes from the stream that have been consumed, skipped, or both. That's it. OK, so we're fairly clear on what it doesn't tell us, but I'd really like to know what it actually stands for. What is 'the position' for? Why would we want to alter or read it? If we change it - what happens? As a practical example, I have a stream that periodically gets written to, and I have a thread that attempts to read from it (ideally ASAP). From reading many SO issues, I reset the position field to zero to start my reading. Once this is done: Does this affect where the writer to this stream is going to attempt to put the data? Do I need to keep track of the last write position myself? (i.e. if I set the position to zero to read, does the writer begin to overwrite everything from the first byte?) If so, do I need a semaphore/lock around this 'position' field (subclassing, perhaps?) due to my two threads accessing it? If I don't handle this property, does the writer just overflow the buffer? Perhaps I don't understand the Stream itself - I'm regarding it as a FIFO pipe: shove data in at one end, and suck it out at the other. If it's not like this, then do I have to keep copying the data past my last read (i.e. from position 0x84 on) back to the start of my buffer? I've seriously tried to research all of this for quite some time - but I'm new to .NET. Perhaps Streams have a long, proud (undocumented) history that everyone else implicitly understands. But for a newcomer, it's like reading the manual for your car and finding out: The accelerator pedal affects the volume of fuel and air sent to the fuel injectors. It does not affect the volume of the entertainment system, or the air pressure in any of the tires, if fitted. Technically true, but seriously, what we want to know is that if we mash it to the floor you go faster.
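    To the questions above: Position is simply the byte offset at which the next read or write on that stream instance happens, and a reader and a writer sharing one Stream object share that single cursor. A small demonstration, using MemoryStream as a stand-in for any seekable stream:

```csharp
using System;
using System.IO;
using System.Text;

class PositionDemo
{
    static void Main()
    {
        using (var ms = new MemoryStream())
        {
            byte[] data = Encoding.ASCII.GetBytes("hello");
            ms.Write(data, 0, data.Length);
            Console.WriteLine(ms.Position);         // 5: writing advanced the cursor

            Console.WriteLine(ms.ReadByte());       // -1: the cursor is at the end, nothing to read

            ms.Position = 0;                        // rewind this instance
            Console.WriteLine((char)ms.ReadByte()); // 'h'; the cursor is now at 1

            ms.Position = ms.Length;                // jump to the end before appending,
            ms.WriteByte((byte)'!');                // otherwise the write would overwrite 'e'
        }
    }
}
```

    For the producer/consumer pattern described in the question, that means the reading thread has to remember its own last-read offset and seek there before each read (with locking around the seek+read pair), or the two sides need separate stream objects or a genuinely pipe-like type; a bare FileStream or MemoryStream has no independent read and write cursors.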

    Read the article

  • FMS: Stream.play on server side makes video choppy

    - by Tinelise
    I have an *.flv file on an FMS. When I play it on the client side the video plays just fine, but when I call Stream.play(filename, 0, -1, false) on the server side the video turns out really choppy. In both cases I use NetConnection to connect to an RTMP address and NetStream to play the stream, but in one case I connect to a stream and request the server to play my file on that stream. Apparently that doesn't work with files? It works just fine for live streams. I really don't get why this should differ at all. Any suggestions?

    Read the article
