Search Results

Search found 39224 results on 1569 pages for 'content delivery network'.


  • Design a large-scale network for an organization

    - by Essam
    Hello. I am new to networking and I want to design a large-scale network for an organization with a HQ and two branches, using a class A address. My questions are: if I use the network address 30.0.0.0 for the whole organization, how can it be different from another organization, company or whatever, which is using the same address in another country? I have three locations for this organization, so I need 5 subnets: one for the HQ, two for branch A and branch B, one for connecting branch A to the HQ and one for connecting branch B to the HQ, since I will use a central DHCP server at the HQ. Is that number of subnets right? Is it advisable to use class A or class B for this organization in terms of the addresses that will be wasted (let's say it is a university with two branches in two different states)? That is all; your help is highly appreciated.
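
    A sketch of one possible addressing plan (my own illustration, not from the post), assuming private RFC 1918 space is used internally instead of 30.0.0.0/8, which is public address space that would have to be assigned to you. Two organizations can reuse the same internal numbers because each hides them behind NAT and only the NAT router's ISP-assigned public address is visible on the Internet:

        10.1.0.0/16      HQ LAN
        10.2.0.0/16      Branch A LAN
        10.3.0.0/16      Branch B LAN
        10.255.0.0/30    WAN link: HQ - Branch A
        10.255.0.4/30    WAN link: HQ - Branch B

    That gives the five subnets mentioned above; a /16 per site wastes far less than a whole class A, and the /30s are sized for point-to-point links.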

    Read the article

  • How can I programmatically find the IP address/netmask/gateway configured for a specific network device

    - by Oren S
    Hi, I would like to write a piece of code which checks, for each network device (e.g. eth0, lo, master devices), some statistics and configuration data about that device. I could find the statistics (and most of the configuration data) in /sys/class/net/...; however, I couldn't find any C/C++ API, or any entry in procfs/sysfs, listing the inet addr, netmask and gateway. Some alternatives I checked: parsing the output from ifconfig/route/some other utilities - I don't want to start a subprocess every time I need to do the inspection; parsing /etc/sysconfig/network-scripts/ - that will give me only the start-up configuration, and not the current state. Also, since this code is intended for a product at my workplace, where every external library is inspected thoroughly (meaning it will take me forever to add any external library), I prefer solutions which rely on the native Linux API and not external libraries. Thanks!

    Read the article

  • Deployment of broadband network

    - by sthustfo
    Hi all, my query is related to broadband network deployment. I have a DSL modem connection provided by my operator. The DSL modem has a built-in NAT and DHCP server, so it allocates IP addresses to any client devices (laptops, PCs, mobiles) that connect to it. However, the DSL modem also gets a public IP address X that is provisioned by the operator. My questions are: Is this IP address X provisioned by the operator directly on the public Internet? Is it likely (as a practical scenario) that my broadband operator will put in one more NAT+DHCP server and provide IP addresses to all the modems within its broadband network? In that case, the IP addresses allotted to the modem devices would not be directly on the public Internet. Thanks in advance.

    Read the article

  • Virtual camera/direct show filter for network stream

    - by Jeje
    Hi guys, I'm working with Flash Live Encoder, which uses a camera as the source for streaming video. The support forum says that I can create a custom DirectShow filter and stream the data that I need. What I can't understand is how a DirectShow filter will show up in the source list of the live encoder. I've tried using a commercial virtual camera and it works fine, but it can't use a network stream as its source. Summary: I have several network streams, and I think I must create a virtual camera for each one. But while I can find examples of DirectShow filters in C#, I can't find any for a virtual camera.

    Read the article

  • How can I set the default value for a custom "Number" field in SharePoint?

    - by UnhipGlint
    I created a custom field for a content type I am creating using the XML below. <field ID="{GUID}" Required="False" DisplayName="Likes" Name="Likes" Type="Number" SourceID="http://schemas.microsoft.com/sharepoint/v3"><default>0</default></field> The field is meant to be used as a counter of sorts, and will be incremented programmatically. But, I can't get the value to default to "0" when a new item is created. However, for some reason, when I create a new column manually using the Site Collection settings page and configure it to default to "0" it works as it should. So far, I've tried the following tactics: I removed the "default" element from the field definition, and set the "DefaultValue" attribute on the content type definition. I exported a definition for the manually-created, working column (using an Imtech STSADM tool). Then, I added it to my field definitions XML and modified the IDs so that I could add it to my content type. When I did this, it still didn't work, even though it was exported from a working column! Any idea why this isn't working for me?

    Read the article

  • AppleScript: how to get the network address of a file

    - by CRP
    We are a network of Mac computers. I would like to send emails to colleagues with links to files in network locations. I made the following AppleScript: tell application "Finder" set uuu to URL of the first item of (get the selection) set the clipboard to uuu end tell which puts the URL of the currently selected file into the clipboard, which can then be pasted into the message (using the Add Link menu item), providing, for example: file://localhost/Volumes/Commerciale/Clienti/ Unfortunately these links do not work. If I select Go To Folder from the menu, I can get to the folder using an afp:// type URL. Is there any way to get this via AppleScript, like I do with the file:// URL above? Thanks

    Read the article

  • Ubuntu problem not connecting to wireless or wired network

    - by ToughPal
    I recently installed OpenVPN and things were working. But I got a weird screen after a few hours, and after a restart my wired and wireless connections are not working. Can someone help? cat /etc/network/interfaces gives: auto lo iface lo inet loopback and cat /etc/resolv.conf gives: # Generated by NetworkManager Is there anything missing? I tried both wired and wireless and both are not working. Usually if I ever have a problem with wireless, the wired always works! My /etc/network/interfaces looks wrong. Can you please send me yours? I am using Ubuntu 9.10 and the internet was working correctly until today! Please help
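
    For reference, the stock desktop setup on Ubuntu 9.10 lists only the loopback interface in /etc/network/interfaces and leaves eth0 and wireless to NetworkManager, so the file shown above is not necessarily wrong by itself. If you do want ifupdown to bring the wired interface up on its own (a generic example, not a copy of any known-good machine), it would look roughly like this:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp

    Note that NetworkManager normally ignores interfaces listed here, so only add eth0 if you want ifupdown rather than NetworkManager to manage it.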

    Read the article

  • How can I enable/disable network connection options programmatically

    - by nikie
    When I open the properties of a network connection on Windows, I see this dialog: In this dialog, in the check-listbox, I can enable or disable options like "File and Printer Sharing", "Client for Microsoft Networks" or network filter drivers. My question is: how can I enable/disable these options programmatically? I didn't find anything that looks like this in the WMI documentation, and I couldn't find any other Win32 API for this. I would prefer a C Win32 API or WMI interface, but a solution using any programming language is welcome. The question is language-agnostic.

    Read the article

  • Reference book on Ruby on Rails social network site for novices

    - by Christopher
    Hey, so I am a novice programmer who has only worked with HTML, CSS, and a touch of Java a couple of years ago. I am looking to get a social network site up and running on Ruby on Rails and was looking at RailsSpace: Building a Social Networking Website with Ruby on Rails (Addison-Wesley Professional Ruby Series) - Paperback (July 30, 2007) by Michael Hartl. This was published in 2007, however, and the newer reviews say it is not very helpful. Does anyone know if there are any newer publications out there that walk you through building a social network site while teaching you the code (rather than just using an out-of-the-box program)? Thanks

    Read the article

  • Run exe on computer in network through batch file

    - by Peter Kottas
    I have a few computers connected to the network and I want to create batch files to automate the process of working with them. I have already created one used to shut down the computers at once. It is very simple; I'll just post it for the sake of argument. @echo off shutdown -s -m \\Slave1-PC shutdown -s -m \\Slave2-PC shutdown -s -m \\Slave3-PC Now I want to execute programs on these machines. So let's say there is an "example.exe" file located at "\\Slave1-PC\d\example.exe"; using call \\Slave1-PC\d\example.exe runs it on my own computer over the network, and I didn't come up with anything else. I don't want to use psexec if possible. Help would be much appreciated. Peter
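
    One built-in alternative to psexec (a sketch I have not tested against these particular machines; it assumes you have admin rights on the slaves and that WMI/DCOM is allowed through their firewalls) is to let WMI start the process remotely, which can be called straight from a batch file:

        wmic /node:Slave1-PC process call create "d:\example.exe"

    Note that a process started this way runs non-interactively on the remote machine, so nothing will appear on that machine's desktop.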

    Read the article

  • Designing a peer to peer network

    - by Varun
    I am designing a simple peer-to-peer network. The basic architecture will be as follows: a central server that keeps track of all the peers; the job of this server is to keep track of all the peers that join the network. Every peer can do two things: a. download a file from a peer; b. push a file (send a file) to a peer. Could anybody please tell me what would be the best design for such a system, what problems I might run into, and so on? I am planning to use Java as the programming language to implement it. Would it be a good choice? Also, is it necessary to have a Linux box to develop the system, or is it fine if I use a Windows machine? Your help will be much appreciated! Thanks!

    Read the article

  • AIO network sockets and zero-copy under Linux

    - by remyhorton
    I have been experimenting with async Linux network sockets (aio_read et al. in aio.h/librt), and one thing I have been trying to find out is whether these are zero-copy or not. Pretty much everything I have read so far discusses file I/O, whereas it's network I/O I am interested in. AIO is a bit of a pain to use and I suspect it is non-portable, so I'm wondering whether it's worth persevering with. Zero-copy is just about the only advantage (albeit a major one for my purposes) it would have over (non-blocking) select/epoll.

    Read the article

  • Transfer of directory structure over network

    - by singh
    I am designing a remote CD/DVD burner to address hardware constraints on my machine. My design works like this (analogous to a network printer): a Unix-based machine (acting as the server) hosts the burner; a Windows-based machine acts as the client; the client prepares the data to burn and transfers it to the server; the server burns the data to CD/DVD. My question is: what is the best protocol to transfer the data over the network (keeping the same directory hierarchy) between the different operating systems?

    Read the article

  • Ping or otherwise tell if a device is on the network by MAC in C#

    - by Aric TenEyck
    I'm developing a home security application. One thing I'd like to do is automatically turn it off and on based on whether or not I'm at home. I have a phone with Wifi that automatically connects to my network when I'm home. The phone connects and gets its address via DHCP. While I could configure it to use a static IP, I'd rather not. Is there any kind of 'Ping' or equivalent in C# / .Net that can take the MAC address of a device and tell me whether or not it's currently active on the network?
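
    There is no built-in ping-by-MAC in .NET, but one workable approach (a sketch only; names like MacFinder are my own, and the subnet prefix and MAC used below are placeholders) is to resolve candidate IPv4 addresses to their MAC with the Win32 SendARP function and compare against the phone's address. SendARP only answers for hosts on the local subnet, which is exactly the "is it at home" case:

        using System;
        using System.Net;
        using System.Runtime.InteropServices;

        static class MacFinder
        {
            // Win32 ARP resolution: fills macAddr for an IPv4 address on the local subnet.
            [DllImport("iphlpapi.dll", ExactSpelling = true)]
            static extern int SendARP(uint destIp, uint srcIp, byte[] macAddr, ref uint macAddrLen);

            // Returns the MAC as "AA-BB-CC-DD-EE-FF", or null if the host did not answer.
            public static string GetMac(IPAddress ip)
            {
                byte[] mac = new byte[6];
                uint len = (uint)mac.Length;
                uint dest = BitConverter.ToUInt32(ip.GetAddressBytes(), 0);
                return SendARP(dest, 0, mac, ref len) == 0
                    ? BitConverter.ToString(mac, 0, (int)len)
                    : null;
            }

            // Walks a /24 and reports whether the given MAC currently answers ARP.
            // A sequential scan is slow when hosts are absent; parallelize or ping-sweep first in practice.
            public static bool IsMacOnline(string subnetPrefix, string targetMac)
            {
                for (int host = 1; host < 255; host++)
                {
                    string mac = GetMac(IPAddress.Parse(subnetPrefix + host));
                    if (string.Equals(mac, targetMac, StringComparison.OrdinalIgnoreCase))
                        return true;
                }
                return false;
            }
        }

    Usage would be something like MacFinder.IsMacOnline("192.168.1.", "00-11-22-33-44-55"); since the phone gets its address from DHCP, scanning the whole subnet avoids having to care which IP it was handed.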

    Read the article

  • Capturing network traffic on Linux

    - by Quandary
    Question: I have one Windows laptop, one Linux laptop and a wireless router. Now I want to "investigate" the Hotmail/Windows Live protocol. What I want to do is route network traffic from the Windows laptop via Ethernet to the Linux laptop, capture it on the Linux computer, forward it wirelessly to the router, receive the Hotmail response from the router on the Linux computer and forward it back to the Windows computer. How do I do that? In essence, how do I put the Linux laptop between the Windows laptop and the router to capture network traffic? Which program is best for capturing/analysing? Please note that, for whatever reason, packet capturing with WinPcap on the Windows computer doesn't work...

    Read the article

  • PHP: Mapped Network Drives

    - by Abs
    Hello all, I have mapped a network drive to a computer in my home network. Now I am trying to access it via PHP - I did this quick test: echo opendir('Z:\\'); This gives me: Warning: opendir(Z:\) [function.opendir]: failed to open dir: No error in C:\wamp\www\webs\tester-function.php on line 3 What have I done wrong here? I don't want my users typing in the UNC path, so is there a way to get the UNC path for them, and maybe that will work when I try to access it? This is possible in Microsoft languages but I am not sure how to get PHP to do this - maybe using a cmd.exe command? Please note, the mapped drive does exist, as I can see it and I can access it. It also does not appear to be a permissions problem, as I am assuming it would have complained about this IF it could access that drive... right? Thanks all for any help

    Read the article

  • How to format dates in Jahia 6 CMS?

    - by dpb
    I am helping a friend of mine put up a site for his business. I’ve read different posts and sites trying to find the ideal CMS tool, but people have different views of what is the best, so I finally just picked one of them at random. So I went for an evaluation of Jahia 6.0-CE. As you’ve probably guessed by now, I don’t have so much experience with CMS tools. I just want to setup the CMS, write the templates for the site and let my friend manage the content from there on. So I extracted the sources from SVN and went for a test drive. I managed to create some simple templates to get a hang of things but now I have an issue with a date format. In my definitions.cnd I declared the field like so: date myDateField (datetimepicker[format='dd.MM.yyyy']) This is formatted in the page and the selector also presents this in the dd.MM.yyyy format when inserting the content. But how about sites in other countries, countries that represent the date as MM.dd.yyyy for example? If I specify the format in the CND, hard coded, how can I change this later on so that it adapts based on the browser’s language? Do I extract the content from the repository and format it by hand in the JSP template based on a Locale, or is there a better way? Thank you.

    Read the article

  • jquery tabs - load url in current tab?

    - by BigDogsBarking
    I'm trying to figure out how to load the URL each tab links to inside the tab area onclick, and have been trying to follow the docs at http://docs.jquery.com/UI/Tabs#...open_links_in_the_current_tab_instead_of_leaving_the_page, but am clearly not getting it.... This is the HTML markup: <div class="tabs"> <ul class="tabNav"> <li><a href="/1.html#tabone">Tab One</a></li> <li><a href="/2.html#tabtwo">Tab Two</a></li> <li><a href="/3.html#tabthree">Tab Three</a></li> </ul> </div> <div id="tabone"> <!-- Trying to load content from 1.html in this div on click --> </div> <div id="tabtwo"> <!-- Trying to load content from 2.html in this div on click --> </div> <div id="tabthree"> <!-- Trying to load content from 3.html in this div on click --> </div> And this is the jQuery I'm trying to use: $(".tabs").tabs({ load: function(event, ui) { $('a', ui.panel).click(function() { $(ui.panel).load(this.href); return false; }); } }); I know I've got some part of this wrong.... I've gone through several iterations (too many to post), and all I get is a blank div... I don't know... Feeling a bit confused here... Help?

    Read the article

  • UITableViewController executes delegate functions before network request finishes

    - by user1543132
    I'm having trouble trying to populate a UITableView with the results of a network request. It seems that my code is alright, as it works perfectly when my network is speedy; however, when it's not, the function - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath still executes, which results in a bad access error. I presume that this is because the array that the aforesaid function attempts to utilize has not been populated. This brings me to my question: is there any way that I can have the UITableView delegate methods delayed to avoid this? - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"AlbumsCell"; //UITableViewCell *basicCell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier forIndexPath:indexPath]; AlbumsCell *cell = (AlbumsCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (!cell) { // Here is where I get the Thread 1: EXC_BAD_ACCESS (code=2 address=0x8) error cell = [[[AlbumsCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } Album *album = [_albums objectAtIndex:[indexPath row]]; [cell setAlbum:album]; return cell; }

    Read the article

  • Network license control for a Java application

    - by user1461615
    I have been tasked with providing some form of network license control for a Java application. The app would be stored on a network drive and run from a client machine. The basic idea is that it will be able to work out how many times it is being run concurrently and prevent the N+1th user from running the software where N is the number of concurrent licenses the customer has purchased. Is this possible somehow with a Java application? I implemented a "solution" which relied on multi-cast UDP communication between the running instances of the application but this didn't work because on most networks this kind of communication is blocked by security measures. Is there a better way? I don't even mind if it requires JNI/JNA. N.B. The solution does not have to be that sophisticated or highly secure.

    Read the article

  • .net File.Copy very slow when copying many small files (not over network)

    - by Guavaman
    I'm making a simple folder-sync backup tool for myself and ran into quite a roadblock using File.Copy. Doing tests copying a folder of ~44,000 small files (Windows mail folders) to another drive in my system, I found that using File.Copy was over 3x slower than using a command line and running xcopy to copy the same files/folders. My C# version takes over 16 minutes to copy the files, whereas xcopy takes only 5 minutes. I've tried searching for help on this topic, but all I find is people complaining about slow file copying of large files over a network. This is neither a large-file problem nor a network-copying problem. I found an interesting article about a better File.Copy replacement, but the code as posted has some errors which cause problems with the stack, and I am nowhere near knowledgeable enough to fix the problems in his code. Are there any common or easy ways to replace File.Copy with something more speedy?
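
    One mitigation worth measuring (a rough sketch, assuming .NET 4's Task Parallel Library is available and that the slowdown is per-file open/close overhead rather than raw disk throughput) is to keep File.Copy but issue the copies in parallel:

        using System.IO;
        using System.Threading.Tasks;

        static class TreeCopier
        {
            // Naive path mapping via Replace is fine for a sketch, but assumes
            // sourceDir does not occur again inside the individual paths.
            public static void CopyTree(string sourceDir, string destDir)
            {
                Directory.CreateDirectory(destDir);
                foreach (string dir in Directory.GetDirectories(sourceDir, "*", SearchOption.AllDirectories))
                    Directory.CreateDirectory(dir.Replace(sourceDir, destDir));

                Parallel.ForEach(
                    Directory.GetFiles(sourceDir, "*", SearchOption.AllDirectories),
                    new ParallelOptions { MaxDegreeOfParallelism = 4 },   // tune for the target drive
                    file => File.Copy(file, file.Replace(sourceDir, destDir), true));
            }
        }

    On a single spinning disk the gain may be modest (the heads still have to seek), so it is worth timing this against xcopy on the same 44,000-file set before committing to it.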

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules session at DevConnections this week one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically the output should be captured from anywhere – not just a page and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close as well as read operations although for a Response.Filter that’s uncommon to be hit. The Response.Filter can be programmatically replaced at runtime which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter hooking up code based, dynamic  GZip compression for requests which is dead simple by applying a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> ///Sets up the current page or handler to use GZip through a Response.Filter ///IMPORTANT:  ///You have to call this method before any output is generated! 
/// </summary> public static void GZipEncodePage() {     HttpResponse Response = HttpContext.Current.Response;     if(IsGZipSupported())     {         stringAcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];         if(AcceptEncoding.Contains("deflate"))         {             Response.Filter = newSystem.IO.Compression.DeflateStream(Response.Filter,                                        System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "deflate");         }         else        {             Response.Filter = newSystem.IO.Compression.GZipStream(Response.Filter,                                       System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "gzip");                            }     }     // Allow proxy servers to cache encoded and unencoded versions separately    Response.AppendHeader("Vary", "Content-Encoding"); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response. Response.Filter content is chunked So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it a more difficult to implement any filtering routines since you don’t directly get access to all of the response content which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required that basically captures all the Write() buffers passed into a cached stream and then making the stream available only when it’s complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I’ve used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations. Creating a semi-generic Response Filter Stream Class What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content. 
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively and can you modify the output then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back to as the final response. The above code can be simplified even more by using the string version events which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream based classed. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML controls not HTML controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do. 
So you can write plain old HTML like this: <a href=”~/default.aspx”>Home</a>  and have it turned into: <a href=”/myVirtual/default.aspx”>Home</a>  without having to use an ASP.NET control like Hyperlink or Image or having to constantly use: <img src=”<%= ResolveUrl(“~/images/home.gif”) %>” /> in MVC applications (which frankly is one of the most annoying things about MVC especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter a hint from Dylan Beattie who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use an example for future demos of Response.Filter functionality in ASP.NET next I time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable however since there’s definitely some overhead to using a Response.Filter in general and especially on one that caches the output and the re-writes it later. Make sure to test for performance anytime you use Response.Filter hookup and make sure it' doesn’t end up killing perf on you. You’ve been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it’s a reusable  component – so for implementation there’s no new class, no subclassing – you simply attach to an event to implement an event handler method with a straight forward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream as is required in order to handle the Response.Filter requirements. What’s different than other implementations I’ve seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exception are the TransformWrite and TransformWrite events which operate only active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the filter’s internal stream. The data is cached and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage. 
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means that that it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of bytep[] output is written during the stream's write /// operation. This means it's possibly/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output. 
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings /// /// Note this handler is internal and not meant to be overridden /// as the TransformString Event has to be hooked up in order /// for this handler to even fire to avoid the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { if (TransformString != null) TransformString(responseText); return responseText; } /// <summary> /// Wrapper method form OnTransformString that handles /// stream to string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream); 
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overriden to capture output written by ASP.NET and captured /// into a cached stream that is written out later when Flush() /// is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if ( IsCaptured ) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, 0, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, offset, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<t> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few Things to consider Performance Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it’s not killing your app’s performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site and – yes you can definitely do that by assigning a Response.Filter inside of a module. If you do this however you’ll want to be very careful and decide which content you actually want to capture especially in IIS 7 which passes ALL content – including static images/CSS etc. through the ASP.NET pipeline. So it is important to filter only on what you’re looking for – like the page extension or maybe more effectively the Response.ContentType. Response.Filter Chaining Originally I thought that filter chaining doesn’t work at all due to a bug in the stream implementation code. But it’s quite possible to assign multiple filters to the Response.Filter property. 
So the following actually works to both compress the output and apply the transformed content: WebUtils.GZipEncodePage(); ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; However the following does not work resulting in invalid content encoding errors: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; WebUtils.GZipEncodePage(); In other words multiple Response filters can work together but it depends entirely on the implementation whether they can be chained or in which order they can be chained. In this case running the GZip/Deflate stream filters apparently relies on the original content length of the output and chokes when the content is modified. But if attaching the compression first it works fine as unintuitive as that may seem. Resources Download example code Capture Output from ASP.NET Pages © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  
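
    As a closing aside (my own sketch, not code from the article): to attach ResponseFilterStream to every request as described in the "works everywhere" section, an IHttpModule can hook BeginRequest, filter on the request type so static files are left alone, and wire up the same FixPaths-style transformation:

        using System;
        using System.Web;

        public class ResponseTransformModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    var application = (HttpApplication)sender;

                    // Only touch markup pages - leave images, CSS and other static content alone.
                    if (!application.Request.Path.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase))
                        return;

                    string path = application.Request.ApplicationPath;
                    if (path == "/")
                        path = "";   // root site: no virtual prefix needed

                    var filter = new ResponseFilterStream(application.Response.Filter);
                    filter.TransformString += output =>
                        output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/");
                    application.Response.Filter = filter;
                };
            }

            public void Dispose() { }
        }

    The module still has to be registered in web.config (under httpModules for classic mode, or system.webServer/modules for IIS 7 integrated mode), and the performance caveat from the article applies unchanged, since every matching response is buffered in memory.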

    Read the article

  • [Python] Voice communication in Python - help!

    - by Eric
    Hello! I'm currently trying to write a voice chat program in Python. All tips/tricks for doing this are welcome. So far I found pyAudio, a wrapper of PortAudio. I played around with that and got an input stream from my microphone to be played back to my speakers - only raw audio, of course. But I can't send raw data over the network (due to the size, duh), so I'm looking for a way to encode it. I searched around the 'net and stumbled over this speex wrapper for Python. It seemed too good to be true and, believe me, it was. You see, in pyAudio you can set the size of the chunks you want to take from your input audio buffer, and in the sample code at that link it's set to 320. Then when a chunk is encoded, it's ~40 bytes of data, which is fairly acceptable I guess. And now for the problem. I start a sample program which just takes the input stream, encodes the chunks, decodes them and plays them (not sending over the network, just testing). If I let my computer idle and run this program it works great, but as soon as I do something, e.g. start Firefox, the audio input buffer gets all clogged up! It just grows, and then it all crashes and gives me an overflow error on the buffer. OK, so why am I taking only 320 bytes of the stream? I could just take 1024 bytes or something, and that would ease the pressure on the buffer. BUT, if I give speex 1024 bytes of data to encode/decode, it either crashes and says that that's too big for its buffer, OR it encodes/decodes it but the sound is very noisy and "choppy", as if it only encoded a tiny bit of that 1024-byte chunk and the rest is static noise. So the sound sounds like a helicopter, lol. I did some research and it seems that speex can only convert 320 bytes of data at a time, and 640 for wide-band. But is that the standard? How can I fix this problem? How should I construct my program to work with speex? I could use a middle buffer that takes all available data from the input buffer, then chunks it up into 320-byte pieces and encodes/decodes them. But this takes a bit longer and seems like a very bad solution to the problem. Because, as far as I know, there's no other encoder for Python that encodes the audio so it can be sent over the network in acceptably small packages, or is there? I've been googling for three days now. Also, there is the pyMedia library; I don't know if it's good for converting to mp3/ogg for this kind of software. Thanks in advance for reading this, I hope someone can help me! (:

    Read the article
