Search Results

Search found 22521 results on 901 pages for 'output redirect'.


  • How come I can't redirect TCP ports on this wireless router?

    - by George Edison
    I am configuring a router to redirect TCP port 5900 (yes, this is for VNC) to a specific IP address on the network. Here is what I have: From a local computer on the same network, I can telnet to 192.168.1.64 (port 5900) just fine. However, when trying to telnet to the machine (port 5900) using its external IP address, it doesn't work. (The connection times out.) The router is a Gigaset SE567, if that helps.
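
    (A note added for context, not part of the original question: many consumer routers do not support NAT loopback, so testing the external IP from a machine inside the LAN can time out even when the port forward itself works. A quick way to separate the two cases is to test the forwarded port from a host that is genuinely outside the network.)

      # From inside the LAN - checks only that the VNC server is listening:
      telnet 192.168.1.64 5900

      # From a host outside the network (a remote shell, or a phone on mobile data)
      # - checks the actual port forward; replace the placeholder with the real WAN IP:
      telnet your.external.ip.here 5900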

    Read the article

  • How can I direct rsync output / log to the remote server?

    - by Guest
    I am able to output rsync logs on the client machine using --log-file=FILE but I want the output to be sent to the server instead. The client is a W7 machine (cygwin) and the server a Linux NAS. This is the command I use which successfully logs the file on the client. I'm looking to have the file sent to the server instead: rsync -PavOs --delete --log-file=/somepath/rsynclog.txt -e "ssh -i /somepath/keyfile -p 1000" "/somepath/User/" [email protected]:/somepath/User/ Thanks
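
    One possible approach, sketched here under the assumption that both the Cygwin client and the NAS run rsync 3.0 or newer (which is what provides --remote-option / -M): pass the log option through to the remote side, so the log file path is interpreted and written on the server rather than on the client.

      rsync -PavOs --delete \
        -M--log-file=/somepath/rsynclog.txt \
        -e "ssh -i /somepath/keyfile -p 1000" \
        "/somepath/User/" [email protected]:/somepath/User/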

    Read the article

  • How do you redirect from one directory to another and maintain the proper basepath?

    - by codeninja
    I need to redirect all /eula traffic to /tos: RewriteRule ^/eula/$ /tos [R=301,NC] but this rule doesn't seem to work -- mostly because the base path is being treated as the root when really there's another parent directory. What's happening with the above rule is /my/docs/eula -> /tos, which is not right; it should be doing /my/docs/eula -> /my/docs/tos. How do I write the rule for this without having to specify what the parent dir is?
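
    One way to write this without hard-coding the parent directory (a sketch, assuming the rule lives in the server or virtual-host config where the full URL path is matched) is to capture everything before /eula and reuse it in the substitution:

      RewriteRule ^(.*)/eula/?$ $1/tos [R=301,NC,L]

    Alternatively, if the rule is placed in an .htaccess file inside /my/docs, the per-directory prefix is stripped before matching and re-applied to a relative substitution, so a relative rule keeps the base path:

      RewriteEngine On
      RewriteRule ^eula/?$ tos [R=301,NC,L]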

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy there are a few caveats that you have to watch out for as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats let’s review GZip implementation for ASP.NET. ASP.NET GZip/Deflate Basics Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream which, if a filter is present, is also written through the filter stream’s interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages .NET includes the GZipStream and DeflateStream stream classes which can be readily assigned to the Response.OutputStream. With these two stream classes in place it’s almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes ‘Page’ in its name this code will work with any ASP.NET output). /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("deflate")) { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } else { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } } As you can see the actual assignment of the Filter is as simple as: Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and ensure that any pages that are cached in Proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding). 
    Using this utility function is now trivially easy: In any ASP.NET code that wants to compress its Response output you simply use: protected void Page_Load(object sender, EventArgs e) { WebUtils.GZipEncodePage(); Entry = WebLogFactory.GetEntry(); var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount, "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries"); if (entries == null) throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage); this.repEntries.DataSource = entries; this.repEntries.DataBind(); } Here I use an ASP.NET page, but the above WebUtils.GZipEncodePage() method call will work in any ASP.NET application type including HTTP Handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation by default output over a certain size is GZip encoded. The output that is generated is JSON or XML and if the output is over 5k in size I apply WebUtils.GZipEncodePage(): if (sbOutput.Length > GZIP_ENCODE_TRESHOLD) WebUtils.GZipEncodePage(); Response.ContentType = ControlResources.STR_JsonContentType; HttpContext.Current.Response.Write(sbOutput.ToString()); Ok, so you probably get the idea: Encoding GZip/Deflate content is pretty easy. Hold on there Hoss – Watch your Caching Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don’t support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic HTTP client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet HTTP clients don’t support GZip out of the box – it has to be explicitly implemented. Other low-level HTTP clients on other platforms don’t support GZip out of the box either. The problem is that your application, your Web Server and Proxy Servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients*, which is also very hard to debug and fix once in production. You already saw the issue of Proxy servers addressed in the GZipEncodePage() function: // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); This ensures that any Proxy servers also check for the Content-Encoding HTTP header to cache their content – not just the URL. The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page the GZipped content will be cached (either by ASP.NET’s cache or in some cases by the IIS Kernel Cache). But what if the next client doesn’t support GZip? She’ll get served a cached GZip page that won’t decode and she’ll get a page full of garbage. Wholly undesirable. 
    To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustom() HttpApplication method in your global.asax file: public override string GetVaryByCustomString(HttpContext context, string custom) { // Override Caching for compression if (custom == "GZIP") { string acceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (string.IsNullOrEmpty(acceptEncoding)) return ""; else if (acceptEncoding.Contains("gzip")) return "GZIP"; else if (acceptEncoding.Contains("deflate")) return "DEFLATE"; return ""; } return base.GetVaryByCustomString(context, custom); } In a page that uses output caching you then specify: <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %> to use that custom rule. It’s all Fun and Games until ASP.NET throws an Error Ok, so you’re up and running with GZip, you have your caching squared away and the pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page that looks like this: Lovely isn’t it? What’s happened here is that I have WebUtils.GZipEncodePage() applied to my page, but there’s an error in the page. The error falls back to the ASP.NET error handler and the error handler removes all existing output (good) and removes all the custom HTTP headers I’ve set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncodePage) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here’s what Fiddler says about the raw HTTP header output when an error occurs while GZip encoding was applied: HTTP/1.1 500 Internal Server Error Cache-Control: private Content-Type: text/html; charset=utf-8 Date: Sat, 30 Apr 2011 22:21:08 GMT Content-Length: 2138 Connection: close ?`I?%&/m?{J?J??t??` … binary output stripped here Notice: no Content-Encoding header, and that’s why we’re seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up and in this case I’ve been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top: protected void Application_Error(object sender, EventArgs e) { // Remove any special filtering especially GZip filtering Response.Filter = null; … } And voila, I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for Page level errors handled in Page_Error or ASP.NET MVC error handling methods in a controller. 
Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client as pointed out by Adam Schroeder in the comments: protected void Application_PreSendRequestHeaders() { // ensure that if GZip/Deflate Encoding is applied that headers are set // also works when error occurs if filters are still active HttpResponse response = HttpContext.Current.Response; if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip") response.AppendHeader("Content-encoding", "gzip"); else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate") response.AppendHeader("Content-encoding", "deflate"); } This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the content accordingly. This is actually a better solution since this is generic – it’ll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared and this code actually handles that. Sweet, thanks Adam. It’s unfortunate that ASP.NET doesn’t natively clear out Response.Filters when an error occurs just as it clears the Response and Headers. I can’t see where leaving a Filter in place in an error situation would make any sense, but hey - this is what it is and it’s easy enough to fix as long as you know where to look. Riiiight! IIS and GZip I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I’ll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by. Related Content Built-in GZip/Deflate Compression in IIS 7.x HttpWebRequest and GZip Responses © Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET   IIS7  

    Read the article

  • How to track subdomains with Google Analytics while having mod_rewrite redirect to a subdomain?

    - by Marek
    When users come directly to domain.com or www.domain.com, I am redirecting them to shop.domain.com via this .htaccess rewrite: RewriteEngine on RewriteCond %{HTTP_HOST} ^www.domain.com$ [OR] RewriteCond %{HTTP_HOST} ^domain.com$ RewriteRule ^(.*)$ http://shop.domain.com/ [R=301,L] The content served by shop.domain.com has the following tracking code parameters: var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-123456-6']); _gaq.push(['_setDomainName', '.domain.com']); _gaq.push(['_trackPageview']); All direct visits that come to shop.domain.com as a result of the rewrite from domain.com are tracked as referral traffic, showing my own domain.com as the referral source in Google Analytics. I would like to track these visits as direct traffic. How can I change the configuration to track mod_rewritten traffic on my subdomain coming from my own domain as direct traffic?
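
    One commonly suggested workaround (a sketch, assuming the classic ga.js async API shown above): tell the tracker to ignore your own bare domain as a referrer, so a visit arriving via the 301 keeps its original (direct) attribution instead of being credited to domain.com.

      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-123456-6']);
      _gaq.push(['_setDomainName', '.domain.com']);
      // treat domain.com as a non-referrer so redirected visits stay "direct"
      _gaq.push(['_addIgnoredRef', 'domain.com']);
      _gaq.push(['_trackPageview']);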

    Read the article

  • Redirection problem for my sites

    - by redirect-p
    I have a site example.com and another one test.example.com. Both have different configuration files. But when I enter the URL test.example.com, it redirects to example.com. The configuration file for example.com is: <VirtualHost *:80> ServerName example.com ServerAlias www.example.com DirectoryIndex index.html DocumentRoot my-document-path Options -Indexes ErrorDocument 404 /errors/404.html ErrorDocument 403 /errors/404.html <Location "/"> SetHandler python-program PythonHandler django.core.handlers.modpython PythonPath "['path', 'path'] + sys.path" SetEnv DJANGO_SETTINGS_MODULE example.settings PythonInterpreter example PythonAutoReload On PythonDebug On </Location> </VirtualHost>
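
    For what it's worth, this is the classic symptom of Apache having no <VirtualHost> whose ServerName or ServerAlias matches test.example.com: the request then falls through to the first-listed vhost, which here serves example.com. A minimal sketch of a second vhost (the DocumentRoot is a placeholder, and NameVirtualHost is only needed on Apache 2.2):

      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName test.example.com
          DocumentRoot /path/to/test.example.com
      </VirtualHost>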

    Read the article

  • Can't shrink Windows Boot NTFS disk: ERROR(5): Could not map attribute 0x80 in inode, Input/output error

    - by arcyqwerty
    Ubuntu 12.04 LTS, all updates current as of 7/3/2012 gksudo gparted Shrink /dev/sda2 from 367GB to 307GB GParted 0.11.0 --enable-libparted-dmraid Libparted 2.3 Shrink /dev/sda2 from 367.00 GiB to 307.00 GiB 00:32:57 ( ERROR ) calibrate /dev/sda2 00:00:00 ( SUCCESS ) path: /dev/sda2 start: 20,484,096 end: 790,142,975 size: 769,658,880 (367.00 GiB) check file system on /dev/sda2 for errors and (if possible) fix them 00:00:53 ( SUCCESS ) ntfsresize -P -i -f -v /dev/sda2 ntfsresize v2012.1.15AR.1 (libntfs-3g) Device name : /dev/sda2 NTFS volume version: 3.1 Cluster size : 4096 bytes Current volume size: 394065338880 bytes (394066 MB) Current device size: 394065346560 bytes (394066 MB) Checking for bad sectors ... Checking filesystem consistency ... Accounting clusters ... Space in use : 327950 MB (83.2%) Collecting resizing constraints ... Estimating smallest shrunken size supported ... File feature Last used at By inode $MFT : 389998 MB 0 Multi-Record : 394061 MB 386464 $MFTMirr : 314823 MB 1 Compressed : 394064 MB 1019521 Sparse : 330887 MB 752454 Ordinary : 393297 MB 706060 You might resize at 327949758464 bytes or 327950 MB (freeing 66116 MB). Please make a test run using both the -n and -s options before real resizing! shrink file system 00:32:04 ( ERROR ) run simulation 00:32:04 ( ERROR ) ntfsresize -P --force --force /dev/sda2 -s 329640837119 --no-action ntfsresize v2012.1.15AR.1 (libntfs-3g) Device name : /dev/sda2 NTFS volume version: 3.1 Cluster size : 4096 bytes Current volume size: 394065338880 bytes (394066 MB) Current device size: 394065346560 bytes (394066 MB) New volume size : 329640829440 bytes (329641 MB) Checking filesystem consistency ... Accounting clusters ... Space in use : 327950 MB (83.2%) Collecting resizing constraints ... Needed relocations : 13300525 (54479 MB) Schedule chkdsk for NTFS consistency check at Windows boot time ... Resetting $LogFile ... (this might take a while) Relocating needed data ... Updating $BadClust file ... Updating $Bitmap file ... ERROR(5): Could not map attribute 0x80 in inode 1667593: Input/output error ======================================== Windows has run chkdsk successfully (on boot) several times now

    Read the article

  • Ubuntu 11.10 - How can I stop the self-feedback loop coming directly from my microphone to the speaker?

    - by YumYumYum
    Audio from my microphone is played back instantly through my speaker. I am using the default PulseAudio and ALSA from the packages. I have tried to set up: 1) PA: /etc/pulse/default.pa, /etc/asound.conf $ ls analog-input-aux.conf analog-input-fm.conf analog-input-mic.conf analog-input-tvtuner.conf analog-output-desktop-speaker.conf analog-output-mono.conf analog-input.conf analog-input-front-mic.conf analog-input-mic.conf.common analog-input-video.conf analog-output-headphones-2.conf analog-output-speaker.conf analog-input.conf.common analog-input-internal-mic.conf analog-input-mic-line.conf analog-output.conf analog-output-headphones.conf iec958-stereo-output.conf analog-input-dock-mic.conf analog-input-linein.conf analog-input-rear-mic.conf analog-output.conf.common analog-output-lfe-on-mono.conf 2) ALSA: checked lsmod to make sure no loopback modules are loaded, etc., but none of this resolves it, and there is very little information available on this. Has anyone found a solution to a similar problem in Ubuntu 11.10? (I resolved this problem in Ubuntu 11.04 by replacing the default PulseAudio with the latest source from git, but the same approach did not work in Ubuntu 11.10.) Any tips please?
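
    A couple of things worth checking (a sketch, not a confirmed fix for 11.10): whether a PulseAudio loopback module is loaded, and whether the analog mic-monitor path is unmuted in the hardware mixer.

      # Is a loopback module active? If so, unload it by its index number:
      pactl list short modules | grep loopback
      pactl unload-module <index-from-previous-command>

      # If the feedback survives that, it is often the analog monitor path in the
      # codec: in alsamixer (F6 to pick the card, F3 for playback controls) mute
      # the "Mic" / "Mic Boost" playback controls with the M key.
      alsamixer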

    Read the article

  • URL Rewrite, ServerVariables, URL Parts, HTTP to HTTPS Redirect. Week 9

    - by OWScott
    Last week I gave an intro to URL Rewrite; covering the basics and giving a real world example.  This week I dive in deeper and cover ServerVariables, the parts that make up the URL and another real world example of redirecting HTTP to HTTPS. This is week 9 of a 52 week series on various web administration related tasks.  Past and future videos can be found here. For reference, in the video I mentioned the following two blog posts: Viewing ServerVariables For a Site Parts of the URL available to URL Rewrite
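
    For reference, the HTTP-to-HTTPS redirect covered in the video is typically expressed as a URL Rewrite rule along these lines (a sketch of the common pattern, not necessarily the exact rule shown in the video):

      <rewrite>
        <rules>
          <rule name="Redirect to HTTPS" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTPS}" pattern="^OFF$" />
            </conditions>
            <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>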

    Read the article

  • Opening a new window from ASP.NET code behind

    - by TATWORTH
    At http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx there is an excellent post on how to open a new windows from code behind. The purists may not like it but it helped solve a problem for a client's client. Here is an update for VS2010 users: using System; using System.Web; using System.Web.UI; /// <summary> /// Response Helper for opening popup windo from code behind. /// </summary> public static class ResponseHelper {   /// <summary>   /// Redirect to popup window   /// </summary>   /// <param name="response">The response.</param>   /// <param name="url">URL to open to</param>   /// <param name="target">Target of window _self or _blank</param>   /// <param name="windowFeatures">Features such as window bar</param>   /// <remarks>   ///     <list type="bullet">   ///         <item>   /// From http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx   /// </item>   /// <item>   /// Note: If you use it outside the context of a Page request, you can't redirect to a new window. The reason is the need to call the ResolveClientUrl method on Page, which I can't do if there is no Page. I could have just built my own version of that method, but it's more involved than you might think to do it right. So if you need to use this from an HttpHandler other than a Page, you are on your own.   /// </item>   ///         <item>   /// Beware of popup blockers.   /// </item>   /// <item>   /// Note: Obviously when you are redirecting to a new window, the current window will still be hanging around. Normally redirects abort the current request -- no further processing occurs. But for these redirects, processing continues, since we still have to serve the response for the current window (which also happens to contain the script to open the new window, so it is important that it completes).   /// </item>   /// <item>   /// Sample call Response.Redirect("popup.aspx", "_blank", "menubar=0,width=100,height=100");   /// </item>   ///     </list>   /// </remarks>   public static void Redirect(this HttpResponse response, string url, string target, string windowFeatures)   {     if ((String.IsNullOrEmpty(target) || target.Equals("_self", StringComparison.OrdinalIgnoreCase)) && String.IsNullOrEmpty(windowFeatures))     {       response.Redirect(url);     }     else     {       Page page = (Page)HttpContext.Current.Handler;       if (page == null)       {         throw new InvalidOperationException("Cannot redirect to new window outside Page context.");       }       url = page.ResolveClientUrl(url);       string script;       if (!String.IsNullOrEmpty(windowFeatures))       {         script = @"window.open(""{0}"", ""{1}"", ""{2}"");";       }       else       {         script = @"window.open(""{0}"", ""{1}"");";       }       script = String.Format(script, url, target, windowFeatures);       ScriptManager.RegisterStartupScript(page, typeof(Page), "Redirect", script, true);     }   } }

    Read the article

  • What packages are necessary to have sound output from Java applets?

    - by MvG
    I've got a very minimalistic setup of ubuntu precise, created using debootstrap. So please don't assume that any packages are installed just because they usually are. On that system, I'd like to play some sounds from a java applet. However, this always fails with the following error message: javax.sound.midi.MidiUnavailableException: Can not open line at com.sun.media.sound.SoftSynthesizer.open(SoftSynthesizer.java:1132) at com.sun.media.sound.SoftSynthesizer.open(SoftSynthesizer.java:1036) ... Caused by: java.lang.IllegalArgumentException: No line matching interface SourceDataLine supporting format PCM_SIGNED 44100.0 Hz, 16 bit, stereo, 4 bytes/frame, little-endian is supported. at javax.sound.sampled.AudioSystem.getLine(AudioSystem.java:476) at javax.sound.sampled.AudioSystem.getSourceDataLine(AudioSystem.java:604) at com.sun.media.sound.SoftSynthesizer.open(SoftSynthesizer.java:1066) ... 35 more As the messages mention a soft synthesizer, and pcm lines, I expect that the lack of some midi daemon is not the issue here. As far as I can tell, the alsa kernel modules are loaded, including snd_hda_intel, snd_pcm, snd_seq_midi among others. I've also included the alsa-base and alsa-utils packages in my installation. alsa-mixer looks good, using “HDA Intel PCH” as its default device. What other packages, configuration settings or daemon startups does java require to make its sound output work?

    Read the article

  • Packets marked by iptables only sometimes use the correct routing table

    - by cookiecaper
    I am trying to route packets generated by a specific user out over a VPN. I have this configuration: $ sudo iptables -S -t nat -P PREROUTING ACCEPT -P OUTPUT ACCEPT -P POSTROUTING ACCEPT -A POSTROUTING -o tun0 -j MASQUERADE $ sudo iptables -S -t mangle -P PREROUTING ACCEPT -P INPUT ACCEPT -P FORWARD ACCEPT -P OUTPUT ACCEPT -P POSTROUTING ACCEPT -A OUTPUT -m owner --uid-owner guy -j MARK --set-xmark 0xb/0xffffffff $ sudo ip rule show 0: from all lookup local 32765: from all fwmark 0xb lookup 11 32766: from all lookup main 32767: from all lookup default $ sudo ip route show table 11 10.8.0.5 dev tun0 proto kernel scope link src 10.8.0.6 10.8.0.6 dev tun0 scope link 10.8.0.1 via 10.8.0.5 dev tun0 0.0.0.0/1 via 10.8.0.5 dev tun0 $ sudo iptables -S -t raw -P PREROUTING ACCEPT -P OUTPUT ACCEPT -A OUTPUT -m owner --uid-owner guy -j TRACE -A OUTPUT -p tcp -m tcp --dport 80 -j TRACE It seems that some sites work fine and use the VPN, but others don't and fall back to the normal interface. This is bad. This is a packet trace that used VPN: Oct 27 00:24:28 agent kernel: [612979.976052] TRACE: raw:OUTPUT:rule:2 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 Oct 27 00:24:28 agent kernel: [612979.976105] TRACE: raw:OUTPUT:policy:3 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 Oct 27 00:24:28 agent kernel: [612979.976164] TRACE: mangle:OUTPUT:rule:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 Oct 27 00:24:28 agent kernel: [612979.976210] TRACE: mangle:OUTPUT:policy:2 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:24:28 agent kernel: [612979.976269] TRACE: nat:OUTPUT:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:24:28 agent kernel: [612979.976320] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:24:28 agent kernel: [612979.976367] TRACE: mangle:POSTROUTING:policy:1 IN= OUT=tun0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:24:28 agent kernel: [612979.976414] TRACE: nat:POSTROUTING:rule:1 IN= OUT=tun0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT 
(020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb and this is one that didn't: Oct 27 00:22:41 agent kernel: [612873.662559] TRACE: raw:OUTPUT:rule:2 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 Oct 27 00:22:41 agent kernel: [612873.662609] TRACE: raw:OUTPUT:policy:3 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 Oct 27 00:22:41 agent kernel: [612873.662664] TRACE: mangle:OUTPUT:rule:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 Oct 27 00:22:41 agent kernel: [612873.662709] TRACE: mangle:OUTPUT:policy:2 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:22:41 agent kernel: [612873.662761] TRACE: nat:OUTPUT:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:22:41 agent kernel: [612873.662808] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:22:41 agent kernel: [612873.662855] TRACE: mangle:POSTROUTING:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 MARK=0xb I have already tried "ip route flush cache", to no avail. I do not know why the first packet goes through the correct routing table, and the second doesn't. Both are marked. Once again, I do not want ALL packets system-wide to go through the VPN, I only want packets from a specific user (UID=999) to go through the VPN. I am testing ipchicken.com and walmart.com via links, from the same user, same shell. walmart.com appears to use the VPN; ipchicken.com does not. Any help appreciated. Will send 0.5 bitcoins to answerer who makes this fixed.

    Read the article

  • Error message when running "make" command: /usr/bin/ld: i386 architecture of input file is incompatible with i386:x86-64 output

    - by user784637
    I am unable to create a working executable file by running the make command in a tree previously built on an i386 machine. I'm getting an error message in the form of me@me-desktop:~$ make /usr/bin/ld: i386 architecture of input file `../.. /Lib/libProgram.a(something.o)' is incompatible with i386:x86-64 output I've been told and reassured that this program has been tested and successfully compiled on 64-bit Fedora. I'm running a 64-bit machine me@me-desktop:~$ uname -m x86_64 I'm running Ubuntu 10.04 me@me-desktop:~$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 10.04.3 LTS Release: 10.04 Codename: lucid I'm using g++ # me@me-desktop:~$ g++ --version g++ (Ubuntu 4.4.3-4ubuntu5) 4.4.3 Copyright (C) 2009 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. I'm also using libtool # me@me-desktop:~$ libtool --version ltmain.sh (GNU libtool) 2.2.6b Written by Gordon Matzigkeit <[email protected]>, 1996 Any clues as to what is going wrong?
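
    A few steps that may help narrow this down (a sketch; the library path is the hypothetical one from the error message): confirm that the static library really contains 32-bit objects, then either rebuild the whole tree on the 64-bit machine so stale i386 objects are not reused, or force a 32-bit build if that is what is actually wanted.

      # Check the architecture of the archive members (stale objects show "elf32-i386"):
      objdump -f ../../Lib/libProgram.a | grep 'file format'

      # Rebuild everything on this x86_64 machine so no old i386 objects are linked:
      make clean && make

      # Or, if the project is meant to be 32-bit, build with -m32 throughout
      # (requires the gcc-multilib / g++-multilib packages):
      make CFLAGS="-m32" CXXFLAGS="-m32" LDFLAGS="-m32"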

    Read the article

  • Duplicate domain names - .net and .com: create separate pages, or redirect?

    - by guisasso
    From an SEO point of view: This website has a good amount of traffic for a local business, but also ships some merchandise. While the .net domain (registered first) is associated with the local business (Google Places, Maps, etc.), the .com domain only redirects to the .net domain. Is it good, bad, or okay to create a separate page for the .com domain, for example one that would be pretty simple but would link to the 10 different categories of products that this company sells? I know links are good, so there's that, but what else is good, or bad? Thanks in advance!

    Read the article

  • ALSA - sound issues on Ubuntu 12.04

    - by tam_ubuuser
    I have a Sony E series laptop with an HDMI port. At this stage, I have tested my sound card, which provides audio out on my laptop, i.e. I could hear songs. My laptop has two sound cards: an AMD 5450 and an Intel HDA (alsamixer shows that as S/PDIF). I decided to connect the HDMI output to my new HD TV, but I could only get video on my TV, no audio output (the HDMI cable works fine with Windows 7). I couldn't switch the output to the other card (I don't know how to do that), so I decided to update ALSA and ran the following in a terminal: sudo apt-add-repository ppa:ubuntu-audio-dev/alsa-daily sudo apt-get update sudo apt-get install alsa-hda-dkms Then, strangely, there was no login sound and no audio output on my laptop at all. I then started the sound troubleshooting procedure from the official Ubuntu site, after which my speaker icon disappeared from the taskbar. Obviously, $ aplay -l reports that no sound cards are detected. So I ran step 4 from that guide, which lists all the audio hardware in my laptop: *-multimedia UNCLAIMED description: Audio device product: Cedar HDMI Audio [Radeon HD 5400/6300 Series] vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 0.1 bus info: pci@0000:01:00.1 version: 00 width: 64 bits clock: 33MHz capabilities: pm pciexpress msi bus_master cap_list configuration: latency=0 resources: memory:f0040000-f0043fff *-multimedia UNCLAIMED description: Audio device product: 5 Series/3400 Series Chipset High Definition Audio vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 05 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:f5e00000-f5e03fff That command displays the names of the two cards, but I still get no positive output from aplay -l, so I think ALSA cannot detect my sound cards. Is there a solution to this problem? It would be even better if ALSA could channel output to multiple sound cards. How should I install and configure ALSA so that it detects the HDMI cable as soon as I connect it to my HD TV? Is it possible for ALSA and PulseAudio 2.0 to co-exist, and if so, how?
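
    A possible way back to a working baseline (a sketch; the package names are the stock Ubuntu 12.04 ones and the HDMI device numbers are only examples): since the symptoms started after installing the daily-build driver, remove that package first, reinstall the stock ALSA bits, reboot, and then test the HDMI PCM directly.

      sudo apt-get purge alsa-hda-dkms
      sudo apt-get install --reinstall linux-sound-base alsa-base alsa-utils

      # After the cards are detected again, list playback devices and test the
      # HDMI output directly (read the real card,device numbers from aplay -l):
      aplay -l
      speaker-test -D plughw:0,3 -c 2 -t wav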

    Read the article

  • How to redirect from HTTPS to HTTP without warning message?

    - by user833985
    I have two web sites: one HTTP site and one HTTPS site. I validate the credentials in the HTTPS environment and return to HTTP once authorized. This works fine in IE, but in Firefox I get the warning given below. Although this page is encrypted, the information you have entered is to be sent over an unencrypted connection and could easily be read by a third party. Are you sure you want to continue sending this information? How can I overcome this warning message? Currently I'm posting from the HTTPS aspx page to the HTTP page using JavaScript.

    Read the article

  • How to handle CNAME host redirect to virtual directory?

    - by esac
    I have an internal website and a virtual directory, http://server2012/logs. I created a CNAME on my DNS server pointing LOGS to server2012. I would like to set it up so that http://LOGS redirects to http://server2012/logs. Ideally, I would still want all pages to appear in the browser as being served off the LOGS URL. So http://LOGS/network.html?site=32 is what is displayed in the browser, but it is really being served from http://server2012/logs/network.html?site=32. I've looked at URL Rewrite, but can't seem to get it to work.
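
    A sketch of the kind of URL Rewrite rule this usually takes (the rule and binding names are assumptions: the IIS site needs a binding that accepts the LOGS host header, and the rule rewrites rather than redirects so the browser keeps the http://LOGS/... address):

      <rewrite>
        <rules>
          <rule name="LOGS host to /logs" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^logs$" />
            </conditions>
            <action type="Rewrite" url="/logs/{R:1}" />
          </rule>
        </rules>
      </rewrite>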

    Read the article

  • Upgrading Blog to WordPress, keep old one or redirect?

    - by Spazm
    I have had a blog for around 4 years now that has gained some success and gets tons of residual and organic traffic. I am using an outdated version of BlogEngine.NET and we are going to switch to WordPress. Currently, our blog is at ourwebsite.com/blog, and I don't really want to mess with our web server. I setup a new server just for WordPress and instead of dealing with proxies and things that are above my head, the new blog will be at blog.oursite.com. I am trying to figure out the best way to go about doing this. We value our search engine rankings very much so, as that is where 90% of our traffic comes from. Would I be best off: Importing all of the old blog posts from oursite.com/blog to blog.oursite.com and then redirecting all of the articles, categories, authors, pages and tags to the new blog and giving up the direct links to our current site. Keeping all of our current articles as they are, and only add new content to the new blog.oursite.com and just kind of let the old blog float around our site.
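
    If the first option is chosen, the old URLs are usually preserved with a blanket 301 from the existing path to the new host; a sketch of what that might look like in the .htaccess on ourwebsite.com (it assumes post slugs map one-to-one between the old and the new blog):

      RewriteEngine On
      RewriteRule ^blog/(.*)$ http://blog.oursite.com/$1 [R=301,L]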

    Read the article

  • How do I forward/redirect a website from a folder in a subdomain to another server?

    - by dozza
    I have a client with a site at: subdomain.theirdomain.com/folder It's a 50GB gallery site that I've now cloned and have locally in MAMP. Once I've made some changes to it I need to host it on alternative physical server/hosting (I currently intend to use a dedicated server I have access to, with a technical domain name). However, the client would ideally like to keep the existing URL as it has been used extensively in marketing. I've done HTTP redirects, forwards and 301 redirects in the past, but I'm not sure how, or even if, I can do what the client wants. How can I achieve this, possibly using .htaccess and DNS entries? Caveats: I can't have the site at a 3rd-party domain, and the client isn't able/allowed to register any additional domains.
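
    Keeping the existing URL while the content lives on another machine generally cannot be done with DNS alone, because DNS maps host names, not paths. One sketch of an approach, assuming the original server runs Apache with mod_proxy / mod_proxy_http enabled and its config can be edited: reverse-proxy the folder to the new box (the target host name below is a placeholder).

      # In the vhost for subdomain.theirdomain.com on the ORIGINAL server:
      ProxyPass        /folder/ http://technical-name.example.net/folder/
      ProxyPassReverse /folder/ http://technical-name.example.net/folder/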

    Read the article

  • PHP mini-server download resume error! Resource id # 4

    - by snikolov
    <?php $httpsock = @socket_create_listen("9090"); if (!$httpsock) { print "Socket creation failed!\n"; exit; } while (1) { $client = socket_accept($httpsock); $input = trim(socket_read ($client, 4096)); $input = explode(" ", $input); $range = $input[12]; $input = $input[1]; $fileinfo = pathinfo($input); switch ($fileinfo['extension']) { default: $mime = "text/html"; } if ($input == "/") { $input = "index.html"; } $input = ".$input"; if (file_exists($input) && is_readable($input)) { echo "Serving $input\n"; $contents = file_get_contents($input); $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents"; } else { //$contents = "The file you requested doesn't exist. Sorry!"; //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents"; if(isset($range)) { list($a, $range) = explode("=",$range); str_replace($range, "-", $range); $size2 = $size-1; $new_length = $size-$range; $output = "HTTP/1.1 206 Partial Content\r\n"; $output .= "Content-Length: $new_length\r\n"; $output .= "Content-Range: bytes $range$size2/$size\r\n"; } else { $size2=$size-1; $output .= "Content-Length: $new_length\r\n"; } $chunksize = 1*(1024*1024); $bytes_send = 0; $file = "a.mp3"; $filesize = filesize($file); if ($file = fopen($file, 'r')) { if(isset($range)) $output = 'HTTP/1.0 200 OK\r\n'; $output .= "Content-type: application/octet-stream\r\n"; $output .= "Content-Length: $filesize\r\n"; $output .= 'Content-Disposition: attachment; filename="'.$file.'"\r\n'; $output .= "Accept-Ranges: bytes\r\n"; $output .= "Cache-Control: private\n\n"; fseek($file, $range); $download_rate = 1000; while(!feof($file) and (connection_status()==0)) { $var_stat = fread($file, round($download_rate *1024)); $output .= $var_stat;//echo($buffer); // is also possible flush(); sleep(1);//// decrease download speed } fclose($file); } /** $filename = "dada"; $file = fopen($filename, 'r'); $filesize = filesize($filename); $buffer = fread($file, $filesize); $send = array("Output"=>$buffer,"filesize"=>$filesize,"filename"=>$filename); $file = $send['filename']; */ //@ob_end_clean(); // $output .= "Content-Transfer-Encoding: binary"; //$output .= "Connection: Keep-Alive\r\n"; } socket_write($client, $output); socket_close ($client); } socket_close ($httpsock); Hey guys, I haved create a miniwebserver downloader. It can download files from your server. However, I am unable to resume my download when I download the file – I get Resource id # 4 – and I also can't resume the download. I would like to know how I can monitor and record the client output and how much bandwidth he has downloaded. Perl has something like this, but it's hardcore; if possible, kindly provide me with some pointers thank you :)

    Read the article

  • How do I redirect stdin/stdout when I have a sequence of commands in Bash?

    - by Tom
    I've currently got a Bash command being executed (via Python's subprocess::Popen) which is reading from stdin, doing something and outputing to stdout. Something along the lines of: pid = subprocess.Popen( ["-c", "cmd1 | cmd2"], stdin = subprocess.PIPE, stdout = subprocess.PIPE, shell =True ) output_data = pid.communicate( "input data\n" ) Now, what I want to do is to change that to execute another command in that same subshell that will alter the state before the next commands execute, so my shell command line will now (conceptually) be: cmd0; cmd1 | cmd2 Is there any way to have the input sent to cmd1 instead of cmd0 in this scenario? I'm assuming the output will include cmd0's output (which will be empty) followed by cmd2's output. cmd0 shouldn't actually read anything from stdin, does that make a difference in this situation? I know this is probably just a dumb way of doing this, I'm trying to patch in cmd0 without altering the other code too significantly. That said, I'm open to suggestions if there's a much cleaner way to approach this.
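
    One way to guarantee the piped data reaches cmd1 (a sketch using the placeholder command names from the question): redirect cmd0's stdin from /dev/null so it cannot consume anything from the pipe, and let cmd1 read the shell's inherited stdin exactly as before.

      import subprocess

      # cmd0 runs first but reads from /dev/null; the data written to the pipe is
      # therefore still available on the shell's stdin when cmd1 starts.
      pid = subprocess.Popen(
          "cmd0 < /dev/null; cmd1 | cmd2",
          stdin=subprocess.PIPE,
          stdout=subprocess.PIPE,
          shell=True,
      )
      output_data = pid.communicate("input data\n")

    The combined stdout will then contain whatever cmd0 prints (nothing, if it writes nothing) followed by cmd2's output, just as with the original pipeline.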

    Read the article
