Search Results

Search found 129402 results on 5177 pages for 'http status code 500'.

  • Do I need to have a proxy server to have HTTP over SSH?

    - by Johnes thomas
    I want to use HTTP over SSH, since most sites are blocked at my university. I have my own server, to which I can connect using SSH. What I'm doing right now is running a Squid proxy on the server on a particular port, then using PuTTY to connect to my server via SSH and create a tunnel from a certain local port (which I enter as the proxy server in Firefox) to the Squid port. So in PuTTY the configuration for the tunnel is: source port: 8080, destination: localhost:3128. I want to know: is there any other way to tunnel the packets than running the Squid proxy on my server? Thanks.
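
    For reference, a minimal OpenSSH sketch of the same setup (user and host names are placeholders, not from the question): the PuTTY tunnel above corresponds to ssh -L 8080:localhost:3128 user@server. Dynamic forwarding, ssh -D 8080 user@server, makes the SSH client itself act as a local SOCKS proxy, which removes the need for Squid entirely, provided Firefox is configured with a SOCKS proxy rather than an HTTP proxy.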

  • What would cause intermittent DomainKey-Status: failures?

    - by wherestheph
    Every week we send out something like a small newsletter. Sometimes the header in the newsletter says "DomainKey-Status: bad (test mode)"; sometimes it says "DomainKey-Status: good (test mode)". All the other headers in the email are the same (apart from the expected time and message-id differences). None of the email server configuration has changed. What would cause this problem?

  • How do you change data from a QR code on a scanner [on hold]

    - by Malcolm Eaton
    I have a problem with the QR codes on the wheelie bins we deliver. The scan used to give us the following: RL0313550. Now, due to some changes at the manufacturing plant, they have had to add more data, as follows: 1234567891,RL031550. We only need the "RL031550" part; can anyone let me know how to fix this? We use an Intermec CN50 device with a 2D imager fitted, and I was hoping to fix this within the device settings.
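
    If the device settings turn out not to support trimming the prefix, the cleanup is easy to do on the receiving side instead. A minimal sketch (C#; the comma-delimited format is taken from the example above, everything else is assumed):

        using System;

        static class ScanCleanup
        {
            // Keep only the part after the last comma:
            // "1234567891,RL031550" -> "RL031550"; input without a comma is returned unchanged.
            public static string ExtractBinCode(string scanned)
            {
                int comma = scanned.LastIndexOf(',');
                return comma >= 0 ? scanned.Substring(comma + 1).Trim() : scanned.Trim();
            }

            static void Main()
            {
                Console.WriteLine(ExtractBinCode("1234567891,RL031550")); // prints RL031550
            }
        }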

  • What alternative is there to Nginx that supports HTTP keep-alive to backends?

    - by felace
    I recently asked a question about how to keep a backend connection persistent using Nginx, but found out it isn't possible anyway: Nginx's proxy speaks HTTP/1.0 and cannot make keep-alive requests yet. (As a result, backend connections are created and destroyed on every request.) Everything works fine right now (since the connection between the client and Nginx is kept alive, the end result is the same), but I don't want to establish a new connection every single time a request is received, even if it's over a Unix domain socket. So, what software (preferably open source and not too tedious to configure) do you recommend for keeping such connections alive?

  • Is it possible to host a website with Apache HTTP through a ZyXEL EQ-660R modem and a Netgear WGT624v3 wireless router?

    - by Vortico
    Essentially, I have a spare desktop computer I'd like to turn into a web server, but my modem and wireless router are very difficult to work with. I installed Apache HTTP Server and successfully hosted a test page, which can be accessed from anywhere on the LAN. However, I'm having trouble setting up the server to be accessed from my external IP address. I was supplied with a ZyXEL EQ-660R DSL modem by my ISP (CenturyLink) and bought a Netgear WGT624v3 wireless router to connect my laptop and the spare desktop to. ZyXEL's website is no help, and I don't think much of the problem lies with the Netgear router. I've played with many settings and have tried to forward port 80 from the modem, but I've had no luck. Could someone direct me toward a solution, or recommend more promising hardware? Or should I admit defeat and explore other hobbies? :)

  • What factors can affect the performance of an HTTP server written in C#? [on hold]

    - by Yousaf
    I am having trouble handling huge databases. I have multiple clients, around 100-300 (the clients are basically servers running e.g. Windows and SQL Server). Each client may have 38 thousand rows/listings of data, and each row has 10-12 fields. I cannot afford to keep a JSON file for each client and handle them on the main server, because of memory issues. What if I had an HTTP server written in C or C# installed on the clients, and each returned 250 rows per response to the main server? How would factors like speed, memory, or other issues affect us? In short: if a server written in C# sends 250 rows per request, what factors can affect the server's performance? For example: speed, processing, the operating system, the implementation of the server's algorithms? How do these factors really affect performance at large scale?
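
    To make the discussion concrete, here is a minimal sketch of the client-side piece described above (C#; the port, route, and page size of 250 are illustrative assumptions, and the database query is stubbed out):

        using System;
        using System.Net;
        using System.Text;

        class PagedRowServer
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/rows/");
                listener.Start();
                while (true)
                {
                    // Blocks until a request arrives, e.g. GET /rows/?page=3
                    HttpListenerContext ctx = listener.GetContext();
                    int page = int.Parse(ctx.Request.QueryString["page"] ?? "0");

                    // Serve one page of 250 rows; a real implementation would run
                    // a paged SELECT against SQL Server instead of this stub.
                    byte[] body = Encoding.UTF8.GetBytes(FetchRowsAsJson(page, 250));
                    ctx.Response.ContentType = "application/json";
                    ctx.Response.ContentLength64 = body.Length;
                    ctx.Response.OutputStream.Write(body, 0, body.Length);
                    ctx.Response.Close();
                }
            }

            static string FetchRowsAsJson(int page, int pageSize)
            {
                return "[]"; // placeholder for the real database query
            }
        }

    With a shape like this, the dominant cost factors tend to be the per-page database query, serialization, and how many of the 100-300 clients the main server polls concurrently, rather than the HTTP layer itself.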

  • How can I redirect HTTP(S) traffic to another gateway?

    - by PsyStyle
    I have a network like 192.168.0.0/15, with the default gateway set to 192.168.0.1. All the workstations on the network use this gateway for all access to the Internet. Now I am testing a new Internet connection with another provider, and for that I'm using a second gateway on the same subnet with the IP address 192.168.0.2. I want to redirect only HTTP and HTTPS traffic to this second gateway, keeping the default gateway address set inside every workstation untouched. How can I accomplish this? What do I have to change in the first gateway's firewall configuration or routes? I tried a DNAT like: DNAT loc:192.168.0.1 loc:192.168.0.2 tcp 80 but nothing worked. I use Shorewall for simplicity of configuration, but I can follow theoretical answers too, and will try to adapt them to my case.
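
    A hedged note on the approach: DNAT rewrites destination addresses, not the next hop, which may be why the rule above has no visible effect; steering traffic by port is normally a job for policy routing. On a plain Linux gateway the idea looks roughly like this (iproute2/iptables commands; Shorewall has its own syntax for the same mechanism): mark the traffic with iptables -t mangle -A PREROUTING -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1, then send marked packets through the second gateway with ip rule add fwmark 1 table 100 and ip route add default via 192.168.0.2 table 100.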

  • Downloading a file over HTTP the SSIS way

    This post shows you how to download files from a web site while really making the most of the SSIS objects that are available. There is no task to do this, so we have to use the Script Task and some simple VB.NET, or C# if you have SQL Server 2008. Very often I see suggestions about how to use the .NET class System.Net.WebClient, and of course this works; you can code pretty much anything you like in .NET. Here I'd just like to raise the profile of an alternative. This approach uses the HTTP Connection Manager, one of the stock connection managers, so you can use configurations and property expressions in the same way you would for all other connections. Settings like the security details that you would want to make configurable already are, but if you take the .NET route you have to write quite a lot of code to manage those values via package variables. Using the connection manager, we get all of that flexibility for free; the screenshot below illustrates some of the options we have. Using the HttpClientConnection class makes for much simpler code as well. I have demonstrated two methods: DownloadFile, which just downloads a file to disk, and DownloadData, which downloads the file and retains it in memory. In each case we show a message box to note the completion of the download. You can download a sample package below, but first the code:

        Imports System
        Imports System.IO
        Imports System.Text
        Imports System.Windows.Forms
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain

            Public Sub Main()

                ' Get the unmanaged connection object, from the connection manager called "HTTP Connection Manager"
                Dim nativeObject As Object = Dts.Connections("HTTP Connection Manager").AcquireConnection(Nothing)

                ' Create a new HTTP client connection
                Dim connection As New HttpClientConnection(nativeObject)

                ' Download the file #1
                ' Save the file from the connection manager to the local path specified
                Dim filename As String = "C:\Temp\Sample.txt"
                connection.DownloadFile(filename, True)

                ' Confirm file is there
                If File.Exists(filename) Then
                    MessageBox.Show(String.Format("File {0} has been downloaded.", filename))
                End If

                ' Download the file #2
                ' Read the text file straight into memory
                Dim buffer As Byte() = connection.DownloadData()
                Dim data As String = Encoding.ASCII.GetString(buffer)

                ' Display the file contents
                MessageBox.Show(data)

                Dts.TaskResult = Dts.Results.Success

            End Sub

        End Class

    Sample Package: HTTPDownload.dtsx (74KB)
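
    For SQL Server 2008 users, a hedged C# sketch of the same Script Task body (the connection manager name and the HttpClientConnection calls mirror the VB.NET version above; the surrounding ScriptMain boilerplate is omitted):

        // Get the unmanaged connection object and wrap it in an HttpClientConnection
        object nativeObject = Dts.Connections["HTTP Connection Manager"].AcquireConnection(null);
        HttpClientConnection connection = new HttpClientConnection(nativeObject);

        // Download straight to disk, overwriting any existing file
        string filename = @"C:\Temp\Sample.txt";
        connection.DownloadFile(filename, true);

        // Or keep the payload in memory instead
        byte[] buffer = connection.DownloadData();
        string data = System.Text.Encoding.ASCII.GetString(buffer);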

  • Why does apt-get fail to resolve the mirror?

    - by Jake Kubisiak
    I know this has been covered before, but I can't seem to resolve my issue. Here is my output:

        jake@KUBIE-SERVER:~$ sudo apt-get update
        Err http://us.archive.ubuntu.com precise InRelease
        Err http://us.archive.ubuntu.com precise-updates InRelease
        Err http://us.archive.ubuntu.com precise-backports InRelease
        Err http://security.ubuntu.com precise-security InRelease
        Err http://archive.canonical.com precise InRelease
        Err http://ppa.launchpad.net precise InRelease
        Err http://archive.canonical.com precise Release.gpg
          Temporary failure resolving 'archive.canonical.com'
        Err http://ppa.launchpad.net precise Release.gpg
          Temporary failure resolving 'ppa.launchpad.net'
        Err http://security.ubuntu.com precise-security Release.gpg
          Temporary failure resolving 'security.ubuntu.com'
        Err http://us.archive.ubuntu.com precise Release.gpg
          Temporary failure resolving 'us.archive.ubuntu.com'
        Err http://us.archive.ubuntu.com precise-updates Release.gpg
          Temporary failure resolving 'us.archive.ubuntu.com'
        Err http://us.archive.ubuntu.com precise-backports Release.gpg
          Temporary failure resolving 'us.archive.ubuntu.com'
        Reading package lists... Done
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise/InRelease
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-updates/InRelease
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backports/InRelease
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/precise-security/InRelease
        W: Failed to fetch http://archive.canonical.com/ubuntu/dists/precise/InRelease
        W: Failed to fetch http://ppa.launchpad.net/webupd8team/java/ubuntu/dists/precise/InRelease
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise/Release.gpg  Temporary failure resolving 'us.archive.ubuntu.com'
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-updates/Release.gpg  Temporary failure resolving 'us.archive.ubuntu.com'
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backports/Release.gpg  Temporary failure resolving 'us.archive.ubuntu.com'
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/precise-security/Release.gpg  Temporary failure resolving 'security.ubuntu.com'
        W: Failed to fetch http://archive.canonical.com/ubuntu/dists/precise/Release.gpg  Temporary failure resolving 'archive.canonical.com'
        W: Failed to fetch http://ppa.launchpad.net/webupd8team/java/ubuntu/dists/precise/Release.gpg  Temporary failure resolving 'ppa.launchpad.net'
        W: Some index files failed to download. They have been ignored, or old ones used instead.

    Any help is greatly appreciated.

  • HDMI Sound Stops Working

    - by stueng
    When it hasn't been used for a few hours, the sound stops working on my HTPC. To get sound working again I have to unplug the HDMI cable and plug it back in. When this sound "outage" occurs, the HDMI device disappears from the sound configuration's output devices. I see the following in dmesg:

        [78534.010328] HDMI hot plug event: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=0
        [78534.010363] HDMI status: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=0
        [78558.948403] HDMI hot plug event: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1
        [78558.948429] HDMI status: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=0
        [78562.579047] HDMI hot plug event: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=0
        [78562.579078] HDMI status: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1
        [78562.579992] HDMI hot plug event: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=1
        [78562.580043] HDMI status: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1
        [78562.876030] HDMI status: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1
        [78563.176036] HDMI status: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1
        [78563.195139] HDMI hot plug event: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=0
        [78563.195169] HDMI status: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=0
        [78563.476030] HDMI status: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=0
        [78635.529493] HDMI hot plug event: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1
        [78635.529524] HDMI status: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=0
        [78641.330701] HDMI hot plug event: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=0
        [78641.330733] HDMI status: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1
        [78641.331649] HDMI hot plug event: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=1
        [78641.331668] HDMI status: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1
        [78641.628050] HDMI status: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1
        [78641.928037] HDMI status: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1
        [78642.228042] HDMI status: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1
        [78642.528039] HDMI status: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1
        [78642.828037] HDMI status: Codec=0 Pin=3 Presence_Detect=1 ELD_Valid=1

  • What techniques are used in solving code golf problems?

    - by Lord Torgamus
    "Regular" golf vs. code golf: Both are competitions. Both have a well-defined set of rules, which I'll leave out for simplicity. Both have well-defined goals; in short, "use fewer hits/characters than your competitors." To win matches, athletic golfers rely on equipment Some situations call for a sand wedge; others, a 9-iron. techniques The drive works better when your feet are about shoulder width apart and your arms are relaxed. and strategies Sure, you could take that direct shortcut to the hole... but do you really want to risk the water hazard or sand bunker when those trees are in the way and the wind is so strong? It might be better to go around the long way. What do code golfers have that's analagous to athletic golfers' equipment, techniques and strategies? Sample answer to get this started: use the right club! Choose GolfScript instead of C#.

  • So we've got a code review tool, now what can we use for software documents?

    - by Tini
    We're using Subversion as a full CM for code, and also for related project documents. We have JIRA and Fisheye. When we wanted to add a peer review tool, we looked at and tested several candidates. Our weighted requirements included both code and document review, but ultimately the integration with JIRA tilted the scores in Crucible's favor. Atlassian has slammed the door on ever supporting Word or PDF in Crucible. I've tested several workaround methods to make Crucible work for documents, without success. (The Confluence/Crucible plug-in was deprecated by Atlassian, so that option is out, too.) I haven't found a plug-in for Crucible that adds this functionality, so short of writing my own plug-in, Crucible for documents is unworkable. Word's Track Changes doesn't provide a method for true collaboration and commenting. Adobe's PDF Comment and Markup is interesting, but doesn't provide a great way to keep a permanent quality record of the conversation. We can't go cloud-based; our documents must be hosted locally, on our own server only. And we're only on SharePoint 2007. Help! Anyone have a suggestion?

  • Rule of thumb for cost vs. savings for code re-use

    - by Styler
    Is it a good rule of thumb to always write code with the intent of reusing it somewhere down the road? Or, depending on the size of the component you are writing, is it better practice to design it for reuse only when doing so makes sense relative to the time spent on it? What is a good rule of thumb for spending extra time on analysis and design of project components that have "some probability" of being needed later down the road by other things that may or may not need them? For example, suppose project X needs to do things A and B. A definitely needs to be written for reuse, because it just makes sense to do so. B is very project-specific at the moment, and I can hack it all together in a couple of days to finish the project on time and give everyone kudos for being a great team, etc. Or we could spend a whole friggin' two weeks figuring out what projects Y and Z might need this thing for, and spend a load of extra time on part B because someday we might need it on project Y or Z (where the savings would be realized). I'd imagine the perfect-world situation would be a nicely crafted combination of project-specific and reusable architected components, given the project. However, some code shops might feel it's a great idea to write everything with the intention of using it at some point down the road.

  • Introducing Code Map for Visual Studio 2012 September CTP

    - by krislankford
    As part of the Visual Studio 2012 CTP for September, Visual Studio got a little sexier at helping you discover and visualize your code. The introduction of the Code Map feature complements the variety of other tools included with Visual Studio that help you analyze and visualize your projects and solutions. Code Map leverages the DGML format within Visual Studio that is currently used by the Architecture and Modeling tools. This is a nice addition that gets us from point A to point B a little faster. The great thing about Code Map is that you can access the functionality directly from within your code via the context menu. The Code Map functionality is also context-specific, based on your cursor position. You can evaluate and add items such as methods and variables directly to the Code Map window. As you add items, the Code Map surface is updated to show your new item plus any relationships and dependencies that have been introduced in your code. Something else that is very nice: the Code Map surface is interactive and supports the F12 key (Go To Definition), which can help you navigate your code, especially if you are adding items that span multiple files or projects. To get started, all you have to do is download the September CTP for Visual Studio 2012, located here. Happy Coding!
    [Screenshot: Code Map window]

  • How do I parse a header with two different versions [ID3] while avoiding code duplication?

    - by user66141
    I really hope you can give me some interesting viewpoints on my situation, because I am not satisfied with my current approach. I am writing an MP3 parser, starting with an ID3v2 parser. Right now I'm working on the extended header parsing; my issue is that the optional header is defined differently in versions 2.3 and 2.4 of the tag. The 2.3 optional header is defined as follows:

        struct ID3_3_EXTENDED_HEADER {
            DWORD dwExtHeaderSize;    // Extended header size (either 6 or 8 bytes, excluded)
            WORD  wExtFlags;          // Extended header flags
            DWORD dwSizeOfPadding;    // Size of padding (size of the tag excluding the frames and headers)
        };

    while the 2.4 version is defined:

        struct ID3_4_EXTENDED_HEADER {
            DWORD dwExtHeaderSize;    // Extended header size (synchsafe int)
            BYTE  bNumberOfFlagBytes; // Number of flag bytes
            BYTE  bFlags;             // Flags
        };

    How could I parse the header while minimizing code duplication? Using two different functions to parse each version sounds less than great, and using a single function with a different flow for each case is similar. Are there any good practices for this kind of issue? Any tips for avoiding code duplication? Any help would be appreciated.
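
    One common shape for this kind of problem (a hedged C# sketch; field names mirror the structs above, the helpers are my own): read the fields shared by both versions once, and push only the version-specific tail into small per-version branches.

        using System;
        using System.IO;

        class ExtendedHeader
        {
            public uint Size;             // shared by v2.3 and v2.4
            public ushort FlagsV3;        // v2.3 only
            public uint PaddingSizeV3;    // v2.3 only
            public byte NumFlagBytesV4;   // v2.4 only
            public byte FlagsV4;          // v2.4 only
        }

        static class Id3ExtendedHeaderParser
        {
            public static ExtendedHeader Parse(BinaryReader reader, int majorVersion)
            {
                var header = new ExtendedHeader();

                // Shared part: both versions begin with a 4-byte big-endian size,
                // stored as a plain integer in v2.3 and a synchsafe integer in v2.4.
                uint raw = ReadUInt32BE(reader);
                header.Size = majorVersion >= 4 ? DecodeSynchsafe(raw) : raw;

                // Version-specific tail.
                if (majorVersion == 3)
                {
                    header.FlagsV3 = ReadUInt16BE(reader);
                    header.PaddingSizeV3 = ReadUInt32BE(reader);
                }
                else
                {
                    header.NumFlagBytesV4 = reader.ReadByte();
                    header.FlagsV4 = reader.ReadByte();
                }
                return header;
            }

            static uint ReadUInt32BE(BinaryReader r)
            {
                byte[] b = r.ReadBytes(4);
                return (uint)(b[0] << 24 | b[1] << 16 | b[2] << 8 | b[3]);
            }

            static ushort ReadUInt16BE(BinaryReader r)
            {
                byte[] b = r.ReadBytes(2);
                return (ushort)(b[0] << 8 | b[1]);
            }

            // Synchsafe: 4 bytes carrying 7 significant bits each.
            static uint DecodeSynchsafe(uint raw) =>
                ((raw & 0x7F000000) >> 3) | ((raw & 0x7F0000) >> 2) | ((raw & 0x7F00) >> 1) | (raw & 0x7F);
        }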

  • Can't install anything, getting error "Sub-process /usr/bin/dpkg returned an error code (1)"

    - by soum
    I get this error whenever trying to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)". Please help; I'm stuck with my Ubuntu 11.10, unable to install or update anything. The tail of the output:

        postinst called with unknown argument `triggered'
        dpkg: error processing mtools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager-pptp-gnome ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-pptp ...
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-gnome ...
        /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager ...
        No apport report written because MaxReports is reached already
        /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for mscompress ...
        postinst called with unknown argument `triggered'
        dpkg: error processing mscompress (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp network-manager-gnome network-manager mscompress
        E: Sub-process /usr/bin/dpkg returned an error code (1)

  • Software architecture for two similar classes which require different input parameters for the same method

    - by I Like to Code
    I am writing code to simulate a supply chain. The supply chain can be simulated in either an intermediate-stocking or a cross-docking configuration, so I wrote two simulator objects, IstockSimulator and XdockSimulator. Since the two objects share certain behaviors (e.g. making shipments, demand arriving), I wrote an abstract simulator object, AbstractSimulator, which is a parent class of the two simulator objects. The abstract simulator object has a method runSimulation() which takes an input parameter of class SimulationParameters. Up until now, the simulation parameters have only contained fields that are common to both simulator objects, such as randomSeed, simulationStartPeriod and simulationEndPeriod. However, I now want to include fields that are specific to the type of simulation being run, i.e. an IstockSimulationParameters class for an intermediate-stocking simulation, and an XdockSimulationParameters class for a cross-docking simulation. My current idea is to take the method runSimulation() out of the AbstractSimulator class, and instead put a runSimulation(IstockSimulationParameters) method in the IstockSimulator class and a runSimulation(XdockSimulationParameters) method in the XdockSimulator class. I am worried, however, that this approach will lead to code duplication. What should I do?
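
    One way to keep the shared loop in one place is to make the base class generic over the parameter type it accepts. A hedged sketch in C# (the same idea works with bounded type parameters in Java, which the question's naming suggests; class names come from the question, all fields are assumed):

        // Common parameters shared by both configurations.
        class SimulationParameters
        {
            public int RandomSeed;
            public int SimulationStartPeriod;
            public int SimulationEndPeriod;
        }

        class IstockSimulationParameters : SimulationParameters
        {
            public int ReorderPoint;   // assumed intermediate-stocking-specific field
        }

        class XdockSimulationParameters : SimulationParameters
        {
            public int DockDelay;      // assumed cross-docking-specific field
        }

        // The base class is generic over the exact parameter type it accepts,
        // so RunSimulation stays in one place, with no casts and no duplication.
        abstract class AbstractSimulator<TParams> where TParams : SimulationParameters
        {
            public void RunSimulation(TParams parameters)
            {
                // Shared behavior (shipments, demand arrivals, ...) lives here once.
                for (int t = parameters.SimulationStartPeriod; t <= parameters.SimulationEndPeriod; t++)
                    Step(t, parameters);
            }

            protected abstract void Step(int period, TParams parameters);
        }

        class IstockSimulator : AbstractSimulator<IstockSimulationParameters>
        {
            protected override void Step(int period, IstockSimulationParameters p)
            {
                // intermediate-stocking logic, with typed access to p.ReorderPoint
            }
        }

        class XdockSimulator : AbstractSimulator<XdockSimulationParameters>
        {
            protected override void Step(int period, XdockSimulationParameters p)
            {
                // cross-docking logic, with typed access to p.DockDelay
            }
        }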

  • Win32: What is the status of chunked encoding support in WinHttpReadData?

    - by Cheeso
    The documentation for WinHttpReadData says, regarding HTTP's chunked transfer coding: "Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to perform chunked transfer encoding on data sent to the server. When the Transfer-Encoding header is present on the WinHttp response, WinHttpReadData strips the chunking information before giving the data to the application." Can anyone decipher this?
    Q1: First, this text is on the page for WinHttpReadData, which is used to read data within an HTTP client application, specifically the response data. So what does it mean when it says "WinHttp enables applications to perform chunked transfer encoding on data sent to the server"? The WinHttpReadData function isn't used with data being sent to the server; it is used when reading data from the server. Consulting the doc for the WinHttpWriteData function, which is used to send data to the server as part of an HTTP request, there is no mention of the chunked transfer capability.
    Q2: Supposing I figure out just what the newish chunked transfer support amounts to, how do I get that support? The text says it is new in Vista and WS2008. What happens if I write an app that runs on WS2003 and uses WinHttpReadData and it encounters a chunked response, or uses WinHttpWriteData and wants to send a chunked request? Between the lines, is this documentation saying that I need to link against the Vista-era Windows SDK, or later, in order to get the capability to do chunked encoding? Or is it really impossible on WS2003, in other words, must an app doing chunked transfer using this library run on the OS specified?
    This might read like a rant, but it's not. I truly want to know.

  • Need some help deciphering a line of assembler code, from .NET JITted code

    - by Lasse V. Karlsen
    In a C# constructor that ends up with a call to this(...), the actual call gets translated to this:

        0000003d call dword ptr ds:[199B88E8h]

    What are the contents of the DS register here? I know it's the data segment, but is this call through a VMT table or similar? I doubt it, since this(...) wouldn't be a call to a virtual method, just another constructor. I ask because the value at that location seems to be bad in some way: if I hit F11, Trace Into (Visual Studio 2008), on that call instruction, the program crashes with an access violation. The code is deep inside a 3rd-party control library where, though I have the source code, I don't have assemblies compiled with enough debug information to trace through the C# code, only through the disassembler, and then I have to match that back to the actual code. The C# code in question is this:

        public AxisRangeData(AxisRange range) : this(range, range.Axis) { }

    Reflector shows me this IL code:

        .maxstack 8
        L_0000: ldarg.0
        L_0001: ldarg.1
        L_0002: ldarg.1
        L_0003: callvirt instance class DevExpress.XtraCharts.AxisBase DevExpress.XtraCharts.AxisRange::get_Axis()
        L_0008: call instance void DevExpress.XtraCharts.Native.AxisRangeData::.ctor(class DevExpress.XtraCharts.ChartElement, class DevExpress.XtraCharts.AxisBase)
        L_000d: ret

    It's that last call there, to the other constructor of the same class, that fails. The debugger never surfaces inside the other method; it just crashes. The disassembly for the method after JITting is this:

        00000000 push ebp
        00000001 mov ebp,esp
        00000003 sub esp,14h
        00000006 mov dword ptr [ebp-4],ecx
        00000009 mov dword ptr [ebp-8],edx
        0000000c cmp dword ptr ds:[18890E24h],0
        00000013 je 0000001A
        00000015 call 61843511
        0000001a mov eax,dword ptr [ebp-4]
        0000001d mov dword ptr [ebp-0Ch],eax
        00000020 mov eax,dword ptr [ebp-8]
        00000023 mov dword ptr [ebp-10h],eax
        00000026 mov ecx,dword ptr [ebp-8]
        00000029 cmp dword ptr [ecx],ecx
        0000002b call dword ptr ds:[1889D0DCh]   // range.Axis
        00000031 mov dword ptr [ebp-14h],eax
        00000034 push dword ptr [ebp-14h]
        00000037 mov edx,dword ptr [ebp-10h]
        0000003a mov ecx,dword ptr [ebp-0Ch]
        0000003d call dword ptr ds:[199B88E8h]   // this(range, range.Axis)?
        00000043 nop
        00000044 mov esp,ebp
        00000046 pop ebp
        00000047 ret

    Basically, what I'm asking is this:
    - What is the purpose of the ds:[ADDR] indirection here? The VMT is only for virtual methods, isn't it? And this is a constructor.
    - Could the constructor have yet to be JITted, which would mean the call would actually go through a JIT shim?
    I'm afraid I'm in deep water here, so anything might help.
    Edit: Well, the problem just got worse, or better, or whatever. We are developing the .NET feature in a C# project in a Visual Studio 2008 solution, and debugging and developing through Visual Studio. However, in the end this code will be loaded into a .NET runtime hosted by a Win32 Delphi application. To make it easy to experiment with such features, we can also configure the Visual Studio project/solution/debugger to copy the produced DLLs to the Delphi app's directory and then execute the Delphi app through the Visual Studio debugger. It turns out the problem goes away if I run the program outside the debugger, but during debugging it crops up every time. Not sure that helps, but since the code isn't slated for production release for another six months or so, it takes some of the pressure off for the test release we have soon.
    I'll dive into the memory parts later, but probably not until over the weekend, and I'll post a follow-up.

  • Code refactoring with Visual Studio 2010 Part-4

    - by Jalpesh P. Vadgama
    I have been writing a few posts about the code refactoring features in Visual Studio 2010. This post is also part of that series, and it will be the last of the series. In this post I am going to explain two features: 1) Encapsulate Field and 2) Extract Interface. Let's explore both features in detail.
    Encapsulate Field: This is a nice code refactoring feature provided by Visual Studio 2010. With the help of this feature we can create properties from the existing private fields of a class. Let's take a simple example of a Customer class, in which there are two private fields called firstName and lastName. Below is the code for the class:

        public class Customer
        {
            private string firstName;
            private string lastName;

            public string Address { get; set; }
            public string City { get; set; }
        }

    Now let's encapsulate the first field, firstName, with the Encapsulate Field feature. First select that field, go to the Refactor menu in Visual Studio 2010, and click Encapsulate Field. Once you click that, a dialog box will appear. Once you click OK, a preview dialog box will open, as we have selected "preview reference changes"; I think it's a good option to check, so you can preview the code that is being changed by the IDE itself. Once you click Apply, it creates a new property called FirstName. I have done the same for lastName, and now my Customer class code looks like the following:

        public class Customer
        {
            private string firstName;

            public string FirstName
            {
                get { return firstName; }
                set { firstName = value; }
            }

            private string lastName;

            public string LastName
            {
                get { return lastName; }
                set { lastName = value; }
            }

            public string Address { get; set; }
            public string City { get; set; }
        }

    So you can see that it's very easy to create properties from existing fields, and you don't have to change anything in the code; the IDE changes all the related code itself.
    Extract Interface: When you are writing a software prototype and you don't know the future implementation, it's good practice to use interfaces. I am going to explain here how we can extract an interface from existing code, without writing a single line of code, with the help of the code refactoring features of Visual Studio 2010. For that I have created a simple repository class called CustomerRespository with three methods, like the following:

        public class CustomerRespository
        {
            public void Add()
            {
                // Some code to add customer
            }

            public void Update()
            {
                // Some code to update customer
            }

            public void Delete()
            {
                // Some code to delete customer
            }
        }

    In the above class there are three methods, Add, Update and Delete, where we are going to implement some code for each one. Now I want to create an interface which I can use for my other entities in the project. So let's create an interface from the above class with the help of Visual Studio 2010. First select the class, go to the Refactor menu, and click Extract Interface. It will open up a dialog box. Here I have selected all the methods for the interface, and once I click OK it creates a new file called ICustomerRespository containing the interface, just like the following:

        using System;

        namespace CodeRefractoring
        {
            interface ICustomerRespository
            {
                void Add();
                void Delete();
                void Update();
            }
        }

    Now let's see the code for our class. It will also be changed, like the following, to implement the interface:

        public class CustomerRespository : ICustomerRespository
        {
            public void Add()
            {
                // Some code to add customer
            }

            public void Update()
            {
                // Some code to update customer
            }

            public void Delete()
            {
                // Some code to delete customer
            }
        }

    Isn't that great? We have created an interface and implemented it without writing a single line of code. Hope you liked it. Stay tuned for more. Till then, happy programming.

  • remove duplicate source entry [closed]

    - by yosa
    Possible Duplicate: Duplicate sources.list entry but cannot find the duplicates? This is my sources.list, and it seems fine to me:

        # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted
        # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/
        # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/
        # deb cdrom:[Ubuntu 11.10]/ natty main restricted
        # deb cdrom:[Ubuntu 11.04 _Natty Narwhal_ - Release i386 (20110427.1)]/ natty main restricted
        # deb cdrom:[Ubuntu 11.10 _Oneiric Ocelot_ - Release amd64 (20111012)]/ dists/oneiric/main/binary-i386/
        # deb cdrom:[Ubuntu 11.10 _Oneiric Ocelot_ - Release amd64 (20111012)]/ oneiric main restricted

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://archive.ubuntu.com/ubuntu precise main restricted

        ## Major bug fix updates produced after the final release of the distribution.
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu precise universe

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://archive.ubuntu.com/ubuntu precise multiverse

        ## Uncomment the following two lines to add software from the 'backports'
        ## repository.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        # deb-src http://ma.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu precise partner
        # deb-src http://archive.canonical.com/ubuntu natty partner

        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        deb http://extras.ubuntu.com/ubuntu precise main
        deb http://archive.ubuntu.com/ubuntu precise-updates restricted main multiverse universe
        deb http://security.ubuntu.com/ubuntu/ precise-security restricted main multiverse universe
        deb http://archive.ubuntu.com/ubuntu precise main universe
        deb-src http://extras.ubuntu.com/ubuntu precise main

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb-src http://archive.ubuntu.com/ubuntu precise main restricted

        ## Major bug fix updates produced after the final release of the distribution.
        deb http://archive.ubuntu.com/ubuntu precise-updates restricted
        deb-src http://archive.ubuntu.com/ubuntu precise-updates main restricted

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb-src http://archive.ubuntu.com/ubuntu precise universe
        deb-src http://archive.ubuntu.com/ubuntu precise-updates universe

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb-src http://archive.ubuntu.com/ubuntu precise multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise-updates multiverse

        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse

        deb http://archive.ubuntu.com/ubuntu precise-security main restricted
        deb-src http://archive.ubuntu.com/ubuntu precise-security main restricted
        deb http://archive.ubuntu.com/ubuntu precise-security universe
        deb-src http://archive.ubuntu.com/ubuntu precise-security universe
        deb http://archive.ubuntu.com/ubuntu precise-security multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise-security multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        # deb http://archive.canonical.com/ubuntu oneiric partner
        # deb-src http://archive.canonical.com/ubuntu oneiric partner

        ## (a further run of the same stock comment blocks repeats here)
        # deb http://archive.canonical.com/ubuntu precise partner
        # deb-src http://archive.canonical.com/ubuntu precise partner

        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        # deb http://packages.dotdeb.org stable all
        # deb-src http://packages.dotdeb.org stable all
        # deb http://ppa.launchpad.net/bean123ch/burg/ubuntu lucid main
        # deb-src http://ppa.launchpad.net/bean123ch/burg/ubuntu lucid main

    This is the error given by apt-get update, which stops at 64% reading:

        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise/main amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_main_binary-amd64_Packages)
        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise/universe amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-amd64_Packages)
        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise/main i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages)
        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise/universe i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-i386_Packages)
        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise-updates/restricted amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise-updates_restricted_binary-amd64_Packages)
        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise-updates/restricted i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise-updates_restricted_binary-i386_Packages)
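
    Looking at the file as posted, the warnings do correspond to real overlaps: the line deb http://archive.ubuntu.com/ubuntu precise main universe repeats the main component from deb ... precise main restricted and the universe component from deb ... precise universe, and deb ... precise-updates restricted main multiverse universe overlaps the separate precise-updates restricted line. Removing the redundant components (or the combined lines) should silence the warnings.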

  • Instructions on how to configure a WebLogic Cluster and use it with Oracle HTTP Server

    - by Laurent Goldsztejn
    On October 17th I delivered a webcast on WebLogic Clustering that included a demo with Apache as the proxy server. I realized that many steps are needed to set up the configuration I used during the demo. The purpose of this article is to go through these steps and show how quickly and easily one can define a new cluster and then proxy requests via an Oracle HTTP Server (OHS). The domain configuration wizard offers the option to create a cluster. The administration console, or WLST (the WebLogic Scripting Tool), can also be used to define a new cluster. A cluster can be created at any time, but the servers that will participate in it cannot be in a running state.
    Cluster creation using the configuration wizard. Network and architecture requirements need to be considered when choosing between unicast and multicast; Multicast Vs. Unicast with WebLogic Clustering is of great help in making the best decision between the two messaging modes. In addition, Configure Cluster offers details on every field displayed above. After this initial configuration page, individual servers can be assigned to the newly created cluster, although servers can also be added to the cluster later. What is not recommended is for the Admin server to participate in a cluster, as the main purpose of the Admin server is to perform the bulk of the processing for the domain. Servers need to be stopped before being assigned to a cluster. There is also no minimum number of servers that have to participate in the cluster. At this point the configuration should be done and the cluster created successfully, which can easily be verified from the console. Each clustered managed server can then be launched to join the cluster. At startup, the following messages should be logged for each clustered managed server:

        <Notice> <WeblogicServer> <BEA-000365> <Server state changed to STARTING>
        <Notice> <Cluster> <BEA-000197> <Listening for announcements from cluster using messaging_mode cluster messaging>
        <Notice> <Cluster> <BEA-000133> <Waiting to synchronize with other running members of cluster_name>

    It's time to try sending requests to the cluster, and we will do this with the help of Oracle HTTP Server playing the role of a proxy server, to demonstrate load balancing.
    Proxy server configuration. The first step is to download the WebLogic Server Web Server Plugin, which enhances the web server by handling requests aimed at the WebLogic cluster. For our test, Oracle HTTP Server (OHS) will be used; however, plug-ins are also available for Apache HTTP Server, Microsoft Internet Information Services (IIS), Oracle iPlanet Web Server, or even WebLogic Server itself with the HttpClusterServlet. Once OHS is installed on the system, the configuration file, mod_wl_ohs.conf, needs to be altered to include the WebLogic proxy specifics. First of all, add the following directive to instruct Apache to load the WebLogic shared object module extracted from the plugins file just downloaded:

        LoadModule weblogic_module modules/mod_wl_ohs.so

    Then create an IfModule directive to encapsulate the following Location block, so that proxying is enabled by path (each request including /wls will be directed straight to the WebLogic cluster). You could also proxy requests by MIME type, using MatchExpression in the Location block.

        <IfModule weblogic_module>
          <Location /wls>
            SetHandler weblogic-handler
            PathTrim /wls
            WebLogicCluster MS1_URL:port,MS2_URL:port
            Debug ON
            WLLogFile        c:/tmp/global_proxy.log
            WLTempDir        "c:/myTemp"
            DebugConfigInfo  On
          </Location>
        </IfModule>

    SetHandler specifies the handler for the plug-in module. PathTrim instructs the plug-in to trim /wls from the URL before forwarding the request to the cluster. The list of WebLogic Servers defined in WebLogicCluster could contain a mixed set of clustered and single servers; however, the dynamic list returned for this parameter will only contain valid clustered servers, and may contain more servers if not all clustered servers are listed in WebLogicCluster.
    Testing proxy and load balancing. It's time to start the OHS web server, which should at this point be configured correctly to proxy requests to the clustered servers. By default, round-robin is the load balancing strategy set by WebLogic. Testing the load balancing can easily be done by disabling cookies in your browser, given that a request containing a cookie attempts to connect to the primary server; if that attempt fails, the plug-in attempts to make a connection to the next available server in the list, in round-robin fashion. With cookies enabled, you could use two different browsers to test the load balancing with a JSP page that contains the following:

        <%@ page contentType="text/html; charset=iso-8859-1" language="java" %>
        <%
            String path = request.getContextPath();
            String getProtocol = request.getScheme();
            String getDomain = request.getServerName();
            String getPort = Integer.toString(request.getLocalPort());
            String getPath = getProtocol + "://" + getDomain + ":" + getPort + path + "/";
        %>
        <html>
        <body>
        Receiving Server <%=getPath%>
        </body>
        </html>

    Assuming that you name the JSP page Test.jsp and the webapp that contains it TestApp, your browsers should open the following URL: http://localhost/wls/TestApp/Test.jsp. Each browser should connect to a different clustered server, and this simple JSP should confirm that. The webapp that contains the JSP needs to be deployed to the cluster. You can also verify that the load is correctly balanced by looking at the proxy log file. Each request generates a set of log entries that starts with:

        timestamp ================New Request:

    Each request is associated with a primary server, and a secondary server if one is available. For our test request, the following entries should appear in the log as well:

        Using Uri /wls/TestApp/Test.jsp
        After trimming path: '/TestApp/Test.jsp'
        The final request string is '/TestApp/Test.jsp'

    If an exception occurs, it should also be logged in the proxy log file with the prefix:

        timestamp *******Exception type

    WebLogicBridgeConfig. DebugConfigInfo enables runtime statistics and the production of configuration information. For security purposes, this parameter should be turned off in production. http://webserver_host:port/path/xyz.jsp?__WebLogicBridgeConfig will display a proxy bridge page detailing the plugin configuration, followed by runtime statistics, which could help in diagnosing issues along with analysis of the proxy log file. In our example the URL would be: http://localhost/wls/TestApp/Test.jsp?__WebLogicBridgeConfig
    [Screenshot: top section of the proxy bridge page]
    [Screenshot: snippet of the runtime statistics from the bottom of the page (unrelated to the previous JSP example)]
    This entire plugin configuration should be very similar for other web servers; what varies is the name of the proxy server configuration file. So, as you can see, it only takes a few minutes to configure a WebLogic cluster and get servers to join it.
