Search Results

Search found 21004 results on 841 pages for 'assembly load'.

Page 105 of 841

  • dialog displays automatically when I load web page

    - by robert trudel
    When I load my web page, a dialog opens automatically. I don't want that. I placed an image, and only when I click on it should the dialog open. My JS:

      $("#deleteReportButton").click(function() {
          $("#deleteReportConfirm").dialog("open");
      });

      $("#deleteReportConfirm").dialog({
          resizable: false,
          height: 140,
          modal: true,
          buttons: {
              "Delete report": function() {
                  $(this).dialog("close");
              },
              Cancel: function() {
                  $(this).dialog("close");
              }
          }
      });

    HTML part:

      <div id="deleteReportConfirm" title="Delete report?" style="display:none">
          <p>....</p>
      </div>
      <img id="deleteReportButton" src="/plato/resources/images/delete.png" />
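
    A minimal sketch of one likely fix, assuming jQuery UI: initialize the dialog with autoOpen set to false so it only opens from the click handler.

      // Sketch: keep the dialog closed until the image is clicked.
      $(function() {
          $("#deleteReportConfirm").dialog({
              autoOpen: false,   // prevents the dialog from opening on page load
              resizable: false,
              height: 140,
              modal: true,
              buttons: {
                  "Delete report": function() { $(this).dialog("close"); },
                  Cancel: function() { $(this).dialog("close"); }
              }
          });

          $("#deleteReportButton").click(function() {
              $("#deleteReportConfirm").dialog("open");
          });
      });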

    Read the article

  • JQuery QuickSearch: Searching on load if the input isn't blank

    - by ckm-88
    I have used the jQuery quicksearch plugin and it works fine, except for one problem. I would like the quicksearch to run when the page is loaded. I created a second quicksearch function (which is called when the page is loaded) and changed the bind to something else, but it won't work on "load" or "ready". If I change the bind to "focus" and put the focus onto the text field it works, but only in IE. The reason I want to do this is that there is a "view" link where the user leaves the page. When they come back, I would like the search results to be as they left them.
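
    A minimal sketch of one approach, assuming the plugin's default keyup binding; the #search id and the row selector are hypothetical:

      // Sketch: re-run quicksearch on page load if the input already has a value.
      $(function() {
          var $input = $('#search');            // hypothetical search field id
          $input.quicksearch('table tbody tr'); // bind quicksearch as usual

          if ($.trim($input.val()) !== '') {
              $input.trigger('keyup');          // simulate typing so the filter runs
          }
      });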

    Read the article

  • JavaScript - load XML data from a URL

    - by spenf
    I have this URL with some XML data in it: http://64.182.231.116/~spencerf/union_college/Upperclass_Sample_Menu.xml I would like to load this XML data into my JavaScript so I can parse it. I am using the parse.com JavaScript SDK, in their Cloud Code. Here is the code I have tried:

      Parse.Cloud.define("next", function(request, response) {
          response.success("Hello world!");
          $.ajax({
              url: 'http://64.182.231.116/~spencerf/union_college/Upperclass_Sample_Menu.xml', // name of file you want to parse
              dataType: "xml", // type of file you are trying to read
              success: parse, // name of the function to call upon success
              error: function(){alert("Error: Something went wrong");}
          });
      });

    But when I run this I get an error: $ is not defined at main.js:
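
    A minimal sketch of one possible direction, assuming Parse Cloud Code: jQuery isn't available server-side, so the built-in Parse.Cloud.httpRequest can fetch the XML instead (parsing of the response body is left out here):

      // Sketch: fetch the XML with Parse's own HTTP client instead of $.ajax.
      Parse.Cloud.define("next", function(request, response) {
          Parse.Cloud.httpRequest({
              url: 'http://64.182.231.116/~spencerf/union_college/Upperclass_Sample_Menu.xml',
              success: function(httpResponse) {
                  // httpResponse.text holds the raw XML; parse it here before responding
                  response.success(httpResponse.text);
              },
              error: function(httpResponse) {
                  response.error('Request failed with code ' + httpResponse.status);
              }
          });
      });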

    Read the article

  • Load custom class properly

    - by LinusAn
    I have a custom class which I want to "load" inside the firstViewController and then access from other classes via segues. My problem is that I can't even access and change the instance variable inside the firstViewController. Somehow I'm "loading" it wrong. Here is the code I have used until now:

    Inside viewController.h:

      @property (strong, nonatomic) myClass *newClass;

    Inside viewController.m:

      @synthesize newClass;

    I then try to access it by:

      self.newClass.string = @"myString";
      if (newClass.string == @"myString") {
          NSLog(@"didn't work");
      }

    Well, I get "didn't work". Why is that? When I write

      myClass *newClass = [myClass new];

    it does work. But the class and its properties get overwritten every time the ViewController loads again. What would you recommend? Thank you very much.
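
    A minimal sketch of one likely issue, assuming ARC: the property is never instantiated, so messages go to nil. Lazily creating the object (and comparing strings with isEqualToString: rather than ==) is a common pattern; the getter below assumes the ivar generated by the plain @synthesize above.

      // Sketch: lazily instantiate the property so it exists before first use.
      - (myClass *)newClass {
          if (newClass == nil) {
              newClass = [[myClass alloc] init];
          }
          return newClass;
      }

      - (void)viewDidLoad {
          [super viewDidLoad];
          self.newClass.string = @"myString";
          if ([self.newClass.string isEqualToString:@"myString"]) {
              NSLog(@"worked");
          }
      }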

    Read the article

  • call .js file after ajax load

    - by asovgir
    I am trying to apply a .js file to a page I loaded via AJAX (since AJAX automatically strips the content of all the JavaScript).

      var url;
      var textUrl = 'local/file.js';

      $('a').click(function() {
          url = $(this).attr('href');
          $('.secondaryDiv').load(url, function() {
              $.getScript(textUrl, function(data, textStatus, jqxhr) {
                  console.log(data);        // data returned from getScript
                  console.log(textStatus);  // returns "success"
                  console.log(jqxhr.status); // 200
              });
          });
          return false;
      });

    Am I approaching this the right way? I tried everything I could think of and I can't get it to work.

    Read the article

  • DOMDocument::load in PHP 5

    - by Abs
    Hello all, I open a 10MB+ XML file several times in my script, in different functions:

      $dom = DOMDocument::load( $file ) or die('couldnt open');

    1) Is the above the old style of loading a document? I am using PHP 5. Opening it statically? 2) Do I need to close the loading of the XML file, if possible? I suspect it's causing memory problems, because I loop through the nodes of the XML file several thousand times and sometimes my script just ends abruptly. Thanks all for any help
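
    A minimal sketch of the instance-style alternative, assuming PHP 5's DOM extension: load once, reuse the object, and drop the reference when finished so the memory can be reclaimed.

      <?php
      // Sketch: load the document once and pass it around instead of reloading it.
      $dom = new DOMDocument();
      if (!$dom->load($file)) {
          die('couldnt open');
      }

      // ... loop over $dom->getElementsByTagName('node') as needed ...

      // There is no explicit close; releasing the last reference frees the memory.
      unset($dom);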

    Read the article

  • Simple MSBuild Configuration: Updating Assemblies With A Version Number

    - by srkirkland
    When distributing a library you often run up against versioning problems, one facet of which is simply determining which version of that library your client is running.  Of course, each project in your solution has an AssemblyInfo.cs file which provides, among other things, the ability to set the assembly name and version number.  Unfortunately, setting the assembly version here would require not only changing the version manually for each build (depending on your schedule), but keeping it in sync across all projects.  There are many ways to solve this versioning problem, and in this blog post I'm going to try to explain what I think is the easiest and most flexible solution.  I will walk you through using MSBuild to create a simple build script, and I'll even show how to (optionally) integrate with a TeamCity build server.  All of the code from this post can be found at https://github.com/srkirkland/BuildVersion.

    Create CommonAssemblyInfo.cs

    The first step is to create a common location for the repeated assembly info that is spread across all of your projects.  Create a new solution-level file (I usually create a Build/ folder in the solution root, but anywhere reachable by all your projects will do) called CommonAssemblyInfo.cs.  In here you can put any information common to all your assemblies, including the version number.  An example CommonAssemblyInfo.cs is as follows:

      using System.Reflection;
      using System.Resources;
      using System.Runtime.InteropServices;

      [assembly: AssemblyCompany("University of California, Davis")]
      [assembly: AssemblyProduct("BuildVersionTest")]
      [assembly: AssemblyCopyright("Scott Kirkland & UC Regents")]
      [assembly: AssemblyConfiguration("")]
      [assembly: AssemblyTrademark("")]

      [assembly: ComVisible(false)]

      [assembly: AssemblyVersion("1.2.3.4")] //Will be replaced

      [assembly: NeutralResourcesLanguage("en-US")]

    Cleanup AssemblyInfo.cs & Link CommonAssemblyInfo.cs

    For each of your projects, you'll want to clean up your assembly info to contain only information that is unique to that assembly - everything else will go in the CommonAssemblyInfo.cs file.  For most of my projects, that just means setting the AssemblyTitle, though you may feel AssemblyDescription is warranted.  An example AssemblyInfo.cs file is as follows:

      using System.Reflection;

      [assembly: AssemblyTitle("BuildVersionTest")]

    Next, you need to "link" the CommonAssemblyInfo.cs file into your projects right beside your newly lean AssemblyInfo.cs file.  To do this, right click on your project and choose Add | Existing Item from the context menu.  Navigate to your CommonAssemblyInfo.cs file, but instead of clicking Add, click the little down-arrow next to Add and choose "Add as Link."  You should see a little link graphic on the item's icon.  We've actually reduced complexity a lot already, because if you build, all of your assemblies will have the same common info, including the product name and our static (fake) assembly version.  Let's take this one step further and introduce a build script.

    Create an MSBuild file

    What we want from the build script (for now) is basically just to have the common assembly version number changed via a parameter (eventually to be passed in by the build server) and then for the project to build.  Also we'd like the flexibility to define which build configuration to use (debug, release, etc.).  In order to find/replace the version number, we are going to use a regular expression to find and replace the text within your CommonAssemblyInfo.cs file.  There are many other ways to do this using community build task add-ins, but since we want to keep it simple, let's just define the regular expression task manually in a new file, Build.tasks (this example is taken from the NuGet build.tasks file).

      <?xml version="1.0" encoding="utf-8"?>
      <Project ToolsVersion="4.0" DefaultTargets="Go" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <UsingTask TaskName="RegexTransform" TaskFactory="CodeTaskFactory" AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
          <ParameterGroup>
            <Items ParameterType="Microsoft.Build.Framework.ITaskItem[]" />
          </ParameterGroup>
          <Task>
            <Using Namespace="System.IO" />
            <Using Namespace="System.Text.RegularExpressions" />
            <Using Namespace="Microsoft.Build.Framework" />
            <Code Type="Fragment" Language="cs">
              <![CDATA[
              foreach(ITaskItem item in Items) {
                string fileName = item.GetMetadata("FullPath");
                string find = item.GetMetadata("Find");
                string replaceWith = item.GetMetadata("ReplaceWith");

                if(!File.Exists(fileName)) {
                  Log.LogError(null, null, null, null, 0, 0, 0, 0, String.Format("Could not find version file: {0}", fileName), new object[0]);
                }
                string content = File.ReadAllText(fileName);
                File.WriteAllText(
                  fileName,
                  Regex.Replace(
                    content,
                    find,
                    replaceWith
                  )
                );
              }
              ]]>
            </Code>
          </Task>
        </UsingTask>
      </Project>

    If you glance at the code, you'll see it's really just doing a Regex.Replace() on a given file, which is exactly what we need.  Now we are ready to write our build file, called (by convention) Build.proj.

      <?xml version="1.0" encoding="utf-8"?>
      <Project ToolsVersion="4.0" DefaultTargets="Go" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <Import Project="$(MSBuildProjectDirectory)\Build.tasks" />
        <PropertyGroup>
          <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
          <SolutionRoot>$(MSBuildProjectDirectory)</SolutionRoot>
        </PropertyGroup>

        <ItemGroup>
          <RegexTransform Include="$(SolutionRoot)\CommonAssemblyInfo.cs">
            <Find>(?&lt;major&gt;\d+)\.(?&lt;minor&gt;\d+)\.\d+\.(?&lt;revision&gt;\d+)</Find>
            <ReplaceWith>$(BUILD_NUMBER)</ReplaceWith>
          </RegexTransform>
        </ItemGroup>

        <Target Name="Go" DependsOnTargets="UpdateAssemblyVersion; Build">
        </Target>

        <Target Name="UpdateAssemblyVersion" Condition="'$(BUILD_NUMBER)' != ''">
          <RegexTransform Items="@(RegexTransform)" />
        </Target>

        <Target Name="Build">
          <MSBuild Projects="$(SolutionRoot)\BuildVersionTest.sln" Targets="Build" />
        </Target>

      </Project>

    Reviewing this MSBuild file, we see that by default the "Go" target will be called, which in turn depends on "UpdateAssemblyVersion" and then "Build."  We go ahead and import the Build.tasks file and then set up some handy properties for setting the build configuration and solution root (in this case, my build files are in the solution root, but we might want to create a Build/ directory later).  The rest of the file flows logically: we set up the RegexTransform to match version numbers of the form <major>.<minor>.x.<revision> (1.2.3.4 in our example) and replace them with a $(BUILD_NUMBER) parameter which will be supplied externally.  The first target, "UpdateAssemblyVersion", just runs the RegexTransform, and the second target, "Build", just runs the default MSBuild on our solution.

    Testing the MSBuild file locally

    Now we have a build file which can replace assembly version numbers and build, so let's set up a quick batch file to be able to build locally.  To do this you simply create a file called Build.cmd and have it call MSBuild on your Build.proj file.  I've added a bit more flexibility so you can specify build configuration and version number, which makes your Build.cmd look as follows:

      set config=%1
      if "%config%" == "" (
         set config=debug
      )

      set version=%2
      if "%version%" == "" (
         set version=2.3.4.5
      )

      %WINDIR%\Microsoft.NET\Framework\v4.0.30319\msbuild Build.proj /p:Configuration="%config%" /p:build_number="%version%"

    Now if you click on the Build.cmd file, you will get a default debug build using the version 2.3.4.5.  Let's run it in a command window with the parameters set for a release build, version 2.0.1.453.
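
    That invocation might look something like this (a sketch; the configuration and version values are just the ones used above):

      rem Sketch: release build stamped as 2.0.1.453
      Build.cmd release 2.0.1.453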

    Excellent!  We can now run one simple command and govern the build configuration and version number of our entire solution.  Each DLL produced will have the same version number, making it very simple and accurate to determine which version of a library you are running.

    Configure the build server (TeamCity)

    Of course you are not really going to want to run a build command manually every time, and typing in incrementing version numbers will also not be ideal.  A good solution is to have a computer (or set of computers) act as a build server and build your code for you, providing you a consistent environment, excellent reporting, and much more.  One of the most popular build servers is JetBrains' TeamCity, and this last section will show you the few configuration parameters to use when setting up a build using the MSBuild file created earlier.  If you are using a different build server, the same principles should apply.

    First, when setting up the project you want to specify the "Build Number Format," often given in the form <major>.<minor>.<revision>.<build>.  In this case you will set major/minor manually, and optionally revision (or you can use your VCS revision number with %build.vcs.number%), and then build using the {0} wildcard.  Thus your build number format might look like this: 2.0.1.{0}.  During each build, this value will be created and passed into the $BUILD_NUMBER variable of our Build.proj file, which then uses it to decorate your assemblies with the proper version.

    After setting up the build number, you must choose MSBuild as the Build Runner, then provide a path to your build file (Build.proj).  After specifying your MSBuild version (equivalent to your .NET Framework version), you have the option to specify targets (the default being "Go") and additional MSBuild parameters.  The one parameter that is often useful is manually setting the configuration property (/p:Configuration="Release") if you want something other than the default (which is Debug in our example).  Your resulting configuration will look something like this: [Under General Settings] [Build Runner Settings]

    Now every time your build is run, a newly incremented build version number will be generated and passed to MSBuild, which will then version your assemblies and build your solution.

    A Quick Review

    Our goal was to version our output assemblies in an automated way, and we accomplished it by performing a few quick steps:

    1. Move the common assembly information, including version, into a linked CommonAssemblyInfo.cs file
    2. Create a simple MSBuild script to replace the common assembly version number and build your solution
    3. Direct your build server to use the created MSBuild script

    That's really all there is to it.  You can find all of the code from this post at https://github.com/srkirkland/BuildVersion.  Enjoy!

    Read the article

  • HAProxy reqrep remove URI on backend request

    - by Jim
    Real quick question regarding HAProxy reqrep. I am trying to rewrite/replace the request that gets sent to the backend. I have the following example domain and URIs:

      http://domain/web1
      http://domain/web2

    I want web1 to go to backend webfarm1, and web2 to go to webfarm2. Currently this does happen. However, I want to strip off the web1 or web2 URI when the request is sent to the backend. Here is my haproxy.cfg:

      frontend webVIP_80
          mode http
          bind :80
          #acl routing to backend
          acl web1_path path_beg /web1
          acl web2_path path_beg /web2
          #which backend
          use_backend webfarm1 if web1_path
          use_backend webfarm2 if web2_path
          default_backend webfarm1

      backend webfarm1
          mode http
          reqrep ^([^\ ]*)\ /web1/(.*) \1\ /\2
          balance roundrobin
          option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
          server webtest1 10.0.0.10:80 weight 5 check slowstart 5000ms
          server webtest2 10.0.0.20:80 weight 5 check slowstart 5000ms

      backend webfarm2
          mode http
          reqrep ^([^\ ]*)\ /web2/(.*) \1\ /\2
          balance roundrobin
          option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
          server webtest1-farm2 10.0.0.110:80 weight 5 check slowstart 5000ms
          server webtest2-farm2 10.0.0.120:80 weight 5 check slowstart 5000ms

    If I go to http://domain/web1 or http://domain/web2, I see in the error logs on a server in each backend that the request is for the resource /web1 or /web2 respectively. Therefore I believe there is something wrong with my regular expression, even though I copied and pasted it from the documentation: http://code.google.com/p/haproxy-docs/wiki/reqrep Summary: I'm trying to route traffic based on URI, however I want to strip the URI on the backend side. Go to http://domain/web1 -- backend request of / to webfarm1. Thank you! -Jim
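
    A minimal sketch of one possible adjustment, assuming the issue is that requests without a trailing slash (e.g. /web1 exactly) never match the /web1/ pattern: make the slash optional so both forms are rewritten.

      # Sketch: also match /web1 with no trailing slash
      backend webfarm1
          mode http
          reqrep ^([^\ ]*)\ /web1/?(.*) \1\ /\2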

    Read the article

  • HAProxy error: Some configuration options require full privileges, so global.uid cannot be changed

    - by Athena Wisdom
    After adding the following line to /etc/haproxy/haproxy.cfg as part of creating a transparent proxy:

      source 0.0.0.0 usesrc clientip

    restarting haproxy starts giving an error:

      ~# service haproxy reload
       * Reloading haproxy haproxy
      [ALERT] 230/153724 (1140) : [/usr/sbin/haproxy.main()] Some configuration options require full privileges, so global.uid cannot be changed.

    I'm already running service haproxy reload as root. What else do we have to do? Thank you!
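
    A heavily hedged sketch of one common direction: usesrc (transparent binding) needs the process to keep full root privileges, so uid/gid/user/group directives in the global section conflict with it. Removing them (and making sure the kernel and the haproxy build support TPROXY) is one thing to try; the rest of the global section below is purely illustrative.

      # Sketch: drop the privilege-dropping directives so haproxy keeps root,
      # which transparent binding with "usesrc" requires.
      global
          daemon
          maxconn 4096
          # user haproxy    <- removed: conflicts with "usesrc"
          # group haproxy   <- removed: conflicts with "usesrc"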

    Read the article

  • HAProxy: Display a "BADREQ" | BADREQ's by the thousands

    - by GruffTech
    My HAProxy configuration:

      #HA-Proxy version 1.3.22 2009/10/14 Copyright 2000-2009 Willy Tarreau <[email protected]>
      global
          maxconn 10000
          spread-checks 50
          user haproxy
          group haproxy
          daemon
          stats socket /tmp/haproxy
          log localhost local0
          log localhost local1 notice

      defaults
          mode http
          maxconn 50000
          timeout client 10000
          option forwardfor except 127.0.0.1
          option httpclose
          option httplog

      listen dcaustin 0.0.0.0:80
          mode http
          timeout connect 12000
          timeout server 60000
          timeout queue 120000
          balance roundrobin
          option httpchk GET /index.html
          log global
          option httplog
          option dontlog-normal
          server web1 10.10.10.101:80 maxconn 300 check fall 1
          server web2 10.10.10.102:80 maxconn 300 check fall 1
          server web3 10.10.10.103:80 maxconn 300 check fall 1
          server web4 10.10.10.104:80 maxconn 300 check fall 1

      listen stats 0.0.0.0:9000
          mode http
          balance
          log global
          timeout client 5000
          timeout connect 4000
          timeout server 30000
          stats uri /haproxy

    HAProxy is running, and the socket is working:

      adam@dcaustin:/etc/haproxy# echo "show info" | socat stdio /tmp/haproxy
      Name: HAProxy
      Version: 1.3.22
      Release_date: 2009/10/14
      Nbproc: 1
      Process_num: 1
      Pid: 6320
      Uptime: 0d 0h14m58s
      Uptime_sec: 898
      Memmax_MB: 0
      Ulimit-n: 20017
      Maxsock: 20017
      Maxconn: 10000
      Maxpipes: 0
      CurrConns: 47
      PipesUsed: 0
      PipesFree: 0
      Tasks: 51
      Run_queue: 1
      node: dcaustin
      description:

    "show errors" shows nothing from the socket:

      adam@dcaustin:/etc/haproxy# echo "show errors" | socat stdio /tmp/haproxy
      adam@dcaustin:/etc/haproxy#

    However, my error log is exploding with "BADREQ" entries carrying the error code cR. According to the 1.3 documentation, cR means:

      The "timeout http-request" stroke before the client sent a full HTTP request.
      This is sometimes caused by too large TCP MSS values on the client side for
      PPPoE networks which cannot transport full-sized packets, or by clients
      sending requests by hand and not typing fast enough, or forgetting to enter
      the empty line at the end of the request. The HTTP status code is likely a
      408 here.

    Correct on the 408, but we're getting literally thousands of these requests every hour. (This log snippet is a clip of about 10 seconds of time...)
      Jun 30 11:08:52 localhost haproxy[6320]: 92.22.213.32:26448 [30/Jun/2011:11:08:42.384] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10002 408 212 - - cR-- 35/35/18/0/0 0/0 "<BADREQ>"
      Jun 30 11:08:54 localhost haproxy[6320]: 71.62.130.24:62818 [30/Jun/2011:11:08:44.457] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10001 408 212 - - cR-- 39/39/16/0/0 0/0 "<BADREQ>"
      Jun 30 11:08:55 localhost haproxy[6320]: 84.73.75.236:3589 [30/Jun/2011:11:08:45.021] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10008 408 212 - - cR-- 35/35/15/0/0 0/0 "<BADREQ>"
      Jun 30 11:08:55 localhost haproxy[6320]: 69.39.20.190:49969 [30/Jun/2011:11:08:45.709] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 37/37/16/0/0 0/0 "<BADREQ>"
      Jun 30 11:08:56 localhost haproxy[6320]: 2.29.0.9:58772 [30/Jun/2011:11:08:46.846] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10001 408 212 - - cR-- 43/43/22/0/0 0/0 "<BADREQ>"
      Jun 30 11:08:57 localhost haproxy[6320]: 212.139.250.242:57537 [30/Jun/2011:11:08:47.568] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 42/42/21/0/0 0/0 "<BADREQ>"
      Jun 30 11:08:58 localhost haproxy[6320]: 74.79.195.75:55046 [30/Jun/2011:11:08:48.559] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 46/46/24/0/0 0/0 "<BADREQ>"
      Jun 30 11:08:58 localhost haproxy[6320]: 74.79.195.75:55044 [30/Jun/2011:11:08:48.554] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10004 408 212 - - cR-- 45/45/24/0/0 0/0 "<BADREQ>"
      Jun 30 11:08:58 localhost haproxy[6320]: 74.79.195.75:55045 [30/Jun/2011:11:08:48.554] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10005 408 212 - - cR-- 44/44/24/0/0 0/0 "<BADREQ>"
      Jun 30 11:09:00 localhost haproxy[6320]: 68.197.56.2:52781 [30/Jun/2011:11:08:50.975] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 49/49/28/0/0 0/0 "<BADREQ>"

    From what I read on Google, if I want to see what the bad requests are, I can send "show errors" to the socket and it will spit them out. We do run a pretty heavily trafficked website and the percentage of BADREQs to normal requests is quite low, but I'd like to be able to get hold of what that request WAS so I can debug it.

    Stats:

      # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,
      dcaustin,FRONTEND,,,64,120,50000,88433,105889100,2553809875,0,0,4641,,,,,OPEN,,,,,,,,,1,1,0,,,,0,45,0,128,
      dcaustin,web1,0,0,10,28,300,20941,25402112,633143416,,0,,0,3,0,0,UP,1,1,0,0,0,2208,0,,1,1,1,,20941,,2,11,,30,
      dcaustin,web2,0,0,9,30,300,20941,25026691,641475169,,0,,0,3,0,0,UP,1,1,0,0,0,2208,0,,1,1,2,,20941,,2,11,,30,
      dcaustin,web3,0,0,10,27,300,20940,30116527,635015040,,0,,0,9,0,0,UP,1,1,0,0,0,2208,0,,1,1,3,,20940,,2,10,,31,
      dcaustin,web4,0,0,5,28,300,20940,25343770,643209546,,0,,0,8,0,0,UP,1,1,0,0,0,2208,0,,1,1,4,,20940,,2,11,,31,
      dcaustin,BACKEND,0,0,34,95,50000,83762,105889100,2553809875,0,0,,0,34,0,0,UP,4,4,0,,0,2208,0,,1,1,0,,83762,,1,43,,122,

    88,500 "sessions" and 4,500 errors in the last 20 minutes.

    Read the article

  • Extract Key and Certificate from Kemp Loadmaster?

    - by Matt Simmons
    I'm trying very hard to get away from a set of Kemp Loadmasters that I bought years ago to provide HA access to our website. Part of that process is going to be putting the key and certificate into the new solution (HAProxy with nginx doing SSL). Unfortunately, I've come up against a problem... The Kemp has built-in certificate management, and it generates CSRs at the touch of a button. It also supports importing of signed certificates; however it does not, so far as I can tell, allow any kind of export of the key itself. There is a "backup key and certificates" ability, but here's the text from the manual:

      LoadMaster supports exporting of ALL certificate information. This includes
      private key, host and intermediate certificates. The export file is designed
      to be used for import into another LoadMaster and is encrypted. Export and
      import can be completed using the WUI at Certificates -> Backup/Restore
      Certs. Please make sure to note the pass phrase used to create the export,
      it will be required to complete the import. You can selectively restore only
      Virtual Service certificates including private keys, intermediate
      certificates or both.

    Well, that is great, but as for actually DEALING with the certs, I'm apparently out of luck. Of course, I'm not going to give up that easily. I ran "file" on the saved cert bundle and got this:

      $ file client1.certs.backup
      client1.certs.backup: gzip compressed data, from Unix

    Well, awesome, I thought. Maybe it's just a .tar.gz, so I unzipped it, and that went fine, but my attempts to untar it didn't work, and running "file" on it now just gives this:

      $ file client1.certs.backup
      client1.certs.backup: data

    So that's where I'm stuck. Anyone have experience with these?

    Read the article

  • GlusterFS vs Ceph, which is better for production use for the moment?

    - by Mickey Shine
    I am evaluating GlusterFS and Ceph. Gluster is FUSE-based, which means it may not be as fast as Ceph, but it looks like Gluster has a very friendly control panel and is easy to use. Ceph was merged into the Linux kernel a few days ago, which indicates that it has much more potential and may be a good choice in the future. I am wondering which of the two (if either) is the better choice for production use at the moment. It would be nice if you could share your practical experiences.

    Read the article

  • Campus Network Design - Firewalls

    - by user3081239
    I am designing a campus network, and the design looks like this: LINX is the London Internet Exchange and JANET is the Joint Academic Network. My goal is an almost fully redundant network with high availability, because it will have to support about 15k people, including academic staff, administrative staff and students. I have read some documents in the process, but I am still not sure about some aspects. I want to dedicate this question to firewalls: what are the driving factors in deciding to employ a dedicated firewall instead of an embedded firewall in the border router? From what I can see, an embedded firewall has these advantages: easier to maintain, better integration, one less hop, less space required, and lower cost. A dedicated firewall has the advantage of being modular. Is there anything else? What am I missing?

    Read the article

  • High CPU from httpd process

    - by KHWeb
    I am currently getting high CPU on a server that is just running a couple of sites with very low traffic. One of the sites is still in development, going live soon. However, this site is very, very slow... When browsing through its pages I can see that the CPU goes from 30% to 100% for httpd (see top output below). I have tuned httpd, MySQL, Apache Solr and Tomcat for high performance, and I am using APC. Not sure what to do from here or how to find the culprit, as I have a bunch of messages in the httpd log and have been chasing dead ends for some time... any help is greatly appreciated.

    Server: AuthenticAMD, Quad-Core AMD Opteron(tm) Processor 2352, RAM 16GB, Linux 2.6.27 64-bit, CentOS 5.5, Plesk 9.5.4, MySQL 5.1.48, PHP 5.2.17, Apache/2.2.3 (CentOS) DAV/2 mod_jk/1.2.15 mod_ssl/2.2.3 OpenSSL/0.9.8e-fips-rhel5 PHP/5.2.17 mod_perl/2.0.4 Perl/v5.8.8, Tomcat6-6.0.29-1.jpp5, Tomcat-native-1.1.20-1.el5, Apache Solr

    top:

      17595 apache  20  0 1825m 507m  10m R 100.4  3.2   0:17.50 httpd
      17596 apache  20  0 1565m 247m 9936 R  83.1  1.5   0:10.86 httpd
      17598 apache  20  0 1430m 110m 6472 S  54.5  0.7   0:08.66 httpd
      17599 apache  20  0 1438m 124m  12m S  37.2  0.8   0:11.20 httpd
      16197 mysql   20  0 13.0g 2.0g 5440 S   9.6 12.6 297:12.79 mysqld
      17617 root    20  0 12748 1172  812 R   0.7  0.0   0:00.88 top
       8169 tomcat  20  0 4613m 268m 6056 S   0.3  1.7   6:40.56 java

    httpd error_log:

      [debug] prefork.c(991): AcceptMutex: sysvsem (default: sysvsem)
      [info] mod_fcgid: Process manager 17593 started
      [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17594 for worker proxy:reverse
      [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 17594 for (*)
      [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17595 for worker proxy:reverse
      [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized
      [notice] child pid 22782 exit signal Segmentation fault (11)
      [error] (43)Identifier removed: apr_global_mutex_lock(jk_log_lock) failed
      [debug] util_ldap.c(2021): LDAP merging Shared Cache conf: shm=0x7fd29a5478c0 rmm=0x7fd29a547918 for VHOST: example.com
      [info] APR LDAP: Built with OpenLDAP LDAP SDK
      [info] LDAP: SSL support available
      [info] Init: Seeding PRNG with 256 bytes of entropy
      [info] Init: Generating temporary RSA private keys (512/1024 bits)
      [info] Init: Generating temporary DH parameters (512/1024 bits)
      [debug] ssl_scache_shmcb.c(374): shmcb_init allocated 512000 bytes of shared memory
      [debug] ssl_scache_shmcb.c(554): entered shmcb_init_memory()
      [debug] ssl_scache_shmcb.c(576): for 512000 bytes, recommending 4265 indexes
      [debug] ssl_scache_shmcb.c(619): shmcb_init_memory choices follow
      [debug] ssl_scache_shmcb.c(621): division_mask = 0x1F
      [debug] ssl_scache_shmcb.c(623): division_offset = 96
      [debug] ssl_scache_shmcb.c(625): division_size = 15997
      [debug] ssl_scache_shmcb.c(627): queue_size = 2136
      [debug] ssl_scache_shmcb.c(629): index_num = 133
      [debug] ssl_scache_shmcb.c(631): index_offset = 8
      [debug] ssl_scache_shmcb.c(633): index_size = 16
      [debug] ssl_scache_shmcb.c(635): cache_data_offset = 8
      [debug] ssl_scache_shmcb.c(637): cache_data_size = 13853
      [debug] ssl_scache_shmcb.c(650): leaving shmcb_init_memory()

    Read the article

  • Suggestions for programming language and database for a high end database querying system (>50 million queries a day)

    - by mmdave
    These requirements are sketchy at the moment, but I will appreciate any insights. We are exploring what would be required to build a system that can handle 50 million database queries a day - specifically from the programming language and database choice. It's not a typical website, but an API / database accessed through the internet. Speed is critical. The application will primarily receive these inputs (about a few kb each) and will have to address each of them via a database lookup. Only a few kb will be returned. The server will be run over https/ssl.

    Read the article

  • HTTPS Stunnel and Haproxy

    - by panalbish
    I am trying to use stunnel in front of HAProxy for SSL support. SSL certificates are located according to the stunnel configuration. I am able to get the HTTPS connection, but every time I use HTTPS the session gets lost. I am not using the Tomcat 8443 port to get the secure content. Is it possible to get the HTTPS connection using only stunnel and HAProxy? My requirement is to have an HTTPS connection once the user is logged in.
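
    A minimal sketch of one common workaround, assuming the sessions are lost because every decrypted connection reaches HAProxy from stunnel's own address (so source-IP stickiness can't work): switch the backend to cookie-based persistence. Backend and server names are illustrative.

      # Sketch: persist on an inserted cookie rather than the source IP
      backend app_servers
          mode http
          balance roundrobin
          cookie SERVERID insert indirect nocache
          server app1 10.0.0.11:8080 check cookie app1
          server app2 10.0.0.12:8080 check cookie app2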

    Read the article

  • HAProxy -- pause/queue all traffic without losing requests

    - by Marc
    I basically have the same problem as mentioned in this thread -- I would like to temporarily suspend all requests to all servers of a certain backend, so that I can upgrade the backend and the database it uses. Since this is a live system, I would like to queue up requests, and send them to the backend servers once they've been upgraded. Since I'm doing a database upgrade with the code change, I have to upgrade all backend servers simultaneously, so I can't just bring one down at a time. I tried using the tcp-request options combined with removing the static healthcheck file as mentioned in that thread, but had no luck. Setting the default "maxconn" value to 0 seems to pause and queue connections as desired, but then there seems to be no way to increase the value back to a positive number without restarting HAProxy, which kills all requests that had been queued up until that point. (The "hot-reconfiguration" options using -sf and -st start a new process, which doesn't seem to do what I want). Is what I'm trying to do possible?

    Read the article

  • Netscaler - Involving Response Body Replacement with GeoIP Implementation

    - by MrGoodbyte
    I have a Netscaler (10000) NS9.3: Build 52.3.cl. There is a document containing a short tutorial about GeoIP database implementation at http://support.citrix.com/article/CTX130701 I have added the location database by following this tutorial successfully, but instead of dropping connections I need to replace a string in the response body. To do that, I created a rewrite action with these parameters:

      Name: ns_country_replacement_action
      Type: REPLACE_ALL
      Target Expression: HTTP.RES.BODY(50000)
      Replacement Text: HTTP.RES.HEADER("international")
      Pattern: \{\/\*COUNTRY_NAME\*\/\}

    To insert the header called "international", I try to create another rewrite action with these parameters (I will bind this one's policy with higher priority):

      Name: ns_country_injection_action
      Type: INSERT_HTTP_HEADER
      Header Name: international
      Header Value: CLIENT.IP.SRC.MATCHES_LOCATION("*.US.*.*.*.*").NOT

    When I click on the Create button, it says:

      Compound expression syntax error, [.*.*").NOT^, 50]

    I'm not sure, but I think one of two things might cause this error: 1) the expression I use for MATCHES_LOCATION is wrong, although I use the same expression as the tutorial; 2) MATCHES_LOCATION().NOT returns a BOOLEAN but the field expects a STRING. How can I get this to work? Am I using the right tools to accomplish what I need to do? Thank you. -Umut

    Read the article

  • NLB on 2 Windows Server 2003 Servers

    - by Paul Hinett
    I need to configure Windows NLB on 2 dedicated servers I have. My main machine has been running for some time, with several domain names pointing to the server's primary IP address. Both servers have 2 NICs installed, and both have several secondary public IP addresses available if needed. What IP address would I use for the cluster IP, and does this IP need to be added to the IP list of both public NICs? What IP addresses do I use for the hosts' dedicated IPs? Please help, this is driving me nuts... I've taken down the server twice by accident today! Thank you in advance!

    Read the article

  • HAProxy: session stickiness triggered by response header possible?

    - by zoli
    I'm investigating HAProxy as a possible replacement for F5. F5 is capable of persisting a session based on a response header value:

      when HTTP_RESPONSE {
          set session [HTTP::header X-Session]
          if {$session ne ""} {
              persist add uie $session
          }
      }

    and then routing all subsequent requests which contain the same session ID in a header, query parameter, path, etc. to the same machine, e.g.:

      when HTTP_REQUEST {
          set session [findstr [HTTP::path] "/session/" 9 /]
          if {$session} {
              persist uie $session
          }
      }

    I'm wondering if this is even possible to do with HAProxy?
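
    A minimal sketch of what this might look like in HAProxy, assuming a version whose stick tables can store a response header; the backend name, header name and table sizes are illustrative:

      # Sketch: learn the session id from the response header,
      # then match it on subsequent requests.
      backend app
          mode http
          balance roundrobin
          stick-table type string len 64 size 200k expire 30m
          stick store-response res.hdr(X-Session)
          stick match req.hdr(X-Session)
          server app1 10.0.0.11:80 check
          server app2 10.0.0.12:80 check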

    Read the article

  • Limit Apache 2 Memory Usage

    - by UltraNurd
    I am running a hobby webserver off of an ancient Blue & White G3/300 running Debian PPC Squeeze 2.6.30. The performance is okay for a while after a restart, but it eventually gets more and more bogged down. Right now it's at 76 days uptime, and the main culprit seems to be the memory usage of 10+ apache2 processes. I think I need to lower the values for StartServers, MinSpareServers, and/or MaxSpareServers, but I'm not sure which one to adjust, and there are three sections for each depending on which mpm module is in use. How do I tell which of the following sections I need to change, and what are some reasonable values given that the box has 448 MB physical memory (weird upgrade history of one each of 64, 128, and 256 MB sticks)?

      <IfModule mpm_prefork_module>
          StartServers          5
          MinSpareServers       5
          MaxSpareServers      10
          MaxClients          150
          MaxRequestsPerChild   0
      </IfModule>

      <IfModule mpm_worker_module>
          StartServers          2
          MinSpareThreads      25
          MaxSpareThreads      75
          ThreadLimit          64
          ThreadsPerChild      25
          MaxClients          150
          MaxRequestsPerChild   0
      </IfModule>

      <IfModule mpm_event_module>
          StartServers          2
          MaxClients          150
          MinSpareThreads      25
          MaxSpareThreads      75
          ThreadLimit          64
          ThreadsPerChild      25
          MaxRequestsPerChild   0
      </IfModule>

    There aren't any other instances of StartServers in my apache2.conf, but none of those mpm modules appear in mods-available or mods-enabled. Ideas? Thanks!
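
    A minimal sketch of one direction, assuming the prefork MPM is the one in use (typical for Debian with mod_php): on a 448 MB box the values that matter most are MaxClients and MaxRequestsPerChild, sized so that MaxClients times the per-process memory stays well under physical RAM. The numbers below are illustrative, not tuned.

      # Sketch for ~448 MB RAM, assuming roughly 25-30 MB per apache2 process
      <IfModule mpm_prefork_module>
          StartServers            2
          MinSpareServers         2
          MaxSpareServers         4
          MaxClients             10
          MaxRequestsPerChild   500   # recycle children so leaked memory is returned
      </IfModule>

    Which MPM is actually compiled in can be checked with something like apache2 -V (look for the "Server MPM" line).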

    Read the article

  • QoS / PBR Routing Questions

    - by Bernard
    I have a 50Mbps satellite link and a 10Mbps microwave link supplying a very remote location. Behind these links I have a 6,400 seat network, with about 3,000 signed in at any one time. My goal is to send all of the VoIP traffic (Google Chat, Magic Jack, Skype, Speakeasy, Vonage, Vonage PC, Yahoo) through the microwave link, which has 100ms latency. The rest of the traffic can utilize any remaining bandwidth of the microwave link, with excess being diverted to the higher-latency (600ms) satellite connection. The problem I've had so far is that most automatic routing configurations weigh bandwidth heavily for preference - and I only want latency considered. Additionally, I don't know if this can even be handled with the routing hardware I have at my disposal (Cisco 3640, 3745, & 3845). Any recommendations (or really good starting points) would be greatly appreciated.
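
    A heavily hedged IOS sketch of the policy-based-routing half of this idea - classifying VoIP by port is messy in practice, so the ACL, interface and next-hop address below are purely illustrative:

      ! Sketch: classify "VoIP-ish" traffic and force it out the microwave path
      access-list 110 permit udp any any range 16384 32767   ! illustrative RTP range
      access-list 110 permit udp any any eq 5060              ! SIP signalling
      !
      route-map VOIP-VIA-MICROWAVE permit 10
       match ip address 110
       set ip next-hop 192.0.2.1        ! microwave link next hop (illustrative)
      !
      interface FastEthernet0/0         ! LAN-facing interface (illustrative)
       ip policy route-map VOIP-VIA-MICROWAVE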

    Read the article

  • HAProxy authenticated httpchk (health check)

    - by Markel
    I am using HAProxy on EC2 and using httpchk to manage node availability. I had used a pseudo-unique path as the health check route, in an attempt to make sure only my servers responded to the health check. Earlier today an EC2 server fell out of existence, and before the haproxy config was auto-regenerated (controller issues), Amazon had reassigned the IP to someone who returns 200 for every request (honeypot?). My HAProxy host then pulled the server back into rotation and started distributing some of my traffic there until the controller recovered and removed the IP from the list. TL;DR: Is there a way to add a server authentication method to HAProxy's httpchk?
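
    A minimal sketch of one commonly used approach: send a shared-secret header (here, HTTP Basic auth) with the health check and have only your own servers answer 200 to it. The path and credentials (base64 of "haproxy:s3cr3t") are illustrative.

      # Sketch: health check with an Authorization header a stray host won't know
      backend app_servers
          option httpchk GET /health-3f9c2a HTTP/1.0\r\nAuthorization:\ Basic\ aGFwcm94eTpzM2NyM3Q=
          server app1 10.0.0.11:80 check

    The application behind it would need to require those credentials on the check path, so anything else answering that IP fails the check with a 401.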

    Read the article

  • PXE booting LACP hosts on Force10 S50N with FTOS

    - by lolwutreddit
    Hardware: S50N
    Firmware: FTOS 8.4.2.6

    Problem: We're trying to PXE boot some servers that are connected via port-channel interfaces with LACP.

    Current work-around: we PXE boot a server with a single interface (eth0), and then use a Perl script to turn up the port-channel interfaces after the server is built.

    Details: Is anyone doing anything similar on Force10 S50 switches with FTOS? If not, is anyone doing this on another S series, or a larger chassis-based Force10? I'm wondering if Native VLAN will solve this, since ports in a port-channel cannot explicitly have a VLAN set, and they don't seem to use the tagged or untagged VLAN that the port channel is in. I will confirm this next (I think it's the only thing I haven't tried).

    Juniper example: http://broken.net/openindiana/how-to-pxe-boot-systems-on-lacp-using-juniper-switches/

    Cisco: there are plenty of documented ways to solve this issue on IOS and Nexus.

    Update/Edit: since there seems to be no way to use interface or port-channel mode commands to get the individual interfaces to show up in spanning-tree (RSTP in this case), the ports should never go into a forwarding state. I'm not going to mess with it anymore unless a) someone who has experience passes it on, or b) Force10 comes up with a solution for this (I'm guessing it will only be introduced on other S platforms (S55, S60), since the S50 seems to be near EOL). I'm basing that on the fact that the Open Automation type features are only being supported on the newer switches.

    Read the article

  • F5/BigIP rule to redirect affinity-bound users from INACTIVE pool node to other ACTIVE node

    - by j pimmel
    We have several server nodes set up for the end users of our system, and because we don't use any kind of session replication in the app servers, F5 maintains affinity for users with the ACTIVE node the client was first bound to. At times when we want to re-deploy the app, we change the F5 config and take a node out of the ACTIVE pool. Gradually the users filter off and we can deploy, but the process is a bit slow. We can't just dump all the users onto a different node because - given the update-heavy nature of the user activities - we could cause them to lose changes. That said, there is one URL/endpoint - call it http://site/product/list - which we know, when the client hits it, that we could shove them off the INACTIVE node they had affinity with and onto a different ACTIVE node. We have had a few tries at writing an F5 rule along these lines, but haven't had much success, so I thought I might ask here, assuming it's possible - I have no reason to think it's not, based on what we have found so far.
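
    A heavily hedged iRule sketch of the idea: when the "safe" endpoint is requested, drop the existing persistence record so the load balancer can pick an active member of the normal pool. The pool name and path are illustrative.

      # Sketch: release affinity on the safe endpoint only
      when HTTP_REQUEST {
          if { [HTTP::path] starts_with "/product/list" } {
              persist none
              pool active_pool
          }
      }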

    Read the article
