Search Results

Search found 5793 results on 232 pages for 'requests'.

  • How to test that an ASP.NET MVC route redirects to another site?

    - by Matt Lacey
    Due to a printing error in some promotional material, I have a site that is receiving a lot of requests which should be for one site arriving at another. I.e. the valid sites are http://site1.com/abc & http://site2.com/def, but people are being told to go to http://site1.com/def. I have control over site1 but not site2. site1 contains logic, in an action filter, for checking that the first part of the route is valid:

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            base.OnActionExecuting(filterContext);

            if ((!filterContext.ActionParameters.ContainsKey("id"))
                || (!manager.WhiteLabelExists(filterContext.ActionParameters["id"].ToString())))
            {
                if (filterContext.ActionParameters["id"].ToString().ToLowerInvariant().Equals("def"))
                {
                    filterContext.HttpContext.Response.Redirect("http://site2.com/def", true);
                }

                filterContext.Result = new ViewResult { ViewName = "NoWhiteLabel" };
                filterContext.HttpContext.Response.Clear();
            }
        }

    I'm not sure how to test the redirection to the other site, though. I already have tests for redirecting to "NoWhiteLabel" using the MvcContrib Test Helpers, but as far as I can see these aren't able to handle this situation. How do I test the redirection to another site?
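
    Not having used MvcContrib for this, here is a hedged sketch of one way to assert the redirect directly, using Moq and NUnit (both assumed available). WhiteLabelFilter and StubManager are hypothetical stand-ins for the filter and its manager dependency; the stub is assumed to return false from WhiteLabelExists("def"):

        using System.Collections.Generic;
        using System.Web;
        using System.Web.Mvc;
        using Moq;
        using NUnit.Framework;

        [TestFixture]
        public class WhiteLabelFilterTests
        {
            [Test]
            public void Def_requests_redirect_to_site2()
            {
                // Loose mocks, so the Clear() call at the end of the filter is absorbed.
                var response = new Mock<HttpResponseBase>();
                var httpContext = new Mock<HttpContextBase>();
                httpContext.Setup(c => c.Response).Returns(response.Object);

                // ActionExecutingContext has a parameterless constructor and settable
                // virtual properties, so no extra test-helper plumbing is needed here.
                var filterContext = new ActionExecutingContext
                {
                    HttpContext = httpContext.Object,
                    ActionParameters = new Dictionary<string, object> { { "id", "def" } }
                };

                // StubManager is hypothetical: WhiteLabelExists("def") returns false,
                // which drives the filter into the redirect branch.
                var filter = new WhiteLabelFilter(new StubManager());
                filter.OnActionExecuting(filterContext);

                response.Verify(r => r.Redirect("http://site2.com/def", true));
            }
        }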

  • Erlang OTP application design

    - by Toby Hede
    I am struggling a little coming to grips with the OTP development model as I convert some code into an OTP app. I am essentially making a web crawler, and I just don't quite know where to put the code that does the actual work. I have a supervisor which starts my worker:

        -behaviour(supervisor).
        -define(CHILD(I, Type), {I, {I, start_link, []}, permanent, 5000, Type, [I]}).

        init(_Args) ->
            Children = [?CHILD(crawler, worker)],
            RestartStrategy = {one_for_one, 0, 1},
            {ok, {RestartStrategy, Children}}.

    In this design, the crawler worker is then responsible for doing the actual work:

        -behaviour(gen_server).

        start_link() ->
            gen_server:start_link(?MODULE, [], []).

        init([]) ->
            inets:start(),
            httpc:set_options([{verbose_mode, true}]),
            % gen_server:cast(?MODULE, crawl),
            % ok = do_crawl(),
            {ok, #state{}}.

        do_crawl() ->
            % crawl!
            ok.

        handle_cast(crawl, State) ->
            ok = do_crawl(),
            {noreply, State};

    do_crawl spawns a fairly large number of processes and requests that handle the work of crawling via HTTP. The question, ultimately, is: where should the actual crawl happen? As can be seen above, I have been experimenting with different ways of triggering the actual work, but I am still missing some concept essential for grokking the way things fit together. Note: some of the OTP plumbing is left out for brevity - the plumbing is all there and the system all hangs together.

  • client-server syncing methodology [theoretical]

    - by Kenneth Ballenegger
    I'm in the process of building a web app that syncs with an iOS client. I'm currently trying to figure out how to go about syncing. I've come up with the following two directions.

    I've got a fairly simple server web app with a list of items. They are ordered by date modified, and as such the order does not need to be synced. One direction I'm considering is to let the client deal with syncing. I've already got an API that lets the client get the data, as well as perform certain actions on it, such as updating, adding or removing single items. I was considering: 1) on each sync, asking the server for all items modified since the last successful sync and updating the local records based on what's returned by the server, and 2) building a persistent queue of create / remove / update requests on the client, and keeping them until confirmation by the server. The risk with this approach is that I'm basically asking each side to send changes to the other side, hoping it works smoothly, but risking a divergence at some point. This would probably be more bandwidth-efficient, though.

    The other direction I was considering is a more traditional model. I would have a "sync" process in which the client sends its whole list to the server (or a subset modified since the last sync), the server updates its data (resolving conflicts by keeping the last-modified item, and keeping deleted items with a deleted = 1 field), and the server returns an updated list of items (since the last successful sync) which the client then replaces its data with, as in the sketch below. Thoughts?
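
    As a concrete reading of that second approach, here is a minimal C# sketch (C# purely for concreteness; all names are illustrative assumptions) of the server-side merge: last-modified wins on conflict, deletions survive as tombstones, and the response is everything the client has not yet seen:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Item
        {
            public int Id;
            public DateTime ModifiedAt;
            public bool Deleted;     // tombstone: deletes are kept, not removed
            public string Payload;
        }

        static class SyncServer
        {
            // Merge the client's changes, resolving conflicts by last-modified,
            // then return everything changed since the client's last sync.
            public static IEnumerable<Item> Sync(
                IDictionary<int, Item> serverItems,
                IEnumerable<Item> clientChanges,
                DateTime lastSuccessfulSync)
            {
                foreach (var incoming in clientChanges)
                {
                    Item current;
                    if (!serverItems.TryGetValue(incoming.Id, out current)
                        || incoming.ModifiedAt > current.ModifiedAt)
                    {
                        serverItems[incoming.Id] = incoming;  // newer side wins
                    }
                }
                return serverItems.Values
                    .Where(i => i.ModifiedAt > lastSuccessfulSync)
                    .ToList();
            }
        }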

  • Code Own Socket Server or Use Red5/ElectroServer on Amazon EC2?

    - by Travis
    I've been thinking for a long time about working on a multiplayer game in Flash. I need updates frequently enough that AJAX requests won't work, so I need to use a socket server. The system will eventually have enough objects/players that I would consider it an MMO. I would like to set up a scalable system on Amazon's EC2 (which probably affects my choice of server). This architecture would hopefully allow the game to grow without many changes over time (using a domain decomposition technique or something similar). Here's my internal debate. Should I:

        a. code my own socket server in C++ or Java?
        b. use the free and open source Red5 socket server for Flash?
        c. pay the licensing fees and go for ElectroServer?

    I consider myself a decent developer, but am at an impasse as to which road to go down. I'm not sure if I could develop - or would need - the features of one of the prepackaged socket servers. I'm also not sure if the prepackaged servers would work well in an Amazon EC2 environment and take full advantage of its features. Any help or guidance would be greatly appreciated.

  • Disable proxy for entire application?

    - by Brent
    Ever since upgrading to Visual Studio 2010, I'm running into an issue where the first web request of any type (WebRequest, WebClient, etc.) hangs for about 20 seconds before completing. Subsequent calls work quickly. I've narrowed the problem down to a proxy issue. If I manually disable proxy settings, I don't experience this delay:

        Dim wrq As WebRequest = WebRequest.Create(Url)
        wrq.Proxy = Nothing

    What's strange is that there are no proxy settings enabled on this machine in Internet Options. What I'm wondering is if there is a way to disable proxy settings for my entire project in one shot, without explicitly disabling them as above for every web object. The main reason I want to be able to do this is that I'm trying to use an API (http://code.google.com/p/google-api-for-dotnet/) which uses web requests but does not provide any way to manually disable proxy settings. I have found some information suggesting that I need to add some proxy information to the app.config file, but I get errors building my program if I make any edits to that file. Can anyone point me in the right direction?
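
    For what it's worth, there is a process-wide switch that should cover this: clear the default proxy once at startup, shown here as a C# sketch (the VB equivalent is WebRequest.DefaultWebProxy = Nothing). The declarative alternative is <system.net><defaultProxy enabled="false" /></system.net> in app.config, which does the same thing without code:

        using System.Net;

        static class ProxyConfig
        {
            // Call once at startup (Main / Application_Start), before the
            // first request object is created.
            public static void DisableProxyGlobally()
            {
                // WebRequest and WebClient fall back to this process-wide
                // default whenever no per-request proxy is set, so this also
                // covers third-party libraries that never touch .Proxy.
                WebRequest.DefaultWebProxy = null;
            }
        }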

  • Implementing a State Machine in Angular.js to control routing

    - by ldn_tech_exec
    Can anyone help me with integrating a state machine to control routing? What's the best method to do this? Create a service? I basically need to intercept every $location request, run the state machine, and let it figure out what the next $location.path should be.

    Think of the problem like a bank of questions that get added and removed over time. The user visits once in a while, passes the user's answers object to the state machine, and the state machine figures out which question to load. This is my pseudocode, but I need to figure out where to put it, or what event I can hook into, to make sure all route requests are passed through the machine. Do I need a specific stateMachine controller? Do I create a service? Where do I use the service? Do I need to override $locationProvider?

        $scope.user.answers = [{
            id: 32,
            answer: "whatever"
        }, {
            id: 33,
            answer: "another answer"
        }];

        $scope.questions = [{
            id: 32,
            question: "what is your name?",
            path: "/question/1"
        }, {
            id: 34,
            question: "how old are you?",
            path: "/question/2"
        }];

        var questions = $scope.questions;
        angular.forEach(questions, function(question) {
            // pseudocode: load the first question the user has not answered
            if (question.id !exist in $scope.user.answers.id) {
                $location.path = question.path;
                break;
            }
        });

    Thanks

  • I'm trying to grasp the concept of creating a program that uses a MS SQL database, but I'm used to r

    - by Sergio Tapia
    How can I make a program use an MS SQL server, and have that program work on whatever computer it's installed on? If you've been following my string of questions today, you'd know that I'm making an open source and free help desk suite for small and medium businesses.

    The client application: a Windows Forms app. On installation and first launch on every client machine, it'll ask for the address of the main help desk server.

    The server: here I plan to handle all incoming help requests, show them to the IT guys, and provide WCF services for the client application to consume.

    My dilemma is that I know how to make the program run on my local machine, but I'm really stumped on how to make this work for everyone who wants to download and install the server bit on their Windows Server. Would I have to make an SQL script and have it run on the MS SQL server when a user wants to install the 'server' application? Many thanks to all for your valuable time and effort to teach me. It's really, really appreciated. :)
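
    One common pattern is exactly that: ship a schema script with the server installer and have the application apply it on first run. A hedged sketch under stated assumptions ("schema.sql" bundled next to the executable, a connection string gathered during setup, and a naive split on "GO" batch separators, which SqlCommand cannot execute directly); all names are illustrative:

        using System;
        using System.Data.SqlClient;
        using System.IO;

        static class Installer
        {
            public static void EnsureSchema(string connectionString)
            {
                string script = File.ReadAllText("schema.sql");
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    // "GO" is an SSMS/sqlcmd separator, not T-SQL, so split
                    // the script into batches before executing.
                    foreach (string batch in script.Split(
                        new[] { "\r\nGO\r\n", "\nGO\n" },
                        StringSplitOptions.RemoveEmptyEntries))
                    {
                        using (var cmd = new SqlCommand(batch, conn))
                            cmd.ExecuteNonQuery();
                    }
                }
            }
        }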

  • How would you protect a database of links from being scraped?

    - by Yegor
    I have a large database of links, which are all sorted in specific ways and are attached to other information, which is valuable (to some people). Currently my setup (which seems to work) simply calls a PHP file like link.php?id=123; it logs the request with a timestamp into the DB. Before it spits out the link, it checks how many requests were made from that IP in the last 5 minutes. If it's greater than x, it redirects you to a captcha page.

    That all works fine and dandy, but the site has been getting really popular (as well as being DDoSed for about 6 weeks), so PHP has been getting floored, and I'm trying to minimize the number of times I have to hit up PHP to do something. I wanted to show links in plain text instead of through link.php?id= and have an onclick function simply add 1 to the view count. I'm still hitting up PHP, but at least if it lags, it does so in the background, and the user can see the link they requested right away.

    Problem is, that makes the site REALLY scrapable. Is there anything I can do to prevent this, but still not rely on PHP to do the check before spitting out the link?

  • Editing a User's Likes on Facebook

    - by Ed Marty
    I've been looking at the Facebook API to find some way to edit a user's Likes (that is, add or remove items from https://graph.facebook.com/me/likes/). The API doesn't say anything about it specifically, but does say this:

        You can publish to the Facebook graph by issuing HTTP POST requests to the appropriate connection URLs above.

    where, above, one of the connection URLs is the aforementioned https://graph.facebook.com/me/likes link. However, there's no documentation for the PROFILE_ID/likes post, and whenever I try to post it returns the error "invalid post_id". I assume this is because to like something, you post a request to POST_ID/likes. It's a bit inconsistent.

    What I'm trying to do is get the user's profile to add a Page to their likes (by posting using the page's id as an "id" parameter in the post body). However, it seems like there's just no way to edit a user's likes. At the end of the day, I just want to allow a user to click a button in my application (a mobile device application, not a web app) and have them add our Facebook page to their list of pages, and I've found no way of doing that short of presenting our page to them and making them click the "Like" button manually. Many other things are supported without showing the Facebook website, like posting to their wall or creating albums, but I can't find anything to do this. Any ideas?

  • Avoiding GC thrashing with WSE 3.0 MTOM service

    - by Leon Breedt
    For historical reasons, I have some WSE 3.0 web services that I cannot upgrade to WCF on the server side yet (it is also a substantial amount of work to do so). These web services are being used for file transfers from client to server, using MTOM encoding. This also cannot be changed in the short term, for reasons of compatibility. Secondly, they are being called from both Java and .NET, and therefore need to be cross-platform, hence MTOM.

    How it works is that an "upload" WebMethod is called by the client, sending up a chunk of data at a time, since files being transferred could potentially be gigabytes in size. However, because I cannot control parts of the stack before the WebMethod is invoked, I cannot control the memory usage patterns of the web service. The problem I am running into is that for file sizes from 50MB or so onwards, performance is absolutely killed by GC, since it appears that WSE 3.0 buffers each chunk received from the client in a new byte[] array, and by the time we've done 50MB we're spending 20-30% of our time doing GC. I've played with various chunk sizes, from 16k to 2MB, with no real difference in results: smaller chunks are killed by the latency involved in round-tripping, and larger chunks just postpone the slowdown until GC kicks in.

    Any bright ideas on cutting down on the garbage created by WSE? Can I plug into the pipeline somehow and jury-rig something that has access to the client's request stream and streams it to the WebMethod? I'm aware that it is possible to "stream" responses to the client using WSE (albeit very ugly), but this problem is with requests from the client.

  • Spring MVC: How to resolve the path to subdirectories of the root 'JSP' folder in a web application

    - by chrisjleu
    What is a simple way to resolve the path to a JSP file that is not located in the root JSP directory of a web application using Spring MVC's view resolvers? For example, suppose we have the following web application structure:

        web-app
        |- WEB-INF
           |- jsp
              |- secure
              |  |- admin.jsp
              |  |- admin2.jsp
              |- index.jsp
              |- login.jsp

    I would like to use some out-of-the-box components to resolve the JSP files within the jsp root folder and the secure subdirectory. I have a *-servlet.xml file that defines:

    an out-of-the-box InternalResourceViewResolver:

        <bean id="jspViewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
            <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"></property>
            <property name="prefix" value="/WEB-INF/jsp/"></property>
            <property name="suffix" value=".jsp"></property>
        </bean>

    a handler mapping:

        <bean id="handlerMapping" class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
            <property name="mappings">
                <props>
                    <prop key="/index.htm">urlFilenameViewController</prop>
                    <prop key="/login.htm">urlFilenameViewController</prop>
                    <prop key="/secure/**">urlFilenameViewController</prop>
                </props>
            </property>
        </bean>

    and an out-of-the-box UrlFilenameViewController controller:

        <bean id="urlFilenameViewController" class="org.springframework.web.servlet.mvc.UrlFilenameViewController">
        </bean>

    The problem I have is that requests to the JSPs in the secure directory cannot be resolved, as the jspViewResolver only has a prefix defined as /jsp/ and not /jsp/secure/. Is there a way to handle subdirectories like this? I would prefer to keep this structure because I'm also trying to make use of Spring Security, and having all secure pages in a subdirectory is a nice way to do this. There's probably a simple way to achieve this, but I'm new to Spring and the Spring MVC framework, so any pointers would be appreciated.

  • SignalR - Handling disconnected users

    - by guilhermeGeek
    Hi, I'm using the SignalR library on a project to handle notification and chat modules. I have a table in a database to keep track of online users. The hub for chat inherits IDisconnect, where I disconnect the user. After disconnecting the user, I warn the other users about that event. At this point, I check whether the disconnected user is the client; if it is, I call a method on the hub to reconnect the user (which just updates the table). I do this because, with the current implementation, once the user closes a tab in the browser the Disconnect task is called, but the user could have another tab open.

    I've not tested this module (with larger request volumes) yet, but on my development server it can take a few seconds between the IDisconnect event and the request from the user to connect again. I'm concerned about my implementation for handling users disconnected from the chat, but I can't see another way to improve it. If possible, could someone give me advice on this, or is this the only solution I have?
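
    For reference, a hedged sketch of one common pattern: reference-count connections per user inside the hub, so a user only counts as offline when the last tab disconnects, which avoids the disconnect-then-reconnect round trip entirely. It assumes the pre-1.0 SignalR API the question describes (Hub plus IDisconnect); all names are illustrative:

        using System.Collections.Concurrent;
        using System.Threading.Tasks;
        using SignalR.Hubs;

        public class ChatHub : Hub, IDisconnect
        {
            // connection id -> user name, and user name -> number of open tabs
            static readonly ConcurrentDictionary<string, string> UserByConnection =
                new ConcurrentDictionary<string, string>();
            static readonly ConcurrentDictionary<string, int> OpenTabs =
                new ConcurrentDictionary<string, int>();

            public void Join(string userName)
            {
                UserByConnection[Context.ConnectionId] = userName;
                OpenTabs.AddOrUpdate(userName, 1, (_, n) => n + 1);
                // mark the user online in the table here
            }

            public Task Disconnect()
            {
                string userName;
                if (UserByConnection.TryRemove(Context.ConnectionId, out userName)
                    && OpenTabs.AddOrUpdate(userName, 0, (_, n) => n - 1) <= 0)
                {
                    // the last tab really closed: only now mark the user offline
                }
                return Task.Factory.StartNew(() => { });
            }
        }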

  • Google App Engine - Uploading blobs and authentication

    - by Keyur
    (I tried asking this on the GAE forums but didn't get an answer, so am trying it here.) Currently, to upload blobs, the App Engine blobstore service creates a unique one-time URL that a user can post blobs to. My requirement is that I only want authenticated / authorized users to post blobs in my application. I can achieve this currently if the page that includes the multipart form to upload blobs is in my application. However, I am looking at providing a "REST API" for my users to upload their blobs. While it is true that the one-time nature of the upload URL mitigates the chances of rogue use, it's still possible.

    I was wondering if there is anyone on the App Engine team here who could consider a feature where developers can register an upload listener (or, if there is already a way, I'll be all ears). A standard servlet filter could also potentially do the job. This would give us an opportunity to authenticate / validate / decorate requests before they get forwarded to the blobstore service. Thanks, Keyur

  • High performance text file parsing in .net

    - by diamandiev
    Here is the situation: I am making a small program to parse server log files. I tested it with a log file with several thousand requests (between 10,000 and 20,000 - I don't know exactly). What I have to do is load the log text files into memory so that I can query them. This is taking the most resources. The methods that take the most CPU time are these (worst culprits first):

        string.Split          - splits the line values into an array of values
        string.Contains       - checks if the user agent contains a specific agent string (to determine browser ID)
        string.ToLower        - various purposes
        StreamReader.ReadLine - reads the log file line by line
        string.StartsWith     - determines if a line is a column-definition line or a line with values

    There were some others that I was able to replace. For example, the dictionary getter was taking lots of resources too, which I had not expected since it's a dictionary and should have its keys indexed. I replaced it with a multidimensional array and saved some CPU time.

    Now I am running on a fast dual core, and the total time it takes to load the file I mentioned is about 1 second. This is really bad. Imagine a site that has tens of thousands of visits a day; it's going to take minutes to load the log file. So what are my alternatives, if any? I think this is just a .NET limitation and I can't do much about it.
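
    A hedged sketch of the two cheapest wins implied by that profile: replace string.Split with a single IndexOf/Substring pass (saving one array allocation per line), and replace ToLower with ordinal case-insensitive comparisons (saving one string allocation per check). The field positions here are illustrative assumptions, not the real log layout:

        using System;
        using System.IO;

        static class LogParser
        {
            public static void Parse(string path)
            {
                using (var reader = new StreamReader(path))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        if (line.StartsWith("#", StringComparison.Ordinal))
                            continue; // column-definition line

                        // Take the first two fields without allocating an array.
                        int first = line.IndexOf(' ');
                        int second = line.IndexOf(' ', first + 1);
                        if (first < 0 || second < 0) continue;

                        string date = line.Substring(0, first);
                        string time = line.Substring(first + 1, second - first - 1);

                        // Browser detection without lower-casing the whole line.
                        bool isIE = line.IndexOf("MSIE",
                            StringComparison.OrdinalIgnoreCase) >= 0;

                        // ...hand date/time/isIE to the query structures here.
                    }
                }
            }
        }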

  • WCF service returns error 500 on /js request

    - by Cine
    I have a WCF service that randomly begins to fail when requesting the autogenerated JavaScript that WCF supports generating, but I have had no luck tracking down why. The /js endpoint is part of the WCF feature set, so I don't know how it can suddenly begin to fail and stay broken until IIS is recycled. The HTTP log gives me:

        2010-06-10 09:11:49 W3SVC2095255988 myip GET /path/myservice.svc/js _=1276161113900 80 - ip browser 500 0 0

    So it's an error 500, and that is about the only thing I can figure out. The event log contains no information. Requests to /path/myservice.svc work just fine. After recycling IIS it works again, and some days later it begins to fail until I recycle IIS.

        <service name="path.myservice" behaviorConfiguration="b">
            <endpoint address=""
                      behaviorConfiguration="eb"
                      binding="webHttpBinding"
                      contract="path.Imyservice" />
        </service>
        ...
        <endpointBehaviors>
            <behavior name="eb">
                <enableWebScript />
            </behavior>
        </endpointBehaviors>
        <serviceBehaviors>
            <behavior name="b">
                <dataContractSerializer maxItemsInObjectGraph="2147483647" />
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" />
            </behavior>
        </serviceBehaviors>

    I don't see any problems in the web.config settings either. Any clues how I can track down what the problem is?

  • Fastest PNG decoder for .NET

    - by sboisse
    Our web server needs to compose many large images together before sending the results to web clients. This process is performance-critical because the server can receive several thousand requests per hour. Right now our solution loads PNG files (around 1MB each) from the HD and sends them to the video card, so the composition is done on the GPU. We first tried loading our images using the PNG decoder exposed by the XNA API. We saw that the performance was not too good. To understand whether the problem was loading from the HD or decoding the PNG, we modified that by loading the file into a memory stream first, and then sending that memory stream to the .NET PNG decoder. The difference in performance between using XNA and using the System.Windows.Media.Imaging.PngBitmapDecoder class is not significant; we get roughly the same levels of performance. Our benchmarks show the following results:

        Load images from disk:                         37.76 ms    1%
        Decode PNGs:                                 2816.97 ms   77%
        Load images on video hardware:                196.67 ms    5%
        Composition:                                   87.80 ms    2%
        Get composition result from video hardware:   166.21 ms    5%
        Encode to PNG:                                318.13 ms    9%
        Store to disk:                                  3.96 ms    0%
        Clean up:                                      53.00 ms    1%
        Total:                                       3680.50 ms  100%

    From these results we see that the slowest part is decoding the PNGs. So we are wondering if there is a PNG decoder we could use that would allow us to reduce the decoding time. We also considered keeping the images uncompressed on the hard disk, but then each image would be 10MB in size instead of 1MB, and since there are several tens of thousands of these images stored on the hard disk, it is not possible to store them all without compression.

  • jquery .submit live click runs more than once

    - by fxuser
    I use the following code to run my form AJAX requests, but when I use the live selector on a button I can see the AJAX response fire 1 time; then, if I retry, it fires 2 times, 3 times, 4 times and so on. I use .live because I also have a feature to add a post, and the new post appears instantly so the user can remove it without refreshing the page. That is what leads to the problem above; using .click could solve it, but it's not the ideal solution I'm looking for.

        jQuery.fn.postAjax = function(success_callback, show_confirm) {
            this.submit(function(e) {
                e.preventDefault();
                if (show_confirm == true) {
                    if (confirm('Are you sure you want to delete this item? You can\'t undo this.')) {
                        $.post(this.action, $(this).serialize(), $.proxy(success_callback, this));
                    }
                } else {
                    $.post(this.action, $(this).serialize(), $.proxy(success_callback, this));
                }
                return false;
            });
            return this;
        };

        $(document).ready(function() {
            $(".delete_button").live('click', function() {
                $(this).parent().postAjax(function(data) {
                    if (data.error == true) {
                    } else {
                    }
                }, true);
            });
        });

    EDIT: a temporary solution is to change

        this.submit(function(e) {

    to

        this.unbind('submit').bind('submit', function(e) {

    The problem is how I can protect it for real, because people who know how to use Firebug (or the equivalent tool in other browsers) can easily alter my JavaScript code and re-create the problem.

  • Architecture Guidance Needed?

    - by vijay
    We are about to automate a number of processes for our reporting team. (The reports are daily reports, weekly reports, monthly reports, etc.) Mostly the process is pulling some data from Oracle and then filling in particular Excel template files. Each report, and so each template, is different from the others. Apart from the Excel file manipulation, there is hardly any business logic behind these. The client wanted an integrated tool, with all the automated processes placed as menus/submenus. Right now there are roughly 30 processes waiting to be automated, and we are expecting more new reports in the next quarter.

    I am nowhere near having any practical experience when it comes to architecting. I have been maintaining two or three systems (more than 4 years old) for this prestigious client, and the tool mentioned above will very likely be maintained for another 3 years. From my past experience I've been through the pain of implementing change requests against a rigid and undocumented code base, resulting in the breakdown of the system and then, eventually, myself. So my main and topmost concern is maintainability. While searching I came across this link on Smart Clients using CAB and SCSF: is it appropriate for my requirement?

    Also, should I place each automated process in a separate form under a single project, or place them in separate projects under a single solution? Please correct me if I have missed any other important information. Thanks.
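
    Whatever the shell framework ends up being, the maintainability concern suggests putting every automated process behind one small contract and letting the shell discover implementations by reflection, so report #31 is a new class plus a template file rather than an edit to existing code. A hedged C# sketch (all names are illustrative assumptions):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;

        public interface IReportTask
        {
            string MenuPath { get; }   // e.g. "Reports/Daily/Sales"
            void Run();                // pull from Oracle, fill the Excel template
        }

        public static class TaskLoader
        {
            // Find every concrete IReportTask in this assembly; the shell can
            // then build its menus from task.MenuPath.
            public static IEnumerable<IReportTask> LoadAll()
            {
                return Assembly.GetExecutingAssembly()
                    .GetTypes()
                    .Where(t => typeof(IReportTask).IsAssignableFrom(t)
                                && !t.IsAbstract && !t.IsInterface)
                    .Select(t => (IReportTask)Activator.CreateInstance(t));
            }
        }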

  • Scheduling of tasks to a single resource using Prolog

    - by Reed Debaets
    I searched through here as best I could, and though I found some relevant questions, I don't think they covered the question at hand. Assume a single resource and a known list of requests to schedule a task. Each request includes a start_after, start_by, expected_duration, and action. The goal is to schedule the tasks for execution as soon as possible while keeping each task scheduled between start_after and start_by. I coded up a simple Prolog example that I "thought" should work, but I've unfortunately been getting errors at run time: "=/2: Arguments are not sufficiently instantiated". Any help or advice would be greatly appreciated.

        startAfter(1,0).
        startAfter(2,0).
        startAfter(3,0).
        startBy(1,100).
        startBy(2,500).
        startBy(3,300).
        duration(1,199).
        duration(2,199).
        duration(3,199).
        action(1,'noop1').
        action(2,'noop2').
        action(3,'noop3').

        can_run(R,T) :- startAfter(R,TA), startBy(R,TB), T>=TA, T=<TB.

        conflicts(T,R1,T1) :- duration(R1,D1), T=<D1+T1, T>T1.

        schedule(R1,T1,R2,T2,R3,T3) :-
            can_run(R1,T1), \+conflicts(T1,R2,T2), \+conflicts(T1,R3,T3),
            can_run(R2,T2), \+conflicts(T2,R1,T1), \+conflicts(T2,R3,T3),
            can_run(R3,T3), \+conflicts(T3,R1,T1), \+conflicts(T3,R2,T2).

        % when traced I *should* see T1=0, T2=400, T3=200

    Edit: the conflicts goal wasn't quite right; it needed the extra T>T1 clause. Edit: apparently my schedule goal works if I supply valid Request,Time pairs... but I'm stuck trying to force Prolog to find valid values for T1..3 when given R1..3.

  • Simple ASP.NET MVC views without writing a controller

    - by Jake Stevenson
    We're building a site that will have very minimal code; it's mostly just going to be a bunch of static pages served up. I know over time that will change and we'll want to swap in more dynamic information, so I've decided to go ahead and build a web application using ASP.NET MVC2 and the Spark view engine. There will be a couple of controllers that will have to do actual work (like in the /products area), but most of it will be static.

    I want my designer to be able to build and modify the site without having to ask me to write a new controller or route every time he decides to add or move a page. So if he wants to add a http://mysite.com/News page, he can just create a "News" folder under Views and put an index.spark page within it. Then later, if he decides he wants a /News/Community page, he can drop a community.spark file within that folder and have it work.

    I'm able to have a view without a specific action by making my controllers override HandleUnknownAction, but I still have to create a controller for each of these folders. It seems silly to have to add an empty controller and recompile every time he decides to add an area to the site. Is there any way to make this easier, so I only have to write a controller and recompile if there's actual logic to be done? Some sort of "master" controller that will handle any requests where no specific controller was defined?
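
    A hedged sketch of one shape that "master" controller could take: a catch-all route registered after the real ones, handing the URL path to a single controller that maps it onto an explicit view path. This assumes Spark resolves explicit "~/Views/....spark" paths the way the stock engines do; all names are illustrative, and production code should 404 when the view does not exist and reject ".." segments in the path:

        using System.Web.Mvc;

        public class StaticPageController : Controller
        {
            public ActionResult Render(string url)
            {
                // /News/Community -> ~/Views/News/Community.spark
                string viewPath = string.IsNullOrEmpty(url)
                    ? "~/Views/Home/Index.spark"
                    : "~/Views/" + url + ".spark";
                return View(viewPath);
            }
        }

        // In Global.asax, registered after the specific routes so /products
        // still reaches its own controller:
        //
        // routes.MapRoute("CatchAll", "{*url}",
        //     new { controller = "StaticPage", action = "Render" });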

  • jQuery date picker not persistant after AJAX

    - by ILMV
    So I'm using the jQuery date picker, and it works well. I am using AJAX to go and get some content; obviously when this new content is applied the bind is lost. I learnt about this last week and discovered the .live() method. But how do I apply that to my date picker? This isn't an event, so .live() won't be able to help... right? This is the code I'm using to bind the date picker to my input:

        $(".datefield").datepicker({showAnim:'fadeIn',dateFormat:'dd/mm/yy',changeMonth:true,changeYear:true});

    I do not want to call this method every time my AJAX fires, as I want to keep that as generic as possible. Cheers :-)

    EDIT: as @nick requested, below is my wrapper function for the ajax() method:

        var ajax_count = 0;

        function getElementContents(options) {
            if (options.type === null) { options.type = "GET"; }
            if (options.data === null) { options.data = {}; }
            if (options.url === null) { options.url = '/'; }
            if (options.cache === null) { options.cache = false; }
            if (options.highlight === null || options.highlight === true) {
                options.highlight = true;
            } else {
                options.highlight = false;
            }
            $.ajax({
                type: options.type,
                url: options.url,
                data: options.data,
                beforeSend: function() {
                    /* if this is the first ajax call, block the screen */
                    if (++ajax_count == 1) {
                        $.blockUI({message: 'Loading data, please wait'});
                    }
                },
                success: function(responseText) {
                    /* we want to perform different methods of assignment depending on the element type */
                    if ($(options.target).is("input")) {
                        $(options.target).val(responseText);
                    } else {
                        $(options.target).html(responseText);
                    }
                    /* fire change, fire highlight effect... only if highlight == true */
                    if (options.highlight === true) {
                        $(options.target).trigger("change").effect("highlight", {}, 2000);
                    }
                },
                complete: function() {
                    /* if all ajax requests have completed, unblock screen */
                    if (--ajax_count === 0) {
                        $.unblockUI();
                    }
                },
                cache: options.cache,
                dataType: "html"
            });
        }

    What about this solution: I have a rules.js which includes all my initial bindings with the elements. If I were to put these in a function, then call that function in the success callback of the ajax method, I wouldn't be repeating code... Hmmm, thoughts please :D

  • .NET TCP socket with session

    - by Zé Carlos
    Is there any way of dealing with sessions with sockets in C#? An example of my problem: I have a server with a socket listening on port 5672.

        TcpListener socket = new TcpListener(localAddr, 5672);
        socket.Start();

        Console.Write("Waiting for a connection... ");

        // Perform a blocking call to accept requests.
        TcpClient client = socket.AcceptTcpClient();
        Console.WriteLine("Connected to client!");

    And I have two clients that will each send one byte: client A sends 0x1 and client B sends 0x2. On the server side, I read this data like this:

        Byte[] bytes = new Byte[256];
        String data = null;

        NetworkStream stream = client.GetStream();
        while ((stream.Read(bytes, 0, bytes.Length)) != 0)
        {
            byte[] answer = new ...
            stream.Write(answer, 0, answer.Length);
        }

    Then client A sends 0x11. I need a way to know that this client is the same one that sent 0x1 before.
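
    For what it's worth, TCP itself already provides this: the connection is the session, so per-client state kept alongside the accepted TcpClient identifies the sender of every subsequent byte. A minimal sketch (names are illustrative):

        using System.Net.Sockets;

        class ClientSession
        {
            public byte FirstByte;   // 0x1 for client A, 0x2 for client B
        }

        static class Server
        {
            // Run one of these per AcceptTcpClient, e.g. on its own thread.
            public static void HandleClient(TcpClient client)
            {
                var session = new ClientSession(); // lives as long as the connection
                NetworkStream stream = client.GetStream();
                var buffer = new byte[256];

                if (stream.Read(buffer, 0, buffer.Length) != 0)
                    session.FirstByte = buffer[0]; // remember who this is

                while (stream.Read(buffer, 0, buffer.Length) != 0)
                {
                    // A later 0x11 is attributable to client A because it
                    // arrives on the same connection, i.e. the same session
                    // object.
                }
            }
        }

    If clients may disconnect between messages, the usual fallback is to hand out an explicit session token on first contact and require it in every subsequent message.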

  • SQL Server Blocking Issue

    - by Robin Weston
    We currently have an issue that occurs roughly once a day on a SQL 2005 database server, although the time it happens is not consistent. Basically, the database grinds to a halt and starts refusing connections with the following error message, including logins from SSMS:

        A connection was successfully established with the server, but then an error occurred during the login process. (provider: TCP Provider, error: 0 - The specified network name is no longer available.)

    Our CPU usage for SQL is usually around 15%, but when the DB is in its broken state it's around 70%, so it's clearly doing something, even if no one can connect. Even if I disable the web app that uses the database, the CPU still doesn't go down. I am unable to restart the SQLSERVER process as it is unresponsive, so I end up killing the process manually, which then puts the DB into Suspect/Recovery mode (which I can fix, but it's a pain).

    Below are some PerfMon stats I gathered when the DB was in its broken state, which might help. I have a bunch more if people want to request them:

        Active Transactions:                  2  (never changes)
        Logical Connections:                 34  (NC)
        Processes Blocked:                   16  (NC)
        User Connections:                    30  (NC)
        Batch Requests:                       0  (NC)
        Active Jobs:                          2  (NC)
        Log Truncations:                    596  (NC)
        Log Shrinks:                         24  (NC)
        Longest Running Transaction Time:    99  (NC)

    I guess the key is finding out what the DB is using its CPU on, but as I can't even log into SSMS this isn't possible with the standard methods. Disturbingly, I can't even use the dedicated admin connection to get into SSMS; I get the same timeout as with all other requests. Any advice, recommendations, or even sympathy, is much appreciated!

  • Publishing artifacts with sources on archiva

    - by Palimondo
    At work I'm dipping my toes into managing project dependencies with Maven. We use Apache Archiva (1.2.1) as a local repository and proxy. I'm adding an artifact for an open source project that is not published in any public repository. I've learned that to publish the sources I should use the Classifier field on the Upload Artifact page. The sources are then listed alongside the jar and pom when I browse the repository. But when I update my Maven dependencies, I get only the jar and pom from the repository. I noticed that sources are also missing when Archiva proxies downloads from other public repositories for me. I didn't find any configuration options in Archiva's admin pages to serve the sources... What am I missing?

    Update: I was missing the fact that artifact sources have to be requested explicitly, i.e. the Maven client has to ask for them, which is controlled by the command-line option -DdownloadSources=true. Maven Integration for Eclipse has a preference setting to always download them, as described in Resolving artifact sources. Archiva then serves the sources for local artifacts, or proxies the request to remote repositories and caches the sources for future requests.

  • Speeding up a soap powered website

    - by ChrisRamakers
    Hi all, we're currently looking into doing some performance tweaking on a website which relies heavily on a SOAP web service. But... our servers are located in Belgium and the web service we connect to is located in San Francisco, so it's a long-distance connection, to say the least.

    Our website is PHP-powered, using PHP's built-in SoapClient class. On average a call to the web service takes 0.7 seconds, and we are doing about 3-5 requests per page. All possible request/response caching is already implemented, so we are now looking at other ways to improve the connection speed. This is the code which instantiates the SoapClient; what I'm looking for now is other ways/methods to improve speed on single requests. Anyone have ideas or suggestions?

        private function _createClient()
        {
            try {
                $wsdl = sprintf($this->config->wsUrl.'?wsdl', $this->wsdl);
                $client = new SoapClient($wsdl, array(
                    'soap_version'       => SOAP_1_1,
                    'encoding'           => 'utf-8',
                    'connection_timeout' => 5,
                    'cache_wsdl'         => 1,
                    'trace'              => 1,
                    'features'           => SOAP_SINGLE_ELEMENT_ARRAYS
                ));

                $header_tags = array(
                    'username' => new SOAPVar($this->config->wsUsername, XSD_STRING, null, null, null, $this->ns),
                    'password' => new SOAPVar(md5($this->config->wsPassword), XSD_STRING, null, null, null, $this->ns)
                );
                $header_body = new SOAPVar($header_tags, SOAP_ENC_OBJECT);
                $header = new SOAPHeader($this->ns, 'AuthHeaderElement', $header_body);
                $client->__setSoapHeaders($header);
            } catch (SoapFault $e) {
                controller('Error')->error($id.': Webservice connection error '.$e->getCode());
                exit;
            }

            $this->client = $client;
            return $this->client;
        }
