Search Results

Search found 15535 results on 622 pages for 'mat keep'.

Page 518/622 | < Previous Page | 514 515 516 517 518 519 520 521 522 523 524 525  | Next Page >

  • XMLHttpRequest leak

    - by Raja
    Hi everyone, below is my JavaScript code snippet. It's not running as expected; please help me with this.

        <script type="text/javascript">
        function getCurrentLocation() {
            console.log("inside location");
            navigator.geolocation.getCurrentPosition(function(position) {
                insert_coord(new google.maps.LatLng(position.coords.latitude, position.coords.longitude));
            });
        }

        function insert_coord(loc) {
            var request = new XMLHttpRequest();
            request.open("POST", "start.php", true);
            request.onreadystatechange = function() { callback(request); };
            request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            request.send("lat=" + encodeURIComponent(loc.lat()) + "&lng=" + encodeURIComponent(loc.lng()));
            return request;
        }

        function callback(req) {
            console.log("inside callback");
            if (req.readyState == 4)
                if (req.status == 200) {
                    document.getElementById("scratch").innerHTML = "callback success";
                    //window.setTimeout("getCurrentLocation()", 5000);
                    setTimeout(getCurrentLocation, 5000);
                }
        }

        getCurrentLocation(); // called on body load
        </script>

    What I'm trying to achieve is to send my current location to the PHP page every 5 seconds or so. I can see a few of the coordinates in my database, but after some time it gets weird: Firebug shows very strange logs, with simultaneous POSTs at irregular intervals (screenshot omitted here). Is there a leak in the program? Please help. EDIT: The expected outcome in the Firebug console should be like this:

        inside location
        POST ...
        inside callback
        /* 5 secs later */
        inside location
        POST ...
        inside callback
        /* keep repeating */
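
    One hedged way to rule out overlapping polls (a sketch against the same start.php endpoint; the polling flag is an illustrative name): schedule exactly one next cycle per request, whether it succeeds or fails, and refuse to start a cycle while one is still in flight.

        var polling = false; // guard so cycles never overlap

        function sendLocation() {
            if (polling) return;            // a previous cycle is still in flight
            polling = true;
            navigator.geolocation.getCurrentPosition(function(position) {
                var request = new XMLHttpRequest();
                request.open("POST", "start.php", true);
                request.onreadystatechange = function() {
                    if (request.readyState !== 4) return;  // ignore states 1-3
                    polling = false;
                    setTimeout(sendLocation, 5000);        // exactly one next cycle
                };
                request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
                request.send("lat=" + encodeURIComponent(position.coords.latitude) +
                             "&lng=" + encodeURIComponent(position.coords.longitude));
            }, function() {
                polling = false;                           // geolocation failed; retry later
                setTimeout(sendLocation, 5000);
            });
        }

        sendLocation(); // called on body load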

    Read the article

  • What is the most common way to use a middleware in node with express and connect

    - by Bernhard
    I'm thinking about the correct way to make use of middleware in a node.js web project using express and connect, which is growing at the moment. Of course there are middlewares that have to run on, or extend, requests globally, but in a lot of cases there are special jobs, like preparing incoming data, where the middleware should only apply to a certain set of HTTP methods and routes. I have a component-based architecture, and each component brings its own middleware layer, which it can implement for the requests that component handles. On app startup, every required component is loaded and prepared. Is it a good idea to bind middleware execution to URLs to keep CPU load lower, or is it better to use middleware only for global purposes? Here's a dummy of how a URL-related middleware looks:

        app.use(function(req, res, next) {
            // Check if the requested route is part of the current component,
            // or if the middleware should be passed on any request
            if (APP.controller.groups.Component.isExpectedRoute(req) ||
                APP.controller.groups.Component.getConfig().MIDDLEWARE_PASS_ALL === true) {
                // Execute the middleware code here
                console.log('This is a route which should be affected by middleware');
                ...
                next();
            } else {
                next();
            }
        });
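
    For what it's worth, connect/express can scope middleware for you: app.use() accepts an optional mount path, so the handler only runs for URLs under that prefix, which avoids both the manual route check and the cost of invoking the middleware on unrelated requests. A minimal sketch (the /component prefix is illustrative):

        // Runs only for URLs beginning with /component; connect strips
        // the prefix from req.url before calling the handler.
        app.use('/component', function(req, res, next) {
            // component-specific preparation of incoming data
            next();
        });

        // Runs for every request, regardless of URL.
        app.use(function(req, res, next) {
            next();
        });

    Mounting each component's middleware at its own prefix at startup would keep the per-request route check out of application code entirely.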

    Read the article

  • Creating an object in the loop

    - by Jacob
    This code:

        std::vector<double> C(4);
        for (int i = 0; i < 1000; ++i)
            for (int j = 0; j < 2000; ++j) {
                C[0] = 1.0;
                C[1] = 1.0;
                C[2] = 1.0;
                C[3] = 1.0;
            }

    is much faster than:

        for (int i = 0; i < 1000; ++i)
            for (int j = 0; j < 2000; ++j) {
                std::vector<double> C(4);
                C[0] = 1.0;
                C[1] = 1.0;
                C[2] = 1.0;
                C[3] = 1.0;
            }

    I realize this happens because std::vector is repeatedly created and destroyed inside the loop, but I was under the impression this would be optimized away. Is it completely wrong to keep variables local in a loop whenever possible? I was under the (perhaps false) impression that this would provide optimization opportunities for the compiler. EDIT: I use VC++ 2005 (release mode) with full optimization (/Ox).
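
    A hedged sketch of the usual workaround: std::vector's cost here is the heap allocation in its constructor, which compilers generally cannot hoist out of the loop; a fixed-size, stack-allocated array makes the per-iteration "construction" essentially free (std::array is C++11; on VC++ 2005 a plain double C[4] behaves the same way):

        #include <array>

        void fill_loop() {
            for (int i = 0; i < 1000; ++i) {
                for (int j = 0; j < 2000; ++j) {
                    // Stack-allocated, fixed size: no heap allocation per
                    // iteration, unlike std::vector's operator new.
                    std::array<double, 4> C;
                    C[0] = C[1] = C[2] = C[3] = 1.0;
                }
            }
        }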

    Read the article

  • Fast (de)serialization on iPhone

    - by Jacob Kuypers
    I'm developing a game/engine for iPhone OS. It's the first time I'm using Objective-C. I made my own binary format for geometry data, and for textures I'm focusing on PVRTC. That should be the optimal approach as far as speed and space are concerned. I really want to keep loading time to a minimum and, if possible, be able to save very fast as well. So now I'm trying to make my "Entity" stuff persistent without sacrificing performance. First I wanted to use NSKeyedArchiver. From what I've heard, it's not very fast. Also, what I want to serialize is mostly structs made of floats, with some ints and strings, so there isn't really a need for all that "object graph" overhead. NSArchiver would have been more appropriate, but they removed it from the iPhone SDK for some reason. So now I'm thinking about making my own serialization scheme again. Am I wrong in thinking that NSKeyedArchiver is slow (I only read that; I haven't tested it myself)? If so, what's the best way to encode/decode structs (with no pointers, mostly floats) without sacrificing speed?
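
    One hedged sketch of a do-it-yourself scheme for pointer-free structs: write the raw bytes with plain C stdio, which works unchanged in an Objective-C project. The Entity layout below is illustrative, and the file format becomes tied to the struct layout and byte order:

        #include <stdio.h>

        /* Illustrative entity: floats and ints only, no pointers,
           so the in-memory bytes are the file format. */
        typedef struct {
            float x, y, z;
            float rotation;
            int   type;
        } Entity;

        int save_entities(const char *path, const Entity *e, size_t count) {
            FILE *f = fopen(path, "wb");
            if (!f) return -1;
            fwrite(&count, sizeof count, 1, f);
            fwrite(e, sizeof *e, count, f);
            fclose(f);
            return 0;
        }

        size_t load_entities(const char *path, Entity *e, size_t max) {
            FILE *f = fopen(path, "rb");
            size_t count = 0;
            if (!f) return 0;
            if (fread(&count, sizeof count, 1, f) != 1) count = 0;
            if (count > max) count = max;
            count = fread(e, sizeof *e, count, f);
            fclose(f);
            return count;
        }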

    Read the article

  • C# Multidimensional Array Definition

    - by Blaenk
    Can someone help me convert this to C#? I've already spent more time than I would have liked trying to do it myself, and it's preventing me from actually getting any work done. It seems like C# has a limitation regarding how one can define arrays. I think somewhere inside I have to keep doing new int[], but I'm not sure exactly where. You don't have to convert the whole thing, just enough so I can understand how to do it. I would really appreciate it. I would like to use integers instead of characters, by the way. Thanks again.

        // Pieces definition
        char mArray[7][4][5][5] = {
            // Square
            {
                {
                    {0, 0, 0, 0, 0},
                    {0, 0, 0, 0, 0},
                    {0, 0, 2, 1, 0},
                    {0, 0, 1, 1, 0},
                    {0, 0, 0, 0, 0}
                },
                {
                    {0, 0, 0, 0, 0},
                    {0, 0, 0, 0, 0},
                    {0, 0, 2, 1, 0},
                    {0, 0, 1, 1, 0},
                    {0, 0, 0, 0, 0}
                },
                {
                    {0, 0, 0, 0, 0},
                    {0, 0, 0, 0, 0},
                    {0, 0, 2, 1, 0},
                    {0, 0, 1, 1, 0},
                    {0, 0, 0, 0, 0}
                },
                {
                    {0, 0, 0, 0, 0},
                    {0, 0, 0, 0, 0},
                    {0, 0, 2, 1, 0},
                    {0, 0, 1, 1, 0},
                    {0, 0, 0, 0, 0}
                }
            },
            // and so on and so forth, for 6 more
        };
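
    For reference, a hedged sketch of the C# side: C# distinguishes jagged arrays (new int[] at every level) from rectangular arrays declared with commas, and the latter is the natural fit here. Only a trimmed piece is shown; the 1x1x5x5 bounds are an assumption for illustration:

        // Rectangular ("multidimensional") arrays use commas,
        // not a nested new int[] at every level.
        int[,,,] mArray = new int[7, 4, 5, 5];   // all elements default to 0

        // With an initializer the same brace nesting as the C++ version works;
        // a trimmed example holding just one 5x5 block:
        int[,,,] square = new int[1, 1, 5, 5]
        {
            {
                {
                    {0, 0, 0, 0, 0},
                    {0, 0, 0, 0, 0},
                    {0, 0, 2, 1, 0},
                    {0, 0, 1, 1, 0},
                    {0, 0, 0, 0, 0}
                }
            }
        };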

    Read the article

  • Can I attach data gathered by a form to a file that is being uploaded?

    - by Jacob
    I need customers to upload files to my website, and I want to gather their name or company name and attach it to the file name, or create a folder on the server with that as its name, so we can keep the files organized. I'm using PHP to upload the file.

    PHP:

        if (isset($_POST['submit'])) {
            $target = "upload/";
            $file_name = $_FILES['file']['name'];
            $tmp_dir = $_FILES['file']['tmp_name'];
            try {
                if (!preg_match('/(jpe?g|psd|ai|eps|zip|rar|tif?f|pdf)$/i', $file_name)) {
                    throw new Exception("Wrong File Type");
                }
                move_uploaded_file($tmp_dir, $target . $file_name);
                $status = true;
            } catch (Exception $e) {
                $fail = true;
            }
        }

    The other PHP, with the form:

        <form enctype="multipart/form-data" action="" method="post">
            <input type="hidden" name="MAX_FILE_SIZE" value="1073741824" />
            <label for="file">Choose File to Upload</label><br />
            <input name="file" type="file" id="file" size="50" maxlength="50" /><br />
            <input type="submit" name="submit" value="Upload" />
        </form>

        <?php
        if (isset($status)) {
            $yay = "alert-success";
            echo "<div class=\"$yay\"><br/><h2>Thank You!</h2><p>File Upload Successful!</p></div>";
        }
        if (isset($fail)) {
            $boo = "alert-error";
            echo "<div class=\"$boo\"><br/><h2>Sorry...</h2><p>There was a problem uploading the file.</p><br/>
                  <p>Please make sure that you are trying to upload a file that is less than 50mb and an acceptable file type.</p></div>";
        }
        ?>
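
    A hedged sketch of one way to organize the uploads, assuming the form gains a text field named company (an illustrative name): sanitize the submitted value and use it as a subdirectory of upload/:

        <?php
        // Assumes the form gains: <input name="company" type="text" />
        if (isset($_POST['submit'])) {
            $raw = isset($_POST['company']) ? $_POST['company'] : '';
            // Reduce the company name to safe filename characters.
            $company = preg_replace('/[^A-Za-z0-9_-]/', '_', $raw);
            if ($company === '') {
                $company = 'unknown';
            }

            $target = "upload/" . $company . "/";
            if (!is_dir($target)) {
                mkdir($target, 0755, true);   // create the per-company folder
            }

            // basename() strips any path tricks from the client-supplied name.
            $file_name = basename($_FILES['file']['name']);
            move_uploaded_file($_FILES['file']['tmp_name'], $target . $file_name);
        }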

    Read the article

  • More efficient R / Sweave / TeXShop work-flow?

    - by user594795
    I've now got everything working properly on my Mac OS X 10.6 machine, so I can create decent-looking LaTeX documents with Sweave that combine snippets of R code, output, and LaTeX formatting. Unfortunately, I feel like my work-flow is a bit clunky and inefficient. Using TextWrangler, I write LaTeX code and R code (each R chunk opened with <<>>= above and closed with @ below) together in one .Rnw file. After saving changes, I call the .Rnw file from R using the Sweave command:

        Sweave(file="/Users/mymachine/Documents/Assign4.Rnw", syntax="SweaveSyntaxNoweb")

    In response, R outputs the following message:

        You can now run LaTeX on 'Assign4.tex'

    So then I find the .tex file (Assign4.tex) in the R directory and copy it over to the folder in my documents (~/Documents/) where the .Rnw file is sitting, to keep everything in one place. Then I open the .tex file (e.g. Assign4.tex) in TeXShop and compile it there into PDF format. Only at this point do I get to see any changes I have made to the document and whether it 'looks nice'. Is there a way I can compile everything with one button click? Specifically, it would be nice to call Sweave / R directly from TextWrangler or TeXShop. I suspect it might be possible to write a script in Terminal to do it, but I have no experience with Terminal. Please let me know if there's anything else I can do to streamline or improve my work flow.
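
    A hedged sketch of the Terminal route (assuming R and pdflatex are on the PATH): save something like the script below and the whole cycle becomes one command; TeXShop can also run a script like this as a custom engine, which would give the one-button compile from inside TeXShop.

        #!/bin/sh
        # Usage: ./sweave.sh ~/Documents/Assign4.Rnw
        # Runs Sweave and pdflatex in the document's own directory,
        # so the .tex and .pdf land next to the .Rnw.
        cd "$(dirname "$1")" || exit 1
        base=$(basename "$1" .Rnw)
        R CMD Sweave "$base.Rnw" && pdflatex "$base.tex" && open "$base.pdf"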

    Read the article

  • C# adding list into list

    - by gencay
    I have a DocumentList.cs, implemented as below. When I try to add an instance of the DocumentList object to a list, it adds, but the other items become the same:

        class DocumentList
        {
            public static List wordList;
            public static string type;
            public static string path;
            public static double cos;
            public static double dice;
            public static double jaccard;
            //public static string title;

            public DocumentList(List wordListt, string typee, string pathh,
                                double sm11, double sm22, double sm33)
            {
                type = typee;
                wordList = wordListt;
                path = pathh;
                cos = sm11;
                dice = sm22;
                jaccard = sm33;
            }
        }

    In the main C# code fragment:

        public partial class Window1 : System.Windows.Window
        {
            static private List documentList = new List();
            ...
        }

    and in a method I use it as below:

        DocumentList dt = new DocumentList(para1, para2, para3, para4, para5, para6);
        documentList.Add(dt);

    Now, when I add the first item it is okay: there is 1 item in documentList. But after the second one I get a list with 2 items that are both the same. I mean, I cannot keep the previous list item.
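
    The static keyword is the likely culprit: static fields belong to the class itself, not to individual instances, so every DocumentList shares one set of values and each constructor call overwrites them all, which is exactly the "two items, both the same" symptom. A hedged sketch of the fix (the List element type is an assumption, since the original generic parameters were lost):

        class DocumentList
        {
            // Instance fields: each DocumentList now carries its own values.
            public List<string> WordList { get; private set; }
            public string Type { get; private set; }
            public string Path { get; private set; }
            public double Cos { get; private set; }
            public double Dice { get; private set; }
            public double Jaccard { get; private set; }

            public DocumentList(List<string> wordList, string type, string path,
                                double cos, double dice, double jaccard)
            {
                WordList = wordList;
                Type = type;
                Path = path;
                Cos = cos;
                Dice = dice;
                Jaccard = jaccard;
            }
        }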

    Read the article

  • jQuery.data() works in Mac OS WebKit, but not on iPhone OS?

    - by rpj
    I'm playing around with jQTouch for an iPhone OS app that I've been toying with off and on for a while. I wanted to try my hand at building it as a web app, so I started playing with jQTouch. For reference, here is the page + source (all my code is currently in index.html, so you can just "View Source" to see it all): http://rpj.me/doughapp.com/wd/

    Essentially, I'm trying to save pertinent JSON objects retrieved from Google Local into DOM objects using the data() method (in this example, obj is the Google Local object):

        $('#locPane').data('selected', obj);

    then later (in a different "pane"), retrieving that object to be used:

        $('#locPane').bind('pageAnimationEnd', function(e, inf) {
            var selobj = $(this).data('selected');
            // use 'selobj' here
            ...
        });

    In Chromium and Safari on the desktop OS (Snow Leopard in my case), this works perfectly (try it out). However, the call to $(this).data('selected') in the second snippet above returns undefined in the iPhone OS version of WebKit. I've also tried $('#' + e.target.id).data('selected') and even the naive $('#locPane').data('selected'). All variants return undefined in the mobile WebKit, but not on the desktop. Interestingly, running this in Mobile Safari in the iPhone Simulator fails as well. If you look at the full source, you'll see that I even try to save this object into my global jQTouch object (named jqt in my code). This, too, fails on the mobile platform. Has anyone else ever run into this? I'll admit to not being a web/javascript programmer by trade, so if I'm making an idiot's error, please call me out on it. Thank you in advance for the help! -RPJ

    Update: I didn't make it clear in the original post, but I'm open to any workaround if it works consistently. Since I'm having trouble storing these objects in general, anything that allows me to keep them around is good enough for now. Thanks!
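
    Since any consistent workaround is acceptable here, a hedged sketch: keep the objects out of the DOM entirely, in a plain JavaScript object keyed by pane id. This sidesteps whatever jQuery.data() does differently on mobile WebKit:

        // Plain object as a store; no DOM or jQuery.data() involved.
        var paneData = {};

        // where the Google Local result is available:
        paneData.locPane = obj;

        // later, in the pageAnimationEnd handler:
        $('#locPane').bind('pageAnimationEnd', function(e, info) {
            var selobj = paneData.locPane;
            // use selobj here
        });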

    Read the article

  • JPA - Real primary key generated ID for references

    - by Val
    I have ~10 classes, each of which has a composite key consisting of 2-4 values. One of the classes is the main one (let's call it "Center"), and it relates to the others as one-to-one or one-to-many. Thinking about the correct way of describing this in JPA, I believe I need to describe all the primary keys using @Embedded / @PrimaryKey annotations.

    Question #1: My concern is: does this mean that at the database level I will have a number of additional columns in each table referring to "Center" equal to the number of columns in Center's PK? If yes, is it possible to avoid this by using some artificial unique key for the references? Could you please give an idea of how the real PK and the artificial one would need to be described in that case?

    Note: The reason I would like to keep the real PK, and not just use a unique id as the PK, is that my application has some data-loading functionality from external data sources, and sometimes they may return records which I already have in the local database. If a unique ID were used as the PK, I couldn't do a data update for new records, since the unique ID would not be available for freshly downloaded ones. At the same time this is a normal scenario for the application: it just needs to update or insert records depending on whether the real composite primary key matches.

    Question #2: All of the 10 classes have a common field, "date", which I described in an abstract class that each of them extends. The "date" itself is never a key, but it is always part of the composite key for each class, and the composite key is different for each class. To be able to use this field as part of the PK, should I describe it in each class, or is there any way to use it as is? I experimented with the @Embedded and @PrimaryKey annotations and always got an error that eclipselink can't find the field described in the abstract class. Thank you in advance!

    PS. I'm using the latest version of eclipselink and an H2 database.
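
    For reference, a hedged sketch of how composite keys are usually declared in JPA with @Embeddable / @EmbeddedId (classes shown together for brevity; key fields and column names are illustrative). It also shows the behavior the first question asks about: a reference to Center carries one foreign-key column per component of Center's key, unless the referenced entity instead exposes a surrogate key:

        import java.io.Serializable;
        import javax.persistence.*;

        @Embeddable
        class CenterId implements Serializable {
            String region;   // illustrative key parts
            String date;
            // key classes must also implement equals() and hashCode()
        }

        @Entity
        class Center {
            @EmbeddedId
            CenterId id;
        }

        @Entity
        class Reading {
            @Id @GeneratedValue
            Long id;         // artificial surrogate key for this table only

            // Without a surrogate key on Center, the reference costs one
            // foreign-key column per component of Center's composite key:
            @ManyToOne
            @JoinColumns({
                @JoinColumn(name = "center_region", referencedColumnName = "REGION"),
                @JoinColumn(name = "center_date",   referencedColumnName = "DATE")
            })
            Center center;
        }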

    Read the article

  • Where to store: User connection information?

    - by TomTom
    ;) I am writing a .NET application where the user connects to a given server. All information within the application is stored on the server, but I want / need to store the following information for the user:

    - The server he connected to last
    - The username he used to connect last (and no, no password, never ever)

    Any idea where this is best stored? The application config file is not sensible (the user is not an admin, so application.config is write-protected for him). So, my options are:

    - In the registry: 2 keys under my own subkey.
    - In a sort of INI file, stored in the user's data directory (AppData). This would possibly also allow later expansion (into saving more information, some of which may not fit into the registry).

    Anyone have a tip? Other alternatives? So far I tend to go for the AppData directory with my own subfolder, simply because it is a nice preparation for later keeping something like a local copy of the configuration there.
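
    A hedged sketch of the AppData option in C# (company, application and file names are illustrative); Environment.GetFolderPath resolves the per-user profile folder, which is writable without admin rights:

        using System;
        using System.IO;

        static class UserSettings
        {
            // %APPDATA%\MyCompany\MyApp\connection.ini  (names are illustrative)
            static string SettingsPath()
            {
                string dir = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                    Path.Combine("MyCompany", "MyApp"));
                Directory.CreateDirectory(dir);       // no-op if it already exists
                return Path.Combine(dir, "connection.ini");
            }

            public static void SaveLastConnection(string server, string user)
            {
                File.WriteAllLines(SettingsPath(),
                    new[] { "server=" + server, "user=" + user });
            }
        }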

    Read the article

  • Simple wxPython Frame Contents Resizing - Ratio?

    - by Wes
    I have a wxPython app with one frame and one panel. On that panel are a number of static boxes, each of which has buttons and textboxes. I have just begun reading about sizers, but they seem like they might be more than what I need - or it could be that they are exactly what I need and I don't know how to use them correctly! The frame currently opens at 1920 x 1080. If the user drags the bottom-right corner to resize the app, I just want everything to get smaller or larger as needed to keep the same size ratio. Is this possible? Thank you!

    Edit (additional info): I used wxPython 2.8 and Boa to construct the GUI, and I am contemplating trying another GUI IDE. After reading some more about sizers, I am thinking about doing the following: add a grid sizer and divide my window's elements into rows and columns, then set each row and column's size as necessary until I achieve the original layout, and finally set the rows and columns to resize correctly. Is this a decent idea?
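
    Sizers are indeed the intended tool. A hedged sketch of the core idea in wxPython 2.8 style: the proportion argument and wx.EXPAND are what make children share resize space at a fixed ratio (the widgets below stand in for the real static boxes):

        import wx

        app = wx.App(False)
        frame = wx.Frame(None, -1, "Resizable layout", size=(960, 540))
        panel = wx.Panel(frame)

        # Two columns that split the width 1:2 and grow with the frame.
        row = wx.BoxSizer(wx.HORIZONTAL)
        left = wx.StaticBoxSizer(wx.StaticBox(panel, -1, "Controls"), wx.VERTICAL)
        right = wx.StaticBoxSizer(wx.StaticBox(panel, -1, "Output"), wx.VERTICAL)

        left.Add(wx.Button(panel, -1, "Run"), 0, wx.ALL, 5)
        right.Add(wx.TextCtrl(panel, -1, style=wx.TE_MULTILINE), 1, wx.EXPAND | wx.ALL, 5)

        row.Add(left, 1, wx.EXPAND | wx.ALL, 5)    # proportion 1
        row.Add(right, 2, wx.EXPAND | wx.ALL, 5)   # proportion 2

        panel.SetSizer(row)
        frame.Show()
        app.MainLoop()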

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical, and they're all using RAID. To date, I've therefore been backing up the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes)
    - long-term, the process isn't sustainable: each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5 GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving out stuff which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way

    So, my question is: how should I be doing this properly? The requirements are:

    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for: create a tar file once each month, add to it incrementally, and rsync the results to the remote box. But others probably have better suggestions, as shown in the sketch after this list.
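
    For reference, a hedged sketch of the incremental idea with rsync's --link-dest (tools like rdiff-backup, duplicity or rsnapshot package the same pattern with retention policies): every day looks like a full snapshot, but unchanged files are hard links, so 99%-identical days cost almost nothing to store or transfer. Host names and paths are illustrative:

        #!/bin/sh
        # Nightly snapshot of one box to the offsite host; unchanged files are
        # hard-linked against yesterday's snapshot, so they cost no extra space.
        HOST=webbox1
        TODAY=$(date +%F)
        YESTERDAY=$(date -d yesterday +%F)

        rsync -a --delete \
              --link-dest="/backups/$HOST/$YESTERDAY" \
              /etc /home /var/backups/mysql \
              "backup@offsite:/backups/$HOST/$TODAY/"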

    Read the article

  • Weird margin in a list

    - by kevin
    I'm trying to style a menu, but I keep running into this weird margin that appears in both FF4 and IE. This is the only CSS affecting it:

        #header ul {
            display: inline;
        }
        #header ul li {
            list-style-type: none;
            background: #000;
            display: inline;
            margin: 0;
            padding: 0;
        }
        #header ul li a {
            color: #fff;
            text-decoration: none;
            display: inline-block;
            width: 100px;
            text-align: center;
        }

    And this is the HTML:

        <div id="header">
            <ul id="toplinks">
                <li><a href="#">Hello</a></li>
                <li><a href="#">Herp</a></li>
                <li><a href="#">Derp</a></li>
            </ul>
        </div>

    As you can see, there's a margin appearing on both sides, and I'd like it to have no margin (or maybe 1px would be okay).
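
    Two hedged guesses worth ruling out, since neither rule set above touches the ul's own box: browsers give ul a default indent (Firefox via padding, IE via margin), and display: inline-block renders the literal whitespace between the li tags as visible gaps. A sketch of the usual reset:

        /* Kill the browser's default list indent (FF uses padding, IE margin). */
        #header ul {
            display: inline;
            margin: 0;
            padding: 0;
        }

        /* Floating the links instead of inline-block removes the whitespace
           gaps between items without touching the markup; the container may
           then need a clear fix, since floats are taken out of the flow. */
        #header ul li a {
            float: left;
            width: 100px;
            text-align: center;
            color: #fff;
            text-decoration: none;
        }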

    Read the article

  • Activator.CreateInstance uses a huge amount of memory

    - by Marco
    I have been playing a bit with Silverlight, trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container - a sort of MEF approach, so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and initial loading acceptable I have built some helper classes that load, in the background, all pages and user controls that might be used later on. On Silverlight 3.0 everything ran smoothly, without any problem so far. Switching to Silverlight 4.0, I have noticed that when the process gets to creating the instances of the user controls using Activator.CreateInstance, the layout freezes unexpectedly for a minute, and sometimes for more. Looking at the task manager, the memory usage of IE jumps from 50 MB to 400 MB, and sometimes to 1.5 GB. If the process doesn't take that much, the layout is rendered properly and the memory falls back to 50 MB. Otherwise, everything crashes with an out-of-memory exception. Has anybody encountered the same problem, or does anybody have a solution for this tricky issue?
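
    Not a diagnosis of the Silverlight 4.0 regression, but a hedged mitigation sketch that often helps when Activator.CreateInstance sits on a hot path: cache one compiled constructor delegate per type, so the reflection cost is paid once per type rather than once per instance (assumes public parameterless constructors):

        using System;
        using System.Collections.Generic;
        using System.Linq.Expressions;

        static class Instantiator
        {
            static readonly Dictionary<Type, Func<object>> Cache =
                new Dictionary<Type, Func<object>>();

            public static object Create(Type type)
            {
                Func<object> ctor;
                if (!Cache.TryGetValue(type, out ctor))
                {
                    // Build and compile 'new T()' once; later calls are
                    // plain delegate invocations with no reflection.
                    ctor = Expression.Lambda<Func<object>>(Expression.New(type)).Compile();
                    Cache[type] = ctor;
                }
                return ctor();
            }
        }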

    Read the article

  • HTML input not working correctly with AJAX update panels used elsewhere on page

    - by Sean P
    I have some update panels on my page that do asynchronous postbacks to keep some dropdownlists correctly populated. My problem is that the page also has an HTML input that handles file uploads. With the AJAX async postbacks on the page, when I step through my code-behind, the files aren't being uploaded. Using a PostBackTrigger (non-async) is not possible because of my layout. Here is my markup:

        <div id="divFileInputs" runat="server">
            <input id="file1" name="fileInput" type="file" runat="server" size="50"
                   style="width: 50em" onfocus="AddFileInput()" class="textbox" />
        </div>
        <select id="selectFileList" name="ListBox1" size="5"
                style="width: 50em; text-align: left;" class="textbox"></select>
        <input id="RemoveAttachmentButton" type="button" value="Remove"
               onclick="RemoveFileInput()" class="removebutton" />

    Here is my code-behind:

        Protected Sub CopyAttachments(ByVal issueId As String)
            Dim files As HttpFileCollection = Request.Files
            Dim myStream As System.IO.Stream
            Dim service As New SubmitService.Service

            For i As Integer = 0 To files.Count - 1
                Dim postedFile As HttpPostedFile = files(i)
                Dim fileNameWithoutPath As String = System.IO.Path.GetFileName(postedFile.FileName)

                If fileNameWithoutPath.Length > 0 And issueId.Length > 0 Then
                    Dim fileLength As Integer = postedFile.ContentLength
                    Dim fileContents(fileLength) As Byte

                    ' Read the file into the byte array. Send it to the web service.
                    myStream = postedFile.InputStream
                    myStream.Read(fileContents, 0, fileLength)
                    service.ClearQuestAttachToIssue(issueId, fileNameWithoutPath, fileContents)
                End If
            Next

            service = Nothing
        End Sub

    When I put a breakpoint at the declaration of service and check the value of "files", the count is 0. I expect it to be 2 when I have one file uploaded. Does anyone know how to fix this?
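
    One thing worth knowing: asynchronous postbacks never carry multipart file data, which is why Request.Files comes back empty; the control that submits the files has to cause a full postback. When triggers are awkward to declare in markup, they can be registered from code-behind. A hedged sketch, assuming the submitting control is named UploadButton (an illustrative name):

        ' In the page's code-behind: force a full (non-async) postback for the
        ' upload control only, leaving the UpdatePanel behavior intact elsewhere.
        Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
            Dim sm As ScriptManager = ScriptManager.GetCurrent(Page)
            If sm IsNot Nothing Then
                sm.RegisterPostBackControl(UploadButton) ' UploadButton is illustrative
            End If
        End Sub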

    Read the article

  • Best practices for (over)using Azure queues

    - by John
    Hi, I'm in the early phases of designing an Azure-based application. One of the things that attracts me to Azure is the scalability, given the variability of the demand I'm likely to expect. As such, I'm trying to keep things loosely coupled so I can add instances when I need to. The recommendations I've seen for architecting an application for Azure include keeping web role logic to a minimum and having processing done in worker roles, using queues to communicate and some sort of back-end store like SQL Azure or Azure Tables. This seems like a good idea to me, as I can scale up either or both parts of the application without any issue. However, I'm curious whether there are any best practices (or whether anyone has experience) regarding when it's best to have the web role talk directly to the data store versus sending data via the queue. I'm thinking of the case where I have a simple insert to do from the web role: while I could package it as a message, send it on the queue, and have a worker role pick it up and do the insert, that seems like a lot of double-handling. On the other hand, I appreciate that it may be better in the long run, in case the web role gets overwhelmed or more complex logic ends up being required for the insert. I realise the answer might be "it depends entirely on the situation; check your perf metrics", but if anyone has any thoughts I'd be very appreciative! Thanks, John
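
    For concreteness, a hedged sketch of the queue handoff with the StorageClient library of that era (connection string and queue name are illustrative); whether this indirection earns its keep for a simple insert is exactly the trade-off in question:

        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        // Web role side: hand the insert to a worker role via a queue.
        string connectionString = "UseDevelopmentStorage=true";      // illustrative
        string serializedRecord = "{ \"name\": \"example\" }";       // payload to insert

        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("inserts");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(serializedRecord));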

    Read the article

  • Code Analysis Warning CA1004 with generic method

    - by Vaccano
    I have the following generic method:

        // Load an object from the disk
        public static T DeserializeObject<T>(String filename) where T : class
        {
            XmlSerializer xmlSerializer = new XmlSerializer(typeof(T));
            try
            {
                TextReader textReader = new StreamReader(filename);
                var result = (T)xmlSerializer.Deserialize(textReader);
                textReader.Close();
                return result;
            }
            catch (FileNotFoundException)
            {
            }
            return null;
        }

    When I compile, I get the following warning:

        CA1004 : Microsoft.Design : Consider a design where 'MiscHelpers.DeserializeObject(string)' doesn't require explicit type parameter 'T' in any call to it.

    I have considered this, and I don't know a way to do what it requests without limiting the types that can be deserialized. I freely admit that I might be missing an easy way to fix this. But if I'm not, is my only recourse to suppress this warning? I have a clean project with no warnings or messages, and I would like to keep it that way. I guess I am asking: why is this a warning? At best it seems like it should be a message, and even that seems a bit much. Either it can or it can't be fixed; if it can't, then you are just stuck with the warning, with no recourse but suppressing it. Am I wrong?
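
    CA1004 fires because T appears only in the return type, so callers can never let the compiler infer it; for a deserialization helper that is inherent to the design, and suppressing the warning with a justification is a reasonable resolution. A hedged sketch:

        using System.Diagnostics.CodeAnalysis;
        using System.IO;
        using System.Xml.Serialization;

        static class MiscHelpers
        {
            [SuppressMessage("Microsoft.Design",
                "CA1004:GenericMethodsShouldProvideTypeParameter",
                Justification = "T is the deserialization target; it cannot be inferred from the arguments.")]
            public static T DeserializeObject<T>(string filename) where T : class
            {
                var serializer = new XmlSerializer(typeof(T));
                try
                {
                    using (var reader = new StreamReader(filename))
                        return (T)serializer.Deserialize(reader);
                }
                catch (FileNotFoundException)
                {
                    return null;
                }
            }
        }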

    Read the article

  • Quantifying the amount of change in a git diff?

    - by Alex Feinman
    I use git for a slightly unusual purpose: it stores my text as I write fiction. (I know, I know... geeky.) I am trying to keep track of productivity, and want to measure the degree of difference between subsequent commits. The writer's proxy for "work" is "words written", at least during the creation stage. I can't use a straight word count, as it ignores editing and compression, both vital parts of writing. I think I want to track:

        (words added) + (words removed)

    which will double-count (words changed), but I'm okay with that. It'd be great to type some magic incantation and have git report this distance metric for any two revisions. However, git diffs are patches, which show entire lines even if you've only twiddled one character on the line; I don't want that, especially since my 'lines' are paragraphs. Ideally I'd even be able to specify what I mean by "word" (though \W+ would probably be acceptable). Is there a flag to git-diff to produce diffs on a word-by-word basis? Alternatively, is there a solution using standard command-line tools to compute the metric above?
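
    Recent versions of git can do this natively: --word-diff tokenizes each line before diffing, and --word-diff-regex defines what counts as a word. A hedged sketch that counts added plus removed words between two revisions (the file name is illustrative):

        # Porcelain word-diff prints each added token on a line starting with '+'
        # and each removed token with '-'; file headers start with '+++'/'---',
        # so the grep pattern excludes lines whose second character repeats the sign.
        git diff --word-diff=porcelain --word-diff-regex='\w+' HEAD~1 HEAD -- draft.txt \
          | grep -Ec '^[+-][^+-]'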

    Read the article

  • How to get a debug flow of execution in C++

    - by Rich
    Hi, I work on a global trading system which supports many users. Each user can book, amend, edit and delete trades. The system is regulated by a central deal-capture service, which informs all users of any updates that occur. The problem comes when we have crashes: as the production environment is impossible to re-create on a test system, I have to rely on crash dumps and log files. However, these don't tell me what the user has been doing. I'd like a system that would (at the time of crashing) dump out a history of what the user has been doing. Anything I add has to go into the live environment, so it can't impact performance too much.

    Ideas-wise, I was thinking of a macro at the top of each function which acted like a stack trace (only I could supply additional user information: trade ids, user dialog choices, etc.). The system would record stack traces (on a per-thread basis) and keep a history in a cyclic buffer (varying in size, depending on how much history you wanted to capture). Then, on a crash, I could dump this history stack. I'd really like to hear if anyone has a better solution, or if anyone knows of an existing framework? Thanks, Rich
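
    A hedged sketch of the macro-plus-cyclic-buffer idea (single-threaded for brevity; the real thing would keep one buffer per thread and synchronize the dump):

        #include <cstdio>
        #include <string>

        // Fixed-size ring of recent trace entries; cheap enough for production.
        class TraceRing {
            static const size_t N = 256;        // power of two, so wraparound is safe
            std::string entries_[N];
            size_t head_;
        public:
            TraceRing() : head_(0) {}
            void add(const std::string& s) { entries_[head_++ % N] = s; }
            void dump(FILE* out) const {        // call from the crash handler
                for (size_t i = 0; i < N; ++i) {
                    const std::string& e = entries_[(head_ + i) % N];
                    if (!e.empty()) std::fprintf(out, "%s\n", e.c_str());
                }
            }
        };

        TraceRing g_trace;

        // One line at the top of each traced function; extra context is appended.
        #define TRACE_SCOPE(extra) \
            g_trace.add(std::string(__FUNCTION__) + " " + (extra))

        void bookTrade(const std::string& tradeId) {
            TRACE_SCOPE("trade=" + tradeId);
            // ... booking logic ...
        }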

    Read the article

  • How can I prevent text displacement for some foreign language fonts?

    - by weltraumpirat
    I have a multilingual project (currently 13 languages) which uses many different font variations of "Helvetica Neue", mostly bold, condensed and regular cuts from the LinoType Pro font set (which includes Western European characters), and the same for Cyrillic. We will probably add Chinese and Japanese variations in the future. I have set up the project to use different CSS stylesheets and to load the fonts separately for each version, depending on which language the user selects, so I can have different line heights, kerning and/or font sizes to make everything keep the original look, even if the fonts look nothing alike. All of this works well, except for one problem: for some reason, all Cyrillic letters seem to be displaced. They appear 2-3 pixels below the correct baseline, and actually protrude across the text field's bottom border, even when the field is set to autosize. When I use textfield.getCharBoundaries(), all values seem to be correct, even though they obviously aren't rendered correctly. To make everything look neat I could, of course, manually move all problematic text fields up or down according to language and font size, but I was wondering if there is a way to prevent, or at least detect, this kind of displacement in order to handle the adjustments automatically. The Flash Player should have some sort of information on how things are rendered, shouldn't it? Have any of you had similar problems? Or better yet: a solution?
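
    Detection, at least, may be possible from ActionScript: TextField.getLineMetrics() reports the ascent the renderer actually used, so a field can be nudged until its rendered baseline matches the intended one. A hedged sketch (the 2-pixel term is the TextField's built-in top gutter):

        import flash.text.TextField;
        import flash.text.TextLineMetrics;

        // Nudge a field so the rendered baseline of line 0 sits at targetBaselineY.
        function alignToBaseline(tf:TextField, targetBaselineY:Number):void {
            var metrics:TextLineMetrics = tf.getLineMetrics(0);
            tf.y = targetBaselineY - (metrics.ascent + 2); // 2 = TextField top gutter
        }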

    Read the article

  • How to configure Server Topology for exposing an internal application for external access?

    - by ronaldwidha
    Hi all. I guess this question borders on being a Server Fault question. I'd like to know the best configuration for exposing an internal application (in this case a load-balanced ASP.NET MVC application) for external access. More details about the situation:

    - The ASP.NET MVC application is currently running on 2 servers
    - The 2 servers are behind a Windows Network Load Balancer
    - All the servers are on-premise, on the internal network

    I'm thinking of introducing an F5 load balancer in the DMZ to replace the Windows Network Load Balancer. The F5 would act as the public traffic gateway and load balancer for the 2 servers. However, I'd like internal users not to have to go through the Internet to access the app. The idea I have so far is to keep both the Windows Network Load Balancer and the F5. Each appliance would have its own IP and its own domain name: external users use the public domain name, which hits the F5, whereas internal users use the internal domain name, which hits the Windows Network Load Balancer. Is this a good idea, or is there a better way of doing this?
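
    For reference, the two-name idea is essentially split-horizon DNS, and a hedged sketch of the records involved looks like this (names and addresses are illustrative):

        ; Internal DNS zone, resolvable only on the LAN:
        ; the internal name points at the Windows NLB virtual IP.
        app.corp.example.com.    IN A    10.0.0.50

        ; Public DNS zone: the external name points at the F5
        ; virtual server sitting in the DMZ.
        app.example.com.         IN A    203.0.113.10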

    Read the article

  • EF4 + STE: Reattaching via a WCF Service? Using a new objectcontext each and every time?

    - by Martin
    Hi there, I am planning to use WCF (not RIA) in conjunction with Entity Framework 4 and STEs (self-tracking entities). If I understand this correctly, my WCF service should return an entity or a collection of entities (using List, for example, and not IQueryable) to the client (in my case Silverlight). The client can then change the entity or update it. At this point I believe it is self-tracking? This is where I get a bit confused, as there are a lot of reported problems with STEs not tracking. Anyway, to update, I just need to send the entity back to another method on my WCF service. Should I be creating a new ObjectContext every time, in every method? If I am creating a new ObjectContext every time, in every method on my WCF service, then don't I need to re-attach the STE to the ObjectContext? So basically this alone wouldn't work?

        using (var ctx = new MyContext())
        {
            ctx.Orders.ApplyChanges(order);
            ctx.SaveChanges();
        }

    Or should I create the ObjectContext once in the constructor of the WCF service, so that the first call and every additional call using the same WCF instance uses the same ObjectContext? I could create and destroy the WCF service in each method call from the client, in effect creating a new ObjectContext each time. I understand that it isn't a good idea to keep the ObjectContext alive for very long. Any insight or information would be greatly appreciated. Thanks.
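
    For what it's worth, the snippet above is the intended pattern: ApplyChanges is the re-attach, replaying the change log the STE carried back from the client. A hedged sketch of how it usually lands in a per-call service (type and member names are illustrative):

        // Per-call instancing: a fresh service object, and a fresh short-lived
        // ObjectContext, for every operation.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
        public class OrderService : IOrderService
        {
            public void UpdateOrder(Order order)   // arrives detached, with its change log
            {
                using (var ctx = new MyContext())
                {
                    // ApplyChanges attaches the detached graph AND replays
                    // the tracked modifications; no manual re-attach needed.
                    ctx.Orders.ApplyChanges(order);
                    ctx.SaveChanges();
                }
            }
        }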

    Read the article

  • Is it better to query the database or grab from a file? PHP & MySQL

    - by pfunc
    I am keeping a large set of words in a database, which I want to match articles against. I was thinking it would be better to keep these words in an array and grab that array whenever needed, instead of querying the database every time (since the words won't change that much). Is there much performance difference in doing this? And if I were to do it, how do I write a script that writes the array to a new PHP file? I tried writing the array like so:

        while ($row = mysql_fetch_assoc($query)) {
            $newArray[] = $row;
        }

        $fp = fopen('noWordsArr.php', 'w');
        fwrite($fp, $newArray);
        fclose($fp);

    But all I get in the other file is "Array". I figured I could write this and then have a cron job hit the file every few days or so, in case things have changed. But I guess if there is no performance advantage then it probably won't be necessary, and I can just query the database every time I need to access the words.
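
    The lone "Array" appears because fwrite() casts the array to a string. A hedged sketch of the usual fix: write the array out as literal PHP source with var_export(), then read it back with include:

        <?php
        // Writing: turn the array into literal PHP source.
        $code = '<?php return ' . var_export($newArray, true) . ';';
        file_put_contents('noWordsArr.php', $code);

        // Reading, wherever the words are needed: include returns the array.
        $words = include 'noWordsArr.php';

    A side benefit of the include approach is that an opcode cache, if one is installed, can keep the parsed array in memory between requests, which usually beats re-querying MySQL.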

    Read the article

  • Cross Platform C library for GUI Apps?

    - by Moshe
    Free of charge, simple to learn/use, cross-platform C library for GUI apps? Am I looking for Qt? Bonus question: can I develop with the said library/toolkit on a Mac and then recompile on PC/Linux? Super bonus question: a link to a tutorial and/or download of said library.

    (Re)edit: The truth is that I'm in the process of catching up on the C family (coming from web development - XHTML/PHP/MySQL) in order to learn iPhone development. I do understand that C is not C++ or Objective-C, but I want to keep the learning curve as simple as possible. Not to get too off topic, but I am also on the lookout for good starter books and websites; I've found this so far. I'm trying to kill many birds with one stone here. I do understand that there are platform-specific extensions, but I will try to avoid those for porting purposes. The idea is that I want to write the code on one machine and just compile it thrice (Mac/Win/Linux). If Objective-C will compile on Windows and Linux as well as OS X, then that's good. If I must use C++, that's also fine.

    Edit: Link to Qt, please...
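
    Qt itself is C++, not C; the usual plain-C answer is GTK+, which is free, cross-platform (Mac/Windows/Linux) and fits the write-once, compile-thrice plan. A minimal hedged sketch from the GTK+ 2 era:

        #include <gtk/gtk.h>

        int main(int argc, char *argv[])
        {
            gtk_init(&argc, &argv);

            GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
            gtk_window_set_title(GTK_WINDOW(window), "Hello");
            g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

            gtk_widget_show_all(window);
            gtk_main();   /* blocks until the window is closed */
            return 0;
        }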

    Read the article
