Search Results

Search found 8056 results on 323 pages for 'external'.

  • Make All Types Constant by Default in C++

    - by Jon Purdy
    What is the simplest and least obtrusive way to indicate to the compiler, whether by means of compiler options, #defines, typedefs, or templates, that every time I say T, I really mean T const? I would prefer not to make use of an external preprocessor. Since I don't use the mutable keyword, that would be acceptable to repurpose to indicate mutable state. Potential (suboptimal) solutions so far:

        // I presume redefinition of keywords is implementation-defined or illegal.
        #define int int const
        #define ptr * const
        int i(0);
        int ptr j(&i);

        typedef int const Int;
        typedef int const* const Intp;
        Int i(0);
        Intp j(&i);

        template<class T>
        struct C {
            typedef T const type;
            typedef T const* const ptr;
        };
        C<int>::type i(0);
        C<int>::ptr j(&i);
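
    If C++11 is available, alias templates make the last approach a little less noisy. A minimal sketch; the names RO and ROPtr are invented for illustration:

        // Alias templates cut C<int>::type down to RO<int>.
        template<class T> using RO    = T const;          // read-only value
        template<class T> using ROPtr = T const* const;   // read-only pointer

        int main() {
            RO<int>    i(0);
            ROPtr<int> j(&i);
            return 0;
        }

    This still has to be written at every declaration, though; nothing short of an external tool changes the default meaning of a bare T.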

  • On-Demand Python Thread Start/Join Freezing Up from wxPython GUI

    - by HokieTux
    I'm attempting to build a very simple wxPython GUI that monitors and displays external data. There is a button that turns the monitoring on/off. When monitoring is turned on, the GUI updates a couple of wx StaticLabels with real-time data. When monitoring is turned off, the GUI idles. The way I tried to build it was with a fairly simple Python thread layout. When the 'Start Monitoring' button is clicked, the program spawns a thread that updates the labels with real-time information. When the 'Stop Monitoring' button is clicked, thread.join() is called, and it should stop. The start function works and the real-time data updating works great, but when I click 'Stop', the whole program freezes. I'm running this on Windows 7 64-bit, so I get the usual "This Program has Stopped Responding" Windows dialog. Here is the relevant code:

        class MonGUI(wx.Panel):
            def __init__(self, parent):
                wx.Panel.__init__(self, parent)
                ...
                ... other code for the GUI here ...
                ...
                # Create the thread that will update the VFO information
                self.monThread = Thread(None, target=self.monThreadWork)
                self.monThread.daemon = True
                self.runThread = False

            def monThreadWork(self):
                while self.runThread:
                    ...
                    ... Update the StaticLabels with info ...
                    ... (This part is working)
                    ...

            # Turn monitoring on/off when the button is pressed.
            def OnClick(self, event):
                if self.isMonitoring:
                    self.button.SetLabel("Start Monitoring")
                    self.isMonitoring = False
                    self.runThread = False
                    self.monThread.join()
                else:
                    self.button.SetLabel("Stop Monitoring")
                    self.isMonitoring = True
                    # Start the monitor thread!
                    self.runThread = True
                    self.monThread.start()

    I'm sure there is a better way to do this, but I'm fairly new to GUI programming and Python threads, and this was the first thing I came up with. So, why does clicking the button to stop the thread make the whole thing freeze up?
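
    A likely explanation, for what it's worth: if monThreadWork updates the StaticLabels directly, the worker can block waiting on the GUI thread at the same moment the GUI thread blocks in join(), which deadlocks. The usual pattern is sketched below - marshal widget updates through wx.CallAfter, and build a fresh Thread on every start, since a Thread object's start() works only once. self.label and self.readData are stand-ins here:

        import wx
        from threading import Thread

        # Inside MonGUI:
        def monThreadWork(self):
            while self.runThread:
                info = self.readData()      # stand-in for the real data source
                # Hand the widget update to the GUI thread; never touch a
                # StaticLabel from the worker thread.
                wx.CallAfter(self.label.SetLabel, info)

        def OnClick(self, event):
            if self.isMonitoring:
                self.isMonitoring = False
                self.button.SetLabel("Start Monitoring")
                self.runThread = False
                self.monThread.join()       # worker now exits promptly
            else:
                self.isMonitoring = True
                self.button.SetLabel("Stop Monitoring")
                self.runThread = True
                # A fresh Thread each time: start() cannot be called twice
                # on the same Thread object.
                self.monThread = Thread(target=self.monThreadWork)
                self.monThread.daemon = True
                self.monThread.start()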

  • visibility false, XML, as3

    - by pixelGreaser
    Hey, I'm using external XML to set Flash vars. Alpha works, but not visibility. How do I get my swf to respond to visibility? Thanks.

    XML:

        <?xml version="1.0" encoding="utf-8"?>
        <SESSION>
            <BGv TITLE="visible true">false</BGv>
            <BGa TITLE="alpha 50 percent">.5</BGa>
        </SESSION>

    SWF:

        //LISTEN AND LOAD XML
        var myXML:*;
        var myLoad:URLLoader = new URLLoader();
        myLoad.load(new URLRequest("visible.xml"));
        myLoad.addEventListener(Event.COMPLETE, parseXML);

        //PARSE XML
        function parseXML(e:Event):void {
            myXML = new XML(e.target.data);

            //MY TEST
            var bgA:*;
            var bgV:*;
            trace(myXML.BGa.text());
            trace(myXML.BGv.text());
            bgA = (myXML.BGa.text());
            bgV = (myXML.BGv.text());

            //MY OBJECT
            bg.alpha = bgA;    //This works great
            bg.visible = bgV;  //This has no effect
        }

    Output:

        .5
        false
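
    The likely culprit, sketched as a hedged fix: text() hands back string-ish data, and in ActionScript 3 any non-empty string - including "false" - coerces to Boolean true. Alpha works only because the string ".5" converts numerically. Comparing against the literal sidesteps this:

        // "false" is a non-empty string, so it coerces to Boolean true.
        // Compare the text against "true" to get a real Boolean:
        bg.visible = (myXML.BGv.text().toString() == "true");
        // alpha works by accident, because Number(".5") parses to 0.5:
        bg.alpha = Number(myXML.BGa.text());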

  • How to convert a string to variable name

    - by p1xelarchitect
    I'm loading several external JSON files and want to check if they have been successfully cached before the next screen is displayed. My strategy is to check the name of the array to see if it is an object. My code works perfectly when I check only a single array and hard-code the array name into my function. My question is: how can I make this dynamic?

    Not dynamic (this works):

        $(document).ready(function () {
            checkJSON("nav_items");
        });

        function checkJSON() {
            if (typeof nav_items == "object") {
                // success...
            }
        }

    Dynamic (this doesn't work):

        $(document).ready(function () {
            checkJSON("nav_items");
            checkJSON("foo_items");
            checkJSON("bar_items");
        });

        function checkJSON(item) {
            if (typeof item == "object") {
                // success...
            }
        }

    Here is a larger context of my code:

        var loadAttempts = 0;
        var reloadTimer = false;

        $(document).ready(function () {
            checkJSON("nav_items");
        });

        function checkJSON(item) {
            loadAttempts++;
            //if nav_items exists
            if (typeof nav_items == "object") {
                //if a timer is running, kill it
                if (reloadTimer != false) {
                    clearInterval(reloadTimer);
                    reloadTimer = false;
                }
                console.log("found!!");
                console.log(nav_items[1]);
                loadAttempts = 0; //reset
                // load next screen....
            }
            //if nav_items does not yet exist, try 5 times, then give up!
            else {
                //set a timer
                if (reloadTimer == false) {
                    reloadTimer = setInterval(function(){checkJSON(nav_items)}, 300);
                    console.log(item + " not found. Attempts: " + loadAttempts);
                } else {
                    if (loadAttempts <= 5) {
                        console.log(item + " not found. Attempts: " + loadAttempts);
                    } else {
                        clearInterval(reloadTimer);
                        reloadTimer = false;
                        console.log("Giving up on " + item + "!!");
                    }
                }
            }
        }
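
    The dynamic version tests the type of the string "nav_items" itself, which is always "string". Since globals are properties of window, looking the name up dynamically is enough. A minimal sketch:

        // Index window by name instead of testing the string:
        function checkJSON(item) {
            if (typeof window[item] === "object") {
                // success...
            }
        }

        checkJSON("nav_items");
        checkJSON("foo_items");
        checkJSON("bar_items");

    One side note on the larger snippet: the retry timer passes the variable (setInterval(function(){checkJSON(nav_items)}, 300)) where the string name, checkJSON("nav_items"), is presumably intended.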

  • Cleaner method for list comprehension clean-up

    - by Dan McGrath
    This relates to my previous question: Converting from nested lists to a delimited string. I have an external service that sends data to us in a delimited string format. It is lists of items, up to 3 levels deep. Level 1 is delimited by '|', level 2 is delimited by ';', and level 3 is delimited by ','. Each level or element can have 0 or more items. A simplified example is:

        a,b;c,d|e||f,g|h;;

    We have a function that converts this to nested lists, which is how it is manipulated in Python:

        def dyn_to_lists(dyn):
            return [[[c for c in b.split(',')] for b in a.split(';')] for a in dyn.split('|')]

    For the example above, this function results in the following:

        >>> dyn = "a,b;c,d|e||f,g|h;;"
        >>> print (dyn_to_lists(dyn))
        [[['a', 'b'], ['c', 'd']], [['e']], [['']], [['f', 'g']], [['h'], [''], ['']]]

    For lists, at any level, with only one item, we want that item as a scalar rather than a 1-item list. For lists that are empty, we want just an empty string. I've come up with this function, which does work:

        def dyn_to_min_lists(dyn):
            def compress(x):
                return "" if len(x) == 0 else x if len(x) != 1 else x[0]
            return compress([compress([compress([item for item in mv.split(',')])
                                       for mv in attr.split(';')])
                             for attr in dyn.split('|')])

    Using this function and the example above, it returns:

        [[['a', 'b'], ['c', 'd']], 'e', '', ['f', 'g'], ['h', '', '']]

    Being new to Python, I'm not confident this is the best way to do it. Are there any cleaner ways to handle this? This will potentially have large amounts of data passing through it; are there any more efficient/scalable ways to achieve this?
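
    One cleaner direction, sketched on the assumption that recursion is acceptable: do the splitting and the collapsing in one recursive pass, so the three levels are not spelled out by hand, and extra levels cost nothing:

        def dyn_to_min(dyn, seps=('|', ';', ',')):
            """Split on each delimiter in turn, collapsing single-item lists to scalars."""
            if not seps:
                return dyn
            parts = [dyn_to_min(p, seps[1:]) for p in dyn.split(seps[0])]
            return parts[0] if len(parts) == 1 else parts

    For the sample input this produces the same result as dyn_to_min_lists. Empty segments come out as '' because str.split always yields at least one (empty) piece, which the singleton rule then unwraps.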

  • Slowing process creation under Java?

    - by oconnor0
    I have a single, large-heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) JVM [1] running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, and then the data created by those executables has to be loaded back into the JVM. Each executable produces about half a megabyte of data (on disk) that, when read right in after the process finishes, is of course larger. Our first implementation was to have each executable handle only a single object. This involved spawning twice as many executables as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily at 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually slows. While starting at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds. Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds. Basically, why is it taking so long to create these processes as execution continues?

    [1] Oracle JDK 1.6.0_22
    [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP
    [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.
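
    A hedged guess at the cause, given the symptoms: on Linux, Runtime/ProcessBuilder spawns via fork(), and forking duplicates the parent's page tables, a cost that grows with the JVM's heap. If that is what's happening, one mitigation is to start a single small broker process early, while the heap is still tiny, and have it do all subsequent spawning. A sketch, using a shell as the broker (exit-status and output plumbing omitted):

        import java.io.*;

        public class SpawnBroker {
            private final Process broker;
            private final Writer commands;

            public SpawnBroker() throws IOException {
                // Started once, before the heap grows, so this fork is cheap
                // and stays cheap for the broker's own forks.
                broker = new ProcessBuilder("/bin/sh")
                        .redirectErrorStream(true)
                        .start();
                commands = new OutputStreamWriter(broker.getOutputStream());
            }

            /** Ask the broker to spawn, instead of forking the large JVM. */
            public void spawn(String commandLine) throws IOException {
                commands.write(commandLine + "\n");
                commands.flush();
            }
        }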

  • C# Type comparison

    - by Sean.C
    This has me pooped. Is there any reason the following:

        public abstract class aExtension
        {
            public abstract bool LoadExtension(Constants c);      // method required in inherit
            public abstract string AppliesToModule { get; }       // property required in inherit
            public abstract string ExtensionName { get; }         // property required in inherit
            public abstract string ExtensionDescription { get; }  // property required in inherit
        }

        public class UK : aExtension
        {
            public override bool LoadExtension(Constants c) { return true; }
            public override string AppliesToModule { get { return "string"; } }
            public override string ExtensionName { get { return "string"; } }
            public override string ExtensionDescription { get { return "string"; } }
        }

    would return false for the following expressions:

        bool a = t.IsAssignableFrom(typeof(aExtension));
        bool b = t.BaseType.IsAssignableFrom(typeof(aExtension));
        bool c = typeof(aExtension).IsAssignableFrom(t);
        bool d = typeof(aExtension).IsAssignableFrom(t.BaseType);
        bool e = typeof(aExtension).IsSubclassOf(t);
        bool f = typeof(aExtension).IsSubclassOf(t.BaseType);
        bool g = t.IsSubclassOf(typeof(aExtension));
        bool h = t.BaseType.IsSubclassOf(typeof(LBT.AdMeter.aExtension));
        bool i = t.BaseType.Equals(typeof(aExtension));
        bool j = typeof(aExtension).Equals(t.BaseType);

    t is the reflected Type from the class UK. Strange thing is, I do the exact same thing on an external assembly in the same application and it works as expected...
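
    When every one of these checks fails for a type that visibly inherits aExtension, the classic cause is that aExtension got loaded twice from different contexts (for example once via Assembly.LoadFrom on a plugin path and once from the application's bin), and types from different load contexts never compare equal. A diagnostic sketch:

        // Same-looking types from two load contexts are different Types.
        Type mine = typeof(aExtension);
        Type theirs = t.BaseType;
        Console.WriteLine(mine.AssemblyQualifiedName);
        Console.WriteLine(theirs.AssemblyQualifiedName);
        Console.WriteLine(mine.Assembly.Location);    // where each copy was loaded from
        Console.WriteLine(theirs.Assembly.Location);
        // Identical names but different locations/contexts confirm a double load.

    That would also square with the external-assembly case working: there, only one copy of the base type is ever loaded.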

  • Can anyone give me a sample java socket programming for doing a peer to peer for 3 systems?

    - by Sadesh Kumar N
    I am doing a university project. I need some sample programs for peer-to-peer Java socket programming. Everywhere, people say to add a server socket to the client program. I am confused: will a single program containing both a server socket and a client socket do, or do I have to create two programs - one initiating a system and another peer program run three times - to solve the problem? Or do I need to create three programs for three peer systems? I am not clear on the architecture of building peer-to-peer programs using Java sockets. Can someone help me by giving a simple program showing how to create a peer-to-peer connection between three systems? I know how to write a socket program for the client-server model and am clear on the concept, but creating a peer-to-peer architecture sounds complex for me to understand. I also referred to this thread: developing peer to peer in java. The second commenter says: "To make peer2peer app each client opens server socket too. When client A wishes to connect to client B it just connects to its socket." I need some more samples and an explanation of how a peer-to-peer Java socket program works. I don't want any external API like JXTA to do this task. I need a clear picture of how it works, along with an example.
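
    A minimal sketch of what that comment describes: one program, run once per peer with different ports, in which every instance both listens (server role) and dials out (client role). Ports and the one-line message protocol are invented for illustration:

        import java.io.*;
        import java.net.*;

        public class Peer {
            public static void main(String[] args) throws IOException {
                final int myPort = Integer.parseInt(args[0]);

                // Server role: accept connections from any other peer.
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            ServerSocket server = new ServerSocket(myPort);
                            while (true) {
                                Socket peer = server.accept();
                                BufferedReader in = new BufferedReader(
                                        new InputStreamReader(peer.getInputStream()));
                                System.out.println("received: " + in.readLine());
                                peer.close();
                            }
                        } catch (IOException e) { e.printStackTrace(); }
                    }
                }).start();

                // Client role: connect out to another peer, if one was named.
                if (args.length >= 2) {
                    Socket out = new Socket("localhost", Integer.parseInt(args[1]));
                    PrintWriter writer = new PrintWriter(out.getOutputStream(), true);
                    writer.println("hello from port " + myPort);
                    out.close();
                }
            }
        }

    Running java Peer 9001, then java Peer 9002 9001 and java Peer 9003 9001, gives three processes that can all accept and all initiate - which is the whole architectural difference from client-server: no third program, just the same program N times.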

  • Magento - save quote_address on cart addAction using observer

    - by Byron
    I am writing a custom Magento module that gives rate quotes (based on an external API) on the product page. After I get the response on the product page, it adds some of the info from the response into the product view's form. The goal is to save the address (as well as some other things, but those can stay in the session for now). The address needs to be saved to the quote, so that the checkout (onestepcheckout, free version) will automatically fill those values in (city, state, zip, country, shipping method [of which there are 3]) and load the rate quote via AJAX (which it's already doing). I have gone about this by using an observer watching for the cart events, filling in the address, and saving it. But when I end up on the cart page or checkout page, ~4/5 times the data is lost, and the SQL table shows that the quote_address is getting saved with no address info, even though there is an explicit save. The observer events I've used are:

        checkout_cart_update_item_complete
        checkout_cart_product_add_after

    The saving code is (I've tried this with the address, quote, and cart all not calling save(), with the same results, as well as not calling setQuote()):

        $quote->getShippingAddress()
            ->setCountryId($params['estimate_to_country'])
            ->setCity($params['estimate_to_city'])
            ->setPostcode($params['estimate_to_zip_code'])
            ->setRegionId($params['estimate_to_state_code'])
            ->setRegion($params['estimate_to_state'])
            ->setCollectShippingRates(true)
            ->setShippingMethod($params['carrier_method'])
            ->setQuote($quote)
            ->save();

        $quote->getBillingAddress()
            ->setCountryId($params['estimate_to_country'])
            ->setCity($params['estimate_to_city'])
            ->setPostcode($params['estimate_to_zip_code'])
            ->setRegionId($params['estimate_to_state_code'])
            ->setRegion($params['estimate_to_state'])
            ->setCollectShippingRates(true)
            ->setShippingMethod($params['carrier_method'])
            ->setQuote($quote)
            ->save();

        $quote->save();
        $cart->save();

    I imagine there's a good chance this is not the best place to save this info, but I am not quite sure. I was wondering if there is a good place to do this without overriding a whole controller to save the address on the add-to-cart action. Thanks so much; let me know if I need to clarify.
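
    A hedged guess at the disappearing data: both of those events fire before Mage_Checkout_Model_Cart::save() finishes its work, so the cart's own save later in the same request can rewrite the address rows. Listening on checkout_cart_save_after instead (so your write happens last) and recollecting totals may behave better; a sketch against Magento 1's event names:

        // Observer method wired to the checkout_cart_save_after event
        public function saveEstimateAddress(Varien_Event_Observer $observer)
        {
            $quote  = $observer->getEvent()->getCart()->getQuote();
            $params = Mage::app()->getRequest()->getParams();

            $quote->getShippingAddress()
                ->setCountryId($params['estimate_to_country'])
                ->setCity($params['estimate_to_city'])
                ->setPostcode($params['estimate_to_zip_code'])
                ->setRegionId($params['estimate_to_state_code'])
                ->setRegion($params['estimate_to_state'])
                ->setShippingMethod($params['carrier_method'])
                ->setCollectShippingRates(true);

            $quote->collectTotals()->save();   // one save, after everything else
        }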

  • ObjectDataSource firing twice, or on its own

    - by LoveMeSomeCode
    Can someone explain exactly how/when an ObjectDataSource fires? I have an ASP.NET page with a GridView, which references an ODS. I put a breakpoint in the method the ODS is using, and noticed it was firing twice. I looked into the code and the answer seemed obvious at first. I had:

        Page_Load()
        {
            if (!Page.IsPostBack)
            {
                MethodA();
                MethodB();
            }
        }

    where MethodA and MethodB were both eventually calling gv.DataBind(). This made sense, because I assume each call to GridView.DataBind() results in asking the ODS for data, and therefore running my data access method. The weird thing is that when I comment out the call to MethodA, it still fires twice. Checking the call stack shows the method being run first as a result of MethodB, and then again, with no trail except [External Code]. This mystery load does not happen when I let MethodA and MethodB both execute. Any idea what's going on here? Any idea what other code I might have that is asking the ODS for data? I'm starting to think all these 'no code' data controls are more obfuscation and BS than they're worth.
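
    For what it's worth, the trail-less second call is consistent with the framework's automatic binding: a control whose DataSourceID is set data-binds itself during PreRender if nothing has already bound it, on top of any manual DataBind(). The declarative shape that avoids manual calls entirely looks like this (control and type names invented):

        <%-- The grid binds itself during PreRender; no gv.DataBind() needed. --%>
        <asp:GridView ID="gv" runat="server" DataSourceID="ods1" />
        <asp:ObjectDataSource ID="ods1" runat="server"
            TypeName="MyApp.Data.Repo" SelectMethod="GetItems" />

    Mixing the two styles (DataSourceID plus explicit DataBind() calls in Page_Load) is the usual recipe for double selects.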

  • How to check if a thread is busy in C#?

    - by Sam
    I have a Windows Forms UI running on a thread, Thread1. I have another thread, Thread2, that gets tons of data via external events and needs to update the Windows UI. (It actually updates multiple UI threads.) I have a third thread, Thread3, that I use as a buffer thread between Thread1 and Thread2 so that Thread2 can continue to update other threads (via the same method). My buffer thread, Thread3, looks like this:

        public class ThreadBuffer
        {
            public ThreadBuffer(frmUI form, CustomArgs e)
            {
                form.Invoke((MethodInvoker)delegate { form.UpdateUI(e); });
            }
        }

    What I would like is for my ThreadBuffer to check whether my form is currently busy doing previous updates. If it is, I'd like it to wait until the form frees up and then invoke UpdateUI(e). I was thinking about either:

    a) Busy-wait until the form is free:

        //Pseudocode
        while (form == busy)
        {
            // Do nothing;
        }
        form.Invoke((MethodInvoker)delegate { form.UpdateUI(e); });

    How would I check form == busy? Also, I am not sure that this is a good approach.

    b) Create an event in form1 that will notify the ThreadBuffer that it is ready to process:

        // Pseudocode
        List<CustomArgs> elist = new List<CustomArgs>();

        public ThreadBuffer(frmUI form, CustomArgs e)
        {
            form.OnFreedUp += form_OnFreedUp;
            elist.Add(e);
        }

        private void form_OnFreedUp()
        {
            if (elist.Count == 0) return;
            form.Invoke((MethodInvoker)delegate { form.UpdateUI(elist[0]); });
            elist.Remove(elist[0]);
        }

    In this case, how would I write an event that notifies that the form is free? And c) any other ideas?
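
    As a third option: rather than asking whether the form is busy, let the UI thread drain a queue at its own pace and post with BeginInvoke, which never blocks the producer. A minimal sketch, assuming .NET 4's ConcurrentQueue is available:

        using System.Collections.Concurrent;
        using System.Windows.Forms;

        public class ThreadBuffer
        {
            private readonly ConcurrentQueue<CustomArgs> pending =
                new ConcurrentQueue<CustomArgs>();

            public void Post(frmUI form, CustomArgs e)
            {
                pending.Enqueue(e);
                // BeginInvoke queues work on the UI message loop and returns
                // immediately; "busy" just means the loop hasn't got here yet.
                form.BeginInvoke((MethodInvoker)delegate
                {
                    CustomArgs next;
                    while (pending.TryDequeue(out next))
                        form.UpdateUI(next);
                });
            }
        }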

  • Batch file recursively find files and rar them

    - by b1gf00t
    Hi there, I have a parent directory which hosts many sub-directories, and in every sub-directory there are .mpg movies. Some of the directories might contain one or more .mpg movies. I would like to automate the process below, which I have been doing manually.

    Step One: If the directory has more than one .mpg file, I create separate directories for each and move each file into its own directory, naming the directory as per the name of the file.

    Step Two: I rar each video file in its directory as per one of my profiles; by that it splits the movie into 50MB parts, tests the archive, deletes the source, and instructs WinRAR to wait if another rar is executing. I am doing this so I can queue jobs manually.

    Step Three: After having all the rars in the sub-directories, I start creating a checksum for every directory, leaving checksum.sfv in every directory.

    Step Four: I copy the parent folder and its sub-directories to my external drives.

    I was hoping that someone could assist me in creating a script. I was able to automate the process of creating directories as per the name of the file, and moving the file. However, I never succeeded in automating Step Two. I am using the below software:

        WinRAR from rarlabs
        exf from exactfile

    Appreciate your assistance.
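
    For Step Two, a hedged batch sketch, assuming Rar.exe (WinRAR's command-line tool) is on the PATH; -v50m splits into 50MB volumes, -t tests after archiving, and -df deletes the source files on success. The queue-and-wait behavior from the GUI profile isn't reproduced here:

        @echo off
        rem Rar every .mpg found anywhere under the parent, next to itself.
        for /r "D:\Parent" %%F in (*.mpg) do (
            rar a -v50m -t -df "%%~dpnF.rar" "%%F"
        )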

  • Data Integration Solution?

    - by Shlomo
    At my company we have a number of data feeds and processing jobs that run on any given day. The number of feeds and processing steps is starting to outgrow our ability to manage it ad hoc, as it is managed currently. Is there a good solution that helps with logging and managing/scheduling dependencies? For example:

        A: When file x is FTP-dropped into directory D1, kick off processing step B
        B: Load flat file into DB1
        C: When file y is FTP-dropped into directory D2, kick off processing step D
        D: Load flat file into DB11
        E: When B and D are done, churn through the data, and load new data into DB111
        F: When step E is done, launch application process P
        G: etc...

    I want those steps to run at the appropriate times. Not to mention: if B fails, there's no reason to run steps E & F, but I could still run C & D. When I re-run B successfully, it should trigger just E & F to re-run, not C & D. We're a .NET/C#/SQL Server shop, and I'm already familiar with SSIS. Is that really the best there is? It manages steps well, but not external dependencies or logging. Open source (.NET) preferred, but not required.
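
    Purely to pin down the rerun semantics being asked for (a toy, not a product suggestion): modeling the steps as a graph where a step is runnable only when its parents have succeeded captures both "skip E & F when B fails" and "re-running B re-triggers just E & F":

        using System;
        using System.Collections.Generic;

        class Step
        {
            public string Name;
            public List<Step> DependsOn = new List<Step>();
            public bool Succeeded;

            // Runnable only when every upstream step has succeeded.
            public bool Ready()
            {
                foreach (var p in DependsOn)
                    if (!p.Succeeded) return false;
                return true;
            }
        }

        class Demo
        {
            static void Main()
            {
                var b = new Step { Name = "B" };
                var d = new Step { Name = "D", Succeeded = true };
                var e = new Step { Name = "E" };
                e.DependsOn.Add(b);
                e.DependsOn.Add(d);
                Console.WriteLine("E ready? " + e.Ready());  // False until B succeeds
                b.Succeeded = true;                          // "re-run B"
                Console.WriteLine("E ready? " + e.Ready());  // True; C & D untouched
            }
        }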

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE DATE(timestamp) >= '20100101'
          AND DATE(timestamp) < '20110101'
        GROUP BY day;

    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

        GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
          ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
                Sort Key: (date("timestamp"))
                Sort Method:  external merge  Disk: 372496kB
                ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                      Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried to index on the date aggregate:

        CREATE INDEX ON actions (DATE(timestamp));

    which cuts the runtime by about 50%:

        HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
          ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
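
    Two standard next steps, sketched (assuming the column really is a timestamp): make the filter sargable so a plain index on the raw column can serve the range, and give the aggregate enough memory to avoid the on-disk sort visible in the first plan:

        -- A btree on the raw column...
        CREATE INDEX actions_timestamp_idx ON actions (timestamp);

        -- ...is usable when the WHERE clause doesn't wrap the column in DATE():
        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE timestamp >= '2010-01-01'
          AND timestamp <  '2011-01-01'
        GROUP BY day;

        -- For this session, let the sort/hash happen in memory:
        SET work_mem = '512MB';

    Since the range covers most of the table, the planner may still prefer a sequential scan - which is fine; the 372MB external merge sort in the first plan is the expensive part that work_mem addresses.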

  • PDF reviewer in C# (ASP.NET/Silverlight?)

    - by Anders Holmström
    Hi. I'm essentially planning to mimic the comment functions of PDF files, but online. That is, a user should be able to log in and upload a PDF file, and then numerous different users should be able to add comments etc. to this same file (and view the file, with comments, online). External libraries are OK. Free obviously preferred, but commercial ones are fine if they provide a lot of the needed functionality. Note that this is meant to be used in a commercial environment. Comments don't necessarily need to be exportable from the site - i.e. if the comments are just put as a layer on top of a PDF file (and not in the actual file) that's OK. But obviously the more export functionality the better. I have looked at a few libraries (using the related questions and Google), and while I find some that seem to do sort of what I want, I'm not sure they are the bee's knees, plus I would like to do as much myself as possible. The three basic approaches I've thought of are:

    1. Use some sort of native PDF viewing and then just smack down a layer on top of it where you can move around comments etc.
    2. Convert PDFs to HTML and work from there. The problem here is that it would either require proper PDFs (e.g. non-scanned) or really good OCR, which seems a bit tedious.
    3. Convert PDFs to images and work from there. I'm afraid this will create massive images, however - we're talking PDFs that can be hundreds of pages. One option would of course be to just display one PDF page (image) at a time.

    And last - should I look at Silverlight for this or go with ASP.NET? Ideas and input concerning this project are much appreciated.

  • Google Analytics event not firing

    - by yar1
    I am not getting any event data in GA. I installed the Google Analytics Debugger Chrome extension and I see nothing happening (same goes when looking at the Network panel in developer tools). I Googled it and read many (many) other answers, and it looks like I'm doing things right. Page views, etc. are registering correctly... I have this code as the last thing before my closing body tag:

        <script>
        (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
        (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
        m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
        })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

        ga('create', 'UA-MYREALCODE', 'mybna.net');
        ga('send', 'pageview');

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-MYREALCODE']);
        _gaq.push(['_trackPageview']);
        </script>

    My event handlers are done using jQuery, all inside an external js file, loaded before the closing body tag. For example:

        $(function () {
            $('#show-less').click(function (e) {
                pbr.showHideMore(e);
                _gaq.push(['_trackEvent', 'ShowMore', 'Hide', 'top button']);
            });
        });

    Any ideas anyone?
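
    The giveaway here: the page loads analytics.js (the ga() snippet), but the click handlers push events onto _gaq, which only the old ga.js ever processed. Since ga.js is never loaded, those pushes land in a plain array and go nowhere - page views work because they are sent through ga(). A sketch of the handler using the matching API:

        $(function () {
            $('#show-less').click(function (e) {
                pbr.showHideMore(e);
                // analytics.js syntax: ga('send', 'event', category, action, label)
                ga('send', 'event', 'ShowMore', 'Hide', 'top button');
            });
        });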

  • Java - multithreaded access to a local value store which is periodically cleared

    - by Telax
    I'm hoping for some advice or suggestions on how best to handle multi-threaded access to a value store. My local value storage is designed to hold onto objects which are currently in use. If an object is not in use then it is removed from the store. A value is pumped into my store via thread1, its entry into the store is announced to listeners, and the value is stored. Values coming in on thread1 will either be totally new values or updates for existing values. A timer is used to periodically remove any value from the store which is not currently in use, so all that remains of such a value is its ID, held locally by an intermediary. Now, an active element on thread2 may wake up and try to access a set of values by passing a set of value IDs which it knows about. Some values will be stored already (great) and some may not (sadface). Those values which are not already stored will be retrieved from an external source. My main issue is that items which have not already been stored and are currently being queried for may arrive in on thread1 before the query is complete. I'd like to try and avoid locking access to the store whilst a query is being made, as it may take some time.
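
    One widely used shape for exactly this (the "memoizer" pattern from Java Concurrency in Practice, sketched here with invented Value/fetchExternal stand-ins): key a ConcurrentHashMap by ID and store FutureTasks rather than raw values. A slow external fetch then blocks only the callers of that one ID, and thread1's arrivals never contend for a store-wide lock:

        import java.util.concurrent.*;

        public class ValueStore {
            public static class Value {}   // stand-in for the real value type

            private final ConcurrentMap<String, FutureTask<Value>> store =
                    new ConcurrentHashMap<String, FutureTask<Value>>();

            /** Reader path (thread2): the external fetch happens at most once per ID. */
            public Value get(final String id) throws Exception {
                FutureTask<Value> task = store.get(id);
                if (task == null) {
                    FutureTask<Value> fetch = new FutureTask<Value>(new Callable<Value>() {
                        public Value call() { return fetchExternal(id); }
                    });
                    FutureTask<Value> raced = store.putIfAbsent(id, fetch);
                    if (raced == null) { task = fetch; fetch.run(); } // we won; do the fetch
                    else               { task = raced; }              // someone else is on it
                }
                return task.get();  // blocks only waiters on this one ID
            }

            /** Writer path (thread1): new values and updates land pre-completed. */
            public void put(String id, final Value v) {
                FutureTask<Value> done = new FutureTask<Value>(new Callable<Value>() {
                    public Value call() { return v; }
                });
                done.run();           // completes immediately
                store.put(id, done);  // replaces any stale entry for this ID
            }

            private Value fetchExternal(String id) { return new Value(); } // stand-in
        }

    The eviction timer then just removes entries whose values are not in use; an evicted-then-requested ID simply takes the fetch path again.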

  • PHP is truncating MSSQL Blob data (4096b), even after setting INI values. Am I missing one?

    - by Dutchie432
    I am writing a PHP script that goes through a table and extracts the varbinary(max) blob data from each record into an external file. The code is working perfectly, except when a file is over 4096 bytes - the data is truncated at exactly 4096. I've modified the values for mssql.textlimit, mssql.textsize, and odbc.defaultlrl without any success. Am I missing something here?

        <?php
        ini_set("mssql.textlimit", "2147483647");
        ini_set("mssql.textsize", "2147483647");
        ini_set("odbc.defaultlrl", "0");

        include_once('common.php');
        $id = $_REQUEST['i'];

        $q = odbc_exec($connect, "Select id, filename, documentBin from Projectdocuments where id = $id");
        if (odbc_fetch_row($q)) {
            echo "Trying $filename ... ";
            $fileName = "projectPhotos/docs/" . odbc_result($q, "filename");
            if (file_exists($fileName)) {
                unlink($fileName);
            }
            if ($fh = fopen($fileName, "wb")) {
                $binData = odbc_result($q, "documentBin");
                fwrite($fh, $binData);
                fclose($fh);
                $size = filesize($fileName);
                echo ("$fileName<br />Done ($size)<br><br>");
            } else {
                echo ("$fileName Failed<br>");
            }
        }
        ?>

    Output:

        Trying ... projectPhotos/docs/file1.pdf Done (4096)
        Trying ... projectPhotos/docs/file2.zip Done (4096)
        Trying ... projectPhotos/docs/file3.pdf Done (4096)
        etc.
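
    A hedged guess at the missing piece: the script reads through the ODBC extension, so the mssql.* settings never apply, and the 4096 cap matches odbc.defaultlrl's default. PHP lets you raise it per result set, after odbc_exec(), which tends to be more reliable than ini_set():

        $q = odbc_exec($connect, "Select id, filename, documentBin from Projectdocuments where id = $id");
        odbc_binmode($q, ODBC_BINMODE_RETURN);   // return binary data as-is
        odbc_longreadlen($q, 2147483647);        // lift the 4096-byte long-column cap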

  • Automatically Persisting a Complex Java Object

    - by VeeArr
    For a project I am working on, I need to persist a number of POJOs to a database. The POJOs' class definitions are sometimes highly nested, but they should flatten OK, as the nesting is tree-like and contains no cycles (and the base elements are eventually primitives/Strings). It is preferred that the solution create one table per data type and that the tables have one field per primitive member in the POJO. Subclassing and similar problems are not issues for this particular project. Does anybody know of any existing solutions that can:

    1. Automatically generate a CREATE TABLE definition from the class definition
    2. Automatically generate a query to persist an object to the database, given an instance of the object
    3. Automatically generate a query to retrieve an object from the database and return it as a POJO, given a key

    Solutions that can do this with minimal modifications/annotations to the class files and minimal external configuration are preferred. Example:

        //Class to be persisted
        class TypeA {
            String guid;
            long timestamp;
            TypeB data1;
            TypeC data2;
        }

        class TypeB {
            int id;
            int someData;
        }

        class TypeC {
            int id;
            int otherData;
        }

    could map to:

        CREATE TABLE TypeA (
            guid CHAR(255),
            timestamp BIGINT,
            data1_id INT,
            data1_someData INT,
            data2_id INT,
            data2_otherData INT
        );
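
    In case it helps gauge the effort of rolling this by hand: the CREATE TABLE half is a small reflection walk. A sketch, with the type mapping abbreviated, no cycle detection, and flattening by prefix exactly as in the example:

        import java.lang.reflect.Field;

        public class TableGen {
            static String sqlType(Class<?> t) {
                if (t == int.class)    return "INT";
                if (t == long.class)   return "BIGINT";
                if (t == String.class) return "CHAR(255)";
                throw new IllegalArgumentException("unmapped: " + t.getName());
            }

            static void columns(Class<?> type, String prefix, StringBuilder out) {
                for (Field f : type.getDeclaredFields()) {
                    Class<?> ft = f.getType();
                    if (ft.isPrimitive() || ft == String.class) {
                        out.append("    ").append(prefix).append(f.getName())
                           .append(" ").append(sqlType(ft)).append(",\n");
                    } else {
                        // Nested POJO: flatten with a name prefix, as in data1_id.
                        columns(ft, prefix + f.getName() + "_", out);
                    }
                }
            }

            public static String createTable(Class<?> type) {
                StringBuilder sb = new StringBuilder("CREATE TABLE " + type.getSimpleName() + " (\n");
                columns(type, "", sb);
                sb.setLength(sb.length() - 2);  // trim the trailing comma
                return sb.append("\n);").toString();
            }
        }

    TableGen.createTable(TypeA.class) yields the TypeA table above; the persist and retrieve halves are the same walk writing INSERT/SELECT statements instead.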

  • Application crash

    - by Ovi
    I have an application that, as any other app, crashes once in a while for various reasons. When it crashes, it does so gracefully and the users get a nice message about the crash. At the same time the crash is reported to the server for analysis so it can be fixed in future versions. However, I would like the app to keep working through the crash. What that means is that I would like to run the forms in an 'atomic' way: if one goes down, it doesn't take down the entire app. The users should just need to start over the work done in that particular form. Is this something that can be done through architecture? Or do the newer framework versions have something to aid this? The application is built mostly in C# on the 3.5 framework, but it also uses some external references, some COMs, and web service references. I am not interested in an answer like 'well, fix the crashes'. My team and the testing team are working round the clock on this.
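
    One building block WinForms does provide: unhandled UI-thread exceptions can be routed to Application.ThreadException, where the faulting form can be closed while the rest of the app keeps running. A sketch (MainForm and ReportToServer are stand-ins); note this cannot contain everything - failures inside COM components can still take the whole process down:

        using System;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
                Application.ThreadException += delegate(object sender, System.Threading.ThreadExceptionEventArgs e)
                {
                    ReportToServer(e.Exception);      // existing crash reporting
                    Form bad = Form.ActiveForm;       // best guess at the faulting form
                    if (bad != null && !(bad is MainForm))
                        bad.Close();                  // drop the form, keep the app
                    else
                        MessageBox.Show("Something went wrong: " + e.Exception.Message);
                };
                Application.Run(new MainForm());
            }

            static void ReportToServer(Exception ex) { /* send to the server */ }
        }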

  • Javascript - Wait for function to return

    - by LoadData
    So, I am working on a project which requires me to call a function to get data from an external source. The issue I am having: I call the function, but the code after the function call continues before the function has returned a value. Here is the function:

        function getData() {
            var myVar;
            var xmlLoc = $.get("http://ec.urbentom.co.uk/StudentAppData.xml", function(data) {
                $xml = $(data);
                myVar = $xml;
                console.log(myVar);
                console.log(String($xml));
                localStorage.setItem("Data", $xml);
                console.log(String(localStorage.getItem("Data")));
                return myVar;
            });
            return myVar;
            console.log("Does this continue");
        }

    And here is where it is called:

        $(document).on("pageshow", "#Information", function() {
            $xml = $(getData()); //Here is the function call
            console.log($xml);   //However, it will instantly go to this line before 'getData' has returned a value.
            $xml.find('AllData').each(function() {
                $(this).find('item').each(function() {
                    if ($(this).find('Category').text() == "Facilities") {
                        console.log($(this).find('Title').text());
                        //Do stuff here
                    }
                    else if ($(this).find('Category').text() == "Contacts" || $(this).find('Category').text() == "Information") {
                        console.log($(this).find('Title').text());
                        //Do stuff here too
                    }
                });
                $('#informationList').html(output).listview().listview("refresh");
                console.log("Finished");
            });
        });

    Right now, I'm unsure of why it is not working. My guess is that it is because I am calling a function within a function. Does anyone have any ideas on how this issue can be fixed?
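
    To name the underlying issue: $.get is asynchronous, so getData() returns (with myVar still undefined) before the response ever arrives; the return inside the callback goes nowhere, and the console.log after return is unreachable. The usual reshape is to pass the continuation in as a callback (or use the deferred object that $.get returns). A minimal sketch:

        function getData(done) {
            $.get("http://ec.urbentom.co.uk/StudentAppData.xml", function (data) {
                done($(data));   // hand the parsed XML to the caller once it exists
            });
        }

        $(document).on("pageshow", "#Information", function () {
            getData(function ($xml) {
                // Everything that needs $xml moves inside this callback.
                console.log($xml);
                $xml.find('AllData').each(function () {
                    // ...existing per-item logic...
                });
            });
        });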

  • Populate an Object Model from a DataTable (C# 3.0)

    - by Newbie
    I have a situation where I am getting data from some external sources and populating it into a DataTable. The data looks like this:

        DATE       WEEK  FACTOR
        3/26/2010  1     RM_GLOBAL_EQUITY
        3/26/2010  1     RM_GLOBAL_GROWTH
        3/26/2010  2     RM_GLOBAL_VALUE
        3/26/2010  2     RM_GLOBAL_SIZE
        3/26/2010  2     RM_GLOBAL_MOMENTUM
        3/26/2010  3     RM_GLOBAL_HIST_BETA

    I have an object model like this:

        public class FactorReturn
        {
            public int WeekNo { get; set; }
            public DateTime WeekDate { get; set; }
            public Dictionary<string, decimal> FactorCollection { get; set; }
        }

    As can be seen, the Date field is always constant, and a single (unique) week can have multiple FACTORS. I.e., for date 3/26/2010, week no. 1 has two FACTORS (RM_GLOBAL_EQUITY and RM_GLOBAL_GROWTH). Similarly, for date 3/26/2010, week no. 2 has three FACTORS (RM_GLOBAL_VALUE, RM_GLOBAL_SIZE and RM_GLOBAL_MOMENTUM). Now we need to populate this data into our object model. The final output will be:

        WeekDate: 3/26/2010
            WeekNo: 1
                FactorCollection: RM_GLOBAL_EQUITY
                FactorCollection: RM_GLOBAL_GROWTH
            WeekNo: 2
                FactorCollection: RM_GLOBAL_VALUE
                FactorCollection: RM_GLOBAL_SIZE
                FactorCollection: RM_GLOBAL_MOMENTUM
            WeekNo: 3
                FactorCollection: RM_GLOBAL_HIST_BETA

    That is, overall only one single collection, where the factor type varies depending on week numbers. I have tried but without success; nothing works. Could you please help me? I feel it is very tough. I am using C# 3.0. Thanks.
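
    A LINQ GroupBy sketch of that mapping (it needs the System.Data.DataSetExtensions reference for AsEnumerable, assumes WEEK is an int column and DATE a string column, and defaults each factor's value to 0m since the feed carries no value column):

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        static List<FactorReturn> Populate(DataTable table)
        {
            return table.AsEnumerable()
                .GroupBy(r => r.Field<int>("WEEK"))        // one FactorReturn per week
                .Select(g => new FactorReturn
                {
                    WeekNo = g.Key,
                    WeekDate = DateTime.Parse(g.First().Field<string>("DATE")),
                    FactorCollection = g.ToDictionary(
                        r => r.Field<string>("FACTOR"),    // factor name as the key
                        r => 0m)                           // placeholder value
                })
                .ToList();
        }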

  • VB.Net Memory Issue

    - by Skulmuk
    We have an application that has some interesting memory usage issues. When it first opens, the program uses around 50-60MB of memory. This stays consistent on 32-bit machines. On 64-bit machines, however, re-activating the form in any way (clicking, dragging, alt-tabbing, etc.) adds around another 50MB to its memory usage. It repeats this process several times before resetting back to around 45MB, at which point the cycle begins again. I've done some research and a lot of people have said that VB in general has pretty poor garbage collection, which could be affecting the software in some way. However, I've yet to find a solution. There are no events fired when the application is activated (as shown by the 32-bit usage) - the application is merely sitting awaiting the user's actions. At load, the system pulls some data into a tree view, but that's the only external connection, and it only re-fires the routine when the user makes a change to something and saves the change. Has anyone else experienced anything this strange, and if so, does anyone know what might fix it? It seems strange that it only occurs on x64 systems. Thanks.
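
    The climb-and-reset cycle reads like ordinary collector laziness (the 64-bit runtime tends to let garbage accumulate further before collecting) rather than a leak. One way to check, sketched as a diagnostic only - forced collections shouldn't ship in production code:

        ' If memory falls back after this runs, the growth was collectable
        ' garbage, not a leak; x64 simply collects later than x86.
        Private Sub Form1_Activated(ByVal sender As Object, ByVal e As EventArgs) _
                Handles Me.Activated
            GC.Collect()
            GC.WaitForPendingFinalizers()
            Debug.WriteLine("Managed heap: " & GC.GetTotalMemory(True))
        End Sub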

  • Travelling software. Is that a concept?

    - by Bubba88
    Hi! This is barely a sensible question. I would like to ask if there exists a program which is intended to travel (for example, following some physical forces) across the planet, possibly occupying and freeing computational resources/nodes. Literally, that means that some agent-based system is just regularly changing its location and (inevitably, to some extent) configuration. An example would be: suppose you have external sensors, and free computers - nodes - across the space; would it make sense to self-replicate agents to follow the initializers from sensors, but in such a restrictive manner that the computation is only localized where the physical business is going on? I want to stress that this question is just for 'theoretical' fun, because I cannot see any practical benefits of the restrictions mentioned, apart from the optimization of 'outdated' (outplaced?) agent disposal. But maybe it could be of some interest. Thank you!

    EDIT: It's obvious that a virus is a fitting example, although the deletion of such agents is rarely a concern of the developers. More precisely, I'm interested in 'travelling' software - that is, where the count (or at least the order) of the agents is kind of constant, and it's just the whole system that travels.

  • recursive wget with hotlinked requisites

    - by dongle
    I often use wget to mirror very large websites. Sites that contain hotlinked content (be it images, video, css, js) pose a problem, as I seem unable to specify that I would like wget to grab page requisites that are on other hosts, without having the crawl also follow hyperlinks to other hosts. For example, let's look at this page: https://dl.dropbox.com/u/11471672/wget-all-the-things.html. Let's pretend that this is a large site that I would like to completely mirror, including all page requisites - including those that are hotlinked.

        wget -e robots=off -r -l inf -pk

    ^^ gets everything but the hotlinked image

        wget -e robots=off -r -l inf -pk -H

    ^^ gets everything, including the hotlinked image, but goes wildly out of control, proceeding to download the entire web

        wget -e robots=off -r -l inf -pk -H --ignore-tags=a

    ^^ gets the first page, including both the hotlinked and local images, does not follow the hyperlink to the site outside of scope, but obviously also does not follow the hyperlink to the next page of the site.

    I know that there are various other tools and methods of accomplishing this (HTTrack and Heritrix allow the user to make a distinction between hotlinked content on other hosts and hyperlinks to other hosts), but I'd like to see if this is possible with wget. Ideally this would not be done in post-processing, as I would like the external content, requests, and headers to be included in the WARC file I'm outputting.
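
    One workaround sometimes used, sketched here with example.com as a stand-in (it costs a second pass): crawl same-host only first, then revisit each fetched page with -p -H and no -r, which grabs cross-host requisites without ever following foreign hyperlinks:

        # Pass 1: recurse the site itself, no host spanning; keep the log.
        wget -e robots=off -r -l inf -pk -o crawl.log https://example.com/

        # Pass 2: for each page URL seen in pass 1, fetch that page plus its
        # requisites, spanning hosts. Without -r, hyperlinks are not followed.
        grep -o 'https\?://example\.com[^ ]*' crawl.log | sort -u |
        while read url; do
            wget -e robots=off -nc -p -H -k "$url"
        done

    The URL extraction from the log is approximate; a list built from the pass-1 directory tree works just as well.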
