Search Results

Search found 69968 results on 2799 pages for 'real time updates'.

  • What is a practical use for a closure in JavaScript?

    - by alex
    I'm trying my hardest to wrap my head around JavaScript's closures. I get that by returning an inner function, it will have access to any variable defined in its immediate parent. Where would this be useful to me? Perhaps I haven't quite got my head around it yet. Most of the examples I have seen online don't provide any real-world code, just vague examples. Can someone show me a real-world use of a closure? Is this one, for example?

        var warnUser = function (msg) {
            var calledCount = 0;
            return function() {
                calledCount++;
                alert(msg + '\nYou have been warned ' + calledCount + ' times.');
            };
        };
        var warnForTamper = warnUser('You can not tamper with our HTML.');
        warnForTamper();
        warnForTamper();

    Thanks
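
    One practical pattern this enables is a "function factory" that keeps private state between calls. The following sketch (written in Python purely for illustration, not taken from the question) re-expresses the warnUser example above: the inner function closes over the message and its own call counter, so no global state is needed.

        # Hypothetical Python analogue of the warnUser example above:
        # the inner function "closes over" msg and a mutable counter cell,
        # so each warner keeps its own private call count.
        def make_warner(msg):
            called_count = [0]          # mutable cell captured by the closure

            def warn():
                called_count[0] += 1
                print("%s\nYou have been warned %d times." % (msg, called_count[0]))

            return warn

        warn_for_tamper = make_warner("You can not tamper with our HTML.")
        warn_for_tamper()   # ... warned 1 times.
        warn_for_tamper()   # ... warned 2 times.

    The counter lives only inside the closure, so there is no global variable to clash with other warners.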

    Read the article

  • What Can A 'TreeDict' (Or Treemap) Be Used For In Practice?

    - by Seun Osewa
    I'm developing a 'TreeDict' class in Python. This is basically a dict that allows you to retrieve its key-value pairs in sorted order, just like the TreeMap collection class in Java. I've implemented some functionality based on the way unique indexes in relational databases can be used, e.g. functions to let you retrieve values corresponding to a range of keys, keys greater than, less than or equal to a particular value in sorted order, strings or tuples that have a specific prefix in sorted order, etc. Unfortunately, I can't think of any real-life problem that will require a class like this. I suspect that the reason we don't have sorted dicts in Python is that in practice they aren't required often enough to be worth it, but I want to be proved wrong. Can you think of any specific applications of a 'TreeDict'? Any real-life problem that would be best solved by this data structure? I just want to know for sure whether this is worth it.
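
    One family of real-life answers is range queries over keys with a natural order: timestamps, scores, version numbers, string prefixes. A minimal sketch of the kind of query a sorted mapping makes cheap, using only the stdlib bisect module and an invented event-log example (a TreeDict would maintain the sorted index for you instead of rebuilding it):

        # Hypothetical use case: event timestamps -> payloads, where you repeatedly
        # need "all events between t1 and t2, in order".
        import bisect

        events = {1000: "login", 1005: "click", 1042: "logout", 1300: "login"}
        keys = sorted(events)                      # the sorted index a TreeDict keeps up to date

        def events_between(t1, t2):
            lo = bisect.bisect_left(keys, t1)
            hi = bisect.bisect_right(keys, t2)
            return [(k, events[k]) for k in keys[lo:hi]]

        print(events_between(1000, 1100))          # [(1000, 'login'), (1005, 'click'), (1042, 'logout')]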

    Read the article

  • MATLAB query about for loop, reading in data and plotting

    - by mp7
    Hi there, I am a complete novice at using MATLAB and am trying to work out if there is a way of optimising my code. Essentially I have data from model outputs and I need to plot them using MATLAB. In addition I have reference data (with 95% confidence intervals) which I plot on the same graph to get a visual idea of how close the model outputs and reference data are. In terms of the model outputs I have several thousand files (numbered sequentially) which I open in a loop and plot. The problem/question I have is whether I can preprocess the data and then plot later, to save time. The issue I seem to be having when I try this is that I have a legend which either does not appear or is inaccurate. My code (apologies if it is not elegant):

        fn = xlsread(['tbobserved' '.xls']);
        time = fn(:,1);
        totalreference = fn(:,4);
        totalreferencelowerci = fn(:,6);
        totalreferenceupperci = fn(:,7);

        figure
        plot(time, totalreference, '-', time, totalreferencelowerci, '--', time, totalreferenceupperci, '--');
        xlabel('Year');
        ylabel('Reference incidence per 100,000 population');
        title('Total');
        clickableLegend('Observed reference data', 'Totalreferencelowerci', 'Totalreferenceupperci', 'Location', 'BestOutside');
        xlim([1910 1970]);
        hold on

        start_sim = 10000;
        end_sim = 10005;
        h = zeros(1, 1000);

        for i = start_sim:end_sim   % is there any way of doing this earlier to save time?
            a = int2str(i);
            incidenceFile = strcat('result_', 'Sim', '_', a, 'I_byCal_total.xls');
            est_tot = importdata(incidenceFile, '\t', 1);
            cal_tot = est_tot.data;
            magnitude = 1;
            t1 = cal_tot(:,1) + 1750;
            totalmodel = cal_tot(:,3) + cal_tot(:,5);
            h(a) = plot(t1, totalmodel);
            xlim([1910 1970]);
            ylim([0 500]);
            hold all
            clickableLegend(h(a), a, 'Location', 'BestOutside')
        end

    Essentially I was hoping to have a way of reading in the data and then plotting later, i.e. optimise the code. I hope you might be able to help. Thanks. mp

    Read the article

  • Is it possible to upload directly to a remote server using SFTP in ASP.NET MVC?

    - by DucDigital
    Hi! I am currently developing something using ASP.NET MVC; I'm still not very experienced with it, so please help me out. I have a form for users to upload video. The current concept for getting the file onto a remote server is to upload it to the current server, then use FTP to push it to the remote server. For me, this is not very fast, since you have to upload to the current server (time x1) and then the current server pushes to the new server (time x2), so it doubles the time. So my idea is to have the user upload to the current server, and WHILE the user is uploading, the current server adds the file to the DB and also sends the file to the remote server at the same time using SFTP... Is this possible, and are there any security holes in this concept? Thank you very much.

    Read the article

  • What is the best API in any language for Audio and MIDI music application development?

    - by noneme
    What, in your opinion, is the best API to utilize in developing an application that handles both real-time MIDI and audio input and output? This would be for an application that is used in the process of making music as opposed to playing audio or MIDI files. I'm aware that this may be a subjective question, but if you know of an API that is predominantly used for these purposes, please share it. I'm agnostic about which language the API is for, and I also don't care about portability. The real concern is for an API that is well documented, well designed (e.g. thought out and intuitive to developers using it), and actively maintained. OS portability would be nice, but it is second to having an API/language that meets the previous requirements. Please note that the emphasis is not on APIs for sound synthesis or for composing music with code. It is intended for the handling of sound file and MIDI data in a real-time context.

    Read the article

  • Database triggers / referential integrity and in-memory caching

    - by Ran Biron
    Do you see database triggers / referential integrity rules being used in a way that changes actual data in the database (changing row w in table x causes a change in row y in table z)? If yes, how does this tie in with the increasing popularity of in-memory caching (memcache and friends)? After all, these actions occur inside the database, but the caching system must be aware of them in order to reflect the correct state (or at least invalidate the possibly changed state). I find it hard to believe that callbacks are implemented for such cases. Does anyone have real-world experience with such a setup, or real-world experience with considering such a setup and abandoning it? (Which way did you go? If caching, how do you enforce integrity?)

    Read the article

  • What are the best settings of the H2 database for high concurrency?

    - by dexter
    There are a lot of settings that can be used in the H2 database: AUTO_SERVER, MVCC, LOCK_MODE, FILE_LOCK and MULTI_THREADED. I wonder what combination works best for a high-concurrency setup, e.g. one thread doing INSERTs while another connection does some UPDATEs and SELECTs? I tried MVCC=TRUE;LOCK_MODE=3;FILE_LOCK=NO, but whenever I do some UPDATEs in one connection, the other connection does not see them even though I commit. By the way, the connections are from different processes, e.g. separate programs.

    Read the article

  • How to flash the JFrame on Windows taskbar when it needs attention?

    - by japflap7stackoverflow
    Hi, I'm a computer science student working on a Yahoo Messenger-like program implemented in Java. My problem is that whenever the JTextArea inside my frame receives new message updates, the user must be prompted even when the frame is minimized. Is there a workaround to make the JFrame's taskbar button blink when updates are received? In short, I badly need a way to notify the user that the frame has been updated even though it is minimized. Is that possible? Thanks for your help.

    Read the article

  • Somewhat newb question about assembly and the heap

    - by Eric M
    Ultimately I am just trying to figure out how to dynamically allocate heap memory from within assembly. If I call Linux sbrk() from assembly code, can I use the address returned as I would use the address of a statically declared chunk of memory (i.e. one in the .data section of my program listing)? I know Linux uses the hardware MMU if present, so I am not sure whether what sbrk returns is a 'raw' pointer to real RAM, or a cooked pointer to RAM that may be modified by Linux's VM system. I read this: How are sbrk/brk implemented in Linux?. I suspect I cannot use the return value from sbrk() without worry: the MMU fault on accessing a non-allocated address must cause the VM to alter the real location in RAM being addressed. Thus assembly code, not linked against libc or what-have-you, would not know the address has changed. Does this make sense, or am I out to lunch?

    Read the article

  • TFS Continuous Development major project update

    - by mamu
    We are using the TFS continuous development model: main trunk, various development branches, various release branches, all merging back to the main trunk. Now we need some major changes to our folder structure and solutions. How do you handle folder restructuring in the above model of TFS usage? Do I need to draw a line and create the new structure from the latest main trunk, lock all branches and do the updates, then create branches from the restructured new trunk? Or am I underestimating TFS: would it be able to handle major folder structure updates and propagate them over to the branches? As far as I know, if we move folders around in a branch or the trunk, TFS doesn't like it.

    Read the article

  • How to make the Eclipse IDE build faster

    - by Solitaire
    Hi all. I am using the Eclipse IDE for development, and the IDE is taking too much time to build; it hangs when the build percentage reaches 78. It shows "refreshing workspace" several times, which eats up lots of time. Please tell me how to disable the unwanted "refreshing workspace" and other time-consuming activities, and make the build faster. Thanks

    Read the article

  • Io exception: There is no process to read data written to a pipe.

    - by Srikanth
    I'm using Hibernate 3.2 + WebSphere 6.0 + Struts 1.3. After deploying, the application works fine. After some idle time, I get this type of error repeatedly and am not able to log in at all. I'm not using any connection pooling. I feel that after the idle time it is not able to connect to the database again. If I restart the server, everything works fine for some time... after that, same story. Please help me out.

    Read the article

  • How can I determine when an InnoDB table was last changed?

    - by David M
    I've had success in the past storing the (heavily) processed results of a database query in memcached, using the last update time of the underlying table(s) as part of the cache key. For MyISAM tables, that last-changed time is available in SHOW TABLE STATUS. Unfortunately, that's usually NULL for InnoDB tables. In MySQL 4.1, the ctime for an InnoDB table in its SHOW TABLE STATUS line was usually its actual last update time, but that doesn't seem to be true for MySQL 5.1. There is a DATETIME field in the table, but it only shows when a row has been modified - it cannot show the deletion time of a row that's not there anymore! So, I really cannot use MAX(update_time). Here's the really tricky part. I have a number of replicas that I do reads from. Can I figure out the state of the table in a way that doesn't rely on when the changes have actually been applied? My conclusion after working on this for a while is that it's not going to be possible to get this information as cheaply as I'd like. I'm probably going to cache data until the time that I expect the table to change (it's updated once a day), and let the query cache help out where it can.
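
    Not from the original post, but a common workaround when the storage engine won't report a change time is to maintain an explicit per-table version number in memcached, bump it from every code path that writes to the table, and fold it into the cache key. A minimal sketch, assuming the python-memcached client; the table and helper names are invented, and this only works if all writes go through code you control (which the replica setup above may complicate):

        import memcache

        mc = memcache.Client(["127.0.0.1:11211"])

        def table_version(table):
            v = mc.get("ver:%s" % table)
            if v is None:
                mc.set("ver:%s" % table, 1)
                v = 1
            return v

        def bump_table_version(table):
            # call this after any INSERT/UPDATE/DELETE against the table
            if mc.incr("ver:%s" % table) is None:
                mc.set("ver:%s" % table, 1)

        def cached_report(table, params):
            # the version number makes stale keys simply fall out of use
            key = "report:%s:v%d:%s" % (table, table_version(table), params)
            result = mc.get(key)
            if result is None:
                result = run_expensive_query(params)   # hypothetical helper
                mc.set(key, result)
            return result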

    Read the article

  • "|" pipe operator not working in command line in C++

    - by user332024
    I have a Windows application that interacts with a DB2 database. In my application I have code to execute some DB2 commands through the command line. I have used the Windows API ShellExecuteEx() to execute those DB2 commands. The following is the code written to execute a DB2 command through the command line:

        string command = "/c /w /i DB2 UNCATALOG NODE DB_DATABASE >> test.log | echo %date% %time% >> test.log";

        SHELLEXECUTEINFO shellInfo;
        ZeroMemory(&shellInfo, sizeof(shellInfo));
        shellInfo.cbSize = sizeof(shellInfo);
        shellInfo.fMask = SEE_MASK_FLAG_NO_UI | SEE_MASK_NOCLOSEPROCESS;
        shellInfo.lpFile = "db2cmd";
        shellInfo.lpParameters = command.c_str();

    The code executes successfully; however, if I look at test.log I only get the result of the DB2 command and not the date and time. As you can see in the command above, there is a "|" pipe operator and an echo command intended to log the date and time into test.log. Please note that if I execute the DB2 command separately on the command line, i.e. not through code, I am able to see the date and time logged along with the DB2 command result in test.log. The following is the full command I executed on the command line:

        DB2CMD /c /i /w DB2 UNCATALOG NODE DB_DATABASE >> test.log | echo %date% %time% >> test.log

    Since the DB2 command executes successfully through code, I believe the problem is only with the usage of the "|" pipe operator or the echo command.

    Read the article

  • Easy Python input question

    - by Josh K
    I'd like to have something similar to the following pseudo code:

        while input is not None and timer < 5:
            input = getChar()
            timer = time.time() - start
        if timer >= 5:
            print "took too long"
        else:
            print input

    Any way to do this without threading? I would like an input method that returns whatever has been entered since the last time it was called, or None (null) if nothing was entered.
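
    A minimal non-threaded sketch (not from the original post) that waits on stdin with a 5-second deadline using select(). It assumes a Unix-like terminal, since select() on sys.stdin does not work on Windows, and it reads whole lines rather than single characters:

        import select
        import sys

        def read_with_timeout(seconds=5):
            # block until stdin has data or the deadline passes
            ready, _, _ = select.select([sys.stdin], [], [], seconds)
            if ready:
                return sys.stdin.readline().rstrip("\n")
            return None

        line = read_with_timeout(5)
        if line is None:
            print("took too long")
        else:
            print(line)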

    Read the article

  • How to write a JOIN statement to combine data from disparate tables

    - by Amarundo
    I have the following 2 procedures that I use as my source for a report. As of now, I'm presenting 2 different tables in my SQL Server Reporting Services 2008 R2 report, because it doesn't let me put them together as they belong to 2 different data sets. I want to present them in a single table, but I have not been successful trying to use JOIN here. How do I do that? NOTE: cName in IAgentQueueStats corresponds to UserId in AgentActivityLog.

        /*** Aggregate values for Call Center Agents for calls, talk and hold time ***/
        /*** The detail/row values is per 30-minute interval ***/
        ALTER PROCEDURE [dbo].[sp_IAgentQueueStats_OnlyCalls_Grouped]
            @p_StartDate datetime,
            @p_EndDate datetime,
            @p_Agents varchar(8000)
        AS
        SELECT [cName]
            ,sum([nAnswered]) SumNAnswered
            ,sum([nAnsweredAcd]) SumNAnsweredAcd
            ,sum([tTalkAcd]) SumTTalkAcd
            ,sum([nHoldAcd]) SumNHoldAcd
            ,sum([tHoldAcd]) SumTHoldAcd
            ,sum([tAcw]) SumTAcw
        FROM [I3_IC].[dbo].[IAgentQueueStats]
        WHERE dIntervalStart between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate)
            AND CHARINDEX(cName, @p_Agents) > 0
            AND cReportGroup <> '*'
            AND cHKey3 = '*' and cHKey4 = '*'
            AND nEnteredAcd > 0
            AND cReportGroup <> 'CCFax Email'
        GROUP BY cName

    And here is the second one:

        /*** Aggregate values for Call Center Agents for status/activity time ***/
        /*** The detail/row values is per start-time/end-time ***/
        ALTER PROCEDURE [dbo].[sp_AgentActivity_Grouped]
            @p_StartDate datetime,
            @p_EndDate datetime,
            @p_Agents varchar(8000)
        AS
        SELECT [UserId], [StatusCategory], SUM([StateDuration]) [StatusDuration]
        FROM (
            SELECT [UserId]
                ,[StatusGroup]
                ,[StatusKey]
                ,CASE [StatusKey]
                    WHEN 'Available'              THEN 'Productive'
                    WHEN 'Follow Up'              THEN 'Productive'
                    WHEN 'Campaign Call'          THEN 'Productive'
                    WHEN 'Awaiting Callback'      THEN 'Productive'
                    WHEN 'In a Meeting'           THEN 'Not Your Fault'
                    WHEN 'Project Work'           THEN 'Not Your Fault'
                    WHEN 'At a Training Session'  THEN 'Not Your Fault'
                    WHEN 'System Issues'          THEN 'Not Your Fault'
                    WHEN 'Test'                   THEN 'Not Your Fault'
                    WHEN 'At Lunch'               THEN 'Non Productive'
                    WHEN 'Available, Forward'     THEN 'Non Productive'
                    WHEN 'Available, Follow-Me'   THEN 'Non Productive'
                    WHEN 'At Play'                THEN 'Non Productive'
                    WHEN 'AcdAgentNotAnswering'   THEN 'Non Productive'
                    WHEN 'Do Not Disturb'         THEN 'Non Productive'
                    WHEN 'Available, No ACD'      THEN 'Non Productive'
                    WHEN 'Away from desk'         THEN 'Non Productive'
                    ELSE [StatusKey]
                END StatusCategory
                ,stateduration
            FROM [I3_IC].[dbo].[AgentActivityLog]
            WHERE [StatusDateTime] between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate)
                AND CHARINDEX([UserId], @p_Agents) > 0
                AND [StatusKey] not in ('Gone Home','Out of the Office','On Vacation','Out of Town')
        ) a
        GROUP BY [UserId], [StatusCategory]
        ORDER BY [UserId], [StatusCategory] desc

    BTW, if I take some time to comment/reply on your posts, it's not lack of interest, but of understanding...

    Read the article

  • Python implementation of avro slow?

    - by lazy1
    I'm reading some data from an avro file using the avro library. It takes about a minute to load 33K objects from the file. This seems very slow to me, especially as the Java version reads the same file in about 1 second. Here is the code; am I doing something wrong?

        import avro.datafile
        import avro.io
        from time import time

        def load(filename):
            fo = open(filename, "rb")
            reader = avro.datafile.DataFileReader(fo, avro.io.DatumReader())
            for i, record in enumerate(reader):
                pass
            return i + 1

        def main(argv=None):
            import sys
            from argparse import ArgumentParser
            argv = argv or sys.argv
            parser = ArgumentParser(description="Read avro file")
            start = time()
            num_records = load("events.avro")
            end = time()
            print("{0} records in {1} seconds".format(num_records, end - start))

        if __name__ == "__main__":
            main()
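
    Not part of the original post: one commonly suggested speed-up is the C-accelerated fastavro package, assuming it is available and handles the file's schema. A minimal sketch of the equivalent timing loop:

        from time import time

        import fastavro

        def load_fast(filename):
            count = 0
            with open(filename, "rb") as fo:
                for record in fastavro.reader(fo):   # iterates decoded records
                    count += 1
            return count

        start = time()
        num_records = load_fast("events.avro")
        print("{0} records in {1} seconds".format(num_records, time() - start))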

    Read the article

  • How to Implement Dynamic Timestamp in Web Page?

    - by Morgan Cheng
    In Facebook and Twitter, we can see that there is a timestamp like "23 seconds ago" or "1 hour ago" for each event and tweet. If we leave the page for some time, the timestamp changes accordingly. Since it is possible that the user's machine doesn't have the same system time as the server machine, how do we make the dynamic timestamp accurate? My idea is: it is always based on server time. When the request for the web page is sent to the server, a timestamp T1 (seconds since 1970/1/1) is rendered into an inline JavaScript variable. The displayed timestamp ("23 seconds ago") is calculated from T1 instead of from local time. I'm not sure whether this is how Facebook/Twitter do it. Is there any better idea?
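
    A sketch of the arithmetic the question describes, written in Python for brevity (in the page itself the same calculation would run in JavaScript). Both epochs come from the server, so a skewed client clock cannot distort the result; the sample numbers are invented.

        def relative_time(event_epoch, server_now_epoch):
            # event_epoch is the rendered T1; server_now_epoch should be derived from
            # server time at page load plus the seconds elapsed since load.
            delta = int(server_now_epoch - event_epoch)
            if delta < 60:
                return "%d seconds ago" % delta
            if delta < 3600:
                return "%d minutes ago" % (delta // 60)
            if delta < 86400:
                return "%d hours ago" % (delta // 3600)
            return "%d days ago" % (delta // 86400)

        print(relative_time(1262303977, 1262304000))   # "23 seconds ago"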

    Read the article

  • How to deal with large result sets with Linq to Entities?

    - by user169867
    I have a fairly complex LINQ to Entities query that I display on a website. It uses paging, so I never pull down more than 50 records at a time for display. But I also want to give the user the option to export the full results to Excel or some other file format. My concern is that there could potentially be a large number of records all being loaded into memory at one time to do this. Is there a way to process a LINQ result set one record at a time, like you could with a DataReader, so only one record is really kept in memory at a time? I'd appreciate any help. Thanks

    Read the article

  • Randomly sorting an array

    - by Cam
    Does there exist an algorithm which, given an ordered list of symbols {a1, a2, a3, ..., ak}, produces in O(n) time a new list of the same symbols in a random order without bias? "Without bias" means the probability that any symbol s will end up in some position p in the list is 1/k. Assume it is possible to generate a non-biased integer from 1-k inclusive in O(1) time. Also assume that O(1) element access/mutation is possible, and that it is possible to create a new list of size k in O(k) time. In particular, I would be interested in a 'generative' algorithm. That is, I would be interested in an algorithm that has O(1) initial overhead, and then produces a new element for each slot in the list, taking O(1) time per slot. If no solution exists to the problem as described, I would still like to know about solutions that do not meet my constraints in one or more of the following ways (and/or in other ways if necessary): the time complexity is worse than O(n). the algorithm is biased with regards to the final positions of the symbols. the algorithm is not generative. I should add that this problem appears to be the same as the problem of randomly sorting the integers from 1-k, since we can sort the list of integers from 1-k and then for each integer i in the new list, we can produce the symbol ai.
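
    A sketch of the standard Fisher-Yates shuffle (not from the question), which satisfies the O(n) time and unbiased-position requirements: every permutation, and hence every (symbol, position) pair, is equally likely, assuming the O(1) unbiased integer generator. It is not "generative" in the strict sense above, since it permutes a pre-built copy of the list rather than emitting one slot at a time.

        import random

        def shuffle(symbols):
            out = list(symbols)                 # O(k) copy
            for i in range(len(out) - 1, 0, -1):
                j = random.randint(0, i)        # unbiased integer in [0, i], assumed O(1)
                out[i], out[j] = out[j], out[i]
            return out

        print(shuffle(["a1", "a2", "a3", "a4", "a5"]))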

    Read the article

  • Countdown timer using jQuery or Google App Engine?

    - by john
    Hi everybody, I need to build a countdown clock that counts down the days, hours, minutes and seconds left until a date of my choice, using jQuery or Google App Engine (Python). I created a timer using JavaScript, but in that I used the system time. I need to use server time. Can anybody give me ideas on how to build a countdown timer using server UTC time?
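
    A minimal sketch of the server-side half (not from the question; the target date is invented): compute the remaining seconds from server UTC, hand the page that absolute number, and let the client-side timer simply count it down so the visitor's clock never matters.

        from datetime import datetime

        TARGET_UTC = datetime(2011, 1, 1, 0, 0, 0)   # hypothetical target date

        def seconds_remaining():
            delta = TARGET_UTC - datetime.utcnow()
            return max(int(delta.total_seconds()), 0)

        def breakdown(total_seconds):
            days, rest = divmod(total_seconds, 86400)
            hours, rest = divmod(rest, 3600)
            minutes, seconds = divmod(rest, 60)
            return days, hours, minutes, seconds

        print("%d days %d hours %d minutes %d seconds" % breakdown(seconds_remaining()))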

    Read the article

  • Why can I not send more than one request?

    - by Doug
        function stateChanged(idname) {
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    document.getElementById(idname).value = xmlhttp.responseText;
                }
            }
        }

        function openSend(php, idname) {
            stateChanged(idname);
            xmlhttp.open("GET", php, true);
            xmlhttp.send();
        }

        function showHint() {
            if (window.XMLHttpRequest) {
                xmlhttp = new XMLHttpRequest();
            } else {
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            openSend("time.php", "Time");
            openSend("date1.php", "Date1");
            openSend("date2.php", "Date2");
            return;
        }

    These two say aborted (in Firebug) and don't return a value. Why is that? Is it because I can't send more than one request?

        openSend("time.php", "Time");
        openSend("date1.php", "Date1");

    If I can't, how could I achieve three requests with only one invocation?

    Read the article

  • Speed up fetching lots of entities and getting unique values in Google App Engine (Python)

    - by user291071
    OK, this is a two-part question. I've seen and searched for several methods to get a list of unique values for a class and haven't been particularly happy with any method so far. So does anyone have simple example code for getting unique values, for instance for this model? Here is my (super slow) example:

        class LinkRating2(db.Model):
            user = db.StringProperty()
            link = db.StringProperty()
            rating2 = db.FloatProperty()

        def uniqueLinkGet(tabl):
            start = time.time()
            dic = {}
            query = tabl.all()
            for obj in query:
                dic[obj.link] = 1
            end = time.time()
            print end - start
            return dic

    My second question: is calling an iterator, for instance, slower than fetch()? Is there a faster method to do the code below, especially if the number of elements fetched can be larger than 1000?

        query = LinkRating2.all()
        link1 = 'some random string'
        a = query.filter('link = ', link1)
        adic = {}
        for itema in a:
            adic[itema.user] = itema.rating2
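
    Not from the original post: a sketch of fetching in batches with query cursors (part of the google.appengine.ext.db API) so the old 1000-entity fetch limit doesn't cap the scan, collecting the links into a set. It assumes the LinkRating2 model defined above; the helper name is invented.

        def unique_links():
            links = set()
            query = LinkRating2.all()
            batch = query.fetch(1000)
            while batch:
                links.update(obj.link for obj in batch)
                cursor = query.cursor()                    # remember where this batch ended
                query = LinkRating2.all().with_cursor(cursor)
                batch = query.fetch(1000)
            return links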

    Read the article

  • Books on data-intensive enterprise integration patterns

    - by Tristan
    I'm trying to understand design patterns used by data-intensive enterprise applications. A classic example is the financial industry, where systems must consume, analyze, and execute on real-time financial data while providing information and configuration options to a broad set of traders and analysts. One can imagine similar systems in airlines, major supply chain operations, and utility providers. Are there good books that provide an inside view of how these systems work? Enterprise Integration Patterns is one example, but I'm looking for something with more real-world applications, particularly in finance.

    Read the article
