Search Results

Search found 8019 results on 321 pages for 'while loop'.

Page 112/321 | < Previous Page | 108 109 110 111 112 113 114 115 116 117 118 119  | Next Page >

  • Subset a data.frame by list and apply function on each part, by rows

    - by aL3xa
    This may seem as a typical plyr problem, but I have something different in mind. Here's the function that I want to optimize (skip the for loop). # dummy data set.seed(1985) lst <- list(a=1:10, b=11:15, c=16:20) m <- matrix(round(runif(200, 1, 7)), 10) m <- as.data.frame(m) dfsub <- function(dt, lst, fun) { # check whether dt is `data.frame` stopifnot (is.data.frame(dt)) # check if vectors in lst are "whole" / integer # vector elements should be column indexes is.wholenumber <- function(x, tol = .Machine$double.eps^0.5) abs(x - round(x)) < tol # fall if any non-integers in list idx <- rapply(lst, is.wholenumber) stopifnot(idx) # check for list length stopifnot(ncol(dt) == length(idx)) # subset the data subs <- list() for (i in 1:length(lst)) { # apply function on each part, by row subs[[i]] <- apply(dt[ , lst[[i]]], 1, fun) } # preserve names names(subs) <- names(lst) # convert to data.frame subs <- as.data.frame(subs) # guess what =) return(subs) } And now a short demonstration... actually, I'm about to explain what I primarily intended to do. I wanted to subset a data.frame by vectors gathered in list object. Since this is a part of code from a function that accompanies data manipulation in psychological research, you can consider m as a results from personality questionnaire (10 subjects, 20 vars). Vectors in list hold column indexes that define questionnaire subscales (e.g. personality traits). Each subscale is defined by several items (columns in data.frame). If we presuppose that the score on each subscale is nothing more than sum (or some other function) of row values (results on that part of questionnaire for each subject), you could run: > dfsub(m, lst, sum) a b c 1 46 20 24 2 41 24 21 3 41 13 12 4 37 14 18 5 57 18 25 6 27 18 18 7 28 17 20 8 31 18 23 9 38 14 15 10 41 14 22 I took a glance at this function and I must admit that this little loop isn't spoiling the code at all... BUT, if there's an easier/efficient way of doing this, please, let me know!
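
    The question is about R, but the row-wise "score each subscale" operation is easy to sketch in other data-frame libraries too. Below is a rough pandas version of the same idea, using the dummy data from the question; the helper name and the choice of pandas are mine, not part of the original code.

        import numpy as np
        import pandas as pd

        # Dummy data: 10 subjects, 20 questionnaire items scored 1-7.
        rng = np.random.default_rng(1985)
        m = pd.DataFrame(rng.integers(1, 8, size=(10, 20)),
                         columns=[f"V{i}" for i in range(1, 21)])

        # Subscales defined as groups of column positions (1-based, as in the question).
        scale_cols = {"a": range(1, 11), "b": range(11, 16), "c": range(16, 21)}

        def score_scales(df, scales, fun=np.sum):
            """Apply `fun` row-wise to each group of columns and collect the results."""
            out = {name: df.iloc[:, [i - 1 for i in idx]].apply(fun, axis=1)
                   for name, idx in scales.items()}
            return pd.DataFrame(out)

        print(score_scales(m, scale_cols, np.sum))   # one column of scores per subscale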

    Read the article

  • Javascript Onclick Problem with Table Rows

    - by Shane Larson
    Hello. I am having problems with my JScript code. I am trying to loop through all of the rows in a table and add an onclick event. I can get the onclick event to add but have a couple of problems. The first problem is that all rows end up getting set up with the wrong parameter for the onclick event. The second problem is that it only works in IE. Here is the code excerpt... shanesObj.addTableEvents = function(){ table = document.getElementById("trackerTable"); for(i=1; i<table.getElementsByTagName("tr").length; i++){ row = table.getElementsByTagName("tr")[i]; orderID = row.getAttributeNode("id").value; alert("before onclick: " + orderID); row.onclick=function(){shanesObj.tableRowEvent(orderID);}; }} shanesObj.tableRowEvent = function(orderID){ alert(orderID);} The table is located at the following location... http://www.blackcanyonsoftware.com/OrderTracker/testAJAX.html The id's of each row in sequence are... 95, 96, 94... For some reason, when shanesObj.tableRowEvent is called, the onclick is set up for all rows with the last value id that went through iteration on the loop (94). I added some alerts to the page to illustrate the problem. Thanks. Shane
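
    This is the classic late-binding closure problem: every row's handler closes over the same orderID variable, which by the time any click happens holds the last id assigned in the loop (94). The usual fix is to give each handler its own copy of the value. The sketch below reproduces the effect and the fix in Python purely as an illustration of the principle, not as the original JavaScript.

        # All three callbacks see the final value of i (late binding) ...
        handlers = []
        for i in [95, 96, 94]:
            handlers.append(lambda: print("clicked row", i))
        for h in handlers:
            h()          # prints "clicked row 94" three times

        # ... unless each callback captures its own copy of the value.
        handlers = [lambda row=i: print("clicked row", row) for i in [95, 96, 94]]
        for h in handlers:
            h()          # prints 95, 96, 94 as expected

    In the JavaScript version the equivalent is to capture orderID per iteration, for example inside an immediately-invoked function, so each row keeps its own value.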

    Read the article

  • Why is TransactionScope operation is not valid?

    - by Cragly
    I have a routine which uses a recursive loop to insert items into a SQL Server 2005 database. The first call, which initiates the loop, is enclosed within a transaction using TransactionScope. When I first call ProcessItem the myItem data gets inserted into the database as expected. However, when ProcessItem is called from either ProcessItemLinks or ProcessItemComments I get the following error: “The operation is not valid for the state of the transaction”. I am running this in debug with VS 2008 on Windows 7 and have the MSDTC running to enable distributed transactions. The code below isn’t my production code but is set out exactly the same. AddItemToDatabase is a method on a class I cannot modify and uses a standard ExecuteNonQuery() which creates a connection then closes and disposes it once completed. I have looked at other postings on here and the internet and still cannot resolve this issue. Any help would be much appreciated. using (TransactionScope processItem = new TransactionScope()) { foreach (Item myItem in itemsList) { ProcessItem(myItem); } processItem.Complete(); } private void ProcessItem(Item myItem) { AddItemToDatabase(myItem); ProcessItemLinks(myItem); ProcessItemComments(myItem); } private void ProcessItemLinks(Item myItem) { foreach (Item link in myItem.Links) { ProcessItem(link); } } private void ProcessItemComments(Item myItem) { foreach (Item comment in myItem.Comments) { ProcessItem(comment); } } Here is the top part of the stack trace. Unfortunately I can't show the build-up to this point as it is company-sensitive information which I cannot disclose. Hope this is enough information. at System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction) at System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification) at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx) at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx) at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction) at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction) at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.SqlClient.SqlConnection.Open()

    Read the article

  • Stop all sounds permanently in AS3

    - by Fahim Akhter
    Hi, I have a main SWF which has sound on/off buttons. It loads many SWFs into different placeholders at different times, all with their own sounds, and there is also a music loop playing in the background. This particular SWF (let's call it Father) will itself be placed inside another SWF later on. What I'm trying to do is: when the sound-off button is pressed, turn off all sounds permanently, so the child SWFs and their sounds stop too. I have found a way in AS2: global = new Sound( ) //no movie clip target path, then on the off button on(release){ global.setVolume(0); //mutes all sound } and on the on button on(release){ global.setVolume(100); //un-mutes all sound }, which of course does not work in AS3. So how can I stop all sounds permanently in AS3? Secondly, what if I have one sound (my background loop) that I want to keep playing? Any help would be appreciated; I've been searching for a while now without finding an appropriate answer.

    Read the article

  • scipy.io typeerror:buffer too small for requested array

    - by kartiku
    I have a problem in python. I'm using scipy, where i use scipy.io to load a .mat file. The .mat file was created using MATLAB. listOfFiles = os.listdir(loadpathTrain) for f in listOfFiles: fullPath = loadpathTrain + '/' + f mat_contents = sio.loadmat(fullPath) print fullPath Here's the error: Traceback (most recent call last): File "tryRankNet.py", line 1112, in demo() File "tryRankNet.py", line 645, in demo mat_contents = sio.loadmat(fullPath) File "/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio.py", line 111, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.6/dist-packages/scipy/io/matlab/miobase.py", line 356, in get_variables getter = self.matrix_getter_factory() File "/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio5.py", line 602, in matrix_getter_factory return self._array_reader.matrix_getter_factory() File "/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio5.py", line 274, in matrix_getter_factory tag = self.read_dtype(self.dtypes['tag_full']) File "/usr/lib/python2.6/dist-packages/scipy/io/matlab/miobase.py", line 171, in read_dtype order='F') TypeError: buffer is too small for requested array The whole thing is in a loop, and I checked the size of the file where it gives the error by loading it interactively in IDLE. The size is (9,521), which is not at all huge. I tried to find if I'm supposed to clear the buffer after each iteration of the loop, but I could not find anything. Any help would be appreciated. Thanks.
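
    Nothing needs to be "cleared" between iterations of a loop like this; when a batch load dies part-way through, the first step is usually to trap the exception per file so the offending .mat file can be identified and inspected (or, if need be, re-saved from MATLAB in a different format). A hedged Python sketch of that isolation step, reusing the loadpathTrain variable from the question:

        import os
        from scipy import io as sio

        def load_all_mats(dirpath):
            """Load every .mat file under dirpath, collecting the ones that fail."""
            contents, bad_files = {}, []
            for f in sorted(os.listdir(dirpath)):
                full_path = os.path.join(dirpath, f)
                try:
                    contents[f] = sio.loadmat(full_path)
                except Exception as exc:      # e.g. "TypeError: buffer is too small ..."
                    bad_files.append((full_path, exc))
            return contents, bad_files

        contents, bad_files = load_all_mats(loadpathTrain)   # loadpathTrain as in the question
        for path, exc in bad_files:
            print(path, "->", exc)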

    Read the article

  • Actionscript: NetStream stutters after buffering.

    - by meandmycode
    Using NetStream to stream content from http, I've noticed that esp with certain exported h264's, if the player encounters an empty buffer, it will stop and buffer to the requested length (as expected). However once the buffer is full, the playback doesn't resume, but instead jumps ahead, as such- instantly playing the buffered duration in a brief moment, and thusly triggering an empty buffer again.. this will then continue over and over. Presumably when the netstream pauses to buffer, the playhead position continues, and the player is attempting to snap to that position on resume- however given it could take 5 seconds to build a 2 second buffer- it ends up with a useless buffer again.. (this is an assumption) I've attempted to work around this by listening for an empty buffer netstatus event, pausing the stream, and at the same time setting up a loop to check the current buffer length vs the requested buffer length.. and resuming once the buffer length is greater than or equal to the requested buffer.. however this causes problems when there isn't enough of the video remaining.. for example, a 10 second buffer with only 5 seconds remaining, the loop just sits there waiting for a buffer length of 10 seconds when theres only 5 left... You would think that you could simply check which was smaller, the time left or the requested buffer length.. however the times flash gives are not accurate.. If you add the net streams current time index, plus the buffered time, the total is not the entire duration of the movie (when at the end).. it is close but not the same. This brings me back to the original problem, and if there is another way to fix this, clearly flash knows when the buffer is ready, so how can i get flash pause when it buffers, and resume once the buffer is ready? currently it doesn't.. it pauses and then once the buffer is full- it plays the entire buffered content in about .1 of a second. Thanks in advance, Stephen.

    Read the article

  • oracle pl/sql bug: can't put_line more than 2000 characters

    - by FrustratedWithFormsDesigner
    Has anyone else noticed this phenomenon where dbms_output.put_line is unable to print more than 2000 characters at a time? Script is: set serveroutput on size 100000; declare big_str varchar2(2009); begin for i in 1..2009 loop big_str := big_str||'x'; end loop; dbms_output.put_line(length(big_str)); dbms_output.put_line(big_str); end; / I copied and pasted the output into an editor (Notepad++) which told me there were only 2000 characters, not 2009 which is what I think should have been pasted. This also happens with a few of my test scripts - only 2000 characters get printed. I have a workaround to print like this: dbms_output.put_line(length(big_str)); dbms_output.put_line(substr(big_str,1,1999)); dbms_output.put_line(substr(big_str,2000)); This adds new lines to the output, makes it hard to read when the text you're working with is preformatted. Has anyone else noticed this? Is it really a bug or some sort of obscure feature? Is there a better workaround? Is there any other information on this out there? Oracle version is: 10.2.0.3.0, using PL/SQL Developer (from Allround Automation).
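
    Whatever the exact limit is on a given version and client (the display tool can also truncate what it shows), chunking the string is the portable workaround, and the substr approach from the question generalizes to any length. A small sketch of the chunking logic, written in Python only to show the splitting; the same loop translates directly back to substr calls in PL/SQL:

        def put_long_line(text, write=print, width=2000):
            """Emit `text` in chunks of at most `width` characters, as one logical line."""
            for start in range(0, len(text), width):
                write(text[start:start + width])

        put_long_line("x" * 2009)   # two calls: 2000 characters, then the remaining 9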

    Read the article

  • How to include multiple tables programmaticaly into a Sweave document using R

    - by PaulHurleyuk
    Hello, I want to have a sweave document that will include a variable number of tables in. I thought the example below would work, but it doesn't. I want to loop over the list foo and print each element as it's own table. % \documentclass[a4paper]{article} \usepackage[OT1]{fontenc} \usepackage{longtable} \usepackage{geometry} \usepackage{Sweave} \geometry{left=1.25in, right=1.25in, top=1in, bottom=1in} \listfiles \begin{document} <<label=start, echo=FALSE, include=FALSE>>= startt<-proc.time()[3] library(RODBC) library(psych) library(xtable) library(plyr) library(ggplot2) options(width=80) #Produce some example data, here I'm creating some dummy dataframes and putting them in a list foo<-list() foo[[1]]<-data.frame(GRP=c(rep("AA",10), rep("Aa",10), rep("aa",10)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[2]]<-data.frame(GRP=c(rep("BB",10), rep("bB",10), rep("BB",10)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[3]]<-data.frame(GRP=c(rep("CC",12), rep("cc",18)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[4]]<-data.frame(GRP=c(rep("DD",10), rep("Dd",10), rep("dd",10)), X1=rnorm(30), X2=rnorm(30,5,2)) @ \title{Docuemnt to test putting a variable number of tables into a sweave Document} \author{"Paul Hurley"} \maketitle \section{Text} This document was created on \today, with \Sexpr{print(version$version.string)} running on a \Sexpr{print(version$platform)} platform. It took approx \input{time} sec to process. <<label=test, echo=FALSE, results=tex>>= cat("Foo") @ that was a test, so is this <<label=table1test, echo=FALSE, results=tex>>= print(xtable(foo[[1]])) @ \newpage \subsection{Tables} <<label=Tables, echo=FALSE, results=tex>>= for(i in seq(foo)){ cat("\n") cat(paste("Table_",i,sep="")) cat("\n") print(xtable(foo[[i]])) cat("\n") } #cat("<<label=endofTables>>= ") @ <<label=bye, include=FALSE, echo=FALSE>>= endt<-proc.time()[3] elapsedtime<-as.numeric(endt-startt) @ <<label=elapsed, include=FALSE, echo=FALSE>>= fileConn<-file("time.tex", "wt") writeLines(as.character(elapsedtime), fileConn) close(fileConn) @ \end{document} Here, the table1test chunk works as expected, and produced a table based on the dataframe in foo[[1]], however the loop only produces Table(underscore)1.... Any ideas what I'm doing wrong ?

    Read the article

  • cleaning up noise in an edge detection algorithm

    - by Faken
    I recently wrote an extremely basic edge detection algorithm that works on an array of chars. The program is meant to detect the edges of blobs of a single particular value in the array, and works by simply looking left, right, up and down from each array element and checking whether one of those values differs from the value currently being looked at. The goal is not to produce a mathematical line but rather a set of ordered points representing a discretized closed-loop edge. The algorithm works perfectly fine, except that my data contains a bit of noise and so randomly produces edges where there should be none, which in turn wreaks havoc on some of my other programs down the line. The data contains two types of noise. The first type is fairly sparse and somewhat random. The second type is a semi-continuous straight line along the x=y axis. I know the source of the first type of noise; it's a feature of the data and there is nothing I can do about it. As for the second type, I know it's my program's fault for causing it... though I haven't a clue exactly what is causing it. My question is: how should I go about removing the noise completely? I know that the correct data consists of points that are always beside each other, very compact and ordered (with no gaps), forming a closed loop or multiple loops. The first type of noise, being sparse and random, could be taken care of easily by checking whether any edge point is next to the suspect point; if not, then the point is almost certainly noise and should be removed. However, the second type of noise, the semi-continuous line about x=y, poses more of a problem. The line is sometimes continuous for random lengths (the longest ran half way across my entire array unbroken), and it can even intersect the actual edge. Any ideas on how to do this?
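
    For the first, sparse kind of noise, a neighbour-count filter over the detected edge points is usually enough: keep a point only if at least one (or two) of its eight neighbours is also an edge point. A rough Python sketch of that idea is below; the x=y line artifact is better fixed at its source in the detector, since it can be long and can touch real edges.

        def remove_isolated_points(edge_points, min_neighbours=1):
            """Drop edge points that have fewer than `min_neighbours` 8-connected
            neighbours that are themselves edge points."""
            edges = set(edge_points)
            cleaned = set()
            for (x, y) in edges:
                neighbours = sum((x + dx, y + dy) in edges
                                 for dx in (-1, 0, 1)
                                 for dy in (-1, 0, 1)
                                 if (dx, dy) != (0, 0))
                if neighbours >= min_neighbours:
                    cleaned.add((x, y))
            return cleaned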

    Read the article

  • Oracle performance problems with large batch of XSL operations

    - by FrustratedWithFormsDesigner
    I have a system that is performing many XSL transformations on XMLType objects. The problem is that the system gradually slows down over time, and sometimes crashes when it runs out of memory. It seems that the slow down (and possibly memory crash) is around the dbms_xslprocessor.processXSL function call, which gradually takes longer and longer to complete. The code looks like this: v_doc dbms_xmldom.DOMDocument; v_transformer dbms_xmldom.DOMDocument; v_XSLprocessor dbms_xslprocessor.Processor; v_stylesheet dbms_xslprocessor.Stylesheet; v_clob clob; ... transformer := PKG_STUFF.getXSL(); v_transformer := dbms_xmldom.newDOMDocument(transformer); v_XSLprocessor := Dbms_Xslprocessor.newProcessor; v_stylesheet := dbms_xslprocessor.newStylesheet(v_transformer, ''); ... for source_data in (select id in source_tbl) loop begin v_doc := PKG_CONVERT.convert(in_id => source_data.id); --start time of operation v_begin_op_time := dbms_utility.get_time; --reset the CLOB v_clob := ' '; --Apply XSL Transform dbms_xslprocessor.processXSL(p => v_XSLprocessor, ss => v_stylesheet, xmldoc => v_Doc, cl => v_clob); v_doc := dbms_xmldom.newDOMDocument(XMLType(v_clob)); --end time v_end_op_time := dbms_utility.get_time; --calculate duration v_time_taken := (((v_end_op_time - v_begin_op_time))); --log the duration PKG_LOG.log_message('Time taken to transform XML: '||v_time_taken); ... ... DBMS_XMLDOM.freeDocument(v_Doc); DBMS_LOB.freetemporary(lob_loc => v_clob); end loop; The time taken to transform the XML is slowly creeping up (I suppose it might also be the call to dbms_xmldom.newDOMDocument, but I had thought that to be fairly straightforward). I have no idea why.... :( (Oracle 10g)

    Read the article

  • For "draggable" div tags that are NOT nested: JQuery/JavaScript div tag “containment” approach/algor

    - by Pete Alvin
    Background: I've created an online circuit design application where .draggable() div tags are containers that contain smaller div containers and so forth. Question: For any particular div tag I need to quickly identify if it contains other div tags (that may in turn contain other div tags). -- Since the div tags are draggable, in the DOM they are NOT nested inside each other but I think are absolutely positioned. So I think that a "hit testing" approach is the only way to determine containment, unless there is some "secret" routine built-in somewhere that could help with this. I've searched JQuery and I don't see any built-in routine for this. Does anyone know of an algorithm that's quicker than O(n^2)? Seems like I have to walk the list of div tags in an outer loop (n) and have an inner loop (another n) to compare against all other div tags and do a "containment test" (position, width, height), building a list of contained div tags. That's n-squared. Then I have to build a list of all nested div tags by concatenating contained lists. So the total would be O(n^2)+n. There must be a better way?
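
    Since the draggable divs are absolutely positioned, containment really does come down to rectangle tests, but the work can be organized so each div is matched only against its immediate container: sort by area and, for each rectangle, take the smallest larger rectangle that contains it. Descendant lists then come from walking the resulting parent map rather than from repeated pairwise tests. A hedged Python sketch of that idea (worst case still O(n^2), but it runs once per layout change):

        from dataclasses import dataclass

        @dataclass
        class Box:
            id: str
            left: float
            top: float
            width: float
            height: float

        def contains(outer, inner):
            """True if `inner`'s rectangle lies entirely inside `outer`'s rectangle."""
            return (outer.left <= inner.left and
                    outer.top <= inner.top and
                    outer.left + outer.width >= inner.left + inner.width and
                    outer.top + outer.height >= inner.top + inner.height)

        def parent_map(boxes):
            """For each box, find its immediate container: the smallest box that contains it.
            Descendant lists follow by inverting this map and walking it."""
            by_area = sorted(boxes, key=lambda b: b.width * b.height)
            parents = {}
            for i, inner in enumerate(by_area):
                parents[inner.id] = next(
                    (outer.id for outer in by_area[i + 1:] if contains(outer, inner)), None)
            return parents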

    Read the article

  • TypeError: coercing to Unicode: need string or buffer, User found

    - by Clemens
    hi, i have to crawl last.fm for users (university exercise). I'm new to python and get following error: Traceback (most recent call last): File "crawler.py", line 23, in <module> for f in user_.get_friends(limit='200'): File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylast.py", line 2717, in get_friends for node in _collect_nodes(limit, self, "user.getFriends", False): File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylast.py", line 3409, in _collect_nodes doc = sender._request(method_name, cacheable, params) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylast.py", line 969, in _request return _Request(self.network, method_name, params).execute(cacheable) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylast.py", line 721, in __init__ self.sign_it() File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylast.py", line 727, in sign_it self.params['api_sig'] = self._get_signature() File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylast.py", line 740, in _get_signature string += self.params[name] TypeError: coercing to Unicode: need string or buffer, User found i use the pylast lib for crawling. what i want to do: i want to get a users friends and the friends of the users friends. the error occurs, when i have a for loop in another for loop. here's the code: network = pylast.get_lastfm_network(api_key = API_KEY, api_secret = API_SECRET, username = username, password_hash = password_hash) user = network.get_user("vidarnelson") friends = user.get_friends(limit='200') i = 1 for friend in friends: user_ = network.get_user(friend) print '#%d %s' % (i, friend) i = i + 1 for f in user_.get_friends(limit='200'): print f any advice? thanks in advance. regards!
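
    The last line of the traceback is the clue: get_user() was handed a User object where a username string was expected. The objects returned by get_friends() are already User instances, so the inner loop can call get_friends() on them directly. A sketch along the lines of the question's own code, assuming the same network object as above:

        user = network.get_user("vidarnelson")

        for i, friend in enumerate(user.get_friends(limit=200), start=1):
            # `friend` is already a pylast User object -- no need to call get_user() on it.
            print('#%d %s' % (i, friend))
            for friend_of_friend in friend.get_friends(limit=200):
                print('    %s' % friend_of_friend)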

    Read the article

  • Drawing Color Spectrum with Waveform

    - by TheDarkIn1978
    i've come across this ActionScript sample, which demonstrates drawing of the color spectrum, one line at a time via a loop, using waveforms. however, the waveform location of each RGB channel create a color spectrum that is missing colors (pure yellow, cyan and magenta) and therefore the spectrum is incomplete. how can i remedy this problem so that the drawn color spectrum will exhibit all colors? // Loop through all of the pixels from '0' to the specified width. for(var i:int = 0; i < nWidth; i++) { // Calculate the color percentage based on the current pixel. nColorPercent = i / nWidth; // Calculate the radians of the angle to use for rotating color values. nRadians = (-360 * nColorPercent) * (Math.PI / 180); // Calculate the RGB channels based on the angle. nR = Math.cos(nRadians) * 127 + 128 << 16; nG = Math.cos(nRadians + 2 * Math.PI / 3) * 127 + 128 << 8; nB = Math.cos(nRadians + 4 * Math.PI / 3) * 127 + 128; // OR the individual color channels together. nColor = nR | nG | nB; }
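
    The three 120-degree-shifted cosines map each channel to 127*cos+128, which ranges from 1 to 255, so no channel ever reaches 0 and any colour that needs a zero channel (pure yellow, cyan, magenta) is unreachable; the channels also never hit their extremes together. Sweeping the hue through an HSV-to-RGB conversion gives the complete saturated spectrum. A quick Python check of that idea (illustrative only, not ActionScript):

        import colorsys

        def spectrum_color(percent):
            """Full-saturation spectrum colour for percent in [0, 1), as a 0xRRGGBB int."""
            r, g, b = colorsys.hsv_to_rgb(percent, 1.0, 1.0)   # each in [0.0, 1.0]
            return (round(r * 255) << 16) | (round(g * 255) << 8) | round(b * 255)

        print(hex(spectrum_color(1 / 6)))   # 0xffff00 -- pure yellow is now reachable
        print(hex(spectrum_color(3 / 6)))   # 0xffff   -- i.e. 0x00ffff, pure cyan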

    Read the article

  • Error when calling Mysql Stored procedure

    - by devuser
    This is my stored procedure to search through all databases, tables and columns. The procedure was created without any error. DELIMITER $$ DROP PROCEDURE IF EXISTS mydb.get_table$$ CREATE DEFINER=`root`@`%` PROCEDURE get_table(in_search varchar(50)) READS SQL DATA BEGIN DECLARE trunc_cmd VARCHAR(50); DECLARE search_string VARCHAR(250); DECLARE db,tbl,clmn CHAR(50); DECLARE done INT DEFAULT 0; DECLARE COUNTER INT; DECLARE table_cur CURSOR FOR SELECT concat('SELECT COUNT(*) INTO @CNT_VALUE FROM ', table_schema,'.', table_name, ' WHERE ', column_name,' REGEXP ''',in_search,'''' ) ,table_schema,table_name,column_name FROM information_schema.COLUMNS WHERE TABLE_SCHEMA NOT IN ('mydb','information_schema'); DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1; #Truncating table to refill the data for a new search. PREPARE trunc_cmd FROM 'TRUNCATE TABLE temp_details'; EXECUTE trunc_cmd ; OPEN table_cur; table_loop:LOOP FETCH table_cur INTO search_string,db,tbl,clmn; #Executing the search SET @search_string = search_string; SELECT search_string; PREPARE search_string FROM @search_string; EXECUTE search_string; SET COUNTER = @CNT_VALUE; SELECT COUNTER; IF COUNTER<>0 THEN # Inserting required results from search to table INSERT INTO temp_details VALUES(db,tbl,clmn); END IF; IF done=1 THEN LEAVE table_loop; END IF; END LOOP; CLOSE table_cur; #Finally Show Results SELECT * FROM temp_details; END$$ DELIMITER ; But when calling this procedure the following error occurs. call get_table('aaa') Error Code : 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'delete REGEXP 'aaa'' at line 1 (0 ms taken)

    Read the article

  • Can a sub-procedure procedure lock and modify the same rows FOR UPDATE that its calling procedure al

    - by RenderIn
    Will the following code lead to a deadlock or should it work without any problem? I've got something similar and it's working but I didn't think it would. I thought the parent procedure's lock would have resulted in a deadlock for the child procedure but it doesn't seem to be. If it works, why? My guess is that the nested FOR UPDATE is not running into a deadlock because it's smart enough to realize that it is being called by the same procedure that has the current lock. Would this be a deadlock if FOO_PROC was not a nested procedure? DECLARE FOO_PROC(c_someName VARCHAR2) as cursor c1 is select * from awesome_people where person_name = c_someName FOR UPDATE; BEGIN open c1; update awesome_people set person_name = UPPER(person_name); close c1; END FOO_PROC; cursor my_cur is select * from awesome_people where person_name = 'John Doe' FOR UPDATE; BEGIN for onerow in c1 loop FOO_PROC(onerow.person_name); end loop; END;

    Read the article

  • custom C++ boost::lambda expression help

    - by aaa
    hello. A little bit of background: I have some strange multiple nested loops which I converted to flat work queue (basically collapse single index loops to single multi-index loop). right now each loop is hand coded. I am trying to generalized approach to work with any bounds using lambda expressions: For example: // RANGE(i,I,N) is basically a macro to generate `int i = I; i < N; ++i ` // for (RANGE(lb, N)) { // for (RANGE(jb, N)) { // for (RANGE(kb, max(lb, jb), N)) { // for (RANGE(ib, jb, kb+1)) { // is equivalent to something like (overload , to produce range) flat<1, 3, 2, 4>((_2, _3+1), (max(_4,_3), N), N, N) the internals of flat are something like: template<size_t I1, size_t I2, ..., class L1_, class L2, ..._> boost::array<int,4> flat(L1_ L1, L2_ L2, ...){ //boost::array<int,4> current; class variable bool advance; L2_ l2 = L2.bind(current); // bind current value to lambda { L1_ l1 = L1.bind(current); //bind current value to innermost lambda l1.next(); advance = !(l1 < l1.upper()); // some internal logic if (advance) { l2.next(); current[0] = l1.lower(); } } //..., } my question is, can you give me some ideas how to write lambda (derived from boost) which can be bound to index array reference to return upper, lower bounds according to lambda expression? thank you much
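
    Setting the boost::lambda mechanics aside, the underlying shape is a single flat iterator whose per-dimension bounds may depend on the indices already fixed, which is easy to express with callables for the bounds. A hedged Python sketch of that shape; the names and the recursive formulation are mine, purely to illustrate the flattening:

        def flat(*dims):
            """Iterate a nested index space as one flat loop.
            Each dim is (lower, upper); lower/upper may be ints or callables taking
            the tuple of indices fixed so far (outermost first)."""
            def bound(b, fixed):
                return b(fixed) if callable(b) else b

            def rec(fixed, remaining):
                if not remaining:
                    yield fixed
                    return
                lo, hi = remaining[0]
                for i in range(bound(lo, fixed), bound(hi, fixed)):
                    yield from rec(fixed + (i,), remaining[1:])

            yield from rec((), dims)

        N = 4
        # Mirrors: for lb in [0,N), jb in [0,N), kb in [max(lb,jb),N), ib in [jb,kb+1)
        quad = flat((0, N), (0, N),
                    (lambda f: max(f[0], f[1]), N),
                    (lambda f: f[1], lambda f: f[2] + 1))
        for lb, jb, kb, ib in quad:
            pass   # each tuple is one unit of work in the flat queue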

    Read the article

  • Compiler optimization causing the performance to slow down

    - by aJ
    I have one strange problem. I have the following piece of code: template<class index, class policy> inline int CBase<index,policy>::func(const A& test_in, int* srcPtr, int* dstPtr) { int width = test_in.width(); int height = test_in.height(); double d = 0.0; //here is the problem for(int y = 0; y < height; y++) { //Pointer initializations //multiplication involving y //ex: int z = someBigNumber*y + someOtherBigNumber; for(int x = 0; x < width; x++) { //multiplication involving x //ex: int z = someBigNumber*x + someOtherBigNumber; if(someCondition) { // floating point calculations } *dstPtr++ = array[*srcPtr++]; } } } The inner loop gets executed nearly 200,000 times and the entire function takes 100 ms to complete (profiled using AQTimer). I found an unused variable, double d = 0.0;, outside the outer loop and removed it. After this change, the method suddenly takes 500 ms for the same number of executions (5 times slower). This behavior is reproducible on different machines with different processor types (Core2, dual-core processors). I am using the VC6 compiler with optimization level O2. The other compiler options used are: -MD -O2 -Z7 -GR -GX -G5 -X -GF -EHa. I suspected compiler optimizations and removed the /O2 optimization; after that the function became normal again and takes 100 ms as with the old code. Could anyone throw some light on this strange behavior? Why should compiler optimization slow down performance when I remove an unused variable? Note: the assembly code (before and after the change) looked the same.

    Read the article

  • Graph navigation problem

    - by affan
    I have a graph of components and the relations between them. The user opens the graph and navigates through it based on their choices: they start with the root node and click an expand button, which reveals new components related to the current component. The problem comes when the user decides to collapse a node. I have to choose a sub-tree to hide and at the same time leave the graph in a consistent state, so that there is no expanded node with a missing relation to another node in the graph. In the case of cycles/loops between components I have difficulty choosing the sub-tree. For simplicity I use the order in which nodes were expanded, so if A expands into B and C, collapsing A will hide the nodes and edges that its expansion created. Now consider the following scenario, where [-] means expanded and [+] means not yet expanded. A is expanded to reveal B and C. Then B is expanded to reveal D. C is expanded, which creates a link between C and the existing node D and also creates node E. Now the user decides to collapse B. Since, by order of expansion, D is a child of B, it will collapse and hide D. This leaves the graph in an inconsistent state, as C is expanded with an edge to D but D is no longer there; if I remove the C-D edge it is still inconsistent. The same happens if I collapse C, and E could in turn have a cyclic link (e.g. back to B) that produces the same problem. /-----B[-]-----\ A[-] D[+] \-----C[-]-----/ \ E[+] So, any idea how I can solve this problem? The user needs to navigate the graph and should be able to collapse nodes, but I am stuck on the problem of cyclic nodes, where collapsing any node in a loop leaves the graph in an inconsistent state.
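
    One way out is to stop tying visibility to expansion order and instead recompute it after every collapse: a node stays visible exactly when it is reachable from the root through nodes that are still expanded, and only edges between visible nodes are drawn. That rule can never leave an expanded node pointing at a missing neighbour. A small Python sketch of it, using the A/B/C/D/E example from the question:

        from collections import deque

        def visible_nodes(root, children, expanded):
            """Nodes reachable from `root` by only traversing out of expanded nodes.
            `children` maps node -> iterable of child nodes; `expanded` is a set."""
            seen = {root}
            queue = deque([root])
            while queue:
                node = queue.popleft()
                if node not in expanded:
                    continue                    # collapsed: it reveals none of its children
                for child in children.get(node, ()):
                    if child not in seen:
                        seen.add(child)
                        queue.append(child)
            return seen

        # Collapsing B keeps D visible because the still-expanded C reaches it.
        children = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"]}
        print(visible_nodes("A", children, expanded={"A", "C"}))   # A, B, C, D, E
        print(visible_nodes("A", children, expanded={"A"}))        # A, B, C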

    Read the article

  • Metamorphs Messing Up CSS in Ember.js Views

    - by Austin Fatheree
    I'm using Ember.js / handlebars to loop through a collection and spit out some items that I'd like bootstrap to handle nice and responsive like. Here is the issue: The bootstrap-responsive css has some declrations in it like: .row-fluid > [class*="span"]:first-child { margin-left: 0; } and .row-fluid:before, .row-fluid:after { display: table; content: ""; } These rules seem to target the first children. When I loop through my collection in handlebars I end up with a bunch of metamorph code around my items: <div class="row-fluid"> {{#each restaurantList}} {{view GS.vHomePageRestList content=this class="span6"}} {{/each}} </div> Here is what is produced: <div class="row-fluid"> <script id="metamorph-9-start" type="text/x-placeholder"></script> <script id="metamorph-104-start" type="text/x-placeholder"></script> <div id="ember2527" class="ember-view span6"> My View </div> <script id="metamorph-104-end" type="text/x-placeholder"></script> <script id="metamorph-105-start" type="text/x-placeholder"></script> <div id="ember2574" class="ember-view span6"> My View 2 </div> <script id="metamorph-105-end" type="text/x-placeholder"></script> <script id="metamorph-9-end" type="text/x-placeholder"></script> </div> So my question is this: 1. How can I tell css to ignore script tags? or 2. How can I edit the css bindings so that they skip over script tags when selecting the first or first child? or 3. How can I structure this so that Ember uses fewer/no metamorph tags? Here is a fiddle: http://jsfiddle.net/skilesare/SgwsJ/

    Read the article

  • Using ServletOutputStream to write very large files in a Java servlet without memory issues

    - by Martin
    I am using IBM Websphere Application Server v6 and Java 1.4 and am trying to write large CSV files to the ServletOutputStream for a user to download. Files are ranging from a 50-750MB at the moment. The smaller files aren't causing too much of a problem but with the larger files it appears that it is being written into the heap which is then causing an OutOfMemory error and bringing down the entire server. These files can only be served out to authenticated users over https which is why I am serving them through a Servlet instead of just sticking them in Apache. The code I am using is (some fluff removed around this): resp.setHeader("Content-length", "" + fileLength); resp.setContentType("application/vnd.ms-excel"); resp.setHeader("Content-Disposition","attachment; filename=\"export.csv\""); FileInputStream inputStream = null; try { inputStream = new FileInputStream(path); byte[] buffer = new byte[1024]; int bytesRead = 0; do { bytesRead = inputStream.read(buffer, offset, buffer.length); resp.getOutputStream().write(buffer, 0, bytesRead); } while (bytesRead == buffer.length); resp.getOutputStream().flush(); } finally { if(inputStream != null) inputStream.close(); } The FileInputStream doesn't seem to be causing a problem as if I write to another file or just remove the write completly the memory usage doesn't appear to be a problem. What I am thinking is that the resp.getOutputStream().write is being stored in memory until the data can be sent through to the client. So the entire file might be read and stored in the resp.getOutputStream() causing my memory issues and crashing! I have tried Buffering these streams and also tried using Channels from java.nio, none of which seems to make any bit of difference to my memory issues. I have also flushed the outputstream once per iteration of the loop and after the loop, which didn't help.
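
    Independently of the container behaviour, two details of the copy loop are worth double-checking: read(buffer, offset, buffer.length) should normally read into offset 0 of the buffer, and terminating on bytesRead == buffer.length means that a file whose size is an exact multiple of the buffer can hand a final -1 straight to write(). The usual pattern is to loop until read returns -1, write exactly what was read, and flush periodically so the container can stream instead of accumulating the response. The same chunked-copy shape in Python (illustrative only, not the servlet code):

        CHUNK = 64 * 1024

        def stream_file(path, write, flush=None):
            """Copy `path` to the `write` callable in fixed-size chunks."""
            with open(path, "rb") as src:
                while True:
                    chunk = src.read(CHUNK)
                    if not chunk:                 # end of file
                        break
                    write(chunk)                  # write only the bytes actually read
                    if flush is not None:
                        flush()                   # let the container send data as it goes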

    Read the article

  • httprequest handle time delays till having response

    - by bourax webmaster
    I have an application that calls a function to send JSON object to a REST API, my problem is how can I handle time delays and repeat this function till i have a response from the server in case of interrupted network connexion ?? I try to use the Handler but i don't know how to stop it when i get a response !!! here's my function that is called when button clicked : protected void sendJson(final String name, final String email, final String homepage,final Long unixTime,final String bundleId) { Thread t = new Thread() { public void run() { Looper.prepare(); //For Preparing Message Pool for the child Thread HttpClient client = new DefaultHttpClient(); HttpConnectionParams.setConnectionTimeout(client.getParams(), 10000); //Timeout Limit HttpResponse response; JSONObject json = new JSONObject(); //creating meta object JSONObject metaJson = new JSONObject(); try { HttpPost post = new HttpPost("http://util.trademob.com:5000/cards"); metaJson.put("time", unixTime); metaJson.put("bundleId", bundleId); json.put("name", name); json.put("email", email); json.put("homepage", homepage); //add the meta in the root object json.put("meta", metaJson); StringEntity se = new StringEntity( json.toString()); se.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, "application/json")); post.setEntity(se); String authorizationString = "Basic " + Base64.encodeToString( ("tester" + ":" + "tm-sdktest").getBytes(), Base64.NO_WRAP); //Base64.NO_WRAP flag post.setHeader("Authorization", authorizationString); response = client.execute(post); String temp = EntityUtils.toString(response.getEntity()); Toast.makeText(getApplicationContext(), temp, Toast.LENGTH_LONG).show(); } catch(Exception e) { e.printStackTrace(); } Looper.loop(); //Loop in the message queue } }; t.start(); }
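
    Rather than a Handler that has to be cancelled from elsewhere, the retry can live in the request thread itself: attempt the call a bounded number of times, back off between attempts, and return as soon as a response arrives. A hedged Python sketch of that retry shape; the Android/HttpClient specifics are left out and send_request stands in for the POST:

        import time

        def post_with_retry(send_request, max_attempts=5, base_delay=2.0):
            """Call `send_request()` until it returns a response, backing off between
            failed attempts; gives up after `max_attempts`."""
            for attempt in range(max_attempts):
                try:
                    return send_request()            # success: stop retrying immediately
                except IOError:
                    if attempt == max_attempts - 1:
                        raise                        # out of attempts: surface the error
                    time.sleep(base_delay * (2 ** attempt))   # 2s, 4s, 8s, ...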

    Read the article

  • Place Query Results into Array then Implode?

    - by jason
    Basically I pull an Id from table1, use that id to find a site id in table2, then need to put the site ids in an array, implode, and query table3 for site names. I cannot implode the array correctly: first I got an error, then I used a while loop, and with the while loop the output simply says: Array. $mysqli = mysqli_connect("server", "login", "pass", "db"); $sql = "SELECT MarketID FROM marketdates WHERE Date = '2010-04-04 00:00:00' AND VenueID = '2'"; $result = mysqli_query($mysqli, $sql) or die(mysqli_error($mysqli)); $dates_id = mysqli_fetch_assoc($result); $comma_separated = implode(",", $dates_id); echo $comma_separated; //This returns 79, which is correct. $sql = "SELECT SIteID FROM bookings WHERE BSH_ID = '1' AND MarketID = '$comma_separated'"; $result = mysqli_query($mysqli, $sql) or die(mysqli_error($mysqli)); // This is where my problems start $SIteID = array(); while ($newArray = mysqli_fetch_array($result, MYSQLI_ASSOC)) { $SIteID[] = $newArray['SIteID']; } $locationList = implode(",", $SIteID); ?> Basically what I need to do is correctly move the query results into an array that I can implode and use in a third query to pull names from table3.
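
    The second query only ever receives a single MarketID, so it can stay a plain equality; it is the third query, over the collected SiteIDs, where the list matters, and placeholders are safer than imploding raw values into the SQL string. A rough sketch of the pattern using a Python DB-API cursor with %s placeholders (e.g. MySQLdb); the third table and its columns are placeholder names here, since they are not given in the question:

        def fetch_site_names(cur, market_date, venue_id, bsh_id):
            cur.execute("SELECT MarketID FROM marketdates WHERE Date = %s AND VenueID = %s",
                        (market_date, venue_id))
            (market_id,) = cur.fetchone()

            cur.execute("SELECT SIteID FROM bookings WHERE BSH_ID = %s AND MarketID = %s",
                        (bsh_id, market_id))
            site_ids = [row[0] for row in cur.fetchall()]
            if not site_ids:
                return []

            placeholders = ", ".join(["%s"] * len(site_ids))   # one %s per SiteID
            # "sites" / "SiteName" stand in for the unnamed third table and column.
            cur.execute("SELECT SiteName FROM sites WHERE SIteID IN (%s)" % placeholders,
                        site_ids)
            return [row[0] for row in cur.fetchall()]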

    Read the article

  • Problem with NSUserDefaults and deallocated Instance

    - by Peter A
    Hi, I'm having a problem with NSUserDefaults. I've followed the steps in the books as closely as I can for my app, but still get the same problem. I am getting a *** -[NSUserDefaults integerForKey:]: message sent to deallocated instance 0x3b375a0 error when I try and load in the settings. Here is the code that I have; it is in the App Delegate class. - (void)applicationDidFinishLaunching:(UIApplication *)application { recordingController = [[RecordingTableViewController alloc] initWithStyle:UITableViewStylePlain]; [recordingController retain]; // Add the tab bar controller's current view as a subview of the window [window addSubview:tabBarController.view]; [self loadSettings]; } -(void)loadSettings { NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults]; NSNumber *loop = [defaults objectForKey:@"loop_preference"]; NSNumber *play = [defaults objectForKey:@"play_me_preference"]; NSNumber *volume = [defaults objectForKey:@"volume_preference"]; } As you can see I am not trying to do anything with the values yet, but I get the error on the line reading in the loop preference. I also get it if I try and read an NSString. Any suggestions would be greatly appreciated. Thanks Peter

    Read the article

  • UTL_FILE.FOPEN() procedure not accepting path for directory ?

    - by Vineet
    I am trying to write to a file named vin1.txt on the C:\ drive and am getting this error. Please suggest! ERROR at line 1: ORA-29280: invalid directory path ORA-06512: at "SYS.UTL_FILE", line 18 ORA-06512: at "SYS.UTL_FILE", line 424 ORA-06512: at "SCOTT.SAL_STATUS", line 12 ORA-06512: at line 1 Here is the code: create or replace procedure sal_status ( p_file_dir IN varchar2, p_filename IN varchar2) IS v_filehandle utl_file.file_type; cursor emp is select * from employees order by department_id; v_dep_no departments.department_id%TYPE; begin v_filehandle :=utl_file.fopen(p_file_dir,p_filename,'w');--Opening a file utl_file.putf(v_filehandle,'SALARY REPORT :GENERATED ON %s\n',SYSDATE); utl_file.new_line(v_filehandle); for v_emp_rec IN emp LOOP v_dep_no :=v_emp_rec.department_id; utl_file.putf(v_filehandle,'employee %s earns:s\n',v_emp_rec.last_name,v_emp_rec.salary); end loop; utl_file.put_line(v_filehandle,'***END OF REPORT***'); UTL_FILE.fclose(v_filehandle); end sal_status; execute sal_status('C:\','vin1.txt');--Executing

    Read the article

  • C# Taking a element off each time (stack)

    - by Sef
    Greetings, I have a question concerning a program that simulates a stack (not using any built-in stack features or the like). stack2= 1 2 3 4 5 //single-dimension array of 5 elements. By calling a "pop" method the stack should look like the following, basically taking an element off each time the stack is "called" up again: stack2= 1 2 3 4 0 stack2= 1 2 3 0 0 stack2= 1 2 0 0 0 stack2= 1 0 0 0 0 stack2= 0 0 0 0 0 - for (int i = 1; i <= 6; i++) { number = TryPop(s2); //use number ShowStack(s2, "s2"); } I already have code that fills my array with values (through a push method). The pop method should basically take the last value and set it to 0, then on the next call set the following value to 0 (as shown above in stack2). The current pop method keeps track of the top index (0 elements = top 0, 1 element = top 1, etc.) and already includes an underflow warning if this goes to 0 or below (which is correct). public int Pop() { if(top <= 0) { throw new Exception("Stack underflow..."); } else { for (int j = tabel.Length - 1; j >= 0; j--) { //...Really not sure what to do here. } } return number; }/*Pop*/ In the other class I already have a loop (the for loop shown above) that displays the s2 stack 6 times (first stack: 1 2 3 4 0, second stack: 1 2 3 0 0, and so on). How exactly do I take an element off each time? Either I end up with the entire display at 0, or the 0s are in the wrong places / I get out-of-index errors. Thanks in advance!
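
    The pop itself only needs the top index: step top down by one, remember the value in that slot, and zero the slot; no inner loop over the whole array is required. A small Python sketch of that bookkeeping (the C# Pop body is the same three statements, using tabel and top):

        class ArrayStack:
            def __init__(self, capacity):
                self.table = [0] * capacity
                self.top = 0                     # number of elements currently in use

            def push(self, value):
                if self.top >= len(self.table):
                    raise OverflowError("Stack overflow...")
                self.table[self.top] = value
                self.top += 1

            def pop(self):
                if self.top <= 0:
                    raise IndexError("Stack underflow...")
                self.top -= 1
                value = self.table[self.top]
                self.table[self.top] = 0         # leave the vacated slot as 0, as required
                return value

        s = ArrayStack(5)
        for n in (1, 2, 3, 4, 5):
            s.push(n)
        print(s.pop(), s.table)                  # 5 [1, 2, 3, 4, 0]
        print(s.pop(), s.table)                  # 4 [1, 2, 3, 0, 0]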

    Read the article
