Search Results

Search found 14236 results on 570 pages for 'times square'.

Page 467/570 | < Previous Page | 463 464 465 466 467 468 469 470 471 472 473 474  | Next Page >

  • jQuery: recording the number of clicks

    - by Maxime
    Hi, I am working on a page that uses a jQuery MP3 player (jPlayer) with its playlist. What I want to do is very simple in theory: I want to record the number of clicks for every playlist element. When someone enters the page, the click count is 0 for every element. The visitor can click a few times on element #1, then go to element #2 (which will be at 0), then come back to element #1, and the count for #1 must still be there. The counts don't need to survive to the next visit. jPlayer has this function that runs each time a new playlist element is loaded:

        function playListChange( index )

    in which every element has its own dynamically updated ID: myPlayList[index].song_id. Here's my code:

        function playListChange( index ) {
            var id = myPlayList[index].song_id;
            if(!o) { var o = {}; }
            if(!o[id]) { o[id] = 0; }
            alert(o[id]);
            …
            $("#mydiv").click { o[id] = o[id]+1; }
            …

    But o[id] is reset every time and the alert always shows 0. Why? Thanks for any reply.
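
    The reset happens because o is declared with var inside playListChange, so a fresh, empty object is created on every call; the counts must live in a scope that outlives the function. Below is a minimal sketch of that fix, assuming the jPlayer callback and myPlayList structure from the question; the .off("click").on("click", ...) rebinding is an assumption to keep a single active handler (older jQuery would use unbind/bind):

        // Counts persist for the lifetime of the page because the object
        // lives outside the callback instead of being recreated per call.
        var clickCounts = {};

        function playListChange( index ) {
            var id = myPlayList[index].song_id;
            if (!clickCounts[id]) { clickCounts[id] = 0; }
            // Rebind so only one click handler is active at a time
            // (element id taken from the question).
            $("#mydiv").off("click").on("click", function () {
                clickCounts[id] += 1;
            });
        }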

    Read the article

  • What is the complexity of this specialized sort

    - by ADB
    I would like to know the complexity (as in O(...)) of the following sorting algorithm:

    - there are B barrels that contain a total of N elements, spread unevenly across the barrels
    - the elements in each barrel are already sorted

    The sort combines all the elements from each barrel into a single sorted list:

    - use an array of size B to store the index of the last sorted element of each barrel (starting at 0)
    - check each barrel (at the last stored index) and find the smallest element
    - copy that element into the final sorted array, increment the array counter
    - increment the last-sorted index for the barrel we picked from
    - perform those steps N times

    Or in pseudocode:

        for i from 0 to N
            smallest = MAX_ELEMENT
            foreach b in B
                if bIndex[b] < b.length && b[bIndex[b]] < smallest
                    smallest_barrel = b
                    smallest = b[bIndex[b]]
            result[i] = smallest
            bIndex[smallest_barrel] += 1

    I thought that the complexity would be O(N), but the problem I have with finding the complexity is that if B grows, it has a larger impact than if N grows, because it adds another round in the B loop. But maybe that has no effect on the complexity?
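
    The loop structure answers the question: each of the N outer iterations scans all B barrels, so the algorithm is O(N·B), not O(N) — growing B really does matter. A min-heap over the barrel heads brings it down to O(N log B); a sketch in Python (the function and variable names are mine):

        # K-way merge with a min-heap: pop the global minimum, then push
        # the next element from the same barrel. N pops/pushes at cost
        # log B each gives O(N log B) overall.
        import heapq

        def merge_barrels(barrels):
            result = []
            heap = [(b[0], i, 0) for i, b in enumerate(barrels) if b]
            heapq.heapify(heap)
            while heap:
                value, b, idx = heapq.heappop(heap)
                result.append(value)
                if idx + 1 < len(barrels[b]):
                    heapq.heappush(heap, (barrels[b][idx + 1], b, idx + 1))
            return result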

    Read the article

  • Who's setting TCP window size down to 0, Indy or Windows?

    - by François
    We have an application server which has been observed sending headers with a TCP window size of 0 at times when the network had congestion (at a client's site). We would like to know whether it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K in adaptation to the available throughput, so that we can act when it drops to 0 (nothing gets sent, users wait = no good). Any info, link, or pointer to Indy code is welcome... Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-) Note: it's Indy 9/D2007 on Windows Server 2003 SP2.

    More gory details: the TCP zero-window cases happen on the middle tier talking to the DB server, at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation). Two major network issues causing bottlenecks have been identified. The TCP zero window happened when there was network congestion, but may or may not have been caused by it. We want to know when it happens and have a way to do something (logging, at least) in our code. So where should we hook in (in Indy?) to know when that condition occurs?

    Read the article

  • Issue reading in a cell from Excel with Apache POI

    - by Nick
    I am trying to use Apache POI to read in old (pre-2007, XLS) Excel files. My program goes to the end of the rows and iterates back up until it finds something that's not null or empty. Then it iterates back up a few more times and grabs those cells. The program works just fine reading in XLSX and XLS files made in Office 2010, but for the older files I get the following error message:

        Exception in thread "main" java.lang.NumberFormatException: empty String
            at sun.misc.FloatingDecimal.readJavaFormatString(Unknown Source)
            at java.lang.Double.parseDouble(Unknown Source)

    at the line num = Double.parseDouble(str); from this code:

        str = cell.toString();
        if (str != "" || str != null) {
            System.out.println("Cell is a string");
            num = Double.parseDouble(str);
        } else {
            System.out.println("Cell is numeric.");
            num = cell.getNumericCellValue();
        }

    where cell is the last cell in the document that's not empty or null. When I try to print the first cell that's not empty or null, it prints nothing, so I think I'm not accessing it correctly.
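
    Two things stand out in that condition: in Java, str != "" compares references rather than contents, and the disjunction str != "" || str != null is true for every possible value of str, so the parse branch always runs, even for blank cells. A hedged sketch that asks POI for the cell type instead (constant names follow the pre-4.0 POI API; the 0.0 fallback is my choice):

        double num;
        switch (cell.getCellType()) {
            case Cell.CELL_TYPE_NUMERIC:
                num = cell.getNumericCellValue();      // native numeric cell
                break;
            case Cell.CELL_TYPE_STRING:
                String str = cell.getStringCellValue().trim();
                num = str.isEmpty() ? 0.0 : Double.parseDouble(str);
                break;
            default:
                num = 0.0;                             // blank/other cell types
        }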

    Read the article

  • Performance of Multiple Joins

    - by geeko
    Greetings Overflowers, I need to query against objects with many complex spatial conditions. In relational databases that translates to many joins (possibly 10+). I'm new to this business and wondering whether to go with MS SQL Server 2008 R2, Oracle 11g, a document-based solution such as RavenDB, or simply a spatial (GIS) database... Any thoughts? Regards

    UPDATE: Thank you all for your answers. Would anybody opt for document/spatial databases? My database would consist of tens of millions to a few billion records, mostly read-only: almost no updates except to fix mistakes in the input, and overnight inserts that are not that frequent. The join tables are predicted beforehand, but the number of self-joins (tables joining themselves multiple times) is not. Small pages of results from such queries are going to be viewed on a highly interactive website, so response time is critical. Any predictions on how this would perform on MS SQL Server 2008 R2 or Oracle 11g? I'm also concerned about boosting performance by adding more servers; which one scales better? How about PostgreSQL?

    Read the article

  • One bidirectional TCP socket or two unidirectional? (Linux, high volume, low latency)

    - by osgx
    Hello. I need to interchange a high volume of data periodically, with the lowest possible latency, between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Would it be faster to use 1 TCP socket (for both send and recv) or 2 unidirectional TCP sockets? The test for this task is very like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, ping-pong style). The supposed benefit of 2 unidirectional connections comes from the Linux kernel: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847

        /*
         * TCP receive function for the ESTABLISHED state.
         *
         * It is split into a fast path and a slow path. The fast path is
         * disabled when:
         * ...
         * - Data is sent in both directions. Fast path only supports pure senders
         *   or pure receivers (this means either the sequence number or the ack
         *   value must stay constant)
         * ...
         *
         * When these conditions are not satisfied it drops into a standard
         * receive procedure patterned after RFC793 to handle all cases.
         * The first three cases are guaranteed by proper pred_flags setting,
         * the rest is checked inline. Fast processing is turned on in
         * tcp_data_queue when everything is OK.
         */

    All the other conditions for disabling the fast path are false in my case; only a socket that is not unidirectional keeps the kernel off the receive fast path.
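
    If the two-connection route wins the benchmark, each connection can be made explicitly one-way so it stays a "pure sender" or "pure receiver" in the kernel's terms. A minimal sketch, assuming a hypothetical connect_to_peer() helper that returns a connected TCP socket:

        #include <sys/socket.h>

        /* Two connections, each used in exactly one direction; shutdown()
         * closes the unused half so neither socket ever carries data both
         * ways, keeping the receive fast path eligible. */
        int send_fd = connect_to_peer(host, port_a);  /* hypothetical helper */
        int recv_fd = connect_to_peer(host, port_b);
        shutdown(send_fd, SHUT_RD);  /* this one only writes */
        shutdown(recv_fd, SHUT_WR);  /* this one only reads  */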

    Read the article

  • C++: calling non-member functions with the same syntax as member ones

    - by peoro
    One thing I'd like to do in C++ is to call non-member functions with the same syntax you call member functions:

        class A { };
        void f( A & this ) { /* ... */ }
        // ...
        A a;
        a.f(); // this is the same as f(a);

    Of course this could only work as long as:

    - f is not virtual (since it cannot appear in A's virtual table);
    - f doesn't need to access A's non-public members;
    - f doesn't conflict with a function declared in A (A::f).

    I'd like such a syntax because in my opinion it would be quite comfortable and would push good habits:

    - calling str.strip() on a std::string (where strip is a function defined by the user) would sound a lot better than calling strip( str );
    - most of the time (always?) classes provide some member functions which don't require to be members (i.e., they are not virtual and don't use non-public members). This breaks encapsulation, but is the most practical thing to do (due to point 1).

    My question here is: what do you think of such a feature? Do you think it would be something nice, or something that would introduce more issues than it aims to solve? Could it make sense to propose such a feature for the next standard (the one after C++0x)? Of course this is just a brief description of the idea; it is not complete; we'd probably need to explicitly mark a function with a special keyword to let it work like this, and many other things.
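
    This idea was in fact later floated for standard C++ under the name "unified function call syntax", though it has not been adopted; for now the non-member spelling is the only one available. A sketch of that style, using the C++11 string API (the strip function is my example, not a standard one):

        #include <string>
        #include <cctype>

        // Free function taking the object as its first parameter: callers
        // write strip(str) where the question wishes for str.strip().
        void strip(std::string &s) {
            while (!s.empty() && std::isspace(static_cast<unsigned char>(s.back())))
                s.pop_back();  // trims trailing whitespace, for illustration
        }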

    Read the article

  • What is the correct stage to use for Google Guice in production in an application server?

    - by Yishai
    It seems like a strange question (the obvious answer would be Production, duh), but look at the Javadoc:

        /**
         * We want fast startup times at the expense of runtime performance and some up front error
         * checking.
         */
        DEVELOPMENT,

        /**
         * We want to catch errors as early as possible and take performance hits up front.
         */
        PRODUCTION

    Assume a scenario where you have a stateless call to an application server, and the initial receiving method (or thereabouts) creates the injector anew on every call. If not all of the module bindings are needed in a given call, then it would seem better to use the Development stage (which is the default) and not take the performance hit up front, because you may never take it at all - and here the distinction between "up front" and "runtime performance" is moot anyway, since it is all one call. The downside, of course, would appear to be losing the eager error checking, so a broken code path could fail by surprise.

    So the question boils down to: are the assumptions above correct? Will you save performance on a large set of modules when the given lifetime of an injector is one call?
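
    For reference, the stage is chosen at injector creation time; a one-line sketch (the module name is a placeholder):

        // DEVELOPMENT defers binding initialization and eager singleton
        // creation; PRODUCTION does that work inside createInjector().
        Injector injector = Guice.createInjector(Stage.DEVELOPMENT, new MyModule());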

    Read the article

  • Am I understanding premature optimization correctly?

    - by Ed Mazur
    I've been struggling with an application I'm writing and I think I'm beginning to see that my problem is premature optimization. The perfectionist side of me wants to make everything optimal and perfect the first time through, but I'm finding this is complicating the design quite a bit. Instead of writing small, testable functions that do one simple thing well, I'm leaning towards cramming in as much functionality as possible in order to be more efficient. For example, I'm avoiding multiple trips to the database for the same piece of information at the cost of my code becoming more complex. One part of me wants to just not worry about redundant database calls. It would make it easier to write correct code and the amount of data being fetched is small anyway. The other part of me feels very dirty and unclean doing this. :-) I'm leaning towards just going to the database multiple times, which I think is the right move here. It's more important that I finish the project and I feel like I'm getting hung up because of optimizations like this. My question is: is this the right strategy to be using when avoiding premature optimization?

    Read the article

  • Cocos2d - smooth sprite movement in tile map RPG

    - by Lendo92
    I've been working on a 2-D Gameboy-style RPG for a while now, and the game logic is mostly done, so I'm starting to make things look good. One thing I've noticed is that the walking movement / screen movement is a little bit choppy. Technically it should work fine, but it seems to have some quirks, due either to heavy processing or to timing inconsistencies between moving the screen and moving the sprite. To move the sprite, once I know where I want to move it, I call:

        tempPos.y += 3*theHKMap.tileSize.width/5;
        id actionMove = [CCMoveTo actionWithDuration:0.1 position:tempPos];
        id actionMoveDone = [CCCallFuncN actionWithTarget:self selector:@selector(orientOneMove)];
        [guy runAction:[CCSequence actions:actionMove, actionMoveDone, nil]];
        [self setCenterOfScreen:position];

    Then, in orientOneMove, I call:

        // the walking picture - I change the texture back at the end of the movement
        [self.guy setTexture:[[CCTextureCache sharedTextureCache] addImage:@"guysprite07.png"]];
        id actionMove = [CCMoveTo actionWithDuration:0.15 position:self.tempLocation2];
        id actionMoveDone = [CCCallFuncN actionWithTarget:self selector:@selector(toggleTouchEnabled)];
        [guy runAction:[CCSequence actions:actionMove, actionMoveDone, nil]];

    The code for the concurrently running setCenterOfScreen:position method is:

        id actionMove = [CCMoveTo actionWithDuration:0.25 position:difference];
        [self runAction: [CCSequence actions:actionMove, nil, nil]];

    So setCenterOfScreen moves the camera in one clean move, while the guy's movement is chopped into two actions to animate it (which I believe might be inefficient). It's hard to tell what makes the movement unclean just from looking at it, but essentially the guy isn't always perfectly in the center of the screen - during movement he's often a pixel or two off for an instant. Any ideas / solutions?
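
    One likely culprit: the sprite covers its distance as two sequenced CCMoveTo actions (0.1 s + 0.15 s) with a callback between them, while the camera covers its distance in a single 0.25 s action, so their velocities only match on average, not at every frame. A hedged sketch of one alternative - drive both with a single equal-duration action and keep the texture swap out of the middle of the movement (selector names are the question's own; positions are placeholders):

        // One uninterrupted 0.25 s move for each; the camera and the guy
        // now share the same velocity profile for the whole step.
        id guyMove = [CCMoveTo actionWithDuration:0.25 position:finalPos];
        id done = [CCCallFuncN actionWithTarget:self
                                       selector:@selector(toggleTouchEnabled)];
        [guy runAction:[CCSequence actions:guyMove, done, nil]];

        id camMove = [CCMoveTo actionWithDuration:0.25 position:difference];
        [self runAction:camMove];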

    Read the article

  • How do I generate reports in R?

    - by Maiasaura
    I've been struggling for a week now trying to figure out how to generate reports in R using either Sweave or Brew. I should say right at the beginning that I have never used TeX before, but I understand the logic of it. I have read this document several times. However, I cannot even get a simple example to work. Brew successfully converts a simple markup file (just a title and some text) to a .tex file (no error), but it never converts the tex to a pdf:

        > library(tools)
        > library(brew)
        > brew("population.brew", "population.tex")
        > texi2dvi("population.tex", pdf = TRUE)

    The last step always fails with:

        Error in texi2dvi("population.tex", pdf = TRUE) :
          Running 'texi2dvi' on 'population.tex' failed.

    What am I doing wrong? The report I am trying to build is fairly simple: I have 157 different analyses to summarize, each with 4 plots, 1 table, and a summary. I just want

        output plot 1,2,3,4
        output table
        \pagebreak

    ... that's it. Can anyone help me get further? I use OS X and don't have TeX installed. Thanks.
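
    That last sentence is the key: tools::texi2dvi() does not compile TeX itself, it shells out to the system's TeX toolchain, so without a TeX distribution installed (e.g. MacTeX on OS X) that step can only fail. A quick check, with the rest of the pipeline unchanged:

        library(tools)
        library(brew)
        brew("population.brew", "population.tex")
        Sys.which("texi2dvi")   # must return a real path, not "" -
                                # if it is empty, install a TeX distribution first
        texi2dvi("population.tex", pdf = TRUE)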

    Read the article

  • Why do multiple sessions started on the same page get the same session id?

    - by Calmarius
    I tried the following:

        <?php
        session_name('user');
        session_start(); // sets a 'user' cookie with a session id.
        // This session stores the user's login data. It's an ordinary session.
        $USERSESSION = $_SESSION; // saving this session
        $userSessId = session_id();
        session_write_close(); // closing this session

        session_name('chatroom');
        session_start(); // sets a 'chatroom' cookie, but it contains the same
        // session id and points to the same data :( .
        // This session would be used to store the chat text, and would be
        // shared between all users joined in this chat room by setting the
        // same session id for all members. Calling session_regenerate_id
        // would make a different session id for every user, and then it
        // wouldn't be a chat room anymore.
        ?>

    So I want to do a chat-like thing with sessions. On the client side it would be done with AJAX that polls this PHP page every 5-10 seconds. Sessions may be cached in the server's memory, so they can be accessed fast. I could store the chat in a database, but my service runs on a free webhost which is limited to 4 MySQL connections at a time, which is almost nothing, so I try to touch the database as few times as possible.
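
    The id carries over because session_id() keeps its value across session_write_close() within the same request, so the second session_start() reuses it unless told otherwise. One way to get the intended sharing is to set the id explicitly before starting the room session. A hedged sketch ($roomName is a placeholder):

        session_write_close();                 // close the per-user session first
        session_name('chatroom');
        session_id('room_' . md5($roomName));  // same id for everyone in the room
        session_start();                       // all members now share this data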

    Read the article

  • addEventListener fires off immediately after being attached

    - by Diego
    I want to know how to correctly add an event listener, like onClick, to a dynamic div. Here's my snippet:

        var $menu = new Array(2);
        createDiv('yes');

        function createDiv(title) {
            var $class = 'menuLink';
            var $title = 'title';
            for (var i = 0; i < $menu.length; i++) {
                $title = 'title';
                var newdiv = document.createElement('div');
                newdiv.setAttribute('class', $class);
                newdiv.setAttribute('id', 'item' + i);
                newdiv.addEventListener('onclick', clickHandle(newdiv.id), false);
                if (title) {
                    newdiv.innerHTML = $title;
                } else {
                    newdiv.innerHTML = "No Title";
                }
                document.getElementById('menu').appendChild(newdiv);
                showMenu();
            }
        }

        function clickHandle(e) {
            if (e == 'item0') {
                alert('Link is' + ' ' + e);
            } else {
                alert('Another Link');
            }
        }

    Here's my problem: the snippet works fine - it creates my divs, adds ids, values and all - but the event fires as soon as it is attached. While the for loop is creating the divs, the alert window says "Link is item0", and immediately after that it says "Another Link". Am I misunderstanding this? (I used this method zillions of times in AS3 with the expected result: just attach a function and wait for a click.) What I want is for my divs to wait for the user to click on them, not to fire immediately after being attached. Thanks for any hint on this one. Greetings.

    P.S. IE doesn't support addEventListener, but I want to resolve this before crossing into the IE sea of madness - though I will appreciate the IE approach as well. I tried "onclick" and just 'click' in the addEventListener line, both with the same result described above.
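
    The handler fires during the loop because clickHandle(newdiv.id) is a function call: its return value (undefined), not the function, is what gets registered. The event name should also be 'click', not 'onclick'. A sketch of the fix; the immediately-invoked wrapper is there because var newdiv is shared across loop iterations, so a plain closure would see only the last div:

        newdiv.addEventListener('click', (function (id) {
            return function () {
                clickHandle(id);  // runs only when this div is clicked
            };
        })(newdiv.id), false);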

    Read the article

  • Conditional insert as a single database transaction in HSQLDB 1.8?

    - by Kevin Pauli
    I'm using a particular database table like a "Set" data structure, i.e., you can attempt to insert the same row several times, but it will only ever contain one instance. The primary key is a natural key. For example, I want the following series of operations to work fine and result in only one row for Oklahoma:

        insert into states_visited (state_name) values ('Oklahoma');
        insert into states_visited (state_name) values ('Texas');
        insert into states_visited (state_name) values ('Oklahoma');

    I am of course getting an error due to the duplicate primary key on subsequent inserts of the same value. Is there a way to make the insert conditional, so that these errors are not thrown - i.e., only do the insert if the natural key does not already exist? I know I could do a where clause and a subquery to test for the row's existence first, but it seems that would be expensive: that's 2 physical operations for one logical "conditional insert" operation. Is there anything like this in SQL? FYI, I am using HSQLDB 1.8.
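
    The usual single-statement form is INSERT ... SELECT guarded by NOT EXISTS; the engine still checks for the row, but within one statement. A hedged sketch - whether HSQLDB 1.8 accepts a VALUES list in the FROM clause is an assumption (HSQLDB 2.x does, and 2.x also offers MERGE for exactly this upsert pattern):

        INSERT INTO states_visited (state_name)
        SELECT 'Oklahoma' FROM (VALUES(0)) AS one_row(x)
        WHERE NOT EXISTS (
            SELECT 1 FROM states_visited WHERE state_name = 'Oklahoma'
        );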

    Read the article

  • Publishing WCF .NET 3.5 to IIS 5.1

    - by Adam
    I've been developing a WCF web service using .NET 3.5 with IIS 7, and it works perfectly on my local computer. I tried publishing it to a server running IIS 5.1, and even though I can view the WSDL in my browser, the client application doesn't seem to connect to it correctly. I launched a packet-sniffing app (Charles Proxy): the response to the first message comes back to the client empty (0 bytes), and every message after the first one times out. The WCF service is part of a larger application that uses ASP.NET 3.5. That application has been working fine on IIS 5.1 for a while now, so I think it's something specific to WCF. I also tried throwing an exception in the SVC file to see if it made it that far, and the exception never got thrown, so I have a feeling it's something lower level that's not working. Any thoughts? Is there anything I need to install on the IIS 5 server? If so, how am I still able to view the WSDL in my browser?

    Read the article

  • Script/plugin to update web page (load next 25 comments) until page fully loaded

    - by Carl
    Brief summary: I need a script/plugin for Firefox that clicks the "load next 25 comments" link at the bottom of a web page until that link is no longer on the page. As you click that link, you get more comments - eventually all of them on the same page. See this web page for an example (there are 1,852 comments): http://www.cnn.com/2010/US/05/16/gulf.oil.spill/index.html#comment-50598247

    I have a regular problem with CNN.com. I post comments there, and people sometimes reply to them. I check my profile and see the number of replies, but I can't read them there; I have to follow the link to the original article. The first set of comments is at the bottom, with a "load next 25" link below it. There are often hundreds of comments, and sometimes a few thousand, so there is no practical way for me to read the replies to my comments. If there are under around 300, I'll just click that link enough times to see what the replies are. I need a script/plugin that clicks that "load next 25" link until it is no longer present on the page. Then I could just search for my userid and read the responses.
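
    A Greasemonkey-style user script is the usual tool for this. A hedged sketch - the link text is an assumption about CNN's markup at the time, the 1.5 s delay is a guess at how long each batch takes to load, and click() on anchors may need a dispatched event on very old Firefox builds:

        // Click the "load next 25" link, wait for the new batch, repeat
        // until no matching link remains on the page.
        (function clickUntilGone() {
            var links = document.getElementsByTagName('a');
            for (var i = 0; i < links.length; i++) {
                if (/load next 25/i.test(links[i].textContent)) {
                    links[i].click();
                    setTimeout(clickUntilGone, 1500); // let comments load
                    return;
                }
            }
        })();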

    Read the article

  • C++ Builder 2010: exception deadlock?

    - by James
    Hello. Is this some kind of exception deadlock I am facing, and how do I avoid it? I have TIdContext objects of connected clients stored in an object list, and at times I need to process that list. But if one user disconnects while another thread is processing the list, then for that freed TIdContext->Data object I get an access violation. OK, fine, I am using try/catch - but at the line below there is some kind of deadlock, and the process hangs. If I attach a debugger, it shows the access violation again and again and again, and CPU consumption goes up because of that exception loop:

        AnsiString UserID = ((Tmyobject*) ((TIdContext*) ObjList->Objects[i])->Data)->UserID;

    I know I can check that the object is not NULL before accessing it, and that works. But my question is: what if, once in a blue moon, the Data object is freed at exactly the point between the NULL check and the next line where I access the object - do I get the same deadlock? So how do I avoid/handle it? Here is the call stack:

        :005F07C0 System::AnsiStringBase::AnsiStringBase(this=:0285FCE0, src=????)
        :0040223F System::AnsiStringT<0>::AnsiStringT<0>(this=:0285FCE0, src=:00000008)
        :00457996 TSomeClass::SomeFunction(this=:009D8230, UserID={ }, DataSize={ }, )
        :0047BFF1 __linkproc__ ThreadProc(Thread=:009561C0)
        :004AD00E __linkproc__ ThreadWrapper(Parameter=:009EAA30)
        :7c80b729 ; C:\WINDOWS\system32\kernel32.dll

    Please help. Thanks.
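
    Catching the access violation cannot make the race safe; the check and the use have to happen under the same lock that the disconnect path takes before freeing Data. A hedged sketch using a VCL critical section - ListGuard and the loop shape are my assumptions, not code from the question:

        // ListGuard: a TCriticalSection shared with the OnDisconnect handler,
        // which must Enter() it before freeing any context's Data.
        ListGuard->Enter();
        try {
            for (int i = 0; i < ObjList->Count; i++) {
                TIdContext *ctx = (TIdContext*) ObjList->Objects[i];
                if (ctx && ctx->Data) {
                    AnsiString UserID = ((Tmyobject*) ctx->Data)->UserID;
                    // ... process UserID ...
                }
            }
        }
        __finally {
            ListGuard->Leave();
        }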

    Read the article

  • git-diff to ignore ^M

    - by neoneye
    In a project where some of the files contain ^M as newline separators, diffing these files is apparently impossible, since git-diff sees each file as a single line. How does one diff against the previous version? Is there an option like "treat ^M as newline when diffing"?

        prompt> git-diff "HEAD^" -- MyFile.as
        diff --git a/myproject/MyFile.as b/myproject/MyFile.as
        index be78321..a393ba3 100644
        --- a/myproject/MyFile.cpp
        +++ b/myproject/MyFile.cpp
        @@ -1 +1 @@
        -<U+FEFF>import flash.events.MouseEvent;^Mimport mx.controls.*;^Mimport mx.utils.Delegate
        \ No newline at end of file
        +<U+FEFF>import flash.events.MouseEvent;^Mimport mx.controls.*;^Mimport mx.utils.Delegate
        \ No newline at end of file
        prompt>

    UPDATE: I have now written a script that checks out the latest 10 revisions and converts CR to LF:

        require 'fileutils'

        if ARGV.size != 3
          puts "a git-path must be provided"
          puts "a filename must be provided"
          puts "a result-dir must be provided"
          puts "example:"
          puts "ruby gitcrdiff.rb project/dir1/dir2/dir3/ SomeFile.cpp tmp_somefile"
          exit(1)
        end

        gitpath = ARGV[0]
        filename = ARGV[1]
        resultdir = ARGV[2]

        unless FileTest.exist?(".git")
          puts "this command must be run in the same dir as where .git resides"
          exit(1)
        end

        if FileTest.exist?(resultdir)
          puts "the result dir must not exist"
          exit(1)
        end

        FileUtils.mkdir(resultdir)

        10.times do |i|
          revision = "^" * i
          cmd = "git show HEAD#{revision}:#{gitpath}#{filename} | tr '\\r' '\\n' > #{resultdir}/#{filename}_rev#{i}"
          puts cmd
          system cmd
        end
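
    git can do that conversion at diff time through a textconv filter, which avoids checking out copies by hand. A hedged sketch - the driver name cr is arbitrary, and running tr via the trailing redirect relies on git passing the file path through the shell, which is an assumption worth testing on your git version (textconv needs git 1.6.1 or later):

        # .gitattributes: route *.as files through the 'cr' diff driver
        echo '*.as diff=cr' >> .gitattributes
        # the driver converts CR to LF before git compares the two sides
        git config diff.cr.textconv 'tr "\r" "\n" <'
        git diff HEAD^ -- MyFile.as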

    Read the article

  • How to use named_scope to incrementally construct a complex query?

    - by wbharding
    I want to use Rails named_scope to incrementally build complex queries that are returned from external methods. I believe I can get the behavior I want with anonymous scopes, like so:

        def filter_turkeys
          Farm.scoped(:conditions => {:animal => 'turkey'})
        end

        def filter_tasty
          Farm.scoped(:conditions => {:tasty => true})
        end

        turkeys = filter_turkeys
        is_tasty = filter_tasty
        tasty_turkeys = filter_turkeys.filter_tasty

    But say that I already have a named_scope that does what these anonymous scopes do - can I just use that, rather than having to declare anonymous scopes? Obviously one solution would be to pass my growing query to each of the filter methods, à la:

        def filter_turkey(existing_query)
          existing_query.turkey # turkey is a named_scope that filters for turkeys
        end

    But that feels like an un-DRY way to solve the problem. Oh, and if you're wondering why I would want to return named_scope pieces from methods rather than just build all the named scopes I need and concatenate them, it's because my queries are too complex to be handled nicely by named_scopes themselves, and the queries get used in multiple places, so I don't want to manually build the same big hairy concatenation multiple times.
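
    For reference, named_scopes themselves return scope objects, so they chain directly without wrapper methods; a minimal sketch in the Rails 2.x syntax the question uses:

        class Farm < ActiveRecord::Base
          named_scope :turkeys, :conditions => { :animal => 'turkey' }
          named_scope :tasty,   :conditions => { :tasty => true }
        end

        # Scopes compose lazily; one combined query runs on first access.
        tasty_turkeys = Farm.turkeys.tasty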

    Read the article

  • Database.ExecuteNonQuery does not return

    - by dan-waterbly
    I have a very odd issue. When I execute a specific database stored procedure from C# using SqlCommand.ExecuteNonQuery, my stored procedure is never executed, SQL Profiler does not register the command at all, I do not receive a command timeout, and no exception is thrown. The weirdest thing is that this code has worked fine over 1,200,000 times, but for this one particular file I am inserting into the database, it just hangs forever. When I kill the application, I receive this error in the event log of the database server:

        A fatal error occurred while reading the input stream from the network.
        The session will be terminated (input error: 64, output error: 0).

    That makes me think the database server is receiving the command, though SQL Profiler says otherwise. I know that the appropriate permissions are set and that the connection string is right, as this piece of code and stored procedure work fine with other files. Below is the code that calls the stored procedure. It may be important to note that the file I am trying to insert is 33.5 MB, but I have added more than 10,000 files larger than 500 MB, so I do not think size is the issue:

        using (SqlConnection sqlconn = new SqlConnection(ConfigurationManager.ConnectionStrings["TheDatabase"].ConnectionString))
        using (SqlCommand command = sqlconn.CreateCommand())
        {
            command.CommandText = "Add_File";
            command.CommandType = CommandType.StoredProcedure;
            command.CommandTimeout = 30; // should time out in 30 seconds, but doesn't...

            command.Parameters.AddWithValue("@ID", ID).SqlDbType = SqlDbType.BigInt;
            command.Parameters.AddWithValue("@BinaryData", byteArr).SqlDbType = SqlDbType.VarBinary;
            command.Parameters.AddWithValue("@FileName", fileName).SqlDbType = SqlDbType.VarChar;

            sqlconn.Open();
            command.ExecuteNonQuery();
        }

    There is no firewall between the server making the call and the database server, and the Windows firewalls have been disabled to troubleshoot this issue.

    Read the article

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following, repeated 13 times with minor differences in the constants:

        test eax,1000h
        jne 0064F5EC
        or edx,1000000h

    I wonder whether I can make it any faster. I would like to have my code written in C, but switching to assembly language is possible.

    - Can I use some MMX/SSE instructions to process all bits at once?
    - Maybe I can use multiplication? (Multiply by 0x11111111 or some other magical constant.)
    - Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me?
    - Any other ideas on how to make it faster?
    - Any idea how to do the inverse bitmap transformation (I have to implement it too, but it's less critical)?
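
    One branch-free answer to both directions is the classic shift-or-and-mask spread (the standard "interleave with zeros" bit trick). A sketch handling 16 input bits, which covers the 13-bit case; function names are mine:

        #include <stdint.h>

        /* Spread the low 16 bits of x so bit i lands at position 2*i:
         * ...abcdefghijklm  ->  ...0a0b0c0d0e0f0g0h0i0j0k0l0m */
        uint32_t spread_bits(uint32_t x)
        {
            x &= 0xFFFFu;
            x = (x | (x << 8)) & 0x00FF00FFu;
            x = (x | (x << 4)) & 0x0F0F0F0Fu;
            x = (x | (x << 2)) & 0x33333333u;
            x = (x | (x << 1)) & 0x55555555u;
            return x;
        }

        /* The inverse transformation packs the even-position bits back
         * together by applying the same masks in the opposite order. */
        uint32_t pack_bits(uint32_t x)
        {
            x &= 0x55555555u;
            x = (x | (x >> 1)) & 0x33333333u;
            x = (x | (x >> 2)) & 0x0F0F0F0Fu;
            x = (x | (x >> 4)) & 0x00FF00FFu;
            x = (x | (x >> 8)) & 0x0000FFFFu;
            return x;
        }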

    Read the article

  • JavaScript to fire an event when a key is pressed on the Ajax Toolkit ComboBox

    - by Paul Chapman
    I have the following drop-down list, which uses the Ajax Control Toolkit to provide a combo box:

        <cc1:ComboBox ID="txtDrug" runat="server" style="font-size:8pt; width:267px;"
            Font-Size="8pt" DropDownStyle="DropDownList" AutoCompleteMode="SuggestAppend"
            AutoPostBack="True" ontextchanged="txtDrug_TextChanged" />

    Now I need to load this up with approx. 7,000 records, which takes considerable time and hurts response times when the page is posted back and forth. The code which loads these records is as follows:

        dtDrugs = wsHelper.spGetAllDrugs();
        txtDrug.DataValueField = "pkDrugsID";
        txtDrug.DataTextField = "drugName";
        txtDrug.DataSource = dtDrugs;
        txtDrug.DataBind();

    However, if I could fire an event when a letter is typed, instead of having to load 7,000 records up front, the list would be reduced to fewer than 50 records in most instances. I think this can be done in JavaScript. So the question is: how can I get an event to fire such that when the form starts there is nothing in the drop-down, but as soon as a key is pressed it searches for the records starting with that letter? I'm sure about the .NET side of things - it is the JavaScript I'm not. Thanks in advance.

    Read the article

  • Extremely CPU Intensive Alarm Clock

    - by SoulBeaver
    For some reason my program, a console alarm clock I made for laughs and practice, is extremely CPU intensive. It consumes about 2 MB of RAM, which is already quite a bit for such a small program, and it devastates my CPU with over 50% usage at times. Most of the time my program is doing nothing except counting down the seconds, so I guess this part of my program is the one causing so much strain, though I don't know why. If so, could you please recommend a way of reducing it, or perhaps a library to use instead if the problem can't easily be solved?

        /* The wait function waits exactly one second before returning to the
         * calling function. */
        void wait( const int &seconds )
        {
            clock_t endwait; // Type needed to compare with clock()
            endwait = clock() + ( seconds * CLOCKS_PER_SEC );
            while( clock() < endwait ) {} // Nothing need be done here.
        }

    In case anybody browses CPlusPlus.com, this is a genuine copy/paste of the example they have written for clock() - which is why the comment "Nothing need be done here" is so lackluster. I'm not entirely sure what exactly clock() does yet. The rest of the program calls two other functions that only activate every sixty seconds, otherwise returning to the caller and counting down another second, so I don't think that's too CPU intensive - though I wouldn't know; this is my first attempt at optimizing code.

    The first function is a console clear using system("cls") which, I know, is really, really slow and not a good idea. I will be changing that post-haste, but since it only activates every sixty seconds and there is a noticeable lag spike, I know this isn't the problem most of the time. The second function rewrites the content of the screen with the updated remaining time, also only every sixty seconds. I will edit in the function that calls wait, clearScreen and display if it turns out this function is not the problem. I already tried to pass most variables by reference so they are not copied, as well as avoiding endl, as I heard it's a little slow compared to "\n".
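
    The while( clock() < endwait ) {} loop is the culprit: it is a busy-wait, so the program spins at full speed checking the clock instead of letting the OS idle it, which is exactly what the 50%+ CPU figure shows. A sketch of the usual fix on Windows (where the system("cls") suggests this runs); C++11 code could use std::this_thread::sleep_for instead:

        #include <windows.h>

        /* Sleep() blocks the thread in the OS scheduler, so the process
         * uses (almost) no CPU while it waits. */
        void wait( const int &seconds )
        {
            Sleep( seconds * 1000 ); // Sleep takes milliseconds
        }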

    Read the article

  • Sort command not working as expected

    - by user964689
    If anybody can help me write a loop to iterate over files in a folder, it would save me a huge amount of time. I think it must be quite a simple solution, but I currently don't know how to nest a loop within a loop. So far I have this script:

        cd /folderlocation/
        for i in `</textfile_containing_lines_to_iterate_through`
        do
            #size=`echo $i | perl -nE '/:([\d-]+)/ && say abs(eval $1)'`
            #echo "$size"
            zcat dataset | head -n 18 > temp"$i".vcf
            tabix dataset $i >> temp"$i".vcf
            vcftools --window-pi 1000000 --vcf temp10individuals"$i".vcf >> run_summary.txt
            cat out.windowed.pi >> outputfile_2
            #rm temp*
        done
        grep -v "PI" outputfile_2 > outputfile
        rm outputfile_2

    I need to expand this so that the script runs through all of the 'textfiles_containing_lines_to_iterate_through'. Currently I change the name of the textfile manually each time and re-run the script. So I need a loop that does this for each file in the folder, and that also uses the name of the file as part of the output-file name, so that I can match an output file to an input file. Any help would be really useful and greatly appreciated! Many thanks in advance.
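
    A hedged sketch of the nesting: an outer loop over the list files, with the inner loop reading lines from each list and the file's basename woven into the output names so inputs and outputs can be matched up (the lists/*.txt glob is a placeholder for wherever the list files actually live):

        cd /folderlocation/
        for listfile in lists/*.txt; do
            name=$(basename "$listfile" .txt)
            # inner loop: one iteration per line of the current list file
            while read -r i; do
                zcat dataset | head -n 18 > "temp${i}.vcf"
                tabix dataset "$i" >> "temp${i}.vcf"
                vcftools --window-pi 1000000 --vcf "temp${i}.vcf" >> "run_summary_${name}.txt"
                cat out.windowed.pi >> "outputfile_2_${name}"
            done < "$listfile"
            grep -v "PI" "outputfile_2_${name}" > "outputfile_${name}"
            rm "outputfile_2_${name}"
        done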

    Read the article
