Search Results

Search found 6207 results on 249 pages for 'slow'.

  • Developing ASP.Net User Control to be imported to SharePoint MOSS 2007

    - by Don Kirkham
    Apologies if this has been answered, but I could not find a similar question: I am developing a webpart for MOSS 2007. I am using WSPBuilder to build a visual webpart (ascx) and everything works fine, but the development/debug cycle is just painfully slow, so I'd like to know if it is possible (without being too painful) to develop the user control faster using an ASP.NET Web Application project with all of the nice F5 debugging, then import the final product into my SharePoint visual webpart. The user control interacts with a LOB system (SQL) and does not reference the SharePoint API at all. (The reason I am building this as a webpart is because I don't need another web app to run this one page, so putting it into a webpart on a new webpart page on my existing site is the best solution IMO.) I would obviously need to import (reference?) my data access classes into my "temp" web app, but think that would not be too much trouble. I realize this will be extra effort to get this set up, but am thinking the payoff will be reduced development time of the actual user control using a little web application vs having to use the compile/build WSP/deploy WSP/reset IIS/test/make a change/repeat cycle that MOSS requires. (I guess SP2010/VS2010 has spoiled me with the native SharePoint tools available.)

    Read the article

  • Implementing gravity for a projectile - delta time issue

    - by Murat Nafiz
    I'm trying to implement simple projectile motion in Android (with OpenGL). And I want to add gravity to my world to simulate a ball dropping realistically. I simply update my renderer with a delta time which is calculated by: float deltaTime = (System.nanoTime()-startTime) / 1000000000.0f; startTime = System.nanoTime(); screen.update(deltaTime); In my screen.update(deltaTime) method: if (isballMoving) { golfBall.updateLocationAndVelocity(deltaTime); } And in the golfBall.updateLocationAndVelocity(deltaTime) method: public final static double G = -9.81; double vz0 = getVZ0(); // Gets initial velocity(z) double z0 = getZ0(); // Gets initial height double time = getS(); // gets total time from act begin double vz = vz0 + G * deltaTime; // calculate new velocity(z) double z = z0 - vz0 * deltaTime - 0.5 * G * deltaTime * deltaTime; // calculate new position time = time + deltaTime; // Update time setS(time); //set new total time Now here is the problem; If I set deltaTime to 0.07 statically, then the animation runs normally. But since the update() method runs as fast as it can, the length and therefore the speed of the ball varies from device to device. If I don't touch deltaTime and run the program (deltaTime's are between 0.01 - 0.02 with my test devices), the animation length and the speed of the ball are the same on different devices. But the animation is so SLOW! What am I doing wrong?
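
    One hedged reading of the symptom: the update always works from the initial vz0/z0 with only the per-frame deltaTime, so the step size depends on the frame time but the state never accumulates the way a real integrator would; either integrate from the current state each frame, or evaluate the closed-form equations at the accumulated total time. A minimal Python sketch of the per-frame (semi-implicit Euler) variant, with hypothetical names:

    ```python
    # Frame-rate-independent vertical motion: integrate from the *current*
    # state each frame instead of from the initial state (an assumption about
    # what the original code intended).
    G = -9.81  # m/s^2

    class Ball:
        def __init__(self, z0, vz0):
            self.z = z0    # current height
            self.vz = vz0  # current vertical velocity

        def update(self, dt):
            # Semi-implicit Euler: advance velocity first, then position.
            self.vz += G * dt
            self.z += self.vz * dt

    # dt is the measured frame time, so the motion is the same on any device.
    ball = Ball(z0=0.0, vz0=12.0)
    for frame_dt in (0.016, 0.017, 0.033):  # whatever the device delivers
        ball.update(frame_dt)
        print(ball.z, ball.vz)
    ```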

    Read the article

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this is throwing out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: My database in its initial state is about 250 KB and I don't expect it to grow to more than 1-2 MB. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update Here are the table definitions for the example I used in my question: create table Order ( Id int identity(1, 1) primary key, ProductName ntext null ) create table Customer ( Id int identity(1, 1) primary key, OrderId int null references Order (Id) )

    Read the article

  • How can I move TinyMCE's toolbar into a modal popup?

    - by Nate Wagar
    I'm using TinyMCE & jQuery and am having a problem moving TinyMCE's external toolbar to another location in the DOM. To further complicate things, there are multiple TinyMCE instances on the page. I only want the toolbar for the one that's currently being edited. Here's some sample code: $(textarea).tinymce({ setup: function(ed) {setupMCEToolbar(ed, componentID, displaySettingsPane)} ,script_url: '/clubs/data/shared/scripts/tiny_mce/tiny_mce.js' ,theme : "advanced" ,plugins : "safari,pagebreak,style,layer,table,save,advhr,advimage,advlink,emotions,iespell,inlinepopups,insertdatetime,preview,media,searchreplace,print,contextmenu,paste,directionality,fullscreen,noneditable,visualchars,nonbreaking,xhtmlxtras,template" ,theme_advanced_buttons1 : "save,newdocument,|,bold,italic,underline,strikethrough,|,justifyleft,justifycenter,justifyright,justifyfull,styleselect,formatselect,fontselect,fontsizeselect" ,theme_advanced_buttons2 : "cut,copy,paste,pastetext,pasteword,|,search,replace,|,bullist,numlist,|,outdent,indent,blockquote,|,undo,redo,|,link,unlink,anchor,image,cleanup,help,code,|,insertdate,inserttime,preview,|,forecolor,backcolor" ,theme_advanced_buttons3 : "tablecontrols,|,hr,removeformat,visualaid,|,sub,sup,|,charmap,emotions,iespell,media,advhr,|,print,|,ltr,rtl,|,fullscreen" ,theme_advanced_buttons4 : "insertlayer,moveforward,movebackward,absolute,|,styleprops,|,cite,abbr,acronym,del,ins,attribs,|,visualchars,nonbreaking,template,pagebreak" ,theme_advanced_toolbar_location : "external" ,theme_advanced_toolbar_align : "left" ,theme_advanced_statusbar_location : "bottom" ,theme_advanced_resizing : true }); var setupMCEToolbar = function (mce, componentID, displaySettingsPane) { mce.onClick.add(function(ed,e){ displaySettingsPane($('#' + componentID)); $('#' + componentID).fetch('.mceExternalToolbar').eq(0).appendTo('#settingsPaneContent'); }); } Basically, it seems as though the setupMCEToolbar function cannot track down the mceExternalToolbar to move it. Has anyone ever had success trying to do something like this? EDIT It's a Monday alright... it couldn't find the external toolbar because I was using children() instead of fetch(). There's still an issue in that: 1) Moving it is incredibly slow and 2) Once it moves, TinyMCE breaks. EDIT 2 A bit more clarification: The modal is draggable, thus making any purely-CSS workarounds a bit awkward...

    Read the article

  • Saving multiple items in a single database cell...

    - by eugeneK
    Hi, I have a countries list. Each user can check multiple countries. Once saved, this "user country list" will be used to decide whether other users fit the countries a certain user chose. The question is what would be the most efficient approach to this problem... The one I have is to save the user selection as a delimited list like Canada,USA,France ... in a single varchar(max) field, but the problem with it is that once a user from Germany enters a page I perform this check on, to search for Germany I would need to get all items and un-delimit each field to check against the value, or to use SQL 'like', which again is pretty damn slow.. If you have a better solution or some tips I would be glad to hear them. Just to make sure: many users will have their own selections of countries from which, and only from which, they want users to land on their page, while millions of users will reach those pages. So the faster the approach, the better. Technology: MSSQL and ASP.NET. Thanks
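
    For comparison, the usual alternative to a delimited varchar is a normalized join table (one row per user/country pair), which turns the "does user X accept country Y" check into a single indexed lookup instead of string splitting or a LIKE scan. A rough sketch of the idea using Python's sqlite3 for illustration (table and column names are invented; the same schema carries over to MSSQL):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE user_country (
        user_id INTEGER NOT NULL,
        country TEXT    NOT NULL,
        PRIMARY KEY (user_id, country)            -- one row per selected country
    );
    CREATE INDEX ix_country_user ON user_country (country, user_id);
    """)

    # User 42 selects three countries.
    con.executemany("INSERT INTO user_country VALUES (?, ?)",
                    [(42, "Canada"), (42, "USA"), (42, "France")])
    con.commit()

    # A visitor from Germany lands on user 42's page: one indexed lookup,
    # no string splitting and no LIKE scan.
    allowed = con.execute(
        "SELECT 1 FROM user_country WHERE user_id = ? AND country = ?",
        (42, "Germany")).fetchone() is not None
    print(allowed)  # False
    ```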

    Read the article

  • Word forms with too many ActiveX checkboxes load slowly.

    - by Luke
    Hi there, my company's software product has a feature that allows users to generate forms from Word templates. The program auto-fills some fields from the SQL database and the user can fill in other data that they desire. So we have a .dotx template that holds the design of the form, and then the user gets the .docx file to fill out when they call it from our program. The problem we're having is that some of our users have been finding that the forms take an exceptionally long time to open up and then, once open, are so slow to respond (scroll around, etc) that they're unusable. So in my investigations so far, I've found out that the problem systems are ones with lower-powered CPUs (unfortunately it happens for systems above our system requirements) and the Word forms that cause the problems are ones with a large number of ActiveX-style checkboxes on them. I verified that reducing the ActiveX checkboxes fixes the form loading problems. So I have the following questions about solutions (we're using Word 2007): 1) Is there any way to configure Word, or some other settings, so that there won't be such a strain opening a Word form with lots of ActiveX checkboxes? Any way of speeding up Word's opening? 2) Using Legacy-style checkboxes instead of the ActiveX ones makes the forms load fine, but it looks like the user has to double-click the checkbox and change Default Value to Checked. Is there a way to configure it so that they can simply click on the checkbox to tick it? "Legacy Forms" checkbox as a name kind of worries me (Legacy…); does that mean a future version of Word at some point wouldn't load the checkboxes because they're "legacy"? 3) Yes, it became clear to me after a little bit of research into solutions that Word is not the tool for the job for forms like I'm describing. InfoPath seems to be exactly what we should have been using all along, but unfortunately I wasn't involved in the decision making or development of these forms, just tasked with coming up with a solution. I'd appreciate answers to any of these, or if anyone has any other ideas for solutions to this problem. Thanks

    Read the article

  • Convert image color space and output separate channels in OpenCV

    - by Victor May
    I'm trying to reduce the runtime of a routine that converts an RGB image to a YCbCr image. My code looks like this: cv::Mat input(BGR->m_height, BGR->m_width, CV_8UC3, BGR->m_imageData); cv::Mat output(BGR->m_height, BGR->m_width, CV_8UC3); cv::cvtColor(input, output, CV_BGR2YCrCb); cv::Mat outputArr[3]; outputArr[0] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Y->m_imageData); outputArr[1] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Cr->m_imageData); outputArr[2] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Cb->m_imageData); split(output,outputArr); But this code is slow because there is a redundant split operation which copies the interleaved RGB image into the separate channel images. Is there a way to make the cvtColor function create an output that is already split into channel images? I tried to use the constructors of the _OutputArray class that accept a vector or array of matrices as input, but it didn't work.
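
    As far as I know, cvtColor always writes an interleaved result, so some de-interleaving step is unavoidable; the trade-off is between an explicit split (extra copies) and channel views that avoid the copy but are non-contiguous. A small illustration of that trade-off in Python/OpenCV (not the original C++; the input is a dummy array):

    ```python
    import cv2
    import numpy as np

    bgr = np.zeros((183, 276, 3), dtype=np.uint8)   # stand-in for the real frame
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)  # result is still interleaved

    # Option 1: explicit per-channel copies (what split does).
    y, cr, cb = cv2.split(ycrcb)

    # Option 2: zero-copy views via slicing; no pixels are duplicated, but the
    # planes are non-contiguous, which downstream code may or may not accept.
    y_view, cr_view, cb_view = ycrcb[:, :, 0], ycrcb[:, :, 1], ycrcb[:, :, 2]
    print(y_view.flags["C_CONTIGUOUS"])             # False
    ```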

    Read the article

  • Why does SQLite take such a long time to fetch the data?

    - by Derk
    I have two possible queries, both giving the result set I want. Query one takes about 30ms, but 150ms to fetch the data from the database. SELECT id FROM featurevalues as featval3 WHERE featval3.feature IN (?,?,?,?) AND EXISTS ( SELECT 1 FROM product_to_value, product_to_value as prod2, features, featurevalues WHERE product_to_value.feature = features.int AND product_to_value.value = featurevalues.id AND features.id = ? AND featurevalues.id IN (?,?) AND product_to_value.product = prod2.product AND prod2.value = featval3.id ) Query two takes about 3ms (this is the one I therefore prefer), but it also takes 170ms to fetch the data. SELECT ( SELECT prod2.value FROM product_to_value, product_to_value as prod2, features, featurevalues WHERE product_to_value.feature = features.int AND product_to_value.value = featurevalues.id AND features.id = ? AND featurevalues.id IN (?,?) AND product_to_value.product = prod2.product AND prod2.value = featval3.id ) as id FROM featurevalues as featval3 WHERE featval3.feature IN (?,?,?,?) The 170ms seems to be related to the number of rows in table featval3. After an index is used on featval3.feature IN (?,?,?,?), 151 items "remain" in featval3. Is there something obvious I am missing regarding the slow fetching? As far as I know everything is properly indexed. I am confused because the second query only takes a blazing 3ms to run.

    Read the article

  • Django + jQuery: getting 301

    - by llazzaro
    Hello, I have tabs that call Django URLs via JavaScript to fill the "container", but I am getting a 301. Any idea why this is happening? Server misconfiguration? urls.py urlpatterns = patterns('', (r'^admin/', include(admin.site.urls)), (r'^list/', 'carsproj.cars.views.list'), ) view def list(request): if request.is_ajax(): return render_to_response('templates/generic_list.html', { 'items' : Cars.objects.all(), 'name' : 'List - Cars' }, context_instance = RequestContext(request)) javascript the_tabs.click(function(e){ var element = $(this); if(element.find('#overLine').length) return false; var bg = element.attr('class').replace('tab ',''); $('#overLine').remove(); $('<div>',{ id:'overLine', css:{ display:'none', width:element.outerWidth()-2, background:topLineColor[bg] || 'white' }}).appendTo(element).fadeIn('slow'); if(!element.data('cache')) { $('#contentHolder').html('<img src="/media/img/ajax_preloader.gif" width="64" height="64" class="preloader" />'); $.get(element.data('page'),function(msg){ $('#contentHolder').html(msg); element.data('cache',msg); }); } else $('#contentHolder').html(element.data('cache')); e.preventDefault(); }) Please tell me what more information you need: JS code? Template? urls.py? I will edit this post to add more data.
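
    One common cause to rule out (a hedged guess, not a diagnosis): with Django's default CommonMiddleware and APPEND_SLASH, a request for /list without the trailing slash is answered with a 301 redirect to /list/, so if the tab's data('page') value omits the slash the AJAX call sees that redirect. Requesting the slashed URL from the JavaScript and anchoring the pattern avoids the ambiguity; a sketch of the urls.py side, reusing the names from the post (the '$' anchor is an addition):

    ```python
    # urls.py (Django 1.x era, matching the patterns() style used in the post)
    from django.conf.urls.defaults import patterns, include
    from django.contrib import admin

    urlpatterns = patterns('',
        (r'^admin/', include(admin.site.urls)),
        # anchored with '$'; the JavaScript should request '/list/' with the
        # trailing slash so APPEND_SLASH never has to issue a 301
        (r'^list/$', 'carsproj.cars.views.list'),
    )
    ```

    (A separate nit: the view returns nothing for non-AJAX requests, which would surface as a 500 rather than a 301, so it is probably unrelated to this symptom.)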

    Read the article

  • How to create a generic list in this weird case in C#

    - by Marc Bettex
    Hello, In my program, I have a class A which is extended by B, C and many more classes. I have a method GetInstance() which returns an instance of B or C (or of one of the other children), but I don't know which one, so the return type of the method is A. In the method CreateGenericList(), I have a variable v of type A, which is in fact either a B, a C or another child type, and I want to create a generic list of the proper type, i.e., List<B> if v is a B or List<C> if v is a C, ... Currently I do it by using reflection, which works, but this is extremely slow. I wanted to know if there is another way to do it which doesn't use reflection. Here is an example of the code of my problem: class A { } class B : A { } class C : A { } // More childs of A. class Program { static A GetInstance() { // returns an instance of B or C } static void CreateGenericList() { A v = Program.GetInstance(); IList genericList = // Here I want an instance of List<B> or List<C> or ... depending of the real type of v, not a List<A>. } } I tried the following hack. I call the following method, hoping the type inferencer will guess the type of model, but it doesn't work and returns a List<A>. I believe that is because C# is statically typed: T is resolved as A and not as the real type of model at runtime. static List<T> CreateGenericListFromModel<T>(T model) where T : A { return new List<T> (); } Does anybody have a solution to that problem that doesn't use reflection, or is it impossible to solve without reflection? Thank you very much, Marc

    Read the article

  • What is the fastest way to do division in C for 8-bit MCUs?

    - by Jordan S
    I am working on the firmware for a device that uses an 8-bit MCU (8051 architecture). I am using SDCC (Small Device C Compiler). I have a function that I use to set the speed of a stepper motor that my circuit is driving. The speed is set by loading a desired value into the reload register for a timer. I have a variable, MotorSpeed, that is in the range of 0 to 1200 which represents pulses per second to the motor. My function to convert MotorSpeed to the correct 16-bit reload value is shown below. I know that floating-point operations are pretty slow and I am wondering if there is a faster way of doing this... void SetSpeed() { float t = MotorSpeed; unsigned int j = 0; t = 1/t ; t = t / 0.000001; j = MaxInt - t; TMR3RL = j; // Set reload register for desired freq return; }
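
    Since 1/t divided by 0.000001 is just 1,000,000 / MotorSpeed, the float work can be replaced by a single unsigned 32-bit integer division in C, and if even that is too slow at runtime, the roughly 1200 possible reload values can be precomputed offline and stored as a const table. A hedged Python sketch that generates such a table (MaxInt is assumed to be 65535, i.e. a 16-bit reload register, and the 1 MHz timer tick is inferred from the 0.000001 divisor):

    ```python
    # Offline generation of timer-reload values for speeds 16..1200 pulses/s.
    MAX_INT = 65535  # assumption: 16-bit reload register; use the real MaxInt

    def reload_value(speed_pps: int) -> int:
        period_us = 1_000_000 // speed_pps   # 1 MHz tick, integer math only
        if period_us > MAX_INT:              # below ~16 pps the period no longer fits
            raise ValueError("period exceeds 16-bit timer; prescale instead")
        return MAX_INT - period_us

    table = [reload_value(s) for s in range(16, 1201)]

    # Emit a C initializer the 8051 firmware can index directly.
    print("const unsigned int reload_table[] = {")
    print(", ".join(str(v) for v in table))
    print("};")
    ```

    Indexing that table by MotorSpeed - 16 would turn the whole calculation into a lookup, at the cost of roughly 2.4 KB of code space.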

    Read the article

  • What is the best practice for including third party jar files in a Java program?

    - by ZoFreX
    I have a program that needs several third-party libraries, and at the moment it is packaged like so: zerobot.jar (my file) libs/pircbot.jar libs/mysql-connector-java-5.1.10-bin.jar libs/c3p0-0.9.1.2.jar As far as I know the "best" way to handle third-party libs is to put them on the classpath in the manifest of my jar file, which will work cross-platform, won't slow down launch (which bundling them might) and doesn't run into legal issues (which repackaging might). The problem is for users who supply the third party libraries themselves (example use case, upgrading them to fix a bug). Two of the libraries have the version number in the file, which adds hassle. My current solution is that my program has a bootstrapping process which makes a new classloader and instantiates the program proper using it. This custom classloader adds all .jar files in lib/ to its classpath. My current way works fine, but I now have two custom classloaders in my application and a recent change to the code has caused issues that are difficult to debug, so if there is a better way I'd like to remove this complexity. It also seems like over-engineering for what I'm sure is a very common situation. So my question is, how should I be doing this?

    Read the article

  • Simplification / optimization of GPS track

    - by GreyCat
    I've got a GPS track, produced by gpxlogger(1) (supplied as a client for gpsd). The GPS receiver updates its coordinates every 1 second; gpxlogger's logic is very simple, it writes down the location (lat, lon, ele) and a timestamp (time) received from GPS every n seconds (n = 3 in my case). After writing down several hours' worth of track, gpxlogger saves a GPX file several megabytes long that includes several thousand points. Afterwards, I try to plot this track on a map and use it with OpenLayers. It works, but several thousand points make using the map a sloppy and slow experience. I understand that having several thousand points is suboptimal. There are myriads of points that can be deleted without losing almost anything: when there are several points making up roughly a straight line and we're moving with the same constant speed between them, we can just keep the first and the last point and throw away everything else. I thought of using gpsbabel for such a track simplification / optimization job, but, alas, its simplification filter works only with routes, i.e. analyzing only the geometrical shape of the path, without timestamps (i.e. not checking that the speed was roughly constant). Is there some ready-made utility / library / algorithm available to optimize tracks? Or maybe I'm missing some clever option with gpsbabel?
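
    For reference, the textbook algorithm for this kind of thinning is Ramer-Douglas-Peucker: it ignores timestamps, but it removes exactly the "roughly collinear" points described above, and a speed check can be layered on top of it. A small self-contained sketch (planar approximation of lat/lon, epsilon in degrees; both are simplifying assumptions):

    ```python
    # Rough Ramer-Douglas-Peucker over (lat, lon) pairs, treating coordinates
    # as planar; epsilon is in degrees, so tune it for your latitude.
    def point_line_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:                      # degenerate segment
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

    def rdp(points, epsilon):
        if len(points) < 3:
            return points
        # farthest point from the chord between the endpoints
        dmax, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = point_line_dist(points[i], points[0], points[-1])
            if d > dmax:
                dmax, index = d, i
        if dmax <= epsilon:                          # whole run is near-collinear
            return [points[0], points[-1]]
        left = rdp(points[:index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right                     # drop the duplicated pivot

    track = [(50.0, 8.0), (50.0005, 8.001), (50.001, 8.002), (50.01, 8.05)]
    print(rdp(track, epsilon=0.0005))
    ```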

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple other tables that define what the question text is for the QuestionId and demographics for the RespondentId... Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export, and it's working... The problem is that it seems slow and we don't have that much data (less than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out what respondents are being exported and then select their combined cached de-aggregated data. To me the quick and dirty fix is a totally flat table, RespondentId, Question1, Question2, etc. The problem is, we have multiple clients and that doesn't scale AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data FOR XML AUTO WHERE RespondentId = x. With that in place, I would then be able to get my export with filtering and XML calls into the XML column. What are you doing to export aggregated data in a flattened format (CSV, Excel, etc)? Does this approach seem ok? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!

    Read the article

  • C# / Silverlight / WPF / Fast rendering lots of circles

    - by Walt W
    I want to render a lot of circles or small graphics within either Silverlight or WPF (around 1000-10000) as fast and as frequently as possible. If I have to go to DX or OGL, that's fine, but I'm wondering about doing this within either of those two frameworks first (read: it's OK if an answer is WPF-only or Silverlight-only). Also, if there is a way to access DX through WPF and render on a surface that way, I would be interested in that as well. So, what's the fastest way to draw a load of circles? They can be as plain as necessary, but they do need to have a radius. Currently I'm using DrawingVisual and a DrawingContext.DrawEllipse() command for each circle, then rendering the visual to a RenderTargetBitmap, but it becomes very slow as the number of circles rises. By the way, these circles move every frame, so caching isn't really an option unless you're going to suggest caching the individual circles... But their sizes are dynamic, so I'm not sure that's a great approach.

    Read the article

  • One bidirectional TCP socket or two unidirectional ones? (Linux, high volume, low latency)

    - by osgx
    Hello, I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Is it faster to use 1 TCP socket (for send and recv) or 2 unidirectional TCP sockets? The test for this task is very like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received 3 times at least (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong maybe). The benefit of 2 unidirectional connections comes from Linux: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847 3847/* 3848 * TCP receive function for the ESTABLISHED state. 3849 * 3850 * It is split into a fast path and a slow path. The fast path is 3851 * disabled when: ... 3859 * - Data is sent in both directions. Fast path only supports pure senders 3860 * or pure receivers (this means either the sequence number or the ack 3861 * value must stay constant) ... 3863 * 3864 * When these conditions are not satisfied it drops into a standard 3865 * receive procedure patterned after RFC793 to handle all cases. 3866 * The first three cases are guaranteed by proper pred_flags setting, 3867 * the rest is checked inline. Fast processing is turned on in 3868 * tcp_data_queue when everything is OK. All the other conditions for disabling the fast path are false, and only a non-unidirectional socket stops the kernel from taking the fast path in receive.
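
    The quoted kernel comment is the heart of it: a single socket carrying data both ways falls off the header-prediction fast path, while a pair of send-only/receive-only connections stays on it, so it is worth measuring both layouts with the real message sizes. A rough Python sketch of the kind of ping-pong probe to run (shown against localhost; in the real test the echo peer runs on the second machine, and the same harness is repeated with two unidirectional connections for comparison):

    ```python
    # Rough single-connection ping-pong latency probe (shown on localhost).
    import socket
    import threading
    import time

    HOST, PORT, ROUNDS, SIZE = "127.0.0.1", 50007, 1000, 64

    def echo_server():
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
                while True:
                    data = conn.recv(SIZE)
                    if not data:
                        break
                    conn.sendall(data)          # echo straight back

    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)                             # let the listener come up

    with socket.create_connection((HOST, PORT)) as c:
        c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        payload = b"x" * SIZE
        t0 = time.perf_counter()
        for _ in range(ROUNDS):
            c.sendall(payload)
            received = 0
            while received < SIZE:              # wait for the full echo
                received += len(c.recv(SIZE - received))
        rtt = (time.perf_counter() - t0) / ROUNDS
        print(f"avg round trip: {rtt * 1e6:.1f} us")
    ```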

    Read the article

  • How do I create rows with alternating colors for a UITableView on iPhone?

    - by Mat
    Hi all, I would like to alternate 2 colors of rows, like the first black, the second white, the third black, etc., etc. ... My approach is like a basic programming exercise to calculate whether a number is odd or not: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; cell = ((MainCell*)[tableView dequeueReusableCellWithIdentifier:CellIdentifier]); if (cell==nil) { NSArray *topLevelObjects=[[NSBundle mainBundle] loadNibNamed:@"MainCell" owner:self options:nil]; for (id currentObject in topLevelObjects){ if ([currentObject isKindOfClass:[UITableViewCell class]]){ if ((indexPath.row % 2)==0) { [cell.contentView setBackgroundColor:[UIColor purpleColor]]; }else{ [cell.contentView setBackgroundColor:[UIColor whiteColor]]; } cell = (MainCell *) currentObject; break; } } }else { AsyncImageView* oldImage = (AsyncImageView*) [cell.contentView viewWithTag:999]; [oldImage removeFromSuperview]; }return cell; The problem is that when I scroll rapidly, the cell backgrounds come out wrong (e.g. the last 2 cells black, the first 2 cells white, or something like that), but if I scroll slowly it works fine. I think the problem is the reusable cell cache. Any ideas? TIA

    Read the article

  • Developing on a Windows machine that interacts with a Linux system

    - by Jamie
    Sorry for the bad title (couldn't think of a better way to describe it). I have a Windows machine which I do development on. However, I have a new project which needs to interact with a Linux system (executing Linux commands etc.). So, obviously I can't do development on my Windows machine... and I don't wish to code on the dev machine, svn commit, and then svn update it on the Linux machine. Is there a way where any changes I make on my dev machine will be quickly mirrored to the Linux machine? SVN is not a very quick alternative and of course some changes will be very minor. Any ideas? A network share I guess... but that's not very pretty (a bit slow too). As fellow developers I would like to know if you've been in a similar situation and how you've resolved it. On a further note, I can't just install Ubuntu as my development machine and mirror the commands, applications etc. from the Linux machine because it's a cluster 'master' machine and therefore it has quite a special configuration. Thanks guys! EDIT: I've also thought about having web services on the Linux machine and then just calling them from code, thus separating the platform development dependency. What do you think about that too? Thanks

    Read the article

  • jQuery: Trouble with draggable across cloned elements

    - by Rosarch
    I'm trying to implement: User drags a draggable li to a droppable li. The original li is no longer draggable A new li is cloned from the original li, and is appended to the droppable li. I can't get it to work. function moveToTerm(original_course, helper, term) { var cloned_course = original_course.clone(true); original_course.addClass('already-scheduled'); original_course.draggable('disable'); cloned_course.draggable(); cloned_course.appendTo(term).hide().fadeIn('slow'); } This works fine, except now the cloned_course is not draggable. A droppable li: <li class="term ui-droppable"> <strong>Fall 2010</strong> <li class="course">Computing Cultures</li> <!-- this course was just dropped. I want it to be draggable but it's not --> <li class="course ui-draggable" style="display: list-item;">New Media and Society</li> </li> What am I doing wrong?

    Read the article

  • How to parallelize this Groovy code?

    - by lucas
    I'm trying to write a reusable component in Groovy to easily shoot off emails from some of our Java applications. I would like to pass it a List<Email>, where Email is just a POJO (POGO?) with some email info. I'd like it to be multithreaded, at least running all the email logic in a second thread, or making one thread per email. I am really foggy on multithreading in Java so that probably doesn't help! I've attempted a few different ways, but here is what I have right now: void sendEmails(List<Email> emails) { def threads = [] def sendEm = emails.each{ email -> def th = new Thread({ Random rand = new Random() def wait = (long)(rand.nextDouble() * 1000) println "in closure" this.sleep wait sendEmail(email) }) println "putting thread in list" threads << th } threads.each { it.run() } threads.each { it.join() } } I was hoping the sleep would randomly slow some threads down so the console output wouldn't be sequential. Instead, I see this: putting thread in list putting thread in list putting thread in list putting thread in list putting thread in list putting thread in list putting thread in list putting thread in list putting thread in list putting thread in list in closure sending email1 in closure sending email2 in closure sending email3 in closure sending email4 in closure sending email5 in closure sending email6 in closure sending email7 in closure sending email8 in closure sending email9 in closure sending email10 sendEmail basically does what you'd expect, including the println statement, and the client that calls this follows: void doSomething() { Mailman emailer = MailmanFactory.getExchangeEmailer() def to = ["one","two"] def from = "noreply" def li = [] def email (1..10).each { email = new Email(to,null,from,"email"+it,"hello") li << email } emailer.sendEmails li }
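
    One likely culprit, judging from the strictly sequential output: threads.each { it.run() } executes each closure on the calling thread, one after another; it.start() is what actually spawns the threads. Python's threading API has the same start/run distinction, which this small sketch uses to illustrate the difference (the Groovy fix would be the analogous one-word change):

    ```python
    import random
    import threading
    import time

    def send_email(subject):
        time.sleep(random.random())          # stand-in for the real SMTP call
        print("sending", subject)

    threads = [threading.Thread(target=send_email, args=(f"email{i}",))
               for i in range(1, 11)]

    for t in threads:
        t.start()   # start(), not run(): run() would execute on this thread
    for t in threads:
        t.join()
    ```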

    Read the article

  • Is it a good choice to move a sublayer around a view using UIPanGestureRecognizer?

    - by Pranjal Bikash Das
    I have a CALayer, and as a sublayer to this CALayer I have added an imageLayer which contains an image with a resolution of 276x183. I added a UIPanGestureRecognizer to the main view and calculate the coordinates of the CALayer as follows: - (void)panned:(UIPanGestureRecognizer *)sender{ subLayer.frame=CGRectMake([sender locationInView:self.view].x-138, [sender locationInView:self.view].y-92, 276, 183); } In viewDidLoad I have: subLayer.backgroundColor=[UIColor whiteColor].CGColor; subLayer.frame=CGRectMake(22, 33, 276, 183); imageLayer.contents=(id)[UIImage imageNamed:@"A.jpg"].CGImage; imageLayer.frame=subLayer.bounds; imageLayer.masksToBounds=YES; imageLayer.cornerRadius=15.0; subLayer.shadowColor=[UIColor blackColor].CGColor; subLayer.cornerRadius=15.0; subLayer.shadowOpacity=0.8; subLayer.shadowOffset=CGSizeMake(0, 3); [subLayer addSublayer:imageLayer]; [self.view.layer addSublayer:subLayer]; It is giving the desired output, but it is a bit slow in the simulator. I have not yet tested it on the device. So my question is: is it OK to move a CALayer containing an image?

    Read the article

  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b uniquely determines ts. A = pd.DataFrame( [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5), ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)], columns=['a', 'b', 'ts']).set_index(['a', 'b']) AA = A.reset_index() Table B is another one-column (ts) table with a non-unique index (a). The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A. B = pd.DataFrame( dict(a=list('aaaaabbcccccc'), ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a') The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be: C: ('a', 'x') 4 ('a', 'y') 7 ('a', 'z') 5 ('b', 'x') 7 ('b', 'z') 7 ('c', 'y') 8 I have some working code, but it is terribly slow. C = AA.apply(lambda row: ( row[0], row[1], B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1).set_index(['a', 'b']) Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that now I have 1000 a's; assume the average number of b's per a to be constant (probably 100-200), and consider that the number of observations per a is probably in the order of 300. In production I will have 1000x more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
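
    One way to avoid the per-row .ix lookup (the profiled hotspot) without a full merge is to pre-group B once and answer each group of A with a single vectorized searchsorted. A sketch against the sample frames above; it leans on the stated guarantees that each B group is already sorted and always contains a value >= the queried ts:

    ```python
    import numpy as np
    import pandas as pd

    A = pd.DataFrame(
        [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
         ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
        columns=['a', 'b', 'ts']).set_index(['a', 'b'])
    B = pd.DataFrame(
        dict(a=list('aaaaabbcccccc'),
             ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    AA = A.reset_index()
    parts = []
    for key, grp in AA.groupby('a', sort=False):
        ts_sorted = B.loc[[key], 'ts'].to_numpy()   # already sorted per group
        # One binary search for the whole group instead of one lookup per row.
        pos = np.searchsorted(ts_sorted, grp['ts'].to_numpy(), side='left')
        parts.append(pd.Series(ts_sorted[pos], index=grp.index))
    AA['next_ts'] = pd.concat(parts)
    C = AA.set_index(['a', 'b'])['next_ts']
    print(C)   # ('a','x') 4, ('a','y') 7, ('a','z') 5, ('b','x') 7, ('b','z') 7, ('c','y') 8
    ```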

    Read the article

  • Why does Android allocate more memory than needed when loading images

    - by Simon
    Folks, I don't think that this is a duplicate, and it is NOT one of those "how do I avoid OOMs" questions. This is a genuine quest for knowledge, so hold off on those downvotes please... Imagine I have a JPEG of 500x500 pixels. I load it as ARGB_8888, which is as "bad as it gets". I would expect Android to allocate 500x500x4 bytes = a little under 1MB; however, look at a heap dump and you will see that Android allocates significantly more, often factors of 5-10 times greater. You frequently see questions on here about OOMs where the stack trace shows a heap request of say 15MB, and it is ALWAYS much larger than is required simply to hold the bytes of the image. The OP usually catches some downvotes and then is bombarded with stock answers and comments about using less memory (thanks Romain!) and about scaling. I think there is more than meets the eye here. Anybody know why this is? If there is no apparent answer, I will put together an SSCCE if it helps. PS. I assume that JPEG vs PNG etc is irrelevant since we're talking about the memory usage of the backing bitmap, which is simply x times y times BPP - or am I being slow?
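
    One possible explanation worth checking (an assumption, not a confirmed diagnosis): when a bitmap is decoded from res/drawable without a density qualifier, Android treats it as an mdpi resource and scales it up to the device density before allocating the pixels. A 500x500 source then becomes roughly 750x750 on an hdpi device (750 x 750 x 4 ≈ 2.2 MB) and 1000x1000 on an xhdpi device (1000 x 1000 x 4 = 4 MB), several times the naive 500 x 500 x 4 ≈ 1 MB estimate. Decoding with BitmapFactory.Options.inScaled = false, or placing the asset in drawable-nodpi, keeps the decoded bitmap at the source dimensions.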

    Read the article

  • Database design: Calculating the Account Balance

    - by 001
    How do I design the database to calculate the account balance? 1) Currently I calculate the account balance from the transaction table. In my transaction table I have "description" and "amount" etc. I then add up all the "amount" values and that works out the user's account balance. I showed this to my friend and he said that is not a good solution; when my database grows it's going to slow down. He said I should create a separate table to store the calculated account balance. If I did this, I will have to maintain two tables, and it's risky: the account balance table could go out of sync. Any suggestions? EDIT: OPTION 2: should I add an extra "Balance" column to my transaction table? Then I would not need to go through many rows of data to perform my calculation. Example: John buys $100 of credit, he spends $60, he then adds $200 of credit. Amount $100, Balance $100. Amount -$60, Balance $40. Amount $200, Balance $240.
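
    A common middle ground is to keep the transaction table as the source of truth and still store a current balance, but update both inside one database transaction so they cannot drift apart; option 2 (a running balance per row) works the same way. A rough sketch of the idea using Python's sqlite3 for illustration (table names are invented; the pattern carries over to any RDBMS):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE account (id INTEGER PRIMARY KEY,
                          balance INTEGER NOT NULL DEFAULT 0);
    CREATE TABLE txn     (id INTEGER PRIMARY KEY AUTOINCREMENT,
                          account_id INTEGER NOT NULL REFERENCES account(id),
                          description TEXT,
                          amount INTEGER NOT NULL);
    """)
    con.execute("INSERT INTO account (id) VALUES (1)")
    con.commit()

    def post(con, account_id, description, amount):
        # Both writes happen in one transaction, so the stored balance can
        # never drift from the transaction history.
        with con:
            con.execute("INSERT INTO txn (account_id, description, amount) "
                        "VALUES (?, ?, ?)", (account_id, description, amount))
            con.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                        (amount, account_id))

    post(con, 1, "buy credit", 100)
    post(con, 1, "usage", -60)
    post(con, 1, "buy credit", 200)

    print(con.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0])         # 240
    print(con.execute("SELECT SUM(amount) FROM txn WHERE account_id = 1").fetchone()[0]) # 240
    ```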

    Read the article
