Search Results

Search found 13151 results on 527 pages for 'performance counters'.

Page 464/527

  • Custom UIView using CALayers disappears after 180° rotation or navigation controller pop

    - by Steve Madsen
    I have created a custom UIView subclass that is exhibiting some strange behavior. It is a spinning wheel selector, and for performance reasons it is drawn entirely into two CALayer instances. The bottom layer is the wheel itself, which is rotated using setAffineTransform: according to touches. The top layer is eye candy. drawRect: is fairly simple. If the control hasn't been drawn yet (or it's been invalidated), it calls a method that creates the images and assigns them to the layer contents property. - (void) drawRect:(CGRect)rect { if (imageLayer == nil) { [self drawIntoImageLayer]; } [self updateWheelRotation]; } When the view controller using this view first appears, everything is fine. There are two instances where the view completely disappears, however: If the device is rotated a full 180°. After a view controller is popped off the navigation stack and the view becomes visible again. drawRect: is not called in either case. Interestingly enough, it IS called after a 90° orientation change, and that causes the view to re-appear. How can I ensure that a custom view using CALayers is redrawn properly in these situations?

    Read the article

  • In .NET, What Is the Fastest Way to Initialize a Multi-Dimensional Array to a Non-Default Value?

    - by AMissico
    How do I initialize a multi-dimensional array of a primitive type as fast as possible? I am stuck with using multi-dimensional arrays. My problem is performance. The following routine initializes a 100x100 array in approx. 500 ticks. Removing the int.MaxValue initialization results in approx. 180 ticks just for the looping. Approximately 100 ticks to create the array without looping and without initializing to int.MaxValue. Routines similar to this are called a few tens-of-thousands to several million times. I am open to suggestions on how to optimize this non-default initialization of an array. One idea I had is to use a smaller primitive type when available. For instance, using byte instead of int saves 100 ticks. I would be happy with this, but I am hoping that I don't have to change the primitive data type. public int[,] CreateArray(Size size) { int[,] array = new int[size.Width, size.Height]; for (int x = 0; x < size.Width; x++) { for (int y = 0; y < size.Height; y++) { array[x, y] = int.MaxValue; } } return array; }

    Read the article

  • Rails: fighting long HTTP response times with ajax. Is it a good idea? Please help with implementation

    - by baranov
    Hi, everybody! I've googled some tutorials, browsed some SO answers, and was unable to find a recipe for my problem. I'm writing a web site which is supposed to display an almost-realtime stock chart. Data is stored in a constantly updating MySQL database, and I wrote find_by_sql query code which fetches all the data I need to get my chart drawn. Everything is ok, except performance - it takes from one second to one minute for different queries to fetch all the data from the database, and this time includes the necessary (My)SQL-server-side calculations. This is simply unacceptable. I got the following idea: if the data is queried from the MySQL server one point at a time instead of as an entire dataset, it takes only about 1-100ms to get an individual point. I imagine the data fetch process might be browser-driven. After the user presses the button in order to get a chart drawn, the controller makes one request to the database and renders, say, a progress bar at 1% ready. When the browser gets the response, it immediately makes an (ajax) request, and the server fetches the next piece of data and renders "2%". And so on, until all the data is ready and the server displays the requested chart. Could this be implemented in Rails+JS, and is there a tutorial for solving a similar problem on the Web? I suppose if the thing is feasible at all, somebody should have already done this before. I have read several articles about ajax, and I believe I understand the general principles, but I have never done nontrivial ajax programming myself. Thanks for your time!

    Read the article

  • Python socket error on UDP data receive. (10054)

    - by Charles
    I currently have a problem using UDP and Python's socket module. We have a server and clients. The problem occurs when we send data to a user. It's possible that the user may have closed their connection to the server through a client crash, a disconnect by their ISP, or some other improper method. As such, it is possible to send data to a closed socket. Of course with UDP you can't tell if the data really arrived or if the socket is closed, as it doesn't care (at least, it doesn't raise an exception). However, if you send data and the far side is closed off, you somehow get data back (???), which ends up giving you a socket error on sock.recvfrom: [Errno 10054] An existing connection was forcibly closed by the remote host. It almost seems like an automatic response from the connection. This is fine, and can be handled by a try/except block (even if it lowers the performance of the server a little bit). The problem is, I can't tell who this is coming from or which socket is closed. Is there any way to find out 'who' (ip, socket #) sent this? It would be great, as I could instantly disconnect them and remove them from the data. Any suggestions? Thanks.
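
    A minimal sketch of the usual workaround, in Python: the 10054 (WSAECONNRESET) report shows up on a later recvfrom and does not identify the peer, so the common pattern is to swallow it and drop unresponsive clients through your own timeout bookkeeping. The bind address/port and handle() are placeholders, not part of the question's code.

        import errno
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 9999))

        while True:
            try:
                data, addr = sock.recvfrom(4096)
            except socket.error as exc:
                # Triggered by a *previous* send to a dead peer; the exception
                # carries no address, so there is nobody to disconnect here.
                if exc.errno in (errno.ECONNRESET, 10054):
                    continue
                raise
            handle(data, addr)  # placeholder for the server's per-packet logic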

    Read the article

  • SQL to get list of dates as well as days before and after without duplicates

    - by Nathan Koop
    I need to display a list of dates, which I have in a table SELECT mydate AS MyDate, 1 AS DateType FROM myTable WHERE myTable.fkId = @MyFkId; Jan 1, 2010 - 1 Jan 2, 2010 - 1 Jan 10, 2010 - 1 No problem. However, I now need to display the date before and the date after as well with a different DateType. Dec 31, 2009 - 2 Jan 1, 2010 - 1 Jan 2, 2010 - 1 Jan 3, 2010 - 2 Jan 9, 2010 - 2 Jan 10, 2010 - 1 Jan 11, 2010 - 2 I thought I could use a union SELECT MyDate, DateType FROM ( SELECT mydate - 1 AS MyDate, 2 AS DateType FROM myTable WHERE myTable.fkId = @MyFkId UNION SELECT mydate + 1 AS MyDate, 2 AS DateType FROM myTable WHERE myTable.fkId = @MyFkId UNION SELECT mydate AS MyDate, 1 AS DateType FROM myTable WHERE myTable.fkId = @MyFkId ) AS myCombinedDateTable This however includes duplicates of the original dates. Dec 31, 2009 - 2 Jan 1, 2010 - 2 Jan 1, 2010 - 1 Jan 2, 2010 - 2 Jan 2, 2010 - 1 Jan 3, 2010 - 2 Jan 9, 2010 - 2 Jan 10, 2010 - 1 Jan 11, 2010 - 2 How can I best remove these duplicates? I am considering a temporary table, but am unsure if that is the best way to do it. It also appears that this may cause performance issues, as I am running the same query three separate times. What would be the best way to handle this request?
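
    The duplicates only appear because the same date can be produced both as an original (type 1) and as a neighbour (type 2); giving type 1 priority per date (e.g. grouping the union by date and taking MIN(DateType)) removes them. A small Python sketch of that rule, using the question's sample dates, just to make the priority explicit:

        from datetime import date, timedelta

        originals = [date(2010, 1, 1), date(2010, 1, 2), date(2010, 1, 10)]

        rows = {d: 1 for d in originals}                  # type 1 always wins
        for d in originals:
            for neighbour in (d - timedelta(days=1), d + timedelta(days=1)):
                rows.setdefault(neighbour, 2)             # only added if not already type 1

        for d, date_type in sorted(rows.items()):
            print(d, date_type)                           # Dec 31 2009 .. Jan 11 2010, no duplicates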

    Read the article

  • How to react when asked a question you already know during an interview

    - by DevNull
    The short story:- If you are asked a tough algorithmic/puzzle question during an interview, whose solution is already known to you, do you:- Honestly tell the interviewer that you know this question already? -- this could result in bursting the interviewer's ego and him increasing the complexity level of the subsequent questions. Do an Oscar deserving performance and act as if you are thinking and trying hard and slowly getting to the solution? -- depending on your acting skills, could majorly impress the interviewer making the rest of the interview easier. Long story:- OK, this question comes as a result of what happened to me in a recent telephonic interview that I gave - the interview was supposed to be all algorithmic. The interviewer started with an algorithmic question which I had luckily already seen here on Stackoverflow. The best solution to that problem is not very intuitive and is more of a you-get-it-if-you-know-it kind. Now, just to not disappoint the interviewer too much, I took a few seconds as if I was pondering on the problem and then blurted out the answer which I knew too well having read and admired it on SO already. But I guess that gave it away to the interviewer that I already knew this question and since then, he started asking me for more efficient solutions and I kept coming up with approaches (even if not correct or more efficient, but I did touch a lot of different data structures and algos) and he kept asking for more efficient solutions and generally seemed put off by my initial salvo which was unexpected. What should I have done? Cheers!

    Read the article

  • MySQL Server Optimization

    - by Ish Kumar
    Hi Geeks, We are having serious MySQL (InnoDB) performance issues at the moment when we do: (10-20) insertions on TABLE1 (10-20) updates on TABLE2 Note: both of the above operations happen within a fraction of a second, and this occurs every few (10-15) minutes. All online users (approx. 400-600) do a read operation on a join of TABLE1 & TABLE2 every 1 second. Here is our MySQL configuration info: http://docs.google.com/View?id=dfrswh7c_117fmgcmb44 Issues: Lots of queries wait and expire later (seen in phpMyAdmin / processes). My poor MySQL server crashes sometimes. Questions Q1: Any suggestions to optimize at the MySQL level? Q2: I am thinking of using persistent connections at the application level; is that right? Info Added Later: Database Engine: InnoDB TABLE1: 400,000 rows (inserting 8,000 daily) & TABLE2: 8,000 rows 1 second query: SELECT b.id, b.user_id, b.description, b.debit, b.created, b.price, u.username, u.email, u.mobile FROM TABLE1 b, TABLE2 u WHERE b.credit = 0 AND b.user_id = u.id AND b.auction_id = "12345" ORDER BY b.id DESC LIMIT 10; // there are a few more but they are not so critical. Indexing is good, and we are using it wisely. In the above query all ids are indexed, and TABLE1 has frequent insertions and TABLE2 has frequent updates.

    Read the article

  • Java SSH2 libraries in depth: Trilead/Ganymed/Orion [/other?]

    - by Bernd Haug
    I have been searching for a pure Java SSH library to use for a project. The single most important needed feature is that it has to be able to work with command-line git, but remote-controlling command-line tools is also important. A pretty common choice, e.g. used in the IntelliJ IDEA git integration (which works very well), seems to be Trilead SSH2. Looking at their website, it's not being maintained any more. Trilead seems to have been a fork of Ganymed SSH2, which was an ETH Zurich project that didn't see releases for a while, but had a recent release by its new owner, Christian Plattner. There is another actively maintained fork from that code base, Orion SSH, that saw an even more recent release, but which seems to get mentioned online much less than the other two forks. Has anybody here worked with either (or, if possible, both) of Ganymed and Orion and could kindly describe the development experience with either/both? Accuracy of documentation [existence of documentation?], stability, bugginess... - all of these would be highly interesting to me. Performance is not so important for my current project. If there is another pure-Java SSH implementation that should be used instead, please feel free to mention it, but please don't just mention a name...describe your judgment from actual experience. Sorry if this question may seem a bit "do my homework"-y, but I've really searched for reviews. Everything out there seems to be either a listing of implementations or short "use this! it's great!" snippets.

    Read the article

  • Bulk Insert of hundreds of millions of records

    - by Dave Jarvis
    What is the fastest way to insert 237 million records into a table that has rules (for distributing the data across 84 child tables)? First I tried inserts. No go. Then I tried inserts with BEGIN/COMMIT. Not nearly fast enough. Next, I tried COPY FROM, but then noticed the documentation states that the rules are ignored. (And it was having difficulties with the column order and date format -- it said that '1984-07-1' was not a valid integer; true, but a bit unexpected.) Some example data: station_id,taken,amount,category_id,flag 1,'1984-07-1',0,4, 1,'1984-07-2',0,4, 1,'1984-07-3',0,4, 1,'1984-07-4',0,4,T Here is the table structure (with one rule included): CREATE TABLE climate.measurement ( id bigserial NOT NULL, station_id integer NOT NULL, taken date NOT NULL, amount numeric(8,2) NOT NULL, category_id smallint NOT NULL, flag character varying(1) NOT NULL DEFAULT ' '::character varying ) WITH ( OIDS=FALSE ); ALTER TABLE climate.measurement OWNER TO postgres; CREATE OR REPLACE RULE i_measurement_01_001 AS ON INSERT TO climate.measurement WHERE date_part('month'::text, new.taken)::integer = 1 AND new.category_id = 1 DO INSTEAD INSERT INTO climate.measurement_01_001 (id, station_id, taken, amount, category_id, flag) VALUES (new.id, new.station_id, new.taken, new.amount, new.category_id, new.flag); I can generate the data into any format. Am looking for something that won't take four days. I originally had the data in MySQL (still do), but am hoping to get a performance increase by switching to PostgreSQL and am eager to use its PL/R extensions for stats. I was also thinking about using: http://pgbulkload.projects.postgresql.org/ Any help, tips, or guidance would be greatly appreciated. Thank you!
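
    Since COPY bypasses the rules, one workable approach is to pre-split the input by the same month/category rule and COPY each piece straight into its child table. A rough Python sketch of just the splitting step; the input/output file names are assumptions, and the measurement_MM_CCC naming follows the rule shown above:

        import csv

        # Route each row to measurement_MM_CCC.csv so every file can be loaded
        # with COPY directly into the matching child table, skipping the rules.
        writers = {}
        with open("measurements.csv", newline="") as src:
            for station_id, taken, amount, category_id, flag in csv.reader(src):
                taken = taken.strip("'")                      # input dates arrive quoted
                key = (int(taken.split("-")[1]), int(category_id))
                if key not in writers:
                    out = open("measurement_%02d_%03d.csv" % key, "w", newline="")
                    writers[key] = (out, csv.writer(out))
                writers[key][1].writerow([station_id, taken, amount, category_id, flag])
        for out, _ in writers.values():
            out.close()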

    Read the article

  • How to choose a light version of a database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices in a chain for one store, I would love to have a local backend database system on each POS device. The scenario is as follows: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes back up. Now I am in a dilemma about which local database is going to be the best fit for me. Here are some notes to help point me in the right direction: it should be light ("my POS devices are usually old and suffer from poor performance") and it should be free ("I have a lot of devices and I do not want additional cost beside the main SQL Server"). One day I would also love to try porting all of this to Mono and Linux. Here is what I've researched so far: simple XML ("light, but I am afraid of performance; my main items table averages 10K records"); SQL Express ("I am afraid my POS devices have too little hardware for SQL Express, and it is also hard to install and configure on each device"); the lesser-known Advantage Database Server, which has a free distribution of an offline ADT system; DBF with an extended library ("respect for good old DBFs, but that era is behind me, along with Clipper"); MS Access; SQLite ("my favourite for now, but I am afraid of how it will pair with MS SQL; do they have the same data types?"). I know there is a lot of subjective data in this, but can someone at least recommend some other lite database systems, or point out things I should pay attention to before I choose a database.
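
    For scale, the SQLite option really is just an embedded single-file library with nothing to install or administer; a quick feel for how little it takes, shown with Python's built-in sqlite3 module purely for brevity (file name and schema are made up for the example):

        import sqlite3

        con = sqlite3.connect("pos_offline.db")      # one file on disk, no service to run
        con.execute("""CREATE TABLE IF NOT EXISTS items (
                           id    INTEGER PRIMARY KEY,
                           name  TEXT    NOT NULL,
                           price NUMERIC NOT NULL)""")
        con.executemany("INSERT INTO items (name, price) VALUES (?, ?)",
                        [("Espresso", 1.20), ("Latte", 2.50)])
        con.commit()
        print(con.execute("SELECT COUNT(*) FROM items").fetchone()[0])
        con.close()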

    Read the article

  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence climbing parser written in F# and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it: import Control.Applicative import Text.Parsec hiding ((<|>)) expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-') term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/') fact = read <$> many1 digit <|> char '(' *> expr <* char ')' eval :: String -> Int eval = either (error . show) id . parse expr "" . filter (/= ' ') main :: IO () main = do file <- readFile "expr" putStr $ show $ eval file putStr "\n" and here's my self-contained precedence climbing parser in F#: let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs)) and loop = function | ' ' as oop, f, ('+' | '-' as op)::P(g, xs) | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) -> let h, xs = loop (op, g, xs) let op = match op with | '+' -> (+) | '-' -> (-) | '*' -> (*) | '/' -> (/) loop (oop, op f h, xs) | _, f, xs -> f, xs and (|P|) = function | '('::Expr(f, ')'::xs) -> P(f, xs) | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs) My impression is that even state-of-the-art parser combinators waste a lot of time back tracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance or is it necessary to use code generation?

    Read the article

  • When are SQL views appropriate in ASP.net MVC?

    - by sslepian
    I've got a table called Protocol, a table called Eligibility, and a Protocol_Eligibility table that maps the two together (a many-to-many relationship). If I wanted to make a perfect copy of an entry in the Protocol table, and create all the needed mappings in the Protocol_Eligibility table, would using an SQL view be helpful, from a performance standpoint? Protocol will have around 1000 rows, Eligibility will have about 200, and I expect each Protocol to map to about 10 Eligibility rows and each Eligibility to map to over 100 rows in Protocol. Here's how I'm doing this with the view: var pel_original = (from pel in _documentDataModel.Protocol_Eligibility_View where pel.pid == id select pel); Protocol_Eligibility newEligibility; foreach (var pel_item in pel_original) { newEligibility = new Protocol_Eligibility(); newEligibility.Eligibility = (from pel in _documentDataModel.Eligibility where pel.ID == pel_item.eid select pel).First(); newEligibility.Protocol = newProtocol; newEligibility.ordering = pel_item.ordering; _documentDataModel.AddToProtocol_Eligibility(newEligibility); } And this is without the view: var pel_original = (from pel in _documentDataModel.Protocol_Eligibility where pel.Protocol.ID == id select pel); Protocol_Eligibility newEligibility; foreach (var pel_item in pel_original) { pel_item.EligibilityReference.Load(); newEligibility = new Protocol_Eligibility(); newEligibility.Eligibility = pel_item.Eligibility; newEligibility.Protocol = newProtocol; newEligibility.ordering = pel_item.ordering; _documentDataModel.AddToProtocol_Eligibility(newEligibility); }

    Read the article

  • How to lazy load scripts in YUI that accompany ajax html fragments

    - by Chris Beck
    I have a web app with Tabs for Messages and Contacts (think gmail). Each Tab has a Y.on('click') event listener that retrieves an HTML fragment via Y.io() and renders the fragment via node.setContent(). However, the Contact Tab also requires contact.js to enable an Edit button in the fragment. How do I defer the cost of loading contact.js until the user clicks on the Contacts tab? How should contact.js add its listener to the Edit button? The Complete function of my Tab's on('click') event could serialize Get.script('contact.js') after Y.io('fragment'). However, for better performance, I would prefer to download the script in parallel with downloading the HTML fragment. In that case, I would need to defer adding an event listener to the Edit button until the Edit button is available. This seems like a common RIA design pattern. What is the best way to implement this with YUI? How should I get the script? How should I defer sections of the script until elements in the fragment are available in the DOM?

    Read the article

  • How/when to hire new programmers, and how to integrate them?

    - by Shaul
    Hiring new programmers, especially in a small company, can often present a Catch-22 situation. We have too much work to do, so we need to hire new programmers. But we can't hire new programmers now, because they will need mentoring and several months of learning curve in your industry/product/environment before they're useful, and none of the programmers has time to be a mentor to a new programmer, because they're all completely swamped with the current work load. That may be a slightly frivolous way of describing the situation, but nevertheless, it's difficult for a small company on a tight budget to justify hiring someone who is not only going to be unproductive for a long time, but will also take away from the performance of the current programmers. How have you dealt with this kind of situation? When is the best time to hire someone? What are the best tasks to assign to a new team member so that they can learn their way around your code base and start getting their hands dirty as quickly as possible? How do you get the new guy useful without bogging your existing programmers down in too much mentoring? Any comments & suggestions you have are much appreciated!

    Read the article

  • spam and dirty words comment post filtering in python (django)

    - by sintaloo
    Hi All, My basic question is how to filter spam and dirty words in a comment post system in Python (Django). I have a collection of phrases (approximately 3000 phrases) to be filtered. Question (1): are there any existing open-source Python (or Django) packages/modules/plugins which can handle this job? I knew there was one called Akismet, but from what I understood, it will not solve my problem. Akismet is just a web service and filters against the word dictionary defined by Akismet, whereas I have my own collection of words. Please correct me if I am wrong. Question (2): if there is no such open-source package I can use, how do I create my own? The only thing I can think of is to use a regular expression and join all the word phrases with 'or' in one regular expression, but I have 3000 phrases and I think that won't work, in terms of performance, for filtering every comment post. Any suggestions on where I should start? Thank you very much for your help and time.
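
    For question (2), one compiled alternation over all 3000 escaped phrases is often fast enough in practice to run on each comment; if it is not, a trie-based matcher such as Aho-Corasick is the next step. A minimal sketch (the phrases.txt file name and one-phrase-per-line format are assumptions):

        import re

        with open("phrases.txt", encoding="utf-8") as fh:
            phrases = [line.strip() for line in fh if line.strip()]

        phrases.sort(key=len, reverse=True)               # prefer the longest match
        banned = re.compile(
            r"\b(?:%s)\b" % "|".join(re.escape(p) for p in phrases),
            re.IGNORECASE,
        )

        def contains_banned(text):
            return banned.search(text) is not None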

    Read the article

  • javascript pausing consistently. How do I find what is causing it to pause?

    - by pedalpete
    I've got a fairly ajax heavy site and I'm trying to tune the performance. I have a function that runs between 20 & 200 times, depending on the user. I'm outputting the time the function takes to execute via console.time in firefox. The function takes about 4-6ms to complete. The strange thing is that on my larger test with 200 or runs through that function, it runs through the first 31, then seems to pause for almost a second before completing the last 170 or so. However, that 'pause' doesn't show up in the console.time logs, and I'm not running any other functions, and the object that gets passed to the function looks the same as all other objects that get passed in. The function is called like this for (var s in thisGroup.events){ showEvent(thisGroup.events[s]) } so, I don't see how or why it would suddenly pause near the beginning. but only pause once and then continue through. The pause ALWAYS happens on the 31st time through the function. I've taken a close look at the 'thisGroup.events[s]' that it is being run through, and it looks like this for #31 "eventId":"5106", "sid":"68", "gid":"29", "uid":"70","type":"event", "startDate":"2010-03-22","startTime":"6:00 PM","endDate":"2010-03-22","endTime":"11:00 PM","durationLength":"5", "durationTime":"5:00", "note":"", "desc":"event" The event immediately after the pause, #32 looks like this "eventId":"5111", "sid":"68", "gid":"29", "uid":"71","type":"event", "startDate":"2010-03-22","startTime":"6:00 PM","endDate":"2010-03-22","endTime":"11:00 PM","durationLength":"5", "durationTime":"5:00", "note":"", "desc":"event" another event that runs through no problem looks like this "eventId":"5113", "sid":"68", "gid":"29", "uid":"72","type":"event", "startDate":"2010-03-22","startTime":"4:30 PM","endDate":"2010-03-22","endTime":"11:00 PM","durationLength":"6.5", "durationTime":"6:30", "note":"", "desc":"event" From the console outputs, it doesn't appear as there is anything hanging or taking up time in the function itself, as the console.time for each event including #31,32 is 4ms. Another strange thing here is that the total time running the for loop across the entire object is coming out as 1014ms which is right for 200 events at 4-6ms each. Any suggestions on how to find this 'pause'? I find it very interesting that it is consistently happening between #31 & #32 only!

    Read the article

  • Threading Practice with Polling.

    - by Stacey
    I have a C# application that has to constantly read from a program; sometimes there is a chance it will not find what it needs, which will throw an exception. This is a limitation of the program it has to read from. This frequently causes the program to lock up as it tries to poll. So I solved it by spawning the 'polling' off into a separate thread. However watching the debugger, the thread is created and destroyed each time. I am uncertain if this is typical or not; but my question is, is this good practice, or am I using the threading for the wrong purpose? ProgramReader { static Thread oThread; public static void Read( Program program ) { // check to see if the program exists if ( false ) oThread = new ThreadStart(program.Poll); if(oThread != null || !oThread.IsAlive ) oThread.Start(); } } This is my general pseudocode. It runs every 10 seconds or so. Is this a huge hit to performance? The operation it performs is relatively small and lightweight; just repetitive.
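
    The pattern the pseudocode is reaching for is usually a single long-lived polling thread rather than a new thread per read. The question's code is C#, so this is only a sketch of that shape in Python, with program.poll() standing in for the real read:

        import threading

        class ProgramReader:
            """One long-lived polling thread instead of spawning a thread per read."""

            def __init__(self, program, interval=10.0):
                self.program = program
                self.interval = interval
                self._stop = threading.Event()
                self._thread = threading.Thread(target=self._run, daemon=True)

            def start(self):
                self._thread.start()

            def stop(self):
                self._stop.set()
                self._thread.join()

            def _run(self):
                while not self._stop.is_set():
                    try:
                        self.program.poll()         # stands in for the real read
                    except Exception:
                        pass                        # the source sometimes has nothing to give
                    self._stop.wait(self.interval)  # sleep ~10 s, but wake early on stop()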

    Read the article

  • randomize array using php

    - by Suneth Kalhara
    I need to randomise the order of the following array using PHP. I tried to use array shuffle and array_random but had no luck; can anyone help me please? Array ( [0] => Array ( [value] => 4 [label] => GasGas ) [1] => Array ( [value] => 3 [label] => Airoh Helmets ) [2] => Array ( [value] => 12 [label] => XCiting Trials Wear ) [3] => Array ( [value] => 11 [label] => Hebo Trials ) [4] => Array ( [value] => 10 [label] => Jitsie Products ) [5] => Array ( [value] => 9 [label] => Diadora Boots ) [6] => Array ( [value] => 8 [label] => S3 Performance ) [7] => Array ( [value] => 7 [label] => Scorpa ) [8] => Array ( [value] => 6 [label] => Inspired ) [9] => Array ( [value] => 5 [label] => Oset ) )
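
    For what it is worth, the operation being asked for is just an in-place shuffle of the outer list of records; sketched here in Python only to show the idea (the data is the question's array reduced to a few entries):

        import random

        brands = [
            {"value": 4, "label": "GasGas"},
            {"value": 3, "label": "Airoh Helmets"},
            {"value": 12, "label": "XCiting Trials Wear"},
            # ...remaining rows from the array above
        ]

        random.shuffle(brands)        # in-place shuffle of the outer list
        print(brands)

        # random.sample(brands, len(brands)) returns a shuffled copy instead,
        # if the original order still matters.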

    Read the article

  • SSIS - Bulk Update at Database Field Level

    - by Adam
    Hello, Here's our mission: Receive files from clients. Each file contains anywhere from 1 to 1,000,000 records. Records are loaded to a staging area and business-rule validation is applied. Valid records are then pumped into an OLTP database in a batch fashion, with the following rules: If record does not exist (we have a key, so this isn't an issue), create it. If record exists, optionally update each database field. The decision is made based on one of 3 factors...I don't believe it's important what those factors are. Our main problem is finding an efficient method of optionally updating the data at a field level. This is applicable across ~12 different database tables, with anywhere from 10 to 150 fields in each table (original DB design leaves much to be desired, but it is what it is). Our first attempt has been to introduce a table that mirrors the staging environment (1 field in staging for each system field) and contains a masking flag. The value of the masking flag represents the 3 factors. We've then put an UPDATE similar to... UPDATE OLTPTable1 SET Field1 = CASE WHEN Mask.Field1 = 0 THEN Staging.Field1 WHEN Mask.Field1 = 1 THEN COALESCE( Staging.Field1 , OLTPTable1.Field1 ) WHEN Mask.Field1 = 2 THEN COALESCE( OLTPTable1.Field1 , Staging.Field1 ) ... As you can imagine, the performance is rather horrendous. Has anyone tackled a similar requirement? We're a MS shop using a Windows Service to launch SSIS packages that handle the data processing. Unfortunately, we're pretty much novices at this stuff.
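
    Outside of the UPDATE itself, the per-field rule reduces to a small three-way merge; sketched below in Python, with the mask values taken from the CASE above (0 = always take staging, 1 = staging unless NULL, 2 = existing unless NULL; the real meaning of the three factors is assumed from that CASE):

        def merge_record(mask, staging, existing):
            """Field-level merge mirroring the CASE/COALESCE logic above."""
            merged = {}
            for field, rule in mask.items():
                new, old = staging.get(field), existing.get(field)
                if rule == 0:
                    merged[field] = new
                elif rule == 1:
                    merged[field] = new if new is not None else old
                else:  # rule == 2
                    merged[field] = old if old is not None else new
            return merged

        # Field2 keeps the OLTP value by rule; Field3 keeps it because staging is NULL.
        print(merge_record({"Field1": 0, "Field2": 2, "Field3": 1},
                           staging={"Field1": "a", "Field2": "b", "Field3": None},
                           existing={"Field1": "x", "Field2": "y", "Field3": "z"}))
        # {'Field1': 'a', 'Field2': 'y', 'Field3': 'z'}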

    Read the article

  • OpenGL equivalent of GDI's HatchBrush or PatternBrush?

    - by Ptah- Opener of the Mouth
    I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL. I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush. More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern, not translated or rotated or skewed or stretched, with the "on" bits of the pattern showing up in the specified color, and the "off" bits of the pattern left displaying whatever had been drawn under the area that I am now drawing on. Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"? I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern, based upon the vertices of the polygon? Whereas I want the pattern tiled. Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn? Thanks in advance for any help.
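
    Legacy (fixed-function) OpenGL does have a close analogue of a hatch brush: polygon stippling, a 32x32 one-bit mask applied in window coordinates, so it is not translated, rotated or stretched with the polygon, and the "off" bits simply leave whatever was underneath untouched. A PyOpenGL sketch of the calls involved, assuming a fixed-function context is already set up (the C calls have the same names):

        from OpenGL.GL import (GL_POLYGON_STIPPLE, GL_QUADS, glBegin, glColor3f,
                               glDisable, glEnable, glEnd, glPolygonStipple, glVertex2f)

        # 32x32 pattern packed as 128 bytes, 4 bytes per row; alternating 0xAA/0x55
        # rows give a one-pixel checkerboard.
        pattern = bytes([0xAA] * 4 + [0x55] * 4) * 16

        glEnable(GL_POLYGON_STIPPLE)
        glPolygonStipple(pattern)        # window-aligned, independent of the polygon's vertices
        glColor3f(1.0, 0.0, 0.0)         # "on" bits are drawn in this colour
        glBegin(GL_QUADS)
        glVertex2f(-0.5, -0.5)
        glVertex2f(0.5, -0.5)
        glVertex2f(0.5, 0.5)
        glVertex2f(-0.5, 0.5)
        glEnd()
        glDisable(GL_POLYGON_STIPPLE)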

    Read the article

  • Passing data between ViewControllers versus doing local Fetch in each VC

    - by Tofrizer
    Hi All, I'm developing an iPhone app using Core Data and I'm looking for some general advice and recommendations on whether its acceptable to pass data between ViewControllers versus doing a local fetch in each ViewController as you navigate to it. Ordinarily I would say it all depends on various factors (e.g. performance etc) but the passing data approach is so prevalent in my app and I'm spooked by all the stories about Apple rejecting apps because of not conforming to their standard guidelines. So let me put another way -- is it non-standard to pass data between VC's? The reason I pass data so much is because each ViewController is just another view on to data present in my object model / graph. Once I have a handle on my first object in the first view controller (which I of course do have to fetch), I can use the existing object composition / relationships to drill down into the next level of detail into data and so I just pass these objects to the next VC. Separately, one possible downside with this passing-data-to-each-VC approach is I don't benefit from (what I perceive to be) the optimisation/benefits that NSFetchedResultsController provides in terms of efficient memory usage and section handling. My app is read-only but I do have one table with 5000 rows and I'm curious if I am missing out on NSFetchedResultsController benefits. Any thoughts on this as well? Can I somehow still benefit from NSFetchedResultsController goodness without having to do a full fetch (as I would have already passed in the data from my previous VC)? Thanks a lot.

    Read the article

  • Practical rules for premature optimization

    - by DougW
    It seems that the phrase "Premature Optimization" is the buzz-word of the day. For some reason, iphone programmers in particular seem to think of avoiding premature optimization as a pro-active goal, rather than the natural result of simply avoiding distraction. The problem is, the term is beginning to be applied more and more to cases that are completely inappropriate. For example, I've seen a growing number of people say not to worry about the complexity of an algorithm, because that's premature optimization (eg http://stackoverflow.com/questions/2190275/help-sorting-an-nsarray-across-two-properties-with-nssortdescriptor/2191720#2191720). Frankly, I think this is just laziness, and appalling to disciplined computer science. But it has occurred to me that maybe considering the complexity and performance of algorithms is going the way of assembly loop unrolling, and other optimization techniques that are now considered unnecessary. What do you think? Are we at the point now where deciding between an O(n^n) and O(n!) complexity algorithm is irrelevant? What about O(n) vs O(n*n)? What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it? This is a bit vague, but I'm curious to hear other peoples' opinions on the topic.

    Read the article

  • DataReader-DataSet Hybrid solution

    - by G33kKahuna
    My solution architects and I have exhausted both pure DataSet and DataReader solutions. Basically we have a Microsoft .NET 2.0 Windows service application that pulls data based on a query and processes additional tasks per record; almost a poor man's workflow system. The recordsets are broad (in terms of columns) and deep (in terms of the number of records). We observed that DataSet performs much better in terms of performance but runs into constraints as the number of records increases; at say 100K+ we start seeing System.OutOfMemoryException on a 4G machine with processModel configured to run with memoryLimit set to 85. Since this is a multi-threaded app, there can be multiple threads processing different queries and building different DataSets, so we run into the exception sooner in that case. DataReader, on the other hand, works but is a lot slower and hits other constraints: if there is some sort of disconnect it has to start over again, or it leaves open connections on the DB side, and in the worst case it takes down the service completely. So, we decided the best option would be some sort of hybrid solution. I'm open to guidance and suggestions. Are there any hybrid solutions available? Any other suggestions?
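
    One common hybrid is to keep a forward-only reader open but pull rows in fixed-size batches, so memory stays bounded while each batch can still be processed as an in-memory set. The application in the question is .NET; the shape is easiest to show with Python's DB-API, where handle_batch() is a placeholder for the per-record workflow:

        def process_in_batches(connection, sql, params=(), batch_size=5000):
            """Stream a large result set in fixed-size chunks."""
            cursor = connection.cursor()
            try:
                cursor.execute(sql, params)
                while True:
                    rows = cursor.fetchmany(batch_size)
                    if not rows:
                        break
                    handle_batch(rows)        # placeholder for the per-batch processing step
            finally:
                cursor.close()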

    Read the article

  • MKL Accelerated Math Libraries for Java...

    - by Kaopua
    I've looked at the related threads on StackOverflow and Googled with not much luck. I'm also very new to Java (I'm coming from a C# and .NET background) so please bear with me. There is so much available in the Java world it's pretty overwhelming. I'm starting on a new Java-on-Linux project that requires some heavy and highly repetitious numerical calculations (i.e. statistics, FFT, Linear Algebra, Matrices, etc.). So maximizing the performance of the mathematical operations is a requirement, as is ensuring the math is correct. So hence I have an interest in finding a Java library that perhaps leverages native acceleration such as MKL, and is proven (so commercial options are definitely a possibility here). In the .NET space there are highly optimized and MKL accelerated commercial Mathematical libraries such as Centerspace NMath and Extreme Optimization. Is there anything comparable in Java? Most of the math libraries I have found for Java either do not seem to be actively maintained (such as Colt) or do not appear to leverage MKL or other native acceleration (such as Apache Commons Math). I have considered trying to leverage MKL directly from Java myself (e.g. JNI), but me being new to Java (let alone interoperating between Java and native libraries) it seemed smarter finding a Java library that has already done this correctly, efficiently, and is proven. Again I apologize if I am mistaken or misguided (even in regarding any libraries I've mentioned) and my ignorance of the Java offerings. It's a whole new world for me coming from the heavily commercialized Microsoft stock so I could easily be mistaken on where to look and regarding the Java libraries I've mentioned. I would greatly appreciate any help or advice.

    Read the article

  • Disposing underlying object from finalizer in an immutable object

    - by Juan Luis Soldi
    I'm trying to wrap around Awesomium and make it look to the rest of my code as close as possible to .NET's WebBrowser, since this is for an existing application that already uses the WebBrowser. In this library, there is a class called JSObject which represents a JavaScript object. You can get one of these, for instance, by calling the ExecuteJavascriptWithResult method of the WebView class. If you call it like myWebView.ExecuteJavascriptWithResult("document", string.Empty).ToObject(), then you get a JSObject that represents the document. I'm writing an immutable class (its only field is a readonly JSObject) called JSObjectWrap that wraps around JSObject, which I want to use as a base class for other classes that would emulate .NET classes such as HtmlElement and HtmlDocument. Now, these classes don't implement Dispose, but JSObject does. What I first thought was to call the underlying JSObject's Dispose method in JSObjectWrap's finalizer (instead of having JSObjectWrap implement Dispose) so that the rest of my code can stay the way it is (instead of having to add usings everywhere and make sure every JSObjectWrap is properly disposed). But I just realized that if two or more JSObjectWraps share the same underlying JSObject and one of them gets finalized, this will mess up the other JSObjectWraps. So now I'm thinking maybe I should keep a static Dictionary of JSObjects and count how many of each of them are being referenced by a JSObjectWrap, but this sounds messy and I think it could cause major performance issues. Since this sounds to me like a common pattern, I wonder if anyone else has a better idea.
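
    The "count how many wrappers still reference it" idea does not have to live in a static dictionary; a small shared handle that owns the JSObject and disposes it when the last wrapper lets go is usually enough. A language-neutral sketch in Python, where dispose() stands in for JSObject.Dispose():

        import threading

        class SharedHandle:
            """Owns one disposable resource on behalf of several wrappers."""

            def __init__(self, resource):
                self._resource = resource
                self._count = 1
                self._lock = threading.Lock()

            def retain(self):
                with self._lock:
                    self._count += 1
                return self

            def release(self):
                with self._lock:
                    self._count -= 1
                    if self._count == 0:
                        self._resource.dispose()   # stands in for JSObject.Dispose()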

    Read the article
