Search Results

Search found 8219 results on 329 pages for 'less'.

  • How to make efficient code emerge through unit testing

    - by Jean
    Hi, I participate in a TDD Coding Dojo, where we try to practice pure TDD on simple problems. It occurred to me, however, that the code which emerges from the unit tests isn't the most efficient. Now this is fine most of the time, but what if the code usage grows so that efficiency becomes a problem?

    I love the way the code emerges from unit testing, but is it possible to make the efficiency property emerge through further tests? Here is a trivial example in Ruby: prime factorization. I followed a pure TDD approach, making the tests pass one after the other, validating my original acceptance test (commented at the bottom). What further steps could I take if I wanted to make one of the generic prime factorization algorithms emerge? To reduce the problem domain, let's say I want to get a quadratic sieve implementation... Now in this precise case I know the "optimal" algorithm, but in most cases the client will simply add a requirement that the feature runs in less than "x" time for a given environment.

        require 'shoulda'
        require 'lib/prime'

        class MathTest < Test::Unit::TestCase
          context "The math module" do
            should "have a method to get primes" do
              assert Math.respond_to? 'primes'
            end
          end

          context "The primes method of Math" do
            should "return [] for 0" do
              assert_equal [], Math.primes(0)
            end
            should "return [1] for 1" do
              assert_equal [1], Math.primes(1)
            end
            should "return [1,2] for 2" do
              assert_equal [1,2], Math.primes(2)
            end
            should "return [1,3] for 3" do
              assert_equal [1,3], Math.primes(3)
            end
            should "return [1,2,2] for 4" do
              assert_equal [1,2,2], Math.primes(4)
            end
            should "return [1,5] for 5" do
              assert_equal [1,5], Math.primes(5)
            end
            should "return [1,2,3] for 6" do
              assert_equal [1,2,3], Math.primes(6)
            end
            should "return [1,3,3] for 9" do
              assert_equal [1,3,3], Math.primes(9)
            end
            should "return [1,2,5] for 10" do
              assert_equal [1,2,5], Math.primes(10)
            end
          end

          # context "Functional acceptance test 1" do
          #   context "the prime factors of 14101980 are 1,2,2,3,5,61,3853" do
          #     should "return [1,2,2,3,5,61,3853] for #{14101980*14101980}" do
          #       assert_equal [1,2,2,3,5,61,3853], Math.primes(14101980*14101980)
          #     end
          #   end
          # end
        end

    And the naive algorithm I created by this approach:

        module Math
          def self.primes(n)
            if n == 0
              return []
            else
              primes = [1]
              for i in 2..n do
                if n % i == 0
                  while n % i == 0
                    primes << i
                    n = n / i
                  end
                end
              end
              primes
            end
          end
        end
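    One direction for making such a requirement executable: the "runs in less than x time" clause can be written as a test of its own. A sketch in Python's unittest (hypothetical module name and time budget; the idea carries over to shoulda directly):

        import time
        import unittest

        from mymath import primes  # hypothetical module under test

        class PrimesPerformanceTest(unittest.TestCase):
            def test_large_factorization_within_budget(self):
                start = time.perf_counter()
                primes(14101980 * 14101980)
                elapsed = time.perf_counter() - start
                # the 100 ms budget is illustrative; pin it to the client's environment
                self.assertLess(elapsed, 0.1)

        if __name__ == "__main__":
            unittest.main()

    A test like this fails with the naive algorithm and starts passing once a faster factorization emerges, which keeps the efficiency property inside the red-green-refactor loop.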

  • CSS Child selectors in IE7 tables

    - by John
    I'm trying to use the CSS child selector in IE7, and it doesn't seem to work. I have nested tables. My outer table has a class name "mytable", and I want the td's of the outer table to show borders. I don't want the inner table td's to have borders. I think I should be able to have CSS that looks like this:

        .mytable { border-style: solid }
        .mytable>tr>td { border-style: solid }

    But the second line seems to have no effect. If I change the second line to make it less specific, it applies to all the td's - I see too many borders.

        td { border-style: solid }

    So I think it really is just an issue with the selectors. Pages like this suggest that IE7 should be able to do what I want. Am I doing something silly? Here's the whole HTML file:

        <html>
        <head>
        <style type="text/css">
        .mytable { border-style: solid; border-collapse: collapse;}
        td { border-style: solid; }
        </style>
        </head>
        <body>
        <table class="mytable">
          <tr>
            <td>Outer top-left</td>
            <td>Outer top-right</td>
          </tr>
          <tr>
            <td>Outer bottom-left</td>
            <td>
              <table>
                <tr>
                  <td>Inner top-left</td>
                  <td>Inner top-right</td>
                </tr>
                <tr>
                  <td>Inner bottom-left</td>
                  <td>Inner bottom-right</td>
                </tr>
              <table>
            </td>
          </tr>
        <table>
        </body>
        </html>

  • Help needed with AES between Java and Objective-C (iPhone)....

    - by Simon Lee
    I am encrypting a string in Objective-C and also encrypting the same string in Java using AES, and am seeing some strange issues. The first part of the result matches up to a certain point, but then it is different; hence when I go to decode the result from Java on the iPhone, it can't decrypt it. I am using a source string of "Now then and what is this nonsense all about. Do you know?" and a key of "1234567890123456".

    The Objective-C code to encrypt is the following (NOTE: it is an NSData category, so assume that the method is called on an NSData object, so 'self' contains the byte data to encrypt):

        - (NSData *)AESEncryptWithKey:(NSString *)key {
            char keyPtr[kCCKeySizeAES128+1]; // room for terminator (unused)
            bzero(keyPtr, sizeof(keyPtr));   // fill with zeroes (for padding)

            // fetch key data
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];

            NSUInteger dataLength = [self length];

            // See the doc: For block ciphers, the output size will always be less than or
            // equal to the input size plus the size of one block.
            // That's why we need to add the size of one block here
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void *buffer = malloc(bufferSize);

            size_t numBytesEncrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                                  keyPtr, kCCKeySizeAES128,
                                                  NULL /* initialization vector (optional) */,
                                                  [self bytes], dataLength, /* input */
                                                  buffer, bufferSize,       /* output */
                                                  &numBytesEncrypted);
            if (cryptStatus == kCCSuccess) {
                // the returned NSData takes ownership of the buffer and will free it on deallocation
                return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted];
            }

            free(buffer); // free the buffer
            return nil;
        }

    And the Java encryption code is...

        public byte[] encryptData(byte[] data, String key) {
            byte[] encrypted = null;
            Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider());
            byte[] keyBytes = key.getBytes();
            SecretKeySpec keySpec = new SecretKeySpec(keyBytes, "AES");
            try {
                Cipher cipher = Cipher.getInstance("AES/ECB/PKCS7Padding", "BC");
                cipher.init(Cipher.ENCRYPT_MODE, keySpec);
                encrypted = new byte[cipher.getOutputSize(data.length)];
                int ctLength = cipher.update(data, 0, data.length, encrypted, 0);
                ctLength += cipher.doFinal(encrypted, ctLength);
            } catch (Exception e) {
                logger.log(Level.SEVERE, e.getMessage());
            } finally {
                return encrypted;
            }
        }

    The hex output of the Objective-C code is:

        7a68ea36 8288c73d f7c45d8d 22432577 9693920a 4fae38b2 2e4bdcef 9aeb8afe
        69394f3e 1eb62fa7 74da2b5c 8d7b3c89 a295d306 f1f90349 6899ac34 63a6efa0

    and the Java output is:

        7a68ea36 8288c73d f7c45d8d 22432577 e66b32f9 772b6679 d7c0cb69 037b8740
        883f8211 748229f4 723984beb 50b5aea1 f17594c9 fad2d05e e0926805 572156d

    As you can see, everything is fine up to:

        7a68ea36 8288c73d f7c45d8d 22432577

    I am guessing I have some of the settings different but can't work out what. I tried changing between ECB and CBC on the Java side and it had no effect. Can anyone help!? Please....
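    For reference, the symptom here - identical first block, divergence afterwards - is exactly what appears when one side uses ECB and the other uses CBC with an all-zero IV, since the two modes agree on the first block when the IV is zero. A minimal sketch with recent versions of the pyca/cryptography package (assumed available) illustrating that effect:

        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        key = b"1234567890123456"
        plaintext = os.urandom(32)  # two 16-byte blocks, no padding needed

        ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        cbc = Cipher(algorithms.AES(key), modes.CBC(b"\x00" * 16)).encryptor()

        ct_ecb = ecb.update(plaintext) + ecb.finalize()
        ct_cbc = cbc.update(plaintext) + cbc.finalize()

        assert ct_ecb[:16] == ct_cbc[:16]   # first block matches: E(P1 XOR 0) == E(P1)
        assert ct_ecb[16:] != ct_cbc[16:]   # later blocks diverge, since CBC chains C1 in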

  • Really "wow" them in the interview

    - by Juliet
    Let me put it to you this way: I'm a top-notch programmer, but a notoriously bad interviewee. I've flunked 3 interviews consecutively because I get so nervous that my voice tightens at least 2 octaves higher and I start visibly shaking -- mind you, I can handle whatever technical questions the interviewer throws at me in that state, but I think it looks bad to come off as a quivering, squeaky-voiced young woman during a job interview. I've just got the personality type of a shy computer programmer. No matter how technical I am, I'm going to get passed up in favor of a smooth talker. I have another interview coming up shortly, and I want to really impress the company. Here are my trouble spots:

    1. What can I do to be less nervous during my interview? I always get really excited when I hear I have a face-to-face interview, but get more and more anxious as D-Day (the interview) approaches.
    2. My employer wants me to explain what I used to do at my prior employment. I'm a very chatty person and tend to talk/squeak for 10 minutes at a time. How long or short should I time my answers? On that note, when I'm explaining what I did at prior jobs, what exactly is my interviewer looking for?
    3. At some point, my interviewer will ask "do you have any questions for me while you're here?" I should, but what kinds of questions should I ask to show that I'm interested in being employed?
    4. My interviewer always asks why I'm looking for a new job. The real reason is that my present salary is $27K/yr [Edit to add: and I've yet to get a raise since I started], and I want to make more money -- otherwise the work environment is fine. How do I sugarcoat "I want to make more money" into something that sounds nicer?
    5. I have only one prior programmer job, and I've worked there for 18 months, but I have the skill of someone with 4 to 6 years of experience. What can I say to compete against applicants with more work experience?

    I took a low-paying $27K/yr programming job just to get my foot in IT, and I've been trying to leverage that job as a stepping stone to better opportunities. I get interviews because I consistently out-score senior-level developers in aptitude tests, and my desired salary range is right in the ballpark of what most companies want to offer. Unfortunately, while I've been programming as a hobby for 10 years and I'm geared to graduate with my BA in Comp Sci in May '09, employers see me as a junior-level programmer with no degree. I want to prove them wrong and get a job that matches my skill level. I'd appreciate any advice anyone has to offer, especially if they can help me get a better job in the process.

  • Implement Semi-Round-Robin file which can be expanded and saved on demand

    - by ircmaxell
    Ok, that title is going to be a little bit confusing. Let me try to explain it a little bit better. I am building a logging program. The program will have 3 main states:

    1. Write to a round-robin buffer file, keeping only the last 10 minutes of data.
    2. Write to a buffer file, ignoring the time (record all data).
    3. Rename the entire buffer file, and start a new one with the past 10 minutes of data (and change state to 1).

    Now, the use case is this. I have been experiencing some network bottlenecks from time to time in our network. So I want to build a system to record TCP traffic when it detects the bottleneck (detection via Nagios). However, by the time it detects the bottlenecking, most of the useful data has already been transmitted.

    So, what I'd like is to have a daemon that runs something like dumpcap all the time. In normal mode, it'll only keep the past 10 minutes of data (since there's no point in keeping a boatload of data if it's not needed). But when Nagios alerts, I will send a signal to the daemon to store everything. Then, when Nagios recovers, it will send another signal to stop storing and flush the buffer to a save file.

    Now, the problem is that I can't see how to cleanly store a rotating 10 minutes of data. I could store a new file every 10 minutes and delete the old ones if in mode 1. But that seems a bit dirty to me (especially when it comes to figuring out when the alert happened in the file). Ideally, the file that was saved should be such that the alert is always at the 10:00 mark in the file. While that is possible with new files every 10 minutes, it seems a bit dirty to "repair" the files to that point.

    Any ideas? Should I just do a rotating file system and combine them into 1 at the end (doing quite a bit of post-processing)? Is there a way to implement the semi-round-robin file cleanly so that there is no need for any post-processing? Thanks.

    Oh, and the language doesn't matter as much at this stage (I'm leaning towards Python, but have no objection to any other language. It's less of an issue than the overall design)...
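    For what it's worth, the three states map naturally onto a deque plus a flag; the open question in the post is doing the same on disk at packet-capture volumes. An in-memory sketch in Python (the window length, record format, and all names are illustrative):

        import time
        from collections import deque

        WINDOW = 10 * 60  # seconds to keep while in rolling mode

        class CaptureBuffer:
            def __init__(self):
                self.records = deque()  # (timestamp, data) pairs
                self.rolling = True     # state 1: keep only the last WINDOW seconds

            def add(self, data):
                now = time.time()
                self.records.append((now, data))
                if self.rolling:
                    # drop everything older than the window
                    while self.records and self.records[0][0] < now - WINDOW:
                        self.records.popleft()

            def start_recording(self):  # state 2: alert fired, keep everything
                self.rolling = False

            def flush(self, path):      # state 3: recovery, save and start over
                with open(path, "w") as f:
                    for ts, data in self.records:
                        f.write(f"{ts}\t{data}\n")
                self.records.clear()
                self.rolling = True

    Because records carry their own timestamps, the alert's position in the saved file falls out of the data rather than out of file boundaries, which sidesteps the "alert at the 10:00 mark" repair problem.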

  • agent-based simulation: performance issue: Python vs NetLogo & Repast

    - by max
    I'm replicating a small piece of the Sugarscape agent simulation model in Python 3. I found that the performance of my code is ~3 times slower than that of NetLogo. Is the problem likely with my code, or can it be an inherent limitation of Python? Obviously, this is just a fragment of the code, but that's where Python spends two-thirds of the run-time. I hope that if I wrote something really inefficient, it might show up in this fragment:

        UP = (0, -1)
        RIGHT = (1, 0)
        DOWN = (0, 1)
        LEFT = (-1, 0)
        all_directions = [UP, DOWN, RIGHT, LEFT]

        # point is just a tuple (x, y)
        def look_around(self):
            max_sugar_point = self.point
            max_sugar = self.world.sugar_map[self.point].level
            min_range = 0
            random.shuffle(self.all_directions)
            for r in range(1, self.vision+1):
                for d in self.all_directions:
                    p = ((self.point[0] + r * d[0]) % self.world.surface.length,
                         (self.point[1] + r * d[1]) % self.world.surface.height)
                    if self.world.occupied(p):  # checks if p is in a lookup table (dict)
                        continue
                    if self.world.sugar_map[p].level > max_sugar:
                        max_sugar = self.world.sugar_map[p].level
                        max_sugar_point = p
            if max_sugar_point is not self.point:
                self.move(max_sugar_point)

    Roughly equivalent code in NetLogo (this fragment does a bit more than the Python function above):

        ; -- The SugarScape growth and motion procedures. --
        to M  ; Motion rule (page 25)
          locals [ps p v d]
          set ps (patches at-points neighborhood) with [count turtles-here = 0]
          if (count ps > 0) [
            set v psugar-of max-one-of ps [psugar]            ; v is max sugar w/in vision
            set ps ps with [psugar = v]                       ; ps is legal sites w/ v sugar
            set d distance min-one-of ps [distance myself]    ; d is min dist from me to ps agents
            set p random-one-of ps with [distance myself = d] ; p is one of the min dist patches
            if (psugar >= v and includeMyPatch?) [set p patch-here]
            setxy pxcor-of p pycor-of p                       ; jump to p
            set sugar sugar + psugar-of p                     ; consume its sugar
            ask p [setpsugar 0]                               ; .. setting its sugar to 0
          ]
          set sugar sugar - metabolism                        ; eat sugar (metabolism)
          set age age + 1
        end

    On my computer, the Python code takes 15.5 sec to run 1000 steps; on the same laptop, the NetLogo simulation running in Java inside the browser finishes 1000 steps in less than 6 sec.

    EDIT: Just checked Repast, using the Java implementation. It's also about the same as NetLogo, at 5.4 sec. Recent comparisons between Java and Python suggest no advantage to Java, so I guess it's just my code that's to blame?

    EDIT: I understand MASON is supposed to be even faster than Repast, and yet it still runs Java in the end.
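    Before blaming the interpreter, one cheap experiment is hoisting the repeated attribute and dictionary lookups out of the hot loop; CPython re-evaluates chains like self.world.sugar_map on every iteration. A sketch of the same method with the lookups bound to locals once per call (same assumed attributes, behavior unchanged, untested):

        import random

        def look_around(self):
            # bind hot attributes to locals once per call
            sugar_map = self.world.sugar_map
            occupied = self.world.occupied
            length = self.world.surface.length
            height = self.world.surface.height
            x, y = self.point
            best_point = self.point
            best_sugar = sugar_map[self.point].level
            random.shuffle(self.all_directions)
            for r in range(1, self.vision + 1):
                for dx, dy in self.all_directions:
                    p = ((x + r * dx) % length, (y + r * dy) % height)
                    if occupied(p):
                        continue
                    level = sugar_map[p].level
                    if level > best_sugar:
                        best_sugar = level
                        best_point = p
            if best_point is not self.point:
                self.move(best_point)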

  • Correlated SQL Join Query from multiple tables

    - by SooDesuNe
    I have two tables like the ones below. I need to find what exchangeRate was in effect at the dateOfPurchase. I've tried some correlated subqueries, but I'm having difficulty getting the correlated record to be used in the subqueries. I expect a solution will need to follow this basic outline:

    1. SELECT only the exchangeRates for the applicable countryCode
    2. From 1., SELECT the newest exchangeRate less than the dateOfPurchase
    3. Fill in the query table with all the fields from 2. and the purchasesTable.

    My tables:

    purchasesTable:

        dateOfPurchase | costOfPurchase | countryOfPurchase
        29-March-2010  | 20.00          | EUR
        29-March-2010  | 3000           | JPN
        30-March-2010  | 50.00          | EUR
        30-March-2010  | 3000           | JPN
        30-March-2010  | 2000           | JPN
        31-March-2010  | 100.00         | EUR
        31-March-2010  | 125.00         | EUR
        31-March-2010  | 2000           | JPN
        31-March-2010  | 2400           | JPN

    costOfPurchase is in whatever the local currency is for a given countryCode.

    exchangeRateTable:

        effectiveDate  | countryCode | exchangeRate
        29-March-2010  | JPN         | 90
        29-March-2010  | EUR         | 1.75
        30-March-2010  | JPN         | 92
        31-March-2010  | JPN         | 91

    The results of the query that I'm looking for:

        dateOfPurchase | costOfPurchase | countryOfPurchase | exchangeRate
        29-March-2010  | 20.00          | EUR               | 1.75
        29-March-2010  | 3000           | JPN               | 90
        30-March-2010  | 50.00          | EUR               | 1.75
        30-March-2010  | 3000           | JPN               | 92
        30-March-2010  | 2000           | JPN               | 92
        31-March-2010  | 100.00         | EUR               | 1.75
        31-March-2010  | 125.00         | EUR               | 1.75
        31-March-2010  | 2000           | JPN               | 91
        31-March-2010  | 2400           | JPN               | 91

    So, for example, in the results the exchange rate in effect for EUR on 31-March was 1.75. I'm using Access, but a MySQL answer would be fine too.

    UPDATE: Modification to Allan's answer:

        SELECT dateOfPurchase, costOfPurchase, countryOfPurchase, exchangeRate
        FROM purchasesTable p
        LEFT OUTER JOIN
            (SELECT e1.exchangeRate, e1.countryCode, e1.effectiveDate, min(e2.effectiveDate) AS enddate
             FROM exchangeRateTable e1
             LEFT OUTER JOIN exchangeRateTable e2
               ON e1.effectiveDate < e2.effectiveDate AND e1.countryCode = e2.countryCode
             GROUP BY e1.exchangeRate, e1.countryCode, e1.effectiveDate) e
          ON p.dateOfPurchase >= e.effectiveDate
         AND (p.dateOfPurchase < e.enddate OR e.enddate IS NULL)
         AND p.countryOfPurchase = e.countryCode

    I had to make a couple small changes.

  • Most efficient method to query a Young Tableau

    - by Matthieu M.
    A Young Tableau is a 2D matrix A of dimensions M*N such that, for all i,j in [0,M)x[0,N):

        for each p in (i,M), A[i,j] <= A[p,j]
        for each q in (j,N), A[i,j] <= A[i,q]

    That is, it's sorted row-wise and column-wise. Since it may contain less than M*N numbers, the bottom-right values might be represented either as missing or using (in algorithm theory) infinity to denote their absence.

    Now the (elementary) question: how to check if a given number is contained in the Young Tableau? Well, it's trivial to produce an algorithm in O(M*N) time of course, but what's interesting is that it is very easy to provide an algorithm in O(M+N) time:

    Bottom-left search:

    1. Let x be the number we look for; initialize i,j as M-1, 0 (bottom-left corner)
    2. If x == A[i,j], return true
    3. If x < A[i,j]: if i is 0, return false, else decrement i and go to 2.
    4. Else: if j is N-1, return false, else increment j and go to 2.

    This algorithm does not make more than M+N moves. The correctness is left as an exercise.

    It is possible though to obtain a better asymptotic runtime.

    Pivot search:

    1. Let x be the number we look for; initialize i,j as floor(M/2), floor(N/2)
    2. If x == A[i,j], return true
    3. If x < A[i,j], search (recursively) in A[0:i-1, 0:j-1], A[i:M-1, 0:j-1] and A[0:i-1, j:N-1]
    4. Else, search (recursively) in A[i+1:M-1, 0:j], A[i+1:M-1, j+1:N-1] and A[0:i, j+1:N-1]

    This algorithm proceeds by discarding one of the 4 quadrants at each iteration and recursing on the 3 left (divide and conquer); the master theorem yields a complexity of O((N+M)**(log 3 / log 4)), which is better asymptotically. However, this is only a big-O estimation...

    So, here are the questions:

    1. Do you know (or can you think of) an algorithm with a better asymptotic runtime?
    2. As introsort proved, sometimes it's worth switching algorithms depending on the input size or input topology... do you think it would be possible here? For this, I am notably thinking that for small-size inputs, the bottom-left search should be faster because of its O(1) space requirement / lower constant term.
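    A direct transcription of the bottom-left search in Python, assuming the tableau is a list of equal-length rows with missing entries padded with float('inf'):

        def contains(a, x):
            """Saddleback search in a Young tableau: at most M+N moves."""
            if not a or not a[0]:
                return False
            i, j = len(a) - 1, 0          # start at the bottom-left corner
            while i >= 0 and j < len(a[0]):
                if a[i][j] == x:
                    return True
                if x < a[i][j]:
                    i -= 1  # row i is ruled out: entries to its right are >= a[i][j] > x
                else:
                    j += 1  # column j is ruled out: entries above are <= a[i][j] < x
            return False

    Each iteration eliminates either a full row or a full column, which is where the M+N bound comes from.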

  • SSIS DTS Package flat file error - "The file name specified in the connection was not valid"

    - by MisterZimbu
    I have a pretty basic SSIS package that is attempting to read a file hosted on a share and import its contents to a database table. The package runs fine when I run it manually within SSIS. However, when I set up a SQL Agent job and attempt to execute it, I get the following error:

        Executed as user: DOMAIN\UserName. Microsoft (R) SQL Server Execute Package Utility Version 9.00.3042.00 for 64-bit
        Copyright (C) Microsoft Corp 1984-2005. All rights reserved.
        Started: 10:14:17 AM
        Error: 2010-05-03 10:14:17.75
          Code: 0xC001401E
          Source: DataImport Connection manager "Data File Local"
          Description: The file name "\\10.1.1.159\llpf\datafile.dat" specified in the connection was not valid.
        End Error
        Error: 2010-05-03 10:14:17.75
          Code: 0xC001401D
          Source: DataAnimalImport
          Description: Connection "Data File Local" failed validation.
        End Error
        DTExec: The package execution returned DTSER_FAILURE (1).
        Started: 10:14:17 AM
        Finished: 10:14:17 AM
        Elapsed: 0.594 seconds.
        The package execution failed. The step failed.

    This leads me to believe it's a permissions issue, but every attempt I've made to fix it has failed. What I've tried so far:

    1. Run as the SQL Agent account (DOMAIN\SqlAgent) - yields the same error. DOMAIN\SqlAgent has "Full Control" permissions on both the share and the uploaded file.
    2. Set up a proxy account with a different account's credentials (DOMAIN\Account) - yields the same error. Like above, "Full Control" permissions were given over the share to that account.
    3. Gave "Everyone" full control permissions over the share (temporarily!). Yielded the same error.
    4. Manually copied the file to a local path and tested with the SQL Agent account. Worked properly.
    5. Added an ActiveX script task that would first copy the remotely hosted file to a local path, and then have the DTS package reference the local file. Gave a completely nondescriptive (even by SSIS standards) error when trying to run the script.
    6. Set up a proxy account using my own personal account's credentials - worked correctly. However, this is not an acceptable solution, as there are password policies in place on my account, as well as it being bad practice to set things up this way in general.

    Any ideas? I'm still convinced it's a permissions issue. However, what I've read from various searches more or less says giving the executing account permissions on the share should work. That is not the case here (unless I'm missing something obscure when I'm setting up permissions on the share).

  • Perl regex matching output from `w -hs` command

    - by Bushman
    I'm trying to write a Perl script that will work better with KDE's kwrited, which, as far as I can tell, is connected to a pts and puts every line it receives through the KDE system tray notifications, with the title "KDE write daemon". Unfortunately, it makes a separate notification for each and every line, so it spams up the system tray with multiline messages on regular old write, and for some reason it cuts off the entire last line of the message when using wall (one-line messages are also goners). I was also hoping to make it so that it could broadcast across a LAN with thick clients. Before starting on that (which would require ssh, of course), I tried to make an ssh-less version to make sure it works. Unfortunately, it doesn't.

        perl ./write.pl "Testing 1 2 3"

    where the following is the contents of ./write.pl:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $message = "";
        my $device = "";
        my $possibledevice = '`w -hs | grep "/usr/bin/kwrited"`'; # Where is kwrited?
        $possibledevice =~ s/^[^\t][\t]//;
        $possibledevice =~ s/[\t][^\t][\t ]\/usr\/bin\/kwrited$//;
        $possibledevice = '/dev/'.$possibledevice;

        unless ($possibledevice eq "") {
            $device = $possibledevice;
        }

        if ($ARGV[0] ne "") {
            $message = $ARGV[0];
            $device = $ARGV[1];
        } else {
            $device = $ARGV[0] unless $ARGV[0] eq "";
            while (<STDIN>) {
                chomp;
                $message .= <STDIN>;
            }
        }

        if ($message ne "") {
            system "echo \'$message\' > $device";
        } else {
            print "Error: empty message"
        }

    This produces the following error:

        $ perl write.pl "Testing 1 2 3"
        Use of uninitialized value $device in concatenation (.) or string at write.pl line 29.
        sh: -c: line 0: syntax error near unexpected token `newline'
        sh: -c: line 0: `echo 'foo' > '

    Somehow, the regular expressions and/or the backtick escape in processing $possibledevice are not working properly, because where kwrited is connected to /dev/pts/0, the following works perfectly:

        $ perl write.pl "Testing 1 2 3" /dev/pts/0
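    For comparison, the lookup the script is attempting - run `w -hs`, find the kwrited line, and take its tty column - could be sketched in Python as follows (the column layout of `w -hs` output and the exact process name are assumptions about the environment):

        import subprocess

        def find_kwrited_tty():
            out = subprocess.run(["w", "-hs"], capture_output=True, text=True).stdout
            for line in out.splitlines():
                fields = line.split()
                # assumes the WHAT column is exactly the kwrited binary path
                if fields and fields[-1] == "/usr/bin/kwrited":
                    return "/dev/" + fields[1]  # assumes the tty is the second column
            return None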

  • Need help with Xpath methods in javascript (selectSingleNode, selectNodes)

    - by Andrija
    I want to optimize my JavaScript but I ran into a bit of trouble. I'm using XSLT transformation, and the basic idea is to get a part of the XML and subquery it so the calls are faster and less expensive. This is a part of the XML:

        <suite>
          <table id="spis" runat="client">
            <rows>
              <row id="spis_1">
                <dispatch>'2008', '288627'</dispatch>
                <data col="urGod">
                  <title>2008</title>
                  <description>Ur. god.</description>
                </data>
                <data col="rbr">
                  <title>288627</title>
                  <description>Rbr.</description>
                </data>
                ...
            </rows>
          </table>
        </suite>

    In the page, this is the JavaScript that works with this:

        // this is my global variable for getting the elements, so I just get the
        // most I can in one call
        elemCollection = iDom3.Table.all["spis"].XML.DOM.selectNodes("/suite/table/rows/row").context;

        // then I have the method that uses this by getting the subresults from
        // elemCollection; the rest of the method isn't interesting, only the
        // selectNodes call
        _buildResults = function () {
            var _RowList = elemCollection.selectNodes("/data[@col = 'urGod']/title");
            var tmpResult = [''];
            var substringResult = "";
            for (i = 0; i < _RowList.length; i++) {
                tmpResult.push(_RowList[i].text, iDom3.Global.Delimiter);
            }
            ...

        // this variant works
        elemCollection = iDom3.Table.all["spis"].XML.DOM

        _buildResults = function () {
            var _RowList = elemCollection.selectNodes("/suite/table/rows/row/data[@col = 'urGod']/title");
            var tmpResult = [''];
            var substringResult = "";
            for (i = 0; i < _RowList.length; i++) {
                tmpResult.push(_RowList[i].text, iDom3.Global.Delimiter);
            }
            ...

    The problem is, I can't find a way to use the subresults to get what I need.
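    The leading slash is worth a close look here: in XPath, a path beginning with "/" is evaluated from the document root even when selectNodes is called on a context node, so "/data[...]" matches nothing underneath a row; a relative path like "data[...]" is evaluated against the context node. The same semantics can be demonstrated in Python with lxml (illustrative, not the MSXML API the page actually uses):

        from lxml import etree

        xml = """<suite><table><rows>
          <row><data col='urGod'><title>2008</title></data></row>
        </rows></table></suite>"""

        row = etree.fromstring(xml).find("table/rows/row")

        # Absolute path: evaluated from the document root, not from `row`
        print(row.xpath("/data[@col='urGod']/title"))   # [] - the root is <suite>
        # Relative path: evaluated against the context node
        print(row.xpath("data[@col='urGod']/title"))    # [<Element title ...>]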

  • Java Hardware Acceleration

    - by Freezerburn
    I have been spending some time looking into the hardware acceleration features of Java, and I am still a bit confused, as none of the sites that I found online directly and clearly answered some of the questions I have. So here are the questions I have about hardware acceleration in Java:

    1) In Eclipse version 3.6.0, with the most recent Java update for Mac OS X (1.6u10 I think), is hardware acceleration enabled by default? I read somewhere that

        someCanvas.getGraphicsConfiguration().getBufferCapabilities().isPageFlipping()

    is supposed to give an indication of whether or not hardware acceleration is enabled, and my program reports back true when that is run on my main Canvas instance for drawing to. If my hardware acceleration is not enabled now, or by default, what would I have to do to enable it?

    2) I have seen a couple articles here and there about the difference between a BufferedImage and VolatileImage, mainly saying that VolatileImage is the hardware-accelerated image and is stored in VRAM for fast copy-from operations. However, I have also found some instances where BufferedImage is said to be hardware accelerated as well. Is BufferedImage hardware accelerated as well in my environment? What would be the advantage of using a VolatileImage if both types are hardware accelerated? My main assumption for the advantage of having a VolatileImage, in the case of both having acceleration, is that VolatileImage is able to detect when its VRAM has been dumped. But if BufferedImage also supports acceleration now, would it not have the same kind of detection built into it as well, just hidden from the user, in case the memory is dumped?

    3) Is there any advantage to using someGraphicsConfiguration.getCompatibleImage/getCompatibleVolatileImage() as opposed to ImageIO.read()? In a tutorial I have been reading for some general concepts about setting up the rendering window properly (tutorial), it uses the getCompatibleImage method, which I believe returns a BufferedImage, to get their "hardware accelerated" images for fast drawing, which ties into question 2 about whether it is hardware accelerated.

    4) This is less about hardware acceleration, but it is something I have been curious about: do I need to order which graphics get drawn? I know that when using OpenGL via C/C++ it is best to make sure that the same graphic is drawn in all the locations it needs to be drawn at once, to reduce the number of times the current texture needs to be switched. From what I have read, it seems as if Java will take care of this for me and make sure things are drawn in the most optimal fashion, but again, nothing has ever said anything like this clearly.

    5) What AWT/Swing classes support hardware acceleration, and which ones should be used? I am currently using a class that extends JFrame to create a window, and adding a Canvas to it from which I create a BufferStrategy. Is this good practice, or is there some other way I should be implementing this?

    Thank you very much for your time, and I hope I provided clear questions and enough information for you to answer my several questions.

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third-party web sites. This is its structure:

        id, site_id, unixtime, unixtime_last, ip_address, uid

    There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid.

    There are many different ways that we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as to determine if this is a new visitor or a returning visitor.

    Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient?

    I don't really understand B-trees besides some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? Because of this I considered having the site_id be the second column of the index for both ip_address and uid, but I think that would make the index less efficient, since the IP and UID are going to vary more than the site ID will: we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis.

    I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine if this is a new visitor to this site_id or not. The query would be something like:

        select id from sessions where uid = 'value' and site_id = 123 limit 1

    ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't necessarily be super fast, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID.

    Any insight on making this as efficient as possible would be appreciated :)

    Update: this is a MyISAM table with MySQL 5.0. My concerns are with both performance and storage space. This table is both read- and write-heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse to not care about the database design. I want the database to be as efficient as possible.

  • Am I "wasting" my time learning C and other low level stuff ?

    - by Andreas Grech
    I have just recently started learning C, and the reason I did that was because, frankly, I consider myself to be "less of a developer" than the people who know and work with C. Thus I planned to start learning ASM, C, and C++, bought the K&R book, and started pushing myself to learn the C programming language. Up till now I'm doing great - learning about arrays the low-level way (i.e. the pointer + offset thing), pointers and all that, and obviously asking questions on Stack Overflow for guidance.

    My problem is that sometimes I get thinking: instead of learning this low-level stuff, maybe I should spend more time learning newer, more widely used technologies... basically, more web stuff. Now I am well versed in both C# and ASP.NET, and currently that's what I do for a living, but still there exist Microsoft technologies that I haven't quite touched upon... such as ASP.NET MVC, the Entity Framework, etc.

    And those are only Microsoft technologies... obviously there is other stuff that I would like to touch upon... stuff like Ruby, which would lead me to Ruby on Rails, or Python for Django, or even Java and J2EE, or maybe even PHP; i.e., basically, mainly web stuff. Mind you, I did touch upon some of the stuff I mentioned earlier on, such as PHP and Java, but I am still not as versed in them as I am in C# and ASP.NET... but still, I think that learning other languages that are used in the web environment will broaden my horizons - both as a developer who loves learning, and also career-wise.

    My point is: am I really using my time correctly by learning older, lower-level stuff? Stuff that for my current line of work I will most probably never use, but which is still interesting to know? To be frankly honest, I am also learning C so that I could, maybe someday, get into electronics and microcontroller programming, but that is a whole new world for me and, if I choose to go there, will take some time to get adjusted to. And even then, I don't know if I can get a career working in that line of work.

    But I still wonder about this question over and over... am I doing the right thing by learning C instead of something (web stuff) that will most probably be more useful for me career-wise?

    I'm sorry for asking such a long and most probably boring question, but I feel as if this is the only place where I can ask such a question and get an honest answer from experts in the field. Thank you for your time.

  • Resizing an image using mouse dragging (C#)

    - by Gaax
    Hi all. I'm having some trouble resizing an image just by dragging the mouse. I found an average resize method and now am trying to modify it to use the mouse instead of given values. The way I'm doing it makes sense to me, but maybe you guys can give me some better ideas.

    I'm basically using the distance between the current location of the mouse and the previous location of the mouse as the scaling factor. If the distance between the current mouse location and the center of the image is less than the distance between the previous mouse location and the center of the image, then the image gets smaller, and vice-versa.

    With the code below I'm getting an ArgumentException (invalid parameter) when creating the new bitmap with the new height and width, and I really don't understand why... any ideas?

        private static Image resizeImage(Image imgToResize, System.Drawing.Point prevMouseLoc, System.Drawing.Point currentMouseLoc)
        {
            int sourceWidth = imgToResize.Width;
            int sourceHeight = imgToResize.Height;
            float dCurrCent = 0; // Distance between current mouse location and the center of the image
            float dPrevCent = 0; // Distance between previous mouse location and the center of the image
            float dCurrPrev = 0; // Distance between current mouse location and the previous mouse location
            int sign = 1;
            System.Drawing.Point imgCenter = new System.Drawing.Point();
            float nPercent = 0;

            imgCenter.X = imgToResize.Width / 2;
            imgCenter.Y = imgToResize.Height / 2;

            // Calculating the distance between the current mouse location and the center of the image
            dCurrCent = (float)Math.Sqrt(Math.Pow(currentMouseLoc.X - imgCenter.X, 2) + Math.Pow(currentMouseLoc.Y - imgCenter.Y, 2));

            // Calculating the distance between the previous mouse location and the center of the image
            dPrevCent = (float)Math.Sqrt(Math.Pow(prevMouseLoc.X - imgCenter.X, 2) + Math.Pow(prevMouseLoc.Y - imgCenter.Y, 2));

            // Calculating the sign value
            if (dCurrCent >= dPrevCent)
            {
                sign = 1;
            }
            else
            {
                sign = -1;
            }

            nPercent = sign * (float)Math.Sqrt(Math.Pow(currentMouseLoc.X - prevMouseLoc.X, 2) + Math.Pow(currentMouseLoc.Y - prevMouseLoc.Y, 2));

            int destWidth = (int)(sourceWidth * nPercent);
            int destHeight = (int)(sourceHeight * nPercent);

            Bitmap b = new Bitmap(destWidth, destHeight); // exception thrown here
            Graphics g = Graphics.FromImage((Image)b);
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;

            g.DrawImage(imgToResize, 0, 0, destWidth, destHeight);
            g.Dispose();

            return (Image)b;
        }
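    One thing the arithmetic above illustrates: nPercent ends up being a raw pixel distance rather than a ratio around 1.0, and with sign == -1 it goes negative, so destWidth/destHeight can easily be zero or negative - values the Bitmap constructor rejects with exactly this kind of ArgumentException. A geometry-only sketch of a ratio-based factor in Python 3.8+ (names illustrative):

        import math

        def scale_factor(center, prev_pt, curr_pt):
            # Ratio of the two distances to the image center:
            # > 1.0 when the drag moves away from the center (grow),
            # < 1.0 when it moves toward the center (shrink).
            d_prev = math.dist(prev_pt, center)
            d_curr = math.dist(curr_pt, center)
            if d_prev == 0:
                return 1.0  # avoid dividing by zero when starting at the center
            return d_curr / d_prev

        # e.g. scale_factor((50, 50), (60, 50), (70, 50)) == 2.0

    Multiplying the source dimensions by a factor like this (and clamping the result to a minimum of 1 pixel) keeps the new width and height strictly positive.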

  • Using Hibernate to do a query involving two tables

    - by Nathan Spears
    I'm inexperienced with SQL in general, so using Hibernate is like looking for an answer before I know exactly what the question is. Please feel free to correct any misunderstandings I have.

    I am on a project where I have to use Hibernate. Most of what I am doing is pretty basic, and I could copy and modify. Now I would like to do something different, and I'm not sure how configuration and syntax need to come together.

    Let's say I have two tables. Table A has two (relevant) columns: user GUID and manager GUID. Obviously managers can have more than one user under them, so queries on manager can return more than one row. Additionally, a manager can be managing the same user on multiple projects, so the same user can be returned multiple times for the same manager query. Table B has two columns: user GUID and user full name. One-to-one mapping there.

    I want to do a query on manager GUID from Table A, group the results by unique user GUID (so the same user isn't in the results twice), then return those users' full names from Table B. I could do this in SQL without too much trouble, but I want to use Hibernate so I don't have to parse the SQL results by hand. That's one of the points of using Hibernate, isn't it?

    Right now I have Hibernate mappings that map each column in Table A to a field (well, the get/set methods I guess) in a DAO object that I wrote just to hold that table's data. I could also use the Hibernate DAOs I have to access each table separately and do each of the things I mentioned above in separate steps, but that would be less efficient (I assume) than doing one query.

    I wrote a Service object to hold the data that gets returned from the query (my example is simplified - I'm going to keep some other data from Table A and get multiple columns from Table B), but I'm at a loss for how to write a DAO that can do the join, or how to use the DAOs I have to do the join.

    FYI, here is a sample of my Hibernate config file (simplified to match my example):

        <hibernate-mapping package="com.my.dao">
          <class name="TableA" table="table_a">
            <id name="pkIndex" column="pk_index" />
            <property name="userGuid" column="user_guid" />
            <property name="managerGuid" column="manager_guid" />
          </class>
        </hibernate-mapping>

    So then I have a DAO implementation class that does queries and returns lists, like:

        public List<TableA> findByHQL(String hql, Map<String, String> params)

    etc. I'm not sure how "best practice" that is either.

  • How to use objects as modules/functors in Scala?

    - by Jeff
    Hi. I want to use object instances as modules/functors, more or less as shown below:

        abstract class Lattice[E] extends Set[E] {
          val minimum: E
          val maximum: E
          def meet(x: E, y: E): E
          def join(x: E, y: E): E
          def neg(x: E): E
        }

        class Calculus[E](val lat: Lattice[E]) {
          abstract class Expr
          case class Var(name: String) extends Expr {...}
          case class Val(value: E) extends Expr {...}
          case class Neg(e1: Expr) extends Expr {...}
          case class Cnj(e1: Expr, e2: Expr) extends Expr {...}
          case class Dsj(e1: Expr, e2: Expr) extends Expr {...}
        }

    This way I can create a different calculus instance for each lattice (the operations I will perform need the information about which are the maximum and minimum values of the lattice). I want to be able to mix expressions of the same calculus but not be allowed to mix expressions of different ones. So far, so good. I can create my calculus instances, but the problem is that I cannot write functions in other classes that manipulate them.

    For example, I am trying to create a parser to read expressions from a file and return them; I also was trying to write a random expression generator to use in my tests with ScalaCheck. It turns out that every time a function generates an Expr object, I can't use it outside the function. Even if I create the Calculus instance and pass it as an argument to the function that will in turn generate the Expr objects, the return of the function is not recognized as being of the same type as the objects created outside the function.

    Maybe my English is not clear enough; let me try a toy example of what I would like to do (not the real ScalaCheck generator, but close enough):

        def genRndExpr[E](c: Calculus[E], level: Int): Calculus[E]#Expr = {
          if (level > MAX_LEVEL) {
            val select = util.Random.nextInt(2)
            select match {
              case 0 => genRndVar(c)
              case 1 => genRndVal(c)
            }
          } else {
            val select = util.Random.nextInt(3)
            select match {
              case 0 => new c.Neg(genRndExpr(c, level+1))
              case 1 => new c.Dsj(genRndExpr(c, level+1), genRndExpr(c, level+1))
              case 2 => new c.Cnj(genRndExpr(c, level+1), genRndExpr(c, level+1))
            }
          }
        }

    Now, if I try to compile the above code I get lots of:

        error: type mismatch;
         found   : plg.mvfml.Calculus[E]#Expr
         required: c.Expr
          case 0 => new c.Neg(genRndExpr(c, level+1))

    And the same happens if I try to do something like:

        val boolCalc = new Calculus(Bool)
        val e1: boolCalc.Expr = genRndExpr(boolCalc)

    Please note that the generator itself is not of concern, but I will need to do similar things (i.e. create and manipulate calculus instance expressions) a lot in the rest of the system. Am I doing something wrong? Is it possible to do what I want to do? Help on this matter is highly needed and appreciated. Thanks a lot in advance.

  • Register all GUI components as Observers or pass current object to next object as a constructor argument?

    - by Jack
    First, I'd like to say that I think this is a common issue and there may be a simple or common solution that I am unaware of. Many have probably encountered a similar problem. Thanks for reading.

    I am creating a GUI where each component needs to communicate with (or at least be updated by) multiple other components. Currently, I'm using a singleton class to accomplish this goal. Each GUI component gets the instance of the singleton and registers itself. When updates need to be made, the singleton can call public methods in the registered class. I think this is similar to an Observer pattern, but the singleton has more control. Currently, the program is set up something like this:

        class c1 {
            CommClass cc;
            c1() {
                cc = CommClass.getCommClass();
                cc.registerC1( this );
                C2 c2 = new c2();
            }
        }

        class c2 {
            CommClass cc;
            c2() {
                cc = CommClass.getCommClass();
                cc.registerC2( this );
                C3 c3 = new c3();
            }
        }

        class c3 {
            CommClass cc;
            c3() {
                cc = CommClass.getCommClass();
                cc.registerC3( this );
                C4 c4 = new c4();
            }
        }

        etc.

    Unfortunately, the singleton class keeps growing larger as more communication is required between the components. I was wondering if it's a good idea, instead of using this singleton, to pass the higher-order GUI components as arguments in the constructors of each GUI component:

        class c1 {
            c1() {
                C2 c2 = new c2( this );
            }
        }

        class c2 {
            C1 c1;
            c2( C1 c1 ) {
                this.c1 = c1;
                C3 c3 = new c3( c1, this );
            }
        }

        class c3 {
            C1 c1;
            C2 c2;
            c3( C1 c1, C2 c2 ) {
                this.c1 = c1;
                this.c2 = c2;
                C4 c4 = new c4( c1, c2, this );
            }
        }

        etc.

    The second version relies less on CommClass, but it's still very messy, as the private member variables increase in number and the constructors grow in length. Each class contains GUI components that need to communicate through CommClass, but I can't think of a good way to do it. If this seems strange or horribly inefficient, please describe some method of communication between classes that will continue to work as the project grows. Also, if this doesn't make any sense to anyone, I'll try to give actual code snippets in the future and think of a better way to ask the question. Thanks.
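    One pattern that keeps the registration idea but stops the hub from growing a method per component pair is a small publish/subscribe event bus: components subscribe to named events and publish updates without knowing who is listening. A language-neutral sketch in Python (all names illustrative):

        from collections import defaultdict

        class EventBus:
            """Minimal publish/subscribe hub."""
            def __init__(self):
                self._handlers = defaultdict(list)

            def subscribe(self, event, handler):
                self._handlers[event].append(handler)

            def publish(self, event, payload=None):
                for handler in self._handlers[event]:
                    handler(payload)

        # Components register interest in events instead of in each other:
        bus = EventBus()
        bus.subscribe("selection-changed", lambda item: print("details pane shows", item))
        bus.subscribe("selection-changed", lambda item: print("status bar shows", item))
        bus.publish("selection-changed", "row 42")

    The bus never needs a new method when a component is added; only the set of event names grows, so neither the hub nor the constructors accumulate per-component knowledge.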

  • Which mobile operating system should I code for?

    - by samgoody
    It seems as though mobile computing has fully arrived. I would like to rewrite two of our programs for mobile devices, but am a bit lost as to which platform to target. Complicating this decision:

    - I would need to learn the relevant languages and IDEs - my coding to date has been almost all web-based (PHP, JS, ActionScript, etc. Some ASPX).
    - Most users seem to be religious about their mobile decision, so oral conversations leave me more confused than enlightened.
    - I do not yet own a smartphone - I will have to buy one once I know which platform to be aiming for.
    - Both of my programs are more for business users (one is only useful for C.P.A.s).
    - I am a single developer, and cannot develop for more than one platform at a time. Getting it right is important.

    Based on what I've found on the web, I would've expected RIM to be a shoo-in, and the general order to be as follows:

    1. RIM BlackBerry - More of them than any other brand. Despite naysayers, they've had double the sales (or perhaps 5X the sales) of any other smartphone, and have continued to grow. And they have business users.
    2. Android - According to Schmidt, they have outsold everyone else except RIM (though I can't find where I read that now), and they are just getting started. According to comScore, they are already at 8% of the market and expected to hit Schmidt's claims within six months.
    3. Nokia - The largest worldwide. If they would just make up their mind between Maemo and Symbian, I would be far less confused.
    4. iPhone - Much more competition from other apps, fewer sales to be had, and an overlord that can delay or cancel my app at any time. Is Cocoa hard to learn?
    5. Windows Mobile - Word is that version 7 will not be backwards compatible and is losing market share.
    6. Palm WebOS - Perhaps this should go first, as it is the only one that offers tools to make my life easy as a web application developer. No competition in the marketplace. But not very many users either.

    However, a search on StackOverflow shows a hugely disproportionate number of iPhone questions versus BlackBerry. Likewise, there are clearly more apps on iPhone, so it must be getting developer love. What is the one platform I should develop for? Please back up your answer with logic.

  • How can a test script inform R CMD check that it should emit a custom message?

    - by mariotomo
    I'm writing an R package (delftfews) here at the office. We are using svUnit for unit testing. Our process for describing new functionality: we define new unit tests, initially marked as DEACTIVATED; one block of tests at a time, we activate them and implement the function described by the tests. Almost all the time we have a small number of DEACTIVATED tests, relating to functions that might be dropped or will be implemented.

    My problem/question is: can I alter doSvUnit.R so that R CMD check pkg emits a NOTE (i.e. a custom message "NOTE" instead of "OK") in case there are DEACTIVATED tests? As of now, we see only that the active tests don't give errors:

        ...
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’ OK
        * checking PDF version of manual ... OK

    which is all right if all tests succeed, but less all right if there are skipped tests and definitely wrong if there are failing tests. In this case, I'd actually like to see a NOTE or a WARNING like the following:

        ...
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’ NOTE
          6 test(s) were skipped.
          WARNING
          1 test(s) are failing.
        * checking PDF version of manual ... OK

    As of now, we have to open doSvUnit.Rout to check the real test results. I contacted two of the maintainers at r-forge and CRAN and they pointed me to the sources of R, in particular the testing.R script. If I understand it correctly, to answer this question we would need to patch the tools package: scripts in the tests directory are called using a system call, output (stdout and stderr) goes to one single file, and there are only two possible outcomes: ok or not ok. So I opened a change request on R, proposing something like bit-coding the return status: bit 0 for ERROR (as it is now), bit 1 for WARNING, bit 2 for NOTE. With my modification, it would be easy to produce this output:

        ...
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’ NOTE - please check doSvUnit.Rout.
          WARNING - please check doSvUnit.Rout.
        * checking PDF version of manual ... OK

    Brian Ripley replied "There are however several packages with properly written unit tests that do signal as required. Please do take this discussion elsewhere: R-bugs is not the place to ask questions." and closed the change request. Does anybody have hints?

  • How can I get image data from QTKit without color or gamma correction in Snow Leopard?

    - by Nick Haddad
    Since Snow Leopard, QTKit is now returning color-corrected image data from functions like QTMovie's frameImageAtTime:withAttributes:error:. Given an uncompressed AVI file, the same image data is displayed with larger pixel values in Snow Leopard vs. Leopard.

    Currently I'm using frameImageAtTime to get an NSImage, then asking for the tiffRepresentation of that image. After doing this, pixel values are slightly higher in Snow Leopard. For example, a file with the following pixel value in Leopard:

        [0 180 0]

    now has a pixel value like:

        [0 192 0]

    Is there any way to ask a QTMovie for video frames that are not color corrected? Should I be asking for a CGImageRef, CIImage, or CVPixelBufferRef instead? Is there a way to disable color correction altogether prior to reading in the video files?

    I've attempted to work around this issue by drawing into an NSBitmapImageRep with NSCalibratedRGBColorSpace, but that only gets me part of the way there:

        // Create a movie
        NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
            nsFileName, QTMovieFileNameAttribute,
            [NSNumber numberWithBool:NO], QTMovieOpenAsyncOKAttribute,
            [NSNumber numberWithBool:NO], QTMovieLoopsAttribute,
            [NSNumber numberWithBool:NO], QTMovieLoopsBackAndForthAttribute,
            (id)nil];
        _theMovie = [[QTMovie alloc] initWithAttributes:dict error:&error];

        // ....

        NSMutableDictionary *imageAttributes = [NSMutableDictionary dictionary];
        [imageAttributes setObject:QTMovieFrameImageTypeNSImage forKey:QTMovieFrameImageType];
        [imageAttributes setObject:[NSArray arrayWithObject:@"NSBitmapImageRep"] forKey:QTMovieFrameImageRepresentationsType];
        [imageAttributes setObject:[NSNumber numberWithBool:YES] forKey:QTMovieFrameImageHighQuality];

        NSError *err = nil;
        NSImage *image = (NSImage *)[_theMovie frameImageAtTime:frameTime withAttributes:imageAttributes error:&err];

        // copy NSImage into an NSBitmapImageRep (Objective-C)
        NSBitmapImageRep *bitmap = [[image representations] objectAtIndex:0];

        // Draw into a colorspace we know about
        NSBitmapImageRep *bitmapWhoseFormatIKnow = [[NSBitmapImageRep alloc]
            initWithBitmapDataPlanes:NULL
                          pixelsWide:getWidth()
                          pixelsHigh:getHeight()
                       bitsPerSample:8
                     samplesPerPixel:4
                            hasAlpha:YES
                            isPlanar:NO
                      colorSpaceName:NSCalibratedRGBColorSpace
                        bitmapFormat:0
                         bytesPerRow:(getWidth() * 4)
                        bitsPerPixel:32];
        [NSGraphicsContext saveGraphicsState];
        [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapWhoseFormatIKnow]];
        [bitmap draw];
        [NSGraphicsContext restoreGraphicsState];

    This does convert back to a 'non color corrected' colorspace, but the color values are NOT exactly the same as what is stored in the uncompressed AVI files we are testing with. Also, this is much less efficient, because it converts from RGB -> "Device RGB" -> RGB.

    Also, I am working in a 64-bit application, so dropping down to the QuickTime C API is not an option. Thanks for your help.

  • Where to put a glossary of important terms and patterns in documentation?

    - by Tetha
    Greetings. I want to document certain patterns in the code in order to build up a consistent terminology (in order to ease communication about the software). I am, however, unsure where to define the terms given. To get on the same level, an example:

    I have a code generator. This code generator receives a certain InputStructure from the parser (yes, the name InputStructure might be less than ideal). This InputStructure is then transformed into various subsequent datastructures (like an abstract description of the validation process). Each of these datastructures can either be transformed into another value of the same datastructure, or it can be transformed into the next datastructure. This should sound like Pipes and Filters to some degree.

    Given this, I call an operation which takes a datastructure and constructs a value of the same datastructure a transformation, while I call an operation which takes a datastructure and produces a different follow-up datastructure a derivation. The final step of deriving a string containing code is called emitting. (So, overall, the code generator takes the InputStructure and transforms, transforms, derives, transforms, derives and finally emits.)

    I think emphasizing these terms will be beneficial in communication, because then it is easy to talk about things. If you hear "transformation", you know "OK, I only need to think about these two datastructures"; if you hear "emitting", you know "OK, I only need to know this datastructure and the target language."

    However, where do I document these patterns? The current code base uses visitors and offers classes called like ValidatorTransformationBase<ResultType> (or InputStructureTransformationBase<ResultType>, and so on). I do not really want to add the definition of such terms to the interfaces, because in that case I'd have to repeat myself on each and every interface, which clearly violates DRY.

    I am considering emphasizing the distinction between transformations and derivations by adding further interfaces (I would have to think about a better name for the TransformationBase classes, but then I could do things like ValidatorTransformation extends ValidatorTransformationBase<Validator>, or ValidatorDerivationFromInputStructure extends InputStructureTransformation<Validator>).

    I also think I should add a custom page to the already existing doxygen documentation, called something like "Glossary" or "Architecture Principles", which contains such principles. The only disadvantage of this would be that a contributor will need to find this page in order to actually learn about these conventions.

    Am I missing possibilities, or am I judging something wrong here, in your opinion?

    -- Regards, Tetha

  • Delay keyboard input help

    - by Stradigos
    I'm so close! I'm using the XNA Game State Management example found here and trying to modify how it handles input so I can delay the key / create an input buffer. In GameplayScreen.cs I've declared a double called elapsedTime and set it to 0. In the HandleInput method I've changed the Keys.Right button press to:

        if (keyboardState.IsKeyDown(Keys.Left))
            movement.X -= 50;

        if (keyboardState.IsKeyDown(Keys.Right))
        {
            elapsedTime -= gameTime.ElapsedGameTime.TotalMilliseconds;
            if (elapsedTime <= 0)
            {
                movement.X += 50;
                elapsedTime = 10;
            }
        }
        else
        {
            elapsedTime = 0;
        }

    The pseudocode: if the right arrow key is not pressed, set elapsedTime to 0. If it is pressed, elapsedTime equals itself minus the milliseconds since the last frame. If the difference is then 0 or less, move the object by 50 and set elapsedTime to 10 (the delay). If the key is being held down, elapsedTime should never be set to 0 via the else. Instead, after elapsedTime is set to 10 following a successful check, it should get lower and lower, because the TotalMilliseconds is being subtracted from it. When it reaches 0, it passes the check again and moves the object once more.

    The problem is, it moves the object once per press but doesn't work if you hold it down. Can anyone offer any sort of tip/example/bit of knowledge towards this? Thanks in advance, it's been driving me nuts. In theory I thought this would work for sure.

    CLARIFICATION: Think of a grid when you're thinking about how I want the block to move. Instead of just fluidly moving across the screen, it's moving by its width (sort of jumping) to the next position. If I hold down the key, it races across the screen. I want to slow this whole process down so that holding the key creates an X-millisecond delay between it jumping/moving by its width.

    EDIT: Turns out gameTime.ElapsedGameTime.TotalMilliseconds is returning 0... all of the time. I have no idea why.
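    For reference, the accumulate-and-fire pattern being attempted here, sketched in Python with frame-style updates (names and the repeat interval are illustrative; a delay of 10 would mean 10 ms, which fires on nearly every frame - something on the order of 100-200 ms is more typical for grid-movement repeat):

        class KeyRepeat:
            """Fire once immediately, then every interval_ms while the key is held."""
            def __init__(self, interval_ms=150):
                self.interval_ms = interval_ms
                self.timer = 0.0

            def update(self, pressed, elapsed_ms):
                # Returns True on frames where the action should fire.
                if not pressed:
                    self.timer = 0.0
                    return False
                self.timer -= elapsed_ms
                if self.timer <= 0:
                    self.timer = self.interval_ms
                    return True
                return False

    Usage per frame: if repeat.update(right_key_down, dt_ms): block.x += 50. Note that if elapsed_ms is always 0 - as the EDIT reports - the timer never counts down after the first fire, which would match the observed once-per-press behavior exactly.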

  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have never-ending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets less than 5,000 uniques a day. Yes, you read that right. Go Drupal.

    Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions:

    1. When Apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X." or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right?

    2. What's a good, noob-friendly way to monitor my current Apache performance? My developer friends tell me to "just use top, dude," but top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator about how Apache is performing and then show me a list of processes or pages that are sucking right now. This way, I could know when performance is bad and then what's causing it to be so bad.

    3. Why does PHP memory matter? My PHP apparently has a 30MB memory footprint. Will it run faster if I bring that number down?

    Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo. I'm now resigned to the fact that I might not have a choice.

  • How to modularize a JSF/Facelets/Spring application with OSGi?

    - by lexicore
    I'm working with very large JSF/Facelets applications which use Spring for DI/bean management. My applications have a modular structure, and I'm currently looking for approaches to standardize the modularization.

    My goal is to compose a web application from a number of modules (possibly depending on each other). Each module may contain the following:

    - Classes;
    - Static resources (images, CSS, scripts);
    - Facelet templates;
    - Managed beans - Spring application contexts, with request-, session- and application-scoped beans (the alternative is JSF managed beans);
    - Servlet API stuff - servlets, filters, listeners (this is optional).

    What I'd like to avoid (almost at all costs) is the need to copy or extract module resources (like Facelet templates) to the WAR, or to extend the web.xml for the module's servlets, filters, etc. It must be enough to add the module (JAR, bundle, artifact, ...) to the web application (WEB-INF/lib, bundles, plugins, ...) to extend the web application with this module.

    Currently I solve this task with a custom modularization solution which is heavily based on using classpath resources:

    - A special resources servlet serves static resources from classpath resources (JARs).
    - A special Facelets resource resolver allows loading Facelet templates from classpath resources.
    - Spring loads application contexts via the pattern classpath*:com/acme/foo/module/applicationContext.xml - this loads application contexts defined in module JARs.
    - Finally, a pair of delegating servlets and filters delegate request processing to the servlets and filters configured in Spring application contexts from modules.

    In recent days I have read a lot about OSGi, and I was considering how (and if) I could use OSGi as a standardized modularization approach. I was thinking about how individual tasks could be solved with OSGi:

    - Static resources - OSGi bundles which want to export static resources register ResourceLoader instances with the bundle context. A central ResourceServlet uses these resource loaders to load resources from bundles.
    - Facelet templates - similar to the above, a central ResourceResolver uses services registered by bundles.
    - Managed beans - I have no idea how to use an expression like #{myBean.property} if myBean is defined in one of the bundles.
    - Servlet API stuff - use something like WebExtender/Pax Web to register servlets, filters and so on.

    My questions are:

    1. Am I reinventing the wheel here? Are there standard solutions for that? I've found a mention of Spring Slices but could not find much documentation about it.
    2. Do you think OSGi is the right technology for the described task?
    3. Is my sketch of the OSGi application more or less correct?
    4. How should managed beans (especially request/session scope) be handled?

    I'd be generally grateful for your comments.
