Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • Understanding memory leaks in an Android app

    - by sat
    After going through a few articles about performance, I am not able to understand this statement exactly: "When a Drawable is attached to a view, the view is set as a callback on the drawable." The suggested solution is: "Setting the stored drawables' callbacks to null when the activity is destroyed." What does that mean? For example, in my app I initialize an ImageButton in onCreate() like this: imgButton = (ImageButton) findViewById(R.id.imagebtn); At a later stage I get an image from a URL, get the stream, convert it to a Drawable, and set it on the button like this: imgButton.setImageDrawable(drawable); According to the statement above, when I am exiting my app (say, in onDestroy()) I have to set the stored drawables' callbacks to null, and I am not able to understand this part. In this simple case, what exactly do I have to set to null? I am using Android 2.2 (Froyo); is this technique required here, or is it unnecessary?
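
    A minimal sketch of what "setting the stored drawable's callback to null" could look like in this case; MyActivity is illustrative, and it assumes imgButton is a field of the activity that still holds the drawable set above:

        import android.app.Activity;
        import android.graphics.drawable.Drawable;
        import android.widget.ImageButton;

        public class MyActivity extends Activity {
            private ImageButton imgButton;   // assigned in onCreate() via findViewById(R.id.imagebtn)

            @Override
            protected void onDestroy() {
                super.onDestroy();
                // The drawable set on the button keeps the view registered as its callback;
                // clearing that reference is what the article means by "set the callback to null".
                Drawable d = (imgButton != null) ? imgButton.getDrawable() : null;
                if (d != null) {
                    d.setCallback(null);
                }
            }
        }

    Whether this matters in practice depends mostly on whether something longer-lived than the activity (a static field, a cache) keeps a reference to the drawable; if nothing does, the whole graph becomes collectable when the activity is destroyed anyway.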

    Read the article

  • What can we do to make XML processing faster?

    - by adpd
    We work on an internal corporate system that has a web front-end as one of its interfaces. The front-end (Java + Tomcat + Apache) communicates with the back-end (a proprietary system written in a COBOL-like language) through SOAP web services. As a result, we pass large XML files back and forth. We believe that this architecture has a significant impact on performance due to the large overhead of XML transport and parsing. Unfortunately, we are stuck with this architecture. How can we make this XML setup more efficient? Any tips or techniques are greatly appreciated.

    Read the article

  • Why do these seemingly similar queries have such drastically different run times?

    - by Jherico
    I'm working with an Oracle DB, trying to tune some queries, and I'm having trouble understanding why wording a particular clause in a particular way has such a drastic impact on query performance. Here is a performant version of the query I'm doing:

        select * from (
            select a.*, rownum rn from (
                select * from table_foo
            ) a where rownum < 3
        ) where rn >= 2

    The same query with the last two lines replaced by

        ) a where rownum >= 2 and rownum < 3
        )

    performs horribly: several orders of magnitude worse.

        ) a where rownum between 2 and 3
        )

    also performs horribly. I don't understand the magic in the first query, or how to apply it to further similar queries.

    Read the article

  • How to write a unit test for WCF behaviors?

    - by katie77
    I am new to unit testing. How do I write a unit test for a method when I am extending a WCF behavior? I am not sure when the class is instantiated, and I cannot change the method signatures. In the behavior implementation, I am getting the header and looking up a value in the config:

        public class IncomingValidator : IDispatchMessageInspector
        {
            public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
            {
                // Grab the header and see if one of the particular values (read from config) is there.
            }

            public void BeforeSendReply(ref Message reply, object correlationState)
            {
            }
        }
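
    One way to unit test this is to treat IncomingValidator as an ordinary class: construct it in the test, build a Message by hand, and call AfterReceiveRequest directly, passing null for the channel and instance context if the implementation never touches them. A rough sketch under those assumptions; the header name, namespace, values and the final assertion are made up for illustration:

        // Assumes: using System.ServiceModel.Channels; using Microsoft.VisualStudio.TestTools.UnitTesting;
        [TestMethod]
        public void AfterReceiveRequest_AcceptsRequestCarryingExpectedHeader()
        {
            var inspector = new IncomingValidator();

            // Build a throwaway request and attach the header the inspector looks for.
            Message request = Message.CreateMessage(MessageVersion.Soap11, "urn:some-action");
            request.Headers.Add(MessageHeader.CreateHeader("MyHeader", "urn:my-namespace", "expected-value"));

            // The channel and instance context are unused by this implementation, so null is acceptable here.
            object correlationState = inspector.AfterReceiveRequest(ref request, null, null);

            // Assert whatever the inspector is supposed to do, e.g. not throw, or return a marker state.
            Assert.IsNull(correlationState);
        }

    Since the implementation reads the expected values from config, the test project's app.config would need to carry the same setting, or the lookup would need to be injected so the test can supply it.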

    Read the article

  • Which of these Array Initializations is better in Ruby?

    - by Bragaadeesh
    Hi, which of these two forms of array initialization is better in Ruby?

    Method 1:

        DAYS_IN_A_WEEK = (0..6).to_a
        HOURS_IN_A_DAY = (0..23).to_a
        @data = Array.new(DAYS_IN_A_WEEK.size).map!{ Array.new(HOURS_IN_A_DAY.size) }
        DAYS_IN_A_WEEK.each do |day|
          HOURS_IN_A_DAY.each do |hour|
            @data[day][hour] = 'something'
          end
        end

    Method 2:

        DAYS_IN_A_WEEK = (0..6).to_a
        HOURS_IN_A_DAY = (0..23).to_a
        @data = {}
        DAYS_IN_A_WEEK.each do |day|
          HOURS_IN_A_DAY.each do |hour|
            @data[day] ||= {}
            @data[day][hour] = 'something'
          end
        end

    The difference between the first method and the second is that the second one does not allocate memory initially. I feel the second one is a bit inferior when it comes to performance, due to the numerous Array copies that have to happen. However, it is not straightforward in Ruby to find out what is actually happening. So, if someone can explain to me which is better, it would be really great! Thanks
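
    For comparison only, here is a third form that builds the same 7x24 grid eagerly with Array.new and a block; this is a sketch of the idea, not a claim about which of the two methods above is faster:

        DAYS_IN_A_WEEK = (0..6).to_a
        HOURS_IN_A_DAY = (0..23).to_a

        # The block runs once per row, so every cell exists up front and no
        # nil checks or ||= are needed when filling it in later.
        @data = Array.new(DAYS_IN_A_WEEK.size) do
          Array.new(HOURS_IN_A_DAY.size) { 'something' }
        end

    Timing the two original methods with the standard benchmark library is the most direct way to settle the question for a particular Ruby version.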

    Read the article

  • Using temporary arrays to cut down on code - inefficient?

    - by tommaisey
    I'm new to C++ (and SO), so sorry if this is obvious. I've started using temporary arrays in my code to cut down on repetition and to make it easier to do the same thing to multiple objects. So instead of:

        MyObject obj1, obj2, obj3, obj4;
        obj1.doSomming(arg);
        obj2.doSomming(arg);
        obj3.doSomming(arg);
        obj4.doSomming(arg);

    I'm doing:

        MyObject obj1, obj2, obj3, obj4;
        MyObject* objs[] = {&obj1, &obj2, &obj3, &obj4};
        for (int i = 0; i != 4; ++i)
            objs[i]->doSomming(arg);

    Is this detrimental to performance? Like, does it cause unnecessary memory allocation? Is it good practice? Thanks.
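
    For what it's worth, the pointer array in the second version lives on the stack, so no dynamic allocation is involved. A small self-contained sketch of the same pattern with a C++11 range-based for; MyObject and doSomming are stand-ins for the question's types:

        #include <cstdio>

        struct MyObject {
            void doSomming(int arg) { std::printf("doing %d\n", arg); }
        };

        int main() {
            MyObject obj1, obj2, obj3, obj4;

            // Four pointers in an automatic (stack) array; nothing is heap-allocated here.
            MyObject* objs[] = {&obj1, &obj2, &obj3, &obj4};

            for (MyObject* o : objs)   // range-based for removes the index bookkeeping
                o->doSomming(42);
        }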

    Read the article

  • Faster way of initializing arrays in Delphi

    - by Max
    I'm trying to squeeze every bit of performance out of my Delphi application, and now I have come to a procedure which works with dynamic arrays. The slowest line in it is SetLength(Result, Len); which is used to initialize the dynamic array. When I look at the code for the SetLength procedure I see that it is far from optimal. The call sequence is as follows: _DynArraySetLength -> DynArraySetLength. DynArraySetLength gets the array length (which is zero for initialization) and then uses ReallocMem, which is also unnecessary for initialization. I have been using SetLength to initialize dynamic arrays all the time. Maybe I'm missing something? Is there a faster way to do this?

    Read the article

  • How does Array.ForEach() compare to standard for loop in C#?

    - by DaveN59
    I pine for the days when, as a C programmer, I could type:

        memset( byte_array, '0xFF' );

    and get a byte array filled with 'FF' characters. So, I have been looking for a replacement for this:

        for (int i = 0; i < byteArray.Length; i++)
        {
            byteArray[i] = 0xFF;
        }

    Lately, I have been using some of the new C# features and have been using this approach instead:

        Array.ForEach<byte>(byteArray, b => b = 0xFF);

    Granted, the second approach seems cleaner and is easier on the eye, but how does its performance compare to the first approach? Am I introducing needless overhead by using LINQ and generics? Thanks, Dave

    Read the article

  • Dev Environment Tests Not 100% Compatible with Staging/Production in Rails

    - by aronchick
    We use a bunch of specific apps/APIs that (unfortunately) differ quite a bit from dev to staging/production. We use tests and continuous integration at each stage, but in dev the tests fail annoyingly (throwing dialogs, etc. - thanks, Windows, for the 64-bit notification!). I hate to write custom code, but are there some best practices for allowing only a subset of testing in Ruby/Rails, or for patching out specific tests when you're running on Windows? Some specific situations: Identify.exe does not support 64-bit Windows and throws a dialog. Sethostname is not supported and throws an error (at least it's command-line).
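
    One common way to patch out platform-specific tests is an exclusion filter; a rough sketch with RSpec 2 metadata follows (the :windows_broken tag and the platform regex are illustrative, not taken from the question):

        # spec/spec_helper.rb
        RSpec.configure do |config|
          # Skip anything tagged :windows_broken when the suite runs on Windows.
          config.filter_run_excluding :windows_broken => true if RUBY_PLATFORM =~ /mswin|mingw/
        end

        # In a spec, tag only the examples that depend on Identify.exe, sethostname, etc.
        it "resizes the uploaded image", :windows_broken => true do
          # ...
        end

    Tagging keeps the dev suite green without deleting coverage; CI on staging/production still runs everything.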

    Read the article

  • Small and fast .NET programs? - 65% runtime in ResolvePolicy

    - by forki23
    Hi, I tried to build a very, very small .NET app in F#. It just has to convert a small string into another string and print the result to the console, like: convert.exe myString == prints something like "myConvertedString". I used dotTrace to analyze the performance: 26% (168 ms) is spent in my actual string conversion (I think this is OK), and 65.80% (425 ms) in ResolvePolicy in System.Security.SecurityManager. A runtime of 500 ms on every execution is way too slow. Can I do something to improve this? It would be OK if only the first call needed this time. Regards, forki

    Read the article

  • Quantifying the Performance of Garbage Collection vs. Explicit Memory Management

    - by EmbeddedProg
    I found this article here: Quantifying the Performance of Garbage Collection vs. Explicit Memory Management http://www.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf In the conclusion section, it reads: Comparing runtime, space consumption, and virtual memory footprints over a range of benchmarks, we show that the runtime performance of the best-performing garbage collector is competitive with explicit memory management when given enough memory. In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management. However, garbage collection’s performance degrades substantially when it must use smaller heaps. With three times as much memory, it runs 17% slower on average, and with twice as much memory, it runs 70% slower. Garbage collection also is more susceptible to paging when physical memory is scarce. In such conditions, all of the garbage collectors we examine here suffer order-of-magnitude performance penalties relative to explicit memory management. So, if my understanding is correct: if I have an app written in native C++ requiring 100 MB of memory, to achieve the same performance with a "managed" (i.e. garbage collector based) language (e.g. Java, C#), the app should require 5*100 MB = 500 MB? (And with 2*100 MB = 200 MB, the managed app would run 70% slower than the native app?) Do you know if current (i.e. latest Java VM's and .NET 4.0's) garbage collectors suffer the same problems described in the aforementioned article? Has the performance of modern garbage collectors improved? Thanks.

    Read the article

  • How to get bearable 2D and 3D performance on AMD Radeon HD 6950?

    - by l0b0
    I have had an AMD Radeon HD 6950 (i.e., Cayman series) for a couple years now, and I have tried a lot of combinations of drivers and settings with terrible results. I'm completely at a loss as to how to proceed. The open source driver has much better 2D performance, but it offloads all OpenGL rendering to the CPU. What I've tried so far: All the latest stable Ubuntu releases in the period, plus one Linux Mint release. All the latest stable AMD Catalyst Proprietary Display Drivers, and currently 13.1. The unofficial wiki installation instructions for every Ubuntu version and the semi-official Ubuntu instructions. All the tips and tweaks I could find for Minecraft (Optifine, reducing settings to minimum), VLC (postprocessing at minimum, rendering at native video size), Catalyst Control Center (flipped every lever in there) and X11 (some binary toggles I can no longer remember). Results: Typically 13-15 FPS in Minecraft, 30 max (100+ in Windows with the same driver version). Around 10 FPS in Team Fortress 2 using the official Steam client. Choppy video playback, in Flash and with VLC. CPU use goes through the roof when rendering video (150% for 1080p on YouTube in Chromium, 100% for 1080p H264 in VLC). glxgears shows 12.5 FPS when maximized. fgl_glxgears shows 10 FPS when maximized. Hardware details from lshw: Motherboard ASUS P6X58D-E CPU Intel Core i7 CPU 950 @ 3.07GHz (never overclocked; 64 bit) 6 GB RAM Video card product "Cayman PRO [Radeon HD 6950]", vendor "Hynix Semiconductor (Hyundai Electronics)" 2 x 1920x1200 monitors, both connected with HDMI. I feel I must be missing something absolutely fundamental here. Is there no accelerated support for anything on 64-bit architectures? Does a dual monitor completely mess up the driver? $ fglrxinfo display: :0 screen: 0 OpenGL vendor string: Advanced Micro Devices, Inc. OpenGL renderer string: AMD Radeon HD 6900 Series OpenGL version string: 4.2.11995 Compatibility Profile Context $ glxinfo | grep 'direct rendering' direct rendering: Yes I am currently using the open source driver, with the following results: Full frame rate and low CPU load when playing 1080p video. Black screen (but music in the background) in Team Fortress 2. Similar performance in Minecraft as the Catalyst driver. In hindsight obvious, since both end up offloading the rendering to the CPU. My /var/log/Xorg.0.log after upgrading to AMD Catalyst 13.1. Some possibly important lines: (WW) Falling back to old probe method for fglrx (WW) fglrx: No matching Device section for instance (BusID PCI:0@3:0:1) found The generated xorg.conf. The disabled "monitor" 0-DFP9 is actually an A/V receiver, which sometimes confuses the monitor drivers when turned on/off (but not in Windows). All three "monitor" devices are connected with HDMI. Edit: Chris Carter's suggestion to use the xorg-edgers PPA (Catalyst 13.1) resulted in some improvement, but still pretty bad performance overall: Minecraft stabilizes at 13-17 FPS, but at least the CPU load is "only" at 45-60%. Still 150% CPU use for 1080p video rendering on YouTube in Chromium. Massive improvement for 1080p H264 in VLC: 40-50% CPU use and no visible jitter glxgears performance about doubled to 25-30 FPS when maximized. fgl_glxgears still at ~10 FPS when maximized.

    Read the article

  • How do you manage the testing of your Android software on physical devices?

    - by Philip Regan
    I'm in charge of managing mobile application development at my company, and I am currently building a mobile device "library" for testing. Essentially, we want to have a representative device in-house for each of the OSes we are developing for, currently iOS (iPhone-only), Blackberry, and Android. Simulators only go so far, but I'm placing into the process a step to test software on the devices themselves. The problem we're finding is with Android. I don't think any of us here ever really understood just how fragmented the whole platform is until we started looking at devices to acquire. We are going to wait until v2.3 of Android is released, but which products to choose? Do we go by the most popular by market share? Do we get a small range of products by specs from least to most powerful overall? We're trying to avoid having to manage a dozen different devices to test each app, if not because of cost if only for the repeated time sink. How do you manage the testing of your Android software on physical devices?

    Read the article

  • Is '@' Error Suppression a Valid Technique for Testing for an Optional Array Key?

    - by MikeSchinkel
    Rarst and I were debating offline about the use of the '@' error suppression operator in PHP, specifically for testing for the existence of "optional" array keys, i.e. array keys that are being used as a switch, where their absence from the array is functionally equivalent to the array having the key with a value of false. Here is pseudo-code for this scenario:

        function do_something( $args = array() ) {
            if ( @$args['switch'] ) {
                // Do something with this switch
            }
            // continue on...
        }

    vs. this approach:

        function do_something( $args = array() ) {
            if ( ! empty( $args['switch'] ) && $args['switch'] ) {
                // Do something with this switch
            }
            // continue on...
        }

    Of course in most use-cases, suppressing errors would not be A Good Thing(tm). However, in this use-case where an array is passed with an optional element, it seems to me that it is actually a very good technique, but I could be wrong and would like to hear others' opinions on the subject before I make up my mind. I do know that there are alleged performance hits for the former approach, but I'd like to know how they compare with the alternative, and whether the performance hits really matter in real-world scenarios. P.S. I decided to post this because, after debating this offline with Rarst, he asked a more general question here on Programmers but didn't actually give a detailed example of the specific use-case we were debating. And since I'm pretty sure he'll want to use the out-of-context answers on that other question as justification for why the above is "bad", I decided I needed to get opinions on this specific use-case.

    Read the article

  • Administer, manage, monitor, and fine-tune the performance of your Oracle SOA Suite 11g Service Infrastructure and SOA composite applications

    - by JuergenKress
    Key features of the book: If you are an Oracle SOA Suite administrator, then this book is your bible. It gives you everything you need to know about all your tasks and helps you apply what you learn in your everyday work right from the first chapter. The book walks through promoting code across environments, performance-tuning the service infrastructure, monitoring the environment, configuring security policies, managing the dehydration store, backing up and restoring environments, and so on. Packed with real-world examples from the authors' own experience, this book offers unique insight into Oracle SOA Suite administration.

    Detailed description: The book begins with an introduction to SOA and quickly moves on to management of SOA composite applications. Readers will learn how to manage composite applications, their deployments and lifecycles. Equipped with this knowledge, readers will be introduced to monitoring and performance-tuning SOA Suite, monitoring instances, messages, and composite applications, managing faults and exceptions, configuring audit levels of composite applications to include end-to-end monitoring through the use of extended logging, as well as administering and configuring all SOA Suite components. A very important aspect of administration is tuning and optimizing the infrastructure for performance, and the book offers real-world recommendations for monitoring and performance-tuning service engines, the underlying WebLogic server, threads and timeouts, file systems, and composite applications. It also covers detailed administration of individual service components, configuring the infrastructure MBeans using both Oracle Enterprise Manager Fusion Middleware Control and WLST-based scripts, migrating worklist preferences and BAM data across environments, and setting up Email, LDAP and custom XPath. An administrator is always entrusted with troubleshooting and root-causing problems in the infrastructure, and this book walks you through troubleshooting approaches such as identifying faults and exceptions through extended logging and thread dumps, and finding solutions to common startup problems and deployment issues. The advanced content of this book explains the OWSM security framework and how to secure components deployed to the infrastructure, along with the details of all the groundwork needed to ready the environment. The last few chapters help you understand and deal with managing the metadata services repository and dehydration store, and backup and recovery, concluding with advanced topics such as silent/scripted installations, cloning, upgrading, patching and high-availability installations. Packed with real-world examples and tips straight from the trenches, this book offers insights into SOA Suite administration that you will not find elsewhere. Our writing style in this book draws heavily on the philosophy of reuse, and as such the book provides ample executable SQL queries and WLST scripts that administrators can reuse and extend to perform most administration tasks, such as monitoring instances, processing times and instance states, and performing automatic deployments, tuning, migration, and installation. These scripts are spread over each of the chapters in the book and can also be downloaded from here. The book is available in different formats at the following websites: paperback and eBook versions & Kindle version. It is available for order, and signed copies are available through our web site.
    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • How to mock an SqlDataReader using Moq - Update

    - by Simon G
    Hi, I'm new to Moq and setting up mocks, so I could do with a little help. The title says it all really: how do I mock up an SqlDataReader using Moq? Thanks.

    Update: after further testing, this is what I have so far:

        private IDataReader MockIDataReader()
        {
            var moq = new Mock<IDataReader>();
            moq.Setup( x => x.Read() ).Returns( true );
            moq.Setup( x => x.Read() ).Returns( false );
            moq.SetupGet<object>( x => x["Char"] ).Returns( 'C' );
            return moq.Object;
        }

        private class TestData
        {
            public char ValidChar { get; set; }
        }

        private TestData GetTestData()
        {
            var testData = new TestData();
            using ( var reader = MockIDataReader() )
            {
                while ( reader.Read() )
                {
                    testData = new TestData
                    {
                        ValidChar = reader.GetChar( "Char" ).Value
                    };
                }
            }
            return testData;
        }

    The issue is that when I call reader.Read() in my GetTestData() method, it is always empty. I need to know how to do something like reader.Stub( x => x.Read() ).Repeat.Once().Return( true ) as per this Rhino Mocks example: http://stackoverflow.com/questions/1792984/mocking-a-datareader-and-getting-a-rhino-mocks-exceptions-expectationviolationexc
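
    The two Setup calls on Read() above do not queue up; the second simply replaces the first, so the mocked Read() always returns false and the while loop never runs. A sketch of one way to get "true once, then false" with Moq, using a counter incremented in a callback (readCount is a local introduced for the example):

        private IDataReader MockIDataReader()
        {
            var moq = new Mock<IDataReader>();

            // Return true for the first Read() call and false for every call after it.
            var readCount = 0;
            moq.Setup( x => x.Read() )
               .Returns( () => readCount < 1 )
               .Callback( () => readCount++ );

            moq.SetupGet<object>( x => x["Char"] ).Returns( 'C' );
            return moq.Object;
        }

    Newer Moq versions also offer SetupSequence(x => x.Read()).Returns(true).Returns(false), which expresses the same intent more directly.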

    Read the article

  • Help getting frame rate (fps) up in Python + Pygame

    - by Jordan Magnuson
    I am working on a little card-swapping world-travel game that I sort of envision as a cross between Bejeweled and the 10 Days geography board games. So far the coding has been going okay, but the frame rate is pretty bad... currently I'm getting low 20's on my Core 2 Duo. This is a problem since I'm creating the game for Intel's March developer competition, which is squarely aimed at netbooks packing underpowered Atom processors. Here's a screen from the game: ![www.necessarygames.com/my_games/betraveled/betraveled-fps.png][1] I am very new to Python and Pygame (this is the first thing I've used them for), and am sadly lacking in formal CS training... which is to say that I think there are probably A LOT of bad practices going on in my code, and A LOT that could be optimized. If some of you older Python hands wouldn't mind taking a look at my code and seeing if you can't find any obvious areas for optimization, I would be extremely grateful. You can download the full source code here: http://www.necessarygames.com/my_games/betraveled/betraveled_src0328.zip Compiled exe here: www.necessarygames.com/my_games/betraveled/betraveled_src0328.zip One thing I am concerned about is my event manager, which I feel may have some performance wholes in it, and another thing is my rendering... I'm pretty much just blitting everything to the screen all the time (see the render routines in my game_components.py below); I recently found out that you should only update the areas of the screen that have changed, but I'm still foggy on how that accomplished exactly... could this be a huge performance issue? Any thoughts are much appreciated! As usual, I'm happy to "tip" you for your time and energy via PayPal. Jordan Here are some bits of the source: Main.py #Remote imports import pygame from pygame.locals import * #Local imports import config import rooms from event_manager import * from events import * class RoomController(object): """Controls which room is currently active (eg Title Screen)""" def __init__(self, screen, ev_manager): self.room = None self.screen = screen self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.room = self.set_room(config.room) def set_room(self, room_const): #Unregister old room from ev_manager if self.room: self.room.ev_manager.unregister_listener(self.room) self.room = None #Set new room based on const if room_const == config.TITLE_SCREEN: return rooms.TitleScreen(self.screen, self.ev_manager) elif room_const == config.GAME_MODE_ROOM: return rooms.GameModeRoom(self.screen, self.ev_manager) elif room_const == config.GAME_ROOM: return rooms.GameRoom(self.screen, self.ev_manager) elif room_const == config.HIGH_SCORES_ROOM: return rooms.HighScoresRoom(self.screen, self.ev_manager) def notify(self, event): if isinstance(event, ChangeRoomRequest): if event.game_mode: config.game_mode = event.game_mode self.room = self.set_room(event.new_room) def render(self, surface): self.room.render(surface) #Run game def main(): pygame.init() screen = pygame.display.set_mode(config.screen_size) ev_manager = EventManager() spinner = CPUSpinnerController(ev_manager) room_controller = RoomController(screen, ev_manager) pygame_event_controller = PyGameEventController(ev_manager) spinner.run() # this runs the main function if this script is called to run. # If it is imported as a module, we don't run the main function. 
if __name__ == "__main__": main() event_manager.py #Remote imports import pygame from pygame.locals import * #Local imports import config from events import * def debug( msg ): print "Debug Message: " + str(msg) class EventManager: #This object is responsible for coordinating most communication #between the Model, View, and Controller. def __init__(self): from weakref import WeakKeyDictionary self.listeners = WeakKeyDictionary() self.eventQueue= [] self.gui_app = None #---------------------------------------------------------------------- def register_listener(self, listener): self.listeners[listener] = 1 #---------------------------------------------------------------------- def unregister_listener(self, listener): if listener in self.listeners: del self.listeners[listener] #---------------------------------------------------------------------- def post(self, event): if isinstance(event, MouseButtonLeftEvent): debug(event.name) #NOTE: copying the list like this before iterating over it, EVERY tick, is highly inefficient, #but currently has to be done because of how new listeners are added to the queue while it is running #(eg when popping cards from a deck). Should be changed. See: http://dr0id.homepage.bluewin.ch/pygame_tutorial08.html #and search for "Watch the iteration" for listener in list(self.listeners): #NOTE: If the weakref has died, it will be #automatically removed, so we don't have #to worry about it. listener.notify(event) #------------------------------------------------------------------------------ class PyGameEventController: """...""" def __init__(self, ev_manager): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.input_freeze = False #---------------------------------------------------------------------- def notify(self, incoming_event): if isinstance(incoming_event, UserInputFreeze): self.input_freeze = True elif isinstance(incoming_event, UserInputUnFreeze): self.input_freeze = False elif isinstance(incoming_event, TickEvent): #Share some time with other processes, so we don't hog the cpu pygame.time.wait(5) #Handle Pygame Events for event in pygame.event.get(): #If this event manager has an associated PGU GUI app, notify it of the event if self.ev_manager.gui_app: self.ev_manager.gui_app.event(event) #Standard event handling for everything else ev = None if event.type == QUIT: ev = QuitEvent() elif event.type == pygame.MOUSEBUTTONDOWN and not self.input_freeze: if event.button == 1: #Button 1 pos = pygame.mouse.get_pos() ev = MouseButtonLeftEvent(pos) elif event.type == pygame.MOUSEMOTION: pos = pygame.mouse.get_pos() ev = MouseMoveEvent(pos) #Post event to event manager if ev: self.ev_manager.post(ev) #------------------------------------------------------------------------------ class CPUSpinnerController: def __init__(self, ev_manager): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.clock = pygame.time.Clock() self.cumu_time = 0 self.keep_going = True #---------------------------------------------------------------------- def run(self): if not self.keep_going: raise Exception('dead spinner') while self.keep_going: time_passed = self.clock.tick() fps = self.clock.get_fps() self.cumu_time += time_passed self.ev_manager.post(TickEvent(time_passed, fps)) if self.cumu_time >= 1000: self.cumu_time = 0 self.ev_manager.post(SecondEvent()) pygame.quit() #---------------------------------------------------------------------- def notify(self, event): if isinstance(event, QuitEvent): #this will stop the while loop from 
running self.keep_going = False rooms.py #Remote imports import pygame #Local imports import config import continents from game_components import * from my_gui import * from pgu import high class Room(object): def __init__(self, screen, ev_manager): self.screen = screen self.ev_manager = ev_manager self.ev_manager.register_listener(self) def notify(self, event): if isinstance(event, TickEvent): pygame.display.set_caption('FPS: ' + str(int(event.fps))) self.render(self.screen) pygame.display.update() def get_highs_table(self): fname = 'high_scores.txt' highs_table = None config.all_highs = high.Highs(fname) if config.game_mode == config.TIME_CHALLENGE: if config.difficulty == config.EASY: highs_table = config.all_highs['time_challenge_easy'] if config.difficulty == config.MED_DIF: highs_table = config.all_highs['time_challenge_med'] if config.difficulty == config.HARD: highs_table = config.all_highs['time_challenge_hard'] if config.difficulty == config.SUPER: highs_table = config.all_highs['time_challenge_super'] elif config.game_mode == config.PLAN_AHEAD: pass return highs_table class TitleScreen(Room): def __init__(self, screen, ev_manager): Room.__init__(self, screen, ev_manager) self.background = pygame.image.load('assets/images/interface/background.jpg').convert() #Initialize #--------------------------------------- self.gui_form = gui.Form() self.gui_app = gui.App(config.gui_theme) self.ev_manager.gui_app = self.gui_app c = gui.Container(align=0,valign=0) #Quit Button #--------------------------------------- b = StartGameButton(ev_manager=self.ev_manager) c.add(b, 0, 0) self.gui_app.init(c) def render(self, surface): surface.blit(self.background, (0, 0)) #GUI self.gui_app.paint(surface) class GameModeRoom(Room): def __init__(self, screen, ev_manager): Room.__init__(self, screen, ev_manager) self.background = pygame.image.load('assets/images/interface/background.jpg').convert() self.create_gui() #Create pgu gui elements def create_gui(self): #Setup #--------------------------------------- self.gui_form = gui.Form() self.gui_app = gui.App(config.gui_theme) self.ev_manager.gui_app = self.gui_app c = gui.Container(align=0,valign=-1) #Mode Relaxed Button #--------------------------------------- b = GameModeRelaxedButton(ev_manager=self.ev_manager) self.b = b print b.rect c.add(b, 0, 200) #Mode Time Challenge Button #--------------------------------------- b = TimeChallengeButton(ev_manager=self.ev_manager) self.b = b print b.rect c.add(b, 0, 250) #Mode Think Ahead Button #--------------------------------------- # b = PlanAheadButton(ev_manager=self.ev_manager) # self.b = b # print b.rect # c.add(b, 0, 300) #Initialize #--------------------------------------- self.gui_app.init(c) def render(self, surface): surface.blit(self.background, (0, 0)) #GUI self.gui_app.paint(surface) class GameRoom(Room): def __init__(self, screen, ev_manager): Room.__init__(self, screen, ev_manager) #Game mode #--------------------------------------- self.new_board_timer = None self.game_mode = config.game_mode config.current_highs = self.get_highs_table() self.highs_dialog = None self.game_over = False #Images #--------------------------------------- self.background = pygame.image.load('assets/images/interface/game screen2-1.jpg').convert() self.logo = pygame.image.load('assets/images/interface/logo_small.png').convert_alpha() self.game_over_text = pygame.image.load('assets/images/interface/text_game_over.png').convert_alpha() self.trip_complete_text = 
pygame.image.load('assets/images/interface/text_trip_complete.png').convert_alpha() self.zoom_game_over = None self.zoom_trip_complete = None self.fade_out = None #Text #--------------------------------------- self.font = pygame.font.Font(config.font_sans, config.interface_font_size) #Create game components #--------------------------------------- self.continent = self.set_continent(config.continent) self.board = Board(config.board_size, self.ev_manager) self.deck = Deck(self.ev_manager, self.continent) self.map = Map(self.continent) self.longest_trip = 0 #Set pos of game components #--------------------------------------- board_pos = (SCREEN_MARGIN[0], 109) self.board.set_pos(board_pos) map_pos = (config.screen_size[0] - self.map.size[0] - SCREEN_MARGIN[0], 57); self.map.set_pos(map_pos) #Trackers #--------------------------------------- self.game_clock = Chrono(self.ev_manager) self.swap_counter = 0 self.level = 0 #Create gui #--------------------------------------- self.create_gui() #Create initial board #--------------------------------------- self.new_board = self.deck.deal_new_board(self.board) self.ev_manager.post(NewBoardComplete(self.new_board)) def set_continent(self, continent_const): #Set continent based on const if continent_const == config.EUROPE: return continents.Europe() if continent_const == config.AFRICA: return continents.Africa() else: raise Exception('Continent constant not recognized') #Create pgu gui elements def create_gui(self): #Setup #--------------------------------------- self.gui_form = gui.Form() self.gui_app = gui.App(config.gui_theme) self.ev_manager.gui_app = self.gui_app c = gui.Container(align=-1,valign=-1) #Timer Progress bar #--------------------------------------- self.timer_bar = None self.time_increase = None self.minutes_left = None self.seconds_left = None self.timer_text = None if self.game_mode == config.TIME_CHALLENGE: self.time_increase = config.time_challenge_start_time self.timer_bar = gui.ProgressBar(config.time_challenge_start_time,0,config.max_time_bank,width=306) c.add(self.timer_bar, 172, 57) #Connections Progress bar #--------------------------------------- self.connections_bar = None self.connections_bar = gui.ProgressBar(0,0,config.longest_trip_needed,width=306) c.add(self.connections_bar, 172, 83) #Quit Button #--------------------------------------- b = QuitButton(ev_manager=self.ev_manager) c.add(b, 950, 20) #Generate Board Button #--------------------------------------- b = GenerateBoardButton(ev_manager=self.ev_manager, room=self) c.add(b, 500, 20) #Board Size? #--------------------------------------- bs = SetBoardSizeContainer(config.BOARD_LARGE, ev_manager=self.ev_manager, board=self.board) c.add(bs, 640, 20) #Fill Board? #--------------------------------------- t = FillBoardCheckbox(config.fill_board, ev_manager=self.ev_manager) c.add(t, 740, 20) #Darkness? 
#--------------------------------------- t = UseDarknessCheckbox(config.use_darkness, ev_manager=self.ev_manager) c.add(t, 840, 20) #Initialize #--------------------------------------- self.gui_app.init(c) def advance_level(self): self.level += 1 print 'Advancing to next level' print 'New level: ' + str(self.level) if self.timer_bar: print 'Time increase: ' + str(self.time_increase) self.timer_bar.value += self.time_increase self.time_increase = max(config.min_advance_time, int(self.time_increase * 0.9)) self.board = self.new_board self.new_board = None self.zoom_trip_complete = None self.game_clock.unpause() def notify(self, event): #Tick event if isinstance(event, TickEvent): pygame.display.set_caption('FPS: ' + str(int(event.fps))) self.render(self.screen) pygame.display.update() #Wait to deal new board when advancing levels if self.zoom_trip_complete and self.zoom_trip_complete.finished: self.zoom_trip_complete = None self.ev_manager.post(UnfreezeCards()) self.new_board = self.deck.deal_new_board(self.board) self.ev_manager.post(NewBoardComplete(self.new_board)) #New high score? if self.zoom_game_over and self.zoom_game_over.finished and not self.highs_dialog: if config.current_highs.check(self.level) != None: self.zoom_game_over.visible = False data = 'time:' + str(self.game_clock.time) + ',swaps:' + str(self.swap_counter) self.highs_dialog = HighScoreDialog(score=self.level, data=data, ev_manager=self.ev_manager) self.highs_dialog.open() elif not self.fade_out: self.fade_out = FadeOut(self.ev_manager, config.TITLE_SCREEN) #Second event elif isinstance(event, SecondEvent): if self.timer_bar: if not self.game_clock.paused: self.timer_bar.value -= 1 if self.timer_bar.value <= 0 and not self.game_over: self.ev_manager.post(GameOver()) self.minutes_left = self.timer_bar.value / 60 self.seconds_left = self.timer_bar.value % 60 if self.seconds_left < 10: leading_zero = '0' else: leading_zero = '' self.timer_text = ''.join(['Time Left: ', str(self.minutes_left), ':', leading_zero, str(self.seconds_left)]) #Game over elif isinstance(event, GameOver): self.game_over = True self.zoom_game_over = ZoomImage(self.ev_manager, self.game_over_text) #Trip complete event elif isinstance(event, TripComplete): print 'You did it!' 
self.game_clock.pause() self.zoom_trip_complete = ZoomImage(self.ev_manager, self.trip_complete_text) self.new_board_timer = Timer(self.ev_manager, 2) self.ev_manager.post(FreezeCards()) print 'Room posted newboardcomplete' #Board Refresh Complete elif isinstance(event, BoardRefreshComplete): if event.board == self.board: print 'Longest trip needed: ' + str(config.longest_trip_needed) print 'Your longest trip: ' + str(self.board.longest_trip) if self.board.longest_trip >= config.longest_trip_needed: self.ev_manager.post(TripComplete()) elif event.board == self.new_board: self.advance_level() self.connections_bar.value = self.board.longest_trip self.connection_text = ' '.join(['Connections:', str(self.board.longest_trip), '/', str(config.longest_trip_needed)]) #CardSwapComplete elif isinstance(event, CardSwapComplete): self.swap_counter += 1 elif isinstance(event, ConfigChangeBoardSize): config.board_size = event.new_size elif isinstance(event, ConfigChangeCardSize): config.card_size = event.new_size elif isinstance(event, ConfigChangeFillBoard): config.fill_board = event.new_value elif isinstance(event, ConfigChangeDarkness): config.use_darkness = event.new_value def render(self, surface): #Background surface.blit(self.background, (0, 0)) #Map self.map.render(surface) #Board self.board.render(surface) #Logo surface.blit(self.logo, (10,10)) #Text connection_text = self.font.render(self.connection_text, True, BLACK) surface.blit(connection_text, (25, 84)) if self.timer_text: timer_text = self.font.render(self.timer_text, True, BLACK) surface.blit(timer_text, (25, 64)) #GUI self.gui_app.paint(surface) if self.zoom_trip_complete: self.zoom_trip_complete.render(surface) if self.zoom_game_over: self.zoom_game_over.render(surface) if self.fade_out: self.fade_out.render(surface) class HighScoresRoom(Room): def __init__(self, screen, ev_manager): Room.__init__(self, screen, ev_manager) self.background = pygame.image.load('assets/images/interface/background.jpg').convert() #Initialize #--------------------------------------- self.gui_app = gui.App(config.gui_theme) self.ev_manager.gui_app = self.gui_app c = gui.Container(align=0,valign=0) #High Scores Table #--------------------------------------- hst = HighScoresTable() c.add(hst, 0, 0) self.gui_app.init(c) def render(self, surface): surface.blit(self.background, (0, 0)) #GUI self.gui_app.paint(surface) game_components.py #Remote imports import pygame from pygame.locals import * import random import operator from copy import copy from math import sqrt, floor #Local imports import config from events import * from matrix import Matrix from textrect import render_textrect, TextRectException from hyphen import hyphenator from textwrap2 import TextWrapper ############################## #CONSTANTS ############################## SCREEN_MARGIN = (10, 10) #Colors BLACK = (0, 0, 0) WHITE = (255, 255, 255) RED = (255, 0, 0) YELLOW = (255, 200, 0) #Directions LEFT = -1 RIGHT = 1 UP = 2 DOWN = -2 #Cards CARD_MARGIN = (10, 10) CARD_PADDING = (2, 2) #Card types BLANK = 0 COUNTRY = 1 TRANSPORT = 2 #Transport types PLANE = 0 TRAIN = 1 CAR = 2 SHIP = 3 class Timer(object): def __init__(self, ev_manager, time_left): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.time_left = time_left self.paused = False def __repr__(self): return str(self.time_left) def pause(self): self.paused = True def unpause(self): self.paused = False def notify(self, event): #Pause Event if isinstance(event, Pause): self.pause() #Unpause Event elif isinstance(event, 
Unpause): self.unpause() #Second Event elif isinstance(event, SecondEvent): if not self.paused: self.time_left -= 1 class Chrono(object): def __init__(self, ev_manager, start_time=0): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.time = start_time self.paused = False def __repr__(self): return str(self.time_left) def pause(self): self.paused = True def unpause(self): self.paused = False def notify(self, event): #Pause Event if isinstance(event, Pause): self.pause() #Unpause Event elif isinstance(event, Unpause): self.unpause() #Second Event elif isinstance(event, SecondEvent): if not self.paused: self.time += 1 class Map(object): def __init__(self, continent): self.map_image = pygame.image.load(continent.map).convert_alpha() self.map_text = pygame.image.load(continent.map_text).convert_alpha() self.pos = (0, 0) self.set_color() self.map_image = pygame.transform.smoothscale(self.map_image, config.map_size) self.size = self.map_image.get_size() def set_pos(self, pos): self.pos = pos def set_color(self): image_pixel_array = pygame.PixelArray(self.map_image) image_pixel_array.replace(config.GRAY1, config.COLOR1) image_pixel_array.replace(config.GRAY2, config.COLOR2) image_pixel_array.replace(config.GRAY3, config.COLOR3) image_pixel_array.replace(config.GRAY4, config.COLOR4) image_pixel_array.replace(config.GRAY5, config.COLOR5)
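
    On the "only update the areas that changed" question, the usual pygame pattern is to collect the rectangles you actually touched this frame and pass them to pygame.display.update() instead of redrawing and flipping the whole screen. A minimal self-contained sketch of the idea, not wired into the game above:

        import pygame

        pygame.init()
        screen = pygame.display.set_mode((640, 480))
        ball = pygame.Surface((20, 20))
        ball.fill((255, 200, 0))
        pos = pygame.Rect(0, 230, 20, 20)
        clock = pygame.time.Clock()

        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False

            old = pygame.Rect(pos)          # remember where the ball was
            pos.x = (pos.x + 4) % 640       # move it

            screen.fill((0, 0, 0), old)     # repaint only the vacated area
            dirty = [old, screen.blit(ball, pos)]
            pygame.display.update(dirty)    # update just those two rects, no full flip
            clock.tick(60)

        pygame.quit()

    pygame.sprite.RenderUpdates does this bookkeeping automatically for sprite groups, which may be the easier thing to retrofit onto the render routines in game_components.py.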

    Read the article

  • How much do multiple style sheets slow down a website?

    - by metal-gear-solid
    Here are 3 CSS files (one is only for IE):

        <link rel="stylesheet" href="css/blueprint/screen.css" type="text/css" media="screen, projection">
        <link rel="stylesheet" href="css/blueprint/print.css" type="text/css" media="print">
        <!--[if lt IE 8]><link rel="stylesheet" href="css/blueprint/ie.css" type="text/css" media="screen, projection"><![endif]-->

    If I divide screen.css up on my website, there will now be 6 CSS files (one is only for IE):

        <link rel="stylesheet" href="css/blueprint/reset.css" type="text/css" media="screen, projection">
        <link rel="stylesheet" href="css/blueprint/grid.css" type="text/css" media="screen, projection">
        <link rel="stylesheet" href="css/blueprint/typography.css" type="text/css" media="screen, projection">
        <link rel="stylesheet" href="css/blueprint/forms.css" type="text/css" media="screen, projection">
        <link rel="stylesheet" href="css/blueprint/print.css" type="text/css" media="print">
        <!--[if lt IE 8]><link rel="stylesheet" href="css/blueprint/ie.css" type="text/css" media="screen, projection"><![endif]-->

    If I go with the second approach for a website, even in production, does it really slow down page loading? If yes, by how much? How much will these 3 extra style sheets affect site performance?

    Read the article

  • Assert.AreEqual() Exception in VS2010

    - by Tom Miller
    I am fairly new to unit testing and am using VS2010 to develop in and run my tests. I have a simple test, illustrated below, that simply compares two System.Data.DataTableReader objects. I know that they are equal, as they are both created using the same object types and the same input file, and I have verified that the objects "look" the same. I realize I may be dealing with a couple of issues: one is whether or not this is the proper use of Assert.AreEqual, or even the proper way to test this scenario; the other, and the main issue, is why this test fails with this exception:

        Failed    00:00:00.1000660    0
        Assert.AreEqual failed. Expected:<System.Data.DataTableReader>. Actual:<System.Data.DataTableReader>.

    Here is the unit test code that is failing:

        public void EntriesTest()
        {
            AuditLog target = new AuditLog();
            target.Init();
            DataSet ds = new DataSet();
            ds.ReadXml(TestContext.DataRow["AuditLogPath"].ToString());
            DataTableReader expected = ds.Tables[0].CreateDataReader();
            DataTableReader actual = target.Entries.Tables[0].CreateDataReader();
            Assert.AreEqual<DataTableReader>(expected, actual);
        }

    Any help would be greatly appreciated!
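
    For what it's worth, Assert.AreEqual on two DataTableReader instances falls back to reference equality (the type does not override Equals), so two readers over identical data will never compare equal. A hedged sketch of comparing the underlying tables value by value instead; AssertTablesAreEqual is a helper invented for the example:

        private static void AssertTablesAreEqual(DataTable expected, DataTable actual)
        {
            Assert.AreEqual(expected.Rows.Count, actual.Rows.Count, "row counts differ");
            for (int i = 0; i < expected.Rows.Count; i++)
            {
                // ItemArray exposes the row's cell values, so this compares contents, not references.
                CollectionAssert.AreEqual(expected.Rows[i].ItemArray, actual.Rows[i].ItemArray,
                    "row " + i + " differs");
            }
        }

    In EntriesTest this would replace the final assertion, e.g. AssertTablesAreEqual(ds.Tables[0], target.Entries.Tables[0]);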

    Read the article

  • How can I beta test web Perl modules under Apache/mod_perl on a production web server?

    - by DVK
    We have a setup where most code, before being promoted to full production, is deployed in BETA mode - meaning, it runs in full production environment (using production database - usually production data; and production web server). We call that stage BETA testing. One of the main requirements is that BETA code promotion to production must be a simple "cp" command from beta to production directory - no code/filename changes. For non-web Perl code, achieving seamless BETA test is quite doable (see details here): Perl programs live in a standard location under production root (/usr/code/scripts) with production perl modules living under the same root (/usr/code/lib/perl) The BETA code has 100% same code paths except under beta root (/usr/code/beta/) A special module manipulates @INC of any script based on whether the script was called from /usr/code/scripts or /usr/code/test/scripts, to include beta libraries for beta scripts. This setup works fine up till we need to beta test our web Perl code (the setup is EmbPerl and Apache/mod_perl). The hang-up is as follows: if both a production Perl module and BETA Perl module have the same name (e.g. /usr/code/lib/perl/MyLib1.pm and /usr/code/beta/lib/perl/MyLib1.pm), then mod_perl will only be able to load ONE of these modules into memory - and there's no way we are aware of for a particular web page to affect which version of the module is currently loaded due to concurrency issues. Leaving aside the obvious non-programming solution (get a bloody BETA web server) which for political/organizational reasons is not feasible, is there any way we can somehow hack around this problem in either Perl or mod_perl? I played around with various approaches to unloading Perl modules that %INC has listed, but the problem remains that another user might load a beta page at just the right (or rather wrong) moment and have the beta module loaded which will be used for my production page.

    Read the article

  • Stress testing ASP.NET/IIS with WCAT

    - by MartinHN
    I'm trying to setup a stress/load test using the WCAT toolkit included in the IIS Resources. Using LogParser, I've processed a UBR file with configuration. It looks something like this: [Configuration] NumClientMachines: 1 # number of distinct client machines to use NumClientThreads: 100 # number of threads per machine AsynchronousWait: TRUE # asynchronous wait for think and delay Duration: 5m # length of experiment (m = minutes, s = seconds) MaxRecvBuffer: 8192K # suggested maximum received buffer ThinkTime: 0s # maximum think-time before next request WarmupTime: 5s # time to warm up before taking statistics CooldownTime: 6s # time to cool down at the end of the experiment [Performance] [Script] SET RequestHeader = "Accept: */*\r\n" APP RequestHeader = "Accept-Language: en-us\r\n" APP RequestHeader = "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705)\r\n" APP RequestHeader = "Host: %HOST%\r\n" NEW TRANSACTION classId = 1 NEW REQUEST HTTP ResponseStatusCode = 200 Weight = 45117 verb = "GET" URL = "http://Url1.com" NEW TRANSACTION classId = 3 NEW REQUEST HTTP ResponseStatusCode = 200 Weight = 13662 verb = "GET" URL = "http://Url1.com/test.aspx" Does it look OK? I execute the controller with this command: wcctl -z StressTest.ubr -a localhost The Client(s) is executed like this: wcclient localhost When the client is executed, I get this error: main client thread Connect Attempt 0 Failed. Error = 10061 Has anyone in this world ever used WCAT?

    Read the article

  • How should I test a genetic algorithm?

    - by James Brooks
    I have made quite a few genetic algorithms; they work (they find a reasonable solution quickly). But I have now discovered TDD. Is there a way to write a genetic algorithm (which relies heavily on random numbers) in a TDD way? To pose the question more generally: how do you test a non-deterministic method/function? Here is what I have thought of:
    Use a specific seed. This won't help if I make a mistake in the code in the first place, but it will help find bugs when refactoring.
    Use a known list of numbers. Similar to the above, but I could follow the code through by hand (which would be very tedious).
    Use a constant number. At least I know what to expect. It would be good to ensure that a die always reads 6 when RandomFloat(0,1) always returns 1.
    Try to move as much of the non-deterministic code out of the GA as possible, which seems silly as that is the core of its purpose.
    Links to very good books on testing would be appreciated too.
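
    A small sketch of the "specific seed" option in Python's unittest; run_ga here is a made-up stand-in that takes its random source as a parameter, which is the injection point that makes both checks deterministic:

        import random
        import unittest

        def run_ga(rng, generations=50):
            # Stand-in for a real GA: evolve ten numbers toward a target of 1.0.
            population = [rng.random() for _ in range(10)]
            for _ in range(generations):
                population.sort(key=lambda x: abs(1.0 - x))
                parent = population[0]
                population = [min(1.0, max(0.0, parent + rng.gauss(0, 0.05)))
                              for _ in range(10)]
            return max(population)

        class GaSeedTest(unittest.TestCase):
            def test_fixed_seed_is_reproducible(self):
                # Same seed, same run: catches regressions introduced while refactoring.
                self.assertEqual(run_ga(random.Random(42)), run_ga(random.Random(42)))

            def test_reaches_reasonable_fitness(self):
                # With a known seed, "finds a reasonable solution" becomes a testable claim.
                self.assertGreater(run_ga(random.Random(42)), 0.9)

        if __name__ == "__main__":
            unittest.main()

    The same injection point also covers the "known list of numbers" option: pass in an object whose random() pops values from a fixed list.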

    Read the article

  • Where are the factory_girl records?

    - by gmile
    I'm trying to perform an integration test via Watir and RSpec. So I created a test file under /integration and wrote a test which adds a test user to the database via factory_girl. The problem is that I can't actually log in with my test user. The test I wrote looks as follows:

        ...
        before(:each) do
          @user = Factory(:user)
          @browser = FireWatir::Firefox.new
        end

        it "should login" do
          @browser.text_field(:id, "username").set(@user.username)
          @browser.text_field(:id, "password").set(@user.password)
          @browser.button(:id, "get_in").click
        end
        ...

    As I start the test and watch the "performance" in the browser, it always fires a "Username is not valid" error. I started investigating and tried a small trick. First of all, I had doubts about whether the factory actually creates the user in the DB, so right after the factory call I put some puts User.find output, only to discover that the user is actually in the DB. OK, but since the user still couldn't log in, I decided to see with my own eyes whether he's present in the DB. I added a sleep right after the factory call and went to look at what's in the DB at that moment. I was crushed to see that the user is actually missing there! How come? Still, when I output the user from within the code, he is actually being fetched from somewhere. So where do the records made by factory_girl at runtime live? Is it the test or the dev DB? I don't get it. I've checked ten times that I'm running my Mongrel in test mode (does it matter? I think it does, as I'm trying to run an integration test) and that my database.yml holds the correct connection-specific data. I'm using Authlogic, if that gives any clue (no, putting activate_authlogic doesn't work here).

    Read the article

  • Problem with Authlogic and Unit/Functional Tests in Rails

    - by mmacaulay
    I'm learning how unit testing is done in Rails, and I've run into a problem involving Authlogic. According to the Documentation there are a few things required to use Authlogic stuff in your tests: test_helper.rb: require "authlogic/test_case" class ActiveSupport::TestCase setup :activate_authlogic end Then in my functional tests I can login users: UserSession.create(users(:tester)) The problem seems to stem from the setup :activate_authlogic line in test_helper.rb, whenever that is included, I get the following errors when running functional tests: NoMethodError: undefined method `request=' for nil:NilClass authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:63:in `send' authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:63:in `method_missing' If I remove setup :activate_authlogic and add instead Authlogic::Session::Base.controller = Authlogic::ControllerAdapters::RailsAdapter.new(self) to test_helper.rb, my functional tests seem to work but now my unit tests fail: NoMethodError: undefined method `params' for ActiveSupport::TestCase:Class authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:30:in `params' authlogic (2.1.3) lib/authlogic/session/params.rb:96:in `params_credentials' authlogic (2.1.3) lib/authlogic/session/params.rb:72:in `params_enabled?' authlogic (2.1.3) lib/authlogic/session/params.rb:66:in `persist_by_params' authlogic (2.1.3) lib/authlogic/session/callbacks.rb:79:in `persist' authlogic (2.1.3) lib/authlogic/session/persistence.rb:55:in `persisting?' authlogic (2.1.3) lib/authlogic/session/persistence.rb:39:in `find' authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:96:in `get_session_information' authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:95:in `each' authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:95:in `get_session_information' /test/unit/user_test.rb:23:in `test_should_save_user_with_email_password_and_confirmation' What am I doing wrong?

    Read the article

  • How to set EnqueueCallback for my generic callback

    - by CrazyJoe
    using System; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using Microsistec.Domain; using Microsistec.Client; using Microsoft.VisualStudio.TestTools.UnitTesting; using System.Collections.Generic; using Microsistec.Tools; using System.Json; using Microsistec.SystemConfig; using System.Threading; using Microsoft.Silverlight.Testing; namespace Test { [TestClass] public class SampleTest : SilverlightTest { [TestMethod, Asynchronous] public void login() { List<PostData> data = new List<PostData>(); data.Add(new PostData("email", "xxx")); data.Add(new PostData("password", MD5.GetHashString("xxx"))); WebClient.sendData(Config.DataServerURL + "/user/login", data, LoginCallBack); EnqueueCallback(?????????); EnqueueTestComplete(); } [Asynchronous] public void LoginCallBack(object sender, System.Net.UploadStringCompletedEventArgs e) { string json = Microsistec.Client.WebClient.ProcessResult(e); var result = JsonArray.Parse(json); Assert.Equals("1", result["value"].ToString()); TestComplete(); } } Im tring to set ???????? value but my callback is generic, it is setup on my WebClient .SendData, how i implement my EnqueueCallback to a my already functio LoginCallBack???

    Read the article
