Search Results

Search found 8354 results on 335 pages for 'welton v3 50'.


  • How to refactor a myriad of similar classes

    - by TobiMcNamobi
    I'm faced with similar classes A1, A2, ..., A100. Believe it or not, there are roughly a hundred classes that look almost the same. None of these classes are unit tested (of course ;-) ). Each of these classes is about 50 lines of code, which is not too much by itself, but together it is far too much duplicated code. I'm considering the following options:

    1. Write tests for A1, ..., A100, then refactor by creating an abstract base class AA. Pro: the tests make me (nearly) totally safe against breaking anything. Con: much effort, and duplication of test code.

    2. Write tests for A1 and A2, abstract the duplicated test code, and use that abstraction to create the rest of the tests. Then create AA as in option 1. Pro: less effort than option 1 while maintaining a similar degree of safety. Con: I find generalized test code weird; it often seems incoherent. Normally I prefer specialized test code for specialized classes, but that requires a good design, which is the goal of this whole refactoring.

    3. Write AA first and test it with mock classes, then derive A1, ..., A100 from it successively. Pro: the fastest way to eliminate the duplication. Con: most Ax classes look very much alike, but where they don't, there is a danger of changing behaviour by inheriting from AA.

    4. Other options ...

    At first I went for option 3 because the Ax classes really are very similar to each other, but now I'm a bit unsure whether this is the right way (from a unit-testing enthusiast's perspective).
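    A minimal sketch of what option 2 could look like (not from the original question; JUnit 4 is assumed on the classpath, and every class, method and value below is hypothetical): an abstract base class AA holds the shared logic, and an abstract test base is parameterized by each concrete test.

    ```java
    // Hypothetical sketch: shared logic in an abstract base class, shared test logic
    // in an abstract test base that each concrete AxTest only parameterizes.
    public abstract class AA {
        // The part that differs between A1..A100 is pushed into one hook method.
        protected abstract int factor();

        public int compute(int input) {
            return input * factor();
        }
    }

    class A1 extends AA {
        @Override
        protected int factor() { return 1; }
    }

    // Abstract test base (JUnit 4): subclasses supply the instance and the expectation.
    abstract class AATestBase {
        protected abstract AA createInstance();
        protected abstract int expectedFor(int input);

        @org.junit.Test
        public void computeMatchesExpectation() {
            org.junit.Assert.assertEquals(expectedFor(3), createInstance().compute(3));
        }
    }

    class A1Test extends AATestBase {
        @Override
        protected AA createInstance() { return new A1(); }

        @Override
        protected int expectedFor(int input) { return input; }
    }
    ```

    Each additional AxTest then only states what is specific to Ax, which keeps the generalized test code close to the specialized classes it covers.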

    Read the article

  • suggestions for lonewolf dev setup

    - by d33j
    I'm looking for some suggestions for a better development setup. Background: I'm a crusty old software engineer (mostly Java of late) and I have around 50-100 incomplete Java projects scattered everywhere: USB keys, HDDs, and spanning across 5 or 6 computers, which have been put on hold for a few years (i.e. family). I have no version control at home. I've been using IntelliJ for around 10 years, so that's the only constant. I'm thinking of nominating one machine as a headless server to put all my projects on, maybe an Ubuntu box; that way it won't matter which device I'm on, all my projects can be accessed (and I don't have to waste time actually looking for them). I don't need to access code over the net. These are my own 'happy place' projects, so I only work on them when I'm at home; however, I can see the benefit of the tasking app being online, so that if I think of something while on public transport, let's say, I can add it then & there, but it's not a requirement. I can wait until I get home to create tasks. Summary: I need some sort of version control so I can roll back mistakes, and some sort of simple tasking software where I can assign tasks to myself for later when I get time. I use Subversion, Sonar, Jira and Crucible at work, but I think that's a little bit of overkill for me. What do you suggest?

    Read the article

  • Why does 12.04 try but fail to hibernate, even after I enabled hibernation?

    - by Roger Davis
    Regarding the info from another post quoted below (between the dashed lines), my system will try (as described) to hibernate, but it won't get all the way there. Hard drive activity stops, but it does not shut down. If I turn the power off, then back on, it will start, but I have to "restore previous session" in the browser, and other open apps don't restart, with the accompanying hassle. So because it tries, will the suggested fix then cause it to work correctly, or is the literal "try" not really what he means?!? PLEASE NOTE - this is a desktop system, not a laptop.

    -----

    Before enabling hibernation, please try to test whether it works correctly by running pm-hibernate in a terminal. The system will try to hibernate. If you are able to start the system again, then you are more or less safe to add an override. To do so, start editing:

        sudo nano /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla

    Fill it with this:

        [Re-enable hibernate by default]
        Identity=unix-user:*
        Action=org.freedesktop.upower.hibernate
        ResultActive=yes

    Save by pressing Ctrl-O and exit nano by pressing Ctrl-X. Restart and hibernation is back!

    -----

    Read the article

  • Where did the notion of "one return only" come from?

    - by FredOverflow
    I often talk to Java programmers who say "Don't put multiple return statements in the same method." When I ask them to tell me the reasons why, all I get is "The coding standard says so." or "It's confusing." When they show me solutions with a single return statement, the code looks uglier to me. For example:

        if (blablabla)
            return 42;
        else
            return 97;

    "This is ugly, you have to use a local variable!"

        int result;
        if (blablabla)
            result = 42;
        else
            result = 97;
        return result;

    How does this 50% code bloat make the program any easier to understand? Personally, I find it harder, because the state space has just increased by another variable that could easily have been prevented. Of course, normally I would just write:

        return (blablabla) ? 42 : 97;

    But the conditional operator gets even less love among Java programmers. "It's incomprehensible!" Where did this notion of "one return only" come from, and why do people adhere to it rigidly?
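    For what it's worth, here is a hedged sketch (hypothetical class and method, not from the question) of the guard-clause style that multiple-return advocates usually point to; the early returns keep the nesting flat and each condition local, without introducing an extra result variable.

    ```java
    // Hypothetical example: guard clauses with early returns instead of a single exit point.
    public final class Discounts {
        static int discountPercent(boolean isMember, int yearsActive) {
            if (!isMember) return 0;          // not a member: no discount, done
            if (yearsActive >= 10) return 20; // long-time member
            return 10;                        // regular member
        }

        public static void main(String[] args) {
            System.out.println(discountPercent(true, 12)); // 20
            System.out.println(discountPercent(false, 3)); // 0
        }
    }
    ```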

    Read the article

  • Calculate travel time on road map with semaphores

    - by Ivansek
    I have a road map with intersections. At intersections there are traffic lights (semaphores). For each light I generate a red-light time and a green-light time, represented with the syntax [R:T1, G:T2], for example:

            119                 185                 250
        A ------- B: [R:6, G:4] ------ C: [R:5, G:5] ------ D

    I want to calculate a car's travel time from A to D. Now I do this with this pseudo code:

        function get_travel_time(semaphores_configuration) {
            time = 0;
            for (i = 1; i < path.length; i++) {
                prev_node = path[i-1];
                next_node = path[i];
                cost = cost_between(prev_node, next_node);
                time += (cost / movement_speed);  // movement_speed = 50px per second

                light_times = get_light_times(path[i], semaphore_configurations);
                lights_cycle = get_lights_cycle(light_times);  // e.g. [R,R,R,G,G,G,G] for [R:3, G:4]
                lights_sum = light_times.green_time + light_times.red_light;  // lights cycle time
                light = lights_cycle[cost % lights_sum];
                if (light == "R") {
                    time += light_times.red_light;
                }
            }
            return time;
        }

    So for the distance of 119 between A and B the travel time is 119/50 = 2.38 s (the exactly measured time is between 2.5 s and 2.6 s); then we add time if we arrive at a red light at B. Whether we arrived at a red light is calculated with these lines:

        lights_cycle = get_lights_cycle(light_times)  // e.g. [R,R,R,G,G,G,G] for [R:3, G:4]
        lights_sum = light_times.green_time + light_times.red_light
        light = lights_cycle[cost % lights_sum];
        if (light == "R") {
            time += light_times.red_light;
        }

    This pseudo code doesn't calculate exactly the same times as are measured, but the calculations are very close to them. Any idea how I would calculate this?
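    One common refinement (a hedged sketch, not taken from the original post; the method and parameter names are invented) is to index the light's cycle by the arrival time rather than by the distance, and to add only the remaining red time instead of the full red phase:

    ```java
    // Hypothetical sketch: wait time at a traffic light, given the arrival time and a
    // cycle that is assumed to start with the red phase at t = 0.
    public final class TrafficLightWait {
        static double waitTime(double arrivalSeconds, double redSeconds, double greenSeconds) {
            double cycle = redSeconds + greenSeconds;
            double phase = arrivalSeconds % cycle;           // position inside the current cycle
            return phase < redSeconds ? redSeconds - phase   // arrived during red: wait out the remainder
                                      : 0.0;                 // arrived during green: no wait
        }

        public static void main(String[] args) {
            // A car travelling 119 px at 50 px/s reaches B after 2.38 s; B has [R:6, G:4].
            double arrival = 119.0 / 50.0;
            System.out.println(waitTime(arrival, 6, 4));     // 3.62 s of remaining red
        }
    }
    ```

    Accumulating the travel time this way (segment time plus the remaining red time at each light, using the running total as the arrival time for the next light) should track the measured times more closely than adding the full red phase on every red arrival.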

    Read the article

  • Oracle Customer Experience (CX) Solutions Make Retailers Merry

    - by Tuula Fai
    'Tis the season to be jolly. If you're a retailer, your level of jolliness depends on sales. So you watch trends like U.S. store traffic increasing 3.5% to 308 million on Black Friday while sales actually fell 1.8% to $11.2 billion. Fortunately, by the end of November, retail sales were up 3.7% over the previous year, thanks to life recovering after Hurricane Sandy. And online sales topped $1 billion for the first time ever! Who are the companies improving their sales online? They are big names like Walgreen's Drugstore.com, Nordstrom's HauteLook, and Intuit. More importantly, how are they doing it? They use cutting-edge business practices enabled by Oracle's CX Cloud Service & Support solutions to:

    - Increase conversion rates and order sizes (Customer Acquisition)
    - Enhance customer satisfaction and loyalty (Customer Retention)
    - Reduce contact center costs and improve agent productivity (Operational Efficiency)

    Acquisition + Retention + Operational Efficiency = Sustainable Growth and Profits. That's the magic formula for retail customer service success. Don't take our word for it. Look at the results of these Oracle customers:

    - Walgreen's Drugstore.com: 30% sales conversion rate on chat sessions, with a 20% increase in shopping cart size
    - Nordstrom's HauteLook: 40,000+ interactions per month (20% growth over last year), efficiently managed by 40 agents, with no increase in IT costs
    - Intuit: 50% increase in customer satisfaction and 70% decrease in cost per interaction

    Using Oracle's CX Cloud & Service solutions, these retailers deliver consistent, relevant, and personalized experiences across all touchpoints, including social, mobile, and web. Their ability to connect with customers anytime, anywhere, providing the right answer at the right time, helps them create a defensible advantage in the marketplace. Want to learn more? Please visit http://www.oracle.com/goto/cloudlaunchpad for free resources on delivering exceptional customer service in the Cloud. Also, watch our YouTube channel to learn more about seamless multichannel retail and Winston Furnishings' exceptional customer experience.

    Read the article

  • Genetic Algorithm new generation exponentially increasing

    - by Rdz
    I'm programming a Genetic Algorithm in C++, and after researching all kinds of ways of doing the GA's operators (selection, crossover, mutation) I came up with a doubt. Let's say I have an initial population of 500. My selection consists of taking the top 20% of those 500 (based on best fitness), so I get 100 individuals to mate. When I do the crossover I'll get 2 children, which together have a 50% chance of surviving. So far so good. I do the mutation, and everything's OK. Now, when I start choosing the next generation, I see that I have a large number of children (4,950 in this case, if you want to know). The thing is, every time I run the GA, if I send all the children to the next generation, the number of individuals per generation increases exponentially. So there must be a way of choosing children to fill a new generation without growing beyond the size of the initial population. What I'm asking is whether there is any way of choosing the children to fill the new generations, OR whether I should somehow choose (and maybe reduce) the parents that mate so I don't end up with so many children. Thanks :)
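    One common answer (a hedged sketch of a so-called (mu + lambda) replacement scheme, not the only valid approach; the fitness representation below is deliberately trivial) is to make survivor selection explicit: produce as many children as you like, then cut the combined pool of parents and children back down to the original population size every generation.

    ```java
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Random;

    // Hypothetical sketch: keep the population size fixed by selecting the best
    // POPULATION_SIZE individuals from parents plus offspring each generation.
    public final class GenerationReplacement {
        static final int POPULATION_SIZE = 500;
        static final Random RNG = new Random();

        // Individuals are reduced to their fitness value here; a real GA carries a genome too.
        static List<Double> nextGeneration(List<Double> parents, List<Double> offspring) {
            List<Double> pool = new ArrayList<>(parents);
            pool.addAll(offspring);
            pool.sort(Comparator.reverseOrder());                     // best fitness first
            return new ArrayList<>(pool.subList(0, POPULATION_SIZE)); // truncate to fixed size
        }

        public static void main(String[] args) {
            List<Double> parents = new ArrayList<>();
            for (int i = 0; i < POPULATION_SIZE; i++) parents.add(RNG.nextDouble());

            List<Double> offspring = new ArrayList<>();
            for (int i = 0; i < 4950; i++) offspring.add(RNG.nextDouble());

            System.out.println(nextGeneration(parents, offspring).size()); // always 500
        }
    }
    ```

    The generation size then stays constant no matter how many children the crossover produces; alternatively, generate exactly one child per selected pair so that the offspring count never exceeds the population size in the first place.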

    Read the article

  • How do I separate model positions from view positions in MVC?

    - by tieTYT
    Using MVC in games (as opposed to web apps) always confuses me when it comes to the view. How am I supposed to keep the model agnostic of how the view is presenting things? I always end up giving the model a position that holds x and y, but invariably these values end up being in units of pixels, and that feels wrong. I can see the advantage* of avoiding that, but how am I supposed to? This idea was suggested:

    "Don't think of it in units of pixels; think of them as arbitrary distance units that just happen to map to pixels at a 1:1 ratio. Oh, the resolution is half of what it was? We are now taking the x/y coordinates at 50% value for screen display, and your spell's casting range is still 300 units long, which is now 150 pixels."

    But those numbers conveniently work out. What do I do if the numbers divide in such a way that I get decimal places? Floating points are unsafe. I think allowing decimal places would eventually cause really weird bugs in my game.

    *It'd let me write the model once and write different views depending on the device.
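    A minimal sketch of the separation (invented class names, not from the question): the model stores positions in abstract world units, and only the view-side camera knows how many pixels one unit currently maps to, rounding once at draw time.

    ```java
    // Hypothetical sketch: world units in the model, pixel conversion only in the view.
    public final class WorldToScreen {
        static final class Entity {              // model: no pixels anywhere
            double x, y;                         // world units
            Entity(double x, double y) { this.x = x; this.y = y; }
        }

        static final class Camera {              // view: owns the unit-to-pixel conversion
            final double pixelsPerUnit;
            Camera(double pixelsPerUnit) { this.pixelsPerUnit = pixelsPerUnit; }
            int screenX(Entity e) { return (int) Math.round(e.x * pixelsPerUnit); }
            int screenY(Entity e) { return (int) Math.round(e.y * pixelsPerUnit); }
        }

        public static void main(String[] args) {
            Entity player = new Entity(300, 150);     // spell ranges, speeds, etc. stay in units
            Camera full = new Camera(1.0);            // 1 unit = 1 px
            Camera half = new Camera(0.5);            // half resolution: same model, smaller screen
            System.out.println(full.screenX(player)); // 300
            System.out.println(half.screenX(player)); // 150
        }
    }
    ```

    Keeping model coordinates as doubles (or fixed-point integers if determinism matters) and rounding only at the view boundary means rounding error never feeds back into the game logic.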

    Read the article

  • SQL Bits 7 - 30th September - 2nd October 2010 in York

    In case you haven't heard, we are planning the next SQLBits event, and today we have released the agenda for Friday & Saturday: a total of 50 sessions covering all aspects of SQL Server, with a great selection of speakers. http://www.sqlbits.com/information/Agenda.aspx

    From our recent announcement: "SQLBits 7 will take place over three days from Thursday September 30th to Saturday October 2nd in York. Day one will be a training day, featuring in-depth full-day seminars by leading SQL Server professionals such as Chris Testa-O'Neill and Chris Webb (see http://www.sqlbits.com/information/TrainingDay.aspx for more details); day two will be a deep-dive conference day with advanced sessions delivered by the best speakers from the SQL Server community; and day three will be the traditional SQLBits community conference day, with a wide range of sessions covering all aspects of SQL Server at all levels of ability. There will be a charge to attend days one and two, but day three, Saturday October 2nd, will as usual be completely free to attend, allowing everyone to attend and experience a great day of training even if they have no training budget."

    Full details available at http://www.sqlbits.com.

    Read the article

  • Approach to retrieve files from server

    - by Aerus
    I'm in the process of making a Java application with a corresponding update application. At any given time the user may want to update the application, and the updater will ask for a list of files of the latest release. Based on this list, the updater can determine which files need to be downloaded to complete the update. I now have two approaches to solve this, but I would like to know which approach will put the least stress on my application and server:

    1. I send the list of files I want to download to my server, and the server zips the files and simply returns this compressed file to the application.
    2. The updater sends a request for each separate file to the server, which simply returns the file.

    The application will be used mainly in Belgium and The Netherlands, and connections/bandwidth tend to be pretty decent here. The average size of a single file should be around 100 KB and at most 1 MB. I expect an update to have anywhere between 10 and 50 new files. I expect at most 100 persons/day to update the application, i.e. in the week when a new version is released. I hope this is enough information to sketch my problem, and any advice is welcome. If there is another common way to tackle this, I'd be glad to hear it.
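    Purely as an illustration of the second approach (a hedged sketch, not from the original post: it assumes Java 11+ for java.net.http, and the manifest URL and its "path checksum" line format are invented), the updater can fetch a small manifest and download only the files whose checksum differs from the local copy:

    ```java
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Hypothetical sketch: manifest-driven update that requests each changed file separately.
    public final class Updater {
        static final String BASE_URL = "https://example.com/myapp/latest/"; // assumption
        static final HttpClient HTTP = HttpClient.newHttpClient();

        public static void main(String[] args) throws Exception {
            Path installDir = Path.of("install");
            String manifest = HTTP.send(
                    HttpRequest.newBuilder(URI.create(BASE_URL + "manifest.txt")).build(),
                    HttpResponse.BodyHandlers.ofString()).body();

            for (String line : manifest.split("\n")) {
                if (line.isBlank()) continue;
                String[] parts = line.trim().split("\\s+");      // e.g. "lib/foo.jar 3f5a..." (assumed)
                Path local = installDir.resolve(parts[0]);
                if (Files.exists(local) && sha256(local).equals(parts[1])) continue; // up to date

                if (local.getParent() != null) Files.createDirectories(local.getParent());
                HTTP.send(HttpRequest.newBuilder(URI.create(BASE_URL + parts[0])).build(),
                          HttpResponse.BodyHandlers.ofFile(local)); // fetch only what changed
                System.out.println("updated " + parts[0]);
            }
        }

        static String sha256(Path file) throws Exception {
            byte[] digest = java.security.MessageDigest.getInstance("SHA-256")
                    .digest(Files.readAllBytes(file));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }
    }
    ```

    With 10-50 files of roughly 100 KB each and about 100 updates a day, either approach is light on the server; per-file requests like this keep the server completely static (plain files behind any web server), while the zip approach trades a little server CPU for a single request per update.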

    Read the article

  • Internet unusably slow with Realtek Semiconductor Co., Ltd. RTL8111/8168B card

    - by user42424
    So I have recently installed Ubuntu 11.10 for a dual boot with Windows 7. After the install I had around 300 updates, so I installed them. At first I could use the internet, although it was extremely slow. Now, however, I cannot: sometimes a page will load and other times it will simply time out. When I try to download something it will either take forever or not download at all. This is a wired system. On the Windows side my speeds are fine. Any help would be greatly appreciated. Also, like I said, I am new to Linux/Ubuntu, so please be nice. One last thing: I also installed 11.10 in the same dual-boot setup on my laptop, and the wireless speed there is the same as on Windows. Only the wired desktop gives me the problem. Here is some hardware info; hope it helps.

    Mobo: Gigabyte GA-880GMA - AMD / CPU: AMD Phenom(tm) II X4 965 / 16 GB RAM / Realtek PCIe GBE Family Controller / Cisco Linksys E2000 / Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

        eth0      Link encap:Ethernet  HWaddr 50:e5:49:33:64:cf
                  inet addr:192.168.1.118  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::52e5:49ff:fe33:64cf/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:76722 errors:0 dropped:76722 overruns:0 frame:76722
                  TX packets:49692 errors:0 dropped:65 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:107956638 (107.9 MB)  TX bytes:4342553 (4.3 MB)
                  Interrupt:44 Base address:0x2000

    Thanks to roadmr, problem solved! I powered down the PC, unplugged the power cable at the PC end, waited a few (maybe 3) minutes, plugged the power back in, and pushed and held the power button for 30+ seconds. Let go, powered on the PC, and my internet is fine! Downloads and web speed blaze, just like on my Win 7 boot, maybe even faster. Problem solved, thanks to all!

    Read the article

  • Is there a way to make catalyst driver work in Trusty for the radeon hd4330?

    - by Laurent BERNABE
    Though the official Catalyst 13.1 driver is suitable for the ATI Radeon HD 4330, it can't be installed on Ubuntu 14.04 as it only supports Xorg up to 7.6. As I need the proprietary driver on Trusty, I would like to know if there is a way to bypass this limitation (for example by fetching the driver sources)? Here are some results from the terminal:

        $ Xorg -version
        X.Org X Server 1.15.1
        Release Date: 2014-04-13
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 3.2.0-37-generic x86_64 Ubuntu
        Current Operating System: Linux bordeaux80 3.13.0-27-generic #50-Ubuntu SMP Thu May 15 18:06:16 UTC 2014 x86_64
        Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.13.0-27-generic root=UUID=4015e6f7-d11a-45fd-ac9b-5b6c7ab9eaa0 ro quiet splash vt.handoff=7
        Build Date: 16 April 2014  01:36:29PM
        xorg-server 2:1.15.1-0ubuntu2 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.30.2
        Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.

        $ xrandr
        Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
        LVDS connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 353mm x 198mm
           1366x768       60.0*+
           1280x720       59.9
           1152x768       59.8
           1024x768       59.9
           800x600        59.9
           848x480        59.7
           720x480        59.7
           640x480        59.4
        VGA-0 disconnected (normal left inverted right x axis y axis)
        HDMI-0 disconnected (normal left inverted right x axis y axis)

        $ uname -rp
        3.13.0-27-generic x86_64

        $ glxinfo | grep OpenGL
        OpenGL vendor string: X.Org
        OpenGL renderer string: Gallium 0.4 on AMD RV710
        OpenGL core profile version string: 3.1 (Core Profile) Mesa 10.1.0
        OpenGL core profile shading language version string: 1.40
        OpenGL core profile context flags: (none)
        OpenGL core profile extensions:
        OpenGL version string: 3.0 Mesa 10.1.0
        OpenGL shading language version string: 1.30
        OpenGL context flags: (none)
        OpenGL extensions:

    Regards

    Read the article

  • What "file system" is supported by Windows and Linux?

    - by Skiroid
    I'm setting up a media centre for my living room so that I'm able to watch downloaded films and TV shows on the big screen. The media centre is an old, small computer which will have XBMCbuntu 12 installed on it. Right now, the media centre has a 300 GB HDD partitioned in two:

    1. Ext4, 50 GB (where I'll install the OS)
    2. swap, 6 GB (swap area)

    I want a third partition which I can store all my media on. This partition will fill the rest of the HDD, but I'm stuck on which file system I should give it. I need the file system to be fully compatible with Windows, as I'm going to be removing the HDD from the media centre and plugging it into my main PC, running Windows 8, to transfer the media onto it. I can't transfer over Wi-Fi as the media centre won't be connected to the Internet. My options are: Ext4 journaling, Ext3 journaling, Ext2 journaling, ReiserFS journaling, btrfs journaling, JFS journaling, XFS journaling, FAT16 and FAT32. I know that FAT32 is compatible with Windows, but it can only hold files that are 4 GB or less, and my films are well over 4 GB, some more than 10 GB. Is there a file system I can use which is supported by Linux and pops up under Computer in Windows?

    Read the article

  • No sound after upgrading to Ubuntu 11.10 from win7

    - by Tilman
    Just as a prefix to my question, I'd like to note that I'm just now entering the world of Linux (unless you count my Android, but that's a very different experience...). I have two computers that run Ubuntu 11.10. The first I've had very few problems with, aside from figuring out the basics. The second, from which I'm writing this question, has (up to this point) only had one problem: no sound. I've read a couple of similar questions and found little help, as the component catalog doesn't have my computer listed (in fact I'm not surprised; this is a POS I had my mom grab from her work before they officially closed the doors behind them). It had perfect sound before, and no sound now. sudo lspci -v brings up:

        00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01)
                Subsystem: Intel Corporation Device d608
                Flags: bus master, fast devsel, latency 0, IRQ 45
                Memory at ff980000 (64-bit, non-prefetchable) [size=16K]
                Capabilities: [50] Power Management version 2
                Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
                Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
                Capabilities: [100] Virtual Channel
                Capabilities: [130] Root Complex Link
                Kernel driver in use: HDA Intel
                Kernel modules: snd-hda-intel

    Any help would be greatly appreciated; my girlfriend and I just want to watch a damn movie lol.

    Read the article

  • Can't configure 5.1 audio with 12.04

    - by xster
    I have an Intel ALC892 and an Nvidia GT 520m connected to speakers via HDMI. In lspci, I see:

        00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 02)
                Subsystem: ZOTAC International (MCO) Ltd. Device a218
                Flags: bus master, fast devsel, latency 0, IRQ 47
                Memory at db400000 (64-bit, non-prefetchable) [size=16K]
                Capabilities: [50] Power Management version 2
                Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
                Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
                Capabilities: [100] Virtual Channel

        02:00.1 Audio device: NVIDIA Corporation HDMI Audio stub (rev a1)
                Subsystem: ZOTAC International (MCO) Ltd. Device 2180
                Flags: bus master, fast devsel, latency 0, IRQ 18
                Memory at db080000 (32-bit, non-prefetchable) [size=16K]
                Capabilities: [60] Power Management version 3
                Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
                Capabilities: [78] Express Endpoint, MSI 00
                Kernel driver in use: snd_hda_intel

    My alsamixer looks like this (screenshot not included). I enabled 6 channels in the PulseAudio configuration file. My sound settings look like this (screenshot not included). When I use the test dialog, only front left and front right have sound. If I use ALSA in XBMC on a 5.1 video, there's no sound. If I use PulseAudio, only front right and left have sound. I can barely hear any speech, since I'm guessing it's mapped to the front center channel. Any clues?

    Read the article

  • Collision detection between a sprite and rectangle in canvas

    - by Andy
    I'm building a JavaScript + canvas game which is essentially a platformer. I have the player all set up and he's running, jumping and falling, but I'm having trouble with the collision detection between the player and blocks (the blocks will essentially be the platforms that the player moves on). The blocks are stored in an array like this:

        var blockList = [[50, 400, 100, 100]];

    And drawn to the canvas using this:

        this.draw = function() {
            c.fillRect(blockList[0][0], blockList[0][1], 100, 100);
        }

    I'm checking for collisions using something along these lines in the player object:

        this.update = function() {
            // Check for collisions with blocks
            for (var i = 0; i < blockList.length; i++) {
                if ((player.xpos + 34) > blockList[i][0] && player.ypos > blockList[i][1]) {
                    player.xpos = blockList[i][0] - 28;
                    return false;
                }
            }
            // Other code to move the player based on keyboard input etc
        }

    The idea is that if the player will collide with a block in the next game update (the game uses a main loop running at 60 Hz), the function will return false and exit, thus meaning the player won't move. Unfortunately, that only works when the player hits the left side of the block, and I can't work out how to make the player stop if it hits any side of the block. I have the properties player.xpos and player.ypos to help here.
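    The question's code is JavaScript; purely as an illustration of the side-handling logic (a hedged Java sketch with made-up box sizes), the usual approach is to compute the overlap on each axis and push the player out along the smaller one, which naturally distinguishes top, bottom, left and right hits:

    ```java
    // Hypothetical sketch: axis-aligned bounding box (AABB) collision resolution
    // along the axis of least penetration.
    public final class AabbCollision {
        static final class Box {
            double x, y, w, h;
            Box(double x, double y, double w, double h) { this.x = x; this.y = y; this.w = w; this.h = h; }
            boolean intersects(Box o) {
                return x < o.x + o.w && x + w > o.x && y < o.y + o.h && y + h > o.y;
            }
        }

        // Pushes the player out of the block along whichever axis overlaps the least.
        static void resolve(Box player, Box block) {
            if (!player.intersects(block)) return;
            double overlapX = Math.min(player.x + player.w - block.x, block.x + block.w - player.x);
            double overlapY = Math.min(player.y + player.h - block.y, block.y + block.h - player.y);
            if (overlapX < overlapY) {
                // Hit on the left or right side: push horizontally.
                player.x += (player.x + player.w / 2 < block.x + block.w / 2) ? -overlapX : overlapX;
            } else {
                // Hit on the top or bottom: push vertically (landing on top stops a fall).
                player.y += (player.y + player.h / 2 < block.y + block.h / 2) ? -overlapY : overlapY;
            }
        }

        public static void main(String[] args) {
            Box player = new Box(40, 380, 34, 34);   // sizes loosely based on the question
            Box block = new Box(50, 400, 100, 100);
            resolve(player, block);
            System.out.println(player.x + ", " + player.y); // 40.0, 366.0: pushed onto the block's top
        }
    }
    ```

    The same idea carries over to the canvas version: replace the single x-axis test with per-axis overlaps, and zero the vertical velocity when the push happens along y so the player can stand on the platform.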

    Read the article

  • Oracle OpenWorld & JavaOne + Develop 2010

    - by [email protected]
    [Russian-language announcement; most of the original text was destroyed by a character-encoding error. The legible parts are an invitation to Oracle OpenWorld 2010, September 19-23, 2010, at the Moscone Center, San Francisco, CA, covering the Oracle product lines (Applications, Database, Middleware) as well as server and storage systems, with more than 50 ...]

    Read the article

  • Entity System with C++

    - by Dono
    I'm working on a game engine using the Entity System pattern and I have some questions. How I see the Entity System:

    Components: classes with attributes and getters/setters, e.g. Sprite, PhysicBody, SpaceShip, ...

    Systems: classes with a list of components (the component logic), e.g. EntityManager, Renderer, Input, Camera, ...

    Entity: just an empty class with a list of components.

    What I've done so far: currently I have a program that allows me to do this:

        // Create a new entity.
        Entity* entity = game.createEntity();

        // Add some components.
        entity->addComponent( new TransformableComponent() )
              ->setPosition( 15, 50 )
              ->setRotation( 90 )
              ->addComponent( new PhysicComponent() )
              ->setMass( 70 )
              ->addComponent( new SpriteComponent() )
              ->setTexture( "name.png" )
              ->addToSystem( new RendererSystem() );

    My question: does a system store a list of components or a list of entities? In the case where I store a list of entities, I need to get the components of those entities on each frame, and that's probably heavy, isn't it?
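    A hedged illustration of the usual answer (written in Java rather than C++, with invented names): each system keeps its own densely packed list of exactly the components it cares about, so the per-frame update loop never has to look components up through their entities; the entity id is only used for occasional cross-system lookups.

    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch: a system owns the components it iterates over each frame.
    public final class EcsSketch {
        static final class TransformComponent {
            final int entityId;
            double x, y;
            TransformComponent(int entityId, double x, double y) {
                this.entityId = entityId; this.x = x; this.y = y;
            }
        }

        static final class MovementSystem {
            private final List<TransformComponent> transforms = new ArrayList<>();
            private final Map<Integer, TransformComponent> byEntity = new HashMap<>();

            void register(TransformComponent c) { transforms.add(c); byEntity.put(c.entityId, c); }

            // The hot per-frame loop touches only the flat component list.
            void update(double dt) {
                for (TransformComponent t : transforms) {
                    t.x += 10 * dt;    // toy behaviour: drift right at 10 units per second
                }
            }

            TransformComponent forEntity(int entityId) { return byEntity.get(entityId); }
        }

        public static void main(String[] args) {
            MovementSystem movement = new MovementSystem();
            movement.register(new TransformComponent(1, 15, 50));
            movement.update(1.0 / 60.0);
            System.out.println(movement.forEntity(1).x); // 15.166...
        }
    }
    ```

    Storing entities in the system and fetching their components every frame works too, but it trades that flat, cache-friendly iteration for a per-frame lookup, which is exactly the overhead the question worries about.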

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part IV)

    So finally we get to the fun part: the fruits of all of our middle-tier/back-end labors of generating classes to interface with an XML data source, which the previous posts were about, can now be presented quickly and easily to an end user. I think. We'll see. We'll be using a WPF window to display all of the MFL information that we've collected in the two XML files, and we'll provide a means of adding, updating and deleting each of these entities using as little code as possible. Additionally, I would like to dig into the performance of this solution as well as its flexibility if we were to modify the underlying XML schema. So first things first, let's create a WPF project and include our XML data in a data folder within it. On the main window, we'll drag out the following controls:

    - A combo box to contain all of the teams
    - A list box to show the players of the selected team, along with add/delete player buttons
    - A text box tied to the selected player's name, with a save button to save any changes made to the player name
    - A combo box of all the available positions, tied to the currently selected player's position
    - A data grid tied to the statistics of the currently selected player, with add/delete statistic buttons

    This monstrosity of a form and its associated project will look like this (don't forget to reference the DataFoundation project from the Presentation project). To get to the visual data binding, as we learned in a previous post, you first have to make sure the project containing your bindable classes is compiled. Do so, and then open the Data Sources pane to add a reference to the Teams and Positions classes in the DataFoundation project. Why only Team and Position? Well, we will get to Players from Teams, and Statistics from Players, so no need to make an interface for them, as we'll see in a second. As for Positions, we'll need a way to bind the dropdown to ALL positions; they don't appear underneath any of the other classes, so we need to reference it directly. After adding these guys, expand every node in your Data Sources pane and see how the Team node allows you to drill into Players and then Statistics. This is why there was no need to bring in a reference to those classes for the UI we are designing.

    Now for the seriously hard work of binding all of our controls to the correct data sources. Drag the following items from the Data Sources pane to the specified control on the window design canvas:

    - Team.Name > Teams combo box
    - Team.Players.Name > Players list box
    - Team.Players.Name > Player name text box
    - Team.Players.Statistics > Statistics data grid
    - Position.Name > Positions combo box

    That is it! Really? Well, no, not really; there is one caveat here in that the Positions combo box is not bound to the selected player's position. To do so, we will apply a binding to the position combo box's SelectedValue to point to the current player's PositionId value. That should do the trick; now all we need to worry about is loading the actual data. Sadly, it appears as if we will need to drop to code in order to invoke our IO methods to load all teams and positions. At least Visual Studio kindly created the stubs for us to do so; ultimately the code should look like this. Note the weirdness with the InitializeDataFiles call; that is my current means of telling an IO where to load the data for each of the entities. I haven't thought of a more intuitive way than that yet, but do note that all data is loaded from Teams.xml besides positions, which are loaded from Lookups.xml.
    I think that may be all we need to do to at least load all of the data; let's run it and see: Yay! All of our glorious data is being displayed! Er, wait, what's up with the position dropdown? Why is it red? Let's select the RB and see if everything updates: Crap, the position didn't update to reflect the selected player, but everything else did. Where did we go wrong in binding the position to the selected player? Thinking about it a bit and comparing it to how traditional data binding works, I realize that we never set the value member (or some similar property) to tell the control to join the Id of the source (positions) to the position Id of the player. I don't see a similar property on the combo box control, but I do see a property named SelectedValuePath that might be it, so I set it to Id and run the app again: Hey, all right! No red box around the positions combo box. Unfortunately, selecting the RB does not update the dropdown to point to Runningback. Hmmm. Now what could it be? Maybe the problem is that we are loading teams before we are loading positions, so when it binds the position Id, all of the positions aren't loaded yet. I went to the code behind and switched things so positions load first: no dice. Same result when I run. Why? WHY? Ok, ok, calm down, take a deep breath. Get something with caffeine or sugar (preferably both) and think rationally. Ok, a gigantic chocolate chip cookie and a Mountain Dew chaser have never let me down in the past, so don't fail me now! Ah ha! Of course! I didn't even have to finish the Mountain Dew and I think I've got it: Data Context. By default, when setting the selected value binding for the dropdown, the data context was list_team. I don't even know what the heck list_team is; we want it to be bound to our team players view source resource instead. Running it now and selecting the various players: Done and done. Everything read and bound, thank you caffeine and sugar! Oh, and thank you Visual Studio 2010.

    Let's wire up some of those buttons now. There has got to be a better way to do this, but it works for now. What the add player button does is add a new player object to the currently selected team. Unfortunately, I couldn't get the new object to automatically show up in the players list (something about not using an observable collection; gotta look into this), so I just save the change immediately and reload the screen. Terrible, but it works. Let's go after something easier: the save button. By default, as we type in new text for the player's name, it shows up in the list box as updated. Cool! Why couldn't my add new player logic do that? Anyway, the save button should be as simple as invoking MFL.IO.Save for the selected player, like this:

        MFL.IO.Save((MFL.Player)lbTeamPlayers.SelectedItem, true);

    Surprisingly, that worked on the first try. Let's see if we get as lucky with the Delete player button:

        MFL.IO.Delete((MFL.Player)lbTeamPlayers.SelectedItem);
        Refresh();

    Note the use of the Refresh method again; I can't seem to figure out why updates to the underlying data source are immediately reflected, but adds and deletes are not. That is a problem for another day, and again my hunch is that I should be binding to something more complex than IEnumerable (like an observable collection). Now that an example of the basic CRUD methods is wired up, I want to quickly investigate the performance of this beast.
    I'm going to make a special button to add 30 teams, each with 50 players and 10 seasons' worth of stats. If my math is right, that will end up with 15,000 rows of data, a pretty hefty amount for an XML file. The save of all this new data took a little over a minute, but that is acceptable because we wouldn't typically be saving batches of 15k records, and the resulting XML file size is a little over a megabyte. Not huge, but big enough to see some read performance numbers, or so I thought. It reads this file and renders the first team in under a second. That is unbelievable, but we are lazy loading and the file really wasn't that big. I will increase it to 50 teams with 100 players and 20 seasons each: 100,000 rows. It took a year and a half to save all of that data, and resulted in an 8 megabyte file. Seriously, if you are loading XML files this large, get a freaking database! Despite this, it STILL takes under a second to load and render the first team, which is interesting mostly because I thought that it was loading that entire 8 MB XML file behind the scenes. I have to say that I am quite impressed with the performance of the LINQ to XML approach, particularly since I took no effort to optimize any of this code and was fairly new to the concept from the start. There might be some merit to this little project after all. Look out SQL Server and Oracle, use XML files instead! Next up, I am going to completely pull the rug out from under the UI and change a number of entities in our model. How well will the code be regenerated? How much effort will be required to tie things back together in the UI?

    Read the article

  • Xlib: extension "GLX" missing on display ":0". Asus k53s

    - by Steve
    I had everything working fine. I use a number of OpenGL graphics programs, for example PyMOL, MGLTools, VMD, BALLView, RasMol, etc. All of these now give the error: Xlib: extension "GLX" missing on display ":0". and fail to initialize. I have an i7 Asus K53S with an NVIDIA GeForce card. I need these to do my work. I tried the bumblebee fix, and also just removing all NVIDIA drivers, or rolling back. I do not know why these were working and then stopped, but I did notice a new NVIDIA console in the menu, which I assume came from enabling the automatic NVIDIA feeds, etc.? I also played with xorg.conf; however, I have no clue which settings are valid. In addition, the display is not recognized 50% of the time now: it just gives a generic 640x480 and I have to log in and out 10 times to get it to return to a normal setting. When I try to set it manually, there is no other setting allowed from the settings menus, and the terminal changes just get reset every time I log out.

    Read the article

  • How to Enable Priority Inbox on Android (and Setup Important-Only Notifications)

    - by The Geek
    Yesterday Google released an updated Gmail application for Android 2.2 phones that supports the Priority Inbox feature and, more importantly, allows you to change your notifications to only alert you for important email. Let's take a look.

    Note: If you've never used Priority Inbox, you should really give it a try. It rearranges your email into what is and isn't important, and you can customize how it classifies messages easily. The idea is that it learns over time, so if you send a lot of emails back and forth with somebody, it will know that they are probably important; you can manually adjust the settings as well.

    To update the Gmail application, you'll want to head into the Market and access Menu > Downloads, where you should see Gmail in the list, and it should let you update from there. If you don't see an update, you're either not running Android 2.2, or it has already updated automatically.

    Read the article

  • ISA Proxy server

    - by user59931
    We have a proxy at work that runs Microsoft ISA. I used to be able to connect using 11.10 with Firefox with no problem at all: I could either put the settings in Firefox or in Ubuntu's network proxy settings. This would give me a connection without problems (slow, due to the work network being really lame). Since I upgraded with 12.10, Firefox just crashes if I have any proxy settings (manually added proxy settings). If I connect to a different network without the proxy settings, it works fine and doesn't crash. I tried Chrome to see if that would work: similar problem. Chrome doesn't crash but is so slow it just times out all the time, and can take 10 minutes for a page to load. Not really sure where to go with this. I have tried a clean install of 12.04 on 2 different computers and also tried just upgrading both from 11.10. The only answer I can see at the moment is to roll back to 11.10 :( I have tried all sorts of things, like turning off IPv6, to see if that would make any difference, but no joy... I really am lost now. What's weird is that the repositories are also really, really slow through 12.04: 50 megs took an hour to download (the ISA server has the Ubuntu repository servers enabled without authentication). I really am lost.

    Read the article

  • How to set up dual monitors in xorg.conf

    - by MrMonty
        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33 (buildd@allspice) Fri Mar 30 15:25:24 UTC 2012

        Section "ServerLayout"
            # Removed Option "Xinerama" "1"
            # Removed Option "Xinerama" "0"
            # Removed Option "Xinerama" "1"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor1"
            VendorName     "Unknown"
            ModelName      "Ancor Communications Inc VE247"
            HorizSync       30.0 - 83.0
            VertRefresh     50.0 - 76.0
            Option         "DPMS"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Ancor Communications Inc VE247"
            HorizSync       30.0 - 83.0
            VertRefresh     50.0 - 76.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device1"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "Quadro FX 1500"
            BusID          "PCI:1:0:0"
            Screen          1
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "Quadro FX 1500"
        EndSection

        Section "Screen"
            Identifier     "Screen1"
            Device         "Device1"
            Monitor        "Monitor1"
            DefaultDepth    24
            Option         "TwinView" "0"
            Option         "TwinViewXineramaInfoOrder" "DFP-1"
            Option         "metamodes" "DFP-1: 1280x1024 +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

        Section "Screen"
            # Removed Option "TwinView" "0"
            # Removed Option "metamodes" "DFP-0: 1280x1024 +0+0"
            # Removed Option "TwinView" "1"
            # Removed Option "metamodes" "DFP-0: 1280x1024 +0+0, DFP-1: 1280x1024 +1280+0"
            # Removed Option "TwinView" "0"
            # Removed Option "metamodes" "DFP-0: 1280x1024 +0+0"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "TwinView" "1"
            Option         "TwinViewXineramaInfoOrder" "DFP-0"
            Option         "metamodes" "DFP-0: 1280x1024 +0+0, DFP-1: 1280x1024 +1280+0; DFP-1: 1280x1024_60 +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

        Section "Extensions"
            Option         "Composite" "Disable"
        EndSection

    That's my file!

    Read the article

  • Adding delay between damage

    - by iQue
    I have a bunch of enemies chasing my main character, and if they intersect him I want them to damage him, and that's all good. The problem is that right now they damage him as long as they stand next to him, every frame! And since it gets called every frame, my character's HP reaches 0 almost instantly. I've tried adding a delay and I've tried a TimerTask, but I can't get it to work. This is the code I use to check for intersections:

        private void checkCollision(Canvas canvas) {
            synchronized (getHolder()) {
                Rect h1 = happy.getBounds();
                for (int i = 0; i < enemies.size(); i++) {
                    for (int j = 0; j < bullets.size(); j++) {
                        Rect b1 = bullets.get(j).getBounds();
                        Rect e1 = enemies.get(i).getBounds();
                        if (b1.intersect(e1)) {
                            enemies.get(i).damageHP(5);
                            bullets.remove(j);
                        }
                        if (e1.intersect(h1)) {
                            happy.damageHP(5); // this is the statement that needs some sort of delay:
                                               // I want them to damage him every 2 seconds they intersect him.
                        }
                        if (enemies.get(i).getHP() <= 0) {
                            enemies.get(i).death(canvas, enemies);
                            score.incScore(5);
                            break;
                        }
                        if (happy.getHP() <= 0) {
                            score.incScore(-50);
                            // end-screen
                        }
                    }
                }
            }
        }

    If anyone knows the logic to do this, please do tell.
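    One way to do it (a hedged sketch; the DamageCooldown class, the 2000 ms constant and tryDamage are invented for illustration) is to record when the player was last hurt and apply damage again only once enough wall-clock time has passed, instead of on every frame of contact:

    ```java
    // Hypothetical sketch: a damage cooldown based on elapsed milliseconds.
    public final class DamageCooldown {
        private static final long DAMAGE_INTERVAL_MS = 2000; // damage at most once every 2 seconds
        private long lastDamageTime = 0;                      // when damage was last applied

        /** Call this every frame while an enemy intersects the player. */
        boolean tryDamage() {
            long now = System.currentTimeMillis();
            if (now - lastDamageTime >= DAMAGE_INTERVAL_MS) {
                lastDamageTime = now;
                return true;     // caller applies happy.damageHP(5) this frame
            }
            return false;        // still cooling down: intersecting, but no damage
        }

        public static void main(String[] args) throws InterruptedException {
            DamageCooldown cooldown = new DamageCooldown();
            for (int frame = 0; frame < 5; frame++) {
                System.out.println("damage applied: " + cooldown.tryDamage());
                Thread.sleep(900);   // simulate ~0.9 s between intersection checks
            }
        }
    }
    ```

    In the question's loop this would sit around the happy.damageHP(5) call, with one cooldown per player (or one per enemy, if each enemy should be able to hurt the player on its own 2-second schedule).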

    Read the article

  • Can't find new.h - getting gcc-4.2 on Quantal?

    - by Suyo
    I've been trying to compile the Valve Source SDK (2007) on my machine, but I keep running into the same error:

        In file included from ../public/tier1/interface.h:50:0,
                         from ../utils/serverplugin_sample/serverplugin_empty.cpp:13:
        ../public/tier0/platform.h:46:17: new.h: No such file or directory

    I'm pretty new to C++ coding and compiling, but using apt-file search I tried to use every single suggestion for the required files in the Makefile (libstdc++.a and libgcc_eh.a), and none worked. I then found a note in the Makefile saying gcc 4.2.2 is recommended - I assume the older code won't work with the newer version, but gcc-4.2 is unavailable in 12.10. So my question/s is/are:

    - If my assumption is right: how do I get gcc 4.2.2 on Quantal?
    - If my assumption is wrong: what else could be the problem here?

    Relevant portion of the Makefile:

        # compiler options (gcc 3.4.1 will work - 4.2.2 recommended)
        CC=/usr/bin/gcc
        CPLUS=/usr/bin/g++
        CLINK=/usr/bin/gcc
        CPP_LIB="/usr/lib/gcc/x86_64-w64-mingw32/4.6/libstdc++.a /usr/lib/gcc/x86_64-w64-mingw32/4.6/libgcc_eh.a"

        # GCC 4.2.2 optimization flags, if you're using anything below, don't use these!
        OPTFLAGS=-O1 -fomit-frame-pointer -ffast-math -fforce-addr -funroll-loops -fthread-jumps -fcrossjumping -foptimize-sibling-calls -fcse-follow-jumps -fcse-skip-blocks -fgcse -fgcse-lm -fexpensive-optimizations -frerun-cse-after-loop -fcaller-saves -fpeephole2 -fschedule-insns2 -fsched-interblock -fsched-spec -fregmove -fstrict-overflow -fdelete-null-pointer-checks -freorder-blocks -freorder-functions -falign-functions -falign-jumps -falign-loops -falign-labels -ftree-vrp -ftree-pre -finline-functions -funswitch-loops -fgcse-after-reload
        #OPTFLAGS=

        # put any compiler flags you want passed here
        USER_CFLAGS=-m32

    Read the article
