Search Results

Search found 27016 results on 1081 pages for 'entry point'.


  • Simple C: atof giving wrong value [migrated]

    - by Doc
    I have a program that reads input from a single line (a string, obviously) and organizes it into arrays. The problem I have is that at one point the program reads two different values and reports the first one twice. Initially I thought the program was reading the same value twice, but when I tested it, it turned out that it reads the correct value but prints the wrong one. For example, with the input:

        2 0.90 0.75 0.7 0.65

    the relevant code (sorry to snip it) is:

        while (fgets(string[test], sizeof(string[test]), ifp)) {
            pch = strtok_r(NULL, " ", &prog);
            tem3 = atoi(pch);
            while (loop < tem3) {
                pch = strtok_r(NULL, " ", &prog);
                venseatfloat[test][loop][DISCOUNT][OCCUPIED] = (float)atof(pch);
                printf("%f is discount\t", venseatfloat[test][loop][DISCOUNT][OCCUPIED]);
                pch = strtok_r(NULL, " ", &prog);
                strcpy(temp, pch);
                venseatfloat[test][loop][REGULAR][OCCUPIED] = (float)atof(pch);
                printf("%s is the string but %.3f is regular\n", temp, venseatfloat[test][loop][DISCOUNT][OCCUPIED]);
                loop++;
            }
        }

    The output is:

        0.900000 is discount    0.75 is the string but 0.900 is regular
        0.700000 is discount    0.65 is the string but 0.700 is regular

    What is going on?

    Read the article

  • Remote access and local access same hostname

    - by cpf
    Hi serverfault, I have a server in a client's network, separated from theirs by a router/firewall. The intention is to have this server available through one hostname (example.com). My idea is to have (at least) a DNS server on the outside, so that machines outside the client's network can reach the internal server. The problem at that point would be the internal client (PC A). My question: what would I have to do to make something like this work? Is it even possible, or has it already been done? The goal is to not have to change anything on either PC A or PC B, while both should reach the same "internal server" when browsing to "example.com". Perhaps adding logic to the DNS server would work (detect that the external IP of the internal client [PC A] is the same as the IP for example.com, and give the local IP as the reply?). Anyhow: thanks for helping me think this through!

    Read the article

  • Server IP must be a LAN IP (Port Forwarding Netgear)

    - by rphello101
    I'm trying to set up a server (Apache) on my computer (I'm fairly new to it). As I understand it, for it to be accessible to other computers, I need to forward port 80. When I try to forward the port, though, I get the error: "Server IP must be a LAN IP". I noticed in ipconfig that my default gateway is different from my wireless router. My computer is not hardwired; it is on WiFi. Furthermore, I do not, at this point, have a static IP. I read that it should still work with a dynamic IP until it changes. Any ideas on what I can do?

    Read the article

  • Mapped network drive missing from My Computer and Explorer

    - by matt wilkie
    On a Windows XP Pro SP3 machine, one network drive refuses to show up in My Computer or Explorer. The missing drive letter is G:, if that matters. Other mappings work fine. Other profiles on the same machine have no problem seeing G:. I can access G: just fine by typing it into the address bar or in a CMD shell. I've used TweakUI to toggle hide/show on G: with no difference; TweakUI says G: should be visible. I've logged off and on between toggles to make sure the settings take effect. I've looked at the registry key [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer] and made sure it's zeroed. We've limped along with this broken setup for some time, just working around it, but some applications do not allow typing in a path when choosing a place to save files, and it's reached the point where it's intolerable. So, does anyone have any idea why XP won't show this drive letter, or how to fix it?
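    For what it's worth, the drive-hiding policy referred to above lives in the NoDrives value under that Explorer key, where each bit hides one drive letter (bit 0 = A:, bit 6 = G:, and so on). Below is a small C# sketch, an illustrative diagnostic only and not a fix, that dumps which letters the current user's policy hides; run it under the affected profile.

        using System;
        using Microsoft.Win32;

        class CheckNoDrives
        {
            static void Main()
            {
                const string policyPath = @"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer";
                using (RegistryKey key = Registry.CurrentUser.OpenSubKey(policyPath))
                {
                    // A missing key or value means no drives are hidden by this policy.
                    int mask = (key != null) ? Convert.ToInt32(key.GetValue("NoDrives", 0)) : 0;
                    Console.WriteLine("NoDrives mask: 0x{0:X}", mask);

                    for (char drive = 'A'; drive <= 'Z'; drive++)
                    {
                        // Each bit hides one drive letter: bit 0 = A:, bit 1 = B:, bit 6 = G:, ...
                        if ((mask & (1 << (drive - 'A'))) != 0)
                            Console.WriteLine("Drive {0}: is hidden by policy for this user", drive);
                    }
                }
            }
        }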

    Read the article

  • When adding second processor to SQL Server, will it automatically balance the load?

    - by ddavis
    We have a SQL Server 2008 R2 (10.5) on a dedicated box with a single 2.4Ghz processor, which regularly runs at 70-80% CPU. We are going to be adding a significant number of users to the application and therefore want to add a second processor to the box (scale up). Will SQL Server automatically use the second processor to balance threads, or is there additional configuration that will need to be done? In other words, will adding the second processor drop my CPU usage to 35-40% per CPU, automatically balancing the load? Based on what I read here, it seems that it will: http://msdn.microsoft.com/en-us/library/ms181007.aspx However, I've read elsewhere that CPU performance gains can be made by assigning database tables to different filegroups, but I'm not sure we want to get that complicated at this point.

    Read the article

  • Algorithm for grouping friends at the cinema [closed]

    - by Tim Skauge
    I've got a brain teaser for you - it's not as simple as it sounds, so please read it and try to solve the issue. Before you ask if it's homework - it's not! I just wish to see if there's an elegant way of solving this. Here's the issue: X friends want to go to the cinema and wish to be seated in the best available groups. The best case is that everyone sits together and the worst case is that everyone sits alone. Fewer groups are preferred over more groups. Sitting alone is least preferred. Input is the number of people going to the cinema, and output should be an array of integer arrays that contains:

    - Ordered combinations (most preferred first)
    - Number of people in each group

    Below are some examples of the number of people going to the cinema and the list of preferred combinations these people can be seated in:

    - 1 person: 1
    - 2 persons: 2, 1+1
    - 3 persons: 3, 2+1, 1+1+1
    - 4 persons: 4, 2+2, 3+1, 2+1+1, 1+1+1+1
    - 5 persons: 5, 3+2, 4+1, 2+2+1, 3+1+1, 2+1+1+1, 1+1+1+1+1
    - 6 persons: 6, 3+3, 4+2, 2+2+2, 5+1, 3+2+1, 2+2+1+1, 2+1+1+1+1, 1+1+1+1+1+1

    An example with more than 7 persons explodes in combinations, but I think you get the point by now. The question is: what does an algorithm look like that solves this problem? My language of choice is C#, so if you could give an answer in C# it would be fantastic!
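    Here is a minimal C# sketch of one way to attack it (not from the original question): recursively generate every partition of n into non-increasing group sizes, then order the partitions by how many people sit alone and by how many groups there are. The exact tie-breaking (for instance, putting 3+3 ahead of 4+2) is an assumption on my part; adjust the ordering keys if your preference list differs.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class SeatingGroups
        {
            // Returns all groupings of 'people', most preferred first.
            public static List<int[]> GetPreferredGroupings(int people)
            {
                var partitions = new List<int[]>();
                Partition(people, people, new List<int>(), partitions);
                return partitions
                    .OrderBy(p => p.Count(size => size == 1)) // sitting alone is least preferred
                    .ThenBy(p => p.Length)                    // fewer groups are preferred
                    .ThenBy(p => p.Max())                     // prefer balanced groups (assumed tie-break)
                    .ToList();
            }

            // Builds every partition of 'remaining' into parts no larger than 'maxSize',
            // so each grouping is generated exactly once, in non-increasing order.
            private static void Partition(int remaining, int maxSize, List<int> current, List<int[]> results)
            {
                if (remaining == 0)
                {
                    results.Add(current.ToArray());
                    return;
                }
                for (int size = Math.Min(maxSize, remaining); size >= 1; size--)
                {
                    current.Add(size);
                    Partition(remaining - size, size, current, results);
                    current.RemoveAt(current.Count - 1);
                }
            }

            public static void Main()
            {
                foreach (var grouping in GetPreferredGroupings(6))
                    Console.WriteLine(string.Join("+", grouping));
            }
        }

    For 6 people this prints 6, 3+3, 4+2, 2+2+2, 5+1, 3+2+1, 4+1+1, 2+2+1+1, 3+1+1+1, 2+1+1+1+1, 1+1+1+1+1+1, which also includes two partitions (4+1+1 and 3+1+1+1) that the example lists above skip.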

    Read the article

  • IIS + PHP + Page with lots of images = Intermittent 403 errors

    - by samJL
    I am using an up-to-date Server 2008 R2 Datacenter, running IIS 7.5 and PHP 5.3.6/FastCGI. On PHP pages with lots of images (60+), some of the images fail to load. It is not always the same images -- on each page refresh, an image that worked previously may not load, while an image that did not now does. Looking at the Net tab in Firebug reveals that the failing image requests are 403 errors. All of the images are located on the server in question, and the images directory has the correct permissions. I believe this problem is the result of a limit on requests. All of my attempts at researching this problem point to the maxConnections setting in IIS, yet mine is set at the highest/default of 4294967295 (maxBandwidth too). I am also running a ColdFusion site on the same IIS installation, and it does not suffer from 403s on pages with lots of images. I am left thinking that there is another connection limit (in PHP or FastCGI?) overriding the IIS connection limit. I don't see anything that looks like a request limit in php.ini -- what am I missing? Any help would be appreciated, thank you.

    Read the article

  • Upgrading php, mysql, and apache

    - by Kevin
    I have been looking around and have not found a good answer to my question. I currently have PHP 5.3.3 installed via yum on my CentOS 6.3 server. I need to upgrade to PHP 5.10 or later. It is my understanding that you need to find the correct MySQL and Apache packages that fit with the PHP install. Can someone please point me in the direction of an upgrade guide? By the way, I am not looking for "yum update httpd php5" -- this gets me the old 5.3.3 version. Thanks, Kevin.

    Read the article

  • Game Asset Management

    - by user964123
    I am making my first small mobile game in C# XNA. Let's say I have 3 screens: the main menu, options and game screen. A single game session usually lasts for 1 minute, so the user will alternate frequently between the main menu and game screen. Therefore, once I load the textures for either screen, I want to keep them in memory to avoid frequent reloading. Both screens share some assets like their background textures, but differ in others. The first solution I came up with is making 2 texture factory classes, MainScreenAssetFactory and GameScreenAssetFactory, each with their own content manager, and I'll store them in a globally accessible point so that they persist after either screen is destroyed. There is also an OptionsScreenAssetFactory, but I don't want to cache it since the options screen is rarely visited. A typical factory would look something like this:

        public class MainScreenAssetFactory
        {
            private readonly ContentManager contentManager;

            public MainScreenAssetFactory(IServiceProvider serviceProvider, string rootDirectory)
            {
                contentManager = new ContentManager(serviceProvider)
                {
                    RootDirectory = rootDirectory
                };
            }

            public Texture2D ListElementBackground
            {
                get { return contentManager.Load<Texture2D>("UserTab"); }
            }

            public Texture2D ListElementBulletPoint
            {
                get { return contentManager.Load<Texture2D>("TabIcon"); }
            }

            public Texture2D LoggedOutUser
            {
                get { return contentManager.Load<Texture2D>("LoggedOutUser"); }
            }
        }

    Since the main, options and game screens share some common resources, instead of loading them more than once I created another class, CommonAssetTexFactory, which holds the common stuff and stays in memory during the app lifetime. For example, this class gets passed to the options screen when it is created. However, given my small game with its few assets, I am already finding this solution cumbersome and inflexible. Changing anything requires checking whether it's already in the common factory and, if not, modifying the existing factories, and so on. And this is just considering textures; I didn't add sound files yet. I can't imagine bigger games with thousands of resources using this approach. A better idea must exist. Would someone please enlighten me?
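    One direction worth sketching (this is my suggestion, not something from the post, and the class name below is made up): XNA's ContentManager.Load<T> already caches each asset by its name, so a single long-lived manager behind one generic accessor can replace the per-screen factories. Screens simply ask for assets by name, shared textures are only loaded once, and there is nothing to keep in sync when assets move between screens.

        using System;
        using Microsoft.Xna.Framework.Content;

        public class AssetCache
        {
            private readonly ContentManager content;

            public AssetCache(IServiceProvider services, string rootDirectory)
            {
                content = new ContentManager(services) { RootDirectory = rootDirectory };
            }

            // ContentManager.Load caches internally, so repeated calls for the same
            // name return the same instance; this wrapper mainly centralizes lifetime.
            public T Get<T>(string assetName)
            {
                return content.Load<T>(assetName);
            }

            // Frees everything this cache has loaded (e.g. when the game exits).
            public void Unload()
            {
                content.Unload();
            }
        }

    A rarely visited screen such as the options screen could still create its own short-lived ContentManager and call Unload() when it closes, keeping those assets out of the long-lived cache.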

    Read the article

  • ubuntu 12.04 installer does not recognize drive partitions

    - by endless forms
    I recently purchased a new HP Pavilion HPE desktop running Windows 7. I am trying to install a dual-boot system with 12.04. However, when I run the LiveCD I only get as far as the "Install" window where you select the partitions for your drives. At the bottom, where it says "device for boot loader installation", I have "/dev/sda" and cannot select any other devices. All the options to change the drives are greyed out, most likely because there are no drives in the window. I partitioned my largest drive using the tools within Windows, then booted into the CD, but nothing shows up. I then used GParted to change the new space from unallocated to ext2, and still nothing shows up. The installer does not recognize anything, but when I go into an Ubuntu session and use the disk utility manager I can see the partitions I made. Anything I do has to be done outside of the installer. I have no files on this new computer, so this is the perfect time to install a parallel OS. I would like to avoid completely reinstalling Windows, however. I've been over the forums many times, but all the answers I've found have not worked for me. I also tried flagging the new, empty partition as boot, but that screwed Windows up. Also, the Wubi installer hits the same point and quits. I know that the disc itself is fine because I just made another dual-boot system on a Gateway PC. This makes me think something within this computer is preventing the installer from "seeing" the drives. Any help would be much appreciated! Edit in response: The main part of the partitioning window shows no partitions; everything is blank. There is no way to add partitions, and all the buttons are useless. I've defragmented my drive multiple times, and I also used the same disc to dual-boot another PC with no problems, so it's not the disc; it's definitely the computer.

    Read the article

  • IIS6 can't find site on local network

    - by chezy525
    I have a Windows 2003 server with dual NICs running IIS6. I can access everything remotely, but the internal network can't seem to find the site regardless of which IP address I try to go to. There are several weird things happening here, but I'm going to limit this question to what I'm guessing is the simplest problem (the solution to which I hope solves other things as well): from the server itself, I can access the webpage using the primary IP address (i.e. http://192.168.1.2/index.htm), but not using the secondary IP address (i.e. http://10.10.10.2/index.htm). Pinging both IP addresses from the server itself works, and the "Web site identification" in IIS has the IP address set to "(All Unassigned)", which I believe should bind both IP addresses to this site. I apologize if I'm not providing enough details about my setup, but at this point I don't even know what's relevant...

    Read the article

  • Is the addition of a duration to a date-time defined in ISO 8601?

    - by Benjamin
    I'm writing a date-time library and need to implement the addition of a duration to a date-time. If I add a one-month duration (P1M) to the 31st of March 2012 (2012-03-31), does the standard define what the result is? Because the resulting date (31st April) does not exist, there are at least two options:

    Fall back to the last day of the resulting month. This is the approach currently taken by the ThreeTen API, the (alpha) reference implementation of JSR-310:

        ZonedDateTime date = ZonedDateTime.parse("2012-03-31T00:00:00Z");
        Period duration = Period.parse("P1M");
        System.out.println(date.plus(duration).toString()); // 2012-04-30T00:00Z

    Carry the extra day to the next month. This is the approach taken by the DateTime class in PHP:

        $date = new DateTime('2012-03-31T00:00:00Z');
        $duration = new DateInterval('P1M');
        echo $date->add($duration)->format('c'); // 2012-05-01T00:00:00+00:00

    I'm surprised that two date-time libraries contradict each other on this point, so I'm wondering whether the standard defines the result of this operation.
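    As an extra data point (not part of the original question): .NET's DateTime.AddMonths takes the same approach as ThreeTen and clamps to the last valid day of the resulting month. A minimal sketch:

        using System;

        class MonthAdditionDemo
        {
            static void Main()
            {
                // AddMonths adjusts an invalid day down to the last valid day of the target month.
                DateTime date = new DateTime(2012, 3, 31, 0, 0, 0, DateTimeKind.Utc);
                Console.WriteLine(date.AddMonths(1).ToString("yyyy-MM-dd'T'HH:mm:ssK")); // 2012-04-30T00:00:00Z
            }
        }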

    Read the article

  • Ubuntu for Phones / Touch vs Android, IOS and BlackBerry OS

    - by Ome Noes
    Currently I have an LG Google Nexus 4 with lots of issues because of the latest Android 4.3 update. Since the update my battery drains within 7 hours when the phone is in standby/idle, and even faster when I use it normally! Before the Nexus 4 I had an iPhone, but got sick of iOS because for me it's too much of a closed operating system, and I dislike having to work with either Windows or iTunes. At this point neither Google nor LG is willing to provide me (and all the others that have similar Nexus 4 problems) with a solution or even a reaction... Also, I'm not very fond of the idea that the NSA (and maybe others) can and currently is monitoring millions of Android, iOS and BlackBerry OS devices all over the world. Since I've been using Ubuntu very happily for almost 5 years now, I see Ubuntu for Phones / Touch as the only remedy for all this BS. Please be so kind as to let me know when you will have a fully functioning version of Ubuntu for Phones / Touch ready for consumer use. I'm really sad that the Ubuntu Edge campaign didn't work out and hope to see lots and lots of future smartphones outfitted with Ubuntu a.s.a.p.! Keep up the good work!

    Read the article

  • Service Catalogs for Database as a Service

    - by B R Clouse
    At the end of last month, I had the opportunity to present a speaking session at Oracle OpenWorld: Database as a Service: Creating a Database Cloud Service Catalog. The session was well attended, which would have surprised me several months ago when I started researching this topic. At that time, I thought of service catalogs as something trivial which could be explained in a few simple slides. But while looking at all the different options and approaches available, I came to learn that designing a succinct and effective catalog is not a trivial task, and mistakes can lead to confusion and unintended side effects. And when the room filled up, my new point of view was confirmed. In case you missed the session, or were able to attend but would like more details, I've posted a white paper that covers the topics from the session, and more. We start with an overview of the components of a service catalog, and then look at several customer case studies of service catalogs for DBaaS. Synthesizing those examples, we summarize the main options for defining the service categories and their levels. We end with a template for defining Bronze | Silver | Gold service tiers for Oracle Database Services. The paper is now available here - watch for updates as we work to expand some sections and incorporate readers' feedback (hint - that includes your feedback). Visit our OTN page for additional Database Cloud collateral.

    Read the article

  • Creating movement path displays in a top-down 2d RTS

    - by nihohit
    My game is a top-down 2D RTS coded in C# using SFML's libraries. I want a selected unit to display its movement path on the map. Currently, after the path is computed as a list of directions ({left, up, down, down, down, left}, as an example), it's sent to the graphical component to create its UI equivalent, and here I'm having some problems. So far I've checked three ways to do it:

    1. Compute the size of the image (in the example above it'll be a 3*2 rectangle) and create an invisible rectangle, then go over the directions list and mark each spot with a visible point, so as to get a continuous line. This is slightly problematic because of the amount of large images I need to keep around, but mostly because I have a lot of fine detail onscreen, and a continuous line obstructs the view.

    2. Again compute the size of the image, but now create several (let's say 4) invisible images of that size; then, instead of a single continuous line, switch between the four images, each showing only a fourth of the spots, in a way that creates a path animation. This is nicer on the eye, but the memory demands and the amount of time needed to compute each such image loop are significant.

    3. Just create a list of single markers, each on a different spot on the path. This is very quick and easy on memory, but too sparse.

    Is there a simple or resource-light system to create path animations?
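    One resource-light option, sketched below under assumptions (the Direction enum, the 32-pixel tile size and the SFML.Net 2.x types are mine, not from the post): skip pre-baked images entirely and redraw the path each frame straight from the direction list, placing a single marker sprite on every few tiles and sliding the starting offset over time so the markers appear to march along the path. The only asset kept in memory is one small marker texture.

        using System.Collections.Generic;
        using SFML.Graphics;
        using SFML.System;

        public enum Direction { Left, Right, Up, Down }

        public class PathRenderer
        {
            private const int TileSize = 32;  // assumed tile size in pixels
            private const int Spacing = 4;    // a marker on every 4th tile of the path
            private float phase;              // how far the markers have scrolled, in tiles

            // Call once per frame while the unit is selected.
            public void Draw(RenderWindow window, Sprite marker,
                             Vector2i startTile, IList<Direction> path, float deltaSeconds)
            {
                phase = (phase + deltaSeconds * 4f) % Spacing; // markers advance ~4 tiles per second
                Vector2i tile = startTile;

                for (int i = 0; i < path.Count; i++)
                {
                    switch (path[i])
                    {
                        case Direction.Left:  tile.X -= 1; break;
                        case Direction.Right: tile.X += 1; break;
                        case Direction.Up:    tile.Y -= 1; break;
                        case Direction.Down:  tile.Y += 1; break;
                    }

                    // Only a sparse, moving subset of tiles gets a marker, which reads as
                    // dots marching along the path rather than a solid line blocking the view.
                    if ((i + Spacing - (int)phase) % Spacing == 0)
                    {
                        marker.Position = new Vector2f(tile.X * TileSize, tile.Y * TileSize);
                        window.Draw(marker);
                    }
                }
            }
        }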

    Read the article

  • Using the same local folder for Dropbox and Skydrive

    - by roryok
    Ever since I switched to Windows Phone I've sorely missed having an official Dropbox app. Recently I've toyed with the idea of moving all my crucial files to SkyDrive instead. I have more storage on SkyDrive and the WP SkyDrive integration is very handy. I'm thinking about having both cloud sync services point to the same local folder for the first few weeks. That way, if I want to go back, it's an easy task (and I can keep using Dropbox's superior public folder). Has anyone else done this? Are there any potential issues (permissions, conflicts, etc.)?

    Read the article

  • What is a good way to refactor a large, terribly written code base by myself? [closed]

    - by AgentKC
    Possible Duplicate: Techniques to re-factor garbage and maintain sanity? I have a fairly large PHP code base that I have been writing for the past 3 years. The problem is, I wrote this code when I was a terrible programmer and now it's tens of thousands of lines of conditionals and random MySQL queries everywhere. As you can imagine, there are a ton of bugs and they are extremely hard to find and fix. So I would like a good method to refactor this code so that it is much more manageable. The source code is quite bad; I did not even use classes or functions when I originally wrote it. At this point, I am considering rewriting the whole thing. I am the only developer and my time is pretty limited. I would like to get this done as quickly as possible, so I can get back to writing new features. Since rewriting the code would take a long time, I am looking for some methods that I can use to clean up the code as quickly as possible without leaving more bad architecture that will come back to haunt me later. So this is the basic question: What is a good way for a single developer to take a fairly large code base that has no architecture and refactor it into something with reasonable architecture that is not a nightmare to maintain and expand?

    Read the article

  • Motivation for service layer (instead of just copying dlls)?

    - by BornToCode
    I'm creating an application which has 2 different UIs, so I'm making it with a service layer, which I understood is appropriate for such a case. However, I found myself just creating web methods for every single method I have in the BL layer, so the services are basically built from methods that look like this: return customers_bl.Get_Customer_Prices(customer_id); I understood that a main point of the service layer is to prevent duplication of code, so I asked myself - well, why not just import the BL.dll (and the DAL.dll) into the other UI, and whenever making a change, re-copy the dll files? It might not be so 'neat', but is the whole purpose of the service layer to prevent this? {I know something is wrong in my approach, I'm probably missing the importance of the service layer, and I'd like to get more motivation to create another layer, especially because as it is I found that many of my BL functions ALREADY look like: return customers_dal.Get_Customer_Prices(cust_id) which led me to ask: was it really necessary to create the BL just because in several functions I actually have LOGIC inside the BL?} So I'm looking for more motivation for creating ONE MORE layer; I'm sure it's not just for the convenience of not having to re-copy the dlls on changes? Am I grasping it wrong? Any simple guidelines on how to design a service layer (corresponding to all the BL layer functions or not? any simple example?)? Any enlightenment on the subject?

    Read the article

  • Can the JVM(Oracle) run into an OutOfMemory error if the heap size is below the max?

    - by user439407
    I am running a Tomcat site (with an Nginx front end) that seems to be randomly running out of memory even though the max heap size is pretty large. My question is: is it possible for the JVM to get an OutOfMemory error even if the heap size is significantly less than -Xmx? For instance, here is a snapshot I took just 15 seconds before an OutOfMemory error:

        Tue Dec 18 23:13:28 JST 2012
        Free memory: 162.31 MB
        Total memory: 727.75 MB
        Max memory: 3808.00 MB

    I guess theoretically it's possible that my code generated 3 GB worth of objects in 15 seconds, but I highly doubt it. It seems like the JVM was unable to grow the heap even though it theoretically had room... Is it possible that other processes started using memory to the point that the JVM could not grow? I am running 64-bit Oracle HotSpot on a 64-bit VM running CentOS 5 with 6 GB of RAM.

    Read the article

  • How to manage SOAP requests to a pool of VM each listening on a HTTP port with a priority value in these requests?

    - by sputnick
    I have a front-end SOAP web server under Linux. It has to communicate with Windows Server VMs, each listening on an HTTP port for an HTTP POST request. The chosen VM should return a report of the task to the SOAP client. The SOAP requests carry a special variable: the priority of the request (a kind of SLA). And here is my question: I am thinking of using HA software (nginx, HAProxy, Heartbeat...) that can manage priority from this point of view. Is that relevant, or do you think I need to implement a queue myself with some specific development? Example: if there are low-priority SOAP requests in the pipe, the weight given to those VMs should be decreased when high-priority SOAP requests arrive at the same time. Any clue will be really appreciated.

    Read the article

  • How to deal with bad developers who hold back the project

    - by ILovePaperTowels
    We're at the end of a project, but we continue to run into issues because of a single piece of the project. This piece is handled by a specific developer. Finally, we grabbed latest and started reviewing it. It's just horrendous code! Trying to step through it was difficult and it's a relatively simple workflow. The point of this question is, how to deal with this situation. The developer has a hard time accepting criticism (constructive or otherwise) and feels he is more knowledgeable than others on the team who are, well, highly decorated, experienced and accomplished developers. It's difficult to even get into a topic about development because it turns into "I know what I'm talking about and you're just wrong!" type of conversation. A request has already been put in to replace this developer but it is a hard sell since devs are in short supply where we are and this is a corporation with a LOT of political bs. Management has been notified a few times but nothing is happening.

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing. Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain. This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate. It is effective to be sure, but not very readable. Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details are not really the point of this article (my way of saying don't criticize my crappy code, and certainly don't use it in any real application. You may become dumber just by looking at this code. You have been warned - really the footnote I should put at the end of all of my blog posts).

    To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one to many entity framework files, and using the entities that were found to write one to many class files. Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that by using the MetadataLoader and EntityFrameworkTemplateFileManager - you include references to these items and use them like so:

        // Instantiate an entity framework file reader and file writer
        MetadataLoader loader = new MetadataLoader(this);
        EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this);

        // Load the entity model metadata workspace
        MetadataWorkspace metadataWorkspace = null;
        bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace);
        EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace);

        // Create an IO class to contain the 'get' methods for all entities in the model
        fileManager.StartNewFile("MFL.IO.gen.cs");

    Next, we want to be able to loop through all of the entities found in the model, and then each property for each entity, so we can generate classes and methods for each. The code for that is blissfully simple:

        // Iterate through each entity in the model
        foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
        {
            // Iterate through each primitive property of the entity
            foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))
            {
                // TODO:  Create properties
            }

            // Iterate through each relationship of the entity
            foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))
            {
                // TODO:  Create associations
            }
        }

    There really isn't anything more advanced than that going on in the text template. The only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #>, like so:

        public partial class <#=entity.Name#>

    To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships.
    The two work together to provide lazy loading for hierarchical data (such as getting Team.Players), so it should be pretty intuitive to use on a front end. This text template is available here - you can tweak the inputFiles array to load one or many different edmx models and generate the basic XML IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post. Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the Enterprise Library 5.0. The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me - I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly efficient (something to test later). For example, the following code was generated to create a Player object for each Player node in the XML:

        return from element in GetXmlData(_PlayerDataFile).Descendants("Player")
            select new Player
            {
                Id = int.Parse(element.Attribute("Id").Value)
                ,ParentName = element.Parent.Name.LocalName
                ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)
                ,Name = element.Attribute("Name").Value
                ,PositionId = int.Parse(element.Attribute("PositionId").Value)
            };

    It is all done in one line of code, no looping needed. Even though GetXmlData loads the entire XML file just like the old XML DOM approach would have, it is supposed to be much less resource intensive. I will definitely put that to the test after we develop a user interface for getting at this data. Speaking of the data - where IS the data? We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of. We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly. To help with this, I've built in a method to generate XML at any given layer in the hierarchy. So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to a file. Doing so should get us something that looks like this:

        <Team Id="0" Name="0">
          <Player Id="0" Name="0" PositionId="0">
            <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />
          </Player>
        </Team>

    Sadly, it is missing the Positions node (I haven't thought of a way to generate lookup XML yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway). Let's manually remedy that for now to give us a decent starter set of data.
    Note that this is TWO XML files, Lookups.xml and Teams.xml:

        <Lookups Id="0">
          <Position Id="0" Name="Quarterback"/>
          <Position Id="1" Name="Runningback"/>
        </Lookups>

        <Teams Id="0">
          <Team Id="0" Name="Chicago">
            <Player Id="0" Name="QB Bears" PositionId="0">
              <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />
              <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />
            </Player>
            <Player Id="1" Name="RB Bears" PositionId="1">
              <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />
              <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />
              <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />
            </Player>
          </Team>
        </Teams>

    OK, so we have some data, we have a way to read/write that data, and we have a friendly way of representing that data. Now what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.

    Read the article

  • State Changes in a Component Based Architecture [closed]

    - by Maxem
    I'm currently working on a game using the naive component-based architecture (entities are a bag of components; entity.Update() calls Update on each updateable component). While the addition of new features is really simple, it makes a few things really difficult: a) multithreading / concurrency, b) networking, c) unit testing.

    Multithreading / concurrency is difficult because I basically have to do poor man's concurrency (running the entity updates in separate threads while locking only the stuff that crashes, like lists, and ignoring the staleness of read state, where some states are already updated and others aren't). Networking: there are no explicit state changes that I could efficiently push over the net. Unit testing: all updates may or may not conflict, so automated testing is at least awkward.

    I was thinking about these issues a bit and would like your input on these changes / ideas:

    1. Switch from the naive CBA to a CBA with subsystems that work on lists of components
    2. Make all state changes explicit
    3. Combine 1 and 2 :p

    Example world update:

        statePostProcessing.Wait() // ensure that post processing has finished
        Apply(postProcessedState)
        state = new StateBag()
        Concurrently(
            () => LifeCycleSubSystem.Update(state), // populates the state bag
            () => MovementSubSystem.Update(state),  // populates the state bag
            ....
        )
        statePostProcessing = Future(() => PostProcess(state))
        statePostProcessing.Start()
        // Tick is finished, the post processing happens in the background

    So basically the changes are (consistently) based on the data from the last tick; the post processing can a) generate network packets and b) fix conflicts / remove useless changes (example: an entity has been destroyed, so ignore its movement, etc.).

    EDIT: To clarify the granularity of the state changes: if I save these post-processed state bags and apply them to an empty world, I see exactly what happened in the game these state bags originated from - "free" replay capability.

    EDIT2: I guess I should have used the term Event instead of State Change, and point out that I kind of want to use the Event Sourcing pattern.
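    A minimal C# sketch of the "explicit state changes" idea (all type and member names below are illustrative, not from the post): during the parallel part of the tick, subsystems only append change objects to a bag, and applying the bag afterwards is the single place that mutates the world. That same list of changes is what the post-processing step can serialize for networking or store for replays.

        using System.Collections.Concurrent;
        using System.Collections.Generic;

        // Stand-in for the game world; only StateBag.ApplyAll mutates it.
        public sealed class World
        {
            public readonly Dictionary<int, float[]> Positions = new Dictionary<int, float[]>();
        }

        public interface IStateChange
        {
            void Apply(World world);
        }

        public sealed class MoveEntity : IStateChange
        {
            public int EntityId;
            public float Dx, Dy;

            public void Apply(World world)
            {
                float[] position;
                if (world.Positions.TryGetValue(EntityId, out position))
                {
                    position[0] += Dx;
                    position[1] += Dy;
                }
                // If the entity was destroyed earlier in the bag, it no longer has a
                // position entry and the stale movement is silently dropped.
            }
        }

        public sealed class StateBag
        {
            // Subsystems running concurrently can append without extra locking.
            private readonly ConcurrentQueue<IStateChange> changes = new ConcurrentQueue<IStateChange>();

            public void Add(IStateChange change)
            {
                changes.Enqueue(change);
            }

            // Runs single-threaded after the tick; this is also the natural hook for
            // serializing the changes (networking) or persisting them (replays).
            public void ApplyAll(World world)
            {
                IStateChange change;
                while (changes.TryDequeue(out change))
                    change.Apply(world);
            }
        }

    In this shape, MovementSubSystem.Update(state) would only ever call state.Add(new MoveEntity { EntityId = id, Dx = dx, Dy = dy }); the world itself stays read-only until ApplyAll runs.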

    Read the article

  • Parallels: How to see a Mac-hosted website from Windows?

    - by Jim Miller
    I'm traveling at the moment, and have moved one of the websites I'm working on to my MBP so I can work on it without a network connection. I've made an addition to the Mac's /etc/hosts file pointing the domain name to 127.0.0.1, and all's well. I now want to get into Parallels and check the site from Windows browsers. How do I get things so that the Windows browser will understand the domain name and access the site? The Windows image obviously doesn't recognize / can't find the Mac's /etc/hosts file, and references to 127.0.0.1 in the Windows hosts file just as obviously point to Windows, not the Mac. Any advice out there? Thanks!

    Read the article

  • Object design problem for simple school application

    - by Aragornx
    I want to create a simple school application that provides grades, notes, presence, etc. for students, teachers and parents. I'm trying to design objects for this problem and I'm a little bit confused, because I'm not very experienced in class design. Some of my present objects are:

        class PersonalData {
            private String name;
            private String surename;
            private Calendar dateOfBirth;
            [...]
        }

        class Person {
            private PersonalData personalData;
        }

        class User extends Person {
            private String login;
            private char[] password;
        }

        class Student extends Person {
            private ArrayList<Counselor> counselors = new ArrayList<>();
        }

        class Counselor extends Person {
            private ArrayList<Student> children = new ArrayList<>();
        }

        class Teacher extends Person {
            private ArrayList<SchoolClass> schoolClasses = new ArrayList<>();
            private ArrayList<Subject> subjects = new ArrayList<>();
        }

    This is of course a general idea. But I'm sure it's not the best way. For example, I want one person to be able to be both a Teacher and a Parent (Counselor), and the present approach forces me to have two Person objects. I want the user, after successfully logging in, to get all the roles that they have (Student, or Teacher, or (Teacher & Parent)). I think I should make and use some interfaces, but I'm not sure how to do this right. Maybe like this:

        interface Role {
        }

        interface TeacherRole extends Role {
            void addGrade(Student student, Grade grade, [...]);
        }

        class Teacher implements TeacherRole {
            private Person person;
            [...]
        }

        class User extends Person {
            ArrayList<Role> roles = new ArrayList<>();
        }

    Please, if anyone could help me to make this right, or maybe just point me to some literature/articles that cover practical object design.

    Read the article
