Search Results

Search found 80218 results on 3209 pages for 'client side data'.


  • Solutions for software using many calls to a server

    - by Val
    I am developing software that makes many calls to a server. The client side is a Silverlight application; almost every time a user clicks a button, it sends 1-5 WCF calls to the server. There can be up to a dozen or so users at a time. The server is a database server that serves data to the clients. I am an intermediate-level developer and am thinking about caching some data on the client and syncing my changes from time to time. Are there any established solutions, technologies, or patterns for this?
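    A common shape for what the poster describes is a client-side read-through cache plus a periodic sync of locally recorded changes. The sketch below is only an illustration of that pattern in Python (the real client would be C#/Silverlight calling WCF service proxies), and the fetch/push callables are hypothetical stand-ins for the actual service calls:

        import time

        class CachingClient:
            """Minimal read-through cache with a pending-change queue (illustration only)."""

            def __init__(self, fetch, push, ttl_seconds=30):
                self._fetch = fetch        # callable: key -> value; stands in for a service/WCF call
                self._push = push          # callable: list of (key, value) changes -> None
                self._ttl = ttl_seconds
                self._cache = {}           # key -> (value, time fetched)
                self._pending = []         # local changes not yet sent to the server

            def get(self, key):
                entry = self._cache.get(key)
                if entry is not None and time.time() - entry[1] < self._ttl:
                    return entry[0]        # fresh enough: no server round trip
                value = self._fetch(key)   # miss or stale: one call instead of several
                self._cache[key] = (value, time.time())
                return value

            def update(self, key, value):
                self._cache[key] = (value, time.time())
                self._pending.append((key, value))   # record locally, sync later

            def sync(self):
                if self._pending:
                    self._push(self._pending)        # one batched call instead of 1-5 per click
                    self._pending = []

    Calling sync() on a timer, or when the pending list reaches a threshold, batches the writes and cuts the per-click call count. Patterns worth searching for are read-through/write-behind caching and the Unit of Work pattern.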

    Read the article

  • apt-cacher ng / upgrade fails from client

    - by todayis23
    I'm running apt-cacher ng on an Ubuntu Hardy server and am trying to upgrade the packages on a Natty client (which was originally a Maverick install). I didn't change anything on the server. On the client I tried two setups.

    Setup 1: I configured APT to use an http proxy. "apt-get update" worked, but very slowly, and the acng-report.html on the server shows an entry that looks correct. After confirming "Install these packages without verification [y/N]? y", "apt-get upgrade" fails with the message: Err http://archive.ubuntu.com/ubuntu/ natty-updates/main libnux-0.9-common all 0.9.48-0ubuntu1.1 503 Name or service not known. The GUI update manager fails as well, warning that untrusted packages will be installed.

    Setup 2: I edited sources.list and added the server in the correct format to all sources. "apt-get update" is again very slow, and I get a lot of errors like this: W: Failed to fetch http://[::ffff:10.10.10.10]:3142/archive.ubuntu.com/ubuntu/dists/natty/main/binary-i386/Packages 403 Forbidden file type or location. After that, "apt-get upgrade" says: 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    What could be wrong? Is it possible to use apt-cacher ng on an older system to upgrade newer systems? Thank you in advance!

    Read the article

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor and, consequently, a world format. If I were to handle the game "world" being created as just a layered set of structures, in either top or side view, most things would be fairly simple to do. But since this editor is meant for third parties, I have no idea how big a world someone will want to make, and I need to keep in mind that eventually it will become simply too much to check, handle and compare things that are happening completely away from the player position. I know the solution is to subdivide my world into sub-regions and stream them on the fly, loading and unloading resources and other data; that way a virtually infinite game area is achievable. But while I know theoretically what to do, I have a few questions I hoped to get answered for some hints on the topic:

    1. The logical way to handle the regions is some kind of grid. Would you pick evenly distributed blocks of equal size, or would you let the user subdivide areas by taste with irregularly sized rectangles?
    2. In the case of an even grid, would you use some kind of block/chunk neighbouring system to check when the player crosses a boundary, or just put all the chunks in a simple array?
    3. A region being a different data structure than its owning "game world", when streaming a region would you hand its objects over to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach?
    4. Introducing the subdivision approach to the project, and already having a multi-layered scene-graph structure in place, how would I make it support the new concept? Would you have the parent node own the layers as children and replicate, in each layer node, a node per region? Or the opposite: the parent node owns all possible regions, and each region has multiple layers as children? Or would you keep the region logic outside the graph completely (compatible with the first suggestion in Q.3)?
    5. When I say virtually infinite worlds, I of course mean it under the constraints of variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think it is sane to think beyond that? I think it is OK to stick to this limit, since it will never be reached easily.
    6. As for when to stream a region, I'm implementing it as a collection of watcher cameras that the streaming system uses to know what to load/unload. The problem is that I will need some kind of warps/teleports built into my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to load into memory any region that can be reached by a warp within a radius of the player?

    Sorry for the huge question; any answers are helpful!
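    A minimal sketch of the "even grid of equal-size chunks" option from question 1, with streaming driven by the watcher cameras from question 6 (Python used purely for illustration; the chunk size and all names are hypothetical):

        CHUNK_SIZE = 256.0          # world units per chunk edge; an assumed value

        def chunk_coords(x, y):
            """Map a world position to integer chunk coordinates."""
            return (int(x // CHUNK_SIZE), int(y // CHUNK_SIZE))

        class ChunkManager:
            """Keeps only the chunks near any watcher loaded; everything else is unloaded."""

            def __init__(self, load_chunk, unload_chunk, radius=1):
                self._load = load_chunk      # callable: (cx, cy) -> chunk data
                self._unload = unload_chunk  # callable: ((cx, cy), chunk data) -> None
                self._radius = radius        # how many chunks around a watcher stay resident
                self._loaded = {}            # (cx, cy) -> chunk data

            def _wanted(self, watchers):
                wanted = set()
                for wx, wy in watchers:                      # watchers: world-space positions
                    cx, cy = chunk_coords(wx, wy)
                    for dx in range(-self._radius, self._radius + 1):
                        for dy in range(-self._radius, self._radius + 1):
                            wanted.add((cx + dx, cy + dy))
                return wanted

            def update(self, watchers):
                wanted = self._wanted(watchers)
                for key in wanted - set(self._loaded):       # stream in newly needed chunks
                    self._loaded[key] = self._load(key)
                for key in set(self._loaded) - wanted:       # stream out chunks nobody watches
                    self._unload(key, self._loaded.pop(key))

    A teleport target can be handled with the same machinery: add a temporary watcher at the warp destination a moment before the jump, so the destination chunks are already resident by the time the player arrives.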

    Read the article

  • Optimising Server-Side Paging - Part II

    The second part of this series compares four methods of obtaining the total number of rows in a paged data set.

    Read the article

  • Client-Server MMOG & data structures sync when joining / playing

    - by plang
    After reading a few articles on MMOG architecture, there is still one point on which I cannot find much information: how you keep server data in sync on the client, both when you join and while you play. A pretty vague question, I agree, so let me refine it. Let's say we have an MMOG virtual world subdivided into geographical cells. A player in a cell is mostly interested in what happens in the cell itself and in all the surrounding cells, not more. When joining the game for the first time, the only thing we can do is send some sort of "database dump" of the interesting cells to the client. While playing, I guess it would be very inefficient to do the same thing regularly; I imagine the best thing to do is to send "deltas" to the client, which would keep the local database in sync. Now let's say the player moves and arrives in another cell. The surrounding cells change, and for all the new cells the player subscribes to, the same technique used when joining the game has to be applied: some sort of "database dump". This mechanic of joining/moving in a cell-based MMOG virtual world interests me, and I was wondering if there are tried and tested techniques in this domain. Thanks!
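    The pattern described here is usually called snapshot-plus-delta state replication per interest cell. A minimal sketch of the server-side bookkeeping, with all names hypothetical:

        class Cell:
            """One geographical cell: current state plus a numbered change log."""

            def __init__(self):
                self.state = {}        # entity id -> entity data
                self.version = 0
                self.changes = []      # (version, entity_id, new data or None for removal)

            def apply(self, entity_id, data):
                self.version += 1
                if data is None:
                    self.state.pop(entity_id, None)
                else:
                    self.state[entity_id] = data
                self.changes.append((self.version, entity_id, data))

            def snapshot(self):
                """Full dump sent when a client first subscribes to this cell."""
                return {"version": self.version, "state": dict(self.state)}

            def delta_since(self, version):
                """Incremental update sent to clients that are already subscribed."""
                return {"version": self.version,
                        "changes": [c for c in self.changes if c[0] > version]}

    On the client, joining or entering a new cell triggers snapshot(), while already-subscribed cells only ever receive delta_since(last_seen_version); pruning the change log once every subscriber has acknowledged a version keeps it bounded. Terms worth searching for are "interest management", "area of interest" and "state replication".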

    Read the article

  • Oracle Virtual Desktop Client with USB smart card reader

    - by wim.coekaerts
    I have a Sun Ray thin client at home which I use religiously; I use a Sun Ray 3i at work as my main desktop, and I just always take my smart card home and happily continue with the hot-desking feature. We released a software version of the Sun Ray client called Oracle Virtual Desktop Client (OVDC); there are versions for Windows, Linux and Mac OS X. I have a Mac mini at home and I installed OVDC on it, which of course works great, but since I like to re-connect to the session I use at work, I wanted to try out the external USB smart card reader feature. I ordered a cute, low-cost device online and tried it out. As expected, it worked out of the box without -any- configuration. I took the device, plugged it into my Mac mini, started OVDC, plugged in my smart card, and I got the password screen (screensaver) to get into my Sun Ray session on my server at work. Nothing new here, this is a feature that has been in the product for a while, but I had never tried it before, it works out of the box, it is super easy, and I just felt like sharing :-) Here are a few pictures: (1) login screen (2) smart card reader without card (3) password screen (4) smart card reader with card

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today's web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for fully ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of the LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data, which results in the need to orchestrate transaction processing across multiple service calls. CloudTran can handle these "multi-client" transactions at speed with no loss of ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today's web-scale applications and services, but it introduces new architectural considerations for maintaining scalability in light of increased network loads and data movement. Without CloudTran, developers face an incredibly difficult task in ensuring data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected]. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: Coherence, cloudtran, cache, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Loadbalancing Questions

    - by Van Holtz
    I have been learning networking for about 4 months. I wrote a single standalone multiplayer server and succeeded with an authoritative approach. Now I want to extend it by splitting the single server into clusters, so that even more players can log in without running into latency issues. I have prototyped the load-balancing server and it is running pretty well so far. This is my architecture: I have a master server which acts as a proxy, and every sub-server (chat, login, game) connects to the master server, as do all the clients. When a client connects, a request flows like this: the client sends the request to the MS (master); the MS decides which SS (sub-server) to forward it to and forwards the request; the SS analyzes the message and sends its response back to the MS; the MS decides which client to forward it to and forwards the response to that client. Well, it looks like it is going through a lot of stages; it takes twice as long to process a message as the single-server approach. I feel like my model isn't the best, or I may be doing something wrong. Is there a better model, or the one used in professional games? I still want a master/sub-server approach. I just want to make sure I'm going in the right direction before writing all my code. Thanks for any answer :)
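    One widely used alternative to proxying every message is to let the master act only as a directory: it assigns a sub-server and hands the client that address, and all further traffic goes directly between client and sub-server. The sketch below is a hedged, minimal illustration in Python with invented addresses, not a drop-in design:

        import itertools

        class MasterServer:
            """Directory/handoff role only: assigns a sub-server, then gets out of the data path."""

            def __init__(self, sub_servers):
                # sub_servers: {"game": [("10.0.0.2", 7001), ...], "chat": [...], ...}
                self._pools = {kind: itertools.cycle(addrs)
                               for kind, addrs in sub_servers.items()}

            def assign(self, kind):
                """Round-robin pick; the client then connects to this address directly."""
                return next(self._pools[kind])

        # Usage sketch: the client asks once per service, caches the answer,
        # and all further traffic goes client <-> sub-server with no extra hop.
        master = MasterServer({"game": [("10.0.0.2", 7001), ("10.0.0.3", 7001)],
                               "chat": [("10.0.0.4", 7100)]})
        game_addr = master.assign("game")

    Compared with forwarding every message, this keeps the master as a thin service for login, matchmaking and server discovery while gameplay packets take a single hop; the trade-off is that sub-servers must then authenticate clients themselves, for example with a token issued by the master.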

    Read the article

  • Charging by the hour/project

    - by thesam18888
    This is related to a question I asked earlier: how to end a relationship with a client without pissing them off? What are your obligations when charging by the hour vs. charging by the project? If you agree to take on a project, give a rough estimate that it might take 10 days of work, and charge £X per hour, are you obligated to work for free after those 10 days are up and you have still not managed to complete the project due to unanticipated issues? What if you have delivered the project but bugs are found: should you fix those bugs for free once the 10 days are up, or should you charge your client? Also, for the above project, what should happen if you start the project but, after the 10 days, for whatever reason you have to give up and tell your client that you cannot do it anymore? I realise that this does nothing to build your reputation and relationship with the client, but are you obligated to pay back the money paid to you, or do you just deliver the half/nearly completed source code and help them find someone else to complete it? I am asking the above questions because I am very new to freelancing and would like to know how to deal with these situations if they ever crop up. Thanks!

    Read the article

  • Turn Your Desktop to the ‘Dark Side’ with the Moonlight Theme for Windows 7

    - by Asian Angel
    Do you love the peaceful, calming look of moonlit scenery? Then you will definitely want to download the Moonlight Theme for Windows 7. This awesome theme comes with sixteen wallpapers full of moonlit goodness that will have your desktop howling at the nighttime skies. Download the Moonlight Theme [Windows 7 Personalization Gallery]

    Read the article

  • Why do static data members have to be defined separately, outside the class, in C++ (unlike Java)?

    - by iammilind
    class A {
        static int foo () {}  // ok
        static int x;         // <--- needs to be defined separately in a .cpp file
    };

    // in exactly one .cpp file:
    int A::x;

    I don't see the need for A::x to be defined separately in a .cpp file (or in the same file, for templates). Why can't A::x be declared and defined at the same time? Has it been forbidden for historical reasons? My main question is: would any functionality be affected if static data members could be declared and defined at the same time (as in Java)?

    Read the article

  • The Back Side of Exceptions

    This article illustrates the risks of exceptions being thrown where it is generally not expected, e.g., a library function or a finally block, and shows ways to prevent some insidious errors (such as inconsistency of program data or loss of exception information).
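    One of the insidious cases hinted at, loss of exception information, is easy to reproduce: an exception raised from cleanup code in a finally block replaces the exception that was already propagating. A small illustration in Python (the function names are made up for the example):

        def close_noisily():
            # Stands in for library/cleanup code that can itself fail.
            raise RuntimeError("flush failed")

        def update_record():
            try:
                raise ValueError("bad input")   # the original, interesting error
            finally:
                close_noisily()                 # raises too, and wins

        try:
            update_record()
        except Exception as e:
            # Only the cleanup failure is reported; the ValueError is displaced
            # (Python keeps it on e.__context__, but many languages do not).
            print(type(e).__name__, e)          # RuntimeError flush failed

    The usual defence is to keep finally (and destructor/Dispose-style) code from throwing at all, for example by wrapping it in its own try/except and logging the secondary failure, so the primary error and the program's data stay consistent.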

    Read the article

  • How can I force the server socket to re-accept a request from a client?

    - by Roman
    For those who do not want to read a long question, here is a short version: a server has an open socket for a client. The server then gets a request to open a socket from the same client IP and client port. I want to force the server not to refuse such a request, but to close the old socket and open a new one. How can I do it?

    And here is the long (original) question. I have the following situation: there is an established connection between a server and a client. Then an external piece of software (Bonjour) tells my client that it no longer sees the server on the local network. The client does nothing about that, for the following reasons: if Bonjour does not see the server, it does not necessarily mean the client cannot see the server; and even if the client trusts Bonjour and closes the socket, that does not improve the situation ("having no open socket" is worse than "having a potentially bad socket"). So the client does nothing if the server becomes invisible to Bonjour. But then the server re-appears in Bonjour, and Bonjour notifies the client. At that point the following situations are possible:

    1. The server reappears on a new IP address, so the client needs to open a new socket to be able to communicate with it.
    2. The server reappears on the old IP address. Here there are two sub-cases:
    2.1. The server was restarted (switched off and then on again), so it does not remember the old socket (which is still used by the client). The client needs to close the old socket and open a new one (to the same server IP address and the same server port).
    2.2. There was a temporary network problem and the server was running the whole time, so the old socket is still usable. In this case the client does not really need to close the old socket and reopen a new one.

    To simplify my life I decided to close and reopen the socket on the client side in every case (even though it is not strictly needed in the last situation). But that solution can cause a problem: if I close the socket on the client side and then try to reopen a socket from the same client IP and client port, the server will not accept the request for a new socket, because it thinks such a socket already exists. Can I write the server in such a way that it does not refuse such requests? For example, if the server sees a request for a socket from a client IP and client port it already has a socket for, it should close the existing socket associated with that client IP and port and then accept the new one.
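    In many cases the operating system eventually sorts out the stale TCP connection on its own, but the application usually also keeps its own map of client connections, and that is where a duplicate (client IP, client port) has to be handled explicitly. A minimal sketch, with all names hypothetical, of an accept loop that drops the stale socket instead of refusing the new one:

        import socket
        import threading

        class Server:
            """Accept loop that replaces any existing connection from the same (ip, port)."""

            def __init__(self, host="0.0.0.0", port=9000):
                self._listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                self._listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                self._listener.bind((host, port))
                self._listener.listen(16)
                self._clients = {}           # (client_ip, client_port) -> socket
                self._lock = threading.Lock()

            def serve_forever(self):
                while True:
                    conn, addr = self._listener.accept()
                    with self._lock:
                        old = self._clients.pop(addr, None)
                        if old is not None:
                            old.close()      # drop the stale socket, keep the new one
                        self._clients[addr] = conn
                    threading.Thread(target=self._handle, args=(conn, addr), daemon=True).start()

            def _handle(self, conn, addr):
                try:
                    while True:
                        data = conn.recv(4096)
                        if not data:
                            break
                        conn.sendall(data)   # placeholder echo logic
                finally:
                    with self._lock:
                        if self._clients.get(addr) is conn:
                            del self._clients[addr]
                    conn.close()

    In practice, most protocols identify a returning client with a session token sent after connecting rather than by its source address, since the client's ephemeral port usually changes on reconnect anyway.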

    Read the article

  • Propel-load-data is causing an error

    - by Jon Winstanley
    I am trying to load fixtures, but my project is erroring at the CLI and starting the indexer process. I have tried: rebuilding the schema and model; emptying the database and starting again; clearing the cache; validating the YML file and trying much simpler data dumps. My platform is Symfony 1.0 on Windows. Someone else also seems to have had the same issue in the past.

        C:\web\my_project>symfony propel-load-data backend
        >> propel load data from "C:\web\my_project\data\fixtures"
        PHP Warning: session_start(): Cannot send session cookie - headers already sent by (output started at C:\php\PEAR\symfony\vendor\pake\pakeFunction.php:366) in C:\php\PEAR\symfony\storage\sfSessionStorage.class.php on line 77
        Warning: session_start(): Cannot send session cookie - headers already sent by (output started at C:\php\PEAR\symfony\vendor\pake\pakeFunction.php:366) in C:\php\PEAR\symfony\storage\sfSessionStorage.class.php on line 77
        PHP Warning: session_start(): Cannot send session cache limiter - headers already sent (output started at C:\php\PEAR\symfony\vendor\pake\pakeFunction.php:366) in C:\php\PEAR\symfony\storage\sfSessionStorage.class.php on line 77
        Warning: session_start(): Cannot send session cache limiter - headers already sent (output started at C:\php\PEAR\symfony\vendor\pake\pakeFunction.php:366) in C:\php\PEAR\symfony\storage\sfSessionStorage.class.php on line 77

    Read the article

  • How does LinqPad support WCF Data Services?

    - by user341127
    LinqPad supports WCF Data Services. If you give it a URL such as http://services.odata.org/Northwind/Northwind.svc/, it will list all the available data objects and you can query them. I guess LinqPad generates the available data classes at run time via Reflection.Emit. I am wondering if anyone can show me how to do this, or maybe someone has done it before. Any feedback is appreciated. Ying
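    Not a .NET-specific answer, but as a cross-language illustration of the general idea, generating data classes at run time from service metadata, here is a tiny Python sketch; the metadata shape is invented, and in the .NET case the equivalent work would be done with Reflection.Emit as the poster guesses:

        # Cross-language analogy only: Python's type() plays the role Reflection.Emit plays in .NET.
        def build_entity_classes(metadata):
            """metadata: {"Customers": ["CustomerID", "CompanyName"], ...} (hypothetical shape)."""
            classes = {}
            for entity_name, properties in metadata.items():
                def __init__(self, **kwargs):
                    for prop in type(self)._properties:
                        setattr(self, prop, kwargs.get(prop))
                classes[entity_name] = type(entity_name, (object,),
                                            {"__init__": __init__, "_properties": list(properties)})
            return classes

        # Usage sketch with made-up metadata resembling the Northwind service:
        entities = build_entity_classes({"Customers": ["CustomerID", "CompanyName"]})
        alfki = entities["Customers"](CustomerID="ALFKI", CompanyName="Alfreds Futterkiste")

    The essential steps are the same in both worlds: fetch the service's metadata document, walk its entity definitions, and emit one type per entity with a property per declared field.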

    Read the article

  • Test data generators / quickest route to generating solid, non-repetitive, but not-real database sam

    - by Jamo
    I need to build a quick feasibility test / proof-of-concept of a remote database for a client, which will be populated with mostly typical Company and People data (names, addresses, etc.); 150K records or so. The sample databases mentioned here were helpful: http://stackoverflow.com/questions/57068/good-databases-with-sample-data ...but I'd like to be able to generate sample data like this easily for less-typical datasets as well. Does anyone have any recommendations for off-the-shelf (or off-the-web) solutions?
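    If an off-the-shelf generator doesn't fit the less-typical datasets, rolling your own is quick. The sketch below assumes the Python Faker package (one of several such libraries) and a made-up CSV layout, so treat it as a starting point rather than a recommendation:

        import csv
        from faker import Faker   # third-party package: pip install Faker

        fake = Faker()
        Faker.seed(42)            # reproducible runs make debugging the prototype easier

        def generate_people(path, n=150_000):
            with open(path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["first_name", "last_name", "company",
                                 "street", "city", "zip", "email"])
                for _ in range(n):
                    writer.writerow([
                        fake.first_name(),
                        fake.last_name(),
                        fake.company(),
                        fake.street_address(),
                        fake.city(),
                        fake.postcode(),
                        fake.email(),
                    ])

        generate_people("people_sample.csv")

    Less-typical columns can be mixed in with plain random choices or custom providers, and the same loop can emit INSERT statements instead of CSV if loading directly into the remote database is easier.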

    Read the article

  • Resources related to data-mining and gaming on social networks

    - by darren
    Hi all. I'm interested in the problem of pattern mining among players of social networking games, for example detecting cheaters in a game, given a company's user database. So far I have been following the usual recipe for a data mining project: construct a data warehouse that aggregates significant information; select a classifier and train it with a subsection of records from the warehouse; validate the classifier with another test set; lather, rinse, repeat. Surprisingly, I've found very little in this area regarding literature, best practices, etc., so I am hoping to crowdsource the information-gathering problem here. Specifically, what I'm looking for:

    1. What classifiers have worked well for this type of pattern mining? The data seems highly temporal: users playing games, users receiving rewards, users transferring prizes, etc.
    2. Are there any widely agreed-upon attributes specific to social networking / gaming data?
    3. What is a practical amount of information to consider? One problem I've run into is data overload, where queries and data cleansing may take days to complete.
    4. Related to the point above, what hardware resources are required to produce results? I've found it difficult to estimate the amount of computing power I will need for production use. It has become apparent that a white box in the corner does not have enough horsepower for such a project. Are companies generally resorting to cloud solutions? Are they buying clusters?

    Basically, any resources (theoretical, academic, or practical) about implementing a social networking / gaming pattern-mining program would be very much appreciated. Thanks.
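    On the first question, tree ensembles (random forests, gradient boosting) are a common first choice for this kind of tabular behavioural data, because they cope with mixed feature scales and skewed class balance. The sketch below uses scikit-learn with invented per-player aggregate features; it only shows the shape of the pipeline, not a validated cheat detector:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split

        # Hypothetical per-player aggregates pulled from the warehouse:
        # [sessions_per_day, rewards_per_session, prizes_sent, distinct_recipients, account_age_days]
        X = np.array([
            [3.0,  0.4,  1,  1, 420],
            [2.5,  0.3,  0,  0, 800],
            [40.0, 6.0, 55, 30,   7],   # burst of activity from a very young account
            [38.0, 5.5, 60, 25,   5],
            # ... thousands more rows in practice
        ])
        y = np.array([0, 0, 1, 1])      # 0 = legitimate, 1 = flagged by manual review

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.5, stratify=y, random_state=0)

        model = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
        model.fit(X_train, y_train)
        print(classification_report(y_test, model.predict(X_test)))

    The temporal aspect usually ends up encoded in the features (rolling counts, deltas between sessions, time-of-day histograms) rather than in the classifier itself, and class_weight="balanced" (or resampling) matters because cheaters are normally a small minority of the records.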

    Read the article

  • improve my code for collapsing a list of data.frames

    - by romunov
    Dear StackOverFlowers (flowers in short), I have a list of data.frames (walk.sample) that I would like to collapse into a single (giant) data.frame. While collapsing, I would like to mark (adding another column) which rows have come from which element of the list. This is what I've got so far. This is the list of data.frames that needs to be collapsed/stacked:

        > walk.sample
        [[1]]
             walker        x         y
        1073      3 228.8756 -726.9198
        1086      3 226.7393 -722.5561
        1081      3 219.8005 -728.3990
        1089      3 225.2239 -727.7422
        1032      3 233.1753 -731.5526

        [[2]]
             walker        x         y
        1008      3 205.9104 -775.7488
        1022      3 208.3638 -723.8616
        1072      3 233.8807 -718.0974
        1064      3 217.0028 -689.7917
        1026      3 234.1824 -723.7423

        [[3]]
        [1] 3

        [[4]]
             walker        x         y
        546       2 629.9041  831.0852
        524       2 627.8698  873.3774
        578       2 572.3312  838.7587
        513       2 633.0598  871.7559
        538       2 636.3088  836.6325
        1079      3 206.3683 -729.6257
        1095      3 239.9884 -748.2637
        1005      3 197.2960 -780.4704
        1045      3 245.1900 -694.3566
        1026      3 234.1824 -723.7423

    I have written a function to add a column that denotes from which element the rows came, followed by appending it to an existing data.frame:

        collapseToDataFrame <- function(x) { # collapse list to a dataframe with a twist
          walk.df <- data.frame()
          for (i in 1:length(x)) {
            n.rows <- nrow(x[[i]])
            if (length(x[[i]]) > 1) {
              temp.df <- cbind(x[[i]], rep(i, n.rows))
              names(temp.df) <- c("walker", "x", "y", "session")
              walk.df <- rbind(walk.df, temp.df)
            } else {
              cat("Empty list", "\n")
            }
          }
          return(walk.df)
        }

        > collapseToDataFrame(walk.sample)
        Empty list
        Empty list
             walker         x          y session
        3         1 -604.5055 -123.18759       1
        60        1 -562.0078  -61.24912       1
        84        1 -594.4661  -57.20730       1
        9         1 -604.2893 -110.09168       1
        43        1 -632.2491  -54.52548       1
        1028      3  240.3905 -724.67284       1
        1040      3  232.5545 -681.61225       1
        1073      3  228.8756 -726.91980       1
        1091      3  209.0373 -740.96173       1
        1036      3  248.7123 -694.47380       1

    I'm curious whether this can be done more elegantly, with perhaps do.call() or some other more generic function?

    Read the article

  • VBA-Sorting the data in a listbox, sort works but data in listbox not changed

    - by Mike Clemens
    A listbox is passed, the data is placed in an array, the array is sorted, and then the data is placed back in the listbox. The part that does not work is putting the data back in the listbox. It's as if the listbox is being passed by value instead of by ref. Here's the sub that does the sort, and the line of code that calls it:

        Private Sub SortListBox(ByRef LB As MSForms.ListBox)
            Dim First As Integer
            Dim Last As Integer
            Dim NumItems As Integer
            Dim i As Integer
            Dim j As Integer
            Dim Temp As String
            Dim TempArray() As Variant

            ReDim TempArray(LB.ListCount)
            First = LBound(TempArray)        ' this works correctly
            Last = UBound(TempArray) - 1     ' this works correctly
            For i = First To Last
                TempArray(i) = LB.List(i)    ' this works correctly
            Next i
            For i = First To Last
                For j = i + 1 To Last
                    If TempArray(i) > TempArray(j) Then
                        Temp = TempArray(j)
                        TempArray(j) = TempArray(i)
                        TempArray(i) = Temp
                    End If
                Next j
            Next i
            ' data is now sorted
            LB.Clear                         ' this doesn't clear the items in the listbox
            For i = First To Last
                LB.AddItem TempArray(i)      ' this doesn't work either
            Next i
        End Sub

        Private Sub InitializeForm()
            ' There's code here to put data in the list box
            Call SortListBox(FieldSelect.CompleteList)
        End Sub

    Thanks for your help.

    Read the article

  • Problem with core data migration mapping model

    - by dpratt
    I have an iPhone app that uses Core Data for storage. I have successfully deployed it, and now I'm working on the second version. I've run into a problem with the data model that will require a few very simple data transformations at the time the persistent store gets upgraded, so I can't just use the default inferred mapping model. My object model is stored in an .xcdatamodeld bundle, with versions 1.0 and 1.1 next to each other; version 1.1 is set as the active version. Everything works fine when I use the default migration behavior and set NSInferMappingModelAutomaticallyOption to YES: my sqlite storage gets upgraded from the 1.0 version of the model, and everything is good except for, of course, the few transformations I need done. As an additional experimental step, I added a new mapping model to the Core Data model bundle and have made no changes to what Xcode generated. When I run my app (with an older version of the data store), I get the following:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Object's persistent store is not reachable from this NSManagedObjectContext's coordinator'

    What am I doing wrong? Here's my code to get the managed object model and the persistent store coordinator:

        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
            if (_persistentStoreCoordinator != nil) {
                return _persistentStoreCoordinator;
            }
            _persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
                initWithManagedObjectModel:[self managedObjectModel]];
            NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory]
                stringByAppendingPathComponent:@"gti_store.sqlite"]];
            NSError *error;
            NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                     [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
                                     [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];
            if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                    configuration:nil URL:storeUrl options:options error:&error]) {
                NSLog(@"Error creating persistent store coordinator - %@", [error localizedDescription]);
            }
            return _persistentStoreCoordinator;
        }

        - (NSManagedObjectModel *)managedObjectModel {
            if (_managedObjectModel == nil) {
                _managedObjectModel = [[NSManagedObjectModel mergedModelFromBundles:nil] retain];
                NSDictionary *entities = [_managedObjectModel entitiesByName];
                // add a sort descriptor to the 'Foo' fetched property so that it can have an ordering -
                // you can't add these from the graphical Core Data modeler
                NSEntityDescription *entity = [entities objectForKey:@"Foo"];
                NSFetchedPropertyDescription *fetchedProp = [[entity propertiesByName] objectForKey:@"orderedBar"];
                NSSortDescriptor *sortDescriptor = [[[NSSortDescriptor alloc] initWithKey:@"index" ascending:YES] autorelease];
                NSArray *sortDescriptors = [NSArray arrayWithObjects:sortDescriptor, nil];
                [[fetchedProp fetchRequest] setSortDescriptors:sortDescriptors];
            }
            return _managedObjectModel;
        }

    Read the article

  • Generic Data Structure Description Language

    - by Jon Purdy
    I am wondering whether there exists any declarative language for arbitrarily describing the format and semantics of a data structure, that can be compiled to a specific implementation of that structure in any of a set of target languages. That is, something like a generic data definition language but geared toward describing arbitrary data structures such as vectors, lists, trees, etc., and the semantics of operations on those structures. I ask because I had an idea for a feasible implementation of this concept, and I'm just wondering whether it's worth it, and, consequently, whether it's been done before. Another, slightly more abstract question: is there any real difference between the normative specification of a data structure (what it does) and its implementation (how it does it)?

    Read the article

  • Visibility of Class field-data of Mouse Clicked ImageButton located within WrapPanel

    - by Bill
    I am attempting to obtain the class data behind an ImageButton that is mouse-clicked; the ImageButton is located within a WrapPanel filled with ImageButtons. The problem I am having is obtaining visibility of the field data within the class behind the ImageButton. Although I can see the class, I can neither see nor access the field data. Can anyone please point me in the right direction?

        // Handles the ImageButton mouseClick event within the WrapPanel.
        private void SolarSystem_Click(Object sender, RoutedEventArgs e)
        {
            FrameworkElement fe = e.OriginalSource as FrameworkElement;
            SelectedPlanet PlanetSelected = new SelectedPlanet(fe);
            PlanetSelected.Owner = this;
            MessageBox.Show(PlanetSelected.PlanetName);
        }

        // Used to initiate instance of Class and some field data.
        public SelectedPlanet(FrameworkElement fe)
        {
            InitializeComponent();
            string sPlanetName = ((PlanetClass)(fe)).PlanetName;
            return sPlanetName
        }

        // Class Data
        public class PlanetClass
        {
            string planetName;

            public PlanetClass(string planetName)
            {
                PlanetName = planetName;
            }

            public string PlanetName
            {
                set { planetName = value; }
                get { return planetName; }
            }
        }

    Read the article

  • How does cobol store and retrieve data?

    - by controlfreak123
    I'm starting to learn about COBOL. I have some experience writing programs that deal with SQL databases, and I guess I'm confused about how COBOL stores and retrieves data that is stored on a mainframe, for example. I know it's not like relational databases, but every example program I've seen takes data straight from the command line, and I know that's not how real-world COBOL programs process data. Can someone explain, or show me a good resource that explains it?

    Read the article

  • Core Data vs. SQLitePersistentObjects

    - by Macatomy
    I'm creating an iPhone app and I'm trying to choose between two solutions for a persistent store: Core Data or SQLitePersistentObjects. Basically, all my app needs is a way to store an array of model objects and then load them again to display in a UITableView; it's nothing too complicated. Core Data seems to have a much higher learning curve than the simple-to-use SQLitePersistentObjects. Are there any obvious benefits to using Core Data over SQLitePersistentObjects in my case?

    Read the article
