Search Results

Search found 59692 results on 2388 pages for 'legacy data'.

  • Best practices for periodically saving game state to disk

    - by Ben Morris
    I'm working on an MMO. All of the player and environment data lives on a server and is kept in memory. There's a "world" object which keeps track of all of the maps, characters, etc. and their relations to each other. To avoid data loss in case of a crash, I've been periodically serializing the world to disk. The trouble is, this object can be quite large, so when the server starts writing, there's noticeable in-game slowdown for a few seconds, which I'd like to avoid. Any pointers on how to go about this in a more efficient way?
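
    A common mitigation, offered here as a sketch rather than a definitive answer: take a copy-on-write snapshot of the world and let a child process do the slow serialization, which is how Redis implements BGSAVE. The question doesn't state a language, so the illustration below assumes a Python server on Unix; save_world_async, world, and path are hypothetical names.

        import os
        import pickle

        def save_world_async(world, path):
            """Fork so the child sees a copy-on-write snapshot of the world;
            the parent's game loop keeps running while the child writes.
            Unix-only; on other platforms, serialize a copy on a worker thread."""
            pid = os.fork()
            if pid == 0:                      # child process
                tmp = path + ".tmp"
                with open(tmp, "wb") as f:
                    pickle.dump(world, f, protocol=pickle.HIGHEST_PROTOCOL)
                os.replace(tmp, path)         # atomic rename: no half-written saves
                os._exit(0)                   # child exits immediately after writing
            return pid                        # parent: reap later with os.waitpid

    The other common angle is to make saves incremental (dirty-flag regions or per-player records) so no single write ever touches the whole world object.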

    Read the article

  • ODI 12c - Aggregating Data

    - by David Allan
    This post looks at the aggregation component that was introduced in ODI 12c. For many ETL tool users this shouldn't be a big surprise; it's a little different from ODI 11g, but for good reason. You can use this component for composing data with relational-style operations such as sum, average, and so forth. Oracle SQL also supports special functions called analytic SQL functions; in ODI 12c you can now use either a specially configured aggregation component or the expression component for these. In database systems, an aggregate transformation is a transformation where the values of multiple rows are grouped together as input, on certain criteria, to form a single value of more significant meaning; that's exactly the purpose of the aggregate component. In the image below you can see the aggregate component in action within a mapping. For how this and a few other examples are built, look at the ODI 12c Aggregation Viewlet here; the viewlet illustrates a simple aggregation being built, and then some Oracle analytic SQL, such as AVG(EMP.SAL) OVER (PARTITION BY EMP.DEPTNO), built using both the aggregate component and the expression component.

    In 11g you used to just write the aggregate expression directly on the target. This made life easy in some cases, but it wasn't a very obvious gesture, and it had other drawbacks, such as the ordering of transformations (aggregate before join/lookup, after set, and so forth) and support for analytic SQL. There are a lot of postings from creative folks working around this in 11g, anything from customizing KMs to bypassing aggregation analysis in the ODI code generator.

    The aggregate component has a few interesting aspects:

    1. First and foremost, it defines the attributes projected from it. ODI automatically performs the grouping; all you do is define the aggregation expressions for the aggregated columns. In 12c you can control this automatic grouping behavior so that you get the code you desire; you can indicate that an attribute should not be included in the GROUP BY, which is what I did in the analytic SQL example using the aggregate component.

    2. The component has a few other properties of interest: a HAVING clause and a manual group-by clause. The HAVING clause holds a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate. In 11g the filter was overloaded and used for both the HAVING clause and the filter clause; this is no longer the case. If a filter is placed after an aggregate, it is a filter after the aggregate (not sometimes a filter, sometimes a HAVING).

    3. The manual group-by clause lets you use specialized database grouping grammar if you need to. For example, Oracle has a wealth of highly specialized grouping capabilities for data warehousing, such as the CUBE function. If you want to use specialized functions like that, you can manually define the code here. The example below shows the use of a manual group-by, taken from the Oracle Database Data Warehousing Guide, where the SUM aggregate function is used along with the CUBE function in the GROUP BY clause.

    The SQL I am trying to generate looks like the following, from the Data Warehousing Guide:

        SELECT channel_desc, calendar_month_desc, countries.country_iso_code,
               TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$
        FROM sales, customers, times, channels, countries
        WHERE sales.time_id = times.time_id
          AND sales.cust_id = customers.cust_id
          AND sales.channel_id = channels.channel_id
          AND customers.country_id = countries.country_id
          AND channels.channel_desc IN ('Direct Sales', 'Internet')
          AND times.calendar_month_desc IN ('2000-09', '2000-10')
          AND countries.country_iso_code IN ('GB', 'US')
        GROUP BY CUBE(channel_desc, calendar_month_desc, countries.country_iso_code);

    I can capture the source datastores, the filters, and the joins using ODI's dataset (or as a traditional flow), which lets us incrementally design the mapping, with the aggregate component handling the sum and the group-by, as follows. In the above mapping you can see the joins and filters declared in ODI's dataset, allowing you to capture the relationships of the required datastores in an entity-relationship style, just like ODI 11g. The mix of ODI's declarative design and the common flow design provides a familiar design experience.

    The example below illustrates flow design (basic arbitrary ordering): a table load where only the employees who have the maximum commission are loaded into a target. The maximum commission is retrieved from the bonus datastore, and there is a lookup using employees as the driving table, with only those on the maximum commission projected.

    Hopefully this has given you a taster for some of the new capabilities provided by the aggregate component in ODI 12c. In summary, the actions should be much more consistent in behavior and more easily discoverable for users, and the use of the components in a flow graph also supports arbitrary designs, with the tool (rather than the interface designer) taking care of the realization using ODI's knowledge modules. I'd be interested to know whether a deep dive into each component would be useful for folks. Any thoughts?

    Read the article

  • Texturing a mesh generated from voxel data

    - by Minja
    I have implemented the Marching Cubes algorithm to display an isosurface based on voxel data. Currently, it is displayed with triplanar texturing. I'm working with Unity, so I have a material with the triplanar shader attached. At the moment, the whole isosurface is rendered using this material, and that's my problem: I want the texture to represent the voxel data. I'm storing a material value for every point in the grid, and based on this value, I want the texture of the isosurface to change. Sadly, I have no clue how to do this. So if the voxel is sand, I want sand to be displayed; if it's stone, then there should be stone. Right now, everything is displayed as sand. Thanks in advance!
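
    One common approach, sketched here under stated assumptions rather than as the answer: when the marching-cubes step emits a vertex, look up the material of the voxel it came from and bake it into per-vertex data (in Unity, mesh.colors) as blend weights, then have the triplanar shader blend one texture layer per channel. Below is the engine-agnostic half of that idea in Python; the grid layout, material IDs, and emit step are hypothetical names.

        # Hypothetical material IDs for this sketch.
        SAND, STONE, DIRT = 0, 1, 2

        def material_weights(material_id, num_materials=3):
            """One-hot blend weights; the shader samples one texture per
            channel, so averaging weights at boundary vertices later gives
            smooth material transitions."""
            weights = [0.0] * num_materials
            weights[material_id] = 1.0
            return weights

        def emit_vertex(position, grid):
            """Attach the nearest voxel's material to the generated vertex;
            grid[x][y][z] is assumed to hold that voxel's material ID."""
            x, y, z = (int(round(c)) for c in position)
            return {
                "position": position,
                "color": material_weights(grid[x][y][z]),  # -> mesh.colors in Unity
            }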

    Read the article

  • Invitation: EMEA Master Data Management (MDM) Partner Summit, 5th December 2011

    - by mseika
    Oracle is pleased to invite you to the EMEA Master Data Management Partner Summit in Portugal on 5th December 2011. Partners such as you have been key contributors to the growth of Oracle’s MDM, and to empower your further growth, Oracle has formed a dedicated MDM Specialization Program to help you further develop your organization’s readiness in selling and delivering the Oracle Master Data Management solutions that best suit your go-to-market plans and initiatives. For more information about the MDM Partner Summit, including the detailed agenda, please click here (login required). Register Now. The MDM Partner Summit will be followed by a 4-day MDM Partner Hands-On training running from Dec 6th to 9th, with arrival on Dec 5th. Please feel free to register your company’s sales and technical employees. See here for more details, such as the training agenda and registration.

    Read the article

  • Cloud INaaS from Data Integration companies

    - by llaszews
    Traditional integration IT vendors are also starting to offer INaaS. Informatica has been the most aggressive integration vendor when it comes to offering INaaS. Informatica has offered INaaS for over five years and continues to add capabilities, has a number of high-profile references, and also continues to add out-of-the-box cloud integration with major COTS and SaaS providers. The Informatica Marketplace contains pre-packaged Informatica Cloud end-points and plug-ins. One such Marketplace solution is integration with Oracle E-Business Suite using Informatica integration. The Informatica E-Business Suite INaaS offering includes automatic loading and extraction of data between Salesforce CRM and on-premise systems, cloud-to-cloud, flat files, and relational databases. The entire Informatica Cloud integration solution runs in an Informatica-managed facility (PaaS). When running in a PaaS environment, Informatica offers an option to keep an exact copy of your cloud-based data on-premise for archival, compliance, and enterprise reporting requirements.

    Read the article

  • Entity Framework 4.0 POCO Classes and Data Services

    If you've flipped on the POCO (Plain Ol' CLR Objects) code generation T4 templates for Entity Framework to enable testing or just 'cuz you like the code better, you might find that you lack the ability to expose that same model via Data Services as OData (Open Data). If you surf to the feed, you'll likely see something like this: "The XML page cannot be displayed. Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later."...

    Read the article

  • Extracting GPS Data from JPG files

    - by Peter W. DeBetta
    I have been very remiss in posting lately. Unfortunately, much of what I do now involves client work that I cannot post. Fortunately, someone asked me how he could get a formatted list (e.g. tab-delimited) of the GPS data from a set of files. He also added the constraint that this could not be a new piece of software (company security) and had to be scriptable. I did some searching around, and found some techniques for extracting GPS data, but was unable to find a complete solution. So, I did...(read more)
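
    For reference, here is a minimal sketch of the scriptable half of that, assuming Python with Pillow installed (the original constraint was no new software, so this presumes an interpreter is already available). Tag 34853 is the standard EXIF GPSInfo IFD; file paths come from the command line.

        import sys
        from PIL import Image  # pip install Pillow

        def to_degrees(dms, ref):
            """Convert EXIF degree/minute/second rationals to decimal degrees."""
            degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
            return -degrees if ref in ("S", "W") else degrees

        for path in sys.argv[1:]:
            gps = Image.open(path).getexif().get_ifd(34853)   # GPSInfo IFD
            if gps:                       # keys 1-4: lat ref, lat, lon ref, lon
                lat = to_degrees(gps[2], gps[1])
                lon = to_degrees(gps[4], gps[3])
                print(f"{path}\t{lat:.6f}\t{lon:.6f}")        # tab-delimited row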

    Read the article

  • Oracle: SQL Developer Data Modeler 3.0 available, the modeling tool opens up to collaborative work

    Oracle: SQL Developer Data Modeler 3.0 available. The modeling tool opens up to collaborative work. Oracle has just released a major new version of "SQL Developer Data Modeler", its free database modeling tool. This 3.0 release takes on a collaborative dimension and opens up to version control systems. Several collaborators can now contribute to the same model and track, in detail, which contributor made which changes to the designs. For now only Subversion is supported, but Oracle plans to add support for other version control systems. This tool integrates...

    Read the article

  • Select, Insert, Update and Delete data with LINQ to SQL in an ASP.Net application

    - by nikolaosk
    As you might have guessed, I am continuing my LINQ to SQL posts. I am teaching a course right now on ADO.Net 3.5 (LINQ & EF), and I know a lot of people who have learned through my blog and my style of writing. I am going to use a step-by-step example to demonstrate how to select, update, insert, and delete data through LINQ to SQL in the database. If you want to have a look at how to return data from a database with LINQ to SQL and stored procedures, click here. If you want to have a look at how to...(read more)

    Read the article

  • Backup Windows files using Ubuntu - Unable to find Win partition

    - by Siva
    I am using a Dell laptop with Windows 7; all of a sudden the HDD is not recognized by Win7. I wanted to back up the data in Windows, so I made an Ubuntu 12.04.1 Live CD and booted from it. I am using Ubuntu without installing it on my laptop. My problem is that I don’t see the Windows partitions in Ubuntu 12.04.1, because of which I am unable to back up the data. Any suggestion in this regard would be very helpful. PS: I checked the SMART status of the HDD; it says 2 bad sectors. When I attempt an extended self-test, I get a Read Failed message, though the short test goes through fine. Thank You, Siva

    Read the article

  • What are Collaboration Data Objects (CDO)?

    - by Pranav
    Collaboration Data Objects, or CDO, is a component that enables messaging between applications. It's something like the MFC we have in VC++: it gives us a simpler interface than the Win32 API, which, as an interface, still requires a lot of groundwork from developers (yet is very robust!). CDO is primarily built to simplify the creation of messaging applications, and we should keep in mind that CDO is NOT a new messaging model but is BUILT ON the MAPI architecture. It is just an extended interface that collaborates with MAPI and simplifies the programming task at hand for the creation of messaging applications. CDO replaced Microsoft's earlier Active Messaging. CDO 1.2 enables us to work with data, send and receive emails, and use a host of other functions, like rendering Exchange functionality into HTML. If you've got some firsthand experience, a couple of tips would be great and will definitely further my knowledge base in this area and hopefully get me a more refined understanding. Some pointers on MAPI would be pretty cool.
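
    For flavor, here is a minimal sketch of the object model at work, assuming Windows with pywin32 installed and using CDOSYS ("CDO for Windows 2000"), the later incarnation of CDO; CDO 1.2 as discussed above rides on a MAPI session instead. Addresses and the server name are placeholders.

        import win32com.client  # pip install pywin32; Windows-only

        msg = win32com.client.Dispatch("CDO.Message")
        msg.From = "sender@example.com"
        msg.To = "recipient@example.com"
        msg.Subject = "Hello from CDO"
        msg.TextBody = "Sent through the CDO COM object model."

        # Route the message through a remote SMTP server (sendusing = 2).
        fields = msg.Configuration.Fields
        schema = "http://schemas.microsoft.com/cdo/configuration/"
        fields.Item(schema + "sendusing").Value = 2
        fields.Item(schema + "smtpserver").Value = "smtp.example.com"
        fields.Update()

        msg.Send()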

    Read the article

  • Extracting GPS Data from JPG files

    - by Peter W. DeBetta
    I have been very remiss in posting lately. Unfortunately, much of what I do now involves client work that I cannot post. Fortunately, someone asked me how he could get a formatted list (e.g. tab-delimited) of files with GPS data from those files. He also added the constraint that this could not be a new piece of software (company security) and had to be scriptable. I did some searching around, and found some techniques for extracting GPS data, but was unable to find a complete solution. So, I did...(read more)

    Read the article

  • Preseed Partman: multiple partitions on one disk /tmp /data /usr swap

    - by Moritz
    Trying to get preseeding on 12.04 64-bit to work with what should be a basic setup. /dev/sda is the only drive being used:

        /     - rootfs - 100GB
        /boot - 1GB
        /tmp  - 10GB
        /data - should take all available space
        swap  - 10GB

    The recipe:

        d-i partman-auto/expert_recipe string \
            boot-root :: \
            1000 50 1000 ext4 \
                $primary{ } $bootable{ } \
                method{ format } format{ } \
                use_filesystem{ } filesystem{ ext4 } \
                mountpoint{ /boot } \
            . \
            500 1000 10000 ext4 \
                method{ format } format{ } \
                use_filesystem{ } filesystem{ ext4 } \
                mountpoint{ /tmp } \
            . \
            500 5000 100000000 ext4 \
                method{ format } format{ } \
                use_filesystem{ } filesystem{ ext4 } \
                mountpoint{ /data } \
            . \
            64 2000 10000 linux-swap \
                method{ swap } format{ } \
            . \
            500 3000 100000 ext4 \
                method{ format } format{ } \
                use_filesystem{ } filesystem{ ext4 } \
                mountpoint{ / } \
            .

    If I only use the code for /boot, swap, and / it works. Also, I was wondering whether I have to specify some recipe name other than "boot-root", but trying "thisNameIsNotDefinedInPartman" the result was the same. The error message displayed by the Ubuntu installer is always "no root file system is defined". Thanks for your help, Moritz

    Read the article

  • RPG Monster-Area, Spawn, Loot table Design

    - by daemonfire300
    I currently struggle with creating the database structure for my RPG. I've got this far, with these tables:

        area    (id)
        monster (id, area.id, monster.id, hp, attack, defense, name)
        item    (id, some other values)
        loot    (id = monster.id, item = item.id, chance)
        spawn   (id = area.id, monster = monster.id, count)

    It is a browser-based game like e.g. Castle Age. The player can move from area to area. If a player enters an area, the system spawns new monsters into the monster table, based on the area.id and using the spawn table data. If a player kills a monster, the system takes the monster.id, looks up the items via the loot table, and adds those items to the player's inventory. First, is this smart? Second, I need some kind of "monster_instance" table and "area_instance" table, since each player enters his very own "area" and does damage to his very own "monsters". Another approach would be adding the player.id to the monster table, so each monster spawned has its own "player", but I still need to assign them to an area, and I think this would overload the monster table if I put both the player.id and the area.id into it. What are your thoughts?

    Temporary solution:

        monster          (id, attackDamage, defense, hp, exp, etc.)
        monster_instance (id, player.id, area_instance.id, hp, attackDamage, defense, monster.id, etc.)
        area             (id, name, area.id access, restriction)
        area_instance    (id, area.id, last_visited)
        spawn            (id, area.id, monster.id)
        loot             (id, monster.id, chance, amount, ?area.id?)

    An example system flow would be:

        Player enters area 1: the system creates an area_instance of type area.id = 1 and sets player.location to area.id = 1.
        If the player wants to battle monsters in the current area: the system fetches all spawn entries matching area.id == player.location and creates a new monster_instance for each spawn by fetching the corresponding monster base data from the monster table. If a monster is fetched more than once, it may be cached.
        If the player actually attacks a monster: the system updates the corresponding monster_instance; if the monster dies, the instance is removed after creating the loot.
        If the player leaves the area: area_instance.last_visited is set to NOW(); if the player doesn't return to the area within a certain amount of time, the area_instance, including all its monster_instances, is deleted.
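
    As a small sketch of consuming the proposed loot table at kill time, assuming chance in each row is an independent drop probability between 0 and 1 (one plausible reading of the schema; the rows below are made up):

        import random

        def roll_loot(loot_rows):
            """loot_rows: iterable of (item_id, chance, amount) fetched for
            the killed monster.id; each row drops independently."""
            return [(item_id, amount)
                    for item_id, chance, amount in loot_rows
                    if random.random() < chance]

        # e.g. rows fetched for monster.id = 7
        print(roll_loot([(101, 0.50, 1), (205, 0.05, 3)]))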

    Read the article

  • Keep Your Data Local: Free Offline Alternatives to 6 Popular Web Apps

    - by Chris Hoffman
    Web apps are all the rage, but offline apps still have their place. Whether you want better offline support or you just want to keep your sensitive data on your PC, there’s a free desktop app that can replace your web-based productivity app. We’ve looked at web-based alternatives to desktop apps, and now we’ll do the opposite. Here are some solid, and completely free, offline desktop alternatives to popular web apps. Be sure to perform regular backups if you store your only copies of important data locally. You wouldn’t want to lose it all when your hard drive inevitably bites the dust.

    Read the article

  • What do you use to bundle / encrypt data?

    - by David McGraw
    More and more games are going the data-driven route, which means there needs to be a layer of security against easy manipulation. I've seen games completely bundle up their assets (audio, art, data), and I'm wondering how they are managing that. Are there applications or libraries that will bundle the assets and assist you with managing them? If not, are there any good resources you would point to for packing, unpacking, and encryption? This specific question revolves around C++, but I would be open to hearing how this is managed in C#/XNA as well. Just to be clear: I'm not out to engineer a solution to prevent hacking. At the fundamental level we're all manipulating 0's and 1's. But we do want to keep the 99% of people that play the game from simply modifying XML files that are used to build the game world. I've seen plenty of games bundle all of their resources together. I'm simply curious about the methods they're using.
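
    As one data point, the classic pack-file layout is simple enough to sketch: a magic number, an index, then the compressed blobs. Compression alone already stops casual editing of XML, which is the stated goal. This is a hedged illustration in Python, not a production format; in C++, an established library such as PhysicsFS covers the same ground.

        import json
        import struct
        import zlib

        MAGIC = b"PAK1"

        def pack(paths, out_path):
            """Bundle files: [MAGIC][index length][JSON index][zlib blobs]."""
            index, blobs, offset = {}, [], 0
            for p in paths:
                with open(p, "rb") as f:
                    data = zlib.compress(f.read())
                index[p] = (offset, len(data))
                blobs.append(data)
                offset += len(data)
            header = json.dumps(index).encode()
            with open(out_path, "wb") as f:
                f.write(MAGIC)
                f.write(struct.pack("<I", len(header)))
                f.write(header)
                f.writelines(blobs)

        def unpack(pak_path, name):
            """Fetch one asset back out by its original path."""
            with open(pak_path, "rb") as f:
                assert f.read(4) == MAGIC
                (header_len,) = struct.unpack("<I", f.read(4))
                index = json.loads(f.read(header_len))
                offset, size = index[name]
                f.seek(8 + header_len + offset)
                return zlib.decompress(f.read(size))

    Swapping zlib.compress for a real cipher (e.g. AES via a crypto library) upgrades obfuscation to encryption, though the key still ships with the game, which is why this only deters the casual 99%.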

    Read the article

  • Program and user data from an old HDD to a new installation on SSD

    - by hans wurst
    I have tried everything for days; now I'm asking. I have an Ubuntu system on my old HDD, which is connected via USB to this system, and an SSD is built into my notebook. At the moment I am running an Ubuntu system from a USB stick. I have tried cloning my disk (changing the UUID, etc.), transporting the data via Déjà Dup, and much more. The result was nothing, or strange things. My idea is to copy the important data from the old system to the new one (home and whatever else), but I am not allowed to do this. Is there somebody here who knows a tool that can do this, or has another idea?

    Read the article

  • statistics for checking imported data?

    - by user1936
    I'm working on a data migration of several hundred nodes from a Drupal 6 to a Drupal 7 site. I've got the data exported to the new site and I want to check it. Harkening back to my statistics classes, I recall that there is a way to figure out how many randomly chosen nodes to check to give me some level of confidence that the whole process was correct. Can anyone enlighten me as to this practical application of statistics? For any given number of units, how big must the sample be to achieve a given level of confidence?
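
    The standard tool here is the sample-size formula for a proportion with a finite-population correction; a quick sketch, assuming 95% confidence (z = 1.96), a 5% margin of error, and worst-case p = 0.5:

        import math

        def sample_size(population, z=1.96, margin=0.05, p=0.5):
            """Nodes to spot-check for the given confidence and margin."""
            n0 = (z ** 2) * p * (1 - p) / margin ** 2           # infinite-population size
            return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite correction

    For example, sample_size(500) returns 218, so for several hundred nodes you would randomly check a bit over 200 of them. Note this tells you whether the error rate is below your margin, not that every node is correct.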

    Read the article

  • Generation 4 Modular Data Center

    - by kaleidoscope
    Microsoft launched its Generation 4 Modular Data Center design at PDC 09: a 20-foot container built on a container-based model. Microsoft says the use of server-packed containers, known as Pre-Assembled Components (PACs), will allow it to slash the cost of building its new data centers. The design is completely optimized for outdoor use, relying on fresh air ("free cooling") rather than air conditioning. Its exterior is designed to draw fresh air into the cold aisle and expel hot air from the rear of the hot aisle. More details can be found at: http://www.datacenterknowledge.com/archives/2009/11/18/microsofts-windows-azure-cloud-container/
    Rituraj, J

    Read the article
