Search Results

Search found 416 results on 17 pages for 'redundancy'.

Page 9 of 17

  • Server 2003 Functional Domain DFS Replication Problem (Files being moved to conflicted folder for no reason)

    - by Az
    We have two Windows 2003 servers configured with a DFS namespace and we are running into problems with the redirected profiles we have set up. Basically, one server is the FSMO master for all roles, and we have another DC that is the DFS namespace primary server. We have profile redirection set up using the \\dfsnamespace\userprofile format. The FSMO master DC locks up occasionally (don't ask :), and when it does, and we bring it back up, all of the user profiles hosted on the DFS namespace get overwritten when a user logs in. The current profile gets moved to the Conflict and Deleted items folder. This strikes me as really odd, considering the whole point of using DFS was to provide some redundancy in case one server went down. Can anyone help? Thanks in advance! -Nate

    Read the article

  • What happens if an OpenStack cloud controller dies?

    - by magu
    I've been reading up on OpenStack and how we can re-create an EC2/S3-style cloud for our internal development, and I'm having a hard time finding information on how the OpenStack cloud controller provides redundancy for the cloud management services. I know I can set up multiple Swift and Nova nodes, but not a single document/article/howto/wiki contains information on: a) what happens if the cloud controller node dies; and b) how to set up redundant cloud controllers. It seems to me that, although it is massively scalable, there is a big single point of failure built into OpenStack. Can anyone with more experience of OpenStack please shed some light on how it all works with regard to high availability?

    Read the article

  • Use Excel Table Column in ComboBox Input Range property

    - by V7L
    I asked this on Stack Overflow and was redirected here; apologies for the redundancy. I have an Excel worksheet with a combo box on Sheet1 that is populated via its Input Range property from a dynamic named range on Sheet2. It works fine and no VBA is required. My data on Sheet2 is actually in an Excel table (all data is in the XLS file, no external data sources). For clarity, I wanted to use a structured table reference for the combo box's Input Range, but cannot seem to find a syntax that works, e.g. myTable[[#Data],[myColumn3]]. I cannot find any indication that the combo box WILL accept structured table references, though I cannot see why it wouldn't. So, a two-part question: 1. Is it possible to use a table column reference in the combo box's Input Range property (not using VBA)? 2. How?

    Read the article

  • Disable RAID Controller

    - by B.Mr.W.
    I have some decent HP ProLiant servers that come with the "HP Smart Array P410i Controller" enabled. I am using these boxes to set up a Hadoop cluster, and RAID is for sure a no-no for Hadoop, since the application itself takes care of data redundancy and the extra intelligence provided by RAID won't be helpful and might hurt performance. I tried to disable the device in the BIOS, and the box cannot even access the disks afterwards. So I am assuming the controller sits between the disks and the motherboard, and we have to turn it on and configure it to "level 0" or something like that. I am wondering what I should do to "disable" the RAID functionality so it will fit the Hadoop environment.

    Read the article

  • 5 x 3GB drives and 4 x 1500GB drive best raid setup?

    - by Zen_Silence
    Hello, I am building a file server. My plan is to have the operating system on one RAID array and the data storage on another. I currently have 5 x 3GB IDE drives that I would like to put the operating system on; these drives are old, but that doesn't matter to me at the moment since I have a ton of them, so for this array I mainly want to be able to pull out a dead drive and rebuild the array. My storage array is going to consist of 4 x 1.5TB SATA drives, and I would like the maximum storage with some redundancy. Any suggestions as to which RAID level I should use would be greatly appreciated, and it would also help if you could suggest a PCI or PCI-e RAID controller to handle these arrays. Thanks in advance, Zen_Silence

    Read the article

  • Backing up data (including mysqldumps) to S3

    - by seengee
    We have a web app on a number of servers and we want to add an additional layer of redundancy by backing up the key data to S3. The key data is the MySQL database and a folder containing dynamically created site assets - predominantly images. Some kind of rsync-based solution would initially seem the best plan. A couple of years ago we played with S3cmd (in particular s3cmd sync) with some success, but we didn't find it particularly reliable, although this may have changed since. It's occurred to me, though, that an rsync solution might not work particularly well with a single db.sql file created with mysqldump, and I assume this means the whole database gets transferred each time; with multiple databases of over 1GB this is going to add up to a lot of traffic (and $s) very quickly. With the image files I could simply transfer only the files modified within the last day, which would be far simpler. What approach should I look at?
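    For the database half, one possible shape (a minimal sketch, not the poster's setup; the database list, bucket name and paths are hypothetical) is to compress each mysqldump and upload one dated object per database with boto3, letting S3 keep the copies rather than trying to rsync a single large .sql file:

    ```python
    import gzip
    import subprocess
    from datetime import date

    import boto3

    DATABASES = ["app_db", "blog_db"]   # hypothetical database names
    BUCKET = "example-backup-bucket"    # hypothetical bucket

    s3 = boto3.client("s3")             # credentials come from the environment

    for db in DATABASES:
        dump_path = f"/tmp/{db}-{date.today()}.sql.gz"
        # Stream mysqldump straight into a gzip file to keep the object small.
        with gzip.open(dump_path, "wb") as out:
            proc = subprocess.Popen(["mysqldump", db], stdout=subprocess.PIPE)
            for chunk in iter(lambda: proc.stdout.read(1 << 20), b""):
                out.write(chunk)
            proc.wait()
        # One object per database per day; S3 provides the off-site redundancy.
        s3.upload_file(dump_path, BUCKET, f"mysql/{db}/{date.today()}.sql.gz")
    ```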

    Read the article

  • Crap, hard disk failure. Can I recover my "move"d folders?

    - by Doug
    I am in the process of moving all my files from an old laptop to a new one. I just moved 11GB of data from my old laptop to an external hard drive, and upon moving it onto the new laptop, the external drive is giving a CRC error (Data Error: Cyclic Redundancy Check). Now I am looking for a solution to recover the files that I moved off my old laptop (not the external). I understand that they are just marked for potential overwriting to free up space. I was getting ready to test out GetDataBack, but it says to install it on a healthy Windows machine and attach the drive that needs recovery as an external. However, I don't want to turn off my computer without first getting the okay, since it is in a "moved" state. Please help! What can I do to recover the moved files? I haven't touched the computer since the move. What can I use to recover them?

    Read the article

  • How can I use one Domain Controller to manage 3 separate small firms

    - by Plamen Jordanov
    Currently we have one Domain Controller that has 15 users and a couple of services (hMailServer, IIS, DNS, Active Directory). Now the owners of the firm have created two new firms whose computers and networks are my responsibility, and I wonder how exactly to join their users to the existing domain. Do you think it is a good idea to just include all computers and users from all firms under one domain, or is there another solution? Have some of you run into this kind of situation, and what did you do? ---Edit--- Brent, Dan, thanks for the info, guys. For now I will follow Brent's advice until we get the new server, which we will virtualize, and the old server will become our second DC at a different location. Heck, we might even consider a pay-as-you-go VPS solution for DC redundancy.

    Read the article

  • building a home server with a nas appliance [closed]

    - by user51666
    Possible Duplicate: Best way to build home NAS with redundancy I was hoping to get some ideas from folks here. I'm interested in building a home web server with a NAS appliance. It would be used primarily for storing pictures and video. I want a networked storage device so I can have multiple devices access it wirelessly as needed from within our home, and I also want the option to access it from outside the house using login/password access. I'm also interested in customizing and building my own web pages as well, preferably on Apache. Any preferences? Does anyone have an interesting, neat setup they can share? Thanks!

    Read the article

  • Automatically make or update a copy real-time on another hard drive volume whenever files are saved to a particular folder

    - by mrblint
    Whenever I save or update a file in a particular designated folder on my C:\ drive, I would like to make or update a copy on my network-attached storage device, ideally saving the copy to the NAS as a version rather than overwriting the copy there, if possible. I have Windows 7 x64 Ultimate. Is there any built-in feature that can accomplish this? It has to be a real copy, not merely a pointer. I'm trying to achieve some redundancy for especially critical documents (in a variety of formats) that change frequently throughout the day. P.S. I am looking for folder-level granularity; I wouldn't want this to happen for every file on the C: volume.
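    Short of a built-in Windows feature, one possible approach is a small watcher script. A minimal sketch using the third-party Python watchdog library (the watched folder and NAS share below are hypothetical placeholders): it copies each saved file to the NAS under a timestamped name, so earlier versions are kept rather than overwritten.

    ```python
    import shutil
    import time
    from datetime import datetime
    from pathlib import Path

    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    WATCHED = Path(r"C:\CriticalDocs")            # hypothetical local folder
    TARGET = Path(r"\\NAS\Backups\CriticalDocs")  # hypothetical NAS share

    class VersioningCopier(FileSystemEventHandler):
        def on_modified(self, event):
            if event.is_directory:
                return
            src = Path(event.src_path)
            stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
            dest = TARGET / f"{src.stem}.{stamp}{src.suffix}"
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # real copy to the NAS, not a pointer

    observer = Observer()
    observer.schedule(VersioningCopier(), str(WATCHED), recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
    ```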

    Read the article

  • netgear GS108TV2 RSTP configuration

    - by jhowland
    I have a large set of GS108Tv2 units; my goal is to set up a network that consists of several loops for redundancy/fault tolerance. I have a minimal three-switch loop configured, with RSTP enabled on two ports on each switch. My bridge max age is set to 6 and my bridge forward delay is set to 4, which are the minimum values allowed. Hello time is fixed at 2 seconds. The switches respond to a cable being removed from a socket, but it takes too long: I cannot get a switch to respond to a loss of connection on one of the redundant ports in less than 20 seconds. Is there any way to configure these switches to respond faster than 20 seconds? That is unacceptable for my application. Thanks in advance for any help.

    Read the article

  • High Availability Clustering and Virtualization

    - by tmcallaghan
    I'm trying to understand how the various virtualization vendors (specifically Amazon EC2, but also VMware and Xen) enable software vendors to provide a real HA solution in an environment where the servers are virtualized. Specifically, if I'm running any HA application (Exchange, databases, etc.) I need to ensure that my redundant virtual "servers" aren't located on the same physical server. Using in-house virtualization solutions (VMware, Xen, etc.) I can provision accordingly as well as check the virtual-to-physical arrangement. I could, however, accidentally vMotion to the same physical hardware. With EC2, I don't even have the ability at provision time to select different physical servers. Since their Cluster Compute Instances are one virtual server per physical server, they seem to be the only way to guarantee I don't have a false sense of redundancy. Any ideas or thoughts would be helpful. What are others doing about this problem? If the vendors provided an API where I could get something as simple as a unique physical system identifier, I could at least know if I'm going to have an issue. -Tim
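    One later development worth noting: EC2 eventually added "spread" placement groups, which ask AWS to place each instance in the group on distinct underlying hardware. A minimal boto3 sketch (the group name, AMI ID and instance type are hypothetical):

    ```python
    import boto3

    ec2 = boto3.client("ec2")

    # Ask EC2 to keep the instances in this group on separate hardware.
    ec2.create_placement_group(GroupName="ha-db-spread", Strategy="spread")

    # Launch the redundant pair into the spread group.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical AMI
        InstanceType="m5.large",
        MinCount=2,
        MaxCount=2,
        Placement={"GroupName": "ha-db-spread"},
    )
    ```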

    Read the article

  • Will the VMs on the ESXi cluster be running after disconnection from the network?

    - by John
    What will happen if I disconnect an ESXi cluster configured with HA from the switch (I need to change the power source on the main switch) when there is no management port redundancy? I'm going to disable host monitoring and VM monitoring within the HA settings, and reconnect to the switch after it boots up. Will the virtual machines keep running if I disconnect the hosts from the network so that they cannot see any other ESXi host? I hope everything will work fine and the hosts will rejoin the cluster after they connect to the network again, but I would like to be sure.

    Read the article

  • Amazon S3 - Storage Class and Server Side Encryption

    - by Steven
    Ahhh! I am using Amazon S3 for some low-price storage to clear down our SAN. I created the bucket and created a root folder. I set the storage class to Standard and server-side encryption to AES. I started a copy job to move the files; some files copied over, and when I checked them they showed storage class Reduced Redundancy and encryption set to none. WTF? So I deleted all files and folders, manually recreated the folder structure and again set the storage class and encryption level. I copied some files and bam, they still show (at the file level) as Reduced Redundancy with no encryption. So my question is this: (a) is it really stored redundantly and encrypted, just not showing it properly (the root folder shows it, so how can the files not?), or (b) am I being a huge tool and missing something?
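    For reference, storage class and server-side encryption are recorded per object at upload time rather than inherited from a bucket or "folder" prefix, so a copy job has to set them on every upload. A minimal boto3 sketch (the poster's copy job was presumably a GUI tool; bucket, key and file names here are hypothetical):

    ```python
    import boto3

    s3 = boto3.client("s3")

    # Set the storage class and encryption explicitly on each upload.
    s3.upload_file(
        "file1.dat",                      # hypothetical local file
        "my-archive-bucket",              # hypothetical bucket
        "san-offload/file1.dat",
        ExtraArgs={
            "StorageClass": "STANDARD",
            "ServerSideEncryption": "AES256",
        },
    )

    # Verify the per-object settings; STANDARD objects omit StorageClass here.
    head = s3.head_object(Bucket="my-archive-bucket", Key="san-offload/file1.dat")
    print(head.get("StorageClass", "STANDARD"), head.get("ServerSideEncryption"))
    ```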

    Read the article

  • redundant http load balancer

    - by jrydberg
    Got a simple scenario with two web servers, for redundancy and to scale. But how do I make a two-web-server setup fully redundant? I can think of two solutions: (1) two web servers plus one extra machine running a load balancer that spreads the load - but then how will the load balancer itself be redundant? (2) two machines, each running the web server AND a load balancer that spreads the load across both, with a DNS entry pointing to both machines - no extra machines needed for load balancing. How do you guys normally solve this kind of problem?

    Read the article

  • OS X drive dropped out of RAID 5 array?

    - by user41724
    I had a drive "fail" and drop out of our RAID 5 array. After a reboot the drive came back up, but it's "Roaming". How do I re-integrate it back into the RAID 5? Right now I have no redundancy with only 2 disks in the array. The Migrate RAID Set feature of RAID Utility seems to want to create a RAID 0 only? I have provided links to some screen caps: http://tiny.cc/3ns5r Any help would be appreciated. Thanks

    Read the article

  • Windows Server: Do I really need servers in remote locations?

    - by IMAbev
    I have one main site with several servers in a 2008/2012 environment. I have 4 remote sites that are physically close (a few miles apart) and are all connected to the main site by 20 Mb fiber on a private network. At each of the remote locations I have a Windows server that users log in to and where their files and apps are located. There are many considerations to answering this question, but the first thing I am wondering is: do I really need a server at each location? Users are just logging in to this server for permissions, and the vast majority of my users are only using Word, Excel and email. I am really interested in figuring out if I need servers at these locations. $3,000 to $4,000 per server every 3-5 years, licensing, administration... I know there are other considerations - speed, redundancy, and if my link to the main site goes down the users have nothing. But I just am not convinced I need servers at these locations.

    Read the article

  • Hardware needed for 2000 users? [closed]

    - by Trcx
    I have a school assignment that is fairly well defined, requiring us to come up with a plan for an environment serving dynamic web applications to 2000 users, which should be able to scale up to six thousand. I have done plenty of research as far as load balancing, redundancy, UPSs, etc., but am having a hard time figuring out how much hardware is actually needed in the way of physical servers, RAM, processing power, etc. The assignment states that there will be a lot of dynamic code, and that email and a database are required, all utilizing the appropriate Microsoft services (MS SQL, Exchange, IIS). I already plan on splitting them out onto separate servers, but can't even fathom the hardware requirements of something that large scale. Could someone with experience weigh in on this, or point me to some good articles?

    Read the article

  • How to remove a drive from a 2-drive RAID 5 array?

    - by DrSAR
    There is some information available on shape changes in RAID arrays, but I'm a little nervous and would like confirmation. Problem: I have 2 x 500GB drives as software RAID 5 (mdadm). I would like to free one of the two drives, since RAID redundancy is for wimps... Can I just run mdadm --grow --array-size=1 followed by mdadm --grow --raid-disks 1? This seems too simple. How would I specify which drive gets freed? Part of the reason for this maneuver is that I don't have additional space to run a backup.

    Read the article

  • KB Articles on My Oracle Support

    - by Anthony Shorten
    My Oracle Support is a valuable resource for product information and how-tos; it is not just about bug fixes and service packs. To find articles pertaining to any Oracle Utilities product, log on to My Oracle Support (your DBA should have access at least) and use the following path to navigate to the articles: Knowledge - More Applications - Industry Solutions - Utilities. You are then presented with a list of products; just select the one that you are interested in. You are then presented with a list of available articles (25 per page). You can also search on keywords for articles. Here is a list of ones I find useful (with the KB ID in []):
    - Customer Care and Billing V2.2.0 Unix Installation Questions [ID 844645.1]
    - Known Framework (FW) Errors [ID 783823.1]
    - Weblogic 10 MP2 CCB Support Question [ID 1119383.1]
    - CCB v2.2.0 Performance Problem Under Heavy Concurrent User Load [ID 808233.1] - This is a description of a patch for performance
    - What Is The Meaning Of The TRUE And FALSE Setting For REL_CBL_THREAD_MEM Within OUAF For Oracle Utilities CCB, BI & ETM [ID 783444.1]
    - Oracle Utilities Framework Support Utility [ID 1079640.1]
    - How to customize XAI error messages? [ID 1061394.1]
    - Oracle Utilities Application Framework - Patch Installation [ID 974985.1]
    - Action Plan for Creating a Weblogic Custom Authentication Provider [ID 954417.1]
    - How to set up XAI service on multiple servers to provide redundancy? [ID 854215.1]
    The first one is very useful and answers lots of how-to questions about installation.

    Read the article

  • ODEE Green Field (Windows) Part 5 - Deployment and Validation

    - by AndyL-Oracle
    And here we are, almost finished with our installation of Oracle Documaker Enterprise Edition ("ODEE") in a Windows green field environment. Let's recap what we've done so far:
    - In part 1, I went over the basic process that I intended to show for installing ODEE on a green field server and walked you through the basic installation of the Oracle 11g database.
    - In part 2, I covered the installation of the WebLogic application server.
    - In part 3, I showed you how to install SOA Suite for WebLogic.
    - In part 4, we did the first part of the installation of ODEE itself.
    What remains after all of that is the deployment of the ODEE components onto the database and application server - so let's get to it!
    DATABASE
    First, we'll deploy the schemas to the database. The schemas are created during the ODEE installation according to the responses provided during the install process. To deploy the schemas, you'll need to log in to the database server in your green field environment.
    1. Open a command line and CD into ODEE_HOME\documaker\database\oracle11g.
    2. Run SQLPLUS as SYSDBA and execute dmkr_admin.sql: sqlplus / as sysdba @dmkr_admin.sql
    3. Execute dmkr_asline.sql and dmkr_admin_correspondence_example.sql. If you require additional languages, run the appropriate SQL scripts (e.g. dmkr_asline_es.sql for Spanish).
    APPLICATION SERVER
    Next, we'll deploy the WebLogic domain and its components - Documaker web services, Documaker Interactive, Documaker dashboard, and more. To deploy the components, you'll need to log in to the application server in your green field environment.
    1. Open Windows Explorer and navigate to ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts.
    2. Using a text editor such as Notepad++, modify weblogic_installation_properties and set the location of MIDDLEWARE_HOME and ODEE_HOME. If you have used the defaults you'll probably need to change the E: to C: and that's it. Save the changes.
    3. Continuing in the same directory, use your text editor to modify set_middleware_env.cmd and set the drive and path to MIDDLEWARE_HOME. If you have used the defaults you'll probably need to just change E: to C: and that's it. Save the changes.
    4. In the same directory, execute wls_create_domain.cmd by double-clicking it. This should run to completion. If it does not, review any errors and correct them, and rerun the script.
    5. In the same directory, execute wls_add_correspondence.cmd by double-clicking it - again, this should run to completion.
    6. Next, we'll start the AdminServer - this is the main WebLogic domain server. To start it, use Windows Explorer and navigate to MIDDLEWARE_HOME\user_projects\domains\idocumaker_domain. Double-click startWebLogic.cmd and the server startup will begin. Once you see output that indicates that the server status changed to RUNNING you may proceed.
       a. Note: if you saw database connection errors, you probably didn't make sure your database name and connection type match. You can change this manually in the WebLogic Console. Open a browser and navigate to http://localhost:7001/console (replace localhost with the name of your application server host if you aren't opening the browser on the server), and log in with the WebLogic credential you provided in the ODEE installation process.
       b. Once you're logged in, open Services → Data Sources. Select dmkr_admin and click Connection Pool.
       c. The end of the URL should match the connection type you chose. If you chose ServiceName, the URL should be jdbc:oracle:thin:@//<hostname>:1521/<serviceName>, and if you chose SID, the URL should be jdbc:oracle:thin:@//<hostname>:1521/<SIDname>.
       d. An example serviceName is a fully qualified DNS-style name, e.g. "idmaker.us.oracle.com" (it does not need to actually resolve in DNS). An example SID is just a name, e.g. IDMAKER.
       e. Save the change and repeat for the data source dmkr_asline.
       f. You will also need to make the same changes in the ODEE_HOME/documaker/docfactory/config/context/.bindings file - open the file in a text editor, locate the URL lines, make the appropriate change, then save the file.
    7. Back in the ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts directory, execute create_users_groups.cmd.
    8. In the same directory, execute create_users_groups_correspondence_example.cmd.
    9. Open a browser and navigate to http://localhost:7001/jpsquery. Replace localhost with the name of your application server host if you aren't running the browser on the application server. If you changed the default port for the AdminServer from 7001, use the port you changed it to. You should see output confirming the query ran.
    10. Start the WebLogic managed servers by opening a command prompt and navigating to MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/. When you start the servers listed below, you will be prompted to enter the WebLogic credentials. You can prevent this by providing the credential in the startManagedWebLogic.cmd file for the WLS_USER and WLS_PASS values; note that the credential will be stored in cleartext. To start each server, type in the command shown.
       a. Start the JMS server: ./startManagedWebLogic.cmd jms_server
       b. Start Dashboard/Documaker Administrator: ./startManagedWebLogic.cmd dmkr_server
       c. Start Documaker Interactive for Correspondence: ./startManagedWebLogic.cmd idm_server
    SOA COMPOSITES
    If you're planning on testing out the approval process components of BPEL that can be used with Documaker Interactive, then use the following steps to deploy the SOA composites. If you're not going to use BPEL, you can skip to the next section.
    1. Stop the servers listed in the previous section (step 10) in the reverse order that they were started.
    2. Run the domain configuration command: navigate to and execute MIDDLEWARE_HOME/wlserver_10.3/common/bin/config.cmd.
    3. Select Extend and click Next.
    4. Select the iDocumaker Domain and click Next.
    5. Select Oracle SOA Suite – 11.1.1.0 (this may automatically select other components, which is OK). Click Next.
    6. View the Configure JDBC Resources screen. You should not make any changes. Click Next.
    7. Check both connections and click Test Connections. After a successful test, click Next. If the tests fail, something is broken; go back to Configure JDBC Resources and check your service name/SID.
    8. Check all schemas. Set a password (it will be the same for all schemas). Enter the database information (service name, host name, port). Click Next.
    9. Connections should test successfully. If not, go back and fix any errors. Click Next.
    10. Click Next to pass through Optional Configuration.
    11. Click Extend.
    12. Click Done.
    13. Open a terminal window and navigate to/execute ODEE_HOME/documaker/j2ee/weblogic/oracle11g/bpel/antbuild.cmd.
    14. Start the WebLogic servers – AdminServer, jms_server, dmkr_server, idm_server. If you forgot how to do this, see step 10 in the previous section. Note: if you previously changed the startManagedWebLogic.cmd script for WLS_USER and WLS_PASS you will need to make those changes again.
    15. Start the WebLogic server soa_server1: MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/startManagedWebLogic.cmd soa_server1
    16. Open a browser to http://localhost:7001/console and log in.
    17. Navigate to Services → Data Sources and select DMKR_ASLINE.
    18. Click the Targets tab. Check soa_server1, then click Save. Repeat for the DMKR_ADMIN data source.
    19. Open a command prompt and navigate to ODEE_HOME/j2ee/weblogic/oracle11g/scripts, then execute deploy_soa.cmd.
    That's it! (As if that wasn't enough?)
    DOCUMAKER
    Deploy the sample MRL resources by navigating to/executing ODEE_HOME/documaker/mstrres/dmres/deploysamplemrl.bat. You should see approximately 500 resources deployed into the database. Start the Factory Services: Start → Run → services.msc, locate the service named "ODDF xxxx", right-click and select Start. Note that each assembly line has a separate Factory setup, including its own Factory service and Docupresentment service. The services are named for the assembly line and the machine on which they are installed (because you could have multiple machines servicing a single assembly line), which allows for easy scripting to control all the services if you choose to do so. Repeat for the Docupresentment service; note that each assembly line has a separate Docupresentment. Using Windows Explorer, navigate to ODEE_HOME/documaker/mstrres/dmres/input, select one of the XML files, and copy it into ODEE_HOME/documaker/hotdirectory. Note: if you chose a different hot directory during installation, copy the file there instead. Momentarily you should see the XML file disappear! Open a browser and navigate to http://localhost:10001/DocumakerDashboard (previous versions 12.0-12.2 use http://localhost:10001/dashboard) and verify that the job processed successfully. Note that some transactions may fail if you do not have a properly configured email server, and this is OK. You can set up a simple SMTP server (just search the internet for "SMTP developer" and you'll get several to choose from).
    So... that's it? Where are we at this point? You now have a completely functional ODEE installation, from soup to nuts as they say. You can further expand your installation by doing some of the following activities:
    - clustering WebLogic services
    - configuring WebLogic for redundancy
    - configuring Oracle 11g for RAC
    - adding additional Factory servers for redundancy/processing capacity
    - setting up a real MRL (instead of the sample resources)
    - testing Documaker Web Services for job submission
    - and more!
    I certainly hope you've enjoyed this and find it useful. If you find yourself running into trouble, visit the Oracle Community for Documaker - there is plenty of activity there and you can ask questions. For more concentrated assistance, you can engage an Oracle consultant who is a subject matter expert to assist you. Feel free to email me [andy (dot) little (at) oracle (dot) com] and I can connect you with the appropriate resource to get started. Best of luck! -Andy

    Read the article

  • Game Changer Appliance for SMBs Powered by Oracle Linux

    - by Zeynep Koch
    In the November 28th CRN article "Review: Thumbs-Up On Oracle Database Appliance", Edward F. Moltzen mentions that "The Test Center likes this appliance (Oracle Database Appliance), for the performance and for the strong security offered by the underlying Oracle Linux in the box. It's more than a solid offering for the SMB space; it's potentially a game-changer as data and security needs race to keep up with the oncoming generations of technology." The Oracle Database Appliance is a new way to take advantage of the world's most popular database, Oracle Database 11g, in a single, easy-to-deploy-and-manage system. It's a complete package of software, server, storage, and network that's engineered for simplicity, saving time and money by simplifying deployment, maintenance, and support of database workloads. All hardware and software components are supported by a single vendor, Oracle, and offer customers unique pay-as-you-grow software licensing to quickly scale from 2 processor cores to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades. It is:
    - Simple: complete plug-and-go hardware and software
    - Reliable: advanced management features and single-vendor support
    - Affordable: a pay-as-you-grow platform for small database consolidation
    The Oracle Database Appliance is a 4U rack-mountable system pre-installed with Oracle Linux and Oracle Appliance Manager software. Redundancy is built into all components, and the Oracle Appliance Manager software reduces the risk and complexity of deploying highly available databases. It's perfect for consolidating OLTP and data warehousing databases up to 4 terabytes in size, making it ideal for midsize companies or departmental systems. Read more about Oracle's Database Appliance. Read more about Oracle Linux.

    Read the article

  • What is a good way to store tilemap data?

    - by Stephen Tierney
    I'm developing a 2D platformer with some uni friends. We've based it upon the XNA Platformer Starter Kit, which uses .txt files to store the tile map. While this is simple, it does not give us enough control and flexibility with level design. Some examples: multiple layers of content require multiple files, each object is fixed onto the grid, there is no way to rotate objects, the number of characters is limited, etc. So I'm doing some research into how to store the level data and map file. This concerns only the file-system storage of the tile maps, not the data structure to be used by the game while it is running. The tile map is loaded into a 2D array, so this question is about which source to fill the array from. Reasoning for a DB: from my perspective I see less redundancy of data when using a database to store the tile data. Tiles in the same x,y position with the same characteristics can be reused from level to level. It seems like it would be simple enough to write a method to retrieve all the tiles that are used in a particular level from the database. Reasoning for JSON/XML: visually editable files, and changes can be tracked via SVN a lot more easily. But there is repeated content. Does either have any drawbacks (load times, access times, memory, etc.) compared to the other? And what is commonly used in the industry? Currently the file looks like this:
    ....................
    ....................
    ....................
    ....................
    ....................
    ....................
    ....................
    .........GGG........
    .........###........
    ....................
    ....GGG.......GGG...
    ....###.......###...
    ....................
    .1................X.
    ####################
    1 - Player start point, X - Level Exit, . - Empty space, # - Platform, G - Gem
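    Since the question is only about the on-disk format, here is a minimal sketch of the conversion the JSON option implies (the game itself is XNA/C#; this is Python purely to illustrate the two formats, and the legend, file name and JSON layout are my own assumptions): load the .txt grid into a 2D array, then emit the same level as JSON with named layers and per-object fields such as rotation.

    ```python
    import json

    # Hypothetical legend matching the sample map above.
    LEGEND = {"1": "player_start", "X": "exit", ".": None, "#": "platform", "G": "gem"}

    def load_txt_level(path):
        """Read the .txt grid into a 2D array of tile names (None = empty)."""
        with open(path) as f:
            rows = [line.rstrip("\n") for line in f if line.strip()]
        return [[LEGEND[ch] for ch in row] for row in rows]

    def to_json_level(grid):
        """Emit the same level as JSON: named layers, objects with x/y/rotation."""
        objects = [
            {"type": cell, "x": x, "y": y, "rotation": 0}
            for y, row in enumerate(grid)
            for x, cell in enumerate(row)
            if cell is not None
        ]
        level = {
            "width": len(grid[0]),
            "height": len(grid),
            "layers": [{"name": "main", "objects": objects}],  # extra layers can be appended
        }
        return json.dumps(level, indent=2)

    if __name__ == "__main__":
        grid = load_txt_level("level1.txt")  # hypothetical file name
        print(to_json_level(grid))
    ```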

    Read the article

  • 2D Tile Map files for Platformer, JSON or DB?

    - by Stephen Tierney
    I'm developing a 2D platformer with some uni friends. We've based it upon the XNA Platformer Starter Kit, which uses .txt files to store the tile map. While this is simple, it does not give us enough control and flexibility with level design. Some examples: multiple layers of content require multiple files, each object is fixed onto the grid, there is no way to rotate objects, the number of characters is limited, etc. So I'm doing some research into how to store the level data and map file. Reasoning for a DB: from my perspective I see less redundancy of data when using a database to store the tile data. Tiles in the same x,y position with the same characteristics can be reused from level to level. It seems like it would be simple enough to write a method to retrieve all the tiles that are used in a particular level from the database. Reasoning for JSON: visually editable files, and changes can be tracked via SVN a lot more easily. But there is repeated content. Does either have any drawbacks (load times, access times, memory, etc.) compared to the other? And what is commonly used in the industry? Currently the file looks like this:
    ....................
    ....................
    ....................
    ....................
    ....................
    ....................
    ....................
    .........GGG........
    .........###........
    ....................
    ....GGG.......GGG...
    ....###.......###...
    ....................
    .1................X.
    ####################
    1 - Player start point, X - Level Exit, . - Empty space, # - Platform, G - Gem

    Read the article
