Search Results

Search found 18003 results on 721 pages for 'nidhinzz own'.


  • What is the best way to code the XNA Game Server for an FPS game?

    - by AgentFire
    I'm writing an FPS XNA game. It's going to be multiplayer, so I came up with the following: I'm making two different assemblies - one for the game logic and a second for drawing it and the game-irrelevant stuff (like rocket trails). The type of the connection is client-server (not peer-to-peer), so every client first connects to the server and then the game begins. I've decided to use the XNA.Framework.Game class for the clients to run their game in a window (or fullscreen) and the GameComponent/DrawableGameComponent classes to store the game objects and update and draw them on each frame.

    Next, I want an answer to the question: what should I do on the server side? I have a few options:

    - Create my own Game class on the server, which will process all the game logic (only, no graphics). The reason why I am not using the standard Game class is that when I call Game.Run() the white window appears and I can't figure out how to get rid of it.
    - Somehow use the original XNA Game class, which already has the GameComponent collection and Update event (60 times per second, just what I need).

    UPDATE: I have more questions. First, what socket mode should I use, TCP or UDP? And how do I actually let the client know that this packet is meant to be processed after that one? Second, if I am going to use exactly the GameComponent class for the game objects which are stored and processed on the server, how do I make them drawable on the client? Inherit them (while they are combined into one assembly)? Something else?
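    As a rough illustration of the first option above - a headless server-side loop with no window and no drawing - here is a minimal sketch. It is written in Python purely for illustration (the game itself is XNA/C#), and every name in it (GameSimulation, TICK_RATE, etc.) is made up rather than taken from XNA:

        # Minimal headless, fixed-timestep server loop (illustrative sketch only).
        import time

        TICK_RATE = 60              # updates per second, mirroring XNA's default
        TICK_DT = 1.0 / TICK_RATE

        class GameSimulation:
            """Authoritative game state; no graphics code at all."""
            def __init__(self):
                self.objects = []   # server-side stand-ins for GameComponents

            def update(self, dt):
                for obj in self.objects:
                    obj.update(dt)  # game logic only: movement, hits, rockets, ...

        def run_server(simulation):
            previous = time.monotonic()
            lag = 0.0
            while True:
                now = time.monotonic()
                lag += now - previous
                previous = now
                # Catch up in fixed steps so the simulation stays deterministic.
                while lag >= TICK_DT:
                    simulation.update(TICK_DT)
                    lag -= TICK_DT
                time.sleep(0.001)   # yield the CPU between ticks

    The same structure carries over to a plain .NET console process: a loop that ticks the component collection at a fixed rate, with the networking handled separately.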

    Read the article

  • Ubuntu Server 12.04 as a router. Problem with DNS

    - by Lorenzo
    I have a virtualbox lab made up of 4 Windows 2008 R2 servers (DC/DNS, SQL, SHAREPOINT, EXCHANGE) that are configured with static IP addresses, with NICs attached to the internal network. Everything works. I had the requirement to execute some tests that also access external services available on the internet. To keep things clean and similar to the production environment, I have installed another VM with Ubuntu Server 12.04 64 bit and configured it (I hope) to work as a router, as described in this post. This VM has two network interfaces: the first is bridged with the host and is used as the WAN connection, and the other is attached to the internal network with its own static IP address on the internal subnet. But actually the Windows servers do not connect to the internet, while the Ubuntu one does. I ran the route command; this is the result:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags  Metric  Ref  Use  Iface
        default         10.69.121.1     0.0.0.0         UG     100     0    0    eth0
        10.69.121.0     *               255.255.255.0   U      0       0    0    eth0
        192.168.83.0    *               255.255.255.0   U      0       0    0    eth1

    Can somebody help me with this configuration? :) Thanks!

    Addendum: I forgot to mention that one of the Windows servers hosts a DNS service, for which I should maybe configure a forwarder, but I do not know exactly which server to forward to... :(

    Read the article

  • Are very short or abbreviated method/function names that don't use full words bad practice, or a matter of style?

    - by Alb
    Is there nowadays any case for brevity over clarity with method names? Tonight I came across the Python method repr(), which seems like a bad name for a method to me. It's not an English word. It apparently is an abbreviation of 'representation', and even if you can deduce that, it still doesn't tell you what the method does. A good method name is subjective to a certain degree, but I had assumed that modern best practices agreed that names should be at least full words and descriptive enough to reveal enough about the method that you would easily find one when looking for it. Method names made from words help your code read like English. repr() seems to have no advantages as a name other than being short, and IDE auto-complete makes that a non-issue. An additional reason given in an answer is that Python names are brief so that you can do many things on one line. Surely the better way is to just extract the many things into their own function, and repeat until lines are not too long.

    Are these just a hangover from the Unix way of doing things? Commands with names like ls, rm, ps and du (if you can call those names) were hard to find and hard to remember. I know that the everyday usage of commands such as these is different from methods in code, so whether those are bad names is a different matter.
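    For concreteness, here is a tiny Python illustration of the trade-off being discussed: repr() is terse, while a fully spelled-out wrapper (a hypothetical name, not anything standard) reads more like English at the cost of length:

        class Point:
            def __init__(self, x, y):
                self.x, self.y = x, y

            def __repr__(self):
                # repr() returns an unambiguous, developer-facing string.
                return f"Point(x={self.x}, y={self.y})"

        def developer_representation(obj):
            """Hypothetical, fully spelled-out alias for the built-in repr()."""
            return repr(obj)

        print(repr(Point(1, 2)))                      # Point(x=1, y=2)
        print(developer_representation(Point(1, 2)))  # same output, longer name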

    Read the article

  • A new CAPTCHA using sentences?

    - by Xeoncross
    I was just thinking about how reCAPTCHA is getting harder when I thought about another possible solution. Images won't last forever, so we will need something else some day - like human logic or emotion. Google and others are trying grouping images by category (find the image that doesn't belong), but that requires a large number of images and doesn't work for the blind. Anyway, what if a massive collection of text was gathered (public-domain books from each language) and a sentence was shown to the user with 1 (or 2) words replaced by a select box of choices? Only computers that knew correct English/Spanish/German grammar would be able to tell which of the words belonged in the sentence. Would there be any problems with this approach? I would assume that it would be easy enough for anyone who knew the language the sentence was displayed in to figure out the answer, easier than trying to read the reCAPTCHA text. Plus, storing an insane number of sentences would only take a couple of gigabytes of space and wouldn't take anywhere near the CPU time that creating images/audio takes. In other words, anyone could host their own CAPTCHA system with minimal impact on system performance. Is there a problem with this approach? More specifically, I'm looking for the main problem with this approach.
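    A rough sketch of the mechanism being proposed, assuming a tiny made-up corpus and decoy list (everything here is illustrative, not a working CAPTCHA):

        import random

        CORPUS = [
            "It was a bright cold day in April.",
            "Call me Ishmael.",
        ]
        DECOYS = ["purple", "quickly", "seventeen", "beneath"]

        def make_challenge():
            sentence = random.choice(CORPUS)
            words = sentence.rstrip(".").split()
            answer = random.choice(words)
            blanked = " ".join("____" if w == answer else w for w in words)
            choices = random.sample(DECOYS, 3) + [answer]
            random.shuffle(choices)
            return blanked, choices, answer

        blanked, choices, answer = make_challenge()
        print(blanked)              # e.g. "It was a bright ____ day in April"
        print("Choices:", choices)  # picking `answer` requires knowing the grammar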

    Read the article

  • Is there a design pattern for chained observers?

    - by sharakan
    Several times, I've found myself in a situation where I want to add functionality to an existing Observer-Observable relationship. For example, let's say I have an Observable class called PriceFeed, instances of which are created by a variety of PriceSources. Observers of this are notified whenever the underlying PriceSource updates the PriceFeed with a new price. Now I want to add a feature that allows a (temporary) override to be set on the PriceFeed. The PriceSource should still update prices on the PriceFeed, but for as long as the override is set, whenever a consumer asks the PriceFeed for its current value, it should get the override. The way I did this was to introduce a new OverrideablePriceFeed that is itself both an Observer and an Observable, and that decorates the actual PriceFeed. Its implementation of .getPrice() is straight from Chain of Responsibility, but how about the handling of Observable events? When an override is set or cleared, it should issue its own event to Observers, as well as forwarding events from the underlying PriceFeed. I think of this as some kind of chained observer, and was curious whether there's a more definitive description of a similar pattern.
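    A minimal sketch of that arrangement, written in Python for brevity (class and method names follow the question; the Observable base class here is made up):

        class Observable:
            def __init__(self):
                self._observers = []

            def subscribe(self, callback):
                self._observers.append(callback)

            def notify(self, event):
                for callback in self._observers:
                    callback(event)

        class PriceFeed(Observable):
            def __init__(self):
                super().__init__()
                self._price = None

            def set_price(self, price):           # called by the PriceSource
                self._price = price
                self.notify(("price", price))

            def get_price(self):
                return self._price

        class OverrideablePriceFeed(Observable):
            """Decorates a PriceFeed; both an Observer of it and an Observable itself."""
            def __init__(self, feed):
                super().__init__()
                self._feed = feed
                self._override = None
                feed.subscribe(self.notify)       # forward the underlying feed's events

            def set_override(self, price):
                self._override = price
                self.notify(("override", price))  # issue our own event as well

            def get_price(self):
                # Chain of Responsibility: the override wins while it is set.
                if self._override is not None:
                    return self._override
                return self._feed.get_price()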

    Read the article

  • Salted passwords confusion

    - by Vasiliy Stavenko
    I'm setting up an email server for the first time and am confused by a strange thing. I have several user accounts which were stored on the previous server. Passwords for these accounts are in plain text, but I want to create crypts for them. MySQL (where my users will be stored) has the function ENCRYPT(passwd, salt). If no salt is given, a random value is used. I discovered that Courier uses one particular salt and encrypted all passwords with it, so the task is done. But I'd like to know whether there's a way to define my own salt for my POP3 server.
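    For what it's worth, crypt()-style hashing with a salt you pick yourself looks like the sketch below. It uses Python's crypt module (present up to Python 3.12, removed in 3.13); whether it matches MySQL's ENCRYPT() output depends on ENCRYPT() wrapping the same system crypt() with the same scheme, so treat that equivalence as an assumption to verify. The salt values are just examples:

        import crypt

        plain_password = "secret"

        # Traditional DES crypt: the salt is the first two characters of the hash.
        print(crypt.crypt(plain_password, "ab"))

        # An explicitly chosen stronger scheme: "$6$" selects SHA-512 crypt,
        # "mysalt" is the salt you define yourself.
        print(crypt.crypt(plain_password, "$6$mysalt$"))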

    Read the article

  • Shrew VPN Client gives default route - changing the policy stops me from accessing the VPN network

    - by Lock
    I am using the Shrew client to connect to what I believe is a NetScreen VPN. Now, when connected, the client adds the VPN as the default route. I do not want this - there is only one network behind the VPN that I need to access. I found that with the Shrew client, you can change the "Policy" settings on the connection and add your own networks that should tunnel over the VPN. I do this, and add my network in, but when I connect the VPN, I get nothing. I can't access the network. Any idea why this would be? I can see my network in the routing table, and it's correctly pointing to the correct gateway. A traceroute shows all time-outs, so I can't be 100% sure that it is trying to tunnel over the VPN. Any idea how I can troubleshoot this?

    Read the article

  • Mailbox move issue from Exchange 2003 to Exchange 2010

    - by Ryan Roussel
    Today while moving mailboxes between Exchange 2003 and Exchange 2010, I hit an issue with a couple of mailboxes. These mailboxes all popped access denied errors, or more exactly: "Insufficient Access Rights to perform the operation."

    The cause was similar to the mail flow issue in that inheritable permissions were not turned on for the user object in Active Directory. This also presented its own unique problem: since the initial move request failed because of permissions, it had to be cleared before a new move request could be created. On top of that, the request did not show up in the EMC. I used the following process to clear the request, assign permissions, then create a new request:

    1. First you need to know the ExchangeGUID of the mailbox for the remove-moverequest command. To quickly get the GUID for a mailbox, simply run:

    2. Next we need to clear out the move request using PowerShell by running:

       [PS] c:\>Remove-moverequest -moverequestqueue "mailbox database 1030639620" -mailboxguid 8525686f-d4d3-42b7-92f1-46d77ea841a3

    3. Then re-establish inheritable permissions. This can be done using AD Users and Computers: switch to View > Advanced Features, then under the Security tab of the object, click Advanced and check "allow inheritable permissions of parent to propagate to this object".

    4. Once the inheritable permissions are restored, we need to create a new move request:

       NOTE: The EMC can also be used to initiate the Move Request once the permissions are corrected.

       [PS] c:\>New-moverequest -identity jyoung -baditemlimit 100 -targetdatabase "mailbox database 1030639620"

    And that's it. The mailbox should move over smoothly with no access denied error.

    Read the article

  • How do I download photos tagged of me from Facebook?

    - by Keith
    I want to be able to download (and back up) photos tagged of me on Facebook. I'm specifically not interested in my own photo albums - I uploaded them and therefore already have them in better quality than FB. What I want are the photos others have uploaded that have me in them. I have a couple of hundred of these now, and don't much fancy three hours of right-click save-as... There seem to be a couple of utilities that pop up with a quick search, but (call me paranoid) I'm wary about giving some random freeware app my login and password. SocialSafe looked promising, but as it doesn't support this feature at the moment it's kind of pointless. Can anyone recommend one that they've actually used? I'd consider an open source one - I'm a programmer and don't mind digging through to check that it doesn't do anything nasty.

    Read the article

  • Database modularity with EBS volumes

    - by Eclyps19
    I would like to add modularity to my websites on EC2 instances by encapsulating the site files and the MySQL files in their own EBS volumes. The end result that I'm going for is the ability to quickly mount a volume or two to different servers running the same AMI (for testing/development/emergency maintenance, etc.), as well as maintain separate snapshots of each. I'm able to do this fairly easily with a single database by symlinking my mounted database EBS to the appropriate places (/var/lib/mysql, /etc/my.cnf, /var/log/mysqld.log), but I'm not sure if it would even be possible to have multiple databases on different EBS volumes running concurrently. Example:

        /website1/www.website.com         /database1/
        /website2/www.otherwebsite.com    /database2/

    Could anybody shed some light on this for me? Is it possible? Is it a bad idea? Thanks.

    Read the article

  • Triple-Head Setup: Radeon 5770

    - by Aren B
    I currently own a Sapphire Radeon HD 5770 1GB w/ DDR5 (Link). I've got two LCDs set up in this configuration:

        +---------++------+
        |         ||      |
        |    1    ||  2   |
        +---------++------+

    I'm looking into buying a new TV/monitor, a Samsung T240HD (h**p://www.memoryexpress.com/Products/PID-MX22473(ME).aspx), and I'd like to set up a triple monitor setup like this (the new monitor being #3):

        +---------++------++---------+
        |         ||      ||         |
        |    3    ||  2   ||    1    |
        +---------++------++---------+

        Monitor 1: DVI
        Monitor 2: DVI
        Monitor 3: HDMI
        PS3 - Monitor 3: HDMI2

    Is this possible with my current video card? Can I plug in 2x DVI + 1x HDMI and get a third display? Or am I going to have to buy a slew of DisplayPort adapters? I know older video cards could only drive 2 active displays, but I heard that barrier was removed with the DisplayPort-era video cards.

    Read the article

  • How to manage a developer who has poor communication skills

    - by djcredo
    I manage a small team of developers on an application which is at the mid-point of its lifecycle, within a big firm. This unfortunately means there is commonly a 30/70 split of programming tasks to "other technical work". This work includes:

    - Working with DBA / Unix / Network / Loadbalancer teams on various tasks
    - Placing and managing orders for hardware or infrastructure in different regions
    - Running tests that have not yet been migrated to CI
    - Analysis
    - Support / Investigation

    It's fair to say that the developers would all prefer to be coding rather than doing these more mundane tasks, so I try to hand out the fun programming jobs evenly amongst the team. Most of the team were hired because, though they may not have the elite programming skills to write their own compiler / game engine / high-frequency trading system etc., they are good communicators who "can get stuff done", work with other teams, and somewhat navigate the complex bureaucracy here. They are good developers, but they are also good all-round technical staff.

    However, one member of the team probably has above-average coding skills, but below-average communication skills. Traditionally, the previous development manager tended to give him the programming tasks and not the more mundane tasks listed above. However, I don't feel that this is fair to the rest of the team, who have shown an aptitude for developing a well-rounded skillset that is commonly required in a big-business IT department.

    What should I do in this situation? If I continue to give him more programming work, I know that it will be done faster (and conversely, I would expect him to complete the other work more slowly). But it goes against my principles, and promotes the idea that you can carve out a "comfortable niche" for yourself simply by being bad at the tasks you don't like.

    Read the article

  • LibGdx efficient data saving/loading?

    - by grimrader22
    Currently, my LibGDX game consists of a 512 x 512 map of tiles and entities such as players and monsters. I am wondering how to efficiently save and load the data of my levels. At the moment I am using JSON serialization for each class I want to save. I implement the Json.Serializable interface for all of these classes and write only the variables that are necessary.

    So my map consists of 512 x 512 tiles; that's about 260,000 tiles. Each tile on the map consists of a Tile object, which points to some final Tile object like a GRASS_TILE or a STONE_TILE. When I serialize each level tile, the final Tile that it points to is re-serialized over and over again, so if I have 100 tiles all pointing to GRASS_TILE, the data of GRASS_TILE is written 100 times over. When I go to load/deserialize my objects, 100 GrassTile objects are created, but they are each their own object. They no longer point to the final tile object. I feel like this makes reading/writing files very slow.

    If I were to abandon JSON serialization, to my knowledge my next best option would be saving the level data to an SQL database. Unless there is a way to speed up serializing/deserializing 260,000 tiles, I may have to do this. Is this a good idea? Could I really write that many tiles to the database efficiently?

    To sum all this up, I am trying to save my levels using JSON serialization, but it is VERY slow. What other options do I have for saving the data of so many tiles? I must also note that the JSON serialization is not slow on a PC; it is only VERY slow on a mobile device. Since file writing/reading is so slow on mobile devices, what can I do?
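    The usual fix for the re-serialization problem described above is to store one shared definition per tile type and write only a small ID per map cell. A language-agnostic sketch (in Python here; the game itself is Java/LibGDX, and all names below are made up):

        import json

        # Shared, "final"-style tile definitions, written once.
        TILE_TYPES = {0: {"name": "GRASS_TILE"}, 1: {"name": "STONE_TILE"}}

        def save_map(tile_ids, path):
            """tile_ids is the 512 x 512 grid flattened to a list of small ints."""
            with open(path, "w") as f:
                json.dump({"types": TILE_TYPES, "cells": tile_ids}, f)

        def load_map(path):
            with open(path) as f:
                data = json.load(f)
            types = {int(k): v for k, v in data["types"].items()}
            # Every cell points back at the single shared definition again.
            return [types[i] for i in data["cells"]]

        save_map([0, 1, 0, 0], "level1.json")
        cells = load_map("level1.json")
        print(cells[0] is cells[2])   # True: both cells share one GRASS_TILE object

    On mobile, writing 260,000 small integers (or a packed binary blob) is also far cheaper than 260,000 nested JSON objects.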

    Read the article

  • As an indie game dev, what processes are the best for soliciting feedback on my design/spec/idea? [closed]

    - by Jess Telford
    Background: I have worked in a professional environment where the process usually goes like the following:

    1. Brainstorm idea
    2. Solidify the game mechanics / design
    3. Iterate on design/idea to create a more solid experience
    4. Spec out the details of the design/idea
    5. Build it

    Step 3 is generally done with the stakeholders of the game (developers, designers, investors, publishers, etc.) to reach an 'agreement' which meets the goals of all involved. Because this process involves a series of often opposing and unique viewpoints, creative solutions can surface through discussion / iteration. This is backed up by a process for collating the changes / new ideas, as well as structured time for discussion.

    As a (now) indie developer, I have to play the role of all the stakeholders (developers, designers, investors, publishers, etc.), and often find myself too close to the idea / design to do more than minor changes, which I feel to be local maxima when it comes to the best result (I'm looking for the global maximum, of course). I have read that ideas / game designs / unique mechanics are merely multipliers of execution, and that keeping them secret is just silly. In sharing the idea with others outside the realm of my own thinking, I hope to replicate the influence other stakeholders have. I am struggling with the collation of changes / new ideas, and any kind of structured method of receiving feedback.

    My question: As an indie game developer, how and where can I share my ideas/designs to receive meaningful / constructive feedback? How can I successfully collate the feedback into a new iteration of the design? Are there any specialized websites, etc.?

    Read the article

  • Are there open source alternatives to Bitbucket, Github, Kiln, and similar DVCS browsing and management tools?

    - by Ryan Taylor
    I am aware of several tools/services that provide DVCS browsing and management, such as Bitbucket, Github, Kiln, SCM-Manager and RhodeCode. However, the use case I am considering is one such that:

    - Any source code must reside on the employer's internal servers.
    - The solution must be open source.
    - It should provide a Bitbucket or Github like experience, including a project wiki, repository browsing and management, and social coding aspects such as code review.
    - The solution should have Mercurial support (if not support for other DVCSs).

    Of these, only SCM-Manager and RhodeCode come close, as they can be installed on your own servers and are open source. However, they do not have the Bitbucket or Github experience. There is no issue tracker or wiki, and the UI, while functional, is not up to par with Github or Bitbucket. I can get close with Trac or Redmine with their repository browsers, but unfortunately they do not have any repository management capabilities. Are there other open source tools out there that would provide a similar experience to Bitbucket, Github or Kiln?

    Read the article

  • Portable scripting language for a multi-server admin?

    - by Aaron
    Please Note: Portable as in portableapps.com, not the traditional definition. Originally posted on stackoverflow.com; asking here at another user's suggestion. I'm a DBA and sysadmin, mostly for Windows machines running SQL Server. I'm looking for a programming/scripting language for Windows that doesn't require Admin access or an installer, needing no install process other than expanding it into a folder. My intent is to have a language for automation on which I can standardize. Up to this point, I've been using a combination of batch files and Unix shell, using sh.exe from UnxUtils, but it's far from a perfect solution. I've evaluated a handful of options, and all of them have at least one serious shortcoming or another. I have a strong preference for something open source or dual license, but I'm more interested in finding the right tool than anything else. Not interested in anything that relies on Cygwin or Java, but at this point I'd be fine with something that needs .NET.

    Requirements:
    - Manageable footprint (1-100 files, under 30 MB installed)
    - Runs on Windows XP and Server (2003+)
    - No installer (exe, msi)
    - Works with external pipes, processes, and files
    - Support for MS SQL Server or ODBC connections

    Bonus points:
    - Open source
    - FFI for calling functions in native DLLs
    - GUI support (native or gtk, wx, fltk, etc)
    - Linux, AIX, and/or OS X support
    - Dynamic, object oriented and/or functional, interpreted or bytecode compiled; interactive development
    - Able to package or compile scripts into executables

    So far I've tried:
    - Ruby: 148 MB on disk, 23000 files
    - Portable Python: 54 MB on disk, 2800 files
    - Strawberry Perl: 123 MB on disk, 3600 files
    - REBOL: great, except closed source and no MSSQL or ODBC in the free version
    - Squeak Smalltalk: great, except poor support for scripting

    ---- cut: points of clarification ----

    Why all the limitations? I realize some of my criteria seem arbitrarily confining. It's primarily a product of my environment. I work as a SQL Server DBA and backup Unix admin at a division of a large company. In addition to nearly a hundred boxes running some version or another of SQL Server on Windows, I also support the SQL Server Express Edition installs on over a thousand machines in the field. Because of our security policies, I don't have login rights on every machine. Often enough, an issue comes up and I'm given local Admin for some period of time. Often enough, it's some box I've never touched and I don't have my own environment set up yet. I may have temporary admin rights on the box, but I'm not the admin for the machine - I'm just the DBA. I've no interest in stepping on the toes of the Windows admins, nor do I want to take over any of their duties. If I bring up "installing" something, suddenly it becomes a matter of interest for Production Control and the Windows admins; if I'm copying up a script, no one minds. The distinction may not mean much to the readers, but if someone gets the wrong idea I've suddenly got a long wait and significant overhead before I can get the tool installed and the problem solved. That's why I want something that can be copied and run in the manner of a portable app.

    What about the small footprint? My company has three divisions, each in a different geographical location, and one of them is a new acquisition. We have different production control/security policies in each division. I support our MSSQL databases in all three divisions. The field machines are spread around the US, sometimes connecting to the VPN over very slow links. Installing Ruby using psexec has taken a long time over these connections. In these instances, the bigger time waster seems to be archives with thousands and thousands of files rather than their sheer size. You could say I'm spoiled by Unix, where the admins usually have at least some modern scripting language installed; I'd use PowerShell, but I don't know it well and, more importantly, it isn't everywhere I need to work. It's a regular occurrence that I need to write, deploy and execute some script on short notice on some machine I've never logged in on. Since having Ruby or something similar installed on every machine I'll ever need to touch is effectively impossible because of the approvals, time and Windows admin labor needed, it makes more sense to find a solution that allows me to work on my own terms.

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the Database feature CloneDB)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:

    - Present a collection of standardized database service definitions
    - Define standardized pools of hardware and software for provisioning
    - Role based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Setup chargeback plans based on service tiers and database configuration sizes, etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read only (requires the Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM 12c)
    - All 3 protection modes can be used - maximum availability, performance, security
    - Log apply can be set to sync or async along with the required apply lag

    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below:

        Primary   Standby [1 or more]
        SI        -
        SI        SI
        RAC       -
        RAC       SI
        RAC       RAC
        RON       -
        RON       RON

    (all combinations above are supported in EM 12cR4; RON = RAC One Node, supported via custom post-scripts in the service template)

    A sample service catalog would look like the image below.
    Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued the dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change the infrastructure or IT's operating style.

    In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature - it was first introduced in the 11.2.0.2 patchset - it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:

    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB

    The advantages of the new CloneDB integration with EM12c Snap Clone are:

    - Space and time savings
    - Ease of setup - no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduced dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    - Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zones, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups.

    The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:

    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:

    - Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools, along with host names, oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    - Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.

    Once all the xml files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not the least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like no. of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention.
    These mostly have been asked for by customers like you. They are:

    - Custom naming of DB Services: Self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces. Every custom name is validated for uniqueness in EM.
    - 'Create like' of Service Templates: Now creating variants of a service template is only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: View the details of a profile, like datafiles, control files, snapshot ids, export/import files, etc., prior to its selection in the service template.
    - Cleanup automation for failed and successful requests: A single emcli command cleans up all remnant artifacts of a failed request. Cleanup can be performed on a per request basis or for the entire pool. As an extension, you can also delete successful requests.
    - Improved delete user workflow: Allows administrators to reassign cloud resources to another user or delete all of them.
    - Support for multiple tablespaces for schema as a service: In addition to multiple schemas, the user can also specify multiple tablespaces per request.

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References:
    - Cloud Management Page on OTN
    - Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)

    Read the article

  • Execute background program in bash without job control

    - by Wu Yongzheng
    I often execute GUI programs, such as firefox and evince, from the shell. If I type "firefox &", firefox is considered a bash job, so "fg" will bring it to the foreground and "hang" the shell. This becomes annoying when I have some background jobs, such as vim, already running. What I want is to launch firefox and disassociate it from bash. Consider the following ideal case with my imaginary runbg:

        $ vim foo.tex
          (ctrl+z, and vim is job 1)
        $ pdflatex foo
        $ runbg evince foo.pdf
          (evince runs in the background and I get my bash prompt back)
        $ fg
          (vim goes to the foreground)

    Is there any way to do this using an existing program? If not, I will write my own runbg.
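    For what it's worth, a minimal runbg along those lines can be a few lines of Python (one sketch among many; setsid, nohup and disown get you much the same effect):

        #!/usr/bin/env python3
        import subprocess
        import sys

        def runbg(argv):
            subprocess.Popen(
                argv,
                stdin=subprocess.DEVNULL,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
                start_new_session=True,   # new session: not a job of this shell
            )

        if __name__ == "__main__":
            runbg(sys.argv[1:])           # e.g.  runbg evince foo.pdf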

    Read the article

  • Network architecture when using Rackspace's Cloud

    - by brianz
    I'm planning on launching a web application soon, and have decided on using Rackspace's Cloud offering with Debian. I'm not expecting that much traffic to start, but would rather get the architecture correct now, even with the small VPSs. The thing I'm not quite sure about is how many VPSs I should get. At a minimum, I know I'll want three VPSs:

    - Two Apache webservers
    - One server for MySQL

    I'd also like:

    - Nginx load balancer
    - MySQL replication
    - memcached

    I'm not sure where those last three processes should be running. Can the load balancer run on the same machine as the MySQL slave, or should they each run on their own machine? Does memcached run alongside the webservers or on different machines?

    Read the article

  • Distinction between API and frontend-backend

    - by Jason
    I'm trying to write a "standard" business web site. By "standard", I mean this site runs the usual HTML5, CSS and Javascript for the front-end, a back-end (to process stuff), and runs MySQL for the database. It's a basic CRUD site: the front-end just makes pretty whatever the database has in store; the backend writes to the database whatever the user enters and does some processing. Just like most sites out there. In creating my Github repositories to begin coding, I've realized I don't understand the distinction between the front-end back-end, and the API. Another way of phrasing my question is: where does the API come into this picture? I'm going to list some more details and then questions I have - hopefully this gives you guys a better idea of what my actual question is, because I'm so confused that I don't know the specific question to ask. Some more details: I'd like to try the Model-View-Controller pattern. I don't know if this changes the question/answer. The API will be RESTful I'd like my back-end to use my own API instead of allowing the back-end to cheat and call special queries. I think this style is more consistent. My questions: Does the front-end call the back-end which calls the API? Or does the front-end just call the API instead of calling the back-end? Does the back-end just execute an API and the API returns control to the back-end (where the back-end acts as the ultimate controller, delegating tasks)? Long and detailed answers explaining the role of the API alongside the front-end back-end are encouraged. If the answer depends on the model of programming (models other than the Model-View-Controller pattern), please describe these other ways of thinking of the API. Thanks. I'm very confused.

    Read the article

  • Restoring an Ubuntu Server using ZFS RAID-Z for data

    - by andybjackson
    Having become disillusioned with hacking Buffalo NAS devices, I've decided to roll my own home server. After some research, I have settled on an HP ProLiant MicroServer with Ubuntu Server and ZFS (OS on 1 ext4 disk, data on 3 RAID-Z disks). As Joel Spolsky and Jeff Atwood say with regards to backup, I can't rest until I have done a restore in all of the failure scenarios that I am seeking to protect against.

    Q: How do I configure Ubuntu Server to recognise a pre-existing RAID-Z array?

    Clearly if one of the data disks dies, that is a resilvering scenario, which is well documented. If two of the data disks die, then I am into regular backup/restore land. If the OS dies and I can restore it, that is also an easy scenario. But if the OS dies and I can't restore it, then I need to recreate an Ubuntu server. How do I get this to recognise my RAID-Z array? Is the necessary configuration information stored within and across the RAID-Z array, and does it simply need to be found (if so, how)? Or does it reside on the OS ext4 disk (in which case, how do I recreate it)?

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to many of the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes-Oxley).

    From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release - all that information and more is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well... I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting.

    Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play to knit this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other...

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS:

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
    Figure: Saving Queries locally can be useful

    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: The Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states.

    The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance when you want to track a test and the test results of a Unit Test designed to test the existence, and then re-existence, of a Bug.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it requires being broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

    - Business Analysts manage Requirements and Change Requests
    - Programmers manage Tasks and check in against Change Requests and Bugs
    - Testers manage Bugs and Test Cases
    - Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Move model forward based on model orientation

    - by ChocoMan
    My model rotates on its own Y-axis regardless of where it is in the world. Here are the controls for the left thumbstick:

        UP (move model forward on the Z-axis)
        DOWN (move model backward on the Z-axis)
        LEFT & RIGHT (strafe to either side)

    The problem is adjusting the direction of the model's movement for UP and DOWN if the player also rotates the model while moving forward or backwards. An example of what I'm trying to achieve would be a car doing donuts: the car is always facing the current direction that it interprets as forward (or rear as backwards) in relation to its local rotation. Here is how I'm calling the movement:

        // Rotate model with Right Thumbstick along X-Axis
        modelRotation -= pController.ThumbSticks.Right.X * mRotSpeed;

        // Move Forward
        if (pController.IsButtonDown(Buttons.LeftThumbstickUp))
        {
            modelPosition.Z -= -pController.ThumbSticks.Left.Y * speed;
        }

        // Move Backward
        if (pController.IsButtonDown(Buttons.LeftThumbstickDown))
        {
            modelPosition.Z += pController.ThumbSticks.Left.Y * speed;
        }

        // Strafe Left
        if (pController.IsButtonDown(Buttons.LeftThumbstickLeft))
        {
            modelPosition.X += -pController.ThumbSticks.Left.X * speed;
        }

        // Strafe Right
        if (pController.IsButtonDown(Buttons.LeftThumbstickRight))
        {
            modelPosition.X -= pController.ThumbSticks.Left.X * speed;
        }

        // DeadZone
        if (!pController.IsButtonDown(Buttons.LeftThumbstickUp) &&
            !pController.IsButtonDown(Buttons.LeftThumbstickDown) &&
            !pController.IsButtonDown(Buttons.LeftThumbstickLeft) &&
            !pController.IsButtonDown(Buttons.LeftThumbstickRight))
        {
        }
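    The usual approach to the "car doing donuts" behaviour is to derive a forward vector from the model's current yaw and move along that vector, instead of adding to the world-space Z axis directly. A framework-neutral sketch (Python here; in XNA this would typically be a Vector3.Transform of the forward vector by a Y rotation matrix, and the sign conventions below are assumptions):

        import math

        model_rotation = 0.0      # yaw in radians
        model_x, model_z = 0.0, 0.0
        speed = 0.1

        def move(stick_y):
            """Positive stick_y moves forward along the model's own facing."""
            global model_x, model_z
            forward_x = math.sin(model_rotation)
            forward_z = math.cos(model_rotation)
            model_x += forward_x * stick_y * speed
            model_z += forward_z * stick_y * speed

        move(1.0)                 # forward in whatever direction the model faces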

    Read the article

  • Reduce Windows DNS Service caching on Windows

    - by Nick G
    I'm struggling with DNS caching issues on a Windows-based LAN. I've noticed that if I change a DNS record on a domain hosted by a 3rd-party nameserver, I always seem to be the very last person to see the change happen. I can often query the domain using a service which checks propagation around the world, like www.whatsmydns.net, but I usually find that all other DNS servers are correct and it's only my own server which has the old IP - even 8-12 hours later. This is an issue for us as we're website developers and often make changes to DNS records, so these huge delays are frustrating. It seems to be because our primary domain controller (+ Active Directory & DNS) on our LAN (which is also our local DNS server) caches records for ages (way beyond their published TTL). How can I stop the Windows DNS server from caching, or reduce the caching to only an hour or so?
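    As a small aside on diagnosing this kind of lag, a sketch like the following (Python with the third-party dnspython 2.x package, purely illustrative; example.com and 8.8.8.8 are placeholders) compares what the local resolver returns with what a public resolver returns, which makes it obvious whether the stale answer really is coming from the local DNS server's cache:

        import dns.resolver   # third-party package: dnspython (2.x API)

        def lookup(hostname, nameserver=None):
            resolver = dns.resolver.Resolver()
            if nameserver:
                resolver.nameservers = [nameserver]   # override the system resolver
            return sorted(rr.to_text() for rr in resolver.resolve(hostname, "A"))

        name = "example.com"                           # placeholder domain
        print("local DNS :", lookup(name))             # whatever the LAN DNS server says
        print("public DNS:", lookup(name, "8.8.8.8"))  # a resolver outside the LAN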

    Read the article

  • Dot matrix printer setup...

    - by Parhs
    Hello! I am using Debian, which is similar to Ubuntu. They have 7 dot matrix printers, some very old, like this one: http://www.omnidatasys.net/product/desc_printer_ti880.htm, which has been working daily since 1979 and at text is faster than many inkjets. I believe that it has its own language... Sending text to the serial port (port server) prints garbage. However, I think it prints only capital English up to ASCII 95 and Greek, and the rest up to 127 I think is capital Greek (special chip). Sending English capital letters prints garbage, I think, but I am not sure... I will try again...

    The other printers are ESC/P compatible, and I use the generic Epson driver provided by Ghostscript. However, I think that sending text via

        lp -dpr1 filename

    prints the text as a graphic... Changing the printer font face (Courier, Times Roman, etc.) or pitch has no effect... I am wondering if there is any workaround for this?

    In AIX, they claim that the lp command prints output as text, and COBOL programs send raw text to the lp printers. However, in AIX they use some custom filters for the printers and have more options for dot matrix printers. I would like to know if there is a solution for this: to avoid graphics mode for text and change the font face somehow. The most straightforward approach would be to use no driver and just send ESC/P from COBOL, but this requires too much work... Thank you again!

    Read the article
