Search Results

Search found 35892 results on 1436 pages for 'a different ben'.


  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions. Below are highlights of the most informative:

    DBaaS requires lengthy and careful design efforts. What are the minimum requirements for setting up a scaled-down environment and testing it out? You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    How does this benefit companies having their own data center? This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility, and reduced risk. The benefits to the (internal) consumers of services are much faster provisioning and faster response to changes in business requirements.

    From a deployment perspective, is DBaaS solely the DBA's job? The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    The purpose of self-service seems to be that the end user does not rely on the DBA. I just need to give him a template, and he decides how much AMM he needs. Why should I set it up one by one? That doesn't seem to be the purpose of self-service. Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS? There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    How does Database as a Service impact or help with "Click to Compute" or deploying a "database in cloud infrastructure"? DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    How exactly is DBaaS different from a traditional database? Do storage/OS/DB all come together to 'transparently' provide a service to applications? Will there be cross-database access by applications/users? Some key differences are: 1) The services run on a shared platform. 2) The services can be rapidly provisioned (< 15 minutes). 3) The services are dynamic and can be relocated, grown, or shrunk rapidly as needed to meet business needs without disruption. 4) The user is able to provision the services directly from a standardized service catalog.

    With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How do pluggable databases handle this, along with the different needs and patching downtime of the applications the databases might be serving? You can gather stats in Oracle Multitenant the same way you have been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patching and upgrades very efficient in that you can pre-provision a new/patched multitenant database in a different ORACLE_HOME and then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.

    Read the article

  • How to keep a generic process unique?

    - by Steve Van Opstal
    I'm currently working on a project that makes connections to different banks, which send us information to which the project replies. A part of that project configures the different protocols that are used (not every bank uses the same protocol); this runs on a separate server. These processes all have unique IDs, which are stored in a database. But to save time and money on configuration and new processes, we want to make a generic protocol that banks can use. Because of PCI requirements we have to make a separate process for every bank we connect to. But the generic process has only one unique identifier, and therefore we cannot keep them apart. Giving every copy of that process a different identifier is, as I see it, impossible because they run entirely separately. So how do I keep my generic process unique?
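    For illustration only, a minimal C# sketch of one possible approach, assuming each deployed copy derives its identifier from its per-bank configuration rather than from the shared process definition; the names ProcessName and BankCode are hypothetical and not part of the question:

        // Hypothetical sketch: the generic process stays identical, but every deployed copy
        // is configured with a bank code, so the stored identifier is unique per instance.
        using System;

        public sealed class ProcessInstanceId
        {
            public string ProcessName { get; }   // e.g. the shared generic protocol name
            public string BankCode { get; }      // per-deployment configuration value

            public ProcessInstanceId(string processName, string bankCode)
            {
                if (string.IsNullOrEmpty(bankCode))
                    throw new ArgumentException("Each deployment needs its own bank code.", nameof(bankCode));
                ProcessName = processName;
                BankCode = bankCode;
            }

            // A stable key such as "GenericProtocol/BANK-A", suitable for storing in the database.
            public override string ToString() => $"{ProcessName}/{BankCode}";
        }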

    Read the article

  • Licensing a collaborative research project

    - by Marcus Jones
    I am involved with an international research project which involves many different universities, national labs, and companies. The project is funded by national grants and in-kind support. One task in the project is to develop code to streamline workflow in our domain (energy simulation) by scripting common pre- and post-processing tasks for different tools. We want this code to be freely distributable to the simulation community. How can we ensure that this effort is digestible by the legal departments of these different parties, so that the people involved can code freely?

    Read the article

  • Change player's state and controls in-game

    - by Samurai Fox
    I'm using Unity 3D. Let's say the player is an ice cube. You control it like a normal player. On press of a button, the ice transforms (with animation) into water. You control it completely differently than the ice cube. Another great example would be: the player is a human being and has normal FPS controls. On press of a button, the human transforms into a bird and now has completely different controls. Now, my question is, what would be easier and better: make one object with an animation transition and stay in that animation state until the button is pressed again, or make two objects, ice and water, where ice has an animation of turning into water, so the ice object is replaced (with animation) by the water object? And if anyone knows this one too: how do I switch between two different types of player controls?
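    For reference, a minimal Unity C# sketch of the first option (one object whose animation state and control scheme change on a button press). The "Transform" button and the "IsWater" animator parameter are placeholder names, not part of the question:

        using UnityEngine;

        public class PlayerForm : MonoBehaviour
        {
            private enum Form { Ice, Water }
            private Form current = Form.Ice;

            public Animator animator;      // plays the ice->water / water->ice transition
            public float iceSpeed = 3f;
            public float waterSpeed = 6f;

            void Update()
            {
                // Toggle the form on a (placeholder) button press and drive the animation.
                if (Input.GetButtonDown("Transform"))
                {
                    current = current == Form.Ice ? Form.Water : Form.Ice;
                    if (animator != null)
                        animator.SetBool("IsWater", current == Form.Water);
                }

                // Completely different control schemes per state:
                if (current == Form.Ice)
                {
                    float move = Input.GetAxis("Horizontal");
                    transform.Translate(Vector3.right * move * iceSpeed * Time.deltaTime);
                }
                else
                {
                    Vector3 move = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
                    transform.Translate(move * waterSpeed * Time.deltaTime);
                }
            }
        }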

    Read the article

  • Using 301 Redirects on new site when access to old site denied?

    - by Cape Cod Gunny
    I have a situation where I'm standing up a new website on a different web host. I've been denied access to the old site by the hosting company, and the old site will most likely be turned off very soon. If my new site contains pages that are named slightly differently, how do I go about setting up 301 redirects on my new site? For example: www.oldsite.com/aboutus/ becomes www.newsite.com/aboutus.html, and www.oldsite.com/productx/ becomes www.newsite.com/productx.html. Edit: Clarification: The old domain name is different from the new domain name. On my new site, do I just duplicate every page that existed on the old site and place redirect code inside those pages? What does the redirect code look like?
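    For illustration, a hedged sketch of what such redirect code might look like if the new site happened to run on ASP.NET; on Apache or IIS the same old-path-to-new-path mapping is usually expressed in server configuration instead. The path pairs below are only the examples from the question:

        // Global.asax.cs -- hypothetical sketch, assuming an ASP.NET (System.Web) site.
        using System;
        using System.Collections.Generic;
        using System.Web;

        public class Global : HttpApplication
        {
            // Old-style path -> renamed page on the new site.
            private static readonly Dictionary<string, string> Redirects =
                new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
                {
                    { "/aboutus/",  "/aboutus.html" },
                    { "/productx/", "/productx.html" }
                };

            protected void Application_BeginRequest(object sender, EventArgs e)
            {
                string newPath;
                if (Redirects.TryGetValue(Request.Path, out newPath))
                {
                    Response.RedirectPermanent(newPath); // issues an HTTP 301
                }
            }
        }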

    Read the article

  • How do you organize your projects?

    - by Sergio
    Do you have any particular style of organizing projects? For example, currently I'm creating a project for a couple of schools here in Bolivia; this is how I organized it:

        TutoMentor (Solution)
            TutoMentor.UI (WinForms project)
            TutoMentor.Data (Class library project)

    How exactly do you organize your projects? Do you have an example of something you organized and are proud of? Can you share a screenshot of the Solution pane? In the UI area of my application, I'm having trouble deciding on a good scheme for organizing the different forms and where they belong. Edit: What about organizing the different forms in the .UI project? Where and how should I group the different forms? Putting them all at the root level of the project is a bad idea.

    Read the article

  • Application compatibility with the Cinnamon Desktop

    - by fossfreedom
    I've been reading about the Cinnamon Desktop through this Q&A: How do I install the Cinnamon Desktop? I've also been looking at the various possible flavours of Ubuntu such as Unity/Gnome-shell and KDE: How do I install KDE shell on Ubuntu? How do I install and use the latest version of GNOME Shell? My concern is about running applications on Cinnamon - I'm fairly new to Linux and am unsure about running software, especially about potentially mixing different software that may have been targeted at different desktops. Can I run Kubuntu or Unity applications in Cinnamon, or do I have to get specific Cinnamon-only applications? For example, KOffice/LibreOffice, Okular/Evince, etc. Any help understanding the potential impacts, such as extra software I may need to use or additional configuration needed to make the various applications behave and look like each other, would be gratefully received.

    Read the article

  • Continuous Collision Detection Techniques

    - by Griffin
    I know there are quite a few continuous collision detection algorithms out there, but I can't find a list or summary of different 2D techniques; only tutorials on specific algorithms. What techniques are out there for calculating when different 2D bodies will collide, and what are the advantages/disadvantages of each? I say techniques and not algorithms because I have not yet decided on how I will store different polygons, which might be concave or even have holes. I plan to make a decision on this based on what the algorithm requires (for instance, if an algorithm breaks down a polygon into triangles or convex shapes, I will simply store the polygon data in this form).
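    As a concrete example of one common 2D technique, a swept test between bounding circles solves for the earliest time of impact within a timestep. The sketch below is engine-agnostic C# and assumes constant velocities over the step:

        using System;

        public static class SweptCircle
        {
            // Earliest t in [0, 1] at which two circles moving with constant velocities
            // over the timestep first touch, or null if they don't collide this step.
            public static double? TimeOfImpact(
                double ax, double ay, double avx, double avy, double ar,
                double bx, double by, double bvx, double bvy, double br)
            {
                // Work in A's frame: relative position and velocity.
                double px = bx - ax, py = by - ay;
                double vx = bvx - avx, vy = bvy - avy;
                double r = ar + br;

                // |p + v*t|^2 = r^2  ->  (v.v) t^2 + 2 (p.v) t + (p.p - r^2) = 0
                double a = vx * vx + vy * vy;
                double b = 2 * (px * vx + py * vy);
                double c = px * px + py * py - r * r;

                if (c <= 0) return 0.0;          // already overlapping at the start of the step
                if (a == 0) return null;         // no relative motion
                double disc = b * b - 4 * a * c;
                if (disc < 0) return null;       // paths never intersect

                double t = (-b - Math.Sqrt(disc)) / (2 * a);  // earliest root
                if (t < 0 || t > 1) return null;
                return t;
            }
        }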

    Read the article

  • How to determine the version and origin of proprietary drivers installed by Additional Drivers?

    - by Bribles
    How can I tell which version and from which repository the Additional Drivers tool is trying to install the fglrx graphics driver? It says that I have a different version of the driver in use. I installed the driver from maverick/restricted and apt-cache tells me it's from a regular Ubuntu mirror. The installed version is the same as the candidate version. Can I get Additional Drivers to tell me what it would install if I activated the driver through it? Is it possible Additional Drivers just assumes it's a different version since it was installed by a different process?

    Read the article

  • How do I implement Unreal-like object serialization?

    - by MrWiggels
    Recently, I've been working on the core of my engine, and as I'm moving forward I find myself writing throwaway code to read files and simple data into the engine. This got me thinking about how I should implement a file management system. After a bit of googling I came across the Unreal Package format, and boy does it look like the perfect one. I think it's good because of the way it allows you to separate different assets into different packages and lets something like a level reference the different packages. I was just wondering: is this possible with C#? The built-in serialization API in .NET does not seem to support any form of this, only reading and writing to a single file.
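    For illustration, a minimal C# sketch of a hand-rolled package format using BinaryWriter/BinaryReader, where each asset is simply a named byte blob. A real Unreal-style package adds a name table, offsets, and cross-package references, which this sketch omits:

        using System.Collections.Generic;
        using System.IO;

        public static class SimplePackage
        {
            // Layout: [asset count], then per asset: [name][length][bytes].
            public static void Write(string path, IDictionary<string, byte[]> assets)
            {
                using (var w = new BinaryWriter(File.Create(path)))
                {
                    w.Write(assets.Count);
                    foreach (var kv in assets)
                    {
                        w.Write(kv.Key);          // length-prefixed string
                        w.Write(kv.Value.Length);
                        w.Write(kv.Value);
                    }
                }
            }

            public static Dictionary<string, byte[]> Read(string path)
            {
                var assets = new Dictionary<string, byte[]>();
                using (var r = new BinaryReader(File.OpenRead(path)))
                {
                    int count = r.ReadInt32();
                    for (int i = 0; i < count; i++)
                    {
                        string name = r.ReadString();
                        int length = r.ReadInt32();
                        assets[name] = r.ReadBytes(length);
                    }
                }
                return assets;
            }
        }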

    Read the article

  • New windows from the dock in Gnome Shell

    - by Andrea
    I am using Gnome Shell on Ubuntu 11.10, and I frequently use workspaces, as the shell encourages you to do. My only complaint is that it is slow to place different windows of the same program in different workspaces. What I would like to do: click on an empty workspace, click on the Nautilus icon in the dock, and browse to the correct folder. Then click on another empty workspace, click on the Nautilus icon, and so on. This does not work: the second time I open Nautilus, the dock just lets me switch to the previous instance, which is almost never what I want. So I have to click on the Nautilus icon, open a new window, place it on a different desktop, switch to that desktop, and finally browse to the correct folder. Is there a way to simplify this flow? It would be even better if I were able to link a specific folder from the dock, or better yet to have something similar to a Unity lens, where I can choose between the most used folders.

    Read the article

  • What are options for 3rd Party Centralized Software Settings Management?

    - by Jeff Martin
    I am an architect in an enterprise looking to build a SaaS solution. Our products are distributed over many different deployable containers, web services, web UIs, etc. I am looking for some open-source or third-party software solution to manage the settings of our application. These would be similar to the settings you might find in Word or Eclipse or Visual Studio. The settings would control various behaviors and features of the product (probably not settings like which database to connect to, but more like whether to show line numbers on the page by default). Ideally, we would be able to store values for different dimensions (by tenant, by user, by application environment, ...). Because we have so many different deployables, I am looking for a centralized solution that can provide a web service that each of the deployables can get its individual settings from. Does anyone know of a centralized service providing this sort of feature, or can anyone give me some help in searching for an alternative to rolling our own?
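    Whatever product ends up being used, the lookup such a service performs is usually a most-specific-wins resolution across dimensions (user, then tenant, then environment, then default). A rough C# sketch of that resolution order, with hypothetical scope names:

        using System.Collections.Generic;

        public class SettingsService
        {
            // key -> (scope -> value); scopes like "user:42", "tenant:acme", "env:prod", "default".
            private readonly Dictionary<string, Dictionary<string, string>> store =
                new Dictionary<string, Dictionary<string, string>>();

            public void Set(string key, string scope, string value)
            {
                if (!store.TryGetValue(key, out var byScope))
                    store[key] = byScope = new Dictionary<string, string>();
                byScope[scope] = value;
            }

            // Most-specific scope wins; falls back to the default.
            public string Get(string key, string userId, string tenantId, string environment)
            {
                if (!store.TryGetValue(key, out var byScope)) return null;
                foreach (var scope in new[] { $"user:{userId}", $"tenant:{tenantId}", $"env:{environment}", "default" })
                    if (byScope.TryGetValue(scope, out var value)) return value;
                return null;
            }
        }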

    Read the article

  • Responsive Menu Navigation [closed]

    - by Aaron Brewer
    I am sure you all have heard of Responsive/Adaptive Website Design and Development, but for the sake of beginners to the technique and skill, what are ways to create a Responsive Menu Navigation? I know there are a few standard ways, such as: a JavaScript/jQuery menu that changes functionality at different breakpoints, or a CSS3 menu that changes functionality at different breakpoints. If you have had the opportunity to create a Responsive Menu, what technique did you use? How did you do it? Do you have an example? Did your menu change functionality at different breakpoints? To read up on Responsive and Adaptive Design: http://johnpolacek.github.com/scrolldeck.js/decks/responsive/ To read up on Responsive and Adaptive Design Menus: http://blog.usabilla.com/10-tips-how-to-handle-responsive-navigation-menus-successfully/ I hope this will save Pro Webmasters plenty of duplicate questions.

    Read the article

  • Disable outbound links without letting others know

    - by tadoman
    Is there a way I can tell Google not to follow external links (pointing to other sites) without letting others know? I know you can disable outbound links by putting rel=nofollow on them or rules in robots.txt. But that's something others can see as well. I'm just wondering if there's a way to tell Google not to follow those links without letting others know, like a setting in Webmaster Tools or something similar. (There's definitely one way: I could set an exception in my server's conf file to check whether the user agent is "googlebot" and then serve a different version of robots.txt, so that when a different user checked that link it would return a different robots.txt than the one served to Googlebot. However, I'm not too sure Google would be too happy about this.) Thank you

    Read the article

  • 3D RTS pathfinding

    - by xcrypt
    I understand the A* algorithm, but I have some trouble doing it in 3D to suit the needs of my RTS. Basically, in the game I'm making, there will be agents with different sizes of OBB collision boxes. I can use steering behaviours for avoiding other agents, so I don't need complete dynamic pathfinding. However, there is a problem because different agents have different collision geometry, and structures can be placed in almost any place. This means that there might be a gap between two structures where some agents can go through and some can't. A solution I have found to this problem is to do a sweep of the collision geometry of the agent from the start node of the edge the pathfinding algorithm is currently testing to the end node of that edge. But this is probably a bit overkill, since every edge the algorithm tests would also require creating and testing a collision geometry sweep. What are some reasonable approaches to this problem? I should mention that I'd prefer not to use navmeshes; I prefer waypoints because my entire system is based on them at the moment.
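    One common alternative to sweeping collision geometry along every edge is clearance-based pathfinding: annotate each waypoint edge once with the largest agent radius that fits through it, then have A* simply skip edges that are too narrow for the querying agent. A rough C# sketch of the filtering step (names are illustrative):

        using System.Collections.Generic;
        using System.Linq;

        public class Waypoint
        {
            public List<Edge> Edges = new List<Edge>();
        }

        public class Edge
        {
            public Waypoint To;
            public float Cost;
            public float Clearance;   // precomputed once: max agent radius that fits through this edge
        }

        public static class Pathfinding
        {
            // Neighbours A* is allowed to expand for an agent of the given radius.
            public static IEnumerable<Edge> PassableEdges(Waypoint node, float agentRadius)
            {
                return node.Edges.Where(e => e.Clearance >= agentRadius);
            }
        }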

    Read the article

  • Parsing glGetShaderInfoLog() to get error info. Is this reliable, or is there a better way?

    - by m4ttbush
    I want to get a list of errors and their line numbers so I can display the error information differently from how it's formatted in the error string, and also show the line in error. It looks easy enough to just parse the result of glGetShaderInfoLog(): look for "ERROR:", then read the next number up to the colon, and then the next, and then the error description up to the next newline. But the OpenGL docs say "Application developers should not expect different OpenGL implementations to produce identical information logs," which makes me worry that my code may behave incorrectly on different systems. I don't need them to be identical, I just need them to follow the same format. So is there a better way to get a list of errors with the line numbers separate, is it safe to assume that they'll always follow the "ERROR: 0:123:" format, or is there simply no reliable way to do this? Thanks!
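    Since the format is not guaranteed by the spec, a defensive approach is to parse the common "ERROR: 0:123: message" layout and fall back to keeping the raw line when it does not match. A sketch (shown here in C#; the regex is the important part):

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        public struct ShaderError
        {
            public int? Line;        // null when the line number couldn't be recovered
            public string Message;
        }

        public static class ShaderLogParser
        {
            // Matches the common "ERROR: <source string>:<line>: <message>" layout.
            private static readonly Regex ErrorLine =
                new Regex(@"^ERROR:\s*\d+:(\d+):\s*(.*)$", RegexOptions.IgnoreCase);

            public static List<ShaderError> Parse(string infoLog)
            {
                var errors = new List<ShaderError>();
                foreach (var raw in infoLog.Split('\n'))
                {
                    var line = raw.Trim();
                    if (line.Length == 0) continue;
                    var m = ErrorLine.Match(line);
                    if (m.Success)
                        errors.Add(new ShaderError { Line = int.Parse(m.Groups[1].Value), Message = m.Groups[2].Value });
                    else if (line.StartsWith("ERROR", StringComparison.OrdinalIgnoreCase))
                        errors.Add(new ShaderError { Line = null, Message = line }); // unknown format: keep raw text
                }
                return errors;
            }
        }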

    Read the article

  • Sites with overlapping code-bases: developing multiple sites with few changes

    - by Web Developer
    I have to develop three different sites: video.com for hosting video, audio.com for hosting audio, and docs.com for hosting docs (domain names are for example only). Almost 80% of the functionality is the same for all three, with the remaining 20% being completely different features... How do I handle this? How do sites like SO handle this? I am developing this in the Yii framework and was thinking of having these different features as modules, but in this case the menu/code links in the HTML can become difficult to manage.

    Read the article

  • How will the search rank get impacted if I move my mobile website to a single page application?

    - by rahul
    I have two different versions of my site: a desktop version and a mobile-optimised version. That is, for the same URL the server renders different HTML for different user agents. I had been using the Vary header for this scheme, as recommended by Google. However, now I want to move the mobile website to a single-page application for several reasons. I want to know: if Google stops seeing anything on my mobile web version but the desktop version continues to work as it is, how would the search rank be impacted, given that the mobile web gets more traffic than the desktop version? How would the Vary header come into play?

    Read the article

  • Asset Discovery Video

    - by Owen Allen
    A while back, I mentioned that we'd started putting together videos that explain some aspects of Ops Center. (The first one I talked about shows you how to create a server pool.) Well, there's another video that I wanted to show you; this one is about discovering assets. There are a few different tools you can use to discover assets in Ops Center, each one appropriate for different types of assets or different environmental needs. Salvador put together this video that walks you through the options in the Add Assets wizard, explaining when each option is used and how to use them. We're adding more videos as we go, so if there's something else you'd like to see explained in video form, let me know.

    Read the article

  • Using NPM to share resources between UI projects [on hold]

    - by guy mograbi
    I am a UI team leader. My team has a lot of projects using different languages/technologies. In some parts we will rewrite the application (gradually - @Ampt, this is for you) in order to bring new, fresh technologies in and get old dinosaurs out. I am going to use Node Package Manager to set up an "all-powerful" build/dependency manager. Can I use NPM to depend on a private GitHub repository? Can I use NPM to depend on SVN? Will NPM play nice with QuickBuild? Since each project might have a slightly different structure (think Jetty/Maven or Play! Framework), can I configure NPM to install some dependencies in different folders while still running it from the project's root? How can I, using NPM, get development resources out and build a packaged product (like a WAR)? Yes/No - is there a reason to use Grunt? No discussion, just one-liners.

    Read the article

  • dpkg --set-selections from 11.10 on 12.04LTS

    - by Vixxo
    I have to move a server to a different DC in a different country. The server is currently running 11.10, but the new, freshly installed OS will be 12.04. The hardware of both is completely different. In order to install the same packages that the server currently uses, I am thinking of exporting all packages to a list and then applying this list on the new machine. After that I will proceed with transferring the configuration files for the LAMP stack. My question is whether this will work, or whether there will be a problem because of the difference in OS versions. What is the correct way to do it? PS: I have no access to KVM to reload the OS as an image, so that would not be an option.

    Read the article

  • What are some important guidelines when starting a software cooperative?

    - by Roy
    We are a group of people who are about to start a software cooperative, which means all of us (and other future workers) will be the owners of the 'company' rather than having bosses and employees. We do this for ideological reasons, but also because we believe this offers many advantages - the power of democracy (see SE..), motivation, creativity, good work relations and atmosphere, and more. We do face some questions about how exactly ownership of our products should be split: should we give different percentages to different people who put in different amounts of work hours or bring expert knowledge? We want people to feel they get what they deserve, not more, not less, and we're not sure just splitting it evenly will give this feeling. What are some good guidelines for solving these questions in a cooperative?

    Read the article

  • Distributed cache and improvement

    - by philipl
    I have this question from an interview:

        Web Service function, given x, with a static HashMap map (singleton):
        if (!map.containsKey(x)) {
            // perform some function to retrieve result y
            map.put(x, y);
        }
        return y;

    The interviewer asked general questions such as what is wrong with this distributed cache implementation. Then he asked how to improve on it, given that distributed servers will have different cached key pairs in the map. There are simple mistakes to be pointed out about synchronization and the key object, but what really startled me was that this guy thinks that moving to a database implementation solves the problem that different servers will have different map contents, i.e., the situation where the value for x is not on server A but on server B, so redundant data has to be retrieved on server A. Does his thinking make any sense? (As I understand it, this is the basic downside of a distributed cache compared to a database model; it seems he does not understand it at all.) What is the typical solution for the cache growth issue (weak references?) and the sync issue (not knowing which server already has the key cached - use load balancing?)? Thanks
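    For the local synchronization problem (as opposed to the cross-server consistency problem), an atomic get-or-add avoids the race in the snippet above; this does not make the cache distributed, it only fixes the check-then-put. A minimal C# sketch, where retrieve stands in for the expensive lookup:

        using System;
        using System.Collections.Concurrent;

        public class LocalCache<TKey, TValue>
        {
            private readonly ConcurrentDictionary<TKey, Lazy<TValue>> map =
                new ConcurrentDictionary<TKey, Lazy<TValue>>();

            // Atomic check-then-put; Lazy ensures the expensive lookup runs at most once per key.
            public TValue GetOrAdd(TKey key, Func<TKey, TValue> retrieve)
            {
                return map.GetOrAdd(key, k => new Lazy<TValue>(() => retrieve(k))).Value;
            }
        }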

    Read the article

  • How do you ensure consistent experience across multiple graphics cards (or even driver versions)?

    - by Grigory Javadyan
    So I was writing a simple 2D game with OpenGL and SDL, and had this problem where there was awful tearing when running in windowed mode (even though I explicitly asked SDL_SetVideoMode to use double buffering). I didn't worry about it too much because most of the time the game grabs the entire screen; windowed mode is just for debugging. Anyway, yesterday I updated my nVidia drivers and the tearing disappeared; the game runs smoothly and looks nice in windowed mode too. I can see how the problem may be in the graphics driver, but this leads to a question. Obviously, professional game developers have to deal with a lot of different hardware/software configurations. What are the techniques they use to make sure the game looks roughly the same on different graphics cards, or even on the same model of graphics card but with different driver versions?

    Read the article

  • Using Subjects to Deploy Queries Dynamically

    - by Roman Schindlauer
    In the previous blog posting, we showed how to construct and deploy query fragments to a StreamInsight server, and how to re-use them later. In today’s posting we’ll integrate this pattern into a method of dynamically composing a new query with an existing one. The construct that enables this scenario in StreamInsight V2.1 is a Subject. A Subject lets me create a junction element in an existing query that I can tap into while the query is running. To set this up as an end-to-end example, let’s first define a stream simulator as our data source:

        var generator = myApp.DefineObservable(
            (TimeSpan t) => Observable.Interval(t).Select(_ => new SourcePayload()));

    This ‘generator’ produces a new instance of SourcePayload with a period of t (system time) as an IObservable. SourcePayload happens to have a property of type double as its payload data. Let’s also define a sink for our example—an IObserver of double values that writes to the console:

        var console = myApp.DefineObserver(
            (string label) => Observer.Create<double>(e => Console.WriteLine("{0}: {1}", label, e)))
            .Deploy("ConsoleSink");

    The observer takes a string as parameter which is used as a label on the console, so that we can distinguish the output of different sink instances. Note that we also deploy this observer, so that we can retrieve it later from the server from a different process. Remember how we defined the aggregation as an IQStreamable function in the previous article? We will use that as well:

        var avg = myApp
            .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>
                from win in s.TumblingWindow(w)
                select win.Avg(e => e.Value))
            .Deploy("AverageQuery");

    Then we define the Subject, which acts as an observable sequence as well as an observer. Thus, we can feed a single source into the Subject and have multiple consumers—that can come and go at runtime—on the other side:

        var subject = myApp.CreateSubject("Subject", () => new Subject<SourcePayload>());

    Subjects are always deployed automatically. Their name is used to retrieve them from a (potentially) different process (see below). Note that the Subject as we defined it here doesn’t know anything about temporal streams. It is merely a sequence of SourcePayloads, without any notion of StreamInsight point events or CTIs. So in order to compose a temporal query on top of the Subject, we need to 'promote' the sequence of SourcePayloads into an IQStreamable of point events, including CTIs:

        var stream = subject.ToPointStreamable(
            e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),
            AdvanceTimeSettings.StrictlyIncreasingStartTime);

    In a later posting we will show how to use Subjects that have more awareness of time and can be used as a junction between QStreamables instead of IQbservables. Having turned the Subject into a temporal stream, we can now define the aggregate on this stream. We will use the IQStreamable entity avg that we defined above:

        var longAverages = avg(stream, TimeSpan.FromSeconds(5));

    In order to run the query, we need to bind it to a sink, and bind the subject to the source:

        var standardQuery = longAverages
            .Bind(console("5sec average"))
            .With(generator(TimeSpan.FromMilliseconds(300)).Bind(subject));

    Lastly, we start the process:

        standardQuery.Run("StandardProcess");

    Now we have a simple query running end-to-end, producing results.
    What follows next is the crucial part of tapping into the Subject and adding another query that runs in parallel, using the same query definition (the “AverageQuery”) but with a different window length. We are assuming that we connected to the same StreamInsight server from a different process or even client, and thus have to retrieve the previously deployed entities through their names:

        // simulate the addition of a 'fast' query from a separate server connection,
        // by retrieving the aggregation query fragment
        // (instead of simply using the 'avg' object)
        var averageQuery = myApp
            .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery");

        // retrieve the input sequence as a subject
        var inputSequence = myApp
            .GetSubject<SourcePayload, SourcePayload>("Subject");

        // retrieve the registered sink
        var sink = myApp.GetObserver<string, double>("ConsoleSink");

        // turn the sequence into a temporal stream
        var stream2 = inputSequence.ToPointStreamable(
            e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),
            AdvanceTimeSettings.StrictlyIncreasingStartTime);

        // apply the query, now with a different window length
        var shortAverages = averageQuery(stream2, TimeSpan.FromSeconds(1));

        // bind new sink to query and run it
        var fastQuery = shortAverages
            .Bind(sink("1sec average"))
            .Run("FastProcess");

    The attached solution demonstrates the sample end-to-end. Regards, The StreamInsight Team

    Read the article
