Search Results

Search found 13692 results on 548 pages for 'bad practices'.


  • How should programmers handle email-username identity theft?

    - by Craige
    Background: I recently signed up for an iTunes account and found that somebody had fraudulently used MY email address to register their iTunes account. Why Apple did not validate the email address, I will never know. Now I am told that I cannot use my email address to register a new iTunes account, as this email address is linked to an existing account. This got me thinking...

    Question: How should we as developers handle email/identity theft? Obviously, we should verify that an email address belongs to the person it is said to belong to. Why Apple did not do this in my case, I have no idea. But let's pretend we use the email address for login/account identification, and something slipped through the cracks (be it on our end or the user's). How should we handle reports of fraudulent accounts?
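
    The verification step the question calls "obvious" is a single-use token round-trip. A minimal sketch in Java, assuming an in-memory token store and a hypothetical mail service (both would be real infrastructure in practice):

        import java.security.SecureRandom;
        import java.util.Base64;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Sketch: accounts stay inactive until the address is confirmed.
        public class EmailVerifier {
            private static final SecureRandom RNG = new SecureRandom();
            private final Map<String, String> pendingTokens = new ConcurrentHashMap<>();

            public String issueToken(String email) {
                byte[] buf = new byte[32];
                RNG.nextBytes(buf);                       // unguessable token
                String token = Base64.getUrlEncoder().withoutPadding().encodeToString(buf);
                pendingTokens.put(token, email);
                // mailService.send(email, "Confirm your account", linkFor(token));  // hypothetical
                return token;
            }

            /** Returns the verified email, or null if the token is unknown or already used. */
            public String confirm(String token) {
                return pendingTokens.remove(token);       // single use: removed on success
            }
        }

    The same mechanic arguably answers the report-handling question in reverse: a "this wasn't me" link in the confirmation mail that, when clicked, releases the address from the fraudulent account.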

    Read the article

  • How to fix bad Collada produced by FBX?

    - by David
    I tried to use the FBX SDK (2011.3.1) to load FBX files and save them as Collada files, in order to be able to import FBX files into Panda3D. Unfortunately the resulting Collada files are not usable, for several reasons, among them:

    There's a Maya-specific extra technique in the diffuse element:

        <diffuse>
          <texture texture="Map__2-image" texcoord="CHANNEL0">
            <extra>
              <technique profile="MAYA">
                <wrapU sid="wrapU0">TRUE</wrapU>
                <wrapV sid="wrapV0">TRUE</wrapV>
                <blend_mode>ADD</blend_mode>
              </technique>
            </extra>
          </texture>
        </diffuse>

    It assigns a texcoord channel name that isn't referenced anywhere else in the file (in the previous code sample, no geometry uses "CHANNEL0"...).

    Every polygon is exported twice: a first time with a basic material (only diffuse color, specular color, etc.) and a second time with a textured material -- this doubles the number of polygons of each model without any valuable reason.

    Anyway, the resulting Collada file cannot be opened correctly with either OpenCOLLADA or Panda3D's "dae2egg". Does anyone have any experience with how to "fix" it and make it understandable by common and well-reputed Collada importers such as OpenCOLLADA?
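
    The first issue at least is mechanically fixable by post-processing the .dae. A rough Java sketch (file names hypothetical; the duplicated-polygon problem needs a deeper structural fix that this does not attempt):

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        // Sketch: strip the <extra><technique profile="MAYA"> blocks, then re-serialize.
        public class DaeCleaner {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(new File("model.dae"));
                NodeList techniques = doc.getElementsByTagName("technique");
                // Collect first: mutating while iterating a live NodeList skips nodes.
                List<Element> doomed = new ArrayList<>();
                for (int i = 0; i < techniques.getLength(); i++) {
                    Element t = (Element) techniques.item(i);
                    if ("MAYA".equals(t.getAttribute("profile"))) {
                        doomed.add((Element) t.getParentNode());   // the wrapping <extra>
                    }
                }
                for (Element extra : doomed) {
                    extra.getParentNode().removeChild(extra);
                }
                TransformerFactory.newInstance().newTransformer()
                        .transform(new DOMSource(doc), new StreamResult(new File("model_clean.dae")));
            }
        }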

    Read the article

  • Simple vs complex (but performance-efficient) solution - which one to choose, and when?

    - by ManojGumber
    I have been programming for a couple of years and have often found myself at a dilemma. There are two solutions: one is the simple approach - easier to understand and maintain, but it involves some redundancy and some extra work (extra IO, extra processing) and therefore is not the most optimal solution. The other uses a complex approach - difficult to implement, often involving interaction between lots of modules - but it is a performance-efficient solution. Which solution should I strive for when I do not have a hard performance SLA to meet, and even the simple solution can meet the performance SLA? I have felt disdain among my fellow developers for the simple solution. Is it good practice to come up with the most optimal, complex solution if your performance SLA can be met by a simple solution?

    Read the article

  • Best practice for setting Effect parameters in XNA

    - by hichaeretaqua
    I want to ask if there is a best practice for setting effect parameters in XNA. Or, in other words, what exactly happens when I call pass.Apply()? I can imagine multiple scenarios:

    1. Each time Apply() is called, all effect parameters are transferred to the GPU, and therefore it has no real influence how often I set a parameter.
    2. Each time Apply() is called, only the parameters that were set since the last call are transferred, so redundant Set-operations that don't actually set a new value should be avoided.
    3. Each time Apply() is called, only the parameters whose values actually changed are transferred, so caching Set-operations myself is useless.
    4. This whole question is moot, because none of the mentioned ways has any noteworthy impact on game performance.

    So the final question: is it useful to implement some caching of Set-operations, like this?

        private Matrix _world;
        public Matrix World
        {
            get { return _world; }
            set
            {
                if (value == _world) return;   // skip the redundant SetValue call
                _effect.Parameters["xWorld"].SetValue(value);
                _world = value;
            }
        }

    Thanking you in anticipation

    Read the article

  • Looking for tips on managing complexity with SCM repositories

    - by Philip Regan
    I am a solo developer in my department and I have a lot of individual projects, all created and managed by me. I started using SVN at ProjectLocker via Versions on the Mac a couple of years ago, when the variety of projects started getting unwieldy.

    Scenario 1: I now have a process of reasonable complexity; it can be broken up into multiple smaller applications, and they all share files. In the first phase, there is a single shared file - a constants file - that is shared between a Cocoa app and an iPhone app framework. In the second phase, the iPhone app framework will be used to create individual apps of the same ilk - controller classes and whatnot will all be the same - but with different content in each. The problem I am running across is that the shared file from the first phase is in one repository with the application that started it, while the app framework is in a second, separate repository.

    Scenario 2: I have another application framework that partially relies on code from an open source project. This is all internal, non-commercial work, but again, the application framework is going to be used to create a variety of unique products and processes. So now I have an internally managed repository and an externally managed one out of my control. I make little changes to the open source code to meet the needs of my framework when there is an update I download, but I never commit back into the external repository (though, now that I think about it, I don't think I'm committing it to mine either. Oops).

    The Problem: I have all of this set up on my production Mac quite nicely, but duplicating and subsequently maintaining that environment on my laptop has been challenging. For Scenario 1, I've thought of merging these two projects into the same repository because they are, for all intents and purposes, inextricably linked. But for Scenario 2, I think I'm stuck just managing files as best I can.

    The Question: I'm wondering if anyone has any tips on how to manage either of these situations, as well as other complex SCM scenarios when it comes to linking various files from various repositories together. My familiarity with SVN only comes from my work with Versions. It's been great, but I'm a little out of my depth here.
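
    For Scenario 1, svn:externals is the usual SVN answer: both projects pull the shared constants file from one canonical location instead of the repositories being merged. A hedged sketch (URLs and paths are hypothetical, and single-file externals require Subversion 1.6 or later):

        # Run inside the iPhone framework's working copy; the constants file
        # stays canonical in the Cocoa app's repository.
        svn propset svn:externals \
            "https://projectlocker.example/svn/cocoa-app/trunk/Constants.h Constants.h" .
        svn commit -m "Pull shared constants via svn:externals"
        svn update    # fetches the external file into this working copy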

    Read the article

  • How do you return to code when you don't remember what you were doing?

    - by speeder
    Well, I have some problems with procrastination and whatnot, but those get infinitely worse when I cannot remember what I should be doing. I mean, I know my project, I wrote 100% of the code so far, and I knew more or less what I was doing, but I don't remember exactly what; I don't remember what file I was editing or why. How do I get back on track? (Because right now my technique of opening the source code and staring at it is not working.)

    Read the article

  • How to programmatically build a grid of interlocking but random-sized squares

    - by Mrwolfy
    I want to create a two-dimensional layout of rectangular shapes: a grid made up of random-sized cells. The cells should fit together and have equal padding or margin (space between), kind of like a comic book layout, or more like the image attached. How could I do this procedurally? Practically, I would probably be using Python and some graphics software to render an image, but I don't know the type of algorithm (or whatnot) I would need to use to generate the randomized grid.
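
    One common approach is recursive subdivision, treemap-style: split the canvas at a random point, recurse until pieces drop below a minimum size, then inset each leaf by half the gutter so neighbors end up evenly spaced. A hedged sketch in Java (the recursion ports directly to Python; all sizes are arbitrary):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        public class RandomGrid {
            record Rect(int x, int y, int w, int h) {}
            static final Random RNG = new Random();

            // Recursively split r until neither side can hold two minimum-size cells,
            // then emit the piece inset by half the gutter on each side.
            static void split(Rect r, int minSize, int gutter, List<Rect> out) {
                boolean canSplitW = r.w() >= 2 * minSize;
                boolean canSplitH = r.h() >= 2 * minSize;
                if (!canSplitW && !canSplitH) {
                    out.add(new Rect(r.x() + gutter / 2, r.y() + gutter / 2,
                                     r.w() - gutter, r.h() - gutter));
                    return;
                }
                boolean vertical = canSplitW && (!canSplitH || RNG.nextBoolean());
                if (vertical) {
                    int cut = minSize + RNG.nextInt(r.w() - 2 * minSize + 1);
                    split(new Rect(r.x(), r.y(), cut, r.h()), minSize, gutter, out);
                    split(new Rect(r.x() + cut, r.y(), r.w() - cut, r.h()), minSize, gutter, out);
                } else {
                    int cut = minSize + RNG.nextInt(r.h() - 2 * minSize + 1);
                    split(new Rect(r.x(), r.y(), r.w(), cut), minSize, gutter, out);
                    split(new Rect(r.x(), r.y() + cut, r.w(), r.h() - cut), minSize, gutter, out);
                }
            }

            public static void main(String[] args) {
                List<Rect> cells = new ArrayList<>();
                split(new Rect(0, 0, 800, 600), 120, 10, cells);
                cells.forEach(System.out::println);   // feed these to any renderer
            }
        }

    Stopping the recursion early at random (say, one time in five even when a split is still possible) gives a wider spread of cell sizes.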

    Read the article

  • How to write good code when working with new stuff?

    - by Reza M.
    I always try to write easily readable code that is well structured. I face a particular problem when I am messing around with something new: I keep changing the code, the structure, and so many other things. In the end, I look at the code and am annoyed at how complicated it became when I was trying to do something so simple. Once I've completed something, I refactor it heavily so that it's cleaner. This occurs after completion most of the time, and it is annoying because the bigger the code, the more annoying it is to rewrite it. I am curious to know how people deal with such agony, especially on big projects shared between many people.

    Read the article

  • What is the best approach for inline code comments?

    - by d1egoaz
    We are doing some refactoring of a 20-year-old legacy codebase, and I'm having a discussion with my colleague about the comment format in the code (PL/SQL, Java). There is no default format for comments, but in most cases people do something like this in a comment:

        // date (year, year-month, yyyy-mm-dd, dd/mm/yyyy), (author id, author name, author nickname) and comment

    The format I propose for future and past comments is:

        // {yyyy-mm-dd}, unique_author_company_id, comment

    My colleague says that we only need the comment, and must reformat all past and future comments to this format:

        // comment

    My arguments: For maintenance reasons, it's important to know when and who made a change (even if this information is in the SCM); the code is living, and for that reason has a history. Without the change dates it's impossible to know when a change was introduced without opening the SCM tool and searching through a long object history. The author matters too: a change made by the original author is more credible than one made by someone else. There are agility reasons: no need to open and navigate through the SCM tool. And people would be more afraid to change something that someone did 15 years ago than something that was recently created or changed. Etc.

    My colleague's arguments: The history is in the SCM. Developers must not be aware of the history of the code directly in the code. Packages get 15k lines long, and unstructured comments make these packages harder to understand.

    What do you think is the best approach? Or do you have a better approach to solve this problem?

    Read the article

  • How to learn programming for a medium-scale project as a beginner? [closed]

    - by Lin Xiangyu
    I am teaching myself programming. I have learned several programming languages, but I have never written a project of more than 1000 lines. I know the best way to improve programming skills is practice. The problem is that many books just talk about the programming language, or talk about building a project from a high level; few books teach how to build a medium-scale project. For example, I want to build a simple HTTP server (not like Apache, just a simple listener on a port), a Markdown parser, or a download tool like eMule or wget, and I don't know where to start. I may find pieces of code on the web, or find similar projects on GitHub, but I don't know how to read the code. I want a tutorial that shows how to build the project step by step and teaches how to write thousands of lines of code. Any suggestions?

    Read the article

  • Design patterns to avoid breaking the SRP while performing heavy data logging

    - by Kazark
    A class that performs both computations and data logging seems to have at least two responsibilities. Given a system whose specifications require heavy data logging, what kind of design patterns or architectural patterns can be used to avoid bloating all the classes with logging calls every time they compute something? The decorator pattern could be used (e.g., Interpolator decorated as LoggingInterpolator), but it seems that would result in a situation hardly more desirable, in which almost every major class would need to be decorated with logging.
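
    For concreteness, the decorator the question mentions looks roughly like this in Java (names taken from the question; the computation is a stand-in):

        interface Interpolator {
            double interpolate(double x);
        }

        class LinearInterpolator implements Interpolator {
            public double interpolate(double x) { return x; }   // stand-in computation
        }

        // The computing class keeps one responsibility; logging is a wrapper around it.
        class LoggingInterpolator implements Interpolator {
            private final Interpolator inner;
            LoggingInterpolator(Interpolator inner) { this.inner = inner; }
            public double interpolate(double x) {
                double y = inner.interpolate(x);
                System.out.printf("interpolate(%f) -> %f%n", x, y);   // data logging hook
                return y;
            }
        }

    If the objection is hand-writing one wrapper per class, a dynamic proxy (java.lang.reflect.Proxy) or an AOP tool such as AspectJ can apply the same logging advice to every interface generically, which keeps the logging concern in exactly one place.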

    Read the article

  • What does your Technical Documentation look like?

    - by Rachel
    I'm working on a large project and I would like to put together some technical documentation for other members of the team and for new programmers joining the project. What sort of documentation should I have? Just /// code comments, or some other file(s) explaining the architecture and class design? I've never really done documentation except the occasional Word doc to go with smaller apps, and I think this project is too large to document in a single Word file.

    Read the article

  • Kubuntu 11.04 bad experience, Gimp crashes and other bugs

    - by giowck
    I installed Kubuntu 11.04 64-bit today. First let me list some issues; this is not a rant, just some observations on Kubuntu 11.04 :)

    Flash doesn't work. The package "flashplugin-installer" was installed by rekonq, but still no Flash.
    Additional Drivers (jockey-kde) doesn't show up automatically after first login.
    I tried to delete a large file (19.2 GB) and an error appeared: "Please empty the trash first, no more space available", or something like that. But you know what? My trash was empty! (Then I found in Dolphin the settings to increase the max size of the trash.)
    Amarok has no play/pause/forward/back buttons until you change settings to show those buttons on the top bar.
    GTK applications like Ubuntu Software Center, Inkscape... are in English, but my default language is German. Only GIMP allows installing the "de" language pack.
    Num Lock turns off after each restart. In the KDE Control Center, the "remember last state" option for Num Lock is set, but this is not working.
    I had some crashes: Nepomuk, PolicyKit, plasma-workspace and GIMP.

    But now to the important stuff: I really need GIMP to work. After starting it, I get a crash:

        giowck@giowck-desktop:~$ gimp
        (gimp:1899): GLib-WARNING **: /build/buildd/glib2.0-2.28.6/./glib/goption.c:2132: ignoring no-arg, optional-arg or filename flags (8) on option of type 0
        Speicherzugriffsfehler

    ("Speicherzugriffsfehler" is German for "memory access error", i.e. a segmentation fault.) What can I do to run GIMP? Thanks

    Read the article

  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you're a .Net web developer today, no doubt you've enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added a slew of great (and sometimes overdue) features. What was once just an endpoint to host a solution now gives developers tremendous flexibility and options in the platform. Organizations are building new solutions and offerings on the platform, and others have migrated, or are in the process of migrating, existing applications out of their own data centers into the Azure cloud. Whether doing new application development or migrating legacy apps, every development shop and IT organization needs to monitor their applications in the cloud, the same as they do on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it's either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let's take a look below and compare and contrast the options.

    Azure Diagnostics

    Let's crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high-level steps are:

    Step 1: Import the Diagnostics. Oh, I've already deployed my app without the diagnostics module. Guess I can't do anything until I do this and re-deploy.

    Step 2: Configure the Diagnostics (and multiple sub-steps). Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter; I guess I'll have to deploy again. Wait a minute... I have to specifically code these performance counters into my role's OnStart() method, compile and deploy again? And query and consume it myself?

    Step 3: (Optional) Permanently store diagnostic data. Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that's out now as well.

    Step 4: (Optional) View stored diagnostic data. Optional? Of course I want to see it. Conveniently, Microsoft recommends 3 tools to do this with. Un-conveniently, none of these are web based, they all just give you access to raw data, and there is very little charting or real-time intelligence. Just... data. Never mind that one product seems to have gotten stale since a recent acquisition, and doesn't even have screenshots!

    So, let's summarize: lots of diagnostics data is available, but think realistically. Think DevOps. What happens when you are in the middle of a major production performance issue and you don't have the diagnostics you need? You are redeploying an application (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don't you?) to get data, then shipping it to storage, and then digging through that data to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer's web browser? Forget it: the best you get is a sparkline in the Azure portal. If it's real pointy, you probably have an issue; but since there is no alert based on a threshold, your customers have likely already let you know. And high CPU, Memory, I/O, or Network doesn't tell you anything about where the problem is.

    The Better Way - Stackify

    Stackify supports application and server monitoring in real time, all through a great web interface.
    All of the things that Azure Diagnostics provides, Stackify provides for your on-premises deployments, and you don't need to know ahead of time that you'll need it. It's always there, it's always on. Azure deployments are essentially no different than on-premises: it's a Windows Server (or Linux) in the cloud, behind a different firewall than your corporate servers. That's it. Stackify can provide the same powerful tools for your Azure deployments in two simple steps.

    Step 1: Add a startup task to your web or worker role and deploy (a minimal .csdef sketch appears at the end of this post). If you can't deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a PowerShell script to download and install Stackify.

    Step 2: Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want, and see the results instantly. WMI? It's there. Event Viewer? You've got it. File system access? Yes, please! Would love to make sure my web.config is correct. IIS / app pool info? Yep. You can even restart it. Running services? All of them; start and stop them to your heart's content. SQL database access? You bet'cha. Alerts and notifications? Of course! You should know before your customers let you know. ... and so much more.

    Conclusion

    Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they've given you a canvas, which is exactly what Azure is. It's great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility. What you get, at best, is bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times when we need data that just isn't available. And having to go through a lot of cumbersome steps to get that data, and then having to find a friendlier way to consume it... well, that just doesn't make a lot of sense to me. I'd rather spend my time writing and developing features and completing bug fixes for my applications than writing code to monitor and diagnose.
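
    For reference on Step 1, an Azure startup task is declared in the role's ServiceDefinition.csdef. A minimal hedged sketch - the role name and script name are hypothetical, and the actual install command would come from Stackify's own instructions:

        <WebRole name="MyWebRole">
          <Startup>
            <Task commandLine="startup\InstallMonitoringAgent.cmd"
                  executionContext="elevated"
                  taskType="simple" />
          </Startup>
        </WebRole>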

    Read the article

  • Low coupling and tight cohesion

    - by hidayat
    Of course it depends on the situation, but when a lower-level object or system communicates with a higher-level system, should callbacks or events be preferred to keeping a pointer to the higher-level object? For example, we have a World class that has a member variable vector<monster> monsters. When the monster class is going to communicate with the world class, should I prefer using a callback function, or should I have a pointer to the world class inside the monster class?
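
    A sketch of the callback/event option, in Java for brevity (the question's vector<monster> suggests C++, where an abstract listener class plays the same role; all names are illustrative):

        // The monster talks to the world through a narrow interface it owns,
        // so it never depends on the full World API.
        interface MonsterListener {
            void onMonsterDied(Monster m);
        }

        class Monster {
            private final MonsterListener listener;   // could also be a list of listeners
            Monster(MonsterListener listener) { this.listener = listener; }
            void die() { listener.onMonsterDied(this); }
        }

        class World implements MonsterListener {
            public void onMonsterDied(Monster m) {
                // remove m from monsters, award score, drop loot...
            }
        }

    The inversion keeps the coupling one-way: World knows Monster, but Monster only knows the few events it can raise.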

    Read the article

  • Mistaken dist-upgrade, is this bad?

    - by SpashHit
    I was looking for Update Manager on Ubuntu 10.10 Netbook Edition and couldn't find it, so in Terminal I did:

        sudo apt-get update
        sudo apt-get upgrade

    I got a message saying some packages were "held back", and searching online gave me the suggestion to do:

        sudo apt-get dist-upgrade

    So I did that, and it updated my kernel; now uname -a says I have 2.6.35-23-generic #40-Ubuntu SMP. My system is still working normally, but I'm now second-guessing whether I did the right thing. Was this kernel update meant for the next version of Ubuntu? Should I try to back it out?

    Read the article

  • Why do some programmers think there is a contrast between theory and practice?

    - by Giorgio
    Comparing software engineering with civil engineering, I was surprised to observe a different way of thinking: any civil engineer knows that if you want to build a small hut in the garden you can just get the materials and go build it, whereas if you want to build a 10-storey house you need to do quite some maths to be sure that it won't fall apart. In contrast, speaking with some programmers or reading blogs or forums, I often find a widespread opinion that can be formulated more or less as follows: theory and formal methods are for mathematicians / scientists, while programming is more about getting things done. What is normally implied here is that programming is something very practical, and that even though formal methods, mathematics, algorithm theory, clean / coherent programming languages, etc., may be interesting topics, they are often not needed if all one wants is to get things done. According to my experience, I would say that while you do not need much theory to put together a 100-line script (the hut), in order to develop a complex application (the 10-storey building) you need a structured design, well-defined methods, a good programming language, good textbooks where you can look up algorithms, and so on. So IMO (the right amount of) theory is one of the tools for getting things done. My question is: why do some programmers think that there is a contrast between theory (formal methods) and practice (getting things done)? Is software engineering (building software) perceived by many as easy compared to, say, civil engineering (building houses)? Or are these two disciplines really different (apart from mission-critical software, software failure is much more acceptable than building failure)?

    Read the article

  • Get entity ids from two similar collections using one method

    - by Patryk Roszczyniala
    I've got two maps:

        Map<Integer, ZooEntity> zoos;
        Map<Integer, List<ZooEntity>> groupOfZoos;

    These operations will return collections of values:

        Collection<ZooEntity> cz = zoos.values();
        Collection<List<ZooEntity>> czList = groupOfZoos.values();

    What I want to achieve is to get a list of all zoo ids:

        List<Integer> zooIds = cz ids + czList ids;

    Of course I can create two methods that do what I want:

        public List<Integer> getIdsFromFlatList(Collection<ZooEntity> list) {
            List<Integer> ids = new ArrayList<Integer>();
            for (ZooEntity z : list) {
                ids.add(z.getId());
            }
            return ids;
        }

        public List<Integer> getIdsFromNestedList(Collection<List<ZooEntity>> list) {
            List<Integer> ids = new ArrayList<Integer>();
            for (List<ZooEntity> zList : list) {
                for (ZooEntity z : zList) {
                    ids.add(z.getId());
                }
            }
            return ids;
        }

    As you can see, those two methods are very similar, and here is my question: is it good to create one method (for example using generics) that will get the ids from those two collections (zoos and groupOfZoos)? If yes, what should it look like? If no, what is the best solution? BTW, this is only an example. I've got a very similar problem at work and I want to do it in a pretty way (I can't change the entities, I can only change the getIds...() methods).
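
    One hedged way to collapse the two methods into one: a recursive helper that flattens any nesting of collections, assuming the nested collections only ever contain ZooEntity or further collections:

        import java.util.ArrayList;
        import java.util.Collection;
        import java.util.List;

        public static List<Integer> getIds(Collection<?> items) {
            List<Integer> ids = new ArrayList<Integer>();
            for (Object item : items) {
                if (item instanceof ZooEntity) {
                    ids.add(((ZooEntity) item).getId());
                } else if (item instanceof Collection) {
                    ids.addAll(getIds((Collection<?>) item));   // flatten nested lists
                }
            }
            return ids;
        }

    Usage: getIds(zoos.values()) and getIds(groupOfZoos.values()) both compile against the same method. The instanceof checks trade away some compile-time type safety, which is the usual cost of unifying differently-shaped collections.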

    Read the article

  • Is catching general exceptions really a bad thing?

    - by Bob Horn
    I typically agree with most code analysis warnings, and I try to adhere to them. However, I'm having a harder time with this one: CA1031: Do not catch general exception types I understand the rationale for this rule. But, in practice, if I want to take the same action regardless of the exception thrown, why would I handle each one specifically? Furthermore, if I handle specific exceptions, what if the code I'm calling changes to throw a new exception in the future? Now I have to change my code to handle that new exception. Whereas if I simply caught Exception my code doesn't have to change. For example, if Foo calls Bar, and Foo needs to stop processing regardless of the type of exception thrown by Bar, is there any advantage in being specific about the type of exception I'm catching?
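
    A Java rendering of the trade-off (the rule cited, CA1031, is .NET, but the reasoning is identical; bar, log, and the wrapper exception are hypothetical stand-ins):

        public void foo() {
            try {
                bar();                                        // may throw many exception types
            } catch (Exception e) {                           // one uniform reaction for all of them
                log.error("Foo: stopping processing", e);     // never swallow silently
                throw new ProcessingAbortedException(e);      // preserve the failure for callers
            }
        }

    The usual reading of CA1031 is that a broad catch is a problem only when the exception disappears; logging and rethrowing (or wrapping) as above keeps the uniform handling while losing nothing.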

    Read the article

  • AdventureWorks 2014 Sample Databases Are Now Available

    - by aspiringgeek
    Where in the World is AdventureWorks?

    Recently, SQL Community feedback from twitter prompted me to look in vain for SQL Server 2014 versions of the AdventureWorks sample databases we’ve all grown to know & love. I searched Codeplex, then used the bing & even the google in an effort to locate them, yet all I could find were samples on different sites highlighting specific technologies, an incomplete collection inconsistent with the experience we users had learned to expect. I began pinging internally & learned that an update to AdventureWorks wasn’t even on the road map. Fortunately, SQL Marketing manager Luis Daniel Soto Maldonado (t) lent a sympathetic ear & got the update ball rolling; his direct report Darmodi Komo recently announced the release of the shiny new sample databases for OLTP, DW, Tabular, and Multidimensional models to supplement the extant In-Memory OLTP sample DB.

    What Success Looks Like

    In my correspondence with the team, here’s how I defined success:

    1. Sample AdventureWorks DBs hosted on Codeplex showcasing SQL Server 2014’s latest-&-greatest features, including: In-Memory OLTP (aka Hekaton), Clustered Columnstore, Online Operations, Resource Governor IO
    2. Where it makes sense to do so, consolidate the DBs (e.g., showcasing Columnstore likely involves a separate DW DB)
    3. Documentation to support experimenting with these features

    As Microsoft Senior SDE Bonnie Feinberg (b) stated, “I think it would be great to see an AdventureWorks for SQL 2014. It would be super helpful for third-party book authors and trainers. It also provides a common way to share examples in blog posts and forum discussions, for example.” Exactly. We’ve established a rich & robust tradition of sample databases on Codeplex. This is what our community & our customers expect. The prompt response achieves what we all aim to do, i.e., manifests the Service Design Engineering mantra of “delighting the customer”. Kudos to Luis’s team in SQL Server Marketing & Kevin Liu’s team in SQL Server Engineering for doing so.

    Download AdventureWorks 2014

    Download your copies of the SQL Server 2014 AdventureWorks sample databases here.

    Read the article

  • Should my colleagues review each other's code from the source control system?

    - by Daniel Excinsky
    Hi everybody. Here is my story: one of my colleagues reviews all the code hosted in our revision control system. I'm not speaking about an adequate review of changes in the parts that he is responsible for; he watches the code file by file, line by line - every new file and every modified one. I feel like I'm being spied on! My take is that if code has already been committed to the control system, you should at least trust it as workable. My question is: maybe I'm just too paranoid, and the practice of reviewing each other's code is good? P.S.: We're a team of only three developers, and I fear that if there are more of us, this colleague just won't have time to review all the code we'll write.

    Read the article

  • Epsilon: An Oracle Customer Profile

    - by Anand Akela
    ZDNet published an article today based on an interview with Jeff White, vice president, technology, strategic database services at Epsilon. Jeff discussed Oracle Exadata Database Machine and Oracle Enterprise Manager with ZDNet writer Dan Kusnetzky. Read the article "Epsilon: An Oracle Customer Profile". Jeff White was honored with Oracle's Data Warehouse Leader of the Year award for Innovative Data Warehouse Deployment of Oracle Exadata and Oracle Enterprise Manager earlier this year. In one of the videos earlier this year, Jeff mentioned that Epsilon has streamlined IT administration, monitoring, and engineered systems maintenance with Oracle Enterprise Manager. Having gained in operational efficiencies, Epsilon is now providing greater efficiencies to its customers. For more information, please go to the Oracle Enterprise Manager web page.

    Read the article

  • Best way to indicate more results available

    - by Alex Stangl
    We have a service to return messages. We want to limit the number returned, either allowing the caller to specify the max number to return, or else using an internal hard limit. We have also thought it would be nice to include in the response whether more messages are available. The "best" way to go about this is not clear. Here are some ideas so far:

    1. Only set the "more messages" indicator if the user did not specify a max limit and the internal max limit was hit.
    2. Same as #1, except that the "more messages" indicator is set regardless of whether the internal hard limit or the user-specified limit is hit.
    3. Same as #1 (or #2), except that we internally read limit + 1 records but only return limit records, so we know "for sure" there is at least one additional message, rather than "maybe" there are additional messages.
    4. Do away with the "more messages" flag, as it is confusing and unnecessary. Instead, force the user to keep calling the API until it returns no messages.
    5. Change the "more messages" indicator to something more akin to an EOF indicator, only set when the last message is known to have been retrieved and returned.

    What do you think is the best solution? (It doesn't have to be one of the above choices.) I searched and couldn't find a similar question already asked. Hopefully this is not "too subjective".
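
    Idea #3 is cheap and gives a definite answer rather than a guess. A minimal sketch (Message, Page, and repository are hypothetical stand-ins for the service's real types):

        // Fetch one extra row; its presence proves more messages exist.
        public Page fetchMessages(int limit) {
            List<Message> rows = repository.fetch(limit + 1);   // read limit + 1 records
            boolean hasMore = rows.size() > limit;
            if (hasMore) {
                rows = rows.subList(0, limit);                  // return only limit records
            }
            return new Page(rows, hasMore);
        }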

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depends on the respective service template selected by the Self Service user while requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Many times provisioned databases require production-scale data, whether for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So we need to populate the database using the post deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook.

    In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways: 1) Data Pump, and 2) transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS; you can plug in your own method as part of the post deployment script option.

    Using Data Pump to populate databases

    These are the steps to be followed to extend DBaaS using the Data Pump methodology:

    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

        -- Full export
        expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

        Figure 1: Full export of the database using Data Pump

    2. Create a post deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available in a shared location accessible from the servers where databases are likely to be provisioned:

        -- Full import
        declare
            h1 NUMBER;
        begin
            -- Create the directory object where the source database dump is backed up.
            execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
            -- Run the import.
            h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
            dbms_datapump.set_parallel(handle => h1, degree => 1);
            dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
            dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
            dbms_datapump.start_job(handle => h1);
            dbms_datapump.detach(handle => h1);
        end;
        /

        Figure 2: Importing using Data Pump PL/SQL procedures

    3. Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles and their sizes.

    4. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the customization flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.

        Figure 3: Using the Custom Script option for calling the import SQL

    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.
    Using transportable tablespaces to populate databases

    A copy of all user/application tablespaces enables this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces:

    1. The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from big endian to little endian or vice versa] based on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out whether any conversion is required, and that describes the steps required to convert datafiles and back up the tablespaces.

    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).

    3. Create a post deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available in a shared location accessible from the servers where databases are likely to be provisioned. Here is a sample post deployment SQL script using transportable tablespaces.

    4. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created later by the post deployment script.

    5. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.

    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.

    More Information:
    Database-as-a-Service on Exadata Cloud
    Podcast on Database as a Service using Oracle Enterprise Manager 12c
    Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    DBaaS Cookbook
    Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Read the article

  • How to recover from finite-state-machine breakdown?

    - by Earl Grey
    My question may seem very scientific, but I think it's a common problem, and seasoned developers and programmers hopefully will have some advice to avoid it. By the way, what I describe below is a real problem I am trying to proactively solve in my iOS project; I want to avoid it at all costs.

    By finite state machine I mean this: I have a UI with a few buttons, several session states relevant to that UI and what it represents, some data whose values are partly displayed in the UI, and some external triggers that I receive and handle (represented by callbacks from sensors). I made state diagrams to better map the relevant scenarios that are desirable and allowable in that UI and application. As I slowly implement the code, the app starts to behave more and more like it should. However, I am not very confident that it is robust enough. My doubts come from watching my own thinking and implementation process as it goes. I was confident that I had everything covered, but it was enough to make a few brute-force tests in the UI and I quickly realized that there were still gaps in the behavior... I patched them. However, each component depends on and behaves based on input from some other component; a certain input from the user or some external source triggers a chain of events and state changes, etc. I have several components, and each behaves like this: trigger received on input - trigger and its sender analyzed - output something (a message, a state change) based on the analysis.

    The problem is, this is not completely self-contained, and my components (a database item, a session state, some button's state) COULD be changed, influenced, deleted, or otherwise modified outside the scope of the event chain or desirable scenario (the phone crashes, the battery is empty, the phone turns off suddenly). This introduces an invalid situation into the system, from which the system potentially COULD NOT BE ABLE to recover. I see this (although people do not realize this is the problem) in many of my competitors' apps on the App Store; customers write things like "I added three documents, and after going there and there, I cannot open them, even if I see them," or "I recorded videos every day, but after recording a too-long video, I cannot turn off captions on them... and the button for captions doesn't work." These are just shortened examples; customers often describe it in more detail. From the descriptions and the behavior described in them, I assume that the particular app has an FSM breakdown.

    So the ultimate question is: how can I avoid this, and how can I protect the system from blocking itself?

    EDIT: I am talking in the context of one view controller's view on the phone, I mean one part of the application. I understand the MVC pattern; I have separate modules for distinct functionality. Everything I describe is relevant to one canvas on the UI.
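
    One defensive pattern for exactly this: treat any persisted or restored state as untrusted input. On launch, re-check each state's invariants against the data that actually exists, and fall back to a known-good state when they fail, instead of resuming blindly. A hedged Java sketch (type names hypothetical; on iOS the same check would run at application launch):

        enum SessionState { IDLE, RECORDING, REVIEWING }

        // Restore never trusts the saved state: every state's preconditions are
        // re-verified against reality, and IDLE is the guaranteed-safe fallback.
        SessionState restore(SavedSession saved, DocumentStore store) {
            if (saved == null) return SessionState.IDLE;            // fresh start
            switch (saved.state()) {
                case REVIEWING:
                    // Invariant: the document under review must still exist and open.
                    return store.canOpen(saved.documentId())
                            ? SessionState.REVIEWING : SessionState.IDLE;
                case RECORDING:
                    // A recording cannot survive a crash or power loss; never resume it.
                    return SessionState.IDLE;
                default:
                    return SessionState.IDLE;
            }
        }

    The complementary half is making every transition that touches durable data atomic (write-then-rename, transactions), so a crash can only ever leave the system in a state the validator recognizes.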

    Read the article
