Search Results

Search found 32538 results on 1302 pages for 'restore database'.

  • Restoring VMs in Hyper-V from a (somewhat) failed VSS backup...

    - by DeliriumTremens
    I attempted to back up our virtualization server when moving from Hyper-V 2008 to R2. I changed the registry settings to register the Hyper-V VSS writer with Windows Server Backup, and sent the backup on its way while I went on to other things. Apparently the VSS portion didn't back up the VMs as I had hoped, and after updating Windows Server and Hyper-V to R2, I was unable to restore the VMs correctly. The backup completed and I have all of the files from before the update backed up, but is there any way to restore the VMs from that backup? The VHDs all seem to have old modified dates, and when I bring the machines up from these VHDs they have very old settings. I have found a bunch of AVHD files (named by GUID), but I'm not sure whether I can create a VM with the old VHD and merge the snapshots into it.

    Read the article

  • How to set up federated SQL data to display limited information to a Web server in the DMZ?

    - by Pcav
    I have a SQL Server behind a firewall. I need to push some limited SQL 2005 data to a web server in the DMZ so that database queries do not have to come all the way into the database server on our internal network. I want to push a small amount of dynamic data to the DMZ server and lock it down so that our hosted website never needs to reach into the internal network for information. The server in the DMZ would be the only machine allowed any sort of connection to the back-end database, and the hosting provider would just pull the data from that DMZ server...
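    One way to keep the DMZ copy narrow is to publish only a read-only view and a dedicated login on the DMZ-side SQL instance. A minimal sketch of that idea; the table, view and login names below are placeholders, not objects from the original question:

        CREATE VIEW dbo.PublicProductInfo
        AS
            SELECT ProductCode, ProductName, ListPrice
            FROM dbo.Products;          -- hypothetical source table
        GO
        CREATE LOGIN web_dmz WITH PASSWORD = 'use-a-strong-password';
        CREATE USER  web_dmz FOR LOGIN web_dmz;
        GRANT SELECT ON dbo.PublicProductInfo TO web_dmz;   -- the website login can see only this view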

    Read the article

  • Which open source/free CMSs allow for staging content changes before putting live?

    - by elliot100
    I'm not sure that I've phrased the question all that well. What I'm really looking for is a feature of CMSs where content changes are made on a restricted-access 'staging/preview' site before being published to the live external site. The open source/free CMSs I've looked at so far (Textpattern, WordPress, Movable Type) don't seem to allow this, as far as I can see. Although they allow new content to be saved as draft/pending, viewable by users with appropriate privileges, this doesn't work for changes to existing content -- a post/page can't be live and also have a new version pending. (Do correct me if I'm wrong.) I realise it should be possible to do this by making all changes on a staging site and then manually replicating the contents of that database to a separate live site, but I am looking for something a little more elegant. Edit: Just to clarify, both systems which synchronise a live database with a staging database and systems which offer live/staging views of a single database would be of interest. I am sure I have seen both approaches in commercial/proprietary CMSs.

    Read the article

  • Preventing users from deleting SQL data

    - by me2011
    We just purchased a program that requires the users to have an account on the MS SQL Server, with read/write access to the program's database. My concern is that since these users will now have write access to the database, they could connect to the SQL Server directly, outside of the program's client, and mess with the data in the tables. Is there any way I can prevent direct access to the database while still allowing access via the client program?
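    A technique often suggested for this kind of setup is a SQL Server application role: the users' own logins get no rights on the tables, and the client program activates the role (and its permissions) after it connects. Whether it applies here depends on whether the purchased program can call sp_setapprole, so treat this as a sketch with placeholder names:

        CREATE APPLICATION ROLE app_rw WITH PASSWORD = 'strong-app-password';
        GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_rw;

        -- run by the client program once per connection:
        EXEC sys.sp_setapprole @rolename = 'app_rw', @password = 'strong-app-password';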

    Read the article

  • Is Clonezilla a good option for a daily batch-file-based backup of a Windows XP PC?

    - by rossmcm
    Having just been through the process of rebuilding a Windows XP desktop machine when the disk died, I'm anxious to make it a lot less painful. I didn't lose any data, but reinstalling everything took ages. Clonezilla seems to be a highly mentioned free backup tool. How easy would it be to implement the following: a nightly unattended backup of the desktop's disk image to another network machine (or a second drive in the machine), hopefully with compression; and a restore from that image using USB boot media, so that if I come in to work and find the hard drive has tanked, it is just a matter of replacing the dead drive with a new one, booting from the USB stick, choosing the image to restore, and then finding something else to do for an hour or two. When it is finished I would hopefully be back to where I was.

    Read the article

  • Oracle backup and recovery

    - by kupa
    During recovery Oracle writes the following error: RMAN-06054: media recovery requesting unknown log: thread 1 seq 9 lowscn 4034762. In mount mode I ran: change archivelog all crosscheck; delete expired archivelog all; then restored and tried to recover again, but still got the RMAN-06054 error. Then I ran: run { SET UNTIL SEQUENCE 9 THREAD 1; RESTORE DATABASE; RECOVER DATABASE; } which let me recover the database. But after the next backup and recovery the same error occurs, and the workaround is the same... I would like to solve this problem without SET UNTIL SEQUENCE 9 THREAD 1; maybe I should unregister this archived log from the control file (I am using the control file, not a catalog). Can you tell me how?
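    For reference, RMAN-06054 during recovery usually means recovery is asking for a log that was never archived into (or alongside) the backup. A commonly suggested pattern, offered as a sketch rather than a guaranteed fix for this particular case, is to back up the archived logs with the database and keep the control file's archivelog records honest:

        BACKUP DATABASE PLUS ARCHIVELOG;    # include the archived logs that recovery will ask for
        CROSSCHECK ARCHIVELOG ALL;          # mark records for logs that no longer exist as expired
        DELETE EXPIRED ARCHIVELOG ALL;      # drop those expired records from the control file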

    Read the article

  • How can I run fsck on a disk image via Mac Terminal?

    - by mvizual
    I want to run fsck on a disk image before I use it to restore (replace) a corrupted volume. Using Terminal, what would be the proper command, syntax, and options for this operation? I've just recently become acquainted with Terminal and line commands, so syntax and specific options aren't part of my computing vocabulary. I'm using Terminal 2.1.2, bash, OS 10.6.8. Ultimately, I'm trying to restore an image to a secondary startup volume (external drive). The image is mounted on my desktop and I want to check it for errors before I use it. Disk Utility runs "repair disk" successfully but the integrity of the image is suspect.
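    A sketch of the usual sequence, assuming the image holds an HFS+ volume; the device node hdiutil prints on your machine will differ from the example:

        hdiutil attach -nomount ~/Desktop/backup.dmg   # attach without mounting; prints e.g. /dev/disk2s2
        fsck_hfs -fy /dev/rdisk2s2                     # -f force the check, -y repair anything found
        hdiutil detach /dev/disk2                      # release the image when finished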

    Read the article

  • How can I select multiple windows to tile in Windows 7 (like you could in previous versions)?

    - by Daniel
    I've disabled Aero Snap; is there any way to restore the old method of window arrangement that let you tile windows side by side, or top and bottom, etc.? All you had to do before was select the windows you wanted and right-click. I know the menu is still there in Windows 7, but it only applies to the whole taskbar, and you can also do it from Task Manager, but that is more complex to get to. After looking all around I cannot find a way to restore any such right-click menu for each application. Is there a way to tile like this for individual windows or groups?

    Read the article

  • Removing extra commas in CSV without another data source

    - by fi-no
    We have a large list of customer addresses that was exported from a SQL database to CSV. Whenever a company has a comma in its name, it (predictably) throws the whole file out of whack. Unfortunately, there are so many instances of this (and of commas in the second address line) that the whole CSV (~100k rows) is a huge mess. The obvious fix is to export the data again in a different, non-comma-reliant format, but access to that SQL database is more or less impossible at the moment... I've tried a few tools and brainstormed about combining things to fix this, but I figured asking couldn't hurt. Thanks!
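    If nothing better turns up, a rough first pass is to flag the rows whose comma count differs from the header row so the offending names can be quoted or fixed by hand. A sketch in Perl; the file name is a placeholder:

        #!/usr/bin/perl
        use strict;
        use warnings;

        open my $fh, '<', 'customers.csv' or die "open: $!";
        my $header   = <$fh>;
        my $expected = () = $header =~ /,/g;            # commas in the header row
        while (my $line = <$fh>) {
            my $commas = () = $line =~ /,/g;
            print "line $.: $commas commas (expected $expected)\n" if $commas != $expected;
        }
        close $fh;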

    Read the article

  • Uninstalled C++ Redistributables on Windows Vista

    - by VusP
    Windows Vista Service Pack 2, 32-bit. I uninstalled the C++ redistributables (I was not aware they are necessary for some applications to run) and now I get the error "Cannot start application because the side-by-side configuration is not correct" when I start applications like Avast, Revo Uninstaller, etc. I came across this page, but the downloaded vcredist_86.exe doesn't seem to do much: it extracts itself and that's it, nothing after that. No error, nothing. I just finished installing 300+ MB of updates on my system, so I don't want to use the System Restore option [and I don't know which restore point to revert to]. Any other way to get this done?

    Read the article

  • I want to replace 120GB SSD with 240GB SSD. Will I need to reinstall Windows?

    - by Borek
    Some SSD vendors offer "upgrade kits" that are supposed to move the operating system from the old disk to the new one without the need to reinstall it; however, that didn't quite work for me in the past and I always ended up installing the system from scratch. I'd really like to avoid that now, so I'd like to ask: generally speaking, when upgrading from a smaller to a larger SSD, will something like Windows Complete PC Backup and Restore work? Has someone experienced a seamless upgrade? Is there a proven tool to do it? (One that worked for you, not one that should work theoretically.) My problem is usually that the restored system sees a different disk, thinks it is different hardware, and doesn't want to restore.
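    A lower-level alternative some people fall back on (a different technique from the Windows backup route asked about above) is a raw sector copy from a live Linux USB; the restored system sees the same hardware because nothing but the disk changes. A sketch only -- the device names are examples, and dd will overwrite whatever target it is given:

        dd if=/dev/sda of=/dev/sdb bs=1M    # /dev/sda = old 120GB SSD, /dev/sdb = new 240GB SSD
        # the extra 120GB then shows up as unallocated space, and the Windows
        # partition can be extended afterwards in Disk Management.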

    Read the article

  • How to remove GRUB from dual boot (Debian and Windows XP)

    - by All
    I have a computer which cannot be booted from CD or USB. I have Debian and Windows XP dual-booting via GRUB. Now I want to uninstall Debian and GRUB and restore the Windows MBR. I can boot into both Debian and Windows, but I cannot boot from CD or USB for recovery. How can I remove GRUB and restore the MBR from within Windows XP or Debian? NOTE: I asked this question before, but after accepting the answer I found that Windows XP does not have an fdisk command. However, I think it is too late to continue the discussion there; thus, I asked this brand new question.
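    One approach that is often suggested when only the running Debian system is available is to write a generic MS-DOS-style MBR from Linux and make sure the Windows partition is the active one. A sketch, assuming the disk is /dev/sda and the Debian 'mbr' (or 'lilo') package can be installed:

        apt-get install mbr
        install-mbr /dev/sda      # writes a generic MBR that chain-loads the active partition
        # or, with the lilo package installed:
        # lilo -M /dev/sda mbr    # writes a standard MS-DOS style master boot record
        # then check with fdisk that the Windows partition carries the bootable/active flag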

    Read the article

  • Outlook 2007 starts with minimized window

    - by astro46
    Outlook 2007 starts at boot, minimized. Suddenly, as of two weeks ago, when I click on the taskbar icon (Outlook already started but minimized), what opens on screen is a tiny "window" the size of the close/minimize/restore buttons. I then have to move it down the screen and drag the borders to bring it to full size, or sometimes clicking the restore button maximizes the tiny window. Then, for days, it starts either the same way or in some other small size, not the size I left it at. I would appreciate suggestions on how to repair this particular example of Outlook not remembering its settings. Windows 7 64-bit, Outlook 2007 32-bit.

    Read the article

  • NHibernate MySQL Composite-Key

    - by LnDCobra
    I am trying to create a composite key that mimics the set of primary keys in the built-in MySQL.DB table. The Db primary key is as follows:

        Field | Type     | Null |
        ------+----------+------+
        Host  | char(60) | No   |
        Db    | char(64) | No   |
        User  | char(16) | No   |

    This is my DataBasePrivilege.hbm.xml file:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="TGS.MySQL.DataBaseObjects" namespace="TGS.MySQL.DataBaseObjects">
          <class name="TGS.MySQL.DataBaseObjects.DataBasePrivilege,TGS.MySQL.DataBaseObjects" table="db">
            <composite-id name="CompositeKey" class="TGS.MySQL.DataBaseObjects.DataBasePrivilegePrimaryKey, TGS.MySQL.DataBaseObjects">
              <key-property name="Host" column="Host" type="char" length="60" />
              <key-property name="DataBase" column="Db" type="char" length="64" />
              <key-property name="User" column="User" type="char" length="16" />
            </composite-id>
          </class>
        </hibernate-mapping>

    The following are my two classes for the composite key:

        namespace TGS.MySQL.DataBaseObjects
        {
            public class DataBasePrivilege
            {
                public virtual DataBasePrivilegePrimaryKey CompositeKey { get; set; }
            }

            public class DataBasePrivilegePrimaryKey
            {
                public string Host { get; set; }
                public string DataBase { get; set; }
                public string User { get; set; }

                public override bool Equals(object obj)
                {
                    if (ReferenceEquals(null, obj)) return false;
                    if (ReferenceEquals(this, obj)) return true;
                    if (obj.GetType() != typeof (DataBasePrivilegePrimaryKey)) return false;
                    return Equals((DataBasePrivilegePrimaryKey) obj);
                }

                public bool Equals(DataBasePrivilegePrimaryKey other)
                {
                    if (ReferenceEquals(null, other)) return false;
                    if (ReferenceEquals(this, other)) return true;
                    return Equals(other.Host, Host) && Equals(other.DataBase, DataBase) && Equals(other.User, User);
                }

                public override int GetHashCode()
                {
                    unchecked
                    {
                        int result = (Host != null ? Host.GetHashCode() : 0);
                        result = (result*397) ^ (DataBase != null ? DataBase.GetHashCode() : 0);
                        result = (result*397) ^ (User != null ? User.GetHashCode() : 0);
                        return result;
                    }
                }
            }
        }

    And the following is the exception I am getting:

        Execute System.InvalidCastException: Unable to cast object of type 'System.Object[]' to type 'TGS.MySQL.DataBaseObjects.DataBasePrivilegePrimaryKey'.
          at (Object , GetterCallback )
          at NHibernate.Bytecode.Lightweight.AccessOptimizer.GetPropertyValues(Object target)
          at NHibernate.Tuple.Component.PocoComponentTuplizer.GetPropertyValues(Object component)
          at NHibernate.Type.ComponentType.GetPropertyValues(Object component, EntityMode entityMode)
          at NHibernate.Type.ComponentType.GetHashCode(Object x, EntityMode entityMode)
          at NHibernate.Type.ComponentType.GetHashCode(Object x, EntityMode entityMode, ISessionFactoryImplementor factory)
          at NHibernate.Engine.EntityKey.GenerateHashCode()
          at NHibernate.Engine.EntityKey..ctor(Object identifier, String rootEntityName, String entityName, IType identifierType, Boolean batchLoadable, ISessionFactoryImplementor factory, EntityMode entityMode)
          at NHibernate.Engine.EntityKey..ctor(Object id, IEntityPersister persister, EntityMode entityMode)
          at NHibernate.Event.Default.DefaultLoadEventListener.OnLoad(LoadEvent event, LoadType loadType)
          at NHibernate.Impl.SessionImpl.FireLoad(LoadEvent event, LoadType loadType)
          at NHibernate.Impl.SessionImpl.Get(String entityName, Object id)
          at NHibernate.Impl.SessionImpl.Get(Type entityClass, Object id)
          at NHibernate.Impl.SessionImpl.Get[T](Object id)
          at TGS.MySQL.DataBase.DataProvider.GetDatabasePrivilegeByHostDbUser(String host, String db, String user) in C:\Documents and Settings\Michal\My Documents\Visual Studio 2008\Projects\TGS\TGS.MySQL.DataBase\DataProvider.cs:line 20
          at TGS.UserAccountControl.UserAccountManager.GetDatabasePrivilegeByHostDbUser(String host, String db, String user) in C:\Documents and Settings\Michal\My Documents\Visual Studio 2008\Projects\TGS\TGS.UserAccountControl\UserAccountManager.cs:line 10
          at TGS.UserAccountControlTest.UserAccountManagerTest.CanGetDataBasePrivilegeByHostDbUser() in C:\Documents and Settings\Michal\My Documents\Visual Studio 2008\Projects\TGS\TGS.UserAccountControlTest\UserAccountManagerTest.cs:line 12

    I am new to NHibernate and any help would be appreciated. I just can't see where it is getting the object[] from. Is the composite key supposed to be object[]?

    Read the article

  • Why do I get a blank page from my Perl CGI script?

    - by Jason
    The user enters a product code, price and name using a form. The script then either adds the product to the database or deletes it from the database. If the user tries to delete a product that is not in the database they get an error message; upon successful adding or deleting they also get a message. However, when I test it I just get a blank page. Perl doesn't come up with any warnings, syntax errors or anything; it says everything is fine, but I still just get a blank page. The script:

        #!/usr/bin/perl
        #c09ex5.cgi - saves data to and removes data from a database

        print "Content-type: text/html\n\n";

        use CGI qw(:standard);
        use SDBM_File;
        use Fcntl;
        use strict;

        #declare variables
        my ($code, $name, $price, $button, $codes, $names, $prices);

        #assign values to variables
        $code = param('Code');
        $name = param('Name');
        $price = param('Price');
        $button = param('Button');

        ($code, $name, $price) = format_input();
        ($codes, $names, $prices) = ($code, $name, $price);

        if ($button eq "Save") {
            add();
        }
        elsif ($button eq "Delete") {
            remove();
        }
        exit;

        sub format_input {
            $codes =~ s/^ +//;
            $codes =~ s/ +$//;
            $codes =~ tr/a-z/A-Z/;
            $codes =~ tr/ //d;
            $names =~ s/^ +//;
            $names =~ s/ +$//;
            $names =~ tr/ //d;
            $names = uc($names);
            $prices =~ s/^ +//;
            $prices =~ s/ +$//;
            $prices =~ tr/ //d;
            $prices =~ tr/$//d;
        }

        sub add {
            #declare variable
            my %candles;
            #open database, format and add record, close database
            tie(%candles, "SDBM_File", "candlelist", O_CREAT|O_RDWR, 0666)
                or die "Error opening candlelist. $!, stopped";
            format_vars();
            $candles{$codes} = "$names,$prices";
            untie(%candles);
            #create web page
            print "<HTML>\n";
            print "<HEAD><TITLE>Candles Unlimited</TITLE></HEAD>\n";
            print "<BODY>\n";
            print "<FONT SIZE=4>Thank you, the following product has been added.<BR>\n";
            print "Candle: $codes $names $prices</FONT>\n";
            print "</BODY></HTML>\n";
        } #end add

        sub remove {
            #declare variables
            my (%candles, $msg);
            tie(%candles, "SDBM_File", "candlelist", O_RDWR, 0)
                or die "Error opening candlelist. $!, stopped";
            format_vars();
            #determine if the product is listed
            if (exists($candles{$codes})) {
                delete($candles{$codes});
                $msg = "The candle $codes $names $prices has been removed.";
            }
            else {
                $msg = "The product you entered is not in the database";
            }
            #close database
            untie(%candles);
            #create web page
            print "<HTML>\n";
            print "<HEAD><TITLE>Candles Unlimited</TITLE></HEAD>\n";
            print "<BODY>\n";
            print "<H1>Candles Unlimited</H1>\n";
            print "$msg\n";
            print "</BODY></HTML>\n";
        }
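    Two things worth checking, offered as suggestions rather than a confirmed diagnosis: the script defines format_input but add() and remove() call format_vars(), so those calls would die at runtime after the header has already been printed, which produces exactly this kind of blank page; and pulling fatal errors into the browser makes such failures visible. The module below is standard Perl, not part of the original script:

        # add near the top of the script, after the other use statements
        use CGI::Carp qw(fatalsToBrowser);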

    Read the article

  • Are all <canvas> tag dimensions in pixels?

    - by Simon Omega
    Are all tag dimensions in pixels? I am asking because I understood them to be. But my math is broken or I am just not grasping something here. I have been doing python mostly and just jumped back into Java Scripting. If I am just doing something stupid let me know. For a game I am writing, I wanted to have a blocky gradient. I have the following: HTML <canvas id="heir"></canvas> CSS @media screen { body { font-size: 12pt } /* Game Rendering Space */ canvas { width: 640px; height: 480px; border-style: solid; border-width: 1px; } } JavaScript (Shortened) function testDraw ( thecontext ) { var myblue = 255; thecontext.save(); // Save All Settings (Before this Function was called) for (var i = 0; i < 480; i = i + 10 ) { if (myblue.toString(16).length == 1) { thecontext.fillStyle = "#00000" + myblue.toString(16); } else { thecontext.fillStyle = "#0000" + myblue.toString(16); } thecontext.fillRect(0, i, 640, 10); myblue = myblue - 2; }; thecontext.restore(); // Restore Settings to Save Point (Removing Styles, etc...) } function main () { var targetcontext = document.getElementById(“main”).getContext("2d"); testDraw(targetcontext); } To me this should produce a series of 640w by 10h pixel bars. In Google Chrome and Fire Fox I get 15 bars. To me that means ( 480 / 15 ) is 32 pixel high bars. So I change the code to: function testDraw ( thecontext ) { var myblue = 255; thecontext.save(); // Save All Settings (Before this Function was called) for (var i = 0; i < 16; i++ ) { if (myblue.toString(16).length == 1) { thecontext.fillStyle = "#00000" + myblue.toString(16); } else { thecontext.fillStyle = "#0000" + myblue.toString(16); } thecontext.fillRect(0, (i * 10), 640, 10); myblue = myblue - 10; }; thecontext.restore(); // Restore Settings to Save Point (Removing Styles, etc...) } And get a true 32 pixel height result for comparison. Other than the fact that the first code snippet has shades of blue rendering in non-visible portions of the they are measuring 32 pixels. Now back to the Original Java Code... If I inspect the tag in Chrome it reports 640 x 480. If I inspect it in Fire Fox it reports 640 x 480. BUT! Fire Fox exports the original code to png at 300 x 150 (which is 15 rows of 10). Is it some how being resized to 640 x 480 by the CSS instead of being set to a true 640 x 480? Why, how, what? O_o I confused...
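    A detail that explains the 300 x 150 export: a canvas element's drawing surface defaults to 300 x 150 pixels, and CSS width/height only scale that surface rather than resize it. Setting the width and height attributes on the element itself (shown below as a suggestion, reusing the id from the question) keeps the coordinate space at a true 640 x 480:

        <!-- the attributes size the drawing surface; CSS alone just stretches the 300x150 default -->
        <canvas id="heir" width="640" height="480"></canvas>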

    Read the article

  • How to select table column names in a view and pass to controller in rails?

    - by zachd1_618
    So I am new to Rails, and OO programming in general. I have some grasp of the MVC architecture. My goal is to make a (nearly) completely dynamic plug-and-play plotting web server. I am fairly confused with params, forms, and select helpers. What I want to do is use Rails drop downs to basically pass parameters as strings to my controller, which will use the params to select certain column data from my database and plot it dynamically. I have the latter part of the task working, but I can't seem to pass values from my view to controller. For simplicity's sake, say my database schema looks like this: --------------Plot--------------- |____x____|____y1____|____y2____| | 1 | 1 | 1 | | 2 | 2 | 4 | | 3 | 3 | 9 | | 4 | 4 | 16 | | 5 | 5 | 25 | ... and in my Model, I have dynamic selector scopes that will let me select just certain columns of data: in Plot.rb class Plot < ActiveRecord::Base scope :select_var, lambda {|varname| select(varname)} scope :between_x, lambda {|x1,x2| where("x BETWEEN ? and ?","#{x1}","#{x2}")} So this way, I can call: irb>>@p1 = Plot.select_var(['x','y1']).between_x(1,3) and get in return a class where @p1.x and @p1.y1 are my only attributes, only for values between x=1 to x=4, which I dynamically plot. I want to start off in a view (plot/index), where I can dynamically select which variable names (table column names), and which rows from the database to fetch and plot. The problem is, most select helpers don't seem to work with columns in the database, only rows. So to select columns, I first get an array of column names that exist in my database with a function I wrote. Plots Controller def index d=Plot.first @tags = d.list_vars end So @tags = ['x','y1','y2'] Then in my plot/index.html.erb I try to use a drop down to select wich variables I send back to the controller. index.html.erb <%= select_tag( :variable, options_for_select(@plots.first.list_vars,:name,:multiple=>:true) )%> <%= button_to 'Plot now!', :controller =>"plots/plot_vars", :variable => params[:variable]%> Finally, in the controller again Plots controller ... def plot_vars @plot_data=Plot.select_vars([params[:variable]]) end The problem is everytime I try this (or one of a hundred variations thereof), the params[:variable] is nill. How can I use a drop down to pass a parameter with string variable names to the controller? Sorry its so long, I have been struggling with this for about a month now. :-( I think my biggest problem is that this setup doesn't really match the Rails architecture. I don't have "users" and "articles" as individual entities. I really have a data structure, not a data object. Trying to work with the structure in terms of data object speak is not necessarily the easiest thing to do I think. For background: My actual database has about 250 columns and a couple million rows, and they get changed and modified from time to time. I know I can make the database smarter, but its not worth it on my end. I work at a scientific institute where there are a ton of projects with databases just like this. Each one has a web developer that spends months setting up a web interface and their own janky plotting setups. I want to make this completely dynamic, as a plug-and-play solution so all you have to do is specify your database connection, and this rails setup will automatically show and plot which data you want in it. I am more of a sequential programmer and number cruncher, as are many people here. 
    I think this project could be very helpful in the end, but it's difficult for me to figure out right now.
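    One likely explanation, offered as a guess from the snippets above: button_to renders its own self-contained form, so the value chosen in the separate select_tag is never submitted and params[:variable] arrives nil. Wrapping the select and the submit button in a single form usually fixes it. A sketch only; the route and option source are assumptions, not taken from the original code:

        <%# assumes a route exists for plots#plot_vars; names here are illustrative %>
        <%= form_tag(:controller => 'plots', :action => 'plot_vars') do %>
          <%= select_tag :variable, options_for_select(@tags) %>
          <%= submit_tag 'Plot now!' %>
        <% end %>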

    Read the article

  • Entity Association Mapping with Code First Part 1 : Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First has been released by data team at Microsoft. Entity Framework Code-First provides a pretty powerful code-centric way to work with the databases. When it comes to associations, it brings ultimate flexibility. I’m a big fan of the EF Code First approach and am planning to explain association mapping with code first in a series of blog posts and this one is dedicated to Complex Types. If you are new to Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around the relationship mapping.   What is Mapping?Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases. What is Relationship mapping?A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects. Types of RelationshipsThere are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types: One-to-one relationships: This is a relationship where the maximums of each of its multiplicities is one. One-to-many relationships: Also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one. Many-to-many relationships: This is a relationship where the maximum of both multiplicities is greater than one. The second category is based on directionality and it contains two types: Uni-directional relationships: when an object knows about the object(s) it is related to but the other object(s) do not know of the original object. To put this in EF terminology, when a navigation property exists only on one of the association ends and not on the both. Bi-directional relationships: When the objects on both end of the relationship know of each other (i.e. a navigation property defined on both ends). How Object Relationships Are Implemented in POCO domain models?When the multiplicity is one (e.g. 0..1 or 1) the relationship is implemented by defining a navigation property that reference the other object (e.g. an Address property on User class). When the multiplicity is many (e.g. 0..*, 1..*) the relationship is implemented via an ICollection of the type of other object. How Relational Database Relationships Are Implemented? Relationships in relational databases are maintained through the use of Foreign Keys. A foreign key is a data attribute(s) that appears in one table and must be the primary key or other candidate key in another table. With a one-to-one relationship the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship we implement a foreign key from the “one table” to the “many table”. We could also choose to implement a one-to-many relationship via an associative table (aka Join table), effectively making it a many-to-many relationship. Introducing the ModelNow, let's review the model that we are going to use in order to implement Complex Type with Code First. It's a simple object model which consist of two classes: User and Address. Each user could have one billing address. The Address information of a User is modeled as a separate class as you can see in the UML model below: In object-modeling terms, this association is a kind of aggregation—a part-of relationship. 
Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole. Fine-grained domain models The motivation behind this design was to achieve Fine-grained domain models. In crude terms, fine-grained means “more classes than tables”. For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it’s much better to model this using an Address class, where User has the BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse and is more understandable. Complex Types: Splitting a Table Across Multiple Types Back to our model, there is no difference between this composition and other weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: A composed class is often a candidate Complex Type. But C# has no concept of composition—a class or property can’t be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on Address class) which make sense because when it comes to the database everything is going to be saved into one single table. How to implement a Complex Types with Code First Code First has a concept of Complex Type Discovery that works based on a set of Conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation: public class User{    public int UserId { get; set; }    public string FirstName { get; set; }    public string LastName { get; set; }    public string Username { get; set; }    public Address Address { get; set; }} public class Address {     public string Street { get; set; }     public string City { get; set; }            public string PostalCode { get; set; }        }public class EntityMappingContext : DbContext {     public DbSet<User> Users { get; set; }        } With code first, this is all of the code we need to write to create a complex type, we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API. Database SchemaThe mapping result for this object model is as follows: Limitations of this mappingThere are two important limitations to classes mapped as Complex Types: Shared references is not possible: The Address Complex Type doesn’t have its own database identity (primary key) and so can’t be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address). 
No elegant way to represent a null reference There is no elegant way to represent a null reference to an Address. When reading from database, EF Code First always initialize Address object even if values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns a initialized complex type when the owning entity object is retrieved from the database. SummaryIn this post we learned about fine-grained domain models which complex type is just one example of it. Fine-grained is fully supported by EF Code First and is known as the most important requirement for a rich domain model. Complex type is usually the simplest way to represent one-to-one relationships and because the lifecycle is almost always dependent in such a case, it’s either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and will learn about other ways to map a one-to-one association that does not have the limitations of the complex types. References ADO.NET team blog Mapping Objects to Relational Databases Java Persistence with Hibernate
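    As a quick illustration of the mapping in action (my own sketch, not code from the article), the Address is populated like any other property, and its fields are saved as extra columns of the single Users table when SaveChanges runs:

        using (var db = new EntityMappingContext())
        {
            var user = new User
            {
                FirstName = "John",
                LastName  = "Doe",
                Username  = "jdoe",
                Address   = new Address { Street = "1 Main St", City = "Springfield", PostalCode = "12345" }
            };
            db.Users.Add(user);
            db.SaveChanges();   // stored in the Users table (e.g. as Address_Street, Address_City, Address_PostalCode)
        }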

    Read the article

  • Upgrading log shipping from 2005 to 2008 or 2008R2

    - by DavidWimbush
    If you're using log shipping you need to be aware of some small print. The general idea is to upgrade the secondary server first and then the primary server because you can continue to log ship from 2005 to 2008R2. But this won't work if you're keeping your secondary databases in STANDBY mode rather than IN RECOVERY. If you're using native log shipping you'll have some work to do. If you've rolled your own log shipping (ahem) you can convert a STANDBY database to IN RECOVERY like this:   restore database [dw]   with norecovery; and then change your restore code to use WITH NORECOVERY instead of WITH STANDBY. (Finally all that aggravation pays off!) You can either upgrade the secondary server in place or rebuild it. A secondary database doesn't actually get upgraded until you recover it so the log sequence chain is not broken and you can continue shipping from the primary. Just remember that it can take quite some time to upgrade a database so you need to factor that into the expectations you give people about how long it will take to fail over. For more details, check this out: http://msdn.microsoft.com/en-us/library/cc645954(SQL.105).aspx
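    For the roll-your-own case, the changed restore step looks roughly like this; the database name, path and file names are illustrative only:

        RESTORE LOG [dw]
            FROM DISK = N'\\backupserver\logship\dw_20101231120000.trn'
            WITH NORECOVERY;    -- previously: WITH STANDBY = N'D:\logship\dw_undo.dat'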

    Read the article

  • Oracle Announces Oracle Big Data Appliance X3-2 and Enhanced Oracle Big Data Connectors

    - by jgelhaus
    Enables Customers to Easily Harness the Business Value of Big Data at Lower Cost Engineered System Simplifies Big Data for the Enterprise Oracle Big Data Appliance X3-2 hardware features the latest 8-core Intel® Xeon E5-2600 series of processors, and compared with previous generation, the 18 compute and storage servers with 648 TB raw storage now offer: 33 percent more processing power with 288 CPU cores; 33 percent more memory per node with 1.1 TB of main memory; and up to a 30 percent reduction in power and cooling Oracle Big Data Appliance X3-2 further simplifies implementation and management of big data by integrating all the hardware and software required to acquire, organize and analyze big data. It includes: Support for CDH4.1 including software upgrades developed collaboratively with Cloudera to simplify NameNode High Availability in Hadoop, eliminating the single point of failure in a Hadoop cluster; Oracle NoSQL Database Community Edition 2.0, the latest version that brings better Hadoop integration, elastic scaling and new APIs, including JSON and C support; The Oracle Enterprise Manager plug-in for Big Data Appliance that complements Cloudera Manager to enable users to more easily manage a Hadoop cluster; Updated distributions of Oracle Linux and Oracle Java Development Kit; An updated distribution of open source R, optimized to work with high performance multi-threaded math libraries Read More   Data sheet: Oracle Big Data Appliance X3-2 Oracle Big Data Appliance: Datacenter Network Integration Big Data and Natural Language: Extracting Insight From Text Thomson Reuters Discusses Oracle's Big Data Platform Connectors Integrate Hadoop with Oracle Big Data Ecosystem Oracle Big Data Connectors is a suite of software built by Oracle to integrate Apache Hadoop with Oracle Database, Oracle Data Integrator, and Oracle R Distribution. Enhancements to Oracle Big Data Connectors extend these data integration capabilities. With updates to every connector, this release includes: Oracle SQL Connector for Hadoop Distributed File System, for high performance SQL queries on Hadoop data from Oracle Database, enhanced with increased automation and querying of Hive tables and now supported within the Oracle Data Integrator Application Adapter for Hadoop; Transparent access to the Hive Query language from R and introduction of new analytic techniques executing natively in Hadoop, enabling R developers to be more productive by increasing access to Hadoop in the R environment. Read More Data sheet: Oracle Big Data Connectors High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database

    Read the article

  • How to install Grub2 under several common scenarios

    - by Huckle
    I feel the community has long needed a clean guide on how to install Grub2 under a few extremely common scenarios. I will accept the answer as solved when it has one section per scenario and assumes nothing other than what is specified. Please add to the existing answer, wiki style, keeping to the original assumptions.

    Rules: 1. You cannot, at any point in the answer, invoke Ubiquity (the Ubuntu installer). 2. I strongly recommend not using any automatic boot-repair tools as they're not very educational.

    Scenario 1: Non-booting Linux OS, no boot partition, fix from Live CD
    Setup: /dev/sda1 is formatted ext*; /dev/sda2 is formatted linux_swap; /dev/sda1 doesn't boot because the MBR is scrambled and /boot/* was erased.
    Explain: How to boot to a Live CD / USB and restore Grub2 to the MBR and /boot of /dev/sda1.

    Scenario 2: Non-booting Linux OS, boot partition, fix from Live CD
    Setup: /dev/sda1 is formatted fat; /dev/sda2 is formatted ext*; /dev/sda3 is formatted linux_swap; /dev/sda2 doesn't boot because the MBR is scrambled and /dev/sda1 was formatted.
    Explain: How to boot to a Live CD / USB, restore Grub2 to the MBR and /dev/sda1, and then update the fstab on /dev/sda2.

    Scenario 3: Install on to thumb drive, booting various OSes, from Linux OS
    Setup: /dev/sdb is removable media; /dev/sdb1 is formatted fat; /dev/sdb2 is formatted ext*; /dev/sdb3 is formatted fat; the MBR of /dev/sdb is otherwise not initialized. You are executing from a Linux-based OS installed on /dev/sda.
    Explain: How to install Grub2 on to /dev/sdb1, mark /dev/sdb1 active, and be able to choose between /dev/sdb2 and /dev/sdb3 on boot.

    Scenario 4: (Bonus) Install on to thumb drive, booting ISO, from Linux OS
    Setup: /dev/sdb is removable media; /dev/sdb1 is formatted fat; /dev/sdb1 contains /iso/live.iso; /dev/sdb2 is formatted ext*; /dev/sdb3 is formatted fat; the MBR of /dev/sdb is otherwise not initialized. You are executing from a Linux-based OS installed on /dev/sda.
    Explain: How to install Grub2 on to /dev/sdb1, mark /dev/sdb1 active, and be able to choose between /dev/sdb2, /dev/sdb3, and /iso/live.iso on boot.
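    For Scenario 1, the approach usually given (a sketch under the stated partition layout; adjust the device names if yours differ) is to chroot into the installed system from the live session and reinstall the bootloader:

        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        grub-install /dev/sda    # rewrites the MBR and repopulates /boot/grub
        update-grub              # regenerates grub.cfg
        exit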

    Read the article

  • How many different servers are needed to keep a website running with no downtime? [closed]

    - by Mason Wheeler
    Machines go down. It's a fact of life. They may need to be rebooted for some reason, or they may have a hardware failure, or a power outage. So if I wanted to deploy a website with a server backed by a SQL database, putting the whole thing on one server wouldn't be good enough. It obviously needs at least two servers, so that if one goes down, the other can pick up the slack until the first comes back up. Of course, if I have the server software on two machines, either one of which could go down, I can't place the database on either of those two machines, because it could go down. So the database needs its own server. But that server can go down, so I need a backup database server and some sort of replication system to keep it in sync so the main can fail-over to it. So far, that's a bare minimum of 4 machines to keep one website running with a reasonable chance of no downtime. (Assuming no catastrophic events take place that take down both front-end servers at once or both DB servers at once, and no hacks, DDOS attacks, etc. Am I missing any other factors, or should I consider 4 servers to be the minimum for running a website with a goal of continuing operation without downtime even when a server goes down?

    Read the article

  • SQL Sharding and SQL Azure…

    - by Dave Noderer
    Herve Roggero has just published a paper that outlines patterns for scaling using SQL Azure and the Blue Syntax (he and Scott Klein’s company) sharding api. You can find the paper at: http://www.bluesyntax.net/files/EnzoFramework.pdf Herve and Scott have also just released an Apress book Pro SQL Azure. The idea of being able to split (shard) database operations automatically and control them from a web based management console is very appealing. These ideas have been talked about for a long time and implemented in thousands of very custom ways that have been costly, complicated and fragile. Now, there is light at the end of the tunnel. Scaling database access will become easier and move into the mainstream of application development. The main cost is using an api whenever accessing the database. The api will direct the query to the correct database(s) which may be located locally or in the cloud. It is inevitable that the api will change in the future, perhaps incorporated into a Microsoft offering. Even if this is the case, your application has now been architected to utilize these patterns and details of the actual api will be less important. Herve does a great job of laying out the concepts which every developer and architect should be familiar with!

    Read the article

  • Installation procedure RAC One Node

    - by rene.kundersma
    Okay, In order to test RAC One Node, on my Oracle VM Laptop, I just: - installed Oracle VM 2.2 - Created two OEL 5.3 images The two images are fully prepared for Oracle 11gr2 Grid Infrastructure and 11gr2 RAC including four shared disks for ASM and private nics. After installation of the Oracle 11gr2 Grid Infrastructure and a "software only installation" of 11gr2 RAC, I installed patch 9004119 as you can see with the opatch lsinv output: This patch has the scripts required to administer RAC One Node, you will see them later. At the moment we have them available for Linux and Solaris. After installation of the patch, I created a RAC database with an instance on one node. Please note that the "Global Database Name" has to be the same as the SID prefix and should be less then or equal to 8 characters: When the database creation is done, first I create a service. This is because RAC One Node needs to be "initialized" each time you add a service: The service configuration details are: After creating the service, a script called raconeinit needs to run from $RDBMS_HOME/bin. This is a script supplied by the patch. I can imagine the next major patch set of 11gr2 has this scripts available by default. The script will configure the database to run on other nodes: After initialization, when you would run raconeinit again, you would see: So, now the configuration is ready and we are ready to run 'Omotion' and move the service around from one node to the other (yes, vm competitor: this is service is available during the migration, nice right ?) . Omotion is started by running Omotion. With Omotion -v you get verbose output: So, during the migration you will see the two instance active: And, after the migration, there is only one instance left on the new node:
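    In short, the two patch-supplied calls described above, assuming patch 9004119 placed the scripts in $ORACLE_HOME/bin:

        $ORACLE_HOME/bin/raconeinit     # configure the one-node RAC database for relocation
        $ORACLE_HOME/bin/Omotion -v     # migrate the running instance to the other node, verbose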

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant ( the tradional image of the Hadoop framework) and the Oracle Iron Man (Big Data..) an english setter could be seen as the link to the right data Data, Data, Data, we are living in a world where data technology based on popular applications , search engines, Webservers, rich sms messages, email clients, weather forecasts and so on, have a predominant role in our life. More and more technologies are used to analyze/track our behavior, try to detect patterns, to propose us "the best/right user experience" from the Google Ad services, to Telco companies or large consumer sites (like Amazon:) ). The more we use all these technologies, the more we generate data, and thus there is a need of huge data marts and specific hardware/software servers (as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users. Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large scale distributed processing was an emerging infrastructure need and the solution seemed to be the "collocation of compute nodes with the data", which in turn leaded to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on larger clusters of rather inexpensive servers. Several Oracle products are using the distributed / aggregation pattern for data calculation ( Coherence, NoSql, times ten ) so once that you are familiar with one of these technologies, lets says with coherence aggregators, you will find the whole Hadoop, MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged on a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a "lab like" implementation of this concept is done on a single Linux X64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0, and a single node Apache hadoop-1.2.1 HDFS cluster, using the SQL connector for HDFS. The whole setup is fairly simple: Install on a Linux x64 server ( or virtual box appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen. Check the java version of your Linux server with the command: java -version java version "1.7.0_40" Java(TM) SE Runtime Environment (build 1.7.0_40-b43) Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode) Decompress the hadoop hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1 Modify your .bash_profile export HADOOP_HOME=/u01/hadoop-1.2.1 export PATH=$PATH:$HADOOP_HOME/bin export HIVE_HOME=/u01/hive-0.11.0 export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin (also see my sample .bash_profile) Set up ssh trust for Hadoop process, this is a mandatory step, in our case we have to establish a "local trust" as will are using a single node configuration copy the new public keys to the list of authorized keys connect and test the ssh setup to your localhost: We will run a "pseudo-Hadoop cluster", in what is called "local standalone mode", all the Hadoop java components are running in one Java process, this is enough for our demo purposes. 
We need to "fine tune" some Hadoop configuration files, we have to go at our $HADOOP_HOME/conf, and modify the files: core-site.xml hdfs-site.xml mapred-site.xml check that the hadoop binaries are referenced correctly from the command line by executing: hadoop -version As Hadoop is managing our "clustered HDFS" file system we have to create "the mount point" and format it , the mount point will be declared to core-site.xml as: The layout under the /u01/hadoop-1.2.1/data will be created and used by other hadoop components (MapReduce = /mapred/...) HDFS is using the /dfs/... layout structure format the HDFS hadoop file system: Start the java components for the HDFS system As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configurations: Once our HDFS Hadoop setup is done you can use the HDFS file system to store data ( big data : )), and plug them back and forth to Oracle Databases by the means of the Big Data Connectors ( which is the next configuration step). You can create / use a Hive db, but in our case we will make a simple integration of "raw data" , through the creation of an External Table to a local Oracle instance ( on the same Linux box, we run the Hadoop HDFS one node cluster and one Oracle DB). Download some public "big data", I use the site: http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :). Here is the data layout of my example file: Download the Big Data Connector from the OTN (oraosch-2.2.0.zip), unzip it to your local file system (see picture below) Modify your environment in order to access the connector libraries , and make the following test: [oracle@dg1 bin]$./hdfs_stream Usage: hdfs_stream locationFile [oracle@dg1 bin]$ Load the data to the Hadoop hdfs file system: hadoop fs -mkdir bgtest_data hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt Check the content of the HDFS with the browser UI: Start the Oracle database, and run the following script in order to create the Oracle database user, the Oracle directories for the Oracle Big Data Connector (dg1 it’s my own db id replace accordingly yours): #!/bin/bash export ORAENV_ASK=NO export ORACLE_SID=dg1 . 
oraenv sqlplus /nolog <<EOF CONNECT / AS sysdba; CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin'; CREATE USER BGUSER IDENTIFIED BY oracle; GRANT CREATE SESSION, CREATE TABLE TO BGUSER; GRANT EXECUTE ON sys.utl_file TO BGUSER; GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER; CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs'; GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER; CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data'; GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER; EOF Put the following in a file named t3.sh and make it executable, hadoop jar $OSCH_HOME/jlib/orahdfs.jar \ oracle.hadoop.exttab.ExternalTable \ -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \ -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \ -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \ -D oracle.hadoop.exttab.columnCount=7 \ -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \ -D oracle.hadoop.connection.user=BGUSER \ -D oracle.hadoop.exttab.printStackTrace=true \ -createTable --noexecute then test the creation fo the external table with it: [oracle@dg1 samples]$ ./t3.sh ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory Oracle SQL Connector for HDFS Release 2.2.0 - Production Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved. Enter Database Password:] The create table command was not executed. The following table would be created. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081035-74-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files would be created. osch-20131022081035-74-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results: The create table command succeeded. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081719-3239-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files were created. 
osch-20131022081719-3239-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt This is the view from the SQL Developer: and finally the number of lines in the oracle table, imported from our Hadoop HDFS cluster SQL select count(*) from "BGUSER"."BGTEST_DP_XTAB"; COUNT(*) ---------- 1151 In a next post we will integrate data from a Hive database, and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a step to show you how these unstructured data world can be integrated to Oracle infrastructure. Hadoop, BigData, NoSql are great technologies, they are widely used and Oracle is offering a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the Oracle related technologies: NoSQL: Introduction to Oracle NoSQL Database Using Oracle NoSQL Database Big Data: Introduction to Big Data Oracle Big Data Essentials Oracle Big Data Overview Oracle Data Integrator: Oracle Data Integrator 12c: New Features Oracle Data Integrator 11g: Integration and Administration Oracle Data Integrator: Administration and Development Oracle Data Integrator 11g: Advanced Integration and Development Oracle Coherence 12c: Oracle Coherence 12c: New Features Oracle Coherence 12c: Share and Manage Data in Clusters Oracle Coherence 12c: Oracle GoldenGate 11g: Fundamentals for Oracle Oracle GoldenGate 11g: Fundamentals for SQL Server Oracle GoldenGate 11g Fundamentals for Oracle Oracle GoldenGate 11g Fundamentals for DB2 Oracle GoldenGate 11g Fundamentals for Teradata Oracle GoldenGate 11g Fundamentals for HP NonStop Oracle GoldenGate 11g Management Pack: Overview Oracle GoldenGate 11g Troubleshooting and Tuning Oracle GoldenGate 11g: Advanced Configuration for Oracle Other Resources: Apache Hadoop : http://hadoop.apache.org/ is the homepage for these technologies. "Hadoop Definitive Guide 3rdEdition" by Tom White is a classical lecture for people who want to know more about Hadoop , and some active "googling " will also give you some more references. About the author: Eugene Simos is based in France and joined Oracle through the BEA-Weblogic Acquisition, where he worked for the Professional Service, Support, end Education for major accounts across the EMEA Region. He worked in the banking sector, ATT, Telco companies giving him extensive experience on production environments. Eugen currently specializes in Oracle Fusion Middleware teaching an array of courses on Weblogic/Webcenter, Content,BPM /SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite) throughout the EMEA region.

    Read the article
