Search Results

Search found 416 results on 17 pages for 'redundancy'.

Page 13 of 17

  • HP/IBM alternative to Buffalo iSCSI TerraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with Direct Attached Storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP). Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TerraStation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little reluctant to go that way as it's a bit too "entry level" for my liking. In particular I would be very keen on more redundancy, power, networking, etc. I'm also very aware that you "get what you pay for". Therefore, can anyone recommend equivalents from the big boys, HP/IBM? I have searched high and low on the HP site and seen many options, but am struggling to work out whether that is all the hardware I will need. Some options appear to need controllers separate from the disk enclosures, etc.

    Read the article

  • Are ZFS snapshots + S3 a viable backup system for several VMs and general fileserver storage?

    - by AllanA
    I've been tasked with setting up a backup system for my small office (around 12 people). Most of our production stuff is on the AWS cloud, so what I need to back up are some small office/development files (under 100 GB right now), plus our operational VMs and development, which round out to a bit under 1 TB. I just need something reliable, convenient, and straightforward. I'm comfortable with Linux, FreeBSD, and to some extent Solaris 10, so I'm leaning toward a full server rather than an appliance system like Openfiler or FreeNAS. What I'm contemplating is a small fileserver for general storage and nightly backups of the virtual machines, followed up by an offsite backup to Amazon's S3 storage service. It'd be the usual incremental backups nightly and a full backup weekly. My question is whether using ZFS snapshots, both locally and dumped to S3 via 'zfs send [-i]', is a viable backup tool. Or should I stick to using Duplicity, or some other method entirely? ZFS snapshots on the internal fileserver/backup machine sound like a perfect way to provide quick and convenient data recovery, so I'm likely to go with that for local redundancy. (If you folks see scenarios where relying on ZFS snapshots would be worse than a more traditional archiving backup, feel free to convince me.) But are snapshots flexible enough to lean on for recovery from the loss of my backup server, or am I better off with something more traditional? (Feel free to recommend free or commercial backup solutions you favor.)
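
    For reference, a minimal sketch of the nightly cycle being considered -- pool, snapshot and bucket names are hypothetical placeholders, and it assumes an S3 upload tool such as s3cmd is installed:

        # nightly snapshot of the fileserver dataset
        zfs snapshot tank/office@2012-06-30
        # incremental stream against the previous night's snapshot, compressed to a staging file
        zfs send -i tank/office@2012-06-29 tank/office@2012-06-30 | gzip > /backup/office-incr-2012-06-30.zfs.gz
        # push the stream to S3 (a weekly full would use a plain 'zfs send' of the latest snapshot)
        s3cmd put /backup/office-incr-2012-06-30.zfs.gz s3://example-backup-bucket/
        # recovery is the reverse: download, then  zcat office-incr-...zfs.gz | zfs receive tank/office-restore

    One caveat worth weighing: an incremental 'zfs send' stream can only be received on top of its base snapshot, so a chain of incrementals is only as durable as its weakest link -- losing one object in S3 makes every later increment in that chain unusable.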

    Read the article

  • Setup ejabberd with SQL Server 2008

    - by wonster
    Here's what I have got so far. Windows Server 2008, 64-bit. Installed the latest version of ejabberd, ejabberd-2.1.8-windows-installer.exe. The Windows service starts up fine but seems ineffective; the start and stop scripts work, however. I am able to log in to the admin page, which so far doesn't seem that versatile. Opened up ports 5222, 5226 and 5280 for my workstation to talk to the server. I've got the Spark and Jabbear Windows clients to register, log in and instant message with multiple accounts using the server. After confirming that I've got the very basics working, I've decided to make use of SQL Server 2008 as the database. Reason? Mainly, I am very comfortable with SQL Server: I can deal with redundancy, failover and data analysis easily, and I'm not sure ejabberd's built-in database provides all that. Following the instructions in ejabberd's documentation, I set up a system DSN that points to another physical database. The DSN checks out fine (I tried both Named Pipes and TCP/IP). Modified ejabberd.cfg: commented out the line {auth_method, internal} with %% and uncommented the line {auth_method, odbc}, then uncommented and modified {odbc_server, "DSN=ejabberd;UID=somelogin;PWD=somepassword"}. After making these changes, I restarted. No errors are found in the log files, but the Jabber clients are no longer able to register new accounts. I'm not sure where to look for errors besides the /logs/ folder, as I'm new to all this. I am basically stuck here on step 5. Has anyone got this setup to work recently? Some of the posts I've found around are years old and of no help. I can't be the only one setting up ejabberd with MS SQL. Any help would be appreciated!

    Read the article

  • WSUS registry file: NoAutoRebootWithLoggedOnUsers entry being ignored

    - by the_pete
    We are using a registry entry to connect our internal workstations to our WSUS server and everything seems to be working except the NoAutoRebootWithLoggedOnUsers entry. Without fail, over the last few weeks, our lab setup as well as our users have been prompted to restart their machines with a 15-minute timeout and there's nothing they can do about it. They can't postpone or cancel the restart; all options in the prompt are greyed out. Below is the registry file we are using to connect our workstations to our WSUS server:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
        "AcceptTrustedPublisherCerts"=dword:00000001
        "ElevateNonAdmins"=dword:00000000
        "WUServer"="http://xxx.xxx.xxx.xxx:8530"
        "WUStatusServer"="http://xxx.xxx.xxx.xxx:8530"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
        "AUOptions"=dword:00000004
        "AutoInstallMinorUpdates"=dword:00000001
        "DetectionFrequencyEnabled"=dword:00000001
        "DetectionFrequency"=dword:00000002
        "NoAutoUpdate"=dword:00000000
        "NoAutoRebootWithLoggedOnUsers"=dword:00000001
        "RebootRelaunchTimeout"=dword:00000030
        "RebootRelaunchTimeoutEnabled"=dword:00000001
        "RescheduleWaitTime"=dword:00000020
        "RescheduleWaitTimeEnabled"=dword:00000001
        "ScheduledInstallDay"=dword:00000000
        "ScheduledInstallTime"=dword:00000003
        "UseWUServer"=dword:00000001

    There is a bit of redundancy, if you want to call it that, in having both the NoAutoRebootWithLoggedOnUsers entry and the RebootRelaunchTimeout entries, but we wanted to see if we could either disable the restart or give our users a larger window within which they could wrap up their work before restarting. Neither of these entries seems to work, but our priority is getting NoAutoRebootWithLoggedOnUsers working, and any help with this would be greatly appreciated.

    Read the article

  • Is there a Mac utility that does low level drive integrity check and repair?

    - by Puzzled Late at Night
    The PGP Whole Disk Encryption for Mac OS X Quick Start User Guide version 10.0 contains the following remarks: PGP Corporation deliberately takes a conservative stance when encrypting drives, to prevent loss of data. It is not uncommon to encounter Cyclic Redundancy Check (CRC) errors while encrypting a hard disk. If PGP WDE encounters a hard drive with bad sectors, PGP WDE will, by default, pause the encryption process. This pause allows you to remedy the problem before continuing with the encryption process, thus avoiding potential disk corruption and lost data. To avoid disruption during encryption, PGP Corporation recommends that you start with a healthy disk by correcting any disk errors prior to encrypting. and As a best practice, before you attempt to use PGP WDE, use a third-party scan disk utility that has the ability to perform a low-level integrity check and repair any inconsistencies with the drive that could lead to CRC errors. These software applications can correct errors that would otherwise disrupt encryption. The PGP WDE Windows user guide suggests SpinRite or Norton Disk Doctor. What recourse do I have on the Mac?

    Read the article

  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or has problems with flash games. To say the least, this isn't working for them. Now - I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the idea of the domain controller for the future AD also running a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy, though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself when it comes to accessing file shares on the network via VPN. I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but not every computer will necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear. Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules when it comes to their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.

    Read the article

  • Splitting an HTTP request into multiple byte-range requests

    - by redpola
    I have arrived at the unusual situation of having two completely independent Internet connections to my home. This has the advantage of redundancy, etc., but the drawback that both connections max out at about 6 Mb/s. Each individual outbound HTTP request is directed by my "intelligent gateway" (TP-LINK ER6120) out over one or the other connection for its lifetime. This works fine for complex web pages and utilises both external connections well. However, single-HTTP-request downloads are limited to the maximum rate of one of the two connections. So I'm thinking, surely I can set up some kind of proxy server to direct all my HTTP requests to. For each incoming HTTP request, the proxy server would issue multiple byte-range requests for the desired data and manage the reassembly and delivery of that data to the client's request. I can see this has some overhead, and also some edge cases where there will be blocking problems waiting for data. I also imagine webmasters of single servers would rather I didn't hit them with 8 byte-range requests instead of one request. How can I achieve this HTTP request deconstruction/reconstruction? Or am I just barking mad?
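
    To make the idea concrete, here is a rough sketch of the splitting step using curl (the URL is a placeholder, and it assumes the server reports Content-Length and honours Range requests -- exactly what a real proxy would have to verify per request):

        url="http://example.com/big.iso"
        size=$(curl -sI "$url" | awk 'tolower($1)=="content-length:" {print $2}' | tr -d '\r')
        parts=8
        chunk=$(( (size + parts - 1) / parts ))
        for i in $(seq 0 $((parts - 1))); do
            start=$(( i * chunk )); end=$(( start + chunk - 1 ))
            [ "$end" -ge "$size" ] && end=$(( size - 1 ))
            curl -s -r "$start-$end" -o "part.$i" "$url" &   # one byte range per connection
        done
        wait
        cat part.? > big.iso    # reassemble the pieces in order

    The gateway still has to spread those connections across the two uplinks for this to help; that scheduling is the part a range-aware proxy or an off-the-shelf download accelerator normally handles.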

    Read the article

  • Exchange 2010 mail routing with Hub Transport in multiple sites

    - by jmreicha
    I have two separate physical sites, Site A and Site B. In site A, I have the following: 2 CAS servers, 2 Hub Transport servers, 2 Mailbox servers, 2 Edge servers. In site B, I have the following: 1 CAS, 1 Hub, 1 Mailbox, 1 Edge. Currently everything is working out of site A. That is, all users are housed on mailboxes that are in site A and all inbound mail flow points to site A. I would eventually like to be able to move some of the mailboxes to site B without causing a disruption, for resiliency and redundancy purposes, but I am not quite sure how to go about setting this up or whether it is even possible. So far I have created an Edge subscription in site B and am able to send email out from test accounts set up with mailboxes on the site B Mailbox server. However, I am unable to receive incoming mail messages and am confused. So I'm thinking incoming mail messages are still being directed to site A and then getting stuck because there is no way to route the mail to the site B mailboxes. Is this assumption correct? I am unfamiliar with mail flow and routing, so I am not really sure what I need to be looking at. Would I add the site B Hub Transport to the Edge subscription in site A? Or, I guess more specifically, how would I go about enabling communication and mail flow between mailboxes split across sites A and B?

    Read the article

  • Setting up a global MySQL Cluster in the cloud

    - by GregB
    I'm giving the question an overhaul to more specifically identify where I need help. I use two tools to manage a bunch of cloud servers: Puppet and Rundeck. Both of these can be configured to use a MySQL backend. I'd like to set up an instance of each application in both the U.S. and the U.K., treating the U.K. servers as hot standbys in case of failure in the U.S. I want to use a MySQL cluster so that the data is automatically replicated from the U.S. to the U.K. Because these are hot standbys, high performance is not a goal; redundancy and data integrity are most important. My question revolves around the setup of the MySQL cluster. I want to run three servers, each one running a data node, a SQL node, and a management node. Is this a valid configuration for MySQL Cluster? If so, could someone point me in the right direction for creating such a setup? I've downloaded the official tarball and the official Debian package, and the documentation for them contradicts many of the online tutorials. I'm installing on Ubuntu 10.04.
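
    For orientation, a hedged sketch of a minimal NDB Cluster layout and start sequence (hostnames are placeholders; note that the number of data nodes must be a multiple of NoOfReplicas, which is why this sketch shows two data nodes rather than three, and that data nodes replicate synchronously, so trans-Atlantic latency would be felt on every write):

        # /var/lib/mysql-cluster/config.ini  (kept on the management host)
        [ndbd default]
        NoOfReplicas=2

        [ndb_mgmd]
        HostName=us-node1.example.com

        [ndbd]
        HostName=us-node1.example.com
        [ndbd]
        HostName=uk-node1.example.com

        [mysqld]
        [mysqld]

        # start sequence
        ndb_mgmd -f /var/lib/mysql-cluster/config.ini       # management node
        ndbd --ndb-connectstring=us-node1.example.com        # on each data node host
        # each SQL node's my.cnf then needs:  ndbcluster  and  ndb-connectstring=us-node1.example.com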

    Read the article

  • Expand a volume residing on one X-RAID disk installed on a Netgear ReadyNas Duo v2

    - by Sid
    I've got a Netgear ReadyNAS Duo v2 (2 disk slots). The system is configured with X-RAID, which does not provide flexibility but automatically expands based on a sort of RAID-5 logic. I had two 500 GB hard disks installed, redundant, so I had 500 GB of volume size. I wanted to upgrade the whole system to two 3 TB hard disks while maintaining both the data already on the NAS and the data on one of the two 3 TB hard disks. So I did this: Unplugged one disk from the ReadyNAS; now the ReadyNAS has 1*500 GB, non-redundant. Plugged in one empty 3 TB hard disk; now the ReadyNAS has 1*500 GB + 1*3 TB, redundant. I waited for the resync. I then unplugged the 500 GB hard disk, so that I have only the 3 TB hard disk with the previous data. Now what I want is to copy the data from my other 3 TB hard disk onto the NAS, so that I can then plug that disk into the NAS and use it for redundancy. The problem is that the NAS has the (single) 3 TB hard disk in X-RAID, but the volume does not expand to 3 TB; it remains fixed at 500 GB. Is there a way to tell the ReadyNAS to force expanding the volume to the whole disk without plugging in another hard disk of the same size?

    Read the article

  • Monitoring Between EC2 Regions

    - by ABrown
    I'm working on a small EC2 project that involves a handful of servers in two different regions (US East and EU West). My first task is to implement a Nagios monitoring solution. Monitoring within a region is simple - I just use the private domain names/IPs - but I'm a little unsure of the best way to handle monitoring the second region without setting up a second Nagios install. The environment is fairly static, so I'm not going to be scripting the configuration with the EC2 tools just yet. As I see it, I have two options.
    1. Two Nagios installations (which is overkill for the small number of servers I'm dealing with). Pros: I don't have to alter the security group permissions, nor do I have to pay for the traffic, and there is redundancy in the monitoring solution - I could monitor the Nagios servers. Cons: two installations to deal with, and I'd need to run another server instance.
    2. Have the single installation monitor both regions. Pros: one installation to deal with. Cons: slightly reduced security - the security group will have to have NRPE (5666) opened for one source IP - and also paying for a small amount of bandwidth at the Internet rate for data transfer between the regions.
    I guess my question is: how have others handled this problem, and what are your recommendations? Thanks!
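
    For what it's worth, a sketch of what option 2 looks like from the monitoring host (hostname and plugin path are placeholders; the EU security group would need 5666/tcp opened to the US Nagios host's public IP):

        /usr/lib/nagios/plugins/check_nrpe \
            -H ec2-198-51-100-7.eu-west-1.compute.amazonaws.com \
            -p 5666 -c check_load

    Cross-region checks have to use the instances' public DNS names, since private addresses are only reachable within a region, which is also why that traffic is billed at the Internet rate.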

    Read the article

  • Review: ComponentOne Studio for Entity Framework

    - by Tim Murphy
    While I have always been a fan of libraries that improve coding efficiency and reduce code redundancy, I have mostly been using ones that were in the public domain.  As part of the Geeks With Blogs Influencers program I got my hands on ComponentOne’s Studio for Entity Framework.  Below are my thoughts after working with the product for several weeks. My coding preference has always been maintainable code that is reusable across an enterprise’s portfolio.  Because of this my focus in reviewing this product is less on the RAD components and more on its benefits for layered applications using code-first Entity Framework. Before we get into the pros and cons, here is a summary of the main features listed for SEF: Unified Data Context, Virtual Data Access, More Powerful Data Binding. Pros The first thing that I found to my liking is the C1DataSource. It basically manages a cache for your Entity Model context.  Under RAD conditions this is set up automatically when you drop the object on your design surface.  If you are like me and want to abstract your data management into a library it takes a little more work, but it is still acceptable and gains the same benefits. The second feature that I found beneficial is the definition of views with improved sorting and filtering.  Again, the ease of use of these features is greater on the RAD side, but no capabilities are missing when manipulating objects in code. Linq has become my friend over the last couple of years and it was great to see that ComponentOne had ensured that it remained a first-class citizen in their design.  When you look into this product yourself I would suggest taking a dive into LiveLinq, which allows the joining of different data source types. As I went through discovering the features of this framework I appreciated the number of examples that they supplied for different uses.  Besides showing how to use SEF with WinForms, WPF and Silverlight, they also showed how to accomplish tasks with RAD, code-only and MVVM approaches. Cons The only area where I would really like to see improvement is the level of detail in their documentation.  Specifically, I would like to have seen some of the supporting code explained, such as what some supporting objects did, in the examples instead of having to go to the programmer’s reference. I did find some cases where existing projects had trouble determining the scope that the RAD controls were allowed, but I expect this is something that is in part end-user related. Summary Overall I found Studio for Entity Framework capable and well thought out.  If you are already using the Entity Framework this product will fit into your environment with little effort, in return for greater flexibility and greater robustness in your solutions. Whether the $895 list price for a standard version works for you will depend on your return on investment. Smaller companies with only a small number of projects may not be able to stomach it, but you get a full-featured product that is supported by a well-established company.  The more projects and the more code you have, the greater your return on investment will be. Personally I intend to apply this product to some production systems and will probably have some tips and tricks in the future. del.icio.us Tags: ComponentOne,Studio for Entity Framework,Geeks With Blogs,Influencers,Product Reviews

    Read the article

  • Non-standard installation (installing Linux from Linux)

    - by Evan Plaice
    So, here's my setup. I have one partition with the newest version installed, a second partition with an older version installed (as a backup just in case), a swap partition that both share, and a boot partition so the bootloader doesn't need to be set up after each upgrade. Partitions:

        sda1  ext3  /boot
        sda2  ext4  /      (current version)
        sda3  ext4  /      (old version)
        sda4  swap  /swap
        sda5  ntfs         (contains folders symbolically linked to /home on /)

    So far it has been a very good setup. I can create new boot loaders without screwing it up, and adding my personal files into a new install is as simple as creating some symbolic links (the partition is NTFS in case I need to load Windows on the system again). Here's the issue. I'd like to be able to drop the install into /distro on the current version and install a new version on / of the old version, effectively replacing/upgrading it. The goal is to be able to just swap out new versions as they are released while maintaining redundancy in case I don't like the update. So far I have: downloaded the install.iso, created a folder /distro, copied the install.iso into /distro, and extracted vmlinuz and initrd.lz into /distro. Then I modified /boot/grub/menu.lst with the following entry:

        title Install Linux
        root (hd0,1)
        kernel /distro/vmlinuz
        initrd /distro/initrd.lz

    vmlinuz loads perfectly, but it says it can't find initrd.lz on boot. I have also tried to uncompress the image with:

        unlzma < initrd.lz > initrd.img

    and updating the menu.lst file to match, but that doesn't work either. I'm assuming that vmlinuz (the Linux kernel) loads, fires up the virtual filesystem by creating a ramdisk (initrd), mounts the iso, and launches the installer. Am I missing something here?
    Update: First, I wanted to say that the accepted answer would have been the best option if I was doing a normal Ubuntu install. Unfortunately, I was installing Linux Mint (which lacks the script needed to make debootstrap work). So the problem with the above approach was that I was missing the command that vmlinuz (the Linux kernel) needed to execute to boot into LiveCD mode. By looking in the /boot/grub/grub.cfg file I found what I was missing. Although this method will work, it requires that the installation files reside on their own partition. I took the easy route and used unetbootin to drop the LiveCD on a USB drive and booted from that. Like I said before, debootstrap would have been the ideal solution here. Even though I couldn't use it, I wrote down the steps it would've taken to use it.
    Step One: Format sda3 (the partition with the old copy of Linux that's being overwritten). I used gparted to format it as ext4 from within the current Linux install. How this is done varies based on what tools you prefer to use.
    Step Two: Mount the newly formatted partition (we'll call the mount ubuntu for simplicity):

        sudo mkdir /mnt/ubuntu
        sudo mount /dev/sda3 /mnt/ubuntu

    Step Three: Get debootstrap:

        sudo apt-get install debootstrap

    Step Four: Mount the install disk (replace ubuntu.iso with the name of your install disk):

        sudo mkdir /media/cdrom
        sudo mount -o loop ~/ubuntu.iso /media/cdrom

    Step Five: Install the OS using debootstrap (replace feisty with the version you're installing and amd64 with your processor's architecture):

        sudo debootstrap --arch amd64 feisty /mnt/ubuntu file:/media/cdrom

    The settings here vary. While I loaded debootstrap from an install iso, you can also have debootstrap automatically download and install from a repository link (while most of these repositories contain Debian versions, I'm still not clear as to whether Ubuntu has similar repositories). Here is a list of the Debian package repositories and their mirrors. This is how you'd deploy debootstrap directly from a repository:

        sudo debootstrap --arch amd64 squeeze /mnt/debian http://ftp.us.debian.org/debian

    Here's the link that I primarily used to figure this out.
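
    For reference, the piece missing from the original menu.lst entry is the set of parameters that the LiveCD's own bootloader passes to the kernel so that casper can find and mount the ISO. A hedged sketch of what such an entry typically looks like for an Ubuntu/Mint-style ISO kept in /distro (the file name is a placeholder; the exact parameters should be copied from the grub.cfg inside the ISO actually being used):

        title Install Linux (boot the ISO directly)
        root (hd0,1)
        kernel /distro/vmlinuz boot=casper iso-scan/filename=/distro/install.iso noprompt
        initrd /distro/initrd.lz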

    Read the article

  • How to create a Global Rule that stores a document’s folder path in a custom metadata field

    - by Nicolas Montoya
    How to create a Global Rule that stores a document’s folder path in a custom metadata field. Efficiency purists would argue that redundancy is not necessary. In real life, we are willing to pay a price for performance - i.e. to have information at our fingertips. We have run into customers opting to store a document's folder path as a document metadata field. They have their reasons, half of the ECM community will agree with them, and the other half would raise an eyebrow. In the end, they are getting creative to achieve their document management goals. The steps below outline how to create a Global Rule that stores a document's folder path in a custom metadata field: Create a Global Rule via Configuration Manager > Rules Tab > Add. Then check "Is global rule with priority". Then check "Use rule activation condition". Then go to "Edit" and check the actions for this Script Properties. Then click OK, and the rule activation condition will appear. Then go to the Fields tab and add a Rule Field: select the target Custom Metadata Field and click OK, then check "Is derived field", then "Edit", then go to the Custom tab in the Script Properties window and enter the custom script below:

        <$if #active.dCollectionPath$>
          <$dprDerivedValue=#active.dCollectionPath$>
        <$else$>
          <$dprDerivedValue=#active.xCollectionIDPath$>
        <$endif$>

    For more information on the dCollectionPath property, check Section 8.2 Folder Services in the Oracle® Fusion Middleware Services Reference Guide for Oracle Universal Content Management 11g Release 1 (11.1.1): http://docs.oracle.com/cd/E21043_01/doc.1111/e11011/c08_folders002.htm The above rule will keep the custom metadata field updated with the folder path information when a document is checked in via the Content Server (CS) web interface or the Desktop Integration Suite (DIS).

    Read the article

  • SQL SERVER – Why Do We Need Master Data Management – Importance and Significance of Master Data Management (MDM)

    - by pinaldave
    Let me paint a picture of everyday life for you.  Let’s say you and your wife both have address books for your groups of friends.  There is definitely overlap between them, so that you both have the addresses for your mutual friends, and there are addresses that only you know, and some only she knows.  They also might be organized differently.  You might list your friend under “J” for “Joe” or even under “W” for “Work,” while she might list him under “S” for “Joe Smith” or under your name because he is your friend.  If you happened to trade, neither of you would be able to find anything! This is where data management would be very important.  If you were to consolidate into one address book, you would have to set rules about how to organize the book, and both of you would have to follow them.  You would also make sure that poor Joe doesn’t get entered twice under “J” and under “S.” This might be a familiar situation to you, whether you are thinking about address books, record collections, books, or even shopping lists.  Wherever there is a lot of data to consolidate, you are going to run into problems unless everyone is following the same rules. I’m sure that my readers can figure out where I am going with this.  What is SQL Server but a computerized way to organize data?  And Microsoft is making it easier and easier to get all your “addresses” into one place.  In the 2008 R2 version of SQL Server they introduced a new tool called Master Data Services (MDS) for Master Data Management, and they have improved it for the new 2012 version. MDM was hailed as a major improvement for business intelligence.  You might not think that an organizational system is terribly exciting, but think about the kind of “address books” a company might have.  Many companies have lots of important information, like addresses, credit card numbers, purchase history, and so much more.  To organize all this efficiently so that customers are well cared for and properly billed (only once, not never or multiple times!) is a major part of business intelligence. MDM comes into play because it will comb through these mountains of data and make sure that all the information is consistent, accurate, and all placed in one database so that employees don’t have to search high and low and waste their time. MDM also has operational MDM functions.  This is not a redundancy.  Operational MDM means that when one employee updates one bit of information in the database, for example updating a new address for a customer, operational MDM ensures that this address is updated throughout the system so that all departments will have the correct information. Another cool thing about MDM is that it features Master Data Services Configuration Manager, which is exactly what it sounds like.  It has a built-in “helper” that lets you set up your database quickly, easily, and with the correct configurations.  While talking about cool features, I can’t skip over the add-in for Excel.  This allows you to link certain data to Excel files for easier sharing and uploading. In summary, I want to emphasize that the scariest part of the database is slowly disappearing.  Everyone knows that a database - one consolidated area for all your data - is a good idea, but the idea of setting one up is daunting.  But SQL Server is making data management easier and easier with features like Master Data Services (MDS).
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Master Data Services, MDM

    Read the article

  • Xsigo and Oracle's Storage

    - by Philippe Deverchère
    Xsigo, a virtual network infrastructure provider, has recently been acquired by Oracle. Following this acquisition, one might ask why it is important to Oracle and how Oracle's storage is going to benefit in the long term from this virtualized infrastructure layer. Well, the first thing to understand is that virtual networking addresses both network and storage connectivity. Oracle Virtual Networking, as the Xsigo technology is now called, connects any server to any network and storage, so this is not just about connecting servers to the Internet or intranet; it is also, for a large part, about connecting servers to NAS and SAN storage. Connecting servers to storage has become increasingly complex in the past few years because of the strong emergence of virtualization at the operating system level. 50% of enterprise workloads are now virtualized, up from 18% in 2009, resulting in a strong consolidation of various applications in a high-density server footprint. At the same time, server I/O capability increased 8x in the last 8 years. All this has pushed IT administrators to multiply the number of I/O connections in the back-end of their physical servers, resulting in a messy and very hard to manage networking infrastructure. Here is a typical view of a rack back-end when no virtual networking is used. We consider that today:
    - 75% of users have ten or more Ethernet ports per server
    - 85% of users have two or more SAN ports per server
    - 58% have had to add connectivity to a server specifically for VMs
    - 65% consider cable reduction a priority
    The average is 12 or more ports per server, resulting in an extremely complex infrastructure to manage. What Oracle wants to achieve with its Oracle Virtual Networking offering is pretty simple. The objective is to eliminate the complexity through a dramatic reduction of cabling between servers and storage/networks. It is also to provide a software-based management system so that any server can be connected to any network or any storage, on demand, and without physical intervention on the infrastructure. At the end of the day, the picture on the left shows what one wants to get for the back-end of customers' racks: just a couple of connections on each physical server to provide a simple, agile and fast network infrastructure for both storage and networking access. This is exactly what the Oracle Virtual Networking solution does. It transforms a complex, error-prone, difficult to manage and expensive networking infrastructure into a simple, high-performance and agile solution for the data center. Practically speaking, and for the sake of simplicity, imagine that each server just hosts a minimal number of physical InfiniBand HCAs (Host Channel Adapters) with two links (for redundancy) onto the Oracle Fabric Interconnect director. Using the Oracle Fabric Manager software, you'll then be able to create virtual NICs and HBAs (called vNICs and vHBAs) that will be seen by the servers as standard NICs and HBAs, and associate them with networks and storage systems which are physically connected to the back-end of the director through standard Fibre Channel and Ethernet GbE/10GbE ports. In addition to this incredibly simple "at-a-click" connectivity capability, the Oracle Virtual Networking solution offers powerful features such as network isolation, Quality of Service, advanced performance monitoring and non-disruptive reconfiguration, migration and scalability of the networking infrastructure.
    So let's go back now to our initial question: why is Oracle Virtual Networking especially important to Oracle's storage solutions? After all, one could connect any storage to the back-end of the Oracle Fabric Interconnect directors, right? The answer is pretty simple: since Oracle owns both the virtualized networking infrastructure and the storage (ZFS-SA, Pillar Axiom and tape), it is possible to imagine several ways in the future to add value when it comes to connecting storage to a virtualized storage network: enhanced storage capabilities, converged management between storage and network, improved diagnostic capabilities and optimized integration resulting in higher performance and unique features/functions. Of course, all this is not going to be done overnight, and time will tell which evolutions come first. But there is little doubt that the integration of Xsigo within Oracle is going to create opportunities for Oracle's storage!

    Read the article

  • Windows Azure Recipe: Consumer Portal

    - by Clint Edmonson
    Nearly every company on the internet has a web presence. Many are merely using theirs for informational purposes. More sophisticated portals allow customers to register their contact information and provide some level of interaction or customer support. But as our understanding of how consumers use the web increases, the more progressive companies are taking advantage of the social web and rich media delivery to connect at a deeper level with the consumers of their goods and services. Drivers: cost reduction, scalability, global distribution, time to market. Solution: here's a sketch of how a Windows Azure Consumer Portal might be built out. Ingredients:
    - Web Role – this will host the core of the solution. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally PHP or node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads.
    - Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings but are fault tolerant and have data redundancy built in.
    - Access Control (optional) – if identity needs to be tracked within the solution, the access control service combined with the Windows Identity Foundation framework provides out-of-the-box support for several social media platforms including Windows LiveID, Google, Yahoo!, and Facebook. It also has a provider model to allow integration with other platforms as well.
    - Caching (optional) – for sites with high traffic and lots of read-only data and lists, the distributed in-memory caching service can be used to cache and serve up static data at higher scale and speed than direct database requests. It can also be used to manage user session state.
    - Blob Storage (optional) – for sites that serve up unstructured data such as documents, video, audio, device drivers, and more. The data is highly available and stored redundantly across data centers. Each entry in blob storage is provided with its own unique URL for direct access by the browser.
    - Content Delivery Network (CDN) (optional) – for sites that service users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.
    Training Labs: these links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
    - Windows Azure (16 labs): Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    - SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    - Windows Azure Services (9 labs): As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Fast Data - Big Data's achilles heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database.  Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.   Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity.   The software in the Fast Data solution stack involves three key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database.   All three of these technologies address a high-throughput, low-latency data management requirement.   Oracle Event Processing enables continuous queries to filter the Big Data fire hose, chains intelligent events into real-time service invocations, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time so that SQL statements can look for triggers on events like a breach of a weighted moving average on a real-time data stream.    Oracle Coherence is a distributed grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency-speed access.  It also has some special capabilities to deploy remote behavioral execution for "near data" processing.   The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low-latency reads.  For example, when large sensor networks are generating data that needs to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics.  Another example might be storing cookies from user web sessions for ultra-low-latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis.  Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low-latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry-specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location-based personalization service. The question then becomes how these things work together to deliver an end-to-end Fast Data solution.  The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together.   You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval.   Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database.   On the stream processing side, it is similar to the Coherence case.
As your sliding windows get larger, holding all the data in the stream can become difficult and out of band data may need to be offloaded into persistent storage.  OEP needs an extreme speed database like Oracle NoSQL Database to help it continue to perform for the real time loop while dealing with persistent spill in the data stream.  Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together.  OEP & Oracle NoSQL Database.

    Read the article

  • Property overwrite behaviour

    - by jeremyj
    I thought it worth sharing about property overwrite behaviour, because I found it confusing at first, in the hope of preventing some learning pain for the uninitiated with MSBuild :-)
    The confusion for me came because of the redundancy of using a Condition statement in a _project_ level property to test that a property has not been previously set. What I mean is that the following two statements are always identical in behaviour, regardless of whether the property has been supplied on the command line -

        <PropertyGroup>
          <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
        </PropertyGroup>

    has the same behaviour regardless of command line override as -

        <PropertyGroup>
          <PropA>PropA set at project level</PropA>
        </PropertyGroup>

    i.e. the two above property declarations have the same result whether the property is overridden on the command line or not.
    To prove this, experiment with the following .proj file -

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" >
          <PropertyGroup>
            <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
          </PropertyGroup>
          <Target Name="Target1">
            <Message Text="PropA: $(PropA)"/>
          </Target>
          <Target Name="Target2">
            <PropertyGroup>
              <PropA>PropA set in Target2</PropA>
            </PropertyGroup>
            <Message Text="PropA: $(PropA)"/>
          </Target>
          <Target Name="Target3">
            <PropertyGroup>
              <PropA Condition=" '$(PropA)' == '' ">PropA set in Target3</PropA>
            </PropertyGroup>
            <Message Text="PropA: $(PropA)"/>
          </Target>
          <Target Name="Target4">
            <PropertyGroup>
              <PropA Condition=" '$(PropA)' != '' ">PropA set in Target4</PropA>
            </PropertyGroup>
            <Message Text="PropA: $(PropA)"/>
          </Target>
        </Project>

    Try invoking it using both of the following invocations and observe the output -

        1) >msbuild blog.proj /t:Target1;Target2;Target3;Target4
        2) >msbuild blog.proj /t:Target1;Target2;Target3;Target4 "/p:PropA=PropA set on command line"

    Then try those two invocations with the following three variations of specifying PropA at the project level -

        1) <PropertyGroup>
             <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
           </PropertyGroup>

        2) <PropertyGroup>
             <PropA>PropA set at project level</PropA>
           </PropertyGroup>

        3) <PropertyGroup>
             <PropA Condition=" '$(PropA)' != '' ">PropA set at project level</PropA>
           </PropertyGroup>

    Read the article

  • 11gR2 RAC: how ASM finds its spfile

    - by Liu Maclean
    In 11gR2 RAC both the OCR and the voting disks can be stored in ASM, which is quite different from 10g and earlier, where they had to live outside ASM. On top of that, in 11gR2 the ASM spfile itself is normally stored inside an ASM diskgroup. That naturally raises a chicken-and-egg question: ASM has to read its spfile in order to start, yet the file lives in a diskgroup that can only be read after ASM has started and mounted it. A reader asked exactly this on T.askmaclean.com: "hello maclean, the spfile shows up as ASMCMD> spget +CRSDG/rac/asmparameterfile/registry.253.787925627. ASM is itself an Oracle instance whose parameter file sits inside a diskgroup - how can the instance read its spfile before any diskgroup is mounted? thanks!"
    Here is the explanation. In 11.2, Oracle Clusterware locates its voting disk files differently from 11.1 and 10.2: the voting files are no longer tracked through the OCR, which is exactly why in 11.2 both the OCR and the voting disks can be placed in ASM. Instead, the voting files are found through the CSS voting file discovery string recorded in the GPnP profile, and when that discovery string points to ASM it effectively resolves to the ASM discovery string. In this environment the LUNs used by ASM are presented through udev with device names of the form /dev/rasm-disk*, and the GPnP profile can be dumped with gpnptool get:

        [grid@maclean1 trace]$ gpnptool get
        Warning: some command line parameters were defaulted. Resulting command line: /g01/grid/app/11.2.0/grid/bin/gpnptool.bin get -o-
        <?xml version="1.0" encoding="UTF-8"?>
        <gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="9" ClusterUId="452185be9cd14ff4ffdc7688ec5439bf" ClusterName="maclean-cluster" PALocation="">
        <gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.1.0" Adapter="eth0" Use="public"/><gpnp:Network id="net2" IP="172.168.1.0" Adapter="eth1" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile>
        <orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/>
        <orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*" SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/>
        <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>L1SLg10AqGEauCQ4ne9quucITZA=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>rTyZm9vfcQCMuian6isnAThUmsV4xPoK2fteMc1l0GIvRvHncMwLQzPM/QrXCGGTCEvgvXzUPEKzmdX2oy5vLcztN60UHr6AJtA2JYYodmrsFwEyVBQ1D6wH+HQiOe2SG9UzdQnNtWSbjD4jfZkeQWyMPfWdKm071Ek0Rfb4nxE=</ds:SignatureValue></ds:Signature>
        </gpnp:GPnP-Profile>
        Success.

    Note these two entries in the profile:
    - <orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/> ==> the CSS voting files are located via ASM
    - <orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*" SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/> ==> the ASM disk discovery string is /dev/rasm*, and the SPFile entry is the ASM alias of the ASM parameter file
    The GPnP profile therefore only records an ASM alias for the spfile; before any diskgroup is mounted that alias cannot be resolved through the normal ASM directory, so ASM needs another way to reach the actual file.
    Let's look at where the spfile +SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933 physically sits on the ASM disks:

        [grid@maclean1 wallets]$ sqlplus / as sysasm

        SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 17 05:45:35 2012
        Copyright (c) 1982, 2011, Oracle. All rights reserved.
        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
        With the Real Application Clusters and Automatic Storage Management options

        SQL> set linesize 140 pagesize 1400
        SQL> col "FILE NAME" format a40
        SQL> set head on
        SQL> select NAME "FILE NAME", AU_KFFXP "AU NUMBER", NUMBER_KFFXP "FILE NUMBER", DISK_KFFXP "DISK NUMBER"
             from x$kffxp, v$asm_alias
             where GROUP_KFFXP = GROUP_NUMBER and NUMBER_KFFXP = FILE_NUMBER
             and name in ('REGISTRY.253.788682933')
             order by DISK_KFFXP,AU_KFFXP;

        FILE NAME                                 AU NUMBER FILE NUMBER DISK NUMBER
        ---------------------------------------- ---------- ----------- -----------
        REGISTRY.253.788682933                           39         253           1
        REGISTRY.253.788682933                           35         253           3
        REGISTRY.253.788682933                           35         253           4

        SQL> col path for a50
        SQL> select disk_number,path from v$asm_disk where disk_number in (1,3,4) and GROUP_NUMBER=3;

        DISK_NUMBER PATH
        ----------- --------------------------------------------------
                  3 /dev/rasm-diske
                  4 /dev/rasm-diskf
                  1 /dev/rasm-diskc

    As you can see, the ASM spfile is stored with triple redundancy (the diskgroup is high redundancy): AU 39 on disk 1 (/dev/rasm-diskc), AU 35 on disk 3 (/dev/rasm-diske) and AU 35 on disk 4 (/dev/rasm-diskf). Now read the headers of those disks with kfed:

        [grid@maclean1 wallets]$ kfed read /dev/rasm-diske|grep spfile
        kfdhdb.spfile: 35 ; 0x0f4: 0x00000023
        [grid@maclean1 wallets]$ kfed read /dev/rasm-diskc|grep spfile
        kfdhdb.spfile: 39 ; 0x0f4: 0x00000027
        [grid@maclean1 wallets]$ kfed read /dev/rasm-diskf|grep spfile
        kfdhdb.spfile: 35 ; 0x0f4: 0x00000023

    The kfdhdb.spfile field in each ASM disk header records the allocation unit on that disk where the copy of the ASM spfile starts. So at startup ASM simply scans the disks matched by the DiscoveryString in the GPnP profile, reads kfdhdb.spfile from the disk headers, and fetches its parameter file directly from those allocation units - no diskgroup has to be mounted first. That is how the ASM instance bootstraps itself, and the apparent chicken-and-egg problem is resolved.
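
    A quick way to see this for oneself (a small sketch reusing the udev device names above; the related header field kfdhdb.spfsz, where present, records the spfile size in blocks alongside its starting AU):

        for d in /dev/rasm-diskc /dev/rasm-diske /dev/rasm-diskf; do
            echo "== $d"
            kfed read $d | grep -E 'kfdhdb.spfile|kfdhdb.spfsz'
        done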

    Read the article

  • Explicit behavior with checks vs. implicit behavior

    - by Silviu
    I'm not sure how to phrase the question, but I'm interested to know what you guys think of the following situations and which one you would prefer. We're working on a client-server application with WinForms. We have a control with some fields that are calculated automatically when another field is filled in. So we have a currency field which, when filled in by the user, triggers the automatic filling of another field, maybe more fields. When the user fills in the currency field, a Currency object is retrieved from a cache based on the string entered by the user. If the entered currency is not found in the cache, a null reference is returned by the cache object. Further down, when asking the application layer to compute the other fields based on the currency, a null currency yields a null value for each dependent field. This way the default, implicit behavior is to clear all fields, which is the expected behavior. What I would call the explicit implementation would be to verify that the Currency object is null, in which case the dependent fields are cleared explicitly. I think the latter version is clearer, less error prone and more testable, but it implies a form of redundancy. The former version is not as clear, and it implies a certain behavior from the application layer which is not expressed in the tests. Maybe in the lower-layer tests, but when the need arises to modify the lower layers so that, given a null currency, something else should be returned, I don't think a test that says just that, without a motivation, is going to be an impediment to introducing a bug in the upper layers. What do you guys think?

    Read the article

  • Large scale storage for incrementally-appended documents?

    - by Ben Dilts
    I need to store hundreds of thousands (right now, potentially many millions) of documents that start out empty and are appended to frequently, but never otherwise updated or deleted. These documents are not interrelated in any way, and just need to be accessed by some unique ID. Read accesses are some subset of the document, which almost always starts midway through at some indexed location (e.g. "document #4324319, save #53 to the end"). These documents start very small, at several KB. They typically reach a final size around 500 KB, but many reach 10 MB or more. I'm currently using MySQL (InnoDB) to store these documents. Each of the incremental saves is just dumped into one big table with the document ID it belongs to, so reading part of a document looks like "select * from saves where document_id=14 and save_id >= 53 order by save_id", then manually concatenating it all together in code. Ideally, I'd like the storage solution to be easily horizontally scalable, with redundancy across servers (e.g. each document stored on at least 3 nodes) and easy recovery of crashed servers. I've looked at CouchDB and MongoDB as possible replacements for MySQL, but I'm not sure that either of them makes a whole lot of sense for this particular application, though I'm open to being convinced. Any input on a good storage solution?

    Read the article

  • Source Control Manager Backend

    - by Gabriel Parenza
    Hi friends, what do you think is the better approach for a source control manager backend? I am weighing the file system vs. a hosted Subversion service.
    Hosted Subversion (my company already has another group taking care of this). Advantages: zero maintenance on our end; auto-backup and recovery; reliability through auto-backup and file redundancy; file history view built in, plus file merge and file diff.
    On the other hand, while the file system does not have the features mentioned above, it is much simpler. Moreover, if the files are hosted on a Linux machine which is backed up, that takes care of file system crash issues. Subversion would need working copies, which are going to be on this same Linux machine, hence the wish to not have an extra layer. Folks, I am looking for stronger reasons why I should take Subversion instead of keeping things simple and going with the file system. Let me know your opinions. Many thanks in advance, Gabriel. PS: I have explored a few commercial source control managers, and have decided to go this route as it better suits our needs.

    Read the article

  • Displaying Many-To-Many Database relationship in VB.NET 2008 with DataGrid, MS SQL 2008

    - by user337501
    My computer bombed while posting this; I couldn't find a duplicate question, but if there is one, forgive me. So, I've run into a wall, and rather than use a ladder to avoid it, I'd like to go through it. I'm setting up what I can best describe as a many-to-many relationship in a database. To give an example, imagine I have three primary tables: Items, Categories, Sections (never mind the potential redundancy). Then I have another table, Properties. Items, Categories, and Sections can each be associated with many properties, and a single property can be associated with one, all, or none of the other tables. The best way I can figure to do this is to have join tables make the relationship, i.e. tblItems----(Foreign Key)----tblItems_To_Properties----(Foreign Key)----tblProperties. In this example, tblItems simply has an "ItemID" primary key. tblItems_To_Properties has its own primary key (tblItems_To_PropertiesID), a foreign key to the item (ItemID) and a foreign key to the property (PropertyID). The Properties table simply has its primary key (PropertyID). I hope this example isn't too confusing... if I have to, I can find a way to put a diagram up or something. My problem is, I want to display this in a DataGrid using the master-detail method (DevExpress GridControl). I use tblItems as a test, and I can see the items in the parent view, but in the child view I see (understandably) the join table and that is it. My goal is to make it so the grid ignores the join table and shows the Properties table as the only child. Any help on this method or insight into another solution would be much appreciated.

    Read the article
