Search Results

Search found 7470 results on 299 pages for 'storage engines'.


  • many partitions on a single filegroup? does it make sense?

    - by river0
    Hi, I'm designing a data warehouse solution and I'm a newbie in disk configuration issues, so let me explain. Our storage is spread over 6 storage enclosures, each with 5 RAID-1 disk arrays and 2 LUNs defined per disk array, which makes a total of 48 LUNs (this follows the Microsoft Fast Track recommendations for data warehouse architectures). I would like to partition my data; on other projects I have worked on, we always followed a 1 partition - 1 filegroup rule. The Microsoft Fast Track recommendations advise creating a filegroup and then, for that filegroup, a data file per LUN... but I intend to have week-level partitioning... if I apply that rule I think I'll get too many files and a complex layout. I'm thinking of creating just one filegroup (with the 48 LUN data files), but still creating the partitions, since I want to keep some of the benefits of partitions like partition switching... Is this scenario not recommended? What would you suggest?

    Read the article

  • Question regarding a NAS server and remote users

    - by JB
    I have a client that requires a massive amount of storage space but: doesn't want to spend very much money; needs remote users (across the country) to be able to pull data from it and store to it as well. Can this be done with a NAS server such as the Western Digital Sharespace Network Storage System? I do not believe the client wants to spend over $1400, and HP is offering 8TB for $1299. Also, if anyone has any other ideas besides using a NAS, please let me know. Thanks in advance for any help.

    Read the article

  • Windows 2008 terminal server - How to restrict access to DVD/floppy?

    - by test1839
    I have a very simple task. I need to block access to removable media (CD, DVD, floppy, USB drives, etc.) on a Windows 2008 R2 Terminal Server for users, and allow it for admins. I tried to enable the following policy in a GPO: User Configuration / Administrative Templates / System / Removable Storage Access: "All Removable Storage classes: Deny all access" = Enabled. But it did not work. I tried different physical and virtual 2008 servers with the same result. It works on Windows 7 but not on Windows 2008. Has anyone had success with this setting on Windows 2008? Thank you

    Read the article

  • Server vendor that allows 3rd party disks

    - by Alvin S
    As noted here, Dell is no longer allowing 3rd-party disks to be used with their latest servers. As in, they don't work, period. Which means that if you buy one of these boxes and want to upgrade the storage later, you have to buy disks from Dell at significant premiums. Dell has just given me a very strong reason to take my server business elsewhere. My company buys (instead of leasing) our servers, and typically uses them for 5 years. I need to be able to upgrade/repurpose storage periodically, and do not want to be locked in to whatever Dell might have in stock, at inflated prices to boot. As you will see in the comments of the above link, it seems HP is doing the same thing. I am looking for a server vendor that offers a 3-5 year warranty with same-day/next-day onsite service, and allows me to use 3rd-party disks. Suggestions?

    Read the article

  • best filesystem for an AWS S3-like service

    - by gucki
    Hi! I need to build a fault-tolerant, highly available key/value store (no POSIX, only the same functionality as S3) using cheap existing hardware. The storage should be able to handle several billion items. The maximum item size is around 1GB; most are only several KB. What's the best software/filesystem for this task? I already had a brief look at MogileFS, MongoDB (GridFS) & GlusterFS, but I'm not really sure which is stable & fault-tolerant enough. The simpler the setup and later expansion, the better :). Corin

    Read the article

  • Advantage of using Nexenta vs. OpenSolaris

    - by jotango
    I am currently building a NAS for about 24 TB of storage. Video files, slow access, long-term storage. No performance issues. I am currently undecided between buying a JBOD case and installing OpenSolaris (because of ZFS), or purchasing a Nexenta license. The difference is about $12,500 for licenses over three years. What would you see as the main advantage of purchasing a Nexenta license, besides the support? Did Nexenta really enhance the basic OpenSolaris, or is it just a lot of marketing speak? No one has really wanted to answer that question.

    Read the article

  • Alternatives to FTP

    - by Jack Hickerson
    I need to share files with clients outside of my business, and unfortunately our FTP server is becoming too much of a hassle (with regard to clients' use of an FTP client, and creating password-protected downloads based on customized account privileges). Essentially, I need: a remote service that mimics an FTP server with a web interface (easy for basic internet users to comprehend); over 100 GB of storage; file transfer sizes over 2 GB; customizable user account privileges (password-protected downloads); secure storage and data transfer; preferably less than $100/mo. I have already looked into some services that almost meet my requirements (StreamFile.com, box.net, onehub.com, filesanywhere.com) - has anyone used a service they would recommend? cheers, jack

    Read the article

  • Efficient organization of spare cables and hardware

    - by Jake Wharton
    As many of you also likely do, I have a growing collection of cables, hardware, and spare parts (screws, connectors, etc.). I'm looking to find a good system of organization so that everything isn't a tangled mess, mismatched, and potentially able to be damaged. Since the three things listed above all have varying sizes and degrees of delicacy, this poses an interesting problem. Presently I have those cheap plastic storage bins you find at Wal-Mart for everything. Cables that were once wrapped neatly have become tangled due to numerous "I know I have a cable for this" moments. Hardware is mixed in other bins with odds and ends, with no protection from each other. NICs, CPUs, and HDDs are all interacting and likely causing damage. Finally, there are stray parts sprinkled amongst these two, both in plastic bags and loose. I'm looking to unify this storage into a controlled chaos. Here are my thoughts: Odds and ends are the easiest. Screws, connectors, and small electronic parts lend themselves perfectly to tackle boxes and jewelry boxes. Since these are usually dynamically compartmentalized, I can adjust for the contents and label them on the outside or inside of the lid. Cables are easily wrangled with short velcro strips, but that doesn't stop them from being all mixed in together. Hardware is the worst offender. Size, shape, and degree of delicacy change with nearly every piece. I'm willing to sacrifice a bit of organization for something reasonably efficient. What are all your thoughts? What is the best type of tackle or jewelry box to use? Most of them are cheap and flimsy. Is there a better alternative? How can I organize cables so I know exactly (within reason) where one is? What about associating cables with hardware (wall adapter to router, etc.)? What kind of storage unit lends itself to all shapes of hardware? Do I need to separate by size or degree of delicacy for better organization?

    Read the article

  • VMware ESXi 4 On-Disk Data Deduplication - possible and supported?

    - by hurikhan77
    Environment: We are running multiple web, database, and application servers which usually share a pretty common installation (Gentoo Linux) and similar configuration in VMware ESXi 4. The differences are usually only some installed features or differing component versions. To create a new server, I usually choose the most similar (by features) running server, rsync a copy of it into freshly mounted filesystems, run grub, reconfigure and reboot. Problem: Over time this duplicates many on-disk data blocks, which probably adds up to several tens of gigabytes. I suppose that if I could use a base system as a template, with the actual machines based on top of that and only writing changed blocks to some sort of "diff image", performance should improve (increased cache hit rate) and storage efficiency should increase (deduplicated storage space). This would be similar to what ESXi already supports for RAM deduplication (page sharing). Question: Is there any way to easily do this on ESXi 4? I already share the portage tree via NFS, but this would not work for the rootfs.

    Read the article

  • Fast, reliable data transfers from/to China

    - by Nils
    We are a small company and we will need to transfer rather large amounts of data (10GB+ each time) between Europe and China in the near future. As many may have experienced, Internet connections to or from China can be rather unreliable and slow at times without any apparent reason. For example, while sending data to China via FTP generally works well, it can be painfully slow in the other direction. Currently, we are investigating new ways to have high transfer rates in both directions. So far we have tried: FTP (see above) FTP over VPN services (generally slower than direct connections) F2F (like Retroshare or Freenet - slow!!) Aspera (fast but expensive!) BitTorrent (unreachable end nodes, b/c of firewalls which we must not configure) We would like to try: Cloud storage (e.g. Amazon S3, Google Storage) - are those services always and reliably reachable from inside China? Point-to-Point VPN (currently not possible, b/c of the network, see above) I'd be especially grateful to hear from people who have already dealt with this kind of problem before.

    Read the article

  • What disk setup is needed / best practice for hypervisor-only servers?

    - by Luke404
    Planning to buy some servers to run a hypervisor (Citrix XenServer or VMware vSphere, we still have to decide between the two), we'd like to boot off the local redundant SD card module offered by various vendors (e.g. Dell, HP, etc.). The actual VMs will run from an existing iSCSI SAN (which, by the way, can't support booting the servers directly off the SAN). What are the reasons, if any, to choose completely diskless servers vs. having some local storage? And what would be the guidelines for choosing that local storage (number of spindles, RAID level, etc.)?

    Read the article

  • Network Drive Via Ethernet Port for Speed?

    - by Yar
    I have a Macbook with Firewire 400 and USB 2.0, so the only way I can get fast external storage is through the Ethernet port. A really fast firewire 800 drive on ANOTHER computer is actually much faster than the built-in drive (according to XBench). So I thought I would try to go one better and buy an ethernet-ready drive. I bought a Seagate GoFlex™ Home Network Storage System, and it seems like the only way to get it to work is to plug it into a router. Can this drive be used without a router (i.e., direct to computer)? Are there any drives that can be plugged directly into the ethernet port for fast access? I don't want the drive on my router: I want it on my computer. Ideally I'd need 7200rpm or faster, too... Update: Just chatted with Seagate and they said that this particular drive will not work that way. Will any others?

    Read the article

  • Any experience with SATA SAS Interposer Cards?

    - by korkman
    Driven by the current price difference between SATA and SAS disks on one side, and the potentially bad behaviour of SATA disks in bigger storage arrays on the other side, I have found so-called SATA-to-SAS interposer cards. Advertised as "seamlessly adding SAS capabilities to existing SATA disk drives", I wonder if anyone here has had some experience with these or similar products. The major benefits I can identify are the increased cable voltage (if all drives are SAS connected), the ability to power-cycle the drive, and multipath (if desired). Obviously the SATA drive will still have to be a RAID edition. The question is: Do these cards indeed increase the overall reliability of a storage system, or will failing SATA disks cause trouble nevertheless? Edit: I'm not asking for hypothetical answers, only actual experience please. I'm well aware that the typical 10k SAS drive is more reliable (and better performing) than a 7200 RPM SATA drive. But how does a nearline SAS drive, which is physically the same disk as its SATA counterpart, compare to the SATA version with an interposer?

    Read the article

  • Expandable, Redundant, Easily recoverable

    - by MeIr
    I am desperate at this point. I have been looking for a "big storage" solution for a while on my own, and I can't find anything that would suit my needs. But now push has come to shove. Current situation: I have about 6TB of data storage (already full) - a Drobo. Yesterday the Drobo died on me, and it put me in a bad situation - I can't recover my data without buying another Drobo. From extensive research online I realized that Drobo is not the safest bet, and by now it seems a very poor choice. I ordered a new Drobo to try to get my data back; however, I don't want to be in the same situation later, and continuing to use Drobo promises this event will re-occur. What I am looking for: 1) Inexpensive setup. 2) Dynamically extendable - add more drives and/or replace a drive with a bigger capacity. 3) Redundant - protected against 1-3 drive failures, depending on the total number of drives. For the sake of argument, let's assume that for every 4 drives, one should be able to fail without data loss. 4) Easy data recovery - let's say the unforeseen happens; I would like to be able to recover information without buying new tools or replacements - example: a new Drobo. 5) Should be USB or Network Attached Storage. 6) No demand on speed. It doesn't have to be fast; I am not doing video editing on the setup. However, if the option exists, it would be nice to have decent speed. After thoughts: I reviewed a few options and FreeNAS looks nice, but it doesn't have #2 - dynamic extendability. There are workarounds with pools, but it seems a bit complicated and unnecessary. Moreover, it seems like data safety is a big question - I saw some horror stories. Please advise on what options I have and what seems like an optimal solution (if any). I don't care if it has to be a Windows or Linux box or any other OS and/or software that has to run on top, but a simple solution is more attractive. Thank you! P.S: Feel free to ignore "After thoughts".

    Read the article

  • What is the "real" difference between a NAS and NFS?

    - by warren
    From an end-user perspective, what is the difference between a NAS device and using NFS exports from a file server? They seem to accomplish the same end result. The difference between a SAN and other file storage is related (in my experience) to how they are connected to the server infrastructure. However, the difference between a NAS, connecting over standard ethernet, and NFS (sharing storage off specific servers, also over the network), seems more nebulous. Is there a good reason to pick a NAS filer over NFS on servers?

    Read the article

  • Email server - Disk quota sizes - suggestions?

    - by Ian H
    Working out a new server for an agency of 200 employees - with approx. 240 email accounts. Internally I'm arguing with myself over the amount of drive space to allocate to each user for the disk quota; I'm just looking for suggestions. Once I have a quota size decided, it will define the solution for storage. I've had everything from 4 GB per account (which I feel is generous) down to 500 MB (which is rather restrictive in this day and age). The thing is, 4 GB per account is just under 1 TB of allocated storage for email alone. Does anyone follow a "rule of thumb" or have thoughts on this? thanks in advance

    Read the article

  • Amazon S3: allow users to upload on a restricted basis (per bucket maybe)?

    - by Tom
    Hi there, I'm thinking about signing up to the Amazon S3 storage service. What I want to do is create a service where other people can register their own bucket with a certain amount of storage. These users will install my software, which then uploads their files. Of course, the users may only upload what they have paid for. For this to work I would like to create a separate bucket for each customer, each with its own properties. Question 1: is this possible with the API? How? This means that the installed software must have the rights needed to upload to my Amazon S3 account. Question 2: can I create individual authentication IDs for each bucket or customer, so that they can only upload with restrictions I have set? Thanks in advance.
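
    To make question 2 concrete, this is roughly the flow I have in mind (a sketch using the AWS SDK for .NET; the bucket naming, the quota check, and the idea of handing the installed software short-lived pre-signed PUT URLs are my own assumptions, not a tested design):

        // Server-side sketch: this code owns the AWS credentials. The installed
        // client software never sees the secret key; it only receives a pre-signed
        // URL and PUTs the file to it over HTTPS.
        using System;
        using Amazon.S3;
        using Amazon.S3.Model;

        class UploadTicketService
        {
            private readonly AmazonS3Client s3 = new AmazonS3Client(); // credentials from config

            // One bucket per customer (hypothetical naming scheme).
            public string CreateCustomerBucket(string customerId)
            {
                string bucket = "myservice-" + customerId;
                s3.PutBucket(new PutBucketRequest { BucketName = bucket });
                return bucket;
            }

            // Issue a time-limited upload URL only if the customer is still within quota.
            public string GetUploadUrl(string bucket, string key, long fileSizeBytes, long quotaLeftBytes)
            {
                if (fileSizeBytes > quotaLeftBytes)
                    throw new InvalidOperationException("Upload exceeds the storage the customer paid for.");

                return s3.GetPreSignedURL(new GetPreSignedUrlRequest
                {
                    BucketName = bucket,
                    Key = key,
                    Verb = HttpVerb.PUT,
                    Expires = DateTime.UtcNow.AddMinutes(15)
                });
            }
        }

    Scoping separate credentials to each bucket (for example, one IAM identity per customer) would be the other route; the pre-signed URL approach keeps every credential on the server.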

    Read the article

  • What is the "real" difference between a NAS and NFS? Or, why pick a NAS device over "mere" NFS?

    - by warren
    From an end-user perspective, what is the difference between a NAS device and using NFS exports from a file server? They seem to accomplish the same end result. The difference between a SAN and other file storage is related (in my experience) to how they are connected to the server infrastructure. However, the difference between a NAS, connecting over a standard ethernet port, and NFS (sharing storage off specific servers, also over the network), seems more nebulous. Is there a good reason to pick a NAS filer over just running NFS on servers?

    Read the article

  • Explained: EF 6 and “Could not determine storage version; a valid storage connection or a version hint is required.”

    - by Ken Cox [MVP]
    I have a legacy ASP.NET 3.5 web site that I’ve upgraded to a .NET 4 web application. At the same time, I upgraded to Entity Framework 6. Suddenly one of the pages returned the following error: [ArgumentException: Could not determine storage version; a valid storage connection or a version hint is required.]    System.Data.SqlClient.SqlVersionUtils.GetSqlVersion(String versionHint) +11372412    System.Data.SqlClient.SqlProviderServices.GetDbProviderManifest(String versionHint) +91    System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +92 [ProviderIncompatibleException: The provider did not return a ProviderManifest instance.]    System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +11431433    System.Data.Metadata.Edm.Loader.InitializeProviderManifest(Action`3 addError) +11370982    System.Data.EntityModel.SchemaObjectModel.Schema.HandleAttribute(XmlReader reader) +216 A search of the error message didn’t turn up anything helpful, except that someone mentioned that the error message was bogus in his case. The page in question uses the ASP.NET EntityDataSource control, consumed by a Telerik RadGrid. This is a fabulous combination for putting a huge amount of functionality on a page in a very short time. Unfortunately, the 6.0.1 release of EF6 doesn’t support EntityDataSource. According to the people in charge, support is planned but there’s no timeline for an EntityDataSource build that works with EF6. I’m not sure what to do in the meantime. Should I back out EF6 or manually wire up the RadGrid? The upshot is that you might want to rethink plans to upgrade to Entity Framework 6 for Web Forms projects if they rely on that handy control. It might also help to spend a UserVoice vote here: http://data.uservoice.com/forums/72025-entity-framework-feature-suggestions/suggestions/3702890-support-for-asp-net-entitydatasource-and-dynamicda
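
    If I do end up wiring the grid manually, the code is fairly small: bind in RadGrid's NeedDataSource event with a plain EF6 query instead of the EntityDataSource. A rough sketch only - the context and entity names below are placeholders, not from the real project, and the automatic insert/update/delete plumbing that EntityDataSource provided would still need to be hand-written:

        // Code-behind sketch: replaces the EntityDataSource with a direct EF6 query.
        // MyDbContext and Order are hypothetical names standing in for the real model.
        using System.Linq;
        using Telerik.Web.UI;

        public partial class OrdersPage : System.Web.UI.Page
        {
            protected void RadGrid1_NeedDataSource(object sender, GridNeedDataSourceEventArgs e)
            {
                using (var db = new MyDbContext())
                {
                    // ToList() materializes the rows before the context is disposed.
                    RadGrid1.DataSource = db.Orders
                                            .OrderBy(o => o.OrderDate)
                                            .ToList();
                }
            }
        }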

    Read the article

  • Architecture for a business objects / database access layer

    - by gregmac
    For various reasons, we are writing a new business objects/data storage library. One of the requirements of this layer is to separate the logic of the business rules and the actual data storage layer. It is possible to have multiple data storage layers that implement access to the same object - for example, a main "database" data storage source that implements most objects, and another "ldap" source that implements a User object. In this scenario, User can optionally come from an LDAP source, perhaps with slightly different functionality (e.g., not possible to save/update the User object), but otherwise it is used by the application the same way. Another data storage type might be a web service, or an external database. There are two main ways we are looking at implementing this, and a co-worker and I disagree on a fundamental level about which is correct. I'd like some advice on which one is the best to use. I'll try to keep my descriptions of each as neutral as possible, as I'm looking for some objective viewpoints here. Business objects are base classes, and data storage objects inherit business objects. Client code deals with data storage objects. In this case, common business rules are inherited by each data storage object, and it is the data storage objects that are directly used by the client code. This has the implication that client code determines which data storage method to use for a given object, because it has to explicitly declare an instance of that type of object. Client code needs to explicitly know connection information for each data storage type it is using. If a data storage layer implements different functionality for a given object, client code explicitly knows about it at compile time because the object looks different. If the data storage method is changed, client code has to be updated. Business objects encapsulate data storage objects. In this case, business objects are directly used by the client application. The client application passes along base connection information to the business layer. The decision about which data storage method a given object uses is made by business object code. Connection information would be a chunk of data taken from a config file (the client app does not really know/care about the details of it), which may be a single connection string for a database, or several connection strings for various data storage types. Additional data storage connection types could also be read from another spot - e.g., a configuration table in a database that specifies URLs to various web services. The benefit here is that if a new data storage method is added to an existing object, a configuration setting can be set at runtime to determine which method to use, and it is completely transparent to the client applications. Client apps do not need to be modified if the data storage method for a given object changes. Business objects are base classes, data source objects inherit from business objects. Client code deals primarily with base classes. This is similar to the first method, but client code declares variables of the base business object types, and Load()/Create()/etc. static methods on the business objects return the appropriate data source-typed objects. The architecture of this solution is similar to the first method, but the main difference is that the decision about which data storage object to use for a given business object is made by the business layer, not the client code.
I know there are already existing ORM libraries that provide some of this functionality, but please discount those for now (there is the possibility that a data storage layer is implemented with one of these ORM libraries) - also note I'm deliberately not telling you what language is being used here, other than that it is strongly typed. I'm looking for some general advice here on which method is better to use (or feel free to suggest something else), and why.
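
    To make the second approach (business objects encapsulating the data storage layer) concrete, here is a rough sketch - in C# purely for illustration, since as noted I'm deliberately not naming the language, and all type names here are made up:

        using System;

        // Storage contract: one implementation per backend (database, LDAP, web service, ...).
        public interface IUserStorage
        {
            string LoadEmail(string id);
            void SaveEmail(string id, string email);
        }

        // Minimal stand-in for the "main database" backend.
        public class DatabaseUserStorage : IUserStorage
        {
            public string LoadEmail(string id) { return "someone@example.com"; /* SELECT ... */ }
            public void SaveEmail(string id, string email) { /* UPDATE ... WHERE Id = @id */ }
        }

        // An LDAP backend might refuse writes, as described above.
        public class LdapUserStorage : IUserStorage
        {
            public string LoadEmail(string id) { return "someone@example.org"; /* LDAP query */ }
            public void SaveEmail(string id, string email)
            {
                throw new NotSupportedException("LDAP users are read-only.");
            }
        }

        // The business object: client code only ever sees this class.
        public class User
        {
            private readonly IUserStorage storage;
            public string Id { get; private set; }
            public string Email { get; private set; }

            private User(IUserStorage storage, string id)
            {
                this.storage = storage;
                Id = id;
                Email = storage.LoadEmail(id);
            }

            // Business rule lives here, independent of the backend.
            public void ChangeEmail(string email)
            {
                if (!email.Contains("@")) throw new ArgumentException("Invalid email address.");
                storage.SaveEmail(Id, email);
                Email = email;
            }

            // The business layer, not the client, decides which backend to use,
            // based on configuration the client application passed in once at startup.
            public static User Load(string id, string configuredBackend)
            {
                IUserStorage storage = configuredBackend == "ldap"
                    ? (IUserStorage)new LdapUserStorage()
                    : new DatabaseUserStorage();
                return new User(storage, id);
            }
        }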

    Read the article

  • Inference engine to calculate matching set according to internal rules

    - by Zecrates
    I have a set of objects with attributes and a bunch of rules that, when applied to the set of objects, provides a subset of those objects. To make this easier to understand I'll provide a concrete example. My objects are persons and each has three attributes: country of origin, gender and age group (all attributes are discrete). I have a bunch of rules, like "all males from the US", which correspond with subsets of this larger set of objects. I'm looking for either an existing Java "inference engine" or something similar, which will be able to map from the rules to a subset of persons, or advice on how to go about creating my own. I have read up on rule engines, but that term seems to be exclusively used for expert systems that externalize the business rules, and usually doesn't include any advanced form of inferencing. Here are some examples of the more complex scenarios I have to deal with: I need the conjunction of rules. So when presented with both "include all males" and "exclude all US persons in the 10 - 20 age group," I'm only interested in the males outside of the US, and the males within the US that are outside the 10 - 20 age group. Rules may have different priorities (explicitly defined). So a rule saying "exclude all males" will override a rule saying "include all US males." Rules may be conflicting. So I could have both an "include all males" and an "exclude all males" in which case the priorities will have to settle the issue. Rules are symmetric. So "include all males" is equivalent to "exclude all females." Rules (or rather subsets) may have meta rules (explicitly defined) associated with them. These meta rules will have to be applied in any case that the original rule is applied, or if the subset is reached via inferencing. So if a meta rule of "exclude the US" is attached to the rule "include all males", and I provide the engine with the rule "exclude all females," it should be able to inference that the "exclude all females" subset is equivalent to the "include all males" subset and as such apply the "exclude the US" rule additionally. I can in all likelihood live without item 5, but I do need all the other properties mentioned. Both my rules and objects are stored in a database and may be updated at any stage, so I'd need to instantiate the 'inference engine' when needed and destroy it afterward.
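
    For reference, this is roughly how I picture the rule model if I end up rolling my own - sketched in C# purely for illustration (the question itself is about Java, and all names here are made up). It covers conjunction, priorities and an exclusion-wins tie-break, but not the meta-rules in item 5:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        enum Gender { Male, Female }
        enum AgeGroup { Child, Age10To20, Adult, Senior }

        class Person
        {
            public string Country;
            public Gender Gender;
            public AgeGroup Age;
        }

        // A rule is a predicate plus an include/exclude flag and a priority.
        // "Include all males" and "exclude all females" describe the same member set,
        // which is one way to express the symmetry requirement.
        class Rule
        {
            public Func<Person, bool> Matches;
            public bool Include;
            public int Priority;   // higher wins on conflict
        }

        static class RuleEngine
        {
            // Conjunction of all rules: a person is in the result if, among the
            // highest-priority rules that match them, none is an exclusion.
            public static IEnumerable<Person> Apply(IEnumerable<Person> people, IList<Rule> rules)
            {
                return people.Where(p =>
                {
                    var applicable = rules.Where(r => r.Matches(p)).ToList();
                    if (applicable.Count == 0) return false;
                    int top = applicable.Max(r => r.Priority);
                    return applicable.Where(r => r.Priority == top).All(r => r.Include);
                });
            }
        }

        // Example from the question: "include all males" plus
        // "exclude all US persons in the 10-20 age group":
        //   new Rule { Matches = p => p.Gender == Gender.Male, Include = true, Priority = 0 },
        //   new Rule { Matches = p => p.Country == "US" && p.Age == AgeGroup.Age10To20,
        //              Include = false, Priority = 0 }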

    Read the article

  • System.Runtime.InteropServices.COMException (0x80070008): Not enough storage is available to process

    - by Darryl Braaten
    I am trying to diagnose this exception. "System.Runtime.InteropServices.COMException (0x80070008): Not enough storage is available to process this command. (Exception from HRESULT: 0x80070008) at System.Runtime.Remoting.RemotingServices.AllocateUninitializedObject(RuntimeType objectType) at System.Runtime.Remoting.RemotingServices.AllocateUninitializedObject(Type objectType) at System.Runtime.Remoting.Activation.ActivationServices.CreateInstance(Type serverType) at System.Runtime.Remoting.Activation.ActivationServices.IsCurrentContextOK(Type serverType, Object[] props, Boolean bNewObj) at Oracle.DataAccess.Client.CThreadPool..ctor() at Oracle.DataAccess.Client.OracleCommand.set_CommandTimeout(Int32 value) ... It does not look like any of the normal types of "storage" have hit any limits. The application is using about 400MB of memory, 70 threads, 2000 handles, and the hard drive has many GB free. The machine is running Windows 2003 Enterprise Server with 16GB of RAM, so memory shouldn't be an issue. The application is running as a Windows service, so there are no GDI objects being used. Running out of GDI handles is a common cause of this exception. Database connections, commands & readers are all wrapped in using blocks, so they should be getting cleaned up correctly.
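
    One thing that might help narrow it down is logging the process counters periodically and seeing which one climbs before the exception fires - this HRESULT is often about handles, threads or the desktop heap rather than physical RAM. A minimal sketch (plain System.Diagnostics, nothing Oracle-specific):

        using System;
        using System.Diagnostics;

        static class ResourceLogger
        {
            // Call this from a timer and compare snapshots over time. A steadily
            // climbing handle or thread count usually narrows a leak down faster
            // than watching total memory, which looks healthy here (~400 MB).
            public static void LogSnapshot()
            {
                Process p = Process.GetCurrentProcess();
                Trace.WriteLine(string.Format(
                    "{0:u} handles={1} threads={2} privateMB={3} workingSetMB={4}",
                    DateTime.UtcNow,
                    p.HandleCount,
                    p.Threads.Count,
                    p.PrivateMemorySize64 / (1024 * 1024),
                    p.WorkingSet64 / (1024 * 1024)));
            }
        }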

    Read the article

  • Secure Password Storage and Transfer

    - by Andras Zoltan
    I'm developing a new user store for my organisation and am now tackling password storage. The concepts of salting, HMAC, etc. are all fine with me - and I want to store the users' passwords either salted and hashed, HMAC hashed, or HMAC salted and hashed - not sure what the best way will be - but in theory it won't matter, as it will be able to change over time if required. I want to have an XML & JSON service that can act as a Security Token Service for client-side apps. I've already developed one for another system, which requires that the client double-hashes a clear-text password using SHA1 first and then HMACSHA1 using a 128-bit unique key (or nonce) supplied by the server for that session only. I'd like to repeat this technique for the new system - upgrading the algo to SHA256 (chosen since implementations are readily available for all aforementioned platforms - and it's much stronger than SHA1) - but there is a problem. If I'm storing the password as a salted hash in the user store, the client will need to be sent that salt in order to construct the correct hash before it is HMACd with the unique session key. This would completely go against the point of using a salt in the first place. Equally, if I don't use a salt for password storage, but instead use HMAC, it's still the same problem. At the moment, the only solution I can see is to use naked SHA256 hashing for the password in the user store, so that I can then use this as a starting point on both the server and the client for a more secure salted/HMACd password transfer for the web service. This still leaves the user store vulnerable to a dictionary attack were it ever to be accessed; and however unlikely that might be, assuming it will never happen simply doesn't sit well with me. Greatly appreciate any input.
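
    To make the moving parts concrete, here is roughly what I mean by the two halves - storage-side salted hashing and the per-session HMAC proof - as a sketch using System.Security.Cryptography only. Whether the salt can safely be handed to the client during authentication is exactly the open question above, so this is not presented as a solution to that:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class PasswordCrypto
        {
            // Storage side: random salt + SHA-256 over salt||password.
            public static byte[] NewSalt(int length)
            {
                var salt = new byte[length];
                using (var rng = new RNGCryptoServiceProvider())
                    rng.GetBytes(salt);
                return salt;
            }

            public static byte[] SaltedHash(string password, byte[] salt)
            {
                byte[] pwd = Encoding.UTF8.GetBytes(password);
                var input = new byte[salt.Length + pwd.Length];
                Buffer.BlockCopy(salt, 0, input, 0, salt.Length);
                Buffer.BlockCopy(pwd, 0, input, salt.Length, pwd.Length);
                using (var sha = SHA256.Create())
                    return sha.ComputeHash(input);
            }

            // Transport side: HMAC the stored verifier with the per-session nonce,
            // so the clear-text password never crosses the wire. Both ends have to
            // start from the same verifier bytes for the comparison to succeed.
            public static byte[] SessionProof(byte[] storedVerifier, byte[] sessionNonce)
            {
                using (var hmac = new HMACSHA256(sessionNonce))
                    return hmac.ComputeHash(storedVerifier);
            }
        }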

    Read the article

  • Validating Application Settings Key Values in Isolated Storage for Windows Phone Applications

    - by Martin Anderson
    Hello everyone. I am very new at all this C# Windows Phone programming, so this is most probably a dumb question, but I need to know anywho... IsolatedStorageSettings appSettings = IsolatedStorageSettings.ApplicationSettings; if (!appSettings.Contains("isFirstRun")) { firstrunCheckBox.Opacity = 0.5; MessageBox.Show("isFirstRun not found - creating as true"); appSettings.Add("isFirstRun", "true"); appSettings.Save(); firstrunCheckBox.Opacity = 1; firstrunCheckBox.IsChecked = true; } else { if (appSettings["isFirstRun"] == "true") { firstrunCheckBox.Opacity = 1; firstrunCheckBox.IsChecked = true; } else if (appSettings["isFirstRun"] == "false") { firstrunCheckBox.Opacity = 1; firstrunCheckBox.IsChecked = false; } else { firstrunCheckBox.Opacity = 0.5; } } I am trying to first check whether there is a specific key in my Application Settings isolated storage, and then make a CheckBox appear checked or unchecked depending on whether the value for that key is "true" or "false". Also, I am defaulting the opacity of the checkbox to 0.5 when no action is taken upon it. With the code I have, I get the warning "Possible unintended reference comparison; to get a value comparison, cast the left hand side to type 'string'". Can someone tell me what I am doing wrong? I have explored storing data in an isolated storage txt file, and that worked; I am now trying Application Settings, and will finally try to download and store an XML file, as well as create and store user settings in an XML file. I want to try to understand all the options open to me, and use whichever runs better and quicker.
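
    One way around the warning, sketched below (untested): appSettings["isFirstRun"] comes back typed as object, so comparing it to "true" with == compares references, not values. Either cast to string before comparing, or, simpler, store a bool instead of the strings "true"/"false":

        using System.IO.IsolatedStorage;

        IsolatedStorageSettings appSettings = IsolatedStorageSettings.ApplicationSettings;

        // Option 1: keep the string values but force a value comparison.
        // if ((string)appSettings["isFirstRun"] == "true") { ... }

        // Option 2 (simpler): store a bool, so no string comparison is needed at all.
        bool isFirstRun;
        if (!appSettings.TryGetValue("isFirstRun", out isFirstRun))
        {
            isFirstRun = true;                       // key not present yet: first launch
            appSettings["isFirstRun"] = isFirstRun;  // indexer adds the key if missing
            appSettings.Save();
        }

        firstrunCheckBox.Opacity = 1;
        firstrunCheckBox.IsChecked = isFirstRun;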

    Read the article

  • Caché class compilation error using parent-child relationships and Caché SQL storage

    - by Fred Altman
    I have the global listed below that I'm trying to create a couple of cache classes using sql stoarage for: ^WHEAIPP(1,26,1)=2 ^WHEAIPP(1,26,1,1)="58074^^SMSNARE^58311" 2)="58074^59128^MPHILLIPS^59135" ^WHEAIPP(1,29,1)=2 ^WHEAIPP(1,29,1,1)="58074^^SMSNARE^58311" 2)="58074^59128^MPHILLIPS^59135" ^WHEAIPP(1,93,1)=2 ^WHEAIPP(1,93,1,1)="58884^^SSNARE^58948" 2)="58884^59128^MPHILLIPS^59135" ^WHEAIPP(1,166,1)=2 ^WHEAIPP(1,166,1,1)="58407^^SMSNARE^58420" 2)="58407^59128^MPHILLIPS^59135" ^WHEAIPP(1,324,1)=2 ^WHEAIPP(1,324,1,1)="58884^^SSNARE^58948" 2)="58884^59128^MPHILLIPS^59135" ^WHEAIPP(1,419,1)=3 ^WHEAIPP(1,419,1,1)="59707^^SSNARE^59708" 2)="59707^^MPHILLIPS^59910,58000^^^^" 3)="59707^59981^SSNARE^60117,53241^^^^" The first two subscripts of the global (Hmo and Keen) make a unique entry. The third subscript (Seq) has a property (IppLineCount) which is the number of IppLines in the fourth subscript level (Seq2). I create the class WIppProv below which is the parent class: /// <PRE> /// ============================ /// Generated Class Definition /// Table: WMCA_B_IPP_PROV /// Generated by: FXALTMAN /// Generated on: 05/21/2012 13:46:41 /// Generator: XWESTblClsGenV2 /// ---------------------------- /// </PRE> Class XFXA.MCA.WIppProv Extends (%Persistent, %XML.Adaptor) [ ClassType = persistent, Inheritance = right, ProcedureBlock, StorageStrategy = SQLMapping ] { /// .HMO Property Hmo As %Integer; /// .KEEN Property Keen As %Integer; /// .SEQ Property Seq As %String; Property IppLineCount As %Integer; Index iMaster On (Hmo, Keen, Seq) [ IdKey, Unique ]; Relationship IppLines As XFXA.MCA.WIppProvLine [ Cardinality = many, Inverse = relWIppProv ]; <Storage name="SQLMapping"> <DataLocation>^WHEAIPP</DataLocation> <ExtentSize>1000000</ExtentSize> <SQLMap name="DBMS"> <Data name="IppLineCount"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>1</Piece> </Data> <Global>^WHEAIPP</Global> <PopulationType>full</PopulationType> <Subscript name="1"> <AccessType>Sub</AccessType> <Expression>{Hmo}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript> <Subscript name="2"> <AccessType>Sub</AccessType> <Expression>{Keen}</Expression> </Subscript> <Subscript name="3"> <AccessType>Sub</AccessType> <LoopInitValue>1</LoopInitValue> <Expression>{Seq}</Expression> </Subscript> <Type>data</Type> </SQLMap> <StreamLocation>^XFXA.MCA.WIppProvS</StreamLocation> <Type>%Library.CacheSQLStorage</Type> </Storage> } This class compiles fine. Next I created the WIppProvLine class listed below and made a parent-child relationship between the two: /// Used to represent a single line of IPP data Class XFXA.MCA.WIppProvLine Extends (%Persistent, %XML.Adaptor) [ ClassType = persistent, Inheritance = right, ProcedureBlock, StorageStrategy = SQLMapping ] { /// .CLM_AMT_ALLOWED node: 0 piece: 6<BR> /// This field should be used in conjunction with the Claim Operator field to /// define a whole claim dollar amount at which a particular claim should be /// flagged with a Pend status. Property ClmAmtAllowed As %String; /// .CLM_LINE_AMT_ALLOWED node: 0 piece: 8<BR> /// This field should be used in conjunction with the Clm Line Operator field to /// define a claim line dollar amount at which a particular claim should be flagged /// with a Pend status. Property ClmLineAmtAllowed As %String; /// .CLM_LINE_OP node: 0 piece: 7<BR> /// A new Table/Column Reference that gives the SIU (Special Investigative Unit) /// the ability to look for claim line dollars above, below, or equal to a set /// amount. 
Property ClmLineOp As %String; /// .CLM_OP node: 0 piece: 5<BR> /// A new Table/Column Reference that gives the SIU (Special Investigative Unit) /// the ability to look for claim dollars above, below, or equal to a set amount. Property ClmOp As %String; Property EffDt As %Date; Property Hmo As %Integer; /// .IPP_REASON node: 0 piece: 10<BR> /// IPP Reason Code Property IppCode As %Integer; Property Keen As %Integer; /// .LAST_CHG_DT node: 0 piece: 4<BR> /// Last Changed Date Property LastChgDt As %Date; /// .PX_DX_CDE_FLAG node: 0 piece: 9<BR> /// A Flag to indicate whether or not Procedure Codes or Diagnosis Codes are to be /// associated with this SIU Flag Type Entry. If the Flag = Y, then control would /// jump to a new screen where the user can enter the necessary codes. Property PxDxCdeFlag As %String; Property Seq As %String; Property Seq2 As %String; Index iMaster On (Hmo, Keen, Seq, Seq2) [ IdKey, PrimaryKey, Unique ]; /// .TERM_DT node: 0 piece: 2<BR> /// Term Date Property TermDt As %Date; /// .USER_INI node: 0 piece: 3 Property UserIni As %String; Relationship relWIppProv As XFXA.MCA.WIppProv [ Cardinality = one, Inverse = IppLines ]; Index relWIppProvIndex On relWIppProv; //Index NewIndex1 On (RelWIppProv, Seq2) [ IdKey, PrimaryKey, Unique ]; <Storage name="SQLMapping"> <ExtentSize>1000000</ExtentSize> <SQLMap name="DBMS"> <ConditionalWithHostVars></ConditionalWithHostVars> <Data name="ClmAmtAllowed"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>6</Piece> </Data> <Data name="ClmLineAmtAllowed"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>8</Piece> </Data> <Data name="ClmLineOp"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>7</Piece> </Data> <Data name="ClmOp"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>5</Piece> </Data> <Data name="EffDt"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>1</Piece> </Data> <Data name="Hmo"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>11</Piece> </Data> <Data name="IppCode"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>10</Piece> </Data> <Data name="LastChgDt"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>4</Piece> </Data> <Data name="PxDxCdeFlag"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>9</Piece> </Data> <Data name="TermDt"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>2</Piece> </Data> <Data name="UserIni"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>3</Piece> </Data> <Global>^WHEAIPP</Global> <Subscript name="1"> <AccessType>Sub</AccessType> <Expression>{Hmo}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript> <Subscript name="2"> <AccessType>Sub</AccessType> <Expression>{Keen}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript> <Subscript name="3"> <AccessType>Sub</AccessType> <Expression>{Seq}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript> <Subscript name="4"> <AccessType>Sub</AccessType> <Expression>{Seq2}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript> <Type>data</Type> </SQLMap> <StreamLocation>^XFXA.MCA.WIppProvLineS</StreamLocation> <Type>%Library.CacheSQLStorage</Type> </Storage> } When I try to compile this one I get the following error: ERROR #5502: Error compiling SQL Table 'XFXA_MCA.WIppProvLine %msg: Table XFXA_MCA.WIppProvLine has the following unmapped (not defined on the data map) fields: relWIppProv' ERROR #5030: An error occurred while compiling class XFXA.MCA.WIppProvLine Detected 1 errors during compilation in 2.745s. What am I doing wrong? Thanks in Advance, Fred

    Read the article
