Search Results

Search found 5933 results on 238 pages for 'mass storage'.


  • How to decrease size of KVM virtual machine disk image

    - by Cerin
    How do you decrease or shrink the size of a KVM virtual machine disk? I allocated a virtual disk of 500GB (stored at /var/lib/libvirt/images/vm1.img), and I'm finding that overkill, so now I'd like to free up some of that space for use with other virtual machines. There seem to be a lot of answers on how to increase image storage, but not on how to decrease it. I found the virt-resize tool, but it only seems to work with raw disk partitions, not disk images.
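
    A hedged sketch of one common route (an editor's note, not from the question; the 100G target size is an assumption): shrink the filesystem and partitions inside the guest first, e.g. with a GParted live CD, then shrink the image file itself. Back up the image before trying either command.

      # Shut the VM down and back up the image first.
      # Newer qemu-img releases require --shrink to reduce a raw image:
      qemu-img resize --shrink /var/lib/libvirt/images/vm1.img 100G

      # Alternative: convert to qcow2, which only stores allocated blocks:
      qemu-img convert -O qcow2 /var/lib/libvirt/images/vm1.img \
          /var/lib/libvirt/images/vm1.qcow2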

    Read the article

  • Is the following combination of components valid to function as a desktop computer? [closed]

    - by Gideon Potgieter
    Could someone with more PC-building experience than I have tell me whether these components will work together as a self-built PC?

    Processor: Intel Core i5-3570K
    Video card: Asus Radeon HD 7870
    Motherboard: Gigabyte GA-Z77-D3H
    RAM: Corsair CMZ16GX3M2A1600C10 Vengeance 16GB 1600MHz CL10 DDR3 (x2)
    Storage: Western Digital WD1002FAEX (x2)
    Display: Samsung S24B300HL
    Sound: Logitech X140
    Chassis: Thermaltake V4 Black Edition VM30001W2Z
    Power supply: Seagate OEM 500W Builder PSU
    Optical drive: Asus DRW-24B1ST

    Thanks in advance! (BTW, I know 32 GB of RAM is unnecessary, but I want to buy it to use as a reserve.)

    Read the article

  • Store ESX5 images on a NAS?

    - by Cylindric
    I have a basic NAS device (Buffalo LinkStation Duo) that can only present SMB file shares to the network. For testing, and for getting a dying physical machine working, I would like to store my VMDKs on it. Is that possible, or will I have to find some way of presenting this as an iSCSI target of some sort? I've only used iSCSI or local storage in the past, but that isn't possible at the moment. Thanks.

    Read the article

  • Amazon EC2 & S3 costs - can they be tied to specific instances

    - by monkeymagic
    Hi, I'd like to start using S3 and EC2 to host some of my company's simpler websites. I would like to be able to identify all of the costs associated with running each site (instance run-time costs + storage + data transfers) so that the costs can be allocated (cross-charged) to business units in my company. Is it possible to identify all the costs associated with each site in this way if all of the sites are running on separate instances? Thanks.
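
    One present-day answer (an editor's sketch, not part of the original question): AWS cost-allocation tags. Tag each instance and bucket with its business unit, activate the tag for billing, and the cost reports then break out per tag. The instance ID, bucket name, and tag key below are hypothetical:

      # Tag an EC2 instance and an S3 bucket with a CostCenter tag:
      aws ec2 create-tags --resources i-0123456789abcdef0 \
          --tags Key=CostCenter,Value=site-a
      aws s3api put-bucket-tagging --bucket my-site-a-bucket \
          --tagging 'TagSet=[{Key=CostCenter,Value=site-a}]'
      # Then activate "CostCenter" under Billing > Cost Allocation Tags.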

    Read the article

  • Status: 0xc00000e9

    - by Ryan Galloza
    When I start Windows normally it just freezes. When I click "Launch Startup Repair" I get this: "Windows has encountered a problem communicating with a device connected to your computer. This error can be caused by unplugging a removable storage device such as an external USB drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive that is failing. Make sure any removable storage is properly connected and then restart your computer. If you continue to receive this error message, contact the hardware manufacturer." How do I fix this problem?

    Read the article

  • How to use USB Key in Xen 5.6 Environment?

    - by Az
    I am looking for a way to use a USB key on a guest OS running in a XenServer 5.6 environment. The catch is that it needs to be detected in the guest OS (Win2003 Server) as an actual USB key. Attaching it as a storage drive doesn't work (it is a key with special attributes that serves as a licensing mechanism). Just wondering if anyone has had a similar need and found a good solution? Thanks, Nate

    Read the article

  • Use a RAID Controller without drivers?

    - by cian1500ww
    I ordered an Adaptec 1420SA RAID card for my Debian Squeeze media server but didn't check to see if it was compatible; it turns out it's not, because it uses something called HostRAID, which requires special drivers that aren't available for Debian. Could I still use the card as an ordinary controller and just use OS-level software RAID? I'm not looking for speed, I just need to mirror some drives that will be used for storage. The OS will reside on a disk connected to the server's onboard controller, so the system won't be booting from any drives on the Adaptec controller.
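
    If the card exposes the disks as plain SATA devices, Linux software RAID should work; a minimal mdadm sketch, assuming the two disks show up as /dev/sdb and /dev/sdc (hypothetical device names, and the first command destroys existing data on them):

      # Create a RAID1 mirror across the two drives:
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
      mkfs.ext3 /dev/md0
      mount /dev/md0 /srv/storage
      # Record the array so it assembles on boot (Debian path):
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf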

    Read the article

  • Size of a sharepoint web application

    - by Indra
    How do you figure out the current size of a SharePoint web application? Better yet, the size of a site collection or a subsite. I am planning to move a site collection from one farm to another, and I need to plan the storage capacity first.
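
    One hedged starting point (an editor's sketch; the URL is a placeholder, and my recollection is that the XML output includes a per-site-collection storage-used figure - verify on your farm):

      REM Enumerate site collections in a web application, with storage usage:
      stsadm -o enumsites -url http://yourwebapp > sites.xml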

    Read the article

  • Up-to-date Comparison of High-Speed USB Flash Drives

    - by Zoredache
    I am looking for comparisons of the performance of USB flash drives. I have found several older comparisons, but I am trying to find more up-to-date ones that cover the larger storage sizes (32-128GB). I can try looking up the specs of various drives, but vendors have been known to exaggerate, or to use numbers that are only accurate in tests that do not reflect actual usage. I was hoping to find a third-party site which had performed testing.
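
    Failing a good third-party roundup, it's easy to measure a candidate drive yourself; a rough Linux sketch (the mount point is a placeholder; oflag=direct bypasses the page cache so the figure reflects the drive, not RAM):

      # Sequential write, 1 GiB:
      dd if=/dev/zero of=/media/usbdrive/testfile bs=1M count=1024 oflag=direct
      # Sequential read (drop caches first so the read hits the device):
      echo 3 > /proc/sys/vm/drop_caches
      dd if=/media/usbdrive/testfile of=/dev/null bs=1M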

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100GB, "-t news") on iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p /htcache -l 15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache, only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space. In those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any ideas? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different filesystem for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
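
    One hedged suggestion (an editor's note, not from the post): run htcacheclean as a daemon with its "nice" flag, so it trims continuously in small increments instead of racing a full day's growth in one pass. The interval and limit below are guesses to tune:

      # -d30: daemon mode, wake every 30 minutes
      # -n:   be "nice", yielding disk I/O to other processes
      # -t:   also delete empty directories
      htcacheclean -d30 -n -t -p /htcache -l 45G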

    Read the article

  • File shares for Mac users

    - by Generic Error
    The main file shares on our network are currently hosted on old Apple Xserves. I had planned to replace some of these with Windows shares, as I have better hardware available, but have been told this is likely to cause issues for some of our Mac users. What sort of issues am I likely to run into, and what are the recommended ways of hosting general file storage in a mixed-OS (Windows, OS X, occasionally Linux) environment?

    Read the article

  • HP server delayed boot

    - by jjrab
    I'm currently using HP ProLiant DL120 G5 servers running VMware ESXi 4 to run server VMs. They connect to an iSCSI SAN for shared storage. I'd like to implement a delayed boot of these host servers so that, after a power failure, they don't boot up and try to connect to the SAN before the SAN is ready for connections. Does anyone know of a good way to do this?

    Read the article

  • Free web gallery installation that can use existing directory hierarchy in filesystem?

    - by user1338062
    There are several free software gallery projects (Gallery, Coppermine, etc.), but as far as I know each of them creates a copy of imported images in its own internal storage, be it a directory structure or a database. Is there any gallery software that would keep the existing directory hierarchy of media files (images, videos) as-is, and just store their metadata in a database? I guess at least various NAS solutions ship with software like this.

    Read the article

  • Can a hard poweroff / outage / crash corrupt VMware snapshots?

    - by basic6
    Assume a host system is running virtual machines (in VMware Workstation) and all their data is on reliable storage (so no data corruption due to HDD failure). If that host crashes (kernel panic) while a VM is running, files on the virtual filesystem could be corrupted. But there's a snapshot of the VM, taken before the crash. Is it safe to assume that by reverting to the snapshot the VM will be back in a clean state, or is there any way this snapshot could have been corrupted by the crash?

    Read the article

  • Can't use Bootcamp partition for Windows 8 installation

    - by Hedge
    I'm trying to install Windows 8 with Boot Camp on my MacBook Pro. Sadly, it won't let me get past the disk partition choice (even after formatting the Boot Camp drive). It says (freely translated): "Windows can't be installed on this storage device. The chosen hard disk contains an MBR partition table. On EFI systems, Windows can only be installed on GPT hard disks." What is going wrong here? Here's a photo:
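
    For reference, what the message is asking for (an editor's sketch): an EFI-booted installer needs the target disk to carry a GPT label, and diskpart can convert it. Note this wipes the entire disk, so on a dual-boot Mac it would destroy the OS X install as well; it only fits a from-scratch machine:

      REM Shift+F10 in the installer opens a command prompt; then, inside
      REM diskpart (these commands erase ALL partitions on the chosen disk):
      diskpart
        list disk
        select disk 0
        clean
        convert gpt
        exit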

    Read the article

  • Alternative to Dropbox (on my server)?

    - by jweede
    I love using Dropbox to sync files between all my machines, and I've heard it uses rsync internally to keep files synced. Sometimes I need to sync very large things, and I don't necessarily want to pay for storage space on someone else's server when I have my own. So does anyone know of any nice cross-platform (pref. open source) automatic file-sync applications out there for this? sidenote: Here is a Dropbox referral link, if you're feeling generous.
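
    A minimal self-hosted sketch (one option among many; the paths, schedule and host are placeholders): push changes to your own server with rsync over SSH from cron. Note this is one-way; for true two-way sync between several machines, a tool like Unison handles conflict detection:

      # crontab entry: mirror ~/sync to the server every 10 minutes
      */10 * * * * rsync -az --delete ~/sync/ user@myserver:sync/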

    Read the article

  • How to quickly remove hundreds of thousands of files? [closed]

    - by Nick
    Possible Duplicate: Doing an rm -rf on a massive directory tree takes hours

    I'm running a simulation program on a computing cluster (Scientific Linux) that generates hundreds of thousands of atomic coordinate files. But I'm having a problem deleting them, because rm -rf never completes and neither does find . -name * | xargs rm. Isn't there a way to just unlink this directory from the directory tree? The storage unit is used by hundreds of other people, so reformatting is not an option. Thanks.
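
    Two approaches that tend to beat a plain rm on huge directories (hedged sketches; the path is a placeholder, so test carefully):

      # Let find unlink files directly, without building giant argument lists:
      find /path/to/dir -type f -delete

      # Or the rsync trick: sync an empty directory over the full one
      mkdir /tmp/empty
      rsync -a --delete /tmp/empty/ /path/to/dir/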

    Read the article

  • Robocopy. Delete files from source

    - by kurresmack
    Hey, we copied all our files to a new storage server recently. We didn't want to move them at the time because we weren't sure whether files would get lost. The problem now is that we have the files in both places! How can we move only the files that do not exist in the target, and, for those that exist in both places, delete them from the source? It is Windows Server 2008. Thanks.
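
    A hedged sketch of finishing the move with robocopy itself (the paths are placeholders; add /L first for a dry run):

      REM /E   copy subdirectories, including empty ones
      REM /MOV delete each source file after it is copied
      REM /IS  include "same" files, so unchanged duplicates are re-copied
      REM      and therefore also removed from the source
      robocopy C:\oldshare \\newserver\share /E /MOV /IS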

    Read the article

  • What filesystem to use when using both Windows and Linux?

    - by MighMoS
    I will be buying a 2TB hard drive soon and would like to use it as media storage. I would like to be able to read/write from both Windows (version 7, 64-bit) and Ubuntu Linux, and I need support for files greater than 4GB in size (so I think this rules out FAT32). I'm using IFS Drives at the moment to access my Linux ext4 partitions, and I find it unstable. Does this mean NTFS? Is there something else I'm missing?
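
    NTFS is the usual answer here: Windows 7 supports it natively, and Ubuntu reads and writes it through ntfs-3g, which is installed by default on recent releases. A quick sketch (the device name is a placeholder; Ubuntu normally mounts the drive automatically anyway):

      sudo mkdir -p /mnt/media
      sudo mount -t ntfs-3g /dev/sdb1 /mnt/media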

    Read the article

  • Test Environment for Microsoft SQL Cluster

    - by user195643
    I have the following test environment:

    1 - Windows 2008 (DC Edition), role: Active Directory
    2 - Windows 2008 (Enterprise) servers

    I would like to create an MSSQL cluster in this test environment. I am using a desktop PC, and I would like to know how I can configure a second network card for the MSSQL cluster, and how I can use shared storage without using any external drives.

    Read the article

  • Quick method to determine SSD drive health?

    - by ewwhite
    I have an Intel X-25M drive that was marked "failed" twice in a ZFS storage array, as noted here. However, after removing the drive, it seems to mount, read and write in other computers (Mac, PC, USB enclosure, etc.). Is there a good way to determine the drive's present health? I feel that the previous failure in the ZFS solution was a convergence of bugs, bad error reporting and hardware. It seems like this drive may have some life left in it, though.
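
    A reasonable first check (an editor's sketch; attribute names vary by vendor, and on Intel SSDs the media-wearout and reallocated-sector counts are the ones to watch):

      # Health summary, attributes and error log:
      smartctl -a /dev/sda
      # Run the drive's built-in short self-test, then read the results:
      smartctl -t short /dev/sda
      smartctl -l selftest /dev/sda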

    Read the article

  • Determining Azure SQL Database requirements

    - by Gerald
    I'm looking into moving an SQL Server database project to the cloud using Azure SQL Database. I'm just wondering what metrics I can use from SQL Server to help determine what my needs will be on Azure. The size of the database is around 150GB, so I understand what my needs are in terms of storage, I'm just not sure what metrics I can use to translate my database usage to the DTU benchmark metrics that the various service tiers on Azure SQL use.
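
    Microsoft's Azure DTU calculator works from a handful of perfmon counters captured on the existing server; a hedged sketch with typeperf (the counter list is from memory - verify it against the calculator's own instructions):

      REM One hour of samples, once per second, to CSV:
      typeperf -si 1 -sc 3600 -f CSV -o dtu.csv ^
        "\Processor(_Total)\% Processor Time" ^
        "\LogicalDisk(_Total)\Disk Reads/sec" ^
        "\LogicalDisk(_Total)\Disk Writes/sec" ^
        "\SQLServer:Databases(_Total)\Log Bytes Flushed/sec"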

    Read the article

  • NServiceBus pipeline with Distributors

    - by David
    I'm building a processing pipeline with NServiceBus, but I'm having trouble with the configuration of the distributors needed to make each step in the process scalable. Here's some info:

    The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart. Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step. This tells me that each step needs a Distributor. I want to be able to hook additional activities onto events later. This tells me I need to Publish() messages when a step is done, not Send() them. A process may need to branch based on a condition. This tells me that a process must be able to publish more than one type of message. A process may need to join forks. I imagine I should use Sagas for this. Hopefully these assumptions are good, otherwise I'm in more trouble than I thought.

    For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages. NodeA workers contain an IHandleMessages processor and publish EventA. NodeB workers contain an IHandleMessages processor and publish EventB. NodeC workers contain an IHandleMessages processor, and then the pipeline is complete.

    Here are the relevant parts of the config files, where # denotes the number of the worker (i.e. there are input queues NodeA.1 and NodeA.2):

    NodeA:
      <MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
      <UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control" DistributorDataAddress="NodeA.Distrib.Data">
        <MessageEndpointMappings>
        </MessageEndpointMappings>
      </UnicastBusConfig>

    NodeB:
      <MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
      <UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control" DistributorDataAddress="NodeB.Distrib.Data">
        <MessageEndpointMappings>
          <add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" />
        </MessageEndpointMappings>
      </UnicastBusConfig>

    NodeC:
      <MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
      <UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control" DistributorDataAddress="NodeC.Distrib.Data">
        <MessageEndpointMappings>
          <add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" />
        </MessageEndpointMappings>
      </UnicastBusConfig>

    And here are the relevant parts of the distributor configs:

    Distributor A:
      <add key="DataInputQueue" value="NodeA.Distrib.Data"/>
      <add key="ControlInputQueue" value="NodeA.Distrib.Control"/>
      <add key="StorageQueue" value="NodeA.Distrib.Storage"/>

    Distributor B:
      <add key="DataInputQueue" value="NodeB.Distrib.Data"/>
      <add key="ControlInputQueue" value="NodeB.Distrib.Control"/>
      <add key="StorageQueue" value="NodeB.Distrib.Storage"/>

    Distributor C:
      <add key="DataInputQueue" value="NodeC.Distrib.Data"/>
      <add key="ControlInputQueue" value="NodeC.Distrib.Control"/>
      <add key="StorageQueue" value="NodeC.Distrib.Storage"/>

    I'm testing with 2 instances of each node, and the problem seems to come up in the middle, at Node B. There are basically 2 things that might happen:

    1. Both instances of Node B report that they are subscribing to EventA, and also that NodeC.Distrib.Data@MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great.
    2. Both instances of Node B report that they are subscribing to EventA; however, one worker says NodeC.Distrib.Data@MYCOMPUTER is subscribing TWICE, while the other worker does not mention it.

    In the second case, which seems to be controlled only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well. If the "underachiever" processes EventA, then the publish of EventB has no subscribers and the workflow dies.

    So, my questions:

    1. Is this kind of setup possible? Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup.
    2. Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic-cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced? Then the load-balanced nodes could simply reply back to the central broker, which seems easier. On the other hand, that seems at odds with the decentralization that is NServiceBus's strength. And if this is the answer, and the long-running process's "done" event is a reply, how do you keep the Publish that enables later extensibility on published events?

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
    Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

    Oracle BI EE Query Cycle

    The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source or, most of the time, are spread across various types of sources.

    Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analyses by selecting metrics and dimensional attributes, and possibly creating additional calculations.

    The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

    The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

    Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan, and rewrites the LSQL into fully detailed SQL or other languages suitable for querying the physical sources. For example, the Logical SQL below was rewritten into the physical SQL for an Oracle 11g database, shown after it.
    Logical SQL:

      SELECT
        "D0 Time"."T02 Per Name Month" saw_0,
        "D4 Product"."P01 Product" saw_1,
        "F2 Units"."2-01 Billed Qty (Sum All)" saw_2
      FROM "Sample Sales"
      ORDER BY saw_0, saw_1

    Physical SQL:

      WITH SAWITH0 AS (
        select T986.Per_Name_Month as c1, T879.Prod_Dsc as c2,
               sum(T835.Units) as c3, T879.Prod_Key as c4
        from Product T879 /* A05 Product */ ,
             Time_Mth T986 /* A08 Time Mth */ ,
             FactsRev T835 /* A11 Revenue (Billed Time Join) */
        where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
        group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
      )
      select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
      from SAWITH0
      order by c1, c2

    Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

    The Structure of the Common Enterprise Information Model (CEIM)

    The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below. Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. That's the left-to-right flow in the diagram below. When the results come back from the data source queries, the right-to-left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

    Business Model

    Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models. Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
    It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next. The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model.

    Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time. This is important for federation, where a given logical object can map to several different physical objects in different databases. It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products.

    Your requirements process with your user community will mostly affect the business model. This is where you will define most of the things they specifically ask for, such as metric definitions. For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

    Physical Model

    The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time.

    Each physical data source will have its own physical model, or "database" object, in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape.

    To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality.

    The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the second query added a more detailed dimension-level column - everything else was the same.

    Mappings

    Within the Business Model and Mapping layer, the mappings provide the binding from each logical column and join in the dimensional business model to each of the objects that can provide its data in the physical layer.
    When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM.

    Presentation

    You might think of the presentation layer as a set of very simple relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns. For business users, presentation services interprets these as subject areas, folders and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

    The purpose of the presentation layer is to specialize the business model for different categories of users. Based on a user's role, they will be restricted to specific subject areas, tables and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization.

    For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables. The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

    Other Model Objects

    There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user-session basis. Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

    The Query Factory

    At this point, you should have a grasp on the query factory concept. When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article
