Search Results

Search found 6438 results on 258 pages for 'layer groups'.


  • Add web service reference with the classes in a different file

    - by ArunDhaJ
    Hi, when I add a web service as a service reference in my .NET project, it creates a service folder within "Service References". All the interfaces and classes are contained within that service folder. I want to know how to split out the interface's methods and classes. Specifically, I want the web service reference to import classes defined in a different DLL, because of my design constraints. I have a 3-layered application, of which the third layer is the communication layer, which holds all web service references; the second is the business layer and the first is the presentation layer. If the class declarations live in layer 3 and I access those classes from the presentation layer, that is logically a cross-layer access violation. Instead, I want a separate project which holds only the class declarations and is used in all 3 layers. I didn't face any problems achieving this with layer 1 and layer 2, but I'm not sure how to make the communication layer use this common declarations DLL. Your suggestions would help me design my application better. Thanks in advance. Regards, ArunDhaJ
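    One approach worth noting here: WCF proxy generation supports exactly this through type reuse. If the common declarations live in a shared assembly (call it Common.Contracts.dll, a placeholder name), the communication layer can point the proxy generator at it, either via the "Reuse types in specified referenced assemblies" option in the Configure Service Reference dialog, or via svcutil's /reference switch, so the generated proxy uses the shared classes instead of regenerating them. A sketch of the command line, with a placeholder service URL:

        svcutil http://server/MyService.svc?wsdl /reference:Common.Contracts.dll /out:ServiceProxy.cs

    This assumes the service reference is WCF-based; the data contracts in the shared assembly must match those exposed by the service for reuse to kick in.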


  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system)

    When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you can easily find them again, yet they can be left inside the complete entity model altogether so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate from each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all O/R mapper frameworks LLBLGen Pro supports.

    Situation one: grouping entities in a single model

    This situation is common for entity models which are dense, i.e. many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to); however, it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has a schema for each sub-group, yet most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism, which is what this situation is all about: we group the elements for visual purposes; it has no real meaning for the model nor for the generated code.

    Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the schema a reverse-engineered element is in as the group it will be placed in. This is convenient if you have already categorized tables/views in schemas, as is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which guides us through selecting the elements whose relational model data should be retrieved, which we can later use to reverse engineer an entity model. The first step, after specifying which database server to connect to, is to select these elements. Below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them.

    After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. We select the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales' in the catalog explorer, then right-click one of them and, from the context menu, select Reverse engineer Tables to Entity Definitions.... This brings up the dialog below. We check all checkboxes in one go by checking the checkbox at the top, to mark them all to be added to the project.
    As you can see, LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen, so we'll simply click Add to Project. This gives the following result (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped.

    Just to see how dense this model is, I've expanded the relationships of Employee: it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to being used as a single model resulting in a single code base; however, it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier.

    Now let's look at another situation, namely one where we work with a single database while we want to have multiple models and, for each model, a separate code base.

    Situation two: grouping entities in separate models within the same project

    To get rid of the entities so we can see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects.

    In the catalog explorer, select Person and Production, right-click them and select again Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added, and click Add to Project. We get two groups, as expected; however, this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases.

    To see this variant of the grouping feature, with the groups as separate projects, in action, we'll generate code from the project with the two groups we just created: select from the main menu: Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use and the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected, the code generator has simply generated two code bases, one for Person and one for Production.

    The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class.

        //...
        using System.Xml.Serialization;
        using AdventureWorks.Person;
        using AdventureWorks.Person.HelperClasses;
        using AdventureWorks.Person.FactoryClasses;
        using AdventureWorks.Person.RelationClasses;
        using SD.LLBLGen.Pro.ORMSupportClasses;

        namespace AdventureWorks.Person.EntityClasses
        {
            //...
            /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary>
            [Serializable]
            public partial class AddressEntity : CommonEntityBase
            //...
    The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so by changing one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity in a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models, which makes maintenance easier than when you'd have a separate project for each group, each with its own relational model data.

    Conclusion

    LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.
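    Since the group name becomes part of the namespace, the two generated code bases can indeed sit side by side in one solution. A minimal consuming sketch (AddressEntity comes from the snippet above; ProductEntity and the parameterless constructors are assumptions, not output copied from the generator):

        using AdventureWorks.Person.EntityClasses;
        using AdventureWorks.Production.EntityClasses;

        class GroupedModelsDemo
        {
            static void Main()
            {
                // each group compiles into its own namespace, so both models coexist in one solution
                var address = new AddressEntity();   // from the Person code base
                var product = new ProductEntity();   // from the Production code base (name assumed)
            }
        }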


  • Custom Content Pipeline with Automatic Serialization Load Error

    - by Direweasel
    I'm running into this error:

        Error loading "desert". Cannot find type TiledLib.MapContent, TiledLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null.
        at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.InstantiateTypeReader(String readerTypeName, ContentReader contentReader, ContentTypeReader& reader)
        at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.GetTypeReader(String readerTypeName, ContentReader contentReader, List`1& newTypeReaders)
        at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.ReadTypeManifest(Int32 typeCount, ContentReader contentReader)
        at Microsoft.Xna.Framework.Content.ContentReader.ReadHeader()
        at Microsoft.Xna.Framework.Content.ContentReader.ReadAsset[T]()
        at Microsoft.Xna.Framework.Content.ContentManager.ReadAsset[T](String assetName, Action`1 recordDisposableObject)
        at Microsoft.Xna.Framework.Content.ContentManager.Load[T](String assetName)
        at TiledTest.Game1.LoadContent() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 51
        at Microsoft.Xna.Framework.Game.Initialize()
        at TiledTest.Game1.Initialize() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 39
        at Microsoft.Xna.Framework.Game.RunGame(Boolean useBlockingRun)
        at Microsoft.Xna.Framework.Game.Run()
        at TiledTest.Program.Main(String[] args) in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Program.cs:line 15

    when trying to run the game. This is a basic demo to try to utilize a separate project library called TiledLib. I have four projects overall:

    TiledLib (C# Class Library)
    TiledTest (Windows Game)
    TiledTestContent (Content)
    TMX CP Ext (Content Pipeline Extension Library)

    TiledLib contains MapContent, which is throwing the error; however, I believe this may just be a generic error with a deeper root problem. TMX CP Ext contains one file, MapProcessor.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;
        using Microsoft.Xna.Framework.Content.Pipeline;
        using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
        using Microsoft.Xna.Framework.Content.Pipeline.Processors;
        using Microsoft.Xna.Framework.Content;
        using TiledLib;

        namespace TMX_CP_Ext
        {
            // Each tile has a texture, source rect, and sprite effects.
            [ContentSerializerRuntimeType("TiledTest.Tile, TiledTest")]
            public class DemoMapTileContent
            {
                public ExternalReference<Texture2DContent> Texture;
                public Rectangle SourceRectangle;
                public SpriteEffects SpriteEffects;
            }

            // For each layer, we store the size of the layer and the tiles.
            [ContentSerializerRuntimeType("TiledTest.Layer, TiledTest")]
            public class DemoMapLayerContent
            {
                public int Width;
                public int Height;
                public DemoMapTileContent[] Tiles;
            }

            // For the map itself, we just store the size, tile size, and a list of layers.
            [ContentSerializerRuntimeType("TiledTest.Map, TiledTest")]
            public class DemoMapContent
            {
                public int TileWidth;
                public int TileHeight;
                public List<DemoMapLayerContent> Layers = new List<DemoMapLayerContent>();
            }

            [ContentProcessor(DisplayName = "TMX Processor - TiledLib")]
            public class MapProcessor : ContentProcessor<MapContent, DemoMapContent>
            {
                public override DemoMapContent Process(MapContent input, ContentProcessorContext context)
                {
                    // build the textures
                    TiledHelpers.BuildTileSetTextures(input, context);

                    // generate source rectangles
                    TiledHelpers.GenerateTileSourceRectangles(input);

                    // now build our output, first by just copying over some data
                    DemoMapContent output = new DemoMapContent
                    {
                        TileWidth = input.TileWidth,
                        TileHeight = input.TileHeight
                    };

                    // iterate all the layers of the input
                    foreach (LayerContent layer in input.Layers)
                    {
                        // we only care about tile layers in our demo
                        TileLayerContent tlc = layer as TileLayerContent;
                        if (tlc != null)
                        {
                            // create the new layer
                            DemoMapLayerContent outLayer = new DemoMapLayerContent
                            {
                                Width = tlc.Width,
                                Height = tlc.Height,
                            };

                            // we need to build up our tile list now
                            outLayer.Tiles = new DemoMapTileContent[tlc.Data.Length];
                            for (int i = 0; i < tlc.Data.Length; i++)
                            {
                                // get the ID of the tile
                                uint tileID = tlc.Data[i];

                                // use that to get the actual index as well as the SpriteEffects
                                int tileIndex;
                                SpriteEffects spriteEffects;
                                TiledHelpers.DecodeTileID(tileID, out tileIndex, out spriteEffects);

                                // figure out which tile set has this tile index in it and grab
                                // the texture reference and source rectangle.
                                ExternalReference<Texture2DContent> textureContent = null;
                                Rectangle sourceRect = new Rectangle();

                                // iterate all the tile sets
                                foreach (var tileSet in input.TileSets)
                                {
                                    // if our tile index is in this set
                                    if (tileIndex - tileSet.FirstId < tileSet.Tiles.Count)
                                    {
                                        // store the texture content and source rectangle
                                        textureContent = tileSet.Texture;
                                        sourceRect = tileSet.Tiles[(int)(tileIndex - tileSet.FirstId)].Source;

                                        // and break out of the foreach loop
                                        break;
                                    }
                                }

                                // now insert the tile into our output
                                outLayer.Tiles[i] = new DemoMapTileContent
                                {
                                    Texture = textureContent,
                                    SourceRectangle = sourceRect,
                                    SpriteEffects = spriteEffects
                                };
                            }

                            // add the layer to our output
                            output.Layers.Add(outLayer);
                        }
                    }

                    // return the output object. because we have ContentSerializerRuntimeType attributes on our
                    // objects, we don't need a ContentTypeWriter and can just use the automatic serialization.
                    return output;
                }
            }
        }

    TiledLib contains a large number of files, including MapContent.cs:

        using System;
        using System.Collections.Generic;
        using System.Globalization;
        using System.Xml;
        using Microsoft.Xna.Framework.Content.Pipeline;

        namespace TiledLib
        {
            public enum Orientation : byte
            {
                Orthogonal,
                Isometric,
            }

            public class MapContent
            {
                public string Filename;
                public string Directory;
                public string Version = string.Empty;
                public Orientation Orientation;
                public int Width;
                public int Height;
                public int TileWidth;
                public int TileHeight;
                public PropertyCollection Properties = new PropertyCollection();
                public List<TileSetContent> TileSets = new List<TileSetContent>();
                public List<LayerContent> Layers = new List<LayerContent>();

                public MapContent(XmlDocument document, ContentImporterContext context)
                {
                    XmlNode mapNode = document["map"];
                    Version = mapNode.Attributes["version"].Value;
                    Orientation = (Orientation)Enum.Parse(typeof(Orientation), mapNode.Attributes["orientation"].Value, true);
                    Width = int.Parse(mapNode.Attributes["width"].Value, CultureInfo.InvariantCulture);
                    Height = int.Parse(mapNode.Attributes["height"].Value, CultureInfo.InvariantCulture);
                    TileWidth = int.Parse(mapNode.Attributes["tilewidth"].Value, CultureInfo.InvariantCulture);
                    TileHeight = int.Parse(mapNode.Attributes["tileheight"].Value, CultureInfo.InvariantCulture);

                    XmlNode propertiesNode = document.SelectSingleNode("map/properties");
                    if (propertiesNode != null)
                    {
                        Properties = new PropertyCollection(propertiesNode, context);
                    }

                    foreach (XmlNode tileSet in document.SelectNodes("map/tileset"))
                    {
                        if (tileSet.Attributes["source"] != null)
                        {
                            TileSets.Add(new ExternalTileSetContent(tileSet, context));
                        }
                        else
                        {
                            TileSets.Add(new TileSetContent(tileSet, context));
                        }
                    }

                    foreach (XmlNode layerNode in document.SelectNodes("map/layer|map/objectgroup"))
                    {
                        LayerContent layerContent;

                        if (layerNode.Name == "layer")
                        {
                            layerContent = new TileLayerContent(layerNode, context);
                        }
                        else if (layerNode.Name == "objectgroup")
                        {
                            layerContent = new MapObjectLayerContent(layerNode, context);
                        }
                        else
                        {
                            throw new Exception("Unknown layer name: " + layerNode.Name);
                        }

                        // Layer names need to be unique for our lookup system, but Tiled
                        // doesn't require unique names.
                        string layerName = layerContent.Name;
                        int duplicateCount = 2;

                        // if a layer already has the same name...
                        if (Layers.Find(l => l.Name == layerName) != null)
                        {
                            // figure out a layer name that does work
                            do
                            {
                                layerName = string.Format("{0}{1}", layerContent.Name, duplicateCount);
                                duplicateCount++;
                            } while (Layers.Find(l => l.Name == layerName) != null);

                            // log a warning for the user to see
                            context.Logger.LogWarning(string.Empty, new ContentIdentity(), "Renaming layer \"{1}\" to \"{2}\" to make a unique name.", layerContent.Type, layerContent.Name, layerName);

                            // save that name
                            layerContent.Name = layerName;
                        }

                        Layers.Add(layerContent);
                    }
                }
            }
        }

    I'm lost as to why this is failing. Thoughts?

    -- EDIT --

    After playing with it a bit, I would think it has something to do with referencing the projects. I'm already referencing TiledLib within my main Windows project (TiledTest). However, this doesn't seem to make a difference. I can place the DLL generated from the TiledLib project into the debug folder of TiledTest, and this causes it to generate a different error:

        Error loading "desert". Cannot find ContentTypeReader for Microsoft.Xna.Framework.Content.Pipeline.ExternalReference`1[Microsoft.Xna.Framework.Content.Pipeline.Graphics.Texture2DContent].
        at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.GetTypeReader(Type targetType, ContentReader contentReader)
        at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.GetTypeReader(Type targetType)
        at Microsoft.Xna.Framework.Content.ReflectiveReaderMemberHelper..ctor(ContentTypeReaderManager manager, FieldInfo fieldInfo, PropertyInfo propertyInfo, Type memberType, Boolean canWrite)
        at Microsoft.Xna.Framework.Content.ReflectiveReaderMemberHelper.TryCreate(ContentTypeReaderManager manager, Type declaringType, FieldInfo fieldInfo)
        at Microsoft.Xna.Framework.Content.ReflectiveReader`1.Initialize(ContentTypeReaderManager manager)
        at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.ReadTypeManifest(Int32 typeCount, ContentReader contentReader)
        at Microsoft.Xna.Framework.Content.ContentReader.ReadHeader()
        at Microsoft.Xna.Framework.Content.ContentReader.ReadAsset[T]()
        at Microsoft.Xna.Framework.Content.ContentManager.ReadAsset[T](String assetName, Action`1 recordDisposableObject)
        at Microsoft.Xna.Framework.Content.ContentManager.Load[T](String assetName)
        at TiledTest.Game1.LoadContent() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 51
        at Microsoft.Xna.Framework.Game.Initialize()
        at TiledTest.Game1.Initialize() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 39
        at Microsoft.Xna.Framework.Game.RunGame(Boolean useBlockingRun)
        at Microsoft.Xna.Framework.Game.Run()
        at TiledTest.Program.Main(String[] args) in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Program.cs:line 15

    This is all incredibly frustrating, as the demo doesn't appear to have any special linking properties. The TiledLib I am utilizing is from Nick Gravelyn, and can be found here: https://bitbucket.org/nickgravelyn/tiledlib. The demo it comes with works fine, and yet in recreating it I always run into this error.
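    A hedged note on the mechanics, since the [ContentSerializerRuntimeType] attributes drive everything here: automatic serialization writes the asset with the runtime type names given in those attributes, and the reflective reader then expects classes named TiledTest.Tile, TiledTest.Layer and TiledTest.Map in the game assembly, with matching members in matching order (pipeline-only types are swapped for their runtime counterparts, e.g. ExternalReference<Texture2DContent> loads as Texture2D). A sketch of what the runtime side would need to contain; these class bodies are inferred from the *Content types above, not taken from the demo:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        namespace TiledTest
        {
            // runtime counterparts of the Demo*Content pipeline types; member order must match
            public class Tile
            {
                public Texture2D Texture;          // pairs with ExternalReference<Texture2DContent>
                public Rectangle SourceRectangle;
                public SpriteEffects SpriteEffects;
            }

            public class Layer
            {
                public int Width;
                public int Height;
                public Tile[] Tiles;
            }

            public class Map
            {
                public int TileWidth;
                public int TileHeight;
                public List<Layer> Layers = new List<Layer>();
            }
        }

    One common cause of the original "Cannot find type TiledLib.MapContent" message, worth checking before anything else: if the asset's Content Processor property in the content project isn't set to "TMX Processor - TiledLib", the map is serialized with its pipeline types, and at run time the loader goes looking for TiledLib.MapContent exactly as the stack trace shows.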


  • When to introduce an application services tier in an n-tier application

    - by user20358
    I am developing a web-based application whose primary objective is to fetch data from the database, display it on the UI, take in user inputs and write them back to the database. The application is not going to be doing any industrial-strength algorithm crunching, but it will be receiving a very high number of hits at peak times (described below), which will change throughout the day. The layers are your typical Presentation, Business, Data. The data layer is taken care of by the database server. The business layer will contain the DAL component to access the database server over TCP. The choices I have for separating these layers into tiers are:

    1. The presentation and business layers are kept on the same tier.
    2. The presentation layer is on a separate tier by itself and the business layer is on a separate tier by itself.

    In the case of choice 2, the business layer will be accessed by the presentation layer using a WCF service, either over HTTP or TCP (see the sketch below). I don't see any heavy processing being done on the business layer, so I am leaning towards option 1 above. For the same reason, I also feel that adding a new tier will only introduce network latency. However, in terms of scalability, in case I need to scale up or scale out, which is the better way to go? This application will need to be able to support up to 6 million users an hour. There will be a reasonable amount of data in each user session, storing the user's preferences and other details. I will be using page-level caching as well.
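    If option 2 ever becomes necessary, the seam between the tiers would typically be a coarse-grained WCF contract in front of the business layer. A minimal sketch, with invented names, of what that contract could look like:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class CustomerDto   // hypothetical DTO crossing the tier boundary
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        [ServiceContract]
        public interface ICustomerService   // hypothetical business-layer facade
        {
            [OperationContract]
            CustomerDto GetCustomer(int id);
        }

    Designing the presentation layer against such an interface keeps option 1 cheap today (reference the business assembly directly) while leaving option 2 open later (obtain the same interface through a ChannelFactory or service reference), so the tier split becomes a composition change rather than a rewrite.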


  • MapReduce in DryadLINQ and PLINQ

    - by JoshReuben
    MapReduce

    See http://en.wikipedia.org/wiki/Mapreduce

    The MapReduce pattern aims to handle large-scale computations across a cluster of servers, often involving massive amounts of data. "The computation takes a set of input key/value pairs, and produces a set of output key/value pairs." The developer expresses the computation as two Func delegates: Map and Reduce.

    Map - takes a single input pair and produces a set of intermediate key/value pairs. The MapReduce function groups results by key and passes them to the Reduce function.
    Reduce - accepts an intermediate key I and a set of values for that key. It merges these values together to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation. The intermediate values are supplied to the user's Reduce function via an iterator.

    The canonical MapReduce example: counting word frequency in a text file.

    MapReduce using DryadLINQ

    See http://research.microsoft.com/en-us/projects/dryadlinq/ and http://connect.microsoft.com/Dryad

    DryadLINQ provides a simple and straightforward way to implement MapReduce operations. The implementation has two primary components: a Pair structure, which serves as a data container, and a MapReduce method, which counts word frequency and returns the top words.

    The Pair structure has two properties: Word, a string that holds a word or key, and Count, an int that holds the word count. The structure also overrides ToString to simplify printing the results. The following example shows the Pair implementation.

        public struct Pair
        {
            private string word;
            private int count;

            public Pair(string w, int c)
            {
                word = w;
                count = c;
            }

            public int Count { get { return count; } }
            public string Word { get { return word; } }

            public override string ToString()
            {
                return word + ":" + count.ToString();
            }
        }

    The MapReduce function gets the results; the input data could be partitioned and distributed across the cluster. It:

    1. Creates a DryadTable<LineRecord> object, inputTable, to represent the lines of input text. For partitioned data, use GetPartitionedTable<T> instead of GetTable<T> and pass the method a metadata file.
    2. Applies the SelectMany operator to inputTable to transform the collection of lines into a collection of words. The String.Split method converts each line into a collection of words. SelectMany concatenates the collections created by Split into a single IQueryable<string> collection named words, which represents all the words in the file.
    3. Performs the Map part of the operation by applying GroupBy to the words object. The GroupBy operation groups elements with the same key, which is defined by the selector delegate. This creates a higher-order collection whose elements are groups. In this case, the delegate is an identity function, so the key is the word itself and the operation creates a groups collection that consists of groups of identical words.
    4. Performs the Reduce part of the operation by applying Select to groups. This operation reduces the groups of words from step 3 to an IQueryable<Pair> collection named counts that represents the unique words in the file and how many instances there are of each. Each key value in groups represents a unique word, so Select creates one Pair object for each unique word. IGrouping.Count returns the number of items in the group, so each Pair object's Count member is set to the number of instances of the word.
    5. Applies OrderByDescending to counts. This operation sorts the input collection in descending order of frequency and creates an ordered collection named ordered.
    6. Applies Take to ordered to create an IQueryable<Pair> collection named top, which contains the k most common words in the input file and their frequency (100 in the test below). The test then uses the Pair object's ToString implementation to print the top one hundred words and their frequency.

        public static IQueryable<Pair> MapReduce(string directory, string fileName, int k)
        {
            DryadDataContext ddc = new DryadDataContext("file://" + directory);
            DryadTable<LineRecord> inputTable = ddc.GetTable<LineRecord>(fileName);

            IQueryable<string> words = inputTable.SelectMany(x => x.line.Split(' '));
            IQueryable<IGrouping<string, string>> groups = words.GroupBy(x => x);
            IQueryable<Pair> counts = groups.Select(x => new Pair(x.Key, x.Count()));
            IQueryable<Pair> ordered = counts.OrderByDescending(x => x.Count);
            IQueryable<Pair> top = ordered.Take(k);

            return top;
        }

    To test:

        IQueryable<Pair> results = MapReduce(@"c:\DryadData\input", "TestFile.txt", 100);
        foreach (Pair words in results)
            Debug.Print(words.ToString());

    Note: DryadLINQ applications can use a more compact way to represent the query:

        return inputTable
            .SelectMany(x => x.line.Split(' '))
            .GroupBy(x => x)
            .Select(x => new Pair(x.Key, x.Count()))
            .OrderByDescending(x => x.Count)
            .Take(k);

    MapReduce using PLINQ

    The pattern is relevant even for a single multi-core machine, however. We can write our own PLINQ MapReduce in a few lines: the Map function takes a single input value and returns a set of mapped values → LINQ's SelectMany operator. These are then grouped according to an intermediate key → LINQ's GroupBy operator. The Reduce function takes each intermediate key and a set of values for that key, and produces any number of outputs per key → LINQ's SelectMany again. We can put all of this together to implement MapReduce in PLINQ that returns a ParallelQuery<T>:

        public static ParallelQuery<TResult> MapReduce<TSource, TMapped, TKey, TResult>(
            this ParallelQuery<TSource> source,
            Func<TSource, IEnumerable<TMapped>> map,
            Func<TMapped, TKey> keySelector,
            Func<IGrouping<TKey, TMapped>, IEnumerable<TResult>> reduce)
        {
            return source
                .SelectMany(map)
                .GroupBy(keySelector)
                .SelectMany(reduce);
        }

    Here the map function takes in an input document and outputs all of the words in that document. The grouping phase groups all of the identical words together, such that the reduce phase can then count the words in each group and output a word/count pair for each grouping:

        var files = Directory.EnumerateFiles(dirPath, "*.txt").AsParallel();
        var counts = files.MapReduce(
            path => File.ReadLines(path).SelectMany(line => line.Split(delimiters)),
            word => word,
            group => new[] { new KeyValuePair<string, int>(group.Key, group.Count()) });
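    The PLINQ snippet above references a delimiters array and a dirPath it never defines; below is a hedged, self-contained usage sketch that fills those in. The delimiter set is one plausible choice, the input path reuses the DryadLINQ test folder, and the MapReduce extension method above is assumed to be compiled into a static class in scope:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class WordCountDemo
        {
            static void Main()
            {
                // delimiters is never defined in the post; this is one plausible definition
                char[] delimiters = { ' ', '\t', ',', '.', ';', ':' };

                var files = Directory.EnumerateFiles(@"c:\DryadData\input", "*.txt").AsParallel();
                var counts = files.MapReduce(
                    path => File.ReadLines(path).SelectMany(line => line.Split(delimiters)),
                    word => word,
                    group => new[] { new KeyValuePair<string, int>(group.Key, group.Count()) });

                // counts is a ParallelQuery<KeyValuePair<string, int>>; print the ten most frequent words
                foreach (var pair in counts.OrderByDescending(p => p.Value).Take(10))
                    Console.WriteLine("{0}: {1}", pair.Key, pair.Value);
            }
        }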


  • Should I set the cells in a TiledMap to null when my player hits them?

    - by Vishal Kumar
    I am making a tile-based game using libgdx. I took the idea from the SuperKoalio platformer demo by Mario Zechner. When I wanted to implement collectables in my game, I simply drew the coins using the Tiled map editor. When my player hits one, I set that cell to null. Someone on this site once suggested I not do so... never use null. I agreed. What would be another way? If I use layer.setCell(x, y, someOtherCell) to set the cell to any other cell, even a transparent one, my player seems to be stopped by an invisible object/hurdle. This is my code:

        for (Rectangle tile : tiles) {
            if (koalaRect.overlaps(tile)) {
                TiledMapTileLayer layer = (TiledMapTileLayer) map.getLayers().get(1);
                try {
                    type = layer.getCell((int) tile.x, (int) tile.y).getTile().getProperties().get("tileType").toString();
                } catch (Exception e) {
                    System.out.print("Exception in Tiles Property" + e);
                    type = "nonbreakable";
                }

                // Let us destroy this cell
                if (("award".equals(type))) {
                    layer.setCell((int) tile.x, (int) tile.y, null);
                    listener.coin();
                    score += 100;
                    test = "" + layer.getCell(0, 0).getTile().getProperties().get("tileType");
                }

                // DOING THIS GIVES A BAD EFFECT
                if (("killer".equals(type))) {
                    //player.health--;
                    //layer.setCell((int) tile.x, (int) tile.y, layer.getCell(20,0));
                }

                // we actually reset the player y-position here
                // so it is just below/above the tile we collided with
                // this removes bouncing :)
                if (player.velocity.y > 0) {
                    player.position.y = (tile.y - Player.height);
                }
            }
        }

    Is this the right approach? Or should I create a separate Sprite class called Coin?


  • MVC + 3-tier: where do ViewModels come into play?

    - by mikhairu
    I'm designing a 3-tiered application using ASP.NET MVC 4. I used the following resources as a reference: CodeProject: MVC + N-tier + Entity Framework, and Separating data access in ASP.NET MVC. I have the following design so far.

    Presentation Layer (PL) (main MVC project, where the M of MVC was moved to the Data Access Layer): MyProjectName.Main - Views/, Controllers/, ...
    Business Logic Layer (BLL): MyProjectName.BLL - ViewModels/, ProjectServices/, ...
    Data Access Layer (DAL): MyProjectName.DAL - Models/, Repositories.EF/, Repositories.Dapper/, ...

    Now, PL references BLL and BLL references DAL. This way a lower layer does not depend on the one above it. In this design PL invokes a service of the BLL. PL can pass a ViewModel to BLL and BLL can pass a ViewModel back to PL. Also, BLL invokes the DAL, and the DAL can return a Model back to BLL. BLL can in turn build a ViewModel and return it to PL. Up to now this pattern was working for me. However, I've run into a problem where some of my ViewModels require joins on several entities. In the plain MVC approach, in the controller I used a LINQ query to do the joins and then select new MyViewModel() { ... }. But now, in the DAL I do not have access to where the ViewModels are defined (in the BLL). This means I cannot do joins in the DAL and return the result to the BLL. It seems I have to do separate queries in the DAL (instead of joins in one query) and the BLL would then use their results to build a ViewModel. This is very inconvenient, but I don't think I should be exposing the DAL to ViewModels. Any ideas how I can solve this dilemma? Thanks.
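    One way out of the dilemma, sketched under the post's own layering with invented names: let the DAL own the join but return a flat result DTO defined next to the Models (so no ViewModel leaks downward), and let the BLL map that DTO onto the ViewModel:

        using System.Collections.Generic;
        using System.Linq;

        // DAL: a flat projection for the join result - deliberately not a ViewModel
        public class OrderWithCustomerDto   // hypothetical
        {
            public int OrderId { get; set; }
            public string CustomerName { get; set; }
        }

        public class OrderRepository   // hypothetical EF-style repository
        {
            private readonly MyDbContext context = new MyDbContext();   // assumed EF context

            public IList<OrderWithCustomerDto> GetOrdersWithCustomers()
            {
                // the join executes as one query inside the DAL; only the dumb DTO crosses the boundary
                return (from o in context.Orders
                        join c in context.Customers on o.CustomerId equals c.Id
                        select new OrderWithCustomerDto { OrderId = o.Id, CustomerName = c.Name })
                       .ToList();
            }
        }

    The BLL then copies the DTO fields into the ViewModel it returns to the PL. The cost is one small mapping step; the gain is that joins stay single queries and the DAL still knows nothing about ViewModels.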


  • Triggering Data Changes in N-Tier

    - by Ryan Kinal
    I've been studying n-tier architectures as of late, particularly in VB.NET with Entity Framework and/or LINQ to SQL. I understand the basic concepts, but have been wondering about best practices in regard to triggering CRUD-type operations from user input/action. So, the architecture looks something like the following:

    [presentation layer] - [business layer] - [data layer] - (database)

    Getting information from the database into the presentation layer is simple and abstracted. It's just a matter of instantiating a new object from the business layer, which in turn uses the data layer to get at the correct information. However, saving (updating and inserting) and deleting seem to require particular APIs on the relevant business objects. I have to assume this is standard practice, unless a business object will save itself on various operations (which seems inefficient), or on disposal (which seems like it just wouldn't work, or may be unwieldy and unreliable). Should my "savable" business objects all implement a particular ISavable or IDatabaseObject interface? Is this a recognized (anti-)pattern? Are there other, better patterns I should be using that I'm just unaware of? The TL;DR question, I suppose, is: how does the presentation layer trigger database changes?
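    For what it's worth, one widely used pattern (a sketch, not the only answer): keep persistence off the business objects entirely and give the business layer an explicit service API that the presentation layer calls; the service delegates to a repository abstraction over the data layer. All names below are illustrative:

        public class Customer { /* domain fields elided */ }

        public interface ICustomerRepository   // data-layer abstraction
        {
            void Save(Customer customer);
        }

        public class CustomerService   // business-layer entry point
        {
            private readonly ICustomerRepository repository;

            public CustomerService(ICustomerRepository repository)
            {
                this.repository = repository;
            }

            public void Save(Customer customer)
            {
                // validation and business rules run here, then the repository persists
                repository.Save(customer);
            }
        }

    The presentation layer then triggers changes by calling customerService.Save(editedCustomer) from a button handler or controller action. With Entity Framework or LINQ to SQL underneath, the repository's Save can simply mark state on the context and a unit of work commits everything via SaveChanges()/SubmitChanges(), which avoids sprinkling an ISavable interface across every entity.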


  • Is It Incorrect to Make Domain Objects Aware of The Data Access Layer?

    - by Noah Goodrich
    I am currently working on rewriting an application to use Data Mappers that completely abstract the database from the domain layer. However, I am now wondering which is the better approach to handling relationships between domain objects:

    1. Call the necessary find() method from the related data mapper directly within the domain object.
    2. Write the relationship logic into the native data mapper (which is what the examples tend to do in PoEAA) and then call the native data mapper function within the domain object.

    Either way, it seems to me that in order to preserve the 'Fat Model, Skinny Controller' mantra, the domain objects have to be aware of the data mappers (whether their own, or by having access to the other mappers in the system). Additionally, it seems that option 2 unnecessarily complicates the data access layer, as it spreads table access logic across multiple data mappers instead of confining it to a single data mapper. So, is it incorrect to make the domain objects aware of the related data mappers and to call data mapper functions directly from the domain objects?

    Update: These are the only two solutions that I can envision to handle the issue of relations between domain objects. Any example showing a better method would be welcome.
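    A third option worth naming, since it keeps the domain objects unaware of any mapper type: have the mapper inject the relationship lookup as a deferred callback at hydration time, so the domain object depends only on a delegate. This is essentially the Lazy Load pattern from PoEAA; a sketch in C# with invented names:

        using System;
        using System.Collections.Generic;

        public class LineItem { /* fields elided */ }

        public class Order
        {
            private readonly Func<IList<LineItem>> loadItems;   // supplied by the data mapper
            private IList<LineItem> items;

            public Order(Func<IList<LineItem>> loadItems)
            {
                this.loadItems = loadItems;
            }

            public IList<LineItem> Items
            {
                // resolved on first access; the domain object never names a mapper
                get { return items ?? (items = loadItems()); }
            }
        }

    The mapper wires it up when building the object, e.g. new Order(() => lineItemMapper.FindForOrder(orderId)), so the relationship logic lives in the mapping layer while the model stays fat and mapper-free.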


  • Complete Active Directory redesign and GPO application

    - by Wolfgang Kuehne
    after much testing and hundreds of tries and hours invested, I decided to consult you experts here.

    Overview: I want to apply a GPO to our users which will add a specific site to the Trusted Sites in the Internet Explorer settings for all users. However, the more I try, the more confusing the results become. The GPO is applied either to one group of users or to another one. Finally, I came to the conclusion that this weird behavior is caused by the poor organization of users and groups in Active Directory. As such, I want to attack the problem at the root: redesign the Active Directory users and groups.

    Scenario: There is one domain controller, and we use Terminal Services (so there is a terminal server as well). Users usually log on to the terminal server using Remote Desktop to perform their daily tasks. I would classify the users in the following way:

    IT: Admins, Software Development
    Business: Administration, Management

    The current structure of the Active Directory users and groups is a result of the previous IT management. The company used Small Business Server, which created multiple default user groups and containers. Unfortunately, the guys working before me left no documentation at all. Now, as I inherit this structure, I am in no man's land, with no idea which direction to head first. As you can see, the Active Directory users and groups have become a bit confusing. There is no SBS anymore, but when migrating from SBS to the current Windows Server 2008 R2 environment, the guys before me simply copied the same structure.

    The real question: Where should I start cleaning, ensuring that I won't totally break the current infrastructure? What is a sensible organization for the scenario I have explained above?

    Possibly useful info on the current structure:

    Computers folder: contains the Terminal Services Computers user group. Members: the TerminalServer computer located at Server -> Terminalserver OU. Member of: NONE.
    Foreign Security Principals: EMPTY
    Managed Service Accounts: EMPTY
    Microsoft Exchange Security Groups: not sure if needed; our email is administered by an external service provider
    Distribution Groups: not sure if needed
    Security Groups: there are a couple of groups which are needed
    SBS users: contains all the users
    Terminalserver: contains only the TerminalServer machine


  • How do I list distribution groups and their members inside an OU using AD or Exchange 2010

    - by wraak
    our entire domain has thousands of distribution groups. While I can use the script referenced here: How to get a list of all Distribution Lists and their Members in Exchange 2007? to pull all distribution groups and their members, it would be too hard to filter through all the results. I particularly need to pull either (a) (preferred) all groups (both distribution and security) and their members inside an OU (this particular OU contains over a hundred groups), or (b) all groups and members matching a name starting with exampl*. dsquery | dsget looks like it could almost serve that purpose; however, when I did:

        dsquery group "OU=my-department,DC=blah,DC=blahblah,DC=com" -name * | dsget group -members -expand > c:\my-department.txt

    it displays only the members, without showing which group they belong to. The output I need should have: group name, members, and potentially expanded sub-groups. I am still researching how to get this done. It seems like I can somehow make the above-referenced script search only inside an OU, but I am not very familiar with PowerShell. Any help would be appreciated, thank you.
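    If PowerShell stays unfamiliar, the same listing can be produced with a few lines of C# against System.DirectoryServices. A sketch only: the OU path is a placeholder, very large groups need ranged retrieval of the member attribute, and nested groups are printed but not expanded here:

        using System;
        using System.DirectoryServices;   // reference System.DirectoryServices.dll

        class ListOuGroups
        {
            static void Main()
            {
                var ou = new DirectoryEntry("LDAP://OU=my-department,DC=blah,DC=blahblah,DC=com");
                var searcher = new DirectorySearcher(ou, "(objectCategory=group)");
                searcher.PropertiesToLoad.Add("name");
                searcher.PropertiesToLoad.Add("member");

                foreach (SearchResult result in searcher.FindAll())
                {
                    Console.WriteLine(result.Properties["name"][0]);        // group name
                    foreach (object member in result.Properties["member"])  // member DNs
                        Console.WriteLine("    " + member);
                }
            }
        }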


  • Efficient representation of Hierarchies in Hibernate.

    - by Alison G
    I'm having some trouble representing an object hierarchy in Hibernate. I've searched around and haven't managed to find any examples doing this or anything similar; you have my apologies if this is a common question. I have two types which I'd like to persist using Hibernate: Groups and Items.

    * Groups are identified uniquely by a combination of their name and their parent.
    * The groups are arranged in a number of trees, such that every Group has zero or one parent Group.
    * Each Item can be a member of zero or more Groups.

    Ideally, I'd like a bi-directional relationship allowing me to get:

    * all Groups that an Item is a member of
    * all Items that are members of a particular Group or its descendants.

    I also need to be able to traverse the Group tree from the top in order to display it on the UI. The basic object structure would ideally look like this:

        class Group {
            ...
            /** @return all items in this group and its descendants */
            Set<Item> getAllItems() { ... }

            /** @return all direct children of this group */
            Set<Group> getChildren() { ... }
            ...
        }

        class Item {
            ...
            /** @return all groups that this Item is a direct member of */
            Set<Group> getGroups() { ... }
            ...
        }

    Originally, I had just made a simple bi-directional many-to-many relationship between Items and Groups, such that fetching all items in a group hierarchy required recursion down the tree, and fetching groups for an Item was a simple getter, i.e.:

        class Group {
            ...
            private Set<Item> items;
            private Set<Group> children;
            ...
            /** @return all items in this group and its descendants */
            Set<Item> getAllItems() {
                Set<Item> allItems = new HashSet<Item>();
                allItems.addAll(this.items);
                for (Group child : this.getChildren()) {
                    allItems.addAll(child.getAllItems());
                }
                return allItems;
            }

            /** @return all direct children of this group */
            Set<Group> getChildren() {
                return this.children;
            }
            ...
        }

        class Item {
            ...
            private Set<Group> groups;

            /** @return all groups that this Item is a direct member of */
            Set<Group> getGroups() {
                return this.groups;
            }
            ...
        }

    However, this resulted in multiple database requests to fetch the Items in a Group with many descendants, or to retrieve the entire Group tree to display in the UI. This seems very inefficient, especially with deeper, larger group trees. Is there a better or standard way of representing this relationship in Hibernate? Am I doing anything obviously wrong or stupid?

    My only other thought so far was this: replace the group's id, parent and name fields with a unique "path" String which specifies the whole ancestry of a group, e.g.:

        /rootGroup
        /rootGroup/aChild
        /rootGroup/aChild/aGrandChild

    The join table between Groups and Items would then contain group_path and item_id. This immediately solves the two issues I was suffering from previously:

    1. The entire group hierarchy can be fetched from the database in a single query and reconstructed in memory.
    2. To retrieve all Items in a group or its descendants, we can select from group_item where group_path = 'N' or group_path like 'N/%'.

    However, this seems to defeat the point of using Hibernate. All thoughts welcome!


  • Using SPServices & jQuery to Find My Stuff from Multi-Select Person/Group Field

    - by Mark Rackley
    Okay… quick blog post for all you SPServices fans out there. I needed to quickly write a script that would return all the tasks currently assigned to me. I also wanted it to return any task that was assigned to a group I belong to. This can actually be done with a CAML query, so no big deal, right? The rub is that the "assigned to" field is a multi-select person or group field. As far as I know (and I actually know so little) you cannot just write a CAML query to return this information. If you can, please leave a comment below and disregard the rest of this blog post…

    So… what's a hacker to do? As always, I break things down to their most simple components (I really love the KISS principle and would get it tattooed on my back if people wouldn't think it meant "Knights In Satan's Service". You really gotta be an old fart to get that reference). Here's what we're going to do:

    1. Get the currently logged in user's name as it is stored in a person field
    2. Find all the SharePoint groups the current user belongs to
    3. Retrieve a set of assigned tasks from the task list and then find those that are assigned to the current user or to a group the current user belongs to

    Nothing too hairy… so let's get started.

    Some caveats before I continue: There are some obvious performance implications with this solution, as I make a total of four SPServices calls and there's a lot of looping going on. Also, the CAML query in this blog has NOT been optimized. If you move forward with this code, tweak it so that it returns a further subset of data, or you will see horrible performance if you have a few hundred entries in your task list. Add a date range to the CAML or something. Find some way to limit the results as much as possible. Lastly, if you DO have a better solution, I would like you to share. Iron sharpens iron and all…

    Alright, let's really get started.

    Get the currently logged in user's name as it is stored in a person field

    First thing we need to do is understand how a person group looks when you look at the XML returned from a SharePoint Web Service call. It turns out it's stored like any other multi-select item in SharePoint, which is <id>;#<value>, and when you assign a person to that field the <value> equals the person's name ("Mark Rackley" in my case). This is for Windows Authentication; I would expect this to be different in FBA, but I'm not using FBA. If you want to know what it looks like with FBA, you can use the code in this blog and strategically place an alert to see the value.

    Anyway… I need to find the name of the user who is currently logged in as it is stored in the person field. This turns out to be one SPServices call:

        var userName = $().SPServices.SPGetCurrentUser({
            fieldName: "Title",
            debug: false
        });

    As you can see, the "Title" field has the information we need. I suspect (although again, I haven't tried) that the Title field also contains the user's name as we need it if I were using FBA. Okay… last thing we need to do is store our user's name in an array for processing later:

        myGroups = new Array();
        myGroups.push(userName);

    Find all the SharePoint groups the current user belongs to

    Now for the groups. How are groups returned in that XML stream? Same as the person: <ID>;#<Group Name>, and if it's a multi-select it's all returned in one big long string: "<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>". So, how do we find all the groups the current user belongs to?
This is also a simple SPServices call. Using the “GetGroupCollectionFromUser” operation we can find all the groups a user belongs to. So, let’s execute this method and store all our groups. $().SPServices({       operation: "GetGroupCollectionFromUser",       userLoginName: $().SPServices.SPGetCurrentUser(),       async: false,       completefunc: function(xData, Status) {          $(xData.responseXML).find("[nodeName=Group]").each(function() {                 myGroups.push($(this).attr("Name"));          });         }     }); So, all we did in the above code was execute the “GetGroupCollectionFromUser” operation and look for the each “Group” node (row) and store the name for each group in our array that we put the user’s name in previously (myGroups). Now we have an array that contains the current user’s name as it will appear in the person field XML and  all the groups the current user belongs to. The Rest Now comes the easy part for all of you familiar with SPServices. We are going to retrieve our tasks from the Task list using “GetListItems” and look at each entry to see if it belongs to this person. If it does belong to this person we are going to store it for later processing. That code looks something like this: // get list of assigned tasks that aren't closed... *modify the CAML to perform better!*             $().SPServices({                   operation: "GetListItems",                   async: false,                   listName: "Tasks",                   CAMLViewFields: "<ViewFields>" +                             "<FieldRef Name='AssignedTo' />" +                             "<FieldRef Name='Title' />" +                             "<FieldRef Name='StartDate' />" +                             "<FieldRef Name='EndDate' />" +                             "<FieldRef Name='Status' />" +                             "</ViewFields>",                   CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",                     completefunc: function (xData, Status) {                         var aDataSet = new Array();                        //loop through each returned Task                         $(xData.responseXML).find("[nodeName=z:row]").each(function() {                             //store the multi-select string of who task is assigned to                             var assignedToString = $(this).attr("ows_AssignedTo");                             found = false;                            //loop through the persons name and all the groups they belong to                             for(var i=0; i<myGroups.length; i++) {                                 //if the person's name or group exists in the assigned To string                                 //then the task is assigned to them                                 if (assignedToString.indexOf(myGroups[i]) >= 0){                                     found = true;                                     break;                                 }                             }                             //if the Task belongs to this person then store or display it                             //(I'm storing it in an array)                             if (found){                                 var thisName = $(this).attr("ows_Title");                                 var thisStartDate = $(this).attr("ows_StartDate");                                 var thisEndDate = $(this).attr("ows_EndDate");                              
   var thisStatus = $(this).attr("ows_Status");                                                                  var aDataRow=new Array(                                     thisName,                                     thisStartDate,                                     thisEndDate,                                     thisStatus);                                 aDataSet.push(aDataRow);                             }                          });                          SomeFunctionToDisplayData(aDataSet);                     }                 }); Some notes on why I did certain things and additional caveats. You will notice in my code that I’m doing an AssignedToString.indexOf(GroupName) to see if the task belongs to the person. This could possibly return bad results if you have SharePoint Group names that are named in such a way that the “IndexOf” returns a false positive.  For example if you have a Group called “My Users” and a group called “My Users – SuperUsers” then if a user belonged to “My Users” it would return a false positive on executing “My Users – SuperUsers”.IndexOf(“My Users”). Make sense? Just be aware of this when naming groups, we don’t have this problem. This is where also some fine-tuning can probably be done by those smarter than me. This is a pretty inefficient method to determine if a task belongs to a user, I mean what if a user belongs to 20 groups? That’s a LOT of looping.  See all the opportunities I give you guys to do something fun?? Also, why am I storing my values in an array instead of just writing them out to a Div? Well.. I want to pass my data to a jQuery library to format it all nice and pretty and an Array is a great way to do that. When all is said and done and we put all the code together it looks like:   $(document).ready(function() {         var userName = $().SPServices.SPGetCurrentUser({                     fieldName: "Title",                     debug: false                     });         myGroups = new Array();     myGroups.push(userName );       $().SPServices({       operation: "GetGroupCollectionFromUser",       userLoginName: $().SPServices.SPGetCurrentUser(),       async: false,       completefunc: function(xData, Status) {          $(xData.responseXML).find("[nodeName=Group]").each(function() {                 myGroups.push($(this).attr("Name"));          });                      // get list of assigned tasks that aren't closed... 
*modify this CAML to perform better!*             $().SPServices({                   operation: "GetListItems",                   async: false,                   listName: "Tasks",                   CAMLViewFields: "<ViewFields>" +                             "<FieldRef Name='AssignedTo' />" +                             "<FieldRef Name='Title' />" +                             "<FieldRef Name='StartDate' />" +                             "<FieldRef Name='EndDate' />" +                             "<FieldRef Name='Status' />" +                             "</ViewFields>",                   CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",                     completefunc: function (xData, Status) {                         var aDataSet = new Array();                         //loop through each returned Task                         $(xData.responseXML).find("[nodeName=z:row]").each(function() {                             //store the multi-select string of who task is assigned to                             var assignedToString = $(this).attr("ows_AssignedTo");                             found = false;                            //loop through the persons name and all the groups they belong to                             for(var i=0; i<myGroups.length; i++) {                                 //if the person's name or group exists in the assigned To string                                 //then the task is assigned to them                                 if (assignedToString.indexOf(myGroups[i]) >= 0){                                     found = true;                                     break;                                 }                             }                            //if the Task belongs to this person then store or display it                             //(I'm storing it in an array)                             if (found){                                 var thisName = $(this).attr("ows_Title");                                 var thisStartDate = $(this).attr("ows_StartDate");                                 var thisEndDate = $(this).attr("ows_EndDate");                                 var thisStatus = $(this).attr("ows_Status");                                                                  var aDataRow=new Array(                                     thisName,                                     thisStartDate,                                     thisEndDate,                                     thisStatus);                                 aDataSet.push(aDataRow);                             }                          });                          SomeFunctionToDisplayData(aDataSet);                     }                 });       }    });  }); Final Thoughts So, there you have it. Take it and run with it. Make it something cool (and tell me how you did it). Another possible way to improve performance in this scenario is to use a DVWP to display the tasks and use jQuery and the “myGroups” array from this blog post to hide all those rows that don’t belong to the current user. I haven’t tried it, but it does move some of the processing off to the server (generating the view) so it may perform better.  As always, thanks for stopping by… hope you have a Merry Christmas…


  • How do I perform 'WHERE' on groups of rows?

    - by Drew
    I have a table, which looks like:

        +-----------+----------+
        | person_id | group_id |
        +-----------+----------+
        |     1     |    10    |
        |     1     |    20    |
        |     1     |    30    |
        |     2     |    10    |
        |     2     |    20    |
        |     3     |    10    |
        +-----------+----------+

    I need a query such that only person_ids with groups 10 AND 20 AND 30 are returned (only person_id: 1). I am not sure how to do this, as from what I can see it would require me to group the rows by person_id and then select the rows which contain all the group_ids. I'm looking for something which will preserve the use of keys without resorting to string operations on group_concat() or such.
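    This is the classic relational-division problem, and it can be solved on keys alone. One standard approach (the table name person_group is assumed):

        SELECT person_id
        FROM person_group
        WHERE group_id IN (10, 20, 30)
        GROUP BY person_id
        HAVING COUNT(DISTINCT group_id) = 3;

    The DISTINCT guards against duplicate rows; if (person_id, group_id) is the primary key, a plain COUNT(*) works too.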


  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
    Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

    Oracle BI EE Query Cycle

    The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources.

    Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations.

    The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

    The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

    Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources. For example, the LSQL below was rewritten into the physical SQL for an Oracle 11g database that follows it.
Logical SQL:

    SELECT "D0 Time"."T02 Per Name Month" saw_0,
           "D4 Product"."P01  Product" saw_1,
           "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
    FROM "Sample Sales"
    ORDER BY saw_0, saw_1

Physical SQL:

    WITH SAWITH0 AS (
      select T986.Per_Name_Month as c1,
             T879.Prod_Dsc as c2,
             sum(T835.Units) as c3,
             T879.Prod_Key as c4
      from Product T879 /* A05 Product */ ,
           Time_Mth T986 /* A08 Time Mth */ ,
           FactsRev T835 /* A11 Revenue (Billed Time Join) */
      where ( T835.Prod_Key = T879.Prod_Key
          and T835.Bill_Mth = T986.Row_Wid )
      group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
    )
    select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
    from SAWITH0
    order by c1, c2

Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

The Structure of the Common Enterprise Information Model (CEIM)

The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical.

Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. When the results come back from the data source queries, the relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

Business Model

Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models.

Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next. The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model.

Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time. This is important for federation, where a given logical object can map to several different physical objects in different databases. It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products.

Your requirements process with your user community will mostly affect the business model. This is where you will define most of the things they specifically ask for, such as metric definitions. For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

Physical Model

The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time.

Each physical data source will have its own physical model, or "database" object in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape.

To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality.

The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same.

Mappings

Within the Business Model and Mapping layer, the mappings provide the binding from each logical column and join in the dimensional business model to each of the objects that can provide its data in the physical layer.
When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM.

Presentation

You might think of the presentation layer as a set of very simple relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns. For business users, presentation services interprets these as subject areas, folders and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

The purpose of the presentation layer is to specialize the business model for different categories of users. Based on a user's role, they will be restricted to specific subject areas, tables and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization.

For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables. The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

Other Model Objects

There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis. Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

The Query Factory

At this point, you should have a grasp on the query factory concept. When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand.

The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • Custom rails route problem with 2.3.8 and Mongrel

    - by CHsurfer
    I have a controller called 'exposures' which I created automatically with the script/generate scaffold call. The scaffold pages work fine. I created a custom action called 'test' in the exposures controller. When I try to call the page (http://127.0.0.1:3000/exposures/test/1) I get a blank, white screen with no text at all in the source. I am using Rails 2.3.8 and Mongrel in the development environment. There are no entries in development.log, and the console that was used to start Mongrel shows the following error:

    You might have expected an instance of Array. The error occurred while evaluating nil.split
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/cgi_process.rb:52:in `dispatch_cgi'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:101:in `dispatch_cgi'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:27:in `dispatch'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/rails.rb:76:in `process'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/rails.rb:74:in `synchronize'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/rails.rb:74:in `process'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:159:in `process_client'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:158:in `each'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:158:in `process_client'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:285:in `run'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:285:in `initialize'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:285:in `new'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:285:in `run'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:268:in `initialize'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:268:in `new'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:268:in `run'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/configurator.rb:282:in `run'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/configurator.rb:281:in `each'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/configurator.rb:281:in `run'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/mongrel_rails:128:in `run'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/command.rb:212:in `run'
    D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/mongrel_rails:281
    D:/Rails/ruby/bin/mongrel_rails:19:in `load'
    D:/Rails/ruby/bin/mongrel_rails:19

    Here is the exposures_controller code:

        class ExposuresController < ApplicationController
          # GET /exposures
          # GET /exposures.xml
          def index
            @exposures = Exposure.all

            respond_to do |format|
              format.html # index.html.erb
              format.xml  { render :xml => @exposures }
            end
          end

          # /exposure/graph/1
          def graph
            @exposure = Exposure.find(params[:id])
            project_name = @exposure.tender.project.name
            group_name = @exposure.tender.user.group.name
            tender_desc = @exposure.tender.description
            direction = "Cash Out"
            direction = "Cash In" if @exposure.supply
            currency_1_and_2 = "#{@exposure.currency_in} = #{@exposure.currency_out}"
            title = "#{project_name}:#{group_name}:#{tender_desc}/n"
            title += "#{direction}:#{currency_1_and_2}"

            factors = Array.new
            carrieds = Array.new
            days = Array.new
            @exposure.rates.each do |r|
              factors << r.factor
              carrieds << r.carried
              days << r.day.to_s
            end
            max = (factors + carrieds).max
            min = (factors + carrieds).min

            g = Graph.new
            g.title(title, '{font-size: 12px;}')
            g.set_data(factors)
            g.line_hollow(2, 4, '0x80a033', 'Bounces', 10)
            g.set_x_labels(days)
            g.set_x_label_style(10, '#CC3399', 2)
            g.set_y_min(min * 0.9)
            g.set_y_max(max * 1.1)
            g.set_y_label_steps(5)
            render :text => g.render
          end

          def test
            render :text => "this works"
          end

          # GET /exposures/1
          # GET /exposures/1.xml
          def show
            @exposure = Exposure.find(params[:id])
            @graph = open_flash_chart_object(700, 250, "/exposures/graph/#{@exposure.id}")
            #@graph = "/exposures/graph/#{@exposure.id}"

            respond_to do |format|
              format.html # show.html.erb
              format.xml  { render :xml => @exposure }
            end
          end

          # GET /exposures/new
          # GET /exposures/new.xml
          def new
            @exposure = Exposure.new

            respond_to do |format|
              format.html # new.html.erb
              format.xml  { render :xml => @exposure }
            end
          end

          # GET /exposures/1/edit
          def edit
            @exposure = Exposure.find(params[:id])
          end

          # POST /exposures
          # POST /exposures.xml
          def create
            @exposure = Exposure.new(params[:exposure])

            respond_to do |format|
              if @exposure.save
                flash[:notice] = 'Exposure was successfully created.'
                format.html { redirect_to(@exposure) }
                format.xml  { render :xml => @exposure, :status => :created, :location => @exposure }
              else
                format.html { render :action => "new" }
                format.xml  { render :xml => @exposure.errors, :status => :unprocessable_entity }
              end
            end
          end

          # PUT /exposures/1
          # PUT /exposures/1.xml
          def update
            @exposure = Exposure.find(params[:id])

            respond_to do |format|
              if @exposure.update_attributes(params[:exposure])
                flash[:notice] = 'Exposure was successfully updated.'
                format.html { redirect_to(@exposure) }
                format.xml  { head :ok }
              else
                format.html { render :action => "edit" }
                format.xml  { render :xml => @exposure.errors, :status => :unprocessable_entity }
              end
            end
          end

          # DELETE /exposures/1
          # DELETE /exposures/1.xml
          def destroy
            @exposure = Exposure.find(params[:id])
            @exposure.destroy

            respond_to do |format|
              format.html { redirect_to(exposures_url) }
              format.xml  { head :ok }
            end
          end
        end

    Clever readers will notice the 'graph' action. This is what I really want to work, but if I can't even get the test action working, then I'm sure I have no chance. Any ideas? I have restarted Mongrel a few times with no change. Here is the output of rake routes (but I don't believe this is the problem; the error would come back as an HTML error response):

        D:\Rails\rails_apps\fx>rake routes
        (in D:/Rails/rails_apps/fx)
        DEPRECATION WARNING: Rake tasks in vendor/plugins/open_flash_chart/tasks are deprecated. Use lib/tasks instead. (called from D:/Rails/ruby/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/tasks/rails.rb:10)
                rates GET    /rates(.:format)              {:controller=>"rates", :action=>"index"}
                      POST   /rates(.:format)              {:controller=>"rates", :action=>"create"}
             new_rate GET    /rates/new(.:format)          {:controller=>"rates", :action=>"new"}
            edit_rate GET    /rates/:id/edit(.:format)     {:controller=>"rates", :action=>"edit"}
                 rate GET    /rates/:id(.:format)          {:controller=>"rates", :action=>"show"}
                      PUT    /rates/:id(.:format)          {:controller=>"rates", :action=>"update"}
                      DELETE /rates/:id(.:format)          {:controller=>"rates", :action=>"destroy"}
              tenders GET    /tenders(.:format)            {:controller=>"tenders", :action=>"index"}
                      POST   /tenders(.:format)            {:controller=>"tenders", :action=>"create"}
           new_tender GET    /tenders/new(.:format)        {:controller=>"tenders", :action=>"new"}
          edit_tender GET    /tenders/:id/edit(.:format)   {:controller=>"tenders", :action=>"edit"}
               tender GET    /tenders/:id(.:format)        {:controller=>"tenders", :action=>"show"}
                      PUT    /tenders/:id(.:format)        {:controller=>"tenders", :action=>"update"}
                      DELETE /tenders/:id(.:format)        {:controller=>"tenders", :action=>"destroy"}
            exposures GET    /exposures(.:format)          {:controller=>"exposures", :action=>"index"}
                      POST   /exposures(.:format)          {:controller=>"exposures", :action=>"create"}
         new_exposure GET    /exposures/new(.:format)      {:controller=>"exposures", :action=>"new"}
        edit_exposure GET    /exposures/:id/edit(.:format) {:controller=>"exposures", :action=>"edit"}
             exposure GET    /exposures/:id(.:format)      {:controller=>"exposures", :action=>"show"}
                      PUT    /exposures/:id(.:format)      {:controller=>"exposures", :action=>"update"}
                      DELETE /exposures/:id(.:format)      {:controller=>"exposures", :action=>"destroy"}
           currencies GET    /currencies(.:format)         {:controller=>"currencies", :action=>"index"}
                      POST   /currencies(.:format)         {:controller=>"currencies", :action=>"create"}
         new_currency GET    /currencies/new(.:format)     {:controller=>"currencies", :action=>"new"}
        edit_currency GET    /currencies/:id/edit(.:format) {:controller=>"currencies", :action=>"edit"}
             currency GET    /currencies/:id(.:format)     {:controller=>"currencies", :action=>"show"}
                      PUT    /currencies/:id(.:format)     {:controller=>"currencies", :action=>"update"}
                      DELETE /currencies/:id(.:format)     {:controller=>"currencies", :action=>"destroy"}
             projects GET    /projects(.:format)           {:controller=>"projects", :action=>"index"}
                      POST   /projects(.:format)           {:controller=>"projects", :action=>"create"}
          new_project GET    /projects/new(.:format)       {:controller=>"projects", :action=>"new"}
         edit_project GET    /projects/:id/edit(.:format)  {:controller=>"projects", :action=>"edit"}
              project GET    /projects/:id(.:format)       {:controller=>"projects", :action=>"show"}
                      PUT    /projects/:id(.:format)       {:controller=>"projects", :action=>"update"}
                      DELETE /projects/:id(.:format)       {:controller=>"projects", :action=>"destroy"}
               groups GET    /groups(.:format)             {:controller=>"groups", :action=>"index"}
                      POST   /groups(.:format)             {:controller=>"groups", :action=>"create"}
            new_group GET    /groups/new(.:format)         {:controller=>"groups", :action=>"new"}
           edit_group GET    /groups/:id/edit(.:format)    {:controller=>"groups", :action=>"edit"}
                group GET    /groups/:id(.:format)         {:controller=>"groups", :action=>"show"}
                      PUT    /groups/:id(.:format)         {:controller=>"groups", :action=>"update"}
                      DELETE /groups/:id(.:format)         {:controller=>"groups", :action=>"destroy"}
                users GET    /users(.:format)              {:controller=>"users", :action=>"index"}
                      POST   /users(.:format)              {:controller=>"users", :action=>"create"}
             new_user GET    /users/new(.:format)          {:controller=>"users", :action=>"new"}
            edit_user GET    /users/:id/edit(.:format)     {:controller=>"users", :action=>"edit"}
                 user GET    /users/:id(.:format)          {:controller=>"users", :action=>"show"}
                      PUT    /users/:id(.:format)          {:controller=>"users", :action=>"update"}
                      DELETE /users/:id(.:format)          {:controller=>"users", :action=>"destroy"}
                             /:controller/:action/:id
                             /:controller/:action/:id(.:format)

        D:\Rails\rails_apps\fx>rails -v
        Rails 2.3.8

    Thanks in advance for the help.
    -Jon

    Read the article

  • Photoshop script to get the color of a solid fill layer?

    - by gruner
    I'm trying to write a Photoshop jsx script for extracting color values from a PSD template. The colors are defined as separate fill layers that I'd like to be able to loop through and create a hash of {layer_name: #hex_color} values. I'm not finding any documentation on reading the color value of the fill layer.
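
    For reference, a hedged ExtendScript sketch: the Photoshop DOM doesn't appear to expose a fill layer's color directly, so this goes through the Action Manager API. The 'Adjs' list position and descriptor keys are assumptions drawn from common community snippets, so verify them against your Photoshop version:

        // Hedged sketch: read the targeted solid-fill layer's color via
        // Action Manager. Assumes the fill settings sit at index 0 of the
        // layer's 'Adjs' (adjustment) list and carry a 'Clr ' descriptor.
        function activeFillLayerHex() {
            var ref = new ActionReference();
            ref.putEnumerated(charIDToTypeID("Lyr "),
                              charIDToTypeID("Ordn"),
                              charIDToTypeID("Trgt")); // currently targeted layer
            var layerDesc = executeActionGet(ref);
            var adjustments = layerDesc.getList(charIDToTypeID("Adjs"));
            var colorDesc = adjustments.getObjectValue(0)
                                       .getObjectValue(charIDToTypeID("Clr "));
            var color = new SolidColor();
            color.rgb.red   = colorDesc.getDouble(charIDToTypeID("Rd  "));
            color.rgb.green = colorDesc.getDouble(charIDToTypeID("Grn "));
            color.rgb.blue  = colorDesc.getDouble(charIDToTypeID("Bl  "));
            return "#" + color.rgb.hexValue; // e.g. "#FF8800"
        }

    Building the {layer_name: #hex_color} hash would then be a loop over app.activeDocument.layers that sets each fill layer as app.activeDocument.activeLayer in turn and records activeFillLayerHex() under the layer's name.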

    Read the article

  • Linq-to-SQL: How to perform a count on a sub-select

    - by Peter Bridger
    I'm still trying to get my head round how to use LINQ-to-SQL correctly, rather than just writing my own sprocs. In the code below a userId is passed into the method, and LINQ uses this to get all rows from the Group and GroupUser tables matching the userId. The primary key of the GroupUser table is GroupUserId, which is a foreign key in the Group table.

        /// <summary>
        /// Return summary details about the groups a user belongs to
        /// </summary>
        /// <param name="userId"></param>
        /// <returns></returns>
        public List<Group> GroupsForUser(int userId)
        {
            DataAccess.KINv2DataContext db = new DataAccess.KINv2DataContext();
            List<Group> groups = new List<Group>();

            groups = (from g in db.Groups
                      join gu in db.GroupUsers on g.GroupId equals gu.GroupId
                      where g.Active == true && gu.UserId == userId
                      select new Group
                      {
                          Name = g.Name,
                          CreatedOn = g.CreatedOn
                      }).ToList<Group>();

            return groups;
        }

    This works fine, but I'd also like to return the total number of Users who are in a group, and also the total number of Contacts that fall under ownership of the group. Pseudo code ahoy!

        /// <summary>
        /// Return summary details about the groups a user belongs to
        /// </summary>
        /// <param name="userId"></param>
        /// <returns></returns>
        public List<Group> GroupsForUser(int userId)
        {
            DataAccess.KINv2DataContext db = new DataAccess.KINv2DataContext();
            List<Group> groups = new List<Group>();

            groups = (from g in db.Groups
                      join gu in db.GroupUsers on g.GroupId equals gu.GroupId
                      where g.Active == true && gu.UserId == userId
                      select new Group
                      {
                          Name = g.Name,
                          CreatedOn = g.CreatedOn,
                          // ### This is the SQL I would write to get the data I want ###
                          MemberCount = ( SELECT COUNT(*) FROM GroupUser AS GU
                                          WHERE GU.GroupId = g.GroupId ),
                          ContactCount = ( SELECT COUNT(*) FROM Contact AS C
                                           WHERE C.OwnerGroupId = g.GroupId )
                          // ### End of extra code ###
                      }).ToList<Group>();

            return groups;
        }
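
    A hedged sketch of one way those two counts could be expressed inline: LINQ-to-SQL translates a Count() over another table inside the projection into a correlated COUNT(*) sub-select. The db.Contacts table and the MemberCount/ContactCount properties on Group are assumed from the question's schema:

        // Hedged sketch, mirroring the poster's projection style; not
        // necessarily the final answer. Each Count() below is translated
        // by LINQ-to-SQL into a correlated COUNT(*) sub-select.
        groups = (from g in db.Groups
                  join gu in db.GroupUsers on g.GroupId equals gu.GroupId
                  where g.Active == true && gu.UserId == userId
                  select new Group
                  {
                      Name = g.Name,
                      CreatedOn = g.CreatedOn,
                      MemberCount = db.GroupUsers.Count(x => x.GroupId == g.GroupId),
                      ContactCount = db.Contacts.Count(c => c.OwnerGroupId == g.GroupId)
                  }).ToList<Group>();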

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >