Search Results

Search found 16547 results on 662 pages for 'physical design'.

  • Use synergy with Physical KVM

    - by Mr. Man
    I am using Synergy on a Linux Mint computer as the server, with a Mac as the client. I also have a physical KVM switch. The problem I have is that whenever I switch the physical KVM to my Mac, Synergy stops working: the keyboard and mouse no longer control the Mac. Thanks in advance!

    EDIT: here are some logs. From the Mint machine:

        INFO: synergys.cpp,1042: Synergy server 1.3.1 on Linux 2.6.31-14-generic #48-Ubuntu SMP Fri Oct 16 14:04:26 UTC 2009 i686
        DEBUG: synergys.cpp,1051: opening configuration synergy.conf
        DEBUG: synergys.cpp,1062: configuration read successfully
        DEBUG: CXWindowsScreen.cpp,847: XOpenDisplay(:0.0)
        DEBUG: CXWindowsScreenSaver.cpp,339: xscreensaver window: 0x00000000
        DEBUG: CXWindowsScreen.cpp,117: screen shape: 0,0 1024x768
        DEBUG: CXWindowsScreen.cpp,118: window is 0x03800004
        DEBUG: CScreen.cpp,38: opened display
        DEBUG: CXWindowsScreen.cpp,679: registered hotkey F12 (id=efc9 mask=0000) as id=1
        NOTE: synergys.cpp,500: started server
        INFO: CServer.cpp,1141: screen ubuntu shape changed
        NOTE: CClientListener.cpp,127: accepted client connection
        DEBUG: CClientProxy1_0.cpp,404: received client marks-mac.local info shape=-1024,0 2304x800
        NOTE: CServer.cpp,278: client mac has connected
        INFO: CServer.cpp,447: switch from ubuntu to mac at -1024,393
        INFO: CScreen.cpp,116: leaving screen
        [A long run of CXWindowsClipboard.cpp DEBUG lines follows; it is garbled and overlapping in the original, with only fragments such as "charset=utf-8 (633), text/plain (462)" and repeated "CXWindowsClipboard.cpp,555: added f..." entries recoverable.]

    From the Mac:

        connecting to '192.168.3.5': 192.168.3.5:24800
        connected to server
        entering screen
        leaving screen
        entering screen
        leaving screen
        stopped client

  • BSOD trying to migrate Windows XP from a physical to a virtual machine

    - by pauldoo
    I am attempting to migrate a Windows XP Home installation from a physical machine to a virtual machine. The physical machine has two hard disks; the first is 250GB containing "C:", the second is 1TB containing "D:". I'd like to create a new virtual machine stored on D:, which is a copy of the Windows XP Home installation currently on C:. (This will leave the 250GB drive clear for me to install a fresh copy of Windows 7, and still be able to access the old XP installation if necessary.)

    The first method I tried was to follow the instructions here: http://www.virtualbox.org/wiki/Migrate_Windows — I booted up from an Ubuntu Live CD in order to execute the Linux commands whilst the Windows system wasn't running. With this method the virtual machine would always blue screen on startup with a "STOP 0x0000007B" message. The instructions above say to try a "repair install" using the Windows XP disc. Unfortunately for me, my XP disc is scratched and will not boot, so I was unable to try a repair install.

    The second method I tried was to use "VMware Converter Standalone Client". This tool executed without any errors, but again produced a virtual machine that blue screens on startup with the same "STOP" message.

    Are there any other methods to move the Windows XP installation into a virtual machine? I think next I will try a more manual process to create the cloned virtual machine: install a fresh copy of Windows XP to a virtual machine, then once that is booting OK, ntfsclone the source C: partition over the top. Perhaps this will fix the booting problems if the issue is related to the MBR or partition table in some way.

  • Physical-to-virtual for personal use

    - by Journeyman Geek
    One of my current projects involves reducing the number of old computers lying around. So far, the office systems have been downsized from 5 to 2. One system which had no data on it was dumped entirely, while three other identical boxes were consolidated into one (I swap hard drives as needed and they just work(tm)). I want to make further cuts.

    The physical systems in question all run Windows XP SP3 on old Pentium 4s with 256 MB of RAM. Currently, I have VirtualBox running on another system. My intention is to migrate all services to that. We can assume that disk space and RAM on the host is not an issue. I'd rather not go with a client-server physical-to-virtual (P2V) method like the one VirtualBox uses now. I have a few specific concerns as well - specifically whether I'd need to reactivate or sysprep for dissimilar hardware first. An offline tool for imaging (Windows or Linux) would not be an issue, since the drives could be mounted onto another system I have anyway, or I could use a LiveCD.

    What would be my options for P2V conversion, and what would I need to keep in mind when doing this?

  • How to have multiple web servers and IPs on the same physical network

    - by jsigned
    I do web development out of a small office and need to have multiple physical and virtual servers that can be accessed from the internet. I also have a number of devices (computers, laptops, tablets, printers, etc.) that need connections as well. I have gotten a subnet of 8 IPs from my ISP, and while that is adequate for the web servers, it's far too small for everything that needs access to the network. My router is an ASUS RT-N16 running DD-WRT. I'm just smart enough about this routing topic to be dangerous - think 2-year-old with a magic marker.

    I would like to keep my internal network NAT'ed on the 192.168.x.x network and route the 68.69.x.x 255.255.255.248 traffic directly to the servers. The physical network consists of the 4-port DD-WRT router and an unmanaged gig switch. I have a fiber connection to the office that works as an Ethernet port. In other words, I can plug my laptop directly into it and have access to the internet. There is no login or password, and the router is set up to get DHCP from the ISP, and to provide DHCP addresses for the internal network.

    What I've done so far is google and try different configurations with little success. In the end I decided I didn't even know how to ask the questions needed. My questions are: Is this the best way to configure the network? How do you do it? VLANs? Multiple routers? I've never had to configure a router using anything more than the GUI, so if this is command line stuff, be gentle.

  • Windows XP blue screen dumping physical memory

    - by dotnet-practitioner
    I get the following blue screen after running my laptop for an hour:

        A problem has been detected and Windows has been shut down to prevent damage to your computer.

        If this is the first time you've seen this stop error screen, restart your computer. If this screen appears again, follow these steps: Check to be sure you have adequate disk space. If a driver is identified in the stop message, disable the driver or check with the manufacturer for driver updates. Try changing video adapters. Check with your hardware vendor for any BIOS updates. Disable BIOS memory options such as caching or shadowing. If you need to use safe mode to remove or disable components, restart your computer, press F8 to select advanced startup options, and then select safe mode.

        Technical Information:
        * STOP 0x0000008E (0xc0000005, 0x805B03F5, 0xF703DC7C, 0x00000000)

        Beginning dump of physical memory
        Physical memory dump complete.
        Contact your system administrator or technical support group for further assistance.

    So... if this is faulty memory, where could I buy RAM for the following laptop: TOSHIBA SATELLITE A45-S250? My local Fry's store does not carry memory for this laptop.

  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data dir on a 128GB SSD. I am dealing with large datasets (~20GB) that are loaded and processed weekly, each stored in a separate DB for the purposes of time-point comparisons. Putting all the data into a single database is unfeasible because the performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I am manually dumping the oldest to a much larger 2TB spinning disk every week, and dropping the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence), I have to drop a current one (after dumping), reload it, do what I need to, then reverse the process.

    Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely", but instead would mean "moved onto the slow physical device". The time taken to do my queries on a spinning disk would be less than that taken to completely dump, drop, load, drop, and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it works by merging on a directory level (from what I understand), so I'm still stuck with using multiple directories. Any help appreciated, thanks in advance.

  • C# Lack of Static Inheritance - What Should I Do?

    - by yellowblood
    Alright, so as you probably know, static inheritance is impossible in C#. I understand that; however, I'm stuck with the development of my program. I will try to make it as simple as possible. Let's say our code needs to manage objects that represent aircraft in some airport. The requirements are as follows:

    - There are members and methods that are shared by all aircraft.
    - There are many types of aircraft; each type may have its own extra methods and members.
    - There can be many instances of each aircraft type.
    - Every aircraft type must have a friendly name and further details for that type. For example, a class named F16 would have a static member FriendlyName with the value of "Lockheed Martin F-16 Fighting Falcon".
    - Other programmers should be able to add more aircraft, but they must be forced to provide the same static details about the types of aircraft they add.
    - In some GUI, there should be a way to let the user see the list of available types (with details such as FriendlyName) and add or remove instances of the aircraft, saved, let's say, to some XML file.

    So, basically, if I could enforce inherited classes to implement static members and methods, I would enforce the aircraft types to have static members such as FriendlyName. Sadly I cannot do that. So, what would be the best design for this scenario?
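
    One possible workaround - a sketch, not the only approach, with illustrative names throughout - is to trade the static member for an abstract instance property, which the compiler does force every concrete subclass to implement, and to discover the types via reflection for the GUI list:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;

        public abstract class Aircraft
        {
            // The compiler forces every concrete aircraft type to supply these.
            public abstract string FriendlyName { get; }
            public abstract string TypeDetails { get; }
        }

        public class F16 : Aircraft
        {
            public override string FriendlyName
            {
                get { return "Lockheed Martin F-16 Fighting Falcon"; }
            }
            public override string TypeDetails
            {
                get { return "Single-engine multirole fighter"; }
            }
        }

        public static class AircraftCatalog
        {
            // Reflection finds every concrete Aircraft subclass, so other
            // programmers only need to derive from Aircraft (with a
            // parameterless constructor) for their type to appear in the GUI.
            public static IEnumerable<Aircraft> Prototypes()
            {
                return Assembly.GetExecutingAssembly()
                    .GetTypes()
                    .Where(t => t.IsSubclassOf(typeof(Aircraft)) && !t.IsAbstract)
                    .Select(t => (Aircraft)Activator.CreateInstance(t));
            }
        }

    The GUI can bind its "available types" list to AircraftCatalog.Prototypes(), reading FriendlyName from each prototype instance; the metadata is effectively per-type even though it lives on instances.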

  • How do you keep your business rules DRY?

    - by Mario
    I periodically ponder how to best design an application whose every business rule exists in just a single location. (While I know there is no proverbial “best way” and that designs are situational, people must have a leaning toward one practice or another.) I work for a shop where they prefer to house as much of the business rules as possible in the database. This requires developers in many cases to perform identical front-end validations to avoid sending data to the database that will result in an exception—not very DRY. It grates me anytime I find myself duplicating any kind of logic—even lowly validation logic. I am a single-point-of-truth purist to an anal degree.

    On the other end of the spectrum, I know of shops that create dumb databases (the Rails community leans in this direction) and handle all of the business logic in a separate tier (in Rails the models would house “most” of this). Note the word “most”, which implies that some business logic does end up spilling into other places (in Rails it might spill over into the controllers). In a way, a clean separation of concerns where all business logic exists in a single core location is a Utopian fantasy that’s hard to uphold (n-tiered architecture or not).

    Furthermore, I see the “database as a fortress” view and would agree that the database should be built on constraints that cause it to reject bad data. As such, I hold principles that cause a degree of angst as I attempt to balance them. How do you balance the database-as-a-fortress view with the desire to have a single point of truth?
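
    One hedged sketch of a middle ground (invented names, not a prescription): define each rule once as data - a predicate plus a message - in a module both tiers share, so the front-end validation and the pre-save check literally execute the same object, while the database constraints remain as the fortress backstop:

        using System;
        using System.Collections.Generic;

        public class BusinessRule<T>
        {
            public string ErrorMessage { get; private set; }
            private readonly Func<T, bool> _isSatisfied;

            public BusinessRule(string errorMessage, Func<T, bool> isSatisfied)
            {
                ErrorMessage = errorMessage;
                _isSatisfied = isSatisfied;
            }

            public bool IsSatisfiedBy(T candidate) { return _isSatisfied(candidate); }
        }

        public static class OrderRules
        {
            // Declared once; the UI tier and the persistence tier both
            // iterate this same list before showing or saving data.
            public static readonly IList<BusinessRule<decimal>> Amount =
                new List<BusinessRule<decimal>>
                {
                    new BusinessRule<decimal>("Amount must be positive", a => a > 0m),
                    new BusinessRule<decimal>("Amount may not exceed 10,000", a => a <= 10000m)
                };
        }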

  • AntFarm anti-pattern -- strategies to avoid, antidotes to help heal from

    - by alchemical
    I'm working on a 10-page web site with a database back-end. There are 500+ objects in use, trying to implement the MVP pattern in ASP.NET. I'm tracing the code execution from a single page; my finger has been on F11 in Visual Studio for about 40 minutes and there seems to be no end - possibly 1000+ method calls for one web page! If it was just 50 objects that would be one thing; however, code execution snakes through all these objects just like millions of ants frantically working in their giant dirt-mound house, riddled with object tunnels. Hence, a new anti-pattern is born: AntFarm.

    AntFarm is also known as "OO-Madness", "OO-Fever", "OO-ADD", or simply design-pattern junkie. This is not the first time I've seen this, nor have my associates at other companies. It seems that this style is being actively propagated, or in any case is a misunderstanding of the numerous OO/DP gospels going around...

    I'd like to introduce an anti-pattern to the anti-pattern: GSD or "Get Stuff Done", AKA "Get Sh** Done", AKA GRD (GetRDone). This pattern focuses on just what it says: getting stuff done, in a simple way. I may try to outline it more in a later post, or please share your ideas on this antidote pattern.

    Anyway, I'm in the midst of a great example of the AntFarm anti-pattern as I write (as a bonus, there is no documentation or comments). Please share your thoughts on how this anti-pattern has become so prevalent, how we can avoid it, and how one can undo or deal with this pattern in a live system one must work with!

  • Pros and cons of making database IDs consistent and "readable"

    - by gmale
    Question

    Is it a good rule of thumb for database IDs to be "meaningless"? Conversely, are there significant benefits from having IDs structured in a way where they can be recognized at a glance? What are the pros and cons?

    Background

    I just had a debate with my coworkers about the consistency of the IDs in our database. We have a data-driven application that leverages Spring so that we rarely ever have to change code. That means, if there's a problem, a data change is usually the solution. My argument was that by making IDs consistent and readable, we save ourselves significant time and headaches, long term. Once the IDs are set, they don't have to change often, and if done right, future changes won't be difficult. My coworkers' position was that IDs should never matter. Encoding information into the ID violates DB design policies, and keeping them orderly requires extra work that "we don't have time for." I can't find anything online to support either position. So I'm turning to all the gurus here at SA!

    Example

    Imagine this simplified list of database records representing food in a grocery store; the first set represents data that has meaning encoded in the IDs, while the second does not.

    IDs with meaning:

        Type
        1 Fruit
        2 Veggie

        Product
        101 Apple
        102 Banana
        103 Orange
        201 Lettuce
        202 Onion
        203 Carrot

        Location
        41 Aisle four top shelf
        42 Aisle four bottom shelf
        51 Aisle five top shelf
        52 Aisle five bottom shelf

        ProductLocation
        10141 Apple on aisle four top shelf
        10241 Banana on aisle four top shelf
        // just by reading the IDs, it's easy to recognize that these are both Fruit on Aisle 4

    IDs without meaning:

        Type
        1 Fruit
        2 Veggie

        Product
        1 Apple
        2 Banana
        3 Orange
        4 Lettuce
        5 Onion
        6 Carrot

        Location
        1 Aisle four top shelf
        2 Aisle four bottom shelf
        3 Aisle five top shelf
        4 Aisle five bottom shelf

        ProductLocation
        1 Apple on aisle four top shelf
        2 Banana on aisle four top shelf
        // given the IDs, it's harder to see that these are both Fruit on Aisle 4

    Summary

    What are the pros and cons of keeping IDs readable and consistent? Which approach do you generally prefer, and why? Is there an accepted industry best practice?
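
    For concreteness, the "meaningful" ProductLocation scheme above is just positional encoding: 10141 is ProductId 101 * 100 + LocationId 41. A tiny illustrative sketch of the arithmetic, and of the hidden constraint it creates:

        public static class ProductLocationIds
        {
            // Encode: product 101 at location 41 -> 10141.
            // Note the baked-in assumption: locationId must stay below 100.
            public static int Encode(int productId, int locationId)
            {
                return productId * 100 + locationId;
            }

            // Decode: 10141 -> productId 101, locationId 41.
            public static int ProductOf(int id) { return id / 100; }
            public static int LocationOf(int id) { return id % 100; }
        }

    The comment is the "con" side of the debate in miniature: the scheme silently bakes a two-digit limit on locations into every key ever issued.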

  • Pattern for UI configuration

    - by TERACytE
    I have a Win32 C++ program that validates user input and updates the UI with status information and options. Currently it is written like this:

        void ShowError() {
            SetIcon(kError);
            SetMessageString("There was an error");
            HideButton(kButton1);
            HideButton(kButton2);
            ShowButton(kButton3);
        }

        void ShowSuccess() {
            SetIcon(kSuccess);
            std::string statusText(GetStatusText());
            SetMessageString(statusText);
            HideButton(kButton1);
            HideButton(kButton2);
            ShowButton(kButton3);
        }

        // plus several more methods to update the UI using similar mechanisms

    I do not like this because it duplicates code and causes me to update several methods if something changes in the UI. I am wondering if there is a design pattern or best practice to remove the duplication and make the functionality easier to understand and update. I could consolidate the code inside a config function and pass in flags to enable/disable UI items, but I am not convinced this is the best approach. Any suggestions and ideas?
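
    One common antidote - sketched here in C# for consistency with the other examples on this page, though the same table-driven shape works fine in Win32 C++ - is to describe each UI state as data and interpret it in a single Apply routine, so adding a state means adding a table entry rather than a method:

        using System;
        using System.Collections.Generic;

        public enum IconKind { Error, Success }
        public enum ButtonId { Button1, Button2, Button3 }

        public class UiState
        {
            public IconKind Icon;
            public Func<string> Message;   // deferred so status text is fetched late
            public HashSet<ButtonId> Visible;
        }

        public class StatusPanel
        {
            // Every state lives in one table; the display logic exists once.
            private static readonly Dictionary<string, UiState> States =
                new Dictionary<string, UiState>
                {
                    { "Error", new UiState {
                        Icon = IconKind.Error,
                        Message = () => "There was an error",
                        Visible = new HashSet<ButtonId> { ButtonId.Button3 } } },
                    { "Success", new UiState {
                        Icon = IconKind.Success,
                        Message = () => GetStatusText(),
                        Visible = new HashSet<ButtonId> { ButtonId.Button3 } } },
                };

            public void Apply(string stateName)
            {
                UiState s = States[stateName];
                SetIcon(s.Icon);
                SetMessageString(s.Message());
                foreach (ButtonId b in Enum.GetValues(typeof(ButtonId)))
                    SetButtonVisible(b, s.Visible.Contains(b));
            }

            // Stand-ins for the real UI calls in the question.
            private static string GetStatusText() { return "OK"; }
            private void SetIcon(IconKind icon) { }
            private void SetMessageString(string text) { }
            private void SetButtonVisible(ButtonId b, bool visible) { }
        }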

  • Using an ORM with a database that has no defined relationships?

    - by Ahmad
    Consider a database (MSSQL 2005) that consists of 100+ tables which have primary keys defined to a certain degree. There are 'relationships' between tables; however, these are not enforced with foreign key constraints. Consider the following simplified example of the typical types of tables I am dealing with. There are clear relations between the User, City and Province tables. However, the key issue is the inconsistent data types in the tables and the naming conventions:

        User:
        UserRowId [int] PK
        Name [varchar(50)]
        CityId [smallint]
        ProvinceRowId [bigint]

        City:
        CityRowId [bigint] PK
        CityDescription [varchar(100)]

        Province:
        ProvinceId [int] PK
        ProvinceDesc [varchar(50)]

    I am considering a rewrite of the application (in ASP.NET MVC) that uses this data source, similar in design to the MVC Storefront. However, I am going through a proof-of-concept phase, and this is one of the stumbling blocks I have come across. What are my options in terms of ORM choice that can be easily used, and why? Should I even be considering an ORM? (The reason I ask is that most explanations and tutorials all work with relatively cleanly designed existing databases, or newly created ones, when compared to mine. I am thus having a very hard time trying to find a way forward with this problem.) There is a huge amount of existing SQL queries; would a data mapper (e.g. iBATIS.NET) be more suitable, since we could easily modify them to work and reuse the investment already made? I have found this question on SO which indicates to me that an ORM can be used - however, I get the impression that this is a question of mapping?

    Note: at the moment, the object model is not clearly defined, as it was non-existent. The existing system pretty much did almost everything in SQL, or consisted of overly complicated and numerous queries to complete functionality. I am pretty much a noob and have zero experience with ORMs and MVC - so this is an awesome learning curve I am on.
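
    As one concrete illustration of reusing the existing SQL (the choice of library here is an assumption, not something from the original post): a micro-ORM such as Dapper keeps the hand-written queries and papers over the inconsistent column names with plain SQL aliases:

        using System.Collections.Generic;
        using System.Data.SqlClient;
        using Dapper;

        public class UserView
        {
            public int UserId { get; set; }
            public string Name { get; set; }
            public string City { get; set; }
            public string Province { get; set; }
        }

        public static class UserQueries
        {
            // The join logic stays in SQL, where it already lives today;
            // aliases map UserRowId/CityDescription/ProvinceDesc onto one
            // consistently named object.
            private const string Sql = @"
                SELECT u.UserRowId AS UserId,
                       u.Name,
                       c.CityDescription AS City,
                       p.ProvinceDesc AS Province
                FROM [User] u
                JOIN City c     ON c.CityRowId  = u.CityId
                JOIN Province p ON p.ProvinceId = u.ProvinceRowId";

            public static IEnumerable<UserView> All(SqlConnection connection)
            {
                return connection.Query<UserView>(Sql);
            }
        }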

  • When is factory method better than simple factory and vice versa?

    - by Bruce
    Hi all. Working my way through the Head First Design Patterns book, I believe I understand the simple factory and the factory method, but I'm having trouble seeing what advantages the factory method brings over the simple factory.

    If an object A uses a simple factory to create its B objects, then clients can create it like this:

        A a = new A(new BFactory());

    whereas if an object uses a factory method, a client can create it like this:

        A a = new ConcreteA(); // ConcreteA contains a method for instantiating the same Bs that the BFactory above creates, with the method hardwired into the subclass of A, ConcreteA.

    So in the case of the simple factory, clients compose A with a B factory, whereas with the factory method, the client chooses the appropriate subclass for the types of B it wants. There really doesn't seem to be much to choose between them. Either you have to choose which BFactory you want to compose A with, or you have to choose the right subclass of A to give you the Bs. Under what circumstances is one better than the other? Thanks all!
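
    A minimal sketch of the contrast (names invented): with the simple factory, the creation strategy is an object the client passes in; with the factory method, it is a virtual method a subclass overrides. The difference only starts to matter when A itself has subclass-specific behavior beyond creation:

        public class B { }
        public class FancyB : B { }

        // Simple factory: composition - the client chooses a factory object.
        public interface IBFactory { B Create(); }
        public class FancyBFactory : IBFactory
        {
            public B Create() { return new FancyB(); }
        }

        public class A
        {
            private readonly IBFactory _factory;
            public A(IBFactory factory) { _factory = factory; }
            public B MakeB() { return _factory.Create(); }
        }

        // Factory method: inheritance - each subclass hardwires its choice of B.
        public abstract class A2
        {
            protected abstract B CreateB();          // the "factory method"
            public B MakeB() { return CreateB(); }   // shared logic lives here
        }
        public class ConcreteA2 : A2
        {
            protected override B CreateB() { return new FancyB(); }
        }

    A rough rule of thumb: if the only thing that varies is which B gets built, composing with a simple factory is lighter; if you are subclassing A anyway for other behavior, letting those subclasses override the factory method saves a separate factory hierarchy.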

  • Why does C# not provide the C++ style 'friend' keyword?

    - by Ash
    The C++ friend keyword allows a class A to designate class B as its friend. This allows class B to access the private/protected members of class A. I've never read anything as to why this was left out of C# (and VB.NET). Most answers to this earlier StackOverflow question seem to be saying it is a useful part of C++ and there are good reasons to use it. In my experience I'd have to agree. Another question seems to me to be really asking how to do something similar to friend in a C# application. While the answers generally revolve around nested classes, it doesn't seem quite as elegant as using the friend keyword. The original Design Patterns book uses the friend keyword regularly throughout its examples. So in summary: why is friend missing from C#, and what is the "best practice" way (or ways) of simulating it in C#? (By the way, the internal keyword is not the same thing; it allows ALL classes within the entire assembly to access internal members, while friend allows you to give access to just one other class.)
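
    For reference, the closest built-in approximation is [InternalsVisibleTo], which is coarser than friend - it grants access to one whole assembly rather than one class (assembly names here are illustrative):

        // In AssemblyA, e.g. in Properties/AssemblyInfo.cs:
        using System.Runtime.CompilerServices;
        [assembly: InternalsVisibleTo("AssemblyB")]

        // AssemblyA exposes the would-be "friend" members as internal:
        public class ClassA
        {
            internal int Secret = 42;
        }

        // Code in AssemblyB (and only AssemblyB, besides AssemblyA itself)
        // can now write: int x = new ClassA().Secret;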

  • DDD: Enum-like entities

    - by Chris
    Hi all, I have the following DB model:

        Person table
        ID | Name  | StateId
        ---------------------
        1  | Joe   | 1
        2  | Peter | 1
        3  | John  | 2

        State table
        ID | Desc
        -------------
        1  | Working
        2  | Vacation

    and the domain model would be (simplified):

        public class Person
        {
            public int Id { get; }
            public string Name { get; set; }
            public State State { get; set; }
        }

        public class State
        {
            private int id;
            public string Name { get; set; }
        }

    The state might be used in the domain logic, e.g.:

        if (person.State == State.Working)
            // some logic

    So from my understanding, the State acts like a value object which is used for domain logic checks. But it also needs to be present in the DB model to represent a clean ERM. So State might be extended to:

        public class State
        {
            private int id;
            public string Name { get; set; }
            public static State New
            {
                get { return new State([hardCodedIdHere?], [hardCodedNameHere?]); }
            }
        }

    But using this approach the name of the state would be hardcoded into the domain. Do you know what I mean? Is there a standard approach for such a thing? From my point of view, what I am trying to do is use an object (which is persisted, from the ERM design perspective) as a sort of value object within my domain. What do you think?

    Question update: Probably my question wasn't clear enough. What I need to know is how I would use an entity (like the State example) that is stored in a database within my domain logic, to avoid things like:

        if (person.State.Id == State.Working.Id)
            // some logic

    or

        if (person.State.Id == WORKING_ID)
            // some logic
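
    A common answer to exactly this situation is the "enumeration class" pattern: the well-known instances become static members whose IDs mirror the rows in the State table (the IDs below are assumed to match the table above), so domain logic compares objects rather than magic numbers:

        public class State
        {
            public static readonly State Working  = new State(1, "Working");
            public static readonly State Vacation = new State(2, "Vacation");

            public int Id { get; private set; }
            public string Name { get; private set; }

            private State(int id, string name)
            {
                Id = id;
                Name = name;
            }

            // Identity is the database ID, so a State rehydrated by the
            // persistence layer compares equal to the static instance.
            public override bool Equals(object obj)
            {
                State other = obj as State;
                return other != null && other.Id == Id;
            }
            public override int GetHashCode() { return Id; }
        }

    The persistence layer maps StateId 1 back to State.Working when loading a Person, and the domain logic reads if (person.State.Equals(State.Working)) with no hardcoded IDs at the call sites.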

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
    Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

    Oracle BI EE Query Cycle

    The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources.

    Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations.

    The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

    The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

    Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources. For example, the LSQL below was rewritten into the physical SQL (for an Oracle 11g database) that follows it.

    Logical SQL:

        SELECT "D0 Time"."T02 Per Name Month" saw_0,
               "D4 Product"."P01 Product" saw_1,
               "F2 Units"."2-01 Billed Qty (Sum All)" saw_2
        FROM "Sample Sales"
        ORDER BY saw_0, saw_1

    Physical SQL:

        WITH SAWITH0 AS (
            select T986.Per_Name_Month as c1,
                   T879.Prod_Dsc as c2,
                   sum(T835.Units) as c3,
                   T879.Prod_Key as c4
            from Product T879 /* A05 Product */ ,
                 Time_Mth T986 /* A08 Time Mth */ ,
                 FactsRev T835 /* A11 Revenue (Billed Time Join) */
            where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid )
            group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
        )
        select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
        from SAWITH0
        order by c1, c2

    Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

    The Structure of the Common Enterprise Information Model (CEIM)

    The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below.

    Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. That's the left-to-right flow in the diagram below. When the results come back from the data source queries, the right-to-left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

    Business Model

    Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models.

    Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies. It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next. The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model.

    Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time. This is important for federation, where a given logical object can map to several different physical objects in different databases. It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products.

    Your requirements process with your user community will mostly affect the business model. This is where you will define most of the things they specifically ask for, such as metric definitions. For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

    Physical Model

    The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time.

    Each physical data source will have its own physical model, or "database" object in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape.

    To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality.

    The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same.

    Mappings

    Within the Business Model and Mapping layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer. When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM.

    Presentation

    You might think of the presentation layer as a set of very simple relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns. For business users, presentation services interprets these as subject areas, folders and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

    The purpose of the presentation layer is to specialize the business model for different categories of users. Based on a user's role, they will be restricted to specific subject areas, tables and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization.

    For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables. The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

    Other Model Objects

    There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis. Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

    The Query Factory

    At this point, you should have a grasp on the query factory concept. When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand.

    The following posts in this series will walk through the CEIM modeling concepts and best practices in detail. We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model. Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

  • Question About Example In Robert C Martin's _Clean Code_

    - by Jonah
    This is a question about the concept of a function doing only one thing. It won't make sense without some relevant passages for context, so I'll quote them here. They appear on pgs. 37-38:

        To say this differently, we want to be able to read the program as though it were a set of TO paragraphs, each of which is describing the current level of abstraction and referencing subsequent TO paragraphs at the next level down.

        To include the setups and teardowns, we include setups, then we include the test page content, and then we include the teardowns.

        To include the setups, we include the suite setup if this is a suite, then we include the regular setup.

        It turns out to be very difficult for programmers to learn to follow this rule and write functions that stay at a single level of abstraction. But learning this trick is also very important. It is the key to keeping functions short and making sure they do "one thing." Making the code read like a top-down set of TO paragraphs is an effective technique for keeping the abstraction level consistent.

    He then gives the following example of poor code:

        public Money calculatePay(Employee e) throws InvalidEmployeeType {
            switch (e.type) {
                case COMMISSIONED:
                    return calculateCommissionedPay(e);
                case HOURLY:
                    return calculateHourlyPay(e);
                case SALARIED:
                    return calculateSalariedPay(e);
                default:
                    throw new InvalidEmployeeType(e.type);
            }
        }

    and explains the problems with it as follows:

        There are several problems with this function. First, it's large, and when new employee types are added, it will grow. Second, it very clearly does more than one thing. Third, it violates the Single Responsibility Principle (SRP) because there is more than one reason for it to change. Fourth, it violates the Open Closed Principle (OCP) because it must change whenever new types are added.

    Now my questions. To begin, it's clear to me how it violates the OCP, and it's clear to me that this alone makes it poor design. However, I am trying to understand each principle, and it's not clear to me how the SRP applies. Specifically, the only reason I can imagine for this method to change is the addition of new employee types. There is only one "axis of change." If details of the calculation needed to change, this would only affect the submethods like calculateHourlyPay().

    Also, while in one sense it is obviously doing 3 things, those three things are all at the same level of abstraction, and can all be put into a TO paragraph no different from the example one: TO calculate pay for an employee, we calculate commissioned pay if the employee is commissioned, hourly pay if he is hourly, etc. So aside from its violation of the OCP, this code seems to conform to Martin's other requirements of clean code, even though he's arguing it does not. Can someone please explain what I am missing? Thanks.
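
    For what it's worth, the book's own resolution of this example - recalled from memory and transliterated to C# here, so treat the details as approximate - is to bury the switch in an abstract factory, so the one unavoidable type-switch exists in exactly one place and everything else works against the employee abstraction:

        public class Money { }
        public enum EmployeeType { Commissioned, Hourly, Salaried }
        public class EmployeeRecord { public EmployeeType Type; }
        public class InvalidEmployeeTypeException : System.Exception
        {
            public InvalidEmployeeTypeException(EmployeeType t) : base(t.ToString()) { }
        }

        public interface IEmployee { Money CalculatePay(); }

        // Stub implementations; each owns its own pay calculation.
        public class CommissionedEmployee : IEmployee
        {
            public CommissionedEmployee(EmployeeRecord r) { }
            public Money CalculatePay() { return new Money(); }
        }
        public class HourlyEmployee : IEmployee
        {
            public HourlyEmployee(EmployeeRecord r) { }
            public Money CalculatePay() { return new Money(); }
        }
        public class SalariedEmployee : IEmployee
        {
            public SalariedEmployee(EmployeeRecord r) { }
            public Money CalculatePay() { return new Money(); }
        }

        // The one and only switch on employee type, hidden behind a factory.
        public class EmployeeFactory
        {
            public IEmployee Make(EmployeeRecord r)
            {
                switch (r.Type)
                {
                    case EmployeeType.Commissioned: return new CommissionedEmployee(r);
                    case EmployeeType.Hourly:       return new HourlyEmployee(r);
                    case EmployeeType.Salaried:     return new SalariedEmployee(r);
                    default: throw new InvalidEmployeeTypeException(r.Type);
                }
            }
        }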

  • Patterns for dynamic CMS components (event driven?)

    - by CitrusTree
    Sorry my title is not great; this is my first real punt at moving 100% to OO, as I've been procedural for more years than I can remember. I'm finding it hard to understand if what I'm trying to do is possible. Depending on people's thoughts on the 2 following points, I'll go down that route. The CMS I'm putting together is quite small; however, it focuses very much on different types of content. I could easily use Drupal, which I'm very comfortable with, but I want to give myself a really good reason to move myself into design patterns / OO-PHP.

    1) I have created a base 'content' class which I wish to be able to extend to handle different types of content. The base class, for example, handles HTML content, and extensions might handle XML or PDF output instead. On the other hand, at some point I may wish to extend the base class for a given project completely. I.e. if class 'content-v2' extended class 'content' for that site, any calls to that class should actually call 'content-v2' instead. Is that possible? If the code instantiates an object of type 'content', I actually want it to instantiate one of type 'content-v2'. I can see how to do it using inheritance, but that appears to involve referring to the class explicitly; I can't see how to link in the class I want it to use instead dynamically.

    2) Secondly, the way I'm building this at the moment is horrible; I'm not happy with it. It feels very linear indeed - i.e. get session details, get content, build navigation, theme page, publish - and to do this, all the objects are called 1-by-1, which is all very static. I'd like it to be more dynamic so that I can add to it at a later date (very closely related to the first question). Is there a way that, instead of my orchestrator class calling all the other classes 1-by-1 and then building the whole thing up at the end, each of the other classes can 'listen' for specific events, then at the applicable point jump in and do their bit? That way the orchestrator class would not need to know which other classes were required, and call them 1-by-1. Sorry if I've got this all twisted in my head. I'm trying to build this so it's really flexible.
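
    On the second point, the usual shape is a tiny event dispatcher (a sketch with invented names - written in C# here, but the structure translates directly to PHP): components register for the pipeline stages they care about, and the orchestrator just fires events without knowing who is installed:

        using System;
        using System.Collections.Generic;

        public class Dispatcher
        {
            private readonly Dictionary<string, List<Action<object>>> _listeners =
                new Dictionary<string, List<Action<object>>>();

            public void Listen(string eventName, Action<object> handler)
            {
                if (!_listeners.ContainsKey(eventName))
                    _listeners[eventName] = new List<Action<object>>();
                _listeners[eventName].Add(handler);
            }

            public void Fire(string eventName, object payload)
            {
                List<Action<object>> handlers;
                if (_listeners.TryGetValue(eventName, out handlers))
                    foreach (Action<object> h in handlers)
                        h(payload);
            }
        }

        // Usage sketch:
        //   dispatcher.Listen("content.loaded", page => BuildNavigation(page));
        //   dispatcher.Fire("content.loaded", page);   // orchestrator side

    The first point (calls to 'content' actually producing a 'content-v2') is typically handled the same way: instantiate through a factory or registry keyed by name, so one mapping entry - 'content' => 'content-v2' - swaps the class for a whole site without touching the callers.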

  • Is this kind of Design by Contract useless?

    - by Charlie Pigarelli
    I've just started informatics university and I'm attending a programming course about C(++). The programming professor prefers to teach very few things (in 3 months we have just reached the functions topic) and connects every topic with a type of programming design that is somehow similar to Design by Contract. Basically, what he asks us to do is to write every exercise with comments for pre-conditions, post-conditions and invariants that should prove the correctness of each program we write.

    But this doesn't make any sense to me. I mean, ok: maybe writing down your thoughts prevents you from making some mistakes, but if this is all an abstract thing, then if your intuition about the program is wrong, you'll write the program wrong, and then you'll probably also write the pre- and post-conditions wrong, auto-convincing yourself of its correctness. Most of the time, both I and other students have written programs that seemed ok and that had correct pre- and post-conditions too. But at the moment of testing, they were just completely wrong.

    I had some experience with programming before this course, and I had written a lot of lines of code before, and I found myself comfortable just writing a program and unit testing it. That takes less time to accomplish and is less "abstract" than thinking about what every single piece of your program should do in every case (which is kind of like mentally testing it). Finally, all these pre- and post-conditions take me like 80% of the total time of the exercise. It's harder to think about putting down correct pre- and post-conditions than to write the program itself.

    Since we are like the only course at the only university, probably in the entire world, that does these things, could someone please tell me how I should manage this? Am I right in thinking that this isn't worth anything? Should I change university? (There are like double the people attending that course, and it seems that usually very few people pass the exam the first year.) Should I convince myself his method is right?
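
    One way to make the exercise feel less abstract (a suggestion, not part of the course): turn the comments into executable assertions, so a wrong pre- or post-condition fails loudly under test instead of quietly reassuring you. A small C# sketch:

        using System.Diagnostics;

        public static class MathUtil
        {
            // Pre-condition:  n >= 0
            // Post-condition: r * r <= n < (r + 1) * (r + 1)
            public static int IntSqrt(int n)
            {
                Debug.Assert(n >= 0, "pre: n must be non-negative");

                int r = 0;
                while ((r + 1) * (r + 1) <= n)
                    r++;

                Debug.Assert(r * r <= n && n < (r + 1) * (r + 1),
                             "post: r is floor(sqrt(n))");
                return r;
            }
        }

    Written this way, the contract and the unit tests stop being rivals: the asserts are checked on every test run, which is closer to how Design by Contract is used outside the classroom.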

  • The Endeca UI Design Pattern Library Returns

    - by Joe Lamantia
    I'm happy to announce that the Endeca UI Design Pattern Library - now titled the Endeca Discovery Pattern Library - is once again providing guidance and good practices on the design of discovery experiences.  Launched publicly in 2010 following several years of internal development and usage, the Endeca Pattern Library is a unique and valued source of industry-leading perspective on discovery - something I've come to appreciate directly through  fielding the consistent stream of inquiries about the library's status, and requests for its rapid return to public availability. Restoring the library as a public resource is only the first step!  For the next stage of the library's evolution, we plan to increase the scope of the guidance it offers beyond user interface design to the broader topic of discovery.  This could include patterns for architecture at the systems, user experience, and business levels; information and process models; analytical method and activity patterns for conducting discovery; and organizational and resource patterns for provisioning discovery capability in different settings.  We'd like guidance from the community on the kinds of patterns that are most valuable - so make sure to let us know. And we're also considering ways to increase the number of patterns the library offers, possibly by expanding the set of contributors and the authoring mechanisms. If you'd like to contribute, please get in touch. Here's the new address of the library: http://www.oracle.com/goto/EndecaDiscoveryPatterns And I should say 'Many thanks' to the UXDirect team and all the others within the Oracle family who helped - literally - keep the library alive, and restore it as a public resource.

  • Software design of a browser-based strategic MMO game

    - by Mehran
    I wonder if there are any known, tested software designs for Travian-like browser-based strategic MMO games? I mean, how would they implement the server of such games: what is stored in the database and what is stored in RAM? Is the state of the world stored in one piece, or is it distributed among a number of storages? Does anyone know a resource to study the problems and solutions of creating such games?

    [UPDATE] As suggested in comments, I'm going to give an example of how I would design such a project, even though I'm not sure it's the right design. Having stored the world state in MongoDB, I would implement an event collection in which all the changes to the world are registered. Changes that are meant to happen in the future will come with an action date set to the future, and those that are to be carried out immediately will be set to now. Having this datastore as the central point of the system, players will issue their actions as events inserted into the datastore. At the other end of the system, I'll have a constantly running worker taking events out of the datastore which are due to be carried out and not yet done. Executing an event means applying some update to the world's state, and thus the datastore.

    As scalable as this design sounds, I'm not sure it will be worth implementing. For one, it is pointless to cache the datastore, as most updates happen once without any follow-ups. For instance, if you have the growth of resources in your game, you'll be updating the whole world state periodically, in which case, having incorporated a cache, you are keeping the whole world in RAM (which most likely is impossible). So can someone come up with a better design?
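
    To make the proposed design concrete, here is a minimal sketch of the "events with an action date" loop described above (invented types, no particular datastore - the queue would be the MongoDB collection in the actual proposal):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;

        public class World { /* resources, armies, buildings, ... */ }

        public class GameEvent
        {
            public DateTime DueAt;         // set to "now" for immediate actions
            public Action<World> Apply;    // the world-state mutation
            public bool Done;
        }

        public class EventProcessor
        {
            private readonly List<GameEvent> _queue = new List<GameEvent>();
            private readonly World _world = new World();

            // Player actions arrive as events, immediate or future-dated.
            public void Submit(GameEvent e) { lock (_queue) _queue.Add(e); }

            // The constantly running worker: take events that are due and
            // not yet done, apply them to the world state, mark them done.
            public void Run()
            {
                while (true)
                {
                    GameEvent[] due;
                    lock (_queue)
                        due = _queue.Where(e => !e.Done && e.DueAt <= DateTime.UtcNow)
                                    .ToArray();

                    foreach (GameEvent e in due) { e.Apply(_world); e.Done = true; }
                    Thread.Sleep(100);
                }
            }
        }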

  • Good design for a simple site that contains a blog

    - by bporter
    What is a good design for a simple web site with mostly static pages and a blog? I am helping a friend build this for their small business. We are looking for a simple approach that can be implemented fairly quickly. (I am a programmer and can help with coding, hosting, etc.)

    One option is to use a site like Virb, which lets you choose from one of their themes and build a site pretty easily. You can also include a blog. They host the site for a pretty low monthly rate. I recommended this option, but my friend wants a design that is unique and custom. So, I took one of the themes and started modifying the HTML and CSS. This might still be a good option, but...

    ...if we are going to greatly modify it, why not just create the static pages from scratch and use something like WordPress for the blog? Is this a good option? It looks fairly easy to integrate WordPress with a site so that the design and behavior are really cohesive. Is this a good idea? Do you recommend any other approaches?

  • Plug-in based software design

    - by gekod
    I'm a software developer who is willing to improve his software design skills. I think software should not only work, but should also have a solid and elegant design, so it can be reused and extended for later purposes. Now I'm in search of some help figuring out good practices for specific problems. Actually, I'm trying to find out how to design a piece of software that would be extensible via plug-ins. The questions I have are the following:

    Should plug-ins be able to access each other's functionality? This would bring dependencies, I guess.

    What should the main application offer to the plug-ins? Say I develop a multimedia application which plays videos and music, but which could later get functionality added via plug-ins that would, let's say, be able to read video status (time, bps, ...) and display it. Would this mean that the player itself would have to be part of the main program and offer services to plug-ins to get such information, or would there be a way to develop the player as a plug-in too, but offer in some way the possibility for other plug-ins to interact with it?

    As you see, my questions are mainly for learning purposes, as I strive to improve my software development skills. I hope to find help here, and apologize if something is wrong with my question.
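
    On the second question, one common arrangement - a sketch with invented interfaces, not the only design - is for the host to hand every plug-in a services registry: core features such as the player register a service contract, and other plug-ins discover it through the host rather than depending on each other directly:

        using System;
        using System.Collections.Generic;

        // What every plug-in receives from the main application.
        public interface IHost
        {
            void RegisterService<T>(T implementation) where T : class;
            T GetService<T>() where T : class;   // null if nothing registered
        }

        public interface IPlugin
        {
            void Initialize(IHost host);
        }

        // A playback-status contract; the player (built-in or itself a
        // plug-in) registers an implementation of this.
        public interface IPlaybackStatus
        {
            TimeSpan Position { get; }
            int BitsPerSecond { get; }
        }

        // A status-display plug-in depends only on the contract.
        public class StatusOverlayPlugin : IPlugin
        {
            public void Initialize(IHost host)
            {
                IPlaybackStatus playback = host.GetService<IPlaybackStatus>();
                if (playback != null)
                    Console.WriteLine("Position: " + playback.Position);
            }
        }

        public class Host : IHost
        {
            private readonly Dictionary<Type, object> _services =
                new Dictionary<Type, object>();

            public void RegisterService<T>(T implementation) where T : class
            {
                _services[typeof(T)] = implementation;
            }

            public T GetService<T>() where T : class
            {
                object s;
                _services.TryGetValue(typeof(T), out s);
                return (T)s;
            }
        }

    This answers both questions at once: plug-ins can reach each other's functionality, but only through published contracts, so the dependencies stay explicit and optional rather than hard-wired.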

  • Balancing dependency injection with public API design

    - by kolektiv
    I've been contemplating how to balance testable design using dependency injection with providing a simple, fixed public API. My dilemma is: people would want to do something like var server = new Server() { ... } and not have to worry about creating the many dependencies and graph of dependencies that a Server(,,,,,,) may have. While developing, I don't worry too much, as I use an IoC/DI framework to handle all that (I'm not using the lifecycle management aspects of any container, which would complicate things further).

    Now, the dependencies are unlikely to be re-implemented. Componentisation in this case is almost purely for testability (and decent design!) rather than creating seams for extension, etc. People will 99.999% of the time wish to use a default configuration. So:

    - I could hardcode the dependencies. Don't want to do that; we lose our testing!
    - I could provide a default constructor with hard-coded dependencies and one which takes dependencies. That's... messy, and likely to be confusing, but viable.
    - I could make the dependency-receiving constructor internal and make my unit tests a friend assembly (assuming C#), which tidies the public API but leaves a nasty hidden trap lurking for maintenance. Having two constructors which are implicitly connected rather than explicitly would be bad design in general in my book.

    At the moment that last option is about the least evil I can think of. Opinions? Wisdom?
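
    A sketch of the second option - a default constructor chaining to the dependency-receiving one, sometimes called "poor man's DI" (names invented):

        public interface ITransport { void Send(string message); }

        public class TcpTransport : ITransport
        {
            public void Send(string message) { /* real network code here */ }
        }

        public class Server
        {
            private readonly ITransport _transport;

            // The simple public API: new Server() just works.
            public Server() : this(new TcpTransport()) { }

            // The explicit constructor used by tests; making it internal
            // plus [InternalsVisibleTo("Server.Tests")] combines the second
            // and third options, keeping the public surface to one constructor.
            public Server(ITransport transport)
            {
                _transport = transport;
            }

            public void Broadcast(string message) { _transport.Send(message); }
        }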

  • A fix for the design time error in MVVM Light V4.1

    - by Laurent Bugnion
    For those of you who installed V4.1 of MVVM Light and created a project for Windows Phone 8, you will have noticed an error showing up in the design surface (either in the Visual Studio designer, or in Expression Blend). The error says: “Could not load type ‘System.ComponentModel.INotifyPropertyChanging’ from assembly ‘mscorlib.extensions’”, with additional information about version numbers.

    The error is caused by an incompatibility between versions of System.Windows.Interactivity. Because this assembly is strongly named, any version incompatibility causes the kind of error shown here (for an interesting discussion on the strong-naming issue, see this thread on Codeplex). I managed to resolve the issue for Windows Phone 8 and will publish a cleaned-up installer next week. In the meantime, in order to allow you to continue development, please follow these steps:

    1. Download the new DLLs zip package (MVVMLight_V4_1_25_WP8).
    2. Right-click on the zip file and select Properties from the context menu.
    3. Press the “Unblock” button (if available) and then OK.
    4. Right-click again on the zip package and select “Extract all…”. Select a known location for the new DLLs.
    5. Open the MVVM Light project with the design-time error in Visual Studio 2012.
    6. Open the References folder in the Solution Explorer.
    7. Select the following DLLs: GalaSoft.MvvmLight.dll, GalaSoft.MvvmLight.Extras.dll, Microsoft.Practices.ServiceLocation.dll and System.Windows.Interactivity.dll. Press “delete” and confirm to remove the DLLs from your project.
    8. Right-click on References and select Add Reference from the context menu.
    9. Browse to the folder with the new DLLs. Select the four new DLLs and press OK.
    10. Rebuild your application, and open it again in Blend or in the Visual Studio designer. The error should be gone now.

    In the next few days, as time allows, I will publish a new MSI containing a fixed version of the DLLs as well as a few other improvements. This quick fix should, however, allow you to continue working on your Windows Phone 8 projects in design mode too.

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn
