Search Results

Search found 24245 results on 970 pages for 'date model'.


  • Generating jquery 'rules' from business model to UI in asp.net mvc

    - by jim
    Hi all, I've had a good look around and am certain that there's no matching question on SO, so here goes. Has anyone created a 'helper' method on their model that generates jQuery (or plain JavaScript) validation rules dynamically, based on the criteria/rules that are contained within the object and taken from a repository (i.e. DB)? What I'm thinking of is a discrete set of partial views (and associated models) that have rules at the business logic 'level' and, rather than (or in combination with) validating the rule(s) at postback, translating the same rules into tightly focussed jQuery methods that work identically at client (JS) and server (C#) levels. I can see performance benefits here. Also, the rules definitions could be created in a single place (in C#) and the jQuery generated off of that, thus allowing single edits to update both code streams. I appreciate that there would be limitations imposed by language-specific constraints, but the general principle could be quite interesting if used appropriately. I'm also aware that testability could be an issue when using two different language structures while hoping to achieve similar test outcomes - but those aside... any thoughts or experiences of something similar out there? Cheers, jimi

    Read the article

  • Aggregating a list of dates to start and end date

    - by Joe Mako
    I have a list of dates and IDs, and I would like to roll them up into periods of consecutive dates, within each ID. For a table called "data" with the columns "testid" and "pulldate": | A79 | 2010-06-02 | | A79 | 2010-06-03 | | A79 | 2010-06-04 | | B72 | 2010-04-22 | | B72 | 2010-06-03 | | B72 | 2010-06-04 | | C94 | 2010-04-09 | | C94 | 2010-04-10 | | C94 | 2010-04-11 | | C94 | 2010-04-12 | | C94 | 2010-04-13 | | C94 | 2010-04-14 | | C94 | 2010-06-02 | | C94 | 2010-06-03 | | C94 | 2010-06-04 | I want to generate a table with the columns "testid", "group", "start_date", "end_date": | A79 | 1 | 2010-06-02 | 2010-06-04 | | B72 | 2 | 2010-04-22 | 2010-04-22 | | B72 | 3 | 2010-06-03 | 2010-06-04 | | C94 | 4 | 2010-04-09 | 2010-04-14 | | C94 | 5 | 2010-06-02 | 2010-06-04 | This is the code I came up with: SELECT t2.testid, t2.group, MIN(t2.pulldate) AS start_date, MAX(t2.pulldate) AS end_date FROM(SELECT t1.pulldate, t1.testid, SUM(t1.check) OVER (ORDER BY t1.testid,t1.pulldate) AS group FROM(SELECT data.pulldate, data.testid, CASE WHEN data.testid=LAG(data.testid,1) OVER (ORDER BY data.testid,data.pulldate) AND data.pulldate=date (LAG(data.pulldate,1) OVER (PARTITION BY data.testid ORDER BY data.pulldate)) + integer '1' THEN 0 ELSE 1 END AS check FROM data ORDER BY data.testid, data.pulldate) AS t1) AS t2 GROUP BY t2.testid,t2.group ORDER BY t2.group; I use the LAG window function to compare each row to the previous one, putting a 1 whenever a new group should start; I then take a running sum of that column and aggregate over the combinations of "group" and "testid". Is there a better way to accomplish my goal, or does this operation have a name? I am using PostgreSQL 8.4.
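
    A common alternative for this kind of gaps-and-islands problem is sketched below (not from the original post; it assumes pulldate is a DATE column and relies on PostgreSQL 8.4's window functions): subtracting a per-testid row number from the date makes every run of consecutive days collapse to the same anchor value, which can then be grouped on.

        SELECT testid,
               ROW_NUMBER() OVER (ORDER BY testid, MIN(pulldate)) AS group_id,
               MIN(pulldate) AS start_date,
               MAX(pulldate) AS end_date
        FROM (SELECT testid,
                     pulldate,
                     -- consecutive days within a testid share the same anchor date
                     pulldate - (ROW_NUMBER() OVER (PARTITION BY testid ORDER BY pulldate))::int AS grp_anchor
              FROM data) AS t
        GROUP BY testid, grp_anchor
        ORDER BY testid, start_date;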

    Read the article

  • Nested model form with collection in Rails 2.3

    - by kristian nissen
    How can I make this work in Rails 2.3? class Magazine < ActiveRecord::Base has_many :magazinepages end class Magazinepage < ActiveRecord::Base belongs_to :magazine end and then in the controller: def new @magazine = Magazine.new @magazinepages = @magazine.magazinepages.build end and then the form: <% form_for(@magazine) do |f| %> <%= error_messages_for :magazine %> <%= error_messages_for :magazinepages %> <fieldset> <legend><%= t('new_magazine') %></legend> <p> <%= f.label :title %> <%= f.text_field :title %> </p> <fieldset> <legend><%= t('new_magazine_pages') %> <% f.fields_for :magazinepages do |p| %> <p> <%= p.label :name %> <%= p.text_field :name %> </p> <p> <%= p.file_field :filepath %> </p> <% end %> </fieldset> <p> <%= f.submit :save %> </p> </fieldset> <% end %> The problem is that if I want to submit a collection of magazinepages, ActiveRecord complains because it expects a model and not an array. Create action: def create @magazine = Magazine.new params[:magazine] @magazine.save ? redirect_to(@magazine) : render(:action => 'new') end

    Read the article

  • Passing session between jsf backing bean and model

    - by Rachel
    Background: I have a backing bean with an upload method that listens for file uploads. I pass the uploaded file to a parser, and in the parser I run validation checks on each row of the CSV file. If validation fails, I have to log the information and save it in a logging table in the database. My end goal: to get session information in the logging bean so that I can get the InitialContext and make a call to an EJB to save the data to the database. What is happening: In my upload backing bean I am getting the session, but when I call the parser I do not pass the session information along, because I do not want the parser to depend on the session (I want to unit test the parser individually). So in my parser I have no session information; from the parser I make a call to the logging bean (just a bean with some EJB methods), but in this logging bean I need the session because I need to get the InitialContext. Question: Is there a way in JSF to get, in my logging bean, the same session that I have in my upload backing bean? I tried doing: FacesContext ctx = FacesContext.getCurrentInstance(); HttpSession session = (HttpSession) ctx.getExternalContext().getSession(false); but the session value was null. The more generic question would be: how can I get session information in a model bean or other beans that are referenced from backing beans in which we have the session? Is there a generic method in JSF with which we can access session information throughout a JSF application?

    Read the article

  • Sort string array by analysing date details in those strings

    - by Jason Evans
    I have a requirement for the project I'm working on right now which is proving a bit tricky for me. Basically I have to sort an array of items based on the Text property of those items. Here are my items: var answers = [], answer1 = { Id: 1, Text: '3-4 weeks ago' }, answer2 = { Id: 2, Text: '1-2 weeks ago' }, answer3 = { Id: 3, Text: '7-8 weeks ago' }, answer4 = { Id: 4, Text: '5-6 weeks ago' }, answer5 = { Id: 5, Text: '1-2 days ago' }, answer6 = { Id: 6, Text: 'More than 1 month ago' }; answers.push(answer1); answers.push(answer2); answers.push(answer3); answers.push(answer4); answers.push(answer5); answers.push(answer6); I need to analyse the Text property of each item so that, after the sorting, the array looks like this: answers[0] = { Id: 6, Text: 'More than 1 month ago' } answers[1] = { Id: 3, Text: '7-8 weeks ago' } answers[2] = { Id: 4, Text: '5-6 weeks ago' } answers[3] = { Id: 1, Text: '3-4 weeks ago' } answers[4] = { Id: 2, Text: '1-2 weeks ago' } answers[5] = { Id: 5, Text: '1-2 days ago' } The logic is that the further away the date, the higher its priority, so it should appear first in the array. "1-2 days" is therefore a lower priority than "7-8 weeks". So I need to extract the number values, and then the units (e.g. days, weeks), and somehow sort the array based on those details. Quite honestly I'm finding it very difficult to come up with a solution, and I'd appreciate any help.

    Read the article

  • negative values in integer programming model

    - by Lucia
    I'm new at using the GLPK tool, and after writing a model for a certain integer problem and running the solver (glpsol) I get negative values in some constraints that shouldn't be negative at all: No.Row name Activity Lower bound Upper bound 8 act[1] 0 -0 9 act[2] -3 -0 10 act[2] -2 -0 That constraint is defined like this: act{j in J}: sum{i in I} d[i,j] <= y[j]*m; where the sets and variables used are like this: param m, integer, 0; param n, integer, 0; set I := 1..m; set J := 1..n; var y{j in J}, binary; As the upper bound is negative, I think the problem may be in the y[j]*m part on the right side of the inequality... perhaps something with the multiplication of binaries? Or that the j on that side of the constraint is undefined? I don't know... I would be greatly grateful if someone could help me with this! :) Please excuse my bad English; thanks in advance!

    Read the article

  • Passing arguments and conditions to model in codeigniter

    - by stormdrain
    I'm adding some models to a project, and was wondering if there is a "best practice" kind of approach to creating models: does it make sense to create a function for each specific query? I was starting to do this, then had the idea of creating a generic function that I could pass parameters to. E.g.: instead of function getClients(){ return $this->db->query('SELECT client_id,last FROM Names ORDER BY id DESC'); } function getClientNames($clid){ return $this->db->query('SELECT * FROM Names WHERE client_id = '.$clid); } function getClientName($nameID){ return $this->db->query('SELECT * FROM Names WHERE id ='.$nameID); } } something like function getNameData($args,$cond){ if($cond==''){ $q=$this->db->query('SELECT '.$args.' FROM Names'); return $q; }else{ $q=$this->db->query('SELECT '.$args.' FROM Names WHERE '.$cond); return $q; } } where I can pass the fields and conditions (if applicable) to the model. Is there a reason the latter example would be a bad idea? Thanks!

    Read the article

  • Build OpenGL model in parallel?

    - by Brendan Long
    I have a program which draws some terrain and simulates water flowing over it (in a cheap and easy way). Updating the water was easy to parallelize using OpenMP, so I can do ~50 updates per second. The problem is that even with a small amount of water, my draws per second are very, very low (starting at 5 and dropping to around 2 once there's a significant amount of water). It's not a problem with the video card, because the terrain is more complicated and gets drawn so quickly that boost::timer tells me I get infinity draws per second if I turn the water off. It may be related to memory bandwidth, though (since I assume the model stays on the card and doesn't have to be transferred every time). What I'm concerned about is that on every draw I'm calling glVertex3f() about a million times (max size is 450*600, 4 vertices each), and it's done entirely sequentially because Glut won't let me call anything in parallel. So... is there some way of building the list in parallel and then passing it to OpenGL all at once? Or some other way of making it draw faster? Am I using the wrong method (besides the obvious "use less vertices")?

    Read the article

  • make target is never determined up to date

    - by Michael
    Cygwin make always reprocesses the $(chrome_jar_file) target, even after the first successful build, so I never get the "up to date" message and always see the commands for $(chrome_jar_file) executing. However, this happens only on Windows 7. On Windows XP, once it is built it stays intact; no more rebuilds. I narrowed the issue down to one prerequisite - $(jar_target_dir). Here is part of the code # The location where the JAR file will be created. jar_target_dir := $(build_dir)/chrome # The main chrome JAR file. chrome_jar_file := $(jar_target_dir)/$(extension_name).jar # The root of the JAR sources. jar_source_root := chrome # The sources for the JAR file. jar_sources := bla #... some files, doesn't matter jar_sources_no_dir := $(subst $(jar_source_root)/,,$(jar_sources)) $(chrome_jar_file): $(jar_sources) $(jar_target_dir) @echo "Creating chrome JAR file." @cd $(jar_source_root); $(ZIP) ../$(chrome_jar_file) $(jar_sources_no_dir) @echo "Creating chrome JAR file. Done!" $(jar_target_dir): $(build_dir) echo "Creating jar target dir..." if [ ! -x $(jar_target_dir) ]; \ then \ mkdir $(jar_target_dir); \ fi $(build_dir): @if [ ! -x $(build_dir) ]; \ then \ mkdir $(build_dir); \ fi So if I just remove $(jar_target_dir) from the $(chrome_jar_file) rule, it works fine.

    Read the article

  • How to use the Zend_Log instance that was created using the Zend_Application_Resource_Log in a model

    - by Alex
    Our Zend_Log is initialized by adding only the following lines to application.ini: resources.log.stream.writerName = "Stream" resources.log.stream.writerParams.mode = "a" So Zend_Application_Resource_Log will create the instance for us. We are already able to access this instance in controllers via the following: public function getLog() { $bootstrap = $this->getInvokeArg('bootstrap'); //if (is_null($bootstrap)) return false; if (!$bootstrap->hasPluginResource('Log')) { return false; } $log = $bootstrap->getResource('Log'); return $log; } So far, so good. Now we want to use the same log instance in model classes, where we cannot access the bootstrap. Our first idea was to register the very same Log instance in Zend_Registry to be able to use Zend_Registry::get('Zend_Log') everywhere we want. In our Bootstrap class: protected function _initLog() { if (!$this->hasPluginResource('Log')) { throw new Zend_Exception('Log not enabled'); } $log = $this->getResource('Log'); assert( $log != null); Zend_Registry::set('Zend_Log', $log); } Unfortunately this assertion fails, i.e. $log IS NULL - but why? It is clear that we could just initialize the Zend_Log manually during bootstrapping without using the automatism of Zend_Application_Resource_Log, so that kind of answer will not be accepted.

    Read the article

  • Grouping by date, with 0 when count() yields no lines

    - by SCO
    I'm using PostgreSQL 9 and I'm fighting with counting and grouping when no lines are counted. Let's assume the following schema: create table views { date_event timestamp with time zone ; event_id integer; } Let's imagine the following content: 2012-01-01 00:00:05 2 2012-01-01 01:00:05 5 2012-01-01 03:00:05 8 2012-01-01 03:00:15 20 I want to group by hour and count the number of lines. I wish I could retrieve the following: 2012-01-01 00:00:00 1 2012-01-01 01:00:00 1 2012-01-01 02:00:00 0 2012-01-01 03:00:00 2 2012-01-01 04:00:00 0 2012-01-01 05:00:00 0 . . 2012-01-07 23:00:00 0 I mean that for each time slot I count the number of lines in my table whose date corresponds; otherwise, I return a line with a count of zero. The following will definitely not work (it will only yield the hours that actually have counted lines): SELECT extract ( hour from date_event ),count(*) FROM views where date_event > '2012-01-01' and date_event <'2012-01-07' GROUP BY extract ( hour from date_event ); Please note I might also need to group by minute, or by hour, or by day, or by month, or by year (multiple queries are possible of course). I can only use plain old SQL, and since my views table can be very big (100M records), I try to keep performance in mind. How can this be achieved? Thank you!
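
    One common way to get the zero rows for empty slots (a sketch only, not from the original post; it assumes the views table above) is to generate the full list of hour slots with generate_series() and LEFT JOIN the events onto it, as below. Swapping 'hour' for 'minute', 'day', 'month' or 'year' changes the granularity.

        SELECT s.slot,
               count(v.event_id) AS events   -- counts only matched rows, so empty slots give 0
        FROM generate_series('2012-01-01'::timestamptz,
                             '2012-01-07'::timestamptz,
                             interval '1 hour') AS s(slot)
        LEFT JOIN views v
               ON date_trunc('hour', v.date_event) = s.slot
        GROUP BY s.slot
        ORDER BY s.slot;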

    Read the article

  • ASP.NET MVC ModelCopier

    - by shiju
    In my earlier post ViewModel pattern and AutoMapper in ASP.NET MVC application, we discussed the need for View Model objects and how to map values between View Model objects and Domain Model objects using AutoMapper. The ASP.NET MVC futures assembly provides a static class, ModelCopier, that can also be used for copying values between View Model objects and Domain Model objects. The ModelCopier class has two static methods - CopyCollection and CopyModel. CopyCollection copies values between two collection objects, and CopyModel copies values between two model objects. var expense=new Expense(); ModelCopier.CopyModel(expenseViewModel, expense); The above code copies values from the expenseViewModel object to the expense object. For simple mapping between model objects you can use ModelCopier, but for complex scenarios I highly recommend using AutoMapper for mapping between model objects.

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc. With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users. Source Systems OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users. For example; EBS stores conversion rates between currencies which can be classified by conversion rates, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, where as PSFT/Siebel store for a range of days. PSFT has Rate Multiplication Factor and Rate Division Factor and we need to calculate the Rate based on them, where as other Source systems store the Currency Exchange Rate directly. OBIA Design The data consolidation from various disparate source systems, poses the challenge to conform various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting the performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts. 
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for It is necessary to think through the currencies during the initial implementation. 1) Identify various types of currencies that are used by your business. Understand what will be your Local (or Base) and Documentation currency. Identify various global currencies that your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, but may cause Full data loads and hence lost time. 2) If the user has a multi source system make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source specific counterparts. In other words, make sure for every DW specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date Update the End Date of the existing records. 
Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range. Mark the records as ‘Keep’ and ‘Filter’ and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as ‘Filter’ in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly. PeopleSoft:PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let’s look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 – USD to INR – 45 31 Jan 2010 – USD to INR – 46 PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW. Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate From Date and To Date while extracting data from source and this has certain assumptions: If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. 
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL ie. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept same as previous versions of OBIA. W_GLOBAL_EXCH_RATE_G being a new table does not have such needs. However SEBL does not have From Date and To Date columns in the Source tables similar to PSFT. We use similar extraction logic as explained in PSFT section for SEBL as well. What if all 5 Global Currencies configured are same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves Pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all the 5 Global Currencies Chosen are same. For example – If the user configures all 5 Global Currencies as ‘USD’ then the extract logic will not be able to generate a record for From Currency=USD. This is because, not all Source Systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases. Reusable Lookups Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups- LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who would want to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned. Changing Currency preferences in the Dashboard: In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another Currency preference. Project Analytics started supporting currency preferences since 7.9.6 release though, and it published a Tech note for other module customers to add toggling between currency preferences to the solution. List of Currency Preferences Starting from 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file: userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under :< home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml This file contains the list of currency preferences, like“Local Currency”, “Global Currency 1”,…which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. 
In Static mode, all users will see the full list as in the user preference currencies file. In the Dynamic mode, the list shown in the currency prompt drop down is a result of a dynamic query specified in the same file. Customers can build some security into the rpd, so the list of currency preferences will be based on the user roles…BI Applications built a subject area: “Dynamic Currency Preference” to run this query, and give every user only the list of currency preferences required by his application roles. Adding Currency to an Amount Field When the user selects one of the items from the currency prompt, all the amounts in that page will show in the Currency corresponding to that preference. For example, if the user selects “Global Currency1” from the prompt, all data will be showing in Global Currency 1 as specified in the Configuration Manager. If the user select “Local Currency”, all amount fields will show in the Currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as a functional currency, the other has EUR as functional currency), then subtotals will not be available (cannot add USD and EUR amounts in one field), and depending on the set up (see next paragraph), the user may receive an error. There are two ways to add the Currency field to an amount metric: In the form of currency code, like USD, EUR…For this the user needs to add the field “Apps Common Currency Code” to the report. This field is in every subject area, usually under the table “Currency Tag” or “Currency Code”… In the form of currency symbol ($ for USD, € for EUR,…) For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in Column Actions drop down list. Typically this column should be the “BI Common Currency Code” available in every subject area. Select Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column] Where Column is the “BI Common Currency Code” defined to take the currency code value based on the currency preference chosen by the user in the Currency preference prompt.
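
    As a minimal illustration of the CASE and GROUP BY pivot described above (a hypothetical sketch only; the staging table and column names below are simplified assumptions, not the actual OBIA objects), conditional aggregation collapses one row per global-currency slot into a single row carrying the five global rates:

        SELECT from_currency,
               conversion_date,
               rate_type,
               MAX(CASE WHEN global_slot = 1 THEN exchange_rate END) AS global1_exchange_rate,
               MAX(CASE WHEN global_slot = 2 THEN exchange_rate END) AS global2_exchange_rate,
               MAX(CASE WHEN global_slot = 3 THEN exchange_rate END) AS global3_exchange_rate,
               MAX(CASE WHEN global_slot = 4 THEN exchange_rate END) AS global4_exchange_rate,
               MAX(CASE WHEN global_slot = 5 THEN exchange_rate END) AS global5_exchange_rate
        FROM staged_daily_rates      -- hypothetical staging set: one row per currency, date, rate type and global slot
        GROUP BY from_currency, conversion_date, rate_type;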

    Read the article

  • Task-It Webinar - Source Code

    Last week I presented a webinar called "Building a real-world application with RadControls for Silverlight 4". For those that didn't get to see the webinar, you can view it here: Building a read-world application with RadControls for Silverlight 4 Since the webinar I've received several requests asking if I could post the source code for the simple application I showed demonstrating some of the techniques used in the development of Task-It, such as MVVM, Commands and Internationalization. This source code is now available for downloadhere. After downloading the source: Extract it to the location of your choice on your hard-drive Open the solution Right-click ModuleProject.Web and selecte 'Set as StartUp Project'. Right-click ProjectTestPage.aspx and selected 'Set as Start Page' Create a database in SQL Server called WebinarProject. Navigate to the Database folder under the WebinarProject directory and run the .sql script against your WebinarProject database. The last two steps are necessary only for the Tasks page to work properly (using WCF RIA Services). Now some notes about each page: Code-behind This is not the way I recommend coding a line-of-business application in Silverlight, but simply wanted to show how the code-behind approach would look. Command This page introduces MVVM and Commands. You'll notice in the XAML that the Command property of theRadMenuItem and the Button are both bound to a SaveCommand. That comes from the view model. If you look in the code- behind of the user control you'll see that an instance of a CommandViewModel is instantiated and set as the DataContext of the UserControl.There is also a listener for the view model's SaveCompleted event. When this is fired, it tells the view (UserControl) to display the MessageBox. Internationalization This sample is similar to the previous one, but instead of using hard-coded strings in the UI, the strings are obtained via binding toview model properties. The view model gets the strings from the .resx files (Strings.resx or Strings.de.resx) under Assets/Resources. If you uncomment the call to ShowGerman() in App.xaml.cs's Application_Startup method and re-run the application, you will see the UI in German. Note that this code, which sets the CurrentCulture and CurrentUICulture on the current thread to "de" (German) is for testing purposes only. RadWindow Once again, very similar to the previous example.The difference is that we are now using a RadWindow to display the 'Saved' message instead of a MessageBox. The advantage here is that we do not have to hold on to a reference to the view model in our code behind so that we can get the 'Saved' message from it. The RadWindow's DataContext is now also bound to the view model, so within its XAML we can bind directly to properties in the view model. Much nicer, and cleaner. One other thing I introduced in this example is the use of spacer Rectangles. Rather than setting a width and/or height on the rectangles for spacing, I am now referencing a style in my ResourceDictionary called StandardSpacerStyle. I like doing this better than using margins or padding because now I have a reusable way to create space between elements, the Rectangle does not show (because I have not set its Fill color), and I can change my spacing throughout the user interface in one place if I'd like. Tasks This page is quite a bit different than the other four. It is a very simple, stripped-down version of the Tasks page in the Task-It application. 
The Tasks.xaml UserControl has a ContentControl, and the Content of that control is set based on whether we are looking at the list of tasks or editing a task. So it displays one of two child UserControls, which are called List and Details. List has the RadGridView, Details has the form. In the code-behind of the Tasks UserControl I am once again setting its DataContext to a view model class. The nice thing is, whichever child UserControl is being displayed (List or Details) inherits its DataContext from its parent control (Tasks), so I do not have to explicitly set it. The List UserControl simply displays a RadGridView whose ItemsSource is bound to a property in the view model called Tasks, and its SelectedItem property is bound to a property in the view model called SelectedItem. The SelectedItem binding must be TwoWay so that the view is notified when the SelectedItem changes in the view model, and the view model is notified when something changes in the view (like when a user changes the Name and/or DueDate in the form). You'll also notice that the form's TextBox and RadDatePicker are also TwoWay bound to the SelectedItem property in the view model. You can experiment with the binding by removing TwoWay and see how changes in the form do not show up in the RadGridView. So here we have an example of two different views (List and Details) that are both bound to the same view model...and actually, so is the Tasks UserControl, so it is really three views. WCF RIA Services By the way, I am using WCF RIA Services to retrieve data for the RadGridView and save the data when the user clicks the Save button in the form. I created a really simple ADO.NET Entity Data Model in WebinarProject.Web called DataModel.edmx. I also created a simple Domain Data Service called DataService that has methods for retrieving data, inserting, updating and deleting. However I am only using the retrieval and update methods in this sample. Note that I do not currently have any validation in place on the form, as I wanted to keep the sample as simple as possible. Wrap up Technically, I should move the calls to WCF RIA Services out of the view model and put them into a separate layer, but this works for now, and that is a topic for another day! Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
      The most common performance mistake SQL Server developers make: SQL Server estimates memory requirement for queries at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different set of parameters values / predicates and hence different amount of memory can be estimated based on different set of parameter values / predicates. Common memory allocating queries are that perform Sort and do Hash Match operations like Hash Join or Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article which covers Hash Match operations.   When the plan is cached by using stored procedure or other plan caching mechanisms like sp_executesql or prepared statement, SQL Server estimates memory requirement based on first set of execution parameters. Later when the same stored procedure is called with different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse, overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to spill over tempdb resulting in poor performance.   This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operation. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   To read additional articles I wrote click here.   In most cases it is cheaper to pay for the compilation cost of dynamic queries than huge cost for spill over tempdb, unless memory requirement for a stored procedure does not change significantly based on predicates.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts   Enough theory, let’s see an example where we sort initially 1 month of data and then use the stored procedure to sort 6 months of data.   Let’s create a stored procedure that sorts customers by name within certain date range.   --Example provided by www.sqlworkshops.com create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1)       end go Let’s execute the stored procedure initially with 1 month date range.   set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 48 ms to complete.     The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.       
The estimated number of rows, 43199.9 is similar to actual number of rows 43200 and hence the memory estimation should be ok.       There was no Sort Warnings in SQL Profiler.      Now let’s execute the stored procedure with 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 679 ms to complete.      The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.      The estimated number of rows, 43199.9 is way different from the actual number of rows 259200 because the estimation is based on the first set of parameter value supplied to the stored procedure which is 1 month in our case. This underestimation will lead to sort spill over tempdb, resulting in poor performance.      There was Sort Warnings in SQL Profiler.    To monitor the amount of data written and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution, for additional information refer to the webcast: www.sqlworkshops.com/webcasts.     Let’s recompile the stored procedure and then let’s first execute the stored procedure with 6 month date range.  In a production instance it is not advisable to use sp_recompile instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile, refer to our webcasts for further details.   exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go Now the stored procedure took only 294 ms instead of 679 ms.    The stored procedure was granted 26832 KB of memory.      The estimated number of rows, 259200 is similar to actual number of rows of 259200. Better performance of this stored procedure is due to better estimation of memory and avoiding sort spill over tempdb.      There was no Sort Warnings in SQL Profiler.       Now let’s execute the stored procedure with 1 month date range.   --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 49 ms to complete, similar to our very first stored procedure execution.     This stored procedure was granted more memory (26832 KB) than necessary memory (6656 KB) based on 6 months of data estimation (259200 rows) instead of 1 month of data estimation (43199.9 rows). This is because the estimation is based on the first set of parameter value supplied to the stored procedure which is 6 months in this case. This overestimation did not affect performance, but it might affect performance of other concurrent queries requiring memory and hence overestimation is not recommended. This overestimation might affect performance Hash Match operations, refer to article Plan Caching and Query Memory Part II for further details.    Let’s recompile the stored procedure and then let’s first execute the stored procedure with 2 day date range. exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-02' go The stored procedure took 1 ms.      The stored procedure was granted 1024 KB based on 1440 rows being estimated.      There was no Sort Warnings in SQL Profiler.      Now let’s execute the stored procedure with 6 month date range. 
--Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go   The stored procedure took 955 ms to complete, way higher than 679 ms or 294ms we noticed before.      The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we noticed in the past this stored procedure with 6 month date range needed 26832 KB of memory to execute optimally without spill over tempdb. This is clear underestimation of memory and the reason for the very poor performance.      There was Sort Warnings in SQL Profiler. Unlike before this was a Multiple pass sort instead of Single pass sort. This occurs when granted memory is too low.      Intermediate Summary: This issue can be avoided by not caching the plan for memory allocating queries. Other possibility is to use recompile hint or optimize for hint to allocate memory for predefined date range.   Let’s recreate the stored procedure with recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1, recompile)       end go Let’s execute the stored procedure initially with 1 month date range and then with 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48ms and 291 ms in line with previous optimal execution times.      The stored procedure with 1 month date range has good estimation like before.      The stored procedure with 6 month date range also has good estimation and memory grant like before because the query was recompiled with current set of parameter values.      The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.     Let’s recreate the stored procedure with optimize for hint of 6 month date range.   --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo ='2001-06-30'))       end go Let’s execute the stored procedure initially with 1 month date range and then with 6 month date range.   --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48ms and 291 ms in line with previous optimal execution times.    The stored procedure with 1 month date range has overestimation of rows and memory. This is because we provided hint to optimize for 6 months of data.      The stored procedure with 6 month date range has good estimation and memory grant because we provided hint to optimize for 6 months of data.       
Let’s execute the stored procedure with 12 month date range using the currently cashed plan for 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-12-31' go The stored procedure took 1138 ms to complete.      2592000 rows were estimated based on optimize for hint value for 6 month date range. Actual number of rows is 524160 due to 12 month date range.      The stored procedure was granted enough memory to sort 6 month date range and not 12 month date range, so there will be spill over tempdb.      There was Sort Warnings in SQL Profiler.      As we see above, optimize for hint cannot guarantee enough memory and optimal performance compared to recompile hint.   This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operation. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   Summary: Cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling sort over tempdb which could be very expensive compared to compilation cost. The other possibility is to use optimize for hint, but in case one sorts more data than hinted by optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in optimize for hint are archived from the database, the estimation will be wrong leading to worst performance, so one has to exercise caution before using optimize for hint, recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts), I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.     Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet.These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.     Disclaimer and copyright information:This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. 
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
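
    As a rough sketch of the dynamic-query alternative mentioned at the start of the article (an illustration only, reusing the Customers table from the examples above; this is not code from the article), the date range can be embedded as literals so that each distinct range is compiled separately and its sort memory is estimated from the actual predicate:

        declare @sql nvarchar(max), @CreationDateFrom datetime, @CreationDateTo datetime
        select @CreationDateFrom = '2001-01-01', @CreationDateTo = '2001-06-30'
        -- Embedding the dates as literals (rather than as parameters) makes this an ad-hoc statement,
        -- so SQL Server estimates the memory grant from the real row count of this range.
        set @sql = N'select c.CustomerName, c.CreationDate from Customers c '
                 + N'where c.CreationDate between ''' + convert(nvarchar(10), @CreationDateFrom, 120) + N''' '
                 + N'and ''' + convert(nvarchar(10), @CreationDateTo, 120) + N''' '
                 + N'order by c.CustomerName option (maxdop 1)'
        exec (@sql)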

    Read the article

  • Implementing separation of concerns via MVC

    - by user2368481
    I'm creating a question to see if my understanding of MVC separation is correct; I haven't been able to find a clear answer anywhere online. So is this the right way to implement it (in Java)? I would have 3 .java files, one each for Model, Controller, and View. I would put all the classes related to the Model in Model.java like so: //Model.java { public class Model //class fields public Model(); public ModelClassA(); public ModelClassB(); public ModelClassC(); } with the ModelClasses being any classes that I consider to belong to the Model. Is it correct to have these classes within the Model class, given that I have read that nested classes should be avoided where possible?

    Read the article

  • What is UVIndex and how do I use it on OpenGL?

    - by Delta
    I am a noob in OpenGL ES 2.0 (for WebGL) and I'm trying to draw a simple model I've made with a 3D tool and exported to .fbx format. I've been able to draw some models that only have a vertex buffer, an index buffer for the vertices, a normal buffer and a texture coordinate buffer, but this model now has a "UVIndex" and I'm not sure where I am supposed to put it. My code looks like this: GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.VertexBuffer); GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vPosition"],3,GL.FLOAT, false, 0, 0); GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.NormalBuffer); GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vNormal"], 3, GL.FLOAT, false, 0, 0); GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.TexCoordBuffer); GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["TexCoord"], 2, GL.FLOAT, false, 0, 0); GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.Model.House.IndexBuffer); GL.bindTexture(GL.TEXTURE_2D, this.Texture.HTex1); GL.activeTexture(GL.TEXTURE0); GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED_SHORT, 0); But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file. Since I've never drawn a model that uses a UVIndex, I really have no clue what to do with it. This is the JSON file containing the model's data: http://pastebin.com/raw.php?i=G294TVmz

    Read the article

  • Start Time & Calculated Column Wonkiness in a SharePoint Event Calendar

    - by _zekeMouseOver
    I was creating some custom rollups on some of our event calendars and came across a very odd bug when trying to grab only the date component of the built-in Start Time field. One's first inclination will be to create a calculated column and give it the formula... =[Start Time]... and then assign its output type to be "Date Only." This works well until a user adds an All Day Event. For reasons unexplainable, the All Day Event flag causes your =[Start Time] to display the date minus one day. Here is an example of this in action:  Start Date and Time, Duration, Start Date Value and Start Day are all calculated fields. Notice how the Start Date and Time (=[Start Time]) is reporting 6:00PM of the previous day. The Start Date Value (=[Start Time] - Output Type: Number) confirms this (.75 = 6:00 PM.) Curiously enough, the Duration (=[End Time]-[Start Time]) is properly reporting the duration between 12:00AM and 11:59PM. Why? I don't know. Perhaps it's somehow bound to the regional settings on the site, but I'm not interested in changing a global site setting for the sake of one calculated field.With this information at our disposal, our calculated column to display the date part of the start date needs to be modified to add one day to the [Start Time] field if an All Day Event is selected. To determine this, we use the Duration above to assume the item is an all-day event and change our formula to be:=IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",[Start Time] + 1, [Start Time])This will work, but what happens when the user de-selects the "All Day Event" checkbox? The duration stays the same, but all other values begin reporting the correct time: Since our formula above is strictly based on an expected duration, it will add one to the correct date, causing the date 5/11/2010 to appear. Notice though that the raw value of the start time (in this case) is a non-fractional number (40,308) whereas the all-day event was being represented as 6:00 PM (.75) of the previous day. We can use this to add one more nested branch of logic to our calculation:=IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",IF([Start Time]=ROUND([Start Time],0),[Start Time],[Start Time]+1),[Start Time]) I feel somewhat... dirty about having to resort to this kind of calculation in what SHOULD have been a simple =[Start Time] to extract the date part of the Start Time field, but there you have it. Make sure to shower extra longer after having used it.

    Read the article

  • Can DrawIndexedPrimitives() be used for drawing a loaded model mesh-wise?

    - by Afzal
    I am using DrawIndexedPrimitives() to draw a loaded 3D model by drawing each mesh part, but this process makes my application very slow. This is perhaps because of the very large amount of vertex/index buffer data created in video memory. That is why I am looking for a way to use the same method for each model mesh instead. The problem is that I don't know how I would then set the textures of that mesh. Can anyone offer me some guidance?

    Read the article

  • How can I make the date/time applet display on a single line?

    - by EmmyS
    I just updated from Lucid to Natty (thought it was going to be Maverick, but my About Ubuntu menu shows that it is Natty, which "was released in April 2011" - who knew the developers had mastered time travel?!) In any case, the default date/time applet in my gnome panel is now displaying on two lines (date on top of time) instead of one line like it used to. Any way to get it back on one line? I've tried the instructions shown here, but it doesn't seem to make a difference.

    Read the article

  • A day dedicated to "Model Driven" approaches in Paris on 24 November, with free participation and renowned experts in attendance

    A day dedicated to "Model Driven" approaches in Paris on 24 November. Free participation in an exchange event attended by renowned experts. SoftFluent, together with other companies specialized in software development, is organizing the Model Driven day on 24 November 2011 in Paris - Coeur Défense. In the presence of recognized experts in the field, visitors will attend discussions and conferences on model-driven approaches to building software: user adoption, agility, application performance, quality of the generated code, modernization of the information system, etc. Through conferences, demonstrations and experience reports...

    Read the article

  • Are all View Models supposed to be accessed through the Main View Model in MVVM?

    - by chustar
    I am currently working on a WP8 application. My current design is to have each view bind directly against a specific view model. Looking through the samples, though, it seems that another way is to have all the view models accessed through the Main View Model, and then have all the views reach their view models through the MVM. Is this the correct way to do it (so that it doesn't cause flexibility and other issues in the future)?

    Read the article

  • How to model a system to help my team grasp the project's bigger picture? [closed]

    - by user1796528
    From a software engineering point of view, I should model the system to make it easier for other people to understand what they are working on. To do so, I have used the Dia drawing program. But after having used Dia for some time, I find that it falls short in helping me to correctly and efficiently model my project. How do you usually tackle this problem (modelling a project in the large), and what tools would you recommend for the job, and why?

    Read the article

  • APress Deal of the Day 20/Dec/2010 - Beginning SQL Server Modeling: Model-Driven Application Development in SQL Server 2008

    - by TATWORTH
    Today's $10 bargain PDF from Apress is: Beginning SQL Server Modeling: Model-Driven Application Development in SQL Server 2008. Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. $49.99 | Published Jul 2010 |

    Read the article
