Search Results

Search found 5157 results on 207 pages for 'logic gates'.

Page 25/207 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • Ruby on Rails - where to write business logic while processing a request? (newbie)

    - by Genadinik
    I am learning Ruby on Rails. I made a simple link like this:

        <%= link_to "Alex Link", alexes_path(@alex) %>

    then I routed it in routes.rb like this:

        resources :alexes
        get "home/index"

    I am a bit unclear, but I think it then goes to this part of the controller:

        def index
          #@alexes = Alex.all
          respond_to do |format|
            format.html # index.html.erb
            format.json { render json: @alexes }
          end
        end

    Am I correct that it goes to this part of the controller? Then nothing much happens and it goes to the next page, which is index.html.erb under views/alexes. So what I am wondering is: if I needed to do some business logic, would I write it in this controller action? Where inside it? An example would be nice to look at. Also, I would like to connect to a MongoDB database. Would I also write that in the middle of the controller? Thanks!

    Read the article

  • Nginx logic (if cookie set, redirect here...) Is it possible?

    - by Matthew Steiner
    So, I have a pretty basic need, but I can't figure out if it's even possible, much less how to do it. I have a main page that anyone can see. Most of the rest of the application can be seen only if logged in (hence, a "set cookie"). So I was thinking: as long as they don't have a cookie set, they can just see a cached version from nginx. I can get it caching with this:

        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

    And it helps a ton (instead of 15 requests per second it gets over 1000). Now I just need some sort of "server logic" that says: only serve the cached page if they have no cookie; otherwise, load the dynamic page (which will automatically redirect them into the app). Any ideas?

    Read the article

  • Is problem solving of puzzles/logic tests a skill that can be developed with practice, or only something the gifted have?

    - by dotnetdev
    Programming is essentially problem solving and using a lot of logic. With solving puzzles (like the ones recruiters like MS etc. ask), is this a skill that can be developed with practice, or is it a skill that only someone who is gifted has? (I assume the former, as many people can pass these tests.) Even so, I keep thinking it is a special skill for someone gifted, not for someone with a lot of practice. I guess that with practice you are perhaps more open-minded and start to think out of the box more (solving technical problems in development may also foster this mindset). Thanks

    Read the article

  • Rails: getting logic to run at end of request, regardless of filter chain aborts?

    - by JSW
    Is there a reliable mechanism discussed in the Rails documentation for calling a function at the end of the request, regardless of filter chain aborts? It's not after filters, because after filters don't get called if any prior filter redirected or rendered. For context, I'm trying to put some structured profiling/reporting information into the app log at the end of every request. This information is collected throughout the request lifetime via instance variables wrapped in custom controller accessors, and dumped at the end in a JSON blob for use by a post-processing script. My end goal is to generate reports about my application's logical query distribution (things that depend on controller logic, not just request URIs and parameters), performance profile (time spent in specific DB queries or blocked on webservices), failure rates (including invalid incoming requests that get rejected by before_filter validation rules), and a slew of other things that cannot really be parsed from the basic information in the application and apache logs. At a higher level, is there a different "Rails way" that solves my app profiling goal?

    Read the article

  • How should I do this (business logic) in Sql Server? A constraint?

    - by Pure.Krome
    Hi folks, I wish to add some type of business logic constraint to a table, but I'm not sure how or where. I have a table with the following fields:

        ID         INTEGER IDENTITY
        HubId      INTEGER
        CategoryId INTEGER
        IsFeatured BIT
        Foo        NVARCHAR(200)
        etc.

    What I wish is that you can only have one featured thingy per HubId + CategoryId. E.g.:

        1, 1, 1, 1, 'blah'      -- Ok.
        2, 1, 2, 1, 'more blah' -- Also Ok
        3, 1, 1, 1, 'aaa'       -- constraint error
        4, 1, 1, 0, 'asdasdad'  -- Ok.
        5, 1, 1, 0, 'bbbb'      -- Ok.

    So the third row to be inserted would fail because that hub AND category already have a featured thingy. Is this possible?

    Read the article

  • If cookie found, get data, else create cookie, is this good logic?

    - by Ryan
    I have an Action that basically adds an item to a cart; the only way the cart is known is by checking the cookie. Here is the flow of logic, please let me know if you see any issue:

    - /order/add/[id] is called via GET
    - the action checks for a cookie
    - if no cookie is found, it makes a new cart, writes the identifier to the cookie, and adds the item to the database with a relation to the cart it created
    - if a cookie is found, it gets the cart identifier from the cookie, gets the cart object, and adds the item to the database with a relation to the cart it found

    So it's basically like:

        action add(int id){
            if(cookie is there)
                cart = getcart(cookievalue)
            else
                cart = makecart()
            createcookie(cart.id)
            additemtocart(cart.id, id)
            return "success";
        }

    Seem right? I can't really think of another way that would make sense.
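    Purely as an illustration of the same get-or-create flow in another stack, here is a minimal sketch in Python with Flask; the in-memory cart helpers and the cart_id cookie name are made up for the example and are not from the original post:

        from itertools import count
        from flask import Flask, request, make_response

        app = Flask(__name__)

        # In-memory stand-ins for the database tables in the question (illustration only).
        _carts = {}
        _next_id = count(1)

        def make_cart():
            cart_id = next(_next_id)
            _carts[cart_id] = []
            return cart_id

        def add_item_to_cart(cart_id, item_id):
            _carts.setdefault(int(cart_id), []).append(item_id)

        @app.route("/order/add/<int:item_id>")
        def add(item_id):
            cart_id = request.cookies.get("cart_id")   # the only link to an existing cart
            if cart_id is None:
                cart_id = make_cart()                  # no cookie: create the cart first
            add_item_to_cart(cart_id, item_id)
            resp = make_response("success")
            resp.set_cookie("cart_id", str(cart_id))   # (re)write the identifier for later requests
            return resp

    One caveat worth keeping in mind: adding to a cart on a GET request means link prefetchers and crawlers can add items, so a POST is usually the safer verb for this action.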

    Read the article

  • Where ORMs blur the lines between code and data, how do you decide what logic should be a stored procedure, and what should be coded?

    - by PhonicUK
    Take the following pseudocode:

        CreateInvoiceAndCalculate(ItemsAndQuantities, DispatchAddress, User);

    And say CreateInvoice does the following:

    - Create a new entry in an Invoices table belonging to the specified User, to be sent to the given DispatchAddress.
    - Create a new entry in an InvoiceItems table for each of the items in ItemsAndQuantities, storing the Item, the Quantity, and the cost of the item as of now (by looking it up from an Items table).
    - Calculate the total amount of the invoice (ex shipping and taxes) and store it in the new Invoice row.

    At a glance you wouldn't be able to tell if this was a method in my application's code, or a stored procedure in the database that is being exposed as a function by the ORM. And to some extent it doesn't really matter. Now technically none of this is business logic. You're not making any decisions - just performing a calculation and creating records. However, some may argue that because you are performing a calculation that affects the business (the total amount to be invoiced), this isn't something that should be done in a stored procedure and instead should be in code. So for this specific example - why would it be more appropriate to do one or the other? And where do you draw the line? Or does it even particularly matter, as long as it's sufficiently well documented?

    Read the article

  • How is the PHP extensions/modules file structure logic based?

    - by dotpointer
    I'm trying to configure/build PHP 5.3.10 on Linux/Slackware 12, but the extensions appear in the wrong directory when I run make install. In the php.ini file the extension dir is defined as /usr/lib/php/extensions. The problem is that when I run "make install" the newly built extensions are copied to a subfolder of the extensions directory: /usr/lib/php/extensions/no-debug-non-zts-20090626. What am I supposed to do with this... copy the files down from the no-debug-non-zts-20090626 directory into the extensions directory, create symlinks from extensions to the modules in the no-debug-non-zts-20090626 directory (which will take a lot of time), or what? (I know I can do any of them, but I want to know the correct way...)

    Read the article

  • Incorrect logic flow? function that gets coordinates for a sudoku game

    - by igor
    This function of mine keeps on failing an autograder, and I am trying to figure out if there is a problem with its logic flow. Any thoughts? Basically, if the row is wrong, "Invalid row" should be printed, clearInput() called, and false returned. When y is wrong, "Invalid column" should be printed, clearInput() called, and false returned. When both are wrong, only "Invalid row" is to be printed (and still clearInput() called and false returned). Obviously when row and y are correct, print no error and return true. My function gets through most of the test cases, but fails towards the end, and I'm a little lost as to why.

        bool getCoords(int & x, int & y)
        {
            char row;
            bool noError = true;
            cin >> row >> y;
            row = toupper(row);
            if (row >= 'A' && row <= 'I' && isalpha(row) && y >= 1 && y <= 9)
            {
                x = row - 'A';
                y = y - 1;
                return true;
            }
            else if (!(row >= 'A' && row <= 'I'))
            {
                cout << "Invalid row" << endl;
                noError = false;
                clearInput();
                return false;
            }
            else
            {
                if (noError)
                {
                    cout << "Invalid column" << endl;
                }
                clearInput();
                return false;
            }
        }

    Read the article

  • What category of combinatorial problems appear on the logic games section of the LSAT?

    - by Merjit
    There's a category of logic problem on the LSAT that goes like this:

        Seven consecutive time slots for a broadcast, numbered in chronological order 1 through 7, will be filled by six song tapes (G, H, L, O, P, S) and exactly one news tape. Each tape is to be assigned to a different time slot, and no tape is longer than any other tape. The broadcast is subject to the following restrictions: L must be played immediately before O. The news tape must be played at some time after L. There must be exactly two time slots between G and P, regardless of whether G comes before P or whether G comes after P.

    I'm interested in generating a list of permutations that satisfy the conditions as a way of studying for the test and as a programming challenge. However, I'm not sure what class of permutation problem this is. I've generalized the problem type as follows. Given an n-length array A:

    - How many ways can a set of n unique items be arranged within A? E.g. how many ways are there to rearrange ABCDEFG?
    - If the length of the set of unique items is less than the length of A, how many ways can the set be arranged within A if items in the set may occur more than once? E.g. ABCDEF = AABCDEF; ABBCDEF, etc.
    - How many ways can a set of unique items be arranged within A if the items of the set are subject to "blocking conditions"?

    My thought is to encode the restrictions and then use something like Python's itertools to generate the permutations. Thoughts and suggestions are welcome.
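    Since the post already mentions Python's itertools, here is a minimal brute-force sketch that enumerates orderings for the quoted broadcast puzzle by filtering all permutations against the three restrictions; the encoding of the conditions is my own reading of the problem text, so treat it as an illustration rather than an answer key:

        from itertools import permutations

        TAPES = ["G", "H", "L", "O", "P", "S", "N"]   # "N" stands for the news tape

        def valid(order):
            # L must be played immediately before O.
            if order.index("O") != order.index("L") + 1:
                return False
            # The news tape must be played at some time after L.
            if order.index("N") < order.index("L"):
                return False
            # Exactly two time slots between G and P, i.e. their positions differ by 3.
            if abs(order.index("G") - order.index("P")) != 3:
                return False
            return True

        solutions = [order for order in permutations(TAPES) if valid(order)]
        print(len(solutions))
        for order in solutions[:5]:
            print(order)

    With only 7! = 5040 permutations, exhaustive filtering like this is instant; the "blocking conditions" in the generalized question only start to hurt when n grows and smarter constraint propagation is needed.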

    Read the article

  • Is unnecessary error handling recommended in business logic? eg. Null check/Percentage limit check etc

    - by novice_at_work
    We usually put unnecessary checks in our business logic to avoid failures. E.g.:

    1.

        public ObjectABC funcABC(){
            ObjectABC obj = new ObjectABC();
            ..........
            ..........   // it's never set to null here.
            ..........
            return obj;
        }

        ObjectABC o = funcABC();
        if (o != null) {
            // do something
        }

    Why do we need this null check if we are sure that it will never be null? Is it a good practice or not?

    2.

        int pplReached = funA(..,..,..);
        int totalPpl = funB(..,..,..);

    funA() just puts a few more restrictions on the result of funB().

        Double percentage = (totalPpl==0 || totalPpl<pplReached) ? 0.0 : pplReached/totalPpl;

    Do we again need this check? The question is: aren't we swallowing some fundamental issue by putting such checks? Issues which should ideally be surfaced are hidden by these checks. What is the recommended way?
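    As a language-neutral illustration of the trade-off being asked about, here is a small Python sketch (my own example, not from the post) contrasting a silent defensive default with a fail-fast check; the silent version hides the broken invariant upstream, while the strict version surfaces it immediately:

        def percentage_reached_silent(people_reached, total_people):
            # Defensive version: swallow impossible inputs and return a quiet default.
            if total_people == 0 or total_people < people_reached:
                return 0.0
            return people_reached / total_people

        def percentage_reached_strict(people_reached, total_people):
            # Fail-fast version: an impossible input means a bug upstream, so make it loud.
            if total_people < people_reached:
                raise ValueError(
                    f"reached ({people_reached}) cannot exceed total ({total_people})"
                )
            return people_reached / total_people if total_people else 0.0

        print(percentage_reached_silent(5, 3))   # 0.0 -- the upstream bug goes unnoticed
        print(percentage_reached_strict(3, 5))   # 0.6
        print(percentage_reached_strict(5, 3))   # raises ValueError, pointing at the real problem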

    Read the article

  • Where should my "filtering" logic reside with Linq-2-SQL and ASP.NET-MVC in View or Controller?

    - by Nate Bross
    I have a main table with several "child" tables: TableA, TableAChild1 and TableAChild2. I have a view which shows the information in TableA, and then has two columns listing all items in TableAChild1 and TableAChild2 respectively; they are rendered with partial views. Both child tables have a bit field VisibleToAll, and depending on user role, I'd like to display either all related rows, or only the related rows where VisibleToAll = true. This feels like it should be in the controller, but I'm not sure how it would look, because as it stands the controller (limited version) looks like this:

        return View("TableADetailView", repos.GetTableA(id));

    Would something like this even work, and would it be bad? What if my DataContext gets submitted, would that delete all the rows that have VisibleToAll == false?

        var tblA = repos.GetTableA(id);
        tblA.TableAChild1 = tblA.TableAChild1.Where(tmp => tmp.VisibleToAll == true);
        tblA.TableAChild2 = tblA.TableAChild2.Where(tmp => tmp.VisibleToAll == true);
        return View("TableADetailView", tblA);

    It would also be simple to add that logic to the RenderPartial call from the main view:

        <% Html.RenderPartial("TableAChild1", Model.TableAChild1.Where(tmp => tmp.VisibleToAll == true)); %>

    Read the article

  • Stuck on the logic of creating tags for posts (like SO tags)? (PHP)

    - by ggfan
    I am stuck on how to create tags for each post on my site. I am not sure how to add the tags into the database. Currently I have 3 tables:

        Tags:        TagID, TagName
        Posting:     posting_id, title
        PostingTags: posting_id, tagid

    The Tags table is just the names of the tags (e.g. 1 PHP, 2 MySQL, 3 HTML). Posting holds the posts (e.g. 1 What is PHP?, 2 What is CSS?, 3 What is HTML?). PostingTags shows the relation between postings and tags. When users type a posting, I insert the data into the "posting" table. It automatically inserts the posting_id for each post (posting_id is a primary key).

        $title = mysqli_real_escape_string($dbc, trim($_POST['title']));
        $query4 = "INSERT INTO posting (title) VALUES ('$title')";
        mysqli_query($dbc, $query4);

    HOWEVER, how do I insert the tags for each post? When users are filling out the form, there is a checkbox area for all the available tags and they check off whatever tags they want. (I am not yet handling the case where users type in their own tags.) This shows each tag with a checkbox; when users check off a tag, it gets stored in an array called "postingtag[]".

        <label class="styled">Select Tags:</label>
        <?php
        $dbc = mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME);
        $query5 = "SELECT * FROM tags ORDER BY tagname";
        $data5 = mysqli_query($dbc, $query5);
        while ($row5 = mysqli_fetch_array($data5)) {
            echo '<li><input type="checkbox" name="postingtag[]" value="'.$row5['tagname'].'">'.$row5['tagname'].'</li>';
        }
        ?>

    My question is: how do I insert the tags in the array ("postingtag") into my "postingtags" table? Should I do this?

        $postingtag = $_POST["postingtag"];
        foreach($postingtag as $value){
            $query5 = "INSERT INTO postingtags (posting_id, tagID) VALUES (____, $value)";
            mysqli_query($dbc, $query5);
        }

    1. In this query, how do I get the posting_id value of the post? I am stuck on the logic here, so if someone can explain the next step, I would appreciate it! Is there an easier way to insert tags?
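    The missing piece in the blank above is the auto-generated id of the post that was just inserted; in PHP's mysqli that is what mysqli_insert_id($dbc) returns right after the INSERT. The same "insert the post, read back its id, insert one link row per tag" pattern in a minimal Python/sqlite3 sketch (table and column names copied from the question, everything else illustrative):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE posting     (posting_id INTEGER PRIMARY KEY AUTOINCREMENT, title TEXT);
            CREATE TABLE tags        (TagID INTEGER PRIMARY KEY AUTOINCREMENT, TagName TEXT);
            CREATE TABLE postingtags (posting_id INTEGER, tagID INTEGER);
        """)
        conn.executemany("INSERT INTO tags (TagName) VALUES (?)", [("PHP",), ("MySQL",), ("HTML",)])

        def create_post(title, tag_ids):
            cur = conn.execute("INSERT INTO posting (title) VALUES (?)", (title,))
            posting_id = cur.lastrowid                 # id generated by the INSERT above
            conn.executemany(
                "INSERT INTO postingtags (posting_id, tagID) VALUES (?, ?)",
                [(posting_id, tag_id) for tag_id in tag_ids],
            )
            return posting_id

        post_id = create_post("What is PHP?", tag_ids=[1, 2])
        print(conn.execute("SELECT * FROM postingtags WHERE posting_id = ?", (post_id,)).fetchall())

    In the question's flow, the id would be captured right after the INSERT INTO posting query and then reused inside the loop that inserts the postingtags rows.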

    Read the article

  • Autodetect/mount SDCards and run script for them on Linux

    - by Brendan
    Hey Everyone, I'm currently running SME Server, and need to have a script run upon the attachment of SD cards to my server. The script itself works fine (it copies the contents of the cards), but the automounting and execution of the script is where I'm having issues. I have a USB hub consisting of 10 USB ports; that shows up as:

        [root@server ~]# lsusb
        Bus 004 Device 002: ID 0000:0000
        Bus 004 Device 001: ID 0000:0000
        Bus 003 Device 001: ID 0000:0000
        Bus 002 Device 001: ID 0000:0000
        Bus 001 Device 055: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 051: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 050: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 001: ID 0000:0000

    (The hub is the TERMINUS TECHNOLOGY INC entries.) As I cannot plug SD cards directly into the server, I use USB-to-SD-card attachments (10 of them) plugged into the hub to read the cards. Upon plugging the 10 attachments (without cards) into the hub, lsusb yields the following:

        [root@server ~]# lsusb
        Bus 004 Device 002: ID 0000:0000
        Bus 004 Device 001: ID 0000:0000
        Bus 003 Device 001: ID 0000:0000
        Bus 002 Device 001: ID 0000:0000
        Bus 001 Device 073: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 072: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 071: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 070: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 069: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 068: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 067: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 066: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 065: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 064: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 055: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 051: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 050: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 001: ID 0000:0000

    As you can see, the readers are the "Genesys Logic, Inc." entries. Plugging an SD card into a reader doesn't affect lsusb (it reads exactly as above); however my system recognises the cards fine, as indicated by dmesg:

        Attached scsi generic sg11 at scsi54, channel 0, id 0, lun 0, type 0
        USB Mass Storage device found at 73
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        sdd: sdd1
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        sdd: sdd1
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        sdd: sdd1

    If I manually mount sdd1 (mount /dev/sdd1 /somedirectory/) this works fine. What I'm really after is a solution that automounts each of the cards as they are inserted into the reader, and executes a script for them (this will involve copying their contents to another directory).
    My problem is that I don't know how to do this. I don't think udev will work, as the USB devices don't change; however, if I could somehow get udev working with /dev/disk/by-path/ I think this is doable (it seems to keep constant entries). ls /dev/disk returns:

        pci-0000:00:1d.7-usb-0:4.1.1.1:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.1.2:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.1.3:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.1.4:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.2:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.3:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.4:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.2:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.3:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0-part1
        pci-0000:0b:01.0-scsi-0:0:1:0
        pci-0000:0b:01.0-scsi-0:0:1:0-part1
        pci-0000:0b:01.0-scsi-0:0:1:0-part2

    From the above, we can see I have only one card plugged into a reader (pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0-part1). Running:

        mount /dev/disk/by-path/pci-0000\:00\:1d.7-usb-0\:4.4\:1.0-scsi-0\:0\:0\:0-part1

    works and places the card under /media/usbdisk/, however:

        mount /dev/disk/by-path/pci-0000\:00\:1d.7-usb-0\:4.4\:1.0-scsi-0\:0\:0\:0-part1 slot1/

    doesn't work, and returns "mount: can't get address for /dev/disk/by-path/pci-0000". Any ideas and solutions would be great; I've seen the knowledge of a lot of the guys on here before, so I'm hopeful someone can help me out. Thanks

    Read the article

  • Why does working processors harder use more electrical power?

    - by GazTheDestroyer
    Back in the mists of time when I started coding, at least as far as I'm aware, processors all used a fixed amount of power. There was no such thing as a processor being "idle". These days there are all sorts of technologies for reducing power usage when the processor is not very busy, mostly by dynamically reducing the clock rate. My question is: why does running at a lower clock rate use less power? My mental picture of a processor is of a reference voltage (say 5V) representing a binary 1, and 0V representing 0. Therefore I tend to think of a constant 5V being applied across the entire chip, with the various logic gates disconnecting this voltage when "off", meaning a constant amount of power is being used. The rate at which these gates are turned on and off seems to have no relation to the power used. I have no doubt this is a hopelessly naive picture, but I am no electrical engineer. Can someone explain what's really going on with frequency scaling, and how it saves power? Are there any other ways that a processor uses more or less power depending on state? For example, does it use more power if more gates are open? How are mobile / low power processors different from their desktop cousins? Are they just simpler (fewer transistors?), or is there some other fundamental design difference?
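    For reference, the standard first-order model for the dynamic (switching) power of CMOS logic, which is what frequency scaling attacks, is usually written as:

        P_{dynamic} \approx \alpha \, C \, V^{2} \, f

    where \alpha is the activity factor (the fraction of gates that actually switch each cycle), C is the switched capacitance, V the supply voltage and f the clock frequency. Because a lower clock frequency usually also permits a lower supply voltage, dynamic power falls faster than linearly as the clock is scaled down; static leakage power is a separate term not captured by this formula.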

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc. With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users. Source Systems OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users. For example; EBS stores conversion rates between currencies which can be classified by conversion rates, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, where as PSFT/Siebel store for a range of days. PSFT has Rate Multiplication Factor and Rate Division Factor and we need to calculate the Rate based on them, where as other Source systems store the Currency Exchange Rate directly. OBIA Design The data consolidation from various disparate source systems, poses the challenge to conform various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting the performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts. 
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for It is necessary to think through the currencies during the initial implementation. 1) Identify various types of currencies that are used by your business. Understand what will be your Local (or Base) and Documentation currency. Identify various global currencies that your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, but may cause Full data loads and hence lost time. 2) If the user has a multi source system make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source specific counterparts. In other words, make sure for every DW specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date Update the End Date of the existing records. 
Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range. Mark the records as ‘Keep’ and ‘Filter’ and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as ‘Filter’ in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly. PeopleSoft:PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let’s look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 – USD to INR – 45 31 Jan 2010 – USD to INR – 46 PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW. Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate From Date and To Date while extracting data from source and this has certain assumptions: If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. 
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL ie. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept same as previous versions of OBIA. W_GLOBAL_EXCH_RATE_G being a new table does not have such needs. However SEBL does not have From Date and To Date columns in the Source tables similar to PSFT. We use similar extraction logic as explained in PSFT section for SEBL as well. What if all 5 Global Currencies configured are same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves Pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all the 5 Global Currencies Chosen are same. For example – If the user configures all 5 Global Currencies as ‘USD’ then the extract logic will not be able to generate a record for From Currency=USD. This is because, not all Source Systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases. Reusable Lookups Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups- LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who would want to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned. Changing Currency preferences in the Dashboard: In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another Currency preference. Project Analytics started supporting currency preferences since 7.9.6 release though, and it published a Tech note for other module customers to add toggling between currency preferences to the solution. List of Currency Preferences Starting from 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file: userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under :< home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml This file contains the list of currency preferences, like“Local Currency”, “Global Currency 1”,…which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. 
In Static mode, all users will see the full list as in the user preference currencies file. In the Dynamic mode, the list shown in the currency prompt drop down is a result of a dynamic query specified in the same file. Customers can build some security into the rpd, so the list of currency preferences will be based on the user roles…BI Applications built a subject area: “Dynamic Currency Preference” to run this query, and give every user only the list of currency preferences required by his application roles. Adding Currency to an Amount Field When the user selects one of the items from the currency prompt, all the amounts in that page will show in the Currency corresponding to that preference. For example, if the user selects “Global Currency1” from the prompt, all data will be showing in Global Currency 1 as specified in the Configuration Manager. If the user select “Local Currency”, all amount fields will show in the Currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as a functional currency, the other has EUR as functional currency), then subtotals will not be available (cannot add USD and EUR amounts in one field), and depending on the set up (see next paragraph), the user may receive an error. There are two ways to add the Currency field to an amount metric: In the form of currency code, like USD, EUR…For this the user needs to add the field “Apps Common Currency Code” to the report. This field is in every subject area, usually under the table “Currency Tag” or “Currency Code”… In the form of currency symbol ($ for USD, € for EUR,…) For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in Column Actions drop down list. Typically this column should be the “BI Common Currency Code” available in every subject area. Select Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column] Where Column is the “BI Common Currency Code” defined to take the currency code value based on the currency preference chosen by the user in the Currency preference prompt.
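    As a quick illustration of the "compress daily rates into range records" step described in the EBS extraction logic above, here is a small Python sketch (my own simplification, not the actual OBIA/Informatica mapping) that collapses consecutive daily rows whose rate did not change into from-date/to-date ranges:

        from datetime import date

        # (conversion_date, from_currency, to_currency, rate) -- toy stand-ins for GL_DAILY_RATES rows
        daily = [
            (date(2010, 1, 1), "USD", "INR", 45.0),
            (date(2010, 1, 2), "USD", "INR", 45.0),
            (date(2010, 1, 3), "USD", "INR", 46.0),
            (date(2010, 1, 4), "USD", "INR", 46.0),
        ]

        def compress(rows):
            """Collapse consecutive daily rows with the same rate into (from_date, to_date, frm, to, rate) ranges."""
            ranges = []
            for day, frm, to, rate in sorted(rows):
                if ranges and ranges[-1][2:] == (frm, to, rate):
                    ranges[-1] = (ranges[-1][0], day, frm, to, rate)   # extend the open range
                else:
                    ranges.append((day, day, frm, to, rate))           # rate changed: start a new range
            return ranges

        for r in compress(daily):
            print(r)

    The real mappings additionally handle incremental loads, end-date updates on existing ranges, and the pivot into the five global-currency columns, but the core idea is the same range compression shown here.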

    Read the article

  • How do I create midi data to trigger an Ultrabeat pattern in Logic?

    - by user289581
    I've made some Ultrabeat patterns but I can't figure out how to trigger them. I make the pattern on C-1, then I go back to the arrange window and hit record and then hit C1 on my keyboard, and it doesn't play the pattern, it just plays a single note. I have Pattern Mode enabled on Ultrabeat and I'm using a one-shot trigger but I can't seem to trigger my pattern to play at all. What am I doing wrong?

    Read the article

  • Unity: The parameter host could not be resolved when attempting to call constructor

    - by Terrance
    When I attempt to resolve an instance of the base class I get a ResolutionFailedException with roughly the following error: "The parameter host could not be resolved when attempting to call constructor". I'm currently not using an interface for the base type, and my instance of the class inherits from the base type class. I'm new to Unity and DI, so I'm thinking it's possibly something I forgot.

        ExeConfigurationFileMap map = new ExeConfigurationFileMap();
        map.ExeConfigFilename = "Unity.Config";
        Configuration config = ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);
        UnityConfigurationSection section = (UnityConfigurationSection)config.GetSection("unity");
        IUnityContainer container = new UnityContainer();
        section.Containers.Default.Configure(container);
        // Throws exception here on this
        BaseCalculatorServer server = container.Resolve<BaseCalculatorServer>();

    and the Unity.Config file:

        <container>
          <types>
            <type name ="CalculatorServer" type="Calculator.Logic.BaseCalculatorServer, Calculator.Logic"
                  mapTo="Calculator.Logic.CalculateApi, Calculator.Logic"/>
          </types>
        </container>
        </containers>

    The base class:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Transactions;
        using Microsoft.Practices.Unity;
        using Calculator.Logic;

        namespace Calculator.Logic
        {
            public class BaseCalculatorServer : IDisposable
            {
                public BaseCalculatorServer() { }

                public CalculateDelegate Calculate { get; set; }
                public CalculationHistoryDelegate CalculationHistory { get; set; }

                /// <summary>
                /// Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
                /// </summary>
                public void Dispose()
                {
                    this.Dispose();
                }
            }
        }

    The implementation:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Calculator.Logic;
        using System.ServiceModel;
        using System.ServiceModel.Configuration;
        using Microsoft.Practices.Unity;

        namespace Calculator.Logic
        {
            public class CalculateApi : BaseCalculatorServer
            {
                public CalculateApi(ServiceHost host)
                {
                    host.Open();
                    Console.WriteLine("Press Enter To Exit");
                    Console.ReadLine();
                    host.Close();
                }

                public CalculateDelegate Calculate { get; set; }
                public CalculationHistoryDelegate CalculationHistory { get; set; }
            }
        }

    Yes, both the base class and the implementation are in the same namespace, and that's something I will change design-wise once I get this working. Oh, and a more detailed error:

        Resolution of the dependency failed, type = "Calculator.Logic.BaseCalculatorServer", name = "".
        Exception message is: The current build operation (build key Build Key[Calculator.Logic.BaseCalculatorServer, null]) failed:
        The value for the property "Calculate" could not be resolved. (Strategy type BuildPlanStrategy, index 3)

    Read the article

  • SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072

    - by Pinal Dave
    Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced cardinality estimation in SQL Server 2014 and how it improves the performance of queries. The cardinality estimation logic is responsible for the quality of query plans and is largely responsible for improving performance for any query. This logic was not updated for quite a while, but in the latest version, SQL Server 2014, it has been re-designed. The new logic now incorporates various assumptions and algorithms of OLTP and warehousing workloads. I hope my earlier blog post clearly explained how the new cardinality estimation logic improves performance. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words. You can download the code used in this course from Simple Demo of New Cardinality Estimation Features of SQL Server 2014. Action Item: Here are the blog posts I have previously written. You can read them over here: Simple Demo of New Cardinality Estimation Features of SQL Server 2014, Pluralsight Course. You can subscribe to my YouTube Channel for frequent updates. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Video

    Read the article

  • SQL SERVER – How to Force New Cardinality Estimation or Old Cardinality Estimation

    - by Pinal Dave
    After reading my initial two blog posts on the new cardinality estimation, I received quite a few questions, and I felt I should have clarified a few things when I started to write about cardinality. Before continuing this blog, if you have not read them before, I suggest you read the following two blog posts: SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014 and SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072.

    Q: Will the new cardinality estimation improve the performance of all of my queries?
    A: Remember, there is no 0 or 1 logic when it is about estimation. The general assumption is that most queries will benefit from the new cardinality estimation introduced in SQL Server 2014. That is why the generic advice is to set the compatibility level of the database to 120, which is for SQL Server 2014.

    Q: Is it possible that after changing cardinality estimation to the new logic by setting the compatibility level to 120, I get degraded performance for a few queries?
    A: Yes, it is possible. However, the number of queries impacted in this way should be very small.

    Q: Can I still run my database at the older compatibility level and force a few queries to use the newer cardinality estimation logic? If yes, how?
    A: Yes, you can do that. You will need to run your query with trace flag 2312 to use the newer cardinality estimation logic.

        USE AdventureWorks2014
        GO
        -- Old Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
        GO
        -- Using New Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address]
        OPTION(QUERYTRACEON 2312);
        -- Using Old Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address];
        GO

    Q: Can I run my database at the newer compatibility level and force a few queries to use the older cardinality estimation logic? If yes, how?
    A: Yes, you can do that. You will need to run your query with trace flag 9481 to use the older cardinality estimation logic.

        USE AdventureWorks2014
        GO
        -- New Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
        GO
        -- Using New Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address];
        -- Using Old Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address]
        OPTION(QUERYTRACEON 9481);
        GO

    I guess I have covered most of the questions I have received so far. If I have missed any, please send them again and I will include them. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Rails - Logic for finding info from a :has_many :through needed!

    - by Jty.tan
    I have 3 relevant tables: User, Orders, and Viewables. The idea is that each User has many Orders, but at the same time, each User can view specific other Orders that belong to other Users. So Viewables has the attributes user_id and order_id, and Orders has a :has_many :Users, :through => :viewables. Is it possible to do a find through an Order's view? Something like:

        @viewable_orders = Orders.find(:all, :conditions => ["Viewable.user_id=?", 1])

    to get a list of Orders which are viewable by user_id = 1. (This doesn't work, else I won't be asking. :( ) The idea being that I can do something like a sidebar where the current user (the logged-in one) can view a list of other people's orders that he can view. For example, three other Users who have some Orders that he can view should eventually be displayed like this:

        Jack (2)
            Basic Order (registry_id: 1)
            New Order (registry_id: 29)
        Amy (4)
            Short Order (registry_id: 12)
        Jill (5)
            Hardware Order (14)
            Pink Order (17)
            Software Order (76)

    (The numbers in brackets are the respective user_id or registry_id.) So the list of all of the orders that the current user can see (assuming the current user's user_id is 1) would be found by doing:

        @viewable_orders = Viewable.find(:all, :conditions => ["user_id=?", 1])

    And that would give me the collection of the above 6 registries. Now, the easiest way to do this is for me to just have a flat list:

        Jill's Hardware Order
        Jill's Pink Order
        Amy's Short Order
        etc.

    But that gets ugly for long lists. Thanks!

    Read the article

  • MySQL vs. SQL Server on GoDaddy: What is the difference between a hosted DB and an App_Data DB?

    - by Nate Gates
    I'm using GoDaddy for site hosting, and I'm currently using MySQL because there are fewer limits on size, etc. My question is: what is the difference between using a hosted GoDaddy DB such as MySQL vs. creating a SQL Server database in the App_Data folder? My guess is security. Would it be a bad idea to use a SQL Server DB that's located in the App_Data folder? Additional: I am able to create a .mdf (SQL Server DB file) in the App_Data folder, but I'm really unsure if I should use it or not. If I did use it, it would simplify using some of the Microsoft tools. Like I said, my guess is that it would be less secure, but I don't really know. I know I have a 10GB file system limit, so I'm assuming my DB would have to share that space.

    Read the article

  • Can someone help me refactor this C# linq business logic for efficiency?

    - by Russell
    I feel like this is not a very efficient way of using LINQ. I was hoping somebody on here would have a suggestion for a refactor. I realize this code is not very pretty, as I was in a complete rush.

        public class Workflow
        {
            public void AssignForms()
            {
                using (var cntx = new ProjectBusiness.Providers.ProjectDataContext())
                {
                    var emplist = (from e in cntx.vw_EmployeeTaskLists
                                   where e.OwnerEmployeeID == null
                                   select e).ToList();
                    foreach (var emp in emplist)
                    {
                        // if employee has a form assigned: break;
                        if (emp.GRADE > 15 || (emp.Pay_Plan.ToLower().Contains("al") || emp.Pay_Plan.ToLower().Contains("ex")))
                        {
                            //Assign278();
                        }
                        else if ((emp.Series.Contains("0905") || emp.Series.Contains("0511") ||
                                  emp.Series.Contains("0110") || emp.Series.Contains("1801")) ||
                                 (emp.GRADE >= 12 && emp.GRADE <= 15))
                        {
                            var emptask = new ProjectBusiness.Providers.EmployeeTask();
                            emptask.TimespanID = cntx.Timespans.SingleOrDefault(
                                t => t.BeginDate.Year == DateTime.Today.Year & t.EndDate.Year == DateTime.Today.Year).TimespanID;
                            var FormID = (from f in cntx.Forms
                                          where f.FormName.Contains("450")
                                          select f.FormID).FirstOrDefault();
                            var TaskStatusID = (from s in cntx.TaskStatus
                                                where s.StatusDescription.ToLower() == "not started"
                                                select s.TaskStatusID).FirstOrDefault();
                            Assign450((int)emp.EmployeeID, FormID, TaskStatusID, emptask);
                            cntx.EmployeeTasks.InsertOnSubmit(emptask);
                        }
                        else
                        {
                            //Assign185();
                        }
                    }
                    cntx.SubmitChanges();
                }
            }

            private void Assign450(int EmployeeID, int FormID, int TaskStatusID, ProjectBusiness.Providers.EmployeeTask emptask)
            {
                emptask.FormID = FormID;
                emptask.OwnerEmployeeID = EmployeeID;
                emptask.AssignedToEmployeeID = EmployeeID;
                emptask.TaskStatusID = TaskStatusID;
                emptask.DueDate = DateTime.Today;
            }
        }

    Read the article

  • Why is my logic not working correctly for SPOJ TOPOSORT?

    - by Kavish Dwivedi
    The given problem is http://www.spoj.com/problems/TOPOSORT/ The output format is particularly important: print "Sandro fails." if Sandro cannot complete all his duties on the list. If there is a solution, print the correct ordering, the jobs to be done separated by whitespace. If there are multiple solutions, print the one whose first number is smallest; if there are still multiple solutions, print the one whose second number is smallest, and so on. What I am doing is simply a DFS with the edges reversed, i.e. if job A finishes before job B, there is a directed edge from B to A. I maintain the order by sorting the adjacency list I created, and I store the nodes which don't have any constraints separately so as to print them later in the correct order. There are two flag arrays used, one for marking a discovered node and one for marking a node whose neighbours have all been explored. Now my solution is http://www.ideone.com/QCUmKY (the important function is the visit function) and it's giving WA after running correctly for 10 cases, so it's really hard to figure out where I am going wrong, since it runs for all of the test cases which I have done by hand.
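    For comparison, the usual way to meet the "smallest first number, then smallest second number" requirement is Kahn's algorithm driven by a min-heap rather than a plain DFS; a sorted adjacency list alone does not guarantee the lexicographically smallest order. A short Python sketch of that approach (the edge list at the bottom is just an illustrative example, with each pair meaning "a must be finished before b"):

        import heapq

        def toposort(n, edges):
            """Lexicographically smallest topological order of jobs 1..n, or None if a cycle exists."""
            adj = [[] for _ in range(n + 1)]
            indeg = [0] * (n + 1)
            for a, b in edges:              # a must be finished before b
                adj[a].append(b)
                indeg[b] += 1
            heap = [v for v in range(1, n + 1) if indeg[v] == 0]
            heapq.heapify(heap)
            order = []
            while heap:
                v = heapq.heappop(heap)     # always take the smallest currently available job
                order.append(v)
                for w in adj[v]:
                    indeg[w] -= 1
                    if indeg[w] == 0:
                        heapq.heappush(heap, w)
            return order if len(order) == n else None   # None corresponds to "Sandro fails."

        print(toposort(8, [(1, 4), (1, 2), (4, 2), (4, 3), (3, 2), (5, 2), (3, 8), (8, 2), (8, 6)]))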

    Read the article

  • Should adapters or wrappers be unit tested?

    - by m3th0dman
    Suppose that I have a class that implements some logic:

        public class MyLogicImpl implements MyLogic {
            public void myLogicMethod() {
                // my logic here
            }
        }

    and somewhere else a test class:

        public class MyLogicImplTest {
            @Test
            public void testMyLogicMethod() {
                // test my logic
            }
        }

    I also have:

        @WebService
        public class MyWebServices {
            @Inject
            private MyLogic myLogic;

            @WebMethod
            public void myLogicWebMethod() {
                myLogic.myLogicMethod();
            }
        }

    Should there be a unit test for myLogicWebMethod, or should the testing for it be handled in integration testing?

    Read the article
