Search Results

Search found 5046 results on 202 pages for 'satoru logic'.


  • Single complex or multiple simple autoload functions [on hold]

    - by Tyson of the Northwest
    Using spl_autoload_register(), should I use a single autoload function that contains all the logic to determine where the include files are, or should I break each include grouping into its own function with its own logic to include the files for the called class? As the places where include files may reside expand, so too will the logic of a single function. If I break it into multiple functions, I can add functions as new groupings are added, but the functions will be copy/pastes of each other with minor alterations. Currently I have a tool with a single registered autoload function that picks apart the class name, tries to predict where the file is, and then includes it. Due to naming conventions for the project this has been pretty simple:

        if has namespace
            if in template namespace, look in Root\Templates
            else look in Root\Modules\Namespace
        else look in Root\System
        if file exists, include

    But we are starting to include Interfaces and Traits in our codebase, and it hurts me to include the type of a thing in its name. So instead of a single autoload function that digs through the class name and has increasingly complex logic, we are looking at having multiple autoload functions registered. But each one follows the same pattern, and any time I see that I get paranoid about code copying:

        function systemAutoloadFunc
            logic to create probable filename
            if filename exists in system, include it and return true
            else return false

        function moduleAutoloadFunc
            logic to create probable filename
            if filename exists in modules, include it and return true
            else return false

    Every autoload function will follow that pattern, and the last step of each function (if filename exists, include and return true, else return false) is going to be identical code. This makes me paranoid about having to update it later across the board if the file_exists/include pattern we are using ever changes. Or is it just that, paranoia, and multiple functions with some identical code are the best option?
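    One hedged way to avoid the duplicated tail is to register each grouping through a single factory, so only the name-to-path logic varies. The sketch below is illustrative (the directory layout and function names are assumptions, not from the post):

        <?php
        // Minimal sketch: one factory builds and registers a closure per
        // grouping, so the shared file_exists/include step lives in exactly
        // one place. Returning true/false just mirrors the question's
        // pattern; PHP itself stops calling autoloaders once the class exists.
        function registerGroupAutoloader(callable $classToPath)
        {
            spl_autoload_register(function ($class) use ($classToPath) {
                $file = $classToPath($class);      // grouping-specific logic
                if ($file !== null && file_exists($file)) {
                    include $file;                 // shared step, defined once
                    return true;
                }
                return false;
            });
        }

        // Each grouping only supplies its own name-to-path rule.
        registerGroupAutoloader(function ($class) {
            return __DIR__ . '/System/' . str_replace('\\', '/', $class) . '.php';
        });
        registerGroupAutoloader(function ($class) {
            return __DIR__ . '/Modules/' . str_replace('\\', '/', $class) . '.php';
        });

    If the file_exists/include step ever changes, it changes in one function instead of in every registered autoloader.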

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices for building HTTP-based applications available now on the stack - we've come a long way from when WebForms and HTTP handlers/modules were the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST, and now Web API, as well as the core ASP.NET runtime, to choose from to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care.

    What's a Web API?

    HTTP 'APIs' (Microsoft's new terminology for a service, I guess) are becoming increasingly important with the rise of the many devices in use today. Most mobile devices, like phones and tablets, run apps that use data retrieved from the Web over HTTP. Desktop applications are also moving in this direction, with more and more online content and syncing moving into even traditional desktop applications. The pending Windows 8 release promises an app-like platform for both the desktop and other devices that also emphasizes consuming data from the cloud. Likewise, many browser-hosted applications these days rely on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server-generated HTML to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup, typically in the form of JSON or XML. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to serve this particular segment of Web development by providing easy semantics to route and handle incoming requests, and an easy-to-use platform to serve HTTP data in just about any content format you choose to create and serve from the server.

    But .NET already has various HTTP platforms

    The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginning of the .NET platform. From raw HTTP handlers and modules in the core ASP.NET runtime, to high-level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up, from low-level APIs/custom engines to high-level HTML generation engines. ASP.NET as a core platform has clearly stood the test of time 10+ years later, and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs/services using any of the existing out-of-the-box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP-based APIs.
    Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex, like a comprehensive API that serves a variety of requests, handles multiple output formats, and can easily pass data up to the server in a variety of ways. Likewise, you can use ASP.NET MVC to handle routing and create content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible with some plumbing code of your own, but it's not built in). Prior to Web API, Microsoft's main push for HTTP services was WCF REST, which was always an awkward technology that had a severe personality conflict, never being clear on whether it wanted to be part of WCF or a purely separate technology. In the end it did neither WCF compatibility nor WCF-agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally, I didn't like any of the implementations at the time - so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as did a few other third parties whose tools provided much better integration and ease of use. With the release of Web API, for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit, as Web API addresses just about all the features I implemented on my own, and much more.

    ASP.NET Web API provides a better HTTP experience

    ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it's a brand-new platform rather than a bolted-on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like:

    - strong support for URL routing to produce clean URLs, using familiar MVC-style routing semantics
    - content negotiation based on Accept headers for request and response serialization
    - support for a host of output formats, including JSON, XML and ATOM
    - strong default support for REST semantics, though they are optional
    - easily extensible formatter support to add new input/output types
    - deep support for more advanced HTTP features via the HttpResponseMessage and HttpRequestMessage classes and strongly typed enums to describe many HTTP operations
    - a convention-based design that drives you into doing the right thing for HTTP services
    - very extensible, based on an MVC-like extensibility model of formatters and filters
    - self-hostable in non-Web applications
    - testable using testing concepts similar to MVC

    Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available, in a straightforward and flexible manner. Looking at the list above, you can see that a lot of the functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API.
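    To make the feature list above concrete, here is a minimal, hedged sketch of a Web API controller (the Album type and routes are illustrative and not from the article; routing assumes the default "api/{controller}/{id}" template):

        using System.Net;
        using System.Net.Http;
        using System.Web.Http;

        public class Album
        {
            public int Id { get; set; }
            public string Title { get; set; }
        }

        public class AlbumsController : ApiController
        {
            // GET api/albums/5 - the result is serialized as JSON or XML
            // depending on the client's Accept header (content negotiation),
            // with no extra code here.
            public Album GetAlbum(int id)
            {
                return new Album { Id = id, Title = "Sample" };
            }

            // POST api/albums - model binding reads the request body;
            // HttpResponseMessage gives full control over the status code.
            public HttpResponseMessage PostAlbum(Album album)
            {
                return Request.CreateResponse(HttpStatusCode.Created, album);
            }
        }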
    The routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC, but with the focus on HTTP access and manipulation in controller methods rather than on HTML generation as in MVC. There's much-improved support for content negotiation based on HTTP Accept headers, with the framework capable of automatically detecting what content the client is sending and requesting, and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service back ends are often used by multiple clients/applications, and being able to choose the data format that best fits the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single, robust server-side HTTP framework that intrinsically understands HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low-level hook points that allow you to plug in custom functionality with relatively little effort.

    No-brainers for Web API

    There are a few scenarios that are a slam dunk for Web API. If the primary focus of an application, or even a part of an application, is some sort of API, then Web API makes great sense.

    HTTP services: If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service, breaking out the logic into controllers as needed. Because the primary interface is the service, there's no confusion over what should go where (MVC or API). Perfect fit.

    Primary AJAX back ends: If you're building a rich client Web application that relies heavily on AJAX callbacks to serve its data, Web API is also a slam dunk. Again, because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go, and there's no duplication. In Single Page Applications (SPAs), typically very little HTML-based logic is served other than bringing up a shell UI; the data is then filled in from the server with AJAX, which means the business logic required for data retrieval, data acceptance and validation also lives in the Web API. Perfect fit.

    Generic HTTP endpoints: Another good fit is generic HTTP endpoints that serve data or handle 'utility'-type functionality in typical Web applications. If you needed to implement an image server or an upload handler, in the past I'd have implemented it as an HTTP handler. With Web API you now have a well-defined place where you can implement these types of generic 'services', in a location that can easily grow endpoints (via controller methods) or be separated out as more full-featured APIs. Granted, this could be done with MVC as well, but Web API seems a clearer and better-defined place to keep generic application services. This is one thing I used to do a lot of in my own libraries, and Web API addresses it nicely. Great fit.

    Mixed HTML and AJAX applications: not a clear choice

    For all the commonality that Web API and MVC share, they are fundamentally different platforms that are independent of each other. A lot of people have asked when it makes sense to use MVC vs.
    Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML-related generation into MVC, that can often result in a lot of code duplication. Also, MVC supports JSON and XML result data fairly easily as well, so there's some confusion over where the 'trigger point' is - when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb, I think, is that if a large chunk of the application's functionality serves data, Web API is a good choice; but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box, it'd be overkill to separate that logic out into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET), so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily, and that functionality is not going away, so just because Web API is there doesn't mean you have to use it. Web API is obviously not a full replacement for MVC either, since there's not the same level of support for feeding HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if your HTML is part of your API or application in general, MVC is still a better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC, so that you might even be able to mix functionality of both into single controllers and not have to make any trade-offs, but at the moment that's not the case.

    Some issues to think about

    Web API is similar to MVC but not the same: Although Web API looks a lot like MVC, it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different from MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data.

    Code duplication: I already touched on this in the mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API, it's quite likely that you will end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API, which might need something different altogether. More often than not, though, the same logic is used, and there's no easy way to share it. If you implement an MVC ActionFilter and you want that same functionality in your Web API, you'll end up creating the filter twice.

    AJAX data or AJAX HTML: On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says:

    I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-)

    You know what - I can totally relate to that.
    On the last Web-based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this approach, the overhead actually ended up being fairly small if you keep the 'data' requests small and atomic. Performance was often made up for by the lack of client-side rendering of HTML. Server-rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials, it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about, especially for AJAX or SPA-style applications.

    Summary

    Web API is a great new addition to the ASP.NET platform, and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available doesn't mean that other tools or tech that came before it should be discarded, or everything upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!

    Resources: ASP.NET Web API; AspConf Ask the Experts Session (first 5 minutes)

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

    Read the article

  • Java: where should I put anonymous listener logic code?

    - by tulskiy
    Hi, we had a debate at work about the best practice for using listeners in Java: should the listener logic stay in the anonymous class, or should it be in a separate method? For example:

        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                // code here
            }
        });

    or

        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                buttonPressed();
            }
        });

        private void buttonPressed() {
            // code here
        }

    Which is the recommended way in terms of readability and maintainability? I prefer to keep the code inside the listener, and only if it gets too large, make it an inner class. Here I assume that the code is not duplicated anywhere else. Thank you.

    Read the article

  • C/C++ for Core Logic Development of a Web Application?

    - by Ramiz Uddin
    Can C/C++ be a choice for keeping all your logic (business/domain) in a web application? Why? I have two resources (cousins) with knowledge of C/C++, and I am also good in C/C++, Python, HTML, CSS and JavaScript. We would like to use our free time to work on some good ideas we developed together. The ideas require knowledge of web application development, and I am the only one who has it. Is there a way for them to develop the core in C/C++ while I do the rest of the scripting for front-end development? Thanks.

    Read the article

  • What is the problem with the logic in my UPDATE statement?

    - by Stefan Åstrand
    Hello, I would appreciate some help with an UPDATE statement. I want to update tblOrderHead with the content from tblCustomer where intDocumentNo corresponds to the parameter @intDocumentNo. But when I run my statement, the order table is only updated with the content from the first row of the customer table. What is the problem with my logic? I use Microsoft SQL Server. Thanks, Stefan

        UPDATE dbo.tblOrderHead
        SET    dbo.tblOrderHead.intCustomerNo   = @intCustomerNo,
               dbo.tblOrderHead.intPaymentCode  = dbo.tblCustomer.intPaymentCode,
               dbo.tblOrderHead.txtDeliveryCode = dbo.tblCustomer.txtDeliveryCode,
               dbo.tblOrderHead.txtRegionCode   = dbo.tblCustomer.txtRegionCode,
               dbo.tblOrderHead.txtCurrencyCode = dbo.tblCustomer.txtCurrencyCode,
               dbo.tblOrderHead.txtLanguageCode = dbo.tblCustomer.txtLanguageCode
        FROM   dbo.tblOrderHead
               INNER JOIN dbo.tblCustomer
                       ON dbo.tblOrderHead.intOrderNo = @intDocumentNo
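    The join above never relates tblCustomer to tblOrderHead - the ON clause compares the order table against a parameter - so every customer row qualifies and SQL Server applies an arbitrary (first) one. A hedged sketch of the likely intent (the customer key column name is an assumption):

        UPDATE o
        SET    o.intCustomerNo   = @intCustomerNo,
               o.intPaymentCode  = c.intPaymentCode,
               o.txtDeliveryCode = c.txtDeliveryCode,
               o.txtRegionCode   = c.txtRegionCode,
               o.txtCurrencyCode = c.txtCurrencyCode,
               o.txtLanguageCode = c.txtLanguageCode
        FROM   dbo.tblOrderHead o
               INNER JOIN dbo.tblCustomer c
                       ON c.intCustomerNo = @intCustomerNo  -- relate the rows
        WHERE  o.intOrderNo = @intDocumentNo                -- limit to the order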

    Read the article

  • What's the reason both Image and Bitmap classes don't implement a custom equality/hashcode logic?

    - by devoured elysium
    From the MSDN documentation, it seems that neither GetHashCode() nor Equals() has been overridden in Bitmap. Neither have they been overridden in Image. So both classes are using Object's versions, which just compare references. I wasn't too convinced, so I decided to fire up Reflector to check it out. It seems MSDN is correct on that matter. So, is there any special reason why the MS guys wouldn't implement "comparison logic", at least for the Bitmap class? I find it kind of acceptable for Image, as it is an abstract class, but not so much for Bitmap. I can see that in a lot of situations calculating the hash code can be an expensive operation, but it'd be alright if it used some kind of lazy evaluation (storing the computed hash code integer in a variable, so it wouldn't have to be calculated again later). When wanting to compare 2 bitmaps, will I have to resort to running over the whole picture, comparing each one of its pixels? Thanks
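    For what it's worth, a hedged sketch of the usual approach: compare the raw pixel buffers via LockBits rather than per-pixel GetPixel calls. It assumes both bitmaps share size and pixel format, and a top-down layout (positive Stride):

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Linq;
        using System.Runtime.InteropServices;

        static class BitmapCompare
        {
            public static bool PixelsEqual(Bitmap a, Bitmap b)
            {
                if (a.Size != b.Size || a.PixelFormat != b.PixelFormat)
                    return false;

                var rect = new Rectangle(Point.Empty, a.Size);
                BitmapData da = a.LockBits(rect, ImageLockMode.ReadOnly, a.PixelFormat);
                BitmapData db = b.LockBits(rect, ImageLockMode.ReadOnly, b.PixelFormat);
                try
                {
                    // Same size + format means the strides match, so one
                    // byte-for-byte comparison covers the whole image.
                    int bytes = da.Stride * da.Height;
                    var bufA = new byte[bytes];
                    var bufB = new byte[bytes];
                    Marshal.Copy(da.Scan0, bufA, 0, bytes);
                    Marshal.Copy(db.Scan0, bufB, 0, bytes);
                    return bufA.SequenceEqual(bufB);
                }
                finally
                {
                    a.UnlockBits(da);
                    b.UnlockBits(db);
                }
            }
        }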

    Read the article

  • How do I transfer this logic into a LINQ statement?

    - by Geoffrey
    I can't get this bit of logic converted into a LINQ statement and it is driving me nuts. I have a list of items that have a category and a created-on date field. I want to group by the category and only return items that have the max date for their category. So, for example, the list contains items with categories 1 and 2. On the first day (1/1) I post two items, to both categories 1 and 2. On the second day (1/2) I post three items to category 1. The list should return the second day's postings to category 1 and the first day's postings to category 2. Right now I group by the category, then run through a foreach loop comparing each item in the group with the max date of the group; if the date is less than the max date, I remove the item. There's got to be a way to take the loop out, but I haven't figured it out!
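    A hedged sketch of the loop-free version (the type and property names are assumptions based on the description): group by category, compute each group's max date once, and keep only items posted on that date.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Posting
        {
            public int Category { get; set; }
            public DateTime CreatedOn { get; set; }
        }

        static class Demo
        {
            static IEnumerable<Posting> LatestPerCategory(IEnumerable<Posting> items)
            {
                return items
                    .GroupBy(i => i.Category)
                    .SelectMany(g =>
                    {
                        // Most recent posting date within this category.
                        var maxDate = g.Max(i => i.CreatedOn.Date);
                        return g.Where(i => i.CreatedOn.Date == maxDate);
                    });
            }
        }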

    Read the article

  • Ruby on Rails - where to write business logic while processing a request? (newbie)

    - by Genadinik
    I am learning Ruby on Rails. I made a simple link like this:

        <%= link_to "Alex Link", alexes_path(@alex) %>

    Then I routed it in routes.rb like this:

        resources :alexes
        get "home/index"

    Then I am a bit unclear, but I think it goes to this part of the controller:

        def index
          #@alexes = Alex.all
          respond_to do |format|
            format.html # index.html.erb
            format.json { render json: @alexes }
          end
        end

    Am I correct that it goes to this part of the controller? Then nothing much happens, and it goes to the next page, which is index.html.erb under views\alexes. So what I am wondering is: if I needed to do some business logic, would I write it in the controller snippet? Where inside the snippet? An example would be nice to look at. Also, I would like to connect to a MongoDB database. Would I also write that in the middle of the controller? Thanks!
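    One hedged way to lay this out (the method names below are illustrative, not from the post): fetch or compute before the respond_to block, and push real business rules down into the model. A database connection such as MongoDB is normally configured once in a config file or initializer, not inside the controller action.

        class AlexesController < ApplicationController
          def index
            # Orchestration goes here, before respond_to: fetch or compute
            # whatever the view needs.
            @alexes = Alex.recently_updated

            respond_to do |format|
              format.html # renders app/views/alexes/index.html.erb
              format.json { render json: @alexes }
            end
          end
        end

        class Alex < ActiveRecord::Base
          # Keep reusable query/business rules on the model, so the
          # controller action stays thin.
          def self.recently_updated
            order("updated_at DESC").limit(10)
          end
        end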

    Read the article

  • Nginx logic (if cookie set, redirect here...) Is it possible?

    - by Matthew Steiner
    So, I have a pretty basic need, but I can't figure out if it's even possible, much less how to do it. I have a main page that anyone can see. Most of the rest of the application can be seen only if logged in (hence, a cookie is set). So I was thinking, as long as they don't have the cookie set, they can just see a version cached by nginx. I can get it caching with this:

        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout invalid_header updating
                              http_500 http_502 http_503 http_504;

    And it helps a ton (instead of 15 requests per second it gets over 1000). Now I just need some sort of "server logic" to say: only serve the cached page if they have no cookie; otherwise, load the dynamic page (which will automatically redirect them into the app). Any ideas?
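    A hedged sketch of exactly this in nginx (the cookie name "logged_in" and upstream name are assumptions): $cookie_NAME is empty when the cookie is absent, so both directives below evaluate to "off" for anonymous visitors, who get the cached page, while logged-in users bypass the cache.

        location / {
            proxy_cache        STATIC;
            proxy_cache_valid  200 1d;

            # Non-empty cookie value => skip the cache entirely.
            proxy_cache_bypass $cookie_logged_in;  # don't answer from cache
            proxy_no_cache     $cookie_logged_in;  # don't store the response

            proxy_pass http://app_backend;
        }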

    Read the article

  • Is problem solving of puzzles/logic tests a skill that can be developed with practise or only someth

    - by dotnetdev
    Programming is essentially problem solving and using a lot of logic. As for solving puzzles (like the ones recruiters such as MS ask), is this a skill that can be developed with practice, or is it a skill that only someone who is gifted has (I assume the former, as many people can pass these tests)? Even so, I keep thinking it is a special skill for someone gifted, not for someone with a lot of practice. I guess that with practice you are perhaps more open-minded and start to think out of the box more (solving technical problems in development may also foster this mindset). Thanks

    Read the article

  • Rails: getting logic to run at end of request, regardless of filter chain aborts?

    - by JSW
    Is there a reliable mechanism discussed in the Rails documentation for calling a function at the end of the request, regardless of filter chain aborts? It's not after filters, because after filters don't get called if any prior filter redirected or rendered. For context, I'm trying to put some structured profiling/reporting information into the app log at the end of every request. This information is collected throughout the request lifetime via instance variables wrapped in custom controller accessors, and dumped at the end in a JSON blob for use by a post-processing script. My end goal is to generate reports about my application's logical query distribution (things that depend on controller logic, not just request URIs and parameters), performance profile (time spent in specific DB queries or blocked on web services), failure rates (including invalid incoming requests that get rejected by before_filter validation rules), and a slew of other things that cannot really be parsed from the basic information in the application and Apache logs. At a higher level, is there a different "Rails way" that solves my app profiling goal?
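    Two hedged options fit this: a Rack middleware that wraps the whole request, or an around filter whose ensure block always runs. A minimal around-filter sketch (names are illustrative; registering it first, e.g. with prepend_around_filter, keeps halting filters inside the yield):

        class ApplicationController < ActionController::Base
          prepend_around_filter :dump_profile_log

          private

          def dump_profile_log
            @profile = { :started_at => Time.now }
            yield  # the rest of the filter chain and the action run here
          ensure
            # Runs on normal completion, on redirects/renders that halt
            # the chain, and even when an exception is raised.
            @profile[:duration] = Time.now - @profile[:started_at]
            Rails.logger.info(@profile.inspect)
          end
        end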

    Read the article

  • How should I do this (business logic) in Sql Server? A constraint?

    - by Pure.Krome
    Hi folks, I wish to add some type of business logic constraint to a table, but I am not sure how or where. I have a table with the following fields:

        ID INTEGER IDENTITY
        HubId INTEGER
        CategoryId INTEGER
        IsFeatured BIT
        Foo NVARCHAR(200)
        etc.

    So what I wish is that you can only have one featured thingy per HubId + CategoryId. E.g.:

        1, 1, 1, 1, 'blah'      -- OK
        2, 1, 2, 1, 'more blah' -- also OK
        3, 1, 1, 1, 'aaa'       -- constraint error
        4, 1, 1, 0, 'asdasdad'  -- OK
        5, 1, 1, 0, 'bbbb'      -- OK

    So the third row to be inserted would fail because that hub AND category already have a featured thingy. Is this possible?
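    On SQL Server 2008 and later, a filtered unique index expresses exactly this rule; a hedged sketch (the table name dbo.Things is an assumption):

        -- Any number of non-featured rows per hub/category, but at most
        -- one row with IsFeatured = 1 for each (HubId, CategoryId) pair.
        CREATE UNIQUE NONCLUSTERED INDEX UX_Things_OneFeaturedPerHubCategory
            ON dbo.Things (HubId, CategoryId)
            WHERE IsFeatured = 1;

    On older versions, an indexed view or a trigger is the usual fallback.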

    Read the article

  • If cookie found, get data, else create cookie, is this good logic?

    - by Ryan
    I have an Action that basically adds an item to a cart. The only way the cart is known is by checking the cookie. Here is the flow of logic; please let me know if you see any issue:

        /order/add/[id] is called via GET
        the action checks for a cookie
        if no cookie is found, it makes a new cart, writes the identifier to
            the cookie, and adds the item to the database with a relation to
            the cart created
        if a cookie is found, it gets the cart identifier from the cookie,
            gets the cart object, and adds the item to the database with a
            relation to the cart found

    So it's basically like:

        action add(int id) {
            if (cookie is there) {
                cart = getcart(cookievalue)
            } else {
                cart = makecart()
                createcookie(cart.id)
            }
            additemtocart(cart.id, id)
            return "success";
        }

    Seem right? I can't really think of another way that would make sense.

    Read the article

  • Where ORMs blur the lines between code and data, how do you decide what logic should be a stored procedure, and what should be coded?

    - by PhonicUK
    Take the following pseudocode:

        CreateInvoiceAndCalculate(ItemsAndQuantities, DispatchAddress, User);

    And say CreateInvoice does the following:

        - Create a new entry in an Invoices table belonging to the specified
          User, to be sent to the given DispatchAddress.
        - Create a new entry in an InvoiceItems table for each of the items
          in ItemsAndQuantities, storing the Item, the Quantity, and the cost
          of the item as of now (by looking it up from an Items table).
        - Calculate the total amount of the invoice (ex shipping and taxes)
          and store it in the new Invoice row.

    At a glance you wouldn't be able to tell if this was a method in my application's code, or a stored procedure in the database that is being exposed as a function by the ORM. And to some extent it doesn't really matter. Now, technically none of this is business logic. You're not making any decisions - just performing a calculation and creating records. However, some may argue that because you are performing a calculation that affects the business (the total amount to be invoiced), this isn't something that should be done in a stored procedure and instead should be in code. So for this specific example - why would it be more appropriate to do one or the other? And where do you draw the line? Or does it even particularly matter, as long as it's sufficiently well documented?

    Read the article

  • How is the PHP extensions/modules file structure logic based?

    - by dotpointer
    I'm trying to configure/build PHP 5.3.10 on Linux/Slackware 12, but the extensions end up in the wrong directory when I run make install. In the php.ini file the extension dir is defined as:

        /usr/lib/php/extensions

    The problem is that when I run "make install", the newly built extensions are copied to a subfolder of the extensions directory:

        /usr/lib/php/extensions/no-debug-non-zts-20090626

    What am I supposed to do with this? Copy the files up from the no-debug-non-zts-20090626 directory into the extensions directory, create symlinks from extensions to the modules in the no-debug-non-zts-20090626 directory (which will take a lot of time), or what? (I know I can do any of them, but I want to know the correct way...)
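    For context: the subfolder name is PHP's API version stamp (debug flag, thread-safety flag, Zend module API date), and make install always targets it. The usual fix is to point php.ini at the versioned directory rather than moving files around; a hedged sketch:

        ; php.ini - match extension_dir to where make install actually
        ; puts the modules (the stamp below is from this particular build)
        extension_dir = "/usr/lib/php/extensions/no-debug-non-zts-20090626"

    Alternatively, the install target can reportedly be pinned at build time by setting the EXTENSION_DIR environment variable before running ./configure, though treat that as an assumption to verify against your PHP version's build docs.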

    Read the article

  • Incorrect logic flow? function that gets coordinates for a sudoku game

    - by igor
    This function of mine keeps failing an autograder, and I am trying to figure out if there is a problem with its logic flow. Any thoughts? Basically, if the row is wrong, "Invalid row" should be printed, clearInput() called, and false returned. When y is wrong, "Invalid column" should be printed, clearInput() called, and false returned. When both are wrong, only "Invalid row" is to be printed (and still clearInput() called and false returned). Obviously, when row and y are correct, print no error and return true. My function gets through most of the test cases, but fails towards the end; I'm a little lost as to why.

        bool getCoords(int & x, int & y)
        {
            char row;
            bool noError = true;
            cin >> row >> y;
            row = toupper(row);
            if (row >= 'A' && row <= 'I' && isalpha(row) && y >= 1 && y <= 9)
            {
                x = row - 'A';
                y = y - 1;
                return true;
            }
            else if (!(row >= 'A' && row <= 'I'))
            {
                cout << "Invalid row" << endl;
                noError = false;
                clearInput();
                return false;
            }
            else
            {
                if (noError)
                {
                    cout << "Invalid column" << endl;
                }
                clearInput();
                return false;
            }
        }

    Read the article

  • What category of combinatorial problems appear on the logic games section of the LSAT?

    - by Merjit
    There's a category of logic problem on the LSAT that goes like this:

        Seven consecutive time slots for a broadcast, numbered in
        chronological order 1 through 7, will be filled by six song tapes -
        G, H, L, O, P, S - and exactly one news tape. Each tape is to be
        assigned to a different time slot, and no tape is longer than any
        other tape. The broadcast is subject to the following restrictions:
        L must be played immediately before O. The news tape must be played
        at some time after L. There must be exactly two time slots between
        G and P, regardless of whether G comes before P or whether G comes
        after P.

    I'm interested in generating a list of permutations that satisfy the conditions, as a way of studying for the test and as a programming challenge. However, I'm not sure what class of permutation problem this is. I've generalized the type of problem as follows. Given an n-length array A:

        - How many ways can a set of n unique items be arranged within A?
          E.g., how many ways are there to rearrange ABCDEFG?
        - If the length of the set of unique items is less than the length
          of A, how many ways can the set be arranged within A if items in
          the set may occur more than once? E.g., ABCDEF -> AABCDEF,
          ABBCDEF, etc.
        - How many ways can a set of unique items be arranged within A if
          the items of the set are subject to "blocking conditions"?

    My thought is to encode the restrictions and then use something like Python's itertools to generate the permutations. Thoughts and suggestions are welcome.
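    A hedged sketch of the itertools route for the example game: generate all orderings of the seven tapes and keep those satisfying the three restrictions (brute force is fine at 7! = 5040 candidates).

        from itertools import permutations

        TAPES = ["G", "H", "L", "O", "P", "S", "News"]

        def valid(order):
            pos = {tape: i for i, tape in enumerate(order)}
            return (
                pos["O"] == pos["L"] + 1           # L immediately before O
                and pos["News"] > pos["L"]         # news some time after L
                and abs(pos["G"] - pos["P"]) == 3  # exactly two slots between G and P
            )

        solutions = [order for order in permutations(TAPES) if valid(order)]
        print(len(solutions))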

    Read the article

  • Is unnecessary error handling recommended in business logic? eg. Null check/Percentage limit check etc

    - by novice_at_work
    We usually put unnecessary checks in our business logic to avoid failures. E.g.:

    1.

        public ObjectABC funcABC() {
            ObjectABC obj = new ObjectABC();
            ..........
            .......... // it's never set to null here.
            ..........
            return obj;
        }

        ObjectABC o = funcABC();
        if (o != null) {
            // do something
        }

    Why do we need this null check if we are sure that it will never be null? Is it a good practice or not?

    2.

        int pplReached = funA(..,..,..);
        int totalPpl = funB(..,..,..);

    funA() just puts a few more restrictions on the result of funB().

        Double percentage = (totalPpl==0 || totalPpl<pplReached)
                          ? 0.0 : pplReached/totalPpl;

    Do we again need this check? The question is: aren't we swallowing some fundamental issue by putting in such checks? Issues which ideally should be surfaced are hidden by these checks. What is the recommended way?

    Read the article

  • Where should my "filtering" logic reside with Linq-2-SQL and ASP.NET-MVC in View or Controller?

    - by Nate Bross
    I have a main table, with several "child" tables: TableA, TableAChild1 and TableAChild2. I have a view which shows the information in TableA and then has two columns of all items in TableAChild1 and TableAChild2 respectively; they are rendered with partial views. Both child tables have a bit field for VisibleToAll, and depending on user role, I'd like to either display all related rows, or only related rows where VisibleToAll = true. This code feels like it should be in the controller, but I'm not sure how it would look, because as it stands the controller (limited version) looks like this:

        return View("TableADetailView", repos.GetTableA(id));

    Would something like this even work, and would it be bad? What if my DataContext gets submitted - would that delete all the rows that have VisibleToAll == false?

        var tblA = repos.GetTableA(id);
        tblA.TableAChild1 = tblA.TableAChild1.Where(tmp => tmp.VisibleToAll == true);
        tblA.TableAChild2 = tblA.TableAChild2.Where(tmp => tmp.VisibleToAll == true);
        return View("TableADetailView", tblA);

    It would also be simple to add that logic to the RenderPartial call from the main view:

        <% Html.RenderPartial("TableAChild1",
               Model.TableAChild1.Where(tmp => tmp.VisibleToAll == true)); %>
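    A hedged alternative that keeps the filtering in the controller/repository layer without mutating the entity graph is LINQ to SQL's DataLoadOptions (the DataContext type and pluralized table property below are assumptions taken from the question's names):

        using System.Data.Linq;
        using System.Linq;

        var ctx = new MyDataContext();   // your DataContext type
        var options = new DataLoadOptions();
        // Filter the child collections at load time, per user role.
        options.AssociateWith<TableA>(a => a.TableAChild1.Where(c => c.VisibleToAll));
        options.AssociateWith<TableA>(a => a.TableAChild2.Where(c => c.VisibleToAll));
        ctx.LoadOptions = options;       // must be set before any query runs

        var tblA = ctx.TableAs.Single(a => a.Id == id);
        return View("TableADetailView", tblA);

    Because the collections are filtered by the query itself rather than reassigned, nothing is marked for deletion if the DataContext is later submitted.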

    Read the article

  • Stuck on the logic of creating tags for posts (like SO tags)? (PHP)

    - by ggfan
    I am stuck on how to create tags for each post on my site (like SO tags), and I am not sure how to add the tags to the database. Currently I have 3 tables:

        +-----------+    +--------------+    +--------------+
        | Tags      |    | Posting      |    | PostingTags  |
        +-----------+    +--------------+    +--------------+
        | TagID     |    | posting_id   |    | posting_id   |
        | TagName   |    | title        |    | tagid        |
        +-----------+    +--------------+    +--------------+

    The Tags table is just the names of the tags (e.g. 1 PHP, 2 MySQL, 3 HTML). The Posting table holds the posts (e.g. 1 What is PHP?, 2 What is CSS?, 3 What is HTML?). The PostingTags table holds the relation between postings and tags. When a user submits a posting, I insert the data into the posting table; it automatically generates the posting_id for each post (posting_id is a primary key):

        $title = mysqli_real_escape_string($dbc, trim($_POST['title']));
        $query4 = "INSERT INTO posting (title) VALUES ('$title')";
        mysqli_query($dbc, $query4);

    HOWEVER, how do I insert the tags for each post? When users fill out the form, there is a checkbox area for all the available tags, and they check off whatever tags they want. (I am not handling user-typed tags just yet.) This shows each tag with a checkbox; when users check off a tag, it gets stored in an array called "postingtag[]":

        <label class="styled">Select Tags:</label>
        <?php
        $dbc = mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME);
        $query5 = "SELECT * FROM tags ORDER BY tagname";
        $data5 = mysqli_query($dbc, $query5);
        while ($row5 = mysqli_fetch_array($data5)) {
            echo '<li><input type="checkbox" name="postingtag[]" value="'
                 . $row5['tagname'] . '">' . $row5['tagname'] . '</li>';
        }
        ?>

    My question is: how do I insert the tags in the "postingtag" array into my postingtags table? Should I do this?

        $postingtag = $_POST["postingtag"];
        foreach ($postingtag as $value) {
            $query5 = "INSERT INTO postingtags (posting_id, tagID) VALUES (____, $value)";
            mysqli_query($dbc, $query5);
        }

    In this query, how do I get the posting_id value of the post? I am stuck on the logic here, so if someone can explain the next step, I would appreciate it! Is there an easier way to insert tags?
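    A hedged sketch of the missing step: mysqli_insert_id() returns the AUTO_INCREMENT id generated by the most recent INSERT on the connection, which is the posting_id needed for the postingtags rows (one assumption here: the checkboxes carry TagID as their value rather than the tag name, to match the postingtags schema):

        <?php
        $title = mysqli_real_escape_string($dbc, trim($_POST['title']));
        mysqli_query($dbc, "INSERT INTO posting (title) VALUES ('$title')");
        $posting_id = mysqli_insert_id($dbc);  // id of the row just inserted

        foreach ($_POST['postingtag'] as $tagid) {
            $tagid = (int) $tagid;             // checkbox value should be TagID
            mysqli_query($dbc,
                "INSERT INTO postingtags (posting_id, tagid) VALUES ($posting_id, $tagid)");
        }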

    Read the article

  • Autodetect/mount SDCards and run script for them on Linux

    - by Brendan
    Hey everyone, I'm currently running SME Server, and need to have a script run when SD cards are attached to my server. The script itself works fine (it copies the contents of the cards), but the automounting and execution of the script is where I'm having issues. I have a USB hub consisting of 10 USB ports; it shows up as:

        [root@server ~]# lsusb
        Bus 004 Device 002: ID 0000:0000
        Bus 004 Device 001: ID 0000:0000
        Bus 003 Device 001: ID 0000:0000
        Bus 002 Device 001: ID 0000:0000
        Bus 001 Device 055: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 051: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 050: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 001: ID 0000:0000

    (The hub is the TERMINUS TECHNOLOGY INC entries.) As I cannot plug SD cards directly into the server, I use 10 USB-to-SD-card adapters plugged into the hub to read the cards. Upon plugging the 10 adapters (without cards) into the hub, lsusb yields the following:

        [root@server ~]# lsusb
        Bus 004 Device 002: ID 0000:0000
        Bus 004 Device 001: ID 0000:0000
        Bus 003 Device 001: ID 0000:0000
        Bus 002 Device 001: ID 0000:0000
        Bus 001 Device 073: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 072: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 071: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 070: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 069: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 068: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 067: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 066: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 065: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 064: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 055: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 051: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 050: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 001: ID 0000:0000

    As you can see, the readers are the "Genesys Logic, Inc" entries. Plugging an SD card into a reader doesn't affect lsusb (it reads exactly as above); however, my system recognises the cards fine, as indicated by dmesg:

        Attached scsi generic sg11 at scsi54, channel 0, id 0, lun 0, type 0
        USB Mass Storage device found at 73
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        sdd: sdd1

    (This block repeats twice more in the log.) If I manually mount sdd1 (mount /dev/sdd1 /somedirectory/) this works fine. What I'm really after is a solution that automounts each of the cards as it is inserted into a reader, and executes a script for it (this will involve copying its contents to another directory). My problem is that I don't know how to do this; I don't think udev will work as the USB devices don't change. If I could somehow get udev working with /dev/disk/by-path/, however, I think this is doable (it seems to keep constant entries). ls /dev/disk/by-path returns:

        pci-0000:00:1d.7-usb-0:4.1.1.1:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.1.2:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.1.3:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.1.4:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.2:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.3:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.4:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.2:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.3:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0-part1
        pci-0000:0b:01.0-scsi-0:0:1:0
        pci-0000:0b:01.0-scsi-0:0:1:0-part1
        pci-0000:0b:01.0-scsi-0:0:1:0-part2

    From the above, you can see I have only one card plugged into a reader (pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0-part1). Running

        mount /dev/disk/by-path/pci-0000\:00\:1d.7-usb-0\:4.4\:1.0-scsi-0\:0\:0\:0-part1

    works and places the card under /media/usbdisk/; however:

        mount /dev/disk/by-path/pci-0000\:00\:1d.7-usb-0\:4.4\:1.0-scsi-0\:0\:0\:0-part1 slot1/

    doesn't work, and returns "mount: can't get address for /dev/disk/by-path/pci-0000". Any ideas and solutions would be great. I've seen the knowledge of a lot of the guys on here before, so I'm hopeful someone can help me out. Thanks
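    For what it's worth, udev can react to the partition appearing even though the USB IDs never change, because the kernel still emits block-device add events. A hedged sketch (the rule file name, script path, and destination directory are assumptions, and older udev versions expect RUN tasks to finish quickly, so long copies are better detached):

        # /etc/udev/rules.d/99-sdcard-copy.rules
        # Fire when any partition appears on the readers; %k is the kernel
        # name (e.g. sdd1). Narrow the match (e.g. by ID_PATH) if needed.
        ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd?[0-9]", \
            RUN+="/usr/local/bin/copy-card.sh %k"

        #!/bin/sh
        # /usr/local/bin/copy-card.sh - mount the partition udev reported,
        # copy its contents, then unmount.
        DEV="/dev/$1"
        MNT=$(mktemp -d /mnt/card.XXXXXX)
        DEST="/data/incoming/$1"
        mkdir -p "$DEST"
        mount -o ro "$DEV" "$MNT" && cp -a "$MNT"/. "$DEST"/
        umount "$MNT"
        rmdir "$MNT"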

    Read the article

  • Sample Browser Visual Studio Extension is localized and introduced to Japan

    - by Jialiang
    http://blogs.msdn.com/b/codefx/archive/2012/10/14/sample-browser-visual-studio-extension-is-localized-and-introduced-to-japan.aspx

    From a Japan MVP: "The Sample Browser is very easy to use thanks to the refined interface. The categorized menu enables faster search. Highly acclaimed. But it needs localization. It may not be a problem for those who can understand English, but I think localizing Sample Browser into Japanese will promote its use in Japan further."

    This is a prominent piece of feedback collected from the Japan MVP community since we released the last version of Sample Browser, which was only available in English. Japan developers like the Sample Browser, but they want localized code samples, a localized Sample Browser UI, and a localized search experience. The Japan MVP lead, Satoru Kitabata, observed these needs and expectations. He started to engage with all local developer MVPs to translate the UI elements in the Sample Browser. Lots of MVPs signed up to participate in this work. They had roundtables and newsletters to track the progress. In just three weeks, every control, every tooltip, every font on every label was beautifully tuned for Japanese. The sample search experience was also optimized for Japan developers - they can directly type a Japanese query to search for code samples. Together with Microsoft Japan MVPs, the sample use experience was localized and improved to a new level!

    The Japan MVP lead, Satoru Kitabata, further worked with the MSDN Japan site manager and Japan DPE to introduce the good news of the localized Sample Browser to Japan:

        Sample Browser: http://msdn.microsoft.com/ja-jp/jj730399
        Sample Browser (Japanese page): http://msdn.microsoft.com/ja-jp/jj730398

    Thanks to the joint effort and Japan MVPs' feedback and contributions, the Sample Browser gets the chance to benefit the broader Japan developer audience.

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami

    A typical data warehouse contains Star and/or Snowflake schemas, made up of Dimensions and Facts. The facts store various numerical information, including amounts - for example, Order Amount, Invoice Amount, etc. With the truly global nature of business nowadays, end users want to view reports in their own currency, or in a global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates, either by pre-storing them or by doing on-the-fly conversions while displaying the reports to the users.

    Source Systems

    OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion, etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions, and presenting them to the OLTP users. For example, EBS stores conversion rates between currencies which can be classified by rate type, like Corporate rate, Spot rate, Period rate, etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion store the conversion rates for each day, whereas PSFT/Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor and we need to calculate the rate based on them, whereas other source systems store the currency exchange rate directly.

    OBIA Design

    The data consolidation from various disparate source systems poses the challenge of conforming various currencies, rate types, exchange rates, etc., and designing the best way to present the amounts to the users without affecting performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension to allow users to report based on their required currencies. OBIA facts store amounts in various currencies:

    - Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies.
    - Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company.
    - Global Currencies: OBIA provides five Global Currencies. Three are used across all modules; the last two are for CRM only. A Global Currency is very useful when creating reports where the data is viewed enterprise-wide. For example, a US-based multinational would want to see its reports in USD, so the company would choose USD as one of the global currencies.

    OBIA allows users to define up to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values - Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3 - which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for the currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt, so it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Changing Currency Preferences in the Dashboard.

    Design Logic

    When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount into the target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts.
    If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the local currency code and the local exchange rate. The load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension, provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and rate type. Two physical tables, W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G, are used to provide the lookups and conversions between currencies. The data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide to use one or both tables to do the conversion. The currency design in OBIA also taps into the MLS and Domain architecture, thus allowing users to map the currencies to a universal Domain during implementation. This is especially important for companies deploying and using OBIA with multiple source adapters.

    Some Gotchas to Look For

    It is necessary to think through the currencies during the initial implementation.

    1) Identify the various types of currencies that are used by your business. Understand what your Local (or Base) and Document currencies will be. Identify the various global currencies in which your users will want to view reports; this will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time.

    2) If the user has a multi-source system, make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done.

    Technical Section

    This section briefly describes the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the currency rate information, as explained in previous sections: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G. W_EXCH_RATE_G stores all the currency conversions present in the source system and captures data for a date range. W_GLOBAL_EXCH_RATE_G has Global Currency conversions stored at a daily level; the challenge here is to store all 5 Global Currency exchange rates in a single record for each From Currency. Let's voyage further into the source system extraction logic for each of these tables and understand the flow briefly.

    EBS

    In EBS, we have currency data stored in the GL_DAILY_RATES table. As the name indicates, the GL_DAILY_RATES EBS table has data at a daily level. However, in our warehouse we store the data with a date range and insert a new range record only when the exchange rate changes for a particular From Currency, To Currency and Rate Type. The main logical steps that we employ in this process (incremental flow only) are:

        0. Clean up the data in W_EXCH_RATE_G: delete the records which have
           a Start Date greater than the minimum conversion date, and update
           the End Date of the existing records.
        1. Compress the daily data from the GL_DAILY_RATES table into range
           records. The incremental map uses $$XRATE_UPD_NUM_DAY as an extra
           parameter.
        2. Generate the Previous Rate, Previous Date and Next Date for each
           of the daily records from the OLTP.
        3. Filter out the records whose Conversion Rate is the same as the
           Previous Rate, or whose Conversion Date lies within a single-day
           range.
        4. Mark the records as 'Keep' or 'Filter' and also get the final End
           Date for the single range record (unique combination of From Date,
           To Date, Rate and Conversion Date).
        5. Filter the records marked as 'Filter' in the INFA map.

    The above steps load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly; the SIL map then inserts/updates the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G: this is a table where we store data at a daily granular level. However, we need to pivot the data, because data present in multiple rows in the source tables needs to be stored in different columns of the same row in the DW. We use GROUP BY and CASE logic to achieve this.

    Fusion

    Fusion has extraction logic very similar to EBS. The only difference is that the cleanup logic mentioned in step 0 above does not use the $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring in all the exchange rates in incremental as well and do the cleanup; the SIL then takes care of inserts/updates accordingly.

    PeopleSoft

    PeopleSoft does not have From Date and To Date explicitly in the source tables. Let's look at an example (note that this is achieved from PS1 onwards only):

        1 Jan 2010 - USD to INR - 45
        31 Jan 2010 - USD to INR - 46

    PSFT stores records in the above fashion. This means that the exchange rate of 45 for USD to INR is applicable from 1 Jan 2010 to 30 Jan 2010, and we need to store the data in this fashion in the DW. Also, PSFT stores the exchange rate as RATE_MULT and RATE_DIV, and we need to compute RATE_MULT/RATE_DIV to get the correct exchange rate. We generate the From Date and To Date while extracting data from the source, and this has certain assumptions: if a record gets updated/inserted in the source, it will be extracted in incremental. Also, if this updated/inserted record lies between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record, and we have 3 records whose ranges have changed. Taking the same example as above, if a new record is inserted on 15 Jan 2010, the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan, and 31 Jan to the next available date. Even though the 1 Jan and 31 Jan records have not changed, we still extract them because their ranges are affected. Similar logic is used for Global Exchange Rate extraction: we create the range records and load them into a temporary table, then join to the Day Dimension, create individual records, and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type.

    Siebel

    Siebel facts depend heavily on Global Exchange Rates, and almost none of them really use individual exchange rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from the PS1 release onwards. As of January 2002, the Euro triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges.
    However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL, i.e. EUR to EMU, EUR to NonEMU, EUR to DMC, and so on. We load W_EXCH_RATE_G through multiple flows with these data; this has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, does not have such needs. However, SEBL does not have From Date and To Date columns in the source tables, similar to PSFT, so we use extraction logic similar to that explained in the PSFT section for SEBL as well.

    What if all 5 Global Currencies configured are the same?

    As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns; as mentioned, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all 5 Global Currencies chosen are the same. For example, if the user configures all 5 Global Currencies as 'USD', then the extract logic will not be able to generate a record for From Currency = USD, because not all source systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case: they generate a record with Conversion Rate = 1 for such cases.

    Reusable Lookups

    Before PS1, we had a mapplet for currency conversions. In PS1, we only have reusable lookups: LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various fact mappings. Any user who wants to do a lookup on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these lookups; a direct join or lookup on the tables might lead to wrong data being returned.

    Changing Currency Preferences in the Dashboard

    In the 796x series, all amount metrics in OBIA showed the Global1 amount, and the customer needed to change the metric definitions to show them in another currency preference. Project Analytics has supported currency preferences since the 7.9.6 release, though, and published a Tech Note for customers of other modules to add toggling between currency preferences to the solution.

    List of Currency Preferences

    Starting from the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the XML file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under:

        <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml

    This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", ..., which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic.
    In Static mode, all users see the full list as in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the RPD, so the list of currency preferences is based on the user's roles. BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

    Adding Currency to an Amount Field

    When the user selects one of the items from the currency prompt, all the amounts on that page show in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data is shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields show in the currency of the Business Unit selected in the BU filter of the same page. If no particular Business Unit is selected in that filter, and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals will not be available (you cannot add USD and EUR amounts in one field), and depending on the setup (see the next paragraph) the user may receive an error. There are two ways to add the currency field to an amount metric:

    - In the form of a currency code, like USD, EUR, etc. For this, the user needs to add the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code".
    - In the form of a currency symbol ($ for USD, € for EUR, etc.). For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option of the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of a metric; in the Data Format tab, select Custom as "Treat Number As"; then enter the following syntax under Custom Number Format:

        [$:currencyTagColumn=Subjectarea.table.column]

    where Column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.
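    To make the GROUP BY + CASE pivot described in the Technical Section concrete, here is a hedged illustration (table and column names are simplified placeholders, not the actual W_GLOBAL_EXCH_RATE_G DDL, and the five currency literals stand in for the configured Global Currencies): one source row per target currency collapses into a single row with five global-rate columns.

        SELECT from_currency,
               conversion_date,
               rate_type,
               MAX(CASE WHEN to_currency = 'USD' THEN rate END) AS global1_rate,
               MAX(CASE WHEN to_currency = 'EUR' THEN rate END) AS global2_rate,
               MAX(CASE WHEN to_currency = 'GBP' THEN rate END) AS global3_rate,
               MAX(CASE WHEN to_currency = 'JPY' THEN rate END) AS global4_rate,
               MAX(CASE WHEN to_currency = 'AUD' THEN rate END) AS global5_rate
        FROM   exchange_rates
        GROUP  BY from_currency, conversion_date, rate_type;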

    Read the article

  • How do I create midi data to trigger an Ultrabeat pattern in Logic?

    - by user289581
    I've made some Ultrabeat patterns but I can't figure out how to trigger them. I make the pattern on C-1, then I go back to the arrange window and hit record and then hit C1 on my keyboard, and it doesn't play the pattern, it just plays a single note. I have Pattern Mode enabled on Ultrabeat and I'm using a one-shot trigger but I can't seem to trigger my pattern to play at all. What am I doing wrong?

    Read the article
