Search Results

Search found 10055 results on 403 pages for 'shift company'.

  • Using Generics to return a literal string or a value from a Dictionary<string, object>

    - by Mike
    I think I outsmarted myself this time (feel free to edit the title; I could not think of a good one). I am reading from a file that is structured like an XML file. An element will contain either a literal value or a "command" to pull the value from the workContainer, so:

        <Email>[email protected]</Email>  or  <Email>[? MyEmail ?]</Email>

    Instead of writing ifs all over the place, I wanted to put the logic in a generic function: if the parameter is a container command, grab the value from the container; otherwise take the string and convert it to the desired type. It is up to the user to ensure the file is OK and the type is correct. Another example:

        <Answer>3</Answer>  or  <Answer>[? NumberOfSales ?]</Answer>

    This is the method I started to work on:

        public class WorkContainer : Dictionary<string, object>
        {
            public T GetKeyValue<T>(string Parameter)
            {
                if (Parameter.StartsWith("[? "))
                {
                    string key = Parameter.Replace("[? ", "").Replace(" ?]", "");
                    if (this.ContainsKey(key))
                    {
                        return (T)this[key];
                    }
                    else
                    {
                        // may throw an error for value types
                        return default(T);
                    }
                }
                else
                {
                    // Does not compile:
                    if (typeof(T) is string)
                    {
                        return Parameter
                    }
                    // OR
                    return (T)Parameter
                }
            }
        }

    The call would be:

        mail.To = container.GetKeyValue<string>("[email protected]");
        // or
        mail.To = container.GetKeyValue<string>("[? MyEmail ?]");

        int answer = container.GetKeyValue<int>("3");
        // or
        answer = container.GetKeyValue<int>("[? NumberOfSales ?]");

    But it does not compile?
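    One way the non-command branch could be written so that it compiles (a minimal sketch: typeof(T) returns a Type object, so "typeof(T) is string" is always false, and a string cannot be cast straight to T without boxing first):

        public T GetKeyValue<T>(string parameter)
        {
            if (parameter.StartsWith("[? "))
            {
                string key = parameter.Replace("[? ", "").Replace(" ?]", "");
                // may still throw for value types when the key is missing
                return this.ContainsKey(key) ? (T)this[key] : default(T);
            }
            // compare Type objects rather than using "is"
            if (typeof(T) == typeof(string))
            {
                return (T)(object)parameter;   // box, then cast to T
            }
            // Convert.ChangeType covers int, double, DateTime, etc.
            return (T)Convert.ChangeType(parameter, typeof(T));
        }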

    Read the article

  • IE7 Debug Issue - Word Breaks in <li>

    - by JCHASE11
    Hello, I am having a very basic problem in IE7 that I cannot seem to fix. If you look at this page: http://vitaminjdesign.com/IAM/company/ you will notice that the vertical nav (ul id="leftcol") displays incorrectly in IE7: each word appears on its own line. Here is my HTML / CSS:

        <ul id="leftcol">
            <li><a class="active" href="#">Company Overview</a></li>
            <li><a href="#">Why Choose Parker?</a></li>
            <li><a href="#">Testimonials</a></li>
            <li><a href="#">Financing Promotions</a></li>
            <li><a href="#">Licensing & Credentials</a></li>
        </ul>

        ul#leftcol { float: left; width: 185px; position: relative; z-index: 10; }
        ul#leftcol li { float: right; clear: right; margin-bottom: 14px; list-style: none;
            list-style-image: none; text-align: right; line-height: 1.3; }
        ul#leftcol li a { color: #505050; text-decoration: none; font-size: 15px; float: right; }
        ul#leftcol li a.active, ul#leftcol li a:hover { color: #89b060; }

    Any ideas?
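    One fix that often clears this particular IE7 quirk, sketched under the assumption that the right-aligned layout should be preserved: IE7 tends to wrap floated, shrink-to-fit elements one word per line, so give the list items the full column width instead of floating the anchors.

        /* let each item span the column; right-alignment keeps the look */
        ul#leftcol li { float: none; width: 185px; text-align: right; }
        /* anchors no longer float, and long labels stay on one line */
        ul#leftcol li a { float: none; white-space: nowrap; }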

    Read the article

  • How to create anonymous objects of type IQueryable using LINQ

    - by Soham Dasgupta
    Hi, I'm working in an ASP.NET MVC project where I have created two LINQ to SQL classes. I have also created a repository class for the models, and I've implemented methods like LIST, ADD, and SAVE in that class, which serve the controller with data. Now in one of the repository classes I have pulled some data with LINQ joins, like this:

        private HerculesCompanyDataContext Company = new HerculesCompanyDataContext();
        private HerculesMainDataContext MasterData = new HerculesMainDataContext();

        public IQueryable TRFLIST()
        {
            var info = from trfTable in Company.TRFs
                       join exusrTable in MasterData.ex_users
                           on trfTable.P_ID equals exusrTable.EXUSER
                       select new
                       {
                           trfTable.REQ_NO,
                           trfTable.REQ_DATE,
                           exusrTable.USER_NAME,
                           exusrTable.USER_LNAME,
                           trfTable.FROM_DT,
                           trfTable.TO_DT,
                           trfTable.DESTN,
                           trfTable.TRAIN,
                           trfTable.CAR,
                           trfTable.AIRPLANE,
                           trfTable.TAXI,
                           trfTable.TPURPOSE,
                           trfTable.STAT,
                           trfTable.ROUTING
                       };
            return info;
        }

    Now when I call this method from my controller, I'm unable to get a list. What I want to know is: without creating a custom data model class, how can I return an object of an anonymous type as IQueryable? And since this does not belong to any one data model, how can I refer to this list in the view?
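    Anonymous types cannot cross a method boundary in a typed way, so a minimal sketch of the usual workaround, assuming a small view-model class is acceptable (the class and property names here are illustrative):

        public class TrfListItem
        {
            public string REQ_NO { get; set; }
            public DateTime REQ_DATE { get; set; }
            public string USER_NAME { get; set; }
            // ...remaining columns as needed
        }

        public IQueryable<TrfListItem> TRFLIST()
        {
            return from trfTable in Company.TRFs
                   join exusrTable in MasterData.ex_users
                       on trfTable.P_ID equals exusrTable.EXUSER
                   select new TrfListItem
                   {
                       REQ_NO = trfTable.REQ_NO,
                       REQ_DATE = trfTable.REQ_DATE,
                       USER_NAME = exusrTable.USER_NAME
                   };
        }

    The view can then be strongly typed against IEnumerable<TrfListItem>, which also answers the "how do I refer to it in the view" question.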

    Read the article

  • [PHP] Local/Dev/Live deployment - best workflow

    - by Adam Kiss
    Hello. Situation: we're a little company of 3 people; each of us has a localhost webserver, and most projects (previous and current) live on one shared network disk. We have a virtual server, which hosts some of our clients' sites and our own site. Our standard workflow is: Coder PC -> programmer localhost -> dev domain (client.company.com) -> live version (client.com). It often happens that two or three of us work on the same project at the same time: one on the dev version, two on localhost. When finished, we try to synchronize the files onto the dev version, ideally without messing up any files, which (knock knock) doesn't happen often. Then one of us deploys the dev version to the live webserver. Question: we are looking for a way to simplify this workflow while updating websites, ideally some sort of diff uploader or, more likely, a VCS (Git/SVN/...), but we are not completely sure where to begin or what approach would be ideal. Therefore I ask you, fellow stackoverflowers, for your experience with website/application deployment and your recommended workflow. We will probably also need to use a Mac in the process, so if that won't be a problem, even better. Thank you
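    For what it's worth, a minimal central-repository Git setup could look like the sketch below (the server path and branch names are hypothetical, and it assumes SSH access to the virtual server):

        # one shared bare repository on the virtual server
        git init --bare /home/repos/client.git

        # each developer clones it to localhost and works on a branch
        git clone ssh://server/home/repos/client.git
        cd client
        git checkout -b my-feature
        git commit -am "work on feature"
        git push origin my-feature

        # merging into a "dev" branch and pulling in the docroot of
        # client.company.com updates the dev site; merging dev into
        # "master" and pulling on the live box promotes it to client.com

    Git runs fine on Mac, Windows, and Linux, so the mixed-platform requirement should not be a problem.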

    Read the article

  • How to give the user a confirmation message before an ActionLink, based on validation

    - by RememberME
    I have the following link. On click, I'd like to check the item.primary_company field and, if it is populated, give the user a warning and ask if they would like to continue. How can I do this?

        <a href="<%= Url.Action("Activate", new {id = item.company_id}) %>"
           class="fg=button fg-button-icon-solo ui-state-default ui-corner-all">
            <span class="ui-icon ui-icon-refresh"></span></a>

    EDIT: I've changed to the following, but nothing happens when clicked. Also, I don't know how to reference the item to do the check on the primary_company field. I only want the message to show if item.primary_company.HasValue. I'd also like to show item.company1.company_name in the confirm message.

        <a href="#"
           onclick="return Actionclick("<%= Url.Action("Activate", new {id = item.company_id}) %>");"
           class="fg=button fg-button-icon-solo ui-state-default ui-corner-all">
            <span class="ui-icon ui-icon-refresh"></span></a>

        <script type="text/javascript">
            function Actionclick(url) {
                alert("myclick");
                if (confirm("Do you want to activate this company's primary company and all other subsidiaries?")) {
                    location.href(url);
                }
            };
        </script>
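    A sketch of a version that should work, assuming the view context above. Two things break the edited attempt: the onclick attribute nests double quotes inside double quotes, so the markup is cut short, and location.href is a property, not a function. Passing the HasValue check into the handler also covers the "only confirm when there is a primary company" requirement:

        <a href="<%= Url.Action("Activate", new {id = item.company_id}) %>"
           onclick='return actionClick(<%= item.primary_company.HasValue ? "true" : "false" %>);'
           class="fg-button fg-button-icon-solo ui-state-default ui-corner-all">
            <span class="ui-icon ui-icon-refresh"></span></a>

        <script type="text/javascript">
            // return true to follow the link's href, false to cancel it
            function actionClick(hasPrimary) {
                if (!hasPrimary) {
                    return true; // no primary company, no confirmation needed
                }
                return confirm("Do you want to activate this company's " +
                               "primary company and all other subsidiaries?");
            }
        </script>

    To show item.company1.company_name in the message, it could be passed to the handler as a second (string) argument the same way.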

    Read the article

  • How to "upgrade" the database in real world?

    - by Tattat
    My company has developed a web application using PHP + MySQL. The system can display a product's original price and a discount price to the user: if you haven't logged in, you get the original price; if you are logged in, you get the discount price. Pretty easy to understand. But my company wants more features in the system: it wants to display different prices based on different users. For example, user A is a golden partner and gets 50% off, while user B is a silver partner and only gets 30% off. This logic was not prepared for in the original system, so I need to add some attributes to the database, at least a user type in this example. Is there any recommendation on how to migrate the current database to my new version of the database? Also, all the data should be preserved, and the server should work 24/7 (without stopping the database). Is it possible to do so? Also, any recommended advice for future maintenance? Thanks.
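    A minimal sketch of the kind of online schema change this could start with, assuming MySQL (the table and column names are hypothetical). Adding the column with a DEFAULT keeps every existing row valid, so the old code keeps running while the new column is introduced:

        ALTER TABLE users
            ADD COLUMN user_type VARCHAR(20) NOT NULL DEFAULT 'standard';

        -- backfill the partner levels, then deploy the code that reads user_type
        UPDATE users SET user_type = 'gold'   WHERE id IN (/* golden partner ids */);
        UPDATE users SET user_type = 'silver' WHERE id IN (/* silver partner ids */);

    For future maintenance, keeping each change as a numbered migration script, applied in the same order on dev, staging, and live, makes upgrades like this repeatable.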

    Read the article

  • How can I [simply] consume JSON Data in a Line of Business Web Application

    - by Atomiton
    I usually use JSON with jQuery to just return a string with HTML. However, I want to start using JavaScript objects in my code. What's the simplest way to get started using JSON objects on my page? Here's a sample Ajax call (inside $(document).ready(...), of course):

        $('#btn').click(function (event) {
            event.preventDefault();
            var out = $('#result');
            $.ajax({
                url: "CustomerServices.asmx/GetCustomersByInvoiceCount",
                success: function (msg) {
                    // Iterate through the json results and spit them out to the page?
                },
                data: "{ 'invoiceCount' : 100 }"
            });
        });

    My WebMethod:

        [WebMethod(Description = "Gets customers with more than n invoices")]
        public List<Customer> GetCustomersByInvoiceCount(int? invoiceCount)
        {
            using (dbDataContext db = new dbDataContext())
            {
                return db.Customers.Where(c => c.InvoiceCount >= invoiceCount).ToList();
            }
        }

    What gets returned:

        {"d":[{"__type":"Customer","Account":"1116317","Name":"SOME COMPANY","Address":"UNit 1 , 392 JOHN ST. ","LastTransaction":"\/Date(1268294400000)\/","HighestBalance":13922.34},{"__type":"Customer","Account":"1116318","Name":"ANOTHER COMPANY","Address":"UNIT #345 , 392 JOHN ST. ","LastTransaction":"\/Date(1265097600000)\/","HighestBalance":549.42}]}

    What I'd LIKE to know is: what are people generally doing with this returned JSON? Do you iterate through the properties and create an HTML table on the fly? Is there a way to "bind" JSON data using a JavaScript version of reflection (something like the .NET GridView control)? Do you throw this returned data into a JavaScript object and then do something with it? An example of what I want to achieve is a plain ol' HTML page (on a mobile device) with a list of a salesperson's customers. When one of those customers is clicked, the customer id gets sent to a webservice, which retrieves the customer details that are relevant to a salesperson. I know the SO talent pool is quite deep, so I figured you all would be able to guide me in the right direction and give me a few ideas on the best way to approach this.
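    One common pattern, sketched under the assumption that the service above is an ASP.NET script service: tell jQuery that both the request and the response are JSON, then build markup from the typed objects in msg.d (ASP.NET wraps the payload in a "d" property):

        $.ajax({
            type: "POST",
            url: "CustomerServices.asmx/GetCustomersByInvoiceCount",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            data: "{ 'invoiceCount' : 100 }",
            success: function (msg) {
                var rows = [];
                $.each(msg.d, function (i, c) {
                    // each "c" is a real object, so its properties are used directly
                    rows.push("<tr><td>" + c.Account + "</td><td>" +
                              c.Name + "</td><td>" + c.HighestBalance + "</td></tr>");
                });
                out.html("<table>" + rows.join("") + "</table>");
            }
        });

    For anything fancier than a hand-built table, a client-side templating plugin can "bind" the same objects to markup, which is about as close as JavaScript gets to the GridView-style reflection being asked about.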

    Read the article

  • Mixing stored procedures and ORM

    - by Jason
    The company I work for develops a large application which is almost entirely based on stored procedures. We use classic ASP and SQL Server, and the major part of the business logic is contained inside those stored procedures. For example (I know, this is bad...), a single stored procedure can be used for different purposes (insert, update, delete, make some calculations, ...). Most of the time, a stored procedure is used for operations on related tables, but this is not always the case. We are planning to move to ASP.NET in the near future. I have read a lot of posts on StackOverflow recommending that I move the business logic outside the database. The thing is, I have tried to convince the people who make the decisions at our company, and there is nothing I can do to change their minds. Since I want to be able to use the advantages of object-oriented programming, I want to map the tables to actual classes. So far, my solution is to use an ORM (Entity Framework 4 or NHibernate) to avoid mapping the objects manually (mostly to retrieve the data) and use some kind of data access layer to call the existing stored procedures (for saving). I want your advice on this. Do you think it is a good solution? Any ideas?
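    A minimal sketch of the hybrid repository being described, assuming Entity Framework 4 with an ObjectContext (the entity, context, and procedure names are hypothetical): the ORM handles reads, and writes go through the existing stored procedures.

        public class CustomerRepository
        {
            private readonly MyEntities context = new MyEntities();

            // reads go through the ORM, so queries compose with LINQ
            public Customer GetById(int id)
            {
                return context.Customers.Single(c => c.Id == id);
            }

            // writes call the existing stored procedure unchanged
            public void Save(Customer c)
            {
                context.ExecuteStoreCommand(
                    "EXEC usp_SaveCustomer @Id = {0}, @Name = {1}",
                    c.Id, c.Name);
            }
        }

    Entity Framework can also map stored procedures as function imports, which keeps the calls strongly typed instead of string-based.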

    Read the article

  • How to track a project's extraneous quirks

    - by Steerpike
    Hello, it's possible that the answer to this question is just standard bug-tracking software like Jira or FogBugz, but I'm kind of hoping someone out there knows a better system for what I'm describing. My current project requires a lot of setup quirkiness before I can actually start a coding session. For example: a series of convoluted internal company commands before I can initiate an SSH session; making sure any third-party classes that make external calls have internal company proxy options set up, while also making sure these settings won't be set up when installed on a production environment; making sure the proxy is set before trying to install PEAR packages; and other similar things, mostly involving internal IT security and getting it to work with modules and packages. Individually, none of these things is a huge deal, and I've written extensive notes to myself regarding the exact commands and additions I've made, but they're currently in a general text document, and it's going to be hard to remember exactly where what I need is far down the line. We also have several new staff starting soon, and I'd rather give them an easier time setting up their programming environments. Like I said, they aren't 'programming quirks' exactly, just the constant fiddling that comes before programming starts in earnest. Any thoughts on the best way to document these things, for my own and future generations' sanity?

    Read the article

  • PHP download script for "processes running limited" hosting (eg. hostgator)

    - by Joe
    I am currently with HostGator on a shared hosting plan. I have a new website I'm trying to set up with a download.php script. The issue I am having is that while someone is downloading a file through the download.php script, it counts as a "process", and my hosting plan limits the processes that can run at the same time to 25 at present. My question is: what options do I have? a) Move to new web hosting that doesn't limit running processes. b) Change the way files are downloaded. I would like to choose option b); however, it occurs to me that I need to have the file accessed through PHP in order to restrict the number of downloads, track download statistics, and protect against hotlinking. If there were a way to have the PHP script hand the file off so that the process doesn't need to be running the whole time the file is being downloaded, that would eliminate the problem, but to my knowledge that isn't possible. Should I make the move to a new hosting company? I really enjoy HostGator, as they have provided the best hosting experience for me so far, except for this one issue of course, so I don't want to go hunting for another decent shared host that doesn't limit running processes, only to find another restriction or "catch" in the shared hosting deal.
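    For the record, handing the transfer off to the web server is possible where the host has mod_xsendfile (Apache) or an equivalent enabled; whether a given shared host allows it is another question. A minimal sketch (the paths are hypothetical):

        <?php
        // ...authenticate, count the download, block hotlinking here...
        $file = '/home/user/protected/' . basename($_GET['f']); // outside docroot

        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="' . basename($file) . '"');
        // Apache streams the file and the PHP process exits immediately,
        // so it no longer counts against the concurrent-process limit
        header('X-Sendfile: ' . $file);
        exit;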

    Read the article

  • How do you deal with breaking changes in a Rails migration?

    - by Adam Lassek
    Let's say I'm starting out with this model:

        class Location < ActiveRecord::Base
          attr_accessible :company_name, :location_name
        end

    Now I want to refactor one of the values into an associated model.

        class CreateCompanies < ActiveRecord::Migration
          def self.up
            create_table :companies do |t|
              t.string :name, :null => false
              t.timestamps
            end
            add_column :locations, :company_id, :integer, :null => false
          end

          def self.down
            drop_table :companies
            remove_column :locations, :company_id
          end
        end

        class Location < ActiveRecord::Base
          attr_accessible :location_name
          belongs_to :company
        end

        class Company < ActiveRecord::Base
          has_many :locations
        end

    This all works fine during development, since I'm doing everything a step at a time; but if I try deploying this to my staging environment, I run into trouble. The problem is that since my code has already changed to reflect the migration, it causes the environment to crash when it attempts to run the migration. Has anyone else dealt with this problem? Am I resigned to splitting my deployment up into multiple steps?
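    One common way around this, sketched under the assumption that the crash comes from the migration loading the current app code: define a bare, migration-local model so the migration never depends on what the application's Location class looks like today.

        class CreateCompanies < ActiveRecord::Migration
          # stand-in model, immune to changes in app/models/location.rb
          class Location < ActiveRecord::Base; end

          def self.up
            create_table :companies do |t|
              t.string :name, :null => false
              t.timestamps
            end

            # add the column as nullable first, so existing rows stay valid
            add_column :locations, :company_id, :integer
            Location.reset_column_information

            # ...backfill company_id for existing locations here...

            change_column :locations, :company_id, :integer, :null => false
          end

          def self.down
            remove_column :locations, :company_id
            drop_table :companies
          end
        end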

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now, and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft ships a decent API for building generic HTTP endpoints in the framework.

    DataContractJsonSerializer sucks

    As nice as Web API's overall design is, one thing still sucks: the built-in JSON serialization uses the DataContractJsonSerializer, which is just too limiting for many scenarios. The biggest issues I have with it are:

        - No support for untyped values (object, dynamic, anonymous types)
        - MS AJAX style date formatting
        - Ugly serialization formats for types like Dictionaries

    To me the most serious issue is the serialization of untyped objects. I have a number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there is IEnumerable query results from a database, with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid, for example). Creating custom types to fit these messages seems like overkill, and projections using LINQ make this much easier to code up. Alas, DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output, for that matter. What this means is that you can't do stuff like this in Web API out of the box:

        public object GetAnonymousType()
        {
            return new { name = "Rick", company = "West Wind", entered = DateTime.Now };
        }

    Basically, anything that doesn't have an explicit type DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer, which also doesn't work with untyped values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of the Web API RTM release. Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but there are no details yet on what capacity it will show up in. Let's hope it ends up as the default in the box. Meanwhile, this post will show you how you can use it today with the Beta and get JSON that matches what you should see in the RTM version.

    What about JsonValue?

    To be fair, Web API DOES include the new JsonValue/JsonObject/JsonArray types, which allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create a .NET type first. JsonValue can also receive a JSON string and parse it without having to load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned.
    For serialization you can create an object structure on the fly and pass it back as part of a Web API action method, like this:

        public JsonValue GetJsonValue()
        {
            dynamic json = new JsonObject();
            json.name = "Rick";
            json.company = "West Wind";
            json.entered = DateTime.Now;

            dynamic address = new JsonObject();
            address.street = "32 Kaiea";
            address.zip = "96779";
            json.address = address;

            dynamic phones = new JsonArray();
            json.phoneNumbers = phones;

            dynamic phone = new JsonObject();
            phone.type = "Home";
            phone.number = "808 123-1233";
            phones.Add(phone);

            phone = new JsonObject();
            phone.type = "Mobile";
            phone.number = "808 123-1234";
            phones.Add(phone);

            //var jsonString = json.ToString();
            return json;
        }

    which produces the following output (formatted here for easier reading):

        {
            name: "Rick",
            company: "West Wind",
            entered: "2012-03-08T15:33:19.673-10:00",
            address: {
                street: "32 Kaiea",
                zip: "96779"
            },
            phoneNumbers: [
                { type: "Home", number: "808 123-1233" },
                { type: "Mobile", number: "808 123-1234" }
            ]
        }

    If you need to build a simple JSON type on the fly, these types work great. But if you have an existing type, or worse a query result/list that's already formatted, JsonValue et al. become a pain to work with. As far as I can see, there's no way to just throw an object instance at JsonValue and have it convert into a JsonValue dictionary. It's a manual process.

    Using alternate Serializers in Web API

    So, currently the default serializer in Web API is DataContractJsonSerializer, and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions, or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer.
    Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:

        using System;
        using System.IO;
        using System.Json;
        using System.Net.Http.Formatting;
        using System.Threading.Tasks;
        using System.Web.Script.Serialization;

        namespace Westwind.Web.WebApi
        {
            public class JavaScriptSerializerFormatter : MediaTypeFormatter
            {
                public JavaScriptSerializerFormatter()
                {
                    SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
                }

                protected override bool CanWriteType(Type type)
                {
                    // don't serialize JsonValue structures - use the default formatter for those
                    if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                        return false;
                    return true;
                }

                protected override bool CanReadType(Type type)
                {
                    if (type == typeof(IKeyValueModel))
                        return false;
                    return true;
                }

                protected override Task<object> OnReadFromStreamAsync(Type type, Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
                {
                    var task = Task<object>.Factory.StartNew(() =>
                    {
                        var ser = new JavaScriptSerializer();
                        string json;
                        using (var sr = new StreamReader(stream))
                        {
                            json = sr.ReadToEnd();
                            sr.Close();
                        }
                        object val = ser.Deserialize(json, type);
                        return val;
                    });
                    return task;
                }

                protected override Task OnWriteToStreamAsync(Type type, object value, Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext)
                {
                    var task = Task.Factory.StartNew(() =>
                    {
                        var ser = new JavaScriptSerializer();
                        var json = ser.Serialize(value);
                        byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                        stream.Write(buf, 0, buf.Length);
                        stream.Flush();
                    });
                    return task;
                }
            }
        }

    Formatter implementation is pretty simple: you override four methods to say which types you can handle, and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or Json.NET handle them, they'd try to render the dictionaries, which is very undesirable.
    If you'd rather use Json.NET, here's the Json.NET version of the formatter:

        // this code requires a reference to JSON.NET in your project
        using System;
        using System.IO;
        using System.Json;
        using System.Net.Http.Formatting;
        using System.Threading.Tasks;
        using Newtonsoft.Json;
        using Newtonsoft.Json.Converters;

        namespace Westwind.Web.WebApi
        {
            public class JsonNetFormatter : MediaTypeFormatter
            {
                public JsonNetFormatter()
                {
                    SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
                }

                protected override bool CanWriteType(Type type)
                {
                    // don't serialize JsonValue structures - use the default formatter for those
                    if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                        return false;
                    return true;
                }

                protected override bool CanReadType(Type type)
                {
                    if (type == typeof(IKeyValueModel))
                        return false;
                    return true;
                }

                protected override Task<object> OnReadFromStreamAsync(Type type, Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
                {
                    var task = Task<object>.Factory.StartNew(() =>
                    {
                        var settings = new JsonSerializerSettings()
                        {
                            NullValueHandling = NullValueHandling.Ignore,
                        };
                        var sr = new StreamReader(stream);
                        var jreader = new JsonTextReader(sr);
                        var ser = new JsonSerializer();
                        ser.Converters.Add(new IsoDateTimeConverter());
                        object val = ser.Deserialize(jreader, type);
                        return val;
                    });
                    return task;
                }

                protected override Task OnWriteToStreamAsync(Type type, object value, Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext)
                {
                    var task = Task.Factory.StartNew(() =>
                    {
                        var settings = new JsonSerializerSettings()
                        {
                            NullValueHandling = NullValueHandling.Ignore,
                        };
                        string json = JsonConvert.SerializeObject(value, Formatting.Indented,
                                          new JsonConverter[1] { new IsoDateTimeConverter() });
                        byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                        stream.Write(buf, 0, buf.Length);
                        stream.Flush();
                    });
                    return task;
                }
            }
        }

    One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling, and you can plug in the IsoDateTimeConverter, which is nice to produce proper ISO dates that I would expect any JSON serializer to output these days.

    Hooking up the Formatters

    Once you've created the custom formatters, you need to enable them for your Web API application. To do this, use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:

        protected void Application_Start(object sender, EventArgs e)
        {
            // Action based routing (used for RPC calls)
            RouteTable.Routes.MapHttpRoute(
                name: "StockApi",
                routeTemplate: "stocks/{action}/{symbol}",
                defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" }
            );

            // WebApi configuration to hook up formatters and message handlers (optional)
            RegisterApis(GlobalConfiguration.Configuration);
        }

        public static void RegisterApis(HttpConfiguration config)
        {
            // Add the JavaScriptSerializer formatter instead - add at top to make default
            //config.Formatters.Insert(0, new JavaScriptSerializerFormatter());

            // Add the Json.NET formatter - add it at the top so it fires first!
            // This leaves the old one in place so JsonValue/JsonObject/JsonArray are still handled
            config.Formatters.Insert(0, new JsonNetFormatter());
        }

    One thing to remember here is the GlobalConfiguration object, which is Web API's static configuration instance. I think this thing is seriously misnamed, given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is, you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I am also not removing the old formatter, because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since the formatters process in sequence and I exclude processing for these types, JsonValue et al. still get properly serialized/deserialized.

    Summary

    Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability, with relatively limited effort, to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios: if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services, you can plug in the JavaScriptSerializer and get exactly the same serializer you used in the past, so your results will be the same and won't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates, for example), so if you're migrating, it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward, it looks like Microsoft is planning on plugging Json.NET into Web API and making that the default. I think that's an awesome choice, since Json.NET has been around forever, is fast and easy to use, and provides a ton of functionality as part of this great library. I just wish Microsoft had figured this out sooner instead of integrating with it now at the last minute, especially given that Json.NET has a similar set of lower-level JSON objects (JsonValue/JsonObject, etc.) which now will end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray, and now Json.NET). For years I've been using my own JSON serializer because the built-in choices were so limited. However, with an official endorsement of Json.NET, I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api  AJAX  ASP.NET

    Read the article

  • Working with PivotTables in Excel

    - by Mark Virtue
    PivotTables are one of the most powerful features of Microsoft Excel.  They allow large amounts of data to be analyzed and summarized in just a few mouse clicks. In this article, we explore PivotTables, understand what they are, and learn how to create and customize them.

    Note:  This article is written using Excel 2010 (Beta).  The concept of a PivotTable has changed little over the years, but the method of creating one has changed in nearly every iteration of Excel.  If you are using a version of Excel that is not 2010, expect different screens from the ones you see in this article.

    A Little History

    In the early days of spreadsheet programs, Lotus 1-2-3 ruled the roost.  Its dominance was so complete that people thought it was a waste of time for Microsoft to bother developing their own spreadsheet software (Excel) to compete with Lotus.  Flash-forward to 2010, and Excel’s dominance of the spreadsheet market is greater than Lotus’s ever was, while the number of users still running Lotus 1-2-3 is approaching zero.  How did this happen?  What caused such a dramatic reversal of fortunes?

    Industry analysts put it down to two factors:  Firstly, Lotus decided that this fancy new GUI platform called “Windows” was a passing fad that would never take off.  They declined to create a Windows version of Lotus 1-2-3 (for a few years, anyway), predicting that their DOS version of the software was all anyone would ever need.  Microsoft, naturally, developed Excel exclusively for Windows.  Secondly, Microsoft developed a feature for Excel that Lotus didn’t provide in 1-2-3, namely PivotTables.  The PivotTables feature, exclusive to Excel, was deemed so staggeringly useful that people were willing to learn an entire new software package (Excel) rather than stick with a program (1-2-3) that didn’t have it.  This one feature, along with the misjudgment of the success of Windows, was the death-knell for Lotus 1-2-3, and the beginning of the success of Microsoft Excel.

    Understanding PivotTables

    So what is a PivotTable, exactly?  Put simply, a PivotTable is a summary of some data, created to allow easy analysis of said data.  But unlike a manually created summary, Excel PivotTables are interactive.  Once you have created one, you can easily change it if it doesn’t offer the exact insights into your data that you were hoping for.  In a couple of clicks the summary can be “pivoted” – rotated in such a way that the column headings become row headings, and vice versa.  There’s a lot more that can be done, too.  Rather than try to describe all the features of PivotTables, we’ll simply demonstrate them…

    The data that you analyze using a PivotTable can’t be just any data – it has to be raw data, previously unprocessed (unsummarized) – typically a list of some sort.  An example of this might be the list of sales transactions in a company for the past six months. Examine the data shown below: Notice that this is not raw data.  In fact, it is already a summary of some sort.  In cell B3 we can see $30,000, which apparently is the total of James Cook’s sales for the month of January.  So where is the raw data?  How did we arrive at the figure of $30,000?  Where is the original list of sales transactions that this figure was generated from?  It’s clear that somewhere, someone must have gone to the trouble of collating all of the sales transactions for the past six months into the summary we see above.  How long do you suppose this took?  An hour?  Ten?  Probably.
    If we were to track down the original list of sales transactions, it might look something like this: You may be surprised to learn that, using the PivotTable feature of Excel, we can create a monthly sales summary similar to the one above in a few seconds, with only a few mouse clicks.  We can do this – and a lot more too!

    How to Create a PivotTable

    First, ensure that you have some raw data in a worksheet in Excel.  A list of financial transactions is typical, but it can be a list of just about anything:  Employee contact details, your CD collection, or fuel consumption figures for your company’s fleet of cars. So we start Excel… …and we load such a list… Once we have the list open in Excel, we’re ready to start creating the PivotTable.

    Click on any one single cell within the list.  Then, from the Insert tab, click the PivotTable icon.  The Create PivotTable box appears, asking you two questions:  What data should your new PivotTable be based on, and where should it be created?  Because we already clicked on a cell within the list (in the step above), the entire list surrounding that cell is already selected for us ($A$1:$G$88 on the Payments sheet, in this example).  Note that we could select a list in any other region of any other worksheet, or even some external data source, such as an Access database table, or even an MS-SQL Server database table.  We also need to select whether we want our new PivotTable to be created on a new worksheet, or on an existing one.  In this example we will select a new one.

    The new worksheet is created for us, and a blank PivotTable is created on that worksheet.  Another box also appears:  the PivotTable Field List.  This field list will be shown whenever we click on any cell within the PivotTable.  The list of fields in the top part of the box is actually the collection of column headings from the original raw data worksheet.  The four blank boxes in the lower part of the screen allow us to choose the way we would like our PivotTable to summarize the raw data.  So far, there is nothing in those boxes, so the PivotTable is blank.  All we need to do is drag fields down from the list above and drop them in the lower boxes.  A PivotTable is then automatically created to match our instructions.  If we get it wrong, we only need to drag the fields back to where they came from and/or drag new fields down to replace them.

    The Values box is arguably the most important of the four.  The field that is dragged into this box represents the data that needs to be summarized in some way (by summing, averaging, finding the maximum, minimum, etc).  It is almost always numerical data.  A perfect candidate for this box in our sample data is the “Amount” field/column.  Let’s drag that field into the Values box.  Notice that (a) the “Amount” field in the list of fields is now ticked, and (b) “Sum of Amount” has been added to the Values box, indicating that the amount column has been summed.

    If we examine the PivotTable itself, we indeed find the sum of all the “Amount” values from the raw data worksheet.  We’ve created our first PivotTable!  Handy, but not particularly impressive.  It’s likely that we need a little more insight into our data than that. Referring to our sample data, we need to identify one or more column headings that we could conceivably use to split this total.  For example, we may decide that we would like to see a summary of our data where we have a row heading for each of the different salespersons in our company, and a total for each.
    To achieve this, all we need to do is to drag the “Salesperson” field into the Row Labels box.  Now, finally, things start to get interesting!  Our PivotTable starts to take shape…  With a couple of clicks we have created a table that would have taken a long time to do manually.

    So what else can we do?  Well, in one sense our PivotTable is complete.  We’ve created a useful summary of our source data.  The important stuff is already learned!  For the rest of the article, we will examine some ways that more complex PivotTables can be created, and ways that those PivotTables can be customized.

    First, we can create a two-dimensional table.  Let’s do that by using “Payment Method” as a column heading.  Simply drag the “Payment Method” heading to the Column Labels box.  Starting to get very cool!

    Let’s make it a three-dimensional table.  What could such a table possibly look like?  Well, let’s see… Drag the “Package” column/heading to the Report Filter box.  This allows us to filter our report based on which “holiday package” was being purchased.  For example, we can see the breakdown of salesperson vs payment method for all packages, or, with a couple of clicks, change it to show the same breakdown for the “Sunseekers” package.  And so, if you think about it the right way, our PivotTable is now three-dimensional.  Let’s keep customizing…

    If it turns out, say, that we only want to see cheque and credit card transactions (i.e. no cash transactions), then we can deselect the “Cash” item from the column headings.  Click the drop-down arrow next to Column Labels, and untick “Cash”.  Let’s see what that looks like…  As you can see, “Cash” is gone.

    Formatting

    This is obviously a very powerful system, but so far the results look very plain and boring.  For a start, the numbers that we’re summing do not look like dollar amounts – just plain old numbers.  Let’s rectify that. A temptation might be to do what we’re used to doing in such circumstances and simply select the whole table (or the whole worksheet) and use the standard number formatting buttons on the toolbar to complete the formatting.  The problem with that approach is that if you ever change the structure of the PivotTable in the future (which is 99% likely), then those number formats will be lost.  We need a way that will make them (semi-)permanent.

    First, we locate the “Sum of Amount” entry in the Values box, and click on it.  A menu appears.  We select Value Field Settings… from the menu.  The Value Field Settings box appears.  Click the Number Format button, and the standard Format Cells box appears.  From the Category list, select (say) Accounting, and drop the number of decimal places to 0.  Click OK a few times to get back to the PivotTable.  As you can see, the numbers have been correctly formatted as dollar amounts.

    While we’re on the subject of formatting, let’s format the entire PivotTable.  There are a few ways to do this.  Let’s use a simple one… Click the PivotTable Tools/Design tab, then drop down the arrow in the bottom-right of the PivotTable Styles list to see a vast collection of built-in styles.  Choose any one that appeals, and look at the result in your PivotTable.

    Other Options

    We can work with dates as well.  Now usually, there are many, many dates in a transaction list such as the one we started with.  But Excel provides the option to group data items together by day, week, month, year, etc.  Let’s see how this is done.
    First, let’s remove the “Payment Method” column from the Column Labels box (simply drag it back up to the field list), and replace it with the “Date Booked” column.  As you can see, this makes our PivotTable instantly useless, giving us one column for each date that a transaction occurred on – a very wide table!

    To fix this, right-click on any date and select Group… from the context menu.  The grouping box appears.  We select Months and click OK.  Voila!  A much more useful table.  (Incidentally, this table is virtually identical to the one shown at the beginning of this article – the original sales summary that was created manually.)

    Another cool thing to be aware of is that you can have more than one set of row headings (or column headings).  You can do a similar thing with column headings (or even report filters).

    Keeping things simple again, let’s see how to plot averaged values, rather than summed values. First, click on “Sum of Amount”, and select Value Field Settings… from the context menu that appears.  In the “Summarize value field by” list in the Value Field Settings box, select Average.  While we’re here, let’s change the Custom Name from “Average of Amount” to something a little more concise.  Type in something like “Avg”.  Click OK, and see what it looks like.  Notice that all the values change from summed totals to averages, and the table title (top-left cell) has changed to “Avg”.

    If we like, we can even have sums, averages and counts (counts = how many sales there were) all on the same PivotTable!  Here are the steps to get something like that in place (starting from a blank PivotTable):

        1. Drag “Salesperson” into the Column Labels box
        2. Drag the “Amount” field down into the Values box three times
        3. For the first “Amount” field, change its custom name to “Total” and its number format to Accounting (0 decimal places)
        4. For the second “Amount” field, change its custom name to “Average”, its function to Average, and its number format to Accounting (0 decimal places)
        5. For the third “Amount” field, change its name to “Count” and its function to Count
        6. Drag the automatically created field from Column Labels to Row Labels

    Here’s what we end up with: total, average and count on the same PivotTable!

    Conclusion

    There are many, many more features and options for PivotTables created by Microsoft Excel – far too many to list in an article like this.  To fully cover the potential of PivotTables, a small book (or a large website) would be required.  Brave and/or geeky readers can explore PivotTables further quite easily:  Simply right-click on just about everything, and see what options become available to you.  There are also the two ribbon tabs: PivotTable Tools/Options and Design.  It doesn’t matter if you make a mistake – it’s easy to delete the PivotTable and start again – a possibility old DOS users of Lotus 1-2-3 never had.  We’ve included an Excel file that should work with most versions of Excel, so you can download it to practice your PivotTable skills.
    Download Our Practice Excel File

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami

    A typical data warehouse contains Star and/or Snowflake schemas, made up of Dimensions and Facts. The facts store various numerical information, including amounts; for example, Order Amount, Invoice Amount, etc. With the truly global nature of business nowadays, end users want to view reports in their own currency or in a global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates, either by pre-storing them or by doing on-the-fly conversions while displaying the reports to the users.

    Source Systems

    OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion, etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions, and presenting them to the OLTP users. For example, EBS stores conversion rates between currencies which can be classified by rate types, like Corporate rate, Spot rate, Period rate, etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, whereas PSFT/Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor, and we need to calculate the Rate based on them, whereas other source systems store the Currency Exchange Rate directly.

    OBIA Design

    The data consolidation from various disparate source systems poses the challenge of conforming various currencies, rate types, exchange rates, etc., and designing the best way to present the amounts to the users without affecting performance. When consolidating the data for reporting in OBIA, we have designed mechanisms in the Common Dimension to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies:

    Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies.

    Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company.

    Global Currencies: OBIA provides five Global Currencies. Three are used across all modules; the last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. For example, a US-based multinational would want to see the reports in USD, so the company will choose USD as one of the global currencies. OBIA allows users to define up to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values Document Currency, Local Currency, Global Currency 1, Global Currency 2, and Global Currency 3, which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics.

    When choosing Local Currency for the Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard.

    Design Logic

    When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount into the target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts.
    If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates.

    The lookup of exchange rates is done via the Exchange Rate Dimension, provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables, W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G, are used to provide the lookups and conversions between currencies. The data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. W_GLOBAL_EXCH_RATE_G, on the other hand, stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide to use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, allowing users to map the currencies to a universal Domain during implementation. This is especially important for companies deploying and using OBIA with multiple source adapters.

    Some Gotchas to Look for

    It is necessary to think through the currencies during the initial implementation.

    1) Identify the various types of currencies that are used by your business. Understand what your Local (or Base) and Document currencies will be. Identify the various global currencies in which your users will want to look at the reports; this will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time.

    2) If you have a multi-source system, make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, a source Domain mapping has already been done.

    Technical Section

    This section briefly describes the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information, as explained in previous sections: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G. W_EXCH_RATE_G stores all the Currency Conversions present in the source system, captured for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a daily level; the challenge there is to store all 5 Global Currency Exchange Rates in a single record for each From Currency. Let's look at the Source System extraction logic for each of these tables and understand the flow briefly.

    EBS: In EBS, we have Currency Data stored in the GL_DAILY_RATES table. As the name indicates, GL_DAILY_RATES has data at a daily level. However, in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency, and Rate Type. Below are the main logical steps that we employ in this process:

        0. (Incremental flow only) Clean up the data in W_EXCH_RATE_G: delete the records which have Start Date > minimum conversion date, and update the End Date of the existing records.
        1. Compress the daily data from the GL_DAILY_RATES table into range records. The incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter.
        2. Generate the Previous Rate, Previous Date, and Next Date for each daily record from the OLTP.
        3. Filter out the records whose Conversion Rate is the same as the Previous Rate, or whose Conversion Date lies within a single-day range.
        4. Mark the records as 'Keep' or 'Filter', and also get the final End Date for the single range record (unique combination of From Date, To Date, Rate, and Conversion Date).
        5. Filter the records marked as 'Filter' in the INFA map.

    The above steps load W_EXCH_RATE_GS; step 0 updates/deletes W_EXCH_RATE_G directly. The SIL map then inserts/updates the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G, which stores data at a daily granular level. However, we do need to pivot the data, because data present in multiple rows in the source tables needs to be stored in different columns of the same row in the DW. We use GROUP BY and CASE logic to achieve this.

    Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the cleanup logic mentioned in step 0 above does not use the $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring in all the Exchange Rates in incremental loads as well and do the cleanup; the SIL then takes care of inserts/updates accordingly.

    PeopleSoft: PeopleSoft does not have From Date and To Date explicitly in the source tables. Let's look at an example (note that this is achieved from PS1 onwards only):

        1 Jan 2010 – USD to INR – 45
        31 Jan 2010 – USD to INR – 46

    PSFT stores records in the above fashion. This means that the exchange rate of 45 for USD to INR is applicable from 1 Jan 2010 to 30 Jan 2010, and we need to store the data in that fashion in the DW. Also, PSFT stores the Exchange Rate as RATE_MULT and RATE_DIV, and we need to compute RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate the From Date and To Date while extracting data from the source, and this carries certain assumptions: if a record gets updated or inserted in the source, it will be extracted in the incremental load, and we also extract the preceding and succeeding records (based on dates) of that record. This is required because we need to generate range records, and all three records have ranges that changed. Taking the same example as above, if a new record gets inserted on 15 Jan 2010, the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan, and 31 Jan to the next available date. Even though the 1 Jan and 31 Jan records have not changed, we still extract them because their ranges are affected. Similar logic is used for the Global Exchange Rate extraction: we create the range records and get them into a temporary table, then join to the Day Dimension, create individual records, and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date, and Rate Type.

    Siebel: Siebel facts depend heavily on Global Exchange Rates, and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used for Siebel from the PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges.
Siebel: Siebel facts depend heavily on Global Exchange Rates, and almost none of them use individual exchange rates; in other words, W_GLOBAL_EXCH_RATE_G is the main table used for Siebel from the PS1 release onwards. As of January 2002, the Euro triangulation method for converting between currencies belonging to EMU members is no longer needed for present and future currency exchanges. However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries.

If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Because of this, there are multiple extraction flows in SEBL, i.e. EUR to EMU, EUR to non-EMU, EUR to DMC, and so on, and we load W_EXCH_RATE_G through these multiple flows. This has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, has no such needs. However, SEBL, like PSFT, does not have From Date and To Date columns in the source tables, so we use the same extraction logic described in the PeopleSoft section for SEBL as well.
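As a worked example of triangulation: converting a historical French franc amount to Deutsche marks goes through the Euro using the fixed EMU conversion rates (1 EUR = 6.55957 FRF, 1 EUR = 1.95583 DEM). A minimal sketch follows, assuming a hypothetical historical_txn table of pre-2002 FRF amounts; the Siebel extraction flows implement the same arithmetic.

SELECT amount_frf,
       amount_frf / 6.55957           AS amount_eur,  -- FRF -> EUR (fixed EMU rate)
       amount_frf / 6.55957 * 1.95583 AS amount_dem   -- EUR -> DEM (fixed EMU rate)
FROM   historical_txn;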
What if all 5 Global Currencies configured are the same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in the W_GLOBAL_EXCH_RATE_G table, and the extraction logic pivots data from multiple rows into a single row with five Global Exchange Rates in five columns, using CASE and GROUP BY functions. This approach poses a unique problem when all five chosen Global Currencies are the same. For example, if the user configures all five Global Currencies as 'USD', the extract logic cannot generate a record for From Currency = USD, because not all source systems have a USD->USD conversion record. We have _Generated mappings to take care of this case: they generate a record with Conversion Rate = 1.

Reusable Lookups: Before PS1, we had a mapplet for currency conversions. In PS1, we only have reusable lookups: LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups carry an additional layer of logic so that all the lookup conditions are met when they are used in the various fact mappings. Anyone who wants to do a lookup on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G must use these lookups; a direct join or lookup on the tables might return wrong data.

Changing Currency preferences in the Dashboard: In the 796x series, all amount metrics in OBIA showed the Global1 amount, and customers had to change the metric definitions to show them in another currency preference. Project Analytics has supported currency preferences since the 7.9.6 release, though, and it published a Tech Note for customers of other modules to add toggling between currency preferences to the solution.

List of Currency Preferences: Starting from the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder under <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml. This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", and so on, which customers can also rename to give them more meaningful business names.

There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. In Static mode, all users see the full list as in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the rpd so that the list of currency preferences is based on user roles; BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

Adding Currency to an Amount Field: When the user selects one of the items from the currency prompt, all the amounts on that page are shown in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data is shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields are shown in the currency of the Business Unit selected in the BU filter of the same page. If no particular Business Unit is selected in that filter, and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals will not be available (USD and EUR amounts cannot be added in one field), and depending on the setup (see next paragraph), the user may receive an error.

There are two ways to add the currency field to an amount metric:

1) In the form of a currency code, like USD or EUR. For this, the user needs to add the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code".

2) In the form of a currency symbol ($ for USD, € for EUR, and so on). For this, the user needs to format the amount metrics in the report as a currency column by specifying the currency tag column in the Column Properties option of the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of a metric; in the Data Format tab, select Custom as Treat Number As, and enter the following syntax under Custom Number Format:

[$:currencyTagColumn=Subjectarea.table.column]

where the column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.
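For example, assuming a hypothetical subject area named "Financials - AP Transactions" with a "Currency Tag" folder, the format string would look like:

[$:currencyTagColumn="Financials - AP Transactions"."Currency Tag"."BI Common Currency Code"]

The subject area and folder names vary by module; only the [$:currencyTagColumn=...] wrapper from the syntax above is fixed.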

    Read the article

  • cisco 2911 router vs 2811 router

    - by NickToyota
    Just wondering if anyone has experience with the Cisco 2911 or 2900 series routers. I understand the 2911 is newer and similar to the 2811 but more robust, and the price difference is not that much. I am trying to determine whether I should go with the 29xx or 28xx series for a small-to-medium-sized company. ISP bandwidth load balancing and failover are required. T1 and ADSL lines are already in place.

    Read the article

  • RoboCopy Log File Analysis

    - by BobJim
    Is it possible to analyse the log text file output by RoboCopy and extract the lines labelled "New Dir" and "Extra Dir"? I would like each extracted line to contain all the details the log returns for that "New Dir" or "Extra Dir". The reason for this task is to understand how two folder structures have changed over time: one version has been kept internally at the parent company, and the second has been used by a consultancy. For your information, I am using Windows 7.

    Read the article

  • Ubuntu + Raid on HP proliant DL 160 G6

    - by Adam Matan
    Hi, I'm trying to install Ubuntu 8.04 Server on an HP ProLiant DL160 G6. The HP hardware is certified by Ubuntu for the 9.04 version, which I can't install due to company policy. The problem is that Ubuntu does not recognize the RAID 1+0 disk configured by the BIOS. The RAID creates one ~470GB disk from two 500GB physical disks. Any ideas? Adam

    Read the article

  • "Checksum failed" during Kerberos SSO

    - by Buddy Casino
    This error occurs when a mod_auth_kerb-protected webapp is accessed, and I have no idea what the cause might be. Can anyone give me a hint as to which direction I should look in? Thankful for any help!
    Search Subject for Kerberos V5 ACCEPT cred (HTTP/app.company[email protected], sun.security.jgss.krb5.Krb5AcceptCredential)
    Found key for HTTP/app.company[email protected](23)
    Entered Krb5Context.acceptSecContext with state=STATE_NEW
    >>> EType: sun.security.krb5.internal.crypto.ArcFourHmacEType
    Checksum failed !
    16:36:30,248 TP-Processor31 WARN [site.servlet.KerberosSessionSetupPrivilegedAction] Caught GSS Error
    GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)
        at sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:741)
        at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:323)
        at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:267)
        at org.alfresco.web.site.servlet.KerberosSessionSetupPrivilegedAction.run(KerberosSessionSetupPrivilegedAction.java:95)
        at org.alfresco.web.site.servlet.KerberosSessionSetupPrivilegedAction.run(KerberosSessionSetupPrivilegedAction.java:44)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:337)
        at org.alfresco.web.site.servlet.SSOAuthenticationFilter.doKerberosLogon(SSOAuthenticationFilter.java:994)
        at org.alfresco.web.site.servlet.SSOAuthenticationFilter.doFilter(SSOAuthenticationFilter.java:438)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
        at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:291)
        at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:774)
        at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:703)
        at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:896)
        at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
        at java.lang.Thread.run(Thread.java:662)
    Caused by: KrbException: Checksum failed
        at sun.security.krb5.internal.crypto.ArcFourHmacEType.decrypt(ArcFourHmacEType.java:85)
        at sun.security.krb5.internal.crypto.ArcFourHmacEType.decrypt(ArcFourHmacEType.java:77)
        at sun.security.krb5.EncryptedData.decrypt(EncryptedData.java:168)
        at sun.security.krb5.KrbApReq.authenticate(KrbApReq.java:268)
        at sun.security.krb5.KrbApReq.<init>(KrbApReq.java:134)
        at sun.security.jgss.krb5.InitSecContextToken.<init>(InitSecContextToken.java:79)
        at sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:724)
        ... 24 more
    Caused by: java.security.GeneralSecurityException: Checksum failed
        at sun.security.krb5.internal.crypto.dk.ArcFourCrypto.decrypt(ArcFourCrypto.java:388)
        at sun.security.krb5.internal.crypto.ArcFourHmac.decrypt(ArcFourHmac.java:74)
        at sun.security.krb5.internal.crypto.ArcFourHmacEType.decrypt(ArcFourHmacEType.java:83)
        ... 30 more

    Read the article

  • So you're a new sysadmin...

    - by 80bower
    I've recently taken over management of a Windows 2003 Small Business Server and network for a small, less-than-ten-person company. I have some (antiquated) sysadmin experience, but little experience with Exchange. The documentation of the existing infrastructure leaves much to be desired, and I was wondering if there's any sort of "So you've just become a sysadmin" guide that anyone could recommend.

    Read the article

  • Add custom command line to extended context menu in Windows 7

    - by 280Z28
    I have an application pinned to the taskbar. 90% of the time, I run it with no additional command line options, so I can either click it (if not already open), or right-click the icon and click the application name to open a new instance. I want it so that when I right-click it, two options are listed: the first is the program with no command line options, and the second is the one with a custom command line (that I hard-code). If this is impossible, it would be tolerable to add it to the extended context menu (Shift + right-click the icon), but I would prefer the former.

    Read the article

  • Dovecot vs Courier vs Cyrus

    - by wag2639
    What is the best choice for a personal/SMB mail server running on an Ubuntu Server (8.04+)? I want to set up my own mail server at home to evaluate some options before I make a recommendation to my company. Which is the most secure, efficient, and reliable? Also, which is easiest to integrate with an LDAP and calendar solution?

    Read the article

  • Sending ctrl-backslash from Windows to Linux using Synergy

    - by jrbushell
    I'm using Synergy+ to share the keyboard and mouse between a Windows and a Linux (Red Hat) PC. The Windows box is the server, Linux is the client, and both are running version 1.3.4. My Windows box is set up for an English UK keyboard. In a Linux terminal window, Ctrl+\ (backslash) sends a quit signal to the currently running program - useful to kill a Python script that's run amok, for example. When I try to do this via Synergy, Ctrl+- (minus) is sent instead, which has the undesired effect of resizing my terminal window :( Backslash on its own and Shift+backslash (pipe) both work fine. Any ideas what's happening?

    Read the article

  • Safari can’t establish a secure connection to the server

    - by Haris
    I am using Mac OS X 10.5.8 behind a company firewall, with proxy settings and a username / password through which I can connect to the internet. The connection works - I am posting this question through it - but if I try to open Facebook or Gmail the following message appears: Safari can’t open the page “https://www.google.com/accounts/ServiceLogin?[..]” because Safari can’t establish a secure connection to the server “www.google.com” What could be wrong?

    Read the article

  • SBS domain name choice

    - by sandymac
    We are about to set up SBS 2011 at my small company (< 10 users). My collaborator wants to name the SBS domain "example.local". I'm of the opinion we should name the SBS domain "corp.example.com" and set up DNS so that the "corp" record is an NS record pointing to the SBS server's private IP. FYI: "example.com" isn't the real domain name, and while the website is hosted outside our office, email will be stored on the SBS server in our office after passing through a spam-filtering smart host hosted elsewhere too.

    Read the article

< Previous Page | 155 156 157 158 159 160 161 162 163 164 165 166  | Next Page >