Search Results



  • Merging SQL queries to get different results by date

    - by pedalpete
    I am trying to build a 'recent events' feed and can't seem to get my query correct, or figure out how to merge the results from two queries and sort them by date. One table holds games, and another table holds the actions taken in those games. I am trying to get the recent events to show users 1) the actions taken on games that are publicly visible (published) and 2) when a new game is created and published.

    My actions table has actionId, gameid, userid, actiontype, lastupdate. My games table has gameid, startDate, createdby, published, lastupdate. I currently have a query like this (simplified for easy understanding, I hope):

    ```sql
    SELECT actionId, actions.gameid, userid, actiontype, actions.lastupdate
    FROM actions
    JOIN (
        SELECT games.gameid, startDate, createdby, published, games.lastupdate
        FROM games
        WHERE published = 1 AND lastupdate > today - 2
    ) publishedGames ON actions.gameid = games.gameid
    WHERE actions.type IN (0,4,5,6,7)
      AND actions.lastupdate > games.lastupdate AND published = 1
      OR games.lastupdate > today - 2 AND published = 1
    ```

    This query looks for actions from published games where the action took place after the game was published. That pretty much takes care of the first thing that needs to be shown. However, I also need to get the results of

    ```sql
    SELECT games.gameid, startDate, createdby, published, games.lastupdate
    FROM games
    WHERE published = 1 AND startDate > today - 2
    ```

    so I can include in the actions list when a new game has been published. When I run the query as I've got it written, I get all the actionids and their gameids, but I don't get a row showing the gameid when it was published. I understand that I may need to run two separate queries and then somehow merge the results afterward with PHP, but I'm completely lost on where to start with that as well.
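
    One way to avoid merging in PHP is a UNION ALL of the two result sets, sorted in an outer query. A minimal sketch, assuming the table and column names above; the date cutoff stands in for the asker's "today-2" placeholder and is written here as MySQL's CURDATE() - INTERVAL 2 DAY, so adjust it to your dialect:

    ```sql
    SELECT eventdate, gameid, detail FROM (
        -- actions taken on published games after publication
        SELECT a.lastupdate AS eventdate, a.gameid, a.actiontype AS detail
        FROM actions a
        JOIN games g ON a.gameid = g.gameid
        WHERE g.published = 1
          AND a.actiontype IN (0,4,5,6,7)
          AND a.lastupdate > g.lastupdate

        UNION ALL

        -- newly published games, with no action to report
        SELECT g.lastupdate AS eventdate, g.gameid, NULL AS detail
        FROM games g
        WHERE g.published = 1
          AND g.lastupdate > CURDATE() - INTERVAL 2 DAY
    ) recent
    ORDER BY eventdate DESC;
    ```

    Because both branches produce the same three columns, the published-game rows appear in the same feed as the action rows, already interleaved by date.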


  • How to sort a date array in PHP

    - by Click Upvote
    I have an array in this format:

    ```
    Array
    (
        [0] => Array ( [28th February, 2009] => 'bla' )
        [1] => Array ( [19th March, 2009] => 'bla' )
        [2] => Array ( [5th April, 2009] => 'bla' )
        [3] => Array ( [19th April, 2009] => 'bla' )
        [4] => Array ( [2nd May, 2009] => 'bla' )
    )
    ```

    I want to sort it in ascending order of the dates (based on the month, day and year). What's the best way to do that? Originally the emails are fetched in the MySQL date format, so it is possible for me to get the array in this state:

    ```
    Array
    [
        ['2008-02-28'] = 'some text',
        ['2008-03-06'] = 'some text'
    ]
    ```

    Perhaps when it's in this format I can loop through the entries, remove all the '-' (hyphen) marks so they are left as integers, sort them using array_sort(), and loop through them yet again? I would prefer another way, as I'd be doing 3 loops with this per user. Thanks.

    Edit: I could also do this:

    ```php
    $array[$index] = array('human' => '28 Feb, 2009', 'db' => '20080228', 'description' => 'Some text here');
    ```

    But using this, would there be any way to sort the array based on the 'db' element alone?

    Edit 2: Updated initial var_dump
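
    One approach that skips intermediate formats entirely is usort() with strtotime(), which understands ordinal dates like '28th February, 2009' (worth verifying against your exact strings). A minimal sketch, assuming each element is a one-entry array keyed by the date string, with a named callback so it also runs on pre-5.3 PHP:

    ```php
    <?php
    function cmp_by_date($a, $b) {
        // key() returns the first (here, only) key of each one-entry array
        return strtotime(key($a)) - strtotime(key($b));
    }

    $dates = array(
        array('28th February, 2009' => 'bla'),
        array('19th March, 2009'    => 'bla'),
        array('5th April, 2009'     => 'bla'),
    );

    usort($dates, 'cmp_by_date');
    ```

    For the 'YYYY-MM-DD'-keyed variant, a plain ksort() already sorts chronologically, since that format orders lexicographically; and for the Edit structure, the same usort() idea works by comparing $a['db'] with $b['db'].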


  • bash: listing files in date order, with spaces in filenames

    - by Jason Judge
    I am starting with a file containing a list of hundreds of files (full paths) in a random order, and I would like to list the details of the ten latest files in that list. This is my naive attempt:

    ```bash
    ls -las -t `cat list-of-files.txt` | head -10
    ```

    That works so long as none of the files have spaces in their names, but fails if they do, as those names are split up at the spaces and treated as separate files. I have tried quoting the files in the original list, but the command substitution still splits the names at the spaces. The only way I can think of doing this is to ls each file individually (using xargs perhaps) and create an intermediate file of listings with the date in a sortable order as the first field of each line, then sort that intermediate file. However, that feels cumbersome and inefficient (hundreds of ls commands rather than one or two), though it may be the only way. Is there any way to pass ls a list of files to process whose names could contain spaces? It seems like it should be simple, but I'm stumped.
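
    One way to keep ls to a single invocation is to hand it the list with a delimiter other than whitespace. A minimal sketch, assuming GNU xargs (for the -d option) and one path per line in list-of-files.txt:

    ```bash
    # -d '\n' makes each input line one argument, so embedded spaces survive;
    # ls -lt then sorts by modification time, newest first
    xargs -d '\n' ls -lt < list-of-files.txt | head -10
    ```

    One caveat: if the list exceeds a single xargs batch, each batch is sorted separately; for hundreds of files a single batch is typical.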


  • Sort by date in XSL

    - by bethhilson
    I am trying to sort XML output by date. Here is my XSL, a feed template from http://www.dnncreative.com (only fragments of the stylesheet survive here):

    ```xml
    <!-- Test to limit number of items displayed. Here only 5 items will be transformed -->
    <br></br>
    <!-- to open links in a new window, change target="_main" to target="_blank" -->
    <strong><a href="{link}" target="_blank"><xsl:value-of select="title"/></a></strong>
    <br>
    <!-- <xsl:value-of select="pubDate"/> -->
    </br>
    <!-- only display 100 characters of the description, and allow html -->
    <xsl:value-of disable-output-escaping="yes" select="description"/>
    ```

    I am trying to sort descending on the entereddate element in my XML. A sample item (its element tags were lost; only the values remain) contains: Media Director, 4/2/2009, 01646359, Cleveland, OH, United States of America, $0.00 - $0.00 / $0.00/hr - $0.00/hr, and the job URL http://employment.topechelon.com/web77391/jobseeker/sSetup.asp?runsearch=1&spJobAdId=01646359 (listed twice). Any help would be appreciated! Thanks, Beth Hilson


  • How to sort an XML file by date in XSLT

    - by AdRock
    I am trying to sort by date and get an error message saying the stylesheet can't be loaded. I found an answer on how others have done this, but it doesn't work for me. Here is where it is supposed to sort; the commented-out line is where the sort should occur:

    ```xml
    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    <xsl:template name="hoo" match="/">
      <html>
      <head>
        <title>Registered Festival Organisers and Festivals</title>
        <link rel="stylesheet" type="text/css" href="userfestival.css" />
      </head>
      <body>
        <h1>Registered Festival Organisers and Festivals</h1>
        <xsl:for-each select="folktask/member">
          <xsl:if test="user/account/userlevel='3'">
            <!--<xsl:sort select="concat(substring(festival/event/datefrom,1,4),substring(festival/event/datefrom, 6,2),substring(festival/event/datefrom, 9,2))" data-type="number" order="ascending"/>-->
    ```

    Sample node from the XML:

    ```xml
    <festival id="1">
      <event>
        <eventname>Oxford Folk Festival</eventname>
        <url>http://www.oxfordfolkfestival.com/</url>
        <datefrom>2010-04-07</datefrom>
        <dateto>2010-04-09</dateto>
        <location>Oxford</location>
        <eventpostcode>OX1 9BE</eventpostcode>
        <coords>
          <lat>51.735640</lat>
          <lng>-1.276136</lng>
        </coords>
      </event>
    </festival>
    ```
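
    The likely cause of the load error: in XSLT 1.0, xsl:sort must appear as the first child of xsl:for-each (or of xsl:apply-templates), so uncommenting it inside xsl:if makes the stylesheet statically invalid. A minimal sketch of the fix, assuming the folktask/member structure above: hoist the sort and keep the userlevel test inside the loop:

    ```xml
    <xsl:for-each select="folktask/member">
      <xsl:sort select="concat(substring(festival/event/datefrom, 1, 4),
                               substring(festival/event/datefrom, 6, 2),
                               substring(festival/event/datefrom, 9, 2))"
                data-type="number" order="ascending"/>
      <xsl:if test="user/account/userlevel='3'">
        <!-- output for each matching member goes here -->
      </xsl:if>
    </xsl:for-each>
    ```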


  • Django aggregation on a date range

    - by klaut
    Hi all, I have been lurking and learning in here for a while. Now I have a problem where somehow I cannot see an easy solution. In order to learn Django I am building an app that basically keeps track of booked items. What I would like to do is show how many days per month, for a selected year, one item has been booked. I have the following models:

    ```
    Asset(Model)

    BookedAsset(Model):
        asset = models.ForeignKey(Asset)
        startdate = models.DateField()
        enddate = models.DateField()
    ```

    So having the following entries:

    ```
    asset 1, 2010-02-11, 2010-02-13
    asset 2, 2010-03-12, 2010-03-14
    asset 1, 2010-04-30, 2010-05-01
    ```

    I would like to get returned the following:

    ```
    asset 1          asset 2
    -------          -------
    Jan = 0          Jan = 0
    Feb = 2          Feb = 0
    Mar = 0          Mar = 2
    Apr = 1          Apr = 0
    May = 1          May = 0
    Jun = 0          Jun = 0
    Jul = 0          Jul = 0
    Aug = 0          Aug = 0
    Sep = 0          Sep = 0
    Oct = 0          Oct = 0
    Nov = 0          Nov = 0
    Dec = 0          Dec = 0
    ```

    I know I need to first get the number of days in each date range (keeping track of when they fall out of the current month and into the next), and then aggregate on the number of days. I am just stuck on how to do it elegantly in Django. Any help (or a hint in the right direction) is greatly appreciated.
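
    Counting in Python after fetching the rows sidesteps ORM aggregation entirely: walk each booking one day at a time and credit the month the day falls in. A minimal sketch, assuming the BookedAsset model above; note the expected output in the question mixes conventions (Feb = 2 suggests end-exclusive "nights", May = 1 suggests inclusive ends), so pick the loop bound you actually want:

    ```python
    from datetime import timedelta

    def booked_days_per_month(asset, year):
        """Return {month: booked days} for one asset in one year,
        counting the end date as exclusive (a 'nights' count)."""
        counts = dict.fromkeys(range(1, 13), 0)
        for booking in BookedAsset.objects.filter(asset=asset):
            day = booking.startdate
            while day < booking.enddate:   # flip to <= for inclusive ends
                if day.year == year:
                    counts[day.month] += 1
                day += timedelta(days=1)
        return counts
    ```

    For bookings spanning at most a few days each, the per-day loop is cheap and the month-boundary bookkeeping falls out for free.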


  • Aggregating a list of dates to start and end date

    - by Joe Mako
    I have a list of dates and IDs, and I would like to roll them up into periods of consecutive dates, within each ID. For a table with the columns "testid" and "pulldate" in a table called "data":

    ```
    | A79 | 2010-06-02 |
    | A79 | 2010-06-03 |
    | A79 | 2010-06-04 |
    | B72 | 2010-04-22 |
    | B72 | 2010-06-03 |
    | B72 | 2010-06-04 |
    | C94 | 2010-04-09 |
    | C94 | 2010-04-10 |
    | C94 | 2010-04-11 |
    | C94 | 2010-04-12 |
    | C94 | 2010-04-13 |
    | C94 | 2010-04-14 |
    | C94 | 2010-06-02 |
    | C94 | 2010-06-03 |
    | C94 | 2010-06-04 |
    ```

    I want to generate a table with the columns "testid", "group", "start_date", "end_date":

    ```
    | A79 | 1 | 2010-06-02 | 2010-06-04 |
    | B72 | 2 | 2010-04-22 | 2010-04-22 |
    | B72 | 3 | 2010-06-03 | 2010-06-04 |
    | C94 | 4 | 2010-04-09 | 2010-04-14 |
    | C94 | 5 | 2010-06-02 | 2010-06-04 |
    ```

    This is the code I came up with:

    ```sql
    SELECT t2.testid, t2.group,
           MIN(t2.pulldate) AS start_date,
           MAX(t2.pulldate) AS end_date
    FROM (SELECT t1.pulldate, t1.testid,
                 SUM(t1.check) OVER (ORDER BY t1.testid, t1.pulldate) AS group
          FROM (SELECT data.pulldate, data.testid,
                       CASE WHEN data.testid = LAG(data.testid,1) OVER (ORDER BY data.testid, data.pulldate)
                             AND data.pulldate = date (LAG(data.pulldate,1) OVER (PARTITION BY data.testid ORDER BY data.pulldate)) + integer '1'
                            THEN 0 ELSE 1 END AS check
                FROM data
                ORDER BY data.testid, data.pulldate) AS t1) AS t2
    GROUP BY t2.testid, t2.group
    ORDER BY t2.group;
    ```

    I use the LAG window function to compare each row to the previous, emitting a 1 whenever a new group must start; I then take a running sum of that column and aggregate over the combinations of "group" and "testid". Is there a better way to accomplish my goal, or does this operation have a name? I am using PostgreSQL 8.4.
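
    This is usually called a "gaps and islands" problem, and the standard trick is a touch shorter than the LAG/SUM version: subtracting a per-testid row number from the date gives a constant anchor for each run of consecutive days. A minimal sketch on the same table, using PostgreSQL 8.4's window functions:

    ```sql
    SELECT testid,
           MIN(pulldate) AS start_date,
           MAX(pulldate) AS end_date
    FROM (
        SELECT testid, pulldate,
               -- consecutive dates within a testid share the same anchor
               pulldate - (ROW_NUMBER() OVER (PARTITION BY testid
                                              ORDER BY pulldate))::integer AS grp
        FROM data
    ) t
    GROUP BY testid, grp
    ORDER BY testid, start_date;
    ```

    Naming the derived column grp also sidesteps quoting the reserved word "group".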


  • Sort string array by analysing date details in those strings

    - by Jason Evans
    I have a requirement for the project I'm working on right now which is proving a bit tricky for me. Basically I have to sort an array of items based on the Text property of those items. Here are my items:

    ```javascript
    var answers = [],
        answer1 = { Id: 1, Text: '3-4 weeks ago' },
        answer2 = { Id: 2, Text: '1-2 weeks ago' },
        answer3 = { Id: 3, Text: '7-8 weeks ago' },
        answer4 = { Id: 4, Text: '5-6 weeks ago' },
        answer5 = { Id: 5, Text: '1-2 days ago' },
        answer6 = { Id: 6, Text: 'More than 1 month ago' };

    answers.push(answer1);
    answers.push(answer2);
    answers.push(answer3);
    answers.push(answer4);
    answers.push(answer5);
    answers.push(answer6);
    ```

    I need to analyse the Text property of each item so that, after the sorting, the array looks like this:

    ```javascript
    answers[0] = { Id: 6, Text: 'More than 1 month ago' }
    answers[1] = { Id: 3, Text: '7-8 weeks ago' }
    answers[2] = { Id: 4, Text: '5-6 weeks ago' }
    answers[3] = { Id: 1, Text: '3-4 weeks ago' }
    answers[4] = { Id: 2, Text: '1-2 weeks ago' }
    answers[5] = { Id: 5, Text: '1-2 days ago' }
    ```

    The logic is that the further away the date, the higher its priority, so it should appear first in the array; "1-2 days" is less of a priority than "7-8 weeks". So I need to extract the number values, then the units (e.g. days, weeks), and somehow sort the array based on those details. Quite honestly I'm finding it very difficult to come up with a solution, and I'd appreciate any help.
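
    One workable scheme is to reduce each Text to an approximate age in days and sort descending on that. A minimal sketch, assuming the phrasing stays within the patterns shown ('N-M days/weeks ago' and 'More than 1 month ago'):

    ```javascript
    function ageInDays(text) {
        // "More than ..." items outrank everything else
        if (/more than/i.test(text)) return Number.MAX_VALUE;
        var m = text.match(/(\d+)-(\d+)\s+(day|week|month)s?/i);
        if (!m) return 0;
        var upper = parseInt(m[2], 10),
            unit = { day: 1, week: 7, month: 30 }[m[3].toLowerCase()];
        return upper * unit;
    }

    // furthest-away dates first
    answers.sort(function (a, b) {
        return ageInDays(b.Text) - ageInDays(a.Text);
    });
    ```

    On the sample data this yields exactly the expected order: 'More than 1 month ago', then 56, 42, 28, 14 and 2 approximate days.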


  • make target is never considered up to date

    - by Michael
    Cygwin make always rebuilds the $(chrome_jar_file) target, even after the first successful build, so I never get the "up to date" message and always see the commands for $(chrome_jar_file) executing. However, this happens only on Windows 7; on Windows XP, once it is built and intact, there are no more rebuilds. I narrowed the issue down to one prerequisite: $(jar_target_dir). Here is the relevant part of the makefile:

    ```make
    # The location where the JAR file will be created.
    jar_target_dir := $(build_dir)/chrome

    # The main chrome JAR file.
    chrome_jar_file := $(jar_target_dir)/$(extension_name).jar

    # The root of the JAR sources.
    jar_source_root := chrome

    # The sources for the JAR file.
    jar_sources := bla    # ... some files, doesn't matter

    jar_sources_no_dir := $(subst $(jar_source_root)/,,$(jar_sources))

    $(chrome_jar_file): $(jar_sources) $(jar_target_dir)
    	@echo "Creating chrome JAR file."
    	@cd $(jar_source_root); $(ZIP) ../$(chrome_jar_file) $(jar_sources_no_dir)
    	@echo "Creating chrome JAR file. Done!"

    $(jar_target_dir): $(build_dir)
    	echo "Creating jar target dir..."
    	if [ ! -x $(jar_target_dir) ]; \
    	then \
    	    mkdir $(jar_target_dir); \
    	fi

    $(build_dir):
    	@if [ ! -x $(build_dir) ]; \
    	then \
    	    mkdir $(build_dir); \
    	fi
    ```

    So if I just remove $(jar_target_dir) from the $(chrome_jar_file) rule, it works fine.
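
    A directory's timestamp changes whenever an entry inside it is created, removed or renamed, so a directory used as a normal prerequisite keeps making the JAR look out of date. This is the textbook case for a GNU make order-only prerequisite. A minimal sketch against the variables above:

    ```make
    # everything after the | must exist before the recipe runs,
    # but its timestamp never makes the target out of date
    $(chrome_jar_file): $(jar_sources) | $(jar_target_dir)
    	@echo "Creating chrome JAR file."
    	@cd $(jar_source_root); $(ZIP) ../$(chrome_jar_file) $(jar_sources_no_dir)

    $(jar_target_dir):
    	mkdir -p $(jar_target_dir)
    ```

    mkdir -p also removes the need for the [ ! -x ... ] existence checks, since it succeeds whether or not the directory already exists.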


  • Grouping by date, with 0 when count() yields no lines

    - by SCO
    I'm using PostgreSQL 9 and I'm fighting with counting and grouping when no rows are counted. Let's assume the following schema:

    ```sql
    create table views (
        date_event timestamp with time zone,
        event_id integer
    );
    ```

    Let's imagine the following content:

    ```
    2012-01-01 00:00:05     2
    2012-01-01 01:00:05     5
    2012-01-01 03:00:05     8
    2012-01-01 03:00:15    20
    ```

    I want to group by hour and count the number of rows, retrieving the following:

    ```
    2012-01-01 00:00:00    1
    2012-01-01 01:00:00    1
    2012-01-01 02:00:00    0
    2012-01-01 03:00:00    2
    2012-01-01 04:00:00    0
    2012-01-01 05:00:00    0
    .
    .
    2012-01-07 23:00:00    0
    ```

    I mean that for each time-range slot, I count the number of rows in my table whose date corresponds; otherwise, I return a row with a count of zero. The following will definitely not work, since it yields only the slots that actually have counted rows:

    ```sql
    SELECT extract(hour from date_event), count(*)
    FROM views
    WHERE date_event > '2012-01-01' AND date_event < '2012-01-07'
    GROUP BY extract(hour from date_event);
    ```

    Please note I might also need to group by minute, or by hour, or by day, or by month, or by year (multiple queries are fine, of course). I can only use plain old SQL, and since my views table can be very big (100M records), I try to keep performance in mind. How can this be achieved? Thank you!
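
    generate_series() can manufacture the empty slots, with a LEFT JOIN supplying the zero counts. A minimal sketch for the by-hour case against the views table above; changing 'hour' (and the step interval) to 'minute', 'day', 'month' or 'year' changes the granularity:

    ```sql
    SELECT slot, count(v.event_id) AS events
    FROM generate_series('2012-01-01'::timestamp,
                         '2012-01-07'::timestamp,
                         '1 hour') AS slot
    LEFT JOIN views v
           ON date_trunc('hour', v.date_event) = slot
    GROUP BY slot
    ORDER BY slot;
    ```

    count(v.event_id) counts only matched rows, so empty slots report 0 rather than 1. On a 100M-row table, an index on date_trunc('hour', date_event), or joining on a range condition instead, is worth considering.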


  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami

    A typical data warehouse contains Star and/or Snowflake schemas, made up of Dimensions and Facts. The facts store various numerical information, including amounts, for example Order Amount or Invoice Amount. With the truly global nature of business nowadays, end users want to view reports in their own currency or in a global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates, either by pre-storing them or by converting on the fly while displaying the reports to the users.

    Source Systems

    OBIA caters to various source systems like EBS, PSFT, Siebel, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting them to the OLTP users. For example, EBS stores conversion rates between currencies which can be classified by rate type, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS and Fusion store the conversion rates for each day, whereas PSFT and Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor from which the rate must be calculated, whereas the other source systems store the currency exchange rate directly.

    OBIA Design

    Consolidating data from various disparate source systems poses the challenge of conforming the various currencies, rate types, exchange rates etc., and of designing the best way to present the amounts to users without affecting performance. When consolidating the data for reporting in OBIA, we have designed mechanisms in the Common Dimension to allow users to report in their required currencies.

    OBIA Facts store amounts in various currencies:

    Document Currency: the currency of the actual transaction. For a multinational company, this can be in various currencies.

    Local Currency: the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company.

    Global Currencies: OBIA provides five Global Currencies. Three are used across all modules; the last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. For example, a US-based multinational would want to see the reports in USD, so the company would choose USD as one of the global currencies. OBIA allows users to define up to five global currencies during the initial implementation.

    The term Currency Preference designates the set of values Document Currency, Local Currency, Global Currency 1, Global Currency 2 and Global Currency 3, which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics.

    When choosing Local Currency as the currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt, so it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section Toggling Currency Preferences in the Dashboard.

    Design Logic

    When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount into the target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts. If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the local currency code and the local exchange rate; the load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates.

    The lookup of exchange rates is done via the Exchange Rate Dimension, provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables, W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G, provide the lookups and conversions between currencies; the data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. W_GLOBAL_EXCH_RATE_G, on the other hand, stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide to use one or both tables for the conversion.

    Currency design in OBIA also taps into the MLS and Domain architecture, allowing users to map the currencies to a universal Domain during implementation. This is especially important for companies deploying and using OBIA with multiple source adapters.

    Some Gotchas to Look For

    It is necessary to think through the currencies during the initial implementation.

    1) Identify the various types of currencies used by your business. Understand what your Local (or Base) and Document currencies will be. Identify the various global currencies in which your users will want to view reports; this will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time.

    2) If you have multiple source systems, make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done.

    Technical Section

    This section briefly describes the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, two main tables store the currency rate information, as explained in the previous sections: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G. W_EXCH_RATE_G stores all the currency conversions present in the source system, captured for a date range. W_GLOBAL_EXCH_RATE_G stores Global Currency conversions at a daily level; the challenge here is to store all five Global Currency exchange rates in a single record for each From Currency. Let's voyage further into the source-system extraction logic for each of these tables and understand the flow briefly.

    EBS: In EBS, currency data is stored in the GL_DAILY_RATES table. As the name indicates, GL_DAILY_RATES has data at a daily level. In our warehouse, however, we store the data with a date range and insert a new range record only when the exchange rate changes for a particular From Currency, To Currency and Rate Type. The main logical steps employed in this process are:

    0. (Incremental flow only) Clean up the data in W_EXCH_RATE_G: delete the records whose Start Date is greater than the minimum conversion date, and update the End Date of the existing records.
    1. Compress the daily data from GL_DAILY_RATES into range records; the incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter.
    2. Generate Previous Rate, Previous Date and Next Date for each daily record from the OLTP.
    3. Filter out the records whose conversion rate is the same as the previous rate, or whose conversion date lies within a single-day range.
    4. Mark the records as 'Keep' or 'Filter' and derive the final End Date for each single range record (the unique combination of From Date, To Date, Rate and Conversion Date).
    5. Filter the records marked as 'Filter' in the INFA map.

    The above steps load W_EXCH_RATE_GS; step 0 updates/deletes W_EXCH_RATE_G directly, and the SIL map then inserts/updates the GS data into W_EXCH_RATE_G. Together these steps convert the daily records in GL_DAILY_RATES into range records in W_EXCH_RATE_G. No such special logic is needed for loading W_GLOBAL_EXCH_RATE_G, which stores data at a daily granular level; however, the data must be pivoted, because values present in multiple rows in the source tables need to be stored in different columns of the same row in the DW. We use GROUP BY and CASE logic to achieve this.

    Fusion: Fusion has extraction logic very similar to EBS's. The only difference is that the cleanup logic mentioned in step 0 above does not use the $$XRATE_UPD_NUM_DAY parameter: in Fusion we bring in all the exchange rates in incremental loads as well and do the cleanup, and the SIL then takes care of inserts/updates accordingly.

    PeopleSoft: PeopleSoft does not have From Date and To Date explicitly in the source tables. (Please note that the following is achieved from PS1 onwards only.) Consider an example:

    1 Jan 2010 - USD to INR - 45
    31 Jan 2010 - USD to INR - 46

    PSFT stores records in the above fashion, which means that the exchange rate of 45 for USD to INR is applicable from 1 Jan 2010 to 30 Jan 2010, and we need to store the data in this fashion in the DW. PSFT also stores the exchange rate as RATE_MULT and RATE_DIV, so we need to compute RATE_MULT/RATE_DIV to get the correct exchange rate. We generate From Date and To Date while extracting data from the source, under the following assumption: if a record gets updated or inserted in the source, it is extracted in incremental, and if this record falls between other dates, the preceding and succeeding records (based on dates) are extracted as well, because a range record must be generated and three records' ranges have changed. Taking the same example as above, if a new record is inserted on 15 Jan 2010, the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan, and 31 Jan to the next available date. Even though the 1 Jan and 31 Jan records have not changed, we still extract them because their ranges are affected.

    Similar logic is used for the Global Exchange Rate extraction: we create the range records in a temporary table, then join to the Day Dimension to create individual records, and pivot the data to get the five Global Exchange Rates for each From Currency, Date and Rate Type.

    Siebel: Siebel facts depend heavily on Global Exchange Rates, and almost none of them really use individual exchange rates; in other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from the PS1 release onwards. As of January 2002, the Euro triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries: if a country is a member of the European Monetary Union (EMU), its currency should be converted to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Because of this there are multiple extraction flows in SEBL, i.e. EUR to EMU, EUR to Non-EMU, EUR to DMC and so on, and we load W_EXCH_RATE_G through multiple flows with these data. This has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, has no such needs. However, SEBL does not have From Date and To Date columns in the source tables, similar to PSFT, so we use the same extraction logic explained in the PSFT section for SEBL as well.

    What if all 5 Global Currencies configured are the same?

    As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table pivots data from multiple rows into a single row with five Global Exchange Rates in five columns, using CASE and GROUP BY functions as mentioned above. This approach poses a unique problem when all five chosen Global Currencies are the same. For example, if the user configures all five Global Currencies as 'USD', the extract logic will not be able to generate a record for From Currency = USD, because not all source systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case: they generate a record with Conversion Rate = 1.

    Reusable Lookups

    Before PS1, we had a mapplet for currency conversions. In PS1, we only have reusable lookups: LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in the various fact mappings. Any user who wants to do a lookup on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these lookups; a direct join or lookup on the tables might return wrong data.

    Changing Currency Preferences in the Dashboard

    In the 796x series, all amount metrics in OBIA showed the Global1 amount, and customers needed to change the metric definitions to show them in another currency preference. Project Analytics has supported currency preferences since the 7.9.6 release, though, and published a tech note for customers of other modules to add toggling between currency preferences to the solution.

    List of Currency Preferences

    Starting from the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder under <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml. This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", ..., which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic.

    In Static mode, all users see the full list as in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the RPD, so the list of currency preferences will be based on the user's roles; BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

    Adding Currency to an Amount Field

    When the user selects one of the items from the currency prompt, all the amounts on that page will show in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data will be shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields will show in the currency of the Business Unit selected in the BU filter of the same page. If no particular Business Unit is selected in that filter, and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals will not be available (USD and EUR amounts cannot be added in one field), and depending on the setup (see the next paragraph) the user may receive an error.

    There are two ways to add the currency field to an amount metric:

    In the form of a currency code, like USD or EUR: for this the user adds the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code".

    In the form of a currency symbol ($ for USD, € for EUR, ...): for this the user formats the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option of the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of a metric; in the Data Format tab, select Custom as Treat Number As, and enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column], where Column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.
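
    As a rough illustration of the GROUP BY/CASE pivot described in the Technical Section, here is a minimal sketch; the staging table rates(from_curr, to_curr, rate_date, rate) and the hard-coded global currency codes are illustrative stand-ins, not the actual OBIA schema:

    ```sql
    SELECT from_curr,
           rate_date,
           MAX(CASE WHEN to_curr = 'USD' THEN rate END) AS global1_rate,
           MAX(CASE WHEN to_curr = 'EUR' THEN rate END) AS global2_rate,
           MAX(CASE WHEN to_curr = 'GBP' THEN rate END) AS global3_rate,
           MAX(CASE WHEN to_curr = 'JPY' THEN rate END) AS global4_rate,
           MAX(CASE WHEN to_curr = 'AUD' THEN rate END) AS global5_rate
    FROM rates
    GROUP BY from_curr, rate_date;
    ```

    Each CASE isolates one target currency, and the GROUP BY collapses the per-currency rows into a single row per From Currency and day, which is the shape W_GLOBAL_EXCH_RATE_G requires.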


  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates the memory requirement for a query at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different sets of parameter values / predicates, and hence a different amount of memory can be estimated for each set. Common memory-allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Sort with examples; it is recommended to read Plan Caching and Query Memory Part II after this article, which covers Hash Match operations.

    When the plan is cached by using a stored procedure or another plan caching mechanism like sp_executesql or a prepared statement, SQL Server estimates the memory requirement based on the first set of execution parameters. When the same stored procedure is later called with a different set of parameter values, the same amount of memory is used to execute it. This can lead to underestimation or overestimation of memory on plan reuse. Overestimation might not be a noticeable issue for Sort operations, but underestimation will lead to spills to tempdb, resulting in poor performance.

    It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation affects the memory available to other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance. To read additional articles I wrote, click here.

    In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of spilling to tempdb, unless the memory requirement of a stored procedure does not change significantly across predicates.

    The best way to learn is to practice. To create the tables below and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Enough theory; let's see an example where we initially sort 1 month of data and then use the same stored procedure to sort 6 months of data. Let's create a stored procedure that sorts customers by name within a certain date range:

    ```sql
    --Example provided by www.sqlworkshops.com
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime
    as
    begin
        declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
        select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate
        from Customers c
        where c.CreationDate between @CreationDateFrom and @CreationDateTo
        order by c.CustomerName
        option (maxdop 1)
    end
    go
    ```

    Let's execute the stored procedure initially with a 1 month date range:

    ```sql
    set statistics time on
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-31'
    go
    ```

    The stored procedure took 48 ms to complete. It was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is close to the actual number of rows, 43200, so the memory estimation is fine, and there were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range:

    ```sql
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go
    ```

    The stored procedure took 679 ms to complete. It was again granted 6656 KB based on 43199.9 rows being estimated. This time the estimated number of rows, 43199.9, is far from the actual number of rows, 259200, because the estimation is based on the first set of parameter values supplied to the stored procedure, which covered 1 month in our case. This underestimation leads to the sort spilling to tempdb, resulting in poor performance, and there were Sort Warnings in SQL Profiler. To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts.

    Let's recompile the stored procedure and then execute it first with the 6 month date range. (In a production instance it is not advisable to use sp_recompile; one should use DBCC FREEPROCCACHE (plan_handle) instead, due to locking issues involved with sp_recompile; refer to our webcasts for further details.)

    ```sql
    exec sp_recompile CustomersByCreationDate
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go
    ```

    Now the stored procedure took only 294 ms instead of 679 ms. It was granted 26832 KB of memory, and the estimated number of rows, 259200, matches the actual number of rows, 259200. The better performance is due to the better memory estimate, which avoids the sort spilling to tempdb, and there were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with the 1 month date range:

    ```sql
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-31'
    go
    ```

    The stored procedure took 49 ms to complete, similar to our very first execution. This time, though, it was granted more memory (26832 KB) than necessary (6656 KB), based on the 6 month estimate (259200 rows) instead of the 1 month estimate (43199.9 rows), because the estimation is based on the first set of parameter values supplied after the recompile, which was 6 months in this case. This overestimation did not affect performance here, but it might affect the performance of other concurrent queries requiring memory, so overestimation is not recommended either. Overestimation can actually hurt Hash Match operations; refer to Plan Caching and Query Memory Part II for further details.

    Let's recompile the stored procedure and execute it first with a 2 day date range:

    ```sql
    exec sp_recompile CustomersByCreationDate
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-02'
    go
    ```

    The stored procedure took 1 ms; it was granted 1024 KB based on 1440 rows being estimated, and there were no Sort Warnings in SQL Profiler. Now let's execute the stored procedure with the 6 month date range:

    ```sql
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go
    ```

    The stored procedure took 955 ms to complete, way higher than the 679 ms or 294 ms we noticed before. It was granted 1024 KB based on 1440 rows being estimated, but we noticed earlier that this stored procedure with a 6 month date range needs 26832 KB of memory to execute optimally without spilling to tempdb. This is clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler, and unlike before this was a Multiple pass sort instead of a Single pass sort; this occurs when the granted memory is far too low.

    Intermediate Summary: This issue can be avoided by not caching the plan for memory-allocating queries. The other possibility is to use the recompile hint, or the optimize for hint to allocate memory for a predefined date range.

    Let's recreate the stored procedure with the recompile hint:

    ```sql
    --Example provided by www.sqlworkshops.com
    drop proc CustomersByCreationDate
    go
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime
    as
    begin
        declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
        select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate
        from Customers c
        where c.CreationDate between @CreationDateFrom and @CreationDateTo
        order by c.CustomerName
        option (maxdop 1, recompile)
    end
    go
    ```

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range:

    ```sql
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-30'
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go
    ```

    The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The 1 month execution has a good estimate as before, and the 6 month execution also has a good estimate and memory grant, because the query was recompiled with the current set of parameter values each time. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.

    Let's recreate the stored procedure with an optimize for hint for the 6 month date range:

    ```sql
    --Example provided by www.sqlworkshops.com
    drop proc CustomersByCreationDate
    go
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime
    as
    begin
        declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
        select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate
        from Customers c
        where c.CreationDate between @CreationDateFrom and @CreationDateTo
        order by c.CustomerName
        option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo = '2001-06-30'))
    end
    go
    ```

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range:

    ```sql
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-30'
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go
    ```

    The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The 1 month execution overestimates rows and memory, because we provided a hint to optimize for 6 months of data; the 6 month execution has a good estimate and memory grant for the same reason.

    Now let's execute the stored procedure with a 12 month date range, using the currently cached plan optimized for the 6 month date range:

    ```sql
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-12-31'
    go
    ```

    The stored procedure took 1138 ms to complete. 259200 rows were estimated based on the optimize for hint value for the 6 month date range, while the actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range but not the 12 month date range, so there was a spill to tempdb, and there were Sort Warnings in SQL Profiler. As we see above, the optimize for hint cannot guarantee enough memory and optimal performance compared to the recompile hint.

    This article covers underestimation and overestimation of memory for Sort; Plan Caching and Query Memory Part II covers the same for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation of memory affects the memory needs of other concurrently executing queries; and with Hash Match operations, overestimation can itself lead to poor performance.

    Summary: A cached plan can lead to underestimation or overestimation of memory, because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this with the recompile hint, which adds compilation overhead; however, in most cases it is cheaper to pay for compilation than to spill the sort to tempdb, which can be very expensive by comparison. The other possibility is the optimize for hint, but if one sorts more data than the hint anticipates, this will still lead to a spill, and on the other side there is the possibility of overestimation causing unnecessary memory pressure for other concurrently executing queries (and, for Hash Match operations, poor performance). When the values used in the optimize for hint are later archived from the database, the estimation will be wrong, leading to the worst performance, so exercise caution before using the optimize for hint; the recompile hint is better in that case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan


  • Start Time & Calculated Column Wonkiness in a SharePoint Event Calendar

    - by _zekeMouseOver
    I was creating some custom rollups on some of our event calendars and came across a very odd bug when trying to grab only the date component of the built-in Start Time field. One's first inclination will be to create a calculated column, give it the formula

    =[Start Time]

    and then assign its output type to be "Date Only." This works well until a user adds an All Day Event. For reasons unexplainable, the All Day Event flag causes your =[Start Time] to display the date minus one day. Here is an example of this in action, where Start Date and Time, Duration, Start Date Value and Start Day are all calculated fields: the Start Date and Time (=[Start Time]) reports 6:00 PM of the previous day, and the Start Date Value (=[Start Time] with output type Number) confirms this (.75 = 6:00 PM). Curiously enough, the Duration (=[End Time]-[Start Time]) properly reports the duration between 12:00 AM and 11:59 PM. Why? I don't know. Perhaps it's somehow bound to the regional settings on the site, but I'm not interested in changing a global site setting for the sake of one calculated field.

    With this information at our disposal, our calculated column to display the date part of the start date needs to be modified to add one day to the [Start Time] field if an All Day Event is selected. To detect this, we use the Duration above to assume the item is an all-day event, and change our formula to:

    =IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",[Start Time] + 1, [Start Time])

    This will work, but what happens when the user de-selects the "All Day Event" checkbox? The duration stays the same, but all the other values begin reporting the correct time. Since our formula above is based strictly on an expected duration, it adds one to the already-correct date, causing the date 5/11/2010 to appear. Notice, though, that the raw value of the start time is now a non-fractional number (40,308), whereas the all-day event was represented as 6:00 PM (.75) of the previous day. We can use this to add one more nested branch of logic to our calculation:

    =IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",IF([Start Time]=ROUND([Start Time],0),[Start Time],[Start Time]+1),[Start Time])

    I feel somewhat... dirty about having to resort to this kind of calculation in what SHOULD have been a simple =[Start Time] to extract the date part of the Start Time field, but there you have it. Make sure to shower extra long after having used it.


  • How can I make the date/time applet display on a single line?

    - by EmmyS
    I just updated from Lucid to Natty (thought it was going to be Maverick, but my About Ubuntu menu shows that it is Natty, which "was released in April 2011" - who knew the developers had mastered time travel?!) In any case, the default date/time applet in my gnome panel is now displaying on two lines (date on top of time) instead of one line like it used to. Any way to get it back on one line? I've tried the instructions shown here, but it doesn't seem to make a difference.


  • How do I use UIImagePickerController just to display the camera and not take a picture?

    - by Thomas
    Hello all: I'd like to know how to open the camera inside of a pre-defined frame (not the entire screen). When the view loads, I have a box, and inside it I want to display what the camera sees. I don't want to snap a picture, just basically use the camera as a viewfinder. I have searched this site and have not yet found what I'm looking for. Please help. Thanks! Thomas

    Update 1: Here is what I have tried so far.

    1) I added a UIImageView to my xib.

    2) Connected the following outlet to the UIImageView in IB:

    ```
    IBOutlet UIImageView *cameraWindow;
    ```

    3) I put the following code in viewWillAppear:

    ```
    -(void)viewWillAppear:(BOOL)animated {
        [super viewWillAppear:animated];
        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.delegate = self;
        picker.sourceType = UIImagePickerControllerSourceTypeCamera;
        [self presentModalViewController:picker animated:YES];
        NSLog(@"viewWillAppear ran");
    }
    ```

    But this method does not run, as evident by the absence of the NSLog statement from my console. Please help! Thanks, Thomas

    Update 2: OK, I got it to run by putting the code in viewDidLoad, but my camera still doesn't show up... any suggestions? Anyone...? I've been reading the UIImagePickerController class reference, but am kinda unsure how to make sense of it. I'm still learning iPhone, so it's a bit of a struggle. Please help!

    ```
    - (void)viewDidLoad {
        [super viewDidLoad];
        // Create a bool variable "camera" and call isSourceTypeAvailable
        // to see if a camera exists on this device
        BOOL camera = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera];
        // If there is a camera, then display the world through the viewfinder
        if (camera) {
            UIImagePickerController *picker = [[UIImagePickerController alloc] init];
            // Since I'm not actually taking a picture, is a delegate function necessary?
            picker.delegate = self;
            picker.sourceType = UIImagePickerControllerSourceTypeCamera;
            [self presentModalViewController:picker animated:YES];
            NSLog(@"Camera is available");
        }
        // Otherwise, do nothing.
        else
            NSLog(@"No camera available");
    }
    ```

    Update 3: A-HA! Found this in the Apple class reference: "The delegate receives notifications when the user picks an image or movie, or exits the picker interface. The delegate also decides when to dismiss the picker interface, so you must provide a delegate to use a picker. If this property is nil, the picker is dismissed immediately if you try to show it." Gonna play around with the delegate now. Then I'm going to read up on wtf a delegate is. Backwards? Whatever :-p

    Update 4: The two delegate functions for the class are imagePickerController:didFinishPickingMediaWithInfo: and imagePickerControllerDidCancel:, and since I don't actually want to pick an image or give the user the option to cancel, I am just defining the methods. They should never run, though... I think.


  • PHP 'Years' array

    - by J M 4
    I am trying to create an array of years, which I will use in the DOB-year part of a form I am building. Currently I know two ways to handle it, but I don't really care for either.

    1) Range. I know I can create a year array using the following:

    ```php
    <?php
    $year = range(1910, date("Y"));
    $_SESSION['years_arr'] = $year;
    ?>
    ```

    The problem with this is twofold: a) my function call shows the first year as 'selected' instead of "Year", which I have as option="0"; and b) I want the years reversed, so 2010 is the first in the list and they are shown decreasing. My function call is:

    PHP:

    ```php
    <?php
    function showOptionsDrop($array, $active, $echo=true){
        $string = '';
        foreach($array as $k => $v){
            $s = ($active == $k) ? ' selected="selected"' : '';
            $string .= '<option value="'.$k.'"'.$s.'>'.$v.'</option>'."\n";
        }
        if($echo) echo $string;
        else return $string;
    }
    ?>
    ```

    HTML:

    ```html
    <table>
      <tr>
        <td>State:</td>
        <td><select name="F1State"><option value="0">Choose a year</option><?php showOptionsDrop($_SESSION['years_arr'], null, true); ?></select></td>
      </tr>
    </table>
    ```

    2) Long array. I know I can physically create an array with the years listed out, but this takes up a lot of space and time if I ever want to go back and modify it, e.g.:

    ```php
    $years = array('1900'=>"1900", '1901'=>"1901", '1902'=>"1902", /* ...every year listed out... */ '2009'=>"2009", '2010'=>"2010");
    $_SESSION['years_arr'] = $years_arr;
    ```

    Does anybody have a recommended idea how to work this, or just how to simply modify my existing code? Thank you!
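
    range() already counts downward when its first argument is the larger one, and array_combine() can make each year its own key, so showOptionsDrop() emits the year itself as the option value and "Choose a year" (value 0) stays selected. A minimal sketch:

    ```php
    <?php
    $years = range(date('Y'), 1910);                     // descending: 2010 ... 1910
    $_SESSION['years_arr'] = array_combine($years, $years);
    ?>
    ```

    With plain range() the keys are 0..n, so showOptionsDrop() would emit those indexes as the option values; combining keys and values avoids that mismatch.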


  • How to insert a date into an Open XML worksheet?

    - by Manuel
    I'm using Microsoft Open XML SDK 2 and I'm having a really hard time inserting a date into a cell. I can insert numbers without a problem by setting Cell.DataType = CellValues.Number, but when I do the same with a date (Cell.DataType = CellValues.Date) Excel 2010 crashes (2007 too). I tried setting the Cell.Text value to many date formats as well as Excel's date/numeric format to no avail. I also tried to use styles, removing the type attribute, plus many other pizzas I threw at the wall... Can anyone point me to an example inserting a date to a worksheet? Thanks,
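
    One workaround that avoids CellValues.Date altogether (the "d" cell type is a later SpreadsheetML addition that Excel 2007-era consumers reject) is to write the date as its OLE-automation serial number and let a date number format in the stylesheet handle the display. A minimal sketch, assuming styleIndex points at a cell format whose number format is a date (the index value is hypothetical; build or look up your own):

    ```csharp
    using System;
    using System.Globalization;
    using DocumentFormat.OpenXml.Spreadsheet;

    static class DateCells
    {
        // styleIndex must reference a cell format with a date number format
        public static Cell Make(string cellReference, DateTime value, uint styleIndex)
        {
            return new Cell
            {
                CellReference = cellReference,
                // No DataType set: untyped cell values are numbers, and the
                // date number format in the style renders the serial as a date.
                CellValue = new CellValue(
                    value.ToOADate().ToString(CultureInfo.InvariantCulture)),
                StyleIndex = styleIndex
            };
        }
    }
    ```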


  • Reusable calendar popup like iCalendar/scheduler on iPad? Not a date picker wheel.

    - by MikeN
    I need a date picker that shows an actual calendar (similar to the built-in calendar feature on the iPad). Is there a reusable widget? How could I easily implement one? The date picker wheel is terrible because it doesn't show you the day of the week (Sun, Mon, Tue...) and isn't really needed on the iPad, where you have enough room to show a calendar, which is a very user-friendly way of letting someone pick a date (it is mentally easy to find the third Tuesday from today).


  • How to make Date locale-independent? (GMT timezone newbie question)

    - by folone
    I have a db that stores dates in OleDateTime format, in the GMT timezone. I've implemented a class extending Date in Java to represent that in classic date format. But my class is locale-dependent (I'm in GMT+2), so it converts the date in the db as date - 2 hours. How do I make it convert the date correctly? I want my class to be locale-independent, always using the GMT timezone. Actually, the question is:

    ```java
    class MyOleDateTime extends Date {
        static {
            Locale.setDefault(WhatGoesHere?)
        }
        // ... some constructors
        // ... some methods
    }
    ```
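
    A java.util.Date is just a UTC instant; the ±2 hour shift creeps in when the value is converted to or from text or calendar fields using the JVM's default zone. Pinning the zone on the formatter (or on a Calendar) is usually enough, and safer than Locale.setDefault, which changes global state for the whole JVM. A minimal sketch:

    ```java
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.TimeZone;

    public class GmtDates {
        public static String formatGmt(Date d) {
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            // the formatter's zone, not the machine default, decides the rendering
            fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
            return fmt.format(d);
        }
    }
    ```

    The same setTimeZone call works on a Calendar when the class needs GMT-based field arithmetic rather than formatting.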


  • How to use monthNames in jqGrid when validating dates?

    - by Sasha
    Hi all. In my jqGrid, when I click Add New Record, the date field comes prepopulated with the current date. The format of the date is yyyy-MMM-d (e.g. 2010-Jan-23). The date is a required field, and when I click the submit button it fails validation and displays an error saying the date is invalid and that it wants the Y-m-d format. How can I check my value with jqGrid? In other words, how do I make jqGrid accept a date format like 2010-Jan-23 when validating? Thanks.
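
    jqGrid's built-in date rule only copes with numeric formats, so one way through is a custom edit rule that validates yyyy-MMM-d itself. A minimal sketch, assuming a column named 'mydate' (the name is illustrative):

    ```javascript
    function validMonthDate(value, colname) {
        // accepts yyyy-MMM-d, e.g. 2010-Jan-23
        var ok = /^\d{4}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-\d{1,2}$/
            .test(value);
        return ok ? [true, ""]
                  : [false, colname + ": expected a yyyy-MMM-d date"];
    }

    // in the colModel, replace the built-in date rule with the custom one:
    // { name: 'mydate', index: 'mydate', editable: true,
    //   editrules: { required: true, custom: true, custom_func: validMonthDate } }
    ```

    custom_func must return a two-element array, [true, ""] on success or [false, message] on failure, which jqGrid then shows in the edit dialog.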


  • Calculating next date in Turbo Pascal

    - by Chaima Chaimouta
    ```pascal
    program date;
    uses wincrt;

    var
      m, ch, ch2, ch3: string;
      mois, j, a, b, jmax: integer;

    { annee bissextile ? }
    function bissextile(an: integer): boolean;
    begin
      bissextile := ((an mod 4 = 0) and (an mod 100 <> 0)) or (an mod 400 = 0);
    end;

    begin
      write('a: ');    readln(a);
      write('j: ');    readln(j);
      write('mois: '); readln(mois);

      { nombre de jours du mois }
      case mois of
        1, 3, 5, 7, 8, 10, 12: jmax := 31;
        4, 6, 9, 11:           jmax := 30;
        2: if bissextile(a) then jmax := 29 else jmax := 28;
      else
        jmax := 0;
      end;

      if (jmax = 0) or (j < 1) or (j > jmax) then
        m := 'erreur'
      else
      begin
        { passer au jour suivant }
        if j < jmax then
          b := j + 1
        else
        begin
          b := 1;
          if mois = 12 then
          begin
            mois := 1;
            a := a + 1;
          end
          else
            mois := mois + 1;
        end;
        { str est une procedure : str(valeur, chaine) }
        str(b, ch);
        str(mois, ch2);
        str(a, ch3);
        m := concat(ch, '/', ch2, '/', ch3);
      end;

      writeln(m);
    end.
    ```

    This is my program; I hope you are able to help me.


  • How to format dates when loading data from Google App Engine

    - by zjm1126
    I use remote_api to download data from Google App Engine:

    ```
    appcfg.py download_data --config_file=helloworld/GreetingLoad.py --filename=a.csv --kind=Greeting helloworld
    ```

    The setting is:

    ```python
    class AlbumExporter(bulkloader.Exporter):
        def __init__(self):
            bulkloader.Exporter.__init__(self, 'Greeting',
                                         [('author', str, None),
                                          ('content', str, None),
                                          ('date', str, None),
                                         ])

    exporters = [AlbumExporter]
    ```

    In the a.csv I download, the date column is not readable [screenshot lost], while the date shown in the appspot.com admin is the full date [screenshot lost]. So how do I get the full date? Thanks.

    I changed this to:

    ```python
    class AlbumExporter(bulkloader.Exporter):
        def __init__(self):
            bulkloader.Exporter.__init__(self, 'Greeting',
                                         [('author', str, None),
                                          ('content', str, None),
                                          ('date', lambda x: datetime.datetime.strptime(x, '%m/%d/%Y').date(), None),
                                         ])

    exporters = [AlbumExporter]
    ```

    but I get an error [screenshot lost].
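
    On export, the converter in each tuple receives the datastore property value (a datetime here) and must return the string to write into the CSV, which would explain why a strptime that expects a string blows up. A minimal sketch of the formatting direction, under that assumption:

    ```python
    class AlbumExporter(bulkloader.Exporter):
        def __init__(self):
            bulkloader.Exporter.__init__(self, 'Greeting',
                                         [('author', str, None),
                                          ('content', str, None),
                                          # format, don't parse: datetime -> CSV string
                                          ('date',
                                           lambda d: d.strftime('%Y-%m-%d %H:%M:%S'),
                                           None),
                                         ])

    exporters = [AlbumExporter]
    ```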


  • How to accept a localized date format (e.g. dd/mm/yy) in a DateField on an admin form?

    - by tomjerry
    Is it possible to customize a Django application to accept a localized date format (e.g. dd/mm/yy) in a DateField on an admin form? I have a model class:

    ```python
    class MyModel(models.Model):
        date = models.DateField("Date")
    ```

    And an associated admin class:

    ```python
    class MyModelAdmin(admin.ModelAdmin):
        pass
    ```

    On the Django administration interface, I would like to be able to input a date in the format dd/mm/yyyy; however, the date field in the admin form expects yyyy-mm-dd. How can I customize things? Nota bene: I have already specified my custom language code (fr-FR) in settings.py, but it seems to have no effect on this date-input matter. Thanks in advance for your answer.
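
    Overriding the admin form field's input_formats is one way to accept dd/mm/yyyy regardless of the locale settings. A minimal sketch, assuming the MyModel above:

    ```python
    from django import forms
    from django.contrib import admin

    from myapp.models import MyModel  # adjust the import to your app

    class MyModelForm(forms.ModelForm):
        # accept French-style dates, four- or two-digit years
        date = forms.DateField(label="Date",
                               input_formats=['%d/%m/%Y', '%d/%m/%y'])

        class Meta:
            model = MyModel

    class MyModelAdmin(admin.ModelAdmin):
        form = MyModelForm
    ```

    On later Django versions, USE_L10N = True together with the fr locale achieves much the same effect project-wide.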


  • Is it possible to convert Gregorian to Hijri dates in VB?

    - by ahmed
    Hi, I have a table in SQL where dates are stored in Hijri format. Now I am working on a VB.NET application where I have to let the user update that date field. Is it possible, if I place a date picker (which is Gregorian) and the user selects a date, to convert it into a Hijri date before updating? I mean, when the user selects the date and clicks the save button, the date should be updated in Hijri format in SQL. For now, the user enters the date manually in a TMS AdvEdit. Is there any code available to accomplish this task? Thanking you all in advance for your time and consideration.
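
    Yes: System.Globalization.HijriCalendar can pull the Hijri year, month and day out of the Gregorian Date the picker returns, so the conversion can sit between the picker and the UPDATE. A minimal sketch, assuming the column holds a yyyy/MM/dd-style Hijri string (match the format string to what the table actually stores):

    ```vb
    Imports System.Globalization

    Public Module HijriConversion
        Public Function ToHijriString(ByVal d As Date) As String
            Dim h As New HijriCalendar()
            ' Read the year/month/day of the same instant, Hijri-style
            Return String.Format("{0:0000}/{1:00}/{2:00}", _
                                 h.GetYear(d), h.GetMonth(d), h.GetDayOfMonth(d))
        End Function
    End Module

    ' e.g. on save:
    ' cmd.Parameters.AddWithValue("@date", ToHijriString(DateTimePicker1.Value))
    ```

    .NET also offers UmAlQuraCalendar, which tracks Saudi civil usage more closely; the two calendars differ by a day or two on some dates, so pick whichever matches the existing data.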

