Search Results

Search found 23226 results on 930 pages for 'date format'.


  • make target is never considered up to date

    - by Michael
    Cygwin make always re-runs the $(chrome_jar_file) target, even after the first successful build, so I never get the "up to date" message and the commands for $(chrome_jar_file) always execute. However, it happens only on Windows 7; on Windows XP, once it is built and intact, there are no further rebuilds. I narrowed the issue down to one prerequisite: $(jar_target_dir). Here is the relevant part of the makefile:

        # The location where the JAR file will be created.
        jar_target_dir := $(build_dir)/chrome
        # The main chrome JAR file.
        chrome_jar_file := $(jar_target_dir)/$(extension_name).jar
        # The root of the JAR sources.
        jar_source_root := chrome
        # The sources for the JAR file.
        jar_sources := bla # ... some files, doesn't matter
        jar_sources_no_dir := $(subst $(jar_source_root)/,,$(jar_sources))

        $(chrome_jar_file): $(jar_sources) $(jar_target_dir)
            @echo "Creating chrome JAR file."
            @cd $(jar_source_root); $(ZIP) ../$(chrome_jar_file) $(jar_sources_no_dir)
            @echo "Creating chrome JAR file. Done!"

        $(jar_target_dir): $(build_dir)
            echo "Creating jar target dir..."
            if [ ! -x $(jar_target_dir) ]; \
            then \
            mkdir $(jar_target_dir); \
            fi

        $(build_dir):
            @if [ ! -x $(build_dir) ]; \
            then \
            mkdir $(build_dir); \
            fi

    If I just remove $(jar_target_dir) from the $(chrome_jar_file) rule, it works fine.
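    A common fix for this pattern, shown here as a sketch assuming GNU make (which Cygwin ships): make the directory an order-only prerequisite. A directory's modification time changes whenever a file inside it is created or touched, which can make the JAR look permanently out of date; everything after the "|" must exist before the recipe runs, but its timestamp is ignored.

        # Sketch: $(jar_target_dir) after the "|" is order-only -- created if missing,
        # but its mtime never triggers a rebuild of the JAR.
        # (Recipe lines must start with a tab character.)
        $(chrome_jar_file): $(jar_sources) | $(jar_target_dir)
            @echo "Creating chrome JAR file."
            @cd $(jar_source_root); $(ZIP) ../$(chrome_jar_file) $(jar_sources_no_dir)
            @echo "Creating chrome JAR file. Done!"

        $(jar_target_dir):
            mkdir -p $@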

    Read the article

  • Grouping by date, with 0 when count() yields no lines

    - by SCO
    I'm using PostgreSQL 9 and I'm struggling with counting and grouping when no rows are counted. Assume the following schema:

        create table views (
            date_event timestamp with time zone,
            event_id   integer
        );

    and the following content:

        2012-01-01 00:00:05    2
        2012-01-01 01:00:05    5
        2012-01-01 03:00:05    8
        2012-01-01 03:00:15   20

    I want to group by hour and count the number of rows, and I would like to retrieve the following:

        2012-01-01 00:00:00    1
        2012-01-01 01:00:00    1
        2012-01-01 02:00:00    0
        2012-01-01 03:00:00    2
        2012-01-01 04:00:00    0
        2012-01-01 05:00:00    0
        ...
        2012-01-07 23:00:00    0

    That is, for each time slot I count the rows in my table whose date falls into it; otherwise I return a line with a count of zero. The following will definitely not work (it only yields slots that actually contain rows):

        SELECT extract(hour from date_event), count(*)
        FROM views
        WHERE date_event > '2012-01-01' AND date_event < '2012-01-07'
        GROUP BY extract(hour from date_event);

    Please note I might also need to group by minute, hour, day, month or year (multiple queries are fine, of course). I can only use plain old SQL, and since my views table can be very big (100M records), I try to keep performance in mind. How can this be achieved? Thank you!
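    One common way to get the zero rows, sketched here assuming PostgreSQL's generate_series() and date_trunc() (both available in 9.x): generate every slot in the range first, then LEFT JOIN the events onto it so empty slots survive with a count of 0. Swapping 'hour' for 'minute', 'day', 'month' or 'year' covers the other groupings.

        -- Sketch: build the full list of hourly slots, then LEFT JOIN the events;
        -- count(v.event_id) ignores the NULLs produced by slots with no rows.
        SELECT s.slot,
               count(v.event_id) AS events
        FROM   generate_series('2012-01-01'::timestamptz,
                               '2012-01-07'::timestamptz,
                               interval '1 hour') AS s(slot)
        LEFT JOIN views v
               ON date_trunc('hour', v.date_event) = s.slot
        GROUP BY s.slot
        ORDER BY s.slot;

    For a 100M-row table, joining on a half-open range instead (v.date_event >= s.slot AND v.date_event < s.slot + interval '1 hour') lets a plain index on date_event be used rather than an expression match.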

    Read the article

  • How to separate a date in PHP

    - by user225269
    I want to be able to separate the birthday from the MySQL data into day, year and month, using three textboxes in HTML. How do I separate it? I'm trying to figure out what I can do with the code below to show the result I want. Here's the HTML form with the PHP code:

        $idnum = mysql_real_escape_string($_POST['idnum']);
        mysql_select_db("school", $con);
        $result = mysql_query("SELECT * FROM student WHERE IDNO='$idnum'");
        $month = mysql_real_escape_string($_POST['mm']);
        ?>
        <?php while ( $row = mysql_fetch_array($result) ) { ?>
        <tr>
        <td width="30" height="35"><font size="2">Month:</td>
        <td width="30"><input name="mo" type="text" id="mo" onkeypress="return handleEnter(this, event)"
            value="<?php echo $month = explode("-",$row['BIRTHDAY']);?>">

    As you can see, the column in the MySQL database is called BIRTHDAY, with the format YYYY-MM-DD. How do I do it, so that the data from the single column is divided into three parts? Please help, thanks.
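    Since BIRTHDAY is stored as YYYY-MM-DD, a minimal sketch with explode() splits it once into the three parts and echoes each part into its own textbox; the extra input names here (yy, dd) are only illustrative.

        <?php
        // Sketch: split "YYYY-MM-DD" into year, month and day.
        list($yyyy, $mm, $dd) = explode('-', $row['BIRTHDAY']);
        ?>
        <input name="yy" type="text" value="<?php echo $yyyy; ?>">
        <input name="mo" type="text" value="<?php echo $mm; ?>">
        <input name="dd" type="text" value="<?php echo $dd; ?>">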

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc. With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users. Source Systems OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users. For example; EBS stores conversion rates between currencies which can be classified by conversion rates, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, where as PSFT/Siebel store for a range of days. PSFT has Rate Multiplication Factor and Rate Division Factor and we need to calculate the Rate based on them, where as other Source systems store the Currency Exchange Rate directly. OBIA Design The data consolidation from various disparate source systems, poses the challenge to conform various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting the performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts. 
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for It is necessary to think through the currencies during the initial implementation. 1) Identify various types of currencies that are used by your business. Understand what will be your Local (or Base) and Documentation currency. Identify various global currencies that your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, but may cause Full data loads and hence lost time. 2) If the user has a multi source system make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source specific counterparts. In other words, make sure for every DW specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date Update the End Date of the existing records. 
Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range. Mark the records as ‘Keep’ and ‘Filter’ and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as ‘Filter’ in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly. PeopleSoft:PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let’s look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 – USD to INR – 45 31 Jan 2010 – USD to INR – 46 PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW. Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate From Date and To Date while extracting data from source and this has certain assumptions: If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. 
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL ie. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept same as previous versions of OBIA. W_GLOBAL_EXCH_RATE_G being a new table does not have such needs. However SEBL does not have From Date and To Date columns in the Source tables similar to PSFT. We use similar extraction logic as explained in PSFT section for SEBL as well. What if all 5 Global Currencies configured are same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves Pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all the 5 Global Currencies Chosen are same. For example – If the user configures all 5 Global Currencies as ‘USD’ then the extract logic will not be able to generate a record for From Currency=USD. This is because, not all Source Systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases. Reusable Lookups Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups- LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who would want to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned. Changing Currency preferences in the Dashboard: In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another Currency preference. Project Analytics started supporting currency preferences since 7.9.6 release though, and it published a Tech note for other module customers to add toggling between currency preferences to the solution. List of Currency Preferences Starting from 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file: userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under :< home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml This file contains the list of currency preferences, like“Local Currency”, “Global Currency 1”,…which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. 
In Static mode, all users will see the full list as in the user preference currencies file. In the Dynamic mode, the list shown in the currency prompt drop down is a result of a dynamic query specified in the same file. Customers can build some security into the rpd, so the list of currency preferences will be based on the user roles…BI Applications built a subject area: “Dynamic Currency Preference” to run this query, and give every user only the list of currency preferences required by his application roles. Adding Currency to an Amount Field When the user selects one of the items from the currency prompt, all the amounts in that page will show in the Currency corresponding to that preference. For example, if the user selects “Global Currency1” from the prompt, all data will be showing in Global Currency 1 as specified in the Configuration Manager. If the user select “Local Currency”, all amount fields will show in the Currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as a functional currency, the other has EUR as functional currency), then subtotals will not be available (cannot add USD and EUR amounts in one field), and depending on the set up (see next paragraph), the user may receive an error. There are two ways to add the Currency field to an amount metric: In the form of currency code, like USD, EUR…For this the user needs to add the field “Apps Common Currency Code” to the report. This field is in every subject area, usually under the table “Currency Tag” or “Currency Code”… In the form of currency symbol ($ for USD, € for EUR,…) For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in Column Actions drop down list. Typically this column should be the “BI Common Currency Code” available in every subject area. Select Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column] Where Column is the “BI Common Currency Code” defined to take the currency code value based on the currency preference chosen by the user in the Currency preference prompt.

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
      The most common performance mistake SQL Server developers make: SQL Server estimates memory requirement for queries at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different set of parameters values / predicates and hence different amount of memory can be estimated based on different set of parameter values / predicates. Common memory allocating queries are that perform Sort and do Hash Match operations like Hash Join or Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article which covers Hash Match operations.   When the plan is cached by using stored procedure or other plan caching mechanisms like sp_executesql or prepared statement, SQL Server estimates memory requirement based on first set of execution parameters. Later when the same stored procedure is called with different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse, overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to spill over tempdb resulting in poor performance.   This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operation. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   To read additional articles I wrote click here.   In most cases it is cheaper to pay for the compilation cost of dynamic queries than huge cost for spill over tempdb, unless memory requirement for a stored procedure does not change significantly based on predicates.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts   Enough theory, let’s see an example where we sort initially 1 month of data and then use the stored procedure to sort 6 months of data.   Let’s create a stored procedure that sorts customers by name within certain date range.   --Example provided by www.sqlworkshops.com create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1)       end go Let’s execute the stored procedure initially with 1 month date range.   set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 48 ms to complete.     The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.       
The estimated number of rows, 43199.9 is similar to actual number of rows 43200 and hence the memory estimation should be ok.       There was no Sort Warnings in SQL Profiler.      Now let’s execute the stored procedure with 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 679 ms to complete.      The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.      The estimated number of rows, 43199.9 is way different from the actual number of rows 259200 because the estimation is based on the first set of parameter value supplied to the stored procedure which is 1 month in our case. This underestimation will lead to sort spill over tempdb, resulting in poor performance.      There was Sort Warnings in SQL Profiler.    To monitor the amount of data written and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution, for additional information refer to the webcast: www.sqlworkshops.com/webcasts.     Let’s recompile the stored procedure and then let’s first execute the stored procedure with 6 month date range.  In a production instance it is not advisable to use sp_recompile instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile, refer to our webcasts for further details.   exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go Now the stored procedure took only 294 ms instead of 679 ms.    The stored procedure was granted 26832 KB of memory.      The estimated number of rows, 259200 is similar to actual number of rows of 259200. Better performance of this stored procedure is due to better estimation of memory and avoiding sort spill over tempdb.      There was no Sort Warnings in SQL Profiler.       Now let’s execute the stored procedure with 1 month date range.   --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 49 ms to complete, similar to our very first stored procedure execution.     This stored procedure was granted more memory (26832 KB) than necessary memory (6656 KB) based on 6 months of data estimation (259200 rows) instead of 1 month of data estimation (43199.9 rows). This is because the estimation is based on the first set of parameter value supplied to the stored procedure which is 6 months in this case. This overestimation did not affect performance, but it might affect performance of other concurrent queries requiring memory and hence overestimation is not recommended. This overestimation might affect performance Hash Match operations, refer to article Plan Caching and Query Memory Part II for further details.    Let’s recompile the stored procedure and then let’s first execute the stored procedure with 2 day date range. exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-02' go The stored procedure took 1 ms.      The stored procedure was granted 1024 KB based on 1440 rows being estimated.      There was no Sort Warnings in SQL Profiler.      Now let’s execute the stored procedure with 6 month date range. 
--Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go   The stored procedure took 955 ms to complete, way higher than 679 ms or 294ms we noticed before.      The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we noticed in the past this stored procedure with 6 month date range needed 26832 KB of memory to execute optimally without spill over tempdb. This is clear underestimation of memory and the reason for the very poor performance.      There was Sort Warnings in SQL Profiler. Unlike before this was a Multiple pass sort instead of Single pass sort. This occurs when granted memory is too low.      Intermediate Summary: This issue can be avoided by not caching the plan for memory allocating queries. Other possibility is to use recompile hint or optimize for hint to allocate memory for predefined date range.   Let’s recreate the stored procedure with recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1, recompile)       end go Let’s execute the stored procedure initially with 1 month date range and then with 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48ms and 291 ms in line with previous optimal execution times.      The stored procedure with 1 month date range has good estimation like before.      The stored procedure with 6 month date range also has good estimation and memory grant like before because the query was recompiled with current set of parameter values.      The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.     Let’s recreate the stored procedure with optimize for hint of 6 month date range.   --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo ='2001-06-30'))       end go Let’s execute the stored procedure initially with 1 month date range and then with 6 month date range.   --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48ms and 291 ms in line with previous optimal execution times.    The stored procedure with 1 month date range has overestimation of rows and memory. This is because we provided hint to optimize for 6 months of data.      The stored procedure with 6 month date range has good estimation and memory grant because we provided hint to optimize for 6 months of data.       
Let’s execute the stored procedure with 12 month date range using the currently cashed plan for 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-12-31' go The stored procedure took 1138 ms to complete.      2592000 rows were estimated based on optimize for hint value for 6 month date range. Actual number of rows is 524160 due to 12 month date range.      The stored procedure was granted enough memory to sort 6 month date range and not 12 month date range, so there will be spill over tempdb.      There was Sort Warnings in SQL Profiler.      As we see above, optimize for hint cannot guarantee enough memory and optimal performance compared to recompile hint.   This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operation. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   Summary: Cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling sort over tempdb which could be very expensive compared to compilation cost. The other possibility is to use optimize for hint, but in case one sorts more data than hinted by optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in optimize for hint are archived from the database, the estimation will be wrong leading to worst performance, so one has to exercise caution before using optimize for hint, recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts), I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.     Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet.These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.     Disclaimer and copyright information:This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. 
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Cannot format a FAT filesystem, getting error "Both FATs appear to be corrupt. Giving up."

    - by Nilesh
    I am trying to format a FAT (or FAT32) file system on my Ubuntu system, but I am not able to format the device; each time I get the error "Both FATs appear to be corrupt. Giving up." I have tried options like

        sudo dosfsck -t -a -w /dev/sdc1
        sudo dosfsck -w -r -l -a -v -t /dev/sdc1

    but the same message comes up every time. Can anyone guide me on how to recover the filesystem? I don't mind losing the data on this drive, as it is an external pen drive. Also, can you please suggest a method other than booting from a CD with software like GParted.
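    Note that dosfsck only checks and repairs an existing FAT; to recreate the filesystem from scratch the usual tool is mkfs.vfat from the same dosfstools package. A hedged sketch, assuming the pen drive really is /dev/sdc1 (verify the device name first, because this erases the partition):

        # Confirm which device is the pen drive before touching anything.
        sudo fdisk -l
        # Make sure it is not mounted, then rebuild a fresh FAT32 filesystem on it.
        sudo umount /dev/sdc1
        sudo mkfs.vfat -F 32 -v /dev/sdc1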

    Read the article

  • Start Time & Calculated Column Wonkiness in a SharePoint Event Calendar

    - by _zekeMouseOver
    I was creating some custom rollups on some of our event calendars and came across a very odd bug when trying to grab only the date component of the built-in Start Time field. One's first inclination will be to create a calculated column and give it the formula... =[Start Time]... and then assign its output type to be "Date Only." This works well until a user adds an All Day Event. For reasons unexplainable, the All Day Event flag causes your =[Start Time] to display the date minus one day. Here is an example of this in action:  Start Date and Time, Duration, Start Date Value and Start Day are all calculated fields. Notice how the Start Date and Time (=[Start Time]) is reporting 6:00PM of the previous day. The Start Date Value (=[Start Time] - Output Type: Number) confirms this (.75 = 6:00 PM.) Curiously enough, the Duration (=[End Time]-[Start Time]) is properly reporting the duration between 12:00AM and 11:59PM. Why? I don't know. Perhaps it's somehow bound to the regional settings on the site, but I'm not interested in changing a global site setting for the sake of one calculated field.With this information at our disposal, our calculated column to display the date part of the start date needs to be modified to add one day to the [Start Time] field if an All Day Event is selected. To determine this, we use the Duration above to assume the item is an all-day event and change our formula to be:=IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",[Start Time] + 1, [Start Time])This will work, but what happens when the user de-selects the "All Day Event" checkbox? The duration stays the same, but all other values begin reporting the correct time: Since our formula above is strictly based on an expected duration, it will add one to the correct date, causing the date 5/11/2010 to appear. Notice though that the raw value of the start time (in this case) is a non-fractional number (40,308) whereas the all-day event was being represented as 6:00 PM (.75) of the previous day. We can use this to add one more nested branch of logic to our calculation:=IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",IF([Start Time]=ROUND([Start Time],0),[Start Time],[Start Time]+1),[Start Time]) I feel somewhat... dirty about having to resort to this kind of calculation in what SHOULD have been a simple =[Start Time] to extract the date part of the Start Time field, but there you have it. Make sure to shower extra longer after having used it.

    Read the article

  • H.264 is back on Chrome, which will be able to play videos in this format thanks to a Microsoft extension

    H.264 is back on Chrome, which will be able to play videos in this format thanks to a Microsoft extension. Update of 03.02.2011 by Katleen. A little less than a month ago, Google announced the end of H.264 codec support in its Chrome browser, which delighted some and inconvenienced others. Today an alternative is available, thanks to Microsoft (which had already addressed the problem on Firefox). The Redmond company has just unveiled its "Windows Media Player HTML5 Extension for Chrome", which, as its name suggests, is an extension for Chrome that will allow the browser's users to play videos in the H.264 format...

    Read the article

  • How can I make the date/time applet display on a single line?

    - by EmmyS
    I just updated from Lucid to Natty (thought it was going to be Maverick, but my About Ubuntu menu shows that it is Natty, which "was released in April 2011" - who knew the developers had mastered time travel?!) In any case, the default date/time applet in my gnome panel is now displaying on two lines (date on top of time) instead of one line like it used to. Any way to get it back on one line? I've tried the instructions shown here, but it doesn't seem to make a difference.

    Read the article

  • Why are UUIDs / GUIDs in the format they are?

    - by Xeoncross
    Globally Unique Identifiers (GUIDs) are grouped strings with a specific format, which I assume has a security reason. A GUID is most commonly written in text as a sequence of hexadecimal digits separated into five groups, such as: 3F2504E0-4F89-11D3-9A0C-0305E82C3301. Why aren't GUID/UUID strings just random bytes encoded as hexadecimal, of whatever length X? This text notation contains the following fields, separated by hyphens:

        Hex digits | Description
        -----------|-------------------------------
                 8 | Data1
                 4 | Data2
                 4 | Data3
                 4 | Initial two bytes from Data4
                12 | Remaining six bytes from Data4

    There are also several versions of the UUID standards. Version 4 UUIDs are generally stored internally as a raw array of 128 bits, and typically displayed in a format something like: uuid:xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx

    Read the article

  • What's the difference or purpose of a file format like ELF when flat binaries take up less space and can do the same thing?

    - by Sinister Clock
    I will give a better description now. In Linux driver development you need to follow a specification that uses the ELF file format for the finalized executable; that is, the result is not flat -- it has headers, entry fields, and generally carries more weight than a flat binary of opcodes. What is the purpose, or the in-depth difference, of a Linux ELF file for a driver that interacts with the video hardware versus, say, a bare, flat 16-bit x86 binary I write that uses the card's emulated graphics mode and writes directly to memory (besides the fact that the Linux driver is probably written to make full use of the hardware rather than just the emulated, backwards-compatible memory-access scheme)? To sum it up: what is the difference between, or the purpose of, a binary format like ELF with its various headers and settings, and a flat binary containing only the opcodes/instructions/data needed to do the same thing, just without any specific format? Example: Windows uses PE, Mac uses Mach-O/PEF, Linux uses ELF/FATELF, Unix uses COFF. What do any of them really mean or designate if you can just go flat, especially for a device driver, which is system software?

    Read the article

  • Why can't flowcharts or mathematical equations created in Microsoft Office and saved in .docx format be opened by LibreOffice?

    - by user33831
    I am using Ubuntu 11.10 and the LibreOffice that comes with it. Before this, I was a Windows user, and some of my previous documents were saved in .docx format. When I open those .docx files with LibreOffice I can view all the text, but I can't see the flowcharts I drew or the mathematical equations. Another issue: if I create a new flowchart in LibreOffice and save the file in .docx format, when I re-open it I can't view the flowcharts, yet they are still there, occupying space. There is no problem with the .odt format, of course. Does anyone know why this happens? Thanks in advance.

    Read the article

  • How can I format my active hard drive to NTFS?

    - by Ghost
    Believe it or not, I'm not too happy with Ubuntu. Well, let me rephrase that: I like it, but the one thing I don't like is that it's too much of a hassle to get a game to work. I'm trying to install Windows 7 from a 4 GB flash drive, but the error that comes up says the hard drive I'm trying to install on is formatted as ext4; I need to format it as NTFS. I can't seem to find any topics on how to format an active hard drive. I found a topic that explains how to move Ubuntu to a new drive, but it's a bit confusing to me. Please help! (Please don't disregard this topic just because I want to go back to Windows.)

    Read the article

  • Notepad++'s pesky EOL Format switching -- how to remove the invisible (default?) keyboard shortcuts Ctrl+M, Ctrl+J

    - by AKE
    Notepad++ lets the user specify whether the end-of-line (EOL) format for a file should be entirely Windows, Unix, or Mac. Notepad++ also remembers the last EOL format encountered in a file and uses it when a new file is created. But Notepad++ seems to have some pesky default keyboard shortcuts built in that can create MIXED-format files, creating havoc with this otherwise quite reasonable behaviour. Specifically:

    - Ctrl+M inserts a Mac-style EOL, i.e. 0x0D only
    - Ctrl+J inserts a Unix-style EOL, i.e. 0x0A only

    The hazard is that rapid typing, combined with heavy use of other shortcut commands, could inadvertently produce one of these, each time turning at least one line of the file into another EOL format. So my question: how can I turn OFF these apparently built-in keyboard shortcuts? Please note: I've already scanned through Settings > Shortcut Mapper and could not find Ctrl+M or Ctrl+J listed for EOL conversion. Thanks.

    Read the article

  • PHP 'Years' array

    - by J M 4
    I am trying to create an array for years which i will use in the DOB year piece of a form I am building. Currently, I know there are two ways to handle the issue but I don't really care for either: 1) Range: I know I can create a year array using the following <?php $year = range(1910,date("Y")); $_SESSION['years_arr'] = $year; ?> the problem with Point 1 is two fold: a) my function call shows the first year as 'selected' instead of "Year" as I have as option="0", and b) I want the years reversed so 2010 is the first in the least and shown decreasing. My function call is: PHP <?php function showOptionsDrop($array, $active, $echo=true){ $string = ''; foreach($array as $k => $v){ $s = ($active == $k)? ' selected="selected"' : ''; $string .= '<option value="'.$k.'"'.$s.'>'.$v.'</option>'."\n"; } if($echo) echo $string; else return $string; } ?> HTML <table> <tr> <td>State:</td> <td><select name="F1State"><option value="0">Choose a year</option><?php showOptionsDrop($_SESSION['years_arr'], null, true); ?></select> </td> </tr> </table> 2) Long Array I know i can physically create an array with years listed out but this takes up a lot of space and time if I ever want to go back and modify. ex: PHP $years = array('1900'=>"1900", '1901'=>"1901", '1902'=>"1902", '1903'=>"1903", '1904'=>"1904", '1905'=>"1905", '1906'=>"1906", '1907'=>"1907", '1908'=>"1908", '1909'=>"1909", '1910'=>"1910", '1911'=>"1911", '1912'=>"1912", '1913'=>"1913", '1914'=>"1914", '1915'=>"1915", '1916'=>"1916", '1917'=>"1917", '1918'=>"1918", '1919'=>"1919", '1920'=>"1920", '1921'=>"1921", '1922'=>"1922", '1923'=>"1923", '1924'=>"1924", '1925'=>"1925", '1926'=>"1926", '1927'=>"1927", '1928'=>"1928", '1929'=>"1929", '1930'=>"1930", '1931'=>"1931", '1932'=>"1932", '1933'=>"1933", '1934'=>"1934", '1935'=>"1935", '1936'=>"1936", '1937'=>"1937", '1938'=>"1938", '1939'=>"1939", '1940'=>"1940", '1941'=>"1941", '1942'=>"1942", '1943'=>"1943", '1944'=>"1944", '1945'=>"1945", '1946'=>"1946", '1947'=>"1947", '1948'=>"1948", '1949'=>"1949", '1950'=>"1950", '1951'=>"1951", '1952'=>"1952", '1953'=>"1953", '1954'=>"1954", '1955'=>"1955", '1956'=>"1956", '1957'=>"1957", '1958'=>"1958", '1959'=>"1959", '1960'=>"1960", '1961'=>"1961", '1962'=>"1962", '1963'=>"1963", '1964'=>"1964", '1965'=>"1965", '1966'=>"1966", '1967'=>"1967", '1968'=>"1968", '1969'=>"1969", '1970'=>"1970", '1971'=>"1971", '1972'=>"1972", '1973'=>"1973", '1974'=>"1974", '1975'=>"1975", '1976'=>"1976", '1977'=>"1977", '1978'=>"1978", '1979'=>"1979", '1980'=>"1980", '1981'=>"1981", '1982'=>"1982", '1983'=>"1983", '1984'=>"1984", '1985'=>"1985", '1986'=>"1986", '1987'=>"1987", '1988'=>"1988", '1989'=>"1989", '1990'=>"1990", '1991'=>"1991", '1992'=>"1992", '1993'=>"1993", '1994'=>"1994", '1995'=>"1995", '1996'=>"1996", '1997'=>"1997", '1998'=>"1998", '1999'=>"1999", '2000'=>"2000", '2001'=>"2001", '2002'=>"2002", '2003'=>"2003", '2004'=>"2004", '2005'=>"2005", '2006'=>"2006", '2007'=>"2007", '2008'=>"2008", '2009'=>"2009", '2010'=>"2010"); $_SESSION['years_arr'] = $years_arr; Does anybody have a recommended idea how to work - or just how to simply modify my existing code? Thank you!
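    A short sketch that keeps the range() approach but fixes both complaints: build the list in descending order and key it by the year itself, so the existing showOptionsDrop() helper (key => label) works unchanged. The original range() array is keyed 0, 1, 2, ... and in PHP null == 0 is true, which is why the first year was being marked selected; with the years as keys, passing null selects nothing and "Choose a year" stays on top.

        <?php
        // Sketch: descending list of years, keyed by the year value itself.
        $years = range(date('Y'), 1910);              // e.g. 2010, 2009, ..., 1910
        $_SESSION['years_arr'] = array_combine($years, $years);
        ?>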

    Read the article

  • How to insert a date into an Open XML worksheet?

    - by Manuel
    I'm using Microsoft Open XML SDK 2 and I'm having a really hard time inserting a date into a cell. I can insert numbers without a problem by setting Cell.DataType = CellValues.Number, but when I do the same with a date (Cell.DataType = CellValues.Date) Excel 2010 crashes (2007 too). I tried setting the Cell.Text value to many date formats as well as Excel's date/numeric format to no avail. I also tried to use styles, removing the type attribute, plus many other pizzas I threw at the wall... Can anyone point me to an example inserting a date to a worksheet? Thanks,
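    What has worked for others, offered here only as a sketch: the transitional .xlsx flavour that Excel 2007/2010 reads does not accept CellValues.Date, so the date is written as a plain number (the OLE Automation serial date) and a date number format is applied through the cell's style. The dateStyleIndex below is an assumption -- it must point at a CellFormat in the stylesheet whose NumberFormatId is a date format (for example the built-in 14, m/d/yyyy).

        using System;
        using System.Globalization;
        using DocumentFormat.OpenXml;
        using DocumentFormat.OpenXml.Spreadsheet;

        // Sketch: store the date as its OADate serial number and let the
        // (assumed) date-formatted cell style render it as a date.
        Cell BuildDateCell(string reference, DateTime value, uint dateStyleIndex)
        {
            return new Cell
            {
                CellReference = reference,
                StyleIndex = dateStyleIndex, // assumption: a date CellFormat already exists
                DataType = new EnumValue<CellValues>(CellValues.Number),
                CellValue = new CellValue(
                    value.ToOADate().ToString(CultureInfo.InvariantCulture))
            };
        }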

    Read the article

  • Using respond_to ... format.json and jQuery Form Plugin by malsup

    - by Topher Fangio
    Hello all, I'm having a tad bit of trouble getting the jQuery Form Plugin to work properly with a file-upload field. When I use the plugin to submit the form without a file-upload field, the format.json portion of the respond_to do |format| block is called properly. However, by adding the file-upload field, it only executes the format.html portion which makes my javascript code think that an error has occurred. Has anyone run into this before or know a way to force the plugin to always use json? Alternatively, can I modify the url that the plugin uses to force Rails to render the json? Thanks very much for any help! Code below: # app/controllers/details_controller.rb def create @detail = Detail.new(params[:detail]) style = params[:detail_style].to_sym || :thumb data = { :id => '5', :url => 'test.rails' } respond_to do |format| if @detail.save flash[:notice] = 'Your image has been saved.' data = { :id => @detail.id, :url => @detail.data.url(style) } format.html { redirect_to :action => 'index' } format.json { render :json => "<textarea>#{data.to_json}</textarea>", :status => :created } else format.html { render :action => 'new' } format.json { render :json => @detail.errors, :status => :unprocessable_entity } end end end /* app/views/sidebar/_details.html.erb (excerpt) */ <% form_for(Detail.new, :html => { :multipart => true } ) do |f| %> <%= hidden_field_tag 'detail_style', 'thumb' %> <%= f.label :image, "Recent Images" %> <%= f.file_field :image%> <p> <%= f.submit "Upload" %> </p> <% end %> <script> $(document).ready(function() { var options = { dataType: 'json', success: function(json, statusText) { console.log("success: " + json); }, error: function(xhr, statusText, errorThrown) { console.log("error: " + xhr.responseText); } }; $('#new_detail').ajaxForm(options); });
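    One likely explanation, with a hedged sketch: when the form contains a file field, jquery.form falls back to a hidden-iframe upload, so the request is not an XHR and Rails' content negotiation treats it as HTML. Forcing the format in the form's URL makes the respond_to block take the json branch; details_path is an assumption for whatever route helper maps to DetailsController#create.

        <%# Sketch: append .json to the form's action so format.json is chosen
            even for the iframe-based file upload. %>
        <% form_for(Detail.new,
                    :url  => details_path(:format => :json),
                    :html => { :multipart => true }) do |f| %>
          ...
        <% end %>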

    Read the article

  • How to make Date locale-independent? (GMT timezone newbie question)

    - by folone
    I have a DB that stores dates in OleDateTime format, in the GMT timezone. I've implemented a class extending Date in Java to represent that as a classic date. But my class is locale-dependent (I'm in GMT+2), so it converts the date in the DB as date - 2 hours. How do I make it convert the date correctly? I want my class to be locale-independent, always using GMT. Actually, the question is:

        class MyOleDateTime extends Date {
            static { Locale.setDefault(WhatGoesHere?) }
            // ... some constructors
            // ... some methods
        }
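    A minimal sketch of the usual alternative: a java.util.Date has no time zone of its own (it is just a millisecond instant), so rather than changing the JVM-wide default, pin the formatter (or Calendar) used for parsing and printing to GMT.

        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.TimeZone;

        // Sketch: render/parse the instant in GMT regardless of the local offset.
        public class GmtDates {
            public static String formatGmt(Date d) {
                SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
                fmt.setTimeZone(TimeZone.getTimeZone("GMT")); // not the machine's GMT+2
                return fmt.format(d);
            }
        }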

    Read the article

  • Calculating next date in Turbo Pascal

    - by Chaima Chaimouta
    program date; uses wincrt; var m,ch,ch1,ch2,ch3: string ; mois,j,a,b: integer ; begin write('a');read(a); write('j');read(j); write('mois');read(mois); case mois of 1,3,5,7,8,10: if j<31 then begin b:=j+1; m:=str(b,ch)+'/'+str(mois,ch2)+'/'+str(a,ch3); else if j=31then b:=1; s:=mois+1; m:=concat(str(b,ch),'/',str(s,ch2),'/',str(a,ch3)); end else m:='erreur'; 4,6,9,11:if j<30 then begin b:=j+1; m:=concat(str(b,ch),'/',str(mois,ch2),'/',str(a,ch3)); end else j=30 then begin b:=1; s:=mois+1; m:=concat(str(b,ch),'/',str(mois,ch2),'/',str(a,ch3)); end else m:='erreur'; 2:if j<28 then begin b:=j+1; m:=concat(str(b,ch),'/',str(mois,ch2),'/',str(a,ch3)); end else if j=28 then begin b:=1; m:=concat(str(b,ch),'/',str(mois,ch2,'/',str(a,ch3)); end else if((a mod 4=0)AND (a mod 100<>0)) or ((a mod 100=0)and(a mod 400=0)) then if j<29 then begin b:=j+1; m:=concat(str(b,ch),'/',str(mois,ch2,'/',str(a,ch3)); end else if j=29 then begin b:=1; m:=concat(str(b,ch),'/',str(mois,ch2,'/',str(a,ch3)); end else m:='erreur'; 12:if j<31 then begin b:=j+1; m:=concat(str(b,ch),'/',str(mois,ch2,'/',str(a,ch3)); end else if j=31 then begin b:=1; s:=a+1; m:=concat(str(b,ch),'/',str(mois,ch2,'/',str(s,ch3)); end; writeln(m); end. this is my program i hope you be able to help me
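    One thing worth isolating before anything else, as a small sketch reusing the program's own variables: in Turbo Pascal, Str is a procedure, not a function, so it cannot appear inside an expression or a concat() call. Convert each number into its own string first, then concatenate.

        { Sketch: Str(value, stringVar) fills stringVar and returns nothing. }
        Str(b, ch);
        Str(mois, ch2);
        Str(a, ch3);
        m := ch + '/' + ch2 + '/' + ch3;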

    Read the article

  • how to use monthNames in jqgrid when validating date?

    - by Sasha
    Hi all. In my jqGrid, when I click "add new record" the date field is prepopulated with the current date. The format of the date is yyyy-MMM-d (e.g. 2010-Jan-23). The date is a required field, and when I click the submit button it fails validation and displays an error saying the date is invalid and that it wants the Y-m-d format. How can I check my value with jqGrid? In other words, how do I make jqGrid accept a date like 2010-Jan-23 when validating? Thanks.

    Read the article

  • Is it possible to convert a Gregorian date to Hijri in VB?

    - by ahmed
    Hi, I have a table in SQL where the date is stored in the Hijri format. Now I am working on a VB.NET application where I have to let the user update that date field. So is it possible that, if I place a date picker (which is Gregorian) and the user selects a date, it gets converted into a Hijri date before updating? I mean, when the user selects the date and clicks the Save button, the date should be saved to SQL in Hijri format. For now, the user enters the date manually in a TMS AdvEdit control. Is there any code available to accomplish this task? Thanking you all in advance for your time and consideration.
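    A minimal sketch using the .NET System.Globalization.HijriCalendar class (UmAlQuraCalendar is a variant worth considering too); the yyyy/mm/dd output format here is only an assumption and should match whatever string format the existing table uses.

        Imports System.Globalization

        Module HijriSample
            ' Sketch: convert the Gregorian date from the picker into Hijri parts.
            Function ToHijriString(ByVal gregorian As DateTime) As String
                Dim hijri As New HijriCalendar()
                Return String.Format("{0:0000}/{1:00}/{2:00}", _
                                     hijri.GetYear(gregorian), _
                                     hijri.GetMonth(gregorian), _
                                     hijri.GetDayOfMonth(gregorian))
            End Function
        End Module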

    Read the article

  • jQuery: How can I work with dates in this format? mm/yyyy

    - by Enrique
    I'm building a web application for articles. Articles must be shown by month, with the published date stored as mm/yyyy. Now:
    1. Should I use a DATE-type field for storing it?
    2. Will the jQuery UI datepicker be useful for showing mm/yyyy?
    3. How could I sort by mm/yyyy?
    I guess it will be more complicated if I store the date normally and extract the day from it each time I want to do something, right? Thanks.
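    On points 1 and 3, a common pattern is to store a real DATE (for example the first day of each month) and only format it as mm/yyyy on output, so sorting stays a plain ORDER BY. A sketch assuming MySQL's DATE_FORMAT and an illustrative articles table:

        -- Sketch: published_on is a DATE column holding the first day of the month.
        SELECT DATE_FORMAT(published_on, '%m/%Y') AS month_label,
               title
        FROM   articles
        ORDER  BY published_on DESC;   -- chronological sort, no string gymnastics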

    Read the article

  • Will iPhone App with future "Availability Date" show up in "New Releases" on that day?

    - by Heiko Weible
    I plan to submit my first iPhone game "The Twiggles" to the iPhone app store today. Like many people suggest, I want to set the "Availability Date" to a date way in the future and change it later. But there are two different opinions on what to do once the app is approved: A: Some people say, I have to quickly change the "Availability Date" to the date the app is approved by Apple right after I get the "app approved" mail. Otherwise it won't show up in the "New Releases" list. B: Some people say, this is not necessary (or maybe no longer necessary). I can set the "Availability Date" to some date in the future (for example, next weekend). The app will be released on that day and will show up in the "New Releases" list on that day. Who is right?

    Read the article

  • Android Studio Could not call IncrementalTask.taskAction() on task ':project:dexDebug'

    - by akenawell85x
    I recently decided to switch from Eclipse to Android Studio. I imported a project I was working on and am now getting this error when I try to run the project. Gradle: Execution failed for task ':project:dexDebug'. > Could not call IncrementalTask.taskAction() on task ':project:dexDebug' I've been cruising this site for 2 days now and trying different suggestions to no avail. I did run gradlew compileDebug --stacktrace and this is what I got: C:\Users\adam\AndroidStudioProjects\projectProject>gradlew compileDebug --stacktrace Relying on packaging to define the extension of the main artifact has been deprecated and is scheduled to be removed in Gradle 2.0 :project:preBuild UP-TO-DATE :project:preDebugBuild UP-TO-DATE :project:preReleaseBuild UP-TO-DATE :project:prepareComAndroidSupportAppcompatV71800Library UP-TO-DATE :project:prepareComGoogleAndroidGmsPlayServices3225Library UP-TO-DATE :project:prepareDebugDependencies :project:compileDebugAidl UP-TO-DATE :project:compileDebugRenderscript UP-TO-DATE :project:generateDebugBuildConfig UP-TO-DATE :project:mergeDebugAssets UP-TO-DATE :project:mergeDebugResources UP-TO-DATE :project:processDebugManifest UP-TO-DATE :project:processDebugResources UP-TO-DATE :project:generateDebugSources UP-TO-DATE :project:compileDebug UP-TO-DATE BUILD SUCCESSFUL Total time: 10.459 secs However I am still getting that error when I try to actually run the project. Here is my build.gradle (i do have a 'libs' folder in my project with all the jars for a google maps/places app): buildscript { repositories { mavenCentral() } dependencies { classpath 'com.android.tools.build:gradle:0.6.+' } } apply plugin: 'android' repositories { mavenCentral() } android { compileSdkVersion 18 buildToolsVersion "18.1.1" defaultConfig { minSdkVersion 8 targetSdkVersion 18 } } dependencies { compile fileTree(dir: 'libs') compile 'com.google.android.gms:play-services:3.2.25' compile 'com.android.support:support-v4:18.0.0' compile 'com.android.support:appcompat-v7:+' } and my settings.gradle: include ':project', ':project:libs:android-support-v4', ':project:libs:google-api-client-1.10.3-beta', ':project:libs:google-api-client-android2-1.10.3-beta', ':project:libs:google-http-client-1.10.3-beta', ':project:libs:google-http-client-android2-1.10.3-beta', ':project:libs:google-oauth-client-1.10.1-beta', ':project:libs:gson-2.1', ':project:libs:guava-11.0.1', ':project:libs:jackson-core-asl-1.9.4', ':project:libs:jsr305-1.3.9', ':project:libs:protobuf-java-2.2.0', ':project:libs:GoogleAdMobAdsSdk-6.4.1' As I said, I've tried pretty much everything I have read on here about this error and am having no luck. Any help would be greatly appreciated.
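    A frequent cause of this particular dexDebug failure, offered as a guess and a sketch only: the same classes being dexed twice. Here android-support-v4.jar most likely sits in libs/ and is pulled in again through the support-v4/appcompat-v7 Maven coordinates. Excluding the duplicate jar from the fileTree dependency, so each library comes from a single place, often clears the error:

        dependencies {
            // Keep the loose jars, but drop the ones that also arrive via Maven coordinates.
            compile fileTree(dir: 'libs', exclude: ['android-support-v4.jar'])
            compile 'com.google.android.gms:play-services:3.2.25'
            compile 'com.android.support:support-v4:18.0.0'
            compile 'com.android.support:appcompat-v7:+'
        }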

    Read the article

  • batch file to deploy files

    - by Martin Michalak
    hi I have created batch file which pulls info from *.txt file and deploy code from the source to destination: SET Source=%1 if exist %Source% ( ECHO Source for WEB exists ) else ( ECHO Wrong build%Source% doesn't exist GOTO Menu ) SET Server=%2 SET AppPool=%3 SET Destination=%4 SET Folder=%5 SET ENV=%6 SET AppName=%7 SET Envlog=%8 ECHO Deployment of WEB > %Envlog% %Date% %Time% echo. @ECHO Stopping App Pools @ECHO Stopping App Pools >> %Envlog% %Date% %Time% D:\ICTTools\PSEXEC.EXE -d \\%Server% cmd.exe /c c:\windows\system32\inetsrv\appcmd STOP apppool /apppool.name:%AppPool% echo. @ECHO App Pools will be stopped in the background @ECHO App Pools will be stopped in the background >> %Envlog% %Date% %Time% Pause echo. IF EXIST "%Destination%" ( ECHO Deleting %AppName% %Folder% RMDIR %Destination% /s /q ECHO Destination Folder %Folder% Deleted ECHO Destination Folder %Folder% Deleted >> %Envlog% %Date% %Time% ) else ( ECHO Destination Folder %Destination% does not exist, please check ECHO Destination Folder %Destination% does not exist, please check >> %Envlog% %Date% %Time% Pause ) echo. @ECHO Starting Robocopy for %AppName% @ECHO Starting Robocopy for %AppName% >> %Envlog% %Date% %Time% echo. START /WAIT /MIN ROBOCOPY.EXE %Source% %Destination% *.* /S /NP /R:3 /W:5 /LOG:"Logs\Robo%AppName%%ENV%.log" D:\Tools\Windiff\windiff.exe %Source% %Destination% echo. @ECHO Finished with Robocopy @ECHO Finished with Robocopy >> %Envlog% %Date% %Time% echo. @ECHO Checking if App pools stopped: @ECHO Checking if App pools stopped: >> %Envlog% %Date% %Time% D:\ICTTools\PSEXEC.EXE \\%Server% c:\windows\system32\inetsrv\appcmd LIST apppool /apppool.name:%AppPool% @echo off set /p ask=All app pools stopped? (y/n) if %ask%==y (echo Great, please continue with deployemnt) else echo Before continuing please check why app pools did not stop @echo App pools stopped?: %ask% >> %Envlog% %Date% %Time% DEL %Source%\web.config echo. @ECHO Production Config check if exist "%Destination%\%ENV%-Web.config" ( echo. ECHO The Application production configuration file does exist. ECHO The Application production configuration file does exist. >> %Envlog% %Date% %Time% COPY %Destination%\%ENV%-Web.config web.config echo. ECHO Production %ENV%-Web.config has been renamed to web.config ECHO Production %ENV%-Web.config has been renamed to web.config >> %Envlog% %Date% %Time% ) else ( ECHO The Application production configuration file is missing in Production %AppName% ECHO The Application production configuration file is missing in Production %AppName% >> %Envlog% %Date% %Time% explorer %Destination% Pause ) echo. @ECHO Confirm that configs were renamed correclty, if yes please hit any key to START APP Pools @ECHO Confirm that configs were renamed correclty, if yes please hit any key to START APP Pools >> %Envlog% %Date% %Time% Pause echo. @ECHO Start %AppName% Application Pool >> %Envlog% %Date% %Time% D:\ICTTools\PSEXEC.EXE \\%Server% c:\windows\system32\inetsrv\appcmd START apppool /apppool.name:%AppPool% @echo off set /p ask=All app pools started? (y/n) if %ask%==y (echo Great, please continue with deployemnt) else echo Before continuing please check why app pools did not start @echo App pools started?: %ask% >> %Envlog% %Date% %Time% Pause echo. @ECHO Build Version for %AppName% @ECHO Build Version for %AppName% >> %Envlog% %Date% %Time% type %Destination%\buildinfo.xml echo. ECHO ............................................... @ECHO ...........Deployment Compelted................ 
    @ECHO ...........Deployment Compelted................>> %Envlog% %Date% %Time%
    ECHO ...............................................

    Here are my issues. Let's say I am running the script for 3 servers; then for each instance:

    1) The script deletes the destination folder for all three servers even though the destination folder is always the same, so it should only be deleted for the first instance (when the code is deployed to the first server). Better yet, I would prefer the script to check whether the code in the source and destination is the same, and delete the folder only when it differs.

    2) Based on 1: a) deleting web.config and renaming should only happen if the code in the destination is new; b) Robocopy should not overwrite files if they are the same -- I think there is an /XO option to do that.

    Any idea how to achieve that? :)
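    For point 2b, robocopy's /XO switch ("eXclude Older") does most of this already: a source file is skipped when the destination copy is newer, and files with identical timestamp and size are classified as "same" and skipped by default anyway. A sketch of the existing call with the switch added:

        REM Sketch: /XO skips source files that are older than their destination copy;
        REM unchanged files are already skipped as "same" unless /IS is specified.
        START /WAIT /MIN ROBOCOPY.EXE %Source% %Destination% *.* /S /XO /NP /R:3 /W:5 /LOG:"Logs\Robo%AppName%%ENV%.log"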

    Read the article
