Search Results

Search found 24879 results on 996 pages for 'prime number'.


  • JSF2 ResourceBundleLoader override?

    - by CDSO1
    I need resource messages that contain EL expressions to be resolved when they are loaded from a ResourceBundle. Basically I have a number of properties files containing the text, some of which looks like the following:

        welcomeText=Welcome #{userbean.name}

    The only way I can currently see this working is implementing a custom taglib, so that instead of saying:

        <f:loadBundle var="messages" basename="application.messages"/>

    I would have to use:

        <mytaglib:loadBundle var="messages" basename="application.messages"/>
        #{messages.welcomeText}

    Given a user with username "User1", this should output "Welcome User1". My implementation would then use a custom ResourceBundle class that would override handleGetObject, use the ELResolver to resolve variables, etc. Ideas? Suggestions? Implementations that are already available? I appreciate your assistance.
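    One possible shape for that custom bundle, as a minimal sketch (the class name is illustrative, not an existing API; it assumes the wrapper is registered in place of the plain bundle):

        import java.util.Enumeration;
        import java.util.ResourceBundle;
        import javax.faces.context.FacesContext;

        // Wraps the real bundle and resolves EL in each message at lookup time.
        public class ElResourceBundle extends ResourceBundle {
            private final ResourceBundle delegate;

            public ElResourceBundle(ResourceBundle delegate) {
                this.delegate = delegate;
            }

            @Override
            protected Object handleGetObject(String key) {
                Object raw = delegate.getObject(key);
                if (raw instanceof String && ((String) raw).contains("#{")) {
                    FacesContext ctx = FacesContext.getCurrentInstance();
                    // evaluateExpressionGet handles mixed literal/EL text such as
                    // "Welcome #{userbean.name}"
                    return ctx.getApplication().evaluateExpressionGet(ctx, (String) raw, String.class);
                }
                return raw;
            }

            @Override
            public Enumeration<String> getKeys() {
                return delegate.getKeys();
            }
        }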

    Read the article

  • iPhone hitTest broken after rotation

    - by Adam
    Hi all, I have a UIView that contains a number of CALayer subclasses. I am using the following code to detect which layer a touch event corresponds to:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            CGPoint point = [touch locationInView:self];
            NSLog(@"%@,%@", NSStringFromCGPoint(point), [self.layer hitTest:point].name);
        }

    This works fine until the device is rotated. When the device is rotated, all current layers are removed from the superlayer and new CALayers are created to fit the new orientation. The new layers are correctly inserted and viewable in the correct orientation. After the rotation, however, the hitTest method consistently returns nil when clearly clicking on the newly created layers, and registers layers at locations which are incorrect. The coordinates passed to the hit test are correct, but no layers are found. Am I missing a function call or something after handling the rotation? Cheers, Adam
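    One hedged guess worth checking: per the CALayer documentation, -hitTest: expects the point in the coordinate space of the receiver's superlayer, not of the layer itself, and a mismatch there can stay hidden until rotation changes the view's transform. A sketch of the conversion:

        // Convert from the view/layer's own space into the superlayer's space
        // before hit testing (assumes self.layer has a superlayer).
        CGPoint superPoint = [self.layer convertPoint:point toLayer:self.layer.superlayer];
        CALayer *hit = [self.layer hitTest:superPoint];
        NSLog(@"hit layer: %@", hit.name);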

    Read the article

  • How do I replace a modal view controller?

    - by Theory
    I'm using a modal view controller to allow a user to select an address book entry and email address. The ABPeoplePickerNavigationController object is displayed via presentModalViewController:animated::

        [self presentModalViewController:picker animated:YES];

    What I want to do is keep the modal dialog up, but when the user selects the email address, it should cross-fade to a different controller that displays a message composition window. I've tried various approaches in peoplePickerNavigationController:shouldContinueAfterSelectingPerson:property:identifier: to dismiss the picker and set my custom composition controller as the modal view. I can do it any number of ways, but never does it fade smoothly from the picker to the composition controller -- unless I make the composition controller a modal dialog of the picker, in which case the picker re-appears when I dismiss the composition controller. I don't want that, either. There must be some way to smoothly replace one controller and its view with another controller and its view, all within the context of a modal dialog, and preferably with a cross-fade. Suggestions greatly appreciated.
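    A hedged sketch of one approach (iOS 4-era APIs; `composer` is a placeholder for the composition controller): swap the presented controller unanimated inside a window-level cross-dissolve, so the transition supplies the fade rather than the modal presentation:

        [UIView transitionWithView:self.view.window
                          duration:0.4
                           options:UIViewAnimationOptionTransitionCrossDissolve
                        animations:^{
                            // Replace the picker with the composer without animation;
                            // the surrounding transition provides the cross-fade.
                            [self dismissModalViewControllerAnimated:NO];
                            [self presentModalViewController:composer animated:NO];
                        }
                        completion:nil];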

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami

    A typical data warehouse contains Star and/or Snowflake schemas made up of Dimensions and Facts. The facts store various numerical information, including amounts; for example, Order Amount, Invoice Amount, etc. With the truly global nature of business nowadays, end users want to view reports in their own currency, or in a global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts at converted rates, either by pre-storing them or by doing on-the-fly conversions while displaying the reports to the users.

    Source Systems

    OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion, etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions, and presenting them to the OLTP users. For example, EBS stores conversion rates between currencies which can be classified by rate type, like Corporate rate, Spot rate, Period rate, etc. Siebel stores exchange rates by rate types like Daily. EBS/Fusion stores the conversion rates for each day, whereas PSFT/Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor from which the rate has to be calculated, whereas the other source systems store the currency exchange rate directly.

    OBIA Design

    The consolidation of data from various disparate source systems poses the challenge of conforming the various currencies, rate types, exchange rates, etc., and of designing the best way to present the amounts to the users without affecting performance. When consolidating the data for reporting in OBIA, we have designed mechanisms in the Common Dimension to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies:

    Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies.

    Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company.

    Global Currencies: OBIA provides five Global Currencies. Three are used across all modules; the last two are for CRM only. A Global Currency is very useful when creating reports where the data is viewed enterprise-wide. For example, a US-based multinational would want to see reports in USD, so the company would choose USD as one of its global currencies. OBIA allows users to define up to five global currencies during the initial implementation.

    The term Currency Preference is used to designate the set of values Document Currency, Local Currency, Global Currency 1, Global Currency 2 and Global Currency 3, which are shared among all modules. There are four more currency preferences specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics.

    When choosing Local Currency for the Currency Preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section "Changing Currency Preferences in the Dashboard" below.

    Design Logic

    When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount into the target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts.
    If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the local currency code and the local exchange rate. The load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates.

    The lookup of exchange rates is done via the Exchange Rate Dimension, provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables, W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G, are used to provide the lookups and conversions between currencies. The data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. W_GLOBAL_EXCH_RATE_G, on the other hand, stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide to use one or both tables to do the conversion.

    Currency design in OBIA also taps into the MLS and Domain architecture, allowing users to map the currencies to a universal Domain at implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters.

    Some Gotchas to Look for

    It is necessary to think the currencies through during the initial implementation.

    1) Identify the various types of currencies that are used by your business. Understand what your Local (or Base) and Document currency will be. Identify the various global currencies in which your users will want to look at the reports; this will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time.

    2) If you have a multi-source system, make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, a source Domain mapping has already been done.

    Technical Section

    This section briefly describes the technical scenarios employed in the OBIA adapters to extract data from each source system. In OBIA, we have two main tables which store the currency rate information, as explained in the previous sections: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G. W_EXCH_RATE_G stores all the currency conversions present in the source system and captures data for a date range. W_GLOBAL_EXCH_RATE_G stores Global Currency conversions at a daily level; the challenge here is to store all five Global Currency exchange rates in a single record for each From Currency. Let's voyage further into the source system extraction logic for each of these tables and understand the flow briefly.

    EBS

    In EBS, currency data is stored in the GL_DAILY_RATES table. As the name indicates, GL_DAILY_RATES has data at a daily level. In our warehouse, however, we store the data with a date range and insert a new range record only when the exchange rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process:

    0. (Incremental flow only) Clean up the data in W_EXCH_RATE_G: delete the records which have Start Date > minimum conversion date, and update the End Date of the existing records.
    1. Compress the daily data from the GL_DAILY_RATES table into range records. The incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter.

    2. Generate the Previous Rate, Previous Date and Next Date for each daily record from the OLTP.

    3. Filter out the records whose Conversion Rate is the same as the Previous Rate, or whose Conversion Date lies within a single-day range. Mark the records as 'Keep' or 'Filter', and also derive the final End Date for the single range record (the unique combination of From Date, To Date, Rate and Conversion Date).

    4. Filter the records marked as 'Filter' in the INFA map.

    The above steps load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly; the SIL map then inserts/updates the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to range records in W_EXCH_RATE_G.

    We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a daily granular level. However, we need to pivot the data, because data present in multiple rows in the source tables needs to be stored in different columns of the same row in the DW. We use GROUP BY and CASE logic to achieve this.

    Fusion

    Fusion has extraction logic very similar to EBS. The only difference is that the cleanup logic mentioned in step 0 above does not use the $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring in all the exchange rates in incremental runs as well and do the cleanup; the SIL then takes care of inserts/updates accordingly.

    PeopleSoft

    PeopleSoft does not have From Date and To Date explicitly in the source tables. Let's look at an example (note that this is achieved from PS1 onwards only):

        1 Jan 2010  - USD to INR - 45
        31 Jan 2010 - USD to INR - 46

    PSFT stores records in the above fashion, meaning that the exchange rate of 45 for USD to INR is applicable from 1 Jan 2010 to 30 Jan 2010, and we need to store the data in this fashion in the DW. PSFT also stores the exchange rate as RATE_MULT and RATE_DIV, and we need to compute RATE_MULT/RATE_DIV to get the correct exchange rate.

    We generate the From Date and To Date while extracting data from the source, and this has a certain assumption: if a record gets updated/inserted in the source, it will be extracted in incremental. If this updated/inserted record lies between other dates, then we also extract the preceding and succeeding records (based on dates), because we need to generate range records and there are now three records whose ranges have changed. Taking the same example as above, if a new record gets inserted on 15 Jan 2010, the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan, and 31 Jan to the next available date. Even though the 1 Jan and 31 Jan records have not changed, we still extract them because their ranges are affected.

    Similar logic is used for the Global Exchange Rate extraction. We create the range records and load them into a temporary table, then join to the Day Dimension, create individual daily records, and pivot the data to get the five Global Exchange Rates for each From Currency, Date and Rate Type.

    Siebel

    Siebel facts depend heavily on Global Exchange Rates, and almost none of them really use individual exchange rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from the PS1 release onwards. As of January 2002, the Euro triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges.
    However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Because of this, there are multiple extraction flows in SEBL, i.e. EUR to EMU, EUR to Non-EMU, EUR to DMC, and so on. We load W_EXCH_RATE_G through multiple flows with these data; this has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, does not have such needs. However, SEBL, like PSFT, does not have From Date and To Date columns in the source tables, so we use extraction logic similar to the one explained in the PeopleSoft section for SEBL as well.

    What if all 5 Global Currencies configured are the same?

    As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves pivoting data from multiple rows into a single row with five Global Exchange Rates in five columns; as mentioned, we use CASE and GROUP BY logic to achieve this. This approach poses a unique problem when all five Global Currencies chosen are the same. For example, if the user configures all five Global Currencies as 'USD', the extract logic will not be able to generate a record for From Currency = USD, because not all source systems will have a USD-to-USD conversion record. We have _Generated mappings to take care of this case: we generate a record with Conversion Rate = 1 for such cases.

    Reusable Lookups

    Before PS1, we had a mapplet for currency conversions. In PS1, we only have reusable lookups: LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in the various fact mappings. Anyone who wants to do a lookup on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G must use these lookups; a direct join or lookup on the tables might lead to wrong data being returned.

    Changing Currency Preferences in the Dashboard

    In the 7.9.6.x series, all amount metrics in OBIA showed the Global1 amount, and customers needed to change the metric definitions to show them in another currency preference. Project Analytics has supported currency preferences since the 7.9.6 release, though, and published a tech note for customers of other modules on adding the toggling between currency preferences to the solution.

    List of Currency Preferences

    Starting from the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the XML file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder under <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml. This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", ..., which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic.
    In Static mode, all users see the full list as given in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the RPD so that the list of currency preferences is based on the user roles. BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

    Adding Currency to an Amount Field

    When the user selects one of the items from the currency prompt, all the amounts on that page are shown in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data is shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields are shown in the currency of the Business Unit selected in the BU filter of the same page. If no particular Business Unit is selected in that filter, and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals will not be available (USD and EUR amounts cannot be added in one field), and depending on the setup (see the next paragraph), the user may receive an error.

    There are two ways to add the currency field to an amount metric:

    1. In the form of a currency code, like USD, EUR, ... For this, the user needs to add the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code".

    2. In the form of a currency symbol ($ for USD, € for EUR, ...). For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option of the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of a metric; in the Data Format tab, select Custom as Treat Number As, and enter the following syntax under Custom Number Format:

        [$:currencyTagColumn=Subjectarea.table.column]

    where column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.
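    For readers curious what the CASE/GROUP BY pivot described above looks like in practice, here is a minimal SQL sketch; the table and column names are illustrative placeholders, not the actual OBIA DDL:

        -- Collapse one row per (from_currency, date, global slot) into a single
        -- row carrying all five global exchange rates as columns.
        SELECT from_currency,
               conversion_date,
               MAX(CASE WHEN global_slot = 1 THEN exch_rate END) AS global1_rate,
               MAX(CASE WHEN global_slot = 2 THEN exch_rate END) AS global2_rate,
               MAX(CASE WHEN global_slot = 3 THEN exch_rate END) AS global3_rate,
               MAX(CASE WHEN global_slot = 4 THEN exch_rate END) AS global4_rate,
               MAX(CASE WHEN global_slot = 5 THEN exch_rate END) AS global5_rate
        FROM   staged_global_rates
        GROUP BY from_currency, conversion_date;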

    Read the article

  • How to redirect WordPress post links that are using post_id to the new URL structure of postname

    - by Matthew
    Hey guys, I have a slight problem... we made a change to our URL structure the other day and now have broken links all over. What I did was change links from:

        http://blog.mydomain.com/articles/123

    to:

        http://blog.mydomain.com/articles/this-is-a-sample

    Is there any way to redirect links that use a post's ID number to the postname URL? The old URL structure is still being published through our RSS feed on Facebook, so I am trying to catch the people who are (or may be) clicking our links in Facebook posts and redirect them to that post's postname URL. Does that make sense? Thanks, any help would do.
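    A hedged sketch of one way to do this inside WordPress itself (the hook and functions are standard WordPress APIs, but treat the pattern and function name as illustrative): catch old /articles/123 requests, look up the post's permalink, and issue a permanent redirect:

        // Catch old ID-based URLs and 301 them to the slug-based permalink.
        function redirect_old_id_urls() {
            if (preg_match('#^/articles/(\d+)/?$#', $_SERVER['REQUEST_URI'], $m)) {
                $permalink = get_permalink((int) $m[1]);  // permalink for that post ID
                if ($permalink) {
                    wp_redirect($permalink, 301);          // permanent, so crawlers update
                    exit;
                }
            }
        }
        add_action('template_redirect', 'redirect_old_id_urls');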

    Read the article

  • How to represent "options" for my plugin architecture (C# .NET WinForms)

    - by Joshua
    Okay, basically here's where I'm at. I have a list of PropertyDescriptor objects. These describe the custom "Options" fields on my plugins, e.g.:

        public class MyPlugin : PluginAbstract, IPlugin
        {
            [PluginOption("This controls the color of blah blah blah")]
            [DefaultValue(Color.Red)]
            public Color TheColor { get; set; }

            [PluginOption("The number of blah blah blahs")]
            [DefaultValue(10)]
            public int BlahBlahBlahs { get; set; }
        }

    So I did all the hard parts: I have all the descriptions, default values, names and types of these custom "plugin options". MY QUESTION IS: when a user loads a plugin, how should I represent these options for them to configure? On the back end I'll be using XML for the config, so that's not what I'm asking. I'm asking on the front end: what kind of WinForms control should I use to let users configure the options of a plugin, when there will be an unknown number of options, different types used, etc.?
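    A hedged suggestion that fits this shape of problem: WinForms' built-in PropertyGrid renders an editor per property type (a color picker for Color, a numeric box for int, ...) straight from the PropertyDescriptors. A minimal sketch, assuming optionsPanel is a container on your form; note that PropertyGrid reads the stock [Description] attribute for its help pane, so PluginOption's text would need to be surfaced through a custom TypeDescriptor or duplicated as [Description]:

        // One line of wiring: the grid discovers the properties by reflection.
        var grid = new System.Windows.Forms.PropertyGrid { Dock = DockStyle.Fill };
        grid.SelectedObject = pluginInstance;   // the IPlugin with [PluginOption] properties
        optionsPanel.Controls.Add(grid);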

    Read the article

  • Database design - alternatives for Entity Attribute Value (EAV)

    - by Bob
    Hi, see http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters for a similar topic. My question: I want to design a database that will be used for a production facility making different types of products, where each product has its own (number of) parameters. Because I want the serial numbers to be in one table for overview purposes, I have a problem with these differing parameters. One solution could be EAV, but it has its downsides, certainly because we have +-5 products with +-20,000 serial numbers (records) per product; it looks a bit like overkill to me... I just don't know how one could design a database so that you have an attribute in a master table that says: 'hey, you can find the details of this record in THAT detail table' (in a way that you could easily query the results). Currently I am using Visual Basic & Access 2007, but I'm moving to Visual Basic & MySQL. Thanks for your help. Bob
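    One common alternative to EAV here is class-table inheritance: a single master table holding every serial number plus a type discriminator, and one detail table per product type sharing the same key. A minimal sketch, with all table and column names illustrative:

        -- Master: one row per serial number, for the cross-product overview.
        CREATE TABLE SerialNumber (
            SerialID    INT PRIMARY KEY,
            ProductType VARCHAR(20) NOT NULL   -- discriminator: which detail table to join
        );

        -- One detail table per product type, keyed by the same SerialID.
        CREATE TABLE WidgetDetail (
            SerialID INT PRIMARY KEY REFERENCES SerialNumber(SerialID),
            Voltage  INT,
            Color    VARCHAR(20)
        );

        -- Querying one product type is then a plain join, no EAV pivoting:
        -- SELECT s.SerialID, d.Voltage, d.Color
        -- FROM SerialNumber s JOIN WidgetDetail d ON d.SerialID = s.SerialID
        -- WHERE s.ProductType = 'Widget';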

    Read the article

  • Linear Regression and Java Dates

    - by Smithers
    I am trying to find the linear trend line for a set of data. The set contains pairs of dates (x values) and scores (y values). I am using a version of this code as the basis of my algorithm. The results I am getting are off by a few orders of magnitude. I assume that there is some problem with round off error or overflow because I am using Date's getTime method which gives you a huge number of milliseconds. Does anyone have a suggestion on how to minimize the errors and compute the correct results?
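    A hedged sketch of the usual remedy: re-origin and rescale the x values before fitting, so the sums of squares stay well within double precision (variable names are illustrative):

        // Rescale Date x-values: days since the first sample instead of raw
        // milliseconds since 1970, which keeps x*x sums from losing precision.
        long t0 = dates[0].getTime();
        double[] x = new double[dates.length];
        for (int i = 0; i < dates.length; i++) {
            x[i] = (dates[i].getTime() - t0) / 86400000.0;   // ms per day
        }
        // Fit y = a + b*x as usual; b is now "per day" (divide by 86400000.0
        // again if a per-millisecond slope is needed).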

    Read the article

  • Selecting by observation in R table

    - by Stedy
    I was working through the Rosetta Code example of the knapsack problem in R and I came up with four solutions. What is the best way to output only one of the solutions, based on the observation number given in the table?

        > values[values$value==max(values$value),]
               I II III value weight volume
        2067   9  0  11 54500     25     25
        2119   6  5  11 54500     25     25
        2171   3 10  11 54500     25     25
        2223   0 15  11 54500     25     25

    My initial idea was to save this output to a new variable, then query the new variable, such as:

        > newvariable <- values[values$value==max(values$value),]
        > newvariable2 <- newvariable[2,]
        > newvariable2
               I II III value weight volume
        2119   6  5  11 54500     25     25

    This gives me the desired output, but is there a cleaner way to do it?
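    One possibility, for what it's worth: index the filtered rows directly instead of storing a temporary:

        # second row of the tied maxima, in one step
        values[values$value == max(values$value), ][2, ]

        # or, if "the first maximal row" is what's wanted:
        values[which.max(values$value), ]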

    Read the article

  • Is the PHP serialize function compatible with UTF-8?

    - by Matthieu
    I have a site I want to migrate from ISO to UTF-8. I have a record in the database indexed by the following serialized primary key:

        s:22:"Informations générales";

    The problem is that now (with UTF-8), when I serialize the string, I get:

        s:24:"Informations générales";

    (notice the size of the string is now the number of bytes, not the string length). So this is not compatible with the non-UTF-8 records serialized previously! Did I do something wrong? How can I fix this? Thanks
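    A hedged sketch of one common repair: convert the stored string and recompute the byte counts. This naive pattern assumes no serialized string itself contains the sequence "; — for real data, unserializing the ISO values and re-serializing after conversion is safer:

        $utf8  = iconv('ISO-8859-1', 'UTF-8', $legacySerialized);
        $fixed = preg_replace_callback(
            '/s:(\d+):"(.*?)";/s',
            // strlen counts bytes, which is exactly what serialize() stores
            function ($m) { return 's:' . strlen($m[2]) . ':"' . $m[2] . '";'; },
            $utf8
        );
        $data = unserialize($fixed);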

    Read the article

  • PHP - Foreach in a foreach + CodeIgniter

    - by Sebhael
    I have an image upload function working, but four images are required, and my current function simply overwrites the same image regardless of their number. I'm wondering how I would insert (preferably) a letter into my foreach function:

        $uploads = array($this->uploads->data());
        foreach ($uploads as $key => $value) {
            $imagename = $cleaned . '-s' . $value['file_ext'];   // outputs clean-name-s.ext
        }

    By the time all is said and done, four images will go through this function, leaving me with clean-name-sa.ext, clean-name-sb.ext, clean-name-sc.ext and clean-name-sd.ext (numbers would work as well). The form does require that all four fields are filled out before submission, so would I just create an array of my desired identifiers and a foreach loop that outputs four different $imagenames? Or is there a simpler, more logical approach to this?
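    A minimal sketch of the letter-suffix idea, assuming $uploads ends up as a numerically indexed list of upload-data arrays (in stock CodeIgniter the upload library's data() call describes a single upload, so the four results would need collecting into such a list first):

        $suffixes = array('a', 'b', 'c', 'd');
        foreach ($uploads as $i => $upload) {
            // clean-name-sa.ext, clean-name-sb.ext, ...
            $imagename = $cleaned . '-s' . $suffixes[$i] . $upload['file_ext'];
        }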

    Read the article

  • Using Nearest Neighbour Algorithm for image pattern recognition

    - by user293895
    So I want to be able to recognise patterns in images (such as the number 4). I have been reading about different algorithms, and I would really like to use the Nearest Neighbour algorithm; it looks simple, and I do understand it based on this tutorial: http://people.revoledu.com/kardi/tutorial/KNN/KNN_Numerical-example.html The problem is, although I understand how to use it to fill in missing data sets, I don't understand how I could use it as a pattern recognition tool to aid in image shape recognition. Could someone please shed some light on how this algorithm could work for pattern recognition? I have seen tutorials using OpenCV; however, I don't really want to use that library, as I have the ability to do the pre-processing myself, and it seems silly to pull in a whole library for what should be a simple nearest neighbour algorithm.
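    For what it's worth, a minimal sketch of the idea (assuming the images are already binarised and scaled to a common size): treat each image as one long feature vector and label a query by its nearest labelled template:

        import numpy as np

        def nearest_neighbour(templates, labels, query):
            """templates: (n, h*w) array of flattened training images;
            labels: the n class labels; query: one flattened image."""
            dists = np.linalg.norm(templates - query, axis=1)  # Euclidean distances
            return labels[int(np.argmin(dists))]               # closest template wins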

    Read the article

  • Common lisp error: "should be lambda expression"

    - by Zachary
    I just started learning Common Lisp a few days ago, and I'm trying to build a function that inserts a number into a tree. I'm getting an error:

        * - SYSTEM::%EXPAND-FORM: (CONS NIL LST) should be a lambda expression

    From googling around, it seems like this happens when you have too many sets of parentheses, but after looking at this for an hour or so and changing things around, I can't figure out where I could be doing this. This is the code where it's happening:

        (defun insert (lst probe)
          (cond ((null lst) (cons probe lst))
                ((equal (length lst) 1)
                 (if (<= probe (first lst))
                     (cons probe lst)
                     (append lst (list probe))))
                ((equal (length lst) 2)
                 ((cons nil lst)
                  (append lst nil)
                  (insertat nil lst 3)
                  (cond ((<= probe (second lst)) (insert (first lst) probe))
                        ((> probe (fourth lst)) (insert (fifth lst) probe))
                        (t (insert (third lst) probe)))))))

    I'm pretty sure it's occurring after the ((equal (length lst) 2), where the idea is to insert an empty list into the existing list, then append an empty list onto the end, then insert an empty list into the middle.
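    For the record, the indentation above makes the culprit visible: in the third cond clause the body is wrapped in an extra pair of parentheses, so Lisp tries to call the value of (cons nil lst) as a function — exactly the "should be a lambda expression" complaint. A cond clause already accepts multiple body forms, so dropping the extra pair should clear this particular error:

        ((equal (length lst) 2)
         (cons nil lst)            ; each body form now stands on its own
         (append lst nil)
         (insertat nil lst 3)
         (cond ...))               ; rest of the clause unchanged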

    Read the article

  • SQL Profiler: Read/Write units

    - by Ian Boyd
    I've picked a query out of SQL Server Profiler that says it took 1,497 reads:

        EventClass: SQL:BatchCompleted
        TextData:   SELECT Transactions....
        CPU:        406
        Reads:      1497
        Writes:     0
        Duration:   406

    So I've taken this query into Query Analyzer to try to reduce the number of reads. But when I turn on SET STATISTICS IO ON to see the IO activity for the query, I get nowhere close to one thousand reads:

        Table                  Scan Count   Logical Reads
        ===================    ==========   =============
        FintracTransactions    4            20
        LCDs                   2            4
        LCTs                   2            4
        FintracTransacti...    0            0
        Users                  1            2
        MALs                   0            0
        Patrons                0            0
        Shifts                 1            2
        Cages                  1            1
        Windows                1            3
        Logins                 1            3
        Sessions               1            6
        Transactions           1            7

    Which, if I do my math right, is a total of 51 reads, not 1,497. So I assume Reads in SQL Profiler is an arbitrary metric. Does anyone know the conversion of SQL Server Profiler reads to IO reads?

    See also:
    SQL Profiler CPU / duration unit
    Query Analyzer vs. Query Profiler reads, writes, and duration discrepancies

    Read the article

  • JPQL / SQL: How to select * from a table with group by on a single column?

    - by DavidD
    I would like to select every column of a table, but want to have distinct values on a single attribute of my rows (City in the example). I don't want extra columns like counts or anything, just a limited number of results, and it seems like it is not possible to directly LIMIT results in a JPQL query. Original table:

        ID | Name   | City
        ---------------------
        1  | John   | NY
        2  | Maria  | LA
        3  | John   | LA
        4  | Albert | NY

    Wanted result, if I do the distinct on City:

        ID | Name  | City
        ---------------------
        1  | John  | NY
        2  | Maria | LA

    What is the best way to do that? Thank you for your help.
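    A hedged sketch of one standard way (valid in both SQL and JPQL; the entity and field names are assumed): keep, per city, only the row with the smallest ID, via a grouped subquery:

        -- one representative row per city: the one with the minimum ID
        SELECT p FROM Person p
        WHERE p.id IN (SELECT MIN(p2.id) FROM Person p2 GROUP BY p2.city)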

    Read the article

  • How can I use the Fisher-Yates shuffle while ensuring my permutation is even?

    - by Mithrax
    I'm interested in making an implementation of the 14-15 puzzle. I'm creating an array with the values 0 - 15 in increasing order:

        S = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 }

    Now, what I want to do is shuffle them to create a new instance of the puzzle. However, I know that if I create a board with an "odd permutation" then it is unsolvable. Wikipedia says I need to create the puzzle with an even permutation. I believe this means that I simply have to ensure I do an even number of swaps? How would I modify Fisher-Yates so I ensure I end up with an even permutation at the end? If I do a swap for every element in the array, that would be 16 swaps, which I believe would be an even permutation. However, do I need to be concerned about swapping an element with itself? Is there any other way to ensure I have a valid puzzle?
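    A minimal sketch of one answer to the self-swap worry (names illustrative): swapping an element with itself is the identity, so only count the i != j exchanges, and patch an odd result with one extra swap at the end. (Note that full 15-puzzle solvability also involves the blank tile's row, which this sketch deliberately ignores.)

        import random

        def even_shuffle(tiles):
            """Fisher-Yates shuffle constrained to produce an even permutation."""
            parity = 0
            for i in range(len(tiles) - 1, 0, -1):
                j = random.randint(0, i)
                if j != i:                            # self-swaps don't change parity
                    tiles[i], tiles[j] = tiles[j], tiles[i]
                    parity ^= 1                       # each transposition flips parity
            if parity:                                # odd so far: one fixing swap
                tiles[0], tiles[1] = tiles[1], tiles[0]
            return tiles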

    Read the article

  • Proxy Authentication Error while calling FedEx webservice

    - by Abdel Olakara
    Hi all, I am trying to call the FedEx tracking web service. Currently I am running the sample application provided by FedEx itself (with my test account number and other details added). When I run the application, I get the following error:

        The remote server returned an error: (407) Proxy Authentication Required.

    I am behind a proxy at my organization, and I tried providing the proxy server details to the web service client using the WebProxy class, as:

        trackService.Proxy = WebProxy.GetDefaultProxy();

    and also by providing the proxy server details as:

        trackService.Proxy = new WebProxy("IP", 8080);

    But I still keep getting the same error!! Could somebody help me resolve this problem? Thanks in advance, Regards, Abdel Olakara
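    A hedged sketch of the usual missing piece: a 407 means the proxy itself wants credentials, which neither snippet supplies (host, port, user and domain below are placeholders for your organization's values):

        var proxy = new WebProxy("proxy.example.org", 8080);
        proxy.Credentials = new NetworkCredential("username", "password", "DOMAIN");
        trackService.Proxy = proxy;

        // or, for proxies that accept the logged-on Windows identity:
        // proxy.Credentials = CredentialCache.DefaultCredentials;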

    Read the article

  • Looking For a .NET Task Scheduling library

    - by Hounshell
    I'm looking for the following features:

    1. Scheduler uses SQL Server as the backing store
    2. Tasks can be scheduled by one application and executed by another
    3. I can have multiple applications, each of which handles a specific subset of tasks
    4. Tasks can be triggered at specific times, now, or based on the success or failure of other tasks
    5. Data can be attached to tasks

    There are a number of nice-to-haves, like a web management console, clustering/failover support, and customizable logging, but they're not requirements. On the surface Quartz.NET has a nice interface and seems to fit the bill, as it satisfies (1), (4 with some custom work) and (5), but I've been beating my head against (2), and (3) seems like I'd have to invest more effort than it's worth, especially given how convoluted the innards of it are. Any other libraries out there? Open source is preferred, with free a close runner-up. It's really hard to get management to pay for things like this when it's not their idea.

    Read the article

  • Unable to launch the ASP.NET Development server because port '1900' is in use.

    - by Shaul
    I don't know what has got into my computer today. I was developing just fine in VS 2008 and testing my ASP.NET web site on my development server. Then suddenly, out of the blue, I can't run my web site any more! As soon as I hit F5, the message appears:

        Unable to launch the ASP.NET Development server because port '1900' is in use.

    And it doesn't matter what port I change to, it's always in use! AAARRRGGGHH!!! I have tried:

    - Changing the port number
    - Restarting Visual Studio
    - Rebooting my machine
    - Installing IIS

    Clue: my IIS refuses to start. But I didn't have IIS installed when I was happily working earlier, so that is probably not the issue; it might just be highlighting something else... Thanks in advance... Update: after rebooting, IIS does start, but the problem here persists.
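    For what it's worth, standard Windows tooling can show which process is actually holding a port (run from a command prompt; replace the port and PID with your values):

        netstat -ano | findstr :1900
        tasklist /FI "PID eq 1234"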

    Read the article

  • Are there any B-tree programs or sites that show visually how a B-tree works

    - by Phenom
    I found this website that lets you insert and delete items from a B-tree and shows you visually what the B-tree looks like: java b-tree. I'm looking for another website or program similar to this. This site does not allow you to specify a B-tree of order 4 (4 pointers and 3 elements); it only lets you specify B-trees with an even number of elements. Also, if possible, I'd like to be able to insert letters instead of numbers. I think I actually found a different site, but that was a while ago and I can't find it anymore.

    Read the article

  • The Inkremental Architect's Napkin - #4 - Make increments tangible

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspx

    The driver of software development is increments, small increments, tiny increments. With an increment being a slice of the overall requirement scope thin enough to implement and get feedback from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1]

    To make such high frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what's supposed to get implemented until tomorrow evening at latest is one side of the medal. The other is where to put the logic in all of the code base.

    To implement an increment, only logic statements are needed. Functionality like Quality are just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That's all that is needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It's just about the right expressions and conditional execution paths plus some memory allocation. The automatic function inlining of compilers makes it clear how unimportant functions are for delivering value to users at runtime.

    But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory.

    That said, functions are important. They are even the pivotal element of software development. We can't code without them - whether you write a function yourself or not. Because there's always at least one function in play: the Entry Point of a program. In Ruby the simplest program looks like this:

        puts "Hello, world!"

    In C# more is necessary:

        class Program {
            public static void Main () {
                System.Console.Write("Hello, world!");
            }
        }

    C# makes the Entry Point function explicit, not so Ruby. But still it's there. So you can think of logic always running in some function.

    Which brings me back to increments: In order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note - a user story - on the Scrum or Kanban board. But developers need an idea of what that sticky note means in terms of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on. All's well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation.

    That's why most code katas define exactly how the API of a solution should look. It's a function, maybe two or three, not more. A requirement like "Write a function f which takes this as parameters and produces such and such output by doing x" makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider.
    Even a single function must deliver not only on Functionality, but also on Quality and Evolvability. Nevertheless, once it's clear which function to put logic in, you have a tangible starting point. So, yes, what I'm suggesting is to find a single function to put all the logic in that's necessary to deliver on the requirements of an increment. Or to put it the other way around: slice requirements in a way that each increment's logic can be located under the roof of a single function.

    Entry points

    Of course, the logic of a software will always be spread across many, many functions. But there's always an Entry Point. That's the most important function for each increment, because that's the root to put integration or even acceptance tests on. A batch program like the above hello-world application only has a single Entry Point; all logic is reached from there, regardless how deep it's nested in classes. But a program with a user interface like this has at least two Entry Points: one is the main function called upon startup, the other is the button click event handler for "Show my score". But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected, because then some logic could check if the button should be enabled because all questions got answered. Or another Entry Point for the logic to be executed when the program is closed, because then the choices made should be persisted.

    You see, an Entry Point to me is a function which gets triggered by the user of a software. With batch programs that's the main function. With GUI programs on the desktop that's event handlers. With web programs that's handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: slice them in a way so that each increment is related to only one Entry Point function.[2]

    Entry Points are the "outer functions" of a program. That's where the environment triggers behavior. That's where hardware meets software. Entry Points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3] Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter.

    Collections of batch processors

    I'd thus say we haven't moved forward since the early days of software development. We're still writing batch programs. Forget about "event-driven programming" with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program, today it's hundreds we bundle up into applications.

    Each batch processor is represented by an Entry Point as its root that works on a number of resources from which it reads data to process and to which it writes results. These resources can be the keyboard or main memory or a hard disk or a communication line or a display. Together many batch processors - large and small - form applications the user perceives as a single whole. Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-)

    Features

    Each batch processor entered through an Entry Point delivers value to the user. It's an increment. Sometimes its logic is trivial, sometimes it's very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility.
    At the same time it's a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That's where user stories meet code.

    In this example the user story translates to the Entry Point triggered by clicking the login button on a dialog. The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog. This is all very simple, but you see, there is not just one thing happening, but several:

    1. Get input (email address, password)
    2. Load user for email address
    2.1 If user not found report error
    3. Check password
    3.1 Hash password
    3.2 Compare hash to hash stored in user
    4. Show next dialog

    Viewed from 10,000 feet it's all done by the Entry Point function. And of course that's technically possible; it's just a bunch of logic and calling a couple of API functions. However, I suggest taking these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features.

    Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g.:

    - First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4.
    - Then hard code a single user and check the email address. Features 2 and 2.1.
    - Then check the password without hashing it (or use a very simple hash like the length of the password). Features 3 and 3.2.
    - Replace the hard coded user with a persistent user directory, but a very simple one, e.g. a CSV file. Refinement of feature 2.
    - Calculate the real hash for the password. Feature 3.1.
    - Switch to the final user directory technology.

    Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you're in doubt whether you can implement the whole entry point function until tomorrow night, then just go for a couple of features or even just one.

    That's also why I think you should strive for wrapping feature logic into a function of its own. It's a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it's clear where their logic is located. And finally, of course, it lets you re-use features in different contexts (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, to let the product owner check them for acceptance individually.

    Increments consist of features; entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which map to a hierarchy of functions - with entry points at the top.

    I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility to move forward in increments is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are not tangible.
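    A hedged sketch of that feature decomposition in code (C#; all names are illustrative, not from the original post): the entry point only orchestrates, and each feature lives in a function of its own:

        // Entry point: the login button's click handler.
        void Login()
        {
            var credentials = GetInput();                       // feature 1
            var user = LoadUser(credentials.Email);             // feature 2
            if (user == null) { ReportError(); return; }        // feature 2.1
            if (!CheckPassword(user, credentials.Password))     // features 3, 3.1, 3.2
            { ReportError(); return; }
            ShowNextDialog();                                   // feature 4
        }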
    Software production is moving forward through requirements one increment at a time, and one function at a time.

    In closing

    Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible, they need to be connected. The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that's functions. Yes, functions, not classes nor components nor micro services. We're talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That's the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners, functions are the appropriate glue. Functions which represent increments.

    Can there always be such a small increment found to deliver until tomorrow evening? I boldly say yes. Yes, it's always possible. But maybe you have to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a software. Product owners need to become comfortable using test beds for certain features. If it's hard to slice requirements thin enough for Spinning, the reason is too little knowledge of something. Maybe you don't yet understand the problem domain well enough? Maybe you don't yet feel comfortable with some tool or technology? Then it's time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman, officially become a researcher. Research and check back with the product owner every day - until your understanding has grown to a level where you are able to define the next Spinning increment.

    [2] Sometimes even thin requirement slices will cover several Entry Points, like "Add validation of email addresses to all relevant dialogs." Validation will then be put into a dozen functions. Still, though, it's important to determine which Entry Points exactly get affected. That's much easier if you strive for keeping the number of Entry Points per increment to 1.

    [3] If you like, call Entry Point functions event handlers, because that's what they are. They all handle events of some kind, whether that's palpable in your code or not. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also - for the event "program started".

    Read the article

  • Fluent composite foreign key mapping

    - by Fionn
    Hi, I wonder if it is possible to map the following with Fluent NHibernate. A document table and a document_revision table will be the target tables. The document_revision table should have a composite primary key consisting of the document_id and the revision number (where the document_id is also the foreign key to the document table).

        class Document
        {
            Guid Id;
            // other members omitted
        }

        class DocumentRevision
        {
            Guid document_id; // part one of the primary key, and also the foreign key to Document.Id
            int revision;     // part two of the primary key
            // other members omitted
        }
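    A hedged sketch of what that mapping could look like: Fluent NHibernate's CompositeId API is real, but the property names below assume DocumentRevision exposes a Document reference rather than a raw Guid, and composite-id entities also need Equals/GetHashCode overrides:

        public class DocumentRevisionMap : ClassMap<DocumentRevision>
        {
            public DocumentRevisionMap()
            {
                CompositeId()
                    .KeyReference(x => x.Document, "document_id") // FK half of the key
                    .KeyProperty(x => x.Revision, "revision");    // revision-number half
            }
        }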

    Read the article

  • FullText Search using multiple tables in SQL

    - by Caesar
    Hi there. I have 3 tables:

        tblBook(BookID, ISBN, Title, Summary)
        tblAuthor(AuthorID, FullName)
        tblBookAuthor(BookAuthorID, BookID, AuthorID)

    tblBookAuthor allows a single book to have multiple authors, and an author may have written any number of books. I am using full-text search to rank books based on a word:

        SET @Word = 'FORMSOF(INFLECTIONAL, "' + @Word + '")'

        SELECT COALESCE(ISBNResults.[KEY], TitleResults.[KEY], SummaryResults.[KEY]) AS [KEY],
               ISNULL(ISBNResults.Rank, 0) * 3 +
               ISNULL(TitleResults.Rank, 0) * 2 +
               ISNULL(SummaryResults.Rank, 0) AS Rank
        FROM CONTAINSTABLE(tblBook, ISBN, @Word, LANGUAGE 'English') AS ISBNResults
        FULL OUTER JOIN CONTAINSTABLE(tblBook, Title, @Word, LANGUAGE 'English') AS TitleResults
            ON ISBNResults.[KEY] = TitleResults.[KEY]
        FULL OUTER JOIN CONTAINSTABLE(tblBook, Summary, @Word, LANGUAGE 'English') AS SummaryResults
            ON ISBNResults.[KEY] = SummaryResults.[KEY]

    The above code works fine for searching just the tblBook table. But now I would also like to search the tblAuthor table based on the key word searched. Can you help me with this?
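    A hedged sketch of one way to fold authors in (weights and aliases are illustrative): rank author matches separately, map them back to books through tblBookAuthor, and FULL OUTER JOIN that derived set into the existing query like the other three sources:

        -- per-book author rank: best-matching author wins
        SELECT ba.BookID AS [KEY], MAX(AuthorMatches.Rank) AS Rank
        FROM CONTAINSTABLE(tblAuthor, FullName, @Word, LANGUAGE 'English') AS AuthorMatches
        JOIN tblBookAuthor ba ON ba.AuthorID = AuthorMatches.[KEY]
        GROUP BY ba.BookID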

    Read the article

  • multicolumn sorting of WinForms DataGridView

    - by Bi
    I have a DataGridView in a Windows form with 3 columns: Serial Number, Name, and Date-Time. The Name column will always have one of two values: "Name1" or "Name2". I need to sort these columns such that the grid displays all the rows with name values in a specific order (first display all the "Name1" rows and then all the "Name2" rows). Within the "Name1" rows, I want the rows to be sorted by the Date-Time. Please note that, programmatically, all 3 columns are strings. For example, the rows:

        01 | Name1 | 2010-05-05 10:00 PM
        02 | Name2 | 2010-05-02 08:00 AM
        03 | Name2 | 2010-05-01 08:00 AM
        04 | Name1 | 2010-05-01 11:00 AM
        05 | Name1 | 2010-05-04 07:00 AM

    need to be sorted as:

        04 | Name1 | 2010-05-01 11:00 AM
        05 | Name1 | 2010-05-04 07:00 AM
        01 | Name1 | 2010-05-05 10:00 PM
        03 | Name2 | 2010-05-01 08:00 AM
        02 | Name2 | 2010-05-02 08:00 AM

    I am not sure how to go about using the below:

        myGrid.Sort(....., ListSortDirection.Ascending);
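    A hedged sketch of the standard multi-column pattern for an unbound grid (the column names "Name" and "DateTime" are assumed): sort on Name, and break ties inside the SortCompare event by parsing the Date-Time strings:

        myGrid.SortCompare += (s, e) =>
        {
            if (e.Column.Name != "Name") return;
            int byName = string.Compare(e.CellValue1.ToString(), e.CellValue2.ToString());
            // equal names: compare as dates, not as strings
            e.SortResult = byName != 0
                ? byName
                : DateTime.Parse(myGrid.Rows[e.RowIndex1].Cells["DateTime"].Value.ToString())
                    .CompareTo(DateTime.Parse(myGrid.Rows[e.RowIndex2].Cells["DateTime"].Value.ToString()));
            e.Handled = true;
        };
        myGrid.Sort(myGrid.Columns["Name"], ListSortDirection.Ascending);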

    Read the article

  • iPhone Shell - is there any?

    - by alee
    While working on iPhone security architecture, I came to know that I can run applications from other applications on the iPhone, referring to the following URL: http://iphonedevelopertips.com/cocoa/launching-other-apps-within-an-iphone-application.html For example, I can put a link on a website with the hyperlink skype://, which will cause Skype to run and call a particular number. Now I have a few concerns here:

    1. Is there a shell running in the background on the iPhone that allows other applications to run basic app commands?
    2. If the above is true, then how can I enable or run commands directly in the iPhone shell?
    3. If the above statements are false, then could you please explain how these commands are being executed? Is this part of the iPhone SDK, or is this functionality part of iPhone OS?
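    For reference, a minimal sketch of how such links are launched from code (the number is hypothetical): there is no shell involved; the OS dispatches the URL scheme to whichever app registered it:

        // Skype registers the skype:// scheme; the OS routes the URL to it.
        NSURL *url = [NSURL URLWithString:@"skype://14085551234"];
        [[UIApplication sharedApplication] openURL:url];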

    Read the article
