Search Results

Search found 23427 results on 938 pages for 'christopher done'.

Page 465 of 938

  • PHP form class

    - by Oli
    I'm used to ASP.NET's and Django's methods of doing forms: nice object-oriented handlers, where you can specify regexes for validation and do everything in a very simple way. After months living happily without it, I've had to come back to PHP for a project and noticed that everything I used to do with PHP forms (manual output, manual validation, extreme pain) was utter rubbish. Is there a nice, simple and free class that does form generation and validation like it should be done? Clonefish has the right idea, but it's way off on the price tag.

    Read the article

  • MapView Annotation Callout action when opened

    - by Paul Peelen
    Hi, I have a mapview with several annotations. Every annotation has a leftCalloutAccessoryView which is a UIViewController class. The reason for this is that I want every annotation to load some data from the server, and add the result of that data to the annotation subTitle. This all works perfectly, except that I don't want to load all that data when my app is started; I want the remote call to be made only when the callout bubble is opened. Does anybody know how I can do this? The viewWillLoad, viewDidLoad, etc. don't work in this case. Any examples as well? Best regards, Paul Peelen

    Read the article

  • How do I determine a best-fit distribution in java?

    - by Eadwacer
    I have a bunch of sets of data (between 50 and 500 points, each of which can take a positive integral value) and need to determine which distribution best describes them. I have done this manually for several of them, but need to automate this going forward. Some of the sets are completely unimodal (every datum has the value of 15), some are strongly modal or bimodal, some are bell curves (often skewed and with differing degrees of kurtosis/pointiness), some are roughly flat, and there are any number of other possible distributions (Poisson, power-law, etc.). I need a way to determine which distribution best describes the data and (ideally) also provides me with a fitness metric so that I know how confident I am in the analysis. Existing open-source libraries would be ideal, followed by well documented algorithms that I can implement myself.
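
    As a rough illustration of what such a fitness metric could look like (a sketch in Python/SciPy rather than the Java the question asks for; the candidate distributions and the sample data below are made up, and on the Java side libraries such as Apache Commons Math offer comparable goodness-of-fit tests):

        import numpy as np
        from scipy import stats

        def rank_fits(data, candidates=("norm", "expon", "powerlaw")):
            """Fit each candidate distribution and rank by the K-S statistic (smaller = better fit)."""
            results = []
            for name in candidates:
                dist = getattr(stats, name)
                params = dist.fit(data)                       # maximum-likelihood parameter estimates
                d, p = stats.kstest(data, name, args=params)  # goodness-of-fit statistic and p-value
                results.append((d, p, name))
            return sorted(results)

        data = np.random.normal(loc=15, scale=2, size=200)    # made-up sample data
        for d, p, name in rank_fits(data):
            print(f"{name}: KS distance={d:.3f}, p-value={p:.3f}")

    The Kolmogorov-Smirnov distance here plays the role of the fitness metric: whichever candidate gives the smallest distance (and a reasonable p-value) is the best-fitting distribution among those tried.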

    Read the article

  • Restore VisualSVN server from client copy.

    - by Kevin
    I am running VisualSVN on a Windows VM box. The VM crashed and corrupted the image. After restoring an older image (2007) we discovered that our data backup is not functioning properly. Hence I have a bunch of projects (~20) sitting on my laptop (client side) and I want to push them back into the VisualSVN Server, which is now empty. I know this can be done by simply adding the project files manually, but this is going to take a long time because I don't want to include every file (i.e. compiled files). Any suggestions would be greatly appreciated.

    Read the article

  • How to execute an osmdroid application?

    - by Rupesh Chavan
    Hello everyone, can someone please tell me what the steps are to run an osmdroid application? I have Eclipse installed on my machine along with the ADT plugin, and I have downloaded the source code and imported it into Eclipse and tried to run it on the Android 1.6 platform, but somehow I am not getting the map; instead I'm just getting a blank screen with a grid in it. Are there any configurations that have to be done which I am missing? Any help or any tutorial on running an osmdroid application will be appreciated. Thanks, Rupesh

    Read the article

  • Displaying Fourier transforms in OpenCV

    - by Simonw
    Hi, I'm just learning to use OpenCV and am having a problem with using DFT. I've done a signal processing class which used MATLAB, so I'm trying to go through some of the exercises we did in that class. I'm trying to get and display the FT of an image, so I can mask some of the frequencies. I'd like to be able to see the FT, so I know how big to make the mask, but when I tried, I got an image like this: rather than like one of these. Am I forgetting a step somewhere? I'm loading the image, converting its type to CV_32FC1, getting the matrix of it, getting the DFT, and then turning the resulting matrix back into an image. I'll post the code I'm using if it will be of any help? Or if someone has a link to an example of displaying the FT? I could only find ones which used it for the convolution. EDIT: Did I get the phase of the image?
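
    For context, the step that is usually missing when a DFT "looks wrong" on screen is converting the complex output to a log-scaled, centre-shifted magnitude image before displaying it (showing only one channel of the raw result tends to look like noise or phase). A minimal sketch of that display pipeline, in Python/OpenCV rather than the C/C++ interface used in the question; the file names are placeholders:

        import cv2
        import numpy as np

        img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

        dft = cv2.dft(img, flags=cv2.DFT_COMPLEX_OUTPUT)       # 2-channel (real, imaginary) output
        dft_shift = np.fft.fftshift(dft, axes=(0, 1))           # move the DC term to the centre
        magnitude = cv2.magnitude(dft_shift[:, :, 0], dft_shift[:, :, 1])
        spectrum = np.log1p(magnitude)                           # log scale to compress the dynamic range
        spectrum = cv2.normalize(spectrum, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

        cv2.imwrite("spectrum.png", spectrum)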

    Read the article

  • Correct way to load image into UIWebView from NSData object

    - by rustyshelf
    I have downloaded a gif image into an NSData object (I've checked the contents of the NSData object and it's definitely populated). Now I want to load that image into my UIWebView. I've tried the following:

        [webView loadData:imageData MIMEType:@"image/gif" textEncodingName:nil baseURL:nil];

    but I get a blank UIWebView. Loading the image from the same URL directly works fine:

        NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:imageUrl]];
        [imageView loadRequest:request];

    Do I need to set the textEncodingName to something, or am I doing something else wrong? I want to load the image manually so I can report progress to the user, but it's an animated gif, so when it's done I want to show it in a UIWebView. Edit: Perhaps I need to wrap my image in HTML somehow? Is there a way to do this without having to save it to disk?

    Read the article

  • Etiquette for refactoring other people's source code?

    - by Prutswonder
    Our team of software developers consists of a bunch of experienced programmers with a variety of programming styles and preferences. We do not have standards for everything, just the bare necessities to prevent total chaos. Recently, I bumped into some refactoring done by a colleague. My code looked somewhat like this:

        public Person CreateNewPerson(string firstName, string lastName)
        {
            var person = new Person() { FirstName = firstName, LastName = lastName };
            return person;
        }

    Which was refactored to this:

        public Person CreateNewPerson (string firstName, string lastName)
        {
            Person person = new Person ();
            person.FirstName = firstName;
            person.LastName = lastName;
            return person;
        }

    Just because my colleague needed to update some other method in one of the classes I wrote, he also "refactored" the method above. For the record, he's one of those developers who despises syntactic sugar and uses a different bracket placement/indentation scheme than the rest of us. My question is: what is the (C#) programmer's etiquette for refactoring other people's source code (both semantic and syntactic)?

    Read the article

  • Resources for UnrealScript

    - by Blaenk
    Now that the Unreal Development Kit is out and free to use by anyone, I am pretty excited to try it out. My understanding is that the programming is done through scripting in UnrealScript. I am wondering if any of you guys know of any good articles, tutorials, books, and references for UnrealScript or the Unreal Development Kit.

    Documentation
    - UnrealScript Reference for Unreal Engine 3
    - UnrealScript at UnrealWiki

    Tools
    - nFringe - Visual Studio Extension for UnrealScript
    - Setting up an nFringe UDK project

    Tutorials
    - Chimeric - Coding tutorials

    Video Tutorials
    - 3D Buzz Video Tutorials

    Sorry if I screwed up on this. It's my first community wiki post, let me know if I did something wrong :)

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami

    A typical data warehouse contains a Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information, including amounts (for example, Order Amount, Invoice Amount, etc.). With the truly global nature of business nowadays, end-users want to view reports in their own currency or in a global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts at converted rates, either by pre-storing them or by doing on-the-fly conversions while displaying the reports to the users.

    Source Systems

    OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting them to the OLTP users. For example, EBS stores conversion rates between currencies which can be classified by rate type, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by rate type, like Daily. EBS/Fusion store the conversion rates for each day, whereas PSFT/Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor and we need to calculate the rate based on them, whereas other source systems store the currency exchange rate directly.

    OBIA Design

    The data consolidation from various disparate source systems poses the challenge of conforming the various currencies, rate types, exchange rates etc., and of designing the best way to present the amounts to the users without affecting performance. When consolidating the data for reporting in OBIA, we have designed mechanisms in the Common Dimension to allow users to report based on their required currencies.

    OBIA Facts store amounts in various currencies:

    - Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies.
    - Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company.
    - Global Currencies: OBIA provides five Global Currencies. Three are used across all modules; the last two are for CRM only. A Global Currency is very useful when creating reports where the data is viewed enterprise-wide. For example, a US-based multinational would want to see the reports in USD, so the company will choose USD as one of the global currencies. OBIA allows users to define up to five global currencies during the initial implementation.

    The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics.

    When choosing Local Currency for the Currency Preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard.

    Design Logic

    When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount into the target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts.
    If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates.

    The lookup of exchange rates is done via the Exchange Rate Dimension, provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables, W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G, are used to provide the lookups and conversions between currencies. The data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide to use one or both tables to do the conversion.

    Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters.

    Some Gotchas to Look for

    It is necessary to think through the currencies during the initial implementation.

    1) Identify the various types of currencies that are used by your business. Understand what your Local (or Base) and Document currency will be. Identify the various global currencies in which your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time.

    2) If you have a multi-source system, make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done.

    Technical Section

    This section will briefly mention the technical scenarios employed in the OBIA adapters to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information, as explained in the previous sections: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G. W_EXCH_RATE_G stores all the currency conversions present in the source system. It captures data for a date range. W_GLOBAL_EXCH_RATE_G has Global Currency conversions stored at a daily level. However, the challenge here is to store all 5 Global Currency Exchange Rates in a single record for each From Currency. Let's voyage further into the source system extraction logic for each of these tables and understand the flow briefly.

    EBS

    In EBS, we have currency data stored in the GL_DAILY_RATES table. As the name indicates, the GL_DAILY_RATES EBS table has data at a daily level. However, in our warehouse we store the data with a date range and insert a new range record only when the exchange rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process (a simplified sketch of this daily-to-range compression idea appears after this article):

    - Step 0 (incremental flow only): Clean up the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date, and update the End Date of the existing records.
    - Compress the daily data from the GL_DAILY_RATES table into range records. The incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter.
    - Generate the Previous Rate, Previous Date and Next Date for each of the daily records from the OLTP.
    - Filter out the records which have a Conversion Rate that is the same as the Previous Rate, or whose Conversion Date lies within a single-day range.
    - Mark the records as 'Keep' or 'Filter' and also get the final End Date for the single range record (unique combination of From Date, To Date, Rate and Conversion Date).
    - Filter the records marked as 'Filter' in the INFA map.

    The above steps load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. The SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to range records in W_EXCH_RATE_G.

    We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a daily granular level. However, we need to pivot the data because the data present in multiple rows in the source tables needs to be stored in different columns of the same row in the DW. We use GROUP BY and CASE logic to achieve this.

    Fusion

    Fusion has extraction logic very similar to EBS. The only difference is that the cleanup logic mentioned in step 0 above does not use the $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the exchange rates in incremental as well and do the cleanup. The SIL then takes care of inserts/updates accordingly.

    PeopleSoft

    PeopleSoft does not have From Date and To Date explicitly in the source tables. Let's look at an example (please note that this is achieved from PS1 onwards only):

        1 Jan 2010 – USD to INR – 45
        31 Jan 2010 – USD to INR – 46

    PSFT stores records in the above fashion. This means that the exchange rate of 45 for USD to INR is applicable from 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in the DW. Also, PSFT has the exchange rate stored as RATE_MULT and RATE_DIV; we need to do a RATE_MULT/RATE_DIV to get the correct exchange rate.

    We generate the From Date and To Date while extracting data from the source, and this has certain assumptions: if a record gets updated/inserted in the source, it will be extracted in incremental. Also, if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate range records and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010, the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to the next available date. Even though the 1 Jan and 31 Jan records have not changed, we will still extract them because the range is affected.

    Similar logic is used for Global Exchange Rate extraction. We create the range records and get them into a temporary table. Then we join to the Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type.

    Siebel

    Siebel Facts are heavily dependent on Global Exchange Rates and almost none of them really use individual exchange rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from the PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges.
    However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL, i.e. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, does not have such needs. However, SEBL does not have From Date and To Date columns in the source tables, similar to PSFT. We use extraction logic similar to that explained in the PSFT section for SEBL as well.

    What if all 5 Global Currencies configured are the same?

    As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all 5 Global Currencies chosen are the same. For example, if the user configures all 5 Global Currencies as 'USD' then the extract logic will not be able to generate a record for From Currency = USD. This is because not all source systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case: we generate a record with Conversion Rate = 1 for such cases.

    Reusable Lookups

    Before PS1, we had a Mapplet for currency conversions. In PS1, we only have reusable Lookups: LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact mappings. Any user who wants to do a lookup on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or lookup on the tables might lead to wrong data being returned.

    Changing Currency Preferences in the Dashboard

    In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another currency preference. Project Analytics has supported currency preferences since the 7.9.6 release, though, and it published a Tech Note for customers of other modules to add toggling between currency preferences to the solution.

    List of Currency Preferences

    Starting from the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml. This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", ... which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic.
    In Static mode, all users will see the full list as in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the rpd, so the list of currency preferences will be based on the user roles. BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

    Adding Currency to an Amount Field

    When the user selects one of the items from the currency prompt, all the amounts on that page will show in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data will be shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields will show in the currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals will not be available (you cannot add USD and EUR amounts in one field), and depending on the setup (see the next paragraph), the user may receive an error.

    There are two ways to add the currency field to an amount metric:

    - In the form of a currency code, like USD, EUR, ... For this, the user needs to add the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code".
    - In the form of a currency symbol ($ for USD, € for EUR, ...). For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column], where Column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.
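
    As an illustration of the daily-to-range compression described in the EBS section above, here is a minimal sketch in Python; this is purely illustrative (the real logic lives in the Informatica mappings and warehouse tables, and the field names and sample rows below are made up):

        from collections import namedtuple
        from datetime import date

        Daily = namedtuple("Daily", "from_ccy to_ccy rate_type day rate")
        Range = namedtuple("Range", "from_ccy to_ccy rate_type start_date end_date rate")

        def compress(daily_rows):
            """daily_rows must be sorted by (from_ccy, to_ccy, rate_type, day)."""
            ranges = []
            current = None
            for row in daily_rows:
                key = (row.from_ccy, row.to_ccy, row.rate_type)
                same_key = current is not None and key == (current.from_ccy, current.to_ccy, current.rate_type)
                if same_key and current.rate == row.rate:
                    # same rate as the open range: just extend its end date
                    current = current._replace(end_date=row.day)
                else:
                    # rate (or currency pair / rate type) changed: close the old range, open a new one
                    if current is not None:
                        ranges.append(current)
                    current = Range(*key, row.day, row.day, row.rate)
            if current is not None:
                ranges.append(current)
            return ranges

        rows = [
            Daily("USD", "INR", "Corporate", date(2010, 1, 1), 45.0),
            Daily("USD", "INR", "Corporate", date(2010, 1, 2), 45.0),
            Daily("USD", "INR", "Corporate", date(2010, 1, 3), 46.0),
        ]
        for r in compress(rows):
            print(r)

    The idea is the same as in the warehouse load: a new range record is emitted only when the rate changes for a given From Currency, To Currency and Rate Type, which is why daily source data can be stored far more compactly as ranges.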

    Read the article

  • Web Page Rendering Capture

    - by Chaitanya
    I start by describing the problem itself. Rather than a problem, I'm looking for a better solution. I have an ASP.NET page which has a bunch of images, each with a link underneath it. Each image is in fact the latest rendering of the link underneath it. I scheduled a .bat script which runs every hour to fetch the images through IECapt, a web page rendering capture utility. One thing I am annoyed about with this utility is that it takes a lot of time for the 20 images I have, and for a few of them, because of the Flash content, it fails to take the actual screenshot of the website. Now I'd like to know whether this rendering can be done with traditional programming; I am not interested in using any utilities. I'm interested in trying this. The solution need not necessarily be C# based; I am ready to try any other language, because it gives me a chance to learn. Thank you.

    Read the article

  • Notification between J2EE components.

    - by Pratik
    Hi there! I have a design problem. My application has multiple J2EE components; in simple terms, one acts as a service provider (non-UI) and the others are consumers (UI webapps). The consumer gets the configuration data from the service provider (which basically reads the data from the DB) during start-up and stores it in its cache. The cache gets refreshed periodically to reflect any changes made to the database. The problem: apart from the cache refresh, I also want to notify the consumers when someone changes the DB, i.e. "the configuration has been changed, please reload it". What notification mechanisms can I use to achieve this? Thanks! Pratik

    Read the article

  • Visual C#, Large Arrays, and LOH Fragmentation. What is the accepted convention?

    - by Gorchestopher H
    I have another active question HERE regarding some hopeless memory issues that possibly involve LOH fragmentation, among possibly other unknowns. What my question now is, what is the accepted way of doing things? If my app needs to be done in Visual C#, and needs to deal with large arrays to the tune of int[4000000], how can I not be doomed by the garbage collector's refusal to deal with the LOH? It would seem that I am forced to make any large arrays global, and never use the word "new" around any of them. So, I'm left with ungraceful global arrays with "maxindex" variables instead of neatly sized arrays that get passed around by functions. I've always been told that this was bad practice. What alternative is there? Is there some kind of function to the tune of System.GC.CollectLOH("Seriously")? Is there some way to outsource garbage collection to something other than System.GC? Anyway, what are the generally accepted rules for dealing with large (85 KB+) variables?

    Read the article

  • Prevent form field from being edited without disabling the field

    - by Erick
    I am currently working on a page that has a date picker for one of the fields. One of the requirements from my client is to prevent the user from editing the field manually; only the date picker (jQuery DatePicker) should be usable. I had in mind disabling the field and using a hidden field to store the data (since disabled form fields don't send data on post). This sounds a bit wacky for something that could be done with JavaScript, I'm pretty sure. So the big question: in JavaScript, is it possible to prevent manual editing of a field without breaking the datepicker plugin?

    Read the article

  • The Difference between SharePoint Document Title and Document Name

    I am sorry to ask you guys such a low-level question, but I really cannot find out the answer. I hope no one will bark at me for this question. A colleague of mine developed a workflow which automatically sets the title on documents. With this workflow, as he put it, he can optimize search and lookup, things like that. However, I think this can be done just with the name of the document. There must be some kind of story behind this. Could someone help me here? Thanks!

    Read the article

  • VS2010 add-in custom menu item in files of Solution Explorer

    - by NewProgrammer
    Hey guys, I need to create a custom menu item for a Visual Studio 2010 add-in in C#, but I have had no luck in coming up with a solution anytime soon. I am aware that there was a similar post: http://stackoverflow.com/questions/2486818/visual-studio-add-in-adding-a-context-menu-item-to-solution-explorer But it did not help, as the blog follows the process through the integration package, and the video is done through VB. I had attempted to convert the VB syntax to C# syntax, but about halfway through the video it became clear that the add-in methods have significantly changed from 2005 to 2010, as some of the methods have been removed or changed. Are there any good tutorials on making a custom menu item on the Solution Explorer in the latest Visual Studio in C#, and is there a good website that could be used as a reference for looking over the VS add-in API? I've used Microsoft's main website, however it is confusing and wordy, which makes it difficult to understand and to find the methods, properties, or commands that I am looking for. Any help would be appreciated. Thanks in advance.

    Read the article

  • Using clang to analyze C++ code

    - by aneccodeal
    We want to do some fairly simple analysis of users' C++ code and then use that information to instrument their code (basically regenerate their code with a bit of instrumentation code) so that the user can run a dynamic analysis of their code and get stats on things like the ranges of values of certain numeric types. clang should be able to handle enough C++ now to handle the kind of code our users would be throwing at it - and since clang's C++ coverage is continuously improving, by the time we're done it'll be even better. So how does one go about using clang like this as a standalone parser? We're thinking we could just generate an AST and then walk it, looking for objects of the classes we're interested in tracking. Would be interested in hearing from others who are using clang without LLVM.
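
    As a hedged sketch of the "generate an AST and walk it" idea, here is what that can look like with clang's Python bindings (clang.cindex), assuming those bindings and a matching libclang are installed; the file name and the tracked class names are made up:

        import clang.cindex
        from clang.cindex import CursorKind

        def walk(cursor, depth=0):
            """Recursively visit the AST, flagging variables whose type is one we want to instrument."""
            if cursor.kind == CursorKind.VAR_DECL and cursor.type.spelling in ("TrackedInt", "TrackedFloat"):
                print("  " * depth + f"tracked variable: {cursor.spelling} ({cursor.type.spelling})")
            for child in cursor.get_children():
                walk(child, depth + 1)

        index = clang.cindex.Index.create()
        tu = index.parse("user_code.cpp", args=["-std=c++11"])
        walk(tu.cursor)

    The same cursor-walking approach is available from C++ via libclang or clang's own AST visitor classes; the Python bindings are simply a compact way to show the shape of the traversal.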

    Read the article

  • Transpose a file in bash

    - by Thrawn
    Hi all, I have a huge tab-separated file formatted like this:

        X column1 column2 column3
        row1 0 1 2
        row2 3 4 5
        row3 6 7 8
        row4 9 10 11

    I would like to transpose it in an efficient way using only bash commands (I could write a ten-or-so-line Perl script to do that, but it should be slower to execute than the native bash functions). So the output should look like:

        X row1 row2 row3 row4
        column1 0 3 6 9
        column2 1 4 7 10
        column3 2 5 8 11

    I thought of a solution like this:

        cols=`head -n 1 input | wc -w`
        for (( i=1; i <= $cols; i++))
        do
            cut -f $i input | tr $'\n' $'\t' | sed -e "s/\t$/\n/g" >> output
        done

    But it's slow and doesn't seem the most efficient solution. I've seen a solution for vi in this post, but it's still over-slow. Any thoughts/suggestions/brilliant ideas? :-)

    Read the article

  • iTextSharp and VB.NET question about headers

    - by bpSz
    Hello! I'm wondering how I can put a header into my PDF file, because I've tried the tutorials from here: http://itextsharp.sourceforge.net/tutorial/ch04.html and it has not worked. I've done this:

        Dim head As New HeaderFooter(New Phrase("This is page: "), False)
        head.Border = Rectangle.NO_BORDER
        document.Header = head

    But VS2008 says that HeaderFooter is not defined (line 1), and that Footer is not a member of "iTextSharp.text.document" (line 3). I've already included the imports at the beginning of my code and I don't have any other problems with iTextSharp (I mean that it is working apart from the header problem):

        Imports iTextSharp.text
        Imports iTextSharp.text.pdf
        Imports System.Data.SQLite
        Imports System.IO

    So please, can anyone explain to me how I can set a header for my pages? Regards

    Read the article

  • C# - Can FileHelper FieldConverter routines refer to other fields in the record?

    - by Pete
    I am using the excellent FileHelpers library to process a fixed-length airline schedule file. I have a date field, then a few fields later on in the record, a time field. I want to combine both of these in the FileHelpers record class, and I know there is a custom FieldConverter attribute. With this attribute, you provide a custom converter to handle your field data and implement StringToField and FieldToString. My question is: can I pass other fields (already read) to this custom FieldConverter too, so I can combine Date and Time together? FieldConverter has an overload that allows you to refer to both a custom converter class AND 'other strings' or even an array of objects. But, given this is done in the attribute definition, I am struggling to access this earlier-field reference.

        [FieldFixedLength(4)]
        [FieldConverter(typeof(MyTimeConverter), "eg. ScheduledDepartureDate")]
        public DateTime scheduledDepartureTime;

    Read the article

  • Designing for the future

    - by Dennis Vroegop
    User interface and user experience design is a fast-moving field. It's something that changes pretty quickly: what feels fresh today will look outdated tomorrow. I remember the day I first got a beta version of Windows 95 and I felt swept away by the user interface of the OS. It felt so modern! If I look back now, it feels old. Well, it should: the design is 17 years old, which is an eternity in our field.

    Of course, this is not limited to UI. The same goes for many industries. I want you to think back to the cars that amazed you when you were in your teens (if you are in your teens then this may not apply to you). Didn't they feel like part of the future? Didn't you think that this was the ultimate in design? And aren't those designs hopelessly outdated today (again, depending on your age, it may just be me)?

    Let's review the Win95 design, and let's compare that to Windows 7. There are so many differences here, I wouldn't even know where to start explaining them. The general feeling however is one of more usability: studies have shown Windows 7 is much easier to understand for new users than the older versions of Windows were. Of course, experienced Windows users didn't like it: people are usually afraid of change and like to stick to what they know. But for new users this was a huge improvement. And that is what UX design is all about: make a product easier to use, with less training required, and make users feel more productive.

    Still, there are areas where this doesn't hold up. There are plenty of examples of designs from the past that are still fresh today. But if you look closely at them, you'll notice some subtle differences. These differences are what keep the designs fresh. A good example is the signs you'll find on the road. They haven't changed much over the years (otherwise people wouldn't recognize them anymore) but they have been changing gradually to reflect changes in traffic.

    The same goes for computer interfaces. With each new product or version of a product, the UI and UX are changed gradually. Every now and then however, a bigger change is needed. Just think about the introduction of the Ribbon in Microsoft Office 2007: the whole UI was redesigned. A lot of old users (not in age, but in time spent using older versions) didn't like it a bit, but new users or casual users seem to be more efficient using the product. Which, of course, is exactly the reason behind the changes.

    I believe that a big engine behind the changes in user experience design has been the web. In the old days (i.e. before the explosion of the internet) user interface design in Windows applications was limited to choosing the margins between your battleship-gray buttons. When the web came along, and especially web 2.0 where the browsers started to act more and more as application platforms, designers stepped in and made a huge impact. In the browser, they could do whatever they wanted. In the beginning this was limited to the darn blink tag, but gradually people really started to think about UX. Even more so: the design of the UI and the whole experience was taken away from the developers and put into the hands of people who knew what they were doing: UX designers.

    This caused some problems. Everyone who has done a web project in the early 2000s must have had the same experience: the designers give you a set of Photoshop files and tell you to translate them to HTML. Which, of course, is very hard to do. However, with new tooling and new standards this became much easier.
    The latest versions of HTML and CSS have taken the responsibility for the design away from the developers and placed it in the capable hands of the designers. And that's where that responsibility belongs; after all, I don't want a designer to muck around in my C# code just as much as he or she doesn't want me to poke in the site's style definitions. This change in responsibilities resulted in good-looking but, more importantly, better-thought-out user interfaces in websites. And when websites became more and more interactive, people started to expect the same sort of look and feel from their desktop applications. But that didn't really happen. Most business applications still have that battleship-gray look and feel. Ok, they may use a different color, but we're not talking colors here but usability.

    Now, you may not be able to read the Dutch captions, but even if you did you wouldn't understand what was going on. At least, not when you first see it. You have to scan the screen, read all the labels, see how they are related to the other elements on the screen and then figure out what they do. If you're an experienced user of this application however, this might be a good thing: you know what to do and you get all the information you need in one single screen. But for most applications this isn't the case. A lot of people only use their computer for a limited time a day (a weird concept for me, but it happens) and need it to get something done and then get on with their lives. For them, a user interface experience like the above isn't working. (Disclaimer: I just picked a screenshot; I am not saying this is bad software, but it is an example of about 95% of the Windows applications out there.)

    For the knowledge worker, this isn't a problem. They use one or two systems and they know exactly what they need to do to achieve their goal. They don't want any clutter on their screen that distracts them from their task; they just want to be as efficient as possible. When they know the systems they are very productive. The point is, how long does it take to become productive? And: could they be even more productive if the UX was better? Are there things missing that they don't know about? Are there better ways to achieve what they want to achieve? Also: could a system be designed in such a way that it is not only much easier to work with but also less tiring? In the example above you need to switch between the keyboard and mouse a lot, something that we now know can be very tiring.

    The goal of most applications (client apps or websites on any kind of device) is to provide information. Information is data that, when given to the right people, at the right time, in the right place, and when it is correct, adds value for that person (please remember that definition: I still hear the statement "the information was wrong", which doesn't make sense: data can be wrong, information cannot be). So if a system provides data, how can we make sure the chances of it becoming information are as high as possible?

    A good example of a well thought-out system that attempts this is the Zune client. It is a very good application, and I think the UX is much better than that of its main competitor, iTunes. Have a look at both: on the left you see the iTunes screenshot, on the right the Zune. As you notice, the Zune screen has more images but less chrome (chrome being visuals not part of the data you want to show, i.e. edges around buttons).
    The whole thing is text-oriented or image-oriented, where that text or image is part of the information you need. What is important is big; what's less important is smaller. Yet everything you need to know at that point is present, and your attention is drawn immediately to what you're trying to achieve: information about music. You can easily switch between the content on your machine and content on your Zune player by clicking on the image of the player. And if you didn't know that, you'd find out soon enough: the whole UX is designed in such a way that it invites you to play around. So sooner or later (probably sooner) you'd click on that image and you would see what it does. In the iTunes version it's harder to find: the discoverability is a lot lower. For inexperienced people the Zune player feels much more natural than the iTunes player, and they get up to speed a lot faster.

    How does this all work? Why is this UX better? The answer lies in a project from Microsoft with the codename (it seems to be becoming the official name though) "Metro". Metro is a design language, based on certain principles. When they thought about UX they took a good long look around them and went out in search of metaphors. And they found them. The team noticed that signage in streets, airports, roads, buildings and so on is usually very clear and very precise. These signs give you the information you need and nothing more. It's simple, clearly understood and fast to understand.

    A good example is airport signs. Airports can be intimidating places, especially for the inexperienced traveller. In the early 1990s Amsterdam Airport Schiphol decided to redesign all the signage to make the traveller feel less disoriented. They developed a set of guidelines for signs and implemented those. Soon, most airports around the world adopted these ideas and you see variations of the Dutch signs everywhere on the globe. The signs are text-oriented. Yes, there are icons explaining what it all means for people who can't read or don't understand the language, but the basic sign language is text. It's clear, it's high-contrast and it's easy to understand. One look at the sign and you know where to go. The only thing I don't like is the green sign pointing to the emergency exit, but since this is the default style for emergency exits I understand why they did this.

    If you look at the Zune UI again, you'll notice the similarities. Text-oriented, little or no icons, clear usage of fonts and all the information you need. This design language has a set of principles:

    - Clean, light, open and fast
    - Content, not chrome
    - Soulful and alive

    These are just a couple of the principles; you can read the whole philosophy behind Metro for Windows Phone 7 here. These ideas seem to work. I love my Windows Phone 7. It's easy to use, it's clear, and there's no clutter that I do not need. It works for me. And I noticed it works for a lot of other people as well, especially people who aren't as proficient with computers as I am.

    You see these ideas in a lot of other places. Corning, a manufacturer of glass, has made a video of possible usages of their products. It's their glimpse into the future. You'll notice that a lot of the UI in the screens looks a lot like what Microsoft is doing with Metro (not coincidentally, Corning is the supplier of the Gorilla Glass display surface on the new SUR40 device, or Surface v2.0 as a lot of people call it). The idea behind this vision is that data should be available everywhere you need it.
    Systems should be available at all times, and data should be presented in a clear and light manner so that you can turn that data into information. You don't need a lot of fancy animations that only distract from the data. You want the data and you want it fast. Have a look at that truly inspiring video they made: this is what I believe the future will look like. Of course, not everything is possible, or even desirable. But it is a nice way to think about the future.

    I feel very strongly about designing applications in such a way that they add value to the user. Designing applications that turn data into information. Applications that make the user feel happy to use them. So... when are you going to drop the battleship-gray designs?

    Technorati tags: surface, design, windows phone 7, wp7, metro

    Read the article

  • How do I present Prism module views?

    - by Maciek
    Heya, I'm writing a Prism application; I've just created my first module, fired it all up and, amazingly, it works. The application is going to grow soon(TM), and I'll be facing the need to host those modules in separate GUI elements. What type of GUI elements would you recommend to host the modules? Is it possible to data-bind a module to some control like a tab control? How is it done? Is there some kind of a dock manager (similar to AvalonDock) for Silverlight?

    Read the article

  • ExtJS: Login with 'Remember me' functionality

    - by Chau
    I'm trying to create a simple login window with the very common 'Remember me' functionality. The login validation is done AJAX style, thus the browser won't remember my input. My approach is to use the built-in state functionality, but how to use it confuses me.

        Ext.state.Manager.setProvider(new Ext.state.CookieProvider({
            expires: new Date(new Date().getTime() + (1000*60*60*24*7)) // 7 days from now
        }));
        ...
        {
            xtype: 'textfield',
            fieldLabel: 'User name',
            id: 'txt-username',
            stateful: true,
            stateId: 'username'
        }, {
            xtype: 'textfield',
            fieldLabel: 'Password',
            id: 'txt-password',
            inputType: 'password',
            stateful: true,
            stateId: 'password'
        }, {
            xtype: 'button',
            text: 'Validate',
            stateEvents: 'click'
        }

    I know I have to implement the getState method, but on what component (my guess is on the two textfields)? Another thing I fail to realize is, how is my click event on the button connected to the state properties of my textfields?

    Read the article

  • Linq-to-SQL statement issue

    - by Anicho
    I am basically looking to bind a search query to a GridView, which is nice, but this must be driven by a user's input query (sort of like a search function). I can get single values and rows returned, but how would I get it to search all columns in my database for the inputted values and return the results? My code so far is:

        void SearchFunction()
        {
            TiamoDataContext context = new TiamoDataContext();
            var search = from p in context.UserProfiles
                         where p.DanceType == UserSearchString
                         select p;
            UserSearchGrid.DataSource = search;
            UserSearchGrid.DataBind();
        }

    I tried p.Equals but am pretty sure that's not the way to go about it.

    Read the article

  • Are the ASP.net __EVENTTARGET and __EVENTARGUMENT susceptible to SQL injection?

    - by Schleichermann
    A security review was done against one of our ASP.NET applications, and returned in the test results was a SQL injection exposure considered to be a high-risk item. The test that was performed passed a SQL statement as the value of __EVENTTARGET and __EVENTARGUMENT. I am wondering: since these two values are ASP.NET auto-generated hidden fields used for the auto-postback feature of the framework and hold information specific to the controls initiating the postback, is there really the potential for SQL injection if you never manually read or pull values out of these parameters in your code-behind?

    Read the article
