Search Results

Search found 13243 results on 530 pages for 'lookup field'.

  • How do I create a custom text field in Tapestry5 that renders some Javascript onto the page?

    - by shane87
    I have been trying to create a custom text field in Tapestry which will render some JavaScript when it gains focus, but I have had trouble finding an example of this. Here is the code I have started with:

        package asc.components;

        import org.apache.tapestry5.ComponentResources;
        import org.apache.tapestry5.Field;
        import org.apache.tapestry5.annotations.Parameter;
        import org.apache.tapestry5.ioc.annotations.Inject;
        import org.apache.tapestry5.services.ComponentDefaultProvider;

        public class DahserTextField implements Field {

            @Parameter(defaultPrefix = "literal")
            private String label;

            @Inject
            private ComponentResources resources;

            @Inject
            private ComponentDefaultProvider defaultProvider;

            @Parameter
            private boolean disabled;

            @Parameter
            private boolean required;

            String defaultLabel() {
                return defaultProvider.defaultLabel(resources);
            }

            public String getControlName() {
                return null;
            }

            public String getLabel() {
                return label;
            }

            public boolean isDisabled() {
                return disabled;
            }

            public boolean isRequired() {
                return required;
            }

            public String getClientId() {
                return resources.getId();
            }
        }

    I am unsure what to do next. I do not know what to put into the .tml file. I would be grateful if anyone could help or point me in the right direction.

  • Setting the value of a hidden field in a form using jQuery's ".val()" doesn't work

    - by texens
    I've been trying to set the value of a hidden field in a form using jQuery, but without success. Here is a sample that shows the problem. If I keep the input type as "text", it works without any trouble, but changing the input type to "hidden" doesn't work:

        <html>
        <head>
        <script type="text/javascript" src="jquery.js"></script>
        <script type="text/javascript">
        $(document).ready(function(){
          $("button").click(function(){
            $("input:text#texens").val("tinkumaster");
          });
        });
        </script>
        </head>
        <body>
        <p>Name: <input type="hidden" id="texens" name="user" value="texens" /></p>
        <button>Change value for the text field</button>
        </body>
        </html>

    I also tried a workaround: setting the input type to "text" and then using a "display:none" style for the input box. That also fails. It seems jQuery has some trouble setting hidden or invisible input fields. Any ideas? Is there a workaround that actually works? Thanks in anticipation.
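
    A likely culprit is the selector rather than .val() itself: jQuery's :text filter only matches inputs whose type is "text", so "input:text#texens" matches nothing once the type is changed to "hidden". A minimal sketch of the adjusted handler (the selector choice is my suggestion, not part of the original post):

        // Sketch: drop the :text filter and select the element by id,
        // which also matches <input type="hidden">.
        $(document).ready(function () {
          $("button").click(function () {
            $("#texens").val("tinkumaster");
          });
        });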

  • Memory leak when changing the Text property of a ScintillaNET object

    - by PlaZmaZ
    I have a relatively large program that I'm optimizing for ASCII input files around 10–80 MB in size. The program reads every line of the file into a StringBuilder and then sets the Text property of the ScintillaNET object to the StringBuilder's contents. The StringBuilder is then set to null.

        private void ReloadFile(string sFile)
        {
            txt_log.ResetText();
            try
            {
                StringBuilder sLine = new StringBuilder("");
                using (StreamReader sr = new StreamReader(sFile))
                {
                    while (true)
                    {
                        string temp = sr.ReadLine();
                        if (temp == null)
                            break;
                        sLine.AppendLine(temp);
                    }
                    sr.Close();
                }
                txt_log.Text = sLine.ToString();
                sLine = null;
            }
            catch (Exception ex)
            {
                MessageBox.Show(this, "An error occurred opening this file.\n\n" + ex.Message,
                    "File Open Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
            GC.Collect();
        }

    The program has an option to reload or open a file. That is irrelevant, as any assignment to txt_log.Text seems not to release the memory used by the previous .Text value. Commenting out the txt_log.Text line gives proper memory behavior. The GC.Collect() line seems pointless, and I have tried both with and without it. Is there something I'm missing here? I HIGHLY doubt it's a problem with the ScintillaNET component itself, but rather with something in this code.
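
    As a side note, the line-by-line StringBuilder keeps a second full copy of the text alive while it is being assigned; reading the file in one call avoids that extra copy. A minimal sketch (my simplification, not a fix for the control's own memory behavior):

        // Sketch: read the whole file in one call instead of building a
        // StringBuilder line by line, so only one copy of the text is held.
        private void ReloadFile(string sFile)
        {
            txt_log.ResetText();
            try
            {
                txt_log.Text = System.IO.File.ReadAllText(sFile);
            }
            catch (Exception ex)
            {
                MessageBox.Show(this, "An error occurred opening this file.\n\n" + ex.Message,
                    "File Open Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
        }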

  • How to remove cursor from text field on click of button?

    - by Miraaj
    Hi all, I am trying to do a simple task: I have an editable text field and two buttons (titled "make editable" and "make un-editable") on a window. The idea is that when the user clicks "make editable", the text field should become editable, and when he/she clicks "make un-editable", it should become un-editable. In the action for "make un-editable" I am doing this:

        [myTextField setSelectable:NO];
        [myTextField setEditable:NO];

    and in the action for "make editable" I am doing this:

        [myTextField setSelectable:YES];
        [myTextField setEditable:YES];

    The problem is that it works fine when myTextField does not have the cursor in it and the user clicks "make un-editable", i.e. myTextField becomes un-editable. But when it has the cursor and the user clicks "make un-editable", he/she can still edit myTextField. To solve this I tried to remove the cursor from myTextField as soon as the user clicks "make un-editable", by adding these lines before the selectable and editable statements:

        [someOtherTextField selectText:self];
        [[NSRunLoop currentRunLoop] performSelector:@selector(selectText:)
                                             target:someOtherTextField
                                           argument:self
                                              order:9999
                                              modes:[NSArray arrayWithObject:NSDefaultRunLoopMode]];
        [someOtherTextField becomeFirstResponder];

    but none of these is working for me :( Can anyone suggest a solution? Thanks, Miraaj
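
    One common AppKit approach (my sketch, not from the original post) is to resign first responder status at the window level so the field editor ends its editing session, and only then disable editing:

        // Sketch: end editing in the field before making it un-editable.
        - (IBAction)makeUneditable:(id)sender
        {
            [[myTextField window] makeFirstResponder:nil];
            [myTextField setSelectable:NO];
            [myTextField setEditable:NO];
        }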

  • How Do I Escape Apostrophes in Field Values in SQL Server?

    - by Mikecancook
    I asked a question a couple of days ago about creating INSERTs by running a SELECT to move data to another server. That worked great until I ran into a table that has full-on HTML and apostrophes in it. What's the best way to deal with this? Luckily there aren't too many rows, so as a last resort it is feasible to copy and paste. But eventually I will need to do this again, and by then the table will probably be way too big to copy and paste these HTML fields. This is what I have now:

        SELECT 'Insert into userwidget ([Type],[UserName],[Title],[Description],[Data],[HtmlOutput],[DisplayOrder],[RealTime],[SubDisplayOrder]) VALUES ('
            + ISNULL('N''' + CONVERT(varchar(8000), Type)            + '''', 'NULL') + ','
            + ISNULL('N''' + CONVERT(varchar(8000), Username)        + '''', 'NULL') + ','
            + ISNULL('N''' + CONVERT(varchar(8000), Title)           + '''', 'NULL') + ','
            + ISNULL('N''' + CONVERT(varchar(8000), Description)     + '''', 'NULL') + ','
            + ISNULL('N''' + CONVERT(varchar(8000), Data)            + '''', 'NULL') + ','
            + ISNULL('N''' + CONVERT(varchar(8000), HTMLOutput)      + '''', 'NULL') + ','
            + ISNULL('N''' + CONVERT(varchar(8000), DisplayOrder)    + '''', 'NULL') + ','
            + ISNULL('N''' + CONVERT(varchar(8000), RealTime)        + '''', 'NULL') + ','
            + ISNULL('N''' + CONVERT(varchar(8000), SubDisplayOrder) + '''', 'NULL') + ')'
        FROM userwidget

    This works fine except for those pesky apostrophes in the HTMLOutput field. Can I escape them by having the query double up the apostrophes, or is there a way of encoding the field result so it won't matter?
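
    Doubling the apostrophes is exactly how T-SQL escapes them in a string literal, so wrapping each text column in REPLACE doubles any embedded quotes before they are concatenated into the generated INSERT. A sketch for one column (the same wrapper applies to the others):

        -- Sketch: double any embedded apostrophes so the generated INSERT stays valid.
        SELECT 'Insert into userwidget ([HtmlOutput]) VALUES ('
            + ISNULL('N''' + REPLACE(CONVERT(varchar(8000), HTMLOutput), '''', '''''') + '''', 'NULL')
            + ')'
        FROM userwidget;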

  • How to check for a null value in an integer field in an ASP.NET MVC view?

    - by Vikas
    Hi, I have an integer field in the database with the "Not Null" property. When I create a view and do validation, if I leave that field blank it is treated as 0, so I cannot compare it with 0, because if someone actually inserts the value 0 it would be flagged as an error! Another issue is that I am using model errors as described in the book "ASP.NET MVC 1.0" and on Scott Gu's blog, and I am checking the value in the partial class of the object (created by LINQ to SQL), i.e.:

        public partial class Person
        {
            public bool IsValid
            {
                get { return (GetRuleViolations().Count() == 0); }
            }

            public IEnumerable<RuleViolation> GetRuleViolations()
            {
                if (String.IsNullOrEmpty(Name))
                    yield return new RuleViolation("Name is Required", "Name");
                if (Age == 0)
                    yield return new RuleViolation("Age is Required", "Age");
                yield break;
            }

            partial void OnValidate(ChangeAction action)
            {
                if (!IsValid)
                    throw new ApplicationException("Rule violations prevent saving");
            }
        }

    There is also a problem with ranges. For example, if the database column is declared as smallint (short in C#) and the posted value exceeds that range, the error shown is just "A value is required". So, finally, is there a better way to do validation in ASP.NET MVC?
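
    One common way to tell "left blank" apart from "entered 0" is to bind the form to a nullable integer and validate the null case explicitly. A rough sketch under that assumption (the view-model shape and its names are mine; RuleViolation is the type used in the question):

        // Sketch: a blank field posts as null rather than 0, so 0 stays a valid value.
        public class PersonFormModel
        {
            public int? Age { get; set; }

            public IEnumerable<RuleViolation> GetRuleViolations()
            {
                if (Age == null)
                    yield return new RuleViolation("Age is Required", "Age");
                else if (Age < 0 || Age > short.MaxValue)
                    yield return new RuleViolation("Age is out of range", "Age");
            }
        }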

  • How to differentiate between the same field names from two tables in a SELECT query?

    - by developer
    I have more than two tables in my database and all of them contain the same field names, like:

        table A    table B    table C
        field1     field1     field1
        field2     field2     field2
        field3     field3     field3
        ...        ...        ...

    I have to write a SELECT query which gets almost all of these same-named fields from the three tables. I am using something like this:

        select a.field1, a.field2, a.field3,
               b.field1, b.field2, b.field3,
               c.field1, c.field2, c.field3
        from table A as a, table B as b, table C as c
        where so and so;

    But when I print field1's value it gives me the last table's values. How can I get all the values of the three tables with the same field names? Do I have to write an individual query for every table, or is there a way of fetching them all in a single query?
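
    The usual fix is to give each same-named column a distinct alias so the result set has unique column names that the client code can address. A sketch (the alias names and tableA/tableB/tableC names are placeholders of mine):

        -- Sketch: alias each column so there are not three columns all called field1.
        select a.field1 as a_field1, a.field2 as a_field2, a.field3 as a_field3,
               b.field1 as b_field1, b.field2 as b_field2, b.field3 as b_field3,
               c.field1 as c_field1, c.field2 as c_field2, c.field3 as c_field3
        from tableA as a, tableB as b, tableC as c
        where ...;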

  • When an empty field comes, how do I remove that row in a grouped table view on the iPhone?

    - by Pugal Devan
    Hi friends, I display data in a grouped table view; the data comes from XML parsing. The table view has two sections: section one has three rows and section two has two rows.

        section 1 -> 3 rows
        section 2 -> 2 rows

    Now I want to check whether any of the strings is empty, and if so remove the empty cells. The problem is that once I remove an empty cell, the index numbers change. So how can I check which fields are empty? Sometimes several empty fields come in, so the index positions change. Could you point me to sample code or a link for this? This is what I have so far:

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
        {
            if (section == 0)
            {
                if ([userEmail isEqualToString:@" "] ||
                    [phoneNumber isEqualToString:@" "] ||
                    [firstName isEqualToString:@" "])
                {
                    return 2;
                }
                else
                {
                    return 3;
                }
            }
            if (section == 1)
            {
                if (![gradYear isEqualToString:@" "] || ![graduate isEqualToString:@" "])
                {
                    return 1;
                }
                else
                {
                    return 2;
                }
            }
            return 0;
        }

    Please help me out! Thanks.
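
    One common pattern (a sketch of mine, with hypothetical array names) is to build per-section arrays that contain only the non-empty values once parsing finishes, and then drive both numberOfRowsInSection: and cellForRowAtIndexPath: from those arrays, so the row indexes stay consistent however many fields are missing:

        // Sketch: keep only non-empty values for section one.
        NSMutableArray *sectionOneRows = [NSMutableArray array];
        for (NSString *value in [NSArray arrayWithObjects:userEmail, phoneNumber, firstName, nil])
        {
            if ([[value stringByTrimmingCharactersInSet:
                    [NSCharacterSet whitespaceCharacterSet]] length] > 0)
            {
                [sectionOneRows addObject:value];
            }
        }
        // numberOfRowsInSection: then returns [sectionOneRows count], and
        // cellForRowAtIndexPath: reads [sectionOneRows objectAtIndex:indexPath.row].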

  • How can I integrate advanced computations into a database field?

    - by ciclistadan
    My biological research involves the measurement of a cellular structure as it changes length throughout the course of observation (capturing images every minute for several hours). As my data sets have become larger I am trying to store them in an Access database, from which I would like to run various queries about their changes in size. I know that the SELECT statement can incorporate some mathematical manipulations, but I have been unable to incorporate many of my necessary calculations (probably due to my lack of knowledge). For example, one calculation involves determining the rate of change during specifically defined periods of growth. This calculation is entirely dependent on the raw data saved in the table, so I didn't think it would be appropriate to just calculate it in Excel prior to entry into the field. So my question is: what would be the most appropriate method of performing this calculation? Should I attempt to string together a huge SELECT calculation in my query, or is there a way to use another language (I know Perl) which can be called to populate the new query field? I'm not looking for someone to write the code, just where it is appropriate to incorporate each step. Also, I am currently using Office Access but would be interested in any MySQL answers as I may be moving to that platform at a later date. Thanks all!

  • setting up git on cygwin - openssl

    - by Pete Field
    I'm trying to get git running in Cygwin on a Windows 7 machine. I have git unpacked in the directory git-1.7.1.1. When I run make install from within that directory, I get:

        CC fast-import.o
        In file included from builtin.h:4,
                         from fast-import.c:147:
        git-compat-util.h:136:19: iconv.h: No such file or directory
        git-compat-util.h:140:25: openssl/ssl.h: No such file or directory
        git-compat-util.h:141:25: openssl/err.h: No such file or directory
        In file included from builtin.h:6,
                         from fast-import.c:147:
        cache.h:9:21: openssl/sha.h: No such file or directory
        In file included from fast-import.c:156:
        csum-file.h:10: error: parse error before "SHA_CTX"
        csum-file.h:10: warning: no semicolon at end of struct or union
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:17: error: parse error before '}' token
        fast-import.c: In function `store_object':
        fast-import.c:995: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:995: error: (Each undeclared identifier is reported only once
        fast-import.c:995: error: for each function it appears in.)
        fast-import.c:995: error: parse error before "c"
        fast-import.c:1000: warning: implicit declaration of function `SHA1_Init'
        fast-import.c:1000: error: `c' undeclared (first use in this function)
        fast-import.c:1001: warning: implicit declaration of function `SHA1_Update'
        fast-import.c:1003: warning: implicit declaration of function `SHA1_Final'
        fast-import.c: At top level:
        fast-import.c:1118: error: parse error before "SHA_CTX"
        fast-import.c: In function `truncate_pack':
        fast-import.c:1120: error: `to' undeclared (first use in this function)
        fast-import.c:1126: error: dereferencing pointer to incomplete type
        fast-import.c:1127: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: `ctx' undeclared (first use in this function)
        fast-import.c: In function `stream_blob':
        fast-import.c:1140: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:1140: error: parse error before "c"
        fast-import.c:1154: error: `pack_file_ctx' undeclared (first use in this function)
        fast-import.c:1154: error: dereferencing pointer to incomplete type
        fast-import.c:1160: error: `c' undeclared (first use in this function)
        make: *** [fast-import.o] Error 1

    I'm guessing that most of these errors are due to iconv.h and the OpenSSL headers, which apparently are missing, but I can't figure out how I'm supposed to install them (if I am), or whether there is some other way to get around this.
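
    The missing headers normally come from Cygwin's development packages. A sketch of one way to get them; the package names and command-line flags are my assumption of Cygwin's usual naming, and the same packages can simply be ticked in setup.exe's package chooser instead:

        # Sketch: install the development headers, then rebuild.
        setup.exe -q -P libiconv-devel,openssl-devel,zlib-devel
        make clean && make && make install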

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog): Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, it’s associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation. Making Sense of Seeks Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest: Singleton Lookups The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup. Range Scans The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.  
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows: CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col ASC) ) ; Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.  
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example. Multiple Seek Predicates As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20: SELECT key_col FROM dbo.Example WHERE key_col = 10 OR key_col BETWEEN 15 AND 20 ORDER BY key_col ASC ; In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20. Final Thoughts That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading! 
IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
BEGIN
    DROP TABLE dbo.Example;
END
;
-- Test table is a heap
-- Non-clustered primary key on 'key_col'
CREATE TABLE dbo.Example
(
    key_col INTEGER NOT NULL,
    data    INTEGER NOT NULL,
    CONSTRAINT [PK dbo.Example key_col]
        PRIMARY KEY NONCLUSTERED (key_col)
)
;
-- Non-unique non-clustered index on the 'data' column
CREATE NONCLUSTERED INDEX [IX dbo.Example data]
ON dbo.Example (data)
;
-- Add 100 rows
INSERT dbo.Example WITH (TABLOCKX)
(
    key_col,
    data
)
SELECT
    key_col = V.number,
    data = V.number
FROM master.dbo.spt_values AS V
WHERE V.[type] = N'P'
AND V.number BETWEEN 1 AND 100
;
-- ================
-- Singleton lookup
-- ================
;
-- Single value equality seek in a unique index
-- Scan count = 0 when STATISTICS IO is ON
-- Check the XML SHOWPLAN
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 32
;
-- ===========
-- Range Scans
-- ===========
;
-- Query 1
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col <= 5
ORDER BY E.key_col ASC
;
-- Query 2
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col > 95
ORDER BY E.key_col DESC
;
-- Query 3
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 10
AND E.key_col < 15
ORDER BY E.key_col ASC
;
-- Query 4
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 20
AND E.key_col < 25
ORDER BY E.key_col DESC
;
-- Final query (singleton + range = 2 range scans)
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 10
OR E.key_col BETWEEN 15 AND 20
ORDER BY E.key_col ASC
;
-- === TIDY UP ===
DROP TABLE dbo.Example;

© 2011 Paul White
email: [email protected]
twitter: @SQL_Kiwi

  • How to display workflow related tasks in the item display page where the workflow is currently running on in SharePoint2013

    - by ybbest
    In one of my projects, I needed to display workflow-related tasks on the display page of the item the workflow is currently running on. To achieve this, I'd like to add the tasks list view web part and use a connected web part (ID = WorkflowItemId). However, to make it work I need to unhide the WorkflowItemId field in the task list, as it is a hidden field and its CanToggleHidden property is set to false. I need to use reflection to change CanToggleHidden to true, as it only has a getter in the API, and then I am able to unhide the field. You can download the script here. However, displaying tasks this way is not ideal (it makes your environment unsupported by Microsoft). Another way to display the related tasks is a SharePoint Designer solution with a list view web part and a data source. Here are the steps:

    1. Create a new list display form as below.
    2. Edit the custom display form in advanced mode.
    3. Find the PlaceHolderMain content placeholder and insert the DataView by choosing the associated workflow tasks list as below.
    4. Go to List View Tools >> OPTIONS.
    5. Create a parameter called workflowitemId which retrieves its value from the ID query string as below.
    6. Create a filter based on UIVersion = workflowitemId as below. We are going to change UIVersion to the WorkflowItemId property later, as WorkflowItemId is a hidden field and cannot be selected from the wizard.
    7. Replace UIVersion with WorkflowItemId in the CAML for the XsltListViewWebPart.
    8. Go to the new custom display page at http://yourserver/Lists/aa/CustomDisplayPage.aspx?ID=414 and you will see the associated tasks showing on the page.

    References: http://office.microsoft.com/en-us/sharepoint-designer-help/watch-this-design-a-document-review-workflow-solution-HA010256417.aspx (Videos 12 and 13)

  • How to Implement Project Type "Copy", "Move", "Rename", and "Delete"

    - by Geertjan
    You've followed the NetBeans Project Type Tutorial and now you'd like to let the user copy, move, rename, and delete the projects conforming to your project type. When they right-click a project, they should see the relevant menu items and those menu items should provide dialogs for user interaction, followed by event handling code to deal with the current operation. Right now, at the end of the tutorial, the "Copy" and "Delete" menu items are present but disabled, while the "Move" and "Rename" menu items are absent: The NetBeans Project API provides a built-in mechanism out of the box that you can leverage for project-level "Copy", "Move", "Rename", and "Delete" actions. All the functionality is there for you to use, while all that you need to do is a bit of enablement and configuration, which is described below. To get started, read the following from the NetBeans Project API: http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/ActionProvider.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/CopyOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/MoveOrRenameOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/DeleteOperationImplementation.html Now, let's do some work. For each of the menu items we're interested in, we need to do the following: Provide enablement and invocation handling in an ActionProvider implementation. Provide appropriate OperationImplementation classes. Add the new classes to the Project Lookup. Make the Actions visible on the Project Node. Run the application and verify the Actions work as you'd like. Here we go: Create an ActionProvider. Here you specify the Actions that should be supported, the conditions under which they should be enabled, and what should happen when they're invoked, using lots of default code that lets you reuse the functionality provided by the NetBeans Project API: class CustomerActionProvider implements ActionProvider { @Override public String[] getSupportedActions() { return new String[]{ ActionProvider.COMMAND_RENAME, ActionProvider.COMMAND_MOVE, ActionProvider.COMMAND_COPY, ActionProvider.COMMAND_DELETE }; } @Override public void invokeAction(String string, Lookup lkp) throws IllegalArgumentException { if (string.equalsIgnoreCase(ActionProvider.COMMAND_RENAME)) { DefaultProjectOperations.performDefaultRenameOperation( CustomerProject.this, ""); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_MOVE)) { DefaultProjectOperations.performDefaultMoveOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_COPY)) { DefaultProjectOperations.performDefaultCopyOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_DELETE)) { DefaultProjectOperations.performDefaultDeleteOperation( CustomerProject.this); } } @Override public boolean isActionEnabled(String command, Lookup lookup) throws IllegalArgumentException { if ((command.equals(ActionProvider.COMMAND_RENAME))) { return true; } else if ((command.equals(ActionProvider.COMMAND_MOVE))) { return true; } else if ((command.equals(ActionProvider.COMMAND_COPY))) { return true; } else if ((command.equals(ActionProvider.COMMAND_DELETE))) { return true; } return false; } } Importantly, to round off this step, add "new CustomerActionProvider()" to the "getLookup" method of the project. 
If you were to run the application right now, all the Actions we're interested in would be enabled (if they are visible, as described in step 4 below) but when you invoke any of them you'd get an error message because each of the DefaultProjectOperations above looks in the Lookup of the Project for the presence of an implementation of a class for handling the operation. That's what we're going to do in the next step. Provide Implementations of Project Operations. For each of our operations, the NetBeans Project API lets you implement classes to handle the operation. The dialogs for interacting with the project are provided by the NetBeans project system, but what happens with the folders and files during the operation can be influenced via the operations. Below are the simplest possible implementations, i.e., here we assume we want nothing special to happen. Each of the below needs to be in the Lookup of the Project in order for the operation invocation to succeed. private final class CustomerProjectMoveOrRenameOperation implements MoveOrRenameOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyRenaming() throws IOException { } @Override public void notifyRenamed(String nueName) throws IOException { } @Override public void notifyMoving() throws IOException { } @Override public void notifyMoved(Project original, File originalPath, String nueName) throws IOException { } } private final class CustomerProjectCopyOperation implements CopyOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyCopying() throws IOException { } @Override public void notifyCopied(Project prjct, File file, String string) throws IOException { } } private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Also make sure to put the above methods into the Project Lookup. Check the Lookup of the Project. The "getLookup()" method of the project should now include the classes you created above, as shown in bold below: @Override public Lookup getLookup() { if (lkp == null) { lkp = Lookups.fixed(new Object[]{ this, new Info(), new CustomerProjectLogicalView(this), new CustomerCustomizerProvider(this), new CustomerActionProvider(), new CustomerProjectMoveOrRenameOperation(), new CustomerProjectCopyOperation(), new CustomerProjectDeleteOperation(), new ReportsSubprojectProvider(this), }); } return lkp; } Make Actions Visible on the Project Node. The NetBeans Project API gives you a number of CommonProjectActions, including for the actions we're dealing with. 
Make sure the items in bold below are in the "getActions" method of the project node: @Override public Action[] getActions(boolean arg0) { return new Action[]{ CommonProjectActions.newFileAction(), CommonProjectActions.copyProjectAction(), CommonProjectActions.moveProjectAction(), CommonProjectActions.renameProjectAction(), CommonProjectActions.deleteProjectAction(), CommonProjectActions.customizeProjectAction(), CommonProjectActions.closeProjectAction() }; } Run the Application. When you run the application, you should see this: Let's now try out the various actions: Copy. When you invoke the Copy action, you'll see the dialog below. Provide a new project name and location and then the copy action is performed when the Copy button is clicked below: The message you see above, in red, might not be relevant to your project type. When you right-click the application and choose Branding, you can find the string in the Resource Bundles tab, as shown below: However, note that the message will be shown in red, no matter what the text is, hence you can really only put something like a warning message there. If you have no text at all, it will also look odd.If the project has subprojects, the copy operation will not automatically copy the subprojects. Take a look here and here for similar more complex scenarios. Move. When you invoke the Move action, the dialog below is shown: Rename. The Rename Project dialog below is shown when you invoke the Rename action: I tried it and both the display name and the folder on disk are changed. Delete. When you invoke the Delete action, you'll see this dialog: The checkbox is not checkable, in the default scenario, and when the dialog above is confirmed, the project is simply closed, i.e., the node hierarchy is removed from the application. However, if you truly want to let the user delete the project on disk, pass the Project to the DeleteOperationImplementation and then add the children of the Project you want to delete to the getDataFiles method: private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { private final CustomerProject project; private CustomerProjectDeleteOperation(CustomerProject project) { this.project = project; } @Override public List<FileObject> getDataFiles() { List<FileObject> files = new ArrayList<FileObject>(); FileObject[] projectChildren = project.getProjectDirectory().getChildren(); for (FileObject fileObject : projectChildren) { addFile(project.getProjectDirectory(), fileObject.getNameExt(), files); } return files; } private void addFile(FileObject projectDirectory, String fileName, List<FileObject> result) { FileObject file = projectDirectory.getFileObject(fileName); if (file != null) { result.add(file); } } @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Now the user will be able to check the checkbox, causing the method above to be called in the DeleteOperationImplementation: Hope this answers some questions or at least gets the discussion started. Before asking questions about this topic, please take the steps above and only then attempt to apply them to your own scenario. 
Useful implementations to look at: http://kickjava.com/src/org/netbeans/modules/j2ee/clientproject/AppClientProjectOperations.java.htm https://kenai.com/projects/nbandroid/sources/mercurial/content/project/src/org/netbeans/modules/android/project/AndroidProjectOperations.java

  • Why values in my WCF data contract were suddenly wrong...

    - by mipsen
    A WCF Service I provided took a very simple data contract as parameter (containing one string and one int...) and had a very simple task to do. A .NET 3.5 client was created using the VS2008 feature "Add Service Reference". Everything worked as expected. Then a slight change came in: The client was expected to run on machines with .NET 2.0 only. So we set the Target  Framework to .NET 2.0, removed the references to System.ServiceModel, System.Runtime.Serialization and the ServiceReference and created a new Reference to the Service using the old "Add Web Reference" . A matter of 2 minutes.  When testing, the int value in the data contract arriving at the WCF Service suddenly was 0, instead of 38 as we expected. What happened? When generating an old  Web Reference on a WCF data contract an additional boolean field for each value-type field is created called [Fieldname]Specified (e.g. AgeSpecified) which defaults to "false". WCF inspects these boolean fields to determine if a value was provided for the value-type field. If the "Specified"-field is "false", WCF translates that to using the default-value of the value-type field. For int this is 0. So we had to insert  setting the "Specified"-field  for the int-value to "true" and everything was fine again. That was what we forgot after setting the Framework-version to 2.0...
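
    For reference, the client-side pattern looks like the sketch below; the contract and member names are illustrative (taken from the "AgeSpecified" example above), not the actual service. With an "Add Web Reference" proxy, each value-type member gets a companion Boolean, and unless that flag is set the value is not serialized, so the service sees the type's default:

        // Sketch: set the *Specified flag alongside the value-type member.
        var request = new SimpleRequest();
        request.Name = "example";
        request.Age = 38;
        request.AgeSpecified = true;   // without this, the service receives 0
        client.DoWork(request);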

  • How to convert a List to a DataTable in VB.NET

    - by Samir R. Bhogayta
    Public Function ConvertToDataTable(Of T)(ByVal list As IList(Of T)) As DataTable
        Dim table As New DataTable()
        Dim fields() As FieldInfo = GetType(T).GetFields()
        For Each field As FieldInfo In fields
            table.Columns.Add(field.Name, field.FieldType)
        Next
        For Each item As T In list
            Dim row As DataRow = table.NewRow()
            For Each field As FieldInfo In fields
                row(field.Name) = field.GetValue(item)
            Next
            table.Rows.Add(row)
        Next
        Return table
    End Function
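
    A short usage sketch (the Person type is hypothetical). Note that the helper enumerates public fields via GetType(T).GetFields(), so a type that exposes its data only through properties would produce a table with no columns; FieldInfo requires Imports System.Reflection.

        ' Sketch: Person must expose public fields such as Name and Age.
        Dim people As IList(Of Person) = New List(Of Person)()
        people.Add(New Person With {.Name = "Ann", .Age = 30})
        Dim table As DataTable = ConvertToDataTable(Of Person)(people)
        ' table now has "Name" and "Age" columns and one row.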

  • XamDataGrid Binding problem

    - by Mohammad Mostafizur Rahman
    I want to bind a cell of a XamDataGrid using ComboBox control through a collection's(CurrentEntity.INVTransactions) property(BatchList) but it does not work. I'm using mvvm pattern.In my code "BatchId" and "BatchList" are the properties of CurrentEntity.INVTransactions collection. would you please tell me why the comboBox of the xamDataGrid doesn't display the BatchList? sample code: <UserControl x:Class="PDCL.ERP.Modules.Inventory.Views.RequisitionList.RequisitionInfoUserControl" ...> <GroupBox Grid.Column="0" Grid.Row="1" Grid.ColumnSpan="2" Header="Details" VerticalAlignment="Top" Margin="5,0,5,0"> <Grid> <igDP:XamDataGrid Margin="2" DataSource="{Binding CurrentEntity.INVTransactions}" x:Name="requisitionDeailsGrid" InitializeRecord="requisitionDeailsGrid_InitializeRecord"> <igDP:XamDataGrid.FieldLayoutSettings> <igDP:FieldLayoutSettings HighlightAlternateRecords="True" AutoGenerateFields="False" AllowAddNew="True" AddNewRecordLocation="OnBottom" AutoFitMode="Always" SupportDataErrorInfo="RecordsAndCells" DataErrorDisplayMode="ErrorIcon" /> </igDP:XamDataGrid.FieldLayoutSettings> <igDP:XamDataGrid.FieldLayouts> <igDP:FieldLayout> <igDP:FieldLayout.Fields> <igDP:Field Name="Remarks" Label="Remarks" Width="Auto"> <igDP:Field.Settings> <igDP:FieldSettings AllowEdit="True" AllowResize="True"/> </igDP:Field.Settings> </igDP:Field> <igDP:Field Name="BatchId" Label="Batch" Width="Auto"> <igDP:Field.Settings> <igDP:FieldSettings EditorType="{x:Type igEditors:XamComboEditor}"> <igDP:FieldSettings.EditorStyle> <Style TargetType="{x:Type igEditors:XamComboEditor}"> <Setter Property="ItemsSource" Value="{Binding INVTransactions.BatchList, RelativeSource = {RelativeSource FindAncestor, AncestorType={x:Type igDP:XamDataGrid}, AncestorLevel=1}}" /> <Setter Property="DisplayMemberPath" Value="BatchName" /> <Setter Property="ValuePath" Value="BatchId" /> </Style> </igDP:FieldSettings.EditorStyle> </igDP:FieldSettings> </igDP:Field.Settings> </igDP:Field> <igDP:Field Name="Qty" Label="Qty Supplied" Width="Auto"> <igDP:Field.Settings> <igDP:FieldSettings AllowEdit="True" AllowResize="True"/> </igDP:Field.Settings> </igDP:Field> </igDP:FieldLayout.Fields> </igDP:FieldLayout> </igDP:XamDataGrid.FieldLayouts> </igDP:XamDataGrid> </Grid> </GroupBox> </UserControl> The output window shows the error "BindingExpression path error: 'INVTransactions' property not found on 'object' ''XamDataGrid' (Name='requisitionDeailsGrid')'. BindingExpression:Path=INVTransactions.BatchList; DataItem='XamDataGrid' (Name='requisitionDeailsGrid'); target element is 'XamComboEditor' (Name=''); target property is 'ItemsSource' (type 'IEnumerable')"

  • Java & Tomcat: SQL JDBC/JNDI Exceptions

    - by user267581
    I am deploying a webapp from eclipse to tomcat. I am having an issue with my application and JNDI lookups. When the app tries to load the JNDI resource I am this stacktrace: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused: connect] at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:101) at javax.naming.InitialContext.lookup(InitialContext.java:396) at org.objectweb.carol.jndi.spi.AbsContext.lookup(AbsContext.java:134) at org.objectweb.carol.jndi.spi.AbsContext.lookup(AbsContext.java:144) at javax.naming.InitialContext.lookup(InitialContext.java:392) at org.objectweb.carol.jndi.spi.MultiContext.lookup(MultiContext.java:118) at javax.naming.InitialContext.lookup(InitialContext.java:392) at com.theriabook.daoflex.JDBCConnection.getDataSource(JDBCConnection.java:61) at com.theriabook.daoflex.JDBCConnection.getConnection(JDBCConnection.java:73) at com.aramark.data.UsersDAO.doLogin(UsersDAO.java:751) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at flex.messaging.services.remoting.adapters.JavaAdapter.invoke(JavaAdapter.java:406) at flex.messaging.services.RemotingService.serviceMessage(RemotingService.java:183) at flex.messaging.MessageBroker.routeMessageToService(MessageBroker.java:1417) at flex.messaging.endpoints.AbstractEndpoint.serviceMessage(AbstractEndpoint.java:878) at com.farata.remoting.CustomAMFEndpoint.serviceMessage(CustomAMFEndpoint.java:23) at flex.messaging.endpoints.amf.MessageBrokerFilter.invoke(MessageBrokerFilter.java:121) at flex.messaging.endpoints.amf.LegacyFilter.invoke(LegacyFilter.java:158) at flex.messaging.endpoints.amf.SessionFilter.invoke(SessionFilter.java:49) at flex.messaging.endpoints.amf.BatchProcessFilter.invoke(BatchProcessFilter.java:67) at flex.messaging.endpoints.amf.SerializationFilter.invoke(SerializationFilter.java:146) at flex.messaging.endpoints.BaseHTTPEndpoint.service(BaseHTTPEndpoint.java:274) at flex.messaging.MessageBrokerServlet.service(MessageBrokerServlet.java:377) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.cti.compiler.env.web.CompilerInvocationInterceptor.doFilter(CompilerInvocationInterceptor.java:25) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845) at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447) at java.lang.Thread.run(Thread.java:619) Caused by: java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused: connect at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601) at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198) at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184) at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322) at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source) at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:97) ... 41 more Caused by: java.net.ConnectException: Connection refused: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:525) at java.net.Socket.connect(Socket.java:475) at java.net.Socket.<init>(Socket.java:372) at java.net.Socket.<init>(Socket.java:186) at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22) at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128) at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595) I am really stumped at this error. I am using a WinXP running on a Virtual Machine (VMWare) from a MAC. Any ideas? I have uninstalled/reinstalled tomcat multiple times.
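
    The trace shows the ObjectWeb Carol JNDI provider falling back to an RMI registry on localhost, which nothing is listening on. If the goal is simply a container-managed DataSource inside Tomcat, the usual pattern (a sketch of mine; the resource name "jdbc/MyDataSource" is a placeholder) is to declare a <Resource> in META-INF/context.xml and look it up under java:comp/env with Tomcat's default InitialContext, rather than going through the Carol/RMI provider:

        // Sketch: standard Tomcat JNDI lookup for a container-managed DataSource.
        javax.naming.Context initCtx = new javax.naming.InitialContext();
        javax.sql.DataSource ds =
            (javax.sql.DataSource) initCtx.lookup("java:comp/env/jdbc/MyDataSource");
        java.sql.Connection conn = ds.getConnection();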

  • A ToDynamic() Extension Method For Fluent Reflection

    - by Dixin
    Recently I needed to demonstrate some code with reflection, but I felt it inconvenient and tedious. To simplify the reflection coding, I created a ToDynamic() extension method. The source code can be downloaded from here. Problem One example for complex reflection is in LINQ to SQL. The DataContext class has a property Privider, and this Provider has an Execute() method, which executes the query expression and returns the result. Assume this Execute() needs to be invoked to query SQL Server database, then the following code will be expected: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // Executes the query. Here reflection is required, // because Provider, Execute(), and ReturnValue are not public members. IEnumerable<Product> results = database.Provider.Execute(query.Expression).ReturnValue; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } Of course, this code cannot compile. And, no one wants to write code like this. Again, this is just an example of complex reflection. using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider PropertyInfo providerProperty = database.GetType().GetProperty( "Provider", BindingFlags.NonPublic | BindingFlags.GetProperty | BindingFlags.Instance); object provider = providerProperty.GetValue(database, null); // database.Provider.Execute(query.Expression) // Here GetMethod() cannot be directly used, // because Execute() is a explicitly implemented interface method. Assembly assembly = Assembly.Load("System.Data.Linq"); Type providerType = assembly.GetTypes().SingleOrDefault( type => type.FullName == "System.Data.Linq.Provider.IProvider"); InterfaceMapping mapping = provider.GetType().GetInterfaceMap(providerType); MethodInfo executeMethod = mapping.InterfaceMethods.Single(method => method.Name == "Execute"); IExecuteResult executeResult = executeMethod.Invoke(provider, new object[] { query.Expression }) as IExecuteResult; // database.Provider.Execute(query.Expression).ReturnValue IEnumerable<Product> results = executeResult.ReturnValue as IEnumerable<Product>; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } This may be not straight forward enough. So here a solution will implement fluent reflection with a ToDynamic() extension method: IEnumerable<Product> results = database.ToDynamic() // Starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue; C# 4.0 dynamic In this kind of scenarios, it is easy to have dynamic in mind, which enables developer to write whatever code after a dot: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider dynamic dynamicDatabase = database; dynamic results = dynamicDatabase.Provider.Execute(query).ReturnValue; } This throws a RuntimeBinderException at runtime: 'System.Data.Linq.DataContext.Provider' is inaccessible due to its protection level. 
Here dynamic is able find the specified member. So the next thing is just writing some custom code to access the found member. .NET 4.0 DynamicObject, and DynamicWrapper<T> Where to put the custom code for dynamic? The answer is DynamicObject’s derived class. I first heard of DynamicObject from Anders Hejlsberg's video in PDC2008. It is very powerful, providing useful virtual methods to be overridden, like: TryGetMember() TrySetMember() TryInvokeMember() etc.  (In 2008 they are called GetMember, SetMember, etc., with different signature.) For example, if dynamicDatabase is a DynamicObject, then the following code: dynamicDatabase.Provider will invoke dynamicDatabase.TryGetMember() to do the actual work, where custom code can be put into. Now create a type to inherit DynamicObject: public class DynamicWrapper<T> : DynamicObject { private readonly bool _isValueType; private readonly Type _type; private T _value; // Not readonly, for value type scenarios. public DynamicWrapper(ref T value) // Uses ref in case of value type. { if (value == null) { throw new ArgumentNullException("value"); } this._value = value; this._type = value.GetType(); this._isValueType = this._type.IsValueType; } public override bool TryGetMember(GetMemberBinder binder, out object result) { // Searches in current type's public and non-public properties. PropertyInfo property = this._type.GetTypeProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in explicitly implemented properties for interface. MethodInfo method = this._type.GetInterfaceMethod(string.Concat("get_", binder.Name), null); if (method != null) { result = method.Invoke(this._value, null).ToDynamic(); return true; } // Searches in current type's public and non-public fields. FieldInfo field = this._type.GetTypeField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // Searches in base type's public and non-public properties. property = this._type.GetBaseProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in base type's public and non-public fields. field = this._type.GetBaseField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // The specified member is not found. result = null; return false; } // Other overridden methods are not listed. } In the above code, GetTypeProperty(), GetInterfaceMethod(), GetTypeField(), GetBaseProperty(), and GetBaseField() are extension methods for Type class. For example: internal static class TypeExtensions { internal static FieldInfo GetBaseField(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeField(name) ?? @base.GetBaseField(name); } internal static PropertyInfo GetBaseProperty(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeProperty(name) ?? 
@base.GetBaseProperty(name); } internal static MethodInfo GetInterfaceMethod(this Type type, string name, params object[] args) { return type.GetInterfaces().Select(type.GetInterfaceMap).SelectMany(mapping => mapping.TargetMethods) .FirstOrDefault( method => method.Name.Split('.').Last().Equals(name, StringComparison.Ordinal) && method.GetParameters().Count() == args.Length && method.GetParameters().Select( (parameter, index) => parameter.ParameterType.IsAssignableFrom(args[index].GetType())).Aggregate( true, (a, b) => a && b)); } internal static FieldInfo GetTypeField(this Type type, string name) { return type.GetFields( BindingFlags.GetField | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( field => field.Name.Equals(name, StringComparison.Ordinal)); } internal static PropertyInfo GetTypeProperty(this Type type, string name) { return type.GetProperties( BindingFlags.GetProperty | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( property => property.Name.Equals(name, StringComparison.Ordinal)); } // Other extension methods are not listed. } So now, when invoked, TryGetMember() searches the specified member and invoke it. The code can be written like this: dynamic dynamicDatabase = new DynamicWrapper<NorthwindDataContext>(ref database); dynamic dynamicReturnValue = dynamicDatabase.Provider.Execute(query.Expression).ReturnValue; This greatly simplified reflection. ToDynamic() and fluent reflection To make it even more straight forward, A ToDynamic() method is provided: public static class DynamicWrapperExtensions { public static dynamic ToDynamic<T>(this T value) { return new DynamicWrapper<T>(ref value); } } and a ToStatic() method is provided to unwrap the value: public class DynamicWrapper<T> : DynamicObject { public T ToStatic() { return this._value; } } In the above TryGetMember() method, please notice it does not output the member’s value, but output a wrapped member value (that is, memberValue.ToDynamic()). This is very important to make the reflection fluent. Now the code becomes: IEnumerable<Product> results = database.ToDynamic() // Here starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue .ToStatic(); // Unwraps to get the static value. With the help of TryConvert(): public class DynamicWrapper<T> : DynamicObject { public override bool TryConvert(ConvertBinder binder, out object result) { result = this._value; return true; } } ToStatic() can be omitted: IEnumerable<Product> results = database.ToDynamic() .Provider.Execute(query.Expression).ReturnValue; // Automatically converts to expected static value. Take a look at the reflection code at the beginning of this post again. Now it is much much simplified! Special scenarios In 90% of the scenarios ToDynamic() is enough. But there are some special scenarios. Access static members Using extension method ToDynamic() for accessing static members does not make sense. Instead, DynamicWrapper<T> has a parameterless constructor to handle these scenarios: public class DynamicWrapper<T> : DynamicObject { public DynamicWrapper() // For static. { this._type = typeof(T); this._isValueType = this._type.IsValueType; } } The reflection code should be like this: dynamic wrapper = new DynamicWrapper<StaticClass>(); int value = wrapper._value; int result = wrapper.PrivateMethod(); So accessing static member is also simple, and fluent of course. Change instances of value types Value type is much more complex. 
The main problem is that a value type is copied when it is passed to a method as a parameter. This is why the ref keyword is used in the constructor. That is, if a value type instance is passed to DynamicWrapper<T>, the instance itself should be stored in this._value of DynamicWrapper<T>; without the ref keyword, changing this._value would not change the original value type instance.

Consider FieldInfo.SetValue(). In value type scenarios, invoking FieldInfo.SetValue(this._value, value) does not change this._value, because it changes a copy of this._value. I searched the Web and found a solution for setting the value of a field:

internal static class FieldInfoExtensions
{
    internal static void SetValue<T>(this FieldInfo field, ref T obj, object value)
    {
        if (typeof(T).IsValueType)
        {
            field.SetValueDirect(__makeref(obj), value); // For value type.
        }
        else
        {
            field.SetValue(obj, value); // For reference type.
        }
    }
}

Here __makeref is an undocumented keyword of C#.

But method invocation has a problem. This is the source code of TryInvokeMember():

public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
{
    if (binder == null)
    {
        throw new ArgumentNullException("binder");
    }

    MethodInfo method = this._type.GetTypeMethod(binder.Name, args) ??
                        this._type.GetInterfaceMethod(binder.Name, args) ??
                        this._type.GetBaseMethod(binder.Name, args);
    if (method != null)
    {
        // Oops!
        // If the returnValue is a struct, it is copied to the heap.
        object resultValue = method.Invoke(this._value, args);

        // And result is a wrapper of that copied struct.
        result = new DynamicWrapper<object>(ref resultValue);
        return true;
    }

    result = null;
    return false;
}

If the returned value is a value type, it will definitely be copied, because MethodInfo.Invoke() returns object. If the value of the result is changed, the copied struct is changed instead of the original struct. The same applies to property and indexer access; both are actually method invocations. To avoid confusion, setting properties and indexers is not allowed on structs.

Conclusions

DynamicWrapper<T> provides a simplified solution for reflection programming. It works for normal classes (reference types), accessing both instance and static members. In most scenarios, just remember to invoke the ToDynamic() method, and access whatever you want:

StaticType result = someValue.ToDynamic()._field.Method().Property[index];

In some special scenarios which require changing the value of a struct (value type), DynamicWrapper<T> does not work perfectly; only changing a struct's field value is supported.

The source code can be downloaded from here, including some unit test code.
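For readers who want to try the core TryGetMember idea without the helper extension methods above (not all of which are listed), the following is a trimmed, self-contained sketch. It is not the full DynamicWrapper<T>; the type and member names (SimpleReflectionWrapper, Person, _secret) are invented purely for illustration.

using System;
using System.Dynamic;
using System.Reflection;

public class SimpleReflectionWrapper : DynamicObject
{
    private readonly object _value;

    public SimpleReflectionWrapper(object value)
    {
        this._value = value;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        // Look for a public or non-public instance property with the requested name.
        PropertyInfo property = this._value.GetType().GetProperty(
            binder.Name, BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        if (property != null)
        {
            result = property.GetValue(this._value, null);
            return true;
        }

        // Fall back to public or non-public instance fields.
        FieldInfo field = this._value.GetType().GetField(
            binder.Name, BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        if (field != null)
        {
            result = field.GetValue(this._value);
            return true;
        }

        result = null;
        return false;
    }
}

public class Person
{
    // Only read via reflection in this demo.
    private readonly string _secret = "hidden";

    public string Name { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        dynamic wrapped = new SimpleReflectionWrapper(new Person { Name = "Ada" });
        Console.WriteLine(wrapped.Name);    // "Ada"    - resolved through TryGetMember
        Console.WriteLine(wrapped._secret); // "hidden" - private field read via reflection
    }
}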

    Read the article

  • Zend_Feed_Reader Not supported Schema

    - by LookUp Webmaster
Hello, I'm using Zend FW and want to make a feed reader. I did the following:

$feed = Zend_Feed_Reader::import('feed://blog.lookup.cl/?feed=rss2');
$data = array(
    'title'        => $feed->getTitle(),
    'link'         => $feed->getLink(),
    'dateModified' => $feed->getDateModified(),
    'description'  => $feed->getDescription(),
    'language'     => $feed->getLanguage(),
    'entries'      => array(),
);
foreach ($feed as $entry) {
    $edata = array(
        'title'        => $entry->getTitle(),
        'description'  => $entry->getDescription(),
        'dateModified' => $entry->getDateModified(),
        'authors'      => $entry->getAuthors(),
        'link'         => $entry->getLink(),
        'content'      => $entry->getContent()
    );
    $data['entries'][] = $edata;
}

And it throws the following exception: Scheme "feed" is not supported. The blog was made using WordPress. What's wrong? If the "feed" scheme is not supported, how can I change the type of feed that WordPress produces? Thanks in advance, Take care,

    Read the article

  • SQL SERVER – Automated Type Conversion using Expressor Studio

    - by pinaldave
Recently I had an interesting situation during my consultation project. Let me share with you how I solved the problem using Expressor Studio. Consider a situation in which you need to read a field, such as customer_identifier, from a text file and pass that field into a database table. In the source file's metadata structure, customer_identifier is described as a string; however, in the target database table, customer_identifier is described as an integer. Legitimately, all the source values for customer_identifier are valid numbers, such as "109380". To implement this in an ETL application, you probably would have hard-coded a type conversion function call, such as:

output.customer_identifier = stringToInteger(input.customer_identifier)

That wasn't so bad, was it? For this instance, programming this hard-coded type conversion function call was relatively easy. However, hard-coding, whether type conversion code or other business rule code, almost always means that the application containing hard-coded fields, function calls, and values is: a) specific to an instance of use; b) difficult to adapt to new situations; and c) short on reusable sub-parts. Therefore, in the long run, applications with hard-coded type conversion function calls don't scale well. In addition, they increase the overall level of effort and degree of difficulty of writing and maintaining ETL applications.

To get around the trappings of hard-coded type conversion function calls, developers need access to smarter typing systems. The Expressor Studio product offers exactly this feature, providing developers with a type conversion automation engine based on type abstraction. The theory behind the engine is quite simple. A user specifies abstract data fields in the engine, and then writes applications against the abstractions (whereas in most ETL software, developers develop applications against the physical model). When a Studio-built application is run, Studio's engine automatically converts the source type to the abstracted data field's type and converts the abstracted data field's type to the target type. The engine can do this because it has a couple of built-in rules for type conversions.

So, using the example above, a developer could specify customer_identifier as an abstract data field with a type of integer when using Expressor Studio. Upon reading the string value from the text file, Studio's type conversion engine automatically converts the source field from the type specified in the source's metadata structure to the abstract field's type. At the time of writing the data value to the target database, the engine doesn't have any work to do because the abstract data type and the target data type are the same. Had they been different, the engine would have automatically provided the conversion.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SSIS
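To make the type-abstraction idea above concrete, here is a minimal C# sketch of the concept. This is not Expressor Studio code or its API; all names in it (AbstractField, FromSource, ToTarget) are invented for illustration, and the point is only that the field's abstract type is declared once and a generic conversion step replaces the hard-coded stringToInteger() call.

using System;

class AbstractField
{
    public string Name { get; set; }
    public Type AbstractType { get; set; } // e.g. typeof(int) for customer_identifier

    // Convert whatever the source supplied into the abstract type.
    public object FromSource(object sourceValue)
    {
        return Convert.ChangeType(sourceValue, this.AbstractType);
    }

    // Convert the abstract value into whatever the target expects.
    public object ToTarget(object abstractValue, Type targetType)
    {
        return targetType == this.AbstractType
            ? abstractValue // types already match: nothing to do
            : Convert.ChangeType(abstractValue, targetType);
    }
}

class Program
{
    static void Main()
    {
        AbstractField customerId = new AbstractField { Name = "customer_identifier", AbstractType = typeof(int) };

        object fromFile = "109380";                                           // source metadata says string
        object abstractValue = customerId.FromSource(fromFile);               // converted to int by the generic step
        object toDatabase = customerId.ToTarget(abstractValue, typeof(int));  // target is also int: no work needed

        Console.WriteLine("{0} ({1})", toDatabase, toDatabase.GetType().Name); // 109380 (Int32)
    }
}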

    Read the article

  • Lucene .Net Searching with TermVector

    - by Ashish
In Lucene.Net, I am creating the document for searching a word and want to display the 10 words before and the 10 words after it. I have used TermVector:

Lucene.Net.Documents.Field fldContent = new Lucene.Net.Documents.Field(
    "content", content,
    Lucene.Net.Documents.Field.Store.YES,
    Lucene.Net.Documents.Field.Index.TOKENIZED,
    Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS_OFFSETS);

Can anyone help me with how to find the keyword position and extract the nearest 15 words? Please send some code. Thanks, Ashish
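For context, with a field indexed WITH_POSITIONS_OFFSETS like the one above, the token positions can be read back from the term vector. The following is only a rough sketch assuming a Lucene.Net 2.x-style API (IndexReader.GetTermFreqVector returning a TermPositionVector); the directory, docId, storedContent, and keyword parameters are placeholders, and splitting the stored text on spaces only approximates the analyzer's token positions.

using System;
using Lucene.Net.Index;
using Lucene.Net.Store;

public static class TermVectorSnippet
{
    // directory: the index location; docId: the matched document id;
    // storedContent: the stored value of the "content" field for that document.
    public static string Extract(Directory directory, int docId, string storedContent, string keyword)
    {
        IndexReader reader = IndexReader.Open(directory, true); // read-only reader

        // The cast is only valid because the field was indexed with TermVector.WITH_POSITIONS_OFFSETS.
        TermPositionVector tpv = (TermPositionVector)reader.GetTermFreqVector(docId, "content");

        int termIndex = tpv.IndexOf(keyword);              // index of the term inside the vector
        int[] positions = tpv.GetTermPositions(termIndex); // token positions of each occurrence

        // Approximate token positions by splitting the stored text on spaces,
        // then take roughly 10 words before and 10 words after the first occurrence.
        string[] words = storedContent.Split(' ');
        int hit = Math.Min(positions[0], words.Length - 1);
        int start = Math.Max(0, hit - 10);
        int count = Math.Min(words.Length - start, 21);

        // (Closing/disposing the reader is omitted here; do it as your Lucene.Net version requires.)
        return string.Join(" ", words, start, count);
    }
}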

    Read the article

  • Insert default value if input-text is deleted

    - by Kim Andersen
Hi all, I have the following piece of jQuery code:

$(".SearchForm input:text").each(function(){
    /* Sets the current value as the defaultvalue attribute */
    if(allowedDefaults.indexOf($(this).val()) > 0 || $(this).val() == "") {
        $(this).attr("defaultvalue", $(this).val());
        $(this).css("color","#9d9d9d");

        /* Onfocus, if default value, clear the field */
        $(this).focus(function(){
            if($(this).val() == $(this).attr("defaultvalue")) {
                $(this).val("");
                $(this).css("color","#4c4c4c");
            }
        });

        /* Onblur, if empty, insert defaultvalue */
        $(this).blur(function(){
            alert("ud");
            if($(this).val() == "") {
                $(this).val($(this).attr("defaultvalue"));
                $(this).css("color","#9d9d9d");
            } else {
                $(this).removeClass("ignore");
            }
        });
    }
});

I use this code to insert some default text into some of my input fields when nothing else is typed in. This means that when a user sees my search form, the default values will be set as an attribute on the input field, and this will be the value that is shown. When a user clicks inside the input field, the default value will be removed. When the user first sees an input field, it looks like this:

<input type="text" value="" defaultvalue="From" />

This works just fine, but I have a big challenge. If a user has posted the form, and something was entered into one of the fields, then I can't show the default value in the field if the user deletes the text from the input field. This is happening because the value of the text field still contains something, even after the user deletes the content. So my problem is how to show the default value when the form has been submitted and the user then removes the typed-in content. When the form is submitted the input looks like this, and keeps looking like this until the form is submitted again:

<input type="text" value="someValue" defaultvalue="From" />

So I need to show the default value in the input field right after the user has deleted the content in the field and removed the focus from the field. Does everyone understand what my problem is? Otherwise just ask; I have struggled with this one for quite some time now, so any help will be greatly appreciated. Thanks in advance, Kim Andersen

    Read the article

  • TextField focus on UIAlertView

    - by Nyxem
Hello, I have a subclass of UIAlertView with 2 UITextFields for login/password. My problem is that when I have finished typing my login and change the focus to the password field, there is a cursor in both the login field and the password field, even though the focus is on the password field. So my question is: how can I make the cursor disappear from the login field? Thanks for your help.

    Read the article
