Search Results

Search found 14532 results on 582 pages for 'dynamic types'.


  • Access Control Service: Handling Errors

    - by Your DisplayName here!
    Another common problem with external authentication is how to deal with sign-in errors. In active federation like WS-Trust there are well-defined SOAP faults to communicate problems to a client. But with web applications, the error information is typically generated and displayed on the external sign-in page. The relying party does not know about the error, nor can it help the user in any way. The Access Control Service allows you to post sign-in errors to a specified page. You set up this page in the relying party registration. That means that whenever an error occurs in ACS, the error information gets packaged up as a JSON string and posted to the page specified. This way you get structured error information back into your application so you can display a friendlier error message or log the error. I added error page support to my ACS2 sample, which can be downloaded here. How to turn the JSON error into CLR types The JSON schema is reasonably simple; the following class turns the JSON into an object: [DataContract] public class AcsErrorResponse { [DataMember(Name = "context", Order = 1)] public string Context { get; set; } [DataMember(Name = "httpReturnCode", Order = 2)] public string HttpReturnCode { get; set; } [DataMember(Name = "identityProvider", Order = 3)] public string IdentityProvider { get; set; } [DataMember(Name = "timeStamp", Order = 4)] public string TimeStamp { get; set; } [DataMember(Name = "traceId", Order = 5)] public string TraceId { get; set; } [DataMember(Name = "errors", Order = 6)] public List<AcsError> Errors { get; set; } public static AcsErrorResponse Read(string json) { var serializer = new DataContractJsonSerializer(typeof(AcsErrorResponse)); var response = serializer.ReadObject(new MemoryStream(Encoding.Default.GetBytes(json))) as AcsErrorResponse; if (response != null) { return response; } else { throw new ArgumentException("json"); } } } [DataContract] public class AcsError { [DataMember(Name = "errorCode", Order = 1)] public string Code { get; set; } [DataMember(Name = "errorMessage", Order = 2)] public string Message { get; set; } } Retrieving the error information You then need to provide a page that takes the POST and deserializes the information. My sample simply fills a view that shows all information. But that’s for diagnostic/sample purposes only. You shouldn’t show the real errors to your end users. public class SignInErrorController : Controller { [HttpPost] public ActionResult Index() { var errorDetails = Request.Form["ErrorDetails"]; var response = AcsErrorResponse.Read(errorDetails); return View("SignInError", response); } } Also keep in mind that the error page is an anonymous page and that you are taking external input. So all the usual input validation applies.
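
    As a rough usage sketch, assuming a payload shaped like the DataContract above (the JSON values and the error code shown here are invented, not a real ACS response):

    using System;

    class AcsErrorResponseDemo
    {
        static void Main()
        {
            // Hand-written sample payload; member names mirror the DataMember attributes above.
            string json = @"{""context"":""rp-context"",""httpReturnCode"":""401"",""identityProvider"":""SomeIdp"",""timeStamp"":""2011-05-01 10:00:00Z"",""traceId"":""abc123"",""errors"":[{""errorCode"":""ACS10001"",""errorMessage"":""Sample sign-in error.""}]}";

            var response = AcsErrorResponse.Read(json);

            // Log the details rather than showing them to end users, as the post advises.
            foreach (var error in response.Errors)
            {
                Console.WriteLine("{0}: {1}", error.Code, error.Message);
            }
        }
    }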

    Read the article

  • What Counts for A DBA: Observant

    - by drsql
    When walking up to the building where I work, I can see CCTV cameras placed here and there for monitoring access to the building. We are required to wear authorization badges which could be checked at any time. Do we have enemies? Of course! No one is 100% safe; even if your life is a fairy tale, there is always a witch with an apple waiting to snack you into a thousand years of slumber (or at least so I recollect from elementary school.) Even Little Bo Peep had to keep a wary lookout. We nerdy types (or maybe it was just me?) generally learned on the school playground to keep an eye open for unprovoked attack from simpler, but more muscular souls, and take steps to avoid messy confrontations well in advance. After we’d apprehensively negotiated adulthood with varying degrees of success, these skills of watching for danger, and avoiding it, translated quite well to the technical careers so many of us were destined for. And nowhere else is this talent for watching out for irrational malevolence so appropriate as in a career as a production DBA. It isn’t always active malevolence that the DBA needs to watch out for, but the even scarier quirks of common humanity. A large number of the issues that occur in the enterprise happen just randomly or even just one time ever in a spurious manner, like in the case where a person decided to download the entire MSDN library of software, cross join every non-indexed billion row table together, and simultaneously stream the HD feed of 5 different sporting events, making the network access slow just as the corporate online sale started. The decent DBA team, like the going, gets tough under such circumstances. They spring into action, check all of the sources of active information, observe that the issue is no longer happening now, and figure that either it wasn’t the database’s fault or that the reboot of whatever device on the network fixed the problem. This sort of reactive support is good, and will be the initial reaction of even excellent DBAs, but it is not the end of the story if you really want to know what happened and avoid getting called again when it isn’t even your fault. When fires start raging within the corporate software forest, the DBA’s instinct is to actively find a way to douse the flames and get back to having no one in the company have any idea who they are. Even better for them is to find a way of killing a potential problem while the fires are small, long before they can be classified as raging. The observant DBA will have already been monitoring the server environment for months in advance. Most troubles, such as disk space and security intrusions, can be predicted and dealt with by alerting systems, whereas other trouble can come out of the blue and requires a skill of observing ongoing conditions and noticing inexplicable changes that could signal an emerging problem. You can’t automate the DBA, because the bankable skill of a DBA is in detecting the early signs of unexpected problems, and working out how to deal with them before anyone else notices them. To achieve this, the DBA will check the situation as it is currently happening, and in many cases is likely to have been the person who submitted the problem to the level 1 support person in the first place, just to let the support team know of impending issues (always well received, I tell you what!). Database and host computer settings, configurations, and even critical data might be profiled and captured for later comparisons. 
He’ll use monitoring tools, built-in, commercial (not to be too crassly commercial or anything, but one such tool is SQL Monitor), and lots of homebrew monitoring tools to monitor for problems and changes in the server environment. You will know that you have it right when a support call comes in and you can look at your monitoring tools and quickly respond that “response time is well within the normal range, the query that supports the failing interface works perfectly and has actually only been called 67% as often as normal, so I am more than willing to help diagnose the problem, but it isn’t the database server’s fault and is probably a client or networking slowdown causing the interface to be used less frequently than normal.” And that is the best thing for any DBA to observe…

    Read the article

  • Data Source Security Part 1

    - by Steve Felts
    I’ve written a couple of articles on how to store data source security credentials using the Oracle wallet.  I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources.  There are more options than you might think! There have been several enhancements in this area in WLS 10.3.6.  There are a couple more enhancements planned for release WLS 12.1.2 that I will include here for completeness.  This isn’t intended as a teaser.  If you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6.   The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics.  It also seems that the knowledge of how to apply some of these features isn’t written down.  The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.  Introduction to WebLogic Data Source Security Options By default, you define a single database user and password for a data source.  You can store it in the data source descriptor or make use of the Oracle wallet.  This is a very simple and efficient approach to security.  All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out.  That is, it’s a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity).  Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS.   No additional information is needed when you get a connection because it’s all available from the data source descriptor (or wallet). java.sql.Connection conn =  mydatasource.getConnection(); Note: You can enter the password as a name-value pair in the Properties field (this is not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC Driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console.  The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab. The JDBC API can also be used to programmatically specify a database user name and password as in the following.  java.sql.Connection conn = mydatasource.getConnection(“user”, “password”); According to the JDBC specification, it’s supposed to take a database user and associated password but different vendors implement this differently.  WLS, by default, treats this as an application server user and password.  The pair is authenticated to see if it’s a valid user and that user is used for WLS security permission checks.  By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification but database credentials are one-step removed from the application code.  More details and the rationale are described later. 
While the default approach is simple, it does mean that only one database user is doing all of the work.  You can’t figure out who actually did the update and you can’t restrict SQL operations by who is running the operation, at least at the database level.   Any type of per-user logic will need to be in the application code instead of having the database do it.  There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

WebLogic Data Source Security Options

This table describes the features available for WebLogic data sources to configure database security credentials and a brief description.  It also captures information about the compatibility of these features with one another.

Feature | Description | Can be used with | Can’t be used with
User authentication (default) | Default getConnection(user, password) behavior – validate the input and use the user/password in the descriptor. | Set client identifier | Proxy Session, Identity pooling, Use database credentials
Use database credentials | Instead of using the credential mapper, use the supplied user and password directly. | Set client identifier, Proxy session, Identity pooling | User authentication, Multi Data Source
Set Client Identifier | Set a client identifier property associated with the connection (Oracle and DB2 only). | Everything | (none)
Proxy Session | Set a light-weight proxy user associated with the connection (Oracle-only). | Set client identifier, Use database credentials | Identity pooling, User authentication
Identity pooling | Heterogeneous pool of connections owned by specified users. | Set client identifier, Use database credentials | Proxy session, User authentication, Labeling, Multi-datasource, Active GridLink

Note that all of these features are available with both XA and non-XA drivers. Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases – oops).  The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future). The subsequent articles will describe these features in more detail.  Keep referring back to this table to see the big picture.

    Read the article

  • Is it feasible and useful to auto-generate some unit test code?

    - by skiwi
    Earlier today I have come up with an idea, based upon a particular real use case, which I would want to have checked for feasability and usefulness. This question will feature a fair chunk of Java code, but can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudo code. The idea Make unit testing less cumbersome by adding in some ways to autogenerate code based on human interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone ever proved that doing TDD is better over first creating code and then immediatly therafter the tests. This may even be adapted to be fit into TDD, but that is not my current goal. To show how it is intended to be used, I'll copy one of my classes here, for which I need to make unit tests. public class PutMonsterOnFieldAction implements PlayerAction { private final int handCardIndex; private final int fieldMonsterIndex; public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) { this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, "handCardIndex"); this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, "fieldCardIndex"); } @Override public boolean isActionAllowed(final Player player) { Objects.requireNonNull(player, "player"); Hand hand = player.getHand(); Field field = player.getField(); if (handCardIndex >= hand.getCapacity()) { return false; } if (fieldMonsterIndex >= field.getMonsterCapacity()) { return false; } if (field.hasMonster(fieldMonsterIndex)) { return false; } if (!(hand.get(handCardIndex) instanceof MonsterCard)) { return false; } return true; } @Override public void performAction(final Player player) { Objects.requireNonNull(player); if (!isActionAllowed(player)) { throw new PlayerActionNotAllowedException(); } Hand hand = player.getHand(); Field field = player.getField(); field.setMonster(fieldMonsterIndex, (MonsterCard)hand.play(handCardIndex)); } } We can observe the need for the following tests: Constructor test with valid input Constructor test with invalid inputs isActionAllowed test with valid input isActionAllowed test with invalid inputs performAction test with valid input performAction test with invalid inputs My idea mainly focuses on the isActionAllowed test with invalid inputs. Writing these tests is not fun, you need to ensure a number of conditions and you check whether it really returns false, this can be extended to performAction, where an exception needs to be thrown in that case. The goal of my idea is to generate those tests, by indicating (through GUI of IDE hopefully) that you want to generate tests based on a specific branch. The implementation by example User clicks on "Generate code for branch if (handCardIndex >= hand.getCapacity())". Now the tool needs to find a case where that holds. (I haven't added the relevant code as that may clutter the post ultimately) To invalidate the branch, the tool needs to find a handCardIndex and hand.getCapacity() such that the condition >= holds. It needs to construct a Player with a Hand that has a capacity of at least 1. It notices that the capacity private int of Hand needs to be at least 1. It searches for ways to set it to 1. Fortunately it finds a constructor that takes the capacity as an argument. It uses 1 for this. 
Some more work needs to be done to succesfully construct a Player instance, involving the creation of objects that have constraints that can be seen by inspecting the source code. It has found the hand with the least capacity possible and is able to construct it. Now to invalidate the test it will need to set handCardIndex = 1. It constructs the test and asserts it to be false (the returned value of the branch) What does the tool need to work? In order to function properly, it will need the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally this could be done through the javadoc, but that is not always used to indicate all constraints. It could also do some trial and error, but it pretty much stops if you cannot attach source code to compiled classes. Then it needs some basic knowledge of what the primitive types are, including arrays. And it needs to be able to construct some form of "modification trees". The tool knows that it needs to change a certain variable to a different value in order to get the correct testcase. Hence it will need to list all possible ways to change it, without using reflection obviously. What this tool will not replace is the need to create tailored unit tests that tests all kinds of conditions when a certain method actually works. It is purely to be used to test methods when they invalidate constraints. My questions: Is creating such a tool feasible? Would it ever work, or are there some obvious problems? Would such a tool be useful? Is it even useful to automatically generate these testcases at all? Could it be extended to do even more useful things? Does, by chance, such a project already exist and would I be reinventing the wheel? If not proven useful, but still possible to make such thing, I will still consider it for fun. If it's considered useful, then I might make an open source project for it depending on the time. For people searching more background information about the used Player and Hand classes in my example, please refer to this repository. At the time of writing the PutMonsterOnFieldAction has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests.

    Read the article

  • IsNumeric() Broken? Only up to a point.

    - by Phil Factor
    In SQL Server, probably the best-known 'broken' function is poor ISNUMERIC(). The documentation says 'ISNUMERIC returns 1 when the input expression evaluates to a valid numeric data type; otherwise it returns 0. ISNUMERIC returns 1 for some characters that are not numbers, such as plus (+), minus (-), and valid currency symbols such as the dollar sign ($).' Although it will take numeric data types (No, I don't understand why either), its main use is supposed to be to test strings to make sure that you can convert them to whatever numeric datatype you are using (int, numeric, bigint, money, smallint, smallmoney, tinyint, float, decimal, or real). It wouldn't actually be of much use anyway, since each datatype has different rules. You actually need a RegEx to do a reasonably safe check. The other snag is that the IsNumeric() function is a bit broken. SELECT ISNUMERIC(',') This cheerfully returns 1, since it believes that a comma is a currency symbol (not a thousands-separator) and you meant to say 0, in this strange currency.  However, SELECT ISNUMERIC(N'£') isn't recognized as currency.  '+' and '-' are seen to be numeric, which is stretching it a bit. You'll see that what it allows isn't really broken except that it doesn't recognize Unicode currency symbols: It just tells you that one numeric type is likely to accept the string if you do an explicit conversion to it using the string. Both these work fine, so poor IsNumeric has to follow suit. SELECT CAST('0E0' AS FLOAT)  SELECT CAST (',' AS MONEY)  but it is harder to predict which data type will accept a '+' sign. SELECT CAST ('+' AS money) --0.00  SELECT CAST ('+' AS INT) --0  SELECT CAST ('+' AS numeric) /* Msg 8115, Level 16, State 6, Line 4 Arithmetic overflow error converting varchar to data type numeric.*/  SELECT CAST ('+' AS FLOAT) /* Msg 8114, Level 16, State 5, Line 5 Error converting data type varchar to float.*/ So we can begin to say that maybe IsNumeric isn't really broken, but is answering a silly question: 'Is there some numeric datatype to which I can convert this string?' Almost, but not quite. The bug is that it doesn't understand Unicode currency characters such as the euro or franc which are actually valid when used in the CAST function. (perhaps they're delaying fixing the euro bug just in case it isn't necessary). SELECT ISNUMERIC (N'€23.67') --0  SELECT CAST (N'€23.67' AS money) --23.67  SELECT ISNUMERIC (N'£100.20') --1  SELECT CAST (N'£100.20' AS money) --100.20  Also the CAST function itself is quirky in that it cannot convert perfectly reasonable string-representations of integers into integers. SELECT ISNUMERIC('200,000') --1  SELECT CAST ('200,000' AS INT) /* Msg 245, Level 16, State 1, Line 2 Conversion failed when converting the varchar value '200,000' to data type int.*/  A more sensible question is 'Is this an integer or decimal number'. This cuts out a lot of the apparent quirkiness. We do this by the '+E0' trick. If we want to include floats in the check, we'll need to make it a bit more complicated. Here is a small test-rig. SELECT PossibleNumber, ISNUMERIC(CAST(PossibleNumber AS NVARCHAR(20)) + 'E+00') AS Hack, ISNUMERIC (PossibleNumber + CASE WHEN PossibleNumber LIKE '%E%' THEN '' ELSE 'E+00' END) AS Hackier, ISNUMERIC(PossibleNumber) AS RawIsNumeric FROM (SELECT CAST(',' AS NVARCHAR(10)) AS PossibleNumber UNION SELECT '£' UNION SELECT '.'
UNION SELECT '56' UNION SELECT '456.67890' UNION SELECT '0E0' UNION SELECT '-' UNION SELECT '-' UNION SELECT '.' UNION SELECT N'?' UNION SELECT N'¢' UNION SELECT N'?' UNION SELECT N'?34.56' UNION SELECT '-345' UNION SELECT '3.332228E+09') AS examples

Which gives the result ...

PossibleNumber   Hack  Hackier  RawIsNumeric
--------------   ----  -------  ------------
?                0     0        0
-                0     0        1
,                0     0        1
.                0     0        1
¢                0     0        1
£                0     0        1
?                0     0        0
?34.56           0     0        0
0E0              0     1        1
3.332228E+09     0     1        1
-345             1     1        1
456.67890        1     1        1
56               1     1        1

I suspect that this is as far as you'll get before you abandon IsNumeric in favour of a regex. You can only get part of the way with the LIKE wildcards, because you cannot specify quantifiers. You'll need full-blown Regex strings like these ...

[-+]?\b[0-9]+(\.[0-9]+)?\b   #INT or REAL
[-+]?\b[0-9]{1,3}\b          #TINYINT
[-+]?\b[0-9]{1,5}\b          #SMALLINT

... but you'll get even these to fail to catch numbers out of range. So is IsNumeric() an out and out rogue function? Not really, I'd say, but then it would need a damned good lawyer.
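
    If the validation ends up happening client-side in .NET code rather than in T-SQL, the same regex idea translates directly. This is only a sketch: it adapts the "#INT or REAL" pattern above (anchored with ^ and $ instead of \b), has no exponent handling, and, as warned above, does nothing about out-of-range values.

    using System;
    using System.Text.RegularExpressions;

    class NumericPreCheck
    {
        // Adapted from the INT or REAL pattern above; a rough pre-check before an explicit CAST/parse.
        static readonly Regex IntOrReal = new Regex(@"^[-+]?[0-9]+(\.[0-9]+)?$");

        static void Main()
        {
            foreach (var candidate in new[] { ",", "£", "0E0", "-345", "456.67890", "200,000" })
            {
                Console.WriteLine("{0,-12} {1}", candidate, IntOrReal.IsMatch(candidate));
            }
        }
    }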

    Read the article

  • Ninject WithConstructorArgument : No matching bindings are available, and the type is not self-bindable

    - by Jean-François Beauchamp
    My understanding of WithConstructorArgument is probably erroneous, because the following is not working: I have a service, lets call it MyService, whose constructor is taking multiple objects, and a string parameter called testEmail. For this string parameter, I added the following Ninject binding: string testEmail = "[email protected]"; kernel.Bind<IMyService>().To<MyService>().WithConstructorArgument("testEmail", testEmail); However, when executing the following line of code, I get an exception: var myService = kernel.Get<MyService>(); Here is the exception I get: Error activating string No matching bindings are available, and the type is not self-bindable. Activation path: 2) Injection of dependency string into parameter testEmail of constructor of type MyService 1) Request for MyService Suggestions: 1) Ensure that you have defined a binding for string. 2) If the binding was defined in a module, ensure that the module has been loaded into the kernel. 3) Ensure you have not accidentally created more than one kernel. 4) If you are using constructor arguments, ensure that the parameter name matches the constructors parameter name. 5) If you are using automatic module loading, ensure the search path and filters are correct. What am I doing wrong here? UPDATE: Here is the MyService constructor: [Ninject.Inject] public MyService(IMyRepository myRepository, IMyEventService myEventService, IUnitOfWork unitOfWork, ILoggingService log, IEmailService emailService, IConfigurationManager config, HttpContextBase httpContext, string testEmail) { this.myRepository = myRepository; this.myEventService = myEventService; this.unitOfWork = unitOfWork; this.log = log; this.emailService = emailService; this.config = config; this.httpContext = httpContext; this.testEmail = testEmail; } I have standard bindings for all the constructor parameter types. Only 'string' has no binding, and HttpContextBase has a binding that is a bit different: kernel.Bind<HttpContextBase>().ToMethod(context => new HttpContextWrapper(new HttpContext(new MyHttpRequest("", "", "", null, new StringWriter())))); and MyHttpRequest is defined as follows: public class MyHttpRequest : SimpleWorkerRequest { public string UserHostAddress; public string RawUrl; public MyHttpRequest(string appVirtualDir, string appPhysicalDir, string page, string query, TextWriter output) : base(appVirtualDir, appPhysicalDir, page, query, output) { this.UserHostAddress = "127.0.0.1"; this.RawUrl = null; } }
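
    One behaviour worth double-checking here, and easy to reproduce in isolation: WithConstructorArgument is attached to the Bind<IMyService>().To<MyService>() binding, so it only applies when the kernel is asked for IMyService. kernel.Get<MyService>() goes through an implicit self-binding that knows nothing about testEmail, which matches the "Request for MyService" line in the activation path above. A trimmed-down sketch with hypothetical types:

    using System;
    using Ninject;

    public interface IGreetingService { string Email { get; } }

    public class GreetingService : IGreetingService
    {
        public string Email { get; private set; }

        public GreetingService(string testEmail)
        {
            Email = testEmail;
        }
    }

    class Demo
    {
        static void Main()
        {
            var kernel = new StandardKernel();
            kernel.Bind<IGreetingService>().To<GreetingService>()
                  .WithConstructorArgument("testEmail", "test@example.com");

            // Resolving via the interface uses the binding above, so testEmail is supplied.
            var viaInterface = kernel.Get<IGreetingService>();
            Console.WriteLine(viaInterface.Email);

            // Resolving the concrete type instead uses an implicit self-binding with no
            // constructor argument, which raises the same "Error activating string".
            // var viaConcrete = kernel.Get<GreetingService>();
        }
    }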

    Read the article

  • MVC - Ajax form - return partial view doesn't update in <div> target

    - by Jack
    I have an index view that I want to update automatically as the user types in a client id. I got something similiar to work (only it was updating just a label) - but this will not work. What happens is the partial is just rendered by itself (not in place of the UpdateTargetID). So the data is rendered on a new page. Here is my code: Controller: public ActionResult ClientList(string queryText) { var clients = CR.GetClientLike(queryText); return PartialView("ClientIndex", clients); } Partial View: <table> <thead> <tr> <td>Client ID</td> <td>Phone1</td> <td>Phone2</td> <td>Phone3</td> <td>Phone4</td> </tr> </thead> <tbody> <% if (Model != null) { foreach (Client c in Model) { %> <tr> <td><%= Html.Encode(c.ClientID)%></td> <td><%= Html.Encode(c.WorkPhone)%></td> <td><%= Html.Encode(c.WorkPhone1)%></td> <td><%= Html.Encode(c.WorkPhone2)%></td> <td><%= Html.Encode(c.WorkPhone3)%></td> </tr> <% } } %> </tbody> Main View: Insert code messed up, so this is just copy/pasted: $(function() { $("#queryText").keyup(function() { $('#sForm').submit(); }); }); <% using (Ajax.BeginForm("ClientList", /* new { queryText = Form.Controls[2] ?? }*/"", new AjaxOptions { UpdateTargetId = "status", InsertionMode = InsertionMode.Replace }, new { @id = "sForm" })) { % <% } % <div id="status" class="status" name="status"> <%--<% Html.RenderPartial("ClientIndex", ViewData["clients"]); %> Should this be here???? --%> </div>

    Read the article

  • Using xsl:variable in an xsl:foreach select statement

    - by Nefariousity
    I'm trying to iterate through an xml document using xsl:foreach but I need the select=" " to be dynamic so I'm using a variable as the source. Here's what I've tried: ... <xsl:template name="SetDataPath"> <xsl:param name="Type" /> <xsl:variable name="Path_1">/Rating/Path1/*</xsl:variable> <xsl:variable name="Path_2">/Rating/Path2/*</xsl:variable> <xsl:if test="$Type='1'"> <xsl:value-of select="$Path_1"/> </xsl:if> <xsl:if test="$Type='2'"> <xsl:value-of select="$Path_2"/> </xsl:if> <xsl:template> ... <!-- Set Data Path according to Type --> <xsl:variable name="DataPath"> <xsl:call-template name="SetDataPath"> <xsl:with-param name="Type" select="/Rating/Type" /> </xsl:call-template> </xsl:variable> ... <xsl:for-each select="$DataPath"> ... The foreach threw an error stating: "XslTransformException - To use a result tree fragment in a path expression, first convert it to a node-set using the msxsl:node-set() function." When I use the msxsl:node-set() function though, my results are blank. I'm aware that I'm setting $DataPath to a string, but shouldn't the node-set() function be creating a node set from it? Am I missing something? When I don't use a variable: <xsl:for-each select="/Rating/Path1/*"> I get the proper results. Here's the XML data file I'm using: <Rating> <Type>1</Type> <Path1> <sarah> <dob>1-3-86</dob> <user>Sarah</user> </sarah> <joe> <dob>11-12-85</dob> <user>Joe</user> </joe> </Path1> <Path2> <jeff> <dob>11-3-84</dob> <user>Jeff</user> </jeff> <shawn> <dob>3-5-81</dob> <user>Shawn</user> </shawn> </Path2> </Rating> My question is simple, how do you run a foreach on 2 different paths?

    Read the article

  • WPF Combobox: Autocomplete

    - by user279244
    Hi, I have implemented a Autocomplete enabled Combobox in WPF. It is like below... private void cbxSession_Loaded(object sender, RoutedEventArgs e) { cbxSession.ApplyTemplate(); TextBox textBox = cbxSession.Template.FindName("PART_EditableTextBox", cbxSession) as TextBox; textBox.IsReadOnly = false; if (textBox != null) { textBox.KeyUp += new KeyEventHandler(textBox_KeyUp); textBox.KeyUp += delegate { ///open the drop down and start filtering based on what the user types into the combobox cbxSession.IsDropDownOpen = true; cbxSession.Items.Filter += a => { if (a.ToString().ToUpper().Contains(textBox.Text.ToUpper())) return true; else return false; }; }; } } void textBox_KeyUp(object sender, KeyEventArgs e) { if ((e.Key == System.Windows.Input.Key.Up) || (e.Key == System.Windows.Input.Key.Down)) { e.Handled = true; } else if (e.Key == System.Windows.Input.Key.Enter) { e.Handled = true; cbxSession.IsDropDownOpen = false; } } void textBox_KeyDown(object sender, KeyEventArgs e) { cbxSession.SelectionChanged -= cbxSession_SelectionChanged; if (e.Key == System.Windows.Input.Key.Enter) { e.Handled = true; cbxSession.SelectionChanged += cbxSession_SelectionChanged; } if ((e.Key == System.Windows.Input.Key.Up) || (e.Key == System.Windows.Input.Key.Down)) { e.Handled = true; } } private void cbxSession_DropDownClosed(object sender, EventArgs e) { if (cbxSession.Text != "") { TextBox textBox = cbxSession.Template.FindName("PART_EditableTextBox", cbxSession) as TextBox; if (!cbxSession.Items.Contains(textBox.Text)) { textBox.Text = cbxSession.Text; } } } private void cbxSession_DropDownOpened(object sender, EventArgs e) { cbxSession.Items.Filter += a => { return true; }; } <ComboBox x:Name="cbxSession" Width="260" Canvas.Top="5" Canvas.Left="79" Height="25" Visibility="Visible" SelectionChanged="cbxSession_SelectionChanged" MaxDropDownHeight="200" IsTextSearchEnabled="False" IsEditable="True" IsReadOnly="True" Loaded="cbxSession_Loaded" DropDownClosed="cbxSession_DropDownClosed" StaysOpenOnEdit="True" DropDownOpened="cbxSession_DropDownOpened"> <ComboBox.ItemsPanel> <ItemsPanelTemplate> <VirtualizingStackPanel IsVirtualizing="True" IsItemsHost="True"/> </ItemsPanelTemplate> </ComboBox.ItemsPanel> </ComboBox> But, the problem I am facing is... When I try searching, the first character goes missing. And this happens only once. Secondly, When I am using Arrow buttons to the filtered items, the ComboboxSelectionChanged event is fired. Is there any way to make it fire only on the click of 'Enter'

    Read the article

  • jqGrid multi-checkbox custom edittype solution

    - by gsiler
    For those of you trying to understand jqGrid custom edit types ... I created a multi-checkbox form element, and thought I'd share. This was built using version 3.6.4. If anyone has a more efficient solution, please pass it on. Within the colModel, the appropriate edit fields look like this: edittype:'custom' editoptions:{ custom_element:MultiCheckElem, custom_value:MultiCheckVal, list:'Check1,Check2,Check3,Check4' } Here are the javascript functions (BTW, It also works – with some modifications – when the list of checkboxes is in a DIV block): //———————————————————— // Description: // MultiCheckElem is the "custom_element" function that builds the custom multiple check box input // element. From what I have gathered, jqGrid calls this the first time the form is launched. After // that, only the "custom_value" function is called. // // The full list of checkboxes is in the jqGrid "editoptions" section "list" tag (in the options // parameter). //———————————————————— function MultiCheckElem( value, options ) { //———- // for each checkbox in the list // build the input element // set the initial "checked" status // endfor //———- var ctl = ''; var ckboxAry = options.list.split(','); for ( var i in ckboxAry ) { var item = ckboxAry[i]; ctl += '<input type="checkbox" '; if ( value.indexOf(item + '|') != -1 ) ctl += 'checked="checked" '; ctl += 'value="' + item + '"> ' + item + '</input><br />&nbsp;'; } ctl = ctl.replace( /<br />&nbsp;$/, '' ); return ctl; } //———————————————————— // Description: // MultiCheckVal is the "custom_value" function for the custom multiple check box input element. It // appears that jqGrid invokes this function the first time the form is submitted and, the rest of // the time, when the form is launched (action = set) and when it is submitted (action = 'get'). //———————————————————— function MultiCheckVal(elem, action, val) { var items = ''; if (action == 'get') // the form has been submitted { //———- // for each input element // if it's checked, add it to the list of items // endfor //———- for (var i in elem) { if (elem[i].tagName == 'INPUT' && elem[i].checked ) items += elem[i].value + ','; } // items contains a comma delimited list that is returned as the result of the element items = items.replace(/,$/, ''); } else // the form is launched { //———- // for each input element // based on the input value, set the checked status // endfor //———- for (var i in elem) { if (elem[i].tagName == 'INPUT') { if (val.indexOf(elem[i].value + '|') == -1) elem[i].checked = false; else elem[i].checked = true; } } // endfor } return items; }

    Read the article

  • "No message serializer has been configured" error when starting NServiceBus endpoint

    - by SteveBering
    My GenericHost hosted service is failing to start with the following message: 2010-05-07 09:13:47,406 [1] FATAL NServiceBus.Host.Internal.GenericHost [(null)] <(null) - System.InvalidOperationException: No message serializer has been con figured. at NServiceBus.Unicast.Transport.Msmq.MsmqTransport.CheckConfiguration() in d:\BuildAgent-02\work\672d81652eaca4e1\src\impl\unicast\NServiceBus.Unicast.Msmq\ MsmqTransport.cs:line 241 at NServiceBus.Unicast.Transport.Msmq.MsmqTransport.Start() in d:\BuildAgent-02\work\672d81652eaca4e1\src\impl\unicast\NServiceBus.Unicast.Msmq\MsmqTransport .cs:line 211 at NServiceBus.Unicast.UnicastBus.NServiceBus.IStartableBus.Start(Action startupAction) in d:\BuildAgent-02\work\672d81652eaca4e1\src\unicast\NServiceBus.Uni cast\UnicastBus.cs:line 694 at NServiceBus.Unicast.UnicastBus.NServiceBus.IStartableBus.Start() in d:\BuildAgent-02\work\672d81652eaca4e1\src\unicast\NServiceBus.Unicast\UnicastBus.cs:l ine 665 at NServiceBus.Host.Internal.GenericHost.Start() in d:\BuildAgent-02\work\672d81652eaca4e1\src\host\NServiceBus.Host\Internal\GenericHost.cs:line 77 My endpoint configuration looks like: public class ServiceEndpointConfiguration : IConfigureThisEndpoint, AsA_Publisher, IWantCustomInitialization { public void Init() { // build out persistence infrastructure var sessionFactory = Bootstrapper.InitializePersistence(); // configure NServiceBus infrastructure var container = Bootstrapper.BuildDependencies(sessionFactory); // set up logging log4net.Config.XmlConfigurator.Configure(); Configure.With() .Log4Net() .UnityBuilder(container) .XmlSerializer(); } } And my app.config looks like: <configSections> <section name="MsmqTransportConfig" type="NServiceBus.Config.MsmqTransportConfig, NServiceBus.Core" /> <section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" /> <section name="Logging" type="NServiceBus.Config.Logging, NServiceBus.Core" /> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" requirePermission="false" /> </configSections> <Logging Threshold="DEBUG" /> <MsmqTransportConfig InputQueue="NServiceBus.ServiceInput" ErrorQueue="NServiceBus.Errors" NumberOfWorkerThreads="1" MaxRetries="2" /> <UnicastBusConfig DistributorControlAddress="" DistributorDataAddress="" ForwardReceivedMessagesTo="NServiceBus.Auditing"> <MessageEndpointMappings> <!-- publishers don't need to set this for their own message types --> </MessageEndpointMappings> </UnicastBusConfig> <connectionStrings> <add name="Db" connectionString="Data Source=..." providerName="System.Data.SqlClient" /> </connectionStrings> <log4net debug="true"> <root> <level value="INFO"/> </root> <logger name="NHibernate"> <level value="ERROR" /> </logger> </log4net> This has worked in the past, but seems to be failing when the generic host starts. My endpoint configuration is below, along with the app.config for the service. What is strange is that in my endpoint configuration, I am specifying to use the XmlSerializer for message serialization. I don't see any other errors in the console output preceding the error message. What am I missing? Thanks, Steve

    Read the article

  • How can I use WCF with only basichttpbinding, SSL and Basic Authentication in IIS?

    - by Tim
    Hello, Is it possible to setup a WCF service with SSL and Basic Authentication in IIS using only BasicHttpBinding-binding? (I can’t use the wsHttpBinding-binding) The site is hosted on IIS 7, with the following authentication set up: - Anonymous access: off - Basic authentication: on - Integrated Windows authentication: off !! Service Config: <services> <service name="NameSpace.SomeService"> <host> <baseAddresses> <add baseAddress="https://hostname/SomeService/" /> </baseAddresses> </host> <!-- Service Endpoints --> <endpoint address="" binding="basicHttpBinding" bindingNamespace="http://hostname/SomeMethodName/1" contract="NameSpace.ISomeInterfaceService" name="Default" /> <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpsGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> <exceptionShielding/> </behavior> </serviceBehaviors> </behaviors> I tried 2 types of bindings with two different errors: 1 - IIS Error: 'Could not find a base address that matches scheme http for the endpoint with binding BasicHttpBinding. Registered base address schemes are [https]. <bindings> <basicHttpBinding> <binding> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Basic"/> </security> </binding> </basicHttpBinding> </bindings> 2 - IIS Error: Security settings for this service require 'Anonymous' Authentication but it is not enabled for the IIS application that hosts this service. <bindings> <basicHttpBinding> <binding> <security mode="Transport"> <transport clientCredentialType="Basic"/> </security> </binding> </basicHttpBinding> </bindings> Does somebody know how to configure this correctly? (if possible?)

    Read the article

  • WPF Styles and Tooltips Question

    - by A.R.
    I have a style that I am using to make dynamic tooltips on certain text boxes like so. <Style TargetType="{x:Type TextBox}"> <Setter Property="MinWidth" Value="100"/> <Style.Triggers> <Trigger Property="Validation.HasError" Value="True"> <!-- item of interest --> <Setter Property="ToolTip"> <Setter.Value> <MultiBinding Converter="{StaticResource ErrorMessageConverter}"> <Binding RelativeSource="{RelativeSource Self}" Path="Tag"/> </MultiBinding> </Setter.Value> </Setter> </Trigger> </Style.Triggers> </Style> This works very well, but if I want to use a more complex tooltip I can't figure out how to bind to 'Tag' anymore for the converter value. For example; ... <Setter Property="ToolTip"> <Setter.Value> <StackPanel> <TextBlock> <TextBlock.Text> <MultiBinding Converter="{StaticResource ErrorMessageConverter}"> <!-- item of interest --> <Binding RelativeSource=" what goes here?? "/> </MultiBinding> </TextBlock.Text> </TextBlock> <Image/> </StackPanel> </Setter.Value> </Setter> ... I have tried several flavors of 'FindAncestor' and what not for the relative source, but I can't get anything to work. Any ideas?? UPDATE: 12-29-2010 : Here is the correct code, answer provided by our friend Goblin below. Works perfectly! ... <Setter Property="ToolTip"> <Setter.Value> <!-- Item of interest --> <ToolTip DataContext="{Binding Path=PlacementTarget, RelativeSource={x:Static RelativeSource.Self}}"> <StackPanel> <Image/> <TextBlock> <TextBlock.Text> <MultiBinding Converter="{StaticResource ErrorMessageConverter}"> <Binding Path="Tag"/> </MultiBinding> </TextBlock.Text> </TextBlock> </StackPanel> </ToolTip> </Setter.Value> </Setter> ...

    Read the article

  • IdHttp Post Method Delphi 2010

    - by Dusten S
    Like others before me, I'm having troubles using the IdHttp(Indy 10.5.5) component in Delphi 2010. The code works fine in Delphi 7: var XMLString : AnsiString; lService : AnsiString; ResponseStream: TMemoryStream; InputStringList : TStringList; begin ResponseStream := TMemoryStream.Create; InputStringList := TStringList.Create; XMLString :='<?xml version="1.0" encoding="ISO-8859-1"?> '+ '<!DOCTYPE pnet_imessage_send PUBLIC "-//PeopleNet//pnet_imessage_send" "http://open.peoplenetonline.com/dtd/pnet_imessage_send.dtd"> '+ '<pnet_imessage_send> '+ ' <cid>username</cid> '+ ' <pw>password</pw> '+ ' <vehicle_number>tr01</vehicle_number> '+ ' <deliver>now</deliver> '+ ' <action> '+ ' <action_type>reply_with_freeform</action_type> '+ ' <urgent_reply>yes</urgent_reply> '+ ' </action> '+ ' <freeform_message>Test Message Version 2</freeform_message> '+ '</pnet_imessage_send> '; lService := 'imessage_send'; InputStringList.Values['service'] := lService; InputStringList.Values['xml'] := XMLString; try IdHttp1.Request.Accept := '*/*'; IdHttp1.Request.ContentType := 'text/XML'; IdHTTP1.Post('http://open.peoplenetonline.com/scripts/open.dll', InputStringList, ResponseStream); ... finally ResponseStream.Free; InputStringList.Free; end; The only differences so far between this and the D7 code is that I've changed the String types to AnsiString, and added the HTTP Request properties. The response I get back from the server is 'XML failed to parse. Whitespace expected at Line:1 Position: 19', I'm assuming the XML got garbled up somewhere in the process, but I can't figure our where I'm going wrong. Any ideas?

    Read the article

  • NHibernate which cache to use for WinForms application

    - by chiccodoro
    I have a C# WinForms application with a database backend (oracle) and use NHibernate for O/R mapping. I would like to reduce communication to the database as much as possible since the network in here is quite slow, so I read about second level caching. I found this quite good introduction, which lists the following available cache implementations. I'm wondering which implementation I should use for my application. The caching should be simple, it should not significantly slow down the first occurrence of a query, and it should not take much memory to load the implementing assemblies. (With NHibernate and Castle, the application already takes up to 80 MB of RAM!) Velocity: uses Microsoft Velocity which is a highly scalable in-memory application cache for all kinds of data. Prevalence: uses Bamboo.Prevalence as the cache provider. Bamboo.Prevalence is a .NET implementation of the object prevalence concept brought to life by Klaus Wuestefeld in Prevayler. Bamboo.Prevalence provides transparent object persistence to deterministic systems targeting the CLR. It offers persistent caching for smart client applications. SysCache: Uses System.Web.Caching.Cache as the cache provider. This means that you can rely on ASP.NET caching feature to understand how it works. SysCache2: Similar to NHibernate.Caches.SysCache, uses ASP.NET cache. This provider also supports SQL dependency-based expiration, meaning that it is possible to configure certain cache regions to automatically expire when the relevant data in the database changes. MemCache: uses memcached; memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Basically a distributed hash table. SharedCache: high-performance, distributed and replicated memory object caching system. See here and here for more info My considerations so far were: Velocity seems quite heavyweight and overkill (the files totally take 467 KB of disk space, haven't measured the RAM it takes so far because I didn't manage to make it run, see below) Prevalence, at least in my first attempt, slowed down my query from ~0.5 secs to ~5 secs, and caching didn't work (see below) SysCache seems to be for ASP.NET, not for winforms. MemCache and SharedCache seem to be for distributed scenarios. Which one would you suggest me to use? There would also be a built-in implementation, which of course is very lightweight, but the referenced article tells me that I "(...) should never use this cache provider for production code but only for testing." Besides the question which fits best into my situation I also faced problems with applying them: Velocity complained that "dcacheClient" tag not specified in the application configuration file. Specify valid tag in configuration file," although I created an app.config file for the assembly and pasted the example from this article. Prevalence, as mentioned above, heavily slowed down my first query, and the next time the exact same query was executed, another select was sent to the database. Maybe I should "externalize" this topic into another post. I will do that if someone tells me it is absolutely unusual that a query is slowed down so much and he needs further details to help me.
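
    Whichever provider wins out, the wiring is roughly the same. A hedged sketch of enabling the second-level cache programmatically is below; SysCacheProvider appears only to show the shape of the call, not as a recommendation for this WinForms scenario.

    using NHibernate.Cfg;

    public static class CacheSetup
    {
        public static Configuration EnableSecondLevelCache(Configuration cfg)
        {
            cfg.SetProperty(NHibernate.Cfg.Environment.UseSecondLevelCache, "true");
            cfg.SetProperty(NHibernate.Cfg.Environment.UseQueryCache, "true");
            cfg.SetProperty(NHibernate.Cfg.Environment.CacheProvider,
                typeof(NHibernate.Caches.SysCache.SysCacheProvider).AssemblyQualifiedName);

            // Each cached class or collection still needs a <cache usage="read-write"/>
            // (or read-only) element in its mapping, and queries only hit the query
            // cache when marked with SetCacheable(true).
            return cfg;
        }
    }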

    Read the article

  • How can you add a UIGestureRecognizer to a UIBarButtonItem as in the common undo/redo UIPopoverController?

    - by SG
    Problem In my iPad app, I cannot attach a popover to a button bar item only after press-and-hold events. But this seems to be standard for undo/redo. How do other apps do this? Background I have an undo button (UIBarButtonSystemItemUndo) in the toolbar of my UIKit (iPad) app. When I press the undo button, it fires it's action which is undo:, and that executes correctly. However, the "standard UE convention" for undo/redo on iPad is that pressing undo executes an undo but pressing and holding the button reveals a popover controller where the user selected either "undo" or "redo" until the controller is dismissed. The normal way to attach a popover controller is with presentPopoverFromBarButtonItem:, and I can configure this easily enough. To get this to show only after press-and-hold we have to set a view to respond to "long press" gesture events as in this snippet: UILongPressGestureRecognizer *longPressOnUndoGesture = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleLongPressOnUndoGesture:)]; //Broken because there is no customView in a UIBarButtonSystemItemUndo item [self.undoButtonItem.customView addGestureRecognizer:longPressOnUndoGesture]; [longPressOnUndoGesture release]; With this, after a press-and-hold on the view the method handleLongPressOnUndoGesture: will get called, and within this method I will configure and display the popover for undo/redo. So far, so good. The problem with this is that there is no view to attach to. self.undoButtonItem is a UIButtonBarItem, not a view. Possible solutions 1) [The ideal] Attach the gesture recognizer to the button bar item. It is possible to attach a gesture recognizer to a view, but UIButtonBarItem is not a view. It does have a property for .customView, but that property is nil when the buttonbaritem is a standard system type (in this case it is). 2) Use another view. I could use the UIToolbar but that would require some weird hit-testing and be an all around hack, if even possible in the first place. There is no other alternative view to use that I can think of. 3) Use the customView property. Standard types like UIBarButtonSystemItemUndo have no customView (it is nil). Setting the customView will erase the standard contents which it needs to have. This would amount to re-implementing all the look and function of UIBarButtonSystemItemUndo, again if even possible to do. Question How can I attach a gesture recognizer to this "button"? More specifically, how can I implement the standard press-and-hold-to-show-redo-popover in an iPad app? Ideas? Thank you very much, especially if someone actually has this working in their app (I'm thinking of you, omni) and wants to share...

    Read the article

  • Django: What's an awesome plugin to maintain images in the admin?

    - by meder
    I have an articles entry model and I have an excerpt and description field. If a user wants to post an image then I have a separate ImageField which has the default standard file browser. I've tried using django-filebrowser but I don't like the fact that it requires django-grappelli nor do I necessarily want a flash upload utility - can anyone recommend a tool where I can manage image uploads, and basically replace the file browse provided by django with an imagepicking browser? In the future I'd probably want it to handle image resizing and specify default image sizes for certain article types. Edit: I'm trying out adminfiles now but I'm having issues installing it. I grabbed it and added it to my python path, added it to INSTALLED_APPS, created the databases for it, uploaded an image. I followed the instructions to modify my Model to specify adminfiles_fields and registered but it's not applying in my admin, here's my admin.py for articles: from django.contrib import admin from django import forms from articles.models import Category, Entry from tinymce.widgets import TinyMCE from adminfiles.admin import FilePickerAdmin class EntryForm( forms.ModelForm ): class Media: js = ['/media/tinymce/tiny_mce.js', '/media/tinymce/load.js']#, '/media/admin/filebrowser/js/TinyMCEAdmin.js'] class Meta: model = Entry class CategoryAdmin(admin.ModelAdmin): prepopulated_fields = { 'slug': ['title'] } class EntryAdmin( FilePickerAdmin ): adminfiles_fields = ('excerpt',) prepopulated_fields = { 'slug': ['title'] } form = EntryForm admin.site.register( Category, CategoryAdmin ) admin.site.register( Entry, EntryAdmin ) Here's my Entry model: class Entry( models.Model ): LIVE_STATUS = 1 DRAFT_STATUS = 2 HIDDEN_STATUS = 3 STATUS_CHOICES = ( ( LIVE_STATUS, 'Live' ), ( DRAFT_STATUS, 'Draft' ), ( HIDDEN_STATUS, 'Hidden' ), ) status = models.IntegerField( choices=STATUS_CHOICES, default=LIVE_STATUS ) tags = TagField() categories = models.ManyToManyField( Category ) title = models.CharField( max_length=250 ) excerpt = models.TextField( blank=True ) excerpt_html = models.TextField(editable=False, blank=True) body_html = models.TextField( editable=False, blank=True ) article_image = models.ImageField(blank=True, upload_to='upload') body = models.TextField() enable_comments = models.BooleanField(default=True) pub_date = models.DateTimeField(default=datetime.datetime.now) slug = models.SlugField(unique_for_date='pub_date') author = models.ForeignKey(User) featured = models.BooleanField(default=False) def save( self, force_insert=False, force_update= False): self.body_html = markdown(self.body) if self.excerpt: self.excerpt_html = markdown( self.excerpt ) super( Entry, self ).save( force_insert, force_update ) class Meta: ordering = ['-pub_date'] verbose_name_plural = "Entries" def __unicode__(self): return self.title Edit #2: To clarify I did move the media files to my media path and they are indeed rendering the image area, I can upload fine, the <<<image>>> tag is inserted into my editable MarkItUp w/ Markdown area but it isn't rendering in the MarkItUp preview - perhaps I just need to apply the |upload_tags into that preview. I'll try adding it to my template which posts the article as well.

    Read the article

  • How to use JAXWS/JAXB to rename the parameter

    - by shrimpy
    I use CXF(2.2.3) to compile the Amazon Web Service WSDL (http://s3.amazonaws.com/ec2-downloads/2009-07-15.ec2.wsdl) But got error as below. Parameter: snapshotSet already exists for method describeSnapshots but of type com.amazonaws.ec2.doc._2009_07_15.DescribeSnapshotsSetType instead of com.amazonaws.ec2.doc._2009_07_15.DescribeSnapshotsSetResponseType. Use a JAXWS/JAXB binding customization to rename the parameter. The conflict was due to the data type show below: <xs:complexType name="DescribeSnapshotsType"> <xs:sequence> <xs:element name="snapshotSet" type="tns:DescribeSnapshotsSetType"/> </xs:sequence> </xs:complexType> <xs:complexType name="DescribeSnapshotsResponseType"> <xs:sequence> <xs:element name="requestId" type="xs:string"/> <xs:element name="snapshotSet" type="tns:DescribeSnapshotsSetResponseType"/> </xs:sequence> </xs:complexType> I create a binding file try to address the issue...but it didn`t do the job <jaxws:bindings xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" wsdlLocation="EC2_2009-07-15.wsdl" xmlns="http://java.sun.com/xml/ns/jaxws" xmlns:jxb="http://java.sun.com/xml/ns/jaxb" xmlns:jaxws="http://java.sun.com/xml/ns/jaxws"> <enableWrapperStyle>false</enableWrapperStyle> <jaxws:bindings node="wsdl:definitions/wsdl:types/xs:schema[@targetNamespace='http://ec2.amazonaws.com/doc/2009-07-15/']"> <jxb:bindings node="xs:complexType[@name='tns:DescribeSnapshotsType']//xs:element[@name='snapshotSet']"> <jxb:property name="snapshotRequestSet"/> </jxb:bindings> <jxb:bindings node="xs:complexType[@name='DescribeSnapshotsResponseType']//xs:element[@name='snapshotSet']"> <jxb:property name="snapshotResponseSet"/> </jxb:bindings> </jaxws:bindings> </jaxws:bindings> And the command i used, was like below <wsdlOptions> <wsdlOption> <wsdl>${basedir}/src/main/resources/wsdl/EC2_2009-07-15.wsdl</wsdl> <extraargs> <extraarg>-b</extraarg> <extraarg>${basedir}/src/main/resources/wsdl/Bindings_EC2_2009-07-15.xml</extraarg> </extraargs> </wsdlOption> </wsdlOptions> What is wrong with my code???? And you can check out my project by using svn.... svn co http://shrimpysprojects.googlecode.com/svn/trunk/smartcrc/AWSAgent/

    Read the article

  • Delphi TBytesField - How to see the text properly - Source is HIT OLEDB AS400

    - by myitanalyst
    We are connecting to a multi-member AS400 iSeries table via HIT OLEDB and HIT ODBC. You connect to this table via an alias to access a specific multi-member. We create the alias on the AS400 this way: CREATE ALIAS aliasname FOR table(membername) We can then query each member of the table this way: SELECT * FROM aliasname We are testing this in Delphi6 first, but will move it to D2010 later We are using HIT OLEDB for the AS400. We are pulling down records from a table and the field is being seen as a tBytesField. I have also tried ODBC driver and it sees as tBytesField as well. Directly on the AS400 I can query the data and see readable text. I can use the iSeries Navigation tool and see readable text as well. However when I bring it down to the Delphi client via the HIT OLEDB or HIT ODBC and try to view via asString then I just see unreadable text.. something like this: ñðð@ðõñððððñ÷@õôððõñòøóóöøñðÂÁÕÒ@ÖÆ@ÁÔÅÙÉÃÁ@@@@@@@@ÂÈÙÉâãæÁðòñè@ÔK@k@ÉÕÃK@@@@@@@@@ç I jumbled up the text above, but that is the character types that show up. When I did a test in D2010 the text looks like japanse or chinese characters, but if I display as AnsiString then it looks like what it does in Delphi 6. I am thinking this may have something to do with code pages or character sets, but I have no experience in this are so it is new to me if it is related. When I look at the Coded Character Set on the AS400 it is set to 65535. What do I need to do to make this text readable? We do have a third party component (Delphi400) that makes things behave in a more native AS400 manner. When I use its AS400 connection and AS400 query components it shows the field as a tStringField and displays just fine. BUT we are phasing out this product (for a number of reasons) and would really like the OLEDB with the ADO components work. Just for clarification the HIT OLEDB with tADOQuery do have some fields showing as tStringFields for many of the other tables we use... not sure why it is showing as a tBytesField in this case. I am not an AS400 expert, but looking at the field definititions on the AS400 the ones showing up as tBytesField look the same as the ones showing up as tStringFields... but there must be a difference. Maybe due to being a multi-member? So... does anyone have any guidance on how to get the correct string data that is readable? If you need more info please ask. Greg

    Read the article

  • VS2010 Assembly Load Error

    - by Nate
    I am getting the following error when I try to build an ASP.NET 4 project in Visual Studio 2010: "Could not load file or assembly 'file:///C:\Dev\project\trunk\bin\Elmah.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)". I have verified that the dll does, in fact, exist, and is getting copied to the bin folder correctly. I have also tried removing and then re-adding the reference to the project. The build only fails when I switch the Solution Configuration to "Release". It does not fail when the Solution Configuration is set to "Debug". The only difference between the two configurations (that I know of) is shown in the following Web.config transform, Web.Release.config: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="SqlServer" connectionString="" providerName="System.Data.SqlClient" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors mode="On" xdt:Transform="Replace"> <error statusCode="404" redirect="lost.htm" /> <error statusCode="500" redirect="uhoh.htm" /> </customErrors> </system.web> </configuration> I have tried using Fusion Log Viewer to track down the assembly binding issue, but it looks like it is finding and loading the assembly correctly. Here is the log: *** Assembly Binder Log Entry (6/8/2010 @ 10:01:54 AM) *** The operation was successful. Bind result: hr = 0x0. The operation completed successfully. Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\v4.0.30319\clr.dll Running under executable c:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\NETFX 4.0 Tools\sgen.exe --- A detailed error log follows. === Pre-bind state information === LOG: User = User LOG: Where-ref bind. Location = C:\Dev\project\trunk\bin\Elmah.dll LOG: Appbase = file:///c:/Program Files (x86)/Microsoft SDKs/Windows/v7.0A/bin/NETFX 4.0 Tools/ LOG: Initial PrivatePath = NULL LOG: Dynamic Base = NULL LOG: Cache Base = NULL LOG: AppName = sgen.exe Calling assembly : (Unknown). === LOG: This bind starts in LoadFrom load context. WRN: Native image will not be probed in LoadFrom context. Native image will only be probed in default load context, like with Assembly.Load(). LOG: No application configuration file found. LOG: Using host configuration file: LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\v4.0.30319\config\machine.config. LOG: Attempting download of new URL file:///C:/Dev/project/trunk/bin/Elmah.dll. LOG: Assembly download was successful. Attempting setup of file: C:\Dev\project\trunk\bin\Elmah.dll LOG: Entering run-from-source setup phase. LOG: Assembly Name is: Elmah, Version=1.1.11517.0, Culture=neutral, PublicKeyToken=null LOG: Re-apply policy for where-ref bind. LOG: Where-ref bind Codebase does not match what is found in default context. Keep the result in LoadFrom context. LOG: Binding succeeds. Returns assembly from C:\Dev\project\trunk\bin\Elmah.dll. LOG: Assembly is loaded in LoadFrom load context. I feel like there is a fundamental lack of understanding on my part as to what exactly is going on here. Any explanation/help is much appreciated!
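    HRESULT 0x80131515 is the .NET 4 NotSupportedException raised when the runtime refuses to load an assembly that still carries the "downloaded from the Internet" zone marker, and the Fusion log shows the load is being performed by sgen.exe, i.e. serialization assembly generation, which normally only runs for the Release configuration. Assuming that is what is happening here, either unblock Elmah.dll (file Properties -> Unblock, then rebuild) or switch serialization assembly generation off for Release in the project file:

    <PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
      <GenerateSerializationAssemblies>Off</GenerateSerializationAssemblies>
    </PropertyGroup>

    A third option is to let sgen load remote-zone assemblies by placing an sgen.exe.config next to sgen.exe in the NETFX 4.0 Tools folder:

    <?xml version="1.0"?>
    <configuration>
      <runtime>
        <loadFromRemoteSources enabled="true"/>
      </runtime>
    </configuration>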

    Read the article

  • .NET Interface with AutoCAD -- SetXData errors.

    - by Jerry
    I am trying to use the SetXData method on the AutoCAD 2007 COM object, but it is throwing errors. Example Test: public AcadEntity getAcadEntity() { /// ... Basic code to return a single AutoCAD entity... } private void btnTagItem_Click(object sender, EventArgs e) { AcadEntity ent = getAcadEntity(); short[] xDataType; string[] xDataStrings; DrawingXData xData = new DrawingXData(); xData.field1 = "Some Text Goes here"; xData.field2 = 1; xData.field3 = 100; xData.field4 = 1509.2; xData.field5 = "More Text"; BuildXData("AutoCad_App_Name", xData, out xDataType, out xDataStrings); ent.SetXData(xDataType, xDataStrings); // This line crashes. } private void BuildXData(string applicationName, DrawingXData xData, out short[] xDataType, out string[] xDataStrings) { List<short> dataTypes = new List<short>(); List<string> dataStrings = new List<string>(); /// Code types... /// 1000 == String up to 255 bytes /// 1001 == Application Name // Set Application Name dataTypes.Add(1001); dataStrings.Add(applicationName); // Set Application Data dataTypes.Add(1000); dataStrings.Add(xData.field1.ToString()); dataTypes.Add(1000); dataStrings.Add(xData.field2.ToString()); dataTypes.Add(1000); dataStrings.Add(xData.field3.ToString()); dataTypes.Add(1000); dataStrings.Add(xData.field4.ToString()); dataTypes.Add(1000); dataStrings.Add(xData.field5.ToString()); // ... etc. xDataType = dataTypes.ToArray(); xDataStrings = dataStrings.ToArray(); } The error I get is "Invalid argument data in SetXData method". The error code (if this helps anyone) is -2145320939. The main reason I'm posting is that the same code in a very old VB6 application works just fine. I'm stumped.
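    One thing worth checking (an assumption based on general AutoCAD COM interop behaviour, not verified against this drawing): SetXData takes two Variant arrays, and from .NET the value array generally needs to be an array of variants, i.e. object[], rather than string[]. VB6 marshals its arrays as Variant arrays automatically, which would explain why the old application works unchanged. A minimal sketch with the same 1001/1000 codes:

    // hypothetical fragment; assumes 'ent' is a valid AcadEntity and that the
    // application name is registered in the drawing's RegApp table
    short[] xDataType = new short[] { 1001, 1000 };
    object[] xDataValue = new object[] { "AutoCad_App_Name", "Some Text Goes here" };
    ent.SetXData(xDataType, xDataValue);

    If that succeeds, BuildXData only needs to collect its values into a List<object> instead of a List<string>.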

    Read the article

  • Facebook require_login() in iFrame App

    - by LapKom
    Hi, I have a serious problem with an iframe application. I need to use many external JS libraries and other dynamic stuff, so an FBML application can't be done. When I call require_login() I get the application install dialog when the app is not already installed, which is OK. But then, after authorization, the application enters an endless redirect loop with parameters like auth_token, installed and so on. Yesterday I managed to fix this, but today it's broken again... What the heck is happening with FB? It's driving me crazy trying to find a solution; none of the ones found on the net seems to work. So far I have tried: http://abhirama.wordpress.com/2010/03/07/facebook-iframe-xfbml-app/ (7th March 2010!) http://forum.developers.facebook.com/viewtopic.php?pid=156092 http://www.keywordintellect.com/facebook-development/how-to-set-up-a-facebook-iframe-application-in-php-in-5-minutes/ http://www.markdeepwell.com/2010/02/validating-a-facebook-session-within-an-iframe/ http://forum.developers.facebook.com/viewtopic.php?pid=210449 http://www.ajaxlines.com/ajax/stuff/article/facebook_fbml_rendering_in_iframe_application.php http://www.aratide.com/php/solving-the-break-out-issue-in-iframe-facebook-applications/ None of the above worked... According to those and some FB docs: http://wiki.developers.facebook.com/index.php/FB_RequireFeatures http://wiki.developers.facebook.com/index.php/Cross_Domain_Communication_Channel My example test files look as follows: <?php //Link in library. require_once '../application/vendor/Facebook/facebook.php'; //Authentication Keys $appapikey = 'XXXX'; $appsecret = 'XXXX'; //Construct the class $facebook = new Facebook($appapikey, $appsecret); //Require login $user_id = $facebook->require_login(); ?> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml"> <head> <title></title> </head> <body> <script src="http://static.ak.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script> This is you: <fb:name uid="<?php echo $user_id?>"></fb:name> <?php var_dump($facebook->api_client->friends_get())?> <script type="text/javascript"> FB_RequireFeatures(["XFBML"], function(){ FB.Facebook.init("<?=$appapikey?>", "xd_receiver.html"); }); </script> </body> </html> And the cross-domain file xd_receiver.html is: <!doctype html public "-//w3c//dtd xhtml 1.0 strict//en" "http://www.w3.org/tr/xhtml1/dtd/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head> <title>cross-domain receiver page</title> </head> <body> <script src="http://static.ak.facebook.com/js/api_lib/v0.4/XdCommReceiver.js" type="text/javascript"></script> </body> </html> How do I get it working? I'm using the Kohana framework and have already replaced header('Location') with url::redirect() in the Facebook PHP library.

    Read the article

  • sqlite3-ruby can't make on rvm 1.8.7

    - by Josh Crews
    Upgrading to Rails 3 by starting with RVM 1.8.7. OS X 10.5.8. Output: josh-crewss-macbook:~ joshcrews$ gem install sqlite3-ruby Building native extensions. This could take a while... ERROR: Error installing sqlite3-ruby: ERROR: Failed to build gem native extension. /Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/bin/ruby extconf.rb checking for sqlite3.h... yes checking for sqlite3_libversion_number() in -lsqlite3... yes checking for rb_proc_arity()... no checking for sqlite3_column_database_name()... no checking for sqlite3_enable_load_extension()... no checking for sqlite3_load_extension()... no creating Makefile make gcc -I. -I. -I/Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/lib/ruby/1.8/i686-darwin9.8.0 -I. -I/usr/local/include -I/opt/local/include -I/usr/include -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -fno-common -g -O2 -fno-common -pipe -fno-common -O3 -Wall -Wcast-qual -Wwrite-strings -Wconversion -Wmissing-noreturn -Winline -c database.c database.c: In function ‘deallocate’: database.c:17: warning: implicit declaration of function ‘sqlite3_next_stmt’ database.c:17: warning: assignment makes pointer from integer without a cast database.c: In function ‘initialize’: database.c:76: warning: implicit declaration of function ‘sqlite3_open_v2’ database.c:79: error: ‘SQLITE_OPEN_READWRITE’ undeclared (first use in this function) database.c:79: error: (Each undeclared identifier is reported only once database.c:79: error: for each function it appears in.) database.c:79: error: ‘SQLITE_OPEN_CREATE’ undeclared (first use in this function) database.c: In function ‘set_sqlite3_func_result’: database.c:277: error: ‘sqlite3_int64’ undeclared (first use in this function) database.c: In function ‘rb_sqlite3_func’: database.c:311: warning: passing argument 1 of ‘ruby_xcalloc’ as signed due to prototype database.c: In function ‘rb_sqlite3_step’: database.c:378: warning: passing argument 1 of ‘ruby_xcalloc’ as signed due to prototype make: *** [database.o] Error 1 Gem list (these are under RVM; under system I've got a lot more gems, including the sqlite3-ruby that's worked for 1.5 years): josh-crewss-macbook:~ joshcrews$ gem list *** LOCAL GEMS *** abstract (1.0.0) actionmailer (3.0.0.beta3) actionpack (3.0.0.beta3) activemodel (3.0.0.beta3) activerecord (3.0.0.beta3) activeresource (3.0.0.beta3) activesupport (3.0.0.beta3, 2.3.8) arel (0.3.3) builder (2.1.2) bundler (0.9.25) capybara (0.3.8) configuration (1.1.0) cucumber (0.7.2) cucumber-rails (0.3.1) culerity (0.2.10) database_cleaner (0.5.2) diff-lcs (1.1.2) erubis (2.6.5) ffi (0.6.3) gherkin (1.0.30) i18n (0.4.0, 0.3.7) json_pure (1.4.3) launchy (0.3.5) mail (2.2.1) memcache-client (1.8.3) mime-types (1.16) nokogiri (1.4.2) polyglot (0.3.1) rack (1.1.0) rack-mount (0.6.3) rack-test (0.5.4) rails (3.0.0.beta3) railties (3.0.0.beta3) rake (0.8.7) rdoc (2.5.8) rspec (2.0.0.beta.10, 2.0.0.beta.8) rspec-core (2.0.0.beta.10, 2.0.0.beta.8) rspec-expectations (2.0.0.beta.10, 2.0.0.beta.8) rspec-mocks (2.0.0.beta.10, 2.0.0.beta.8) rspec-rails (2.0.0.beta.10, 2.0.0.beta.8) rubygems-update (1.3.7) selenium-webdriver (0.0.20) spork (0.8.3) term-ansicolor (1.0.5) text-format (1.0.0) text-hyphen (1.0.0) thor (0.13.6) treetop (1.4.8) trollop (1.16.2) tzinfo (0.3.22) webrat (0.7.1) Version of Xcode: 3.1.1 My suspicion is it has to do with "-I/Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/lib/ruby/1.8/i686-darwin9.8.0", because i686-darwin9.8.0 doesn't exist at that path
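    The failing checks in that extconf output (sqlite3_open_v2 implicitly declared, SQLITE_OPEN_READWRITE and sqlite3_int64 undeclared) suggest the build is picking up headers from an SQLite older than 3.5, which is roughly what OS X 10.5 ships; the gem's C extension needs a newer library. A hedged sketch of one way out (the paths are assumptions, adjust to wherever a current SQLite lands on your machine):

    # install a recent SQLite, e.g. via MacPorts
    sudo port install sqlite3

    # then point the gem build at those headers and libraries explicitly
    gem install sqlite3-ruby -- --with-sqlite3-include=/opt/local/include --with-sqlite3-lib=/opt/local/lib

    If SQLite was instead built from source into /usr/local, the single flag --with-sqlite3-dir=/usr/local should do the same job.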

    Read the article

  • Entity Framework won't SaveChanges on new entity with two-level relationship

    - by Tim Rourke
    I'm building an ASP.NET MVC site using the ADO.NET Entity Framework. I have an entity model that includes these entities, associated by foreign keys: Report(ID, Date, Heading, Report_Type_ID, etc.) SubReport(ID, ReportText, etc.) - one-to-one relationship with Report. ReportSource(ID, Name, Description) - one-to-many relationship with SubReport. ReportSourceType(ID, Name, Description) - one-to-many relationship with ReportSource. Contact (ID, Name, Address, etc.) - one-to-one relationship with ReportSource. There is a Create.aspx page for each type of SubReport. The post event method returns a new SubReport entity. Before, in my post method, I followed this process: Set the properties for a new Report entity from the page's fields. Set the SubReport entity's specific properties from the page's fields. Set the SubReport entity's Report to the new Report entity created in 1. Given an ID provided by the page, look up the ReportSource and set the SubReport entity's ReportSource to the found entity. SaveChanges. This workflow succeeded just fine for a couple of weeks. Then last week something changed and it doesn't work any more. Now instead of the save operation, I get this Exception: UpdateException: "Entities in 'DIR2_5Entities.ReportSourceSet' participate in the 'FK_ReportSources_ReportSourceTypes' relationship. 0 related 'ReportSourceTypes' were found. 1 'Report_Source_Types' is expected." The debug visualizer shows the following: The SubReport's ReportSource is set and loaded, and all of its properties are correct. The ReportSource has a valid ReportSourceType entity attached. In SQL Profiler the prepared SQL statement looks OK. Can anybody point me to what obvious thing I'm missing? TIA. Notes: The Report and SubReport are always new entities in this case. The Report entity contains properties common to many types of reports and is used for generic queries. SubReports are specific reports with extra parameters varying by type. There is actually a different entity set for each type of SubReport, but this question applies to all of them, so I use SubReport as a simplified example.
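    A pattern that commonly produces exactly this UpdateException (offered as a guess, since the controller code isn't shown): the ReportSource being assigned was loaded through a different ObjectContext, or constructed by hand, so the context doing SaveChanges treats it as a brand-new entity and then insists on its required ReportSourceType, which it cannot see. A sketch of the post method using one context for both the lookup and the save (entity set and property names here are illustrative, not taken from the real model):

    // assumes a single generated context, DIR2_5Entities, per request
    using (var context = new DIR2_5Entities())
    {
        var report = new Report { Date = DateTime.Now, Heading = heading };
        var subReport = new SubReport { ReportText = reportText, Report = report };

        // load the existing source from the SAME context so it is tracked,
        // together with its required ReportSourceType
        var source = context.ReportSourceSet
                            .Include("ReportSourceType")
                            .First(s => s.ID == reportSourceId);

        subReport.ReportSource = source;
        context.AddToSubReportSet(subReport);  // the new Report comes along with the graph
        context.SaveChanges();
    }

    If the site keeps a longer-lived shared context instead, attaching the looked-up ReportSource (or setting the reference by EntityKey) before assigning it to the new SubReport should have the same effect.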

    Read the article

  • Can/should one disable namespace validation in Axis2 clients using ADB databinding?

    - by RJCantrell
    I have a document/literal Axis 1 service which I'm consuming with an Axis 2 client (ADB databinding, generated from WSDL2Java). It receives a valid XML response, but in parsing that XML, I get the error "Unexpected Subelement" for any type which doesn't have a namespace defined in the response. I can resolve the error by manually changing the (auto-generated by Axis2) type-validation line to only check the type name and not the namespace as well, but is there a non-manual way to skip this? This service is consumed properly by other Axis 1 clients. Here's the response, with bits in ALL CAPS having been anonymized. Some types have namespaces, some have empty-string namespaces, and some have none at all. Can I control this via the service's WSDL, or in Axis2's WSDL2Java handling of the WSDL? Is there a fundamental mismatch between Axis 1 and Axis2? The response below looks correct, except perhaps for where that type (anonymized as TWO below) is nested within itself. <?xml version="1.0" encoding="UTF-8"?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <soapenv:Body> <SERVICENAMEResponse xmlns="http://service.PROJECT.COMPANY.com"> <SERVICENAMEReturn xmlns=""> <ONE></ONE> <TWO><TWO> <THREE>2</THREE> <FOUR>-10</FOUR> <FIVE>6</FIVE> <SIX>1</SIX> </TWO></TWO> <fileName></fileName> </SERVICENAMEReturn></SERVICENAMEResponse> </soapenv:Body></soapenv:Envelope> I did not have to modify the validation of the "SERVICENAMEResponse" object because it has the correct namespace specified, but I have to do so for all its subelements. The manual modification that I can make (one per type) to fix this is to change: if (reader.isStartElement() && new javax.xml.namespace.QName("http://dto.PROJECT.COMPANY.com","ONE").equals(reader.getName())){ to: if (reader.isStartElement() && new javax.xml.namespace.QName("ONE").equals(reader.getName())){
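    A hedged suggestion rather than a confirmed fix: "Unexpected subelement" from ADB usually means the service sends unqualified child elements while the generated code expects them in the schema's target namespace (or vice versa), typically because the Axis 1 service and the WSDL disagree about elementFormDefault. If the service side can be touched, aligning elementFormDefault in the schema with what is actually on the wire and regenerating the client removes the need for hand edits. If it can't, regenerating the stubs with a different databinding is a non-manual workaround, for example:

    wsdl2java -uri TheService.wsdl -d xmlbeans -o generated-client

    where -d selects the databinding (adb, xmlbeans, jibx) and the WSDL file name here is a placeholder.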

    Read the article

< Previous Page | 530 531 532 533 534 535 536 537 538 539 540 541  | Next Page >