Search Results

Search found 2661 results on 107 pages for 'implicit conversion'.

Page 87/107

  • Need WIF Training?

    - by Your DisplayName here!
    I spend numerous hours every month answering questions about WIF and identity in general. This made me realize that this is still quite a complicated topic once you go beyond the standard FedUtil stuff. My good friend Brock and I put together a two-day training course about WIF that covers everything we think is important. The course includes extensive lab material where you take a standard application and apply all kinds of claims and federation techniques and technologies like WS-Federation, WS-Trust, session management, delegation, home realm discovery, multiple identity providers, Access Control Service, REST, SWT and OAuth. The lab also includes the latest version of the thinktecture identityserver, and you will learn how to use and customize it. If you are looking for an open enrollment style of training, have a look here. Or contact me directly!

    The course outline looks as follows:

    Day 1

    Intro to Claims-based Identity & the Windows Identity Foundation
    WIF introduces important concepts like conversion of security tokens and credentials to claims, claims transformation and claims-based authorization. In this module you will learn the basics of the WIF programming model and how WIF integrates into existing .NET code.

    Externalizing Authentication for Web Applications
    WIF includes support for the WS-Federation protocol. This protocol allows separating business and authentication logic into separate (distributed) applications. The authentication part is called an identity provider or, in more general terms, a security token service. This module looks at this scenario both from an application and an identity provider point of view and walks you through the concepts needed to centralize application login logic, both using a standard product like Active Directory Federation Services and a custom token service built on WIF’s API support.

    Externalizing Authentication for SOAP Services
    One big benefit of WIF is that it unifies the security programming model for ASP.NET and WCF. In the spirit of the preceding modules, we will have a look at how WIF integrates into the (SOAP) web service world. You will learn how to separate authentication into a dedicated service using the WS-Trust protocol, and how WIF can simplify the WCF security model and extensibility API.

    Day 2

    Advanced Topics: Security Token Service Architecture, Delegation and Federation
    The preceding modules covered the 80/20 cases of WIF in combination with ASP.NET and WCF. In many scenarios this is just the tip of the iceberg. Especially when two business partners decide to federate, you usually have to deal with multiple token services and their implications in application design. Identity delegation is a feature that allows transporting the client identity over a chain of service invocations to make authorization decisions over multiple hops. In addition you will learn about the principal architecture of an STS, how to customize the one that comes with this training course, and how to build your own.

    Outsourcing Authentication: Windows Azure & the Azure AppFabric Access Control Service
    Microsoft provides a multi-tenant security token service as part of the Azure platform cloud offering. This is an interesting product because it allows you to outsource vital infrastructure services to a managed environment that guarantees uptime and scalability. Another advantage of the Access Control Service is that it allows easy integration of both the “enterprise” protocols like WS-* and “web identities” like LiveID, Google or Facebook into your applications. ACS acts as a protocol bridge in this case, where the application developer doesn’t need to implement all these protocols, but simply uses a service to make it happen.

    Claims & Federation for the Web and Mobile World
    The web and mobile world is also moving to a token- and claims-based model. While the mechanics are almost identical, other protocols and token types are used to achieve better HTTP (REST) and JavaScript integration for in-browser applications and small-footprint devices. Patterns like allowing third-party applications to work with your data without having to disclose your credentials are also important concepts in these application types. The nice thing about WIF and its powerful base APIs and abstractions is that it can shield application logic from these details while you focus on implementing the actual application.

    HTH

  • Why don't we just fix Javascript?

    - by Jan Meyer
    Javascript sucks because of a few fatal flaws well pointed out by Douglas Crockford. We talk a lot about it. But the point here is: why don't we fix it? Coffeescript of course does that and a lot more. But the question here is another one: if we provide a webservice that can convert one version of Javascript to the next, and so on, we can keep the language up to date.

    Such a conversion allows old code to run, albeit with an ever-increasing startup delay, as newer browsers convert old code to the new syntax. To avoid that delay, the site only needs to take the output of the code transform and paste it in! The effort has immediate benefits for those businesses interested in the results. The rest can sleep tight: their code will continue to run. If we provide backward code transformation as well, then older browsers can also run ANY new code!

    Migration scripts should be created by those who make changes to a language. Today they don't, which is in itself a fundamental omission! It should be an obvious part of their job to provide them, as their job isn't really done without them. The onus of making it work should be on them.

    With this system any site will be able to run in any browser, but new code will run best on the newest browsers. This way we reap the benefit of an up-to-date and productive development environment, where today we suffer, supposedly because of yesterday. This is a misconception. We are all trapped in committee thinking, and we drag along things that only worsen our performance over time! We cause an ever-increasing complexity that is hard to overstate. Javascript is easily fixed. The fact is we don't fix it.

    As an example, I have seen Patrick Michaud tackle the migration problem in PmWiki. It included forward migration scripts. Whenever syntax changes were made, a migration script was added to transform pages to the new syntax. As far as I know, ALL migrations have worked flawlessly.

    In other words, we don't tackle the migration problem, we just drag it along. We are incompetent! And why is that? Because technically incompetent people feel they must decide for us. Because they are incompetent, fear rules them. They are obnoxiously conservative, and we suffer the consequence of bad leadership. But the competent don't need to play by the same rules. They can (and must) change them. They are the path forward. It is about time to leave the past behind and pursue the leanest, meanest, no, eternal functionality. That would in and of itself revolutionize programming.

    So, why don't we stop whining and fix programming? Begin with Javascript and change the world. Even if the browser doesn't hook into this system, coders could. So language updaters should take it upon themselves to provide migration scripts. Once they exist, browsers may take advantage of them.

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev.”

    My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, and the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date, so I had the time, and having recently been binge-watching CSI: New York, I also had the inclination.

    An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid: that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion.

    Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and the wait statistics DMVs, I had a lead in the shape of a query that joined three tables containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test, I saw the culprit: an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers, who quickly resolved the data type mismatch and found and fixed “several” others as well. After the schema changes, the same query with the same databases ran in under 1 second on all systems, and the logical reads dropped to fewer than 300.

    The analysis also revealed that on Dev the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance.

    As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one. It can seem a daunting prospect to return to the fundamentals of the code so close to roll-out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.
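
    A minimal sketch of the kind of mismatch described above (table and column names here are hypothetical, not from the original system): when a join compares a numeric column to a varchar(50) column, SQL Server inserts a CONVERT_IMPLICIT on one side of the predicate, which defeats index seeks and inflates logical reads.

        -- Hypothetical repro: f.BatchKey is NUMERIC(18,0), d.BatchKey is VARCHAR(50)
        SELECT f.LoadID, d.BatchName
        FROM dbo.FactLoad AS f
        JOIN dbo.DimBatch AS d
          ON f.BatchKey = d.BatchKey;   -- plan shows an implicit conversion warning here

        -- The fix, in spirit: align the types so the join compares like with like
        ALTER TABLE dbo.DimBatch ALTER COLUMN BatchKey NUMERIC(18,0) NOT NULL;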

  • Where Have All the Ugly Forms Gone? Users and ADF Took Care Of It

    - by ultan o'broin
    Sometimes I hear that our application demos are a bit too "cutesy" and that we never talk about user roles that have lots of data entry as a requirement. Some (no names) consider those old clunker forms, with their myriad rows of fields, to be super-productive for data clerks. We do have such roles covered in Oracle Fusion Applications, for sure. But consider what is really the issue here: productivity. Check out how the Oracle Fusion Financials Applications User Experience team went about designing for productivity when receiving and entering invoice data, for example. See how Fusion Financials caters so well for input and control of data? Central to all this is knowing the users and how they work: what tasks do they need to perform, and when. Read more about Fusion Financials productivity in the white paper, Get It Done Fast, Get It Done Right: The Oracle Fusion Financials User Experience.

    Now and then, I see forms that weren't designed for end user activity at all. Instead, they were designed by developers or by the IT department around the database schema. Forms with literally dozens of fields on the same page, sometimes. Forms that give the impression there was only one task involved, when there may have been several. At times, completing one of these huge forms accurately became so tedious that, under pressure, it made more sense for the user to complete it as quickly as possible and then let somebody else check it for accuracy and fill in the gaps from data emailed along in spreadsheet form. Data accuracy is critical in our business. Not good. Not efficient. Not productive.

    So here are a few basics on forms design for data entry-type user roles. A great place for developers to start exploring what is possible with forms layout is the Rich Client User Experience (RCUX) guidance on Form Layout, using ADF components.

    User-Centered Forms Design Considerations

    The starting point--something you must always keep in mind with your own design--is design for the end user. Find a representative end user, and keep that user engaged throughout the design, deployment, and test process. Consider these points in user testing those forms:

    - Are there automated or technical solutions to entering the data that avoid manual input in the first place? For example, imports, uploads, OCR, whatever. Some day we will be able to tell Siri to do it, but leave that for now.
    - Design your form to reflect the task involved (i.e., the business process) and not the database schema.
    - On the form, group like fields together, logically.
    - Eliminate duplicate data entry, or prepopulate from previous data entry.
    - Allow users to complete fields in the order they wish (i.e., no interdependency).
    - Allow for tabbing between fields (keyboard is faster than mouse), so know how the browser supports this (see that RCUX guideline).
    - Allow for final validation at the page level, not at field-level entry. Way better for heads-down users. For example, ADF messages allow you to see a list of all validation errors on a page on a final submit or navigation action, and to easily navigate to the point of error.
    - Better still, be error tolerant. Allow users to enter data in formats they are comfortable with. Bind any relevant user preference setting to the input format allowed (for example, the locale date format). Explore what data entry conversion can do for you automatically too (see the ADF converter demos; convenience patterns can also be written).
    - Only ask for data input when it's needed. Get rid of, or hide, optional fields.
    - Cut down on the number of mandatory fields, and mark them clearly (use a *).
    - Clearly label the fields in plain language.

    I am sure you may have a few more tips on forms design for data entry users. Remember the user, and share those tips in the comments.

  • RemoveHandler Issues with Custom Events

    - by Jeff Certain
    This is a case of things being more complicated than I thought they should be. Since it took a while to figure this one out, I thought it was worth explaining and putting all of the pieces to the answer in one spot.

    Let me set the stage. Architecturally, I have the notion of generic producers and consumers. These put items onto, and remove items from, a queue. This provides a generic, thread-safe mechanism to load balance the creation and processing of work items in our application. Part of the IProducer(Of T) interface is:

        Public Interface IProducer(Of T)
            Event ItemProduced(ByVal sender As IProducer(Of T), ByVal item As T)
            Event ProductionComplete(ByVal sender As IProducer(Of T))
        End Interface

    Nothing sinister there, is there? In order to simplify our developers’ lives, I wrapped the queue with some functionality to manage the producers and consumers. Since the developer can specify the number of producers and consumers that are spun up, the queue code manages adding event handlers as the producers and consumers are instantiated.

    Now, we’ve been having some memory leaks and, in order to eliminate the possibility that this was caused by weak references to event handlers, I wanted to remove them. This is where it got dicey. My first attempt looked like this:

        For Each producer As P In Producers
            RemoveHandler producer.ItemProduced, AddressOf ItemProducedHandler
            RemoveHandler producer.ProductionComplete, AddressOf ProductionCompleteHandler
            producer.Dispose()
        Next

    What you can’t see in my posted code are the warnings this caused:

        The 'AddressOf' expression has no effect in this context because the method argument to
        'AddressOf' requires a relaxed conversion to the delegate type of the event. Assign the
        'AddressOf' expression to a variable, and use the variable to add or remove the method
        as the handler.

    Now, what on earth does that mean? Well, a quick Bing search uncovered a whole bunch of talk about delegates. The first solution I found just changed all parameters in the event handler to Object. Sorry, but no. I used generics precisely because I wanted type safety, not because I wanted to use Object. More searching. Eventually, I found this forum post, where Jeff Shan revealed a missing piece of the puzzle. The other revelation came from Lian_ZA in this post. However, these two only hinted at the solution. Trying some of what they suggested led to finally getting an invalid cast exception that revealed the existence of ItemProducedEventHandler.

    Hold on a minute! I didn’t create that delegate. There’s nothing even close to that name in my code… except the ItemProduced event in the interface. Could it be? Naaaaah. Hmmm…. Well, as it turns out, there is a delegate created by the compiler for each event. By explicitly creating a delegate that refers to the method in question, implicitly cast to the generated delegate type, I was able to remove the handlers:

        For Each producer As P In Producers
            Dim _itemProducedHandler As IProducer(Of T).ItemProducedEventHandler = AddressOf ItemProducedHandler
            RemoveHandler producer.ItemProduced, _itemProducedHandler

            Dim _productionCompleteHandler As IProducer(Of T).ProductionCompleteEventHandler = AddressOf ProductionCompleteHandler
            RemoveHandler producer.ProductionComplete, _productionCompleteHandler
            producer.Dispose()
        Next

    That’s “all” it took to finally be able to remove the event handlers and maintain type-safe code. Hopefully, this will save you the same challenges I had in trying to figure out how to fix this issue!

  • Admin Panel like Custom Framework

    - by bhuvin
    I want to create a framework, like an admin panel, which can rule almost all aspects of what is shown on the frontend. For a (most basic) example: suppose the links to be shown in a navigation area are passed from the server, with the order, the URL, etc. The whole aim is to save time on tedious tasks. You can just start creating menus and assigning pages to them: give a URL and the actual files to be rendered (in the case of static files; for dynamic files, specifying the file accordingly). All of this would be fully manageable on the server side using different portlets, or that sort of thing.

    So the basic roadmap is having areas like:

    - Header Area - which can contain logos, links, etc.
    - Navigation Area - which can contain links and submenus.
    - Content Area - now this is where the tricky part is: it has zones like left, center & right, and it holds the order in which things have to be displayed. So, when someday we want to change the way the articles appear on the page, we can do so easily, without any deployments. These zones can have n number of internal elements, like a word cloud or an advertisement area.
    - Footer Area - again similar to the Header Area.

    Currently there is a preexisting custom framework which uses XSLT files for pulling data out from the server side, and it has the above capabilities. For example, if there's a grid, it will have a <table> tag embedded in the XSLT file. Whatever the source of the data might be, we serialize it as XML, feed it to the XSLT file, derive the HTML from that, and append it to a layer in the page. The problems with this approach are:

    - The XSLT conversion occurs on the server side, so the server is responsible for getting the data, running the XSLT transform, and appending the generated HTML to the layer div. Firstly, this isn't the server's concern. Secondly, for larger applications this might be slower.
    - Debugging isn't possible for the XSLT transformation, so whenever we face problems with data it's always a bit of a trial & error method.
    - Maintaining it is a bit of a dreary job, i.e. styling changes and other stuff.
    - Adding dynamic values is hard: JavaScript can't easily be used with this, and we can't use jQuery or any other library, since all of this occurs on the server.

    For now, what I have thought about is using a templating - JavaScript - JSON combination in place of XSLT; this will be offloaded to the client and the rendering will take place accordingly. This could solve the above problems and could also add mobile support. The only problem I can think of is that it is a lot of work, and adding new portlets on the go needs to be looked into.

    What could be the alternatives to this? What kinds of problems are there with the JavaScript approach? What are the different ways to implement the same idea? Are there any existing frameworks for similar usage?

  • How to change TXSDateTime SOAP serialization in Delphi 7?

    - by LukLed
    I am trying to use a Java based webservice and have this SOAP request:

        <?xml version="1.0"?>
        <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
                           xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                           xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/">
          <SOAP-ENV:Body xmlns:NS1="http://something/">
            <NS1:getRequest id="1">
              <sessionId xsi:type="xsd:string"></sessionId>
              <reportType xsi:type="NS1:reportType">ALL</reportType>
              <xsd:dateFrom xsi:type="xsd:dateTime">2010-05-30T23:29:43.088+02:00</xsd:dateFrom>
              <xsd:dateTo xsi:type="xsd:dateTime">2010-05-31T23:29:43.728+02:00</xsd:dateTo>
            </NS1:getRequest>
            <parameters href="#1" />
          </SOAP-ENV:Body>
        </SOAP-ENV:Envelope>

    It doesn't work, because the webservice doesn't recognize the dates as parameters. When I change

        <xsd:dateFrom xsi:type="xsd:dateTime">2010-05-30T23:29:43.088+02:00</xsd:dateFrom>
        <xsd:dateTo xsi:type="xsd:dateTime">2010-05-31T23:29:43.728+02:00</xsd:dateTo>

    to

        <dateFrom xsi:type="xsd:dateTime">2010-05-30T23:29:43.088+02:00</dateFrom>
        <dateTo xsi:type="xsd:dateTime">2010-05-31T23:29:43.728+02:00</dateTo>

    everything works fine, but Delphi (without Delphi source code changes) doesn't allow changing the generated XML; it offers only some options. Is it possible to set conversion options so that TXSDateTime is serialized as a <dateFrom> tag rather than an <xsd:dateFrom> tag? Have you run into this problem?

  • Serialize C# dynamic object to JSON object to be consumed by javascript

    - by Jeff Jin
    Based on the example of using C# dynamic with XML, I modified DynamicXml.cs and parsed my XML string. The modified part is as follows:

        public override bool TryGetMember(GetMemberBinder binder, out object result)
        {
            result = null;
            if (binder.Name == "Controls")
                result = new DynamicXml(_elements.Elements());
            else if (binder.Name == "Count")
                result = _elements.Count;
            else
            {
                var attr = _elements[0].Attribute(XName.Get(binder.Name));
                if (attr != null)
                    result = attr.Value;
                else
                {
                    var items = _elements.Descendants(XName.Get(binder.Name));
                    if (items == null || items.Count() == 0)
                        return false;
                    result = new DynamicXml(items);
                }
            }
            return true;
        }

    The XML string to parse:

        "<View runat='server' Name='Doc111'>" +
        "<Caption Name='Document.ConvertToPdf' Value='Allow Conversion to PDF'></Caption>" +
        "<Field For='Document.ConvertToPdf' ReadOnly='False' DisplayAs='checkbox' EditAs='checkbox'></Field>" +
        "<Field For='Document.Abstract' ReadOnly='False' DisplayAs='label' EditAs='textinput'></Field>" +
        "<Field For='Document.FileName' ReadOnly='False' DisplayAs='label' EditAs='textinput'></Field>" +
        "<Field For='Document.KeyWords' ReadOnly='False' DisplayAs='label' EditAs='textinput'></Field>" +
        "<FormButtons SaveCaption='Save' CancelCaption='Cancel'></FormButtons>" +
        "</View>";

        dynamic form = new DynamicXml(markup_fieldsOnly);

    Is there a way to serialize the content of this dynamic object (the name/value pairs inside the dynamic) form as a JSON object to be sent to the client side (browser)?
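
    One possibility, sketched under the assumption that you know which members you want to expose (the dictionary keys below come from the markup's attributes, and the serializer choice is illustrative): project the dynamic's members into plain collections first, since stock JSON serializers understand dictionaries without needing to understand DynamicXml itself.

        // Sketch: copy the members you care about into a dictionary, then
        // serialize the dictionary with the built-in JavaScriptSerializer
        // (requires System.Web.Extensions and System.Collections.Generic).
        var payload = new Dictionary<string, object>
        {
            { "Name", form.Name },    // the Name='Doc111' attribute on <View>
            { "Count", form.Count }   // handled by the Count branch above
        };
        string json = new System.Web.Script.Serialization.JavaScriptSerializer()
            .Serialize(payload);
        // json can now be written to the response for the browser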

  • Negative ItemCount in SharePoint Document Library

    - by ccomet
    What can be done about negative numbers in library item counts? ItemCount is a read-only property; what are you supposed to do when it is drastically incorrect?

    Earlier last week, I was doing some testing involving the copying and moving of files and folders from one document library to another. I was transferring the items from our actual document library to a sandbox "Test" library that I use to run all sorts of object model and workflow testing in before migrating to the public lists and libraries. I noticed that with files, things worked correctly, but when I copied a folder that had a file inside it (using SPFolder.CopyTo()), the item count for the test library did not actually update. Since this testing was mostly playing around, I paid it little heed.

    Today I was back in the test library to test a different workflow (regarding PDF conversion). While I was there, I decided to delete the folder I left last week, since I didn't need it anymore. And that's when I saw the item count for the list drop to -1 in the All Site Content view. When I deleted the new PDF I had just uploaded, it then dropped to -2! I even checked with the object model... getting an instance of the library, I checked the ItemCount property... lo and behold, it was also -2.

    Is there any process which runs in the background, kinda like the one that cleans up workflow history, which will correct this kind of issue? Or is a programmer expected to keep watch for this kind of situation and come up with calculations to compensate for the "count penalty", as it were?

  • How do I filter out NaN FLOAT values in Teradata SQL?

    - by Paul Hooper
    With the Teradata database, it is possible to load values of NaN, -Inf, and +Inf into FLOAT columns through Java. Unfortunately, once those values get into the tables, they make life difficult when writing SQL that needs to filter them out. There is no IsNaN() function, nor can you CAST ('NaN' AS FLOAT) and use an equality comparison.

    What I would like to do is:

        SELECT SUM(VAL**2)
        FROM DTM
        WHERE NOT ABS(VAL) > 1e+21
          AND NOT VAL = CAST ('NaN' AS FLOAT)

    but that fails with error 2620, "The format or data contains a bad character.", specifically on the CAST. I've tried simply "... AND NOT VAL = 'NaN'", which also fails for a similar reason (3535, "A character string failed conversion to a numeric value."). I cannot seem to figure out how to represent NaN within the SQL statement.

    Even if I could represent NaN successfully in an SQL statement, I would be concerned that the comparison would fail. According to the IEEE 754 spec, NaN = NaN should evaluate to false. What I really seem to need is an IsNaN() function. Yet that function does not seem to exist.
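
    One workaround worth sketching, on the assumption that Teradata's FLOAT comparisons follow the IEEE 754 semantics the question cites: NaN is the only value for which VAL = VAL is false, so the self-comparison can stand in for the missing IsNaN():

        SELECT SUM(VAL**2)
        FROM DTM
        WHERE NOT ABS(VAL) > 1e+21  -- also screens out +/-Inf
          AND VAL = VAL             -- false only for NaN under IEEE semantics
                                    -- (also drops NULLs, which SUM ignores anyway)

    Verify the behavior on your Teradata release first; if its comparison operators don't honor IEEE NaN rules, this filter will pass everything through unchanged.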

  • VS2008 -> VS2010 leads to cryptic STL errors

    - by Jakobud
    The following C++ library compiles successfully in VS2008: http://sourceforge.net/projects/xmlrpcc4win/files/xmlrpcc4win/XmlRpcC4Win1.0.8.zip/download

    When I open it in VS2010, it goes through the conversion wizard without any errors. But when I attempt to compile it in VS2010, I get some weird STL errors like these:

        1>TimXmlRpc.cpp(1018): error C2039: 'back_insert_iterator' : is not a member of 'std'
        1>TimXmlRpc.cpp(1018): error C2065: 'back_insert_iterator' : undeclared identifier
        1>TimXmlRpc.cpp(1018): error C2275: 'XmlRpcValue::BinaryData' : illegal use of this type as an expression
        1>TimXmlRpc.cpp(1018): error C2065: 'ins' : undeclared identifier
        1>TimXmlRpc.cpp(1018): error C2039: 'back_inserter' : is not a member of 'std'
        1>TimXmlRpc.cpp(1018): error C3861: 'back_inserter': identifier not found
        1>TimXmlRpc.cpp(1019): error C2065: 'ins' : undeclared identifier
        1>TimXmlRpc.cpp(1031): error C2039: 'back_insert_iterator' : is not a member of 'std'
        1>TimXmlRpc.cpp(1031): error C2065: 'back_insert_iterator' : undeclared identifier
        1>TimXmlRpc.cpp(1031): error C2275: 'std::vector<_Ty>' : illegal use of this type as an expression
        1>        with
        1>        [
        1>            _Ty=char
        1>        ]
        1>TimXmlRpc.cpp(1031): error C2065: 'ins' : undeclared identifier
        1>TimXmlRpc.cpp(1031): error C2039: 'back_inserter' : is not a member of 'std'
        1>TimXmlRpc.cpp(1031): error C3861: 'back_inserter': identifier not found
        1>TimXmlRpc.cpp(1032): error C2065: 'ins' : undeclared identifier

    I'm not sure what to make of some of these. For instance, back_insert_iterator is in fact a member of std, but VS doesn't seem to think it is. How do I fix errors like these? They just don't seem to make much sense, so I'm not sure where to begin. Perhaps it's something in my project settings?

    For example, here is line 1018, which gives the "not a member of std" error:

        std::back_insert_iterator<BinaryData> ins = std::back_inserter(*(u.asBinary));

    If anyone could give me some direction I'd appreciate it. I'm new enough to C++ that I'm having a tough time figuring this one out.
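
    A likely explanation, offered as a sketch rather than a verified diagnosis: VS2010's standard library headers include far fewer other headers transitively than VS2008's did, so a source file that got std::back_inserter "for free" in 2008 now has to name its own dependency. If that is the cause here, the fix is a one-line include near the top of TimXmlRpc.cpp:

        // VS2008's standard headers pulled <iterator> in transitively;
        // VS2010's do not, so spell the dependency out explicitly.
        #include <iterator>

    The same header cleanup in VS2010 bit many converted projects with <string>, <algorithm>, and <cstdio> too, so any "is not a member of 'std'" error after a conversion is worth checking against the missing-include hypothesis before touching project settings.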

  • Preserving case in HTTP headers with Ruby's Net:HTTP

    - by emh
    Although the HTTP spec says that headers are case insensitive, PayPal, with their new Adaptive Payments API, require their headers to be case-sensitive. Using the PayPal adaptive payments extension for ActiveMerchant (http://github.com/lamp/paypal_adaptive_gateway), it seems that although the headers are set in all caps, they are sent in mixed case. Here is the code that sends the HTTP request:

        headers = {
          "X-PAYPAL-REQUEST-DATA-FORMAT" => "XML",
          "X-PAYPAL-RESPONSE-DATA-FORMAT" => "JSON",
          "X-PAYPAL-SECURITY-USERID" => @config[:login],
          "X-PAYPAL-SECURITY-PASSWORD" => @config[:password],
          "X-PAYPAL-SECURITY-SIGNATURE" => @config[:signature],
          "X-PAYPAL-APPLICATION-ID" => @config[:appid]
        }
        build_url action

        request = Net::HTTP::Post.new(@url.path)
        request.body = @xml
        headers.each_pair { |k,v| request[k] = v }
        request.content_type = 'text/xml'

        proxy = Net::HTTP::Proxy("127.0.0.1", "60723")
        server = proxy.new(@url.host, 443)
        server.use_ssl = true
        server.start { |http| http.request(request) }.body

    (I added the proxy line so I could see what was going on with Charles - http://www.charlesproxy.com/)

    When I look at the request headers in Charles, this is what I see:

        X-Paypal-Application-Id ...
        X-Paypal-Security-Password ...
        X-Paypal-Security-Signature ...
        X-Paypal-Security-Userid ...
        X-Paypal-Request-Data-Format XML
        X-Paypal-Response-Data-Format JSON
        Accept */*
        Content-Type text/xml
        Content-Length 522
        Host svcs.sandbox.paypal.com

    I verified that it is not Charles doing the case conversion by running a similar request using curl. In that test the case was preserved.
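
    One workaround that has circulated for this (a sketch only, and sensitive to Ruby version, since it leans on the private capitalization logic inside Net::HTTPHeader): hand Net::HTTP header keys whose case-mangling methods are no-ops.

        # Sketch: a String subclass whose case-transforming methods return self,
        # so Net::HTTP's internal downcase/capitalize/split calls leave the
        # original casing untouched. Verify against your Ruby's net/http source.
        class ImmutableHeaderKey < String
          def downcase; self; end
          def capitalize; self; end
          def split(*); [self]; end  # Net::HTTP capitalizes each dash-separated token
          def to_s; self; end
        end

        headers.each_pair { |k, v| request[ImmutableHeaderKey.new(k)] = v }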

  • How to do proper Unicode and ANSI output redirection on cmd.exe?

    - by Sorin Sbarnea
    If you are doing automation on Windows and you are redirecting the output of different commands (internal cmd.exe ones or external), you'll discover that your log files contain combined Unicode and ANSI output (meaning that they are invalid and will not load well in viewers/editors). Is it possible to make cmd.exe work with UTF-8? This question is not about display; it's about stdin/stdout/stderr redirection and Unicode.

    I am looking for a solution that would allow you to:

    - redirect the output of the internal commands to a file using UTF-8
    - redirect the output of external commands supporting Unicode to files, but encoded as UTF-8

    If it is impossible to obtain this kind of consistency using batch files, is there another way of solving this problem, like using Python scripting for it? In this case, I would like to know if it is possible to do the Unicode detection alone (the user of the script should not have to remember whether the called tools output Unicode or not; they just expect the output to be converted to UTF-8). For simplicity we'll assume that if the tool output is not Unicode, it will be considered UTF-8 (no codepage conversion).
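
    On the detection side, a rough Python sketch under the question's own simplifying assumption (anything that isn't UTF-16LE is treated as UTF-8; the NUL-byte heuristic is a guess that works because UTF-16LE encodes ASCII with a zero high byte):

        import subprocess

        def capture_as_utf8(cmd):
            """Run a command and return its output normalized to UTF-8 bytes."""
            raw = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                   shell=True).communicate()[0]
            # 'cmd /u' makes built-ins emit UTF-16LE: for mostly-ASCII text,
            # roughly every second byte is NUL. Treat everything else as UTF-8.
            if raw and raw[1::2].count(0) > len(raw) // 4:
                return raw.decode('utf-16-le').encode('utf-8')
            return raw

        with open('build.log', 'ab') as log:
            log.write(capture_as_utf8('cmd /u /c dir'))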

  • YUV Textures and Shaders

    - by Luca
    I've always used RGB textures. Now comes up the need to use YUV textures (a set of three textures, specifying one luminance and two chrominance channels). Of course the YUV data could be converted on the CPU, producing an RGB texture usable as usual... but I need to get RGB pixels directly on the GPU, to avoid unnecessary processor load.

    The problem becomes awkward, since for each YUV texture I have to specify in the shader source the following items:

    - Three sampler uniforms, one for each channel
    - Two integer uniforms, for specifying the chrominance channel sampling
    - A mat3 uniform, for the specific YUV to RGB conversion matrix

    This would have to be done for each YUV texture... Is it possible to "compress" the required uniforms and get RGB values quite easily? Actually I think this could help:

    - Texture sizes, including mipmaps, can be queried. With this, it's possible to save the two integer uniforms, since their values can be derived from the ratio between texture extents.
    - The mat3 uniforms could be collected as globals and selected with the preprocessor.

    But what design should I use for specifying three (related) textures? Is it possible to use texture levels for accessing multiple textures? Could texture arrays be usable? And what about using rectangle textures, which don't support mipmaps? Maybe a shader abstraction (a struct definition and related functions) could help?

    Thank you.
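
    For the conversion step itself, here is a minimal fragment shader sketch; the BT.601 matrix is an assumption, so substitute whatever matrix your source material actually requires. Because texture coordinates are normalized, half-resolution chroma planes sample correctly with the same coordinates as the luma plane, which is part of what makes the two integer uniforms potentially unnecessary:

        // Minimal GLSL sketch: three single-channel textures in, RGB out.
        uniform sampler2D texY;
        uniform sampler2D texU;
        uniform sampler2D texV;

        void main()
        {
            vec3 yuv;
            yuv.x = texture2D(texY, gl_TexCoord[0].st).r;
            yuv.y = texture2D(texU, gl_TexCoord[0].st).r - 0.5;  // center chroma
            yuv.z = texture2D(texV, gl_TexCoord[0].st).r - 0.5;

            // BT.601 YCbCr -> RGB (assumed; columns listed in GLSL column-major order)
            mat3 yuv2rgb = mat3(1.0,    1.0,    1.0,
                                0.0,   -0.344,  1.772,
                                1.402, -0.714,  0.0);
            gl_FragColor = vec4(yuv2rgb * yuv, 1.0);
        }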

  • what's wrong with sql statements sqlite3_column_text ??

    - by ahmet732
    Here is the code:

        if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
            NSLog(@"DB OPENED");
            // Set up the SQL statement and compile it for faster access
            const char *sqlStatement = "select name from Medicine";
            sqlite3_stmt *compiledStatement;
            int result = sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, nil);
            NSLog(@"RESULT %d", result);
            if (result == SQLITE_OK) {
                NSLog(@"RESULT IS OK...");
                // Loop through the results and add them to the feeds array
                while (sqlite3_step(compiledStatement) == SQLITE_ROW) {
                    // Read the data from the result row
                    NSLog(@"WHILE IS OK");
                    araci = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)];
                    NSLog(@"Data read");
                    NSLog(@"wow: %@", araci);
                } // while
            } // if sqlite3_prepare_v2
            // Release the compiled statement from memory
            sqlite3_finalize(compiledStatement);
            NSLog(@"compiled stmnt finalized..");
        }
        sqlite3_close(database);
        NSLog(@"MA_DB CLOSED");

    Questions:

    1) Everything is alright until the line that calls sqlite3_column_text. When I run the app, it throws the exception below. I have one table and one row in my db; the columns are id (integer), name (varchar), and desc (varchar). (I created my table using SQLite Manager.)

    2) What is the TEXT type in SQLite Manager? Is that NSString? Also, how can I put the string values from the table into an NSMutableArray? (I am really having difficulty with the conversion.)

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException',
        reason: '*** +[NSString stringWithUTF8String:]: NULL cString'
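
    A sketch of the likely fix (hedged, but the symptom matches): sqlite3_column_* indexes are 0-based, and "select name from Medicine" returns a single column, so column 1 does not exist. sqlite3_column_text() then returns NULL, which is exactly the NULL cString that +stringWithUTF8String: rejects. Guarding the pointer and using index 0 also shows one way to collect the values into an NSMutableArray (medicineNames is a hypothetical, previously allocated array):

        const unsigned char *text = sqlite3_column_text(compiledStatement, 0); // 0-based!
        if (text != NULL) {
            araci = [NSString stringWithUTF8String:(const char *)text];
            [medicineNames addObject:araci]; // hypothetical NSMutableArray
        }

    As for the TEXT column type: SQLite hands it back as a C string, and wrapping it in an NSString, as above, is the usual conversion.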

  • Spring Test / JUnit problem - unable to load application context

    - by HDave
    I am using Spring for the first time and must be doing something wrong. I have a project with several bean implementations and now I am trying to create a test class with Spring Test and JUnit. I am trying to use Spring Test to inject a customized bean into the test class. Here is my test-applicationContext.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns=".............">
            <bean id="MyUuidFactory" class="com.myapp.UuidFactory" scope="singleton">
                <property name="typeIdentifier" value="CLS" />
            </bean>

            <bean id="ThingyImplTest" class="com.myapp.ThingyImplTest" scope="singleton">
                <property name="uuidFactory">
                    <idref local="MyUuidFactory" />
                </property>
            </bean>
        </beans>

    The injection of the MyUuidFactory instance goes along with the following code from within the test class:

        private UuidFactory uuidFactory;

        public void setUuidFactory(UuidFactory uuidFactory) {
            this.uuidFactory = uuidFactory;
        }

    However, when I go to run the test (in Eclipse or on the command line) I get the following error (stack trace omitted for brevity):

        Caused by: org.springframework.beans.factory.BeanCreationException: Error creating
        bean with name 'MyImplTest' defined in class path resource [test-applicationContext.xml]:
        Initialization of bean failed; nested exception is
        org.springframework.beans.ConversionNotSupportedException: Failed to convert property
        value of type 'java.lang.String' to required type 'com.myapp.UuidFactory' for property
        'uuidFactory'; nested exception is java.lang.IllegalStateException: Cannot convert value
        of type [java.lang.String] to required type [com.myapp.UuidFactory] for property
        'uuidFactory': no matching editors or conversion strategy found

    Funny thing is, the Eclipse/Spring XML editor shows errors if I misspell any of the types or idrefs. If I leave the bean in, but comment out the dependency injection, everything works until I get a NullPointerException while running the test... which makes sense.
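
    One observation, which matches the exception text even though only a run can confirm it: <idref> deliberately injects the target bean's id as a String (it exists for passing bean names around with validation), not the bean itself, which is why Spring ends up trying to convert a String into a UuidFactory. Wiring the instance uses ref instead:

        <bean id="ThingyImplTest" class="com.myapp.ThingyImplTest" scope="singleton">
            <!-- ref hands over the MyUuidFactory bean itself, not its name -->
            <property name="uuidFactory" ref="MyUuidFactory" />
        </bean>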

  • How to Process Lambda Expressions Passed as Argument Into Method - C# .NET 3.5

    - by Sunday Ironfoot
    My knowledge of lambda expressions is a bit shaky. While I can write code that uses lambda expressions (aka LINQ), I'm trying to write my own method that takes a few arguments of lambda expression type.

    Background: I'm trying to write a method that returns a tree collection of objects of type TreeItem from literally ANY other object type. I have the following so far:

        public class TreeItem
        {
            public string Id { get; set; }
            public string Text { get; set; }
            public TreeItem Parent { get; protected set; }

            public IList<TreeItem> Children
            {
                get
                {
                    // Implementation that returns custom TreeItemCollection type
                }
            }

            public static IList<TreeItem> GetTreeFromObject<T>(
                IList<T> items,
                Expression<Func<T, string>> id,
                Expression<Func<T, string>> text,
                Expression<Func<T, IList<T>>> childProperty) where T : class
            {
                foreach (T item in items)
                {
                    // Errrm!?? What do I do now?
                }
                return null;
            }
        }

    ...which can be called via...

        IList<TreeItem> treeItems = TreeItem.GetTreeFromObject<Category>(
            categories, c => c.Id, c => c.Name, c => c.ChildCategories);

    I could replace the expressions with string values and just use reflection, but I'm trying to avoid this, as I want to keep it strongly typed. My reason for doing this is that I have a control that accepts a list of type TreeItem, whereas I have dozens of different types that are all in a tree-like structure, and I don't want to write separate conversion methods for each type (trying to adhere to the DRY principle). Am I going about this the right way? Is there a better way of doing this, perhaps?
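
    A sketch of one direction, under an assumption about intent: nothing here inspects the expression trees, so plain delegates are enough, and the call sites stay identical because the same lambda literal converts to either Expression<Func<...>> or Func<...>. (The body also assumes the custom Children collection supports Add.)

        public static IList<TreeItem> GetTreeFromObject<T>(
            IList<T> items,
            Func<T, string> id,
            Func<T, string> text,
            Func<T, IList<T>> children) where T : class
        {
            var result = new List<TreeItem>();
            foreach (T item in items)
            {
                // Invoke the delegates directly to project the source object
                var node = new TreeItem { Id = id(item), Text = text(item) };

                // Recurse into the child collection, if any
                var kids = children(item);
                if (kids != null)
                {
                    foreach (var child in GetTreeFromObject(kids, id, text, children))
                        node.Children.Add(child);
                }
                result.Add(node);
            }
            return result;
        }

    Expression<Func<...>> parameters earn their keep only if you need the property names themselves (say, for diagnostics); in that case you would call id.Compile() once and cache the resulting delegate before looping.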

  • Migrating complex SVN branch hierarchy to Mercurial

    - by Christian Hang
    Our team has been using SVN for managing an application of decent size, and over time a rather complex hierarchy of branches and tags has built up. It follows the basic standard layout for SVN repositories, but is more nested:

        |-trunk
        |-branches
        | |-releases
        | | |-releaseA
        | | `-releaseB
        | `-features
        |   |-featureX
        |   `-featureY
        `-tags
          |-releaseA
          | |-beta
          | `-RTP
          `-releaseB
            |-beta
            `-RTP

    (The feature branches are obviously temporary branches, but we have to take them into consideration, as it won't be feasible to close all of them at once in the near future.)

    For several reasons, but primarily because merges have been becoming an increasing pain, we are considering a switch to Mercurial. The main problem we are currently facing is migrating the existing code base without losing our history. I've tried several migration tools (e.g., yasvn2hg, hg convert and svn2hg), with yasvn2hg being the most promising, but none of them seem to be able to deal with nested hierarchies; they all assume that branches and tags are organized in one flat directory each. The choice between named branches or clones as the conversion target of old SVN branches is not a limiting factor in this case, as either solution would be appreciated. We are currently experimenting with both options and how they would fit into our current processes, but haven't decided on one yet. I'd obviously be interested in recommendations or experiences with similar setups concerning that issue as well.

    So, what is the best way to convert a nested SVN branch hierarchy like this to Mercurial? Converting one branch at a time into a separate repository would be quite annoying, and I am not sure it would be the right approach in the first place, depending on how the tools handle historic merges and whether they need to be aware of all other branches.
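
    For reference, a sketch of the one-branch-at-a-time fallback dismissed above (repository URL and paths are placeholders): hg convert's documented --filemap directives can carve a nested branch out of the repository and hoist its subdirectory to the root.

        # releaseA.filemap
        include branches/releases/releaseA
        rename branches/releases/releaseA .

        # then, one conversion per nested branch:
        hg convert --filemap releaseA.filemap http://svn.example.com/repo releaseA-hg

    Whether the converted repository keeps cross-branch merge history intact is exactly the open question raised in the last paragraph, so test this on a clone before committing to the route.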

  • Multiple dex files define Lcom/google/api/client/auth/oauth/AbstractOAuthGetToken;

    - by Elad Benda
    I have just followed this tutorial: https://developers.google.com/drive/quickstart-android, so I don't see a reason for duplicated libs in my project. I have added the Drive client lib via the Google Plugin for Eclipse. When I build my Android app with this manifest:

        <uses-sdk
            android:minSdkVersion="15"
            android:targetSdkVersion="16" />

        <uses-permission android:name="android.permission.READ_CALENDAR" />
        <uses-permission android:name="android.permission.WRITE_CALENDAR" />
        <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
        <uses-permission android:name="android.permission.GET_ACCOUNTS" />
        <uses-permission android:name="android.permission.INTERNET" />

        <application
            android:icon="@drawable/todo"
            android:label="@string/app_name" >
            <activity
                android:name=".TodosOverviewActivity"
                android:label="@string/app_name" >
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
            </activity>
            <activity
                android:name=".TodoDetailActivity"
                android:windowSoftInputMode="stateVisible|adjustResize" >
                <intent-filter>
                    <action android:name="android.intent.action.SEND" />
                    <category android:name="android.intent.category.DEFAULT" />
                    <data android:mimeType="image/*" />
                </intent-filter>
            </activity>
            <provider
                android:name=".contentprovider.MyTodoContentProvider"
                android:authorities="de.vogella.android.todos.contentprovider" >
            </provider>
        </application>

    I get the following error:

        [2013-10-27 00:43:58 - Dex Loader] Unable to execute dex: Multiple dex files define
        Lcom/google/api/client/auth/oauth/AbstractOAuthGetToken;
        [2013-10-27 00:43:58 - de.vogella.android.todos] Conversion to Dalvik format failed:
        Unable to execute dex: Multiple dex files define
        Lcom/google/api/client/auth/oauth/AbstractOAuthGetToken;

    How can I fix this?

  • What's the "best" database for embedded?

    - by mawg
    I'm an embedded guy, not a database guy. I've been asked to redesign an existing system which has bottlenecks in several places.

    The embedded device is based around an ARM9 processor running at 220 MHz. There should be a database of 50k entries (which may increase to 250k), each with 1k of data (max 8 fields). That's approximate - I can try to get more precise figures if necessary. They are currently using SQLite 2 and planning to move to SQLite 3.

    Without starting a flame war - I am a complete db newbie just seeking advice - is that the "best" decision? I realize that this might be a "how long is a piece of string?" question, but any pointers would be greatly welcomed. I don't mind doing a lot of reading & research, but just hoped that you could get me off to a flying start. Thanks.

    p.s. Again, it's a total rewrite; we might not even stick with embedded Linux, but switch to eCos. Don't worry too much about the one-time conversion between db formats. Oh, and accesses should be infrequent, at most one every few seconds.

    Edit: OK, it seems they have 30k entries (which may reach 100k or more) of only 5 or 6 fields each, but at least 3 of them can be a search key for a record. They are toying with "having no db at all, since the data are so simple", but it seems to me that with multiple keys we couldn't use fancy stuff like a quicksort-style (recursive, binary) search. Any thoughts on "no db", just data structures? Btw, one key is 800k - not sure how well SQLite handles that (maybe with "no db" I'd have to hash that 800k to something smaller?)

  • running asp.net 3.5 and asp.net 2.0 in same site

    - by cori
    We're running ASP.NET 2.0 on our corporate web site, and I'd like to get it up to ASP.NET 3.5 as smoothly as possible. The project/solution architecture in VS 2005 is an ASP.NET 2.0 web project plus a .NET 2.0 data access layer project which is used by the site code.

    Upon opening the projects in a new VS 2008 solution, they seemed to be converted to .NET 3.5 with a minimum of fuss - they built correctly out of the box, deployed successfully, and seem to work just fine, which is exactly what I would expect given that .NET 2.0 and 3.5 share a common runtime. The major difference after the conversion is that the web.config file's referenced DLLs are now the 3.5 versions.

    What I would like to do is update the site piecemeal: as I make modifications to a given page, send the 3.5 version of that page over to our web server rather than updating the whole site at once. In testing on our dev box this approach seems to be working fine - the site code is interacting with the .NET 3.5 data access layer without difficulty, a handful of pages are running 3.5 code-behind (by this I mean that they're running assemblies built in VS 2008 - the site is using single-page assemblies for code-behind), the 3.5 web.config is in place, and the bulk of the site is running code-behind assemblies built in VS 2005. Everything looks great.

    Which makes me worried that I'm missing something. Is this architecture workable, or is there a problem lying in wait for me that I haven't considered?

  • Manual drag-drop operations in Flex

    - by Yarin
    This is a two-part problem:

    A) I'm implementing several irregular drag-drop operations in Flex (e.g. DataGrid ItemRenderer into Tree). My preference was modifying DragManager operations to meet my needs, and in fact using DragManager allows me to do everything I need, but I'm having serious performance issues. For example, dragging anything over a many-columned DataGrid, whether the drag was initiated with DragManager.doDrag or just using native ListBase drag-drop functionality, slows the drag movement to a crawl. This happens even if the DataGrid is disabled and not listening for any move/drag events. On the other hand, if the drag is initiated by calling .startDrag() on the Sprite, the drag is smooth and performs great over DataGrids and everything else. So part A would be: is there a reason why .startDrag() operations work so well, while drags initiated through DragManager.doDrag suffer so badly when over certain components?

    B) If indeed the solution is to handle drag-drops using .startDrag(), how would I go about determining what component the mouse is over when the drag is released? In my example, my dragged object is brought up to the top level of the display list, and so is being moved around in stage coordinates. mouseMove and mouseOver events don't fire on the components I'm dragging over, because the mouse is constantly over the dragged component, so I would need some sort of stage-coordinate to visible-component-at-that-coordinate conversion.

    Any thoughts on this? Thanks a lot! -- Yarin
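
    On (B), a sketch of one approach (dragProxy is a hypothetical reference to the dragged sprite, and the back-to-front ordering of the returned array is worth verifying against the docs): DisplayObjectContainer.getObjectsUnderPoint() returns the display objects under a stage-coordinate point, which can stand in for the mouseOver events the drag proxy is swallowing.

        // ActionScript 3 sketch; requires flash.geom.Point and flash.display.*
        private function onDragRelease(e:MouseEvent):void {
            var pt:Point = new Point(stage.mouseX, stage.mouseY);
            var under:Array = stage.getObjectsUnderPoint(pt);
            // Topmost objects are typically last; walk backwards, skipping
            // the drag proxy itself and anything inside it.
            for (var i:int = under.length - 1; i >= 0; i--) {
                var obj:DisplayObject = under[i];
                if (obj == dragProxy || dragProxy.contains(obj)) continue;
                trace("released over:", obj);  // climb obj.parent to reach a component
                break;
            }
        }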

  • Unable to convert from Julian INT date to regular TSQL Datetime

    - by Bluehiro
    Help me Stackoverflow, I'm close to going all "HULK SMASH" on my keyboard over this issue. I have researched carefully but I'm obviously not getting something right.

    I am working with Julian dates referenced from a proprietary tool (Platinum SQL?), though I'm working in SQL 2005. I can convert their "special" version of Julian into datetime when I run a select statement. Unfortunately it will not insert into a datetime column; I get the following error when I try:

        The conversion of a char data type to a datetime data type resulted in an
        out-of-range datetime value.

    So I can't set up datetime criteria for running a report off of the stored procedure.

    Original value: 733416
    Equivalent calendar value: 01-09-2009

    Below is my code... I'm so close but I can't quite see what's wrong. I need my convert statement to actually convert the Julian value (733416) into a compatible T-SQL DATETIME value.

        SELECT org_id,
               CASE WHEN date_applied = 0 THEN '00-00-00'
                    ELSE convert(char(50), dateadd(day, date_applied - 729960,
                         convert(datetime, '07-25-99')), 101)
               END AS date_applied,
               CASE WHEN date_posted = 0 THEN '00-00-00'
                    ELSE convert(char(50), dateadd(day, date_posted - 729960,
                         convert(datetime, '07-25-99')), 101)
               END AS date_posted
        FROM general_vw
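
    A hedged reading of the error, with a sketch: the CASE produces char(50) strings, and while the converted dates survive the trip back to datetime on insert, the '00-00-00' placeholder is not a valid date, which is what raises the out-of-range error. Keeping the expression typed as datetime end to end, with NULL as the "no date" sentinel, avoids the char round-trip entirely:

        SELECT org_id,
               CASE WHEN date_applied = 0 THEN NULL  -- NULL instead of '00-00-00'
                    ELSE DATEADD(day, date_applied - 729960, CONVERT(datetime, '07-25-99'))
               END AS date_applied,
               CASE WHEN date_posted = 0 THEN NULL
                    ELSE DATEADD(day, date_posted - 729960, CONVERT(datetime, '07-25-99'))
               END AS date_posted
        FROM general_vw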

  • Android ndk-build command does nothing

    - by James
    I have a similar question to the one posted here: Android NDK: why ndk-build doesn't generate .so file and a new libs folder in Eclipse? ...though I am running Windows 7, not Mac OS.

    Essentially, the ndk-build command runs, gives no error, but doesn't create an .so file (also, since I'm on Windows, should this create a .dll and not an .so?). I tried running the command from the root, jni, and src folders, etc., but got the same result; cmd just returns to the prompt after a few seconds. I ran it again from the jni folder with NDK_LOG=1 to see what was happening. Here is a portion of the log after it successfully identified the platform, etc.:

        Android NDK: Looking for jni/Android.mk in /workspace/NdkFooActivity/jni
        Android NDK: Looking for jni/Android.mk in /workspace/NdkFooActivity
        Android NDK: Found it !
        Android NDK: Found project path: /workspace/NdkFooActivity
        Android NDK: Ouput path: /workspace/NdkFooActivity/obj
        Android NDK: Parsing /cygdrive/c/android-ndk-r8/build/core/default-application.mk
        Android NDK: Found APP_PLATFORM=android-15 in /workspace/NdkFooActivity/project.properties
        Android NDK: Application local targets unknown platform 'android-15'
        Android NDK: Switching to android-14
        Android NDK: Using build script /workspace/NdkFooActivity/jni/Android.mk
        Android NDK: Application 'local' is not debuggable
        Android NDK: Selecting release optimization mode (app is not debuggable)
        Android NDK: Adding import directory: /cygdrive/c/android-ndk-r8/sources
        Android NDK: Building application 'local' for ABI 'armeabi'
        Android NDK: Using target toolchain 'arm-linux-androideabi-4.4.3' for 'armeabi' ABI
        Android NDK: Looking for imported module with tag 'cxx-stl/system'
        Android NDK: Probing /cygdrive/c/android-ndk-r8/sources/cxx-stl/system/Android.mk
        Android NDK: Found in /cygdrive/c/android-ndk-r8/sources/cxx-stl/system
        Android NDK: Cygwin dependency file conversion script:

    ...after which point it just runs the script mentioned in the last line, then terminates. Any ideas? Thanks!

  • C# Check if character exists in encoding

    - by Alvin Wong
    I am writing a program, part of which renders a bitmap font in CP437. In a function that renders the text, I want to be able to check whether a char is available in CP437 before the encoding conversion, like:

        public static void DrawCharacter(this Graphics g, char c)
        {
            if (char_exist_in_encoding(Encoding.GetEncoding(437), c))
            {
                byte[] src = Encoding.Unicode.GetBytes(c.ToString());
                byte[] dest = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(437), src);
                DrawCharacter(g, dest[0]); // Call the void(this Graphics, byte) overload
            }
        }

    Without the check, any character outside CP437 will result in a '?' (63, 0x3F). I want to hide any invalid characters completely. Is there an implementation of char_exist_in_encoding other than the following stupid approach?

        public static bool char_exist_in_encoding(Encoding e, char c)
        {
            if (c == '?')
                return true;

            byte[] src = Encoding.Unicode.GetBytes(c.ToString());
            byte[] dest = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(437), src);
            if (dest[0] == 0x3F)
                return false;

            return true;
        }

    Perhaps not very relevant, but the bitmap is created like this:

        Bitmap b = new Bitmap(256 * 8, 16);
        Graphics g = Graphics.FromImage(b);
        g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.SingleBitPerPixelGridFit;

        Font f = new Font("Whatever 8x16 bitmap font", 16, GraphicsUnit.Pixel);
        for (byte i = 0; i < 255; i++)
        {
            byte[] arr = Encoding.Convert(Encoding.GetEncoding(437), Encoding.Unicode, new byte[] { i });
            char c = Encoding.Unicode.GetChars(arr)[0];
            g.DrawString(c.ToString(), f, Brushes.Black, i * 8 - 3, 0); // Don't know why it needs a 3px offset
        }
        b.Save(@"D:\chars.png");
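
    One possibility, sketched rather than battle-tested: request the code page with an exception-throwing encoder fallback, so "does this char exist in CP437?" becomes a try/catch instead of a comparison against the '?' substitution byte.

        // A CP437 encoding whose encoder throws on unmappable characters
        // instead of substituting '?'.
        private static readonly Encoding Cp437Strict = Encoding.GetEncoding(
            437, new EncoderExceptionFallback(), new DecoderExceptionFallback());

        public static bool CharExistsInEncoding(char c)
        {
            try
            {
                Cp437Strict.GetBytes(new[] { c });
                return true;
            }
            catch (EncoderFallbackException)
            {
                return false;
            }
        }

    This also removes the special-casing of '?' itself, since a genuine question mark encodes without the fallback ever being consulted.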
