Search Results

Search found 4236 results on 170 pages for 'validation'.


  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results. We have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes (this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows:

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported in the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    (click chart for full size image) The chart above illustrates some interesting points about the results:
    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (click chart for full size image) The above is another view of the same data as the prior chart, just with the axis changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but it also highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data)
    - OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data, 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons
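    The post links to its full source externally rather than inlining it, so purely as an illustration of the blocking/parallel technique described above, here is a minimal sketch using the v1.x storage client's PutBlock()/PutBlockList() and a Parallel.For loop. The container name, block-ID scheme and the read-whole-file-into-memory shortcut are placeholder choices for this sketch, not details taken from the author's implementation.

        using System;
        using System.IO;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.StorageClient;

        class ParallelBlockUpload
        {
            // Sketch only: split a local file into fixed-size blocks, upload them in
            // parallel with PutBlock(), then commit the block list with PutBlockList().
            static void Upload(CloudBlobClient client, string path, int blockSize = 1024 * 1024)
            {
                CloudBlobContainer container = client.GetContainerReference("uploads"); // placeholder name
                container.CreateIfNotExist();
                CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(path));

                byte[] data = File.ReadAllBytes(path);            // fine for a sketch; stream per block for 1GB files
                int blockCount = (data.Length + blockSize - 1) / blockSize;
                var blockIds = Enumerable.Range(0, blockCount)
                    .Select(i => Convert.ToBase64String(BitConverter.GetBytes(i))) // IDs must be equal-length base64
                    .ToList();

                Parallel.For(0, blockCount, i =>
                {
                    int offset = i * blockSize;
                    int length = Math.Min(blockSize, data.Length - offset);
                    using (var ms = new MemoryStream(data, offset, length))
                    using (var md5 = MD5.Create())
                    {
                        string hash = Convert.ToBase64String(md5.ComputeHash(data, offset, length));
                        blob.PutBlock(blockIds[i], ms, hash);     // server verifies the per-block MD5
                    }
                });

                blob.PutBlockList(blockIds);                      // assemble/commit the blob
            }
        }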


  • Combining Shared Secret and Username Token – Azure Service Bus

    - by Michael Stephenson
    As discussed in the introduction article, this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus to ensure that you can protect your endpoint in the cloud with a shared secret but also flow through a username token so that in your listening WCF service you will be able to identify who sent the message. This could either be in the form of an application or a user, depending on how you want to use your token.

    Prerequisites: Before going into the walkthrough I want to explain a few assumptions about the scenario we are implementing, but to keep the article shorter I am not going to walk through all of the steps of how to set some of this up. In the solution we have a simple console application which will represent the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF Service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article; however, if you want to understand more about how to configure WCF and IIS for such a scenario, please refer to the following paper which goes into a lot of detail about how to configure this. The link is: http://tinyurl.com/8s5nwrz

    The Service Component: To begin with let's look at the service component and how it can be configured to listen to the service bus using a shared secret but to also accept a username token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.UN.Services. It has a single service which is the Visual Studio template for a WCF service when you add a new WCF Service Application, so we have a service called Service1 with its Echo method. Nothing special so far! The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration you can see I have created my service and I have created a local endpoint which I simply used to do a little bit of diagnostics and to check it was working, but more importantly there is the Windows Azure endpoint which is using the ws2007HttpRelayBinding (note that this should also work just the same if you're using netTcpRelayBinding). The key points to note in the above picture are the service behaviour called MyServiceBehaviour and the service bus endpoint behaviour called MyEndpointBehaviour. We will go into these in more detail later.

    The Relay Binding: The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit, where the transport security really relates to the interaction between the service and listening to the Azure Service Bus, and the message credential is where we will use our username token as specified in the message/clientCredentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus, and messages will not be accepted from any sender who has not been authenticated by ACS.
    The Endpoint Behaviour: In the below picture you can see the endpoint behaviour, which is configured to use the shared secret client credential for accessing the service bus; for diagnostic purposes I have also included the service registry element. Hopefully, if you are familiar with using the Windows Azure Service Bus relay feature, the above is very familiar to you and this is a very common setup for this section. There is nothing specific to the username token implementation here.

    The Service Behaviour: Now we come to the bit with most of the username token bits in it. When you configure the service behaviour I have included the serviceCredentials element and then set it up to use userNameAuthentication, and you can see that I have created my own custom username token validator. This setup means that WCF will hand off to my class for validating the username token details. I have also added the serviceSecurityAudit element to give me a simple auditing of access capability.

    My UsernamePassword Validator: The below picture shows you the details of the username password validator class I have implemented. WCF will hand off to this class when validating the token and give me a nice way to check the token credentials against an on-premise store. You have all of the validation features of a non-service bus WCF implementation available, such as validating the username password against Active Directory or ASP.NET membership features, or, as in my case above, something much simpler.

    The Client: Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a username token over to the service component so it can implement additional security checks on-premise. I have a console application, and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. You can see in my WCF client configuration below I have set up my details for the Azure Service Bus URL and am using the ws2007HttpRelayBinding. Next is my configuration for the relay binding. You can see below I have configured security to use TransportWithMessageCredential, so we will flow the username token with the message, and also the RelayAccessToken relayClientAuthenticationType, which means the component will validate against ACS before being allowed to access the relay endpoint to send a message. After the binding we need to configure the endpoint behaviour as in the below picture. This is the normal configuration to use a shared secret for accessing a Service Bus endpoint. Finally, below we have the code of the client in the console application which will call the service bus. You can see that we have created our proxy and then made a normal call to a WCF service, but this time we have also set the ClientCredentials to use the appropriate username and password, which will be flowed through the service bus to our service, which will validate them.

    Conclusion: As you can see from the above walkthrough, it is not too difficult to configure a service to use both a shared secret and a username token at the same time. This gives you the power and protection offered by the access control service in the cloud but also the ability to flow additional tokens to the on-premise component for additional security features to be implemented.

    Sample: The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.UN.zip
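    The validator itself is only shown as a screenshot in the original post, so as a rough illustration of the technique it describes, a custom validator derives from WCF's UserNamePasswordValidator and throws when the check fails. The class name and the hard-coded credential check below are placeholders for whatever on-premise store is actually used; they are not taken from the sample.

        using System.IdentityModel.Selectors;
        using System.ServiceModel;

        namespace Acme.Azure.ServiceBus.Poc.UN.Services
        {
            // Referenced from the serviceCredentials/userNameAuthentication element in web.config
            // (userNamePasswordValidationMode="Custom", customUserNamePasswordValidatorType="...").
            public class MyUserNameValidator : UserNamePasswordValidator
            {
                public override void Validate(string userName, string password)
                {
                    // Placeholder check - the post validates against an on-premise store
                    // (Active Directory, ASP.NET membership, a database, or something simpler).
                    if (userName != "testuser" || password != "testpassword")
                    {
                        throw new FaultException("Unknown username or incorrect password");
                    }
                }
            }
        }

    On the client side this pairs with setting proxy.ClientCredentials.UserName.UserName and .Password before invoking the service, which is what the last client snippet described above does.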


  • Create Auto Customization Criteria OAF Search Page

    - by PRajkumar
    1. Create a New Workspace and Project
    Right click Workspaces and click Create New OA Workspace and name it PRajkumarCustSearch. A new OA Project will also be created automatically. Name the project CustSearchDemo and the package prajkumar.oracle.apps.fnd.custsearchdemo

    2. Create a New Application Module (AM)
    Right click on CustSearchDemo > New > ADF Business Components > Application Module
    Name -- CustSearchAM
    Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server

    3. Enable Passivation for the Root UI Application Module (AM)
    Right click on CustSearchAM > Edit CustSearchAM > Custom Properties
    Name – RETENTION_LEVEL
    Value – MANAGE_STATE
    Click Add > Apply > OK

    4. Create a Test Table and Insert Some Data in It (For Testing Purposes)

    CREATE TABLE xx_custsearch_demo
    (
      -- Data Columns
      column1              VARCHAR2(100),
      column2              VARCHAR2(100),
      column3              VARCHAR2(100),
      column4              VARCHAR2(100),
      -- Who Columns
      last_update_date     DATE      NOT NULL,
      last_updated_by      NUMBER    NOT NULL,
      creation_date        DATE      NOT NULL,
      created_by           NUMBER    NOT NULL,
      last_update_login    NUMBER
    );

    INSERT INTO xx_custsearch_demo VALUES('v1','v2','v3','v4',SYSDATE,0,SYSDATE,0,0);
    INSERT INTO xx_custsearch_demo VALUES('v1','v3','v4','v5',SYSDATE,0,SYSDATE,0,0);
    INSERT INTO xx_custsearch_demo VALUES('v2','v3','v4','v5',SYSDATE,0,SYSDATE,0,0);
    INSERT INTO xx_custsearch_demo VALUES('v3','v4','v5','v6',SYSDATE,0,SYSDATE,0,0);

    Now we have 4 records in our custom table.

    5. Create a New Entity Object (EO)
    Right click on CustSearchDemo > New > ADF Business Components > Entity Object
    Name – CustSearchEO
    Package -- prajkumar.oracle.apps.fnd.custsearchdemo.schema.server
    Database Objects -- XX_CUSTSEARCH_DEMO
    Note – By default ROWID will be the primary key if we do not make any column the primary key.
    Check the Accessors, Create Method, Validation Method and Remove Method.

    6. Create a New View Object (VO)
    Right click on CustSearchDemo > New > ADF Business Components > View Object
    Name -- CustSearchVO
    Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server
    In Step 2, on the Entity page, select CustSearchEO and shuttle it to the selected list.
    In Step 3, in the Attributes window, select columns Column1, Column2, Column3, Column4, and shuttle them to the selected list.
    On the Java page, deselect "Generate Java File for View Object Class: CustSearchVOImpl" and select "Generate Java File for View Row Class: CustSearchVORowImpl".

    7. Add Your View Object to the Root UI Application Module
    Right click on CustSearchAM > Application Modules > Data Model
    Select CustSearchVO and shuttle it to the Data Model list.

    8. Create a New Page
    Right click on CustSearchDemo > New > Web Tier > OA Components > Page
    Name -- CustSearchPG
    Package -- prajkumar.oracle.apps.fnd.custsearchdemo.webui

    9. Select the CustSearchPG and go to the structure pane, where a default region has been created.

    10. Select region1 and set the following properties:
    ID -- PageLayoutRN
    Region Style -- PageLayout
    AM Definition -- prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM
    Window Title – AutoCustomization Search Page
    Auto Footer -- True
    11. Add a Query Bean to Your Page
    Right click on PageLayoutRN > New > Region
    Select the new region region1 and set the following properties:
    ID – QueryRN
    Region Style – query
    Construction Mode – autoCustomizationCriteria
    Include Simple Panel – False
    Include Views Panel – False
    Include Advanced Panel – False

    12. Create a New Region of Style Table
    Right click on QueryRN > New > Region Using Wizard
    Application Module – prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM
    Available View Usages – CustSearchVO1
    In Step 2, in Region Properties, set the following properties:
    Region ID – CustSearchTable
    Region Style – Table
    In Step 3, in View Attributes, shuttle all the items (Column1, Column2, Column3, Column4) available in "Available View Attributes" to "Selected View Attributes".
    In Step 4, on the Region Items page, set the style to "messageStyledText" for all items.

    13. Select CustSearchTable in the Structure panel and set the Width property to 100%.

    14. Include a Simple Search Panel
    Right click on QueryRN > New > simpleSearchPanel
    Automatically region2 (header region) and region1 (messageComponentLayout region) are created.
    Set the following properties for region2:
    Id – SimpleSearchHeader
    Text -- Simple Search

    15. Now right click on the messageComponentLayout region (SimpleSearchMappings) and create two message text input beans, and set the below properties on each:
    MessageTextInputBean1
    Id – SearchColumn1
    Search Allowed – True
    Data Type – VARCHAR2
    Maximum Length –
    CSS Class – OraFieldText
    Prompt – Column1
    MessageTextInputBean2
    Id – SearchColumn2
    Search Allowed -- True
    Data Type – VARCHAR2
    Maximum Length – 100
    CSS Class – OraFieldText
    Prompt – Column2

    16. Now right click on Query Components and create Simple Search Mappings. SimpleSearchMappings and QueryCriteriaMap1 are then created automatically.

    17. Now select QueryCriteriaMap1 and set the below properties:
    Id – SearchColumn1Map
    Search Item – SearchColumn1
    Result Item – Column1

    18. Now again right click on simpleSearchMappings -> New -> queryCriteriaMap, and then set the below properties:
    Id – SearchColumn2Map
    Search Item – SearchColumn2
    Result Item – Column2

    19. Congratulations, you have successfully finished the Auto Customization Search page. Run your CustSearchPG page and test your work.


  • Benefits of Behavior Driven Development

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/07/26/benefits-of-behavior-driven-development.aspx

    Continuing my previous article on BDD, I wanted to point out some benefits of BDD, and since BDD is an extension of Test Driven Development (TDD), you get those as well. I’ll add another article on some possible downsides of this approach. There are many articles about the benefits of TDD and they apply to BDD. I’ve pointed out some here and copied some of the main points from each article, but there are many more, including the book The Art of Unit Testing by Roy Osherove.

    http://geekswithblogs.net/leesblog/archive/2008/04/30/the-benefits-of-test-driven-development.aspx (Lee Brandt):
    - Stability
    - Accountability
    - Design Ability
    - Separated Concerns
    - Progress Indicator

    http://tddftw.com/benefits-of-tdd/:
    - Help maintainers understand the intention behind the code
    - Bring validation and proper data handling concerns to the forefront
    - Writing the tests first is fun
    - Better APIs come from writing testable code
    - TDD will make you a better developer

    http://www.slideshare.net/dhelper/benefit-from-unit-testing-in-the-real-world (from Typemock). Take a look at the slides, especially the extra time required for TDD (slide 10) and the next one on the bugs avoided using TDD (slide 11):
    - Less bugs (slide 11)
    - About testing and development (13)
    - Increase confidence in code (14)
    - Fearlessly change your code (14)
    - Document Requirements (14); also see http://visualstudiomagazine.com/articles/2013/06/01/roc-rocks.aspx
    - Discover usability issues early (14)

    All these points and articles are great and there are many more. The following are my additions to the benefits of BDD from using it in real projects for my company. See also: "Behavior-Driven Design with SpecFlow" (July 2013 on MSDN); Scott Allen did a very informative TDD and MVC module, but to me he is doing BDD; and "Compile and Execute Requirements in Microsoft .NET", a video from TechEd 2012.

    Communication: I was working through a complicated task where the decision tree kept growing. After writing out the Given, When, Then of the scenario, I was able to tell QA what I had worked through for their initial test cases. They were able to add from there. It is also useful to use this language with other developers, managers, or clients to help make informed decisions on whether something meets the requirements or whether it can be simplified to save time (money).

    Thinking through solutions, before starting to code: This was the biggest benefit to me. I like to jump into coding to figure out the problem. Many times I don't understand my path well enough and have to do some parts over. A past supervisor told me several times during reviews that I need to get better at seeing "the forest for the trees". When I sit down and write out the behavior that I need to implement, I force myself to think things out further and catch scenarios before they get to QA. A co-worker who is new to BDD (we’ve been using it in our new project for the last 6 months) said “It really clarifies things”. It took him a while to understand it all, but now he’s seeing the value of this approach (yes there are some downsides, but that is a different issue).

    Developers’ Confidence: This is huge for me. With tests in place, my confidence grows that I won’t break code that I’m not directly changing. In the past, I’ve worked on projects without tests and we would frequently find regression bugs (or worse, the users would find them). That isn’t fun.
    We don’t catch all problems with the tests, but when QA catches one, I can write a test to make sure it doesn’t happen again. It’s also good for releasing code, telling your manager that it’s good to go. As time goes on and the code gets older, how confident are you that checking in code won’t break something somewhere else?

    Merging code - pre-release confidence: If you’re merging code a lot, it’s nice to have the tests to help ensure you didn’t merge incorrectly.

    Interrupted work: I had a task that I started and planned out, then was interrupted for a month because of different priorities. When I started it up again and un-shelved my changes, I had the BDD specs and they helped me remember what I had figured out and what was left to do. It would have been much more difficult without the specs and tests.

    Testing and verifying complicated scenarios: Sometimes in the UI there are scenarios that get tricky, because there are a lot of steps involved (click here to open the dialog, enter the information, make sure it’s valid, when I click cancel it should do {x}, when I click ok it should close and do {y}, then do this, etc.). With BDD I can avoid some of the mouse clicking, define the scenarios, and have them re-run quickly, without using a mouse. UI testing is still needed, but this helps a bunch. The same can be true for tricky server logic.

    Documentation of Assumptions and Specifications: The BDD spec tests (Jasmine or SpecFlow or another tool) also work as documentation and show what the original developer was trying to accomplish. It’s not a separate Word document, so developers will keep this up to date, instead of letting it become obsolete. What happens if you leave the project (consulting, new job, etc.) with no specs or, at the least, good comments in the code? Sometimes I think of a new scenario, so I add a failing spec and continue in the same stream of thought (so I don’t forget it because it was on a piece of paper or in a notepad). Then later I can come back, handle it, and have it documented.

    Jasmine tests and JavaScript -> help deal with the non-typed system: I like JavaScript, but I also dislike working with JavaScript. I miss C# telling me if a property doesn’t actually exist at build time. I like the idea of TypeScript and hope to use it more in the future. I also use KnockoutJs, which has observables that need to be called with a trailing (), since the observable is a function. It’s hard to remember when to use () or not, and the Jasmine specs/tests help ensure the correct usage.

    This should give you an idea of the benefits that I see in using the BDD approach. I’m sure there are more. It takes a lot of practice, investment and experimentation to figure out how to approach this and to get comfortable with it. I agree with Scott Allen in the video I linked above: “Remember that TDD can take some practice. So if you're not doing test-driven design right now? You can start and practice and get better. And you'll reach a point where you'll never want to get back.”
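    The post talks about writing out the Given, When, Then of a scenario but doesn't include one, so here is a small illustrative sketch of what that can look like with SpecFlow. The feature wording, step bindings and placeholder logic are invented for illustration; they are not taken from the author's project.

        // A scenario as it might appear in a .feature file:
        //
        //   Scenario: Valid order is accepted
        //     Given a customer with a valid account
        //     When the customer submits an order for 3 items
        //     Then the order is accepted

        using TechTalk.SpecFlow;
        using Xunit;   // any supported test framework works; SpecFlow generates the test class

        [Binding]
        public class OrderSteps
        {
            private bool _accountValid;
            private bool _orderAccepted;

            [Given(@"a customer with a valid account")]
            public void GivenACustomerWithAValidAccount()
            {
                _accountValid = true;                              // arrange
            }

            [When(@"the customer submits an order for (\d+) items")]
            public void WhenTheCustomerSubmitsAnOrder(int itemCount)
            {
                _orderAccepted = _accountValid && itemCount > 0;   // act (placeholder logic)
            }

            [Then(@"the order is accepted")]
            public void ThenTheOrderIsAccepted()
            {
                Assert.True(_orderAccepted);                       // assert
            }
        }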


  • Oracle B2B - Synchronous Request Reply

    - by cdwright
    Introduction: So first off, let me say I didn't create this demo (although I did modify it some). I got it from a member of the B2B development technical staff. Since it came with only a simple readme file, I thought I would take some time and write a more detailed explanation about how it works. Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request reply over http using the b2b/syncreceiver servlet. I’m attaching the demo to this blog, which includes a SOA composite archive that needs to be deployed using JDeveloper, a B2B repository with two agreements that need to be deployed using the B2B console, and a test xml file that gets sent to the b2b/syncreceiver servlet using your favorite SOAP test tool (I'm using Firefox Poster here). You can download the zip file containing the demo here.

    The demo works by sending the sample xml request file (req.xml) to http://<b2bhost>:8001/b2b/syncreceiver using the SOAP test tool. The syncreceiver servlet keeps the socket connection open between itself and the test tool so that it can synchronously send the reply message back. When B2B receives the inbound request message, it is passed to the SOA composite through the default B2B Fabric binding. A simple reply is created in BPEL and returned to B2B, which then sends the message back to the test tool using that same socket connection. I’ll show you the B2B configuration first, then we’ll look at the SOA composite.

    Configuring B2B: No additional configuration is necessary in order to use the syncreceiver servlet. It is already running when you start SOA. After importing the GC_SyncReqRep.zip repository file into B2B, you’ll have the typical GlobalChips host trading partner and the Acme remote trading partner.

    Document Management: The repository contains two very simple custom XML document definitions called Orders and OrdersResponse. In order to determine the trading partner agreement needed to process the inbound Orders document, you need to know two things about it: what it is and where it came from. So let’s look at how B2B identifies the appropriate document definition for the message. The XSDs for these two document definitions themselves are not particularly interesting. Whenever you're dealing with custom XML documents, B2B identifies the appropriate document definition for each XML message using an XPath Identification Expression. The expression is entered for each of these document definitions under the document administration tab in the B2B console. The full XPath expression for the Orders document is //*[local-name()='shiporder']/*[local-name()='shipto']/*[local-name()='name']/text(). You can see this path in the XSD diagram below and how it uniquely identifies this message. The OrdersResponse document is identified in the same way. The XPath expression for it is //*[local-name()='Response']/*[local-name()='Status']/text(). You can see how its path differs, uniquely identifying the reply from the request.

    Trading Partner Profile: The trading partner profiles are very simple too. For GlobalChips, a generic identifier is being used to identify the sender of the response document using the host trading partner name. For Acme, a generic identifier is also being used to identify the sender of the inbound request using the remote trading partner name. The document types are added for the remote trading partner as usual. So the remote trading partner Acme is the sender of the Orders document, and it is the receiver of the OrdersResponse document.
    For the remote trading partner only, there needs to be a dummy channel which gets used in the outbound response agreement. The channel is not actually used; it is just a necessary placeholder that needs to be there when creating the agreement.

    Trading Partner Agreement: The agreements are equally simple. There is no validation, and translation is not an option for a custom XML document type. For the InboundAgreement (request) the document definition is set to OrdersDef. In the Agreement Parameters section the generic identifiers have been added for the host and remote trading partners. That’s all that is needed for the inbound transaction. For the OutboundAgreement (response), the document definition is set to OrdersResponseDef and the generic identifiers for the two trading partners are added. The remote trading partner dummy delivery channel is also added to the agreement.

    SOA Composite: Import the SOA composite archive into JDeveloper as an EJB JAR file. Open the composite and you should have a project that looks like this. In the composite, open the b2bInboundSyncSvc exposed service and advance through the setup wizard. Select your Application Server Connection and advance to the Operations window. Notice here that the B2B binding is set to Receive. It is not set for Synchronous Request Reply. Continue advancing through the wizard as you normally would and select Finish at the end. Now open BPELProcess1 in the composite. The BPEL process is set as a Synchronous Request Reply, as you can see below. The while loop is there just to give the process something to do. The actual reply message is prepared in the assignResponseValues assignment, followed by an Invoke of the B2B binding. Open the replyResponse Invoke and go to the Properties tab. You’ll see that the fromTradingPartnerId, toTradingPartner, documentTypeName, and documentProtocolRevision properties have been set.

    Testing the Configuration: To test the configuration, I used Firefox Poster. Enter the URL for the b2b/syncreceiver servlet and browse for the req.xml file that contains the test request message. In the Headers tab, add the property ‘from’ and give it the value ‘Acme’. This is how B2B will know where the message is coming from, and it will use that information along with the document type name to find the right trading partner agreement. Now post the message. You should get back a response with a status of ‘200 OK’. That’s all there is to it.
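    If you prefer to script the test instead of using Firefox Poster, the same request can be posted from a few lines of C#. This is only a sketch of the steps described above; the host name is a placeholder, and the text/xml content type is an assumption since the post does not say which one Poster used.

        using System;
        using System.IO;
        using System.Net;

        class SyncReceiverTest
        {
            static void Main()
            {
                // Placeholder host - replace with your SOA server.
                var request = (HttpWebRequest)WebRequest.Create("http://b2bhost:8001/b2b/syncreceiver");
                request.Method = "POST";
                request.ContentType = "text/xml";             // assumed; adjust if your setup differs
                request.Headers.Add("from", "Acme");          // tells B2B which trading partner sent it

                byte[] body = File.ReadAllBytes("req.xml");   // the sample request from the demo zip
                request.ContentLength = body.Length;
                using (Stream s = request.GetRequestStream())
                    s.Write(body, 0, body.Length);

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine("Status: " + (int)response.StatusCode);  // expect 200
                    Console.WriteLine(reader.ReadToEnd());                     // the synchronous reply from BPEL
                }
            }
        }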


  • Why isn't the Spring AOP XML schema properly loaded when Tomcat loads & reads beans.xml

    - by chrisbunney
    I'm trying to use Spring's Schema Based AOP Support in Eclipse and am getting errors when trying to load the configuration in Tomcat. There are no errors in Eclipse and auto-complete works correctly for the aop namespace; however, when I try to load the project in Tomcat I get this error:

    09:17:59,515 WARN XmlBeanDefinitionReader:47 - Ignored XML validation warning
    org.xml.sax.SAXParseException: schema_reference.4: Failed to read schema document 'http://www.springframework.org/schema/aop/spring-aop-2.5.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.

    Followed by:

    SEVERE: StandardWrapper.Throwable
    org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 39 in XML document from /WEB-INF/beans.xml is invalid; nested exception is org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'aop:config'.
    Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'aop:config'.

    Based on this, it seems the schema is not being read when Tomcat parses the beans.xml file, leading to the <aop:config> element not being recognised. My beans.xml file is as follows:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:jaxws="http://cxf.apache.org/jaxws"
           xmlns:aop="http://www.springframework.org/schema/aop"
           xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                               http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd
                               http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd">

        <!--import resource="classpath:META-INF/cxf/cxf.xml" /-->
        <!--import resource="classpath:META-INF/cxf/cxf-extension-soap.xml" /-->
        <!--import resource="classpath:META-INF/cxf/cxf-servlet.xml" /-->

        <!-- NOTE: endpointName attribute maps to wsdl:port@name & should be the same as the portName attribute in the @WebService annotation on the IWebServiceImpl class -->
        <!-- NOTE: serviceName attribute maps to wsdl:service@name & should be the same as the serviceName attribute in the @WebService annotation on the ASDIWebServiceImpl class -->
        <!-- NOTE: address attribute is the actual URL of the web service (relative to web app location) -->
        <jaxws:endpoint xmlns:tns="http://iwebservices.ourdomain/"
                        id="iwebservices"
                        implementor="ourdomain.iwebservices.IWebServiceImpl"
                        endpointName="tns:IWebServiceImplPort"
                        serviceName="tns:IWebService"
                        address="/I"
                        wsdlLocation="wsdl/I.wsdl">
            <!-- To have CXF auto-generate WSDL on the fly, comment out the above wsdl attribute -->
            <jaxws:features>
                <bean class="org.apache.cxf.feature.LoggingFeature" />
            </jaxws:features>
        </jaxws:endpoint>

        <aop:config>
            <aop:aspect id="myAspect" ref="aBean">
            </aop:aspect>
        </aop:config>

    </beans>

    The <aop:config> element in my beans.xml file is copy-pasted from the Spring website to try and remove any possible source of error. Can anyone shed any light on why this error is occurring and what I can do to fix it?


  • WPF Styles and Tooltips Question

    - by A.R.
    I have a style that I am using to make dynamic tooltips on certain text boxes like so. <Style TargetType="{x:Type TextBox}"> <Setter Property="MinWidth" Value="100"/> <Style.Triggers> <Trigger Property="Validation.HasError" Value="True"> <!-- item of interest --> <Setter Property="ToolTip"> <Setter.Value> <MultiBinding Converter="{StaticResource ErrorMessageConverter}"> <Binding RelativeSource="{RelativeSource Self}" Path="Tag"/> </MultiBinding> </Setter.Value> </Setter> </Trigger> </Style.Triggers> </Style> This works very well, but if I want to use a more complex tooltip I can't figure out how to bind to 'Tag' anymore for the converter value. For example; ... <Setter Property="ToolTip"> <Setter.Value> <StackPanel> <TextBlock> <TextBlock.Text> <MultiBinding Converter="{StaticResource ErrorMessageConverter}"> <!-- item of interest --> <Binding RelativeSource=" what goes here?? "/> </MultiBinding> </TextBlock.Text> </TextBlock> <Image/> </StackPanel> </Setter.Value> </Setter> ... I have tried several flavors of 'FindAncestor' and what not for the relative source, but I can't get anything to work. Any ideas?? UPDATE: 12-29-2010 : Here is the correct code, answer provided by our friend Goblin below. Works perfectly! ... <Setter Property="ToolTip"> <Setter.Value> <!-- Item of interest --> <ToolTip DataContext="{Binding Path=PlacementTarget, RelativeSource={x:Static RelativeSource.Self}}"> <StackPanel> <Image/> <TextBlock> <TextBlock.Text> <MultiBinding Converter="{StaticResource ErrorMessageConverter}"> <Binding Path="Tag"/> </MultiBinding> </TextBlock.Text> </TextBlock> </StackPanel> </ToolTip> </Setter.Value> </Setter> ...


  • "Content is not allowed in prolog" when parsing perfectly valid XML on GAE

    - by Adrian Petrescu
    Hey guys, I've been beating my head against this absolutely infuriating bug for the last 48 hours, so I thought I'd finally throw in the towel and try asking here before I throw my laptop out the window. I'm trying to parse the response XML from a call I made to AWS SimpleDB. The response is coming back on the wire just fine; for example, it may look like: <?xml version="1.0" encoding="utf-8"?> <ListDomainsResponse xmlns="http://sdb.amazonaws.com/doc/2009-04-15/"> <ListDomainsResult> <DomainName>Audio</DomainName> <DomainName>Course</DomainName> <DomainName>DocumentContents</DomainName> <DomainName>LectureSet</DomainName> <DomainName>MetaData</DomainName> <DomainName>Professors</DomainName> <DomainName>Tag</DomainName> </ListDomainsResult> <ResponseMetadata> <RequestId>42330b4a-e134-6aec-e62a-5869ac2b4575</RequestId> <BoxUsage>0.0000071759</BoxUsage> </ResponseMetadata> </ListDomainsResponse> I pass in this XML to a parser with XMLEventReader eventReader = xmlInputFactory.createXMLEventReader(response.getContent()); and call eventReader.nextEvent(); a bunch of times to get the data I want. Here's the bizarre part -- it works great inside the local server. The response comes in, I parse it, everyone's happy. The problem is that when I deploy the code to Google App Engine, the outgoing request still works, and the response XML seems 100% identical and correct to me, but the response fails to parse with the following exception: com.amazonaws.http.HttpClient handleResponse: Unable to unmarshall response (ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog.): <?xml version="1.0" encoding="utf-8"?> <ListDomainsResponse xmlns="http://sdb.amazonaws.com/doc/2009-04-15/"><ListDomainsResult><DomainName>Audio</DomainName><DomainName>Course</DomainName><DomainName>DocumentContents</DomainName><DomainName>LectureSet</DomainName><DomainName>MetaData</DomainName><DomainName>Professors</DomainName><DomainName>Tag</DomainName></ListDomainsResult><ResponseMetadata><RequestId>42330b4a-e134-6aec-e62a-5869ac2b4575</RequestId><BoxUsage>0.0000071759</BoxUsage></ResponseMetadata></ListDomainsResponse> javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(Unknown Source) at com.sun.xml.internal.stream.XMLEventReaderImpl.nextEvent(Unknown Source) at com.amazonaws.transform.StaxUnmarshallerContext.nextEvent(StaxUnmarshallerContext.java:153) ... (rest of lines omitted) I have double, triple, quadruple checked this XML for 'invisible characters' or non-UTF8 encoded characters, etc. I looked at it byte-by-byte in an array for byte-order-marks or something of that nature. Nothing; it passes every validation test I could throw at it. Even stranger, it happens if I use a Saxon-based parser as well -- but ONLY on GAE, it always works fine in my local environment. It makes it very hard to trace the code for problems when I can only run the debugger on an environment that works perfectly (I haven't found any good way to remotely debug on GAE). Nevertheless, using the primitive means I have, I've tried a million approaches including: XML with and without the prolog With and without newlines With and without the "encoding=" attribute in the prolog Both newline styles With and without the chunking information present in the HTTP stream And I've tried most of these in multiple combinations where it made sense they would interact -- nothing! I'm at my wit's end. 
Has anyone seen an issue like this before that can hopefully shed some light on it? Thanks!


  • Visual Basic Cryptography Question

    - by Glenn Sullivan
    I am trying to mimic the results of some C code that uses the OpenSSL library using the system.security.crytography library in the .net 3.5 world, and I can't seem to get it right. I need some help... part of the issue is my understanding of crytography in general. Here's what is supposed to happen: I send a request for authentication to a device. It returns a challenge digest, which I then need to sign with a known key and return The device returns a "success" or "Fail" message. I have the following code snippet that I am trying to "copy": //Seed the PRNG //Cheating here - the PRNG will be seeded when we create a key pair //The key pair is discarded only doing this to seed the PRNG. DSA *temp_dsa = DSA_new(); if(!temp_dsa) { printf("Error: The client had an error with the DSA API\n"); exit(0); } unsigned char seed[20] = "Our Super Secret Key"; temp_dsa = DSA_generate_parameters(128, seed, sizeof(seed), NULL, NULL, NULL, NULL); DSA_free(temp_dsa); //A pointer to the private key. p = (unsigned char *)&priv_key; //Create and allocate a DSA structure from the private key. DSA *priv_dsa = NULL; priv_dsa = d2i_DSAPrivateKey(NULL, &p, sizeof(priv_key)); if(!priv_dsa) { printf("Error: The client had an error with the DSA API\n"); exit(0); } //Allocate memory for the to be computed signature. sigret = OPENSSL_malloc(DSA_size(priv_dsa)); //Sign the challenge digest recieved from the ISC. retval = DSA_sign(0, pResp->data, pResp->data_length, sigret, &siglen, priv_dsa); A few more bits of information: priv_key is a 252 element character array of hex characters that is included. The end result is a 512 (or less) array of characters to send back for validation to the device. Rasmus asked to see the key array. Here it is: unsigned char priv_key[] = {0x30, 0x81, 0xf9, 0x02, 0x01, 0x00, 0x02, 0x41, 0x00, 0xfe, 0xca, 0x97, 0x55, 0x1f, 0xc0, 0xb7, 0x1f, 0xad, 0xf0, 0x93, 0xec, 0x4b, 0x31, 0x94, 0x78, 0x86, 0x82, 0x1b, 0xab, 0xc4, 0x9e, 0x5c, 0x40, 0xd9, 0x89, 0x7d, 0xde, 0x43, 0x38, 0x06, 0x4f, 0x1b, 0x2b, 0xef, 0x5c, 0xb7, 0xff, 0x21, 0xb1, 0x11, 0xe6, 0x9a, 0x81, 0x9a, 0x2b, 0xef, 0x3a, 0xbb, 0x5c, 0xea, 0x76, 0xae, 0x3a, 0x8b, 0x92, 0xd2, 0x7c, 0xf1, 0x89, 0x8e, 0x4d, 0x3f, 0x0d, 0x02, 0x15, 0x00, 0x88, 0x16, 0x1b, 0xf5, 0xda, 0x43, 0xee, 0x4b, 0x58, 0xbb, 0x93, 0xea, 0x4e, 0x2b, 0xda, 0xb9, 0x17, 0xd1, 0xff, 0x21, 0x02, 0x41, 0x00, 0xf6, 0xbb, 0x45, 0xea, 0xda, 0x72, 0x39, 0x4f, 0xc1, 0xdd, 0x02, 0xb4, 0xf3, 0xaa, 0xe5, 0xe2, 0x76, 0xc7, 0xdc, 0x34, 0xb2, 0x0a, 0xd8, 0x69, 0x63, 0xc3, 0x40, 0x2c, 0x58, 0xea, 0xa6, 0xbd, 0x24, 0x8b, 0x6b, 0xaa, 0x4b, 0x41, 0xfc, 0x5f, 0x21, 0x02, 0x3c, 0x27, 0xa9, 0xc7, 0x7a, 0xc8, 0x59, 0xcd, 0x5b, 0xdd, 0x6c, 0x44, 0x48, 0x86, 0xd1, 0x34, 0x46, 0xb0, 0x89, 0x55, 0x50, 0x87, 0x02, 0x41, 0x00, 0x80, 0x29, 0xc6, 0x4a, 0x08, 0x3e, 0x30, 0x54, 0x71, 0x9b, 0x95, 0x49, 0x55, 0x17, 0x70, 0xc7, 0x96, 0x65, 0xc8, 0xc2, 0xe2, 0x8a, 0xe0, 0x5d, 0x9f, 0xe4, 0xb2, 0x1f, 0x20, 0x83, 0x70, 0xbc, 0x88, 0x36, 0x03, 0x29, 0x59, 0xcd, 0xc7, 0xcd, 0xd9, 0x4a, 0xa8, 0x65, 0x24, 0x6a, 0x77, 0x8a, 0x10, 0x88, 0x0d, 0x2f, 0x15, 0x4b, 0xbe, 0xba, 0x13, 0x23, 0xa1, 0x73, 0xa3, 0x04, 0x37, 0xc9, 0x02, 0x14, 0x06, 0x8e, 0xc1, 0x41, 0x40, 0xf1, 0xf6, 0xe1, 0xfa, 0xfb, 0x64, 0x28, 0x02, 0x15, 0xce, 0x47, 0xaa, 0xce, 0x6e, 0xfe}; Can anyone help me translate this code to it's VB.net crypto equivalent? TIA, Glenn


  • How to write a test for accounts controller for forms authenticate

    - by Anil Ali
    Trying to figure out how to adequately test my accounts controller. I am having problem testing the successful logon scenario. Issue 1) Am I missing any other tests.(I am testing the model validation attributes separately) Issue 2) Put_ReturnsOverviewRedirectToRouteResultIfLogonSuccessAndNoReturnUrlGiven() and Put_ReturnsRedirectResultIfLogonSuccessAndReturnUrlGiven() test are not passing. I have narrowed it down to the line where i am calling _membership.validateuser(). Even though during my mock setup of the service i am stating that i want to return true whenever validateuser is called, the method call returns false. Here is what I have gotten so far AccountController.cs [HandleError] public class AccountController : Controller { private IMembershipService _membershipService; public AccountController() : this(null) { } public AccountController(IMembershipService membershipService) { _membershipService = membershipService ?? new AccountMembershipService(); } [HttpGet] public ActionResult LogOn() { return View(); } [HttpPost] public ActionResult LogOn(LogOnModel model, string returnUrl) { if (ModelState.IsValid) { if (_membershipService.ValidateUser(model.UserName,model.Password)) { if (!String.IsNullOrEmpty(returnUrl)) { return Redirect(returnUrl); } return RedirectToAction("Index", "Overview"); } ModelState.AddModelError("*", "The user name or password provided is incorrect."); } return View(model); } } AccountServices.cs public interface IMembershipService { bool ValidateUser(string userName, string password); } public class AccountMembershipService : IMembershipService { public bool ValidateUser(string userName, string password) { throw new System.NotImplementedException(); } } AccountControllerFacts.cs public class AccountControllerFacts { public static AccountController GetAccountControllerForLogonSuccess() { var membershipServiceStub = MockRepository.GenerateStub<IMembershipService>(); var controller = new AccountController(membershipServiceStub); membershipServiceStub .Stub(x => x.ValidateUser("someuser", "somepass")) .Return(true); return controller; } public static AccountController GetAccountControllerForLogonFailure() { var membershipServiceStub = MockRepository.GenerateStub<IMembershipService>(); var controller = new AccountController(membershipServiceStub); membershipServiceStub .Stub(x => x.ValidateUser("someuser", "somepass")) .Return(false); return controller; } public class LogOn { [Fact] public void Get_ReturnsViewResultWithDefaultViewName() { // Arrange var controller = GetAccountControllerForLogonSuccess(); // Act var result = controller.LogOn(); // Assert Assert.IsType<ViewResult>(result); Assert.Empty(((ViewResult)result).ViewName); } [Fact] public void Put_ReturnsOverviewRedirectToRouteResultIfLogonSuccessAndNoReturnUrlGiven() { // Arrange var controller = GetAccountControllerForLogonSuccess(); var user = new LogOnModel(); // Act var result = controller.LogOn(user, null); var redirectresult = (RedirectToRouteResult) result; // Assert Assert.IsType<RedirectToRouteResult>(result); Assert.Equal("Overview", redirectresult.RouteValues["controller"]); Assert.Equal("Index", redirectresult.RouteValues["action"]); } [Fact] public void Put_ReturnsRedirectResultIfLogonSuccessAndReturnUrlGiven() { // Arrange var controller = GetAccountControllerForLogonSuccess(); var user = new LogOnModel(); // Act var result = controller.LogOn(user, "someurl"); var redirectResult = (RedirectResult) result; // Assert Assert.IsType<RedirectResult>(result); Assert.Equal("someurl", 
redirectResult.Url); } [Fact] public void Put_ReturnsViewIfInvalidModelState() { // Arrange var controller = GetAccountControllerForLogonFailure(); var user = new LogOnModel(); controller.ModelState.AddModelError("*","Invalid model state."); // Act var result = controller.LogOn(user, "someurl"); var viewResult = (ViewResult) result; // Assert Assert.IsType<ViewResult>(result); Assert.Empty(viewResult.ViewName); Assert.Same(user,viewResult.ViewData.Model); } [Fact] public void Put_ReturnsViewIfLogonFailed() { // Arrange var controller = GetAccountControllerForLogonFailure(); var user = new LogOnModel(); // Act var result = controller.LogOn(user, "someurl"); var viewResult = (ViewResult) result; // Assert Assert.IsType<ViewResult>(result); Assert.Empty(viewResult.ViewName); Assert.Same(user,viewResult.ViewData.Model); Assert.Equal(false,viewResult.ViewData.ModelState.IsValid); } } }
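    One observation from the code above, offered as a guess rather than a confirmed fix: the Rhino Mocks stub is constrained to the exact arguments "someuser"/"somepass", but the failing tests pass a new LogOnModel() whose UserName and Password are null, so the constrained stub never matches and ValidateUser falls back to returning the default false. A sketch of an argument-agnostic setup, assuming the Arg<T> syntax is available in the Rhino Mocks version being used:

        public static AccountController GetAccountControllerForLogonSuccess()
        {
            var membershipServiceStub = MockRepository.GenerateStub<IMembershipService>();
            var controller = new AccountController(membershipServiceStub);

            // Match any username/password instead of the literal "someuser"/"somepass",
            // or alternatively populate the LogOnModel in the tests with those exact values.
            membershipServiceStub
                .Stub(x => x.ValidateUser(Arg<string>.Is.Anything, Arg<string>.Is.Anything))
                .Return(true);

            return controller;
        }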


  • Connection Reset on MySQL query

    - by sunwukung
    OK, I'm flummoxed. I'm trying to execute a query on a database (locally) and I keep getting a connection reset error. I've been using the method below in a generic DAO class to build a query string and pass to Zend_Db API. public function insert($params) { $loop = false; $keys = $values = ''; foreach($params as $k => $v){ if($loop == true){ $keys .= ','; $values .= ','; } $keys .= $this->db->quoteIdentifier($k); $values .= $this->db->quote($v); $loop = true; } $sql = "INSERT INTO " . $this->table_name . " ($keys) VALUES ($values)"; //formatResult returns an array of info regarding the status and any result sets of the query //I've commented that method call out anyway, so I don't think it's that try { $this->db->query($sql); return $this->formatResult(array( true, 'New record inserted into: '.$this->table_name )); }catch(PDOException $e) { return $this->formatResult($e); } } So far, this has worked fine - the errors have been occurring since we generated new tables to record user input. The insert string looks like this: INSERT INTO tablename(`id`,`title`,`summary`,`description`,`keywords`,`type_id`,`categories`) VALUES ('5539','Sample Title','Sample content',' \'Lorem ipsum dolor sit amet, consectetur adipiscing elit. In et pellentesque mauris. Curabitur hendrerit, leo id ultrices pellentesque, est purus mattis ligula, vitae imperdiet neque ligula bibendum sapien. Curabitur aliquet nisi et odio pharetra tincidunt. Phasellus sed iaculis nisl. Fusce commodo mauris et purus vehicula dictum. Nulla feugiat molestie accumsan. Donec fermentum libero in risus tempus elementum aliquam et magna. Fusce vitae sem metus. Aenean commodo pharetra risus, nec pellentesque augue ullamcorper nec. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Nullam vel elit libero. Vestibulum in turpis nunc.\'','this,is,a,sample,array',1,'category title') You'll probably notice the big chunk of whitespace before the Lorem Ipsum string. The description field is being populated from a TinyMCE textarea - I'm guessing it's chucking in some line returns, so I've tried stripping those out. However, even if I disable the TinyMCE field, the reset error still occurs. The next port of call was checking the limits on the table, since it seems to insert if the length of "description" is around the 300 mark (it varies between 310 - 330). The field limit is set to VARCHAR(1500) and the validation on this field won't allow anything past bigger than 1200 with HTML, 800 without. The real kicker is that if I take this sql string and execute it via the command line, it works fine - so I can't for the life of me figure out what's wrong. So, in a nutshell, I'm stumped. Any ideas?


  • Question about how to use strong typed dataset in N-tier application for .NET

    - by sb
    Hello All, I need some expert advice on strong typed data sets in ADO.NET that are generated by the Visual Studio. Here are the details. Thank you in advance. I want to write a N-tier application where Presentation layer is in C#/windows forms, Business Layer is a Web service and Data Access Layer is SQL db. So, I used Visual Studio 2005 for this and created 3 projects in a solution. project 1 is the Data access layer. In this I have used visual studio data set generator to create a strong typed data set and table adapter (to test I created this on the customers table in northwind). The data set is called NorthWindDataSet and the table inside is CustomersTable. project 2 has the web service which exposes only 1 method which is GetCustomersDataSet. This uses the project1 library's table adapter to fill the data set and return it to the caller. To be able to use the NorthWindDataSet and table adapter, I added a reference to the project 1. project 3 is a win forms app and this uses the web service as a reference and calls that service to get the data set. In the process of building this application, in the PL, I added a reference to the DataSet generated above in the project 1 and in form's load I call the web service and assign the received DataSet from the web service to this dataset. But I get the error: "Cannot implicitly convert type 'PL.WebServiceLayerReference.NorthwindDataSet' to 'BL.NorthwindDataSet' e:\My Documents\Visual Studio 2008\Projects\DataSetWebServiceExample\PL\Form1.cs". Both the data sets are same but because I added references from different locations, I am getting the above error I think. So, what I did was I added a reference to project 1 (which defines the data set) to project 3 (the UI) and used the web service to get the DataSet and assing to the right type, now when the project 3 (which has the web form) runs, I get the below runtime exception. "System.InvalidOperationException: There is an error in XML document (1, 5058). --- System.Xml.Schema.XmlSchemaException: Multiple definition of element 'http://tempuri.org/NorthwindDataSet.xsd:Customers' causes the content model to become ambiguous. A content model must be formed such that during validation of an element information item sequence, the particle contained directly, indirectly or implicitly therein with which to attempt to validate each item in the sequence in turn can be uniquely determined without examining the content or attributes of that item, and without any information about the items in the remainder of the sequence." I think this might be because of some cross referenceing errors. My question is, is there a way to use the visual studio generated DataSets in such a way that I can use the same DataSet in all layers (for reuse) but separate the Table Adapter logic to the Data Access Layer so that the front end is abstracted from all this by the web service? If I have hand write the code I loose the goodness the data set generator gives and also if there are columns added later I need to add it by hand etc so I want to use the visual studio wizard as much as possible. thanks for any help on this. sb


  • Ubuntu 11.10, using wget/curl fails with ssl

    - by Greg Spiers
    Note: See edit 3 for solution On a completely new install of Ubuntu I'm getting the following errors when using wget: wget https://test.sagepay.com --2012-03-27 12:55:12-- https://test.sagepay.com/ Resolving test.sagepay.com... 195.170.169.8 Connecting to test.sagepay.com|195.170.169.8|:443... connected. ERROR: cannot verify test.sagepay.com's certificate, issued by `/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA': Unable to locally verify the issuer's authority. To connect to test.sagepay.com insecurely, use `--no-check-certificate'. I've tried installing ca-certificates and configuring the ca-certs and they appear to all be setup in /etc/ssl/certs. The same issue exists for cURL: curl https://test.sagepay.com curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed Which leads me to believe it's something wrong with openssl server wide. wget and curl both work correctly locally on OSX and I have confirmed with a few people that it's working on their servers so I suspect it's nothing to do with the server I'm attempting to connect to. Any ideas or suggestions on things to try to narrow it down? Thank you Edit As requested verbose output from curl curl -Iv https://test.sagepay.com * About to connect() to test.sagepay.com port 443 (#0) * Trying 195.170.169.8... connected * Connected to test.sagepay.com (195.170.169.8) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS alert, Server hello (2): * SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed * Closing connection #0 curl: (60) SSL certificate problem, verify that the CA cert is OK. 
Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed More details here: http://curl.haxx.se/docs/sslcerts.html Edit 2 Using the hash from your comment I see this: ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al 7651b327.0 lrwxrwxrwx 1 root root 59 2012-03-27 12:48 7651b327.0 -> Verisign_Class_3_Public_Primary_Certification_Authority.pem ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al Verisign_Class_3_Public_Primary_Certification_Authority.pem lrwxrwxrwx 1 root root 94 2012-01-18 07:21 Verisign_Class_3_Public_Primary_Certification_Authority.pem -> /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt -rw-r--r-- 1 root root 834 2011-09-28 14:53 /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt ubuntu@srv-tf6sq:/etc/ssl/certs$ more /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt -----BEGIN CERTIFICATE----- MIICPDCCAaUCEDyRMcsf9tAbDpq40ES/Er4wDQYJKoZIhvcNAQEFBQAwXzELMAkG A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2 MDEyOTAwMDAwMFoXDTI4MDgwMjIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G CSqGSIb3DQEBBQUAA4GBABByUqkFFBkyCEHwxWsKzH4PIRnN5GfcX6kb5sroc50i 2JhucwNhkcV8sEVAbkSdjbCxlnRhLQ2pRdKkkirWmnWXbj9T/UWZYB2oK0z5XqcJ 2HUw19JlYD1n1khVdWk/kfVIC0dpImmClr7JyDiGSnoscxlIaU5rfGW/D/xwzoiQ -----END CERTIFICATE----- But doing the steps myself I end up with a different hash: strace -o /tmp/foo.out curl -Iv https://test.sagepay.com and grep ssl /tmp/foo.out open("/lib/x86_64-linux-gnu/libssl.so.1.0.0", O_RDONLY) = 3 stat("/etc/ssl/certs/415660c1.0", {st_mode=S_IFREG|0644, st_size=834, ...}) = 0 open("/etc/ssl/certs/415660c1.0", O_RDONLY) = 4 stat("/etc/ssl/certs/415660c1.1", 0x7fff7dab07b0) = -1 ENOENT (No such file or directory) readlink -f /etc/ssl/certs/415660c1.0 /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt more /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt -----BEGIN CERTIFICATE----- MIICPDCCAaUCEDyRMcsf9tAbDpq40ES/Er4wDQYJKoZIhvcNAQEFBQAwXzELMAkG A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2 MDEyOTAwMDAwMFoXDTI4MDgwMjIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G CSqGSIb3DQEBBQUAA4GBABByUqkFFBkyCEHwxWsKzH4PIRnN5GfcX6kb5sroc50i 2JhucwNhkcV8sEVAbkSdjbCxlnRhLQ2pRdKkkirWmnWXbj9T/UWZYB2oK0z5XqcJ 2HUw19JlYD1n1khVdWk/kfVIC0dpImmClr7JyDiGSnoscxlIaU5rfGW/D/xwzoiQ -----END CERTIFICATE----- Any other ideas? Thank you for the help so far :) Edit 3 So it turns out that installing the ca-certificates package didn't install the one that I needed. 
I found this post about certificates being presented out of order. This seems to be the case with my request to sagepay. The solution ended up being to install another CA certificate from Verisign. I'm not sure why this fixes the issue with it being out of order but it does, but I suspect the out of order issue really isn't a problem at all and it was infact because I was missing a certificate all along. The additional certificate is available in that post but I didn't want to blindly trust it. I've looked at the list of CA certificates from cURL's site and it is listed there so I do trust it. The certificate: Verisign Class 3 Public Primary Certification Authority ======================================================= -----BEGIN CERTIFICATE----- MIICPDCCAaUCEHC65B0Q2Sk0tjjKewPMur8wDQYJKoZIhvcNAQECBQAwXzELMAkGA1UEBhMCVVMx FzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmltYXJ5 IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2MDEyOTAwMDAwMFoXDTI4MDgwMTIzNTk1OVow XzELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAz IFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUA A4GNADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhEBarsAx94 f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/isI19wKTakyYbnsZogy1Ol hec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0GCSqGSIb3DQEBAgUAA4GBALtMEivPLCYA TxQT3ab7/AoRhIzzKBxnki98tsX63/Dolbwdj2wsqFHMc9ikwFPwTtYmwHYBV4GSXiHx0bH/59Ah WM1pF+NEHJwZRDmJXNycAA9WjQKZ7aKQRUzkuxCkPfAyAw7xzvjoyVGM5mKf5p/AfbdynMk2Omuf Tqj/ZA1k -----END CERTIFICATE----- I put this in a file in: /usr/share/ca-certificates/curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt I then modified the /etc/ca-certificates.conf and added the following line at the end: curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt After that I ran the command: sudo update-ca-certificates Looking into the /etc/ssl/certs directory I see it correctly linked: ls -al | grep cURL lrwxrwxrwx 1 root root 69 2012-03-27 16:03 415660c1.0 -> Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem lrwxrwxrwx 1 root root 69 2012-03-27 16:03 7651b327.0 -> Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem lrwxrwxrwx 1 root root 101 2012-03-27 16:03 Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem -> /usr/share/ca-certificates/curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt And everything works! curl -I https://test.sagepay.com HTTP/1.1 200 OK...

    Read the article

  • Make IIS 7.5 cache static content files across different pages

    - by Achilles
    On a Windows 2008 R2, using DNS and IIS I've established my development test server; i.e. I'll have a web application that I can browse on http://test.dev I've moved all the static content files like images, js files and css files into another application which is visible on http://cdn.test.dev test.dev, uses cdn.test.dev urls like http://cdn.test.dev/js/jquery.js to load js, css and images. When I first load "~/" of test.dev, all files will load with a response code of 200; when I press F5 in Firefox, all files, except the "~/default.aspx", will load with 304 response code; but pressing Ctrl+F5 loads them again with a 200 code; if I browse another url like "~/pages/" in test.dev, all of those static files will reload with a 200 code... Is this normal or I'm doing something wrong? Actually, I'm looking for a behavior like this: I want the client to load http://cdn.test.dev/js/jquery.js, only once. I want the client's browser to use this jquery.js file, from cache, in all other pages of test.dev Is this possible? This is the web.config file I have in the root directory of cdn.test.dev: <configuration> <system.webServer> <caching> <profiles> <add extension=".png" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" /> <add extension=".gif" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" /> <add extension=".jpg" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" /> <add extension=".js" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" /> <add extension=".css" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" /> <add extension=".axd" kernelCachePolicy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" /> </profiles> </caching> <httpProtocol allowKeepAlive="true"> <customHeaders> <add name="Cache-Control" value="public, max-age=31536000" /> </customHeaders> </httpProtocol> <validation validateIntegratedModeConfiguration="false" /> <modules runAllManagedModulesForAllRequests="true"> <remove name="RadUploadModule" /> <remove name="RadCompression" /> <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" preCondition="integratedMode" /> <add name="RadCompression" type="Telerik.Web.UI.RadCompression" preCondition="integratedMode" /> </modules> <handlers> <remove name="ChartImage_axd" /> <remove name="Telerik_Web_UI_SpellCheckHandler_axd" /> <remove name="Telerik_Web_UI_DialogHandler_aspx" /> <remove name="Telerik_RadUploadProgressHandler_ashx" /> <remove name="Telerik_Web_UI_WebResource_axd" /> <add name="ChartImage_axd" path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" preCondition="integratedMode" /> <add name="Telerik_Web_UI_SpellCheckHandler_axd" path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" preCondition="integratedMode" /> <add name="Telerik_Web_UI_DialogHandler_aspx" path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" preCondition="integratedMode" /> <add name="Telerik_RadUploadProgressHandler_ashx" path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" preCondition="integratedMode" /> <add name="Telerik_Web_UI_WebResource_axd" path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" preCondition="integratedMode" /> </handlers> <security> <requestFiltering> <requestLimits maxAllowedContentLength="10485760" /> </requestFiltering> </security> <staticContent> <clientCache 
cacheControlMode="UseExpires" httpExpires="Wed, 01 Jan 2020 00:00:00 GMT"/> </staticContent> </system.webServer> <appSettings /> <system.web> <compilation debug="false" targetFramework="4.0" /> <pages> <controls> <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI" /> </controls> </pages> <httpHandlers> <add path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" validate="false" /> <add path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" validate="false" /> <add path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" validate="false" /> <add path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" validate="false" /> <add path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" validate="false" /> </httpHandlers> <httpModules> <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" /> <add name="RadCompression" type="Telerik.Web.UI.RadCompression" /> </httpModules> <httpRuntime maxRequestLength="10240" /> </system.web> </configuration> and this is the resulting response header for http://cdn.test.dev/css/global.css: Cache-Control: private,public, max-age=31536000 Content-Type: text/css Content-Encoding: gzip Expires: Wed, 01 Jan 2020 00:00:00 GMT Last-Modified: Mon, 06 Sep 2010 08:53:06 GMT Accept-Ranges: bytes Etag: "0454eca04dcb1:0" Vary: Accept-Encoding Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET Date: Mon, 06 Sep 2010 14:57:08 GMT Content-Length: 4495
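
    One detail worth noting in that response header: "Cache-Control: private,public, max-age=31536000" is self-contradictory - something earlier in the pipeline is already emitting "private", and the custom header then appends "public, max-age=31536000" after it. A hedged alternative is to drop the custom Cache-Control header and let IIS emit max-age itself via clientCache, for example:

        <!-- Sketch only: let IIS generate Cache-Control for static files instead of adding
             it as a custom header (and remove the <customHeaders> Cache-Control entry above). -->
        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
          </staticContent>
        </system.webServer>

    With a clean public max-age (or the existing UseExpires date) and the conflicting "private" gone, browsers should serve http://cdn.test.dev/js/jquery.js from cache across pages instead of re-requesting it; the 304s on F5 and fresh 200s on Ctrl+F5 are normal browser refresh behavior either way.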

    Read the article

  • Reasons why I'm using PHP rather than ASP.NET [closed]

    - by spirytus
    I have a basic idea of how ASP.NET works, but I find the whole framework hard to use if you are a newbie. I found compiling, web applications vs. websites, and all the other things you should know to program in ASP.NET a bit tedious, so personally I go with PHP to create small to medium applications for my clients. There are a couple of reasons for it: PHP is an easy scripting language - top to bottom and you're done. You can still create objects and classes, and if you have an idea of MVC it's fairly easy to create a basic structure yourself so you can keep your presentation layer "relatively" clean. Although I still find myself mixing basic logic into my views, I try to stick to booleans and foreach loops. ASP.NET keeps this cleaner as far as I know, and I agree that this is great. There's heaps of free stuff for PHP and lots of help everywhere. Although the choice of IDEs for PHP is very limited, at least I don't have to be stuck with Visual Studio. Let's be honest: you can program in whatever you like, but does anyone use anything other than VS? For the basic applications I create, Visual Studio doesn't even come close to a Notepad :) / phpEdit (or similar) combination. It lacks many features I constantly use, although armies of developers are using it, and that must be for good reason. Personally I'm not a big fan of VS, though. Being on the market for that long should have made editing much easier. I know .NET comes with an awesome set of controls, validators, etc., which is truly great. For me the problem starts if I want my validator to behave in a slightly different way - let's say fade error messages in and out. I know it's possible to extend its behavior, plug into the lifecycle, output different JS to the client, and so on. I just never see it happen in the places I work, and honestly, I don't think most of the .NET developers I've worked with over the last couple of years would know how to do that. In PHP I have to grab some jQuery plugin and use it for validation, which is a fairly easy task once you have done it before. Again, I'm sure it's easy for .NET gurus, but for a newbie like me it's almost impossible. I've found that many ASP.NET programmers are very limited in what they are able to do and basically whack together .NET applications using the same lame set of controls, not even bothering to look into how it works and what if? Now, I don't want to anger anyone :) I know there is a huge number of excellent .NET developers who know their stuff and can extend controls and do all that magic in no time. I just find it a bit annoying that many of them stick to what is provided without even trying to make it better. Nothing against .NET here, just a thought really :) I remember when ASP.NET came out, the idea was that front-end people would not be able to screw anything up and could do their front-end work without worrying about what happens behind the scenes. In practice it's never that easy, and I always end up getting server-side people to fix this and that during development. Things like the IDs assigned to controls can very easily break your application, and if someone is a pure HTML guy using VS it's easy to break something. Those are my thoughts on PHP and .NET and the reasons why I go with PHP for my work. I know that, once learned, ASP.NET is an awesome technology and, summing it all up, PHP doesn't even come close to it. For someone like me, however, individually developing small, basic applications for clients, PHP seems to work much better. Please let me know your thoughts on the above :)

    Read the article

  • .NET 4.0 Dynamic object used statically?

    - by Kevin Won
    I've gotten quite sick of XML configuration files in .NET and want to replace them with a format that is more sane. Therefore, I'm writing a config file parser for C# applications that will take a custom config file format, parse it, and create a Python source string that I can then execute in C# and use as a static object (yes that's right--I want a static (not the static type dyanamic) object in the end). Here's an example of what my config file looks like: // my custom config file format GlobalName: ExampleApp Properties { ExternalServiceTimeout: "120" } Python { // this allows for straight python code to be added to handle custom config def MyCustomPython: return "cool" } Using ANTLR I've created a Lexer/Parser that will convert this format to a Python script. So assume I have that all right and can take the .config above and run my Lexer/Parser on it to get a Python script out the back (this has the added benefit of giving me a validation tool for my config). By running the resultant script in C# // simplified example of getting the dynamic python object in C# // (not how I really do it) ScriptRuntime py = Python.CreateRuntime(); dynamic conf = py.UseFile("conftest.py"); dynamic t = conf.GetConfTest("test"); I can get a dynamic object that has my configuration settings. I can now get my config file settings in C# by invoking a dynamic method on that object: //C# calling a method on the dynamic python object var timeout = t.GetProperty("ExternalServiceTimeout"); //the config also allows for straight Python scripting (via the Python block) var special = t.MyCustonPython(); of course, I have no type safety here and no intellisense support. I have a dynamic representation of my config file, but I want a static one. I know what my Python object's type is--it is actually newing up in instance of a C# class. But since it's happening in python, it's type is not the C# type, but dynamic instead. What I want to do is then cast the object back to the C# type that I know the object is: // doesn't work--can't cast a dynamic to a static type (nulls out) IConfigSettings staticTypeConfig = t as IConfigSettings Is there any way to figure out how to cast the object to the static type? I'm rather doubtful that there is... so doubtful that I took another approach of which I'm not entirely sure about. I'm wondering if someone has a better way... So here's my current tactic: since I know the type of the python object, I am creating a C# wrapper class: public class ConfigSettings : IConfigSettings that takes in a dynamic object in the ctor: public ConfigSettings(dynamic settings) { this.DynamicProxy = settings; } public dynamic DynamicProxy { get; private set; } Now I have a reference to the Python dynamic object of which I know the type. 
    So I can then just put wrappers around the Python methods that I know are there: // wrapper access to the underlying dynamic object // this makes my dynamic object appear 'static' public string GetSetting(string key) { return this.DynamicProxy.GetProperty(key).ToString(); } Now the dynamic object is accessed through this static proxy and thus can obviously be passed around in the static C# world via interface, etc: // dependency inject the dynamic object around IBusinessLogic logic = new BusinessLogic(IConfigSettings config); This solution has the benefits of all the static typing stuff we know and love while at the same time giving me the option of 'bailing out' to dynamic too: // the DynamicProxy property gives direct access to the dynamic object var result = config.DynamicProxy.MyCustomPython(); but, man, this seems like a rather convoluted way of getting to an object that is a static type in the first place! Since the whole dynamic/static interaction world is new to me, I'm really questioning if my solution is optimal or if I'm missing something (i.e. some way of casting that dynamic object to a known static type) about how to bridge the chasm between these two universes.
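
    One way to tidy the same idea, for what it's worth: make the wrapper itself the IConfigSettings implementation and keep the dynamic object entirely private, so nothing outside the wrapper ever sees dynamic. A small sketch along the lines of the classes above - IConfigSettings, GetProperty and MyCustomPython are taken from the question; the class name is made up:

        // Sketch: the wrapper is the static-typed face of the Python-built settings object.
        public interface IConfigSettings
        {
            string GetSetting(string key);
            string MyCustomPython();          // the scripted extra, wrapped with a static signature
        }

        public class PythonConfigSettings : IConfigSettings
        {
            private readonly dynamic _settings;   // the object produced by the IronPython script

            public PythonConfigSettings(dynamic settings)
            {
                _settings = settings;
            }

            public string GetSetting(string key)
            {
                return _settings.GetProperty(key).ToString();
            }

            public string MyCustomPython()
            {
                return (string)_settings.MyCustomPython();
            }
        }

    If the script really does return an instance of the C# class, the as-cast coming back null usually points at two different loads of that assembly (one referenced by the C# project, one loaded by the IronPython engine); otherwise, wrapping like this is about as direct as it gets.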

    Read the article

  • Code to send email not working.

    - by RPK
    When I am trying to execute following code to email the contact form details, it is not executing properly. Instead, when the contact form's Submit button is clicked, it just shows the below source code in the browser. What's wrong? <?php error_notice(E_ALL^E_NOTICE); $firstname = $_POST['fname']; $emailaddress = $_POST['eaddress']; $mobile = $_POST['cellno']; $phone = $_POST['landline']; $country = $_POST['ucountry']; $city = $_POST['ucity']; $subjects = $_POST['usubjects']; $message = $_POST['umessage']; // EDIT THE 2 LINES BELOW AS REQUIRED $email_to = "[email protected]"; $email_subject = $subjects; function died($error) { // your error code can go here echo "We are very sorry, but there were error(s) found with the form your submitted. "; echo "These errors appear below.<br /><br />"; echo $error."<br /><br />"; echo "Please go back and fix these errors.<br /><br />"; die(); } // validation expected data exists if(!isset($firstname) || !isset($emailaddress) || !isset($subjects) || !isset($message)) { died('We are sorry, but there appears to be a problem with the form your submitted.'); } $error_message = ""; $email_exp = "^[A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}$"; if(!eregi($email_exp,$emailaddress)) { $error_message .= 'The Email Address you entered does not appear to be valid.<br />'; } $string_exp = "^[a-z .'-]+$"; if(!eregi($string_exp,$firstname)) { $error_message .= 'The First Name you entered does not appear to be valid.<br />'; } if(strlen($message) < 2) { $error_message .= 'The Comments you entered do not appear to be valid.<br />'; } $string_exp = "^[0-9 .-]+$"; if(!eregi($string_exp,$phone)) { $error_message .= 'The Telphone Number you entered does not appear to be valid.<br />'; } if(strlen($error_message) > 0) { died($error_message); } $email_message = "Form details below.\n\n"; function clean_string($string) { $bad = array("content-type","bcc:","to:","cc:","href"); return str_replace($bad,"",$string); } $email_message .= "First Name: ".clean_string($firstname)."\n"; $email_message .= "Last Name: ".clean_string($mobile)."\n"; $email_message .= "Email: ".clean_string($emailaddress)."\n"; $email_message .= "Telephone: ".clean_string($phone)."\n"; $email_message .= "City: ".clean_string($city)."\n"; $email_message .= "Telephone: ".clean_string($country)."\n"; $email_message .= "Comments: ".clean_string($message)."\n"; // create email headers $headers = 'From: '.$email_from."\r\n". 'Reply-To: '.$email_from."\r\n" . 'X-Mailer: PHP/' . phpversion(); @mail($email_to, $email_subject, $email_message, $headers); ?>

    Read the article

  • How to "DRY up" C# attributes in Models and ViewModels?

    - by DanM
    This question was inspired by my struggles with ASP.NET MVC, but I think it applies to other situations as well. Let's say I have an ORM-generated Model and two ViewModels (one for a "details" view and one for an "edit" view): Model public class FooModel // ORM generated { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public string EmailAddress { get; set; } public int Age { get; set; } public int CategoryId { get; set; } } Display ViewModel public class FooDisplayViewModel // use for "details" view { [DisplayName("ID Number")] public int Id { get; set; } [DisplayName("First Name")] public string FirstName { get; set; } [DisplayName("Last Name")] public string LastName { get; set; } [DisplayName("Email Address")] [DataType("EmailAddress")] public string EmailAddress { get; set; } public int Age { get; set; } [DisplayName("Category")] public string CategoryName { get; set; } } Edit ViewModel public class FooEditViewModel // use for "edit" view { [DisplayName("First Name")] // not DRY public string FirstName { get; set; } [DisplayName("Last Name")] // not DRY public string LastName { get; set; } [DisplayName("Email Address")] // not DRY [DataType("EmailAddress")] // not DRY public string EmailAddress { get; set; } public int Age { get; set; } [DisplayName("Category")] // not DRY public SelectList Categories { get; set; } } Note that the attributes on the ViewModels are not DRY--a lot of information is repeated. Now imagine this scenario multiplied by 10 or 100, and you can see that it can quickly become quite tedious and error prone to ensure consistency across ViewModels (and therefore across Views). How can I "DRY up" this code? Before you answer, "Just put all the attributes on FooModel," I've tried that, but it didn't work because I need to keep my ViewModels "flat". In other words, I can't just compose each ViewModel with a Model--I need my ViewModel to have only the properties (and attributes) that should be consumed by the View, and the View can't burrow into sub-properties to get at the values. Update LukLed's answer suggests using inheritance. This definitely reduces the amount of non-DRY code, but it doesn't eliminate it. Note that, in my example above, the DisplayName attribute for the Category property would need to be written twice because the data type of the property is different between the display and edit ViewModels. This isn't going to be a big deal on a small scale, but as the size and complexity of a project scales up (imagine a lot more properties, more attributes per property, more views per model), there is still the potentially for "repeating yourself" a fair amount. Perhaps I'm taking DRY too far here, but I'd still rather have all my "friendly names", data types, validation rules, etc. typed out only once.
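
    For reference, a minimal sketch of the inheritance approach mentioned in the update: the attributes shared by both views live once on a base class, and each ViewModel adds only what genuinely differs. Property names are the ones from the question; it needs System.ComponentModel and, for SelectList, System.Web.Mvc.

        // Sketch of the inheritance approach: shared attributes live once on the base class.
        public abstract class FooViewModelBase
        {
            [DisplayName("First Name")]
            public string FirstName { get; set; }

            [DisplayName("Last Name")]
            public string LastName { get; set; }

            [DisplayName("Email Address")]
            [DataType("EmailAddress")]
            public string EmailAddress { get; set; }

            public int Age { get; set; }
        }

        public class FooDisplayViewModel : FooViewModelBase   // "details" view
        {
            [DisplayName("ID Number")]
            public int Id { get; set; }

            [DisplayName("Category")]
            public string CategoryName { get; set; }
        }

        public class FooEditViewModel : FooViewModelBase      // "edit" view
        {
            [DisplayName("Category")]   // still repeated: the property type differs per view
            public SelectList Categories { get; set; }
        }

    The Category member still carries its DisplayName twice because its type differs between the two views - exactly the residual duplication the update points out.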

    Read the article

  • Bind9 as a caching resolver fails with an ID mismatch on localhost but not on the external IP

    - by argibbs
    I'm running Ubuntu 12.04 LTS on a machine on my private network. I have bind9 installed (v9.8.1-P1) via aptitude, so it appears to have put all the bits in the right places and the service starts automatically. I plan on adding some zones later, but first I'm just trying to get it working as a caching resolver. I installed bind, configured it, and starting using it. Initially I thought it was working ok, but then I found some sites weren't being resolved. I've pinned it down to being linked to the size of the result and bind failing-over to TCP mode. So: I'm trying to find out why bind is failing when I query for domain info and the result is 512 bytes (causing a truncation and retry on TCP). Specifically it fails with ID mismatches if I point dig at localhost, but works when I query the machine's own IP (192.168.0.2). This appears to be backwards to the problem that most people have when using bind (fails on external ip, works on localhost). If I do dig @localhost google.com (which has a response of <512 bytes) then it works; I get no warnings, and plenty of output. $ dig @localhost google.com ; <<>> DiG 9.8.1-P1 <<>> @localhost google.com [snip lots of output] ;; Query time: 39 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Oct 17 23:08:34 2013 ;; MSG SIZE rcvd: 495 If I do dig @localhost play.google.com (which has a larger response) then I get back something like: $ dig @localhost play.google.com ;; Truncated, retrying in TCP mode. ;; ERROR: ID mismatch: expected ID 3696, got 27130 This seems to be standard, documented behaviour - when the UDP response is large (here 'large' == 512 bytes) it falls back to TCP. The ID mismatch is not expected though. If I do dig @192.168.0.2 play.google.com then I still get the warning about using TCP mode, but it otherwise works $ dig @192.168.0.2 play.google.com ;; Truncated, retrying in TCP mode. ; <<>> DiG 9.8.1-P1 <<>> @192.168.0.2 play.google.com [snip most of the output] ;; Query time: 5 msec ;; SERVER: 192.168.0.2#53(192.168.0.2) ;; WHEN: Thu Oct 17 23:05:55 2013 ;; MSG SIZE rcvd: 521 At the moment I've not set up any zones in my local instance, so it's just acting as a caching resolver. My options config is pretty much unchanged from standard, I've got the following set: options { directory "/var/cache/bind"; allow-query { 192.168/16; 127.0.0.1; }; forwarders { 8.8.8.8; 8.8.4.4; }; dnssec-validation auto; edns-udp-size 4096 ; allow-transfer { any; }; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; And my /etc/resolv.conf is just nameserver 127.0.0.1 search .local The problem definitely seems linked to the failover to TCP mode: if I do dig +bufsize=4096 @localhost play.google.com then it works; no warning about failover to TCP, no ID mismatch, and a standard looking result. To be honest, if there was a way to force bind to use a much larger UDP buffer, that'd probably be good enough for me, but all I've been able to find mention of is max-udp-size 4096 and that doesn't change the behaviour in any way. I've also tried setting edns-udp-size 512 in case the problem is some weird EDNS issue with my router (which seems unlikely since the +bufsize=4096 flag works fine). I've also tried dig +trace @localhost play.google.com; this works. No truncation/TCP warning, and a full result. I've also tried changing the servers used in the forwarder (e.g. to OpenDNS), but that makes no difference. 
There's one last data point: if I repetitively do dig @localhost play.google.com I don't always get an ID mismatch, but sometimes a REFUSED error. I'm much more likely to get a REFUSED error if I dig the non-localhost IP (192.168.0.2) first: $ dig @localhost play.google.com ;; Truncated, retrying in TCP mode. ; <<>> DiG 9.8.1-P1 <<>> @localhost play.google.com ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 35104 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;play.google.com. IN A ;; Query time: 4 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Oct 17 23:20:13 2013 ;; MSG SIZE rcvd: 33 Any insights or things to try would be much appreciated.

    Read the article

  • DNS server not functioning correctly

    - by Shamit Shrestha
    I have setup a DNS server which isnt working properly. My domain is accswift.com which has glued to two name servers ns1.accswift.com and ns2.accswift.com for the same IP address - 203.78.164.18. On domain end everything should be fine. Please check -http://www.intodns.com/accswift.com I am sure its the problem with the linux server. Can anyone help me find where the problem is for me? Below is the settings that I have in the server. ====================== DIG [root@accswift ~]# dig accswift.com ; << DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 << accswift.com ;; global options: +cmd ;; Got answer: ;; -HEADER<<- opcode: QUERY, status: NOERROR, id: 11275 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;accswift.com. IN A ;; ANSWER SECTION: accswift.com. 38400 IN A 203.78.164.18 ;; AUTHORITY SECTION: accswift.com. 38400 IN NS ns1.accswift.com. accswift.com. 38400 IN NS ns2.accswift.com. ;; ADDITIONAL SECTION: ns1.accswift.com. 38400 IN A 203.78.164.18 ns2.accswift.com. 38400 IN A 203.78.164.18 ;; Query time: 1 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Wed Nov 6 20:12:16 2013 ;; MSG SIZE rcvd: 114 ============== IP Tables settings vi /etc/sysconfig/iptables *filter :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A FORWARD -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT: -A FORWARD -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN: -A OUTPUT -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT: -A INPUT -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN: -A INPUT -p udp -m udp --sport 53 -j ACCEPT -A OUTPUT -p udp -m udp --dport 53 -j ACCEPT COMMIT Completed on Fri Sep 20 04:20:33 2013 Generated by webmin *mangle :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :PREROUTING ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] COMMIT Completed Generated by webmin *nat :OUTPUT ACCEPT [0:0] :PREROUTING ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] COMMIT ====DNS settings vi /var/named/accswift.com.host $ttl 38400 @ IN SOA ns1.accswift.com. root.ns1.accswift.com. ( 1382936091 10800 3600 604800 38400 ) @ IN NS ns1.accswift.com. @ IN NS ns2.accswift.com. accswift.com. IN A 203.78.164.18 accswift.com. IN NS ns1.accswift.com. www.accswift.com. IN A 203.78.164.18 ftp.accswift.com. IN A 203.78.164.18 m.accswift.com. IN A 203.78.164.18 ns1 IN A 203.78.164.18 ns2 IN A 203.78.164.18 localhost.accswift.com. IN A 127.0.0.1 webmail.accswift.com. IN A 203.78.164.18 admin.accswift.com. IN A 203.78.164.18 mail.accswift.com. IN A 203.78.164.18 accswift.com. IN MX 5 mail.accswift.com. ====Named.conf vi /etc/named.conf options { listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { any; }; recursion yes; allow-recursion { localhost; 192.168.2.0/24; }; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; forward first; forwarders {192.168.1.1;}; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." 
IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; zone "accswift.com" { type master; file "/var/named/accswift.com.hosts"; allow-transfer { 127.0.0.1; localnets; 208.73.211.69; }; }; zone "ns1.accswift.com" { type master; file "/var/named/ns1.accswift.com.hosts"; }; ==================================== Can anybody find any flaw in this? I am still unable to reach accswift.com from any other ISP. But it is browsable from the same network though. Thanks in advance.
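
    One thing that stands out in the posted named.conf, for what it's worth: listen-on port 53 { 127.0.0.1; }; restricts named to the loopback interface, so outside resolvers can never reach it even though queries against localhost work. Also note the zone file is edited as accswift.com.host while named.conf points at accswift.com.hosts. A hedged sketch of the listen-on change:

        // In /etc/named.conf, inside options { ... }:
        // was:
        //     listen-on port 53 { 127.0.0.1; };
        // sketch of a fix - answer on all interfaces, or list the public address explicitly:
        listen-on port 53 { any; };
        // listen-on port 53 { 127.0.0.1; 203.78.164.18; };

    After the change, restart named and re-test from an outside resolver with dig @203.78.164.18 accswift.com (the posted iptables policies are ACCEPT, so the firewall itself shouldn't be in the way).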

    Read the article

  • What is good practice in .NET system architecture design concerning multiple models and aggregates

    - by BuzzBubba
    I'm designing a larger enterprise architecture and I'm in a doubt about how to separate the models and design those. There are several points I'd like suggestions for: - models to define - way to define models Currently my idea is to define: Core (domain) model Repositories to get data to that domain model from a database or other store Business logic model that would contain business logic, validation logic and more specific versions of forms of data retrieval methods View models prepared for specifically formated data output that would be parsed by views of different kind (web, silverlight, etc). For the first model I'm puzzled at what to use and how to define the mode. Should this model entities contain collections and in what form? IList, IEnumerable or IQueryable collections? - I'm thinking of immutable collections which IEnumerable is, but I'd like to avoid huge data collections and to offer my Business logic layer access with LINQ expressions so that query trees get executed at Data level and retrieve only really required data for situations like the one when I'm retrieving a very specific subset of elements amongst thousands or hundreds of thousands. What if I have an item with several thousands of bids? I can't just make an IEnumerable collection of those on the model and then retrieve an item list in some Repository method or even Business model method. Should it be IQueryable so that I actually pass my queries to Repository all the way from the Business logic model layer? Should I just avoid collections in my domain model? Should I void only some collections? Should I separate Domain model and BusinessLogic model or integrate those? Data would be dealt trough repositories which would use Domain model classes. Should repositories be used directly using only classes from domain model like data containers? This is an example of what I had in mind: So, my Domain objects would look like (e.g.) public class Item { public string ItemName { get; set; } public int Price { get; set; } public bool Available { get; set; } private IList<Bid> _bids; public IQueryable<Bid> Bids { get { return _bids.AsQueryable(); } private set { _bids = value; } } public AddNewBid(Bid newBid) { _bids.Add(new Bid {.... } } Where Bid would be defined as a normal class. Repositories would be defined as data retrieval factories and used to get data into another (Business logic) model which would again be used to get data to ViewModels which would then be rendered by different consumers. I would define IQueryable interfaces for all aggregating collections to get flexibility and minimize data retrieved from real data store. Or should I make Domain Model "anemic" with pure data store entities and all collections define for business logic model? One of the most important questions is, where to have IQueryable typed collections? - All the way from Repositories to Business model or not at all and expose only solid IList and IEnumerable from Repositories and deal with more specific queries inside Business model, but have more finer grained methods for data retrieval within Repositories. So, what do you think? Have any suggestions?
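
    Since much of this hinges on where IQueryable stops, a small sketch of the two shapes being weighed may help frame it: a repository can expose IQueryable so the business layer composes queries that still execute in the database, or expose intention-revealing methods that return materialized lists and keep IQueryable out of the model entirely. Both interfaces below are illustrative only; Item and Bid are the question's types, the method names are assumptions.

        // Option A: the repository exposes IQueryable; the business layer composes the
        // query and it executes in the database only when enumerated.
        public interface IItemQueryRepository
        {
            IQueryable<Item> Items { get; }
        }
        // business-layer usage: only matching rows come back
        // var expensiveAvailable = repo.Items.Where(i => i.Available && i.Price > 1000).ToList();

        // Option B: the repository exposes finer-grained methods and returns materialized
        // results, so IQueryable never leaks past the data access layer.
        public interface IItemRepository
        {
            IList<Item> GetAvailableItemsOver(int price);
            IList<Bid> GetTopBidsForItem(int itemId, int count);
        }

    Option A keeps querying flexible but couples the business layer to the LINQ provider's capabilities; option B keeps the domain free of query plumbing at the cost of adding a repository method per use case.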

    Read the article

  • Exchange 2010 PST-Export fails

    - by Chake
    I'm horribly failing at exporting Exchnange Mailboxes to PST files. Perhaps You are able to help me? The System I'm running some legacy machines here. The one I'm currently working on (CurrentDC) is a Windows 2008 R2 Server with Exchange 2010 on it. Exchange seems to be poorly patched: [PS] C:\>get-exchangeserver Name Site ServerRole Edition AdminDisplayVersion ---- ---- ---------- ------- ------------------- OldDC None Enterprise Version 6.5 (Bui... CurrentDC company.local Mailbox,... Enterprise Version 14.0 (Bu... The Problem After some trouble I managed to get the Export-Mailbox command run: [PS] C:\>Export-Mailbox -Identity marco -PSTFolderPath C:\ExchangeExport According to several Websites that seems to be the right command to export the mailbox of the user "marco" to "C:\ExchangeExport". But after running the command an error occurs (I'm sorry, it is the german version of Windows 2008 - but if you translate Fehler with error and Vorgang with process you should be prepared enough to go ;)) [PS] C:\Export-Mailbox -Identity marco -PSTFolderPath C:\ExchangeExport Fehler für Marco S ([email protected]). Ursache: Fehler bei diesem Vorgang., Fehlercode: -2147467259. + CategoryInfo : InvalidOperation: (0:Int32) [Export-Mailbox], RecipientTaskException + FullyQualifiedErrorId : 2317FD3A,Microsoft.Exchange.Management.RecipientTasks.ExportMailbox RunspaceId : 44415363-371e-44a1-a682-61e6a9b90c86 Identity : company.local/Company User/Marco S DistinguishedName : CN=Marco S,OU=Company User,DC=company,DC=local DisplayName : Marco S Alias : marco LegacyExchangeDN : /o=Erste Organisation/ou=Erste administrative Gruppe/cn=Recipients/cn=marco PrimarySmtpAddress : [email protected] SourceServer : CurrentDC.company.local SourceDatabase : Mailbox Database 0279110169 SourceGlobalCatalog : CurrentDC SourceDomainController : TargetGlobalCatalog : CurrentDC TargetDomainController : TargetMailbox : TargetServer : TargetDatabase : MailboxSize : 0 B (0 bytes) IsResourceMailbox : False SIDUsedInMatch : SMTPProxies : SourceManager : SourceDirectReports : SourcePublicDelegates : SourcePublicDelegatesBL : SourceAltRecipient : SourceAltRecipientBL : SourceDeliverAndRedirect : MatchedTargetNTAccountDN : IsMatchedNTAccountMailboxEnabled : MatchedContactsDNList : TargetNTAccountDNToCreate : TargetManager : TargetDirectReports : TargetPublicDelegates : TargetPublicDelegatesBL : TargetAltRecipient : TargetAltRecipientBL : TargetDeliverAndRedirect : Options : Default SourceForestCredential : TargetForestCredential : TargetFolder : PSTFilePath : C:\ExchangeExport\marco.pst RecoveryMailboxGuid : RecoveryMailboxLegacyExchangeDN : RecoveryMailboxDisplayName : RecoveryDatabaseGuid : StandardMessagesDeleted : 0 AssociatedMessagesDeleted : 0 DumpsterMessagesDeleted : 0 MoveType : ExportToPST MoveStage : Validation StartTime : 05.10.2012 13:55:46 EndTime : 05.10.2012 13:55:46 StatusCode : -2147467259 StatusMessage : Fehler bei diesem Vorgang. ReportFile : C:\Program Files\Microsoft\Exchange Server\V14\Logging\MigrationLogs\export-Mailbox20121005-135545-8170000.xml ServerName : CurrentDC.company.local What I have done Well, I must say I'm quite clueless. I was wondering why MailboxSize is 0 - so I checked it: [PS] C:\>Get-MailboxStatistics marco | ft DisplayName, TotalItemSize, ItemCount DisplayName TotalItemSize ItemCount ----------- ------------- --------- Marco S 473 MB (496,011,572 bytes) 4173 Well, this i not 0 bytes - but I don't know what to do with this information. 
Also I had a look at the ReportFile mentioned in the output: <?xml version="1.0"?> <export-Mailbox> <TaskHeader> <RunningAs>NT-AUTORITÄT\SYSTEM</RunningAs> <Name>export-Mailbox</Name> <Type>ExportToPST</Type> <MaxBadItems>0</MaxBadItems> <Version>14.0.639.21</Version> <StartTime>10.05.2012 14:19:12</StartTime> <Options Identity="marco" PSTFolderPath="C:\ExchangeExport" DeleteContent="False" DeleteAssociatedMessages="False" GlobalCatalog="CurrentDC" MaxThreads="4" BadItemLimit="0" ValidateOnly="False" IncludeFolders="" ExcludeFolders="" StartDate="01.01.0001 00:00:00" EndDate="31.12.9999 23:59:59" SubjectKeywords="" ContentKeywords="" AllContentKeywords="" AttachmentFilenames="" SenderKeywords="" RecipientKeywords="" Locale="" /> </TaskHeader> <TaskDetails> <Item MailboxName="Marco S"> <Source> <Identity>company.local/Company User/Marco S</Identity> <DistinguishName>CN=Marco Sc,OU=Company User,DC=company,DC=local</DistinguishName> <DisplayName>Marco S</DisplayName> <Alias>marco</Alias> <LegacyExchangeDN>/o=Erste Organisation/ou=Erste administrative Gruppe/cn=Recipients/cn=marco</LegacyExchangeDN> <PrimarySmtpAddress>[email protected]</PrimarySmtpAddress> <SourceServer>CurrentDC.company.local</SourceServer> <SourceDatabase>Mailbox Database 0279110169</SourceDatabase> <IsResourceMailbox>False</IsResourceMailbox> <SourceGlobalCatalog>CurrentDC</SourceGlobalCatalog> </Source> <Target> <PSTFilePath>C:\ExchangeExport\marco.pst</PSTFilePath> </Target> <MailboxSize>0 B (0 bytes)</MailboxSize> <Duration>00:00:00</Duration> <Result IsWarning="False" ErrorCode="-2147467259">Fehler bei diesem Vorgang.</Result> </Item> </TaskDetails> <TaskFooter> <EndTime>10.05.2012 14:19:13</EndTime> <TotalSize>0 B (0 bytes)</TotalSize> <StandardMessagesDeleted>0</StandardMessagesDeleted> <AssociatedMessagesDeleted>0</AssociatedMessagesDeleted> <DumpsterMessagesDeleted>0</DumpsterMessagesDeleted> <Result ErrorCount="1" CompletedCount="0" WarningCount="0" /> </TaskFooter> </export-Mailbox> Do you have any clue? <UPDATE> Regarding to the answer from downthepub I tried to use UNC paths - no change. Also I tried installing the management tools to a client and run the scripts from there - no way, too. </UPDATE> Thanks a lot for reading this mess!

    Read the article

  • Struts 2 TypeConversion problem

    - by Parhs
    Hello... i have a big problem... i have this map HashMap<Long, String> examValues; It gets populated by textboxes automatically with name="examValues[id]" Although i explicily defined the element as String , it doesnt care!!! I want everything to be a String... The result is to get a java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.String when i try to System.out.print(examValues.get(examValue.getExam().getId()).getClass().getName()); INFO: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.String at gr.medilab.logic.action.exams.Exams.fill_edit(Exams.java:204) at gr.medilab.logic.action.exams.Exams.fill(Exams.java:95) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:441) at com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:280) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:243) at com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:165) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:252) at org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ConversionErrorInterceptor.intercept(ConversionErrorInterceptor.java:122) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:195) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:195) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.StaticParametersInterceptor.intercept(StaticParametersInterceptor.java:179) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at org.apache.struts2.interceptor.MultiselectInterceptor.intercept(MultiselectInterceptor.java:75) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at org.apache.struts2.interceptor.CheckboxInterceptor.intercept(CheckboxInterceptor.java:94) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at org.apache.struts2.interceptor.FileUploadInterceptor.intercept(FileUploadInterceptor.java:235) at 
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ModelDrivenInterceptor.intercept(ModelDrivenInterceptor.java:89) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ScopedModelDrivenInterceptor.intercept(ScopedModelDrivenInterceptor.java:130) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at org.apache.struts2.interceptor.debugging.DebuggingInterceptor.intercept(DebuggingInterceptor.java:267) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ChainingInterceptor.intercept(ChainingInterceptor.java:126) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.PrepareInterceptor.doIntercept(PrepareInterceptor.java:138) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.I18nInterceptor.intercept(I18nInterceptor.java:165) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at org.apache.struts2.interceptor.ServletConfigInterceptor.intercept(ServletConfigInterceptor.java:164) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.AliasInterceptor.intercept(AliasInterceptor.java:179) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor.intercept(ExceptionMappingInterceptor.java:176) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:52) at org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:488) at org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77) at org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:91) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:277) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97) at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:332) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:233) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at 
com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:619) Any idea?
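
    For what it's worth, XWork's type conversion can be told explicitly what the map's key and element types are via a per-action conversion properties file; whether that cures this particular ClassCastException is a guess, but it is the documented knob for generic collections. Based on the action class in the stack trace (gr.medilab.logic.action.exams.Exams), the file would sit in the same package as the class:

        # Exams-conversion.properties (same package as the Exams action class)
        # Tell XWork the key and element types of the examValues map explicitly, so values
        # posted as examValues[id] are converted to Long keys and String values.
        Key_examValues=java.lang.Long
        Element_examValues=java.lang.String
        # CreateIfNull_examValues=true   # optional: have XWork create the map if it is null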

    Read the article

  • Windows authentication - MVC 2 ASP.Net

    - by bergin
    Hi there Having real problems moving my app over to windows authentication. the sql error messages are to do with problems creating in the aspnetdb.mdf file. I'm wondering whether the connection string is at fault or other elements of the web.config I have windows authentication set in IIS. web.config: <?xml version="1.0"?> <!-- For more information on how to configure your ASP.NET application, please visit http://go.microsoft.com/fwlink/?LinkId=152368 --> <configuration> <connectionStrings> <add name="ApplicationServices" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|ASPNETDB.MDF;User Instance=true" providerName="System.Data.SqlClient" /> <add name="orderbaseConnectionString" connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\orderbase.mdf;Integrated Security=True;User Instance=True" providerName="System.Data.SqlClient" /> </connectionStrings> <system.web> <compilation debug="true" targetFramework="4.0"> <assemblies> <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </assemblies> </compilation> <authentication mode="windows"> </authentication> <membership> <providers> <clear/> <add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="ApplicationServices" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" /> </providers> </membership> <profile> <providers> <clear/> <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider" connectionStringName="ApplicationServices" applicationName="/" /> </providers> </profile> <roleManager enabled="true"> <providers> <clear /> <add connectionStringName="ApplicationServices" applicationName="/" name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider" /> <add applicationName="/" name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider" /> </providers> </roleManager> <pages> <namespaces> <add namespace="System.Web.Mvc" /> <add namespace="System.Web.Mvc.Ajax" /> <add namespace="System.Web.Mvc.Html" /> <add namespace="System.Web.Routing" /> </namespaces> </pages> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules runAllManagedModulesForAllRequests="true"/> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" /> </dependentAssembly> </assemblyBinding> </runtime> </configuration>
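
    Two things in that web.config look worth checking. First, enum values in the configuration are case-sensitive, so mode="windows" should be mode="Windows". Second, with Windows authentication the role manager is still pointed at AspNetSqlRoleProvider (and the membership/profile providers at ApplicationServices), which is what drives the attempts to create aspnetdb.mdf under SQL Express. A hedged sketch of the relevant pieces:

        <!-- Sketch only: case-corrected authentication mode, and a role manager that reads
             roles from the Windows token instead of aspnetdb. -->
        <authentication mode="Windows" />
        <roleManager enabled="true" defaultProvider="AspNetWindowsTokenRoleProvider">
          <providers>
            <clear />
            <add name="AspNetWindowsTokenRoleProvider"
                 type="System.Web.Security.WindowsTokenRoleProvider"
                 applicationName="/" />
          </providers>
        </roleManager>
        <!-- If the SQL-backed membership and profile providers aren't needed with Windows
             auth, removing those sections stops ASP.NET from touching aspnetdb.mdf at all. -->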

    Read the article

  • LDAP over SSL with an EFI Fiery printer

    - by austinian
    I've got a printer with a Fiery running 8e Release 2. I can authenticate users against AD using the LDAP configuration, but I can only get it to work if I don't use SSL/TLS, and only if I use SIMPLE authentication. Right now, it's authenticating using a fairly low-impact user, but it's also the only system on our network that's not using LDAPS. I can get AD info fine over LDAPS using ldp.exe from my machine, our firewall, our mail filter, our linux boxes, etc. The only problem child is the Fiery. I've added the LDAP server certificate as a trusted cert to the Fiery, but after I check the box for Secure Communication and change the port to 636, pressing Validate results in a dialog box coming up saying: LDAP Validation Failed Server Name invalid or server is unavailable. I've tried changing the server name to use just the name, the FQDN, and the IP address, and changed it to another server, just to see if it was just this AD server that was fussy with the Fiery. EDIT: removed LDP output, added packet capture analysis from wireshark: The conversation seems pretty normal to me, up to the point where the Fiery terminates the connection after the server sends back a handshake response. Maybe they messed up their TLS implementation? I'm trying support, but it's been fairly useless so far. The cert is a SHA-2 (sha256RSA) 2048-bit certificate. Also, it looks like the Fiery is specifying TLS 1.0. Looking at http://msdn.microsoft.com/en-us/library/windows/desktop/aa374757(v=vs.85).aspx, I'm not seeing SHA256 and TLS 1.0 combination being supported by SChannel. headdesk perhaps that's why, after the DC changes the cipher spec, the connection is terminated by the Fiery? TLS 1.1 and 1.2 are enabled on the DC. Wireshark conversation: DC: 172.17.2.22, Fiery: 172.17.2.42 No. Time Source Source Port Destination Destination Port Protocol Length Info 1 0.000000000 172.17.2.42 48633 172.17.2.22 ldaps TCP 74 48633 > ldaps [SYN] Seq=0 Win=5840 Len=0 MSS=1460 SACK_PERM=1 TSval=3101761 TSecr=0 WS=4 2 0.000182000 Dell_5e:94:e3 Broadcast ARP 60 Who has 172.17.2.42? 
Tell 172.17.2.22 3 0.000369000 TyanComp_c9:0f:90 Dell_5e:94:e3 ARP 60 172.17.2.42 is at 00:e0:81:c9:0f:90 4 0.000370000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 74 ldaps > 48633 [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1 TSval=67970573 TSecr=3101761 5 0.000548000 172.17.2.42 48633 172.17.2.22 ldaps TCP 66 48633 > ldaps [ACK] Seq=1 Ack=1 Win=5840 Len=0 TSval=3101761 TSecr=67970573 6 0.001000000 172.17.2.42 48633 172.17.2.22 ldaps TLSv1 147 Client Hello 7 0.001326000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 1514 [TCP segment of a reassembled PDU] 8 0.001513000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 1514 [TCP segment of a reassembled PDU] 9 0.001515000 172.17.2.42 48633 172.17.2.22 ldaps TCP 66 48633 > ldaps [ACK] Seq=82 Ack=1449 Win=8736 Len=0 TSval=3101761 TSecr=67970573 10 0.001516000 172.17.2.42 48633 172.17.2.22 ldaps TCP 66 48633 > ldaps [ACK] Seq=82 Ack=2897 Win=11632 Len=0 TSval=3101761 TSecr=67970573 11 0.001732000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 1514 [TCP segment of a reassembled PDU] 12 0.001737000 172.17.2.22 ldaps 172.17.2.42 48633 TLSv1 1243 Server Hello, Certificate, Certificate Request, Server Hello Done 13 0.001738000 172.17.2.42 48633 172.17.2.22 ldaps TCP 66 48633 > ldaps [ACK] Seq=82 Ack=4345 Win=14528 Len=0 TSval=3101761 TSecr=67970573 14 0.001739000 172.17.2.42 48633 172.17.2.22 ldaps TCP 66 48633 > ldaps [ACK] Seq=82 Ack=5522 Win=17424 Len=0 TSval=3101761 TSecr=67970573 15 0.002906000 172.17.2.42 48633 172.17.2.22 ldaps TLSv1 78 Certificate 16 0.004155000 172.17.2.42 48633 172.17.2.22 ldaps TLSv1 333 Client Key Exchange 17 0.004338000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 66 ldaps > 48633 [ACK] Seq=5522 Ack=361 Win=66304 Len=0 TSval=67970573 TSecr=3101762 18 0.004338000 172.17.2.42 48633 172.17.2.22 ldaps TLSv1 72 Change Cipher Spec 19 0.005481000 172.17.2.42 48633 172.17.2.22 ldaps TLSv1 327 Encrypted Handshake Message 20 0.005645000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 66 ldaps > 48633 [ACK] Seq=5522 Ack=628 Win=66048 Len=0 TSval=67970574 TSecr=3101762 21 0.010247000 172.17.2.22 ldaps 172.17.2.42 48633 TLSv1 125 Change Cipher Spec, Encrypted Handshake Message 22 0.016451000 172.17.2.42 48633 172.17.2.22 ldaps TCP 66 48633 > ldaps [FIN, ACK] Seq=628 Ack=5581 Win=17424 Len=0 TSval=3101765 TSecr=67970574 23 0.016630000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 66 ldaps > 48633 [ACK] Seq=5581 Ack=629 Win=66048 Len=0 TSval=67970575 TSecr=3101765 24 0.016811000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 60 ldaps > 48633 [RST, ACK] Seq=5581 Ack=629 Win=0 Len=0

    Read the article
