Search Results

Search found 9299 results on 372 pages for 'policy and procedure manual'.

Page 93 of 372

  • Beanshell in Ant yielding, "Unable to create javax script engine for beanshell"

    - by John B.
    Greetings, I'm trying to put some BeanShell script in my Ant build.xml file. I've followed the Ant manual as well as I can, but I keep getting "Unable to create javax script engine for beanshell" when I run Ant. Here is the test target I wrote, mostly from examples in the Ant manual:

        <target name="test-target">
            <script language="beanshell" setbeans="true">
                <classpath>
                    <fileset dir="c:\TEMP" includes="*.jar" />
                </classpath>
                System.out.println("Hello world");
            </script>
        </target>

    My BeanShell "bsh-2.0b4.jar" file is on the script task's classpath, the way the manual recommends. I hope I have the right file. I'm working in c:\TEMP right now. I've been googling and trying for a while now. Any ideas would be greatly appreciated. Thanks.
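    A hedged note on the usual cause: Ant's <script> task locates BeanShell through JSR-223 (or BSF), and the core bsh-2.0b4.jar on its own does not register a javax.script engine. The sketch below assumes the separately distributed JSR-223 adapter jar (named bsh-engine.jar here, an assumption, since the name varies by distribution) sits alongside the core jar:

        <target name="test-target">
            <script language="beanshell" setbeans="true">
                <classpath>
                    <fileset dir="c:\TEMP">
                        <include name="bsh-2.0b4.jar" />
                        <include name="bsh-engine.jar" /> <!-- JSR-223 adapter; assumed filename -->
                    </fileset>
                </classpath>
                System.out.println("Hello world");
            </script>
        </target>

    Running "ant -lib c:\TEMP test-target" is also worth trying: it puts the jars on Ant's own classpath and sidesteps the nested <classpath> classloader quirks some Ant versions have.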

    Read the article

  • Error implementing a WS-Security web service in WebLogic 10.3

    - by Chris
    Hi, I am trying to develop a JAX-WS web service with WS-Security features in WebLogic 10.3. I have used the Ant tasks wsdlc, jwsc and clientgen to generate the skeleton/stub for this web service. I have two keystores, WSIdentity.jks and WSTrust.jks, which contain the keys and certificates. One of the aliases in WSIdentity.jks is "ws02p". The test client has the following code to invoke the web service:

        SecureSimpleService service = new SecureSimpleService();
        SecureSimplePortType port = service.getSecureSimplePortType();
        List credProviders = new ArrayList();
        CredentialProvider cp = new ClientBSTCredentialProvider(
            "E:\\workspace\\SecureServiceWL103\\keystores\\WSIdentity.jks",
            "webservice", "ws01p", "webservice");
        credProviders.add(cp);
        String endpointURL = "http://localhost:7001/SecureSimpleService/SecureSimpleService";
        BindingProvider bp = (BindingProvider) port;
        Map requestContext = bp.getRequestContext();
        requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpointURL);
        requestContext.put(WSSecurityContext.CREDENTIAL_PROVIDER_LIST, credProviders);
        requestContext.put(WSSecurityContext.TRUST_MANAGER, new TrustManager() {
            public boolean certificateCallback(X509Certificate[] chain, int validateErr) {
                // Put some custom validation code in here.
                // Just return true for now
                return true;
            }
        });
        SignResponse resp1 = new SignResponse();
        resp1 = port.echoSignOnlyMessage("hello sign");
        System.out.println("Result: " + resp1.getMessage());

    When I try to invoke this web service using this test client, I get the error "Invalid signing policy" with the following stack trace:

        [java] weblogic.wsee.security.wss.policy.SecurityPolicyArchitectureException: Invalid signing policy
        [java]     at weblogic.wsee.security.wss.plan.SecurityPolicyBlueprintDesigner.verifyPolicy(SecurityPolicyBlueprintDesigner.java:786)
        [java]     at weblogic.wsee.security.wss.plan.SecurityPolicyBlueprintDesigner.designOutboundBlueprint(SecurityPolicyBlueprintDesigner.java:136)

    Am I missing a configuration setting in the WebLogic admin console, or is it to do with something else? Thanks in advance.

    Read the article

  • clientaccesspolicy.xml not being requested via HTTPS

    - by Philip
    I have a Silverlight app that has been using HTTP to communicate with self-hosted WCF services during development. I am now securing the services via HTTPS, and I am getting an error I had back at the beginning of the project: "An error occurred while trying to make a request to URI 'https://localhost:8303/service'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details." My clientaccesspolicy.xml file is set up to allow access from http://* and https://*. The only difference is using HTTP vs. HTTPS. The odd part is that I could previously see (via Fiddler) the clientaccesspolicy.xml file being requested, but now I cannot, and I'm assuming the call fails because of this. Any ideas?
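    For reference, a minimal clientaccesspolicy.xml that permits SOAP calls from both schemes looks like the sketch below (the allow-from/grant-to structure is the standard Silverlight policy format). Note that Silverlight fetches the policy file over the same scheme and port as the service itself, so it must be reachable at https://localhost:8303/clientaccesspolicy.xml with a certificate the client trusts; a TLS handshake failure at that step would explain why the request never shows up in Fiddler.

        <?xml version="1.0" encoding="utf-8"?>
        <access-policy>
          <cross-domain-access>
            <policy>
              <allow-from http-request-headers="SOAPAction">
                <domain uri="http://*" />
                <domain uri="https://*" />
              </allow-from>
              <grant-to>
                <resource path="/" include-subpaths="true" />
              </grant-to>
            </policy>
          </cross-domain-access>
        </access-policy>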

    Read the article

  • NoSuchMethodError: com/sun/istack/logging/Logger.getLogger

    - by pandi-sus
    I developed a web service and deployed it to WebSphere 7.0, and developed a dynamic dispatch client using the JAX-WS APIs, which also runs on the same application server. I get an error at the following line:

        Dispatch dispatch = service.createDispatch(portName, SOAPMessage.class, Service.Mode.MESSAGE);

    Error:

        Caused by: java.lang.NoSuchMethodError: com/sun/istack/logging/Logger.getLogger(Ljava/lang/Class;)Lcom/sun/istack/logging/Logger;
            at com.sun.xml.ws.api.config.management.policy.ManagementAssertion.<clinit>(ManagementAssertion.java:87)
            at java.lang.J9VMInternals.initializeImpl(Native Method)
            at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
            at java.lang.J9VMInternals.initialize(J9VMInternals.java:167)
            at com.sun.xml.ws.server.MonitorBase.createManagedObjectManager(MonitorBase.java:177)
            at com.sun.xml.ws.client.Stub.<init>(Stub.java:196)
            at com.sun.xml.ws.client.Stub.<init>(Stub.java:174)
            at com.sun.xml.ws.client.dispatch.DispatchImpl.<init>(DispatchImpl.java:129)
            at com.sun.xml.ws.client.dispatch.SOAPMessageDispatch.<init>(SOAPMessageDispatch.java:77)
            at com.sun.xml.ws.api.pipe.Stubs.createSAAJDispatch(Stubs.java:143)
            at com.sun.xml.ws.api.pipe.Stubs.createDispatch(Stubs.java:264)
            at com.sun.xml.ws.client.WSServiceDelegate.createDispatch(WSServiceDelegate.java:390)
            at com.sun.xml.ws.client.WSServiceDelegate.createDispatch(WSServiceDelegate.java:401)
            at com.sun.xml.ws.client.WSServiceDelegate.createDispatch(WSServiceDelegate.java:383)
            at javax.xml.ws.Service.createDispatch(Service.java:336)

    I included the following dependency:

        <dependency>
            <groupId>javax.xml.ws</groupId>
            <artifactId>jaxws-api</artifactId>
            <version>2.1</version>
        </dependency>

    I also tried adding the policy dependency (versions 2.2 and 2.2.1):

        <dependency>
            <groupId>com.sun.xml.ws</groupId>
            <artifactId>policy</artifactId>
            <version>2.2.1</version>
        </dependency>

    Any ideas on what more dependencies I need to add?
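    A hedged observation rather than a confirmed fix: a NoSuchMethodError on com.sun.istack.logging.Logger usually means two different Metro/JAX-WS generations are mixed on the classpath; here the policy 2.2.x jar expects a newer istack-commons than the JAX-WS runtime bundled with WebSphere 7 supplies. Rather than adding jars one by one, one approach is to depend on a single self-consistent runtime and isolate it from the server's copy (in WebSphere, a parent-last classloader policy on the module). A sketch, with the version an assumption to align rather than a recommendation:

        <dependency>
            <groupId>com.sun.xml.ws</groupId>
            <artifactId>jaxws-rt</artifactId>
            <version>2.2.1</version>
            <!-- pulls in matching policy, istack-commons and streambuffer jars -->
        </dependency>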

    Read the article

  • Problem importing a certificate into Apache Tomcat: Failed to establish chain from reply

    - by Ilya
    Hi, after I got the certificate, I tried to import it as specified here: http://tomcat.apache.org/tomcat-5.5-doc/ssl-howto.html#Edit%20the%20Tomcat%20Configuration%20File. But I got this error:

        C:\Program Files (x86)\Java\jre6\bin>keytool -import -alias tomcat -keystore C:\SSL.keystore -file C:\SSL\SSL_Internal_Certificate_for_isdc-planning.cer
        Enter keystore password:
        keytool error: java.lang.Exception: Failed to establish chain from reply

    According to the Apache documentation, I need to import the chain certificate into my keystore first:

        keytool -import -alias root -keystore <your_keystore> -trustcacerts -file <chain_certificate>

    When I printed the certificate, its issuer is:

        Issuer: CN=Intranet Basic Issuing CA 2B

    I downloaded the chain certificates:

        Intranet Basic Issuing CA 1A(1).crt
        Intranet Basic Issuing CA 1A(2).crt
        Intranet Basic Issuing CA 1A.crt
        Intranet Basic Issuing CA 1B(1).crt
        Intranet Basic Issuing CA 1B(2).crt
        Intranet Basic Issuing CA 1B.crt
        Intranet Basic Issuing CA 2A(1).crt
        Intranet Basic Issuing CA 2A.crt
        Intranet Basic Issuing CA 2B(1).crt
        Intranet Basic Issuing CA 2B.crt
        Intranet Basic Policy CA(1).crt
        Intranet Basic Policy CA.crt
        Root CA.crt

    The issuer of Intranet Basic Issuing CA 2B.crt is Intranet Basic Policy CA, and its issuer is the Root CA certificate. But I can't import three certificates under the single root alias. I imported "Intranet Basic Issuing CA 2B.crt" into the root alias and then reran the import of the tomcat alias, but got the same error:

        keytool error: java.lang.Exception: Failed to establish chain from reply

    What is the correct way to import the certificate chain? Thanks in advance, Ilya
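    A hedged sketch of the usual procedure: each certificate in the chain goes in under its own alias, starting from the root, before the signed reply is imported under the original key alias (the alias names below are arbitrary):

        keytool -import -trustcacerts -alias rootca   -keystore C:\SSL.keystore -file "Root CA.crt"
        keytool -import -trustcacerts -alias policyca -keystore C:\SSL.keystore -file "Intranet Basic Policy CA.crt"
        keytool -import -trustcacerts -alias issue2b  -keystore C:\SSL.keystore -file "Intranet Basic Issuing CA 2B.crt"
        keytool -import -alias tomcat -keystore C:\SSL.keystore -file C:\SSL\SSL_Internal_Certificate_for_isdc-planning.cer

    The last command only succeeds once keytool can walk the reply's issuer chain up to a certificate already trusted in the keystore, and the tomcat alias must already hold the private key the signing request was generated from.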

    Read the article

  • What do I need to do to make a WPF Browser Application (XBAP) that requires Full Trust work on Windows 7?

    - by Benoit J. Girard
    So this is a Visual Studio 2008, .NET, WPF, XBAP, Windows 7 question, regarding .NET trust policies. At work, we have several Web Browser Applications (.XBAP files) developed with Visual Studio 2008 (so .NET 3.5) that we deployed internally. These required a .NET FullTrust policy; we found a way to make an .MSI that adjusted the policy on individual stations, and everything worked great. Users love in-browser apps. This was last year, on Windows XP. This year our company started upgrading users to Windows 7, and now none of our Web Browser Applications work. The error message is "Trust Not Granted", as if the policy-changing .MSI had not been run. Other details: I can confirm that our apps work on Windows XP for Internet Explorer 7 and Firefox, and do not work on Windows 7 for Internet Explorer 8 nor Firefox. I must admit that .NET security policies mystify me. Still, I could not find any mention of this problem on the Net at large or on this site. Did anybody else encounter this problem? Any and all help welcome.
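    For what it's worth, a hedged sketch of how such a policy change is typically scripted for .NET 3.5 (CAS policy, which still governs XBAPs on Windows 7): the machine-level code group is added with caspol.exe. On 64-bit Windows 7 the 32-bit and 64-bit policy stores are separate, so the command has to run against both the Framework and Framework64 copies of caspol, which is one common reason an XP-era .MSI silently stops working. The server URL and group name below are placeholders:

        %windir%\Microsoft.NET\Framework\v2.0.50727\caspol.exe   -m -ag 1.2 -url "http://yourserver/xbaps/*" FullTrust -name "InternalXbaps"
        %windir%\Microsoft.NET\Framework64\v2.0.50727\caspol.exe -m -ag 1.2 -url "http://yourserver/xbaps/*" FullTrust -name "InternalXbaps"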

    Read the article

  • What AOP tools exist for doing aspect-oriented programming at the assembly language level against x86?

    - by JohnnySoftware
    I'm looking for a tool I can use to do aspect-oriented programming at the assembly language level. For experimentation purposes, I would like the code weaver to operate on native application-level executables and dynamic link libraries. I have already done object-oriented AOP, and I know assembly language for x86 and so forth.

    I would like to be able to do logging and other sorts of things using the familiar before/after/around constructs. I would like to be able to specify certain instructions, or sequences/patterns of consecutive instructions, as the target of a pointcut, since assembly/machine language is not exactly the most semantically rich computer language on the planet. If debugger and linker symbols are available, naturally, I would like to be able to use them to identify subroutines' entry points, branch/call/jump target addresses, symbolic data addresses, etc.

    I would like the ability to send notifications out to other diagnostic tools, so support for sending data through connection-oriented sockets and datagrams is highly desirable; so is normal logging to files, UI, etc. This can be done by using the action part of an aspect to make a function call, but then there are portability issues, so the tool needs to support a flexible, well-abstracted logging/notifying mechanism with a clean, simple, yet flexible interface. The goal is rapid QA.

    The idea is to be able to share aspect source code broadly within communities as well as publicly, so there needs to be a declarative security policy file that users can share. This ensures that nothing untoward hidden directly or indirectly in an aspect source file slips past the execution manager. The policy file format needs to be simple to read, write, modify, understand, type in, edit, and generate; sort of like Java .policy files. Think the exact opposite of anything resembling XML Schema files and you get the idea. Does such a tool already exist?

    Read the article

  • How to use Festival Text To Speech C/C++ API

    - by Peeyush
    I want to use Festival TTS from my C++ program, so I downloaded all the files from http://www.cstr.ed.ac.uk/downloads/festival/2.0.95/ and then started reading the manual (http://www.cstr.ed.ac.uk/projects/festival/manual/festival_28.html) for the C++ API. The manual says:

    "In order to use Festival you must include `festival/src/include/festival.h`, which in turn will include the necessary other include files in `festival/src/include` and `speech_tools/include`; you should ensure these are included in the include path for your program. Also you will need to link your program with `festival/src/lib/libFestival.a`, `speech_tools/lib/libestools.a`, `speech_tools/lib/libestbase.a` and `speech_tools/lib/libeststring.a`, as well as any other optional libraries such as net audio."

    I am using Ubuntu 10.04 (the festival package is installed by default and I can use it from the terminal with the festival command) and GCC 4.4.3. The problem is that I am new to GCC, and I don't understand which files I have to include in order to build my C++ code, nor how to link the libraries into it. So please tell me exactly which files I have to include and how to link against the libraries. If anyone has already used Festival TTS with C++, please post your code. Thanks.
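    A hedged minimal example, based on the sample in the Festival manual. On Ubuntu the headers and static libraries come from the festival-dev and speech-tools development packages (the package names and install paths below are assumptions; check with dpkg -L):

        // hello_festival.cpp -- minimal Festival TTS client
        #include <festival.h>

        int main(int argc, char **argv)
        {
            int heap_size = 210000;      // default scheme heap size from the manual
            int load_init_files = 1;     // load the standard init files
            festival_initialize(load_init_files, heap_size);

            festival_say_text("Hello from Festival");
            festival_wait_for_spooler(); // block until the audio has finished
            return 0;
        }

    Compiled and linked along these lines (include/library paths assumed):

        g++ hello_festival.cpp -I/usr/include/festival -I/usr/include/speech_tools \
            -lFestival -lestools -lestbase -leststring -o hello_festival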

    Read the article

  • PHP (rar): I want to rar a folder using rar on Ubuntu (Linux) with PHP, on a dedicated server

    - by Steve
    Hey guys, I want to rar (not tar) a folder on my server using PHP.

        RAR 3.93  Copyright (c) 1993-2010 Alexander Roshal  15 Mar 2010
        Registered to my real name

    OS: Ubuntu (Karmic), kernel Linux 2.6.32.2-xxxx-grs-ipv4-32, Gnome 2.28.1, latest PHP and lighttpd.

    I have tried these things:

        http://php.net/manual/en/function.escapeshellarg.php (may be wrong code)
        http://php.net/manual/en/function.exec.php
        http://php.net/manual/en/function.shell-exec.php

    My command (works in SSH and as a Nautilus script):

        rar a -m0 /where/file/will/saved/file_name.rar /location/ti/data/dir/datafolder

    My PHP code:

        $log = shell_exec("rar a -m0 /where/file/will/saved/file_name.rar /location/ti/data/dir/datafolder");
        echo $log;

    One method is left which I don't know how to use: putting the command in somefile_to_execute_command.sh and executing that .sh file from PHP, passing in some variables (the command). I also tried the rar class from the RapidLeech script (http://paste2.org/p/791668), which can rar files, but only from its own directory, and I want to work across different directories.

    I can run other shell commands from PHP (cp, mv, ls, rm) and they work just as they do over SSH, but rar fails and gives no output. I also tried giving the full path to rar, and I've tried almost every method I found online over the last 3 days. I'm a beginner (just reading my first PHP book), so a working PHP script and plain, non-technical answers would be much appreciated. Thanks!
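    A hedged sketch of a more debuggable attempt: use the binary's full path, escape the arguments, and fold stderr into stdout so any error text from rar actually reaches the browser (the path /usr/bin/rar is an assumption; check it with `which rar`):

        <?php
        $src  = '/location/ti/data/dir/datafolder';
        $dest = '/where/file/will/saved/file_name.rar';

        // 2>&1 folds stderr into stdout so rar's error messages are captured
        $cmd = '/usr/bin/rar a -m0 ' . escapeshellarg($dest) . ' '
             . escapeshellarg($src) . ' 2>&1';

        $log = shell_exec($cmd);
        echo "<pre>$log</pre>";

    If the output mentions permissions, remember the command runs as the web server user (e.g. www-data), which needs write access to the destination directory; and the rar on the PATH must be the registered CLI binary, not a GUI archiver.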

    Read the article

  • Uploading to S3 using cURL

    - by Carl Crawley
    Hi all, I'm currently using cURL to upload a file from my server to S3, using AJAX to call the script. So I have the following:

        $fullfilepath = '/server/sitepath/files/' . $_POST['file'];
        $upload_url = 'https://' . $_POST['buckets'] . '.s3.amazonaws.com/';
        $params = array(
            'key' => $_POST['key'],
            'AWSAccessKeyId' => $_POST['AWSAccessKeyId'],
            'acl' => $_POST['acl'],
            'success_action_status' => $_POST['success_action_status'],
            'policy' => $_POST['policy'],
            'signature' => $_POST['signature'],
            'Content-Type' => $_POST['Content-Type'],
            'file' => "@$fullfilepath"
        );

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_VERBOSE, 1);
        curl_setopt($ch, CURLOPT_URL, $upload_url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
        $response = curl_exec($ch);
        curl_close($ch);
        echo $response;

    However, when it posts, I'm getting the following S3 error, and I'm unsure why because I'm not passing JSON to it:

        <?xml version="1.0" encoding="UTF-8"?>
        <Error><Code>InvalidPolicyDocument</Code><Message>Invalid Policy: Invalid JSON.</Message><RequestId>B29469C6151BE0E8</RequestId><HostId>BFPk6W2kt1b6hTtx0mEq6dWdN/IhO0gNR5bct//7LAOwJxm1C3PrxS4RPv1blzJ8</HostId></Error>

    I've googled it for the last hour or so and can't seem to figure it out. If I change the order of the array fields, I get a different error, so I believe the order of the posted fields matters somehow. Any help would be much appreciated! C
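    One hedged possibility the error message points at: in an S3 POST-style upload, the policy field must be the base64-encoded JSON policy document, and signature an HMAC-SHA1 of that base64 string. If the form posts raw JSON (or the signature was computed over different bytes), S3 reports "Invalid Policy: Invalid JSON." A sketch of building the pair server-side ($aws_secret_key and the condition values are placeholders; every field included in the POST must also appear in the conditions):

        $policy_json = json_encode(array(
            'expiration' => gmdate('Y-m-d\TH:i:s\Z', time() + 3600),
            'conditions' => array(
                array('bucket' => $_POST['buckets']),
                array('acl' => $_POST['acl']),
                array('starts-with', '$key', ''),
                array('starts-with', '$Content-Type', ''),
                array('success_action_status' => $_POST['success_action_status']),
            ),
        ));

        $policy    = base64_encode($policy_json);
        $signature = base64_encode(hash_hmac('sha1', $policy, $aws_secret_key, true));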

    Read the article

  • Opening port 80 with a Java application on Ubuntu

    - by Featheast
    What I need to do is run a Java application that is a server-side RESTful service, written with Restlet. This service will be called by another app running on Google App Engine. Because of GAE's restrictions, every HTTP call through the HttpUrlConnection class is limited to ports 80 and 443 (HTTP and HTTPS). As a result, I have to deploy my server-side application on port 80 or 443. However, because the app runs on Ubuntu, and ports under 1024 cannot be bound by a non-root user, an Access Denied exception is thrown when I run my app. The solutions that have come to mind include:

    1. Changing the security policy of the JRE (the file at lib/security/java.policy) to grant java.net.SocketPermission "*.80" "listen, connect, accept, resolve". However, whether I pass a policy file on the command line or override the content of the JRE's java.policy file directly, the same exception keeps coming out.

    2. Logging in as a root user; however, owing to my unfamiliarity with Unix, I don't know how to do this properly.

    3. One solution I haven't tried: mapping all calls to port 80 to a higher port like 1234. Then I could deploy my app on 1234 without a problem, while GAE keeps sending requests to port 80. But how to bridge that gap is still a problem (see the iptables sketch below).

    Currently I am using a "hacking" method, which is to package the application into a jar file and sudo-run the jar with root privileges. It works for now, but it is definitely not appropriate for the real deployment environment. So if anyone has any idea about a solution, thanks very much!
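    On the third idea, a hedged sketch of the usual iptables redirect; the kernel does the port mapping, so the Java process never needs root:

        # redirect inbound TCP 80 to 1234, where the Java app listens
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 1234

        # connections from the same machine skip PREROUTING; add this for local tests
        sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 1234

    The app then binds to 1234 as an unprivileged user while GAE keeps calling port 80. (authbind, or `setcap cap_net_bind_service=+ep` on the JVM binary, are alternative routes to the same end.)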

    Read the article

  • Facebook Connect: Permissions Error [200] using "stream.publish" with PHP

    - by Sarah
    Hi all, I've been implementing Facebook Connect on a site, and I'm using both the PHP API, to let me automatically post data to a user's wall, and the JS API, for manual posting, permission dialogs, etc. When the user uses the manual method it works 100%: the popups are displayed correctly, and the data gets posted to their wall properly. However, when I try to use the PHP API I am getting inconsistencies. When I try posting automatically using the PHP API with one account it works perfectly, every time. But for some other accounts it never works, always returning "Permissions error." The error code is 200, and I've checked the Facebook API documentation and it's pretty vague, saying only "Permissions error. The application does not have permission to perform this action." But that can't be the whole story, since it works on some accounts and not on others. First, I've made sure that the users in question have enabled the extended permission "publish_stream", and the manual method using the JS API works, so it doesn't seem to be a problem with those specific permissions. There are no apparent differences between the Facebook accounts I've used. So my question is: has anyone run into this problem and found a solution? Is there some other permission setting that users must enable for this to work? I've been searching Google and these forums but have not found any solution. The request I am sending is (note: the content/image URL/link URL are not the actual data I use):

        $attachment = array(
            'caption' => '{*actor*} commented on <title> "<comment>"',
            'media' => array(
                array(
                    'type' => 'image',
                    'src' => 'http://www.test.com/image.jpg',
                    'href' => 'http://www.test.com'
                )
            )
        );
        $Facebook->api_client->stream_publish('', $attachment);

    Thanks, Sarah
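    A hedged diagnostic step (old REST-style PHP client, so the method availability is an assumption to verify against your client version): ask Facebook directly whether the affected account has granted the permission, and pass the target user id explicitly, since the no-uid form publishes as the current session user, which may differ between the working and failing accounts:

        // $uid is the Facebook id of the user whose wall is being posted to (assumed available)
        $canPublish = $Facebook->api_client->users_hasAppPermission('publish_stream', $uid);
        if ($canPublish) {
            // arguments: message, attachment, action_links, target_id, uid
            $Facebook->api_client->stream_publish('', $attachment, null, null, $uid);
        } else {
            // fall back to the JS permission dialog for this user
        }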

    Read the article

  • Java if/else behaving strangely

    - by Alex
    I'm a real newbie to Java, so please excuse me if this is a hopelessly straightforward problem. I have the following from my Java game server:

        // Get input from the client
        DataInputStream in = new DataInputStream(server.getInputStream());
        PrintStream out = new PrintStream(server.getOutputStream());
        disconnect = false;
        while ((line = in.readLine().trim()) != null && !line.equals(".")
                && !line.equals("") && !disconnect) {
            System.out.println("Received " + line);
            if (line.equals("h")) {
                out.println("h" + EOF); // Client handshake
                System.out.println("Matched 1");
            } else if (line.equals("<policy-file-request/>")) {
                out.println("..." + EOF); // Policy file
                System.out.println(server.getInetAddress() + ": Policy Request");
                disconnect = true;
                System.out.println("Matched 2");
            } else if (line.substring(0, 3).equals("GET") || line.substring(0, 4).equals("POST")) {
                out.println("HTTP/1.0 200 OK\nServer: VirtuaRoom v0.9\nContent-Type: text/html\n\n..."); // HTML status page
                disconnect = true;
                System.out.println("Matched 3");
            } else {
                System.out.println(server.getInetAddress() + ": Unknown command, client disconnected.");
                disconnect = true;
                System.out.println("Matched else");
            }
        }
        server.close();

    First of all, the client sends an "h" packet and expects the same back (handshake). However, I want the server to disconnect the client when an unrecognised packet is received. For some reason, it responds fine to the handshake and the HTML status request, but the else clause is never executed when there's an unknown packet. Thanks
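    A hedged reading of why the else never runs: two expressions throw before control ever reaches it. in.readLine().trim() applies trim() before the null check, so end-of-stream raises a NullPointerException instead of ending the loop, and line.substring(0, 3) on any input shorter than three characters throws StringIndexOutOfBoundsException before the final else is evaluated. A sketch of a safer loop shape:

        String line;
        while ((line = in.readLine()) != null && !disconnect) {
            line = line.trim();
            if (line.equals(".") || line.isEmpty()) {
                break;                                       // client said goodbye
            }
            System.out.println("Received " + line);
            if (line.equals("h")) {
                out.println("h" + EOF);                      // client handshake
            } else if (line.equals("<policy-file-request/>")) {
                out.println("..." + EOF);                    // policy file
                disconnect = true;
            } else if (line.startsWith("GET") || line.startsWith("POST")) {
                out.println("HTTP/1.0 200 OK\n...");         // status page
                disconnect = true;
            } else {
                System.out.println(server.getInetAddress() + ": Unknown command, client disconnected.");
                disconnect = true;
            }
        }
        server.close();

    startsWith() never throws on short strings, which is why it replaces the substring() comparisons here.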

    Read the article

  • How to find intersecting rows when the condition depends on some columns in one table

    - by user3695637
    Table subscribe:

        subscriber | subscribeto
        -----------+------------
                 1 | 5
                 1 | 6
                 1 | 7
                 1 | 8
                 1 | 9
                 1 | 10
                 2 | 5
                 2 | 6
                 2 | 7

    There are two users, with ids 1 and 2. They subscribe to various users, and I inserted these rows into the subscribe table. The subscriber column indicates who the subscriber is, and the subscribeto column indicates whom they've subscribed to. From the table above we can conclude that user id=1 subscribes to 6 users and user id=2 subscribes to 3 users. I want to find the mutual subscriptions (like Facebook's mutual friends):

        user 1 subscribes to users 5, 6, 7, 8, 9, 10
        user 2 subscribes to users 5, 6, 7

    So the mutual subscriptions of users 1 and 2 are: 5, 6, 7. I'm trying to write a SQL statement for this. Here is the user table my statement uses (though I suspect the subscribe table alone should be enough; I can't figure it out):

        userid
        ------
        1
        2
        3
        ...

    My SQL:

        select * from user
        where (select count(1) from subscribe
               where subscriber = '1' and subscribeto = user.userid)
          and (select count(1) from subscribe
               where subscriber = '2' and subscribeto = user.userid);

    This SQL works correctly, but it is very slow for thousands of rows. Please suggest better SQL. Thanks.
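    A hedged alternative that avoids the per-row correlated subqueries: self-join the subscribe table on subscribeto, so the intersection is computed in one pass (an index on (subscriber, subscribeto) helps):

        SELECT s1.subscribeto
        FROM subscribe s1
        JOIN subscribe s2
          ON s2.subscribeto = s1.subscribeto
        WHERE s1.subscriber = 1
          AND s2.subscriber = 2;

    The same result can be spelled with INTERSECT on engines that support it:

        SELECT subscribeto FROM subscribe WHERE subscriber = 1
        INTERSECT
        SELECT subscribeto FROM subscribe WHERE subscriber = 2;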

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices. STAGE 4: AUTOMATED DEPLOYMENT

    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration – making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable.

    Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users”. There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and, hey presto, I’m ready? If only it were that simple. Below I list some of the areas where it’s worth spending a little time, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we’ll look at the following areas for your checklist:

    • You and Your Team
    • Environments
    • The Deployment Process
    • Rollback and Recovery
    • Development Practices

    You and Your Team

    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge. For HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    • 2008 to present: overall development costs reduced by 40%
    • Number of programs under development increased by 140%
    • Development costs per program down 78%
    • Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing: that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value, I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

    I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process, and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out.

    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager, and possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:

    • Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do; or, if you’re the DBA yourself, get the dev team up to speed with your plans.
    • Get your boss involved too and make sure he/she is bought into the investment.

    Environments

    Where are you going to deploy to? Really this question is: what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:

    • What we want: Each developer with their own dedicated database environment.
      What we’ve got: A single shared “development” environment, used by everyone at once.
    • What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine.
      What we’ve got: In fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
    • What we want: A separate QA environment used explicitly for manual testing prior to release.
      What we’ve got: “We just test on the dev environments, or maybe pre-production.”
    • What we want: A proper pre-production (or “staging”) box that matches production as closely as possible.
      What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
    • What we want: A production environment reproducible from source control.
      What we’ve got: A production box which has drifted significantly from anything in source control.

    The big question is: how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted. VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!) and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    • Dedicated development databases.
    • An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action.]
    • QA: QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing.
    • Pre-production: the environment you use to test the production release process.
    • Production.

    * A note on the use of the word “automatic”: when carrying out automated deployments, this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:

    • Get your environments set up and ready.
    • Set access permissions appropriately.
    • Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it into pre-prod.
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and the target environment. This generates a script.
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
    5. If all is working, run the script on production.*

    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something into the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up at www.sqllighthouse.com if you’re interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment: whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!) before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process: the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between, of course, and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?”

    NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:

    • Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments: “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?”
    • Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don’t, and when things go wrong you need a rollback or recovery plan for what you’re going to do in that situation. Once you move to a more automated database deployment process, you’re far more likely to be deploying more frequently than before: no longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for.

    NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem.

    There are various options, which we’ll explore in subsequent articles, things like:

    • Immediately restore from backup.
    • Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!).
    • Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons: some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is the loss of the interim transaction data added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:

    • Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment, and your requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and the linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner, so that you can always roll back without data loss, by making incremental updates to the database backward compatible (a hedged worked sketch follows at the end of this piece). Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – the release team gets the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented; the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is: how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.

    Actions:

    • For your business, work out how far down the path you want to go, amending your database development patterns towards “best practice”. It’s a trade-off between implementing quality processes and the necessity to do so (depending on how often you make complex changes).
    • Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning if you want to get the most out of the work and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider, mainly “getting the team ready for the changes that are coming” and “planning out your pipeline, environments, patterns and practices for development”, though there will be more detail depending on where you’re coming from and where you want to get to.

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
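    Returning to the “branch by abstraction” idea above, here is a hedged T-SQL sketch of what a step-wise, always-rollback-able schema change can look like. The table and column names are hypothetical, and each step ships in its own release so the previous one can be rolled back without losing new data:

        -- Release 1: add the new column alongside the old one (backward compatible;
        -- old code keeps writing LegacyContactInfo, nothing breaks on rollback)
        ALTER TABLE dbo.Customer ADD PhoneNumber varchar(20) NULL;

        -- Release 1 (cont.): backfill the new column from the legacy data
        UPDATE dbo.Customer SET PhoneNumber = LegacyContactInfo WHERE PhoneNumber IS NULL;

        -- Release 2: application code now reads and writes PhoneNumber only

        -- Release 3: once no code path touches the old column, retire it
        ALTER TABLE dbo.Customer DROP COLUMN LegacyContactInfo;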

    Read the article

  • The application called an interface that was marshalled for a different thread

    - by X-Ray
    I'm writing a Delphi app that communicates with Excel. One thing I noticed is that if I call the Save method on the Excel workbook object, it can appear to hang, because Excel has a dialog box open for the user. I'm using late binding. I'd like my app to notice when Save takes several seconds and then take some action, like showing a dialog box telling the user what's happening. I figured this would be fairly easy: all I'd need to do is create a thread that calls Excel's Save routine; if it takes too long, I can take some action.

        procedure TOfficeConnect.Save;
        var
          Thread: TOfficeSaveThread;
        begin
          // spin off as thread so we can control timeout
          Thread := TOfficeSaveThread.Create(m_vExcelWorkbook);
          if WaitForSingleObject(Thread.Handle, 5 {s} * 1000 {ms/s}) = WAIT_TIMEOUT then
          begin
            Thread.FreeOnTerminate := true;
            raise Exception.Create(_('The Office spreadsheet program seems to be busy.'));
          end;
          Thread.Free;
        end;

        type
          TOfficeSaveThread = class(TThread)
          private
            { Private declarations }
            m_vExcelWorkbook: variant;
          protected
            procedure Execute; override;
            procedure DoSave;
          public
            constructor Create(vExcelWorkbook: variant);
          end;

        { TOfficeSaveThread }

        constructor TOfficeSaveThread.Create(vExcelWorkbook: variant);
        begin
          inherited Create(true);
          m_vExcelWorkbook := vExcelWorkbook;
          Resume;
        end;

        procedure TOfficeSaveThread.Execute;
        begin
          m_vExcelWorkbook.Save;
        end;

    I understand this problem happens because the OLE object was created from another thread. How can I get around this? Most likely I'll need to "re-marshal" it for this call somehow... any ideas? Thank you!
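    A hedged sketch of the standard repair: COM interfaces created in a single-threaded apartment (as Excel's are) can only be used from the creating thread unless explicitly marshalled across, so the worker thread must initialise COM itself and receive the workbook interface via CoMarshalInterThreadInterfaceInStream / CoGetInterfaceAndReleaseStream rather than a copied variant. The FStream: IStream field is an assumed addition to the thread class; units are ActiveX and ComObj:

        constructor TOfficeSaveThread.Create(vExcelWorkbook: variant);
        begin
          inherited Create(true);
          // marshal on the creating (main) thread; FStream: IStream is a new field
          OleCheck(CoMarshalInterThreadInterfaceInStream(IDispatch,
            IDispatch(vExcelWorkbook), FStream));
          Resume;
        end;

        procedure TOfficeSaveThread.Execute;
        var
          Disp: IDispatch;
          vWorkbook: variant;
        begin
          CoInitialize(nil);  // each thread needs its own COM initialisation
          try
            // unmarshal on this thread; this call also releases the stream
            OleCheck(CoGetInterfaceAndReleaseStream(FStream, IDispatch, Disp));
            vWorkbook := Disp;
            vWorkbook.Save;
          finally
            CoUninitialize;
          end;
        end;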

    Read the article

  • [inno setup] Pascal and Delphi Syntax Error?!

    - by neo-nant
    This is the [Code] section from Inno Setup. My intention is to make two checkboxes where only one is selected at a time, but this code returns an error.

        procedure CheckBoxOnClick(Sender: TObject);
        var
          Box2, CheckBox: TNewCheckBox;
        begin
          if CheckBox.Checked then
            CheckBox.State := cbUnchecked;
            Box2.State := cbChecked;
          else  // THIS LINE RETURNS AN ERROR: "Identifier Expected."
            CheckBox.State := cbChecked;
            Box2.State := cbUnchecked;
        end;

        procedure Box2OnClick(Sender: TObject);
        var
          Box2, CheckBox: TNewCheckBox;
        begin
          if Box2.Checked then
            CheckBox.State := cbChecked;
            Box2.State := cbUnchecked;
          else  // same error
            CheckBox.State := cbUnchecked;
            Box2.State := cbChecked;
        end;

        procedure CreateTheWizardPages;
        var
          Page: TWizardPage;
          Box2, CheckBox: TNewCheckBox;
        begin
          { TButton and others }
          Page := CreateCustomPage(wpWelcome, '', '');

          CheckBox := TNewCheckBox.Create(Page);
          CheckBox.Top := ScaleY(8) + ScaleX(50);
          CheckBox.Width := Page.SurfaceWidth;
          CheckBox.Height := ScaleY(17);
          CheckBox.Caption := 'Do this';
          CheckBox.Checked := True;
          CheckBox.OnClick := @CheckBoxOnClick;
          CheckBox.Parent := Page.Surface;

          Box2 := TNewCheckBox.Create(Page);
          Box2.Top := ScaleY(8) + ScaleX(70);
          Box2.Width := Page.SurfaceWidth;
          Box2.Height := ScaleY(17);
          Box2.Caption := 'No, thanks.';
          Box2.Checked := False;
          Box2.OnClick := @Box2OnClick;
          Box2.Parent := Page.Surface;
        end;

        procedure InitializeWizard();
        begin
          { Custom wizard pages }
          CreateTheWizardPages;
        end;

    Please tell me what to change.
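    A hedged note on the two problems at play. First, a bare if/then/else in Pascal covers only a single statement, and a semicolon immediately before else is illegal; multiple statements need a begin..end block, which is what triggers "Identifier Expected." Second, the handlers declare fresh local TNewCheckBox variables instead of using the controls built in CreateTheWizardPages, so even syntactically valid code would fail at run time; the controls need to be shared, e.g. as script-level variables. A sketch:

        var
          CheckBox, Box2: TNewCheckBox;  // script-level; assigned in CreateTheWizardPages

        procedure CheckBoxOnClick(Sender: TObject);
        begin
          if CheckBox.Checked then
          begin
            Box2.State := cbUnchecked;
          end
          else begin
            Box2.State := cbChecked;
          end;
        end;

    (With only two mutually exclusive options, TNewRadioButton would give this behaviour for free.)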

    Read the article

  • dUnit Testing in Delphi (how to test private methods)

    - by Charles Faiga
    I have a class that I am unit testing with dUnit. It has a number of methods, some public and some private:

        type
          TAuth = class(TDataModule)
          private
            procedure PrivateMethod;
          public
            procedure PublicMethod;
          end;

    In order to write a unit test for this class, I would have to make all the methods public. Is there a different way to declare the private methods so that I can still unit test them without making them public?
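    One hedged idiom: move the members from private to protected, then expose them to the test unit only through a local descendant (a "class cracker"); since visibility in Delphi is per unit, the cast below compiles only inside the test unit (FAuth is an assumed fixture field holding the instance under test):

        type
          TAuth = class(TDataModule)
          protected
            procedure PrivateMethod;  // was private
          public
            procedure PublicMethod;
          end;

        // in the dUnit test unit:
        type
          TAuthAccess = class(TAuth);  // exposes protected members to this unit

        procedure TTestAuth.TestPrivateMethod;
        begin
          TAuthAccess(FAuth).PrivateMethod;
        end;

    (Another route: put the test class in the same unit as TAuth, since same-unit code can see private members; strict private, in later Delphi versions, excepted.)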

    Read the article

  • Ordering of month/year pairs in T-SQL query

    - by Surya sasidhar
    I am writing a stored procedure to return archive dates as month/year pairs. It works, but it does not return the rows in the desired order.

        ALTER PROCEDURE [dbo].[audioblog_getarchivedates]
        AS
        BEGIN
            SELECT DateName(Month, a.createddate) + ' ' + DateName(Year, a.createddate) AS ArchiveDate
            FROM audio_blog a
            GROUP BY DateName(Month, a.createddate) + ' ' + DateName(Year, a.createddate)
            ORDER BY DateName(Month, a.createddate) + ' ' + DateName(Year, a.createddate) DESC
        END

    The results come back like this:

        March 2010
        January 2010
        February 2010

    That is not in (descending) date order.
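    A hedged fix: DateName() returns strings, so the ORDER BY sorts alphabetically (March before January before February, descending). Sorting by the numeric year and month parts, added to the GROUP BY so they are legal in the ORDER BY, keeps the same output but in calendar order:

        ALTER PROCEDURE [dbo].[audioblog_getarchivedates]
        AS
        BEGIN
            SELECT DateName(Month, a.createddate) + ' ' + DateName(Year, a.createddate) AS ArchiveDate
            FROM audio_blog a
            GROUP BY DateName(Month, a.createddate) + ' ' + DateName(Year, a.createddate),
                     Year(a.createddate), Month(a.createddate)
            ORDER BY Year(a.createddate) DESC, Month(a.createddate) DESC
        END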

    Read the article

  • Linq to LLBLGen query problem

    - by Jeroen Breuer
    Hello, I've got a stored procedure and I'm trying to convert it to a Linq to LLBLGen query. The query in Linq to LLBLGen works, but when I trace the query which is sent to SQL Server, it is far from perfect. This is the stored procedure:

        ALTER PROCEDURE [dbo].[spDIGI_GetAllUmbracoProducts]
            -- Add the parameters for the stored procedure.
            @searchText nvarchar(255),
            @startRowIndex int,
            @maximumRows int,
            @sortExpression nvarchar(255)
        AS
        BEGIN
            SET @startRowIndex = @startRowIndex + 1
            SET @searchText = '%' + @searchText + '%'

            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- This is the query which will fetch all the UmbracoProducts.
            -- This query also supports paging and sorting.
            WITH UmbracoOverview AS
            (
                SELECT ROW_NUMBER() OVER(
                    ORDER BY
                        CASE WHEN @sortExpression = 'productName' THEN umbracoProduct.productName
                             WHEN @sortExpression = 'productCode' THEN umbracoProduct.productCode
                        END ASC,
                        CASE WHEN @sortExpression = 'productName DESC' THEN umbracoProduct.productName
                             WHEN @sortExpression = 'productCode DESC' THEN umbracoProduct.productCode
                        END DESC
                ) AS row_num,
                umbracoProduct.umbracoProductId,
                umbracoProduct.productName,
                umbracoProduct.productCode
                FROM umbracoProduct
                INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId
                WHERE (umbracoProduct.productName LIKE @searchText
                    OR umbracoProduct.productCode LIKE @searchText
                    OR product.code LIKE @searchText
                    OR product.description LIKE @searchText
                    OR product.descriptionLong LIKE @searchText
                    OR product.unitCode LIKE @searchText)
            )
            SELECT UmbracoOverview.UmbracoProductId, UmbracoOverview.productName, UmbracoOverview.productCode
            FROM UmbracoOverview
            WHERE (row_num >= @startRowIndex AND row_num < (@startRowIndex + @maximumRows))

            -- This query will count all the UmbracoProducts.
            -- This query is used for paging inside ASP.NET.
            SELECT COUNT(umbracoProduct.umbracoProductId) AS CountNumber
            FROM umbracoProduct
            INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId
            WHERE (umbracoProduct.productName LIKE @searchText
                OR umbracoProduct.productCode LIKE @searchText
                OR product.code LIKE @searchText
                OR product.description LIKE @searchText
                OR product.descriptionLong LIKE @searchText
                OR product.unitCode LIKE @searchText)
        END

    This is my Linq to LLBLGen query:

        using System.Linq.Dynamic;

        var q = (from up in MetaData.UmbracoProduct
                 join p in MetaData.Product on up.UmbracoProductId equals p.UmbracoProductId
                 where up.ProductCode.Contains(searchText)
                    || up.ProductName.Contains(searchText)
                    || p.Code.Contains(searchText)
                    || p.Description.Contains(searchText)
                    || p.DescriptionLong.Contains(searchText)
                    || p.UnitCode.Contains(searchText)
                 select new UmbracoProductOverview
                 {
                     UmbracoProductId = up.UmbracoProductId,
                     ProductName = up.ProductName,
                     ProductCode = up.ProductCode
                 }).OrderBy(sortExpression);

        // Save the count in HttpContext.Current.Items. This value will only be
        // saved during 1 single HTTP request.
        HttpContext.Current.Items["AllProductsCount"] = q.Count();

        // Returns the results paged.
        return q.Skip(startRowIndex).Take(maximumRows).ToList<UmbracoProductOverview>();

    This is my initial expression to process:

        value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.UmbracoProductEntity])
        .Join(value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.ProductEntity]),
              up => up.UmbracoProductId, p => p.UmbracoProductId,
              (up, p) => new <>f__AnonymousType0`2(up = up, p = p))
        .Where(<>h__TransparentIdentifier0 =>
              (((((<>h__TransparentIdentifier0.up.ProductCode.Contains(value(...).searchText)
              || <>h__TransparentIdentifier0.up.ProductName.Contains(value(...).searchText))
              || <>h__TransparentIdentifier0.p.Code.Contains(value(...).searchText))
              || <>h__TransparentIdentifier0.p.Description.Contains(value(...).searchText))
              || <>h__TransparentIdentifier0.p.DescriptionLong.Contains(value(...).searchText))
              || <>h__TransparentIdentifier0.p.UnitCode.Contains(value(...).searchText)))
        .Select(<>h__TransparentIdentifier0 => new UmbracoProductOverview()
              { UmbracoProductId = <>h__TransparentIdentifier0.up.UmbracoProductId,
                ProductName = <>h__TransparentIdentifier0.up.ProductName,
                ProductCode = <>h__TransparentIdentifier0.up.ProductCode })
        .OrderBy( => .ProductName)
        .Count()

        (where value(...) stands for value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1))

    These are the queries that get sent to SQL Server. The select query:

        SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId],
               [LPA_L2].[productName]      AS [ProductName],
               [LPA_L2].[productCode]      AS [ProductCode]
        FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2]
               INNER JOIN [eurofysica].[dbo].[product] [LPA_L3]
               ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId])
        WHERE (((([LPA_L2].[productCode] LIKE @ProductCode1)
              OR ([LPA_L2].[productName] LIKE @ProductName2)
              OR ([LPA_L3].[code] LIKE @Code3)
              OR ([LPA_L3].[description] LIKE @Description4)
              OR ([LPA_L3].[descriptionLong] LIKE @DescriptionLong5)
              OR ([LPA_L3].[unitCode] LIKE @UnitCode6))))

        Parameters: @ProductCode1, @ProductName2, @Code3, @Description4,
        @DescriptionLong5, @UnitCode6 (all String, length 2, precision 0, scale 0,
        direction Input, value "%%")

    The count query:

        SELECT TOP 1 COUNT(*) AS [LPAV_]
        FROM (SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId],
                     [LPA_L2].[productName]      AS [ProductName],
                     [LPA_L2].[productCode]      AS [ProductCode]
              FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2]
                     INNER JOIN [eurofysica].[dbo].[product] [LPA_L3]
                     ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId])
              WHERE (((([LPA_L2].[productCode] LIKE @ProductCode1)
                    OR ([LPA_L2].[productName] LIKE @ProductName2)
                    OR ([LPA_L3].[code] LIKE @Code3)
                    OR ([LPA_L3].[description] LIKE @Description4)
                    OR ([LPA_L3].[descriptionLong] LIKE @DescriptionLong5)
                    OR ([LPA_L3].[unitCode] LIKE @UnitCode6))))) [LPA_L1]

        Parameters: same six parameters as above (all String, length 2, value "%%")

    As you can see, no sorting or paging is done (unlike in my stored procedure). This is probably done in code after all the results are fetched, which costs a lot of performance! Does anybody know how I can convert my stored procedure to Linq to LLBLGen the proper way?

    Read the article

  • SubSonic 2.x now supports TVPs - SqlDbType.Structured / DataTables for SQL Server 2008

    - by ElHaix
    For those interested, I have now modified the SubSonic 2.x code to recognize and support DataTable parameter types. You can read more about SQL Server 2008 features here: http://download.microsoft.com/download/4/9/0/4906f81b-eb1a-49c3-bb05-ff3bcbb5d5ae/SQL%20SERVER%202008-RDBMS/T-SQL%20Enhancements%20with%20SQL%20Server%202008%20-%20Praveen%20Srivatsav.pdf

    What this enhancement now allows you to do is to create a partial StoredProcedures.cs class, with a method that overrides the stored procedure wrapper method.

    A bit about good form: my DAL has no direct table access, and my DB only has execute permissions for that user to my sprocs. As such, SubSonic only generates the AllStructs and StoredProcedures classes.

    The sproc:

        ALTER PROCEDURE [dbo].[testInsertToTestTVP]
            @UserDetails TestTVP READONLY,
            @Result INT OUT
        AS
        BEGIN
            SET NOCOUNT ON;
            SET @Result = -1

            --SET IDENTITY_INSERT [dbo].[tbl_TestTVP] ON
            INSERT INTO [dbo].[tbl_TestTVP] ( [GroupInsertID], [FirstName], [LastName] )
            SELECT [GroupInsertID], [FirstName], [LastName]
            FROM @UserDetails

            IF @@ROWCOUNT > 0
            BEGIN
                SET @Result = 1
                SELECT @Result
                RETURN @Result
            END
            --SET IDENTITY_INSERT [dbo].[tbl_TestTVP] OFF
        END

    The TVP:

        CREATE TYPE [dbo].[TestTVP] AS TABLE(
            [GroupInsertID] [varchar](50) NOT NULL,
            [FirstName] [varchar](50) NOT NULL,
            [LastName] [varchar](50) NOT NULL
        )
        GO

    When the auto-gen tool runs, it creates the following erroneous method:

        /// <summary>
        /// Creates an object wrapper for the testInsertToTestTVP Procedure
        /// </summary>
        public static StoredProcedure TestInsertToTestTVP(string UserDetails, int? Result)
        {
            SubSonic.StoredProcedure sp = new SubSonic.StoredProcedure("testInsertToTestTVP",
                DataService.GetInstance("MyDAL"), "dbo");
            sp.Command.AddParameter("@UserDetails", UserDetails, DbType.AnsiString, null, null);
            sp.Command.AddOutputParameter("@Result", DbType.Int32, 0, 10);
            return sp;
        }

    It sets UserDetails as type string. As it's good form to have two folders for a SubSonic DAL (Custom and Generated), I created a StoredProcedures.cs partial class in Custom that looks like this:

        /// <summary>
        /// Creates an object wrapper for the testInsertToTestTVP Procedure
        /// </summary>
        public static StoredProcedure TestInsertToTestTVP(DataTable dt, int? Result)
        {
            DataSet ds = new DataSet();
            SubSonic.StoredProcedure sp = new SubSonic.StoredProcedure("testInsertToTestTVP",
                DataService.GetInstance("MyDAL"), "dbo");
            // TODO: Modify the SubSonic code base in sp.Command.AddParameter to accept
            // a parameter type of System.Data.SqlDbType.Structured, as it currently only
            // accepts System.Data.DbType.
            //sp.Command.AddParameter("@UserDetails", dt, System.Data.SqlDbType.Structured, null, null);
            sp.Command.AddParameter("@UserDetails", dt, SqlDbType.Structured);
            sp.Command.AddOutputParameter("@Result", DbType.Int32, 0, 10);
            return sp;
        }

    As you can see, the method signature now contains a DataTable, and with my modification to the SubSonic framework this now works perfectly. I'm wondering if the SubSonic guys could modify the auto-gen to recognize a TVP in a sproc signature, so as to avoid having to re-write the wrapper? Does SubSonic 3.x support structured data types? Also, I'm sure many will be interested in using this code, so where can I upload it? Thanks.

    Read the article

  • Error executing IBM DB2 stored procedure in EJB container

    - by n002213f
    I'm getting the error below when I try to execute a stored procedure in a stateless bean with container-managed persistence:

        com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-751, SQLSTATE=38003,
        SQLERRMC=STORED PROCEDURE;FXTR324;FXTR324;COMMIT, DRIVER=4.7.85

    The stored procedure executes without errors if I manually create the connection to the database, i.e. with an unmanaged transaction. Is there anything I need to do for it to execute in the EJB bean?
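    A hedged reading of the error: DB2's SQLCODE -751 (SQLSTATE 38003) means the routine executed a statement that is not allowed in the calling context, and the SQLERRMC here names COMMIT as the offender. Inside a container-managed transaction the EJB container owns the commit, so a COMMIT inside the procedure body is rejected; with a manual connection there is no global transaction, which is why it works there. Two hedged options: remove the COMMIT from the procedure and let the container commit, or run the call outside the global transaction, e.g. (EJB3-style sketch; method name assumed):

        // assumption: taking the proc's COMMIT out of the container transaction
        // is acceptable for this use case
        @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
        public void runFxtr324() {
            // obtain the connection and call FXTR324 here; with no container
            // transaction active, the procedure's internal COMMIT is legal again
        }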

    Read the article

  • [How to] Checkbox :: Select one at a time

    - by neo-nant
    This is the [Code] section from Inno Setup. My intention is to make two checkboxes where only one is selected at a time, but this code returns an error when the first checkbox is clicked.

        procedure CheckBoxOnClick(Sender: TObject);
        var
          Box2, CheckBox: TNewCheckBox;
        begin
          if CheckBox.Checked then  // error: "Could not call proc"
                                    // [should these be global? if so, how, or what should I change?]
          BEGIN
            CheckBox.State := cbUnchecked;
            Box2.State := cbChecked;
          END
          else
          BEGIN
            CheckBox.State := cbChecked;
            Box2.State := cbUnchecked;
          END;
        end;

        procedure Box2OnClick(Sender: TObject);
        var
          Box2, CheckBox: TNewCheckBox;
        begin
          if Box2.Checked then  // error: same
          BEGIN
            CheckBox.State := cbChecked;
            Box2.State := cbUnchecked;
          END
          else
          BEGIN
            CheckBox.State := cbUnchecked;
            Box2.State := cbChecked;
          END;
        end;

        procedure CreateTheWizardPages;
        var
          Page: TWizardPage;
          Box2, CheckBox: TNewCheckBox;
        begin
          { TButton and others }
          Page := CreateCustomPage(wpWelcome, '', '');

          CheckBox := TNewCheckBox.Create(Page);
          CheckBox.Top := ScaleY(8) + ScaleX(50);
          CheckBox.Width := Page.SurfaceWidth;
          CheckBox.Height := ScaleY(17);
          CheckBox.Caption := 'Do this';
          CheckBox.Checked := True;
          CheckBox.OnClick := @CheckBoxOnClick;
          CheckBox.Parent := Page.Surface;

          Box2 := TNewCheckBox.Create(Page);
          Box2.Top := ScaleY(8) + ScaleX(70);
          Box2.Width := Page.SurfaceWidth;
          Box2.Height := ScaleY(17);
          Box2.Caption := 'No, thanks.';
          Box2.Checked := False;
          Box2.OnClick := @Box2OnClick;
          Box2.Parent := Page.Surface;
        end;

        procedure InitializeWizard();
        begin
          { Custom wizard pages }
          CreateTheWizardPages;
        end;

    Please tell me what to change.

    Read the article
